Unnamed: 0 (int64, 0 to 16k) | text_prompt (stringlengths 110 to 62.1k) | code_prompt (stringlengths 37 to 152k) |
---|---|---|
15,600 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CM360 Bulkdozer Editor
Bulkdozer is a tool that can reduce trafficking time in Campaign Manager by up to 80% by providing automated bulk editing capabilities.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Step1: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project
Step2: 3. Enter CM360 Bulkdozer Editor Recipe Parameters
Open the Bulkdozer feed.
Make your own copy of the feed by clicking the File -> Make a copy... menu in the feed.
Give it a meaningful name including the version, your name, and team to help you identify it and ensure you are using the correct version.
Under the Account ID field below, enter your Campaign Manager Network ID.
Under Sheet URL, enter the URL of your copy of the feed that you just created in the steps above.
Go to the Store tab of your new feed, and enter your profile ID in the profileId field (cell B2). Your profile ID is visible in Campaign Manager by clicking your avatar on the top right corner.
Click the Save button below.
After clicking Save, copy this page's URL from your browser address bar, and paste it in the Store tab for the recipe_url field (cell B5) of your sheet.
Bulkdozer is ready for use
Review the Bulkdozer documentation.
Modify the values below for your use case (this can be done multiple times), then click play.
Step3: 4. Execute CM360 Bulkdozer Editor
This does NOT need to be modified unless you are changing the recipe; just click play. | Python Code:
!pip install git+https://github.com/google/starthinker
Explanation: CM360 Bulkdozer Editor
Bulkdozer is a tool that can reduce trafficking time in Campaign Manager by up to 80% by providing automated bulk editing capabilities.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code was generated (see starthinker/scripts for the possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes. This only needs to be done once; then click play.
End of explanation
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
End of explanation
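For instance, when the recipe runs with existing user credentials, the configuration above might be filled in as follows (the project identifier and file path are hypothetical placeholders):
CONFIG = Configuration(
    project="my-gcp-project-id",   # hypothetical Google Cloud project identifier
    client={},                     # leave empty if you already have user credentials
    service={},                    # only needed when auth is set to service
    user="/content/user.json",     # path to your user credentials JSON
    verbose=True
)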
FIELDS = {
'recipe_timezone':'America/Chicago', # Timezone for report dates.
'account_id':None, # Campaign Manager Network ID (optional if profile id provided)
'dcm_profile_id':None, # Campaign Manager Profile ID (optional if account id provided)
'sheet_url':'', # Feed Sheet URL
}
print("Parameters Set To: %s" % FIELDS)
Explanation: 3. Enter CM360 Bulkdozer Editor Recipe Parameters
Open the Bulkdozer feed.
Make your own copy of the feed by clicking the File -> Make a copy... menu in the feed.
Give it a meaningful name including the version, your name, and team to help you identify it and ensure you are using the correct version.
Under the Account ID field below, enter your Campaign Manager Network ID.
Under Sheet URL, enter the URL of your copy of the feed that you just created in the steps above.
Go to the Store tab of your new feed, and enter your profile ID in the profileId field (cell B2). Your profile ID is visible in Campaign Manager by clicking your avatar on the top right corner.
Click the Save button below.
After clicking Save, copy this page's URL from your browser address bar, and paste it in the Store tab for the recipe_url field (cell B5) of your sheet.
Bulkdozer is ready for use
Review the Bulkdozer documentation.
Modify the values below for your use case (this can be done multiple times), then click play.
End of explanation
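As an illustration (all values below are hypothetical placeholders, not real IDs), the parameters could be set like this before running the execution cell:
FIELDS['account_id'] = '123456'        # hypothetical Campaign Manager Network ID
FIELDS['dcm_profile_id'] = '7654321'   # hypothetical Campaign Manager Profile ID
FIELDS['sheet_url'] = 'https://docs.google.com/spreadsheets/d/<your-feed-copy>'  # URL of your copy of the feed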
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'traffic':{
'hour':[
],
'account_id':{'field':{'name':'account_id','kind':'string','order':1,'description':'Campaign Manager Network ID (optional if profile id provided)','default':None}},
'dcm_profile_id':{'field':{'name':'dcm_profile_id','kind':'string','order':1,'description':'Campaign Manager Profile ID (optional if account id provided)','default':None}},
'auth':'user',
'sheet_url':{'field':{'name':'sheet_url','kind':'string','order':2,'description':'Feed Sheet URL','default':''}},
'timezone':{'field':{'name':'recipe_timezone','kind':'timezone','description':'Timezone for report dates.','default':'America/Chicago'}}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
Explanation: 4. Execute CM360 Bulkdozer Editor
This does NOT need to be modified unless you are changing the recipe; just click play.
End of explanation |
15,601 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Code Generation for the Heat Equation
The <a target="_blank" href="https://en.wikipedia.org/wiki/Heat_equation">heat equation</a> is a simple partial differential equation describing the flow of heat through a homogeneous medium. We can write it as
Step1: We define the PDE using the pystencils building blocks for transient and spatial derivatives. The definition is implicitly equated to zero. We use ps.fd.transient for the first derivative in time and ps.fd.diff to express the second derivatives. ps.fd.diff takes a field and a list of spatial dimensions in which the field should be differentiated.
Step2: Next, the PDE will be discretized. We use the Discretization2ndOrder class to apply finite-difference discretization to the spatial components, and explicit Euler discretization to the transient components.
Step3: It occurs to us that the right-hand summand can be simplified.
Step4: While combining the two fractions on the right as desired, it also put everything above a common denominator. If we generated the kernel from this, we'd be redundantly multiplying $u_{(0,0)}$ by $dx^2$. Let's try something else. Instead of applying simplify to the entire equation, we could apply it only to the second summand.
The outermost operation of heat_pde_discretized is a $+$, so heat_pde_discretized is an instance of sp.Add. We take it apart by accessing its arguments, simplify the right-hand summand, and put it back together again.
Step5: That looks a lot better! There is nothing left to simplify. The right-hand summand still contains a division by $dx^2$, though. Due to their inefficiency, floating-point divisions should be replaced by multiplication with their reciprocals. Before we can eliminate the division, we need to wrap the equation inside an AssignmentCollection. On this collection we can apply add_subexpressions_for_divisions to replace the division by a factor $\xi_1 = \frac{1}{dx^2}$ which in the kernel will be computed ahead of the loop.
Step6: Our numeric solver's symbolic representation is now complete! Next, we use pystencils to generate and compile a C implementation of our kernel. The code is generated as shown below, compiled into a shared library and then bound to kernel_func. All unbound sympy symbols (dx, dt and kappa) as well as the fields u and u_tmp are arguments to the generated kernel function.
Step7: Prototype Simulation
We can set up and run a simple simulation with the generated kernel right here. The first step is to set up the fields and simulation parameters.
Step8: We also need the Dirichlet and Neumann Boundaries.
Step9: After application of the Dirichlet boundary condition, this is our initial situation. In waLBerla, the domain edges would be ghost layers.
Step10: Finally, define the loop function.
Step11: Not only can we run the kernel, we can even view the results as a video! This is a useful tool for debugging and testing the solver before introducing it into a larger application. We can view the simulation animated both as a colorful 2D plot or as a 3D surface plot.
The total time interval being simulated is two seconds (100 frames × 200 steps × 0.1 milliseconds). | Python Code:
u, u_tmp = ps.fields("u, u_tmp: [2D]", layout='fzyx')
kappa = sp.Symbol("kappa")
dx = sp.Symbol("dx")
dt = sp.Symbol("dt")
Explanation: Code Generation for the Heat Equation
The <a target="_blank" href="https://en.wikipedia.org/wiki/Heat_equation">heat equation</a> is a simple partial differential equation describing the flow of heat through a homogeneous medium. We can write it as
$$
\frac{\partial u}{\partial t} =
\kappa \left(
\frac{\partial^2 u}{\partial x^2} +
\frac{\partial^2 u}{\partial y^2}
\right)
$$
where $\kappa$ is the medium's diffusion coefficient and $u(x, y, t)$ is the unknown temperature distribution at the coordinate $(x,y)$ at time $t$.
To discretize this equation using pystencils, we first need to define all the fields and other symbols involved.
End of explanation
heat_pde = ps.fd.transient(u) - kappa * ( ps.fd.diff( u, 0, 0 ) + ps.fd.diff( u, 1, 1 ) )
heat_pde
Explanation: We define the PDE using the pystencils building blocks for transient and spatial derivatives. The definition is implicitly equated to zero. We use ps.fd.transient for the first derivative in time and ps.fd.diff to express the second derivatives. ps.fd.diff takes a field and a list of spatial dimensions in which the field should be differentiated.
End of explanation
discretize = ps.fd.Discretization2ndOrder(dx=dx, dt=dt)
heat_pde_discretized = discretize(heat_pde)
heat_pde_discretized
Explanation: Next, the PDE will be discretized. We use the Discretization2ndOrder class to apply finite-difference discretization to the spatial components, and explicit Euler discretization to the transient components.
End of explanation
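For reference, applying second-order central differences in space (with the same spacing $dx$ in both directions) and an explicit Euler step in time should reproduce the familiar update
$$
u^{t+dt}_{(0,0)} = u^{t}_{(0,0)} + \frac{\kappa \, dt}{dx^2} \left( u^{t}_{(1,0)} + u^{t}_{(-1,0)} + u^{t}_{(0,1)} + u^{t}_{(0,-1)} - 4 u^{t}_{(0,0)} \right)
$$
which is the form the simplification steps below work towards.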
heat_pde_discretized.simplify()
Explanation: It occurs to us that the right-hand summand can be simplified.
End of explanation
heat_pde_discretized = heat_pde_discretized.args[1] + heat_pde_discretized.args[0].simplify()
heat_pde_discretized
Explanation: While combining the two fractions on the right as desired, it also put everything above a common denominator. If we generated the kernel from this, we'd be redundantly multiplying $u_{(0,0)}$ by $dx^2$. Let's try something else. Instead of applying simplify to the entire equation, we could apply it only to the second summand.
The outermost operation of heat_pde_discretized is a $+$, so heat_pde_discretized is an instance of sp.Add. We take it apart by accessing its arguments, simplify the right-hand summand, and put it back together again.
End of explanation
@ps.kernel
def update():
u_tmp.center @= heat_pde_discretized
ac = ps.AssignmentCollection(update)
ac = ps.simp.simplifications.add_subexpressions_for_divisions(ac)
ac
Explanation: That looks a lot better! There is nothing left to simplify. The right-hand summand still contains a division by $dx^2$, though. Due to their inefficiency, floating-point divisions should be replaced by multiplication with their reciprocals. Before we can eliminate the division, we need to wrap the equation inside an AssignmentCollection. On this collection we can apply add_subexpressions_for_divisions to replace the division by a factor $\xi_1 = \frac{1}{dx^2}$ which in the kernel will be computed ahead of the loop.
End of explanation
config = ps.CreateKernelConfig(cpu_openmp=4)
kernel_ast = ps.create_kernel(update, config=config)
kernel_func = kernel_ast.compile()
ps.show_code(kernel_ast)
Explanation: Our numeric solver's symbolic representation is now complete! Next, we use pystencils to generate and compile a C implementation of our kernel. The code is generated as shown below, compiled into a shared library and then bound to kernel_func. All unbound sympy symbols (dx, dt and kappa) as well as the fields u and u_tmp are arguments to the generated kernel function.
End of explanation
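As a quick illustration of the generated function's interface (the array names here are placeholders; the actual arrays are created in the prototype section below), a single explicit Euler step is one call:
# src and dst are equally shaped float64 arrays for the u and u_tmp fields
kernel_func(u=src, u_tmp=dst, dx=0.04, dt=1e-4, kappa=1.0)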
domain_size = 1.0
cells = 25
delta_x = domain_size / cells
delta_t = 0.0001
kappa_v = 1.0
u = np.zeros((cells, cells))
u_tmp = np.zeros_like(u)
Explanation: Prototype Simulation
We can set up and run a simple simulation with the generated kernel right here. The first step is to set up the fields and simulation parameters.
End of explanation
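One detail worth checking by hand: the explicit scheme is only stable for sufficiently small time steps, roughly
$$
dt \le \frac{dx^2}{4 \kappa}
$$
With the values above, $dx = 1/25 = 0.04$, so $dx^2 / (4 \kappa) = 0.0004$, and the chosen $dt = 0.0001$ stays safely below that bound.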
def f(x):
return (1 + np.sin(2 * np.pi * x) * x**2)
def init_domain(domain, domain_tmp):
domain.fill(0)
domain_tmp.fill(0)
domain[:,-1] = f( np.linspace(0, 1, domain.shape[0]) )
domain_tmp[:,-1] = f( np.linspace(0, 1, domain_tmp.shape[0]) )
return domain, domain_tmp
def neumann(domain):
domain[0,:] = domain[1, :]
domain[-1,:] = domain[-2,:]
domain[:,0] = domain[:,1]
return domain
Explanation: We also need the Dirichlet and Neumann Boundaries.
End of explanation
init_domain(u, u_tmp)
ps.plot.scalar_field_surface(u)
Explanation: After application of the Dirichlet boundary condition, this is our initial situation. In waLBerla, the domain edges would be ghost layers.
End of explanation
def loop(steps = 5):
global u, u_tmp
for _ in range(steps):
neumann(u)
kernel_func(u=u, u_tmp=u_tmp, dx=delta_x, dt=delta_t, kappa=kappa_v)
u, u_tmp = u_tmp, u
return u
200 * 100 * delta_t
Explanation: Finally, define the loop function.
End of explanation
init_domain(u, u_tmp)
anim = ps.plot.scalar_field_animation(lambda : loop(200), frames = 100, rescale = False)
result = ps.jupyter.display_as_html_video(anim)
result
init_domain(u, u_tmp)
anim = ps.plot.surface_plot_animation(lambda : loop(200), frames = 100)
result = ps.jupyter.display_as_html_video(anim)
result
Explanation: Not only can we run the kernel, we can even view the results as a video! This is a useful tool for debugging and testing the solver before introducing it into a larger application. We can view the simulation animated both as a colorful 2D plot or as a 3D surface plot.
The total time interval being simulated is two seconds (100 frames × 200 steps × 0.1 milliseconds).
End of explanation |
15,602 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quantum SVM (quantum kernel method)
Introduction
Please refer to this file for an introduction.
This file shows an example of how to use the Aqua API to build an SVM classifier and keep the instance for future prediction.
Step1: First we prepare the dataset, which is used for training, testing and, finally, prediction.
Note
Step2: With the dataset ready we initialize the necessary inputs for the algorithm
Step3: With everything set up, we can now run the algorithm.
The run method includes training, testing and prediction on unlabeled data.
For the testing, the result includes the success ratio.
For the prediction, the result includes the predicted class names for each data point.
After that, the trained model is also stored in the svm instance; you can use it for future prediction.
Step4: Use the trained model to evaluate data directly; we also store a label_to_class and a class_to_label mapping to help convert between labels and class names | Python Code:
from datasets import *
from qiskit_aqua.utils import split_dataset_to_data_and_labels, map_label_to_class_name
from qiskit_aqua.input import get_input_instance
from qiskit_aqua import run_algorithm, get_feature_map_instance, get_algorithm_instance, get_multiclass_extension_instance
Explanation: Quantum SVM (quantum kernel method)
Introduction
Please refer to this file for an introduction.
This file shows an example of how to use the Aqua API to build an SVM classifier and keep the instance for future prediction.
End of explanation
n = 2 # dimension of each data point
training_dataset_size = 20
testing_dataset_size = 10
sample_Total, training_input, test_input, class_labels = ad_hoc_data(training_size=training_dataset_size,
test_size=testing_dataset_size,
n=n, gap=0.3, PLOT_DATA=False)
datapoints, class_to_label = split_dataset_to_data_and_labels(test_input)
print(class_to_label)
Explanation: First we prepare the dataset, which is used for training, testing and, finally, prediction.
Note: You can easily switch to a different dataset, such as the Breast Cancer dataset, by replacing 'ad_hoc_data' with 'Breast_cancer' below.
End of explanation
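As a quick sanity check (a sketch assuming, as in the standard Aqua dataset helpers, that training_input and test_input map each class name to an array of samples), you can inspect how many samples each class contributes:
print({label: samples.shape for label, samples in training_input.items()})
print({label: samples.shape for label, samples in test_input.items()})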
svm = get_algorithm_instance("QSVM.Kernel")
svm.random_seed = 10598
svm.setup_quantum_backend(backend='statevector_simulator')
feature_map = get_feature_map_instance('SecondOrderExpansion')
feature_map.init_args(num_qubits=2, depth=2, entanglement='linear')
svm.init_args(training_input, test_input, datapoints[0], feature_map)
Explanation: With the dataset ready we initialize the necessary inputs for the algorithm:
- build all components required by SVM
- feature_map
- multiclass_extension (optional)
End of explanation
result = svm.run()
print("kernel matrix during the training:")
kernel_matrix = result['kernel_matrix_training']
img = plt.imshow(np.asmatrix(kernel_matrix),interpolation='nearest',origin='upper',cmap='bone_r')
plt.show()
print("testing success ratio: ", result['testing_accuracy'])
print("predicted classes:", result['predicted_classes'])
Explanation: With everything set up, we can now run the algorithm.
The run method includes training, testing and prediction on unlabeled data.
For the testing, the result includes the success ratio.
For the prediction, the result includes the predicted class names for each data point.
After that, the trained model is also stored in the svm instance; you can use it for future prediction.
End of explanation
predicted_labels = svm.predict(datapoints[0])
predicted_classes = map_label_to_class_name(predicted_labels, svm.label_to_class)
print("ground truth: {}".format(datapoints[1]))
print("prediction: {}".format(predicted_labels))
Explanation: Use the trained model to evaluate data directly; we also store a label_to_class and a class_to_label mapping to help convert between labels and class names
End of explanation |
15,603 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I want to make a 4-dimensional array of zeros in Python. I know how to do this for a square array, but I want the dimensions to have different lengths. | Problem:
import numpy as np
arr = np.zeros((20,10,10,2)) |
15,604 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploring the contracts of the city of São Paulo
In the first part of this tutorial, I tried to show how
Step1: A parenthesis! How do I find the code of the agency (órgão) I want to query?
In the previous query, I indicated that the code for Education in the budget, for example, is 16; the Municipal Health Fund, 84. Anyone who works with the budget ends up getting used to the numbers. But how do you find the codes of all the municipal secretariats and companies? There is a specific query for that in the API, which you can run in the console itself. But let's bring that table here, to make things easier
Step2: Watch out for an important conceptual distinction!
ÓRGÃO (agency) = direct administration (regional prefectures, secretariats)
EMPRESA (company) = indirect administration (foundations, public companies)
The fields in the API query are different. So if you enter the code of Prodam (a public company) in the Órgão field, for example, the query will return 0 records.
Done, there we have the complete list. Closing the parenthesis!
Moving on, let's query the contracts of the Secretaria do Verde e Meio Ambiente (Secretariat for Green Areas and the Environment) -- agency 27, as per the list above.
Step3: Part 2. Hands on Pandas!
Step4: Let's check what our contracts database looks like, showing the last 5 records
Step5: This query has many columns (the view above omits some). Let's check which ones they are
Step6: Let's drop txtRazaoSocial so it does not get in our way later (we will get that data from another source). This query does not return the names of the contracted companies, but "PREFEITURA DE SÃO PAULO" for every value -- probably a bug in the API.
Step7: Crossing the databases
Unfortunately, the query does not come "ready to use". We will need to cross it with the empenhos (budget commitments) database from the first tutorial in order to find out, for example, who the creditors are PER CONTRACT. The image below was shown during the launch of the API, at the Café Hacker held by the Municipal Comptroller General and the Municipal Finance Secretariat, and explains how the queries relate to each other (the 'Simplificada' label is theirs, hehe)
Step8: We now have a DataFrame that joins all of the Green Secretariat's 2017 empenhos with the contract information, making the database richer -- and fixing the missing Razão Social and CNPJ in the contracts query!
Step9: The table above will have many "NaN" values for codContrato -- since there are 493 empenhos and only 80 contracts. As we are now only interested in the contracts, let's remove those cases and build a new DataFrame that contains only contracts with at least one related empenho
Step10: We now have two databases to work with
Step11: 'Top 10' contracts of 2017
Now that we have the list of all contracts for the year, let's build a table and sort the data by the contract's principal value. From the API manual, we learn that the 'valPrincipal' field means the "contract value without any adjustments or amendments".
Step12: Contracting modalities
Many analyses are possible with the fields above. Let's see how many contracts SMVMA signed in 2017, by bidding type | Python Code:
import pandas as pd
import requests
import json
import numpy as np
TOKEN = '198f959a5f39a1c441c7c863423264'
base_url = "https://gatewayapi.prodam.sp.gov.br:443/financas/orcamento/sof/v2.1.0"
headers={'Authorization' : str('Bearer ' + TOKEN)}
Explanation: Exploring the contracts of the city of São Paulo
In the first part of this tutorial, I tried to show how to:
1. Register for the SOF API
2. Access the empenho records (the budget execution)
3. Use Pandas to explore some analyses
4. Download the database as CSV
In this second tutorial, we will focus on another query available in the same API: the contracts.
The City of São Paulo does not yet have a centralized system for contract management (one is being implemented).
The good news is that, when it comes to contracts, it can be considered one of the most transparent cities in Brazil, as it is one of the few to publish the full text of its contracts and agreements, since 2014, on this page of the Transparency Portal. With the Civil Society Regulatory Framework (Marco Regulatório da Sociedade Civil), in force as of this year, the cooperation agreements (with no transfer of funds) that were not published before are also starting to appear there, as are donations.
The bad news is that from that Portal database you can extract the information published in the Official Gazette (Diário Oficial), but there may be inconsistencies (caused by publication errors in the D.O.). Sometimes there are duplicates, republications, and wrong values (since they are typed in manually at publication time).
For that reason, we can consider that the most reliable record of contract execution is the one in the budget execution system itself (SOF), since contracts must be registered there for payments to be made. Hence the relevance of this API.
Important!
The difference with respect to what is on the Transparency Portal is that contracts are not registered in SOF right away (a contract only exists in the system once an empenho linked to it is created). So if you want to work with the entire universe of contracts, and not only the budget execution, I recommend double-checking against the Transparency Portal database.
Part 1. Querying the API
We will follow the steps detailed in the previous tutorial to access this query in the API and use Pandas to build a DataFrame.
End of explanation
url_orgaos = '{base_url}/consultarOrgaos?anoExercicio=2017'.format(base_url=base_url)
request_orgaos = requests.get(url_orgaos,
headers=headers,
verify=True).json()
df_orgaos = pd.DataFrame(request_orgaos['lstOrgaos'])
df_orgaos
Explanation: A parenthesis! How do I find the code of the agency (órgão) I want to query?
In the previous query, I indicated that the code for Education in the budget, for example, is 16; the Municipal Health Fund, 84. Anyone who works with the budget ends up getting used to the numbers. But how do you find the codes of all the municipal secretariats and companies? There is a specific query for that in the API, which you can run in the console itself. But let's bring that table here, to make things easier:
End of explanation
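If you want to look up an agency's code by name rather than scanning the whole table, a filter like the one below works; note that the column names used here ('txtDescricaoOrgao', for instance) are assumptions for illustration -- check the actual columns returned by the API:
# assumed column name, purely illustrative
df_orgaos[df_orgaos['txtDescricaoOrgao'].str.contains('VERDE', case=False, na=False)]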
url_contratos = '{base_url}/consultaContrato?anoContrato=2017&codOrgao=27'.format(base_url=base_url)
pagination = '&numPagina={PAGE}'  # page parameter used in the loop below (assumed name -- check the SOF API manual)
request_contratos = requests.get(url_contratos,
headers=headers,
verify=True).json()
number_of_pages = request_contratos['metadados']['qtdPaginas']
todos_contratos = []
todos_contratos = todos_contratos + request_contratos['lstContratos']
if number_of_pages>1:
for p in range(2, number_of_pages+1):
request_contratos = requests.get(url_contratos + pagination.format(PAGE=p), headers=headers, verify=True).json()
todos_contratos = todos_contratos + request_contratos['lstContratos']
Explanation: Watch out for an important conceptual distinction!
ÓRGÃO (agency) = direct administration (regional prefectures, secretariats)
EMPRESA (company) = indirect administration (foundations, public companies)
The fields in the API query are different. So if you enter the code of Prodam (a public company) in the Órgão field, for example, the query will return 0 records.
Done, there we have the complete list. Closing the parenthesis!
Moving on, let's query the contracts of the Secretaria do Verde e Meio Ambiente (Secretariat for Green Areas and the Environment) -- agency 27, as per the list above.
End of explanation
df_contratos = pd.DataFrame(todos_contratos)
Explanation: Part 2. Hands on Pandas!
End of explanation
df_contratos.tail()
Explanation: Let's check what our contracts database looks like, showing the last 5 records:
End of explanation
list(df_contratos)
df_contratos.to_csv('contratos_verde.csv')
Explanation: This query has many columns (the view above omits some). Let's check which ones they are:
End of explanation
df_contratos.drop('txtRazaoSocial', axis=1, inplace=True)
Explanation: Let's drop txtRazaoSocial so it does not get in our way later (we will get that data from another source). This query does not return the names of the contracted companies, but "PREFEITURA DE SÃO PAULO" for every value -- probably a bug in the API.
End of explanation
url_empenho = '{base_url}/consultaEmpenhos?anoEmpenho=2017&mesEmpenho=08&codOrgao=27'.format(base_url=base_url)
num_contrato = '&codContrato={CONTRATO}'
lista_contratos = list(df_contratos['codContrato'])
len(lista_contratos)
request_empenhos = requests.get(url_empenho,
headers=headers,
verify=True).json()
def add_codigo_contrato(empenhos, cod_contrato):
    """
    Add the contract code to the dict of each fetched empenho (budget commitment).
    """
for item in empenhos:
item.update({'codContrato': cod_contrato})
return empenhos
todos_empenhos = []
todos_empenhos = todos_empenhos + request_empenhos['lstEmpenhos']
for n in lista_contratos:
response = requests.get(url_empenho + num_contrato.format(CONTRATO=n), headers=headers, verify=True).json()
empenhos_c_cod = add_codigo_contrato(response['lstEmpenhos'], n)
todos_empenhos = todos_empenhos + empenhos_c_cod
df_empenhos_c_contratos = pd.DataFrame(todos_empenhos)
Explanation: Crossing the databases
Unfortunately, the query does not come "ready to use". We will need to cross it with the empenhos (budget commitments) database from the first tutorial in order to find out, for example, who the creditors are PER CONTRACT. The image below was shown during the launch of the API, at the Café Hacker held by the Municipal Comptroller General and the Municipal Finance Secretariat, and explains how the queries relate to each other (the 'Simplificada' label is theirs, hehe):
The contract code is an optional parameter of the empenhos query, but it does not come back in the returned data (another possible gap in the API, which we will flag to the developers so they can fix it). What does that mean? That we would need to query contract by contract to match every contract to its empenhos! Fortunately, that is what programming is for. Let's build a loop similar to the one we used for pagination:
End of explanation
df_empenhos_c_contratos.tail()
Explanation: We now have a DataFrame that joins all of the Green Secretariat's 2017 empenhos with the contract information, making the database richer -- and fixing the missing Razão Social and CNPJ in the contracts query!
End of explanation
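Before dropping anything, a quick check (using only the codContrato column added above) shows how many of the 493 empenhos actually matched one of the 80 contracts:
print(df_empenhos_c_contratos['codContrato'].notna().sum(), "empenhos linked to a contract")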
df_empenhos_c_contratos = df_empenhos_c_contratos.dropna(axis=0).reset_index(drop=True)
df_empenhos_c_contratos['codContrato'] = df_empenhos_c_contratos.loc[:,'codContrato'].astype(int)
Explanation: The table above will have many "NaN" values for codContrato -- since there are 493 empenhos and only 80 contracts. As we are now only interested in the contracts, let's remove those cases and build a new DataFrame that contains only contracts with at least one related empenho:
End of explanation
cols_to_use = df_empenhos_c_contratos.columns.difference(df_contratos.columns)
df_contratos_empenhados = df_contratos.merge(df_empenhos_c_contratos[cols_to_use], left_index=True, right_index=True, how='outer')
Explanation: We now have two databases to work with:
* df_contratos = the first database we extracted, containing all contracts (without Razão Social or CNPJ, due to the API limitation)
* df_empenhos_c_contratos = the database containing all empenhos with a contract code, after the cross-referencing we did
Both have several columns in common. So, before joining them, let's keep only the columns that are unique to each one, using the "difference" method:
End of explanation
df_contratos_empenhados.columns
top20 = df_contratos_empenhados[['txtDescricaoModalidade',
'txtObjetoContrato',
'txtRazaoSocial',
'numCpfCnpj',
'valPrincipal']]
#top20.sort_values(['valPrincipal'], ascending=False)
Explanation: 'Top 10' contracts of 2017
Now that we have the list of all contracts for the year, let's build a table and sort the data by the contract's principal value. From the API manual, we learn that the 'valPrincipal' field means the "contract value without any adjustments or amendments".
End of explanation
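A minimal sketch of the ranking itself, reusing the commented-out sort above to keep only the ten largest contracts by principal value:
top10 = top20.sort_values('valPrincipal', ascending=False).head(10)
top10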
tipo_licitacao = df_contratos_empenhados['txtDescricaoModalidade'].value_counts()  # contracts per bidding modality
tipo_licitacao
Explanation: Contracting modalities
Many analyses are possible with the fields above. Let's see how many contracts SMVMA signed in 2017, by bidding type:
End of explanation |
15,605 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'sandbox-1', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: EC-EARTH-CONSORTIUM
Source ID: SANDBOX-1
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:59
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
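For example (the name and e-mail below are placeholders, not real document authors):
DOC.set_author("Jane Doe", "jane.doe@example.org")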
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is coupled with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
* Does the atmospheric chemistry grid match the atmosphere grid?*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
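Because the cardinality is 0.N, more than one of the valid choices can be recorded. Assuming repeated DOC.set_value calls append values (as the PROPERTY VALUE(S) comment suggests), an illustrative completed cell could read:
# Illustrative placeholders only -- keep the choices that apply to the documented model.
DOC.set_value("Vegetation")
DOC.set_value("Anthropogenic")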
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
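The description above already suggests the expected free-text format, so an illustrative completed cell could read:
# Illustrative placeholder only, following the format suggested in the description.
DOC.set_value("CO (monthly), C2H6 (constant)")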
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry gas phase chemistry
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
15,606 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced
Step1: As always, let's do imports and initialize a logger and a new Bundle.
Step2: Accessing Settings
Settings are found with their own context in the Bundle and can be accessed through the get_setting method
Step3: or via filtering/twig access
Step4: and can be set as any other Parameter in the Bundle
Available Settings
Now let's look at each of the available settings and what they do
phoebe_version
phoebe_version is a read-only parameter in the settings to store the version of PHOEBE used.
dict_set_all
dict_set_all is a BooleanParameter (defaults to False) that controls whether attempting to set a value to a ParameterSet via dictionary access will set all the values in that ParameterSet (if True) or raise an error (if False)
Step5: In our default binary there are temperatures ('teff') parameters for each of the components ('primary' and 'secondary'). If we were to do
Step6: If you want dictionary access to use set_value_all instead of set_value, you can enable this parameter
Step7: Now let's disable this so it doesn't confuse us while looking at the other options
Step8: dict_filter
dict_filter is a Parameter that accepts a dictionary. This dictionary will then always be sent to the filter call which is done under-the-hood during dictionary access.
Step9: In our default binary, there are several inclination parameters - one for each component ('primary', 'secondary', 'binary') and one with the constraint context (to keep the inclinations aligned).
This can be inconvenient... if you want to set the value of the binary's inclination, you must always provide extra information (like '@component').
Instead, we can always have the dictionary access search in the component context by doing the following
Step10: Now we no longer see the constraint parameters.
All parameters are always accessible with method access
Step11: Now let's reset this option... keeping in mind that we no longer have access to the 'setting' context through twig access, we'll have to use methods to clear the dict_filter
Step12: run_checks_compute (/figure/solver/solution)
The run_checks_compute option allows setting the default compute option(s) sent to b.run_checks, including warnings in the logger raised by interactive checks (see phoebe.interactive_checks_on).
Similar options also exist for checks at the figure, solver, and solution level.
Step13: auto_add_figure, auto_remove_figure
The auto_add_figure and auto_remove_figure determine whether new figures are automatically added to the Bundle when new datasets, distributions, etc are added. This is False by default within Python, but True by default within the UI clients.
Step14: web_client, web_client_url
The web_client and web_client_url settings determine whether the client is opened in a web-browser or with the installed desktop client whenever calling b.ui or b.ui_figures. For more information, see the UI from Jupyter tutorial. | Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
Explanation: Advanced: Settings
The Bundle also contains a few Parameters that provide settings for that Bundle. Note that these are not system-wide and only apply to the current Bundle. They are however maintained when saving and loading a Bundle.
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
import phoebe
from phoebe import u # units
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new Bundle.
End of explanation
b.get_setting()
Explanation: Accessing Settings
Settings are found with their own context in the Bundle and can be accessed through the get_setting method
End of explanation
b['setting']
Explanation: or via filtering/twig access
End of explanation
b['dict_set_all@setting']
b['teff@component']
Explanation: and can be set as any other Parameter in the Bundle
Available Settings
Now let's look at each of the available settings and what they do
phoebe_version
phoebe_version is a read-only parameter in the settings to store the version of PHOEBE used.
dict_set_all
dict_set_all is a BooleanParameter (defaults to False) that controls whether attempting to set a value to a ParameterSet via dictionary access will set all the values in that ParameterSet (if True) or raise an error (if False)
End of explanation
b.set_value_all('teff@component', 4000)
print(b['value@teff@primary@component'], b['value@teff@secondary@component'])
Explanation: In our default binary there are temperatures ('teff') parameters for each of the components ('primary' and 'secondary'). If we were to do:
b['teff@component'] = 6000
this would raise an error. Under-the-hood, this is simply calling:
b.set_value('teff@component', 6000)
which of course would also raise an error.
In order to set both temperatures to 6000, you would either have to loop over the components or call the set_value_all method:
End of explanation
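For reference, the loop alternative mentioned above could be sketched roughly as follows; set_value_all remains the more concise option.
# Sketch of the loop alternative, equivalent in effect to set_value_all.
for component in ['primary', 'secondary']:
    b.set_value('teff@{}@component'.format(component), 6000)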
b['dict_set_all@setting'] = True
b['teff@component'] = 8000
print(b['value@teff@primary@component'], b['value@teff@secondary@component'])
Explanation: If you want dictionary access to use set_value_all instead of set_value, you can enable this parameter
End of explanation
b.set_value_all('teff@component', 6000)
b['dict_set_all@setting'] = False
Explanation: Now let's disable this so it doesn't confuse us while looking at the other options
End of explanation
b['incl']
Explanation: dict_filter
dict_filter is a Parameter that accepts a dictionary. This dictionary will then always be sent to the filter call which is done under-the-hood during dictionary access.
End of explanation
b['dict_filter@setting'] = {'context': 'component'}
b['incl']
Explanation: In our default binary, there are several inclination parameters - one for each component ('primary', 'secondary', 'binary') and one with the constraint context (to keep the inclinations aligned).
This can be inconvenient... if you want to set the value of the binary's inclination, you must always provide extra information (like '@component').
Instead, we can always have the dictionary access search in the component context by doing the following
End of explanation
b.filter(qualifier='incl')
Explanation: Now we no longer see the constraint parameters.
All parameters are always accessible with method access:
End of explanation
b.set_value('dict_filter@setting', {})
Explanation: Now let's reset this option... keeping in mind that we no longer have access to the 'setting' context through twig access, we'll have to use methods to clear the dict_filter
End of explanation
b['run_checks_compute@setting']
b.add_dataset('lc')
b.add_compute('legacy')
print(b.run_checks())
b['run_checks_compute@setting'] = ['phoebe01']
print(b.run_checks())
Explanation: run_checks_compute (/figure/solver/solution)
The run_checks_compute option allows setting the default compute option(s) sent to b.run_checks, including warnings in the logger raised by interactive checks (see phoebe.interactive_checks_on).
Similar options also exist for checks at the figure, solver, and solution level.
End of explanation
b['auto_add_figure']
b['auto_add_figure'].description
b['auto_remove_figure']
b['auto_remove_figure'].description
Explanation: auto_add_figure, auto_remove_figure
The auto_add_figure and auto_remove_figure determine whether new figures are automatically added to the Bundle when new datasets, distributions, etc are added. This is False by default within Python, but True by default within the UI clients.
End of explanation
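These are ordinary settings parameters, so they can be toggled like any other; a sketch:
# Sketch: enable automatic figure creation within Python as well.
b['auto_add_figure@setting'] = True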
b['web_client']
b['web_client'].description
b['web_client_url']
b['web_client_url'].description
Explanation: web_client, web_client_url
The web_client and web_client_url settings determine whether the client is opened in a web-browser or with the installed desktop client whenever calling b.ui or b.ui_figures. For more information, see the UI from Jupyter tutorial.
End of explanation |
15,607 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An Introduction to BioImage Data Analysis with Python
developed by Jonas Hartmann (Gilmour group, EMBL Heidelberg)<br>
as part of the EMBL Bio-IT Course on Intermediate Python Programming<br>
20.09.2017
About
Purpose
This tutorial introduces a number of tools and strategies for image analysis (specifically fluorescence microscopy images as produced in the biosciences) available in python. It aims to give the course attendees a starting point to further explore image analysis packages and pipelines. Furthermore, it serves as another practical example of scientific python programming.
Format
In the course, teacher and students develop the pipeline below together in an open session over the course of about two hours. The tutorial can also be used for self-study, which is best done by re-implementing, testing and playing around with each step of the pipeline.
Content
The tutorial pipeline encompasses the following parts
Step1: 1b. Loading Image Data
Step2: 1c. Viewing Image Data
Step3: 2. Image Processing & Segmentation
2a. Preprocessing by Smoothing
Smoothing an image to reduce technical noise is almost always the first step in image analysis. The most common smoothing algorithm is the Gaussian filter.
The Gaussian filter is an example of a key technique of image analysis
Step4: 2b. Simple Nucleus Segmentation
A simple way of segmenting nuclei in these images is to combine adaptive background subtraction and thresholding.
The idea of adaptive background subtraction is to compute a local background for each position of the image. If there is a slow continuous change in the image background, the local background can be adjusted for this, hence evening out the image.
A simple way of computing the local background is a convolution with a relatively large uniform (mean) kernel (fig. 2). If this kernel is large compared to the structures in the image, the mean will usually end up lower than the foreground but higher than the background - perfect for background subtraction.
<br><img src="..\pictures\uniform_kernel_grid.png" alt="Uniform Filter Kernel" style="width
Step5: Note
Step6: 2c. Cell Segmentation by Watershed
For many structures, simply filtering and thresholding the image is not enough to get a segmentation. In these cases, one of many alternatives must be applied.
A very common approach is the watershed algorithm (fig. 3), which works by treating the image as a topographical map and slowly filling up the valleys in the map with water, starting from so-called seeds. Wherever the waterfronts of two different seeds meet, the boundary between these two objects is generated.
Here we can use the labeled nuclei as seeds for a watershed segmentation of the cells based on the phalloidin channel.
<br><img src="..\pictures\watershed_illustration.png" alt="Watershed Explanation" style="width
Step7: 3. Data Extraction & Analysis
3a. Extracting Region Data
Step8: 3b. Visualizing Region Data
Step9: 3b. Identifying Dividing Cells using a Support Vector Machine
Based on the pH3 channel, cells currently undergoing mitosis can be identified without any ambiguity. However, if the Hoechst channel holds enough information on its own to confidently classify cells as mitotic and non-mitotic, the pH3 channel is no longer needed and its wavelength is freed up for other purposes.
Here, we can use the pH3 channel to create a ground truth for the mitotic vs. non-mitotic classification. We can then use this to train a support vector classifier to identify mitotic cells without use of the pH3 channel.
<font color=red>Warning | Python Code:
# For python 2 users
from __future__ import division, print_function
# Scientific python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Image analysis
import scipy.ndimage as ndi
from skimage import io, segmentation, graph, filters, measure
# Machine learning
from sklearn import preprocessing, svm, metrics
Explanation: An Introduction to BioImage Data Analysis with Python
developed by Jonas Hartmann (Gilmour group, EMBL Heidelberg)<br>
as part of the EMBL Bio-IT Course on Intermediate Python Programming<br>
20.09.2017
About
Purpose
This tutorial introduces a number of tools and strategies for image analysis (specifically fluorescence microscopy images as produced in the biosciences) available in python. It aims to give the course attendees a starting point to further explore image analysis packages and pipelines. Furthermore, it serves as another practical example of scientific python programming.
Format
In the course, teacher and students develop the pipeline below together in an open session over the course of about two hours. The tutorial can also be used for self-study, which is best done by re-implementing, testing and playing around with each step of the pipeline.
Content
The tutorial pipeline encompasses the following parts:
Loading & viewing images
Image processing & segmentation
Data Extraction & analysis
It is based on an example 3-channel image of human HT29 colon cancer cells in culture, labeled with...
Hoechst stain (DNA)
Phalloidin (actin)
Histone H3 phosphorylated on serine 10 [pH3] antibody (mitosis marker)
The example image was obtained from the CellProfiler website and derives from [Moffat et al., 2006].
Dependencies
python (2.7 or 3.x)
numpy, scipy, matplotlib
skimage, pandas, sklearn
All packages used in this tutorial are part of the Anaconda distribution of python.
1. Loading & Viewing Images
1a. Imports
End of explanation
# Read data
raw = io.imread('../data/HT29.tif')
# Check data structure and type
print(type(raw))
print(raw.shape)
print(raw.dtype)
print(raw.min(), raw.max(), raw.mean())
# Split channels
nuc = raw[:,:,0]
pH3 = raw[:,:,1]
act = raw[:,:,2]
Explanation: 1b. Loading Image Data
End of explanation
# Simple imshow
plt.imshow(nuc, interpolation='none', cmap='gray')
plt.show()
# Nice subplots
fig,ax = plt.subplots(1, 3, figsize=(12,4))
for axis,image,title in zip(ax, [nuc,pH3,act], ['Hoechst','pH3','Phalloidin']):
axis.imshow(image, interpolation='none', cmap='gray')
axis.set_title(title)
axis.axis('off')
plt.show()
# Colored overlay using rgb
fig = plt.figure(figsize=(4,4))
plt.imshow(np.zeros_like(nuc), vmax=1) # Black background
rgb = np.zeros(image.shape+(3,)) # Empty RGB
for i,image in enumerate([act,pH3,nuc]): # Add each channel to RGB
rgb[:,:,i] = (image.astype(np.float) - image.min()) / (image.max() - image.min()) # Normalize images to [0,1]
plt.imshow(rgb, interpolation='none')
plt.axis('off')
plt.show()
Explanation: 1c. Viewing Image Data
End of explanation
# Gaussian smoothing
nuc_smooth = ndi.gaussian_filter(nuc, sigma=1)
# Show
fig,ax = plt.subplots(1, 2, figsize=(12,12))
ax[0].imshow(nuc, interpolation='none', cmap='gray')
ax[0].set_title('Raw')
ax[0].axis('off')
ax[1].imshow(nuc_smooth, interpolation='none', cmap='gray')
ax[1].set_title('Smoothed')
ax[1].axis('off')
plt.show()
Explanation: 2. Image Processing & Segmentation
2a. Preprocessing by Smoothing
Smoothing an image to reduce technical noise is almost always the first step in image analysis. The most common smoothing algorithm is the Gaussian filter.
The Gaussian filter is an example of a key technique of image analysis: kernel convolution. In image analysis, a kernel is a small 'mask' that is moved over each pixel in the image. At each pixel position, the kernel determines which of the surrounding pixels are used to compute the new value and how much each surrounding pixel contributes. Kernel convolutions can be implemented using Fast Fourier Transforms (FFTs), which makes them very fast.
For the Gaussian filter, the kernel is a small Gaussian-like distribution (fig. 1). To compute a pixel of the smoothed image, the values of the surrounding pixels are multiplied by the corresponding kernel value, summed up, and normalized again (by dividing by the sum of kernel values). Thus, by 'diluting' the values of individual pixels with the values of neighboring pixels, a convolution with a Gaussian kernel leads to a smoothing of the image.
<br><img src="..\pictures\gaussian_kernel_grid.png" alt="Gaussian Filter Kernel" style="width: 300px">
<center>Fig 1: Example of a Gaussian convolution kernel.
End of explanation
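To make the idea of kernel convolution concrete, here is a small illustrative sketch (not part of the course pipeline) that convolves the image with a hand-made, normalized Gaussian-like kernel; ndi.gaussian_filter does the same thing internally with a properly sampled kernel.
# Illustrative sketch of an explicit kernel convolution.
kernel = np.array([[1., 2., 1.],
                   [2., 4., 2.],
                   [1., 2., 1.]])
kernel = kernel / kernel.sum()  # Normalize so the weights sum to 1
nuc_convolved = ndi.convolve(nuc.astype(float), kernel)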
# Adaptive background subtraction
nuc_smooth_bg = ndi.uniform_filter(nuc_smooth, size=20)
nuc_smooth_bgsub = nuc_smooth - nuc_smooth_bg
nuc_smooth_bgsub[nuc_smooth < nuc_smooth_bg] = 0
# Show
fig,ax = plt.subplots(1, 2, figsize=(12,12))
ax[0].imshow(nuc_smooth, interpolation='none', cmap='gray')
ax[0].set_title('Smoothed')
ax[0].axis('off')
ax[1].imshow(nuc_smooth_bgsub, interpolation='none', cmap='gray')
ax[1].set_title('Background-subtracted')
ax[1].axis('off')
plt.show()
# Interactive search for a good threshold
# Plotting function
def threshold_plot(threshold=10):
# Threshold
nuc_mask = nuc_smooth_bgsub > threshold
# Show
fig = plt.figure(figsize=(6,6))
plt.imshow(nuc_smooth_bgsub, interpolation='none', cmap='gray')
plt.imshow(np.ma.array(nuc_mask, mask=nuc_mask==0), interpolation='none', cmap='autumn', alpha=0.5)
plt.axis('off')
plt.show()
# Interactive widget
from ipywidgets import interactive
interactive(threshold_plot, threshold=(1,255,1))
# Apply threshold
nuc_mask = nuc_smooth_bgsub > 10
Explanation: 2b. Simple Nucleus Segmentation
A simple way of segmenting nuclei in these images is to combine adaptive background subtraction and thresholding.
The idea of adaptive background subtraction is to compute a local background for each position of the image. If there is a slow continuous change in the image background, the local background can be adjusted for this, hence evening out the image.
A simple way of computing the local background is a convolution with a relatively large uniform (mean) kernel (fig. 2). If this kernel is large compared to the structures in the image, the mean will usually end up lower than the foreground but higher than the background - perfect for background subtraction.
<br><img src="..\pictures\uniform_kernel_grid.png" alt="Uniform Filter Kernel" style="width: 300px">
<center>Fig 2: Example of a circular uniform convolution kernel.
End of explanation
# Label the image to give each object a unique number
nuc_labeled = ndi.label(nuc_mask)[0]
# Show
fig = plt.figure(figsize=(6,6))
plt.imshow(nuc_smooth_bgsub, interpolation='none', cmap='gray')
plt.imshow(np.ma.array(nuc_labeled, mask=nuc_labeled==0), interpolation='none', cmap='prism', alpha=0.5)
plt.axis('off')
plt.show()
Explanation: Note: There are a number of problems with a simple segmentation like this, namely the risk of fused nuclei and of artefacts, e.g. small debris or background fluctuations that are wrongly considered a nucleus. There are a number of ways to address these problems but for the purpose of this course we will consider the current result good enough.
End of explanation
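One of the possible fixes alluded to above is a simple size filter on the labeled objects; a sketch with an arbitrary area threshold (which would need tuning on real data) could look like this:
# Sketch: discard labeled objects smaller than an (arbitrary) area threshold.
label_ids = np.arange(1, nuc_labeled.max() + 1)
sizes = ndi.sum(nuc_mask, nuc_labeled, index=label_ids)
nuc_labeled_clean = np.copy(nuc_labeled)
nuc_labeled_clean[np.isin(nuc_labeled, label_ids[sizes < 50])] = 0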
# Identify a background seed
# Here, the 5th percentile on signal intensity is used.
act_bgsub = act - np.percentile(act,5)
act_bgsub[act < np.percentile(act,5)] = 0
# Show
fig = plt.figure(figsize=(6,6))
plt.imshow(act, interpolation='none', cmap='gray')
plt.imshow(np.ma.array(act_bgsub==0, mask=act_bgsub!=0), interpolation='none', cmap='autumn', alpha=0.5)
plt.axis('off')
plt.show()
# Prepare the seeds
seeds = np.copy(nuc_labeled) # Cell seeds
seeds[act_bgsub==0] = nuc_labeled.max()+1 # Add background seeds
# Prepare the image by Sobel edge filtering
act_sobel = filters.sobel(act_bgsub)
# Show
fig = plt.figure(figsize=(6,6))
plt.imshow(act_sobel, interpolation='none', cmap='gray')
plt.axis('off')
plt.show()
# Run watershed
act_ws = segmentation.watershed(act_sobel, seeds)
# Remove background
act_ws[act_ws==nuc_labeled.max()+1] = 0
# Show
fig = plt.figure(figsize=(6,6))
plt.imshow(act_bgsub, interpolation='none', cmap='gray')
plt.imshow(np.ma.array(act_ws, mask=act_ws==0), interpolation='none', cmap='prism', alpha=0.5)
plt.axis('off')
plt.show()
# Better visualization
fig = plt.figure(figsize=(12,12))
plt.imshow(act_bgsub, interpolation='none', cmap='gray')
plt.imshow(np.ma.array(nuc_labeled, mask=nuc_labeled==0), interpolation='none', cmap='prism', alpha=0.3)
boundaries = filters.sobel(act_ws) > 0
plt.imshow(np.ma.array(boundaries, mask=boundaries==0), interpolation='none', cmap='autumn', alpha=0.5)
plt.axis('off')
plt.show()
Explanation: 2c. Cell Segmentation by Watershed
For many structures, simply filtering and thresholding the image is not enough to get a segmentation. In these cases, one of many alternatives must be applied.
A very common approach is the watershed algorithm (fig. 3), which works by treating the image as a topographical map and slowly filling up the valleys in the map with water, starting from so-called seeds. Wherever the waterfronts of two different seeds meet, the boundary between these two objects is generated.
Here we can use the labeled nuclei as seeds for a watershed segmentation of the cells based on the phalloidin channel.
<br><img src="..\pictures\watershed_illustration.png" alt="Watershed Explanation" style="width: 900px">
<center>Fig 3: Graphical explanation of watershed segmentation.</center>
End of explanation
# Regionprops provides a number of measurements per label
nuc_props_nuc = measure.regionprops(nuc_labeled, intensity_image=nuc) # Props for nuclear mask, nuc channel
nuc_props_pH3 = measure.regionprops(nuc_labeled, intensity_image=pH3) # Props for nuclear mask, pH3 channel
nuc_props_act = measure.regionprops(nuc_labeled, intensity_image=act) # Props for nuclear mask, act channel
# To better handle these, they can be transformed into dictionaries or pandas dataframes
# Function to convert to dict
def props2dict(props):
# Get prop names (excluding non-scalar props!)
propdict = {prop_name:[] for prop_name in props[0]
if not (type(props[0][prop_name]) in [tuple, np.ndarray])}
# For each prop name...
for prop_name in propdict:
# For each region...
for region in props:
# Add the corresponding value
propdict[prop_name].append(region[prop_name])
# Convert the values to an array
propdict[prop_name] = np.array(propdict[prop_name])
# Return results
return propdict
# Converting nuc_props_pH3 and nuc_props_act to dicts
propdict_pH3 = props2dict(nuc_props_pH3)
propdict_act = props2dict(nuc_props_act)
# Converting nuc_props_nuc to a pandas df
propdf = pd.DataFrame(props2dict(nuc_props_nuc))
propdf = propdf.drop(['bbox_area', 'euler_number'], axis=1)
Explanation: 3. Data Extraction & Analysis
3a. Extracting Region Data
End of explanation
propdf.head()
propdf.describe()
# Boxplot
fig = plt.figure(figsize=(12,4))
propdf.boxplot()
fig.autofmt_xdate()
plt.show()
# Backmapping onto image
color_prop = 'area'
nuc_propcolored = np.zeros(nuc.shape)
for row,label in enumerate(propdf.label):
nuc_propcolored[nuc_labeled==label] = propdict_act[color_prop][row]
# Show
fig = plt.figure(figsize=(6,6))
plt.imshow(act_bgsub, interpolation='none', cmap='gray')
plt.imshow(np.ma.array(nuc_propcolored, mask=nuc_propcolored==0),
interpolation='none', cmap='plasma')
plt.colorbar(label=color_prop, fraction=0.046, pad=0.04)
plt.axis('off')
plt.show()
# Scatter plot to look at relations
fig = plt.figure(figsize=(4,4))
plt.scatter(propdf.area, propdf.perimeter, edgecolor='', alpha=0.5)
plt.plot(np.sort(propdf.perimeter)**2.0 / (4*np.pi), np.sort(propdf.perimeter), color='r', alpha=0.8)
plt.legend(['theoretical circles', 'data'], loc=4, fontsize=10)
plt.xlabel('area')
plt.ylabel('perimeter')
plt.show()
Explanation: 3b. Visualizing Region Data
End of explanation
# Standardizing the features to zero mean and unit variance
propdf_stand = (propdf - propdf.mean()) / propdf.std()
# Show
fig = plt.figure(figsize=(12,4))
propdf_stand.boxplot()
fig.autofmt_xdate()
plt.show()
# Use pH3 signal to create ground truth labels (True: "in mitosis" | False: "not in mitosis")
# Check pH3 signal distribution with histogram
plt.hist(propdict_pH3['mean_intensity'], bins=50)
plt.ylim([0,20])
plt.show()
# Create ground truth
ground_truth = propdict_pH3['mean_intensity'] > 20
# Train Support Vector Classifier
svc = svm.SVC()
svc.fit(propdf_stand, ground_truth)
# Predict on the training data
prediction = svc.predict(propdf_stand)
# Evaluate prediction with a confusion matrix
cmat = metrics.confusion_matrix(ground_truth, prediction)
# Show
plt.imshow(cmat,interpolation='none',cmap='Blues')
for (i, j), z in np.ndenumerate(cmat):
plt.text(j, i, z, ha='center', va='center')
plt.xticks([0,1], ["Non-Mitotic","Mitotic"])
plt.yticks([0,1], ["Non-Mitotic","Mitotic"], rotation=90)
plt.xlabel("prediction")
plt.ylabel("ground truth")
plt.show()
Explanation: 3b. Identifying Dividing Cells using a Support Vector Machine
Based on the pH3 channel, cells currently undergoing mitosis can be identified without any ambiguity. However, if the Hoechst channel holds enough information on its own to confidently classify cells as mitotic and non-mitotic, the pH3 channel is no longer needed and its wavelength is freed up for other purposes.
Here, we can use the pH3 channel to create a ground truth for the mitotic vs. non-mitotic classification. We can then use this to train a support vector classifier to identify mitotic cells without use of the pH3 channel.
<font color=red>Warning:</font> This is just a mock example! Larger datasets will feature many cases that are less clear-cut than the ones observed here, so much more data would be needed to train a robust classifier. In particular, many more cases of mitotic cells would be needed so that a balanced training set can be constructed. Furthermore, because of the lack of data in this example, the classifier's performance is evaluated on the training set itself, which means that overfitting goes unnoticed. In a real case, it is imperative to evaluate classifiers on separate training and test sets!
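For completeness, a held-out evaluation would look roughly like the following sketch (with this little data the split itself would not be meaningful; it assumes scikit-learn's model_selection module is available):
# Sketch of a proper train/test evaluation.
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(propdf_stand, ground_truth, test_size=0.3, random_state=0)
svc_holdout = svm.SVC().fit(X_train, y_train)
print(metrics.confusion_matrix(y_test, svc_holdout.predict(X_test)))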
End of explanation |
15,608 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 01
Import
Step2: Interact basics
Write a print_sum function that prints the sum of its arguments a and b.
Step3: Use the interact function to interact with the print_sum function.
a should be a floating point slider over the interval [-10., 10.] with step sizes of 0.1
b should be an integer slider over the interval [-8, 8] with step sizes of 2.
Step5: Write a function named print_string that prints a string and additionally prints the length of that string if a boolean parameter is True.
Step6: Use the interact function to interact with the print_string function.
s should be a textbox with the initial value "Hello World!".
length should be a checkbox with an initial value of True. | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
Explanation: Interact Exercise 01
Import
End of explanation
def print_sum(a, b):
    """Print the sum of the arguments a and b."""
    print(a+b)
Explanation: Interact basics
Write a print_sum function that prints the sum of its arguments a and b.
End of explanation
interact(print_sum,a=(-10.,10.,.1),b=(-8,8,2));
assert True # leave this for grading the print_sum exercise
Explanation: Use the interact function to interact with the print_sum function.
a should be a floating point slider over the interval [-10., 10.] with step sizes of 0.1
b should be an integer slider over the interval [-8, 8] with step sizes of 2.
End of explanation
def print_string(s, length=False):
    """Print the string s and optionally its length."""
    print(s)
    if length == True:
        print(len(s))
Explanation: Write a function named print_string that prints a string and additionally prints the length of that string if a boolean parameter is True.
End of explanation
# YOUR CODE HERE
interact(print_string,s='Hello World!',length=True);
assert True # leave this for grading the print_string exercise
Explanation: Use the interact function to interact with the print_string function.
s should be a textbox with the initial value "Hello World!".
length should be a checkbox with an initial value of True.
End of explanation |
15,609 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hidden Markov Models
author
Step1: CG rich region identification example
Let's take the simplified example of CG island detection on a sequence of DNA. DNA is made up of the four canonical nucleotides, abbreviated 'A', 'C', 'G', and 'T'. We can say that regions of the genome that are enriched for nucleotides 'C' and 'G' are 'CG islands', which is a simplification of the real biological concept but sufficient for our example. The issue with identifying these regions is that they are not exclusively made up of the nucleotides 'C' and 'G', but have some 'A's and 'T's scattered amongst them. A simple model that looked for long stretches of C's and G's would not perform well, because it would miss most of the real regions.
We can start off by building the model. Because HMMs involve the transition matrix, which is often represented using a graph over the hidden states, building them requires a few more steps than a simple distribution or the mixture model. Our simple model will be composed of two distributions. One distribution will be a uniform distribution across all four characters and one will have a preference for the nucleotides C and G, while still allowing the nucleotides A and T to be present.
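A minimal sketch of those two distributions, assuming pomegranate's DiscreteDistribution class (the exact probabilities are illustrative):
from pomegranate import DiscreteDistribution
d_background = DiscreteDistribution({'A': 0.25, 'C': 0.25, 'G': 0.25, 'T': 0.25})
d_island = DiscreteDistribution({'A': 0.10, 'C': 0.40, 'G': 0.40, 'T': 0.10})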
Step2: For the HMM we have to first define states, which are a pair of a distribution and a name.
Step3: Now we define the HMM and pass in the states.
Step4: Then we have to define the transition matrix, which is the probability of going from one hidden state to the next hidden state. In some cases, like this one, there are high self-loop probabilities, indicating that it's likely that one will stay in the same hidden state from one observation to the next in the sequence. Other cases have a lower probability of staying in the same state, like the part of speech tagger. A part of the transition matrix is the start probabilities, which is the probability of starting in each of the hidden states. Because we create these transitions one at a time, they are very amenable to sparse transition matrices, where it is impossible to transition from one hidden state to the next.
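As a sketch of what such transitions look like in pomegranate (the names model, s1, and s2 stand in for the model and the two states defined above; the probabilities are illustrative):
model.add_transition(model.start, s1, 0.5)
model.add_transition(model.start, s2, 0.5)
model.add_transition(s1, s1, 0.9)   # high self-loop probability
model.add_transition(s1, s2, 0.1)
model.add_transition(s2, s2, 0.9)   # high self-loop probability
model.add_transition(s2, s1, 0.1)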
Step5: Now, finally, we need to bake the model in order to finalize the internal structure. Bake must be called when the model has been fully specified.
Step6: Now we can make predictions on some sequence. Let's create some sequence that has a CG enriched region in the middle and see whether we can identify it.
Step7: It looks like it successfully identified a CG island in the middle (the long stretch of 0's) and another shorter one at the end. The predicted integers don't correspond to the order in which states were added to the model, but rather, the order that they exist in the model after a topological sort. More importantly, the model wasn't tricked into thinking that every CG or even pair of CGs was an island. It required many C's and G's to be part of a longer stretch to identify that region as an island. Naturally, the balance of the transition and emission probabilities will heavily influence what regions are detected.
Let's say, though, that we want to get rid of that CG island prediction at the end because we don't believe that real islands can occur at the end of the sequence. We can take care of this by adding in an explicit end state that only the non-island hidden state can get to. We enforce that the model has to end in the end state, and if only the non-island state gets there, the sequence of hidden states must end in the non-island state. Here's how
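In code this amounts to something like the following sketch (the exact probability is illustrative):
model.add_transition(s1, model.end, 0.05)   # only the non-island state reaches the end state
model.bake()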
Step8: Note that all we did was add a transition from s1 to model.end with some low probability. This probability doesn't have to be high if there's only a single transition there, because there's no other possible way of getting to the end state.
Step9: This seems far more reasonable. There is a single CG island surrounded by background sequence, and something at the end. If we knew that CG islands cannot occur at the end of sequences, we need only modify the underlying structure of the HMM in order to say that the sequence must end from the background state.
In the same way that mixtures could provide probabilistic estimates of class assignments rather than only hard labels, hidden Markov models can do the same. These estimates are the posterior probabilities of belonging to each of the hidden states given the observation, but also given the rest of the sequence.
Step10: We can see here the transition from the first non-island region to the middle island region, with high probabilities in one column turning into high probabilities in the other column. The predict method is just taking the most likely element, the maximum-a-posteriori estimate.
In addition to using the forward-backward algorithm to just calculate posterior probabilities for each observation, we can count the number of transitions that are predicted to occur between the hidden states.
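A sketch of how those expected transition counts are obtained, assuming pomegranate's forward-backward interface (sequence is a placeholder for the observed symbols):
trans_counts, posteriors = model.forward_backward(list(sequence))
print(trans_counts)   # soft counts of transitions between hidden states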
Step11: This is the transition table, which has the soft count of the number of transitions across an edge in the model given a single sequence. It is a square matrix of size equal to the number of states (including start and end state), with number of transitions from (row_id) to (column_id). This is exemplified by the 1.0 in the first row, indicating that there is one transition from background state to the end state, as that's the only way to reach the end state. However, the third (or fourth, depending on ordering) row is the transitions from the start state, and it only slightly favors the background state. These counts are not normalized to the length of the input sequence, but can easily be done so by dividing by row sums, column sums, or entire table sums, depending on your application.
A possible reason not to normalize is to run several sequences through and add up their tables, because normalizing in the end and extracting some domain knowledge. It is extremely useful in practice. For example, we can see that there is an expectation of ~2.9 transitions from CG island to background, and ~2.4 from background to CG island. This could be used to infer that there are ~2-3 edges, which makes sense if you consider that the start and end of the sequence seem like they might be part of the CG island states except for the strict transition probabilities used (look at the first few rows of the emission table above.)
Sequence Alignment Example
Let's move on to a more complicated structure, that of a profile HMM. A profile HMM is used to align a sequence to a reference 'profile', where the reference profile can either be a single sequence, or an alignment of many sequences (such as a reference genome). In essence, this profile has a 'match' state for every position in the reference profile, an 'insert' state, and a 'delete' state. The insert state allows the external sequence to have an insertion into the sequence without throwing off the entire alignment, such as the following
Step12: Now let's try to align some sequences to it and see what happens!
Step13: The first and last sequence are entirely matches, meaning that it thinks the most likely alignment between the profile ACT and ACT is A-A, C-C, and T-T, which makes sense, and the most likely alignment between ACT and ACC is A-A, C-C, and T-C, which includes a mismatch. Essentially, it's more likely that there's a T-C mismatch at the end than that there was a deletion of a T at the end of the sequence, and a separate insertion of a C.
The two middle sequences don't match very well, as expected! G's are not very likely in this profile at all. It predicts that the two G's are inserts, and that the C matches the C in the profile, before hitting the delete state because it can't emit a T. The third sequence thinks that the G is an insert, as expected, and then aligns the A and T in the sequence to the A and T in the master sequence, missing the middle C in the profile.
By using deletes, we can handle other sequences which are shorter than three characters. Let's look at some more sequences of different lengths.
Step15: Again, more of the same expected. You'll notice most of the use of insertion states are at I0, because most of the insertions are at the beginning of the sequence. It's more probable to simply stay in I0 at the beginning instead of go from I0 to D1 to I1, or going to another insert state along there. You'll see other insert states used when insertions occur in other places in the sequence, like 'ATTT' and 'ACGTG'.
Now that we have the path, we need to convert it into an alignment, which is significantly more informative to look at.
Step16: Training Hidden Markov Models
There are two main algorithms for training hidden Markov models-- Baum Welch (structured version of Expectation Maximization), and Viterbi training. Since we don't start off with labels on the data, these are both unsupervised training algorithms. In order to assign labels, Baum Welch uses EM to assign soft labels (weights in this case) to each point belonging to each state, and then uses weighted MLE estimates to update the distributions. Viterbi assigns hard labels to each observation using the Viterbi algorithm, and then updates the distributions based on these hard labels.
pomegranate is extremely well featured when it comes to regularization methods for training, supporting tied emissions and edges, edge and emission inertia, freezing nodes or edges, edge pseudocounts, and multithreaded training. Lets look at some examples of the following
Step17: You have now indicated that these two states are tied, and when training, the weights of all points going to s2 will be added to the weights of all points going to s1 when updating d. As a side note, this is implemented in a computationally efficient manner such that d will only be updated once, not twice (but giving the same result). s3 and s4 are not tied together, because while they have the same distribution, it is not the same python object.
Tied Edges
Edges can be tied together for the same reason. If you have a modular structure to your HMM, perhaps you believe this repeating structure doesn't (or shouldn't) have a position specific edge structure. You can do this simply by adding a group when you add transitions.
Step18: The above model doesn't necessarily make sense, but it shows how simple it is to tie edges as well. You can go ahead and train normally from this point, without needing to change any code.
Inertia
The next options are inertia on edges or on distributions. This simply means that you update your parameters as (previous_parameter * inertia) + (new_parameter * (1-inertia) ). It is a way to prevent your updates from overfitting immediately. You can specify this in the train function using either edge_inertia or distribution_inertia. These default to 0, with 1 being the maximum, meaning that you don't update based on new evidence, the same as freezing a distribution or the edges.
Step19: Pseudocounts
Another way of regularizing your model is to add pseudocounts to your edges (which have non-zero probabilities). When updating your edges in the future, you add this pseudocount to the count of transitions across that edge in the future. This gives a more Bayesian estimate of the edge probability, and is useful if you have a large model and don't expect to cross most of the edges with your training data. An example might be a complicated profile HMM, where you don't expect to see deletes or inserts at all in your training data, but don't want to change from the default values.
In pomegranate, pseudocounts default to the initial probabilities, so that if you don't see data, the edge values simply aren't updated. You can define edge-specific pseudocounts when you define the transition. When you train, you must define use_pseudocount=True.
Step20: The other way is to put a blanket pseudocount on all edges.
Step21: We can see that there isn't as much of an improvement. This is part of regularization, though. We sacrifice fitting the data exactly in order for our model to generalize better to future data. The majority of the training improvement is likely coming from the emissions better fitting the data, though.
Multithreaded Training
Since pomegranate is implemented in cython, the majority of functions are written with the GIL released. A benefit of doing this is that we can use multithreading in order to make some computationally intensive tasks take less time. However, a downside is that python doesn't play nicely with multithreading, and so there are some cases where training using multithreading can make your model training take significantly longer. I investigate this in an early multithreading pull request <a href="https://github.com/jmschrei/pomegranate/pull/30">here</a>.
Step22: Serialization
Hidden Markov models, like the other models in pomegranate, support serialization to JSONs using to_json() and from_json( json ). This is useful if you want to train a model on large amounts of data, taking a significant amount of time, and then use this model in the future without having to repeat this computationally intensive step (sounds familiar by now). Let's look at the original CG island model, since it's significantly smaller.
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn; seaborn.set_style('whitegrid')
import numpy
from pomegranate import *
numpy.random.seed(0)
numpy.set_printoptions(suppress=True)
%load_ext watermark
%watermark -m -n -p numpy,scipy,pomegranate
Explanation: Hidden Markov Models
author: Jacob Schreiber <br>
contact: jmschreiber91@gmail.com
Hidden Markov models (HMMs) are the flagship of the pomegranate package in that they have the most features of all of the models and that they were the first algorithm implemented.
Hidden Markov models are a form of structured prediction method that are popular for tagging all elements in a sequence with some "hidden" state. They can be thought of as extensions of Markov chains where, instead of the probability of the next observation being dependant on the current observation, the probability of the next hidden state is dependant on the current hidden state, and the next observation is derived from that hidden state. An example of this can be part of speech tagging, where the observations are words and the hidden states are parts of speech. Each word gets tagged with a part of speech, but dynamic programming is utilized to search through all potential word-tag combinations to identify the best set of tags across the entire sentence.
Another perspective of HMMs is that they are an extension on mixture models that includes a transition matrix. Conceptually, a mixture model has a set of "hidden" states---the mixture components---and one can calculate the probability that each sample belongs to each component. This approach treats each observations independently. However, like in the part-of-speech example we know that an adjective typically is followed by a noun, and so position in the sequence matters. A HMM adds a transition matrix between the hidden states to incorporate this information across the sequence, allowing for higher probabilities of transitioning from the "adjective" hidden state to a noun or verb.
pomegranate implements HMMs in a flexible manner that goes beyond what other packages allow. Let's see some examples.
End of explanation
d1 = DiscreteDistribution({'A': 0.25, 'C': 0.25, 'G': 0.25, 'T': 0.25})
d2 = DiscreteDistribution({'A': 0.10, 'C': 0.40, 'G': 0.40, 'T': 0.10})
Explanation: CG rich region identification example
Let's take the simplified example of CG island detection on a sequence of DNA. DNA is made up of the four canonical nucleotides, abbreviated 'A', 'C', 'G', and 'T'. We can say that regions of the genome that are enriched for nucleotides 'C' and 'G' are 'CG islands', which is a simplification of the real biological concept but sufficient for our example. The issue with identifying these regions is that they are not exclusively made up of the nucleotides 'C' and 'G', but have some 'A's and 'T's scattered amongst them. A simple model that looked for long stretches of C's and G's would not perform well, because it would miss most of the real regions.
We can start off by building the model. Because HMMs involve the transition matrix, which is often represented using a graph over the hidden states, building them requires a few more steps than a simple distribution or the mixture model. Our simple model will be composed of two distributions. One distribution will be a uniform distribution across all four characters and one will have a preference for the nucleotides C and G, while still allowing the nucleotides A and T to be present.
End of explanation
s1 = State(d1, name='background')
s2 = State(d2, name='CG island')
Explanation: For the HMM we have to first define states, which are a pair of a distribution and a name.
End of explanation
model = HiddenMarkovModel()
model.add_states(s1, s2)
Explanation: Now we define the HMM and pass in the states.
End of explanation
model.add_transition(model.start, s1, 0.5)
model.add_transition(model.start, s2, 0.5)
model.add_transition(s1, s1, 0.9)
model.add_transition(s1, s2, 0.1)
model.add_transition(s2, s1, 0.1)
model.add_transition(s2, s2, 0.9)
Explanation: Then we have to define the transition matrix, which is the probability of going from one hidden state to the next hidden state. In some cases, like this one, there are high self-loop probabilities, indicating that it's likely that one will stay in the same hidden state from one observation to the next in the sequence. Other cases have a lower probability of staying in the same state, like the part of speech tagger. A part of the transition matrix is the start probabilities, which is the probability of starting in each of the hidden states. Because we create these transitions one at a time, they are very amenable to sparse transition matrices, where it is impossible to transition from one hidden state to the next.
End of explanation
model.bake()
Explanation: Now, finally, we need to bake the model in order to finalize the internal structure. Bake must be called when the model has been fully specified.
End of explanation
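# Optional inspection step (a sketch): once bake() has been called, the model's
# states and its transition structure can be examined directly. The attribute and
# method used here are part of pomegranate's HMM API; skip this if your version
# differs.
print([state.name for state in model.states])
print(model.dense_transition_matrix())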
seq = numpy.array(list('CGACTACTGACTACTCGCCGACGCGACTGCCGTCTATACTGCGCATACGGC'))
hmm_predictions = model.predict(seq)
print("sequence: {}".format(''.join(seq)))
print("hmm pred: {}".format(''.join(map( str, hmm_predictions))))
Explanation: Now we can make predictions on some sequence. Let's create some sequence that has a CG enriched region in the middle and see whether we can identify it.
End of explanation
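# Beyond the per-symbol predictions above, we can also ask for the single most
# likely joint path (Viterbi) and for the overall likelihood of the sequence.
# Both methods are used again later in this tutorial; this is just a usage sketch.
logp, path = model.viterbi(seq)
print("log probability of the sequence: {}".format(logp))
print("viterbi path: {}".format(' '.join(state.name for idx, state in path)))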
model = HiddenMarkovModel()
model.add_states(s1, s2)
model.add_transition(model.start, s1, 0.5)
model.add_transition(model.start, s2, 0.5)
model.add_transition(s1, s1, 0.89 )
model.add_transition(s1, s2, 0.10 )
model.add_transition(s1, model.end, 0.01)
model.add_transition(s2, s1, 0.1 )
model.add_transition(s2, s2, 0.9)
model.bake()
Explanation: It looks like it successfully identified a CG island in the middle (the long stretch of 0's) and another shorter one at the end. The predicted integers don't correspond to the order in which states were added to the model, but rather, the order that they exist in the model after a topological sort. More importantly, the model wasn't tricked into thinking that every CG or even pair of CGs was an island. It required many C's and G's to be part of a longer stretch to identify that region as an island. Naturally, the balance of the transition and emission probabilities will heavily influence what regions are detected.
Let's say, though, that we want to get rid of that CG island prediction at the end because we don't believe that real islands can occur at the end of the sequence. We can take care of this by adding in an explicit end state that only the non-island hidden state can get to. We enforce that the model has to end in the end state, and if only the non-island state gets there, the sequence of hidden states must end in the non-island state. Here's how:
End of explanation
seq = numpy.array(list('CGACTACTGACTACTCGCCGACGCGACTGCCGTCTATACTGCGCATACGGC'))
hmm_predictions = model.predict(seq)
print("sequence: {}".format(''.join(seq)))
print("hmm pred: {}".format(''.join(map( str, hmm_predictions))))
Explanation: Note that all we did was add a transition from s1 to model.end with some low probability. This probability doesn't have to be high if there's only a single transition there, because there's no other possible way of getting to the end state.
End of explanation
print(model.predict_proba(seq)[12:19])
Explanation: This seems far more reasonable. There is a single CG island surrounded by background sequence, and something at the end. If we knew that CG islands cannot occur at the end of sequences, we need only modify the underlying structure of the HMM in order to say that the sequence must end from the background state.
In the same way that mixtures could provide probabilistic estimates of class assignments rather than only hard labels, hidden Markov models can do the same. These estimates are the posterior probabilities of belonging to each of the hidden states given the observation, but also given the rest of the sequence.
End of explanation
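# The posterior table can be collapsed back into hard labels by taking the
# per-position argmax -- a small sketch of the maximum-a-posteriori decoding
# described above.
posteriors = model.predict_proba(seq)
map_predictions = numpy.argmax(posteriors, axis=1)
print("map pred: {}".format(''.join(map(str, map_predictions))))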
trans, ems = model.forward_backward(seq)
print(trans)
Explanation: We can see here the transition from the first non-island region to the middle island region, with high probabilities in one column turning into high probabilities in the other column. The predict method is just taking the most likely element, the maximum-a-posteriori estimate.
In addition to using the forward-backward algorithm to just calculate posterior probabilities for each observation, we can count the number of transitions that are predicted to occur between the hidden states.
End of explanation
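# If per-observation rates are more convenient than raw expected counts, the
# table can be normalized, for example by row sums (illustration only; empty
# rows are guarded against to avoid division by zero).
row_sums = trans.sum(axis=1, keepdims=True)
print(trans / numpy.where(row_sums == 0, 1, row_sums))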
model = HiddenMarkovModel( "Global Alignment")
# Define the distribution for insertions
i_d = DiscreteDistribution( { 'A': 0.25, 'C': 0.25, 'G': 0.25, 'T': 0.25 } )
# Create the insert states
i0 = State( i_d, name="I0" )
i1 = State( i_d, name="I1" )
i2 = State( i_d, name="I2" )
i3 = State( i_d, name="I3" )
# Create the match states
m1 = State( DiscreteDistribution({ "A": 0.95, 'C': 0.01, 'G': 0.01, 'T': 0.02 }) , name="M1" )
m2 = State( DiscreteDistribution({ "A": 0.003, 'C': 0.99, 'G': 0.003, 'T': 0.004 }) , name="M2" )
m3 = State( DiscreteDistribution({ "A": 0.01, 'C': 0.01, 'G': 0.01, 'T': 0.97 }) , name="M3" )
# Create the delete states
d1 = State( None, name="D1" )
d2 = State( None, name="D2" )
d3 = State( None, name="D3" )
# Add all the states to the model
model.add_states( [i0, i1, i2, i3, m1, m2, m3, d1, d2, d3 ] )
# Create transitions from match states
model.add_transition( model.start, m1, 0.9 )
model.add_transition( model.start, i0, 0.1 )
model.add_transition( m1, m2, 0.9 )
model.add_transition( m1, i1, 0.05 )
model.add_transition( m1, d2, 0.05 )
model.add_transition( m2, m3, 0.9 )
model.add_transition( m2, i2, 0.05 )
model.add_transition( m2, d3, 0.05 )
model.add_transition( m3, model.end, 0.9 )
model.add_transition( m3, i3, 0.1 )
# Create transitions from insert states
model.add_transition( i0, i0, 0.70 )
model.add_transition( i0, d1, 0.15 )
model.add_transition( i0, m1, 0.15 )
model.add_transition( i1, i1, 0.70 )
model.add_transition( i1, d2, 0.15 )
model.add_transition( i1, m2, 0.15 )
model.add_transition( i2, i2, 0.70 )
model.add_transition( i2, d3, 0.15 )
model.add_transition( i2, m3, 0.15 )
model.add_transition( i3, i3, 0.85 )
model.add_transition( i3, model.end, 0.15 )
# Create transitions from delete states
model.add_transition( d1, d2, 0.15 )
model.add_transition( d1, i1, 0.15 )
model.add_transition( d1, m2, 0.70 )
model.add_transition( d2, d3, 0.15 )
model.add_transition( d2, i2, 0.15 )
model.add_transition( d2, m3, 0.70 )
model.add_transition( d3, i3, 0.30 )
model.add_transition( d3, model.end, 0.70 )
# Call bake to finalize the structure of the model.
model.bake()
Explanation: This is the transition table, which has the soft count of the number of transitions across an edge in the model given a single sequence. It is a square matrix of size equal to the number of states (including start and end state), with number of transitions from (row_id) to (column_id). This is exemplified by the 1.0 in the first row, indicating that there is one transition from background state to the end state, as that's the only way to reach the end state. However, the third (or fourth, depending on ordering) row is the transitions from the start state, and it only slightly favors the background state. These counts are not normalized to the length of the input sequence, but can easily be done so by dividing by row sums, column sums, or entire table sums, depending on your application.
A possible reason not to normalize is if you want to run several sequences through, add up their tables, and only normalize at the end; extracting domain knowledge from these counts is extremely useful in practice. For example, we can see that there is an expectation of ~2.9 transitions from CG island to background, and ~2.4 from background to CG island. This could be used to infer that there are ~2-3 edges, which makes sense if you consider that the start and end of the sequence seem like they might be part of the CG island states except for the strict transition probabilities used (look at the first few rows of the emission table above.)
Sequence Alignment Example
Let's move on to a more complicated structure, that of a profile HMM. A profile HMM is used to align a sequence to a reference 'profile', where the reference profile can either be a single sequence, or an alignment of many sequences (such as a reference genome). In essence, this profile has a 'match' state for every position in the reference profile, an 'insert' state, and a 'delete' state. The insert state allows the external sequence to have an insertion into the sequence without throwing off the entire alignment, such as the following:
ACCG : Sequence <br>
|| | <br>
AC-G : Reference
or a deletion, which is the opposite:
A-G : Sequence <br>
| | <br>
ACG : Reference
The bars in the middle refer to a perfect match, whereas the lack of a bar means either a deletion/insertion, or a mismatch. A mismatch is where two positions are aligned together, but do not match. This models the biological phenomena of mutation, where one nucleotide can convert to another over time. It is usually more likely in biological sequences that this type of mutation occurs than that the nucleotide was deleted from the sequence (shifting all nucleotides over by one) and then another was inserted at the exact location (moving all nucleotides over again). Since we are using a probabilistic model, we get to define these probabilities through the use of distributions! If we want to model mismatches, we can just set our 'match' state to have an appropriate distribution with non-zero probabilities over mismatches.
Let's now create a three nucleotide profile HMM, which models the sequence 'ACT'. We will fuzz this a little bit in the match states, pretending to have some prior information about what mutations occur at each position. If you don't have any information, setting a uniform, small value over the other values is usually okay.
End of explanation
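# Since pomegranate reorders states internally after a topological sort, it can
# help to print the state names once so that the Viterbi paths below are easier
# to read. Purely an inspection aid.
print([state.name for state in model.states])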
for sequence in map( list, ('ACT', 'GGC', 'GAT', 'ACC') ):
logp, path = model.viterbi( sequence )
print("Sequence: '{}' -- Log Probability: {} -- Path: {}".format(
''.join( sequence ), logp, " ".join( state.name for idx, state in path[1:-1] ) ))
Explanation: Now let's try to align some sequences to it and see what happens!
End of explanation
for sequence in map( list, ('A', 'GA', 'AC', 'AT', 'ATCC', 'ACGTG', 'ATTT', 'TACCCTC', 'TGTCAACACT') ):
logp, path = model.viterbi( sequence )
print("Sequence: '{}' -- Log Probability: {} -- Path: {}".format(
''.join( sequence ), logp, " ".join( state.name for idx, state in path[1:-1] ) ))
Explanation: The first and last sequences are entirely matches, meaning that it thinks the most likely alignment between the profile ACT and ACT is A-A, C-C, and T-T, which makes sense, and the most likely alignment between ACT and ACC is A-A, C-C, and T-C, which includes a mismatch. Essentially, it's more likely that there's a T-C mismatch at the end than that there was a deletion of a T at the end of the sequence, and a separate insertion of a C.
The two middle sequences don't match very well, as expected! G's are not very likely in this profile at all. It predicts that the two G's are inserts, and that the C matches the C in the profile, before hitting the delete state because it can't emit a T. The third sequence thinks that the G is an insert, as expected, and then aligns the A and T in the sequence to the A and T in the master sequence, missing the middle C in the profile.
By using deletes, we can handle other sequences which are shorter than three characters. Let's look at some more sequences of different lengths.
End of explanation
def path_to_alignment( x, y, path ):
    """This function will take in two sequences, and the ML path which is their alignment,
    and insert dashes appropriately to make them appear aligned. This consists only of
    adding a dash to the model sequence for every insert in the path appropriately, and
    a dash in the observed sequence for every delete in the path appropriately.
    """
for i, (index, state) in enumerate( path[1:-1] ):
name = state.name
if name.startswith( 'D' ):
y = y[:i] + '-' + y[i:]
elif name.startswith( 'I' ):
x = x[:i] + '-' + x[i:]
return x, y
for sequence in map( list, ('A', 'GA', 'AC', 'AT', 'ATCC', 'ACGTG', 'ATTT', 'TACCCTC', 'TGTCAACACT') ):
logp, path = model.viterbi( sequence )
x, y = path_to_alignment( 'ACT', ''.join(sequence), path )
print("Sequence: {}, Log Probability: {}".format( ''.join(sequence), logp ))
print("{}\n{}".format( x, y ))
print()
Explanation: Again, more of the same as expected. You'll notice most of the use of insertion states is at I0, because most of the insertions are at the beginning of the sequence. It's more probable to simply stay in I0 at the beginning instead of going from I0 to D1 to I1, or going to another insert state along there. You'll see other insert states used when insertions occur in other places in the sequence, like 'ATTT' and 'ACGTG'.
Now that we have the path, we need to convert it into an alignment, which is significantly more informative to look at.
End of explanation
d = NormalDistribution( 5, 2 )
s1 = State( d, name="Tied1" )
s2 = State( d, name="Tied2" )
s3 = State( NormalDistribution( 5, 2 ), name="NotTied1" )
s4 = State( NormalDistribution( 5, 2 ), name="NotTied2" )
Explanation: Training Hidden Markov Models
There are two main algorithms for training hidden Markov models-- Baum Welch (structured version of Expectation Maximization), and Viterbi training. Since we don't start off with labels on the data, these are both unsupervised training algorithms. In order to assign labels, Baum Welch uses EM to assign soft labels (weights in this case) to each point belonging to each state, and then uses weighted MLE estimates to update the distributions. Viterbi assigns hard labels to each observation using the Viterbi algorithm, and then updates the distributions based on these hard labels.
pomegranate is extremely well featured when it comes to regularization methods for training, supporting tied emissions and edges, edge and emission inertia, freezing nodes or edges, edge pseudocounts, and multithreaded training. Let's look at some examples of the following:
Tied Emissions
Sometimes we want to say that multiple states model the same phenomena, but are simply at different points in the graph because we are utilizing complicated edge structure. An example is in the example of the global alignment HMM we saw. All insert states represent the same phenomena, which is nature randomly inserting a nucleotide, and this probability should be the same regardless of position. However, we can't simply have a single insert state, or we'd be allowed to transition from any match state to any other match state.
You can tie emissions together simply by passing the same distribution object to multiple states. That's it.
End of explanation
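# A quick way to confirm which emissions are tied: tied states reference the
# very same distribution object, while untied states do not.
print(s1.distribution is s2.distribution)  # True  -> tied
print(s3.distribution is s4.distribution)  # False -> not tied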
model = HiddenMarkovModel()
model.add_states( [s1, s2] )
model.add_transition( model.start, s1, 0.5, group='a' )
model.add_transition( model.start, s2, 0.5, group='b' )
model.add_transition( s1, s2, 0.5, group='a' )
model.add_transition( s2, s1, 0.5, group='b' )
model.bake()
Explanation: You have now indicated that these two states are tied, and when training, the weights of all points going to s2 will be added to the weights of all points going to s1 when updating d. As a side note, this is implemented in a computationally efficient manner such that d will only be updated once, not twice (but giving the same result). s3 and s4 are not tied together, because while they have the same distribution, it is not the same python object.
Tied Edges
Edges can be tied together for the same reason. If you have a modular structure to your HMM, perhaps you believe this repeating structure doesn't (or shouldn't) have a position specific edge structure. You can do this simply by adding a group when you add transitions.
End of explanation
model.fit( [[5, 2, 3, 4], [5, 7, 2, 3, 5]], distribution_inertia=0.3, edge_inertia=0.25 )
Explanation: The above model doesn't necessarily make sense, but it shows how simple it is to tie edges as well. You can go ahead and train normally from this point, without needing to change any code.
Inertia
The next options are inertia on edges or on distributions. This simply means that you update your parameters as (previous_parameter * inertia) + (new_parameter * (1-inertia) ). It is a way to prevent your updates from overfitting immediately. You can specify this in the train function using either edge_inertia or distribution_inertia. These default to 0, with 1 being the maximum, meaning that you don't update based on new evidence, the same as freezing a distribution or the edges.
End of explanation
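# The inertia update rule written out numerically (pure illustration, no
# pomegranate API involved): with inertia=0.25 a parameter only moves 75% of
# the way towards its newly estimated value.
old_parameter, new_estimate, inertia = 5.0, 3.0, 0.25
print(old_parameter * inertia + new_estimate * (1.0 - inertia))  # 3.5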
s1 = State( NormalDistribution( 3, 1 ), name="s1" )
s2 = State( NormalDistribution( 6, 2 ), name="s2" )
model = HiddenMarkovModel()
model.add_states( [s1, s2] )
model.add_transition( model.start, s1, 0.5, pseudocount=4.2 )
model.add_transition( model.start, s2, 0.5, pseudocount=1.3 )
model.add_transition( s1, s2, 0.5, pseudocount=5.2 )
model.add_transition( s2, s1, 0.5, pseudocount=0.9 )
model.bake()
model.fit( [[5, 2, 3, 4], [5, 7, 2, 3, 5]], max_iterations=5, use_pseudocount=True )
Explanation: Pseudocounts
Another way of regularizing your model is to add pseudocounts to your edges (which have non-zero probabilities). When updating your edges in the future, you add this pseudocount to the count of transitions across that edge in the future. This gives a more Bayesian estimate of the edge probability, and is useful if you have a large model and don't expect to cross most of the edges with your training data. An example might be a complicated profile HMM, where you don't expect to see deletes or inserts at all in your training data, but don't want to change from the default values.
In pomegranate, pseudocounts default to the initial probabilities, so that if you don't see data, the edge values simply aren't updated. You can define edge-specific pseudocounts when you define the transition. When you train, you must define use_pseudocount=True.
End of explanation
s1 = State( NormalDistribution( 3, 1 ), name="s1" )
s2 = State( NormalDistribution( 6, 2 ), name="s2" )
model = HiddenMarkovModel()
model.add_states( [s1, s2] )
model.add_transition( model.start, s1, 0.5 )
model.add_transition( model.start, s2, 0.5 )
model.add_transition( s1, s2, 0.5 )
model.add_transition( s2, s1, 0.5 )
model.bake()
model.fit( [[5, 2, 3, 4], [5, 7, 2, 3, 5]], max_iterations=5, transition_pseudocount=20, use_pseudocount=True )
Explanation: The other way is to put a blanket pseudocount on all edges.
End of explanation
s1 = State( NormalDistribution( 3, 1 ), name="s1" )
s2 = State( NormalDistribution( 6, 2 ), name="s2" )
model = HiddenMarkovModel()
model.add_states( [s1, s2] )
model.add_transition( model.start, s1, 0.5 )
model.add_transition( model.start, s2, 0.5 )
model.add_transition( s1, s2, 0.5 )
model.add_transition( s2, s1, 0.5 )
model.bake()
model.fit( [[5, 2, 3, 4, 7, 3, 6, 3, 5, 2, 4], [5, 7, 2, 3, 5, 1, 3, 5, 6, 2]], max_iterations=5 )
s1 = State( NormalDistribution( 3, 1 ), name="s1" )
s2 = State( NormalDistribution( 6, 2 ), name="s2" )
model = HiddenMarkovModel()
model.add_states( [s1, s2] )
model.add_transition( model.start, s1, 0.5 )
model.add_transition( model.start, s2, 0.5 )
model.add_transition( s1, s2, 0.5 )
model.add_transition( s2, s1, 0.5 )
model.bake()
model.fit( [[5, 2, 3, 4, 7, 3, 6, 3, 5, 2, 4], [5, 7, 2, 3, 5, 1, 3, 5, 6, 2]], max_iterations=5, n_jobs=4 )
Explanation: We can see that there isn't as much of an improvement. This is part of regularization, though. We sacrifice fitting the data exactly in order for our model to generalize better to future data. The majority of the training improvement is likely coming from the emissions better fitting the data, though.
Multithreaded Training
Since pomegranate is implemented in cython, the majority of functions are written with the GIL released. A benefit of doing this is that we can use multithreading in order to make some computationally intensive tasks take less time. However, a downside is that python doesn't play nicely with multithreading, and so there are some cases where training using multithreading can make your model training take significantly longer. I investigate this in an early multithreading pull request <a href="https://github.com/jmschrei/pomegranate/pull/30">here</a>. Things have improved since then, but the gist is that if you have a small model (less than 15 states), it may be detrimental, but the larger your model is, the more it scales towards getting a speed improvement exactly the number of threads you use. You can specify multithreading using the n_jobs keyword. All structures in pomegranate are thread safe, so you don't need to worry about race conditions.
End of explanation
seq = list('CGACTACTGACTACTCGCCGACGCGACTGCCGTCTATACTGCGCATACGGC')
d1 = DiscreteDistribution({'A': 0.25, 'C': 0.25, 'G': 0.25, 'T': 0.25})
d2 = DiscreteDistribution({'A': 0.10, 'C': 0.40, 'G': 0.40, 'T': 0.10})
s1 = State( d1, name='background' )
s2 = State( d2, name='CG island' )
hmm = HiddenMarkovModel()
hmm.add_states(s1, s2)
hmm.add_transition( hmm.start, s1, 0.5 )
hmm.add_transition( hmm.start, s2, 0.5 )
hmm.add_transition( s1, s1, 0.5 )
hmm.add_transition( s1, s2, 0.5 )
hmm.add_transition( s2, s1, 0.5 )
hmm.add_transition( s2, s2, 0.5 )
hmm.bake()
print(hmm.to_json())
seq = list('CGACTACTGACTACTCGCCGACGCGACTGCCGTCTATACTGCGCATACGGC')
print(hmm.log_probability( seq ))
hmm_2 = HiddenMarkovModel.from_json( hmm.to_json() )
print(hmm_2.log_probability( seq ))
Explanation: Serialization
Hidden Markov models, like the other models in pomegranate, support serialization to JSONs using to_json() and from_json( json ). This is useful if you want to train a model on large amounts of data, taking a significant amount of time, and then use this model in the future without having to repeat this computationally intensive step (sounds familiar by now). Let's look at the original CG island model, since it's significantly smaller.
End of explanation |
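# The JSON string can also be written to disk and loaded back later -- a
# minimal sketch; the file name here is arbitrary.
with open('cg_island_hmm.json', 'w') as outfile:
    outfile.write(hmm.to_json())
with open('cg_island_hmm.json') as infile:
    hmm_3 = HiddenMarkovModel.from_json(infile.read())
print(hmm_3.log_probability(seq))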
15,610 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Featurizing Ligand-Protein trajectories
Trajectories containing a protein and ligand can now be featurized in several ways. A reference frame with at least two chains, one of which is the protein and one of which is the ligand, is required for all featurizations. These chains can be manually specified by their indexes or MSMBuilder can guess which trajectory is the protein (by choosing the longest CA-containing chain) and which is the ligand (by choosing the longest chain containing up to 200 atoms; tie goes to the lower index).
Here we explore Ligand-Protein contact featurizations and their binary transforms as well as RMSD calculations with customizable alignment and calcuation indices.
Generate a toy trajectory
Step1: Identify residues characterizing a binding pocket with respect to a reference structure
Step2: Create a histogram of instances each residue is within a certain cutoff distance of the ligand
Step3: Using the LigandRMSDFeaturizer
Compute the RMSD of each frame in the trajectory to each frame in a reference trajectory for any set of alignment indices and any set of indices to use for the RMSD calculation. By default, structures are aligned by the protein atoms and the RMSD is calculated for ligand atoms.
Step4: Specific indices of the ligand and protein can be specified for alignment and calculation. If no reference trajectory is provided, the reference frame is used.
Step5: Custom indices can also be provided. For example, here we have aligned by the protein (this is the default option but has been enumerated here for clarity) but calculated the RMSD based on all atoms in the reference frame.
Step6: Using all atoms for both aligning and calculating RMSD is equivalent to mdtraj's implementation of RMSD calculations. | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import mdtraj as md
top = md.Topology()
c = top.add_chain()
r0 = top.add_residue('HET', c)
r1 = top.add_residue('HET', c)
r2 = top.add_residue('HET', c)
r3 = top.add_residue('HET', c)
r4 = top.add_residue('HET', c)
r5 = top.add_residue('HET', c)
r6 = top.add_residue('HET', c)
r7 = top.add_residue('HET', c)
r8 = top.add_residue('HET', c)
r9 = top.add_residue('HET', c)
residues = [r0,r1,r2,r3,r4,r5,r6,r7,r8,r9]
c_ligand = top.add_chain()
r_ligand = top.add_residue('HET', c_ligand)
for _ in range(10):
for _, res in enumerate(residues):
top.add_atom('CA', md.element.carbon, res)
for _ in range(10):
top.add_atom('CA', md.element.carbon, r_ligand)
traj = md.Trajectory(xyz=np.random.uniform(size=(100, 110, 3)),
topology=top,
time=np.arange(100))
ref = md.Trajectory(xyz=np.random.uniform(size=(1, 110, 3)),
topology=top,
time=np.arange(1))
Explanation: Featurizing Ligand-Protein trajectories
Trajectories containing a protein and ligand can now be featurized in several ways. A reference frame with at least two chains, one of which is the protein and one of which is the ligand, is required for all featurizations. These chains can be manually specified by their indexes or MSMBuilder can guess which trajectory is the protein (by choosing the longest CA-containing chain) and which is the ligand (by choosing the longest chain containing up to 200 atoms; tie goes to the lower index).
Here we explore Ligand-Protein contact featurizations and their binary transforms as well as RMSD calculations with customizable alignment and calculation indices.
Generate a toy trajectory
End of explanation
from msmbuilder.featurizer import LigandContactFeaturizer
from msmbuilder.featurizer import BinaryLigandContactFeaturizer
feat = LigandContactFeaturizer(reference_frame=ref, binding_pocket=0.1)
df = pd.DataFrame(feat.describe_features(ref))
df
pocket_contacts = feat.transform(traj)
print("Number of frames is {}".format(len(pocket_contacts)))
print("Number of features is {}".format(pocket_contacts[0].shape[1]))
Explanation: Identify residues characterizing a binding pocket with respect to a reference structure
End of explanation
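# The transform above returns one array per frame of the input trajectory;
# stacking them with numpy makes it easy to summarize the features, e.g. the
# mean distance of each contact over all frames (illustration only).
all_contacts = np.concatenate(pocket_contacts)
print(all_contacts.shape)
print(all_contacts.mean(axis=0))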
feat = BinaryLigandContactFeaturizer(reference_frame=ref, cutoff=0.1)
pocket_bins = feat.transform(traj)
print("Number of residues is {}".format(pocket_bins[0].shape[1]))
count_list = []
for res in range(pocket_bins[0].shape[1]):
count_list.append(sum([pocket_bins[i][0][res]
for i in range(len(pocket_bins))]))
fig = plt.figure(figsize=(6,5))
plt.title('Instances within {} nm cutoff'.format(feat.cutoff),fontsize=18)
plt.bar(range(pocket_bins[0].shape[1]),count_list)
plt.ylabel('Counts', fontsize=16)
plt.xlabel('Residue index', fontsize=16)
plt.xticks(np.linspace(0.4,10.4,pocket_bins[0].shape[1]+1),range(pocket_bins[0].shape[1]))
plt.tight_layout()
Explanation: Create a histogram of instances each residue is within a certain cutoff distance of the ligand
End of explanation
from msmbuilder.featurizer import LigandRMSDFeaturizer
feat = LigandRMSDFeaturizer(reference_frame=ref, reference_traj=traj[0:2])
rmsds = feat.transform([traj])
print(rmsds[0][:2])
Explanation: Using the LigandRMSDFeaturizer
Compute the RMSD of each frame in the trajectory to each frame in a reference trajectory for any set of alignment indices and any set of indices to use for the RMSD calculation. By default, structures are aligned by the protein atoms and the RMSD is calculated for ligand atoms.
End of explanation
feat_indices = LigandRMSDFeaturizer(reference_frame=ref, align_indices=range(50),
calculate_indices=[105])
rmsds_indices = feat_indices.transform([traj])
print(rmsds_indices[0][:2])
Explanation: Specific indices of the ligand and protein can be specified for alignment and calculation. If no reference trajectory is provided, the reference frame is used.
End of explanation
feat_custom = LigandRMSDFeaturizer(reference_frame=ref, align_by='protein',
calculate_for='custom', calculate_indices=range(ref.n_atoms))
rmsds_custom = feat_custom.transform([traj])
print(rmsds_custom[0][:2])
Explanation: Custom indices can also be provided. For example, here we have aligned by the protein (this is the default option but has been enumerated here for clarity) but calculated the RMSD based on all atoms in the reference frame.
End of explanation
feat_mdtraj = LigandRMSDFeaturizer(reference_frame=ref, align_by='custom',
align_indices=range(ref.n_atoms), calculate_for='custom',
calculate_indices=range(ref.n_atoms))
rmsds_mdtraj = feat_mdtraj.transform([traj])
real_mdtraj = md.rmsd(traj, ref, frame=0)
print("multichain implementation:\t {}, ...".format((rmsds_mdtraj[0][0][0],
rmsds_mdtraj[0][1][0])))
print("mdtraj implementation:\t\t {}, ...".format((real_mdtraj[0], real_mdtraj[1])))
Explanation: Using all atoms for both aligning and calculating RMSD is equivalent to mdtraj's implementation of RMSD calculations.
End of explanation |
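# A quick numerical check of the equivalence claimed above (illustration): if
# the two implementations really agree, this should print True to within
# floating point tolerance.
print(np.allclose(rmsds_mdtraj[0][:, 0], real_mdtraj, atol=1e-5))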
15,611 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The issues associated with validation and
cross-validation are some of the most important
aspects of the practice of machine learning. Selecting the optimal model
for your data is vital, and is a piece of the problem that is not often
appreciated by machine learning practitioners.
Of core importance is the following question
Step1: Learning Curves
What the right model for a dataset is depends critically on how much data we have. More data allows us to be more confident about building a complex model. Lets built some intuition on why that is. Look at the following datasets
Step2: They all come from the same underlying process. But if you were asked to make a prediction, you would be more likely to draw a straight line for the left-most one, as there are only very few datapoints, and no real rule is apparent. For the dataset in the middle, some structure is recognizable, though the exact shape of the true function is maybe not obvious. With even more data on the right hand side, you would probably be very comfortable with drawing a curved line with a lot of certainty.
A great way to explore how a model fit evolves with different dataset sizes are learning curves.
A learning curve plots the validation error for a given model against different training set sizes.
But first, take a moment to think about what we're going to see
Step3: You can see that for the model with kernel = linear, the validation score doesn't really improve as more data is given.
Notice that the validation error generally improves with a growing training set,
while the training error generally gets worse with a growing training set. From
this we can infer that as the training size increases, they will converge to a single
value.
From the above discussion, we know that kernel = linear
underfits the data. This is indicated by the fact that both the
training and validation errors are very poor. When confronted with this type of learning curve,
we can expect that adding more training data will not help matters | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.pipeline import Pipeline
from sklearn.svm import SVR
from sklearn import cross_validation
rng = np.random.RandomState(42)
n_samples = 200
kernels = ['linear', 'poly', 'rbf']
true_fun = lambda X: X ** 3
X = np.sort(5 * (rng.rand(n_samples) - .5))
y = true_fun(X) + .01 * rng.randn(n_samples)
plt.figure(figsize=(14, 5))
for i in range(len(kernels)):
ax = plt.subplot(1, len(kernels), i + 1)
plt.setp(ax, xticks=(), yticks=())
model = SVR(kernel=kernels[i], C=5)
model.fit(X[:, np.newaxis], y)
# Evaluate the models using crossvalidation
scores = cross_validation.cross_val_score(model,
X[:, np.newaxis], y, scoring="mean_squared_error", cv=10)
X_test = np.linspace(3 * -.5, 3 * .5, 100)
plt.plot(X_test, model.predict(X_test[:, np.newaxis]), label="Model")
plt.plot(X_test, true_fun(X_test), label="True function")
plt.scatter(X, y, label="Samples")
plt.xlabel("x")
plt.ylabel("y")
plt.xlim((-3 * .5, 3 * .5))
plt.ylim((-1, 1))
plt.legend(loc="best")
plt.title("Kernel {}\nMSE = {:.2e}(+/- {:.2e})".format(
kernels[i], -scores.mean(), scores.std()))
plt.show()
Explanation: The issues associated with validation and
cross-validation are some of the most important
aspects of the practice of machine learning. Selecting the optimal model
for your data is vital, and is a piece of the problem that is not often
appreciated by machine learning practitioners.
Of core importance is the following question:
If our estimator is underperforming, how should we move forward?
Use simpler or more complicated model?
Add more features to each observed data point?
Add more training samples?
The answer is often counter-intuitive. In particular, sometimes using a
more complicated model will give worse results. Also, sometimes adding
training data will not improve your results. The ability to determine
what steps will improve your model is what separates the successful machine
learning practitioners from the unsuccessful.
Learning Curves and Validation Curves
One way to address this issue is to use what are often called Learning Curves.
Given a particular dataset and a model we'd like to fit (e.g. using feature creation and linear regression), we'd
like to tune our value of the hyperparameter kernel to give us the best fit. We can visualize the different regimes with the following plot, modified from the sklearn examples here
End of explanation
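# Learning curves vary the amount of training data; validation curves instead
# vary a hyperparameter at a fixed data size. A small sketch using the same-era
# sklearn API as above -- the C range chosen here is arbitrary.
from sklearn.learning_curve import validation_curve
param_range = np.logspace(-2, 2, 5)
vc_train_scores, vc_test_scores = validation_curve(
    SVR(kernel='rbf'), X[:, np.newaxis], y, param_name="C",
    param_range=param_range, cv=10, scoring="mean_squared_error")
print(-vc_train_scores.mean(axis=1))
print(-vc_test_scores.mean(axis=1))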
import numpy as np
import matplotlib.pyplot as plt
from sklearn import cross_validation
rng = np.random.RandomState(0)
n_samples = 200
true_fun = lambda X: X ** 3
X = np.sort(5 * (rng.rand(n_samples) - .5))
y = true_fun(X) + .01 * rng.randn(n_samples)
X = X[:, None]
y = y
f, axarr = plt.subplots(1, 3)
axarr[0].scatter(X[::20], y[::20])
axarr[0].set_xlim((-3 * .5, 3 * .5))
axarr[0].set_ylim((-1, 1))
axarr[1].scatter(X[::10], y[::10])
axarr[1].set_xlim((-3 * .5, 3 * .5))
axarr[1].set_ylim((-1, 1))
axarr[2].scatter(X, y)
axarr[2].set_xlim((-3 * .5, 3 * .5))
axarr[2].set_ylim((-1, 1))
plt.show()
Explanation: Learning Curves
What the right model for a dataset is depends critically on how much data we have. More data allows us to be more confident about building a complex model. Let's build some intuition on why that is. Look at the following datasets:
End of explanation
from sklearn.learning_curve import learning_curve
from sklearn.svm import SVR
# This is actually negative MSE!
training_sizes, train_scores, test_scores = learning_curve(SVR(kernel='linear'), X, y, cv=10,
scoring="mean_squared_error",
train_sizes=[.6, .7, .8, .9, 1.])
# Use the negative because we want to maximize score
print(train_scores.mean(axis=1))
plt.plot(training_sizes, train_scores.mean(axis=1), label="training scores")
plt.plot(training_sizes, test_scores.mean(axis=1), label="test scores")
#plt.ylim((0, 50))
plt.legend(loc='best')
Explanation: They all come from the same underlying process. But if you were asked to make a prediction, you would be more likely to draw a straight line for the left-most one, as there are only very few datapoints, and no real rule is apparent. For the dataset in the middle, some structure is recognizable, though the exact shape of the true function is maybe not obvious. With even more data on the right hand side, you would probably be very comfortable with drawing a curved line with a lot of certainty.
A great way to explore how a model fit evolves with different dataset sizes is to use learning curves.
A learning curve plots the validation error for a given model against different training set sizes.
But first, take a moment to think about what we're going to see:
Questions:
As the number of training samples are increased, what do you expect to see for the training error? For the validation error?
Would you expect the training error to be higher or lower than the validation error? Would you ever expect this to change?
We can run the following code to plot the learning curve for a kernel = linear model:
End of explanation
from sklearn.learning_curve import learning_curve
from sklearn.svm import SVR
training_sizes, train_scores, test_scores = learning_curve(SVR(kernel='rbf'), X, y, cv=10,
scoring="mean_squared_error",
train_sizes=[.6, .7, .8, .9, 1.])
# Use the negative because we want to minimize squared error
plt.plot(training_sizes, train_scores.mean(axis=1), label="training scores")
plt.plot(training_sizes, test_scores.mean(axis=1), label="test scores")
plt.legend(loc='best')
Explanation: You can see that for the model with kernel = linear, the validation score doesn't really improve as more data is given.
Notice that the validation error generally improves with a growing training set,
while the training error generally gets worse with a growing training set. From
this we can infer that as the training size increases, they will converge to a single
value.
From the above discussion, we know that kernel = linear
underfits the data. This is indicated by the fact that both the
training and validation errors are very poor. When confronted with this type of learning curve,
we can expect that adding more training data will not help matters: both
lines will converge to a relatively high error.
When the learning curves have converged to a poor error, we have an underfitting model.
An underfitting model can be improved by:
Using a more sophisticated model (i.e. in this case, increase complexity of the kernel parameter)
Gather more features for each sample.
Decrease regularization in a regularized model.
An underfitting model cannot be improved, however, by increasing the number of training
samples (do you see why?)
Now let's look at an overfit model:
End of explanation |
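# A convenient diagnostic (sketch): the gap between training and validation
# scores from the learning-curve run above. A large, persistent gap suggests
# high variance (overfitting); a small gap at poor scores suggests high bias.
print(train_scores.mean(axis=1) - test_scores.mean(axis=1))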
15,612 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
In many simulations particles continuously enter the simulation domain while other particles leave the simulation domain. This example illustrates two key concepts in the coldatom library that can be used to deal with such situations
Step2: Next we define our source
Step3: Sources should derive from coldatoms.Source. In a time step of duration $dt$ we produce on average $dt \dot{n}$ particles. The actual number of particles produced changes each time we generate particles. The distribution of particles is a Poissonian with mean $dt \dot{n}$.
When we are then asked to actually generate the particles and insert them into the ensemble we uniformly distribute them over a circular disk of radius $R$ in the $x--y$ plane. The particles are moving in the positive $z$ direction. The starting position is uniformly distributed along $z$ such that there are no gaps and bunches of the particles. They should emerge from the oven as a uniform stream.
Then we produce the velocity distribution. In OvenSource we specify the velocity distribution kinematically, i.e. we describe the velocity distribution directly. This is because a more physical distribution (e.g in terms of temperature) requires more parameters (e.g. particle mass and Boltzmann constant). It is easy for the calling code to determine
$\bar{v}$ and $\Delta v$ from a physical model.
Note that due to the beam's divergence the diameter of the disk from which particles are emitted is slightly larger than $R$. This is an artifact our simplistic treatment of the initial positions of the particle. Some particles start a distance before the aperture and in general they have a non-zero transverse velocity. A particle starting out very close to the edge of the aperture but with an outward transverse velocity component will pass through the $z=0$ plane outside of the disk with radius $R$.
So now lets create one of these sources
Step4: In $1 \mu \rm{s}$ the source emits on average 1000 particles
Step5: To actually generate particles we first need an ensemble into which the particles are to be inserted.
Step6: Now our ensemble contains particles
Step7: Here is a snapshot of the positions of the particles we just created, looking into the beam
Step8: And here is a view from the side
Step9: The atomic density distribution has this thin pancake shape because at a velocity of 100m/s they only travel 0.1mm in $dt$.
The density distribution of the particles is Gaussian in the transverse direction (with mean $0$) and in the direction along the beam (with mean $\mathbf{v}=(0,0,100\rm{m/s})^T$
Step10: Now, if we want to generate a particle beam we cannot simply keep generating particles. The particles would simply be generated on top of one another. Their density would keep increasing but they wouldn't form a beam.
To produce a beam we have to let the particles move. For that purpose we can use the drift-kick particle push and simply interleave it with the production of particles. Here is the resulting evolution of the particle beam during the first $20 \mu\rm{s}$.
Step13: Sinks
Besides particle sources we also need a mechanism to remove particles from the simulation. Sometimes this is just a practical matter
Step14: To see how this works we'll consider a specific example. A circular aperture in the $y-z$ plane with radius 1. We'll have a look at the fate of three particles
Step15: And after the sink has been processed, the ensemble contains only two particles
Step16: A collimated atom beam
With our oven source and circular aperture we have the building blocks of a collimated beam experiment. All we need to do is combine source, sink, and particle push algorithm. We have already seen above that sources and particle push can be combined rather easily. We simply need to generate particles at each time step and then update the particle positions with the particle push.
Unfortunately, combining sinks with the particle push is not quite as straightforward in general. The fundamental reason for this is that sinks demand that particle motion be along a straight line. The velocity must not change during the time step. Granted, in our case this is what happens because we are neglecting all forces acting on the particles and they therefore move along straight lines. However, in general this is not the case. When there are forces acting on the particles our drift-kick integrator produces trajectories with a kink.
This means that the particle push itself has to be informed about the sinks. It can then process the sinks for each of the straight line segment making up the particle trajectory.
First we define the atom source representing the oven
Step17: Remember that our oven source is located at the origin, emitting particles from a circular disk in the $x-y$ plane moving in the positive $z$ direction. We place an aperture of diameter $1{\rm mm}$ at a distance of $0.5m$ downstream from the source. At this location the beam expands to about $17{\rm mm}$, so a significant fraction of the atoms get absorbed by the aperture.
Step18: We integrate the particle motion with a time step size of $dt=1.0 \times 10^{-4}{\rm s}$. During that time the particles travel $3{\rm cm}$ on average.
Step19: The following picture shows a side view of the beam | Python Code:
import coldatoms
import numpy as np
%matplotlib notebook
import matplotlib.pyplot as plt
Explanation: Introduction
In many simulations particles continuously enter the simulation domain while other particles leave the simulation domain. This example illustrates two key concepts in the coldatom library that can be used to deal with such situations: sources and sinks. We go over how to define a source and a sink and then we use these in conjunction with one another to simulate a collimated atomic beam generated by an oven.
Sources
In the coldatom library, a source needs to have two essential capabilities. It needs to be able to tell us how many particles it will generate the next time it is asked to produce particles. And then it needs to be able to generate the promised particles.
As an example, we consider a source that generates thermal atoms emitted from a circular aperture. First we include a few libraries:
End of explanation
class OvenSource(coldatoms.Source):
def __init__(self, R, divergence, n_dot, v_bar, delta_v):
        """Create an OvenSource.

        R -- Radius of circular aperture of oven.
        divergence -- Divergence angle of atoms emitted by the oven.
        n_dot -- Number of atoms emitted per second.
        v_bar -- Average velocity of the emitted atoms.
        delta_v -- Standard deviation of velocities.
        """
self.R = R
self.divergence = divergence
self.n_dot = n_dot
self.v_bar = v_bar
self.delta_v = delta_v
def num_ptcls_produced(self, dt):
n_bar = dt * self.n_dot
return np.random.poisson(n_bar)
def produce_ptcls(self, dt, start, end, ensemble):
for i in range(start, end):
# First we generate the positions
while True:
x = np.random.uniform(-1, 1)
y = np.random.uniform(-1, 1)
if (x*x + y*y) < 1:
break
x *= self.R
y *= self.R
z = np.random.uniform(-self.v_bar * dt, 0)
# Now generate the velocities
vz = np.random.normal(self.v_bar, self.delta_v)
vx = vz * np.random.normal(0, self.divergence)
vy = vz * np.random.normal(0, self.divergence)
ensemble.x[i, 0] = x
ensemble.x[i, 1] = y
ensemble.x[i, 2] = z
ensemble.v[i, 0] = vx
ensemble.v[i, 1] = vy
ensemble.v[i, 2] = vz
Explanation: Next we define our source:
End of explanation
src = OvenSource(R=1.0e-3, divergence=1.0e-2, n_dot=1.0e9, v_bar=100.0, delta_v=10.0)
Explanation: Sources should derive from coldatoms.Source. In a time step of duration $dt$ we produce on average $dt \dot{n}$ particles. The actual number of particles produced changes each time we generate particles. The distribution of particles is a Poissonian with mean $dt \dot{n}$.
When we are then asked to actually generate the particles and insert them into the ensemble we uniformly distribute them over a circular disk of radius $R$ in the $x--y$ plane. The particles are moving in the positive $z$ direction. The starting position is uniformly distributed along $z$ such that there are no gaps and bunches of the particles. They should emerge from the oven as a uniform stream.
Then we produce the velocity distribution. In OvenSource we specify the velocity distribution kinematically, i.e. we describe the velocity distribution directly. This is because a more physical distribution (e.g in terms of temperature) requires more parameters (e.g. particle mass and Boltzmann constant). It is easy for the calling code to determine
$\bar{v}$ and $\Delta v$ from a physical model.
Note that due to the beam's divergence the diameter of the disk from which particles are emitted is slightly larger than $R$. This is an artifact our simplistic treatment of the initial positions of the particle. Some particles start a distance before the aperture and in general they have a non-zero transverse velocity. A particle starting out very close to the edge of the aperture but with an outward transverse velocity component will pass through the $z=0$ plane outside of the disk with radius $R$.
So now let's create one of these sources:
End of explanation
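# The kinematic parameters above were chosen by hand. Starting from an oven
# temperature instead, rough values can be estimated from the Maxwell-Boltzmann
# distribution -- a sketch only, ignoring the difference between the in-oven and
# flux-weighted beam velocity distributions. The mass and temperature below are
# illustrative numbers, not values taken from this example.
k_B = 1.380649e-23            # Boltzmann constant in J/K
m = 87 * 1.66053906660e-27    # e.g. the mass of 87Rb in kg
T = 400.0                     # oven temperature in K
v_bar_estimate = np.sqrt(8.0 * k_B * T / (np.pi * m))
delta_v_estimate = np.sqrt(k_B * T / m)
print(v_bar_estimate, delta_v_estimate)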
src.num_ptcls_produced(1.0e-6)
Explanation: In $1 \mu \rm{s}$ the source emits on average 1000 particles:
End of explanation
ensemble = coldatoms.Ensemble(num_ptcls=0)
coldatoms.produce_ptcls(1.0e-6, ensemble, sources=[src])
Explanation: To actually generate particles we first need an ensemble into which the particles are to be inserted.
End of explanation
ensemble.num_ptcls
Explanation: Now our ensemble contains particles:
End of explanation
fig = plt.figure()
plt.plot(1.0e3*ensemble.x[:,0], 1.0e3*ensemble.x[:, 1], '.', markersize=3)
plt.xlim([-2, 2])
plt.ylim([-2, 2])
plt.xlabel(r'$x/\rm{mm}$')
plt.ylabel(r'$y/\rm{mm}$')
plt.axes().set_aspect(1)
fig.tight_layout()
Explanation: Here is a snapshot of the positions of the particles we just created, looking into the beam:
End of explanation
fig = plt.figure()
plt.plot(1.0e3*ensemble.x[:,2], 1.0e3*ensemble.x[:, 1], '.', markersize=3)
plt.xlim([-2, 2])
plt.ylim([-2, 2])
plt.xlabel(r'$z/\rm{mm}$')
plt.ylabel(r'$y/\rm{mm}$')
plt.axes().set_aspect(1)
fig.tight_layout()
Explanation: And here is a view from the side:
End of explanation
fig = plt.figure()
plt.plot(ensemble.v[:,0], ensemble.v[:, 1], '.', markersize=3)
vmax = 5
plt.xlim([-vmax, vmax])
plt.ylim([-vmax, vmax])
plt.xlabel(r'$v_x/\rm{ms^{-1}}$')
plt.ylabel(r'$v_y/\rm{ms^{-1}}$')
plt.axes().set_aspect(1)
fig.tight_layout()
fig = plt.figure()
plt.plot(ensemble.v[:,2], ensemble.v[:, 0], '.', markersize=3)
vmax = 5
plt.xlim([-0, 30*vmax])
plt.ylim([-vmax, vmax])
plt.xlabel(r'$v_z/\rm{ms^{-1}}$')
plt.ylabel(r'$v_x/\rm{ms^{-1}}$')
fig.tight_layout()
Explanation: The atomic density distribution has this thin pancake shape because at a velocity of 100m/s they only travel 0.1mm in $dt$.
The velocity distribution of the particles is Gaussian in the transverse directions (with mean $0$) and in the direction along the beam (with mean $\mathbf{v}=(0,0,100\,\rm{m/s})^T$):
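A quick sanity check of those statistics on the ensemble we just generated (the exact numbers vary from run to run):
print(ensemble.v.mean(axis=0))   # should be close to (0, 0, 100) m/s
print(ensemble.v[:, 2].std())    # should be close to delta_v = 10 m/s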
End of explanation
fig = plt.figure()
subplots = [plt.subplot(141), plt.subplot(142), plt.subplot(143), plt.subplot(144)]
dt = 5.0e-6
ensemble = coldatoms.Ensemble(num_ptcls=0)
for ax in subplots:
coldatoms.produce_ptcls(dt, ensemble, sources=[src])
ax.plot(1.0e3*ensemble.x[:,2], 1.0e3*ensemble.x[:, 1], marker='.', markersize=3)
ax.set_xlim([-2, 2])
ax.set_ylim([-2, 2])
coldatoms.drift_kick(dt, ensemble)
ax.set_xlabel(r'$z/\rm{mm}$')
Explanation: Now, if we want to generate a particle beam we cannot simply keep generating particles. The particles would simply be generated on top of one another. Their density would keep increasing but they wouldn't form a beam.
To produce a beam we have to let the particles move. For that purpose we can use the drift-kick particle push and simply interleave it with the production of particles. Here is the resulting evolution of the particle beam during the first $20 \mu\rm{s}$.
End of explanation
class SinkCircularAperture(coldatoms.Sink):
A sink representing a circular aperture.
The aperture is defined by the center of the aperture, the aperture's radius,
and the normal of the plane in which the aperture is situated.
def __init__(self, center, radius, normal):
Create a circular aperture sink.
center -- The center of the circular aperture.
radius -- Radius of the aperture.
normal -- Normal of the plane in which the aperture is cut.
self.center = np.copy(center)
self.radius = radius
self.normal = np.copy(normal)
def find_absorption_time(self, x, v, dt):
num_ptcls = x.shape[0]
taus = np.empty(num_ptcls)
for i in range(num_ptcls):
normal_velocity = self.normal.dot(v[i])
# Deal with particles that travel parallel to the plane in which
# the aperture lies.
if (normal_velocity == 0.0):
taus[i] = 2.0 * dt
else:
# First we compute the time at which the particle intersects
# the plane in which the aperture lies.
taus[i] = self.normal.dot(self.center - x[i]) / normal_velocity
intersection = x[i] + taus[i] * v[i]
distance_from_center = np.linalg.norm(intersection - self.center)
if distance_from_center < self.radius:
# Intersects within the aperture, so don't absorb.
taus[i] = 2.0 * dt
return taus
def absorb_particles(self, ensemble, dt, absorption_times, absorption_indices):
print("The following particles will be absorbed:")
print(absorption_indices)
Explanation: Sinks
Besides particle sources we also need a mechanism to remove particles from the simulation. Sometimes this is just a practical matter: some particles may have escaped a trapping potential and moved to a location where they do not participate in the dynamics of the ensemble in an interesting way. We will then want to remove them from the ensemble so that they don't slow down the simulation unnecessarily. We could imagine surrounding the simulation domain by a box that absorbs any particles hitting its boundary. This is particularly important if we have a source that continuously emits particles into the simulation. Without sinks it would be impossible to reach steady state.
In other cases the absorption is a more physically real process. For example, the atoms in an atom beam may hit an aperture that is used to collimate the beam.
Conceptually, sinks are represented by surfaces in the coldatoms library. When a particle crosses the surface, it will be removed from the simulations. To implement a sink, we derive from the Sink class. We need to implement two essential methods. find_absorption_time computes the time at which a particle starting at position $x$ and moving along a straight line with velocity $v$ for a time $dt$ will hit the absorbing surface. If the particle will not cross the surface (or if the sink will not absorb the particle for a different reason) the sink should return an absorption time outside the interval $[0, dt]$.
After the absorption time has been computed, coldatoms calls the sink's record_absorption method. This method call gives the sink a chance to process the absorption of a particle. For example, the sink may log the absorption to a file or add the absorption position to an array of absorption events for visualization.
To illustrate the process we consider a circular aperture. We model the aperture as a circular hole drilled into a plane.
End of explanation
aperture = SinkCircularAperture(
center=np.array([0.0, 0.0, 0.0]), radius=1.0, normal=np.array([1.0, 0.0, 0.0]))
ensemble = coldatoms.Ensemble(num_ptcls=3)
ensemble.x = np.array([[-1.0, 0.0, 0.0], [-1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
ensemble.v = np.array([[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 4.0, 0.0]])
coldatoms.process_sink(10.0, ensemble, aperture)
Explanation: To see how this works we'll consider a specific example: a circular aperture of radius 1 in the $y$-$z$ plane. We'll have a look at the fate of three particles: the first is traveling away from the aperture, the second passes through the aperture, and the third one hits the plane outside of the aperture. As expected, only the third particle gets absorbed (its index is 2 because numpy and python indexing is zero based):
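We can also verify the fate of the third particle by hand: it starts at $(-1,0,0)$ with velocity $(1,4,0)$, so it reaches the $x=0$ plane after a time $\tau=1$ at $y=4$, well outside the radius-1 aperture:
tau = (0.0 - (-1.0)) / 1.0
print(tau, np.array([-1.0, 0.0, 0.0]) + tau * np.array([1.0, 4.0, 0.0]))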
End of explanation
print(ensemble.num_ptcls)
print(ensemble.v)
Explanation: And after the sink has been processed, the ensemble contains only two particles:
End of explanation
beam_source = OvenSource(R=1.0e-3, divergence=1.0e-2, n_dot=1.0e7, v_bar=300.0, delta_v=10.0)
Explanation: A collimated atom beam
With our oven source and circular aperture we have the building blocks of a collimated beam experiment. All we need to do is combine source, sink, and particle push algorithm. We have already seen above that sources and particle push can be combined rather easily. We simply need to generate particles at each time step and then update the particle positions with the particle push.
Unfortunately, combining sinks with the particle push is not quite as straightforward in general. The fundamental reason for this is that sinks demand that particle motion be along a straight line. The velocity must not change during the time step. Granted, in our case this is what happens because we are neglecting all forces acting on the particles and they therefore move along straight lines. However, in general this is not the case. When there are forces acting on the particles our drift-kick integrator produces trajectories with a kink.
This means that the particle push itself has to be informed about the sinks. It can then process the sinks for each of the straight-line segments making up the particle trajectory.
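To make this concrete, here is a minimal sketch (not the library's actual implementation) of how a single straight-line drift segment could be checked against a sink before the surviving particles are advanced:
def drift_segment_with_sink(dt, x, v, sink):
    # Sketch only: drop particles whose absorption time falls inside [0, dt].
    taus = sink.find_absorption_time(x, v, dt)
    keep = np.where((taus < 0.0) | (taus > dt))[0]
    return x[keep] + dt * v[keep], v[keep]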
First we define the atom source representing the oven:
End of explanation
beam_aperture = SinkCircularAperture(
center=np.array([0.0, 0.0, 0.5]), radius=1.0e-3, normal=np.array([0.0, 0.0, 1.0]))
Explanation: Remember that our oven source is located at the origin, emitting particles from a circular disk in the $x$-$y$ plane moving in the positive $z$ direction. We place an aperture of radius $1\,{\rm mm}$ at a distance of $0.5\,{\rm m}$ downstream from the source. At this location the beam has expanded to about $17\,{\rm mm}$, so a significant fraction of the atoms get absorbed by the aperture.
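A rough, assumption-laden check of that number: the transverse spread grows like the propagation distance times the divergence, on top of the roughly 1 mm source radius, which gives a beam width of the same order as the value quoted above:
distance = 0.5
sigma_transverse = distance * 1.0e-2        # about 5 mm per transverse direction
print(2.0e-3 + 2.355 * sigma_transverse)    # source diameter + FWHM of the spread, ~0.014 m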
End of explanation
%%capture
beam_ensemble = coldatoms.Ensemble(num_ptcls=0)
dt = 1.0e-4
for i in range(30):
coldatoms.produce_ptcls(dt, beam_ensemble, sources=[beam_source])
coldatoms.drift_kick(dt, beam_ensemble, forces=[], sink=beam_aperture)
Explanation: We integrate the particle motion with a time step size of $dt=1.0 \times 10^{-4}{\rm s}$. During that time the particles travel $3{\rm cm}$ on average.
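As a quick consistency check with the values used above, the oldest particles have travelled roughly
print(30 * dt * 300.0)   # ~0.9 m, which is why the plot below extends out to z of about 1 m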
End of explanation
fig = plt.figure()
plt.plot(beam_ensemble.x[:,2], beam_ensemble.x[:, 0], '.', markersize=3)
plt.xlim([-0, 1.0])
plt.ylim([-20.0e-3, 20.0e-3])
plt.xlabel(r'$z/\rm{m}$')
plt.ylabel(r'$x/\rm{m}$')
fig.tight_layout()
Explanation: The following picture shows a side view of the beam:
End of explanation |
15,613 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using ICsound
csoundmagics includes an ICsound class which is adapted from Andrés Cabrera's icsound module. ICsound is bound to the %%csound and %csound magics command.
This notebook is an adaptation of Andrés' icsound test notebook.
Starting the Csound engine
To use ICsound create an ICsound instance
Step1: Creating an ICsound object automatically starts the engine
Step2: You can set the properties of the Csound engine with parameters to the startEngine() function.
Step3: The engine runs in a separate thread, so it doesn't block execution of python.
Step4: Use the %%csound magic command to directly type csound language code in the cell and send it to the engine. The number after the magic command is optional; it references the slot where the engine is running. If omitted, slot#1 is assumed.
Step5: So where did it print?
Step6: By default, messages from Csound are not shown, but they are stored in an internal buffer. You can view them with the printLog() function. If the log is getting too long and confusing, use the clearLog() function.
Function tables
You can create csound f-tables directly from python lists or numpy arrays
Step7: Tables can be plotted in the usual matplotlib way, but ICsound provides a plotTable function which styles the graphs.
Step8: You can get the function table values from the csound instance
Step9: Tables can also be passed by their variable name in Csound
Step10: The following will create 320 tables with 720 points each
Step11: Sending instruments
You can send instruments to a running csound engine with the %%csound magic. Any syntax errors will be displayed inline.
Step12: Channels
Csound channels can be used to send values to Csound. They can affect running instances of instruments by using the invalue/chnget opcodes
Step13: You can also read the channels from Csound. These channels can be set from ICsound or within instruments with the outvalue/chnset opcodes
Step14: Recording the output
You can record the realtime output from csound
Step15: Remote engines
You can also interact with engines through UDP. Note that not all operations are available, notably reading f-tables, but you can send instruments and note events to the remote engine.
Step16: Now send notes and instruments from the client
Step17: And show the log in the server
Step18: Stopping the engine
Step19: If we don't need cs_client anymore, we can delete its slot with the %csound line magic (note the single % sign and the negative slot#). The python instance cs_client can then be deleted
Step20: Audification
Reading Earthquake data through a web API (might take a few minutes)
Step21: Instrument to play back the earthquake data stored in a table
Step22: Listen
Step23: Slower
Step24: Quicker
Step25: Other tests
Another engine | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%load_ext csoundmagics
Explanation: Using ICsound
csoundmagics includes an ICsound class which is adapted from Andrés Cabrera's icsound module. ICsound is bound to the %%csound and %csound magic commands.
This notebook is an adaptation of Andrés' icsound test notebook.
Starting the Csound engine
To use ICsound create an ICsound instance:
End of explanation
cs = ICsound(port=12894)
Explanation: Creating an ICsound object automatically starts the engine:
End of explanation
help(cs.startEngine)
Explanation: You can set the properties of the Csound engine with parameters to the startEngine() function.
End of explanation
cs.startEngine()
Explanation: The engine runs in a separate thread, so it doesn't block execution of python.
End of explanation
%%csound 1
gkinstr init 1
%%csound
print i(gkinstr)
Explanation: Use the %%csound magic command to directly type csound language code in the cell and send it to the engine. The number after the magic command is optional; it references the slot where the engine is running. If omitted, slot#1 is assumed.
End of explanation
cs.printLog()
Explanation: So where did it print?
End of explanation
cs.fillTable(1, np.array([8, 7, 9, 1, 1, 1]))
cs.fillTable(2, [4, 5, 7, 0, 8, 7, 9, 6])
Explanation: By default, messages from Csound are not shown, but they are stored in an internal buffer. You can view them with the printLog() function. If the log is getting too long and confusing, use the clearLog() function.
Function tables
You can create csound f-tables directly from python lists or numpy arrays:
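For instance (an illustrative extra table, using the same fillTable call and an arbitrary free table number), a table can be filled with a waveform computed in numpy:
cs.fillTable(4, np.sin(np.linspace(0.0, 2.0 * np.pi, 1024)))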
End of explanation
cs.plotTable(1)
cs.plotTable(2, reuse=True)
plt.grid()
Explanation: Tables can be plotted in the usual matplotlib way, but ICsound provides a plotTable function which styles the graphs.
End of explanation
cs.table(2)
cs.makeTable(2, 1024, 10, 1)
cs.makeTable(3, 1024, -10, 0.5, 1)
cs.plotTable(2)
cs.plotTable(3, reuse=True)
#ylim((-1.1,1.1))
cs.table(2)[100: 105]
Explanation: You can get the function table values from the csound instance:
End of explanation
%%csound 1
giHalfSine ftgen 0, 0, 1024, 9, .5, 1, 0
cs.plotTable('giHalfSine')
Explanation: Tables can also be passed by their variable name in Csound:
End of explanation
randsig = np.random.random((320, 720))
i = 0
for i, row in enumerate(randsig):
cs.fillTable(50 + i, row)
print(i, '..', end=' ')
cs.plotTable(104)
Explanation: The following will create 320 tables with 720 points each:
End of explanation
%%csound 1
instr 1
asig asds
%%csound 1
instr 1
asig oscil 0.5, 440
outs asig, asig
%%csound 1
instr 1
asig oscil 0.5, 440
outs asig, asig
endin
Explanation: Sending instruments
You can send instruments to a running csound engine with the %%csound magic. Any syntax errors will be displayed inline.
End of explanation
cs.setChannel("val", 20)
Explanation: Channels
Csound channels can be used to send values to Csound. They can affect running instances of instruments by using the invalue/chnget opcodes:
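As a sketch of the receiving side (an illustrative instrument that is not part of the original notebook), an instrument can pick the channel up with chnget and use it, for example, as an amplitude control:
%%csound 1
instr 2
kamp chnget "val"
asig oscil 0.01 * kamp, 440
outs asig, asig
endin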
End of explanation
cs.channel("val")
Explanation: You can also read the channels from Csound. These channels can be set from ICsound or within instruments with the outvalue/chnset opcodes:
End of explanation
cs.startRecord("out.wav")
cs.sendScore("i 1 0 1")
import time
time.sleep(1)
cs.stopRecord()
!aplay out.wav
Explanation: Recording the output
You can record the realtime output from csound:
End of explanation
cs_client = ICsound()
cs_client.startClient()
cs.clearLog()
Explanation: Remote engines
You can also interact with engines through UDP. Note that not all operations are available, notably reading f-tables, but you can send instruments and note events to the remote engine.
End of explanation
cs_client.sendScore("i 1 0 1")
cs_client.sendCode("print i(gkinstr)")
Explanation: Now send notes and instruments from the client:
End of explanation
cs.printLog()
Explanation: And show the log in the server:
End of explanation
cs.stopEngine()
cs
Explanation: Stopping the engine
End of explanation
%csound -2
del cs_client
Explanation: If we don't need cs_client anymore, we can delete its slot with the %csound line magic (note the single % sign and the negative slot#). The python instance cs_client can then be deleted:
End of explanation
prefix = 'http://service.iris.edu/irisws/timeseries/1/query?'
SCNL_parameters = 'net=IU&sta=ANMO&loc=00&cha=BHZ&'
times = 'starttime=2005-01-01T00:00:00&endtime=2005-01-02T00:00:00&'
output = 'output=ascii'
import urllib
f = urllib.request.urlopen(prefix + SCNL_parameters + times + output)
timeseries = f.read()
import ctcsound
data = ctcsound.pstring(timeseries).split('\n')
dates = []
values = []
for line in data[1:-1]:
date, val = line.split()
dates.append(date)
values.append(float(val))
plt.plot(values)
cs.startEngine()
cs.fillTable(1, values)
Explanation: Audification
Reading Earthquake data through a web API (might take a few minutes):
End of explanation
%%csound 1
instr 1
idur = p3
itable = p4
asig poscil 1/8000, 1/p3, p4
outs asig, asig
endin
Explanation: Instrument to play back the earthquake data stored in a table:
End of explanation
cs.sendScore('i 1 0 3 1')
Explanation: Listen:
End of explanation
cs.sendScore('i 1 0 7 1')
Explanation: Slower:
End of explanation
cs.sendScore('i 1 0 1 1')
Explanation: Quicker:
End of explanation
ics = ICsound(bufferSize=64)
ics.listInterfaces()
%%csound 2
instr 1
asig oscil 0.5, 440
outs asig, asig
endin
ics.sendScore("i 1 0 0.5")
%csound -2
del ics
cs.stopEngine()
Explanation: Other tests
Another engine:
End of explanation |
15,614 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Beautiful soup on test data
Here, we create some simple HTML that include some frequently used tags.
Note, however, that we have also left one paragraph tag unclosed.
Step2: Once the soup object has been created successfully, we can execute a number of queries on the DOM.
First we request all data from the head tag.
Note that while it looks like a list of strings was returned, actually, a bs4.element.Tag type is returned.
These examples explore how to extract tags, the text from tags, how to filter queries based on
attributes, how to retrieve attributes from a returned query, and how the BeautifulSoup engine
is tolerant of unclosed tags.
Step3: Beautilful soup on real data
In this example I will show how you can use BeautifulSoup to retrieve information from live web pages.
We make use of The Guardian newspaper, and retrieve the HTML from an arbitrary article.
We then create the BeautifulSoup object, and query the links that were discovered in the DOM.
Since a large number are returned, we then apply attribute filters that let us reduce significantly
the number of returned links.
I selected the filters selected for this example in order to focus on the names in the paper.
The parameterisation of the attributes was discovered by using the inspect functionality of Google Chrome
Step4: Chaining queries
Now, let us conisder a more general query that might be done on a website such as this.
We will query the base technology page, and attempt to list all articles that pertain to this main page
Step5: After inspecting the DOM (via the inspect tool in my browser), I see that the attributes that define
a technology article are | Python Code:
source =
<!DOCTYPE html>
<html>
<head>
<title>Scraping</title>
</head>
<body class="col-sm-12">
<h1>section1</h1>
<p>paragraph1</p>
<p>paragraph2</p>
<div class="col-sm-2">
<h2>section2</h2>
<p>paragraph3</p>
<p>unclosed
</div>
</body>
</html>
soup = BeautifulSoup(source, "html.parser")
Explanation: Beautiful soup on test data
Here, we create some simple HTML that include some frequently used tags.
Note, however, that we have also left one paragraph tag unclosed.
End of explanation
print(soup.prettify())
print('Head:')
print('', soup.find_all("head"))
# [<head>\n<title>Scraping</title>\n</head>]
print('\nType of head:')
print('', map(type, soup.find_all("head")))
# [<class 'bs4.element.Tag'>]
print('\nTitle tag:')
print('', soup.find("title"))
# <title>Scraping</title>
print('\nTitle text:')
print('', soup.find("title").text)
# Scraping
divs = soup.find_all("div", attrs={"class": "col-sm-2"})
print('\nDiv with class=col-sm-2:')
print('', divs)
# [<div class="col-sm-2">....</div>]
print('\nClass of first div:')
print('', divs[0].attrs['class'])
# [u'col-sm-2']
print('\nAll paragraphs:')
print('', soup.find_all("p"))
# [<p>paragraph1</p>,
# <p>paragraph2</p>,
# <p>paragraph3</p>,
# <p>unclosed\n </p>]
Explanation: Once the soup object has been created successfully, we can execute a number of queries on the DOM.
First we request all data from the head tag.
Note that while it looks like a list of strings was returned, actually, a bs4.element.Tag type is returned.
These examples explore how to extract tags, the text from tags, how to filter queries based on
attributes, how to retrieve attributes from a returned query, and how the BeautifulSoup engine
is tolerant of unclosed tags.
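Attribute values can also be read with the get() method, which returns None instead of raising an error when the attribute is missing:
print(soup.find("div").get("class"))   # ['col-sm-2']
print(soup.find("div").get("id"))      # None, since no id was set in our HTML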
End of explanation
url = 'https://www.theguardian.com/technology/2017/jan/31/amazon-expedia-microsoft-support-washington-action-against-donald-trump-travel-ban'
req = requests.get(url)
source = req.text
soup = BeautifulSoup(source, 'html.parser')
print(source)
links = soup.find_all('a')
links
links = soup.find_all('a', attrs={
'data-component': 'auto-linked-tag'
})
for link in links:
print(link['href'], link.text)
Explanation: Beautiful soup on real data
In this example I will show how you can use BeautifulSoup to retrieve information from live web pages.
We make use of The Guardian newspaper, and retrieve the HTML from an arbitrary article.
We then create the BeautifulSoup object, and query the links that were discovered in the DOM.
Since a large number are returned, we then apply attribute filters that let us reduce significantly
the number of returned links.
I selected the filters for this example in order to focus on the names in the paper.
The parameterisation of the attributes was discovered by using the inspect functionality of Google Chrome
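find_all also accepts regular expressions (and even functions) as attribute values, which is handy when the exact value is not known in advance; for example, to keep only links whose href mentions the technology section (an illustrative filter, not the one used above):
import re
tech_links = soup.find_all('a', href=re.compile(r'/technology/'))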
End of explanation
url = 'https://www.theguardian.com/uk/technology'
req = requests.get(url)
source = req.text
soup = BeautifulSoup(source, 'html.parser')
Explanation: Chaining queries
Now, let us consider a more general query that might be done on a website such as this.
We will query the base technology page, and attempt to list all articles that pertain to this main page
End of explanation
articles = soup.find_all('a', attrs={
'class': 'js-headline-text'
})
for article in articles:
print(article['href'][:], article.text[:20])
Explanation: After inspecting the DOM (via the inspect tool in my browser), I see that the attributes that define
a technology article are:
class = "js-headline-text"
End of explanation |
15,615 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Conventions
Our main data structure is a Box, which represents a bounding box somewhere in the image.
Our coordinate system has its origin in the bottom-left of the image (with positive up and right), and all coordinates are specified as (y,x). Our box is specified as (y,x,h,w)
Define our boxes.
Step3: Define helper functions.
Step7: Integral image
These functions turn a raw image into a discretised integral image suitable for finding histograms with. The main function of interest is integral_image, which takes an image and a number of bins and does all the work.
Why not equal-occupancy?
Good question. If we discretise our image using equal-occupancy bins, we lose any knowledge of the background distribution. For example, suppose the background strictly follows a uniform distribution, and the image contains a single very bright source. On binning, the top bins will be almost entirely filled with the source leaving little room for the background; this means the discrete version of the background will not follow a uniform distribution over the bins, as it lacks presence in the top ones.
But we do want to maintain information on the distribution of the background, so instead we use equal-range bins. There is probably an improvement on this, for example using manually specified unequal-range bins that suit our images better.
Step8: The algorithm
Step9: Testing fake data
Step10: Testing real data | Python Code:
import numpy as np

class Box:
def __init__(self, y, x, h, w):
self.pos = (y, x)
self.size = (h, w)
@property
def pos(self):
return self._pos
@pos.setter
def pos(self, value):
assert len(value) == 2
self._pos = np.array(value, dtype=int)
@property
def size(self):
return self._size
@size.setter
def size(self, value):
assert len(value) == 2
assert all( value[i] > 0 for i in range(len(value)) )
self._size = np.array(value, dtype=int)
def __str__(self):
return '(pos={} sze={})'.format(self.pos , self.size)
def __eq__(self, other):
return np.all(self.pos == other.pos) and np.all(self.size == other.size)
def _get_integral_image_elt(self, pos, integral_image):
M, ny, nx = integral_image.shape
cpos = np.clip(pos, [0, 0], [ny, nx])
if np.any(pos == 0):
return np.zeros(M)
return integral_image[:, cpos[0]-1, cpos[1]-1]
def counts(self, integral_image):
corners = (self.pos,
self.pos + [self.size[0], 0],
self.pos + [self.size[0], self.size[1]],
self.pos + [0, self.size[1]])
return (self._get_integral_image_elt(corners[0], integral_image)
- self._get_integral_image_elt(corners[1], integral_image)
+ self._get_integral_image_elt(corners[2], integral_image)
- self._get_integral_image_elt(corners[3], integral_image))
Explanation: Conventions
Our main data structure is a Box, which represents a bounding box somewhere in the image.
Our coordinate system has its origin in the bottom-left of the image (with positive up and right), and all coordinates are specified as (y,x). Our box is specified as (y,x,h,w)
Define our boxes.
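A quick usage sketch of the class defined above:
box = Box(10, 20, 50, 80)   # lower-left corner at row 10, column 20; 50 high, 80 wide
print(box)                  # (pos=[10 20] sze=[50 80])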
End of explanation
from collections import namedtuple
from os.path import isdir
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
from astropy.io import fits

# 'rng' below is assumed to be an alias for numpy's random module
rng = np.random

SAVE_DIR = '../output'
# represents loaded data. the truth field may be none.
Data = namedtuple('Data', ['img','truth'])
def as_boxes(*args):
Turns some parameter triplets into boxes. Handles all of the following:
as_boxes( [(10,0,10,10), (20,50,10,10)] )
as_boxes( (10,0,10,10), (20,50,10,10) )
as_boxes(10,0,10,10)
if type(args[0]) in [float, int]:
return [ Box(*args) ]
elif type(args[0]) == tuple:
return [ Box(*t) for t in args ]
elif type(args[0]) == list:
return [ Box(*t) for t in args[0] ]
def show (image, *args, file=None, figsize=(8,8), dpi=250, pad_inches=0):
Show the given image. The varargs can be used to pass models (ie. lists of boxes) to draw on top
of the image. This can take two forms:
1. pass in a single dict of the form { 'colour1':model1, 'colour2':model2 }
2. pass in up to 6 models as individual parameters, and they will be automatically coloured.
plt.figure(figsize=figsize, dpi=dpi)
COLOURS = ['blue', 'red', 'yellow', 'orange', 'purple', 'green']
plt.gray()
plt.gca().get_xaxis().set_visible(False)
plt.gca().get_yaxis().set_visible(False)
plt.imshow(image, interpolation='nearest', origin='lower')
if len(args) > 0 and type(args[0]) == list: # [ model1, model2, ...]
for colour, model in zip(COLOURS, args):
for box in model:
plt.gca().add_patch(Rectangle(tuple(reversed(box.pos)), box.size[1], box.size[0], alpha=0.9, facecolor='None', edgecolor=colour))
elif len(args) == 1 and type(args[0]) == dict: # { 'red':model1, 'blue':model2 }
for colour, model in args[0].items():
for box in model:
plt.gca().add_patch(Rectangle(tuple(reversed(box.pos)), box.size[1], box.size[0], alpha=0.9, facecolor='None', edgecolor=colour))
if file != None:
if isdir(SAVE_DIR):
plt.savefig('{}/{}'.format(SAVE_DIR, file), bbox_inches='tight', pad_inches=pad_inches)
else:
print("output directory '{}' does not exist.".format(SAVE_DIR))
def fake_data (sources:'[(y, x, h, w, brightness, variance)]'=[], size:'(y, x)'=(1000,1000)):
img = rng.uniform(size=size)
boxes = []
# define each source by making a Box, and manipulating the image.
for (y, x, h, w, b, v) in sources:
boxes.append( Box(y, x, h, w) )
img[y:y+h, x:x+w] = np.clip((img[y:y+h, x:x+w] + b) * v, 0, 1)
return Data(img, boxes)
def real_data (filename='frame-r-000094-1-0131.fits.gz'):
hdulist = fits.open('../data/{}'.format(filename))
img = hdulist[0].data
hdulist.close()
return Data(img, None)
Explanation: Define helper functions.
End of explanation
def as_bins(img, M):
Discretises an image into M equal-range bins, the output is an array of the same dimensions.
Assumes data is in [0,1]?
lo, hi = np.min(img), np.max(img)
f = np.vectorize(lambda x: (int)(M * (x-lo) / (hi-lo)) if x < hi else M-1, otypes=[np.int])
return f(img)
def as_booleans(dimg, M):
Takes a discrete image and a number of blocks, and returns a 3D array. The 2D array at index i contains
True's at each coordinate where the discrete image contained bin index i. There are M indices, of course.
ny, nx = dimg.shape
N = ny * nx
bimg = np.zeros((M, ny, nx),dtype=bool)
for m in range(M):
bimg[m, :, :] = (dimg == m)
return bimg
def as_accumulation(bimg):
Given a boolean image array, creates the integral image by summing up each 2D array from bottom-left to
top-right.
return np.cumsum(np.cumsum(bimg, axis=2), axis=1)
def integral_image(img, M):
return as_accumulation(as_booleans(as_bins(img, M), M))
Explanation: Integral image
These functions turn a raw image into a discretised integral image from which histograms over boxes can be computed quickly. The main function of interest is integral_image, which takes an image and a number of bins and does all the work.
Why not equal-occupancy?
Good question. If we discretise our image using equal-occupancy bins, we lose any knowledge of the background distribution. For example, suppose the background strictly follows a uniform distribution, and the image contains a single very bright source. On binning, the top bins will be almost entirely filled with the source leaving little room for the background; this means the discrete version of the background will not follow a uniform distribution over the bins, as it lacks presence in the top ones.
But we do want to maintain information on the distribution of the background, so instead we use equal-range bins. There is probably an improvement on this, for example using manually specified unequal-range bins that suit our images better.
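As a tiny illustration of the equal-range discretisation (using the as_bins helper defined above), each pixel is binned purely by where its value sits in the overall range:
print(as_bins(np.array([[0.1, 0.4], [0.6, 0.9]]), 4))   # -> [[0 1] [2 3]]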
End of explanation
def fast(iimg, background, min_area=1000, threshold=0.0000001):
_, h, w = iimg.shape
def is_odd(box):
counts, area = box.counts(iimg), box.size[0] * box.size[1]
scale = min(counts[i]/background[i] if background[i] > 0 else np.inf for i in range(len(counts)))
dist_diff = [counts[i] - scale*background[i] for i in range(len(counts))]
src_proportion = sum(dist_diff)/area
return src_proportion > threshold
count = 0
inactive, result = [], []
active = deque([ Box(0,0,h,w) ])
#active.extend([Box(0,0,w/4,h/4), Box(w/4,0,w/2,h/4), Box(3*w/4,0,w/4,h/4),
# Box(0,h/4,w/4,h/2), Box(w/4,h/4,w/2,h/2), Box(3*w/4,h/4,w/4,h/2),
# Box(0,3*h/4,w/4,h/4), Box(w/4,3*h/4,w/2,h/4), Box(3*w/4,3*h/4,w/4,h/4)])
#TODO add third grid?
while len(active) > 0:
box = active.popleft()
if box.size[0]*box.size[1] <= min_area:
result.append(box)
continue
sub_boxes = [ Box(box.pos[0] + dy*box.size[0]/2, box.pos[1] + dx*box.size[1]/2, box.size[0]/2, box.size[1]/2)
for dy in range(2) for dx in range(2) ]
sub_boxes = [ t for t in sub_boxes if is_odd(t) ]
count += 4
if len(sub_boxes) == 0:
inactive.append(box)
active.extend(sub_boxes)
print('evaluations: {}'.format(count))
return result
Explanation: The algorithm
End of explanation
BINS = 16
background = [1/BINS for _ in range(BINS)]
sources = [(100,200,50,400,-0.05,1), (300,700,200,200,-0.05,1), (800,600,100,200,0,0.9), (500,200,100,100,0,0.8)]
img, truth = fake_data(sources)
iimg = integral_image(img, BINS)
result = fast(iimg, background, threshold=0.05)
show(img, result, figsize=(10,10), dpi=500, file='fake_{}_bins.png'.format(BINS))
Explanation: Testing fake data
End of explanation
BINS = 32
# good for 32 bins
#background = [0.7648, 0.1940, 0.03642, 0.0047] + [0 for _ in range(BINS-1)]
background = [1] + [0]*(BINS - 1)
background = [b/sum(background) for b in background]
img, truth = real_data()
iimg = integral_image(img, BINS)
result = fast(iimg, background, threshold=0.000001)
show(img, result, figsize=(20,20), dpi=500, file='real.png')
Explanation: Testing real data
End of explanation |
15,616 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center><img src="http
Step1: Simple standard analysis
Here we can see a simple example of basic spectral analysis for FITS PHA files from the Fermi Gamma-ray Burst Monitor. FITS PHA files are read in with the OGIPLike plugin that supports reading TYPE I & II PHA files with properly formatted <a href=https
Step2: As we can see, the plugin probes the data to choose the appropriate likelihood for the given obseration and background data distribution.
In GBM, the background is estimated from a polynomial fit and hence has Gaussian distributed errors via error propagation.
We can also select the energies that we would like to use in a spectral fit. To understand how energy selections work, there is a detailed docstring
Step3: Signature
Step4: To examine our energy selections, we can view the count spectrum
Step5: Deselected regions are marked shaded in grey.
We have also view which channels fall below a given significance level
Step6: Setup for spectral fitting
Now we will prepare the plugin for fitting by
Step7: Examining the fitted model
We can now look at the asymmetric errors, countours, etc.
Step8: And to plot the fit in the data space we call the data spectrum plotter
Step9: Or we can examine the fit in model space. Note that we must use the analysis_results of the joint likelihood for the model plotting
Step10: We can go Bayesian too! | Python Code:
from threeML import *
import matplotlib.pyplot as plt
%matplotlib inline
%matplotlib notebook
Explanation: <center><img src="http://identity.stanford.edu/overview/images/emblems/SU_BlockStree_2color.png" width="200" style="display: inline-block"><img src="http://upload.wikimedia.org/wikipedia/commons/thumb/c/c2/Main_fermi_logo_HI.jpg/682px-Main_fermi_logo_HI.jpg" width="200" style="display: inline-block"><img src="http://www.astro.wisc.edu/~russell/HAWCLogo.png" width="200" style="display: inline-block"></center>
<h1> Basic PHA FITS Analysis with 3ML</h1>
<br/>
Giacomo Vianello (Stanford University)
<a href="mailto:giacomov@stanford.edu">giacomov@stanford.edu</a>
<h2>IPython Notebook setup. </h2>
This is needed only if you are using the <a href=http://ipython.org/notebook.html>IPython Notebook</a> on your own computer, it is NOT needed if you are on threeml.stanford.edu.
This line will activate the support for inline display of matplotlib images:
End of explanation
triggerName = 'bn090217206'
ra = 204.9
dec = -8.4
#Data are in the current directory
datadir = os.path.abspath('.')
#Create an instance of the GBM plugin for each detector
#Data files
obsSpectrum = os.path.join( datadir, "bn090217206_n6_srcspectra.pha{1}" )
bakSpectrum = os.path.join( datadir, "bn090217206_n6_bkgspectra.bak{1}" )
rspFile = os.path.join( datadir, "bn090217206_n6_weightedrsp.rsp{1}" )
#Plugin instance
NaI6 = OGIPLike( "NaI6", observation=obsSpectrum, background=bakSpectrum, response=rspFile )
#Choose energies to use (in this case, I exclude the energy
#range from 30 to 40 keV to avoid the k-edge, as well as anything above
#950 keV, where the calibration is uncertain)
NaI6.set_active_measurements( "10.0-30.0", "40.0-950.0" )
Explanation: Simple standard analysis
Here we can see a simple example of basic spectral analysis for FITS PHA files from the Fermi Gamma-ray Burst Monitor. FITS PHA files are read in with the OGIPLike plugin that supports reading TYPE I & II PHA files with properly formatted <a href=https://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/spectra/ogip_92_007/node5.html>OGIP</a> keywords.
Here, we examine a TYPE II PHA file. Since there are multiple spectra embedded in the file, we must either use the XSPEC style {spectrum_number} syntax to access the appropriate spectrum or use the keyword spectrum_number=1.
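Equivalently (as noted above) the spectrum can be selected with the keyword argument rather than the XSPEC-style suffix; a hypothetical variant of the call used below would look like:
# NaI6 = OGIPLike("NaI6", observation="bn090217206_n6_srcspectra.pha",
#                 background="bn090217206_n6_bkgspectra.bak",
#                 response="bn090217206_n6_weightedrsp.rsp", spectrum_number=1)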
End of explanation
NaI6.set_active_measurements?
Explanation: As we can see, the plugin probes the data to choose the appropriate likelihood for the given observation and background data distribution.
In GBM, the background is estimated from a polynomial fit and hence has Gaussian distributed errors via error propagation.
We can also select the energies that we would like to use in a spectral fit. To understand how energy selections work, there is a detailed docstring:
End of explanation
NaI6
NaI6.display()
Explanation: Signature: NaI6.set_active_measurements(args, *kwargs)
Docstring:
Set the measurements to be used during the analysis. Use as many ranges as you need, and you can specify
either energies or channels to be used.
NOTE to Xspec users: while XSpec uses integers and floats to distinguish between energies and channels
specifications, 3ML does not, as it would be error-prone when writing scripts. Read the following documentation
to know how to achieve the same functionality.
Energy selections:
They are specified as 'emin-emax'. Energies are in keV. Example:
set_active_measurements('10-12.5','56.0-100.0')
which will set the energy range 10-12.5 keV and 56-100 keV to be
used in the analysis. Note that there is no difference in saying 10 or 10.0.
Channel selections:
They are specified as 'c[channel min]-c[channel max]'. Example:
set_active_measurements('c10-c12','c56-c100')
This will set channels 10-12 and 56-100 as active channels to be used in the analysis
Mixed channel and energy selections:
You can also specify mixed energy/channel selections, for example to go from 0.2 keV to channel 20 and from
channel 50 to 10 keV:
set_active_measurements('0.2-c10','c50-10')
Use all measurements (i.e., reset to initial state):
Use 'all' to select all measurements, as in:
set_active_measurements('all')
Use 'reset' to return to native PHA quality from file, as in:
set_active_measurements('reset')
Exclude measurements:
Excluding measurements work as selecting measurements, but with the "exclude" keyword set to the energies and/or
channels to be excluded. To exclude between channel 10 and 20 keV and 50 keV to channel 120 do:
set_active_measurements(exclude=["c10-20", "50-c120"])
Select and exclude:
Call this method more than once if you need to select and exclude. For example, to select between 0.2 keV and
channel 10, but exclude channel 30-50 and energy , do:
set_active_measurements("0.2-c10",exclude=["c30-c50"])
Using native PHA quality:
To simply add or exclude channels from the native PHA, one can use the use_quailty
option:
set_active_measurements("0.2-c10",exclude=["c30-c50"], use_quality=True)
This translates to including the channels from 0.2 keV - channel 10, exluding channels
30-50 and any channels flagged BAD in the PHA file will also be excluded.
:param args:
:param exclude: (list) exclude the provided channel/energy ranges
:param use_quality: (bool) use the native quality on the PHA file (default=False)
:return:
File: ~/coding/3ML/threeML/plugins/SpectrumLike.py
Type: instancemethod
Investigating the contents of the data
We can examine some quicklook properties of the plugin by executing it or calling its display function:
End of explanation
NaI6.view_count_spectrum()
Explanation: To examine our energy selections, we can view the count spectrum:
End of explanation
NaI6.view_count_spectrum(significance_level=5)
Explanation: Deselected regions are marked shaded in grey.
We can also view which channels fall below a given significance level:
End of explanation
#This declares which data we want to use. In our case, all that we have already created.
data_list = DataList( NaI6 )
powerlaw = Powerlaw()
GRB = PointSource( triggerName, ra, dec, spectral_shape=powerlaw )
model = Model( GRB )
jl = JointLikelihood( model, data_list, verbose=False )
res = jl.fit()
Explanation: Setup for spectral fitting
Now we will prepare the plugin for fitting by:
* Creating a DataList
* Selecting a spectral shape
* Creating a likelihood model
* Building a joint analysis object
End of explanation
res = jl.get_errors()
res = jl.get_contours(powerlaw.index,-1.3,-1.1,20)
res = jl.get_contours(powerlaw.index,-1.25,-1.1,60,powerlaw.K,1.8,3.4,60)
Explanation: Examining the fitted model
We can now look at the asymmetric errors, contours, etc.
End of explanation
jl.restore_best_fit()
_=display_spectrum_model_counts(jl)
Explanation: And to plot the fit in the data space we call the data spectrum plotter:
End of explanation
plot_point_source_spectra(jl.results,flux_unit='erg/(s cm2 keV)')
Explanation: Or we can examine the fit in model space. Note that we must use the analysis_results of the joint likelihood for the model plotting:
End of explanation
powerlaw.index.prior = Uniform_prior(lower_bound=-5.0, upper_bound=5.0)
powerlaw.K.prior = Log_uniform_prior(lower_bound=1.0, upper_bound=10)
bayes = BayesianAnalysis(model, data_list)
samples = bayes.sample(n_walkers=50,burn_in=100, n_samples=1000)
fig = bayes.corner_plot()
fig = bayes.corner_plot_cc()
plot_point_source_spectra(bayes.results, flux_unit='erg/(cm2 s keV)',equal_tailed=False)
Explanation: We can go Bayesian too!
End of explanation |
15,617 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<figure>
<IMG SRC="gfx/Logo_norsk_pos.png" WIDTH=100 ALIGN="right">
</figure>
Linear algebra crash course
Roberto Di Remigio, Luca Frediani
This is a very brief and incomplete introduction to a few selected topics in linear algebra.
An extremely well-written book, covering many topics in mathematics heavily used in physics is James Nearing's Mathematical Tools for Physics. The book is freely available online on the author's website.
These notes were translated from the italian version, originally written by Filippo Lipparini and Benedetta Mennucci.
Vector spaces and bases
Definition
Step1: As you can see, we can easily access all this information. One-dimensional arrays in NumPy can be used to represent vectors, while two-dimensional arrays can be used to represent matrices.
a = np.array([1, 2, 3]) is equivalent to the vector
Step2: index counting in Python starts from 0 not from 1! As you can see, subscript notation can also be used to modify the value at a given index.
There a number of predefined functions to create arrays with certain properties.
We have already met linspace and here is a list of some additional functions
Step3: NumPy supports indexing arrays via slicing like Matlab does. While indexing via a single integer number will return the element at that position of the array, i.e. a scalar, slicing will return another array with fewer dimensions. For example
Step4: As you can see, we can quite easily select just parts of an array and either create new, smaller arrays or just obtain the value at a certain position.
Algebraic operations and functions are also defined on arrays. The definitions for addition and subtraction are obvious. Those for multiplication, division and other mathematical functions are less obvious | Python Code:
import numpy as np
a = np.array([1, 2, 3]) # Create a rank 1 array
print(type(a)) # Prints "<type 'numpy.ndarray'>"
print(a.shape) # Prints "(3,)"
Explanation: <figure>
<IMG SRC="gfx/Logo_norsk_pos.png" WIDTH=100 ALIGN="right">
</figure>
Linear algebra crash course
Roberto Di Remigio, Luca Frediani
This is a very brief and incomplete introduction to a few selected topics in linear algebra.
An extremely well-written book, covering many topics in mathematics heavily used in physics is James Nearing's Mathematical Tools for Physics. The book is freely available online on the author's website.
These notes were translated from the italian version, originally written by Filippo Lipparini and Benedetta Mennucci.
Vector spaces and bases
Definition: Vector space
Let $\mathbb{R}$ be the set of real numbers. Elements of $\mathbb{R}$ will be called scalars, to distinguish them from vectors. We define as a vector space $V$ on the field $\mathbb{R}$ the set of vectors such that:
the sum of two vectors is still a vector. For any pair of vectors $\mathbf{u},\mathbf{v}\in V$ their sum
$\mathbf{w} = \mathbf{u}+\mathbf{v} = \mathbf{v} + \mathbf{u}$ is still an element of $V$.
the product of a vector and a scalar is still a vector. For any $\mathbf{v}\in V$ and for any $\alpha \in \mathbb{R}$
the product $\mathbf{w} = \alpha\mathbf{v}$ is still an element of $V$.
We will write vectors using a boldface font, while we will use normal font for scalars.
Real numbers already are an example of a vector space: the sum of two
real numbers is still a real number, as is their product.
Vectors in the plane are yet another example (when their starting point is the origin of axes):
a sum of two vectors, given by the parallelogram rule, is still a vector in the plane; the product of a vector by a scalar is a vector with direction equal to that of the original vector and magnitude given by the product of the scalar and the magnitude
of the original vector.
How big is a vector space? It is possible to get an intuitive idea of dimension for ``geometric'' vector spaces:
the plane has dimension 2, the space has dimension 3, while the set of real numbers has dimension 1.
Some more definitions will help in getting a more precise idea of what the dimension of a vector space is.
Definition: Linear combination
A linear combination of vectors $\mathbf{v}1, \mathbf{v}_2, \ldots, \mathbf{v}_n$ is the vector obtained by summing
these vectors, possibly multiplied by a scalar:
\begin{equation}
\mathbf{w} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \ldots + c_n\mathbf{v}_n = \sum{i = 1}^n c_i\mathbf{v}_i
\end{equation}
Definition: Linear dependence
Two non-zero vectors are said to be linearly dependent if and only if (iff) there exist two non-zero scalars $c_1, c_2$
such that:
\begin{equation}
c_1\mathbf{v}1 + c_2\mathbf{v}_2 = \mathbf{0}
\end{equation}
where $\mathbf{0}$ is the null vector. Equivalently, two vectors are said to be linearly dependent if one can be written as
the other multiplied by a scalar:
\begin{equation}
\mathbf{v}_2 = -\frac{c_1}{c_2} \mathbf{v}_1
\end{equation}
Conversely, two vectors are said to be _linearly independent when the following is true:
\begin{equation}
c_1\mathbf{v}_1 + c_2\mathbf{v}_2 = \mathbf{0} \quad\Leftrightarrow\quad c_1=c_2 = 0
\end{equation}
Consider now two linearly dependent vectors in the plane. What does this imply? As one can be expressed as a scalar multiplied by the other,
they have the same direction and magnitude differing by a factor.
The notions above are easily generalized to more than a pair of vectors: vectors $\mathbf{v}1, \mathbf{v}_2, \ldots\mathbf{v}_n$
are linearly independent iff the only linear combination that gives the null vector is the one where all the coefficients are zero:
\begin{equation}
\sum{i=1}^n c_n\mathbf{v}_n = \mathbf{0} \quad\Leftrightarrow\quad c_i = 0 \forall i
\end{equation}
How many linearly independent vectors are there in the plane? As our intuition told us before: only 2! Let us think about the unit vectors
$\mathbf{i}$ and $\mathbf{j}$, i.e. the orthogonal vectors of unit magnitude we can draw on the Cartesian $x$ and $y$ axes. All the vectors in
the plane can be written as a linear combination of those two:
\begin{equation}
\mathbf{r} = x\mathbf{i} + y\mathbf{j}
\end{equation}
We are already used to specify a vector by means of its components $x$ and $y$. So, by taking linear combinations of the two unit vectors
we are able to generate all the vectors in the plane. Doesn't this look familiar? Think about the set of complex numbers $\mathbb{C}$
and the way we represent it.
It's now time for some remarks:
1. the orthogonal, unit vectors in the plane are linearly independent;
2. they are the minimal number of vectors needed to span, i.e. to generate, all the other vectors in the plane.
Definition: Basis
Given the vector space $V$ its basis is defined as the minimal set of linearly independent vectors
$\mathcal{B} = \lbrace \mathbf{e}1,\mathbf{e}_2, \ldots,\mathbf{e}_n \rbrace$ spanning all the vectors in the space.
This means that any suitable linear combination of vectors in $\mathcal{B}$ generates a vector $\mathbf{v}\in V$:
\begin{equation}
\mathbf{v} = c_1\mathbf{e}_1 + c_2\mathbf{e}_2 + \ldots + c_n\mathbf{e}_n = \sum{i=1}^n c_n \mathbf{e}n
\end{equation}
the coefficients in the linear combination are called _components or coordinates of the vector $\mathbf{v}$ in the given
basis $\mathcal{B}$.
Turning back to our vectors in the plane, the set $\lbrace \mathbf{i}, \mathbf{j} \rbrace$ is a basis for the plane and that
that $x$ and $y$ are the components of the vector $\mathbf{r}$.
We are now ready to answer the question: how big is a vector space?
Defintion: Dimension of a vector space
The dimension of a vector space is the number of vectors making up its basis.
Vectors using NumPy
We have already seen how to use NumPy to handle quantities that are vector-like. For example, when we needed to plot a function with matplotlib, we first generated a linearly spaced vector of $N$ points inside an interval and then evaluated the function in those points.
Vectors, matrices and objects with higher dimensions are all represented as arrays.
An array is a homogeneous grid of values that is indexed by a tuple of nonnegative integers.
There are two important terms related to NumPy arrays:
1. the rank of an array is the number of dimensions,
2. the shape is a tuple of nonnegative integers, giving the size of the array along each dimension.
Let's see what that means by starting with the simplest array: a one-dimensional array.
End of explanation
print(a[0], a[1], a[2]) # Prints "1 2 3"
a[0] = 5 # Change an element of the array
print(a) # Prints "[5, 2, 3]"
Explanation: As you can see, we can easily access all this information. One-dimensional arrays in NumPy can be used to represent vectors, while two-dimensional arrays can be used to represent matrices.
a = np.array([1, 2, 3]) is equivalent to the vector:
\begin{equation}
\mathbf{a} = \begin{pmatrix}
1 \
2 \
3
\end{pmatrix}
\end{equation}
The components of the vector can be accessed using so-called subscript notation, as in the following code snippet:
End of explanation
a = np.zeros(10) # Create an array of all zeros
print(a)
b = np.ones(10) # Create an array of all ones
print(b)                   # Prints an array of ten 1.'s
c = np.full(10, 7)         # Create a constant array
print(c)                   # Prints an array of ten 7's
e = np.random.random(10) # Create an array filled with random values
print(e)
Explanation: index counting in Python starts from 0 not from 1! As you can see, subscript notation can also be used to modify the value at a given index.
There are a number of predefined functions to create arrays with certain properties.
We have already met linspace and here is a list of some additional functions:
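For completeness, here is a quick reminder of two of those constructors (both standard NumPy):
x = np.linspace(0.0, 1.0, 5)   # 5 equally spaced points between 0 and 1
y = np.arange(0, 10, 2)        # like range(), but returns an array
print(x, y)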
End of explanation
a = np.random.random(15) # Create a rank 1 array with 15 random elements
print(a.shape)
print(a)
# Select elements 10, 11 and 15 of the array
print(a[9], a[10], a[14])
# Now take a slice: all elements, but the first and second
print('Taking a slice!')
b = a[2:]
print(b)
# And notice how the rank changed!
print('We have removed two elements!')
print(b.shape)
print('Take another slice: all elements in between the third and tenth')
c = a[2:11]
print(c)
Explanation: NumPy supports indexing arrays via slicing like Matlab does. While indexing via a single integer number will return the element at that position of the array, i.e. a scalar, slicing will return another, generally smaller, array containing only the selected elements. For example:
End of explanation
a = np.full(10, 7.)
b = np.full(10, 4.)
print(a)
print(b)
print('Sum of a and b')
print(a + b)
print('Difference of a and b')
print(a - b)
print('Elementwise product of a and b')
print(a * b)
print('Elementwise division of a and b')
print(a / b)
print('Elementwise square root of b')
print(np.sqrt(b))
Explanation: As you can see, we can quite easily select just parts of an array and either create new, smaller arrays or just obtain the value at a certain position.
Algebraic operations and functions are also defined on arrays. The definitions for addition and subtraction are obvious. Those for multiplication, division and other mathematical functions are less obvious: in NumPy they are always performed element-wise.
The following example will clarify.
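For instance, the linear combinations discussed earlier translate directly into NumPy operations; a small sketch using the unit vectors of the plane:
i_hat = np.array([1.0, 0.0])
j_hat = np.array([0.0, 1.0])
r = 3.0 * i_hat + 2.0 * j_hat   # the vector with components (3, 2) in this basis
print(r)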
End of explanation |
15,618 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploratory Data Analysis with Pandas
Exploratory data analysis (EDA) is an approach to analyzing data sets to summarize their main characteristics, often with visual methods.
We will go through this notebook to
Step1: Here you can find an explanation of each variable
Step2: Convert alt to m
Step3: Check if we have nans.
Step4: Let's check errors.
* Latitudes range from -90 to 90.
* Longitudes range from -180 to 180.
Step5: We can chech outliers in the altitude
Step6: let's explore 5 and 95 percentiles
Step7: Additionaly to what we have seen, we have extra functions to see how shaped and what values our data has.
* sample data
Step8: We can create new variables
Step9: We can place hemisfere
Step10: We can calculate percentages.
Step11: Let's transformate alt into qualitative
Step12: Let's group data
Step13: The groups attribute is a dict whose keys are the computed unique groups and corresponding values being the axis labels belonging to each group. In the above example we have
Step14: Once the GroupBy object has been created, several methods are available to perform a computation on the grouped data.
Step15: Pandas has a handy .unstack() method—use it to convert the results into a more readable format and store that as a new variable
Step16: Remember that we also saw how to pivot table
Step17: Visualizing data
One of the most useful tools for exploring data anf presenting results is through visual representations.
Step18: We can plot with different plot types
Step19: Multiple Bars
Step20: Histogram
Step21: Box Plots
Step22: Area Plots
Step23: Scatter Plot
Step24: Hex Bins
Step25: Density Plot | Python Code:
import urllib3
import pandas as pd
url = "https://raw.githubusercontent.com/jpatokal/openflights/master/data/airports.dat"
#load the csv
airports = pd.read_csv(url,header=None)
print("Check DataFrame types")
display(airports.dtypes)
Explanation: Exploratory Data Analysis with Pandas
Exploratory data analysis (EDA) is an approach to analyzing data sets to summarize their main characteristics, often with visual methods.
We will go through this notebook to:
1. Learn to reshape our data and see some features by performing operations over it
2. Discover visualization methods
Transforming and summarizing data
End of explanation
import numpy as np
print("-> Original DF")
display(airports.head())
#we can add a name to each variable
h = ["airport_id","name","city","country","IATA","ICAO","lat","lon","alt","tz","DST","tz_db"]
airports = airports.iloc[:,:12]
airports.columns = h
print("-> Original DF with proper names")
display(airports.head())
print("-> With the proper names it is easier to check correctness")
display(airports.dtypes)
Explanation: Here you can find an explanation of each variable:
Airport ID Unique OpenFlights identifier for this airport.
Name Name of airport. May or may not contain the City name.
City Main city served by airport. May be spelled differently from Name.
Country Country or territory where airport is located.
IATA/FAA 3-letter FAA code, for airports located in Country "United States of America". 3-letter IATA code, for all other airports. Blank if not assigned.
ICAO 4-letter ICAO code. Blank if not assigned.
Latitude Decimal degrees, usually to six significant digits. Negative is South, positive is North.
Longitude Decimal degrees, usually to six significant digits. Negative is West, positive is East.
Altitude In feet.
Timezone Hours offset from UTC. Fractional hours are expressed as decimals, eg. India is 5.5.
DST Daylight savings time. One of E (Europe), A (US/Canada), S (South America), O (Australia), Z (New Zealand), N (None) or U (Unknown). See also: Help: Time
Tz database time zone
End of explanation
airports.alt.describe()
airports.alt = airports.alt * 0.3048
airports.dtypes
Explanation: Convert alt to m
End of explanation
airports.isnull().sum(axis=0)
# we can create a new label whoch corresponds to not having data
airports.IATA.fillna("Blank", inplace=True)
airports.ICAO = airports.ICAO.fillna("Blank")
airports.isnull().sum(axis=0)
Explanation: Check if we have nans.
End of explanation
((airports.lat > 90) | (airports.lat < -90)).any()
((airports.lon > 180) | (airports.lon < -180)).any()
Explanation: Let's check errors.
* Latitudes range from -90 to 90.
* Longitudes range from -180 to 180.
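A direct way to count any violations (there should be none) is to sum the boolean masks:
print(((airports.lat > 90) | (airports.lat < -90)).sum())
print(((airports.lon > 180) | (airports.lon < -180)).sum())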
End of explanation
airports.alt.describe()
Explanation: We can check outliers in the altitude
End of explanation
qtls = airports.alt.quantile([.05,.5,.95],interpolation="higher")
qtls
# check how many of them are below the median
(airports.alt <= qtls[0.5]).sum()
#check how many of them are above of the median
(airports.alt >= qtls[0.5]).sum()
#check how many of them are below the .05 percentile
(airports.alt <= qtls[0.05]).sum()
#check how many of them are above the .95 percentile
(airports.alt >= qtls[0.95]).sum()
airports.shape[0]*.05
print("-> Check which airports are out of 5% range")
display(airports[(airports.alt < qtls[0.05])].head(10))
Explanation: Let's explore the 5th and 95th percentiles
End of explanation
print("-> Showing a sample of ten values")
airports.sample(n=10)
print("-> Showing the airports in higher positions")
airports.sort_values(by="alt",ascending=False)[:10]
Explanation: In addition to what we have seen, there are extra functions to inspect the shape and the values of our data.
* sample data: we can take a random sample of the observations to avoid ordering bias (if we only head the data and it happens to be sorted, the first 100 rows may look fine while the rest contain errors)
* sort data: to get the observations with the highest or lowest values
End of explanation
airports.tz_db
airports["continent"] = airports.tz_db.str.split("/").str[0]
airports.continent.unique()
airports.continent.value_counts()
(airports.continent.value_counts()/airports.continent.value_counts().sum())*100
airports[airports.continent == "\\N"].shape
airports.continent = airports.continent.replace('\\N',"unknown")
airports.tz_db = airports.tz_db.replace('\\N',"unknown")
airports.continent.unique()
airports[airports.continent == "unknown"].head()
Explanation: We can create new variables
End of explanation
hem_select = lambda x: "South" if x < 0 else "North"
airports["hemisphere"] = airports.lat.apply(hem_select)
Explanation: We can assign a hemisphere based on the latitude
End of explanation
(airports.hemisphere.value_counts() / airports.shape[0]) * 100
(airports.continent.value_counts() / airports.shape[0]) * 100
((airports.country.value_counts() / airports.shape[0]) * 100).sample(10)
((airports.country.value_counts() / airports.shape[0]) * 100).head(10)
type(airports.country.value_counts())
Explanation: We can calculate percentages.
End of explanation
airports["alt_type"] = pd.cut(airports.alt,bins=3,labels=["low","med","high"])
airports.head()
Explanation: Let's transform alt into a qualitative (categorical) variable
End of explanation
airp_group = airports.groupby(["continent","alt_type"])
Explanation: Let's group data:
End of explanation
airp_group.groups.keys()
Explanation: The groups attribute is a dict whose keys are the computed unique groups and corresponding values being the axis labels belonging to each group. In the above example we have:
End of explanation
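Once the group keys are known, a single group can also be pulled out with get_group. This is an editorial example; the key ("Europe", "low") is an assumed combination of continent and alt_type values.
airp_group.get_group(("Europe", "low")).head()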
airp_group.size()
airp_group["alt"].agg({"max":np.max,"min":np.min,"mean":np.mean}).head()
airports.alt.hist(bins=100)
Explanation: Once the GroupBy object has been created, several methods are available to perform a computation on the grouped data.
End of explanation
airp_group["alt"].sum().unstack()
Explanation: Pandas has a handy .unstack() method—use it to convert the results into a more readable format and store that as a new variable
End of explanation
airports.pivot_table(index="hemisphere",values="alt",aggfunc=np.mean)
airports.groupby("hemisphere").alt.mean()
Explanation: Remember that we also saw how to pivot table
End of explanation
my_df = pd.DataFrame(np.ones(100),columns=["y"])
my_df.head(10)
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib
matplotlib.style.use('ggplot')
plt.rcParams['figure.figsize'] = [10, 8]
my_df.plot()
my_df["z"] = my_df.y.cumsum()
my_df.plot()
my_df.y = my_df.z ** 2
my_df.plot()
my_df.z = np.log(my_df.y)
my_df.z.plot()
Explanation: Visualizing data
One of the most useful tools for exploring data and presenting results is visual representation.
End of explanation
airports.groupby("continent").size().plot.bar()
Explanation: We can plot with different plot types:
* ‘bar’ or ‘barh’ for bar plots
* ‘hist’ for histogram
* ‘box’ for boxplot
* ‘kde’ or 'density' for density plots
* ‘area’ for area plots
* ‘scatter’ for scatter plots
* ‘hexbin’ for hexagonal bin plots
* ‘pie’ for pie plots
Bar
End of explanation
airports.groupby("continent").alt.agg({"max":np.max,"min":np.min,"mean":np.mean}).plot(kind="bar")
airports.groupby("continent").alt.agg({"max":np.max,"min":np.min,"mean":np.mean}).plot(kind="bar",stacked=True)
airports.groupby("continent").alt.agg({"max":np.max,"min":np.min,"mean":np.mean}).plot(kind="barh",stacked=True)
Explanation: Multiple Bars
End of explanation
airports.alt.plot(kind="hist",bins=100)
airports.loc[:,["alt"]].plot(kind="hist")
airports.loc[:,["lat"]].plot(kind="hist",bins=100)
airports.loc[:,["lon"]].plot(kind="hist",bins=100)
Explanation: Histogram
End of explanation
airports.plot.box()
airports.alt.plot.box()
airports.pivot(columns="continent").alt.plot.box()
Explanation: Box Plots
End of explanation
sp_airp = airports[airports.country=="Spain"]
spain_alt = sp_airp.sort_values(by="alt").alt
spain_alt.index = range(spain_alt.size)
spain_alt.plot.area()
Explanation: Area Plots
End of explanation
airports.plot.scatter(y="lat",x="lon")
airports.plot.scatter(y="lat",x="lon",c="alt")
airports.plot.scatter(y="lat",x="lon",s=airports["alt"]/20)
Explanation: Scatter Plot
End of explanation
airports.plot.hexbin(x="lon",y="lat",C="alt",gridsize=20)
Explanation: Hex Bins
End of explanation
airports.alt.plot.kde()
airports.lat.plot.kde()
airports.lon.plot.kde()
Explanation: Density Plot
End of explanation |
15,619 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Last updated
Step1: 1. Loading data
More details, see http
Step2: From a local text file
Let's first load some temperature data which covers all lattitudes. Since read_table is supposed to do the job for a text file, let's just try it
Step3: There is only 1 column! Let's try again stating that values are separated by any number of spaces
Step4: There are columns but the column names are 1880 and -0.1591!
Step5: Since we only have 2 columns, one of which would be nicer to access the data (the year of the record), let's try using the index_col option
Step6: Last step
Step7: From a chunked file
Since every dataset can contain mistakes, let's load a different file with temperature data. NASA's GISS dataset is written in chunks
Step8: QUIZ
Step9: From a remote text file
So far, we have only loaded temperature datasets. Climate change also affects the sea levels on the globe. Let's load some datasets with the sea levels. The university of colorado posts updated timeseries for mean sea level globably, per
hemisphere, or even per ocean, sea, ... Let's download the global one, and the ones for the northern and southern hemisphere.
That will also illustrate that to load text files that are online, there is no more work than replacing the filepath by a URL n read_table
Step10: There are clearly lots of cleanup to be done on these datasets. See below...
From a local or remote HTML file
To be able to grab more local data about mean sea levels, we can download and extract data about mean sea level stations around the world from the PSMSL (http
Step11: That table can be used to search for a station in a region of the world we choose, extract an ID for it and download the corresponding time series with the URL http
Step12: Descriptors for the vertical axis (axis=0)
Step13: Descriptors for the horizontal axis (axis=1)
Step14: A lot of information at once including memory usage
Step15: Series, the pandas 1D structure
A series can be constructed with the pd.Series constructor (passing a list or array of values) or from a DataFrame, by extracting one of its columns.
Step16: Core attributes/information
Step17: Probably the most important attribute of a Series or DataFrame is its index since we will use that to, well, index into the structures to access te information
Step18: NumPy arrays as backend of Pandas
It is always possible to fall back to a good old NumPy array to pass on to scientific libraries that need them
Step19: Creating new DataFrames manually
DataFrames can also be created manually, by grouping several Series together. Let's make a new frame from the 3 sea level datasets we downloaded above. They will be displayed along the same index. Wait, does that makes sense to do that?
Step20: So the northern hemisphere and southern hemisphere datasets are aligned. What about the global one?
Step21: For now, let's just build a DataFrame with the 2 hemisphere datasets then. We will come back to add the global one later...
Step22: Note
Step23: Now the fact that it is failing show that Pandas does auto-alignment of values
Step24: 3. Cleaning and formatting data
The datasets that we obtain straight from the reading functions are pretty raw. A lot of pre-processing can be done during data read but we haven't used all the power of the reading functions. Let's learn to do a lot of cleaning and formatting of the data.
The GISS temperature dataset has a lot of issues too
Step25: We can also rename an index by setting its name. For example, the index of the mean_sea_level dataFrame could be called date since it contains more than just the year
Step26: Setting missing values
In the full globe dataset, -999.00 was used to indicate that there was no value for that year. Let's search for all these values and replace them with the missing value that Pandas understand
Step27: Choosing what is the index
Step28: Dropping rows and columns
Step29: Let's also set **** to a real missing value (np.nan). We can often do it using a boolean mask, but that may trigger pandas warning. Another way to assign based on a boolean condition is to use the where method
Step30: Adding columns
While building the mean_sea_level dataFrame earlier, we didn't include the values from global_sea_level since the years were not aligned. Adding a column to a dataframe is as easy as adding an entry to a dictionary. So let's try
Step31: The column is full of NaNs again because the auto-alignment feature of Pandas is searching for the index values like 1992.9323 in the index of global_sea_level["msl_ib_ns(mm)"] series and not finding them. Let's set its index to these years so that that auto-alignment can work for us and figure out which values we have and not
Step32: EXERCISE
Step33: Changing dtype of series
Now that the sea levels are looking pretty good, let's got back to the GISS temperature dataset. Because of the labels (strings) found in the middle of the timeseries, every column only assumed to contain strings (didn't convert them to floating point values)
Step34: That can be changed after the fact (and after the cleanup) with the astype method of a Series
Step35: An index has a dtype just like any Series and that can be changed after the fact too.
Step36: For now, let's change it to an integer so that values can at least be compared properly. We will learn below to change it to a datetime object.
Step37: Removing missing values
Removing missing values - once they have been converted to np.nan - is very easy. Entries that contain missing values can be removed (dropped), or filled with many strategies.
Step38: Let's also mention the .interpolate method on a Series
Step39: For now, we will leave the missing values in all our datasets, because it wouldn't be meaningful to fill them.
EXERCISE
Step40: Showing distributions information
Step41: QUIZ
Step42: Correlations
There are more plot options inside pandas.tools.plotting
Step43: We will confirm the correlations we think we see further down...
5. Storing our work
For each read_** function to load data, there is a to_** method attached to Series and DataFrames.
EXERCISE
Step44: Another, more powerful file format to store binary data, which allows us to store both Series and DataFrames without having to cast anybody is HDF5. | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
pd.set_option("display.max_rows", 16)
LARGE_FIGSIZE = (12, 8)
# Change this cell to the demo location on YOUR machine
%cd ~/Projects/pandas_tutorial/climate_timeseries/
%ls
Explanation: Last updated: June 29th 2016
Climate data exploration: a journey through Pandas
Welcome to a demo of Python's data analysis package called Pandas. Our goal is to learn about Data Analysis and transformation using Pandas while exploring datasets used to analyze climate change.
The story
The global goal of this demo is to provide the tools to be able to try and reproduce some of the analysis done in the IPCC global climate reports published in the last decade (see for example https://www.ipcc.ch/pdf/assessment-report/ar5/syr/SYR_AR5_FINAL_full.pdf).
We are first going to load a few public datasets containing information about global temperature, global and local sea level infomation, and global concentration of greenhouse gases like CO2, to see if there are correlations and how the trends are to evolve, assuming no fundamental change in the system. For all these datasets, we will download them, visualize them, clean them, search through them, merge them, resample them, transform them and summarize them.
In the process, we will learn about:
Part 1:
1. Loading data
2. Pandas datastructures
3. Cleaning and formatting data
4. Basic visualization
Part 2:
5. Accessing data
6. Working with dates and times
7. Transforming datasets
8. Statistical analysis
9. Data agregation and summarization
10. Correlations and regressions
11. Predictions from auto regression models
Some initial setup
End of explanation
#pd.read_<TAB>
pd.read_table?
Explanation: 1. Loading data
More details, see http://pandas.pydata.org/pandas-docs/stable/io.html
To find all reading functions in pandas, ask ipython's tab completion:
End of explanation
filename = "data/temperatures/annual.land_ocean.90S.90N.df_1901-2000mean.dat"
full_globe_temp = pd.read_table(filename)
full_globe_temp
Explanation: From a local text file
Let's first load some temperature data which covers all latitudes. Since read_table is supposed to do the job for a text file, let's just try it:
End of explanation
full_globe_temp = pd.read_table(filename, sep="\s+")
full_globe_temp
Explanation: There is only 1 column! Let's try again stating that values are separated by any number of spaces:
End of explanation
full_globe_temp = pd.read_table(filename, sep="\s+", names=["year", "mean temp"])
full_globe_temp
Explanation: There are columns but the column names are 1880 and -0.1591!
End of explanation
full_globe_temp = pd.read_table(filename, sep="\s+", names=["year", "mean temp"],
index_col=0)
full_globe_temp
Explanation: Since we only have 2 columns, one of which would be nicer to access the data (the year of the record), let's try using the index_col option:
End of explanation
full_globe_temp = pd.read_table(filename, sep="\s+", names=["year", "mean temp"],
index_col=0, parse_dates=True)
full_globe_temp
Explanation: Last step: the index is made of dates. Let's make that explicit:
End of explanation
giss_temp = pd.read_table("data/temperatures/GLB.Ts+dSST.txt", sep="\s+", skiprows=7,
skip_footer=11, engine="python")
giss_temp
Explanation: From a chunked file
Since every dataset can contain mistakes, let's load a different file with temperature data. NASA's GISS dataset is written in chunks: look at it in data/temperatures/GLB.Ts+dSST.txt
End of explanation
# Your code here
Explanation: QUIZ: What happens if you remove the skiprows? skipfooter? engine?
EXERCISE: Load some readings of CO2 concentrations in the atmosphere from the data/greenhouse_gaz/co2_mm_global.txt data file.
End of explanation
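One possible approach to the CO2 exercise above, as an editorial sketch rather than the official solution: the comment character and the column names below are assumptions based on similar NOAA files and may need adjusting after a first look at the file.
co2 = pd.read_table("data/greenhouse_gaz/co2_mm_global.txt", sep="\s+",
                    comment="#", header=None,
                    names=["year", "month", "decimal_date", "average", "trend"])
co2.head()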
# Local backup: data/sea_levels/sl_nh.txt
northern_sea_level = pd.read_table("http://sealevel.colorado.edu/files/current/sl_nh.txt",
sep="\s+")
northern_sea_level
# Local backup: data/sea_levels/sl_sh.txt
southern_sea_level = pd.read_table("http://sealevel.colorado.edu/files/current/sl_sh.txt",
sep="\s+")
southern_sea_level
# The 2015 version of the global dataset:
# Local backup: data/sea_levels/sl_ns_global.txt
url = "http://sealevel.colorado.edu/files/2015_rel2/sl_ns_global.txt"
global_sea_level = pd.read_table(url, sep="\s+")
global_sea_level
Explanation: From a remote text file
So far, we have only loaded temperature datasets. Climate change also affects the sea levels on the globe. Let's load some datasets with the sea levels. The University of Colorado posts updated timeseries for mean sea level globally, per
hemisphere, or even per ocean, sea, ... Let's download the global one, and the ones for the northern and southern hemisphere.
That will also illustrate that to load text files that are online, there is no more work than replacing the filepath by a URL in read_table:
End of explanation
# Needs `lxml`, `beautifulSoup4` and `html5lib` python packages
# Local backup in data/sea_levels/Obtaining Tide Gauge Data.html
table_list = pd.read_html("http://www.psmsl.org/data/obtaining/")
# there is 1 table on that page which contains metadata about the stations where
# sea levels are recorded
local_sea_level_stations = table_list[0]
local_sea_level_stations
Explanation: There are clearly lots of cleanup to be done on these datasets. See below...
From a local or remote HTML file
To be able to grab more local data about mean sea levels, we can download and extract data about mean sea level stations around the world from the PSMSL (http://www.psmsl.org/). Again to download and parse all tables in a webpage, just give read_html the URL to parse:
End of explanation
# Type of the object?
type(giss_temp)
# Internal nature of the object
print(giss_temp.shape)
print(giss_temp.dtypes)
Explanation: That table can be used to search for a station in a region of the world we choose, extract an ID for it and download the corresponding time series with the URL http://www.psmsl.org/data/obtaining/met.monthly.data/< ID >.metdata
2. Pandas DataStructures
For more details, see http://pandas.pydata.org/pandas-docs/stable/dsintro.html
Now that we have used read_** functions to load datasets, we need to understand better what kind of objects we got from them to learn to work with them.
DataFrame, the pandas 2D structure
End of explanation
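As a sketch of how the PSMSL URL mentioned above could be used (editorial addition; the station id and the separator are assumptions to be checked against the real files):
station_id = 202  # hypothetical id looked up in local_sea_level_stations
url = "http://www.psmsl.org/data/obtaining/met.monthly.data/%d.metdata" % station_id
station_data = pd.read_table(url, sep=";", header=None)
station_data.head()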
giss_temp.index
Explanation: Descriptors for the vertical axis (axis=0)
End of explanation
giss_temp.columns
Explanation: Descriptors for the horizontal axis (axis=1)
End of explanation
giss_temp.info()
Explanation: A lot of information at once including memory usage:
End of explanation
# Do we already have a series for the full_globe_temp?
type(full_globe_temp)
# Since there is only one column of values, we can make this a Series without
# losing information:
full_globe_temp = full_globe_temp["mean temp"]
Explanation: Series, the pandas 1D structure
A series can be constructed with the pd.Series constructor (passing a list or array of values) or from a DataFrame, by extracting one of its columns.
End of explanation
print(type(full_globe_temp))
print(full_globe_temp.dtype)
print(full_globe_temp.shape)
print(full_globe_temp.nbytes)
Explanation: Core attributes/information:
End of explanation
full_globe_temp.index
Explanation: Probably the most important attribute of a Series or DataFrame is its index since we will use that to, well, index into the structures to access the information:
End of explanation
full_globe_temp.values
type(full_globe_temp.values)
Explanation: NumPy arrays as backend of Pandas
It is always possible to fall back to a good old NumPy array to pass on to scientific libraries that need them: SciPy, scikit-learn, ...
End of explanation
# Are they aligned?
southern_sea_level.year == northern_sea_level.year
# So, are they aligned?
np.all(southern_sea_level.year == northern_sea_level.year)
Explanation: Creating new DataFrames manually
DataFrames can also be created manually, by grouping several Series together. Let's make a new frame from the 3 sea level datasets we downloaded above. They will be displayed along the same index. Wait, does it make sense to do that?
End of explanation
len(global_sea_level.year) == len(northern_sea_level.year)
Explanation: So the northern hemisphere and southern hemisphere datasets are aligned. What about the global one?
End of explanation
mean_sea_level = pd.DataFrame({"northern_hem": northern_sea_level["msl_ib(mm)"],
"southern_hem": southern_sea_level["msl_ib(mm)"],
"date": northern_sea_level.year})
mean_sea_level
Explanation: For now, let's just build a DataFrame with the 2 hemisphere datasets then. We will come back to add the global one later...
End of explanation
mean_sea_level = pd.DataFrame({"northern_hem": northern_sea_level["msl_ib(mm)"],
"southern_hem": southern_sea_level["msl_ib(mm)"]},
index = northern_sea_level.year)
mean_sea_level
Explanation: Note: there are other ways to create DataFrames manually, for example from a 2D numpy array.
There is still the date in a regular column and a numerical index that is not that meaningful. We can specify the index of a DataFrame at creation. Let's try:
End of explanation
mean_sea_level = pd.DataFrame({"northern_hem": northern_sea_level["msl_ib(mm)"].values,
"southern_hem": southern_sea_level["msl_ib(mm)"].values},
index = northern_sea_level.year)
mean_sea_level
Explanation: Now the fact that it is failing shows that Pandas does auto-alignment of values: for each value of the index, it searches for a value in each Series that maps to the same value. Since these series have a dumb numerical index, no values are found.
Since we know that the order of the values matches the index we chose, we can replace the Series by their values only at creation of the DataFrame:
End of explanation
# The columns of the local_sea_level_stations aren't clean: they contain spaces and dots.
local_sea_level_stations.columns
# Let's clean them up a bit:
local_sea_level_stations.columns = [name.strip().replace(".", "")
for name in local_sea_level_stations.columns]
local_sea_level_stations.columns
Explanation: 3. Cleaning and formatting data
The datasets that we obtain straight from the reading functions are pretty raw. A lot of pre-processing can be done during data read but we haven't used all the power of the reading functions. Let's learn to do a lot of cleaning and formatting of the data.
The GISS temperature dataset has a lot of issues too: useless numerical index, redundant columns, useless rows, placeholder (****) for missing values, and wrong type for the columns. Let's fix all this:
Renaming columns
End of explanation
mean_sea_level.index.name = "date"
mean_sea_level
Explanation: We can also rename an index by setting its name. For example, the index of the mean_sea_level dataFrame could be called date since it contains more than just the year:
End of explanation
full_globe_temp == -999.000
full_globe_temp[full_globe_temp == -999.000] = np.nan
full_globe_temp.tail()
Explanation: Setting missing values
In the full globe dataset, -999.00 was used to indicate that there was no value for that year. Let's search for all these values and replace them with the missing value that Pandas understands: np.nan
End of explanation
# We didn't set a column number of the index of giss_temp, we can do that afterwards:
giss_temp = giss_temp.set_index("Year")
giss_temp.head()
Explanation: Choosing what is the index
End of explanation
# 1 column is redundant with the index:
giss_temp.columns
# Let's drop it:
giss_temp = giss_temp.drop("Year.1", axis=1)
giss_temp
# We can also just select the columns we want to keep:
giss_temp = giss_temp[[u'Jan', u'Feb', u'Mar', u'Apr', u'May', u'Jun', u'Jul',
u'Aug', u'Sep', u'Oct', u'Nov', u'Dec']]
giss_temp
# Let's remove all these extra column names (Year Jan ...). They all correspond to the index "Year"
giss_temp = giss_temp.drop("Year")
giss_temp
Explanation: Dropping rows and columns
End of explanation
#giss_temp[giss_temp == "****"] = np.nan
giss_temp = giss_temp.where(giss_temp != "****", np.nan)
giss_temp.tail()
Explanation: Let's also set **** to a real missing value (np.nan). We can often do it using a boolean mask, but that may trigger pandas warning. Another way to assign based on a boolean condition is to use the where method:
End of explanation
mean_sea_level["mean_global"] = global_sea_level["msl_ib_ns(mm)"]
mean_sea_level
Explanation: Adding columns
While building the mean_sea_level dataFrame earlier, we didn't include the values from global_sea_level since the years were not aligned. Adding a column to a dataframe is as easy as adding an entry to a dictionary. So let's try:
End of explanation
global_sea_level = global_sea_level.set_index("year")
global_sea_level["msl_ib_ns(mm)"]
mean_sea_level["mean_global"] = global_sea_level["msl_ib_ns(mm)"]
mean_sea_level
Explanation: The column is full of NaNs again because the auto-alignment feature of Pandas is searching for index values like 1992.9323 in the index of the global_sea_level["msl_ib_ns(mm)"] series and not finding them. Let's set its index to these years so that auto-alignment can work for us and figure out which values we have and which we don't:
End of explanation
# Your code here
Explanation: EXERCISE: Create a new series containing the average of the 2 hemispheres minus the global value to see if that is close to 0. Work inside the mean_sea_level dataframe first. Then try with the original Series to see what happens with data alignment while doing computations.
End of explanation
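A minimal sketch of the first part of the exercise above (editorial; it reuses the columns of the mean_sea_level frame built earlier):
mean_sea_level["hemispheric_mean"] = (mean_sea_level["northern_hem"] + mean_sea_level["southern_hem"]) / 2.0
(mean_sea_level["hemispheric_mean"] - mean_sea_level["mean_global"]).describe()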
giss_temp.dtypes
Explanation: Changing dtype of series
Now that the sea levels are looking pretty good, let's go back to the GISS temperature dataset. Because of the labels (strings) found in the middle of the timeseries, every column was assumed to contain strings (they weren't converted to floating point values):
End of explanation
giss_temp["Jan"].astype("float32")
for col in giss_temp.columns:
giss_temp.loc[:, col] = giss_temp[col].astype(np.float32)
Explanation: That can be changed after the fact (and after the cleanup) with the astype method of a Series:
End of explanation
giss_temp.index.dtype
Explanation: An index has a dtype just like any Series and that can be changed after the fact too.
End of explanation
giss_temp.index = giss_temp.index.astype(np.int32)
Explanation: For now, let's change it to an integer so that values can at least be compared properly. We will learn below to change it to a datetime object.
End of explanation
full_globe_temp
full_globe_temp.dropna()
# This will remove any year that has a missing value. Use how='all' to keep partial years
giss_temp.dropna(how="any").tail()
giss_temp.fillna(value=0).tail()
# This fills them with the previous year. See also temp3.interpolate
giss_temp.fillna(method="ffill").tail()
Explanation: Removing missing values
Removing missing values - once they have been converted to np.nan - is very easy. Entries that contain missing values can be removed (dropped), or filled with many strategies.
End of explanation
giss_temp.Aug.interpolate().tail()
Explanation: Let's also mention the .interpolate method on a Series:
End of explanation
full_globe_temp.plot()
giss_temp.plot(figsize=LARGE_FIGSIZE)
mean_sea_level.plot(subplots=True, figsize=(16, 12));
Explanation: For now, we will leave the missing values in all our datasets, because it wouldn't be meaningful to fill them.
EXERCISE: Go back to the reading functions, and learn more about other options that could have allowed us to fold some of these pre-processing steps into the data loading.
4. Basic visualization
Now that they have been formatted, visualizing your datasets is the next logical step, and it is trivial with Pandas. The first thing to try is to invoke the .plot method to generate a basic visualization (it uses matplotlib under the covers).
Line plots
End of explanation
# Distributions of mean sea level globally and per hemisphere?
mean_sea_level.plot(kind="kde", figsize=(12, 8))
Explanation: Showing distributions information
End of explanation
# Distributions of temperature in each month since 1880
giss_temp.boxplot();
Explanation: QUIZ: How to list the possible kinds of plots that the plot method can allow?
End of explanation
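One way to answer the quiz above (editorial note): the accepted kind values are listed in the docstring of the plot method.
help(mean_sea_level.plot)  # or mean_sea_level.plot? in IPython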
# Are there correlations between the northern and southern sea level timeseries we loaded?
from pandas.tools.plotting import scatter_matrix
scatter_matrix(mean_sea_level, figsize=LARGE_FIGSIZE);
Explanation: Correlations
There are more plot options inside pandas.tools.plotting:
End of explanation
writer = pd.ExcelWriter("test.xls")
giss_temp.to_excel(writer, sheet_name="GISS temp data")
full_globe_temp.to_excel(writer, sheet_name="NASA temp data")
writer.close()
with pd.ExcelWriter("test.xls") as writer:
giss_temp.to_excel(writer, sheet_name="GISS temp data")
pd.DataFrame({"Full Globe Temp": full_globe_temp}).to_excel(writer, sheet_name="FullGlobe temp data")
%ls
Explanation: We will confirm the correlations we think we see further down...
5. Storing our work
For each read_** function to load data, there is a to_** method attached to Series and DataFrames.
EXERCISE: explore how the to_csv method works using ipython's ? and store the giss_temp dataframe. Do the same to store the full_globe_temp series to another file.
Another file format that is commonly used is Excel, and there multiple datasets can be stored in 1 file.
End of explanation
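A possible sketch for the CSV part of the exercise above (editorial; the file names are arbitrary):
giss_temp.to_csv("giss_temp.csv")
full_globe_temp.to_csv("full_globe_temp.csv")
pd.read_csv("giss_temp.csv", index_col=0).head()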
with pd.HDFStore("all_data.h5") as writer:
giss_temp.to_hdf(writer, "/temperatures/giss")
full_globe_temp.to_hdf(writer, "/temperatures/full_globe")
mean_sea_level.to_hdf(writer, "/sea_level/mean_sea_level")
local_sea_level_stations.to_hdf(writer, "/sea_level/stations")
Explanation: Another, more powerful file format to store binary data, which allows us to store both Series and DataFrames without having to cast anything, is HDF5.
End of explanation |
15,620 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gradient boosting by hand
Attention
Step1: Task 1
As you already know from the lectures, boosting is a method of building compositions of base algorithms by sequentially adding a new algorithm, taken with some coefficient, to the current composition.
Gradient boosting trains each new algorithm so that it approximates the anti-gradient of the error with respect to the answers of the composition on the training set. Similarly to minimizing functions with gradient descent, in gradient boosting we adjust the composition by changing the algorithm in the direction of the anti-gradient of the error.
Use the formula from the lectures that defines the answers on the training set on which the new algorithm should be trained (in fact, it is just the gradient of the error written out in a bit more detail), and derive its particular case when the loss function L is the squared deviation of the composition's answer a(x) from the correct answer y at a given x.
If you have not computed a derivative by hand for a while, a table of derivatives of elementary functions (easy to find online) and the chain rule will help. After differentiating the square you will get a factor of 2; since we will choose the coefficient with which the new base algorithm is added anyway, ignore this factor when building the algorithm.
Step2: Task 2
Create an array for DecisionTreeRegressor objects (we will use them as base algorithms) and one for real numbers (these will be the coefficients in front of the base algorithms).
In a loop, train 50 decision trees one after another with the parameters max_depth=5 and random_state=42 (the remaining parameters at their defaults). Boosting often uses hundreds or thousands of trees, but we limit ourselves to 50 so that the algorithm runs faster and is easier to debug (the goal of the task is to understand how the method works). Each tree must be trained on the same set of objects, but the answers the tree learns to predict will change according to the rule obtained in task 1.
To start with, try always taking the coefficient equal to 0.9. It is usually justified to choose a much smaller coefficient, around 0.05 or 0.1, but since our toy example on a standard dataset has only 50 trees, we take a larger step for now.
While implementing the training you will need a function that computes the prediction of the composition of trees built so far on a sample X
Step3: Task 3
You may also worry that, moving with a constant step, the answers on the training set change too abruptly near the error minimum, jumping over the minimum.
Try decreasing the weight in front of each algorithm at every subsequent iteration using the formula 0.9 / (1.0 + i), where i is the iteration number (from 0 to 49). Use the resulting quality of the algorithm as the answer for item 3.
In practice, the following step-selection strategy is often used
Step4: Task 4
The method you have implemented, gradient boosting over trees, is very popular in machine learning. It is available both in the sklearn library itself and in the third-party XGBoost library, which has its own Python interface. In practice XGBoost works noticeably better than GradientBoostingRegressor from sklearn, but for this task you may use either implementation.
Investigate whether gradient boosting overfits as the number of iterations grows (and think about why), and also as the depth of the trees grows. Based on your observations, write down, separated by spaces and in increasing order, the numbers of the correct statements among those listed below (this will be the answer for item 4)
Step5: Task 5
Compare the quality obtained with gradient boosting to the quality of linear regression.
To do this, train LinearRegression from sklearn.linear_model (with default parameters) on the training set and evaluate the RMSE of the resulting algorithm's predictions on the test set. The resulting quality is the answer for item 5.
In this example the simple model should turn out to be worse, but keep in mind that this is not always the case. In the assignments for this course you will still encounter an example of the opposite situation. | Python Code:
from sklearn import cross_validation, datasets, metrics, tree, ensemble, learning_curve
import numpy as np
import pandas as pd
%pylab inline
boston = datasets.load_boston()
X_train = boston.data[:379]
X_test = boston.data[379:]
y_train = boston.target[:379]
y_test = boston.target[379:]
Explanation: Gradient boosting by hand
Attention: the assignment text has changed - the number of trees is now 50, the step-size update rule in task 3 has changed, and the random_state parameter was added to the decision tree. The correct answers have not changed, but they are now easier to obtain. A typo in the gbm_predict function has also been fixed.
This assignment uses the boston dataset from sklearn.datasets. Keep the last 25% of the objects for quality control by splitting X and y into X_train, y_train and X_test, y_test.
The goal of the assignment is to implement a simple version of gradient boosting over regression trees for the case of a quadratic loss function.
End of explanation
def grad(y,z):
return y - z
Explanation: Task 1
As you already know from the lectures, boosting is a method of building compositions of base algorithms by sequentially adding a new algorithm, taken with some coefficient, to the current composition.
Gradient boosting trains each new algorithm so that it approximates the anti-gradient of the error with respect to the answers of the composition on the training set. Similarly to minimizing functions with gradient descent, in gradient boosting we adjust the composition by changing the algorithm in the direction of the anti-gradient of the error.
Use the formula from the lectures that defines the answers on the training set on which the new algorithm should be trained (in fact, it is just the gradient of the error written out in a bit more detail), and derive its particular case when the loss function L is the squared deviation of the composition's answer a(x) from the correct answer y at a given x.
If you have not computed a derivative by hand for a while, a table of derivatives of elementary functions (easy to find online) and the chain rule will help. After differentiating the square you will get a factor of 2; since we will choose the coefficient with which the new base algorithm is added anyway, ignore this factor when building the algorithm.
End of explanation
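A short sketch of the derivation this task asks for (an editorial note added for clarity, using the task's own notation):
$$L(y, a(x)) = (a(x) - y)^2, \qquad \frac{\partial L}{\partial a(x)} = 2\,(a(x) - y), \qquad -\frac{\partial L}{\partial a(x)} \propto y - a(x)$$
Dropping the constant factor of 2 as instructed, each new tree is trained on the residuals y - a(x), which is exactly what the grad(y, z) = y - z helper above returns.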
def gbm_predict(X):
return [sum([coeff * algo.predict([x])[0] for algo, coeff in zip
(base_algorithms_list, coefficients_list)]) for x in X]
#(we assume that base_algorithms_list is the list of base algorithms,
#coefficients_list is the list of coefficients in front of the algorithms)
base_algorithms_list = []
coefficients_list = []
error_2 = []
estimator = tree.DecisionTreeRegressor(max_depth=5, random_state = 42)
estimator.fit(X_train, y_train)
base_algorithms_list.append(estimator)
coefficients_list.append(0.9)
err = np.sqrt(metrics.mean_squared_error(y_test, gbm_predict(X_test)))
error_2.append(err)
for i in range(1, 50):
estimator = tree.DecisionTreeRegressor(max_depth=5,random_state = 42)
y_pred = gbm_predict(X_train)
estimator.fit(X_train, grad(y_train, y_pred))
base_algorithms_list.append(estimator)
coefficients_list.append(0.9)
err = np.sqrt(metrics.mean_squared_error(y_test, gbm_predict(X_test)))
error_2.append(err)
print 'error № ',i,' = ', err,'\n'
ans1 = error_2[49]
print ans1
with open('grad_boost_ans1.txt', 'w') as file_out:
file_out.write(str(ans1))
Explanation: Task 2
Create an array for DecisionTreeRegressor objects (we will use them as base algorithms) and one for real numbers (these will be the coefficients in front of the base algorithms).
In a loop, train 50 decision trees one after another with the parameters max_depth=5 and random_state=42 (the remaining parameters at their defaults). Boosting often uses hundreds or thousands of trees, but we limit ourselves to 50 so that the algorithm runs faster and is easier to debug (the goal of the task is to understand how the method works). Each tree must be trained on the same set of objects, but the answers the tree learns to predict will change according to the rule obtained in task 1.
To start with, try always taking the coefficient equal to 0.9. It is usually justified to choose a much smaller coefficient, around 0.05 or 0.1, but since our toy example on a standard dataset has only 50 trees, we take a larger step for now.
While implementing the training you will need a function that computes the prediction of the composition of trees built so far on a sample X:
def gbm_predict(X):
return [sum([coeff * algo.predict([x])[0] for algo, coeff in zip(base_algorithms_list, coefficients_list)]) for x in X]
(we assume that base_algorithms_list is a list with the base algorithms, and coefficients_list is a list with the coefficients in front of the algorithms)
The same function will help you obtain predictions on the hold-out set and evaluate the quality of your algorithm with mean_squared_error from sklearn.metrics.
Raise the result to the power 0.5 to obtain the RMSE. The resulting RMSE value is the answer for item 2.
End of explanation
base_algorithms_list = []
coefficients_list = []
error_3 = []
estimator = tree.DecisionTreeRegressor(max_depth=5, random_state = 42)
estimator.fit(X_train, y_train)
base_algorithms_list.append(estimator)
coefficients_list.append(0.9)
err = np.sqrt(metrics.mean_squared_error(y_test, gbm_predict(X_test)))
error_3.append(err)
for i in range(1, 50):
estimator = tree.DecisionTreeRegressor(max_depth=5,random_state = 42)
y_pred = gbm_predict(X_train)
estimator.fit(X_train, grad(y_train, y_pred))
base_algorithms_list.append(estimator)
coefficients_list.append(0.9/(1. + i))
err = np.sqrt(metrics.mean_squared_error(y_test, gbm_predict(X_test)))
error_3.append(err)
print 'error № ',i,' = ', err,'\n'
ans2 = error_3[49]
print ans2
with open('grad_boost_ans2.txt', 'w') as file_out:
file_out.write(str(ans2))
Explanation: Task 3
You may also worry that, moving with a constant step, the answers on the training set change too abruptly near the error minimum, jumping over the minimum.
Try decreasing the weight in front of each algorithm at every subsequent iteration using the formula 0.9 / (1.0 + i), where i is the iteration number (from 0 to 49). Use the resulting quality of the algorithm as the answer for item 3.
In practice, the following step-selection strategy is often used: once an algorithm has been chosen, tune the coefficient in front of it with a numerical optimization method so that the deviation from the correct answers is minimal. We will not ask you to implement this for the assignment, but we recommend studying this strategy and implementing it for yourself when you get the chance.
End of explanation
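A minimal sketch of the line-search strategy mentioned above (an editorial illustration, not part of the graded assignment; it assumes numpy, gbm_predict and a freshly fitted tree are available, and uses scipy.optimize):
from scipy.optimize import minimize_scalar

def fit_coefficient(new_tree, X, y):
    # prediction of the composition built so far
    current = np.array(gbm_predict(X))
    # correction proposed by the freshly fitted tree
    correction = new_tree.predict(X)
    # choose the coefficient that minimizes the squared error of (current + b * correction)
    loss = lambda b: np.mean((y - (current + b * correction)) ** 2)
    return minimize_scalar(loss, bounds=(0.0, 1.0), method="bounded").x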
# iteration
err_list = []
coefficients_list = []
base_algorithms_list = []
estimator = tree.DecisionTreeRegressor(max_depth=5, random_state = 42)
estimator.fit(X_train, y_train)
base_algorithms_list.append(estimator)
coefficients_list.append(0.9)
err = np.sqrt(metrics.mean_squared_error(y_test, gbm_predict(X_test)))
err_list.append(err)
for i in range(1, 100):
estimator = tree.DecisionTreeRegressor(max_depth=5,random_state = 42)
y_pred = gbm_predict(X_train)
estimator.fit(X_train, grad(y_train, y_pred))
base_algorithms_list.append(estimator)
coefficients_list.append(0.9/(1. + i))
err = np.sqrt(metrics.mean_squared_error(y_test, gbm_predict(X_test)))
err_list.append(err)
print i,
plt.plot(range(0, 100), err_list)
# max_depth
err_list = []
coefficients_list = []
base_algorithms_list = []
depths = range(2, 2*50 + 2, 2)
estimator = tree.DecisionTreeRegressor(max_depth=depths[0], random_state = 42)
estimator.fit(X_train, y_train)
base_algorithms_list.append(estimator)
coefficients_list.append(0.9)
err = np.sqrt(metrics.mean_squared_error(y_test, gbm_predict(X_test)))
err_list.append(err)
for i in range(1, 50):
estimator = tree.DecisionTreeRegressor(max_depth=depths[i],random_state = 42)
y_pred = gbm_predict(X_train)
estimator.fit(X_train, grad(y_train, y_pred))
base_algorithms_list.append(estimator)
coefficients_list.append(0.9/(1.0 + i))
err = np.sqrt(metrics.mean_squared_error(y_test, gbm_predict(X_test)))
err_list.append(err)
print i,
plt.plot(depths, err_list)
ans3 = '2 3'
with open('grad_boost_ans3.txt', 'w') as file_out:
file_out.write(ans3)
Explanation: Task 4
The method you have implemented, gradient boosting over trees, is very popular in machine learning. It is available both in the sklearn library itself and in the third-party XGBoost library, which has its own Python interface. In practice XGBoost works noticeably better than GradientBoostingRegressor from sklearn, but for this task you may use either implementation.
Investigate whether gradient boosting overfits as the number of iterations grows (and think about why), and also as the depth of the trees grows. Based on your observations, write down, separated by spaces and in increasing order, the numbers of the correct statements among those listed below (this will be the answer for item 4):
1. As the number of trees increases, from some point on, the quality of gradient boosting does not change significantly.
2. As the number of trees increases, from some point on, gradient boosting starts to overfit.
3. As the depth of the trees grows, from some point on, the quality of gradient boosting on the test set starts to deteriorate.
4. As the depth of the trees grows, from some point on, the quality of gradient boosting stops changing significantly.
End of explanation
from sklearn.linear_model import LinearRegression
LineReg = LinearRegression().fit(X_train, y_train)
y_pred = LineReg.predict(X_test)
ans5 = np.sqrt(metrics.mean_squared_error(y_test, y_pred))
print ans5
with open('grad_boost_ans4.txt', 'w') as file_out:
file_out.write(str(ans5))
Explanation: Task 5
Compare the quality obtained with gradient boosting to the quality of linear regression.
To do this, train LinearRegression from sklearn.linear_model (with default parameters) on the training set and evaluate the RMSE of the resulting algorithm's predictions on the test set. The resulting quality is the answer for item 5.
In this example the simple model should turn out to be worse, but keep in mind that this is not always the case. In the assignments for this course you will still encounter an example of the opposite situation.
End of explanation |
15,621 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cas', 'fgoals-f3-h', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: CAS
Source ID: FGOALS-F3-H
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Go to the notebook help page
Notebook Initialised: 2018-02-15 16:53:44
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
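As a minimal illustration of the call above, with a placeholder name and address rather than the actual document authors, an author entry looks like this:
# Hypothetical placeholder author; replace with the real name and contact email.
DOC.set_author("Jane Doe", "jane.doe@example.org")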
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
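For a free-text STRING property such as 1.1, the cell above is completed by passing a descriptive sentence to DOC.set_value. The text below is purely illustrative and does not describe the actual FGOALS-F3-H configuration:
# Illustrative placeholder text only - the real overview must come from the modelling group.
DOC.set_value("Atmospheric component of the coupled model, including its dynamical core, radiation, turbulence/convection, cloud and gravity wave parameterisations.")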
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
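For an ENUM property with cardinality 1.N, the "please enter value(s)" hint suggests one DOC.set_value call per selected choice; the two choices below are taken from the valid list purely as a syntax example and must be checked against the model's actual formulation:
# Example selections from the valid choices above - confirm against the model formulation.
DOC.set_value("primitive equations")
DOC.set_value("hydrostatic")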
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
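INTEGER properties such as 2.4 take an unquoted number rather than a string; the value below is an arbitrary placeholder, not the documented level count of this model:
# Arbitrary placeholder level count - use the model's real number of vertical levels.
DOC.set_value(32)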
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
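Because this property is optional (Cardinality 0.1), the cell above can be left untouched if the information is not available; if it is filled, the call is the same as for a required STRING, shown here with a purely hypothetical radiation timestep:
# Optional field - only set if known. The value below is a hypothetical example.
DOC.set_value("1 hour")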
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
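A single-valued ENUM (Cardinality 1.1) takes exactly one of the listed choices; picking "present day" here only demonstrates the syntax and is not a statement about how this model treats orography:
# Example choice only - verify whether the orography is fixed at present day or modified.
DOC.set_value("present day")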
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified, describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
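When none of the listed choices fits, the "Other: [Please specify]" entry suggests that a free-text label can be appended after the "Other:" prefix; the scheme name below is a hypothetical example of that convention rather than a documented choice for this model:
# Assumes free text may follow the "Other:" prefix, as the choice label implies.
DOC.set_value("Other: piecewise parabolic method (PPM)")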
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
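BOOLEAN properties are set with an unquoted True or False; the value below only demonstrates the syntax and should be replaced by whatever the boundary layer scheme actually does:
# Syntax example only - set True or False according to the actual turbulence scheme.
DOC.set_value(True)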
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapour from updrafts.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
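Being optional with cardinality 0.N, this microphysics property can be skipped entirely or given one or more of the listed choices; the single selection below is only a placeholder:
# Placeholder selection - omit the call entirely if no microphysics applies to shallow convection.
DOC.set_value("single moment")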
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
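Since 36.6 and the following 36.7 are independent booleans, it is worth keeping them mutually consistent; assuming, for illustration only, a prognostic cloud scheme, this cell would be recorded as True and the next one would typically be False:
# Illustrative assumption of a prognostic cloud scheme; 36.7 below would then usually be False.
DOC.set_value(True)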
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
15,622 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Representational Similarity Analysis
Representational Similarity Analysis is used to perform summary statistics
on supervised classifications where the number of classes is relatively high.
It consists in characterizing the structure of the confusion matrix to infer
the similarity between brain responses and serves as a proxy for characterizing
the space of mental representations [1] [2] [3]_.
In this example, we perform RSA on responses to 24 object images (among
a list of 92 images). Subjects were presented with images of human, animal
and inanimate objects [4]_. Here we use the 24 unique images of faces
and body parts.
<div class="alert alert-info"><h4>Note</h4><p>this example will download a very large (~6GB) file, so we will not
build the images below.</p></div>
References
.. [1] Shepard, R. "Multidimensional scaling, tree-fitting, and clustering."
Science 210.4468 (1980)
Step1: Let's restrict the number of conditions to speed up computation
Step2: Define stimulus - trigger mapping
Step3: Let's make the event_id dictionary
Step4: Read MEG data
Step5: Epoch data
Step6: Let's plot some conditions
Step7: Representational Similarity Analysis (RSA) is a neuroimaging-specific
appellation to refer to statistics applied to the confusion matrix,
also referred to as the representational dissimilarity matrix (RDM).
Compared to the approach from Cichy et al. we'll use a multiclass
classifier (Multinomial Logistic Regression) while the paper uses
all pairwise binary classification tasks to make the RDM.
Also we use here the ROC-AUC as performance metric while the
paper uses accuracy. Finally here for the sake of time we use
RSA on a window of data while Cichy et al. did it for all time
instants separately.
Step8: Compute confusion matrix using ROC-AUC
Step9: Plot
Step10: Confusion matrices related to mental representations have historically been
summarized with dimensionality reduction using multi-dimensional scaling [1].
See how the face samples cluster together. | Python Code:
# Authors: Jean-Remi King <jeanremi.king@gmail.com>
# Jaakko Leppakangas <jaeilepp@student.jyu.fi>
# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
from pandas import read_csv
import matplotlib.pyplot as plt
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.manifold import MDS
import mne
from mne.io import read_raw_fif, concatenate_raws
from mne.datasets import visual_92_categories
print(__doc__)
data_path = visual_92_categories.data_path()
# Define stimulus - trigger mapping
fname = op.join(data_path, 'visual_stimuli.csv')
conds = read_csv(fname)
print(conds.head(5))
Explanation: Representational Similarity Analysis
Representational Similarity Analysis is used to perform summary statistics
on supervised classifications where the number of classes is relatively high.
It consists in characterizing the structure of the confusion matrix to infer
the similarity between brain responses and serves as a proxy for characterizing
the space of mental representations [1] [2] [3]_.
In this example, we perform RSA on responses to 24 object images (among
a list of 92 images). Subjects were presented with images of human, animal
and inanimate objects [4]_. Here we use the 24 unique images of faces
and body parts.
<div class="alert alert-info"><h4>Note</h4><p>this example will download a very large (~6GB) file, so we will not
build the images below.</p></div>
References
.. [1] Shepard, R. "Multidimensional scaling, tree-fitting, and clustering."
Science 210.4468 (1980): 390-398.
.. [2] Laakso, A. & Cottrell, G.. "Content and cluster analysis:
assessing representational similarity in neural systems." Philosophical
psychology 13.1 (2000): 47-76.
.. [3] Kriegeskorte, N., Marieke, M., & Bandettini. P. "Representational
similarity analysis-connecting the branches of systems neuroscience."
Frontiers in systems neuroscience 2 (2008): 4.
.. [4] Cichy, R. M., Pantazis, D., & Oliva, A. "Resolving human object
recognition in space and time." Nature neuroscience (2014): 17(3),
455-462.
End of explanation
max_trigger = 24
conds = conds[:max_trigger] # take only the first 24 rows
Explanation: Let's restrict the number of conditions to speed up computation
End of explanation
conditions = []
for c in conds.values:
cond_tags = list(c[:2])
cond_tags += [('not-' if i == 0 else '') + conds.columns[k]
for k, i in enumerate(c[2:], 2)]
conditions.append('/'.join(map(str, cond_tags)))
print(conditions[:10])
Explanation: Define stimulus - trigger mapping
End of explanation
event_id = dict(zip(conditions, conds.trigger + 1))
event_id['0/human bodypart/human/not-face/animal/natural']
Explanation: Let's make the event_id dictionary
End of explanation
n_runs = 4 # 4 for full data (use less to speed up computations)
fname = op.join(data_path, 'sample_subject_%i_tsss_mc.fif')
raws = [read_raw_fif(fname % block, verbose='error')
for block in range(n_runs)] # ignore filename warnings
raw = concatenate_raws(raws)
events = mne.find_events(raw, min_duration=.002)
events = events[events[:, 2] <= max_trigger]
Explanation: Read MEG data
End of explanation
picks = mne.pick_types(raw.info, meg=True)
epochs = mne.Epochs(raw, events=events, event_id=event_id, baseline=None,
picks=picks, tmin=-.1, tmax=.500, preload=True)
Explanation: Epoch data
End of explanation
epochs['face'].average().plot()
epochs['not-face'].average().plot()
Explanation: Let's plot some conditions
End of explanation
# Classify using the average signal in the window 50ms to 300ms
# to focus the classifier on the time interval with best SNR.
clf = make_pipeline(StandardScaler(),
LogisticRegression(C=1, solver='liblinear',
multi_class='auto'))
X = epochs.copy().crop(0.05, 0.3).get_data().mean(axis=2)
y = epochs.events[:, 2]
classes = set(y)
cv = StratifiedKFold(n_splits=5, random_state=0, shuffle=True)
# Compute confusion matrix for each cross-validation fold
y_pred = np.zeros((len(y), len(classes)))
for train, test in cv.split(X, y):
# Fit
clf.fit(X[train], y[train])
# Probabilistic prediction (necessary for ROC-AUC scoring metric)
y_pred[test] = clf.predict_proba(X[test])
Explanation: Representational Similarity Analysis (RSA) is a neuroimaging-specific
appellation to refer to statistics applied to the confusion matrix,
also referred to as the representational dissimilarity matrix (RDM).
Compared to the approach from Cichy et al. we'll use a multiclass
classifier (Multinomial Logistic Regression) while the paper uses
all pairwise binary classification tasks to make the RDM.
Also we use here the ROC-AUC as performance metric while the
paper uses accuracy. Finally here for the sake of time we use
RSA on a window of data while Cichy et al. did it for all time
instants separately.
End of explanation
confusion = np.zeros((len(classes), len(classes)))
for ii, train_class in enumerate(classes):
for jj in range(ii, len(classes)):
confusion[ii, jj] = roc_auc_score(y == train_class, y_pred[:, jj])
confusion[jj, ii] = confusion[ii, jj]
Explanation: Compute confusion matrix using ROC-AUC
End of explanation
labels = [''] * 5 + ['face'] + [''] * 11 + ['bodypart'] + [''] * 6
fig, ax = plt.subplots(1)
im = ax.matshow(confusion, cmap='RdBu_r', clim=[0.3, 0.7])
ax.set_yticks(range(len(classes)))
ax.set_yticklabels(labels)
ax.set_xticks(range(len(classes)))
ax.set_xticklabels(labels, rotation=40, ha='left')
ax.axhline(11.5, color='k')
ax.axvline(11.5, color='k')
plt.colorbar(im)
plt.tight_layout()
plt.show()
Explanation: Plot
End of explanation
fig, ax = plt.subplots(1)
mds = MDS(2, random_state=0, dissimilarity='precomputed')
chance = 0.5
summary = mds.fit_transform(chance - confusion)
cmap = plt.get_cmap('rainbow')
colors = ['r', 'b']
names = list(conds['condition'].values)
for color, name in zip(colors, set(names)):
sel = np.where([this_name == name for this_name in names])[0]
size = 500 if name == 'human face' else 100
ax.scatter(summary[sel, 0], summary[sel, 1], s=size,
facecolors=color, label=name, edgecolors='k')
ax.axis('off')
ax.legend(loc='lower right', scatterpoints=1, ncol=2)
plt.tight_layout()
plt.show()
Explanation: Confusion matrices related to mental representations have historically been
summarized with dimensionality reduction using multi-dimensional scaling [1].
See how the face samples cluster together.
End of explanation |
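An optional follow-up, not part of the original example: a crude RSA-style summary that compares the mean dissimilarity within the face conditions to the dissimilarity between faces and the other conditions, assuming (as the plotting code above does) that the rows of confusion follow the same condition order as names.
# Optional sketch -- reuses `confusion`, `chance` and `names` defined above.
rdm = confusion - chance                      # AUC above chance as a rough dissimilarity
face = np.array([n == 'human face' for n in names])
within = rdm[np.ix_(face, face)].mean()
between = rdm[np.ix_(face, ~face)].mean()
print('within-face vs face-vs-other dissimilarity:', within, between)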
15,623 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Schelling Segregation Model
Background
The Schelling (1971) segregation model is a classic of agent-based modeling, demonstrating how agents following simple rules lead to the emergence of qualitatively different macro-level outcomes. Agents are randomly placed on a grid. There are two types of agents, one constituting the majority and the other the minority. All agents want a certain number (generally, 3) of their 8 surrounding neighbors to be of the same type in order for them to be happy. Unhappy agents will move to a random available grid space. While individual agents do not have a preference for a segregated outcome (e.g. they would be happy with 3 similar neighbors and 5 different ones), the aggregate outcome is nevertheless heavily segregated.
Implementation
This is a demonstration of running a Mesa model in an IPython Notebook. The actual model and agent code are implemented in Schelling.py, in the same directory as this notebook. Below, we will import the model class, instantiate it, run it, and plot the time series of the number of happy agents.
Step1: Now we instantiate a model instance
Step2: We want to run the model until all the agents are happy with where they are. However, there's no guarentee that a given model instantiation will ever settle down. So let's run it for either 100 steps or until it stops on its own, whichever comes first
Step3: The model has a DataCollector object, which checks and stores how many agents are happy at the end of each step. It can also generate a pandas DataFrame of the data it has collected
Step4: Finally, we can plot the 'happy' series
Step5: For testing purposes, here is a table giving each agent's x and y values at each step.
Step6: Effect of Homophily on segregation
Now, we can do a parameter sweep to see how segregation changes with homophily.
First, we create a function which takes a model instance and returns what fraction of agents are segregated -- that is, have no neighbors of the opposite type.
Step7: Now, we set up the batch run, with a dictionary of fixed and changing parameters. Let's hold everything fixed except for Homophily. | Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
from model import SchellingModel
Explanation: Schelling Segregation Model
Background
The Schelling (1971) segregation model is a classic of agent-based modeling, demonstrating how agents following simple rules lead to the emergence of qualitatively different macro-level outcomes. Agents are randomly placed on a grid. There are two types of agents, one constituting the majority and the other the minority. All agents want a certain number (generally, 3) of their 8 surrounding neighbors to be of the same type in order for them to be happy. Unhappy agents will move to a random available grid space. While individual agents do not have a preference for a segregated outcome (e.g. they would be happy with 3 similar neighbors and 5 different ones), the aggregate outcome is nevertheless heavily segregated.
Implementation
This is a demonstration of running a Mesa model in an IPython Notebook. The actual model and agent code are implemented in Schelling.py, in the same directory as this notebook. Below, we will import the model class, instantiate it, run it, and plot the time series of the number of happy agents.
End of explanation
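Purely for illustration, the happiness rule at the heart of the model might look roughly like the sketch below; the real implementation lives in Schelling.py and the names here are hypothetical.
# Hypothetical sketch of the core rule -- see Schelling.py for the actual code.
def is_happy(agent, grid, homophily=3):
    # Count neighbors of the same type among the (up to) 8 surrounding cells.
    similar = sum(1 for n in grid.neighbor_iter(agent.pos) if n.type == agent.type)
    return similar >= homophily   # unhappy agents later move to a random empty cell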
model = SchellingModel(10, 10, 0.8, 0.2, 3)
Explanation: Now we instantiate a model instance: a 10x10 grid, with an 80% chance of an agent being placed in each cell, approximately 20% of agents set as minorities, and agents wanting at least 3 similar neighbors.
End of explanation
while model.running and model.schedule.steps < 100:
model.step()
print(model.schedule.steps) # Show how many steps have actually run
Explanation: We want to run the model until all the agents are happy with where they are. However, there's no guarantee that a given model instantiation will ever settle down. So let's run it for either 100 steps or until it stops on its own, whichever comes first:
End of explanation
model_out = model.datacollector.get_model_vars_dataframe()
model_out.head()
Explanation: The model has a DataCollector object, which checks and stores how many agents are happy at the end of each step. It can also generate a pandas DataFrame of the data it has collected:
End of explanation
model_out.happy.plot()
Explanation: Finally, we can plot the 'happy' series:
End of explanation
x_positions = model.datacollector.get_agent_vars_dataframe()
x_positions.head()
Explanation: For testing purposes, here is a table giving each agent's x and y values at each step.
End of explanation
from mesa.batchrunner import BatchRunner
def get_segregation(model):
'''
Find the % of agents that only have neighbors of their same type.
'''
segregated_agents = 0
for agent in model.schedule.agents:
segregated = True
for neighbor in model.grid.neighbor_iter(agent.pos):
if neighbor.type != agent.type:
segregated = False
break
if segregated:
segregated_agents += 1
return segregated_agents / model.schedule.get_agent_count()
Explanation: Effect of Homophily on segregation
Now, we can do a parameter sweep to see how segregation changes with homophily.
First, we create a function which takes a model instance and returns what fraction of agents are segregated -- that is, have no neighbors of the opposite type.
End of explanation
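As a quick optional check, the helper can be applied to the single model run from earlier before launching the full parameter sweep:
# Optional: segregation fraction of the earlier single run.
print(get_segregation(model))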
parameters = {"height": 10, "width": 10, "density": 0.8, "minority_pc": 0.2,
"homophily": range(1,9)}
model_reporters = {"Segregated_Agents": get_segregation}
param_sweep = BatchRunner(SchellingModel, parameters, iterations=10,
max_steps=200,
model_reporters=model_reporters)
param_sweep.run_all()
df = param_sweep.get_model_vars_dataframe()
plt.scatter(df.homophily, df.Segregated_Agents)
plt.grid(True)
Explanation: Now, we set up the batch run, with a dictionary of fixed and changing parameters. Let's hold everything fixed except for Homophily.
End of explanation |
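A possible next step, not in the original notebook, is to average the sweep results per homophily level in addition to the scatter plot:
# Optional aggregation of the sweep results by homophily value.
print(df.groupby('homophily').Segregated_Agents.mean())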
15,624 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ian's
Step3: [[ 0.56849023 0.56123475 -0.60152673]
[ 0.81728005 -0.30155688 0.49103641]
[ 0.09419218 -0.7707652 -0.63011811]]
For a framing camera the interior orientation (intrinsic matrix) requires (at a minimum)
Step4: $u = (f/s_{x}) * (v1/v3) + pp_{x}$
$u = -0.007$
Step5: Example from Mikhail
Step6: Now with our Messenger Camera
Step8: Now using ISIS data | Python Code:
print(isd['omega'])
print(isd['phi'])
print(isd['kappa'])
o = isd['omega']
p = isd['phi']
k = isd['kappa']
XL = 1728357.70312
YL = -2088409.0061
ZL = 2082873.92806
print(XL, YL, ZL)
opk_to_rotation(o, p, k)
Explanation: Ian's:
o = 2.25613094079
p = 0.094332016311
k = -0.963037547862
XL = 1728357.70312
YL = -2088409.0061
ZL = 2082873.92806
End of explanation
def opk_to_rotation(o, p, k):
Convert from Omega, Phi, Kappa to a 3x3 rotation matrix
Parameters
----------
o : float
Omega in radians
p : float
Phi in radians
k : float
Kappa in radians
Returns
-------
: ndarray
(3,3) rotation array
om = np.empty((3,3))
om[:,0] = [1,0,0]
om[:,1] = [0, cos(o), -sin(o)]
om[:,2] = [0, sin(o), cos(o)]
pm = np.empty((3,3))
pm[:,0] = [cos(p), 0, sin(p)]
pm[:,1] = [0,1,0]
pm[:,2] = [-sin(p), 0, cos(p)]
km = np.empty((3,3))
km[:,0] = [cos(k), -sin(k), 0]
km[:,1] = [sin(k), cos(k), 0]
km[:,2] = [0,0,1]
return km.dot(pm).dot(om)
def collinearity(f, M, camera_position, ground_position, principal_point=(0,0)):
XL, YL, ZL = camera_position
X, Y, Z = ground_position
x0, y0 = principal_point
x = (-f * ((M[0,0] * (X - XL) + M[0,1] * (Y - YL) + M[0,2] * (Z - ZL))/
(M[2,0] * (X - XL) + M[2,1] * (Y - YL) + M[2,2] * (Z - ZL)))) + x0
y = (-f * ((M[1,0] * (X - XL) + M[1,1] * (Y - YL) + M[1,2] * (Z - ZL))/
(M[2,0] * (X - XL) + M[2,1] * (Y - YL) + M[2,2] * (Z - ZL)))) + y0
return x, y, -f
def collinearity_inv(f, M, camera_position, pixel_position, elevation, principal_point=(0,0)):
XL, YL, ZL = camera_position
x, y = pixel_position
Z = elevation
x0, y0 = principal_point
X = (Z-ZL) * ((M[0,0] * (x - x0) + M[1,0] * (y - y0) + M[2,0] * (-f))/
(M[0,2] * (x - x0) + M[1,2] * (y - y0) + M[2,2] * (-f))) + XL
Y = (Z-ZL) * ((M[0,1] * (x - x0) + M[1,1] * (y - y0) + M[2,1] * (-f))/
(M[0,2] * (x - x0) + M[1,2] * (y - y0) + M[2,2] * (-f))) + YL
return X,Y
def pixel_to_focalplane(x, y, tx, ty):
Convert from camera pixel space to undistorted focal plane space.
focal_x = tx[0] + (tx[1] * x) + (tx[2] * y)
focal_y = ty[0] + (ty[1] * x) + (ty[2] * y)
return focal_x, focal_y
def distorted_mdisnac_focal(x, y, tx, ty, odtx, odty):
    # Convert from pixel space to undistorted focal plane coordinates, then
    # apply the distortion polynomial (same 10-term Taylor series as the
    # distort() helper defined later in this notebook). The distortion
    # coefficient lists are now passed in explicitly.
    xp, yp = pixel_to_focalplane(x, y, tx, ty)
    d = np.array([1, xp, yp, xp*xp, xp*yp, yp*yp,
                  xp**3, xp*xp*yp, xp*yp*yp, yp**3])
    x = np.asarray(odtx).dot(d)
    y = np.asarray(odty).dot(d)
    return x, y
Explanation: [[ 0.56849023 0.56123475 -0.60152673]
[ 0.81728005 -0.30155688 0.49103641]
[ 0.09419218 -0.7707652 -0.63011811]]
For a framing camera the interior orientation (intrinsic matrix) requires (at a minimum):
a distortion model
focal point
principal point offset
The example that we have been working on looks like a pinhole ground to image projection, defined as:
$$\begin{bmatrix}
w \cdot u \
w \cdot v \
w
\end{bmatrix} = \mathbf{K}
\begin{bmatrix}
\mathbf{Rt}
\end{bmatrix}
\begin{bmatrix}
X\
Y\
Z\
1
\end{bmatrix}
$$
or
$$\begin{bmatrix}
w \cdot u \
w \cdot v \
w
\end{bmatrix} =
\begin{bmatrix}
f & s & u_{0} \
0 & \alpha f & v_{0} \
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
r_{11} & r_{12} & r_{13} & t_{x} \
r_{21} & r_{22} & r_{23} & t_{y} \
r_{31} & r_{32} & r_{33} & t_{z} \
\end{bmatrix}
\begin{bmatrix}
X\
Y\
Z\
1
\end{bmatrix}
$$
K is the intrinsic matrix (interior orientation), R is the extrinsic matrix (exterior orientation), and t is the translation. In the extrinsic matrix $\alpha$ (pixel aspect ratio) and $s$ (skew) are often assume to be unit and zero, respectively. $f$ is the focal length (in pixels) and ($u_{0}, v_{0}$) are the optical center (principal point).
The second resource below suggests that t can be thought of as the world origin in camera coordinates.
Focal Length Conversion from mm to pixels
If the sensor's physical width is known: $focal_{pixel} = (focal_{mm} / sensor_{width}) * imagewidth_{pixels}$
If the horizontal FoV is known: $focal_{pixel} = (imagewidth_{pixels} * 0.5) / \tan(FoV * 0.5)$
Resources:
http://ksimek.github.io/2013/08/13/intrinsic/
http://ksimek.github.io/2012/08/22/extrinsic/
http://slazebni.cs.illinois.edu/spring16/3dscene_book_svg.pdf
Here we define:
$$L = \begin{bmatrix}
X_{L}\
Y_{L}\
Z_{L}
\end{bmatrix}
$$
$$\begin{bmatrix}
x\
y\
z \end{bmatrix} = k\mathbf{M} \begin{bmatrix}
X - X_{L}\
Y - Y_{L}\
Z - Z_{L}
\end{bmatrix}$$, where $(x, y, -f)$ are in image coordinates, $k$ is a scale factor, $\mathbf{M}$ is a 3x3 rotation matrix, and $(X,Y,Z)$ represent the object point.
End of explanation
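To make the focal-length conversion above concrete, here is a small illustration using this sensor's geometry (the 549.1178... mm focal length and the 14.4 mm / 1024 pixel detector are both quoted later in this notebook); it is an aside, not part of the original pipeline.
# Illustration: focal length in mm converted to pixels for this sensor geometry.
focal_mm = 549.1178195372703      # focal length value used later in the notebook
sensor_width_mm = 14.4            # 1024 pixels * 0.014 mm/pixel
image_width_pix = 1024
print((focal_mm / sensor_width_mm) * image_width_pix)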
pixel_to_focalplane(0, 0, isd['transx'], isd['transy'])
Explanation: $u = (f/s_{x}) * (v1/v3) + pp_{x}$
$u = -0.007$
End of explanation
o = radians(2)
p = radians(5)
k = radians(15)
XL = 5000
YL = 10000
ZL = 2000
# Interior Orientation
x0 = 0.015 # mm
y0 = -0.02 # mm
f = 152.4 # mm
# Ground Points
X = 5100
Y = 9800
Z = 100
M = opk_to_rotation(o,p,k)
# This is correct as per Mikhail
x, y, _ = collinearity(f, M, [XL, YL, ZL], [X, Y, Z], [0,0])
x, y, _ = collinearity(f, M, [XL, YL, ZL], [X, Y, Z], [x0,y0])
# And now the inverse, find X, Y
Z = 100 # Provided by Mikhail - his random number
computedX, computedY = collinearity_inv(f, M, [XL, YL, ZL], [x, y], Z, (x0, y0))
assert(computedX == X)
assert(computedY == Y)
# Mikhail continued - this is the implementation used in the pinhole CSM (also working)
xo = X - XL # Ground point - Camera position in body fixed
yo = Y - YL
zo = Z - ZL
o = radians(2)
p = radians(5)
k = radians(15)
m = opk_to_rotation(o,p,k)
u = m[0][0] * xo + m[0][1] * yo + m[0][2] * zo
v = m[1][0] * xo + m[1][1] * yo + m[1][2] * zo
w = m[2][0] * xo + m[2][1] * yo + m[2][2] * zo
u /= w
v /= w
x0 = 0.015 # mm
y0 = -0.02 # mm
f = 152.4 # mm
print(-f * u, -f * v)
print(x0 -f * u , y0 - f * v)
Explanation: Example from Mikhail
End of explanation
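An optional sanity check, not in the original notebook: a proper rotation matrix should satisfy $M M^T = I$ with determinant +1, which is easy to verify for the Mikhail example angles used above.
# Optional check that the rotation matrix is orthogonal with det = +1.
M_chk = opk_to_rotation(radians(2), radians(5), radians(15))
print(np.allclose(M_chk.dot(M_chk.T), np.eye(3)), np.linalg.det(M_chk))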
# First from pixel to ground:
f = isd['focal_length']
# We know that the pixel size is 0.014^2 mm per pixel (14.4mm / 1024 pixels)
pixel_size = 0.014
x0 = 512 # Convert from pixel based principal point to metric principal point
y0 = 512
f = isd['focal_length']
o = isd['omega']
p = isd['phi']
k = isd['kappa']
M = opk_to_rotation(o,p,k)
camera_coords = [512, 512]
print(camera_coords)
# This is image to ground
X, Y = collinearity_inv(f, M, [XL, YL, ZL], camera_coords, 1455660, (x0, y0))
print('Ground: ', X, Y) # Arbitrary 1000m elevation - here is where iteration with intersection is needed.
# Now reverse! This is ground to image
# These are in mm and need to convert to pixels
x, y, f = collinearity(f, M, [XL, YL, ZL], [X, Y, 1455660], [x0,y0])
print(x,y)
print('Sensor Position (X,Y,Z): ', XL, YL, ZL)
print('')
print(XL, YL, ZL)
x = np.arange(9).reshape((3,3))
print(x)
print(x.T)
Explanation: Now with our Messenger Camera
End of explanation
def groundToImage(f, camera_position, camera_orientation, ground_position, principal_point=(0,0)):
XL, YL, ZL = camera_position
X, Y, Z = ground_position
x0, y0 = principal_point
M = opk_to_rotation(*camera_orientation)
x = (-f * ((M[0,0] * (X - XL) + M[1, 0] * (Y - YL) + M[2, 0] * (Z - ZL))/
(M[0, 2] * (X - XL) + M[1, 2] * (Y - YL) + M[2,2] * (Z - ZL)))) + x0
y = (-f * ((M[0,1] * (X - XL) + M[1,1] * (Y - YL) + M[2,1] * (Z - ZL))/
(M[0,2] * (X - XL) + M[1,2] * (Y - YL) + M[2,2] * (Z - ZL)))) + y0
return x, y
# Rectangular camera position (lon, lat) in radians
XL = 1728357.70312
YL = -2088409.0061
ZL = 2082873.92806
camera_position = [XL, YL, ZL]
# Camera rotation
o = isd['omega']
p = isd['phi']
k = isd['kappa']
camera_orientation = [o,p,k]
f = 549.1178195372703
principal_point = [0,0]
X = 1129210.
Y = -1599310.
Z = 1455250.
ground_position = [X, Y, Z]
x, y = groundToImage(f, camera_position, camera_orientation, ground_position, principal_point)
po = isd['ccd_center'][0]
lo = isd['ccd_center'][1]
sy = isd['itrans_line'][2]
sx = isd['itrans_sample'][1]
print(x, y, po, lo, sy, sx)
l = y / sy + lo
s = x / sx + lo
print(l, s)
# From mm to pixels
x = 3.5
y = -3.5
transx = isd['transx'][1]
transy = isd['transy'][2]
s = x / transx + 512.5
l = y / transy + 512.5
l, s
#From pixels to mm
l = 1024
s = 1024
itransl= isd['itrans_line'][2]
itranss = isd['itrans_sample'][1]
x = (l - 512.5) / itransl
y = (s - 512.5) / itranss
x, y
isd['itrans_line']
isd['transx'][0], isd['transy'][1]
def intersect_ellipsoid(h, xc, yc, zc, xl, yl, zl):
xc, yc, zc are the camera position
xl, yl, zl are the points transformed in the collinearity eqn.
ap = isd['semi_major_axis'] * 1000 + h
bp = isd['semi_minor_axis'] * 1000 + h
k = ap**2 / bp**2
print('A', ap, bp, k)
at = xl**2 + yl**2 + k * zl**2
bt = 2.0 * (xl * xc + yl * yc + k * zl * zc)
ct = xc * xc + yc * yc + k * zc * zc - ap * ap
quadterm = bt * bt - 4.0 * at * ct
print('B', at, bt, ct, quadterm)
if 0.0 > quadterm:
quadterm = 0
scale = (-bt - math.sqrt(quadterm)) / (2.0 * at)
print(scale)
x = xc + scale * xl
y = yc + scale * yl
z = zc + scale * zl
print(x, y, z)
return x, y, z
def imageToGround(f, camera_position, camera_orientation, pixel_position, elevation, principal_point=(0,0)):
XL, YL, ZL = camera_position
x, y = pixel_position
Z = elevation
x0, y0 = principal_point
print(x-x0)
M = opk_to_rotation(*camera_orientation)
X = (Z-ZL) * ((M[0,0] * (x - x0) + M[0,1] * (y - y0) + M[0,2] * (-f))/
(M[2,0] * (x - x0) + M[2,1] * (y - y0) + M[2,2] * (-f))) + XL;
Y = (Z-ZL) * ((M[1,0] * (x - x0) + M[1,1] * (y - y0) + M[1,2] * (-f))/
(M[2,0] * (x - x0) + M[2,1] * (y - y0) + M[2,2] * (-f))) + YL;
# This does not try to solve for scale.
xl = M[0,0] * (x - x0) + M[0,1] * (y - y0) - M[0,2] * (-f)
yl = M[1,0] * (x - x0) + M[1,1] * (y - y0) - M[1,2] * (-f)
zl = M[2,0] * (x - x0) + M[2,1] * (y - y0) - M[2,2] * (-f)
print(xl, yl, zl)
h = 0
print(h, XL, YL, ZL)
x, y, z = intersect_ellipsoid(h, XL, YL, ZL, xl, yl, zl)
return x, y, z
XL = 1728357.70312
YL = -2088409.0061
ZL = 2082873.92806
camera_position = [XL, YL, ZL]
# Camera rotation
o = isd['omega']
p = isd['phi']
k = isd['kappa']
X = 1129210.
Y = -1599310.
Z = elevation = 1455250.
camera_orientation = [o,p,k]
#f = isd['focal_length']
principal_point = [0,0]
#image_coordinates = [5.768,5.768]
image_coordinates = [0,0]
image_coordiantes = [100,100]
groundx, groundy, groundz = imageToGround(f, camera_position, camera_orientation, image_coordinates, elevation, principal_point)
print(groundx, groundy, groundz)
print(X, Y, Z)
print(groundx - X, groundy - Y, groundz - Z)
# Sanity Checker Here - Inverse.
ground_position = [groundx, groundy, Z]
print('GP: ', ground_position)
groundToImage(f, camera_position, camera_orientation, ground_position, principal_point=(0,0))
# Not invert and check for the sensor coordinates using these ground coordinates
# Should be equal to pixel_position in the previous cell
groundToImage(f, camera_position, camera_orientation, [groundx, groundy, elevation],principal_point)
def distort(x, y, odtx, odty):
ts = np.array([1, x, y, x**2, x*y,y**2,
x**3, x*x*y, x*y*y, y**3])
nx = np.asarray(odtx).dot(ts)
ny = np.asarray(odty).dot(ts)
return nx, ny
def pixel_to_mm(l, s, isd):
itransl= isd['itrans_line'][2]
itranss = isd['itrans_sample'][1]
x = (l - 512.5) / itransl
y = (s - 512.5) / itranss
return x, y
x, y = pixel_to_mm(512.5, 512.5, isd)
nx, ny = distort(x,y,isd['odt_x'], isd['odt_y'])
print(nx, ny)
x, y = pixel_to_mm(100, 100, isd)
print(x, y)
nx, ny = distort(x, y, isd['odt_x'], isd['odt_y'])
print(nx, ny)
len(isd['odt_x'])  # 'odt_' was missing its axis suffix; odt_x/odt_y hold the distortion coefficients
j = 0
k = 0
x = -5.77
y = -5.77
ts = np.array([1, x, y, x**2, x*y,y**2,
x**3, x*x*y, x*y*y, y**3])
for i in range(10):
j = j + ts[i] * isd['odt_x'][i]
k = k + ts[i] * isd['odt_y'][i]
print(j, k)
!cat isd.isd
Explanation: Now using ISIS data
End of explanation |
15,625 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Auftrieb umströmter Körper
In diesem Kapitel werden wir sehen, dass auf einen Körper nur dann eine Auftriebskraft wirkt, wenn die Zirkulation $\Gamma$ - das Linienintegral des Geschwindigkeitsfeldes - auf einer beliebigen geschlossenen Kurve um den Körper ungleich Null ist.
Satz von Kutta-Joukowski
Der aus der Strömungslehre bekannte Satz von Kutta-Joukowski beschreibt für stationäre, reibungsfreie, drehungsfreie, inkompressible und zweidimensionale Strömungen den Zusammenhang zwischen der Zirkulation von $\overrightarrow{v}$ längst der Kurve $C$ und der Auftriebskraft (pro Tiefeneinheit)
Step1: Soweit nichts Neues. Fehlt nur noch die lineare Überlagerung.
Wir wählen eine Anströmgeschwindigkeit von $u_\infty = 1~\text{m/s}$, ein Dipolmoment $M = 30~\text{m}^3/\text{s}$ und eine Zirkulation $\Gamma = 10~ \text{m}^3/\text{s}$.
Step2: Die Darstellung der Stromlinien ergibt dann folgendes Bild
Step3: Man sieht deutlich, dass die Strömung jetzt durch die Überlagerung des Potentialwirbels umgelenkt wird. Die Form des Zylinders ist in etwa erkennbar. Aber welchen Radius hat der Zylinder und welche Druckverteilung herrscht auf der Oberfläche des Zylinders?
Um den Radius zu ermitteln, benotigen wir die Koordinaten eines Staupunkts auf dem Zylinder. Da sich der Durchmesser durch den Potentialwirbel nicht ändert, können wir zunächst nur die Überlagerung aus Translations- und Dipolströmung betrachten
Step4: Die Druckverteilung auf der Oberfläche erhalten wir durch Einsetzen des Geschwindigkeitsverlaufs auf der Trennstromlinie in die Bernoulli-Gleichung, wie im vorletzten Kapitel gezeigt.
$$p(\varphi) + \frac{\rho}{2} \cdot \left( u^2(\varphi)+v^2(\varphi) \right) = p_\infty + \frac{\rho}{2}\cdot \overrightarrow{v}_\infty^2 = const.$$
Die Gleichungen für $u(\varphi)$ und $v(\varphi)$ für $r=R$ haben wir oben schon hergeleitet, so dass wir die Druckverteilung einfach ausrechnen und plotten können
Step5: Und siehe da, die Druckverteilung ist nicht mehr symmetrisch. D.h. es wirkt jetzt eine Kraft auf den Körper. Da die Druckkraft immer senkrecht auf die Oberfläche des Zylinders wirkt, können wir die $x$- und $y$-Komponente der Kraft wie folgt ausdrücken
Step6: Wie erwartet ist die berechnete Widerstandskraft gleich Null, da wir die Reibungskräfte mit der Potentialtheorie nicht berücksichtigen können.
Experimentieren Sie mit dem Wert für die Zirkulation. Für welchen Wert fallen die beiden Staupunkte zusammen?
Hier geht's weiter oder hier zurück zur Übersicht.
Copyright (c) 2018, Florian Theobald und Matthias Stripf
Der folgende Python-Code darf ignoriert werden. Er dient nur dazu, die richtige Formatvorlage für die Jupyter-Notebooks zu laden. | Python Code:
import math
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
nx = 400 # number of points in the x-direction
ny = 200 # number of points in the y-direction
x = np.linspace(-10, 10, nx) # 1D array of x-coordinates
y = np.linspace(-5, 5, ny) # 1D array of y-coordinates
X, Y = np.meshgrid(x, y) # creates the grid with nx * ny points
def trans_v(x, y, u1, v1): # velocity vector of the uniform (translation) flow
return np.full_like(x, u1), np.full_like(y, v1)
def trans_psi(x, y, u1, v1): # stream function of the uniform flow
return -v1*x+u1*y
def trans_phi(x, y, u1, v1): # potential function of the uniform flow
return u1*x+v1*y
def dipolx_v(x, y, xs, ys, M): # velocity vector of the dipole flow
s = -M/(2*math.pi) / ((x-xs)**2+(y-ys)**2)**2
return s*((x-xs)**2-(y-ys)**2), s*2*(x-xs)*(y-ys)
def dipolx_psi(x, y, xs, ys, M): # stream function of the dipole flow
return -M/(2*math.pi) * (y-ys) / ((x-xs)**2+(y-ys)**2)
def dipolx_phi(x, y, xs, ys, M): # potential function of the dipole flow
return M/(2*math.pi) * (x-xs) / ((x-xs)**2+(y-ys)**2)
def vortex_v(x, y, x1, y1, Gamma): # velocity vector of the potential vortex
s = Gamma/(2*math.pi*((x-x1)**2+(y-y1)**2))
return s*(y-y1), -s*(x-x1)
def vortex_psi(x, y, x1, y1, Gamma): # stream function of the potential vortex
s = -Gamma/(2*math.pi)
return -s*np.log(np.sqrt((x-x1)**2+(y-y1)**2))
def vortex_phi(x, y, x1, y1, Gamma): # potential function of the potential vortex
s = -Gamma/(2*math.pi)
return np.arctan2((y-y1),(x-x1))
Explanation: Lift on Bodies in a Flow
In this chapter we will see that a lift force acts on a body only if the circulation $\Gamma$ - the line integral of the velocity field - along an arbitrary closed curve around the body is non-zero.
The Kutta-Joukowski Theorem
The Kutta-Joukowski theorem known from fluid mechanics describes, for steady, inviscid, irrotational, incompressible and two-dimensional flows, the relation between the circulation of $\overrightarrow{v}$ along the curve $C$ and the lift force (per unit depth):
$$\frac{F_A}{1~\text{m}} = \rho_\infty \cdot v_\infty \cdot \Gamma = \rho_\infty \cdot v_\infty \cdot \oint_C {\overrightarrow{v}(\overrightarrow{r}) \text{d} \overrightarrow{r}}$$
A lift force therefore only arises if the circulation takes a positive (non-zero) value. This is the case when the flow is deflected by the body it passes. In the inviscid case this is actually not possible, because without friction the flow does not "stick" to the surface. Without viscous forces (i.e. at very high Reynolds numbers $\text{Re} = \frac{\text{inertia forces}}{\text{viscous forces}} = \frac{v_\infty \cdot l \cdot \rho}{\mu}$) the flow around a blade profile would look as shown in the following figure, and no lift force would act on the profile.
If lift is to be simulated with potential theory, an elementary flow with non-zero circulation therefore has to be added. The only one that qualifies is the potential vortex.
The Magnus Effect
The Magnus effect is well suited to illustrate the Kutta-Joukowski theorem, i.e. the effect of circulation. It describes the force on a rotating cylinder in a cross-flow.
The Magnus effect is of technical interest, e.g., for ship propulsion (Flettner rotor) and vertical-axis wind turbines, but also for bending a free kick in football. Aircraft have even been built with Flettner rotors instead of wings.
We already encountered the flow around a non-rotating cylinder in the previous chapter. It resulted from the superposition of a dipole flow and a uniform flow. To simulate the Magnus effect we now only need to superimpose a potential vortex.
So we first copy the code already developed for the three elementary flows:
End of explanation
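As a quick numerical illustration of the Kutta-Joukowski theorem (all values below are arbitrary placeholders, not taken from the notebook):
# Illustrative only: lift per unit depth from assumed free-stream values.
rho_inf = 1.2     # assumed density
v_inf = 1.0       # assumed free-stream velocity
Gamma_demo = 20.0 # assumed circulation
print(rho_inf * v_inf * Gamma_demo, 'N per metre of depth')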
u1 = 1.0
M = 30.0
Gamma = 20
u_trans, v_trans = trans_v(X, Y, u1, 0)          # uniform (translation) flow
u_dipol, v_dipol = dipolx_v(X, Y, 0, 0, M)       # dipole flow
u_vortex, v_vortex = vortex_v(X, Y, 0, 0, Gamma) # potential vortex
u_gesamt = u_trans + u_dipol + u_vortex          # linear superposition
v_gesamt = v_trans + v_dipol + v_vortex
Explanation: Nothing new so far. All that is missing is the linear superposition.
We choose a free-stream velocity of $u_\infty = 1~\text{m/s}$, a dipole moment $M = 30~\text{m}^3/\text{s}$ and a circulation $\Gamma = 10~ \text{m}^3/\text{s}$.
End of explanation
# Set up a new plot
plt.figure(figsize=(10, 5))
plt.xlabel('x')
plt.ylabel('y')
plt.xlim(-10,10)
plt.ylim(-5,5)
# Draw the streamlines with Matplotlib's built-in function
plt.streamplot(X, Y, u_gesamt, v_gesamt,
density=2, linewidth=1, arrowsize=2, arrowstyle='->');
Explanation: Plotting the streamlines then gives the following picture:
End of explanation
# Radius of the cylinder:
R = math.sqrt(M/(2*math.pi*u1))
# Stagnation points
phi1 = -math.asin(math.sqrt(u1/(2*math.pi*M))*Gamma/(2*u1))
phi2 = -phi1+math.pi
sx1 = R*math.cos(phi1)
sy1 = R*math.sin(phi1)
sx2 = R*math.cos(phi2)
sy2 = R*math.sin(phi2)
# Set up a new plot
plt.figure(figsize=(10, 5))
plt.xlabel('x')
plt.ylabel('y')
plt.xlim(-10,10)
plt.ylim(-5,5)
# Draw the streamlines with Matplotlib's built-in function
plt.streamplot(X, Y, u_gesamt, v_gesamt,
density=2, linewidth=1, arrowsize=2,
arrowstyle='->');
# Draw the cylinder
zylinder = plt.Circle((0,0),R,color='r', alpha=0.5)
plt.gcf().gca().add_artist(zylinder)
# Mark the stagnation points
plt.scatter([sx1,sx2], [sy1,sy2], color='green',
s=50, marker='o', linewidth=0);
Explanation: One can clearly see that the flow is now deflected by the superposition of the potential vortex. The shape of the cylinder is roughly recognizable. But what radius does the cylinder have, and what pressure distribution acts on its surface?
To determine the radius we need the coordinates of a stagnation point on the cylinder. Since the diameter is not changed by the potential vortex, we can first consider only the superposition of the uniform and the dipole flow:
$$u(x,y) = u_1 - \frac{M}{2\pi}\frac{x^2-y^2}{\left(x^2-y^2\right)^2} \stackrel{!}{=} 0$$
$$v(x,y) = 0 - \frac{M}{2\pi}\frac{2xy}{\left(x^2-y^2\right)^2} \stackrel{!}{=} 0$$
or, more elegantly, in polar coordinates (cf. the potential flow formula collection):
$$u(r,\varphi) = u_1 - \frac{M}{2\pi}\frac{\cos(2\varphi)}{r^2} \stackrel{!}{=} 0$$
$$v(r,\varphi) = 0 - \frac{M}{2\pi}\frac{\sin(2\varphi)}{r^2} \stackrel{!}{=} 0$$
The second equation is satisfied when $\varphi$ is a multiple of $\frac{1}{2}\pi$. Inserting, e.g., $\varphi = \pi$ into the second-to-last equation, we obtain for the radius of the cylinder:
$$R = \pm \sqrt{\frac{M}{2\pi\cdot u_1}}$$
With this solution for the radius we now look for the angles $\varphi$ at which the stagnation points are located. For this we need the superposition of all three elementary flows:
$$u(r,\varphi) = u_1 - \frac{M}{2\pi}\frac{\cos(2\varphi)}{r^2}+\frac{\Gamma}{2\pi}\frac{\sin(\varphi)}{r} \stackrel{!}{=} 0$$
$$v(r,\varphi) = 0 - \frac{M}{2\pi}\frac{\sin(2\varphi)}{r^2} - \frac{\Gamma}{2\pi}\frac{\cos(\varphi)}{r} \stackrel{!}{=} 0$$
Substituting the solution for $r$ gives:
$$u(\varphi) = u_1 - u_1 \cos(2\varphi) + \Gamma \sqrt{\frac{u_1}{2\pi M}} \sin\varphi \stackrel{!}{=} 0$$
$$v(\varphi) = -u_1 \sin(2\varphi) - \Gamma \sqrt{\frac{u_1}{2\pi M}} \cos\varphi \stackrel{!}{=} 0$$
Solving the second-to-last equation for $\varphi$ and inserting the solutions into the last equation yields the angles at which both velocity components vanish on the radius $R$:
$$\varphi_1 = -\sin^{-1} \left(\sqrt{\frac{u_1}{2\pi M}}\frac{\Gamma}{2u_1}\right) \qquad\qquad \varphi_2 = \sin^{-1} \left(\sqrt{\frac{u_1}{2\pi M}}\frac{\Gamma}{2u_1}\right) + \pi$$
We can now extend the picture from above and draw in the cylinder and its stagnation points:
End of explanation
p1 = 102300
rho1 = 1.2
def v_zylinder(phi, u1, Gamma, M):
u = (u1
- u1 * np.cos(2*phi)
+Gamma*math.sqrt(u1/(2*math.pi*M))*np.sin(phi))
v = (-u1*np.sin(2*phi)
-Gamma*math.sqrt(u1/(2*math.pi*M))*np.cos(phi))
return u,v
def p_zylinder(phi, u1, Gamma, M, p1, rho1):
u,v = v_zylinder(phi, u1, Gamma, M)
v_abs = np.sqrt(u**2+v**2)
p = p1 + 0.5 * rho1 * (u1**2 - v_abs**2)
return p
phi = np.linspace(0, 2*math.pi, 101)
p = p_zylinder(phi, u1, Gamma, M, p1, rho1)
# Set up a new plot
plt.figure(figsize=(10, 5))
plt.ylabel('p-p1')
plt.xlabel('$\phi$')
plt.axhline(0, color='black')
plt.plot(phi, p-p1);
Explanation: We obtain the pressure distribution on the surface by inserting the velocity along the dividing streamline into the Bernoulli equation, as shown in the chapter before last.
$$p(\varphi) + \frac{\rho}{2} \cdot \left( u^2(\varphi)+v^2(\varphi) \right) = p_\infty + \frac{\rho}{2}\cdot \overrightarrow{v}_\infty^2 = const.$$
The equations for $u(\varphi)$ and $v(\varphi)$ at $r=R$ have already been derived above, so we can simply evaluate and plot the pressure distribution:
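For comparison with textbook results it can be convenient to plot the dimensionless pressure coefficient instead; a minimal sketch based on the quantities defined above:
# Pressure coefficient c_p = (p - p_inf) / (rho/2 * u_inf^2)
cp = (p - p1) / (0.5 * rho1 * u1**2)
plt.figure(figsize=(10, 5))
plt.xlabel('$\phi$')
plt.ylabel('$c_p$')
plt.axhline(0, color='black')
plt.plot(phi, cp);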
End of explanation
from scipy import integrate
def dFx(phi, u1, Gamma, M, p1, rho1, R):
return -p_zylinder(phi, u1, Gamma, M, p1, rho1) * R * np.cos(phi)
def dFy(phi, u1, Gamma, M, p1, rho1, R):
return -p_zylinder(phi, u1, Gamma, M, p1, rho1) * R * np.sin(phi)
F_x, err = integrate.quad(dFx, 0, 2*math.pi, args=(u1,Gamma,M,p1,rho1,R))
F_y, err = integrate.quad(dFy, 0, 2*math.pi, args=(u1,Gamma,M,p1,rho1,R))
print ('Drag force in x-direction: {0:7.2f}'.format(F_x))
print ('Lift force in y-direction: {0:7.2f}'.format(F_y))
Explanation: And indeed, the pressure distribution is no longer symmetric, which means that a force now acts on the body. Since the pressure force always acts perpendicular to the surface of the cylinder, we can express the $x$- and $y$-components of the force as follows:
$$F_x = -\int_0^{2\pi} p(\varphi)~R~\cos\varphi ~d\varphi$$
$$F_y = -\int_0^{2\pi} p(\varphi)~R~\sin\varphi ~d\varphi$$
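The numerically integrated lift can be cross-checked against the Kutta-Joukowski theorem, which gives a lift per unit depth of $\rho\, u_\infty\, \Gamma$ (up to the sign convention chosen for the circulation); a minimal check:
# Kutta-Joukowski lift per unit depth; compare its magnitude with F_y above.
F_y_kj = rho1 * u1 * Gamma
print('Kutta-Joukowski lift: {0:7.2f}'.format(F_y_kj))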
End of explanation
from IPython.core.display import HTML
def css_styling():
styles = open('TFDStyle.css', 'r').read()
return HTML(styles)
css_styling()
Explanation: As expected, the computed drag force is zero, because friction forces cannot be accounted for within potential-flow theory.
Experiment with the value of the circulation. For which value do the two stagnation points coincide?
Continue here, or go back to the overview here.
Copyright (c) 2018, Florian Theobald and Matthias Stripf
The following Python code can be ignored. It only loads the correct style template for the Jupyter notebooks.
End of explanation |
15,626 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Algorithms Exercise 3
Imports
Step2: Character counting and entropy
Write a function char_probs that takes a string and computes the probabilities of each character in the string
Step4: The entropy is a quantitative measure of the disorder of a probability distribution. It is used extensively in Physics, Statistics, Machine Learning, Computer Science and Information Science. Given a set of probabilities $P_i$, the entropy is defined as
Step5: Use IPython's interact function to create a user interface that allows you to type a string into a text box and see the entropy of the character probabilities of the string. | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact
Explanation: Algorithms Exercise 3
Imports
End of explanation
def char_probs(s):
    """Find the probabilities of the unique characters in the string s.

    Parameters
    ----------
    s : str
        A string of characters.

    Returns
    -------
    probs : dict
        A dictionary whose keys are the unique characters in s and whose values
        are the probabilities of those characters.
    """
    s = s.replace(' ', '')
    # Count each unique character, then normalize by the total length.
    counts = {c: s.count(c) for c in set(s)}
    total = len(s)
    return {c: n / total for c, n in counts.items()}
test1 = char_probs('aaaa')
assert np.allclose(test1['a'], 1.0)
test2 = char_probs('aabb')
assert np.allclose(test2['a'], 0.5)
assert np.allclose(test2['b'], 0.5)
test3 = char_probs('abcd')
assert np.allclose(test3['a'], 0.25)
assert np.allclose(test3['b'], 0.25)
assert np.allclose(test3['c'], 0.25)
assert np.allclose(test3['d'], 0.25)
Explanation: Character counting and entropy
Write a function char_probs that takes a string and computes the probabilities of each character in the string:
First do a character count and store the result in a dictionary.
Then divide each character count by the total number of characters to compute the normalized probabilities.
Return the dictionary of characters (keys) and probabilities (values).
End of explanation
def entropy(d):
    """Compute the entropy of a dict d whose values are probabilities."""
    # Convert the probabilities to a NumPy array and apply H = -sum(P * log2(P)).
    P = np.array(list(d.values()))
    P = P[P > 0]  # 0 * log2(0) is taken to be 0, so drop zero entries
    return -np.sum(P * np.log2(P))
entropy(char_probs('haldjfhasdf'))
assert np.allclose(entropy({'a': 0.5, 'b': 0.5}), 1.0)
assert np.allclose(entropy({'a': 1.0}), 0.0)
Explanation: The entropy is a quantitative measure of the disorder of a probability distribution. It is used extensively in Physics, Statistics, Machine Learning, Computer Science and Information Science. Given a set of probabilities $P_i$, the entropy is defined as:
$$H = - \sum_i P_i \log_2(P_i)$$
In this expression $\log_2$ is the base 2 log (np.log2), which is commonly used in information science. In Physics the natural log is often used in the definition of entropy.
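For example, a fair coin with $P = \{0.5, 0.5\}$ has $H = -(0.5\log_2 0.5 + 0.5\log_2 0.5) = 1$ bit, which is exactly what the test assertion np.allclose(entropy({'a': 0.5, 'b': 0.5}), 1.0) checks.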
Write a function entropy that computes the entropy of a probability distribution. The probability distribution will be passed as a Python dict: the values in the dict will be the probabilities.
To compute the entropy, you should:
First convert the values (probabilities) of the dict to a Numpy array of probabilities.
Then use other Numpy functions (np.log2, etc.) to compute the entropy.
Don't use any for or while loops in your code.
End of explanation
interact(lambda s: entropy(char_probs(s)), s='type a string here');
assert True  # leave this cell in place for grading of the interact exercise
Explanation: Use IPython's interact function to create a user interface that allows you to type a string into a text box and see the entropy of the character probabilities of the string.
End of explanation |
15,627 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: <table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Load the model from TensorFlow Hub.
Note: to read the documentation, just follow the model's URL
Step4: The labels file will be loaded from the model assets and is present at model.class_map_path(). You need to load it onto the class_names variable.
Step6: Add a method to verify and convert a loaded audio file to use the proper sample_rate (16K); an incorrect sample rate will affect the model's results.
Step7: Downloading and preparing the sound file
Here you will download a wav file and listen to it. If you already have a file, just upload it to Colab and use it instead.
Note: The expected audio file should be a mono wav file at 16kHz sample rate.
Step8: The wav_data needs to be normalized to values in [-1.0, 1.0] (as stated in the model's documentation).
Step9: Executing the model
Now the easy part: with the data already prepared, you just call the model and get the scores, the embeddings and the spectrogram.
The scores are the main result you will use. The spectrogram will be used for some visualizations later.
Step10: Visualization
YAMNet also returns some additional information that we can use for visualization. Let's take a look at the waveform, the spectrogram and the inferred top classes. | Python Code:
#@title Copyright 2020 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2020 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import tensorflow as tf
import tensorflow_hub as hub
import numpy as np
import csv
import matplotlib.pyplot as plt
from IPython.display import Audio
from scipy.io import wavfile
import scipy.signal  # needed by ensure_sample_rate below for resampling
Explanation: <table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/hub/tutorials/yamnet"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">在 TensorFlow.org 上查看</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/yamnet.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">在 Google Colab 中运行 </a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/yamnet.ipynb"> <img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png"> 在 GitHub 上查看源代码</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/hub/tutorials/yamnet.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">下载笔记本</a></td>
<td><a href="https://tfhub.dev/google/yamnet/1"><img src="https://tensorflow.google.cn/images/hub_logo_32px.png">查看 TF Hub 模型</a></td>
</table>
Sound classification with YAMNet
YAMNet is a deep net that predicts 521 audio event classes from the AudioSet-YouTube corpus it was trained on. It employs the Mobilenet_v1 depthwise-separable convolution architecture.
End of explanation
# Load the model.
model = hub.load('https://tfhub.dev/google/yamnet/1')
Explanation: Load the model from TensorFlow Hub.
Note: to read the documentation, just follow the model's URL
End of explanation
# Find the name of the class with the top score when mean-aggregated across frames.
def class_names_from_csv(class_map_csv_text):
  """Returns list of class names corresponding to score vector."""
class_names = []
with tf.io.gfile.GFile(class_map_csv_text) as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
class_names.append(row['display_name'])
return class_names
class_map_path = model.class_map_path().numpy()
class_names = class_names_from_csv(class_map_path)
Explanation: The labels file will be loaded from the model assets and is present at model.class_map_path(). You need to load it onto the class_names variable.
End of explanation
def ensure_sample_rate(original_sample_rate, waveform,
desired_sample_rate=16000):
  """Resample waveform if required."""
if original_sample_rate != desired_sample_rate:
desired_length = int(round(float(len(waveform)) /
original_sample_rate * desired_sample_rate))
waveform = scipy.signal.resample(waveform, desired_length)
return desired_sample_rate, waveform
Explanation: Add a method to verify and convert a loaded audio file to use the proper sample_rate (16K); an incorrect sample rate will affect the model's results.
End of explanation
!curl -O https://storage.googleapis.com/audioset/speech_whistling2.wav
!curl -O https://storage.googleapis.com/audioset/miaow_16k.wav
# wav_file_name = 'speech_whistling2.wav'
wav_file_name = 'miaow_16k.wav'
sample_rate, wav_data = wavfile.read(wav_file_name, 'rb')
sample_rate, wav_data = ensure_sample_rate(sample_rate, wav_data)
# Show some basic information about the audio.
duration = len(wav_data)/sample_rate
print(f'Sample rate: {sample_rate} Hz')
print(f'Total duration: {duration:.2f}s')
print(f'Size of the input: {len(wav_data)}')
# Listening to the wav file.
Audio(wav_data, rate=sample_rate)
Explanation: Downloading and preparing the sound file
Here you will download a wav file and listen to it. If you already have a file, just upload it to Colab and use it instead.
Note: The expected audio file should be a mono wav file at 16kHz sample rate.
End of explanation
waveform = wav_data / tf.int16.max
Explanation: The wav_data needs to be normalized to values in [-1.0, 1.0] (as stated in the model's documentation).
End of explanation
# Run the model, check the output.
scores, embeddings, spectrogram = model(waveform)
scores_np = scores.numpy()
spectrogram_np = spectrogram.numpy()
infered_class = class_names[scores_np.mean(axis=0).argmax()]
print(f'The main sound is: {infered_class}')
Explanation: Executing the model
Now the easy part: with the data already prepared, you just call the model and get the scores, the embeddings and the spectrogram.
The scores are the main result you will use. The spectrogram will be used for some visualizations later.
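If you want to look beyond the single top label, a small sketch (using only the variables defined above) that prints the five highest-scoring classes averaged over all frames:
# Print the five classes with the highest mean score across frames.
mean_scores_check = scores_np.mean(axis=0)
for i in np.argsort(mean_scores_check)[::-1][:5]:
    print(f'{class_names[i]}: {mean_scores_check[i]:.3f}')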
End of explanation
plt.figure(figsize=(10, 6))
# Plot the waveform.
plt.subplot(3, 1, 1)
plt.plot(waveform)
plt.xlim([0, len(waveform)])
# Plot the log-mel spectrogram (returned by the model).
plt.subplot(3, 1, 2)
plt.imshow(spectrogram_np.T, aspect='auto', interpolation='nearest', origin='lower')
# Plot and label the model output scores for the top-scoring classes.
mean_scores = np.mean(scores, axis=0)
top_n = 10
top_class_indices = np.argsort(mean_scores)[::-1][:top_n]
plt.subplot(3, 1, 3)
plt.imshow(scores_np[:, top_class_indices].T, aspect='auto', interpolation='nearest', cmap='gray_r')
# patch_padding = (PATCH_WINDOW_SECONDS / 2) / PATCH_HOP_SECONDS
# values from the model documentation
patch_padding = (0.025 / 2) / 0.01
plt.xlim([-patch_padding-0.5, scores.shape[0] + patch_padding-0.5])
# Label the top_N classes.
yticks = range(0, top_n, 1)
plt.yticks(yticks, [class_names[top_class_indices[x]] for x in yticks])
_ = plt.ylim(-0.5 + np.array([top_n, 0]))
Explanation: Visualization
YAMNet also returns some additional information that we can use for visualization. Let's take a look at the waveform, the spectrogram and the inferred top classes.
End of explanation |
15,628 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Scaling behavior
Step1: Tools
For generating scaling scenarios and running it through guv
Step2: Workers versus queue length
For different processing times (fixed deadline).
Should generally be linear, as guv has a proportional scaling algorithm
Step3: Workers versus closeness to deadline
By moving processing time towards deadline.
Could alternatively have moved deadline down towards a fixed processing time. | Python Code:
%matplotlib inline
import subprocess
import json
import os
import pandas
import numpy
Explanation: Scaling behavior
End of explanation
def normalize_scenario(flat):
# Pop out messages
messages = flat['messages']
# Everything else assumed to be config
roleconfig = flat.copy()
del roleconfig['messages']
    config = { 'test': roleconfig }
s = {
'role': 'test',
'config': config,
'messages': messages,
}
return s
def calculate_scaling(scenarios, timeout=10):
normalized = [normalize_scenario(s) for s in scenarios]
serialized = json.dumps(normalized)
args = ['../node_modules/.bin/coffee', './scalescenarios.coffee']
try:
stdout = subprocess.check_output(args, input=serialized,
encoding='utf-8', timeout=timeout, stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as e:
raise Exception(e.stdout)
try:
results = json.loads(stdout)
except Exception as e:
        print('Could not parse stdout', stdout)
        raise
return results
def merge(a, b):
d = a.copy()
d.update(b)
return d
Explanation: Tools
For generating scaling scenarios and running it through guv
End of explanation
base = dict(minimum=0, maximum=20, processing=2, deadline=10, messages=-1)
messages = range(0, 50)
def versus_messages(processing):
scenarios = [ merge(base, { 'messages': m, 'processing': processing }) for m in messages ]
res = calculate_scaling(scenarios)
return res
df = pandas.DataFrame({
'messages': messages,
'1s processing time': versus_messages(processing=1),
'2s processing time': versus_messages(processing=2),
'3s processing time': versus_messages(processing=3),
'4s processing time': versus_messages(processing=4),
})
ax = df.plot(x='messages', figsize=(10, 5))
ax.set_ylabel("workers")
ax
Explanation: Workers versus queue length
For different processing times (fixed deadline).
Should generally be linear, as guv has a proportional scaling algorithm
End of explanation
processing_times = numpy.arange(0.1, 10.0, 0.2)
def versus_processing(messages):
base = dict(minimum=0, maximum=20, processing=2, deadline=10, concurrency=1, messages=-1)
scenarios = [ merge(base, { 'messages': messages, 'processing': p }) for p in processing_times ]
res = calculate_scaling(scenarios)
return res
df = pandas.DataFrame({
'processing': processing_times,
'0 message': versus_processing(messages=0),
'1 message': versus_processing(messages=1),
'4 message': versus_processing(messages=4),
'8 message': versus_processing(messages=8),
'32 message': versus_processing(messages=32),
})
ax = df.plot(x='processing', figsize=(10, 5))
ax.set_ylabel("workers")
ax
Explanation: Workers versus closeness to deadline
By moving processing time towards deadline.
Could alternatively have moved deadline down towards a fixed processing time.
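A hedged sketch of that alternative sweep, reusing merge, calculate_scaling and the base dict defined earlier (the deadline values are arbitrary):
deadlines = numpy.arange(2.0, 20.0, 0.5)
def versus_deadline(messages, processing=2):
    # Sweep the deadline while keeping processing time fixed.
    scenarios = [merge(base, {'messages': messages, 'processing': processing, 'deadline': d})
                 for d in deadlines]
    return calculate_scaling(scenarios)
# e.g. versus_deadline(messages=8)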
End of explanation |
15,629 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Welcome to Jupyter Notebook
This is a Jupyter Notebook. It works just like python console, but you can freely go back and forth and reexecute pieces (cells) of code.
To run code press Shift+Enter
Step1: I can go to the second cell and run it again and again making "a" bigger and bigger.
You can always reset the "state" of the notebook by doing "Kernel->Restart". You can also rerun all of the cells from the top, by doing "Cell->Run all". Try it now.
Jupyter allows you to run any bash command by using the exclamation mark "!"
Step2: You can use variables defined in python in the bash calls using brackets "{}"
Step3: It's easy to get help in the notebook - just follow a command with question mark '?'
Step4: Plotting
To plot data in python we are going to use matplotlib and seaborn.
Step5: We'll need numpy for generating some random data
Step6: Where is the plot?!? We need to explicitly tell the notebook to put it inline
Step7: We can also plot a scatterplot
Step8: We can also plot an array
Step9: Histograms
Step10: Exercise
Step11: We can read shape and headers of a nifti file without loading it into memory
Step12: We can also get the data
Step13: We can create new files
Step14: Task
Step15: Exercise
Step16: Notice that if we run it again nothing gets recalculated - nipype will be smart enough to use the cache
Step17: But if we change the inputs new calculation will be triggered | Python Code:
a = 2
a += 2
print(a)
Explanation: Welcome to Jupyter Notebook
This is a Jupyter Notebook. It works just like python console, but you can freely go back and forth and reexecute pieces (cells) of code.
To run code press Shift+Enter
End of explanation
!ls -al
Explanation: I can go to the second cell and run it again and again making "a" bigger and bigger.
You can always reset the "state" of the notebook by doing "Kernel->Restart". You can also rerun all of the cells from the top, by doing "Cell->Run all". Try it now.
Jupyter allows you to run any bash command by using the exclamation mark "!":
End of explanation
name = "Chris"
text = "My name is " + name
!echo "{text}" > test.txt
!cat test.txt
Explanation: You can use variables defined in python in the bash calls using brackets "{}":
End of explanation
int?
Explanation: It's easy to get help in the notebook - just follow a command with question mark '?'
End of explanation
import pylab as plt
import seaborn as sns
Explanation: Plotting
To plot data in python we are going to use matplotlib and seaborn.
End of explanation
import numpy as np
random_data = np.random.rand(10,10)
plt.plot(random_data)
Explanation: We'll need numpy for generating some random data
End of explanation
%matplotlib inline
plt.plot(random_data)
Explanation: Where is the plot?!? We need to explicitly tell the notebook to put it inline
End of explanation
plt.scatter(random_data[0,:], random_data[1,:])
Explanation: We can also plot a scatterplot
End of explanation
sns.heatmap(random_data)
Explanation: We can also plot an array:
End of explanation
print('shape of random_data:',random_data.shape)
print('shape of random_data.ravel():',random_data.ravel().shape)
sns.distplot(random_data.ravel())
Explanation: Histograms
End of explanation
import nibabel as nb
import os
datadir="../../../data/ds003"
if not os.path.exists(datadir):
# custom location for Russ's laptop
datadir='/Users/poldrack/data_unsynced/ds003'
boldfile=os.path.join(datadir,"sub001/BOLD/task001_run001/bold.nii.gz")
nii = nb.load(boldfile)
Explanation: Exercise: sample data from a normal distribution and plot its distribution
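One possible solution for this exercise (a sketch; the sample size is arbitrary):
# Draw 1000 samples from a standard normal distribution and plot a histogram/KDE.
normal_data = np.random.randn(1000)
sns.distplot(normal_data)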
Reading NIFTI files using nibabel
Nibabel is a library for reading and writing various neuroimaging data formats
End of explanation
nii.shape
header = nii.get_header()
header.get_xyzt_units()
nii.get_affine()
Explanation: We can read shape and headers of a nifti file without loading it into memory
End of explanation
data = nii.get_data()
print(data.shape)
sns.set_style("white")
plt.imshow(data[:,:,10,1])
Explanation: We can also get the data:
End of explanation
new_nii = nb.Nifti1Image(np.random.rand(*(nii.shape[:-1])), nii.get_affine())
new_nii.to_filename("/tmp/test.nii.gz")
print(new_nii.shape)
!fslview /tmp/test.nii.gz
Explanation: We can create new files
End of explanation
import nilearn.plotting, nilearn.image
nilearn.plotting.plot_epi(new_nii)
nilearn.plotting.plot_epi(nii)
nilearn.plotting.plot_epi(nilearn.image.index_img(nii,0))
nilearn.plotting.plot_epi(nilearn.image.mean_img(nii))
nilearn.plotting.plot_anat(os.path.join(datadir,"sub001/anatomy/inplane.nii.gz"))
Explanation: Task: load an anatomical scan from "../../../data/ds003/sub001/anatomy/inplane.nii.gz", threshold it, and save the data back to a file in your home directory
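A sketch of one way to solve this task (the threshold value and the output filename are arbitrary choices):
anat = nb.load(os.path.join(datadir, "sub001/anatomy/inplane.nii.gz"))
anat_data = anat.get_data()
anat_data[anat_data < 500] = 0  # arbitrary threshold
nb.Nifti1Image(anat_data, anat.get_affine()).to_filename(
    os.path.expanduser("~/inplane_thresholded.nii.gz"))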
Image manipulation using nilearn
nilearn is a python package for doing machine learning in neuroimaging. We will primarily use it to plot and manipulate images
End of explanation
from nipype.caching import Memory
from nipype.interfaces import fsl
mem = Memory(base_dir='.')
fslmean = mem.cache(fsl.maths.MeanImage)
fslmean_results = fslmean(in_file=os.path.join(datadir,"sub001/BOLD/task001_run001/bold.nii.gz"))
fslmean_results.outputs
nilearn.plotting.plot_epi(fslmean_results.outputs.out_file)
Explanation: Exercise: explore plotting options - could you plot the anatomical image as a series of axial slices?
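One possible answer (a sketch using nilearn's display_mode and cut_coords options):
# Plot the anatomical image as 10 axial (z) slices.
nilearn.plotting.plot_anat(os.path.join(datadir, "sub001/anatomy/inplane.nii.gz"),
                           display_mode='z', cut_coords=10)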
Nipype
Nipype is a Python library for interacting with various neuroimaging programs. It's primarily used for constructing workflows that are later executed on many subjects on a cluster. For this workshop we will primarily use it to cache the results of command lines
End of explanation
fslmean_results = fslmean(in_file=os.path.join(datadir,"sub001/BOLD/task001_run001/bold.nii.gz"))
fslmean_results.outputs
Explanation: Notice that if we run it again nothing gets recalculated - nipype will be smart enough to use the cache
End of explanation
fslmean_results = fslmean(in_file=os.path.join(datadir,"sub003/BOLD/task001_run001/bold.nii.gz"))
fslmean_results.outputs
Explanation: But if we change the inputs new calculation will be triggered
End of explanation |
15,630 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Signal-space separation (SSS) and Maxwell filtering
This tutorial covers reducing environmental noise and compensating for head
movement with SSS and Maxwell filtering.
As usual we'll start by importing the modules we need, loading some
example data <sample-dataset>, and cropping it to save on memory
Step1: Background on SSS and Maxwell filtering
Signal-space separation (SSS)
Step2: Before we perform SSS we'll look for bad channels — MEG 2443 is quite
noisy.
<div class="alert alert-danger"><h4>Warning</h4><p>It is critical to mark bad channels in ``raw.info['bads']`` *before*
calling
Step3: <div class="alert alert-info"><h4>Note</h4><p>`~mne.preprocessing.find_bad_channels_maxwell` needs to operate on
a signal without line noise or cHPI signals. By default, it simply
applies a low-pass filter with a cutoff frequency of 40 Hz to the
data, which should remove these artifacts. You may also specify a
different cutoff by passing the ``h_freq`` keyword argument. If you
set ``h_freq=None``, no filtering will be applied. This can be
useful if your data has already been preconditioned, for example
using
Step4: We called ~mne.preprocessing.find_bad_channels_maxwell with the optional
keyword argument return_scores=True, causing the function to return a
dictionary of all data related to the scoring used to classify channels as
noisy or flat. This information can be used to produce diagnostic figures.
In the following, we will generate such visualizations for
the automated detection of noisy gradiometer channels.
Step5: <div class="alert alert-info"><h4>Note</h4><p>You can use the very same code as above to produce figures for
*flat* channel detection. Simply replace the word "noisy" with
"flat", and replace ``vmin=np.nanmin(limits)`` with
``vmax=np.nanmax(limits)``.</p></div>
You can see the un-altered scores for each channel and time segment in the
left subplots, and thresholded scores – those which exceeded a certain limit
of noisiness – in the right subplots. While the right subplot is entirely
white for the magnetometers, we can see a horizontal line extending all the
way from left to right for the gradiometers. This line corresponds to channel
MEG 2443, which was reported as auto-detected noisy channel in the step
above. But we can also see another channel exceeding the limits, apparently
in a more transient fashion. It was therefore not detected as bad, because
the number of segments in which it exceeded the limits was less than 5,
which MNE-Python uses by default.
<div class="alert alert-info"><h4>Note</h4><p>You can request a different number of segments that must be
found to be problematic before
`~mne.preprocessing.find_bad_channels_maxwell` reports them as bad.
To do this, pass the keyword argument ``min_count`` to the
function.</p></div>
Obviously, this algorithm is not perfect. Specifically, on closer inspection
of the raw data after looking at the diagnostic plots above, it becomes clear
that the channel exceeding the "noise" limits in some segments without
qualifying as "bad", in fact contains some flux jumps. There were just not
enough flux jumps in the recording for our automated procedure to report
the channel as bad. So it can still be useful to manually inspect and mark
bad channels. The channel in question is MEG 2313. Let's mark it as bad
Step6: After that, performing SSS and Maxwell filtering is done with a
single call to
Step7: To see the effect, we can plot the data before and after SSS / Maxwell
filtering.
Step8: Notice that channels marked as "bad" have been effectively repaired by SSS,
eliminating the need to perform interpolation <tut-bad-channels>.
The heartbeat artifact has also been substantially reduced.
The | Python Code:
import os
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
import mne
from mne.preprocessing import find_bad_channels_maxwell
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
raw.crop(tmax=60)
Explanation: Signal-space separation (SSS) and Maxwell filtering
This tutorial covers reducing environmental noise and compensating for head
movement with SSS and Maxwell filtering.
As usual we'll start by importing the modules we need, loading some
example data <sample-dataset>, and cropping it to save on memory:
End of explanation
fine_cal_file = os.path.join(sample_data_folder, 'SSS', 'sss_cal_mgh.dat')
crosstalk_file = os.path.join(sample_data_folder, 'SSS', 'ct_sparse_mgh.fif')
Explanation: Background on SSS and Maxwell filtering
Signal-space separation (SSS) :footcite:TauluKajola2005,TauluSimola2006
is a technique based on the physics
of electromagnetic fields. SSS separates the measured signal into components
attributable to sources inside the measurement volume of the sensor array
(the internal components), and components attributable to sources outside
the measurement volume (the external components). The internal and external
components are linearly independent, so it is possible to simply discard the
external components to reduce environmental noise. Maxwell filtering is a
related procedure that omits the higher-order components of the internal
subspace, which are dominated by sensor noise. Typically, Maxwell filtering
and SSS are performed together (in MNE-Python they are implemented together
in a single function).
Like SSP <tut-artifact-ssp>, SSS is a form of projection. Whereas SSP
empirically determines a noise subspace based on data (empty-room recordings,
EOG or ECG activity, etc) and projects the measurements onto a subspace
orthogonal to the noise, SSS mathematically constructs the external and
internal subspaces from spherical harmonics_ and reconstructs the sensor
signals using only the internal subspace (i.e., does an oblique projection).
<div class="alert alert-danger"><h4>Warning</h4><p>Maxwell filtering was originally developed for Elekta Neuromag® systems,
and should be considered *experimental* for non-Neuromag data. See the
Notes section of the :func:`~mne.preprocessing.maxwell_filter` docstring
for details.</p></div>
The MNE-Python implementation of SSS / Maxwell filtering currently provides
the following features:
Basic bad channel detection
(:func:~mne.preprocessing.find_bad_channels_maxwell)
Bad channel reconstruction
Cross-talk cancellation
Fine calibration correction
tSSS
Coordinate frame translation
Regularization of internal components using information theory
Raw movement compensation (using head positions estimated by MaxFilter)
cHPI subtraction (see :func:mne.chpi.filter_chpi)
Handling of 3D (in addition to 1D) fine calibration files
Epoch-based movement compensation as described in
:footcite:TauluKajola2005 through :func:mne.epochs.average_movements
Experimental processing of data from (un-compensated) non-Elekta
systems
Using SSS and Maxwell filtering in MNE-Python
For optimal use of SSS with data from Elekta Neuromag® systems, you should
provide the path to the fine calibration file (which encodes site-specific
information about sensor orientation and calibration) as well as a crosstalk
compensation file (which reduces interference between Elekta's co-located
magnetometer and paired gradiometer sensor units).
End of explanation
raw.info['bads'] = []
raw_check = raw.copy()
auto_noisy_chs, auto_flat_chs, auto_scores = find_bad_channels_maxwell(
raw_check, cross_talk=crosstalk_file, calibration=fine_cal_file,
return_scores=True, verbose=True)
print(auto_noisy_chs) # we should find them!
print(auto_flat_chs) # none for this dataset
Explanation: Before we perform SSS we'll look for bad channels — MEG 2443 is quite
noisy.
<div class="alert alert-danger"><h4>Warning</h4><p>It is critical to mark bad channels in ``raw.info['bads']`` *before*
calling :func:`~mne.preprocessing.maxwell_filter` in order to prevent
bad channel noise from spreading.</p></div>
Let's see if we can automatically detect it.
End of explanation
bads = raw.info['bads'] + auto_noisy_chs + auto_flat_chs
raw.info['bads'] = bads
Explanation: <div class="alert alert-info"><h4>Note</h4><p>`~mne.preprocessing.find_bad_channels_maxwell` needs to operate on
a signal without line noise or cHPI signals. By default, it simply
applies a low-pass filter with a cutoff frequency of 40 Hz to the
data, which should remove these artifacts. You may also specify a
different cutoff by passing the ``h_freq`` keyword argument. If you
set ``h_freq=None``, no filtering will be applied. This can be
useful if your data has already been preconditioned, for example
using :func:`mne.chpi.filter_chpi`,
:func:`mne.io.Raw.notch_filter`, or :meth:`mne.io.Raw.filter`.</p></div>
Now we can update the list of bad channels in the dataset.
End of explanation
# Only select the data for gradiometer channels.
ch_type = 'grad'
ch_subset = auto_scores['ch_types'] == ch_type
ch_names = auto_scores['ch_names'][ch_subset]
scores = auto_scores['scores_noisy'][ch_subset]
limits = auto_scores['limits_noisy'][ch_subset]
bins = auto_scores['bins'] # The the windows that were evaluated.
# We will label each segment by its start and stop time, with up to 3
# digits before and 3 digits after the decimal place (1 ms precision).
bin_labels = [f'{start:3.3f} – {stop:3.3f}'
for start, stop in bins]
# We store the data in a Pandas DataFrame. The seaborn heatmap function
# we will call below will then be able to automatically assign the correct
# labels to all axes.
data_to_plot = pd.DataFrame(data=scores,
columns=pd.Index(bin_labels, name='Time (s)'),
index=pd.Index(ch_names, name='Channel'))
# First, plot the "raw" scores.
fig, ax = plt.subplots(1, 2, figsize=(12, 8))
fig.suptitle(f'Automated noisy channel detection: {ch_type}',
fontsize=16, fontweight='bold')
sns.heatmap(data=data_to_plot, cmap='Reds', cbar_kws=dict(label='Score'),
ax=ax[0])
[ax[0].axvline(x, ls='dashed', lw=0.25, dashes=(25, 15), color='gray')
for x in range(1, len(bins))]
ax[0].set_title('All Scores', fontweight='bold')
# Now, adjust the color range to highlight segments that exceeded the limit.
sns.heatmap(data=data_to_plot,
vmin=np.nanmin(limits), # bads in input data have NaN limits
cmap='Reds', cbar_kws=dict(label='Score'), ax=ax[1])
[ax[1].axvline(x, ls='dashed', lw=0.25, dashes=(25, 15), color='gray')
for x in range(1, len(bins))]
ax[1].set_title('Scores > Limit', fontweight='bold')
# The figure title should not overlap with the subplots.
fig.tight_layout(rect=[0, 0.03, 1, 0.95])
Explanation: We called ~mne.preprocessing.find_bad_channels_maxwell with the optional
keyword argument return_scores=True, causing the function to return a
dictionary of all data related to the scoring used to classify channels as
noisy or flat. This information can be used to produce diagnostic figures.
In the following, we will generate such visualizations for
the automated detection of noisy gradiometer channels.
End of explanation
raw.info['bads'] += ['MEG 2313'] # from manual inspection
Explanation: <div class="alert alert-info"><h4>Note</h4><p>You can use the very same code as above to produce figures for
*flat* channel detection. Simply replace the word "noisy" with
"flat", and replace ``vmin=np.nanmin(limits)`` with
``vmax=np.nanmax(limits)``.</p></div>
You can see the un-altered scores for each channel and time segment in the
left subplots, and thresholded scores – those which exceeded a certain limit
of noisiness – in the right subplots. While the right subplot is entirely
white for the magnetometers, we can see a horizontal line extending all the
way from left to right for the gradiometers. This line corresponds to channel
MEG 2443, which was reported as auto-detected noisy channel in the step
above. But we can also see another channel exceeding the limits, apparently
in a more transient fashion. It was therefore not detected as bad, because
the number of segments in which it exceeded the limits was less than 5,
which MNE-Python uses by default.
<div class="alert alert-info"><h4>Note</h4><p>You can request a different number of segments that must be
found to be problematic before
`~mne.preprocessing.find_bad_channels_maxwell` reports them as bad.
To do this, pass the keyword argument ``min_count`` to the
function.</p></div>
Obviously, this algorithm is not perfect. Specifically, on closer inspection
of the raw data after looking at the diagnostic plots above, it becomes clear
that the channel exceeding the "noise" limits in some segments without
qualifying as "bad", in fact contains some flux jumps. There were just not
enough flux jumps in the recording for our automated procedure to report
the channel as bad. So it can still be useful to manually inspect and mark
bad channels. The channel in question is MEG 2313. Let's mark it as bad:
End of explanation
raw_sss = mne.preprocessing.maxwell_filter(
raw, cross_talk=crosstalk_file, calibration=fine_cal_file, verbose=True)
Explanation: After that, performing SSS and Maxwell filtering is done with a
single call to :func:~mne.preprocessing.maxwell_filter, with the crosstalk
and fine calibration filenames provided (if available):
End of explanation
raw.pick(['meg']).plot(duration=2, butterfly=True)
raw_sss.pick(['meg']).plot(duration=2, butterfly=True)
Explanation: To see the effect, we can plot the data before and after SSS / Maxwell
filtering.
End of explanation
head_pos_file = os.path.join(mne.datasets.testing.data_path(), 'SSS',
'test_move_anon_raw.pos')
head_pos = mne.chpi.read_head_pos(head_pos_file)
mne.viz.plot_head_positions(head_pos, mode='traces')
Explanation: Notice that channels marked as "bad" have been effectively repaired by SSS,
eliminating the need to perform interpolation <tut-bad-channels>.
The heartbeat artifact has also been substantially reduced.
The :func:~mne.preprocessing.maxwell_filter function has parameters
int_order and ext_order for setting the order of the spherical
harmonic expansion of the interior and exterior components; the default
values are appropriate for most use cases. Additional parameters include
coord_frame and origin for controlling the coordinate frame ("head"
or "meg") and the origin of the sphere; the defaults are appropriate for most
studies that include digitization of the scalp surface / electrodes. See the
documentation of :func:~mne.preprocessing.maxwell_filter for details.
Spatiotemporal SSS (tSSS)
An assumption of SSS is that the measurement volume (the spherical shell
where the sensors are physically located) is free of electromagnetic sources.
The thickness of this source-free measurement shell should be 4-8 cm for SSS
to perform optimally. In practice, there may be sources falling within that
measurement volume; these can often be mitigated by using Spatiotemporal
Signal Space Separation (tSSS) :footcite:TauluSimola2006.
tSSS works by looking for temporal
correlation between components of the internal and external subspaces, and
projecting out any components that are common to the internal and external
subspaces. The projection is done in an analogous way to
SSP <tut-artifact-ssp>, except that the noise vector is computed
across time points instead of across sensors.
To use tSSS in MNE-Python, pass a time (in seconds) to the parameter
st_duration of :func:~mne.preprocessing.maxwell_filter. This will
determine the "chunk duration" over which to compute the temporal projection.
The chunk duration effectively acts as a high-pass filter with a cutoff
frequency of $\frac{1}{\mathtt{st_duration}}~\mathrm{Hz}$; this
effective high-pass has an important consequence:
In general, larger values of st_duration are better (provided that your
computer has sufficient memory) because larger values of st_duration
will have a smaller effect on the signal.
If the chunk duration does not evenly divide your data length, the final
(shorter) chunk will be added to the prior chunk before filtering, leading
to slightly different effective filtering for the combined chunk (the
effective cutoff frequency differing at most by a factor of 2). If you need
to ensure identical processing of all analyzed chunks, either:
choose a chunk duration that evenly divides your data length (only
recommended if analyzing a single subject or run), or
include at least 2 * st_duration of post-experiment recording time at
the end of the :class:~mne.io.Raw object, so that the data you intend to
further analyze is guaranteed not to be in the final or penultimate chunks.
Additional parameters affecting tSSS include st_correlation (to set the
correlation value above which correlated internal and external components
will be projected out) and st_only (to apply only the temporal projection
without also performing SSS and Maxwell filtering). See the docstring of
:func:~mne.preprocessing.maxwell_filter for details.
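As a concrete sketch (not executed in this tutorial), a tSSS call with a 10-second buffer might look like:
raw_tsss = mne.preprocessing.maxwell_filter(
    raw, cross_talk=crosstalk_file, calibration=fine_cal_file,
    st_duration=10., verbose=True)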
Movement compensation
If you have information about subject head position relative to the sensors
(i.e., continuous head position indicator coils, or :term:cHPI), SSS
can take that into account when projecting sensor data onto the internal
subspace. Head position data can be computed using
:func:mne.chpi.compute_chpi_locs and :func:mne.chpi.compute_head_pos,
or loaded with the:func:mne.chpi.read_head_pos function. The
example data <sample-dataset> doesn't include cHPI, so here we'll
load a :file:.pos file used for testing, just to demonstrate:
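For completeness, a sketch of how head positions would be passed to Maxwell filtering; this is illustrative only, since the positions loaded in this example come from a different test recording than the sample data:
# Illustrative only (the .pos file in this example belongs to a different recording):
# raw_sss_mc = mne.preprocessing.maxwell_filter(
#     raw, cross_talk=crosstalk_file, calibration=fine_cal_file,
#     head_pos=head_pos)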
End of explanation |
15,631 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Aggregate all blast results
Step1: 2. Write annotated results out | Python Code:
blast_file_regex = re.compile(r"(blast[np])_vs_([a-zA-Z0-9_]+).tsv")
blast_cols = ["query_id","subject_id","pct_id","ali_len","mism",
"gap_open","q_start","q_end","s_start","s_end",
"e_value","bitscore","q_len","s_len","s_gi",
"s_taxids","s_scinames","s_names","q_cov","s_description"
]
#blast_cols = "qseqid sseqid pident length mismatch gapopen qstart qend sstart send evalue bitscore qlen slen sgi staxids sscinames scomnames qcovs stitle"
blast_hits = []
for blast_filename in glob("2_blast/*.tsv"):
tool_id,db_id = blast_file_regex.search(blast_filename).groups()
blast_hits.append( pd.read_csv(blast_filename,sep="\t",header=None,names=blast_cols) )
blast_hits[-1]["tool"] = tool_id
blast_hits[-1]["db"] = db_id
all_blast_hits = blast_hits[0]
for search_hits in blast_hits[1:]:
all_blast_hits = all_blast_hits.append(search_hits)
print(all_blast_hits.shape)
all_blast_hits.head()
Explanation: 1. Aggregate all blast results
End of explanation
all_blast_hits.sort_values(by=["query_id","bitscore"],ascending=False).to_csv("2_blastp_hits.csv",index=False)
Explanation: 2. Write annotated results out
End of explanation |
15,632 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Running Tune experiments with ZOOpt
In this tutorial we introduce ZOOpt, while running a simple Ray Tune experiment. Tune’s Search Algorithms integrate with ZOOpt and, as a result, allow you to seamlessly scale up a ZOOpt optimization process - without sacrificing performance.
Zeroth-order optimization (ZOOpt) does not rely on the gradient of the objective function, but instead, learns from samples of the search space. It is suitable for optimizing functions that are nondifferentiable, with many local minima, or even unknown but only testable. Therefore, zeroth-order optimization is commonly referred to as "derivative-free optimization" and "black-box optimization". In this example we minimize a simple objective to briefly demonstrate the usage of ZOOpt with Ray Tune via ZOOptSearch. It's useful to keep in mind that despite the emphasis on machine learning experiments, Ray Tune optimizes any implicit or explicit objective. Here we assume zoopt==0.4.1 library is installed. To learn more, please refer to the ZOOpt website.
Step1: Click below to see all the imports we need for this example.
You can also launch directly into a Binder instance to run this notebook yourself.
Just click on the rocket symbol at the top of the navigation.
Step2: Let's start by defining a simple evaluation function.
We artificially sleep for a bit (0.1 seconds) to simulate a long-running ML experiment.
This setup assumes that we're running multiple steps of an experiment and try to tune two hyperparameters,
namely width and height, and activation.
Step3: Next, our objective function takes a Tune config, evaluates the score of your experiment in a training loop,
and uses tune.report to report the score back to Tune.
Step4: Next we define a search space. The critical assumption is that the optimal hyperparameters live within this space. Yet, if the space is very large, then those hyperparameters may be difficult to find in a short amount of time.
Step5: The number of samples is the number of hyperparameter combinations that will be tried out. This Tune run is set to 1000 samples.
(you can decrease this if it takes too long on your machine).
Step6: Next we define the search algorithm built from ZOOptSearch, constrained to a maximum of 8 concurrent trials via ZOOpt's internal "parallel_num".
Step7: Finally, we run the experiment to "min"imize the "mean_loss" of the objective by searching search_config via algo, num_samples times. This previous sentence is fully characterizes the search problem we aim to solve. With this in mind, notice how efficient it is to execute tune.run().
Step8: Here are the hyperparamters found to minimize the mean loss of the defined objective.
Step9: Optional
Step10: ZOOpt again handles constraining the amount of concurrent trials with "parallel_num".
Step11: This time we pass only "steps" and "activation" to the Tune config because "height" and "width" have been passed into ZOOptSearch to create the search_algo.
Again, we run the experiment to "min"imize the "mean_loss" of the objective by searching search_config via algo, num_samples times.
Step12: Here are the hyperparamters found to minimize the mean loss of the defined objective. | Python Code:
# !pip install ray[tune]
!pip install zoopt==0.4.1
Explanation: Running Tune experiments with ZOOpt
In this tutorial we introduce ZOOpt, while running a simple Ray Tune experiment. Tune’s Search Algorithms integrate with ZOOpt and, as a result, allow you to seamlessly scale up a ZOOpt optimization process - without sacrificing performance.
Zeroth-order optimization (ZOOpt) does not rely on the gradient of the objective function, but instead, learns from samples of the search space. It is suitable for optimizing functions that are nondifferentiable, with many local minima, or even unknown but only testable. Therefore, zeroth-order optimization is commonly referred to as "derivative-free optimization" and "black-box optimization". In this example we minimize a simple objective to briefly demonstrate the usage of ZOOpt with Ray Tune via ZOOptSearch. It's useful to keep in mind that despite the emphasis on machine learning experiments, Ray Tune optimizes any implicit or explicit objective. Here we assume zoopt==0.4.1 library is installed. To learn more, please refer to the ZOOpt website.
End of explanation
import time
import ray
from ray import tune
from ray.tune.suggest.zoopt import ZOOptSearch
from zoopt import ValueType
Explanation: Click below to see all the imports we need for this example.
You can also launch directly into a Binder instance to run this notebook yourself.
Just click on the rocket symbol at the top of the navigation.
End of explanation
def evaluate(step, width, height):
time.sleep(0.1)
return (0.1 + width * step / 100) ** (-1) + height * 0.1
Explanation: Let's start by defining a simple evaluation function.
We artificially sleep for a bit (0.1 seconds) to simulate a long-running ML experiment.
This setup assumes that we're running multiple steps of an experiment and try to tune two hyperparameters,
namely width and height, and activation.
End of explanation
def objective(config):
for step in range(config["steps"]):
score = evaluate(step, config["width"], config["height"])
tune.report(iterations=step, mean_loss=score)
ray.init(configure_logging=False)
Explanation: Next, our objective function takes a Tune config, evaluates the score of your experiment in a training loop,
and uses tune.report to report the score back to Tune.
End of explanation
search_config = {
"steps": 100,
"width": tune.randint(0, 10),
"height": tune.quniform(-10, 10, 1e-2),
"activation": tune.choice(["relu, tanh"])
}
Explanation: Next we define a search space. The critical assumption is that the optimal hyperparameters live within this space. Yet, if the space is very large, then those hyperparameters may be difficult to find in a short amount of time.
End of explanation
num_samples = 1000
# If 1000 samples take too long, you can reduce this number.
# We override this number here for our smoke tests.
num_samples = 10
Explanation: The number of samples is the number of hyperparameter combinations that will be tried out. This Tune run is set to 1000 samples.
(you can decrease this if it takes too long on your machine).
End of explanation
zoopt_config = {
"parallel_num": 8
}
algo = ZOOptSearch(
algo="Asracos", # only supports ASRacos currently
budget=num_samples,
**zoopt_config,
)
Explanation: Next we define the search algorithm built from ZOOptSearch, constrained to a maximum of 8 concurrent trials via ZOOpt's internal "parallel_num".
End of explanation
analysis = tune.run(
objective,
search_alg=algo,
metric="mean_loss",
mode="min",
name="zoopt_exp",
num_samples=num_samples,
config=search_config
)
Explanation: Finally, we run the experiment to "min"imize the "mean_loss" of the objective by searching search_config via algo, num_samples times. This previous sentence is fully characterizes the search problem we aim to solve. With this in mind, notice how efficient it is to execute tune.run().
End of explanation
print("Best hyperparameters found were: ", analysis.best_config)
Explanation: Here are the hyperparamters found to minimize the mean loss of the defined objective.
End of explanation
space = {
"height": (ValueType.CONTINUOUS, [-10, 10], 1e-2),
"width": (ValueType.DISCRETE, [0, 10], True),
"layers": (ValueType.GRID, [4, 8, 16])
}
Explanation: Optional: passing the parameter space into the search algorithm
We can also pass the parameter space ourselves in the following formats:
- continuous dimensions: (continuous, search_range, precision)
- discrete dimensions: (discrete, search_range, has_order)
- grid dimensions: (grid, grid_list)
End of explanation
zoopt_search_config = {
"parallel_num": 8,
"metric": "mean_loss",
"mode": "min"
}
algo = ZOOptSearch(
algo="Asracos",
budget=num_samples,
dim_dict=space,
**zoopt_search_config
)
Explanation: ZOOpt again handles constraining the amount of concurrent trials with "parallel_num".
End of explanation
analysis = tune.run(
objective,
search_alg=algo,
metric="mean_loss",
mode="min",
name="zoopt_exp",
num_samples=num_samples,
config={
"steps": 100,
}
)
Explanation: This time we pass only "steps" and "activation" to the Tune config because "height" and "width" have been passed into ZOOptSearch to create the search_algo.
Again, we run the experiment to "min"imize the "mean_loss" of the objective by searching search_config via algo, num_samples times.
End of explanation
print("Best hyperparameters found were: ", analysis.best_config)
ray.shutdown()
Explanation: Here are the hyperparamters found to minimize the mean loss of the defined objective.
End of explanation |
15,633 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Jennifer 8. Lee et al have been using a Google spreadsheet to track the production of books in Project GITenberg
Step1: Getting access to the spreadsheet (Method 1)
Step3: Calculations on the spreadsheet
Step4: cloning repos
Step7: rtc covers
https
Step8: Getting covers into repos
Step9: Getting covers into repos
Step10: Generalized structure for iterating over repos
Step11: Travis work
Step12: Calculating URL for latest epub for each repo
e.g., https
Step13: Misc | Python Code:
from __future__ import print_function
import gspread
import json
# rtc50_settings.py holds URL related to the Google spreadsheet
from rtc50_settings import (g_name, g_url, g_key)
OFFICIAL_NAME_KEY = "Name in rtc/books.json, Official Name"
Explanation: Jennifer 8. Lee et al have been using a Google spreadsheet to track the production of books in Project GITenberg:
This notebook uses the gspread Python library to parse (and write?) the spreadsheet.
End of explanation
g_url
import json
import gspread
from oauth2client.client import SignedJwtAssertionCredentials
json_key = json.load(open('nypl50-gspread.json'))
scope = ['https://spreadsheets.google.com/feeds']
credentials = SignedJwtAssertionCredentials(json_key['client_email'], json_key['private_key'], scope)
gc = gspread.authorize(credentials)
wks = gc.open_by_key(g_key).sheet1
Explanation: Getting access to the spreadsheet (Method 1): OAuth2
Using OAuth2 for Authorization — gspread 0.2.5 documentation
Created: https://console.developers.google.com/project/nypl50-gspread/apiui/credential#
pip install --upgrade oauth2client
I'd have to share the spreadsheet with 535523918532-5ejclnn335tr2g1u0dqnvh7g39q78mim@developer.gserviceaccount.com -- so let's look at method 2
End of explanation
wks
# load the rows
all_rows = wks.get_all_values()
# use pandas
import pandas as pd
from pandas import (DataFrame, Series)
df = DataFrame(all_rows[2:], columns=all_rows[1])
df.index = df.index + 3 # shift index to match spreadsheet
df.head()
# what does the status mean?
df[df["RTC Good Cover?"] == 'YES']["Gitenberg Status"].value_counts()
# "RTC 1st GEN" vs "RTC 2nd GEN" vs "RTC Other Gen"
len(df[df["RTC 1st GEN"] == 'X'])
from second_folio import all_repos
set(all_repos) - set(df['Gitenberg URL'].map(lambda u: u.split("/")[-1]))
# just forget the whole part 1/part 2 -- figure out what repos are ready to work on haven't yet been done.
from github3 import (login, GitHub)
from github_settings import (username, password, token)
from itertools import islice
#gh = login(username, password=password)
gh = login(token=token)
def asciidoc_in_repo_root(repo, branch ='master'):
return list of asciidocs in the root of repo
repo_branch = repo.branch(branch)
tree = repo.tree(repo_branch.commit.sha)
return [hash_.path
for hash_ in tree.tree
if hash_.path.endswith('.asciidoc')]
def asciidocs_for_repo_name(repo_name):
try:
repo = gh.repository('GITenberg', repo_name)
return asciidoc_in_repo_root(repo, branch ='master')
except Exception as e:
return e
# copy CSV to clipboard, making it easy to then paste it to
# https://github.com/gitenberg-dev/Second-Folio/blob/master/Gitenberg%20Book%20List.csv
df.to_clipboard(encoding='utf-8', sep=',', index=False)
Explanation: Calculations on the spreadsheet
End of explanation
import sh
sh.cd("/Users/raymondyee/C/src/gitenberg/Adventures-of-Huckleberry-Finn_76")
len(sh.grep (sh.git.remote.show("-n", "origin"),
"git@github-GITenberg:GITenberg/Adventures-of-Huckleberry-Finn_76.git", _ok_code=[0,1]))
from itertools import islice
from second_folio import (repo_cloned, clone_repo)
repos_to_clone = (repo for repo in all_repos if not repo_cloned(repo)[0])
for (i, repo) in enumerate(islice(repos_to_clone,None)):
output = clone_repo(repo)
print ("\r{} {} {} {}".format(i, repo, output, repo_cloned(repo)))
Explanation: cloning repos
End of explanation
import requests
# rtc_covers_url = "https://raw.githubusercontent.com/plympton/rtc/master/books.json"
rtc_covers_url = "https://raw.githubusercontent.com/rdhyee/rtc/master/books.json"
covers = requests.get(rtc_covers_url).json()
covers_dict = dict([(cover['name'], cover) for cover in covers])
len(covers_dict)
# Are there any covers in the Plymton repo not in books.json?
df
# not that many covers
cover_names = set([cover['name'] for cover in covers])
# read off cover_map from df
# http://stackoverflow.com/a/9762084
cover_map = dict(filter(lambda (k,v):v,
[tuple(x) for x in df[['Title', OFFICIAL_NAME_KEY]].values]
))
repos_with_covers = list(df[df[OFFICIAL_NAME_KEY].map(lambda s: len(s) > 0)]['Gitenberg URL'].map(lambda u: u.split("/")[-1]))
repos_with_covers
len(repos_with_covers)
# compare list of cover repo data in
# https://raw.githubusercontent.com/gitenberg-dev/Second-Folio/master/covers_data.json
import requests
r = requests.get("https://raw.githubusercontent.com/gitenberg-dev/Second-Folio/master/covers_data.json")
covers_data = r.json()
covers_data
set(repos_with_covers) - set([c['GitHub repo'] for c in covers_data])
set([c['GitHub repo'] for c in covers_data]) - set(repos_with_covers)
mapped_cover_names = set(cover_map.values())
(cover_names - mapped_cover_names), (mapped_cover_names - cover_names)
[v['covers'][0]['filename']
for (k,v) in covers_dict.items()]
# Have I downloaded all the big images?
img_path = "/Users/raymondyee/Downloads/rtc/full_images/"
cover_names
from IPython.display import HTML
from PIL import Image
import jinja2
# let's look at the images for the books
# https://cdn.rawgit.com/plympton/rtc/master/rtc_books/
# https://cdn.rawgit.com/plympton/rtc/master/rtc_books_resized/
cover_url_base = "https://cdn.rawgit.com/plympton/rtc/master/rtc_books/"
small_cover_url_base = "https://cdn.rawgit.com/plympton/rtc/master/rtc_books_resized/"
from functools import partial
def cover_name_to_url(name, reduce=False):
if reduce:
url = small_cover_url_base
else:
url = cover_url_base
cover = covers_dict.get(name)
if cover is not None:
return url + cover['covers'][0]["filename"]
else:
return None
def cover_name_to_artist(name):
cover = covers_dict.get(name)
if cover is not None:
return cover['covers'][0]['artist']
else:
return None
cover_name_to_url_small = partial(cover_name_to_url, reduce=True)
cover_name_to_url_big = partial(cover_name_to_url, reduce=False)
df['big_image_url'] = rtc50[OFFICIAL_NAME_KEY].map(cover_name_to_url_big)
df['small_image_url'] = rtc50[OFFICIAL_NAME_KEY].map(cover_name_to_url_small)
rtc50 = df[df["RTC Good Cover?"] == 'YES']
rtc50.head()
results = rtc50[['Title', 'big_image_url']].T.to_dict().values()
results
from IPython.display import HTML
from jinja2 import Template
CSS = """
<style>
.wrap img {
    margin-left: 0px;
    margin-right: 0px;
    display: inline-block;
    width: 100px;
}
</style>
"""
IMAGES_TEMPLATE = CSS + """
<div class="wrap">
{% for item in items %}<img title="{{item.Title}}" src="{{item.big_image_url}}"/>{% endfor %}
</div>
"""
template = Template(IMAGES_TEMPLATE)
HTML(template.render(items=results))
#let's try looping over all the images and convert them to png
def download_big_images(limit=None):
import requests
from itertools import islice
import os
img_path = "/Users/raymondyee/Downloads/rtc/full_images/"
for image in islice(results,limit):
# check whether we have the cover already before downloading
url = image['big_image_url']
if url is not None:
name = url.split("/")[-1]
dest_path = img_path + name
if not os.path.exists(dest_path):
print (dest_path)
content = requests.get(url).content
with open(img_path + name, "wb") as f:
f.write(content)
download_big_images(limit=None)
# loop over jpg and convert to png
def convert_small_jpg_to_png():
import glob
for f in glob.glob("/Users/raymondyee/Downloads/rtc/resized/*.jp*g"):
im = Image.open(f)
png_path = ".".join(f.split(".")[:-1]) + ".png"
if im.mode not in ["1", "L", "P", "RGB", "RGBA"]:
im = im.convert("RGB")
im.save(png_path)
# image types in covers
from collections import Counter
Counter(c['filename'].split(".")[-1] for cover in covers for c in cover['covers'])
df['GitHub repo']=df['Gitenberg URL'].map(lambda u:u.split("/")[-1])
import re
import numpy as np
df['local_big_file'] = df['big_image_url'].map(lambda u:u.split("/")[-1] if u is not None and u is not np.nan else None)
df['cover_artist'] = df[OFFICIAL_NAME_KEY].map(cover_name_to_artist)
df['local_big_file'] = df['local_big_file'].map(lambda s: re.sub(r"\.png$", ".jpg", s) if s is not None else s)
def write_covers_data():
import json
rtc50 = df[df["RTC Good Cover?"] == 'YES']
covers_data_path = "/Users/raymondyee/C/src/gitenberg/Second-Folio/covers_data.json"
with open(covers_data_path, "w") as f:
f.write(json.dumps(rtc50[['GitHub repo', 'cover_artist', 'local_big_file']].T.to_dict().values(),
sort_keys=True,indent=2, separators=(',', ': ')))
#write_covers_data()
Explanation: rtc covers
https://raw.githubusercontent.com/plympton/rtc/master/books.json
End of explanation
import sh
# can control tty settings for sh
# https://amoffat.github.io/sh/#ttys
sh.ls("-1", _tty_out=False ).split()
dict([(c['GitHub repo'], c) for c in covers_data])
s = Series(repos)
list(s.map(lambda r: covers_data_dict.get(r).get('local_big_file')))
Explanation: Getting covers into repos
End of explanation
import os
import json
import shutil
import sh
from pandas import DataFrame, Series
from itertools import islice
REPOS_LIST = "/Users/raymondyee/C/src/gitenberg/Second-Folio/list_of_repos.txt"
COVERS_DATA = "/Users/raymondyee/C/src/gitenberg/Second-Folio/covers_data.json"
GITENBERG_DIR = "/Users/raymondyee/C/src/gitenberg/"
COVERS_DIR = "/Users/raymondyee/Downloads/rtc/full_images/"
repos=open(REPOS_LIST).read().strip().split("\n")
covers_data = json.loads(open(COVERS_DATA).read())
covers_data_dict = dict([(c['GitHub repo'], c) for c in covers_data])
def copy_repo_cover(repo, dry_run=False):
cover_file = covers_data_dict[repo]['local_big_file']
local_cover_path = None
copied = False
if cover_file is not None:
local_cover_path = os.path.join(COVERS_DIR, cover_file)
destination = os.path.join(GITENBERG_DIR, repo, "cover.jpg")
if os.path.exists(local_cover_path) and not os.path.exists(destination):
if not dry_run:
shutil.copyfile(local_cover_path, destination)
copied = True
return (local_cover_path, copied)
def git_pull(repo):
sh.cd(os.path.join(GITENBERG_DIR, repo))
return sh.git("pull")
def copy_covers():
for (i,repo) in enumerate(islice(repos,None)):
print (i, repo, copy_repo_cover(repo, dry_run=False))
copy_covers()
# let's compute missing covers
for repo in repos:
destination = os.path.join(GITENBERG_DIR, repo, "cover.jpg")
if not os.path.exists(destination):
print (repo)
def git_add_cover_commit_push(repo):
cover_path = os.path.join(GITENBERG_DIR, repo, "cover.jpg")
try:
if os.path.exists(cover_path):
sh.cd(os.path.join(GITENBERG_DIR, repo))
print ("add")
sh.git("add", "cover.jpg")
print ("commit")
try:
sh.git("commit", "-m", "add cover.jpg")
except:
pass
print ("push")
sh.git.push()
else:
return None
except Exception as e:
return e
for (i,repo) in enumerate(islice(repos,None)):
print (i, repo)
print (git_add_cover_commit_push(repo))
def git_pull(repo):
sh.cd(os.path.join(GITENBERG_DIR, repo))
sh.git("pull")
for (i,repo) in enumerate(islice(repos,None)):
print (i, repo)
git_pull(repo)
sh.cd("/Users/raymondyee/C/src/gitenberg/Jane-Eyre_1260")
sh.git.push()
Explanation: Getting covers into repos
End of explanation
import os
import json
import shutil
import sh
import yaml
from pandas import DataFrame, Series
from itertools import islice
REPOS_LIST = "/Users/raymondyee/C/src/gitenberg/Second-Folio/list_of_repos.txt"
GITENBERG_DIR = "/Users/raymondyee/C/src/gitenberg/"
METADATA_DIR = "/Users/raymondyee/C/src/gitenberg-dev/giten_site/metadata"
COVERS_DATA = "/Users/raymondyee/C/src/gitenberg/Second-Folio/covers_data.json"
Explanation: Generalized structure for iterating over repos
End of explanation
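# Aside -- a minimal sketch (an assumption, not the actual second_folio code) of the kind of
# generic iteration helper this "generalized structure" implies: apply a function to every
# repo name and collect either its result or the exception it raised.
def apply_to_repos_sketch(func, repos, limit=None):
    from itertools import islice
    results = []
    for repo in islice(repos, limit):
        try:
            results.append(func(repo))
        except Exception as e:
            results.append(e)
    return results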
import os
import glob
import sh
import yaml
from gitenberg import metadata
import jinja2
from second_folio import (GITENBERG_DIR,
all_repos,
apply_to_repos,
travis_setup_releases,
git_pull,
apply_travis,
finish_travis,
repo_is_buildable,
has_travis_with_gitenberg_build,
slugify,
latest_epub,
repo_version
)
from github_settings import (username, password)
from itertools import islice, izip
repos = list(islice(all_repos,0,None))
# determine which repos are "buildable"
repos_statues = list(izip(repos,
apply_to_repos(repo_is_buildable, repos=repos),
apply_to_repos(has_travis_with_gitenberg_build, repos=repos) ))
# we want to apply travis to repos that are buildable but that don't yet have .travis.yml.
repos_to_travisfy = [repo[0] for repo in repos_statues if repo[1] and not repo[2]]
repos_to_travisfy
from __future__ import print_function
for (i, repo) in enumerate(islice(repos_to_travisfy,1)):
print (i, repo, end=" ")
r1 = apply_travis(repo, username, password, overwrite_travis=True)
print (r1, end=" ")
if r1:
r2 = finish_travis(repo)
print (r2)
else:
print ("n/a")
Explanation: Travis work
End of explanation
import requests
url = "https://github.com/GITenberg/Adventures-of-Huckleberry-Finn_76/releases/download/0.0.17/Adventures-of-Huckleberry-Finn.epub"
r = requests.head(url)
r.status_code, r.url, r.url == url
epub_urls = list(apply_to_repos(latest_epub))
import pandas as pd
from pandas import DataFrame
df = DataFrame({'epub_url':epub_urls}, index=all_repos)
df.head()
df['status_code'] = df.epub_url.apply(lambda u: requests.head(u).status_code)
df['buildable'] = df.index.map(repo_is_buildable)
k = df[df['status_code'] == 404][:3]
k['status_code'] = k.epub_url.apply(lambda u: requests.head(u).status_code)
k.head()
df.ix[k.index] = k
list(k.epub_url)
df[(df.status_code == 404) & (df.buildable)]
df['metadata_url'] = df.index.map(lambda repo: "https://github.com/GITenberg/{}/raw/master/metadata.yaml".format(repo))
print("\n".join(list(df[~df.buildable].index)))
df.buildable.value_counts()
df.to_clipboard(index_label="repo", sep=',')
df[df.status_code == 404]
Explanation: Calculating URL for latest epub for each repo
e.g., https://github.com/GITenberg/Metamorphosis_5200/releases/download/0.0.1/Metamorphosis.epub
End of explanation
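# Aside -- a hedged reconstruction (not the actual second_folio.latest_epub) of the URL
# pattern shown in the example above: repo name, release tag, and an epub named after the title.
def epub_url_sketch(repo, version, epub_title):
    return ("https://github.com/GITenberg/{0}/releases/download/{1}/{2}.epub"
            .format(repo, version, epub_title))
epub_url_sketch("Metamorphosis_5200", "0.0.1", "Metamorphosis")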
md.metadata.get("title"), md.metadata.get("_repo"), md.metadata.get("_version"),
# figure out what elements to feed to template
#
from jinja2 import Environment, PackageLoader, meta
env = Environment()
parsed_content = env.parse(template)
meta.find_undeclared_variables(parsed_content)
import sh
sh.cd("/Users/raymondyee/C/src/gitenberg/Adventures-of-Huckleberry-Finn_76")
sh.travis.whoami()
from itertools import islice, izip
repos = list(islice(second_folio.all_repos,1,None))
list(izip(repos, apply_to_repos(git_mv_asciidoc, repos=repos)))
list(apply_to_repos(git_pull))
from __future__ import print_function
line = "Detected repository as GITenberg/Don-Quixote_996, is this correct? |yes| "
"Detected" in line
Explanation: Misc
End of explanation |
15,634 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Week 14 - Astropy
Today's Agenda
Useful functions of Astropy
Units
Time
Coordinates
FITS files
Analytic functions
AstroPy Tables and different formats
Astropy is a package that is meant to provide a lot of basic functionality for astronomy work in Python
This can be roughly broken up into two areas. One is astronomical calculations
Step1: Units
Astropy.units introduces units and allows for unit conversions. It doesn't, however, correctly handle spherical coordinates, but the astropy.coordinates package will address this later.
These units can be used to create objects that are made up of both a value and a unit, and basic math can be easily carried out with these. We can add the .unit and .value properties to get the units and numerical values, respectively.
Step2: Astropy includes a large number of units, and this can include imperial units as well if desired by importing and enabling imperial units. The .find_equivalent_units() function will also return all the other units that are already defined in astropy. Below we do a quick list of the units that are defined for time and length units
Step3: The package also provides constants, with the units included. The full list of units can be found here. We can take a quick look at c and G below, and see that these are objects which have value, uncertainty, and units.
Step4: Astropy has an aditional function that will allow for unit conversions. So we can, for example, create an object that is the distance to Mars, and then convert that to kilometers or miles. A brief note is that if you try to convert a pure unit (like the 4th line below) into another unit, you'll get a unitless value representing the conversion between the two.
This can also be used to convert constants into other units, so we can convert the speed of light to the somewhat useful pc/yr or the entirely unuseful furlong/fortnight
Step5: To use this more practically, we can calculate the time it will take for light to reach the earth just by dividing 1 AU by the speed of light, as done below. Since AU is a unit, and c is in m/s, we end up with an answer that is (AU*m/s). By using .decompose() we can simplify that expression, which in this case will end up with an answer that is just in seconds. Finally, we can then convert that answer to minutes to get the answer of about 8 1/3 minutes that is commonly used. None of this required our doing the conversions where we might've slipped up.
Step6: Time
Astropy handles time in a similar way to units, with creating Time objects. These objects have two main properties.
The format is simply how the time is displayed. This is the difference between, for example, Julian Date, Modified Julian Date, and ISO time (YYYY-MM-DD HH
Step7: Coordinates
Coordinates again work by using an object time defined for this purpose. We can establish a point in the ICRS frame (this is approximately the equatorial coordinate) by defining the ra and dec. Note that here we are using u.degree in specifying the coordinates.
We can then print out the RA and dec, as well as change the units displayed. In the last line, we can also convert from ICRS equatorial coordinates to galactic coordinates.
Step8: Slightly practical application of this
Using some of these astropy functions, we can do some fancier applications. Starting off, we import a listing of stars with RA and dec from the attached table, and store them in the coordinate formats that are used by astropy. We then use matplotlib to plot this, and are able to easily convert them into radians thanks to astropy. This plot is accurate, but it lacks reference for where these points are.
Step9: To fix this, we will add some references to this by adding a few more sets of data points. The first is relatively simple, we put in a line at the celestial equator. This just has to be a set of points that are all at declination of 0, and from -180 to +180 degrees in RA. These are a and b in the below.
We also want to add the planes of the ecliptic and the galaxy on this. For both, we use coordinate objects and provide numpy arrays where one coordinate is at zero, and the other goes from 0 to 360. With astropy we can then easily convert from each coordinate system to ICRS. There's some for loops to modify the plotting, but the important thing is that this will give us a plot that has not just the locations of all the planets that we've plotted, but will also include the celestial equator, galactic plane, and ecliptic plane on it.
Step10: Reading in FITS files
One of the useful things with Astropy is that you can use it for reading in FITS files, and extracting info such as bands, exposure times, intrument information, etc.
In this example, we will read in a FITS image file, and extract its information
Step11: Now we can extract some of the information stored in the FITS file.
Step12: The returned object, hdulist, (an instance of the HDUList class) behaves like a Python list, and each element maps to a Header-Data Unit (HDU) in the FITS file. You can view more information about the FITS file with
Step13: As we can see, this file contains two HDUs. The first contains the image, the second a data table. To access the primary HDU, which contains the main data, you can then do
Step14: To read the header of the FITS file, you can read hdulist. The following shows the different keys for the header
Step15: As we can see, this file contains two HDUs. The first contains the image, the second a data table.
Let's look at the image of the FITS file.
The hdu object then has two important attributes
Step16: This tells us that it is a 1600-by-1600 pixel image. We can now take a peak at the header. To access the primary HDU, which contains the main data, you can then do
Step17: We can access individual header keywords using standard item notation
Step18: We can plot the image using matplotlib
Step19: You can also add new fields to the FITS file
Step20: and we can also change the data, for example subtracting a background value
Step21: This only changes the FITS file in memory. You can write to a file with
Step23: Analytic Functions
Astropy comes with some built-in analytic functions, e.g. the blackbody radiation function.
Blackbody Radiation
Blackbody flux is calculated with Planck law (Rybicki & Lightman 1979)
$$B_{\lambda}(T) = \frac{2 h c^{2} / \lambda^{5}}{exp(h c / \lambda k T) - 1}$$
$$B_{\nu}(T) = \frac{2 h \nu^{3} / c^{2}}{exp(h \nu / k T) - 1}$$
Step24: Let's plot the Planck function for two bodies with temperatures $T_1 = 8000\ K$ and $T_2 = 6000\ K$
Step25: AstroPy Tables
Read files
You can use Astropy to read tables from data files. We'll use it to read the sources.dat file, which contains columns and rows of data
Step26: Write to files
You can also write directoy to a file using the data in the AstroPy table.
Let's create a new AstroPy Table
Step27: Let's see what's in the astropy_data.tb file
Step28: You can also specify the delimiter of the file. For example, we can separate it with a comma.
Step29: AstroPy Tables to other Formats
The AstroPy tables can also be converted to multiple formats
to Pandas DataFrames
A nice feature of AstroPy Tables is that you can export your data into different formats.
For example, you can export it as a Pandas Dataframe.
See here for more info on how to use pandas with Astropy
Step30: And to compare, let's see the AstroPy Tables format
Step31: to LaTeX tables
A nice thing about AstroPy is that you can convert your data into LaTeX tables. This is easily done with writing it to a file. You can then copy it and use it on your next publication
Step32: To save it as a file, you can do this
Step33: to CSV files
Step34: Other formats
AstroPy tables come with a great support for many different types of files.
This is a list of the supported files that you can import/export AstroPy tables.
Data tables and Column types
You can also use AstroPy tables to preserve the metadata of a column. For example, you can keep the units of each column, so that you use the data later on, and still be able to use unit conversions, etc. for this.
Step35: Now we can save it into a ecsv file. This type of file will preserve the type of units, and more, for each of the columns
Step36: Or you can dump it into a file
Step37: And you can now read it in | Python Code:
# Importing Modules
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
sns.set_context("notebook")
import astropy
from astropy import units as u
from astropy.coordinates import SkyCoord
from astropy import constants as const
Explanation: Week 14 - Astropy
Today's Agenda
Useful functions of Astropy
Units
Time
Coordinates
FITS files
Analytic functions
AstroPy Tables and different formats
Astropy is a package that is meant to provide a lot of basic functionality for astronomy work in Python
This can be roughly broken up into two areas. One is astronomical calculations:
* unit and physical quantity conversions
* physical constants specific to astronomy
* celestial coordinate and time transformations
The other is file type and structures:
* FITS files, implementing the former standalone PyFITS interface
* Virtual Observatory (VO) tables
* common ASCII table formats, e.g. for online catalogues or data supplements of scientific publications
* Hierarchical Data Format (HDF5) files
AstroPy normallly comes with the Anaconda installation. But in case you happen to not have it installed it on your computer, you can simply do a
sh
pip install --no-deps astropy
You can always update it via
sh
conda update astropy
This is just a glimpse of all the features that AstroPy has:
<img src="./images/astropy_sections.png" alt="Astropy Features" width="600">
For purposes of today, we'll focus just on what astropy can do for units, time, coordinates, image manipulation, and more.
End of explanation
d=42*u.meter
t=6*u.second
v=d/t
print(v)
print(v.unit)
Explanation: Units
Astropy.units introduces units and allows for unit conversions. It doesn't, however, correctly handle spherical coordinates, but the astropy.coordinates package will address this later.
These units can be used to create objects that are made up of both a value and a unit, and basic math can be easily carried out with these. We can add the .unit and .value properties to get the units and numerical values, respectively.
End of explanation
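# Quick aside (not in the original cell): .value returns the bare number, .unit the unit object.
print(v.value, v.unit)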
from astropy.units import imperial
imperial.enable()
print( u.s.find_equivalent_units() )
print( u.m.find_equivalent_units() )
Explanation: Astropy includes a large number of units, and this can include imperial units as well if desired by importing and enabling imperial units. The .find_equivalent_units() function will also return all the other units that are already defined in astropy. Below we do a quick list of the units that are defined for time and length units
End of explanation
print(const.c)
print(const.G)
Explanation: The package also provides constants, with the units included. The full list of units can be found here. We can take a quick look at c and G below, and see that these are objects which have value, uncertainty, and units.
End of explanation
Mars=1.5*u.AU
print(Mars.to('kilometer'))
print(Mars.to('mile'))
print(u.AU.to('kilometer'))
print(const.c.to('pc/yr'))
print(const.c.to('fur/fortnight'))
Explanation: Astropy has an additional function that will allow for unit conversions. So we can, for example, create an object that is the distance to Mars, and then convert that to kilometers or miles. A brief note is that if you try to convert a pure unit (like the 4th line below) into another unit, you'll get a unitless value representing the conversion between the two.
This can also be used to convert constants into other units, so we can convert the speed of light to the somewhat useful pc/yr or the entirely unuseful furlong/fortnight
End of explanation
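# Small aside (standard astropy behaviour, not shown in the original notebook): .to() also
# accepts equivalencies for conversions that are not plain unit algebra, e.g. wavelength to frequency.
wavelength = 500 * u.nm
print(wavelength.to(u.GHz, equivalencies=u.spectral()))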
time=1*u.AU/const.c
print(time)
time_s=time.decompose()
print(time_s)
time_min=time_s.to(u.minute)
print(time_min)
Explanation: To use this more practically, we can calculate the time it will take for light to reach the earth just by dividing 1 AU by the speed of light, as done below. Since AU is a unit, and c is in m/s, we end up with an answer that is (AU*m/s). By using .decompose() we can simplify that expression, which in this case will end up with an answer that is just in seconds. Finally, we can then convert that answer to minutes to get the answer of about 8 1/3 minutes that is commonly used. None of this required our doing the conversions where we might've slipped up.
End of explanation
from astropy.time import Time
t=Time(57867.346424, format='mjd', scale='utc')
t1=Time(58867.346424, format='mjd', scale='utc')
print(t.mjd)
print(t.iso)
print(t.jyear)
t1-t
Explanation: Time
Astropy handles time in a similar way to units, with creating Time objects. These objects have two main properties.
The format is simply how the time is displayed. This is the difference between, for example, Julian Date, Modified Julian Date, and ISO time (YYYY-MM-DD HH:MM:SS). The second is the scale, and is the difference between terrestrial time vs time at the barycenter of the solar system.
We can start off by changing a time from one format to many others. We can also subtract times and we will get a timedelta unit.
End of explanation
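# Brief aside (assuming standard astropy.time behaviour): the same instant can be re-expressed
# in another time scale via attributes such as .tt or .tai, and a subtraction yields a TimeDelta
# whose length is available in seconds.
print(t.tt.iso)      # same instant, Terrestrial Time scale
print((t1 - t).sec)  # elapsed seconds between the two times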
c = SkyCoord(ra=10.68458*u.degree, dec=41.26917*u.degree, frame='icrs')
print(c)
print(c.ra)
print(c.dec)
print(c.ra.hour)
print(c.ra.hms)
print(c.galactic)
Explanation: Coordinates
Coordinates again work by using an object type defined for this purpose. We can establish a point in the ICRS frame (this is approximately the equatorial coordinate) by defining the ra and dec. Note that here we are using u.degree in specifying the coordinates.
We can then print out the RA and dec, as well as change the units displayed. In the last line, we can also convert from ICRS equatorial coordinates to galactic coordinates.
End of explanation
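# Small aside (standard SkyCoord behaviour): the angular separation between two coordinates,
# with any needed frame conversion handled automatically.
galactic_centre = SkyCoord(l=0*u.degree, b=0*u.degree, frame='galactic')
print(c.separation(galactic_centre))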
hosts={}
data=np.loadtxt('./data/planets.tab', dtype='str', delimiter='\t')
print(data[0])
hosts['ra_hours']=data[1:,9].astype(float)
hosts['ra']=data[1:,6].astype(float)
hosts['dec']=data[1:,8].astype(float)
#print hosts['ra_hours']
#print hosts['dec']
import astropy.units as u
import astropy.coordinates as coord
from astropy.coordinates import SkyCoord
ra = coord.Angle(hosts['ra']*u.degree)
ra = ra.wrap_at(180*u.degree)
dec = coord.Angle(hosts['dec']*u.degree)
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111, projection="mollweide")
plt.title('Map of Exoplanets')
ax.scatter(ra.radian, dec.radian)
ax.set_xticklabels(['14h','16h','18h','20h','22h','0h','2h','4h','6h','8h','10h'])
ax.grid(True)
plt.show()
Explanation: Slightly practical application of this
Using some of these astropy functions, we can do some fancier applications. Starting off, we import a listing of stars with RA and dec from the attached table, and store them in the coordinate formats that are used by astropy. We then use matplotlib to plot this, and are able to easily convert them into radians thanks to astropy. This plot is accurate, but it lacks reference for where these points are.
End of explanation
a=coord.Angle((np.arange(361)-180)*u.degree)
b=coord.Angle(np.zeros(len(a))*u.degree)
numpoints=360
galaxy=SkyCoord(l=coord.Angle((np.arange(numpoints))*u.degree), b=coord.Angle(np.zeros(numpoints)*u.degree), frame='galactic')
ecliptic=SkyCoord(lon=coord.Angle((np.arange(numpoints))*u.degree), lat=coord.Angle(np.zeros(numpoints)*u.degree), frame='geocentrictrueecliptic')
ecl_eq=ecliptic.icrs
gal_eq=galaxy.icrs
#print gal_eq
fixed_ra=[]
for item in gal_eq.ra.radian:
if item < np.pi:
fixed_ra.append(item)
else:
fixed_ra.append(item-2*np.pi)
i=np.argmin(fixed_ra)
fixed_dec=[x for x in gal_eq.dec.radian]
fixed_ra_eq=[]
for item in ecl_eq.ra.radian:
if item < np.pi:
fixed_ra_eq.append(item)
else:
fixed_ra_eq.append(item-2*np.pi)
j=np.argmin(fixed_ra_eq)
fixed_dec_eq=[x for x in ecl_eq.dec.radian]
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111, projection="mollweide")
plt.title('Map of Exoplanets')
ax.scatter(ra.radian, dec.radian)
ax.plot(a.radian, b.radian, color='r', lw=2)
#ax.scatter(gal_eq.ra.radian, gal_eq.dec.radian, color='g')
ax.plot(fixed_ra[i:]+fixed_ra[:i], fixed_dec[i:]+fixed_dec[:i], color='g', lw=2)
ax.plot(fixed_ra_eq[j:]+fixed_ra_eq[:j], fixed_dec_eq[j:]+fixed_dec_eq[:j], color='m', lw=2)
ax.set_xticklabels(['14h','16h','18h','20h','22h','0h','2h','4h','6h','8h','10h'])
ax.grid(True)
plt.show()
Explanation: To fix this, we will add some references to this by adding a few more sets of data points. The first is relatively simple, we put in a line at the celestial equator. This just has to be a set of points that are all at declination of 0, and from -180 to +180 degrees in RA. These are a and b in the below.
We also want to add the planes of the ecliptic and the galaxy on this. For both, we use coordinate objects and provide numpy arrays where one coordinate is at zero, and the other goes from 0 to 360. With astropy we can then easily convert from each coordinate system to ICRS. There's some for loops to modify the plotting, but the important thing is that this will give us a plot that has not just the locations of all the planets that we've plotted, but will also include the celestial equator, galactic plane, and ecliptic plane on it.
End of explanation
# We will use `wget` to download the necessary file to the `data` folder.
!wget 'http://star.herts.ac.uk/~gb/python/656nmos.fits' -O ./data/hst_image.fits
Explanation: Reading in FITS files
One of the useful things with Astropy is that you can use it for reading in FITS files, and extracting info such as bands, exposure times, intrument information, etc.
In this example, we will read in a FITS image file, and extract its information
End of explanation
from astropy.io import fits
filename = './data/hst_image.fits'
hdulist = fits.open(filename)
Explanation: Now we can extract some of the information stored in the FITS file.
End of explanation
hdulist.info()
Explanation: The returned object, hdulist, (an instance of the HDUList class) behaves like a Python list, and each element maps to a Header-Data Unit (HDU) in the FITS file. You can view more information about the FITS file with:
End of explanation
hdu = hdulist[0]
Explanation: As we can see, this file contains two HDUs. The first contains the image, the second a data table. To access the primary HDU, which contains the main data, you can then do:
End of explanation
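# Aside (not in the original notebook): the second HDU reported by hdulist.info() holds the
# data table, and it is reached by index in the same way.
table_hdu = hdulist[1]
table_hdu.columns  # column definitions of the FITS table extension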
np.sort(list(hdulist[0].header.keys()))
Explanation: To read the header of the FITS file, you can read hdulist. The following shows the different keys for the header
End of explanation
hdu.data.shape
Explanation: As we can see, this file contains two HDUs. The first contains the image, the second a data table.
Let's look at the image of the FITS file.
The hdu object then has two important attributes: data, which behaves like a Numpy array, can be used to access the data, and header, which behaves like a dictionary, can be used to access the header information. First, we can take a look at the data:
End of explanation
hdu.header
Explanation: This tells us that it is a 1600-by-1600 pixel image. We can now take a peak at the header. To access the primary HDU, which contains the main data, you can then do:
End of explanation
hdu.header['INSTRUME']
hdu.header['EXPTIME']
Explanation: We can access individual header keywords using standard item notation:
End of explanation
plt.figure(figsize=(10,10))
plt.imshow(np.log10(hdu.data), origin='lower', cmap='gray', vmin=1.5, vmax=3)
Explanation: We can plot the image using matplotlib:
End of explanation
hdu.header['MODIFIED'] = '2014-12-01' # adds a new keyword
Explanation: You can also add new fields to the FITS file
End of explanation
hdu.data = hdu.data - 0.5
Explanation: and we can also change the data, for example subtracting a background value:
End of explanation
hdu.writeto('./data/hubble-image-background-subtracted.fits', overwrite=True)
!ls ./data
Explanation: This only changes the FITS file in memory. You can write to a file with:
End of explanation
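# Housekeeping aside (assuming the handle is no longer needed below): the file object returned
# by fits.open() can be closed once we are done with it.
hdulist.close()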
from astropy.analytic_functions import blackbody_lambda, blackbody_nu
def Planck_func(temp, lam_arr, opt='lam'):
    """Computes the Blackbody radiation curve of a blackbody of a given temperature `temp`.

    Parameters
    ----------
    temp: float or array-like
        temperature(s) of the blackbody
    lam_arr: float or array-like
        array of wavelengths (in Angstrom) at which to evaluate the Planck function
    opt: str, optional (default = 'lam')
        Option for returning either the flux for `lambda` (wavelength) or `nu` (frequency).
        Options:
        - `lam`: return flux for `lambda` (wavelength)
        - `nu` : return flux for `nu` (frequency)
    """
wavelengths = lam_arr * u.AA
temperature = temp * u.K
with np.errstate(all='ignore'):
flux_lam = blackbody_lambda(wavelengths, temperature)
flux_nu = blackbody_nu(wavelengths, temperature)
if opt=='lam':
return flux_lam
if opt=='nu':
return flux_nu
Explanation: Analytic Functions
Astropy comes with some built-in analytic functions, e.g. the blackbody radiation function.
Blackbody Radiation
Blackbody flux is calculated with Planck law (Rybicki & Lightman 1979)
$$B_{\lambda}(T) = \frac{2 h c^{2} / \lambda^{5}}{exp(h c / \lambda k T) - 1}$$
$$B_{\nu}(T) = \frac{2 h \nu^{3} / c^{2}}{exp(h \nu / k T) - 1}$$
End of explanation
lam_arr = np.arange(1e2, 2e4)
nu_arr = (const.c/(lam_arr * u.AA)).to(1./u.s).value
fig = plt.figure(figsize=(15,8))
ax1 = fig.add_subplot(121, facecolor='white')  # `axisbg` was renamed to `facecolor` in matplotlib 2.0
ax2 = fig.add_subplot(122, facecolor='white')
ax1.set_xlabel(r'$\lambda$ (Angstrom)', fontsize=25)
ax1.set_ylabel(r'$B_{\lambda}(T)$', fontsize=25)
ax2.set_xlabel(r'$\nu$ ($\mathrm{s}^{-1}$)', fontsize=25)
ax2.set_ylabel(r'$B_{\nu}(T)$', fontsize=25)
ax2.set_xscale('log')
temp_arr = [6e3, 8e3, 1e4, 1.2e4]
for temp in temp_arr:
ax1.plot(lam_arr, Planck_func(temp, lam_arr=lam_arr, opt='lam'), label='T = {0} K'.format(int(temp)))
ax2.plot(nu_arr , Planck_func(temp, lam_arr=lam_arr, opt='nu' ), label='T = {0} K'.format(int(temp)))
ax1.legend(loc=1, prop={'size':20})
Explanation: Let's plot the Planck function for two bodies with temperatures $T_1 = 8000\ K$ and $T_2 = 6000\ K$
End of explanation
!head ./data/sources.dat
from astropy.io import ascii
sources_tb = ascii.read('./data/sources.dat')
print( sources_tb )
Explanation: AstroPy Tables
Read files
You can use Astropy to read tables from data files. We'll use it to read the sources.dat file, which contains columns and rows of data
End of explanation
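# Aside: a couple of quick ways to inspect the table that was just read.
print(sources_tb.colnames)  # column names detected by the reader
print(len(sources_tb))      # number of rows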
from astropy.table import Table, Column, MaskedColumn
x = np.random.uniform(low=10, high=20, size=(1000,))
y = np.random.uniform(low=100, high=50, size=(x.size,))
z = np.random.uniform(low=30, high=50, size=(x.size,))
data = Table([x, y], names=['x', 'y'])
print(data)
ascii.write(data, './data/astropy_data.tb', overwrite=True)
Explanation: Write to files
You can also write directoy to a file using the data in the AstroPy table.
Let's create a new AstroPy Table:
End of explanation
!head ./data/astropy_data.tb
Explanation: Let's see what's in the astropy_data.tb file
End of explanation
ascii.write(data, './data/astropy_data_2.tb', delimiter=',', overwrite=True)
!head ./data/astropy_data_2.tb
Explanation: You can also specify the delimiter of the file. For example, we can separate it with a comma.
End of explanation
df = data.to_pandas()
df.head()
Explanation: AstroPy Tables to other Formats
The AstroPy tables can also be converted to multiple formats
to Pandas DataFrames
A nice feature of AstroPy Tables is that you can export your data into different formats.
For example, you can export it as a Pandas Dataframe.
See here for more info on how to use pandas with Astropy: http://docs.astropy.org/en/stable/table/pandas.html
End of explanation
data
Explanation: And to compare, let's see the AstroPy Tables format
End of explanation
import sys
ascii.write(data[0:10], sys.stdout, format='latex')
Explanation: to LaTeX tables
A nice thing about AstroPy is that you can convert your data into LaTeX tables. This is easily done with writing it to a file. You can then copy it and use it on your next publication
End of explanation
ascii.write(data, './data/astropy_data_latex.tex', format='latex')
# I'm only showing the first 10 lines
!head ./data/astropy_data_latex.tex
Explanation: To save it as a file, you can do this:
End of explanation
ascii.write(data, './data/astropy_data_csv.csv', format='csv', fast_writer=False)
!head ./data/astropy_data_csv.csv
Explanation: to CSV files
End of explanation
t = Table(masked=True)
t['x'] = MaskedColumn([1.0, 2.0], unit='m', dtype='float32')
t['x'][1] = np.ma.masked
t['y'] = MaskedColumn([False, True], dtype='bool')
t
Explanation: Other formats
AstroPy tables come with a great support for many different types of files.
This is a list of the supported files that you can import/export AstroPy tables.
Data tables and Column types
You can also use AstroPy tables to preserve the metadata of a column. For example, you can keep the units of each column, so that you use the data later on, and still be able to use unit conversions, etc. for this.
End of explanation
from astropy.extern.six.moves import StringIO
fh = StringIO()
t.write(fh, format='ascii.ecsv')
table_string = fh.getvalue()
print(table_string)
Table.read(table_string, format='ascii')
Explanation: Now we can save it into a ecsv file. This type of file will preserve the type of units, and more, for each of the columns
End of explanation
t.write('./data/astropy_data_ecsv.ecsv', format='ascii.ecsv', overwrite=True)
Explanation: Or you can dump it into a file
End of explanation
data_ecsv = ascii.read('./data/astropy_data_ecsv.ecsv', format='ecsv')
data_ecsv
data_ecsv['x']
Explanation: And you can now read it in
End of explanation |
15,635 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Text classification with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Import the required packages.
Step3: Get the data path
Download the dataset for this tutorial.
Step4: You can also upload your own dataset to work through this tutorial. Upload your dataset by using the left sidebar in Colab.
<img src="https
Step5: Step 2. Load train and test data specific to an on-device ML app and preprocess the data according to a specific model_spec.
Step6: Step 3. Customize the TensorFlow model.
Step7: Step 4. Evaluate the model.
Step8: Step 5. Export as a TensorFlow Lite model with metadata.
Since MobileBERT is too big for on-device applications, use dynamic range quantization on the model to compress it by almost 4x with minimal performance degradation.
Step9: You can also download the model using the left sidebar in Colab.
After executing the 5 steps above, you can further use the TensorFlow Lite model file in on-device applications using BertNLClassifier API in TensorFlow Lite Task Library.
The following sections walk through the example step by step to show more detail.
Choose a model_spec that Represents a Model for Text Classifier
Each model_spec object represents a specific model for the text classifier. TensorFlow Lite Model Maker currently supports MobileBERT, averaging word embeddings and BERT-Base models.
Supported Model | Name of model_spec | Model Description
--- | --- | ---
MobileBERT | 'mobilebert_classifier' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device applications.
BERT-Base | 'bert_classifier' | Standard BERT model that is widely used in NLP tasks.
averaging word embedding | 'average_word_vec' | Averaging text word embeddings with RELU activation.
This tutorial uses a smaller model, average_word_vec that you can retrain multiple times to demonstrate the process.
Step10: Load Input Data Specific to an On-device ML App
The SST-2 (Stanford Sentiment Treebank) is one of the tasks in the GLUE benchmark. It contains 67,349 movie reviews for training and 872 movie reviews for validation. The dataset has two classes
Step11: The SST-2 dataset has train.tsv for training and dev.tsv for validation. The files have the following format
Step12: The Model Maker library also supports the from_folder() method to load data. It assumes that the text data of the same class are in the same subdirectory and that the subfolder name is the class name. Each text file contains one movie review sample. The class_labels parameter is used to specify which the subfolders.
Customize the TensorFlow Model
Create a custom text classifier model based on the loaded data.
Step13: Examine the detailed model structure.
Step14: Evaluate the Customized Model
Evaluate the model with the test data and get its loss and accuracy.
Step15: Export as a TensorFlow Lite Model
Convert the existing model to TensorFlow Lite model format with metadata that you can later use in an on-device ML application. The label file and the vocab file are embedded in metadata. The default TFLite filename is model.tflite.
Step16: The TensorFlow Lite model file can be used in the text classification reference app using NLClassifier API in TensorFlow Lite Task Library.
The allowed export formats can be one or a list of the following
Step17: You can evaluate the tflite model with evaluate_tflite method to get its accuracy.
Step18: Advanced Usage
The create function is the driver function that the Model Maker library uses to create models. The model_spec parameter defines the model specification. The AverageWordVecModelSpec and BertClassifierModelSpec classes are currently supported. The create function comprises of the following steps
Step19: Get the preprocessed data.
Step20: Train the new model.
Step21: You can also adjust the MobileBERT model.
The model parameters you can adjust are
Step22: Tune the training hyperparameters
You can also tune the training hyperparameters like epochs and batch_size that affect the model accuracy. For instance,
epochs
Step23: Evaluate the newly retrained model with 20 training epochs.
Step24: Change the Model Architecture
You can change the model by changing the model_spec. The following shows how to change to BERT-Base model.
Change the model_spec to BERT-Base model for the text classifier. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
!pip install tflite-model-maker
Explanation: Text classification with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/tutorials/model_maker_text_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.
This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used text classification model to classify movie reviews on a mobile device. The text classification model classifies text into predefined categories.The inputs should be preprocessed text and the outputs are the probabilities of the categories. The dataset used in this tutorial are positive and negative movie reviews.
Prerequisites
Install the required packages
To run this example, install the required packages, including the Model Maker package from the GitHub repo.
End of explanation
import numpy as np
import os
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import configs
from tflite_model_maker import ExportFormat
from tflite_model_maker import model_spec
from tflite_model_maker import text_classifier
from tflite_model_maker import TextClassifierDataLoader
Explanation: Import the required packages.
End of explanation
data_dir = tf.keras.utils.get_file(
fname='SST-2.zip',
origin='https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8',
extract=True)
data_dir = os.path.join(os.path.dirname(data_dir), 'SST-2')
Explanation: Get the data path
Download the dataset for this tutorial.
End of explanation
spec = model_spec.get('mobilebert_classifier')
Explanation: You can also upload your own dataset to work through this tutorial. Upload your dataset by using the left sidebar in Colab.
<img src="https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_text_classification.png" alt="Upload File" width="800" hspace="100">
If you prefer not to upload your dataset to the cloud, you can also locally run the library by following the guide.
End-to-End Workflow
This workflow consists of five steps as outlined below:
Step 1. Choose a model specification that represents a text classification model.
This tutorial uses MobileBERT as an example.
End of explanation
train_data = TextClassifierDataLoader.from_csv(
filename=os.path.join(os.path.join(data_dir, 'train.tsv')),
text_column='sentence',
label_column='label',
model_spec=spec,
delimiter='\t',
is_training=True)
test_data = TextClassifierDataLoader.from_csv(
filename=os.path.join(os.path.join(data_dir, 'dev.tsv')),
text_column='sentence',
label_column='label',
model_spec=spec,
delimiter='\t',
is_training=False)
Explanation: Step 2. Load train and test data specific to an on-device ML app and preprocess the data according to a specific model_spec.
End of explanation
model = text_classifier.create(train_data, model_spec=spec)
Explanation: Step 3. Customize the TensorFlow model.
End of explanation
loss, acc = model.evaluate(test_data)
Explanation: Step 4. Evaluate the model.
End of explanation
config = configs.QuantizationConfig.create_dynamic_range_quantization(optimizations=[tf.lite.Optimize.OPTIMIZE_FOR_LATENCY])
config._experimental_new_quantizer = True
model.export(export_dir='mobilebert/', quantization_config=config)
Explanation: Step 5. Export as a TensorFlow Lite model with metadata.
Since MobileBERT is too big for on-device applications, use dynamic range quantization on the model to compress it by almost 4x with minimal performance degradation.
End of explanation
spec = model_spec.get('average_word_vec')
Explanation: You can also download the model using the left sidebar in Colab.
After executing the 5 steps above, you can further use the TensorFlow Lite model file in on-device applications using BertNLClassifier API in TensorFlow Lite Task Library.
The following sections walk through the example step by step to show more detail.
Choose a model_spec that Represents a Model for Text Classifier
Each model_spec object represents a specific model for the text classifier. TensorFlow Lite Model Maker currently supports MobileBERT, averaging word embeddings and BERT-Base models.
Supported Model | Name of model_spec | Model Description
--- | --- | ---
MobileBERT | 'mobilebert_classifier' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device applications.
BERT-Base | 'bert_classifier' | Standard BERT model that is widely used in NLP tasks.
averaging word embedding | 'average_word_vec' | Averaging text word embeddings with RELU activation.
This tutorial uses a smaller model, average_word_vec that you can retrain multiple times to demonstrate the process.
End of explanation
data_dir = tf.keras.utils.get_file(
fname='SST-2.zip',
origin='https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8',
extract=True)
data_dir = os.path.join(os.path.dirname(data_dir), 'SST-2')
Explanation: Load Input Data Specific to an On-device ML App
The SST-2 (Stanford Sentiment Treebank) is one of the tasks in the GLUE benchmark. It contains 67,349 movie reviews for training and 872 movie reviews for validation. The dataset has two classes: positive and negative movie reviews.
Download the archived version of the dataset and extract it.
End of explanation
train_data = TextClassifierDataLoader.from_csv(
filename=os.path.join(os.path.join(data_dir, 'train.tsv')),
text_column='sentence',
label_column='label',
model_spec=spec,
delimiter='\t',
is_training=True)
test_data = TextClassifierDataLoader.from_csv(
filename=os.path.join(os.path.join(data_dir, 'dev.tsv')),
text_column='sentence',
label_column='label',
model_spec=spec,
delimiter='\t',
is_training=False)
Explanation: The SST-2 dataset has train.tsv for training and dev.tsv for validation. The files have the following format:
sentence | label
--- | ---
it 's a charming and often affecting journey . | 1
unflinchingly bleak and desperate | 0
A positive review is labeled 1 and a negative review is labeled 0.
Use the TestClassifierDataLoader.from_csv method to load the data.
End of explanation
model = text_classifier.create(train_data, model_spec=spec, epochs=10)
Explanation: The Model Maker library also supports the from_folder() method to load data. It assumes that the text data of the same class are in the same subdirectory and that the subfolder name is the class name. Each text file contains one movie review sample. The class_labels parameter is used to specify which the subfolders.
Customize the TensorFlow Model
Create a custom text classifier model based on the loaded data.
End of explanation
model.summary()
Explanation: Examine the detailed model structure.
End of explanation
loss, acc = model.evaluate(test_data)
Explanation: Evaluate the Customized Model
Evaluate the model with the test data and get its loss and accuracy.
End of explanation
model.export(export_dir='average_word_vec/')
Explanation: Export as a TensorFlow Lite Model
Convert the existing model to TensorFlow Lite model format with metadata that you can later use in an on-device ML application. The label file and the vocab file are embedded in metadata. The default TFLite filename is model.tflite.
End of explanation
model.export(export_dir='average_word_vec/', export_format=[ExportFormat.LABEL, ExportFormat.VOCAB])
Explanation: The TensorFlow Lite model file can be used in the text classification reference app using NLClassifier API in TensorFlow Lite Task Library.
The allowed export formats can be one or a list of the following:
ExportFormat.TFLITE
ExportFormat.LABEL
ExportFormat.VOCAB
ExportFormat.SAVED_MODEL
By default, it just exports TensorFlow Lite model with metadata. You can also selectively export different files. For instance, exporting only the label file and vocab file as follows:
End of explanation
accuracy = model.evaluate_tflite('average_word_vec/model.tflite', test_data)
Explanation: You can evaluate the tflite model with evaluate_tflite method to get its accuracy.
End of explanation
new_model_spec = model_spec.AverageWordVecModelSpec(wordvec_dim=32)
Explanation: Advanced Usage
The create function is the driver function that the Model Maker library uses to create models. The model_spec parameter defines the model specification. The AverageWordVecModelSpec and BertClassifierModelSpec classes are currently supported. The create function comprises of the following steps:
Creates the model for the text classifier according to model_spec.
Trains the classifier model. The default epochs and the default batch size are set by the default_training_epochs and default_batch_size variables in the model_spec object.
This section covers advanced usage topics like adjusting the model and the training hyperparameters.
Adjust the model
You can adjust the model infrastructure like the wordvec_dim and the seq_len variables in the AverageWordVecModelSpec class.
For example, you can train the model with a larger value of wordvec_dim. Note that you must construct a new model_spec if you modify the model.
End of explanation
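# Hedged variant (assumption: seq_len is accepted as a constructor keyword like wordvec_dim).
longer_spec = model_spec.AverageWordVecModelSpec(wordvec_dim=32, seq_len=256)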
new_train_data = TextClassifierDataLoader.from_csv(
filename=os.path.join(os.path.join(data_dir, 'train.tsv')),
text_column='sentence',
label_column='label',
model_spec=new_model_spec,
delimiter='\t',
is_training=True)
Explanation: Get the preprocessed data.
End of explanation
model = text_classifier.create(new_train_data, model_spec=new_model_spec)
Explanation: Train the new model.
End of explanation
new_model_spec = model_spec.get('mobilebert_classifier')
new_model_spec.seq_len = 256
Explanation: You can also adjust the MobileBERT model.
The model parameters you can adjust are:
seq_len: Length of the sequence to feed into the model.
initializer_range: The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
trainable: Boolean that specifies whether the pre-trained layer is trainable.
The training pipeline parameters you can adjust are:
model_dir: The location of the model checkpoint files. If not set, a temporary directory will be used.
dropout_rate: The dropout rate.
learning_rate: The initial learning rate for the Adam optimizer.
tpu: TPU address to connect to.
For instance, you can set the seq_len=256 (default is 128). This allows the model to classify longer text.
End of explanation
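# Hedged sketch based on the parameter list above (assumption: the listed fields are plain
# attributes on the spec object and are set the same way as seq_len).
tuned_spec = model_spec.get('mobilebert_classifier')
tuned_spec.seq_len = 256         # classify longer text
tuned_spec.dropout_rate = 0.2    # training pipeline parameter from the list above
tuned_spec.learning_rate = 3e-5  # initial learning rate for the Adam optimizer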
model = text_classifier.create(train_data, model_spec=spec, epochs=20)
Explanation: Tune the training hyperparameters
You can also tune the training hyperparameters like epochs and batch_size that affect the model accuracy. For instance,
epochs: more epochs could achieve better accuracy, but may lead to overfitting.
batch_size: the number of samples to use in one training step.
For example, you can train with more epochs.
End of explanation
loss, accuracy = model.evaluate(test_data)
Explanation: Evaluate the newly retrained model with 20 training epochs.
End of explanation
spec = model_spec.get('bert_classifier')
Explanation: Change the Model Architecture
You can change the model by changing the model_spec. The following shows how to change to BERT-Base model.
Change the model_spec to BERT-Base model for the text classifier.
End of explanation |
15,636 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Following-up from this question years ago, is there a canonical "shift" function in numpy? Ideally it can be applied to 2-dimensional arrays. | Problem:
import numpy as np
a = np.array([[ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9.],
[1., 2., 3., 4., 5., 6., 7., 8., 9., 10.]])
shift = 3
def solution(xs, n):
e = np.empty_like(xs)
if n >= 0:
e[:,:n] = np.nan
e[:,n:] = xs[:,:-n]
else:
e[:,n:] = np.nan
e[:,:n] = xs[:,-n:]
return e
result = solution(a, shift) |
15,637 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
8.1 RM-GF correlations tests
Step1: Correlation between max chapter and answers
method 1
Step2: method 2
use max chapter or <= max chapter?
Step3: Clustering answers to find underlying correlation with RedMetrics data
Step4: Clustering using KernelDensity | Python Code:
%run "../Functions/8. RM-GF correlations.ipynb"
allData = allDataPlaytestPhase1PretestPosttestUniqueProfilesVolunteers.copy()
Explanation: 8.1 RM-GF correlations tests
End of explanation
#def getScoresOnQuestionsFromAllData(allData, Qs):
Explanation: Correlation between max chapter and answers
method 1: correlation matrix
index: question groups
columns: RedMetrics parameters
End of explanation
correctPerMaxChapter = pd.DataFrame(index = posttestScientificQuestions, columns = range(15))
allData.loc[:, allData.loc['maxChapter', :] == 10].columns
# when reaching checkpoint N, what is the rate of good answer for question Q?
maxCheckpointsDF = pd.DataFrame(index = ['maxCh'], columns=range(15))
for chapter in allData.loc['maxChapter', :].unique():
eltsCount = len(allData.loc[:, allData.loc['maxChapter', :] == chapter].columns)
maxCheckpointsDF.loc['maxCh', chapter] = eltsCount
for q in posttestScientificQuestions:
interestingElts = allData.loc[q, allData.loc['maxChapter', :] == chapter]
scoreSum = interestingElts.sum()
correctPerMaxChapter.loc[q, chapter] = int(scoreSum * 100 / eltsCount)
correctPerMaxChapterNotNan = correctPerMaxChapter.fillna(-1)
_fig1 = plt.figure(figsize=(20,20))
_ax1 = plt.subplot(111)
_ax1.set_title("maxCheckpointsDF")
sns.heatmap(
correctPerMaxChapterNotNan,
ax=_ax1,
cmap=plt.cm.jet,
square=True,
annot=True,
fmt='d',
)
maxCheckpointsDFNotNan = maxCheckpointsDF.fillna(0)
_fig2 = plt.figure(figsize=(14,2))
_ax2 = plt.subplot(111)
_ax2.set_title("maxCheckpointsDF")
sns.heatmap(
maxCheckpointsDFNotNan,
ax=_ax2,
cmap=plt.cm.jet,
square=True,
annot=True,
fmt='d',
)
corrChapterScQDF = pd.DataFrame(index=posttestScientificQuestions, columns=['corr'])
# when reaching checkpoint N, what is the rate of good answer for question Q?
for q in posttestScientificQuestions:
corrChapterScQDF.loc[q, 'corr'] = np.corrcoef(allData.loc[q,:].values, allData.loc['maxChapter',:].values)[1,0]
corrChapterScQDFNotNan = corrChapterScQDF.fillna(-2)
_fig1 = plt.figure(figsize=(14,10))
_ax1 = plt.subplot(111)
_ax1.set_title("corrChapterScQDFNotNan")
sns.heatmap(
corrChapterScQDFNotNan,
ax=_ax1,
cmap=plt.cm.jet,
square=True,
annot=True,
fmt='.2f',
vmin=-1,
vmax=1,
)
Explanation: method 2
use max chapter or <= max chapter?
End of explanation
from sklearn.cluster import KMeans
from sklearn.neighbors import KernelDensity
X = np.array([[0.9], [1], [1.1], [4], [4.1], [4.2], [5]])
kmeans = KMeans(n_clusters=2, random_state=0).fit(X)
kmeans.inertia_
kmeans.labels_
kmeans.cluster_centers_
kmeans.predict([[3], [4]])
inertiaThreshold = 1
for question in scientificQuestions:
posttestQuestion = answerTemporalities[1] + " " + question
#deltaQuestion = delta + " " + question
allDataPlaytestPhase1PretestPosttestUniqueProfilesVolunteers.loc[posttestQuestion, :]
X = [[x] for x in allDataPlaytestPhase1PretestPosttestUniqueProfilesVolunteers.loc[posttestQuestion, :].values]
clusterCount = 3
kmeans = KMeans(n_clusters=clusterCount, random_state=0).fit(X)
if len(np.unique(kmeans.labels_)) != clusterCount:
print("incorrect number of clusters")
kmeans.inertia_
Explanation: Clustering answers to find underlying correlation with RedMetrics data
End of explanation
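# Aside (sketch): one common way to choose clusterCount is to watch how kmeans.inertia_
# drops as the number of clusters grows (an "elbow" check).
X_demo = np.array([[0.9], [1], [1.1], [4], [4.1], [4.2], [5]])
[KMeans(n_clusters=k, random_state=0).fit(X_demo).inertia_ for k in range(1, 6)]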
X = np.array([[-1], [-2], [-3], [1], [2], [3]])
kde = KernelDensity(kernel='gaussian', bandwidth=0.2).fit(X)
kde.score_samples(X)
X = np.array([-1, -2, -3, 1, 2, 3])
kde = KernelDensity(kernel='gaussian', bandwidth=0.2).fit(X.reshape(-1, 1))
kde.score_samples(X.reshape(-1, 1))
X.reshape(-1, 1)
Explanation: Clustering using KernelDensity
End of explanation |
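# Final aside (sketch): the fitted KDE can be evaluated on a grid, since score_samples
# returns log-densities; np.exp recovers the estimated density itself.
grid = np.linspace(-4, 4, 9).reshape(-1, 1)
np.exp(kde.score_samples(grid))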
15,638 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Methods and Experiments
Experimental Setup
Basic Definitions
Computation Steps
Compute Experimental Data
Load Trajectory Data
Compute POI Info
Construct Travelling Sequences
Choose Travelling Sequences for experiment
Compute transition matrix using training set
Compute POI popularity and user interest using training set
Enumerate Trajectories with length {3, 4, 5}
Compute Features (scores) for each enumerated sequence
Experiments with training set
Experiment with random weights
Experiment with single feature
Search weights using coordinate-wise grid search
Apply search results to testing set
<a id='sec1'></a>
1. Experimental Setup
Enumerate all trajectories for each user given the trajectory length (e.g. 3, 4, 5) and the (start, end) POIs.
For each trajectory, compute a score based on the features below
Step1: <a id='sec3.2'></a>
3.2 Compute POI Info
Compute POI (Longitude, Latitude) as the average coordinates of the assigned photos.
Step2: Extract POI category and visiting frequency.
Step3: Save POI info to file.
Step4: <a id='sec3.3'></a>
3.3 Construct Travelling Sequences
Step5: Save travelling sequences to file.
Step6: <a id='sec3.4'></a>
3.4 Choose Travelling Sequences for experiment
Trajectories with length {3, 4, 5} are used in our experiment.
Step7: Split travelling sequences into training set and testing set using leave-one-out for each user.
For testing purpose, users with less than two travelling sequences are not considered in this experiment.
Step8: Sanity check
Step9: Save training/testing set to file.
Step10: <a id='sec3.5'></a>
3.5 Compute transition matrix using training set
Compute transition probabilities between different kinds of POI categories.
Step11: Count the transition number for each possible transition.
Step12: Normalise each row to get an estimate of transition probabilities (MLE).
Step13: Compute the log of transition probabilities with smooth factor $\epsilon=10^{-12}$.
Step14: <a id='sec3.6'></a>
3.6 Compute POI popularity and user interest using training set
Compute average POI visit duration, POI popularity as defined at the top of the notebook.
Step15: Compute time/frequency based user interest as defined at the
top of the notebook.
Step16: Sum defined in paper, but cumulative of (time ratio) * (avg POI visit duration) will become extremely large in many cases, which is unrealistic.
Step18: <a id='sec3.7'></a>
3.7 Enumerate Trajectories with length {3, 4, 5}
Step21: <a id='sec3.8'></a>
3.8 Compute Features (scores) for each enumerated sequence
As described at the top of the notebook, features for each trajectory used in this experiment are
Step22: Load features from file if possible.
Step23: <a id='sec4'></a>
4. Experiments with training set
Step24: <a id='sec4.1'></a>
4.1 Experiment with random weights
Step25: <a id='sec4.2'></a>
4.2 Experiment with single feature
NOTE
Step26: 4.2.1 Observations with single feature
The first feature is the total time-based user interest in a trajectory.
(i.e. the sum of (expected time the user spent at each POI)/(average time a user spent at each POI))
It seems the existence of this feature negatively affected the algorithm,
which is strange as the IJCAI paper argues that capturing the expected time a user spent at POI will improve the accuracy of trajectory recommendation.
The second feature is the total number of visits (by the user) of all POIs in a trajectory.
Similar to the first feature, it seems the existence of this feature negatively affected the algorithm,
which is also strange as experiments from the IJCAI paper show that capturing a user's visiting frequency of POI will improve the accuracy of trajectory recommendation, though less than capturing visit time duration, but still much better than greedy and random selection strategies.
The third feature is the total POI popularity (i.e. # of visit of a POI by all users) of all POIs in a trajectory.
It seems that doesn't affect the recommendation much, though a positive weight of this feature will help the recommendation algorithm slightly.
The fourth feature is the negative (i.e. multiplied by -1) of the total travelling cost (i.e. total travel distance in the trajectory) for a user of a trajectory.
It's strange that the algorithm prefers long travelling distance.
The fifth feature is the probability of a recommended trajectory based on the transition probabilities between POI categories and the nearest neighbor rule for choosing a specific POI within a certain category.
It's clear the algorithm likes nearest neighbors.
The sixth feature is the probability of a recommended trajectory based on the transition probabilities between POI categories and the most popular POI rule for choosing a specific POI within a certain category. It's clear the algorithm likes popular POIs.
The seventh feature is the probability of a recommended trajectory based on the transition probabilities between POI categories and a rule below for choosing a specific POI within a certain category.
Rule
Step27: <a id='sec5'></a>
5. Apply search results to testing set
Step28: Load features from file if possible. | Python Code:
%matplotlib inline
import os
import math
import random
import pickle
import pandas as pd
import numpy as np
import numpy.matlib
from datetime import datetime
from joblib import Parallel, delayed
import matplotlib.pyplot as plt
nfeatures = 8 # number of features
EPS = 1e-12 # smooth, deal with 0 probability
random.seed(123456789) # control random choice when splitting training/testing set
data_dir = 'data/data-ijcai15'
#fvisit = os.path.join(data_dir, 'userVisits-Osak.csv')
#fcoord = os.path.join(data_dir, 'photoCoords-Osak.csv')
#fvisit = os.path.join(data_dir, 'userVisits-Glas.csv')
#fcoord = os.path.join(data_dir, 'photoCoords-Glas.csv')
#fvisit = os.path.join(data_dir, 'userVisits-Edin.csv')
#fcoord = os.path.join(data_dir, 'photoCoords-Edin.csv')
fvisit = os.path.join(data_dir, 'userVisits-Toro.csv')
fcoord = os.path.join(data_dir, 'photoCoords-Toro.csv')
suffix = fvisit.split('-')[-1].split('.')[0]
fpoi = os.path.join(data_dir, 'poi-' + suffix + '.csv')
fseq = os.path.join(data_dir, 'seq-' + suffix + '.csv')
ftrain = os.path.join(data_dir, 'trainset-' + suffix + '.pkl')
ftest = os.path.join(data_dir, 'testset-' + suffix + '.pkl')
ffeatures_train = os.path.join(data_dir, 'featuresTrain-' + suffix + '.pkl')
ffeatures_test = os.path.join(data_dir, 'featuresTest-' + suffix + '.pkl')
visits = pd.read_csv(fvisit, sep=';')
visits.head()
coords = pd.read_csv(fcoord, sep=';')
coords.head()
# merge data frames according to column 'photoID'
assert(visits.shape[0] == coords.shape[0])
traj = pd.merge(visits, coords, on='photoID')
traj.head()
pd.DataFrame([traj[['photoLon', 'photoLat']].min(), traj[['photoLon', 'photoLat']].max(), \
traj[['photoLon', 'photoLat']].max() - traj[['photoLon', 'photoLat']].min()], \
index = ['min', 'max', 'range'])
plt.figure(figsize=[15, 5])
plt.xlabel('Longitude')
plt.ylabel('Latitude')
plt.scatter(traj['photoLon'], traj['photoLat'], marker='+')
num_photo = traj['photoID'].unique().shape[0]
num_user = traj['userID'].unique().shape[0]
num_poi = traj['poiID'].unique().shape[0]
num_seq = traj['seqID'].unique().shape[0]
pd.DataFrame([num_photo, num_user, num_poi, num_seq, num_photo/num_user, num_seq/num_user], \
index = ['#photo', '#user', '#poi', '#seq', '#photo/user', '#seq/user'], columns=[str(suffix)])
Explanation: Methods and Experiments
Experimental Setup
Basic Definitions
Computation Steps
Compute Experimental Data
Load Trajectory Data
Compute POI Info
Construct Travelling Sequences
Choose Travelling Sequences for experiment
Compute transition matrix using training set
Compute POI popularity and user interest using training set
Enumerate Trajectories with length {3, 4, 5}
Compute Features (scores) for each enumerated sequence
Experiments with training set
Experiment with random weights
Experiment with single feature
Search weights using coordinate-wise grid search
Apply search results to testing set
<a id='sec1'></a>
1. Experimental Setup
Enumerate all trajectories for each user given the trajectory length (e.g. 3, 4, 5) and the (start, end) POIs.
For each trajectory, compute a score based on the features below:
* User Interest (time-based)
* User Interest (frequency-based)
* POI Popularity
* Travelling Cost (distance/time)
* Trajectory probability based on the transition probabilities between different POI categories and the following rules for choosing a specific POI within certain category:
* The Nearest Neighbor of the current POI
* The most Popular POI
* A random POI choosing with probability proportional to the reciprocal of its distance to current POI
* A random POI choosing with probability proportional to its popularity
To avoid numerical underflow, use the log of the probabilities instead of the probabilities themselves,
to avoid zero probabilities, add a smooth value $\epsilon = 10^{-12}$ for each probability.
Plot the scores of generated and actual trajectories for each (user, trajectoryLength, startPOI, endPOI) tuple with some degree of transparency (alpha).
Recommend trajectory with the highest score and measure the performance of recommendation using recall, precision and F1-score.
Optimise parameters in the score function by learning, in this specific case, the cost function could be based on recall, precision or F1-score, we can also control the estimation of transition matrix.
<a id='sec1.1'></a>
1.1 Basic definitions
For user $u$ and POI $p$, define
Travel History:
\begin{equation}
S_u = {(p_1, t_{p_1}^a, t_{p_1}^d), \dots, (p_n, t_{p_n}^a, t_{p_n}^d)}
\end{equation}
where $t_{p_i}^a$ is the arrival time and $t_{p_i}^d$ the departure time of user $u$ at POI $p_i$
Travel Sequences: split $S_u$ if
\begin{equation}
|t_{p_i}^d - t_{p_{i+1}}^a| > \tau ~(\text{e.g.}~ \tau = 8 ~\text{hours})
\end{equation}
POI Popularity:
\begin{equation}
Pop(p) = \sum_{u \in U} \sum_{p_i \in S_u} \delta(p_i == p)
\end{equation}
Average POI Visit Duration:
\begin{equation}
\bar{V}(p) = \frac{1}{N} \sum_{u \in U} \sum_{p_i \in S_u} (t_{p_i}^d - t_{p_i}^a) \delta(p_i == p)
\end{equation}
where $N$ is #visits of POI $p$ by all users
Define the interest of user $u$ in POI category $c$ as
Time based User Interest:
\begin{equation}
Int^{Time}(u, c) = \sum_{p_i \in S_u} \frac{(t_{p_i}^d - t_{p_i}^a)}{\bar{V}(p_i)} \delta(Cat_{p_i} == c)
\end{equation}
where $Cat_{p_i}$ is the category of POI $p_i$
we also tried this one
\begin{equation}
Int^{Time}(u, c) = \frac{1}{n} \sum_{p_i \in S_u} \frac{(t_{p_i}^d - t_{p_i}^a)}{\bar{V}(p_i)} \delta(Cat_{p_i} == c)
\end{equation}
where $n$ is the number of visit of category $c$ by user $u$ (i.e. the frequency base user interest defined below).
Frequency based User Interest:
\begin{equation}
Int^{Freq}(u, c) = \sum_{p_i \in S_u} \delta(Cat_{p_i} == c)
\end{equation}
<a id='sec2'></a>
2. Computation Steps
Split the actual trajectories into two parts, one for training, the other for testing.
Concretely, for each user, consider all the trajectories with length 3, 4 and 5, pick one for the testing set and put all others into the training set.
Use trajectories in training set to compute (MLE) a transition matrix where element [i, j] denotes the transition probability from POI category i to POI category j.
For each trajectory $T$ in training set, enumerate all possible trajectories that satisfy the following requirements:
The trajectory length is the same as that of $T$
The start/end POI are the same as those of $T$
No sub-tour exists
Compute the 8 scores described above as features, rescale each score into range [-1, 1], and compute the weighted sum of these scores to get a single score.
~~(weights are normalised so that they are in range [0, 1] and their sum is 1)~~
(the values in the parameter/weight vector are now drawn from range [-1, 1], and their sum no longer has to be 1.)
Choose the trajectory with the highest score $T^*$ and compute F1 score as follows:
recall = $\frac{|T^* \cap T|}{|T|}$
precision = $\frac{|T^* \cap T|}{|T^*|}$
F1-score = $\frac{2 \times \text{recall} \times \text{precision}}{\text{recall} + \text{precision}}$
Compute the mean F1 score for all trajectory $T$ in the training set.
Use coordinate-wise grid search to find a good weight vector such that the mean F1 score is as large as possible.
<a id='sec3'></a>
3. Compute Experimental Data
NOTE: Before running this notebook, please run script src/ijcai15_setup.py to setup data properly.
<a id='sec3.1'></a>
3.1 Load Trajectory Data
End of explanation
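The 8-hour splitting rule from the definitions above is applied by the external setup script rather than in this notebook; a minimal sketch of that rule (an illustration under that assumption, not the script itself) is:
# Sketch of the travel-sequence splitting rule: start a new sequence whenever the gap
# between leaving one POI and arriving at the next exceeds tau (here 8 hours, in seconds).
tau = 8 * 60 * 60
def split_travel_history(visits):
    """visits: non-empty list of (poiID, arrivalTime, departureTime) tuples sorted by arrivalTime."""
    sequences = [[visits[0]]]
    for prev, cur in zip(visits, visits[1:]):
        if cur[1] - prev[2] > tau:   # arrival of next minus departure of previous
            sequences.append([])
        sequences[-1].append(cur)
    return sequences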
poi_coords = traj[['poiID', 'photoLon', 'photoLat']].groupby('poiID').mean()
poi_coords.reset_index(inplace=True)
poi_coords.rename(columns={'photoLon':'poiLon', 'photoLat':'poiLat'}, inplace=True)
#poi_coords
Explanation: <a id='sec3.2'></a>
3.2 Compute POI Info
Compute POI (Longitude, Latitude) as the average coordinates of the assigned photos.
End of explanation
poi_catfreq = traj[['poiID', 'poiTheme', 'poiFreq']].groupby('poiID').first()
poi_catfreq.reset_index(inplace=True)
#poi_catfreq
poi_all = pd.merge(poi_catfreq, poi_coords, on='poiID')
poi_all.set_index('poiID', inplace=True)
#poi_all
Explanation: Extract POI category and visiting frequency.
End of explanation
poi_all.to_csv(fpoi, index=True)
#poi_all2 = pd.read_csv(fpoi, index_col=0)
#poi_all2
Explanation: Save POI info to file.
End of explanation
seq_all = traj[['userID', 'seqID', 'poiID', 'dateTaken']].copy()\
.groupby(['userID', 'seqID', 'poiID']).agg([np.min, np.max])
#seq_all.head()
seq_all.columns = seq_all.columns.droplevel()
seq_all.head()
seq_all.reset_index(inplace=True)
seq_all.head()
seq_all.rename(columns={'amin':'arrivalTime', 'amax':'departureTime'}, inplace=True)
seq_all['poiDuration(sec)'] = seq_all['departureTime'] - seq_all['arrivalTime']
#print('Found %d sequences' % len(seq_all))
#pickle.dump(seq_all, open('all_trajectories.pkl', 'bw'))
seq_all.head()
Explanation: <a id='sec3.3'></a>
3.3 Construct Travelling Sequences
End of explanation
seq_all.to_csv(fseq, index=False)
#seq_all2 = pd.read_csv(fseq)
#seq_all2.head()
#seq_all = pickle.load(open('all_trajectories.pkl'))
seq_user = seq_all[['seqID', 'userID']].copy()
seq_user = seq_user.groupby('seqID').first()
#type(seq_user)
#seq_user.loc[1].iloc[0]
#seq_user.reset_index(inplace=True)
#seq_user.set_index('seqID', inplace=True)
seq_user.head()
Explanation: Save travelling sequences to file.
End of explanation
seq_len = seq_all[['userID', 'seqID', 'poiID']].copy()
seq_len = seq_len.groupby(['userID', 'seqID']).agg(np.size)
seq_len.reset_index(inplace=True)
seq_len.rename(columns={'poiID':'seqLen'}, inplace=True)
#seq_len.head()
ax = seq_len['seqLen'].hist(bins=20)
ax.set_yscale('log')
seq_345 = seq_len[seq_len['seqLen'].isin({3, 4, 5})]
seq_345['seqLen'].hist(bins=9)
Explanation: <a id='sec3.4'></a>
3.4 Choose Travelling Sequences for experiment
Trajectories with length {3, 4, 5} are used in our experiment.
End of explanation
train_set = []
test_set = []
user_seqs = seq_345[['userID', 'seqID']].groupby('userID')
for user, indices in user_seqs.groups.items():
if len(indices) < 2: continue
idx = random.choice(indices)
test_set.append(seq_345.loc[idx, 'seqID'])
train_set.extend([seq_345.loc[x, 'seqID'] for x in indices if x != idx])
print('#seq in trainset:', len(train_set))
print('#seq in testset:', len(test_set))
seq_345[seq_345['seqID'].isin(train_set)]['seqLen'].hist(bins=9)
#data = np.array(seqs1['seqLen'])
#hist, bins = np.histogram(data, bins=3)
#print(hist)
Explanation: Split travelling sequences into training set and testing set using leave-one-out for each user.
For testing purpose, users with less than two travelling sequences are not considered in this experiment.
End of explanation
seq_exp = seq_345[['userID', 'seqID']].copy()
seq_exp = seq_exp.groupby('userID').agg(np.size)
seq_exp.reset_index(inplace=True)
seq_exp.rename(columns={'seqID':'#seq'}, inplace=True)
seq_exp = seq_exp[seq_exp['#seq'] > 1] # user with more than 1 sequences
print('total #seq for experiment:', seq_exp['#seq'].sum())
#seq_exp.head()
seq_t = seq_345[seq_345['seqID'].isin(train_set)]
nseq3 = seq_t[seq_t['seqLen'] == 3].shape[0]
nseq4 = seq_t[seq_t['seqLen'] == 4].shape[0]
nseq5 = seq_t[seq_t['seqLen'] == 5].shape[0]
nseq345 = nseq3 + nseq4 + nseq5
randF1 = (2/3) * (nseq3 / nseq345) + (2/4) * (nseq4 / nseq345) + (2/5) * (nseq5 / nseq345)
print('%d sequences with length 3, %d sequences with length 4, %d sequences with length 5' % (nseq3, nseq4, nseq5))
print('F1-score by random guessing is %.3f' % randF1)
Explanation: Sanity check: the total number of travelling sequences used in the experiment, and the F1-score baseline for random guessing.
End of explanation
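The random-guessing baseline printed above presumably comes from the fact that the start and end POIs are always given, so a length-$L$ guess shares at least those two POIs with the actual trajectory:
\begin{equation}
\text{recall} = \text{precision} = \frac{2}{L}, \qquad
F_1 = \frac{2 \cdot \frac{2}{L} \cdot \frac{2}{L}}{\frac{2}{L} + \frac{2}{L}} = \frac{2}{L}
\end{equation}
Averaging $2/L$ over the length distribution of the training sequences gives the value computed in the cell above.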
pickle.dump(train_set, open(ftrain, 'wb'))
pickle.dump(test_set, open(ftest, 'wb'))
Explanation: Save training/testing set to file.
End of explanation
poi_cats = traj['poiTheme'].unique().tolist()
poi_cats.sort()
poi_cats
ncats = len(poi_cats)
trans_mat = pd.DataFrame(data=np.zeros((ncats, ncats), dtype=np.float64), index=poi_cats, columns=poi_cats)
#trans_mat
Explanation: <a id='sec3.5'></a>
3.5 Compute transition matrix using training set
Compute transition probabilities between different kinds of POI categories.
End of explanation
#train_set = [4, 13, 32, 33, 34, 99, 101]
#seq_all[seq_all['seqID'] == train_set[0]]
for seqid in train_set:
seqi = seq_all[seq_all['seqID'] == seqid].copy()
seqi.sort(columns=['arrivalTime'], ascending=True, inplace=True)
for j in range(len(seqi.index)-1):
idx1 = seqi.index[j]
idx2 = seqi.index[j+1]
poi1 = seqi.loc[idx1, 'poiID']
poi2 = seqi.loc[idx2, 'poiID']
cat1 = poi_all.loc[poi1, 'poiTheme']
cat2 = poi_all.loc[poi2, 'poiTheme']
trans_mat.loc[cat1, cat2] += 1
trans_mat
Explanation: Count the transition number for each possible transition.
End of explanation
for r in trans_mat.index:
rowsum = trans_mat.ix[r].sum()
if rowsum == 0: continue # deal with lack of data
for c in trans_mat.columns:
trans_mat.loc[r, c] /= rowsum
trans_mat
Explanation: Normalise each row to get an estimate of transition probabilities (MLE).
End of explanation
log10_trans_mat = np.log10(trans_mat.copy() + EPS)
log10_trans_mat
Explanation: Compute the log of transition probabilities with smooth factor $\epsilon=10^{-12}$.
End of explanation
poi_avg_pop = seq_all[seq_all['seqID'].isin(train_set)]
poi_avg_pop = poi_avg_pop[['poiID', 'poiDuration(sec)']].copy()
poi_avg_pop = poi_avg_pop.groupby('poiID').agg([np.mean, np.size])
poi_avg_pop.columns = poi_avg_pop.columns.droplevel()
poi_avg_pop.reset_index(inplace=True)
poi_avg_pop.rename(columns={'mean':'avgDuration(sec)', 'size':'popularity'}, inplace=True)
poi_avg_pop.set_index('poiID', inplace=True)
print('#poi:', poi_avg_pop.shape[0])
if poi_avg_pop.shape[0] < poi_all.shape[0]:
extra_index = list(set(poi_all.index) - set(poi_avg_pop.index))
extra_poi = pd.DataFrame(data=np.zeros((len(extra_index), 2), dtype=np.float64), \
index=extra_index, columns=['avgDuration(sec)', 'popularity'])
poi_avg_pop = poi_avg_pop.append(extra_poi)
print('#poi after extension:', poi_avg_pop.shape[0])
poi_avg_pop
Explanation: <a id='sec3.6'></a>
3.6 Compute POI popularity and user interest using training set
Compute average POI visit duration, POI popularity as defined at the top of the notebook.
End of explanation
user_interest = seq_all[seq_all['seqID'].isin(train_set)]
user_interest = user_interest[['userID', 'poiID', 'poiDuration(sec)']].copy()
user_interest['timeRatio'] = [poi_avg_pop.loc[x, 'avgDuration(sec)'] for x in user_interest['poiID']]
user_interest['timeRatio'] = user_interest['poiDuration(sec)'] / user_interest['timeRatio']
user_interest['poiTheme'] = [poi_all.loc[x, 'poiTheme'] for x in user_interest['poiID']]
user_interest.drop(['poiID', 'poiDuration(sec)'], axis=1, inplace=True)
Explanation: Compute time/frequency based user interest as defined at the
top of the notebook.
End of explanation
#user_interest = user_interest.groupby(['userID', 'poiTheme']).agg([np.sum, np.size])
user_interest = user_interest.groupby(['userID', 'poiTheme']).agg([np.mean, np.size]) # try the mean
user_interest.columns = user_interest.columns.droplevel()
#user_interest.rename(columns={'sum':'timeBased', 'size':'freqBased'}, inplace=True)
user_interest.rename(columns={'mean':'timeBased', 'size':'freqBased'}, inplace=True)
user_interest.reset_index(inplace=True)
user_interest.set_index(['userID', 'poiTheme'], inplace=True)
user_interest.head()
Explanation: The sum is what the paper defines, but the cumulative (time ratio) * (avg POI visit duration) becomes extremely large in many cases, which is unrealistic, so the mean is used here instead.
End of explanation
poi_list = poi_all.index.tolist()
def enum_345_seq(seqid_set, poi_list):
Enumerate all possible travelling sequences with length {3, 4, 5}
act_seqs = dict()
enum_seqs = dict()
for seqid in seqid_set:
seqi = seq_all[seq_all['seqID'] == seqid].copy()
seqi.sort(columns=['arrivalTime'], ascending=True, inplace=True)
act_seqs[seqid] = seqi['poiID'].tolist()
p0 = seqi.loc[seqi.index[0], 'poiID']
pN = seqi.loc[seqi.index[-1],'poiID']
# enumerate sequences with length 3
if seqi.shape[0] == 3:
enum_seqs[seqid] = [[p0, p, pN] \
for p in poi_list if p not in {p0, pN}]
continue
# enumerate sequences with length 4
if seqi.shape[0] == 4:
enum_seqs[seqid] = [[p0, p1, p2, pN] \
for p1 in poi_list if p1 not in {p0, pN} \
for p2 in poi_list if p2 not in {p0, p1, pN}]
continue
# enumerate sequences with length 5
if seqi.shape[0] == 5:
enum_seqs[seqid] = [[p0, p1, p2, p3, pN] \
for p1 in poi_list if p1 not in {p0, pN} \
for p2 in poi_list if p2 not in {p0, p1, pN} \
for p3 in poi_list if p3 not in {p0, p1, p2, pN}]
continue
return enum_seqs, act_seqs
Explanation: <a id='sec3.7'></a>
3.7 Enumerate Trajectories with length {3, 4, 5}
End of explanation
def calc_dist(longitude1, latitude1, longitude2, latitude2):
Calculate the distance (unit: km) between two places on earth
# convert degrees to radians
lon1 = math.radians(longitude1)
lat1 = math.radians(latitude1)
lon2 = math.radians(longitude2)
lat2 = math.radians(latitude2)
radius = 6371.009 # mean earth radius is 6371.009km, en.wikipedia.org/wiki/Earth_radius#Mean_radius
# The haversine formula, en.wikipedia.org/wiki/Great-circle_distance
dlon = math.fabs(lon1 - lon2)
dlat = math.fabs(lat1 - lat2)
return 2 * radius * math.asin(math.sqrt(\
(math.sin(0.5*dlat))**2 + math.cos(lat1) * math.cos(lat2) * (math.sin(0.5*dlon))**2 ))
poi_dist_mat = pd.DataFrame(data=np.zeros((poi_all.shape[0], poi_all.shape[0]), dtype=np.float64), \
index=poi_all.index, columns=poi_all.index)
poi_rdist_mat = poi_dist_mat.copy()
for i in range(poi_all.index.shape[0]):
for j in range(i+1, poi_all.index.shape[0]):
r = poi_all.index[i]
c = poi_all.index[j]
dist = calc_dist(poi_all.loc[r, 'poiLon'], poi_all.loc[r, 'poiLat'], \
poi_all.loc[c, 'poiLon'], poi_all.loc[c, 'poiLat'])
poi_dist_mat.loc[r, c] = dist
poi_dist_mat.loc[c, r] = dist
assert(dist > 0.)
rdist = 1./dist
poi_rdist_mat.loc[r, c] = rdist
poi_rdist_mat.loc[c, r] = rdist
def calc_features(user, seq, poi_all, user_interest, log10_trans_mat, poi_dist_mat, poi_rdist_mat):
Compute 8 features for each enumerated trajectory
features = np.zeros(nfeatures, dtype=np.float64)
# POI based features
for poi in seq:
cat = poi_all.loc[poi, 'poiTheme']
if (user, cat) in user_interest.index:
features[0] += user_interest.loc[user, cat]['timeBased'] # 1. time-based user interest
features[1] += user_interest.loc[user, cat]['freqBased'] # 2. freq-based user interest
features[2] += poi_avg_pop.loc[poi, 'popularity'] # 3. POI popularity
# POI-pair based features
for k in range(len(seq)-1):
poi1 = seq[k]
poi2 = seq[k+1]
assert(poi1 != poi2)
cat1 = poi_all.loc[poi1, 'poiTheme']
cat2 = poi_all.loc[poi2, 'poiTheme']
features[3] += -1 * poi_dist_mat.loc[poi1, poi2] # 4. travel distance
trans_prob = log10_trans_mat.loc[cat1, cat2] # log of transition probability
for l in range(4, 8):
features[l] += trans_prob
poi_cat2 = poi_all[poi_all['poiTheme'] == cat2].copy()
if cat1 == cat2:
poi_cat2.drop(poi1, axis=0, inplace=True) # drop row
distvec = pd.DataFrame(data=[poi_rdist_mat.loc[poi1, x] for x in poi_cat2.index], index=poi_cat2.index)
if distvec.idxmax().iloc[0] == poi2:
features[4] += math.log10(1. + EPS) # poi2 is the nearest neighbor of poi1
else:
features[4] += math.log10(0. + EPS) # poi2 is not the nearest neighbor of poi1
popvec = pd.DataFrame(data=[poi_avg_pop.loc[x,'popularity'] for x in poi_cat2.index], index=poi_cat2.index)
#popvec = pd.DataFrame(data=[poi_all.loc[x,'poiFreq'] for x in poi_cat2.index], index=poi_cat2.index)
if popvec.idxmax().iloc[0] == poi2:
features[5] += math.log10(1. + EPS) # poi2 is the most popular one within cat2
else:
features[5] += math.log10(0. + EPS) # poi2 is not the most popular one within cat2
features[6] += math.log10(EPS + distvec.loc[poi2].iloc[0] / distvec.sum().iloc[0])
features[7] += math.log10(EPS + popvec.loc[poi2].iloc[0] / popvec.sum().iloc[0])
# normalise score, range [-1, 1]
features /= abs(features).max()
return features
enum_seqs, train_seqs = enum_345_seq(train_set, poi_list)
Explanation: <a id='sec3.8'></a>
3.8 Compute Features (scores) for each enumerated sequence
As described at the top of the notebook, features for each trajectory used in this experiment are:
1. total time-based user interest
1. total freq-based user interest
1. total POI popularity
1. total travel distance (without the visit duration time at each POI)
1. features 5 to 8 are trajectory (log) probabilities based on the transition matrix between different POI categories and the following rules for choosing a specific POI within certain category:
* The Nearest Neighbor of the current POI
* The most Popular POI
* A random POI choosing with probability proportional to the reciprocal of its distance to current POI
* A random POI choosing with probability proportional to its popularity
End of explanation
doCompute = True
train_set1 = pickle.load(open(ftrain, 'rb'))
if (np.array(sorted(train_set1)) == np.array(sorted(train_set))).all() and os.path.exists(ffeatures_train):
doCompute = False
all_features = None
if doCompute:
#[(seqid, seqidx_in_enum_seqs_dict)]
seq_info = [(seqid, j) for seqid in train_set for j in range(len(enum_seqs[seqid]))]
# all CPUs but one are used
all_features = Parallel\
(n_jobs=-2)\
(delayed\
(calc_features)\
(seq_user.loc[x[0]].iloc[0], enum_seqs[x[0]][x[1]], poi_all, user_interest, \
log10_trans_mat, poi_dist_mat, poi_rdist_mat) \
for x in seq_info)
pickle.dump(all_features, open(ffeatures_train, 'wb'))
else:
# load features
all_features = pickle.load(open(ffeatures_train, 'rb'))
print(len(all_features))
num_enum_seq = 0 # total number of enumerated sequences
score_indices = dict() # score vector index in the feature matrix
for seqid in train_set:
num = len(enum_seqs[seqid])
score_indices[seqid] = [x for x in range(num_enum_seq, num_enum_seq + num)]
num_enum_seq += num
print(num_enum_seq)
assert(len(all_features) == num_enum_seq)
Explanation: Load features from file if possible.
End of explanation
features_name = \
['total time-based user interest', 'total freq-based user interest', \
'total POI popularity', 'total (negative) travel distance', \
'trajectory probability with nearest neighbor rule', 'trajectory probability with most popular POI rule', \
'trajectory probability which prefers near neighbors', 'trajectory probability which prefers popular POIs']
def calc_F1score(seq_act, seq_rec):
assert(len(seq_act) > 0)
assert(len(seq_rec) > 0)
actset = set(seq_act)
recset = set(seq_rec)
intersect = actset & recset
recall = len(intersect) / len(seq_act)
precision = len(intersect) / len(seq_rec)
return 2. * precision * recall / (precision + recall)
def calc_mean_F1score(train_set, train_seqs, enum_seqs, all_scores, score_indices):
F1scores = []
for seqid in train_set:
scores = np.array([all_scores[x] for x in score_indices[seqid]])
bestseq = enum_seqs[seqid][scores.argmax()]
F1scores.append(calc_F1score(train_seqs[seqid], bestseq))
return np.mean(F1scores)
features_mat = np.array(all_features)
print(features_mat.shape)
#print(features_mat[0])
Explanation: <a id='sec4'></a>
4. Experiments with training set
End of explanation
N = 1000
rand_weights = np.zeros((N, nfeatures), dtype=np.float64)
rand_F1scores = np.zeros(N, dtype=np.float64)
for j in range(N):
weights = np.random.uniform(-1, 1, nfeatures)
rand_weights[j] = weights
all_scores = features_mat.dot(weights)
rand_F1scores[j] = calc_mean_F1score(train_set, train_seqs, enum_seqs, all_scores, score_indices)
maxidx = rand_F1scores.argmax()
minidx = rand_F1scores.argmin()
print('max avgF1:', rand_F1scores[maxidx], ', weights:', rand_weights[maxidx])
print('min avgF1:', rand_F1scores[minidx], ', weights:', rand_weights[minidx])
# all_scores matrix are too large to fit in memory
#randscores = Parallel(n_jobs=-2)\
# (delayed\
# (calc_mean_F1score)\
# (train_set, train_seqs, enum_seqs, all_scores[j], score_indices) for j in range(N))
plt.figure(figsize=[10, 8])
plt.xlim([0, N])
plt.ylim([rand_F1scores.min()-0.01, max(rand_F1scores.max(), randF1)+0.01])
plt.xlabel('Experiment')
plt.ylabel('Mean F1score')
plt.plot([0, N], [randF1, randF1], color='r', linestyle='--', label='random guessing: ' + str(round(randF1,3)))
plt.scatter([range(N)], rand_F1scores, marker='+')
plt.legend()
pd.Series(rand_F1scores).hist(bins=50)
Explanation: <a id='sec4.1'></a>
4.1 Experiment with random weights
End of explanation
#values = [-0.5, 0, 0.5]
#values = np.linspace(-1, 1, 11)
#values = [-0.1, 0.1]
values = [-1, -0.5, -0.1, 0.1, 0.5, 1]
plt.figure(figsize=[16, 16])
for j in range(nfeatures):
all_F1scores = []
for k in range(len(values)):
weights = np.zeros(nfeatures, dtype=np.float64)
weights[j] = values[k]
all_scores = features_mat.dot(weights)
F1scores = []
for seqid in train_set:
scores = np.array([all_scores[x] for x in score_indices[seqid]])
bestseq = enum_seqs[seqid][scores.argmax()]
F1scores.append(calc_F1score(train_seqs[seqid], bestseq))
all_F1scores.append(F1scores)
plt.subplot(4, 2, j+1)
xlim = [min(values)-0.1, max(values)+0.1]
ylim = [0.3, 1.1]
plt.xlim(xlim)
plt.ylim(ylim)
plt.xlabel(features_name[j])
plt.ylabel('Mean F1-score')
plt.boxplot(all_F1scores, labels=values) #OK
avgF1scores = np.array([np.mean(x) for x in all_F1scores])
#plt.scatter(values, avgF1scores, marker='+')
#plt.plot([values[avgF1scores.argmax()]], [avgF1scores.max()], marker='o', color='r', \
# label='max: ' + str(round(avgF1scores.max(),3)))
#plt.plot(xlim, [randF1, randF1], color='g', linestyle='--', label='random guessing: ' + str(round(randF1,3)))
#plt.legend()
Explanation: <a id='sec4.2'></a>
4.2 Experiment with single feature
NOTE:
* When trajectories in training set can't cover all POIs and/or all types of POI category transitions, all above single feature plots show uniform results, and all points are below the green line (e.g. use Osaka data).
* For experiment with single feature, if the weight of the feature is 0, then no features are used in the algorithm, and the scores of all candidate trajectories are the same, so the F1 score of the algorithm will depend on the order of candidates (python will choose the first one), as a result, the actual value of F1 score is meaningless, which indicates that 0 should be skipped in this case.
End of explanation
weights = np.array([0.01, -0.01, 0, 0.88, -0.01, 0.01, -0.05, 0.05])
calc_mean_F1score(train_set, train_seqs, enum_seqs, features_mat.dot(weights), score_indices)
# uniform
# 0.714, [-1. 0. 0. 0.85 0.64 -0.05 0.03 0.]
#weights = np.zeros(nfeatures, dtype=np.float64)
# 0.728, [-0.3 0.65 -0.37 0.99 1. 0.05 0.95 1.]
#weights = np.ones(nfeatures, dtype=np.float64)
# 0.694, [-1. -1. -1. 0.22 0.99 0.49 0.25 0.76]
#weights = -1 * np.ones(nfeatures, dtype=np.float64)
# random
# 0.723, [-0.74 0.09 -0.19 0.99 0.84 -0.04 0.03 0.45]
#weights = np.random.uniform(-1, 1, nfeatures)
# 0.753, [0.35 -0.34 -0.17 0.86 0.78 0.35 0.89 0.63]
#weights = rand_weights[ rand_F1scores.argmax() ]
# hints from single feature experiment
# 0.732, [1 -1 1 1 1 1 1 -1]
#weights = np.array([-1, -1, 1, 1, 1, 1, 1, 1])
# 0.756, [0.12 -0.1 -0.03 0.75 0.1 0.1 0.1 0.1]
#weights = np.array([-0.1, -0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])
# 0.7996, [0.06 -0.05 -0.01 0.98 -0.02 0.04 0.23 0.43]
#weights = np.array([-0.05, -0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])
#weights = np.array([0.06, -0.05, -0.01, 0.98, -0.02, 0.04, 0.23, 0.43]) # init: 0.7996, end: 0.803
#weights = np.array([0.05, -0.05, -0.01, 0.86, -0.02, 0.03, 0.16, 0.43]) # init: 0.803, end: 0.805
#weights = np.array([0.05, -0.05, -0.01, 0.97, -0.02, 0.03, 0.16, 0.36]) # init: 0.805, end: 0.805
# 0.808, [0.02 -0.01 0.01 0.92 -0.02 0.01 0.28 -0.19]
#weights = np.array([-0.02, -0.02, 0.02, 0.02, 0.02, 0.02, 0.02, 0.02])
weights = np.array([0.02, -0.01, 0.01, 0.92, -0.02, 0.01, 0.28, -0.19]) # init: 0.808, end: 0.810
#weights = np.array([0.02, -0.01, 0, 0.87, -0.02, 0.01, 0.37, -0.19]) # init: 0.810, end: 0.810
# 0.810 [0.02, -0.01, 0.01, 0.87, -0.02, 0.01, 0.37, -0.19]
#weights = np.array([0.02, -0.01, 0, 1, -0.02, 0.01, 0.37, -0.19])
# 0.794, [0.01, -0.01, 0.01, 0.45, -0.01, 0.02, 0.07, 0.01]
#weights = np.array([-0.01, -0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01])
# 0.794, [0.01 -0.01 0.01 0.45 -0.01 0.02 0.07 0.01]
#weights = np.array([-0.01, -0.01, 0, 0.01, 0.01, 0.01, 0.01, 0.01])
#weights = np.array([0, 0, 0, 0.01, 0.01, 0.01, 0.01, 0.01]) # 0.789
#weights = np.array([0.01, -0.01, 0.01, 0.45, -0.01, 0.02, 0.07, 0.01]) # init: 0.794, end: 0.795
#weights = np.array([0.02, -0.01, 0, 0.44, -0.01, 0.02, 0.07, 0.01]) # init: 0.795, end: 0.795
#params = np.linspace(-1, 1, 41)
#params = np.linspace(-1, 1, 101)
params = np.linspace(-1, 1, 201)
F1scores = np.zeros((weights.shape[0], params.shape[0]), dtype=np.float64)
t1 = datetime.now()
for k in range(nfeatures):
for j in range(params.shape[0]):
weights[k] = params[j]
all_scores = features_mat.dot(weights)
F1scores[k, j] = calc_mean_F1score(train_set, train_seqs, enum_seqs, all_scores, score_indices)
maxidx = F1scores[k].argmax()
weights[k] = params[maxidx]
t2 = datetime.now()
print('%d seconds used' % (t2-t1).total_seconds()) # 120 seconds
#t1 = datetime.now()
#for k in range(nfeatures):
# all_weights = np.matlib.repmat(weights, params.shape[0], 1)
# for j in range(params.shape[0]):
# all_weights[j, k] = params[j]
# F1scores1 = Parallel(n_jobs=-2)\
# (delayed(calc_mean_F1score)
# (train_set, train_seqs, enum_seqs, features_mat.dot(all_weights[j]), score_indices)
# for j in range(params.shape[0]))
# F1scores[k] = np.array(F1scores1)
# maxidx = F1scores[k].argmax()
# weights[k] = params[maxidx]
#t2 = datetime.now()
#print('%d seconds used' % (t2-t1).total_seconds()) # 407 seconds
print(weights)
print(F1scores[-1].max())
plt.figure(figsize=[16, 16])
for k in range(nfeatures):
plt.subplot(4, 2, k+1)
plt.xlim([-1.1, 1.1])
plt.ylim([0.3, 0.9])
plt.xlabel(features_name[k])
plt.ylabel('Mean F1-score')
plt.scatter(params, F1scores[k], marker='+')
plt.plot([-1.1, 1.1], [randF1, randF1], color='g', linestyle='--', label='random guessing: ' + str(round(randF1,3)))
plt.legend()
Explanation: 4.2.1 Observations with single feature
The first feature is the total time-based user interest in a trajectory.
(i.e. the sum of (expected time the user spent at each POI)/(average time a user spent at each POI))
It seems the existence of this feature negatively affected the algorithm,
which is strange as the IJCAI paper argues that capturing the expected time a user spent at POI will improve the accuracy of trajectory recommendation.
The second feature is the total number of visits (by the user) of all POIs in a trajectory.
Similar to the first feature, it seems the existence of this feature negatively affected the algorithm,
which is also strange as experiments from the IJCAI paper show that capturing a user's visiting frequency of POI will improve the accuracy of trajectory recommendation, though less than capturing visit time duration, but still much better than greedy and random selection strategies.
The third feature is the total POI popularity (i.e. # of visit of a POI by all users) of all POIs in a trajectory.
It seems that doesn't affect the recommendation much, though a positive weight of this feature will help the recommendation algorithm slightly.
The fourth feature is the negative (i.e. multiplied by -1) of the total travelling cost (i.e. total travel distance in the trajectory) for a user of a trajectory.
It's strange that the algorithm prefers long travelling distance.
The fifth feature is the probability of a recommended trajectory based on the transition probabilities between POI categories and the nearest neighbor rule for choosing a specific POI within a certain category.
It's clear the algorithm likes nearest neighbors.
The sixth feature is the probability of a recommended trajectory based on the transition probabilities between POI categories and the most popular POI rule for choosing a specific POI within a certain category. It's clear the algorithm likes popular POIs.
The seventh feature is the probability of a recommended trajectory based on the transition probabilities between POI categories and a rule below for choosing a specific POI within a certain category.
Rule: choose a random POI with probability proportional to the reciprocal of its distance to current POI.
Similar to the fifth feature which utilises the nearest neighbor rule, the algorithm doesn't like far neighbors.
The last feature is the probability of a recommended trajectory based on the transition probabilities between POI categories and a rule below for choosing a specific POI within a certain category.
Rule: choose a random POI with probability proportional to its popularity.
Similar to the sixth feature which utilises the most popular POI rule, the algorithm doesn't like non-popular POIs.
<a id='sec4.3'></a>
4.3 Search weights using coordinate-wise grid search
End of explanation
enum_seqs_test, test_seqs = enum_345_seq(test_set, poi_list)
Explanation: <a id='sec5'></a>
5. Apply search results to testing set
End of explanation
doCompute = True
test_set1 = pickle.load(open(ftest, 'rb'))
if (np.array(sorted(test_set1)) == np.array(sorted(test_set))).all() and os.path.exists(ffeatures_test):
doCompute = False
all_features_test = None
if doCompute:
#[(seqid, seqidx_in_enum_seqs_dict)]
seq_info_test = [(seqid, j) for seqid in test_set for j in range(len(enum_seqs_test[seqid]))]
# all CPUs but one are used
all_features_test = Parallel\
(n_jobs=-2)\
(delayed\
(calc_features)\
(seq_user.loc[x[0]].iloc[0], enum_seqs_test[x[0]][x[1]], poi_all, user_interest, \
log10_trans_mat, poi_dist_mat, poi_rdist_mat) \
for x in seq_info_test)
pickle.dump(all_features_test, open(ffeatures_test, 'wb'))
else:
# load features
all_features_test = pickle.load(open(ffeatures_test, 'rb'))
print(len(all_features_test))
score_indices_test = dict() # score vector index in the feature matrix
num_enum_seq_test = 0
for seqid in test_set:
num = len(enum_seqs_test[seqid])
score_indices_test[seqid] = [x for x in range(num_enum_seq_test, num_enum_seq_test + num)]
num_enum_seq_test += num
print(num_enum_seq_test)
assert(len(all_features_test) == num_enum_seq_test)
features_mat_test = np.array(all_features_test)
print(features_mat_test.shape)
weights = np.array([0.01, -0.01, 0, 0.88, -0.01, 0.01, -0.05, 0.05]) # train: 0.7993, test: 0.75040650406504061
#weights = np.array([0.32, -0.01, 0.01, 0.92, -0.02, 0.03, 0.27, 0.08])
#weights = np.array([0.02, -0.01, 0, 0.87, -0.02, 0.01, 0.37, -0.19]) # train: 0.810, test: 0.744
#weights = np.array([0.02, -0.01, 0.01, 0.92, -0.02, 0.01, 0.28, -0.19]) # train: 0.808, test: 0.744
#weights = np.array([0.05, -0.05, -0.01, 0.97, -0.02, 0.03, 0.16, 0.36]) # train: 0.805, test: 0.732
#weights = np.array([0.06, -0.05, -0.01, 0.98, -0.02, 0.04, 0.23, 0.43]) # train: 0.7996, test: 0.732
#weights = np.array([0.02, -0.01, 0, 0.44, -0.01, 0.02, 0.07, 0.01]) # init: 0.795, test: 0.740
calc_mean_F1score(test_set, test_seqs, enum_seqs_test, features_mat_test.dot(weights), score_indices_test)
Explanation: Load features from file if possible.
End of explanation |
15,639 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An end-to-end Vertex Training Pipeline Demonstration
Finally, check that you have correctly installed the packages. The KFP SDK version should be >=1.6
Step1: Then define the pipeline using the following function
Step2: Compile and run the end-to-end ML pipeline
With our full pipeline defined, it's time to compile it
Step3: Next, instantiate an API client
Step4: Next, kick off a pipeline run | Python Code:
!python3 -c "import kfp; print('KFP SDK version: {}'.format(kfp.__version__))"
import os
import json
from functools import partial
import kfp
import pprint
import yaml
from jinja2 import Template
from kfp.v2 import dsl
from kfp.v2.compiler import compiler
from kfp.v2.dsl import Dataset
from kfp.v2.google.client import AIPlatformClient
project_id='woven-rush-197905'
project_number='297370817971'
af_registry_location='asia-southeast1'
af_registry_name='mlops-vertex-kit'
components_dir='../components/'
def _load_custom_component(project_id: str,
af_registry_location: str,
af_registry_name: str,
components_dir: str,
component_name: str):
component_path = os.path.join(components_dir,
component_name,
'component.yaml.jinja')
with open(component_path, 'r') as f:
component_text = Template(f.read()).render(
project_id=project_id,
af_registry_location=af_registry_location,
af_registry_name=af_registry_name)
return kfp.components.load_component_from_text(component_text)
load_custom_component = partial(_load_custom_component,
project_id=project_id,
af_registry_location=af_registry_location,
af_registry_name=af_registry_name,
components_dir=components_dir)
preprocess_op = load_custom_component(component_name='data_preprocess')
train_op = load_custom_component(component_name='train_model')
check_metrics_op = load_custom_component(component_name='check_model_metrics')
create_endpoint_op = load_custom_component(component_name='create_endpoint')
test_endpoint_op = load_custom_component(component_name='test_endpoint')
deploy_model_op = load_custom_component(component_name='deploy_model')
monitor_model_op = load_custom_component(component_name='monitor_model')
Explanation: An end-to-end Vertex Training Pipeline Demonstration
Finally, check that you have correctly installed the packages. The KFP SDK version should be >=1.6:
End of explanation
pipeline_region='asia-southeast1'
pipeline_root='gs://vertex_pipeline_demo_root/pipeline_root'
data_region='asia-southeast1'
input_dataset_uri='bq://woven-rush-197905.vertex_pipeline_demo.banknote_authentication'
gcs_data_output_folder='gs://vertex_pipeline_demo_root/datasets/training'
training_data_schema='VWT:float;SWT:float;KWT:float;Entropy:float;Class:int'
data_pipeline_root='gs://vertex_pipeline_demo_root/compute_root'
training_container_image_uri=f'{af_registry_location}-docker.pkg.dev/{project_id}/{af_registry_name}/training:latest'
serving_container_image_uri=f'{af_registry_location}-docker.pkg.dev/{project_id}/{af_registry_name}/serving:latest'
custom_job_service_account=f'{project_number}-compute@developer.gserviceaccount.com'
training_container_image_uri,serving_container_image_uri,custom_job_service_account
train_additional_args = json.dumps({
'num_leaves_hp_param_min': 6,
'num_leaves_hp_param_max': 11,
'max_depth_hp_param_min': -1,
'max_depth_hp_param_max': 4,
'num_boost_round': 300,
'min_data_in_leaf': 5
})
train_additional_args
@dsl.pipeline(name='training-pipeline-template')
def pipeline(project_id: str,
data_region: str,
gcs_data_output_folder: str,
input_dataset_uri: str,
training_data_schema: str,
data_pipeline_root: str,
training_container_image_uri: str,
train_additional_args: str,
serving_container_image_uri: str,
custom_job_service_account: str,
hptune_region: str,
hp_config_suggestions_per_request: int,
hp_config_max_trials: int,
metrics_name: str,
metrics_threshold: float,
endpoint_machine_type: str,
endpoint_min_replica_count: int,
endpoint_max_replica_count: int,
endpoint_test_instances: str,
monitoring_user_emails: str,
monitoring_log_sample_rate: float,
monitor_interval: int,
monitoring_default_threshold: float,
monitoring_custom_skew_thresholds: str,
monitoring_custom_drift_thresholds: str,
machine_type: str = "n1-standard-8",
accelerator_count: int = 0,
accelerator_type: str = 'ACCELERATOR_TYPE_UNSPECIFIED',
vpc_network: str = "",
enable_model_monitoring: str = 'False'):
dataset_importer = kfp.v2.dsl.importer(
artifact_uri=input_dataset_uri,
artifact_class=Dataset,
reimport=False)
preprocess_task = preprocess_op(
project_id=project_id,
data_region=data_region,
gcs_output_folder=gcs_data_output_folder,
gcs_output_format="CSV",
input_dataset=dataset_importer.output)
train_task = train_op(
project_id=project_id,
data_region=data_region,
data_pipeline_root=data_pipeline_root,
input_data_schema=training_data_schema,
training_container_image_uri=training_container_image_uri,
train_additional_args=train_additional_args,
serving_container_image_uri=serving_container_image_uri,
custom_job_service_account=custom_job_service_account,
input_dataset=preprocess_task.outputs['output_dataset'],
machine_type=machine_type,
accelerator_count=accelerator_count,
accelerator_type=accelerator_type,
hptune_region=hptune_region,
hp_config_max_trials=hp_config_max_trials,
hp_config_suggestions_per_request=hp_config_suggestions_per_request,
vpc_network=vpc_network)
check_metrics_task = check_metrics_op(
metrics_name=metrics_name,
metrics_threshold=metrics_threshold,
basic_metrics=train_task.outputs['basic_metrics'])
create_endpoint_task = create_endpoint_op(
project_id=project_id,
data_region=data_region,
data_pipeline_root=data_pipeline_root,
display_name='endpoint-classification-template',
create_if_not_exists=True)
deploy_model_task = deploy_model_op(
project_id=project_id,
data_region=data_region,
data_pipeline_root=data_pipeline_root,
machine_type=endpoint_machine_type,
min_replica_count=endpoint_min_replica_count,
max_replica_count=endpoint_max_replica_count,
model=train_task.outputs['output_model'],
endpoint=create_endpoint_task.outputs['endpoint'])
test_endpoint_task = test_endpoint_op(
project_id=project_id,
data_region=data_region,
data_pipeline_root=data_pipeline_root,
endpoint=create_endpoint_task.outputs['endpoint'],
test_instances=endpoint_test_instances,
).after(deploy_model_task)
with dsl.Condition(enable_model_monitoring == 'True', name='Monitoring'):
monitor_model_task = monitor_model_op(
project_id=project_id,
data_region=data_region,
user_emails=monitoring_user_emails,
log_sample_rate=monitoring_log_sample_rate,
monitor_interval=monitor_interval,
default_threshold=monitoring_default_threshold,
custom_skew_thresholds=monitoring_custom_skew_thresholds,
custom_drift_thresholds=monitoring_custom_drift_thresholds,
endpoint=create_endpoint_task.outputs['endpoint'],
instance_schema=train_task.outputs['instance_schema'],
dataset=preprocess_task.outputs['output_dataset'])
monitor_model_task.after(deploy_model_task)
Explanation: Then define the pipeline using the following function:
End of explanation
compiler.Compiler().compile(
pipeline_func=pipeline,
package_path="training_pipeline_job.json"
)
Explanation: Compile and run the end-to-end ML pipeline
With our full pipeline defined, it's time to compile it:
End of explanation
api_client = AIPlatformClient(
project_id=project_id,
region=pipeline_region)
Explanation: Next, instantiate an API client:
End of explanation
test_instances = json.dumps([
{"VWT":3.6216,"SWT":8.6661,"KWT":-2.8073,"Entropy":-0.44699,"Class":"0"},
{"VWT":4.5459,"SWT":8.1674,"KWT":-2.4586,"Entropy":-1.4621,"Class":"0"},
{"VWT":3.866,"SWT":-2.6383,"KWT":1.9242,"Entropy":0.10645,"Class":"0"},
{"VWT":-3.7503,"SWT":-13.4586,"KWT":17.5932,"Entropy":-2.7771,"Class":"1"},
{"VWT":-3.5637,"SWT":-8.3827,"KWT":12.393,"Entropy":-1.2823,"Class":"1"},
{"VWT":-2.5419,"SWT":-0.65804,"KWT":2.6842,"Entropy":1.1952,"Class":"1"}
])
test_instances
pipeline_params = {
'project_id': project_id,
'data_region': data_region,
'gcs_data_output_folder': gcs_data_output_folder,
'input_dataset_uri': input_dataset_uri,
'training_data_schema': training_data_schema,
'data_pipeline_root': data_pipeline_root,
'training_container_image_uri': training_container_image_uri,
'train_additional_args': train_additional_args,
'serving_container_image_uri': serving_container_image_uri,
'custom_job_service_account': custom_job_service_account,
'hptune_region':"asia-east1",
'hp_config_suggestions_per_request': 5,
'hp_config_max_trials': 30,
'metrics_name': 'au_prc',
'metrics_threshold': 0.4,
'endpoint_machine_type': 'n1-standard-4',
'endpoint_min_replica_count': 1,
'endpoint_max_replica_count': 1,
'endpoint_test_instances': test_instances,
'monitoring_user_emails': 'luoshixin@google.com',
'monitoring_log_sample_rate': 0.8,
'monitor_interval': 3600,
'monitoring_default_threshold': 0.3,
'monitoring_custom_skew_thresholds': 'VWT:.5,SWT:.2,KWT:.7,Entropy:.4',
'monitoring_custom_drift_thresholds': 'VWT:.5,SWT:.2,KWT:.7,Entropy:.4',
'enable_model_monitoring': 'True'
}
response = api_client.create_run_from_job_spec(
job_spec_path="training_pipeline_job.json",
pipeline_root=pipeline_root,
parameter_values=pipeline_params,
enable_caching=False)
Explanation: Next, kick off a pipeline run:
End of explanation |
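The returned response is a plain dict describing the created run; the pprint import above can be used to inspect it (a small usage sketch, assuming nothing beyond what the call already returned):
pprint.pprint(response)  # inspect the run metadata returned by create_run_from_job_spec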
15,640 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python and other languages
Interacting with other programs through API calls.
The whole is greater than its parts
Step1: However in the example above there is no real communication between the two languages. Let us change that with another small example, in which we send a numpy matrix to R. Don't worry at this point about what numpy is, we will learn it in the scientific computing chapter.
Step2: All is well above, except the display happens in an external R GUI frame. It would be nice to have an inline plot, matplotlib style. Well guess what, you are in luck because IPython is also having native support for R. This is the recommended way in which Python and R can interact in the IPython notebook | Python Code:
import readline
import rpy2.robjects as robjects
robjects.r('''
source("http://www.bioconductor.org/biocLite.R")
biocLite("ALL")
library("ALL")
data("ALL")
#install.packages("gplots")
eset <- ALL[, ALL$mol.biol %in% c("BCR/ABL", "ALL1/AF4")]
library("limma")
f <- factor(as.character(eset$mol.biol))
design <- model.matrix(~f)
fit <- eBayes(lmFit(eset,design))
selected <- p.adjust(fit$p.value[, 2]) <0.05
esetSel <- eset [selected, ]
color.map <- function(mol.biol) { if (mol.biol=="ALL1/AF4") "#FF0000" else "#0000FF" }
patientcolors <- unlist(lapply(esetSel$mol.bio, color.map))
#heatmap(exprs(esetSel), col=topo.colors(100), ColSideColors=patientcolors)
library("gplots")
heatmap.2(exprs(esetSel), col=redgreen(75), scale="row", ColSideColors=patientcolors,
key=TRUE, symkey=FALSE, density.info="none", trace="none", cexRow=0.5)
''')
Explanation: Python and other languages
Interacting with other programs through API calls.
The whole is greater than its parts: syncretism.
[Python and C]: make your own API.
[Python and R]: direct piping through Jupyter and serialization
[Python and Julia
The "rest" can be an external program, a remote program or a library made for a different language. To a certain degree all languages became good at accessing external resources but Python excels at it. We learned how to access remote APIs. We will only learn here how to deal with C and R.
Python and C
There are ways to extend Python with C and C++, but it is cumbersome. There are different interpreters for Python, most popular being CPython which is the standard one and PyPy which is a just-in-time compiler and interpreter having speeds that match .js and Java. In principle the extension code needs to be re-written in order to run on different interpreters.
Here is an example extension C code, written for the CPython interpreter. When compiled the spam function is callable from Python, so Python was extended with a new module.function():
```
static PyObject *
spam_system(PyObject *self, PyObject *args)
{
const char *command;
int sts;
if (!PyArg_ParseTuple(args, "s", &command))
return NULL;
sts = system(command);
return Py_BuildValue("i", sts);
}
```
Enter Cython.
Cython is a static compiler that makes it possible to combine C with Python. It is heavily promoted and used by the SciPy stack and it can run on PyPy too. The following code is written in Cython, and as you can see it differs in one substantial way from Python: variables are statically declared. Another major difference is that this code does not run on an interpreter; instead it is compiled into C and assembled into machine code. A similar project exists for Java, called Jython.
def primes(int kmax): # The argument will be converted to int or raise a TypeError.
cdef int n, k, i # These variables are declared with C types.
cdef int p[1000] # Another C type
result = [] # A Python type
if kmax > 1000:
kmax = 1000
k = 0
n = 2
while k < kmax:
i = 0
while i < k and n % p[i] != 0:
i = i + 1
if i == k:
p[k] = n
k = k + 1
result.append(n)
n = n + 1
return result
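To actually compile and use it, one common route (a sketch; the file name primes.pyx is my assumption) is a tiny setup.py driven by cythonize, after which the compiled module imports like any other:
```
# setup.py
from setuptools import setup
from Cython.Build import cythonize

setup(ext_modules=cythonize("primes.pyx"))
```
Build it in place and call the function from regular Python:
```
$ python setup.py build_ext --inplace
>>> import primes
>>> primes.primes(5)
[2, 3, 5, 7, 11]
```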
SWIG
While Cython is cool, it does require you to write new code. If you have a C/C++ codebase and you want it in Python, perhaps the best option is SWIG. This is a multilanguage library, one can extend Tcl, Perl, Java and C# with it. Let's say you have the following pure C code containing a number of different functions:
```
/ File : example.c /
#include <time.h>
double My_variable = 3.0;
int fact(int n) {
if (n <= 1) return 1;
else return n*fact(n-1);
}
int my_mod(int x, int y) {
return (x%y);
}
char *get_time()
{
time_t ltime;
time(<ime);
return ctime(<ime);
}
```
All you have to do is write an interface of the code to SWIG:
```
/ example.i /
%module example
%{
/ Put header files here or function declarations like below /
extern double My_variable;
extern int fact(int n);
extern int my_mod(int x, int y);
extern char *get_time();
%}
extern double My_variable;
extern int fact(int n);
extern int my_mod(int x, int y);
extern char *get_time();
```
Run a sequence of commands that compiles and links the code with special SWIG signatures. This is slightly different depending on the OS, what you see is Unix/Linux modus operandi.
swig -python example.i
gcc -c example.c example_wrap.c -I/usr/local/include/python2.7
ld -shared example.o example_wrap.o -o _example.so
On Python the result is a module like any other:
```
import example
example.fact(5)
120
example.my_mod(7,3)
1
example.get_time()
'Sun Feb 11 23:01:07 1996'
```
Python and R
While some Python and R programmers don't talk to each other, the languages do. It is possible to call Python from R (rPython) and R from Python (rpy). It works better to call R from Python; in fact the library is much more developed in this direction.
It requires a special module called rpy2. We will make use of R again in the 'omics chapters. For now let us use this example slightly modified for rpy2.
You can see the whole output from R and you can also interact with R environment through the execution.
But, how to get the required rpy2 module?
Google 'conda install rpy2' and feel lucky, the page at https://anaconda.org/r/rpy2 says:
conda install rpy2
This failed on my Ubuntu 64 bits system, it seems that Anaconda has problems maintaining it on the site. So I went to rpy2 homepage:
http://rpy2.bitbucket.org/
.. and I installed rpy2 with pip (Anaconda installs the pip package manager)
pip install rpy2
.. Yea, so this took me one hour last night to fix, but the problem was only affecting Linux and Anaconda. Only use import readline if you have Linux.
End of explanation
import readline
import numpy as np
from rpy2.robjects import r
import rpy2.robjects.numpy2ri
rpy2.robjects.numpy2ri.activate()
data = np.random.random((10,10))
r.heatmap(data)
Explanation: However in the example above there is no real communication between the two languages. Let us change that with another small example, in which we send a numpy matrix to R. Don't worry at this point about what numpy is, we will learn it in the scientific computing chapter.
End of explanation
#%load_ext rpy2.ipython
#from rpy2.robjects import r
#import rpy2.robjects.numpy2ri
#rpy2.robjects.numpy2ri.activate()
import numpy as np
data = np.random.random((10,10))
%Rpush data
%R heatmap(data)
Explanation: All is well above, except the display happens in an external R GUI frame. It would be nice to have an inline plot, matplotlib style. Well guess what, you are in luck, because IPython also has native support for R. This is the recommended way in which Python and R can interact in the IPython notebook:
End of explanation |
15,641 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dynamical X-ray Scattering
In this example static and transient X-ray simulations are carried out employing the dynamical X-ray scattering formalism.
Setup
Do all necessary imports and settings.
Step1: Structure
Refer to the structure-example for more details.
Step2: Heat
Refer to the heat-example for more details.
Step3: Numerical Phonons
Refer to the phonons-example for more details.
Step4: Initialize dynamical X-ray simulation
The XrayDyn class requires a Structure object and a boolean force_recalc in order overwrite previous simulation results.
These results are saved in the cache_dir when save_data is enabled.
Printing simulation messages can be en-/disabled using disp_messages and progress bars can using the boolean switch progress_bar.
Step5: Homogeneous X-ray scattering
For the case of homogeneously strained samples, the dynamical X-ray scattering simulations can be greatly simplified, which saves a lot of computational time.
$q_z$-scan
The XrayDyn object requires an energy and scattering vector qz to run the simulations.
Both parameters can be arrays and the resulting reflectivity has a first dimension for the photon energy and a second for the scattering vector.
Step6: Due to the very thick static substrate in the structure and the very small step width in qz, even the Darwin width of the substrate Bragg peak is nicely resolvable.
Step7: Post-Processing
All results can be convoluted with an arbitrary function handle, which e.g. mimics the instrumental resolution.
Step8: Energy-scan
Energy scans rely on experimental atomic scattering factors that also include energy ranges around relevant resonances.
The warning message can be safely ignored, as it results from the former q_z range, which cannot be accessed with the new energy range.
Step9: Inhomogeneous X-ray scattering
The inhomogeneous_reflectivity() method allows one to calculate the transient X-ray reflectivity according to a strain_map.
The actual strains per layer will be discretized and limited using the strain_vectors in order to save computational time.
Step10: The results can be convoluted again to mimic real experimental resolution
Step11: Parallel inhomogeneous X-ray scattering
You need to install the udkm1Dsim with the parallel option which essentially add the Dask package to the requirements | Python Code:
import udkm1Dsim as ud
u = ud.u # import the pint unit registry from udkm1Dsim
import scipy.constants as constants
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
u.setup_matplotlib() # use matplotlib with pint units
Explanation: Dynamical X-ray Scattering
In this example static and transient X-ray simulations are carried out employing the dynamical X-ray scattering formalism.
Setup
Do all necessary imports and settings.
End of explanation
O = ud.Atom('O')
Ti = ud.Atom('Ti')
Sr = ud.Atom('Sr')
Ru = ud.Atom('Ru')
Pb = ud.Atom('Pb')
Zr = ud.Atom('Zr')
# c-axis lattice constants of the two layers
c_STO_sub = 3.905*u.angstrom
c_SRO = 3.94897*u.angstrom
# sound velocities [nm/ps] of the two layers
sv_SRO = 6.312*u.nm/u.ps
sv_STO = 7.800*u.nm/u.ps
# SRO layer
prop_SRO = {}
prop_SRO['a_axis'] = c_STO_sub # aAxis
prop_SRO['b_axis'] = c_STO_sub # bAxis
prop_SRO['deb_Wal_Fac'] = 0 # Debye-Waller factor
prop_SRO['sound_vel'] = sv_SRO # sound velocity
prop_SRO['opt_ref_index'] = 2.44+4.32j
prop_SRO['therm_cond'] = 5.72*u.W/(u.m*u.K) # heat conductivity
prop_SRO['lin_therm_exp'] = 1.03e-5 # linear thermal expansion
prop_SRO['heat_capacity'] = '455.2 + 0.112*T - 2.1935e6/T**2' # heat capacity [J/kg K]
SRO = ud.UnitCell('SRO', 'Strontium Ruthenate', c_SRO, **prop_SRO)
SRO.add_atom(O, 0)
SRO.add_atom(Sr, 0)
SRO.add_atom(O, 0.5)
SRO.add_atom(O, 0.5)
SRO.add_atom(Ru, 0.5)
# STO substrate
prop_STO_sub = {}
prop_STO_sub['a_axis'] = c_STO_sub # aAxis
prop_STO_sub['b_axis'] = c_STO_sub # bAxis
prop_STO_sub['deb_Wal_Fac'] = 0 # Debye-Waller factor
prop_STO_sub['sound_vel'] = sv_STO # sound velocity
prop_STO_sub['opt_ref_index'] = 2.1+0j
prop_STO_sub['therm_cond'] = 12*u.W/(u.m*u.K) # heat conductivity
prop_STO_sub['lin_therm_exp'] = 1e-5 # linear thermal expansion
prop_STO_sub['heat_capacity'] = '733.73 + 0.0248*T - 6.531e6/T**2' # heat capacity [J/kg K]
STO_sub = ud.UnitCell('STOsub', 'Strontium Titanate Substrate', c_STO_sub, **prop_STO_sub)
STO_sub.add_atom(O, 0)
STO_sub.add_atom(Sr, 0)
STO_sub.add_atom(O, 0.5)
STO_sub.add_atom(O, 0.5)
STO_sub.add_atom(Ti, 0.5)
S = ud.Structure('Single Layer')
S.add_sub_structure(SRO, 200) # add 200 layers of SRO to sample
S.add_sub_structure(STO_sub, 1000) # add 1000 layers of dynamic STO substrate
substrate = ud.Structure('Static Substrate')
substrate.add_sub_structure(STO_sub, 1000000) # add 1000000 layers of static STO substrate
S.add_substrate(substrate)
Explanation: Structure
Refer to the structure-example for more details.
End of explanation
h = ud.Heat(S, True)
h.save_data = False
h.disp_messages = True
h.excitation = {'fluence': [35]*u.mJ/u.cm**2,
'delay_pump': [0]*u.ps,
'pulse_width': [0]*u.ps,
'multilayer_absorption': True,
'wavelength': 800*u.nm,
'theta': 45*u.deg}
# temporal and spatial grid
delays = np.r_[-5:40:0.1]*u.ps
_, _, distances = S.get_distances_of_layers()
temp_map, delta_temp_map = h.get_temp_map(delays, 300*u.K)
plt.figure(figsize=[6, 8])
plt.subplot(2, 1, 1)
plt.plot(distances.to('nm').magnitude, temp_map[101, :])
plt.xlim([0, distances.to('nm').magnitude[-1]])
plt.xlabel('Distance [nm]')
plt.ylabel('Temperature [K]')
plt.title('Temperature Profile')
plt.subplot(2, 1, 2)
plt.pcolormesh(distances.to('nm').magnitude, delays.to('ps').magnitude, temp_map, shading='auto')
plt.colorbar()
plt.xlabel('Distance [nm]')
plt.ylabel('Delay [ps]')
plt.title('Temperature Map')
plt.tight_layout()
plt.show()
Explanation: Heat
Refer to the heat-example for more details.
End of explanation
p = ud.PhononNum(S, True)
p.save_data = False
p.disp_messages = True
strain_map = p.get_strain_map(delays, temp_map, delta_temp_map)
plt.figure(figsize=[6, 8])
plt.subplot(2, 1, 1)
plt.plot(distances.to('nm').magnitude, strain_map[130, :],
label=np.round(delays[130]))
plt.plot(distances.to('nm').magnitude, strain_map[350, :],
label=np.round(delays[350]))
plt.xlim([0, distances.to('nm').magnitude[-1]])
plt.xlabel('Distance [nm]')
plt.ylabel('Strain')
plt.legend()
plt.title('Strain Profile')
plt.subplot(2, 1, 2)
plt.pcolormesh(distances.to('nm').magnitude, delays.to('ps').magnitude,
strain_map, cmap='RdBu',
vmin=-np.max(strain_map), vmax=np.max(strain_map), shading='auto')
plt.colorbar()
plt.xlabel('Distance [nm]')
plt.ylabel('Delay [ps]')
plt.title('Strain Map')
plt.tight_layout()
plt.show()
Explanation: Numerical Phonons
Refer to the phonons-example for more details.
End of explanation
dyn = ud.XrayDyn(S, True)
dyn.disp_messages = True
dyn.save_data = False
Explanation: Initialize dynamical X-ray simulation
The XrayDyn class requires a Structure object and a boolean force_recalc in order to overwrite previous simulation results.
These results are saved in the cache_dir when save_data is enabled.
Printing simulation messages can be en-/disabled using disp_messages, and progress bars can be switched on and off using the boolean progress_bar.
End of explanation
dyn.energy = np.r_[5000, 8047]*u.eV # set two photon energies
dyn.qz = np.r_[3.1:3.3:0.00001]/u.angstrom # qz range
R_hom, A = dyn.homogeneous_reflectivity() # this is the actual calculation
plt.figure()
plt.semilogy(dyn.qz[0, :], R_hom[0, :], label='{}'.format(dyn.energy[0]), alpha=0.5)
plt.semilogy(dyn.qz[1, :], R_hom[1, :], label='{}'.format(dyn.energy[1]), alpha=0.5)
plt.ylabel('Reflectivity')
plt.xlabel('$q_z$ [nm$^{-1}$]')
plt.legend()
plt.show()
Explanation: Homogeneous X-ray scattering
For the case of homogeneously strained samples, the dynamical X-ray scattering simulations can be greatly simplified, which saves a lot of computational time.
$q_z$-scan
The XrayDyn object requires an energy and scattering vector qz to run the simulations.
Both parameters can be arrays and the resulting reflectivity has a first dimension for the photon energy and a second for the scattering vector.
End of explanation
plt.figure()
plt.semilogy(dyn.qz[0, :], R_hom[0, :], label='{}'.format(dyn.energy[0]), alpha=0.5)
plt.semilogy(dyn.qz[1, :], R_hom[1, :], label='{}'.format(dyn.energy[1]), alpha=0.5)
plt.ylabel('Reflectivity')
plt.xlabel('$q_z$ [nm$^{-1}$]')
plt.xlim(32.17, 32.195)
plt.ylim(1e-3, 1)
plt.legend()
plt.title('Darwin Width')
plt.show()
Explanation: Due to the very thick static substrate in the structure and the very small step width in qz, even the Darwin width of the substrate Bragg peak is nicely resolvable.
End of explanation
FWHM = 0.004/1e-10 # Angstrom
sigma = FWHM/2.3548
handle = lambda x: np.exp(-((x)/sigma)**2/2)
y_conv = dyn.conv_with_function(R_hom[0, :], dyn._qz[0, :], handle)
plt.figure()
plt.semilogy(dyn.qz[0, :], R_hom[0, :], label='{}'.format(dyn.energy[0]))
plt.semilogy(dyn.qz[0, :], y_conv, label='{} convoluted'.format(dyn.energy[0]))
plt.ylabel('Reflectivity')
plt.xlabel('$q_z$ [nm$^{-1}$]')
plt.legend()
plt.show()
Explanation: Post-Processing
All results can be convoluted with an arbitrary function handle, which e.g. mimics the instrumental resolution.
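Any callable of the scattering-vector offset works as the resolution function. As a sketch that is not part of the original notebook, a Lorentzian of the same FWHM could be swapped in for the Gaussian handle defined above:
```
FWHM = 0.004/1e-10                      # same width as in the Gaussian example
gamma = FWHM/2
lorentzian = lambda x: 1/(1 + (x/gamma)**2)
y_conv_lor = dyn.conv_with_function(R_hom[0, :], dyn._qz[0, :], lorentzian)
```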
End of explanation
dyn.energy = np.r_[2000:4000]*u.eV # set the energy range
dyn.qz = np.r_[2]/u.angstrom # qz range
R_hom, A = dyn.homogeneous_reflectivity() # this is the actual calculation
plt.figure()
plt.plot(dyn.energy, R_hom[:, 0])
plt.ylabel('Reflectivity')
plt.xlabel('Energy [eV]')
plt.show()
Explanation: Energy-scan
Energy scans rely on experimental atomic scattering factors that also include energy ranges around relevant resonances.
The warning message can be safely ignored, as it results from the former q_z range, which cannot be accessed with the new energy range.
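Since qz is now a single value, the result is effectively a one-dimensional energy curve. A quick shape check (a small sketch, assuming the energy axis stays first as in the plots) makes the indexing convention explicit:
```
print(R_hom.shape)   # (#energies, #qz) -> here (2000, 1)
```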
End of explanation
dyn.energy = np.r_[8047]*u.eV # set two photon energies
dyn.qz = np.r_[3.1:3.3:0.001]/u.angstrom # qz range
strain_vectors = p.get_reduced_strains_per_unique_layer(strain_map)
R_seq = dyn.inhomogeneous_reflectivity(strain_map, strain_vectors, calc_type='sequential')
plt.figure()
plt.pcolormesh(dyn.qz[0, :].to('1/nm').magnitude, delays.to('ps').magnitude, np.log10(R_seq[:, 0, :]), shading='auto')
plt.title('Dynamical X-ray')
plt.ylabel('Delay [ps]')
plt.xlabel('$q_z$ [nm$^{-1}$]')
plt.show()
Explanation: Inhomogeneous X-ray scattering
The inhomogeneous_reflectivity() method allows one to calculate the transient X-ray reflectivity according to a strain_map.
The actual strains per layer will be discretized and limited using the strain_vectors in order to save computational time.
End of explanation
R_seq_conv = np.zeros_like(R_seq)
for i, delay in enumerate(delays):
R_seq_conv[i, 0, :] = dyn.conv_with_function(R_seq[i, 0, :], dyn._qz[0, :], handle)
plt.figure(figsize=[6, 8])
plt.subplot(2, 1, 1)
plt.semilogy(dyn.qz[0, :].to('1/nm'), R_seq_conv[0, 0, :], label=np.round(delays[0]))
plt.semilogy(dyn.qz[0, :].to('1/nm'), R_seq_conv[100, 0, :], label=np.round(delays[100]))
plt.semilogy(dyn.qz[0, :].to('1/nm'), R_seq_conv[-1, 0, :], label=np.round(delays[-1]))
plt.xlabel('$q_z$ [nm$^{-1}$]')
plt.ylabel('Reflectivity')
plt.legend()
plt.title('Dynamical X-ray Convoluted')
plt.subplot(2, 1, 2)
plt.pcolormesh(dyn.qz[0, :].to('1/nm').magnitude, delays.to('ps').magnitude, np.log10(R_seq_conv[:, 0, :]), shading='auto')
plt.ylabel('Delay [ps]')
plt.xlabel('$q_z$ [nm$^{-1}$]')
plt.tight_layout()
plt.show()
Explanation: The results can be convoluted again to mimic real experimental resolution:
End of explanation
try:
    from dask.distributed import Client
except ImportError:
    Client = None
    print('Dask is not installed, skipping the parallel calculation.')

if Client is not None:
    client = Client()
    R_par = dyn.inhomogeneous_reflectivity(strain_map, strain_vectors,
                                           calc_type='parallel', dask_client=client)
    client.close()

    plt.figure()
    plt.pcolormesh(dyn.qz[0, :].to('1/nm').magnitude, delays.to('ps').magnitude,
                   np.log10(R_par[:, 0, :]), shading='auto')
    plt.title('Parallel Dynamical X-ray')
    plt.ylabel('Delay [ps]')
    plt.xlabel('$q_z$ [nm$^{-1}$]')
    plt.show()
Explanation: Parallel inhomogeneous X-ray scattering
You need to install udkm1Dsim with the parallel option, which essentially adds the Dask package to the requirements:
```
pip install udkm1Dsim[parallel]
```
You can also install/add Dask manually, e.g. via pip:
```
pip install dask
```
Please refer to the Dask documentation for more details on parallel computing in Python.
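The bare Client() call above starts a local cluster with default settings. If you want to size it explicitly or attach to an already running scheduler, the usual Dask options apply (a sketch, adjust to your own setup):
```
from dask.distributed import Client

client = Client(n_workers=4, threads_per_worker=2)   # explicit local cluster
# client = Client('tcp://127.0.0.1:8786')            # or connect to an existing scheduler
```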
End of explanation |
15,642 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dataset split of AotM-2011/30Music playlists for playlist generation
When a user in test set is unknown previously (cold user)
Step1: Load playlists
Load playlists.
Step2: check duplicated songs in the same playlist.
Step3: Load song features
Load song_id --> feature array mapping
Step4: Load genres
Song genres from MSD Allmusic Genre Dataset (Top MAGD) and tagtraum.
Step5: Song collection
Step6: Randomise the order of songs with the same age.
Step7: Check if all songs have genre info.
Step8: Song popularity.
Step12: Create song-playlist matrix
Songs as rows, playlists as columns.
Step13: Split playlists
Split playlists such that
- all playlists of selected users are in test set.
- every song in test set is also in training set.
Step14: Make sure every song in the test set is also in the training set.
Step15: Sanity check that every song in the test set is also in the training set.
Step16: Learn artist features
Step17: Hold a subset of playlists, use all songs
Step18: Feature normalisation.
Step19: Playlists of the same user form a clique.
Cliques in train set. | Python Code:
%matplotlib inline
import os
import sys
import gzip
import numpy as np
import pickle as pkl
from scipy.sparse import lil_matrix, issparse, hstack, vstack
from collections import Counter
import gensim
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm import tqdm
np_settings0 = np.seterr(all='raise')
RAND_SEED = 0
n_feature_artist = 30
plt.style.use('seaborn')
datasets = ['aotm2011', '30music']
ffeature = 'data/msd/song2feature.pkl.gz'
fgenre = 'data/msd/song2genre.pkl.gz'
fsong2artist = 'data/msd/song2artist.pkl.gz'
audio_feature_indices = [20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 185, 186, 187, 198, 199, 200, 201]
test_user_ratio = 0.3
dix = 1
dataset_name = datasets[dix]
data_dir = 'data/%s' % dataset_name
print(dataset_name)
Explanation: Dataset split of AotM-2011/30Music playlists for playlist generation
When a user in test set is unknown previously (cold user)
End of explanation
fplaylist = os.path.join(data_dir, '%s-playlist.pkl.gz' % dataset_name)
_all_playlists = pkl.load(gzip.open(fplaylist, 'rb'))
# _all_playlists[0]
all_playlists = []
if type(_all_playlists[0][1]) == tuple:
for pl, u in _all_playlists:
user = '%s_%s' % (u[0], u[1]) # user string
all_playlists.append((pl, user))
else:
all_playlists = _all_playlists
# user_playlists = dict()
# for pl, u in all_playlists:
# try:
# user_playlists[u].append(pl)
# except KeyError:
# user_playlists[u] = [pl]
# all_playlists = []
# for u in user_playlists:
# if len(user_playlists[u]) > 4:
# all_playlists += [(pl, u) for pl in user_playlists[u]]
all_users = sorted(set({user for _, user in all_playlists}))
print('#user : {:,}'.format(len(all_users)))
print('#playlist: {:,}'.format(len(all_playlists)))
pl_lengths = [len(pl) for pl, _ in all_playlists]
plt.hist(pl_lengths, bins=100)
print('Average playlist length: %.1f' % np.mean(pl_lengths))
Explanation: Load playlists
Load playlists.
End of explanation
print('{:,} | {:,}'.format(np.sum(pl_lengths), np.sum([len(set(pl)) for pl, _ in all_playlists])))
Explanation: check duplicated songs in the same playlist.
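The two totals above only tell us whether duplicates exist at all. A small follow-up sketch (not in the original notebook) counts how many playlists actually contain a repeated song:
```
n_dup = sum(1 for pl, _ in all_playlists if len(pl) != len(set(pl)))
print('#playlists containing duplicate songs: {:,}'.format(n_dup))
```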
End of explanation
_song2feature = pkl.load(gzip.open(ffeature, 'rb'))
song2feature = dict()
for sid in sorted(_song2feature):
song2feature[sid] = _song2feature[sid][audio_feature_indices]
Explanation: Load song features
Load song_id --> feature array mapping: map a song to the audio features of one of its corresponding tracks in MSD.
End of explanation
song2genre = pkl.load(gzip.open(fgenre, 'rb'))
Explanation: Load genres
Song genres from MSD Allmusic Genre Dataset (Top MAGD) and tagtraum.
End of explanation
_all_songs = sorted([(sid, int(song2feature[sid][-1])) for sid in {s for pl, _ in all_playlists for s in pl}],
key=lambda x: (x[1], x[0]))
print('{:,}'.format(len(_all_songs)))
Explanation: Song collection
End of explanation
song_age_dict = dict()
for sid, age in _all_songs:
age = int(age)
try:
song_age_dict[age].append(sid)
except KeyError:
song_age_dict[age] = [sid]
all_songs = []
np.random.seed(RAND_SEED)
for age in sorted(song_age_dict.keys()):
all_songs += [(sid, age) for sid in np.random.permutation(song_age_dict[age])]
pkl.dump(all_songs, gzip.open(os.path.join(data_dir, 'setting4/all_songs.pkl.gz'), 'wb'))
Explanation: Randomise the order of songs with the same age.
End of explanation
print('#songs missing genre: {:,}'.format(len(all_songs) - np.sum([sid in song2genre for (sid, _) in all_songs])))
Explanation: Check if all songs have genre info.
End of explanation
song2index = {sid: ix for ix, (sid, _) in enumerate(all_songs)}
song_pl_mat = lil_matrix((len(all_songs), len(all_playlists)), dtype=np.int8)
for j in range(len(all_playlists)):
pl = all_playlists[j][0]
ind = [song2index[sid] for sid in pl]
song_pl_mat[ind, j] = 1
song_pop = song_pl_mat.tocsc().sum(axis=1)
max_pop = np.max(song_pop)
max_pop
song2pop = {sid: song_pop[song2index[sid], 0] for (sid, _) in all_songs}
pkl.dump(song2pop, gzip.open(os.path.join(data_dir, 'setting4/song2pop.pkl.gz'), 'wb'))
Explanation: Song popularity.
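song2pop maps each song ID to the number of playlists it appears in, so a quick peek at the head of the popularity ranking (a sketch) looks like this:
```
top10 = sorted(song2pop.items(), key=lambda kv: kv[1], reverse=True)[:10]
for sid, pop in top10:
    print(sid, pop)
```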
End of explanation
def gen_dataset(playlists, song2feature, song2genre, song2artist, artist2vec,
train_song_set, dev_song_set=[], test_song_set=[], song2pop_train=None):
    """Create labelled dataset: rows are songs, columns are users.

    Input:
        - playlists: a set of playlists
        - train_song_set: a list of songIDs in training set
        - dev_song_set: a list of songIDs in dev set
        - test_song_set: a list of songIDs in test set
        - song2feature: dictionary that maps songIDs to features from MSD
        - song2genre: dictionary that maps songIDs to genre
        - song2pop_train: a dictionary that maps songIDs to its popularity

    Output:
        - (Feature, Label) pair (X, Y)
          X: #songs by #features
          Y: #songs by #users
    """
song_set = train_song_set + dev_song_set + test_song_set
N = len(song_set)
K = len(playlists)
genre_set = sorted({v for v in song2genre.values()})
genre2index = {genre: ix for ix, genre in enumerate(genre_set)}
def onehot_genre(songID):
        """One-hot encoding of genres.

        Data imputation:
            - mean imputation (default)
            - one extra entry for songs without genre info
            - sampling from the distribution of genre popularity
        """
num = len(genre_set) # + 1
vec = np.zeros(num, dtype=np.float)
if songID in song2genre:
genre_ix = genre2index[song2genre[songID]]
vec[genre_ix] = 1
else:
vec[:] = np.nan
#vec[-1] = 1
return vec
def song_artist_feature(songID):
        """Return the artist feature for a given song."""
if songID in song2artist:
aid = song2artist[songID]
return artist2vec[aid]
else:
return artist2vec['$UNK$']
X = np.array([np.concatenate([song2feature[sid], song_artist_feature(sid), onehot_genre(sid)], axis=-1) \
for sid in song_set])
Y = lil_matrix((N, K), dtype=np.bool)
song2index = {sid: ix for ix, sid in enumerate(song_set)}
for k in range(K):
pl = playlists[k]
indices = [song2index[sid] for sid in pl if sid in song2index]
Y[indices, k] = True
# genre imputation
genre_ix_start = -len(genre_set)
genre_nan = np.isnan(X[:, genre_ix_start:])
genre_mean = np.nansum(X[:, genre_ix_start:], axis=0) / (X.shape[0] - np.sum(genre_nan, axis=0))
#print(np.nansum(X[:, genre_ix_start:], axis=0))
#print(genre_set)
#print(genre_mean)
for j in range(len(genre_set)):
X[genre_nan[:,j], j+genre_ix_start] = genre_mean[j]
# normalise the sum of all genres per song to 1
# X[:, -len(genre_set):] /= X[:, -len(genre_set):].sum(axis=1).reshape(-1, 1)
# NOTE: this is not necessary, as the imputed values are guaranteed to be normalised (sum to 1)
# due to the above method to compute mean genres.
# the log of song popularity
if song2pop_train is not None:
# for sid in song_set:
# assert sid in song2pop_train # trust the input
logsongpop = np.log2([song2pop_train[sid]+1 for sid in song_set]) # deal with 0 popularity
X = np.hstack([X, logsongpop.reshape(-1, 1)])
#return X, Y
Y = Y.tocsr()
train_ix = [song2index[sid] for sid in train_song_set]
X_train = X[train_ix, :]
Y_train = Y[train_ix, :]
dev_ix = [song2index[sid] for sid in dev_song_set]
X_dev = X[dev_ix, :]
Y_dev = Y[dev_ix, :]
test_ix = [song2index[sid] for sid in test_song_set]
X_test = X[test_ix, :]
Y_test = Y[test_ix, :]
if len(dev_song_set) > 0:
if len(test_song_set) > 0:
return X_train, Y_train.tocsc(), X_dev, Y_dev.tocsc(), X_test, Y_test.tocsc()
else:
return X_train, Y_train.tocsc(), X_dev, Y_dev.tocsc()
else:
if len(test_song_set) > 0:
return X_train, Y_train.tocsc(), X_test, Y_test.tocsc()
else:
return X_train, Y_train.tocsc()
Explanation: Create song-playlist matrix
Songs as rows, playlists as columns.
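The matrix is extremely sparse; a one-line sketch of its density makes that concrete:
```
density = song_pl_mat.nnz / (song_pl_mat.shape[0] * song_pl_mat.shape[1])
print('non-zeros: {:,}  density: {:.4%}'.format(song_pl_mat.nnz, density))
```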
End of explanation
user_playlists = dict()
for j in range(len(all_playlists)):
u = all_playlists[j][1]
try:
user_playlists[u].append(j)
except KeyError:
user_playlists[u] = [j]
# sanity check
npl_all = np.sum([len(user_playlists[u]) for u in user_playlists])
print('{:30s} {:,}'.format('#users:', len(user_playlists)))
print('{:30s} {:,}'.format('#playlists:', npl_all))
print('{:30s} {:.2f}'.format('Average #playlists per user:', npl_all / len(user_playlists)))
Explanation: Split playlists
Split playlists such that
- all playlists of selected users are in test set.
- every song in test set is also in training set.
End of explanation
user_songcnt = dict()
for u in all_users:
songcnt_u = np.zeros(len(all_songs), dtype=np.int32)
for pl, _ in [all_playlists[j] for j in user_playlists[u]]:
for sid in pl:
songcnt_u[song2index[sid]] += 1
user_songcnt[u] = songcnt_u
all_songcnt = np.zeros(len(all_songs), dtype=np.int32)
for u in all_users:
all_songcnt += user_songcnt[u]
candidate_users = set()
other_users = set()
for u in all_users:
_songcnt = all_songcnt - user_songcnt[u]
if np.all(_songcnt > 0):
candidate_users.add(u)
else:
other_users.add(u)
print(len(candidate_users), len(other_users), len(all_users))
npl_candidate = sorted([len(user_playlists[u]) for u in candidate_users])
print('%d, %.1f, %d, %d' % (min(npl_candidate), np.mean(npl_candidate), max(npl_candidate), np.sum(npl_candidate)))
train_users = set(all_users)
test_users = set()
train_songcnt = all_songcnt.copy()
np.random.seed(RAND_SEED)
for u in np.random.permutation(sorted(candidate_users)):
_songcnt = train_songcnt - user_songcnt[u]
if np.all(_songcnt > 0):
train_users = train_users - {u}
test_users.add(u)
train_songcnt[:] = _songcnt
npl_test = np.sum([len(user_playlists[u]) for u in test_users])
if len(test_users) >= int(test_user_ratio * len(all_users)):
break
train_playlists = [all_playlists[j] for u in sorted(train_users) for j in user_playlists[u]]
test_playlists = [all_playlists[j] for u in sorted(test_users) for j in user_playlists[u]]
Explanation: Make sure every song in the test set is also in the training set.
End of explanation
print('#Songs in train set: %d, #Songs total: %d' % \
(len(set([sid for pl, _ in train_playlists for sid in pl])), len(all_songs)))
print('{:30s} {:,}'.format('#playlists (train):', len(train_playlists)))
print('{:30s} {:,}'.format('#playlists (test) :', len(test_playlists)))
print('{:30s} {:,} out of {:,}'.format('#users in test set:', len(test_users), len(all_users)))
assert 0 == len(train_users & test_users)
print('#users (train): {:,}'.format(len(train_users)))
xmax = np.max([len(pl) for (pl, _) in all_playlists]) + 1
ax = plt.subplot(111)
ax.hist([len(pl) for (pl, _) in train_playlists], bins=100)
ax.set_yscale('log')
ax.set_xlim(0, xmax)
ax.set_title('Histogram of playlist length in TRAINING set')
pass
ax = plt.subplot(111)
ax.hist([len(pl) for (pl, _) in test_playlists], bins=100)
ax.set_yscale('log')
ax.set_xlim(0, xmax)
ax.set_title('Histogram of playlist length in TEST set')
pass
song2pop_train = song2pop.copy()
for pl, _ in test_playlists:
for sid in pl:
song2pop_train[sid] -= 1
pkl.dump(song2pop_train, gzip.open(os.path.join(data_dir, 'setting4/song2pop_train.pkl.gz'), 'wb'))
Explanation: Sanity check that every song in the test set is also in the training set.
End of explanation
song2artist = pkl.load(gzip.open(fsong2artist, 'rb'))
artist_playlist = []
for pl, _ in train_playlists:
pl_artists = [song2artist[sid] if sid in song2artist else '$UNK$' for sid in pl]
artist_playlist.append(pl_artists)
fartist2vec_bin = os.path.join(data_dir, 'setting4/artist2vec.bin')
if os.path.exists(fartist2vec_bin):
artist2vec = gensim.models.KeyedVectors.load_word2vec_format(fartist2vec_bin, binary=True)
else:
artist2vec_model = gensim.models.Word2Vec(sentences=artist_playlist, size=n_feature_artist, seed=RAND_SEED,
window=10, iter=10, min_count=1)
artist2vec_model.wv.save_word2vec_format(fartist2vec_bin, binary=True)
artist2vec = artist2vec_model.wv
Explanation: Learn artist features
End of explanation
pkl_dir = os.path.join(data_dir, 'coldstart/setting4')
fpl = os.path.join(pkl_dir, 'playlists_train_test_s4.pkl.gz')
fx = os.path.join(pkl_dir, 'X.pkl.gz')
fytrain = os.path.join(pkl_dir, 'Y_train.pkl.gz')
fytest = os.path.join(pkl_dir, 'Y_test.pkl.gz')
fclique_train = os.path.join(pkl_dir, 'cliques_train.pkl.gz')
fclique_all = os.path.join(pkl_dir, 'cliques_all.pkl.gz')
X, Y = gen_dataset(playlists = [t[0] for t in train_playlists + test_playlists],
song2feature = song2feature, song2genre = song2genre,
song2artist = song2artist, artist2vec = artist2vec,
train_song_set = [t[0] for t in all_songs], song2pop_train=song2pop_train)
split_ix = len(train_playlists)
Y_train = Y[:, :split_ix].tocsc()
Y_test = Y[:, split_ix:].tocsc()
assert Y_train.shape[0] == Y_test.shape[0] == X.shape[0] == len(all_songs)
assert Y_train.shape[1] + Y_test.shape[1] == Y.shape[1] == len(all_playlists)
pkl.dump({'train_playlists': train_playlists, 'test_playlists': test_playlists}, gzip.open(fpl, 'wb'))
Explanation: Hold a subset of playlists, use all songs
End of explanation
X_mean = np.mean(X, axis=0).reshape((1, -1))
X_std = np.std(X, axis=0).reshape((1, -1)) + 10 ** (-6)
X -= X_mean
X /= X_std
print(np.mean(np.mean(X, axis=0)))
print(np.mean( np.std(X, axis=0)) - 1)
print('Train :', Y_train.shape)
print('Test :', Y_test.shape)
print('All: %s, %s' % (X.shape, Y.shape))
pkl.dump(X, gzip.open(fx, 'wb'))
pkl.dump(Y_train, gzip.open(fytrain, 'wb'))
pkl.dump(Y_test, gzip.open(fytest, 'wb'))
Explanation: Feature normalisation.
End of explanation
pl_users = [u for (_, u) in train_playlists]
cliques_train = []
for u in sorted(set(pl_users)):
clique = np.where(u == np.array(pl_users, dtype=np.object))[0]
cliques_train.append(clique)
pkl.dump(cliques_train, gzip.open(fclique_train, 'wb'))
clqsize = [len(clq) for clq in cliques_train]
print(np.min(clqsize), np.max(clqsize), len(clqsize), np.sum(clqsize))
assert np.all(np.arange(Y_train.shape[1]) == np.asarray(sorted([k for clq in cliques_train for k in clq])))
pldata = pkl.load(gzip.open(fpl, 'rb'))
train_playlists = pldata['train_playlists']
test_playlists = pldata['test_playlists']
pl_users = [u for (_, u) in train_playlists + test_playlists]
train_users = [u for (_, u) in train_playlists]
test_users = [u for (_, u) in test_playlists]
pl_users = train_users + test_users
user_set = sorted(set(train_users)) + sorted(set(test_users))
clique_all = []
for u in user_set:
clique = np.where(u == np.array(pl_users, dtype=np.object))[0]
#if len(clique) > 1:
clique_all.append(clique)
clqsize = [len(clq) for clq in clique_all]
print(np.min(clqsize), np.max(clqsize), len(clqsize), np.sum(clqsize))
Y_train = pkl.load(gzip.open(fytrain, 'rb'))
Y_test = pkl.load(gzip.open(fytest, 'rb'))
N = Y_train.shape[1] + Y_test.shape[1]
assert np.all(np.arange(N) == np.asarray(sorted([k for clq in clique_all for k in clq])))
pkl.dump(clique_all, gzip.open(fclique_all, 'wb'))
Explanation: Playlists of the same user form a clique.
Cliques in train set.
End of explanation |
15,643 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
HW3 visualisation
Step1: Load the topojson with the Swiss cantons and the CSV containing the grants by canton
Step2: The cantons that don't receive any grant don't appear in the CSV. We then add them to the dataframe with an amount of 0
Step3: We take the log of each value since that scale is more appropriate for the amount we have
Step4: Bonus
Step5: We want to compare the two areas divided by the röstigraben
So we need
Step6: Map each canton to its languages. We apply the function get_language (in map_universities.py), which, given a canton abbreviation, returns a list of languages. We didn't consider Romansh since it is only spoken by a small part of Graubünden.
Step7: Some cantons have 2 languages (Fribourg, Berne and Valais). We need to split the list and create a second attribute language2. The array of languages is sorted by main language, so language holds the main language and language2 the second one
Step8: Group by language (FR, D, IT) and sum the amount. Then rename the amount columns to amount_by_language
Step9: Join the the table group by language and the one containing the grant by canton | Python Code:
import folium
import pandas as pd
import numpy as np
from map_universities import *
Explanation: HW3 visualisation
End of explanation
swiss_canton = 'ch-cantons.topojson.json'
grant_data = pd.read_csv(r'all_canton_grants.csv')
Explanation: Load the topojson with the Swiss cantons and the CSV containing the grants by canton
End of explanation
#Create a dataframe of the canton abreviation -> names
#the function cantons() from map_universities.py return a dict : abreviation -> names
list_canton = pd.DataFrame.from_dict(cantons(), orient='index')
#get the canton that doen't appear in the the grand_data dataframe
not_in_grant_data = list_canton[~list_canton.index.isin(grant_data.canton)]
#Create a new dataframe containing those cantons
not_in_grant_data = pd.DataFrame(not_in_grant_data.index, columns=['canton'])
not_in_grant_data['amount'] = 0
#concatenate the 2 dataframe
grant_data = pd.concat([grant_data, not_in_grant_data], ignore_index= True)
grant_data
Explanation: The cantons that don't receive any grant don't appear in the CSV. We then add them to the dataframe with an amount of 0
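A more compact alternative (a sketch, not what the notebook does) is to reindex on the full list of canton abbreviations returned by cantons() and let pandas fill the missing amounts with 0:
```
grant_data_alt = (grant_data.set_index('canton')
                            .reindex(sorted(cantons().keys()), fill_value=0)
                            .rename_axis('canton')
                            .reset_index())
```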
End of explanation
def log_function(row):
if row.amount == 0: return row
row.amount = np.log10(row.amount)
return row
grant_data_log = grant_data.apply(log_function,1)
grant_data_log
swiss_map = folium.Map(location=[46.5966, 7.9761],zoom_start=7)
swiss_map.choropleth(
geo_path=swiss_canton,
topojson='objects.cantons',
data=grant_data_log,
columns=['canton', 'amount'],
key_on='id',
threshold_scale=[0,5,6,7,8,9],
fill_color='YlOrRd', fill_opacity=0.7, line_opacity=0.6,
legend_name='Grant money received by canton (CHF)'
)
swiss_map.save('swiss_map.html')
swiss_map
#the map is not rendered on Github. To see it, you can download swiss_map.html and open it on your browser.
Explanation: We take the log of each value since that scale is more appropriate for the amount we have
End of explanation
%run map_universities.py
Explanation: Bonus
End of explanation
grant_rostigraben = grant_data.copy()
Explanation: We want to compare the two areas divided by the röstigraben
So we need:
1) Map each canton to its main language. Since the topojson only allows us to give one color to each canton, we only consider the main language of each canton. It means:
* Fribourg -> French
* Valais -> French
* Bern -> German
* Graubünden -> German
2) Group by each language and sum the grants
3) Join the list of cantons with the grants by language
4) Show the map with the color code mapped to the amount by language
End of explanation
grant_rostigraben['language'] = grant_rostigraben.canton.apply(get_language)
grant_rostigraben.head()
Explanation: Map each canton to its languages. We apply the function get_language (in map_universities.py), which, given a canton abbreviation, returns a list of languages. We didn't consider Romansh since it is only spoken by a small part of Graubünden.
End of explanation
def split_if_two_language(row):
l = row['language']
row.language = l[0]
if(len(l)>1):
row['language2'] = l[1]
return row
grant_rostigraben = grant_rostigraben.apply(split_if_two_language, 1)
grant_rostigraben.head()
Explanation: Some cantons have 2 languages (Fribourg, Berne and Valais). We need to split the list and create a second attribute language2. The array of languages is sorted by main language, so language holds the main language and language2 the second one
End of explanation
grant_language = grant_rostigraben.groupby(by='language', axis=0, as_index=False).sum()
grant_language.rename(columns={'amount':'amount_by_language'}, inplace=True)
Explanation: Group by language (FR, D, IT) and sum the amount. Then rename the amount column to amount_by_language
End of explanation
# an inner join on the shared 'language' column attaches the per-language total to every canton
grant_with_language = pd.merge(grant_rostigraben, grant_language, on='language')
grant_with_language.head()
rostigraben_map = folium.Map(location=[46.5966, 7.9761],zoom_start=7)
rostigraben_map.choropleth(
geo_path=swiss_canton,
topojson='objects.cantons',
data=grant_with_language,
columns=['canton', 'amount_by_language'],
key_on='id',
#threshold_scale=[7,8,9],
fill_color='YlOrRd', fill_opacity=0.7, line_opacity=0.6,
legend_name='Grant money received by language (CHF)'
)
rostigraben_map.save('rostigraben_map.html')
rostigraben_map
#the map is not rendered on Github. To see it, you can download rostigraben_map.html and open it on your browser.
Explanation: Join the table grouped by language with the one containing the grants by canton
End of explanation |
15,644 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tokenization
Default tokenization
Tokenization (the first of the five parts of the Gothenburg model) divides the texts to be collated into tokens, which are most commonly (but not obligatorily) words. By default CollateX considers punctuation to be its own token, which means that the witness readings “Hi!” and “Hi” will both contain a token that reads “Hi” (and the first witness will contain an additional token, which reads “!”). In this situation, that’s the behavior the user probably wants, since both witnesses contain what a human would recognize as the same word.
We are going to be using the CollateX library to demonstrate tokenization, so let's go ahead and import it.
Step1: Issues with default tokenization
But is a word like “Peter’s” the same word as “Peter” for collation purposes? Because CollateX will regard the apostrophe as a separate token, “Peter’s” will be tokenized as three tokens
Step2: For possessives that may be acceptable behavior, but how about contractions like “didn’t” or “A’dam” (short for “Amsterdam”)? If the default tokenization does what you need, so much the better, but if not, you can override it according to your own requirements. Below we describe what CollateX does by default and how to override that behavior and perform your own tokenization.
How CollateX tokenizes
Step3: and split it into a list of whitespace-separated words with the Python re library, which we will import here so that we can use it below.
Step4: Now let’s treat final punctuation as a separate token without splitting on internal punctuation
Step5: The regex says that a token is either a string of any characters that ends in a word character (which will match “Peter’s” with the internal apostrophe as one token, since it ends in “s”, which is a word character) or a string of non-word characters. The re.findall method will give us back a list of all the separate (i.e. non-overlapping) times our expression matched. In the case of the string cat., the .*\w alternative matches cat (i.e. anything ending in a word character), and then the \W+ alternative matches . (i.e anything that is made entirely of non-word characters).
We now have three tokens, but they’re in nested lists, which isn’t what we want. Rather, we want a single list with all the tokens on the same level. We can accomplish that with a for loop and the .extend method for lists
Step6: We’ve now split our witness text into tokens, but instead of returning them as a list of strings, we need to format them into the list of Python dictionaries that CollateX requires. So let's talk about what CollateX requires.
Specifying the witnesses to be used in the collation
The format in which CollateX expects to receive our custom lists of tokens for all witnesses to be collated is a Python dictionary, which has the following structure
Step7: Since we want to tokenize all of our witnesses, let’s turn our tokenization routine into a Python function that we can call with different input text
Step8: Let's see how it worked! Here is how to give the tokens to CollateX.
Step9: Hands-on
The task
Suppose you want to keep the default tokenization (punctuation is always a separate token), except that
Step10: The next step
Step11: Notice here what ETree does with the namespace! It doesn't naturally like namespace prefixes like tei
Step12: In our Ozymandias file, the words of the poem are contained in phrases. So let's start by seeking out all the <phr> elements and getting their text.
Step13: This looks plausible at first, but we notice pretty soon that we are missing pieces of line - the third line, for example, should read something like
"Two vast and trunkless legs of stone
<lb/>Stand in the desart....
What's going on?
Here is the slightly mind-bending thing about ETree
Step14: Now that's looking better. We have a bunch of text, and now all we need to do is tokenize it! For this we can come back to the function that we wrote earlier, tokenize. Let's plug each of these bits of content in turn into our tokenizer, and see what we get.
Step15: Adding complexity
As XML tokenization goes, this one was pretty straightforward - all your text was in <phr> elements, and none of the text was in any child element, so we were able to get by with a combination of .text and .tail for the elements we encountered. What if our markup isn't so simple? What do we do?
Here is where you start to really have to grapple with the fact that TEI allows a thousand encoding variations to bloom. In order to tokenize your particular text, you will have to think about what you encoded and how, and what "counts" as text you want to extract.
IN the file ozymandias_2.xml I have provided a simple example of this. Here the encoder chose to add the canonical spelling for the word "desert" in a <corr> element, as part of a <choice>. If I tokenize that file in the same way as above, here is what I get.
Step16: Notice that I have neither "desert" nor "desart"! That is because, while I got the tail of the <choice> element, I didn't look inside it, and I didn't visit the <sic> or <corr> elements at all. I have to make my logic a little more complex, and I also have to think about which alternative I want. Let's say that I want to stay relatively true to the original. Here is the sort of thing I would have to do. | Python Code:
from collatex import *
Explanation: Tokenization
Default tokenization
Tokenization (the first of the five parts of the Gothenburg model) divides the texts to be collated into tokens, which are most commonly (but not obligatorily) words. By default CollateX considers punctuation to be its own token, which means that the witness readings “Hi!” and “Hi” will both contain a token that reads “Hi” (and the first witness will contain an additional token, which reads “!”). In this situation, that’s the behavior the user probably wants, since both witnesses contain what a human would recognize as the same word.
We are going to be using the CollateX library to demonstrate tokenization, so let's go ahead and import it.
End of explanation
collation = Collation()
collation.add_plain_witness("A", "Peter's cat.")
collation.add_plain_witness("B", "Peter's dog.")
table = collate(collation, segmentation=False)
print(table)
Explanation: Issues with default tokenization
But is a word like “Peter’s” the same word as “Peter” for collation purposes? Because CollateX will regard the apostrophe as a separate token, “Peter’s” will be tokenized as three tokens: the name, the apostrophe, and the possessive. Here’s the default behavior:
End of explanation
input = "Peter's cat."
print(input)
Explanation: For possessives that may be acceptable behavior, but how about contractions like “didn’t” or “A’dam” (short for “Amsterdam”)? If the default tokenization does what you need, so much the better, but if not, you can override it according to your own requirements. Below we describe what CollateX does by default and how to override that behavior and perform your own tokenization.
How CollateX tokenizes: default behavior
The default tokenizer built into CollateX defines a token as a string of either alphanumeric characters (in any writing system) or non-alphanumeric characters, in both cases including any (optional) trailing whitespace. This means that the input reading “Peter’s cat.” will be analyzed as consisting of five tokens: “Peter” plus “’” plus “s ” plus “cat” plus “.”. For alignment purposes CollateX ignores any trailing white space, so that “cat” in “The cat in the hat” would be tokenzied as “cat ” (with a trailing space), but for collation purposes it would match the “cat” in “Peter’s cat.”, which has no trailing space because it’s followed by a period.
If we need to override the default tokenization behavior, we can create our own tokenized input and tell CollateX to use that, instead of letting CollateX perform the tokenization itself prior to collation.
Doing your own tokenization
In a way that is consistent with the modular design of the Gothenburg model, CollateX permits the user to change the tokenization without having to change the other parts of the collation process. Since the tokenizer passes to CollateX the indivisible units that are to be aligned, performing our own collation means specifying those units on our own. We will now look at how we can split a text into tokens the way we prefer.
Automating the tokenization
In the example above we built our token list by hand, but that obviously isn’t scalable to a real project with more than a handful of words. Let’s enhance the code above so that it builds the token lists for us by tokenizing the input strings according to our requirements. This is where projects have to identify and formalize their own specifications, since, unfortunately, there is no direct way to tell Python to read your mind and “keep punctuation with adjacent letters when I want it there, but not when I don’t.” For this example, we’ll write a tokenizer that breaks a string first on white space (which would give us two tokens: “Peter’s” and “cat.”) and then, within those intermediate tokens, on final punctuation (separating the final period from “cat” but not breaking on the internal apostrophe in “Peter’s”). This strategy would also keep English-language contractions together as single tokens, but as we’ve written it, it wouldn’t separate a leading quotation mark from a word token, although that’s a behavior we’d probably want. In Real Life we might fine-tune the routine still further, but for this tutorial we’ll prioritize just handling the sample data.
Splitting on white space and then separating final but not internal punctuation
To develop our tokenization, let’s start with:
End of explanation
import re
input = "Peter's cat."
words = re.split(r'\s+', input)
print(words)
Explanation: and split it into a list of whitespace-separated words with the Python re library, which we will import here so that we can use it below.
End of explanation
input = "Peter's cat."
words = re.split(r'\s+', input)
tokens_by_word = [re.findall(r'.*\w|\W+$', word) for word in words]
print(tokens_by_word)
Explanation: Now let’s treat final punctuation as a separate token without splitting on internal punctuation:
End of explanation
input = "Peter's cat."
words = re.split(r'\s+', input)
tokens_by_word = [re.findall(r'.*\w|\W+$', word) for word in words]
tokens = []
for item in tokens_by_word:
tokens.extend(item)
print(tokens)
Explanation: The regex says that a token is either a string of any characters that ends in a word character (which will match “Peter’s” with the internal apostrophe as one token, since it ends in “s”, which is a word character) or a string of non-word characters. The re.findall method will give us back a list of all the separate (i.e. non-overlapping) times our expression matched. In the case of the string cat., the .*\w alternative matches cat (i.e. anything ending in a word character), and then the \W+ alternative matches . (i.e anything that is made entirely of non-word characters).
We now have three tokens, but they’re in nested lists, which isn’t what we want. Rather, we want a single list with all the tokens on the same level. We can accomplish that with a for loop and the .extend method for lists:
End of explanation
input = "Peter's cat."
words = re.split(r'\s+', input)
tokens_by_word = [re.findall(r'.*\w|\W+$', word) for word in words]
tokens = []
for item in tokens_by_word:
tokens.extend(item)
token_list = [{"t": token} for token in tokens]
print(token_list)
Explanation: We’ve now split our witness text into tokens, but instead of returning them as a list of strings, we need to format them into the list of Python dictionaries that CollateX requires. So let's talk about what CollateX requires.
Specifying the witnesses to be used in the collation
The format in which CollateX expects to receive our custom lists of tokens for all witnesses to be collated is a Python dictionary, which has the following structure:
{ "witnesses": [ witness_a, witness_b ] }
This is a Python dictionary whose key is the word witnesses, and whose value is a list of the witnesses (that is, the sets of text tokens) that we want to collate. Doing our own tokenization, then, means building a dictionary like the one above and putting our custom tokens in the correct format where the witness_a and witness_b variables stand above.
Specifying the siglum and token list for each witness
The witness data for each witness is a Python dictionary that must contain two properties, which have as keys the strings id and tokens. The value for the id key is a string that will be used as the siglum of the witness in any CollateX output. The value for the tokens key is a Python list of tokens that comprise the text (much like what we have made with our regular expressions, but we have one more step to get through...!
witness_a = { "id": "A", "tokens": list_of_tokens_for_witness_a }
Specifying the tokens for each witness
Each token for each witness is a Python dictionary with at least one member, which has the key "t" (think “text”). You'll learn in the Normalization unit what else you can put in here. A token for the string “cat” would look like:
{ "t": "cat" }
The key for every token is the string "t"; the value for this token is the string "cat". As noted above, the tokens for a witness are structured as a Python list, so if we chose to split our text only on whitespace we would tokenize our first witness as:
list_of_tokens_for_witness_a = [ { "t": "Peter's" }, { "t": "cat." } ]
Our witness has two tokens, instead of the five that the default tokenizer would have provided, because we’ve done the tokenization ourselves according to our own specifications.
Putting it all together
For ease of exposition we’ve used variables to limit the amount of code we write in any one line. We define our sets of tokens as:
list_of_tokens_for_witness_a = [ { "t": "Peter's" }, { "t": "cat." } ]
list_of_tokens_for_witness_b = [ { "t": "Peter's" }, { "t": "dog." } ]
Once we have those, we can define our witnesses that bear these tokens:
witness_a = { "id": "A", "tokens": list_of_tokens_for_witness_a }
witness_b = { "id": "B", "tokens": list_of_tokens_for_witness_b }
until finally we define our collation set as:
{ "witnesses": [ witness_a, witness_b ] }
with variables that point to the data for the two witnesses.
It is also possible to represent the same information directly, without variables:
{"witnesses": [
{
"id": "A",
"tokens": [
{"t": "Peter's"},
{"t": "cat."}
]
},
{
"id": "B",
"tokens": [
{"t": "Peter's"},
{"t": "dog."}
]
}
]}
So let's put a single witness together in the format CollateX requires, starting with that list of tokens we made.
End of explanation
def tokenize(input):
words = re.split(r'\s+', input) # split on whitespace
tokens_by_word = [re.findall(r'.*\w|\W+$', word) for word in words] # break off final punctuation
tokens = []
for item in tokens_by_word:
tokens.extend(item)
token_list = [{"t": token} for token in tokens] # create dictionaries for each token
return token_list
input_a = "Peter's cat."
input_b = "Peter's dog."
tokens_a = tokenize(input_a)
tokens_b = tokenize(input_b)
witness_a = { "id": "A", "tokens": tokens_a }
witness_b = { "id": "B", "tokens": tokens_b }
input = { "witnesses": [ witness_a, witness_b ] }
input
Explanation: Since we want to tokenize all of our witnesses, let’s turn our tokenization routine into a Python function that we can call with different input text:
End of explanation
table = collate(input, segmentation=False)
print(table)
Explanation: Let's see how it worked! Here is how to give the tokens to CollateX.
End of explanation
## Your code goes here
Explanation: Hands-on
The task
Suppose you want to keep the default tokenization (punctuation is always a separate token), except that:
Words should not break on internal hyphenation. For example, “hands-on” should be treated as one word.
English possessive apostrophe + “s” should be its own token. For example, “Peter’s” should be tokenized as “Peter” plus “’s”.
How to think about the task
Create a regular expression that mimics the default behavior, where punctuation is a separate token.
Enhance it to exclude hyphens from the inventory of punctuation that signals a token division.
Enhance it to treat “’s” as a separate token.
You can practice your regular expressions at http://www.regexpal.com/.
Sample sentence
Peter’s cat has completed the hands-on tokenization exercise.
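One possible solution, shown here only as a sketch (many other regular expressions satisfy the same two rules):
```
import re

def tokenize_hands_on(input):
    tokens = []
    for word in re.split(r'\s+', input):
        # "'s" first, then words that may contain internal hyphens, then any other punctuation mark
        tokens.extend(re.findall(r"['’]s|\w+(?:-\w+)*|[^\w\s]", word))
    return [{"t": token} for token in tokens]

print(tokenize_hands_on("Peter's cat has completed the hands-on tokenization exercise."))
```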
End of explanation
from lxml import etree
with open('ozymandias.xml', encoding='utf-8') as f:
ozzy = etree.parse(f)
print("Got an ElementTree with root tag", ozzy.getroot().tag)
print(etree.tostring(ozzy).decode('utf-8'))
Explanation: The next step: tokenizing XML
After all that work on marking up your document in XML, you are certainly going to want to tokenize it! This works in basically the same way, only we also have to learn to use an XML parser.
Personally I favor the lxml.etree library, though its way of handling text nodes takes some getting used to. If you have experience with more standard XML parsing models, take a look at the Integrating XML with Python notebook in this directory. We will see as we go along how etree works.
For this exercise, let's tokenize the Ozymandias file that we were working on yesterday. It's a good idea to work with "our" version of the file until you understand what is going on here, but once you think you have the hang of it, feel free to try it with the file you marked up!
End of explanation
def tei(tag):
return "{http://www.tei-c.org/ns/1.0}%s" % tag
tei('text')
Explanation: Notice here what ETree does with the namespace! It doesn't naturally like namespace prefixes like tei:, but prefers to just stick the entire URL in curly braces. We can make a little shortcut to do this for us, and then we can use it to find our elements.
End of explanation
for phrase in ozzy.iter(tei('phr')):
print(phrase.text)
Explanation: In our Ozymandias file, the words of the poem are contained in phrases. So let's start by seeking out all the <phr> elements and getting their text.
End of explanation
for phrase in ozzy.iter(tei('phr')):
content = phrase.text
for child in phrase:
content = content + child.tail
print(content)
Explanation: This looks plausible at first, but we notice pretty soon that we are missing pieces of line - the third line, for example, should read something like
"Two vast and trunkless legs of stone
<lb/>Stand in the desart....
What's going on?
Here is the slightly mind-bending thing about ETree: each element has not only textual content, but can also have a text tail. In this case, the <phr> element has the following contents:
Text content: Two vast and trunkless legs of stone\n
A child element: <lb/>
The <lb/> has no content, but it does have a tail! The tail is Stand in the desart.... and we have to ask for it separately. So let's try this - instead of getting just the text of each element, let's get its text AND the tail of any child elements. Here's how we do that.
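If the text/tail split feels abstract, this tiny standalone sketch (with a made-up scrap of XML rather than the real file) shows where each piece lives:
```
demo = etree.fromstring('<phr>Two vast and trunkless legs of stone\n<lb/>Stand in the desart....</phr>')
lb = demo[0]                 # the <lb/> child
print(repr(demo.text))       # 'Two vast and trunkless legs of stone\n'
print(repr(lb.text))         # None -- <lb/> itself is empty
print(repr(lb.tail))         # 'Stand in the desart....'
```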
End of explanation
tokens = []
for phrase in ozzy.iter(tei('phr')):
content = phrase.text
for child in phrase:
content = content + child.tail
tokens.extend(tokenize(content))
print(tokens)
Explanation: Now that's looking better. We have a bunch of text, and now all we need to do is tokenize it! For this we can come back to the function that we wrote earlier, tokenize. Let's plug each of these bits of content in turn into our tokenizer, and see what we get.
End of explanation
with open('ozymandias_2.xml', encoding='utf-8') as f:
ozzy2 = etree.parse(f)
print(etree.tostring(ozzy2).decode('utf-8'))
tokens = []
for phrase in ozzy2.iter(tei('phr')):
content = phrase.text
for child in phrase:
content = content + child.tail
tokens.extend(tokenize(content))
print(tokens)
Explanation: Adding complexity
As XML tokenization goes, this one was pretty straightforward - all your text was in <phr> elements, and none of the text was in any child element, so we were able to get by with a combination of .text and .tail for the elements we encountered. What if our markup isn't so simple? What do we do?
Here is where you start to really have to grapple with the fact that TEI allows a thousand encoding variations to bloom. In order to tokenize your particular text, you will have to think about what you encoded and how, and what "counts" as text you want to extract.
In the file ozymandias_2.xml I have provided a simple example of this. Here the encoder chose to add the canonical spelling for the word "desert" in a <corr> element, as part of a <choice>. If I tokenize that file in the same way as above, here is what I get.
End of explanation
tokens = []
for phrase in ozzy2.iter(tei('phr')):
content = phrase.text
for child in phrase:
if child.tag == tei('choice'):
## We know there is only one 'sic' element, but
## etree won't assume that! So we have to deal
## with "all" of them.
for sic in child.iter(tei('corr')):
content = content + sic.text
content = content + child.tail
tokens.extend(tokenize(content))
print(tokens)
Explanation: Notice that I have neither "desert" nor "desart"! That is because, while I got the tail of the <choice> element, I didn't look inside it, and I didn't visit the <sic> or <corr> elements at all. I have to make my logic a little more complex, and I also have to think about which alternative I want. Let's say that I want to stay relatively true to the original. Here is the sort of thing I would have to do.
End of explanation |
15,645 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Executed
Step1: Load software and filenames definitions
Step2: Data folder
Step3: List of data files
Step4: Data load
Initial loading of the data
Step5: Load the leakage coefficient from disk
Step6: Load the direct excitation coefficient ($d_{exAA}$) from disk
Step7: Update d with the correction coefficients
Step8: Laser alternation selection
At this point we have only the timestamps and the detector numbers
Step9: We need to define some parameters
Step10: We should check if everything is OK with an alternation histogram
Step11: If the plot looks good we can apply the parameters with
Step12: Measurements infos
All the measurement data is in the d variable. We can print it
Step13: Or check the measurements duration
Step14: Compute background
Compute the background using automatic threshold
Step15: Burst search and selection
Step16: Donor Leakage fit
Step17: Burst sizes
Step18: Fret fit
Max position of the Kernel Density Estimation (KDE)
Step19: Weighted mean of $E$ of each burst
Step20: Gaussian fit (no weights)
Step21: Gaussian fit (using burst size as weights)
Step22: Stoichiometry fit
Max position of the Kernel Density Estimation (KDE)
Step23: The Maximum likelihood fit for a Gaussian population is the mean
Step24: Computing the weighted mean and weighted standard deviation we get
Step25: Save data to file
Step26: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
Step27: This is just a trick to format the different variables | Python Code:
ph_sel_name = "None"
data_id = "12d"
# data_id = "7d"
Explanation: Executed: Mon Mar 27 11:38:45 2017
Duration: 7 seconds.
usALEX-5samples - Template
This notebook is executed through 8-spots paper analysis.
For a direct execution, uncomment the cell below.
End of explanation
from fretbursts import *
init_notebook()
from IPython.display import display
Explanation: Load software and filenames definitions
End of explanation
data_dir = './data/singlespot/'
import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
Explanation: Data folder:
End of explanation
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)
## Selection for POLIMI 2012-11-26 dataset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict
data_id
Explanation: List of data files:
End of explanation
d = loader.photon_hdf5(filename=files_dict[data_id])
Explanation: Data load
Initial loading of the data:
End of explanation
leakage_coeff_fname = 'results/usALEX - leakage coefficient DexDem.csv'
leakage = np.loadtxt(leakage_coeff_fname)
print('Leakage coefficient:', leakage)
Explanation: Load the leakage coefficient from disk:
End of explanation
dir_ex_coeff_fname = 'results/usALEX - direct excitation coefficient dir_ex_aa.csv'
dir_ex_aa = np.loadtxt(dir_ex_coeff_fname)
print('Direct excitation coefficient (dir_ex_aa):', dir_ex_aa)
Explanation: Load the direct excitation coefficient ($d_{exAA}$) from disk:
End of explanation
d.leakage = leakage
d.dir_ex = dir_ex_aa
Explanation: Update d with the correction coefficients:
End of explanation
d.ph_times_t, d.det_t
Explanation: Laser alternation selection
At this point we have only the timestamps and the detector numbers:
End of explanation
d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)
Explanation: We need to define some parameters: donor and acceptor channels, excitation period and donor and acceptor excitations:
End of explanation
plot_alternation_hist(d)
Explanation: We should check if everything is OK with an alternation histogram:
End of explanation
loader.alex_apply_period(d)
Explanation: If the plot looks good we can apply the parameters with:
End of explanation
d
Explanation: Measurements infos
All the measurement data is in the d variable. We can print it:
End of explanation
d.time_max
Explanation: Or check the measurements duration:
End of explanation
d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa
Explanation: Compute background
Compute the background using automatic threshold:
End of explanation
d.burst_search(L=10, m=10, F=7, ph_sel=Ph_sel('all'))
print(d.ph_sel)
dplot(d, hist_fret);
# if data_id in ['7d', '27d']:
# ds = d.select_bursts(select_bursts.size, th1=20)
# else:
# ds = d.select_bursts(select_bursts.size, th1=30)
ds = d.select_bursts(select_bursts.size, add_naa=False, th1=30)
n_bursts_all = ds.num_bursts[0]
def select_and_plot_ES(fret_sel, do_sel):
ds_fret= ds.select_bursts(select_bursts.ES, **fret_sel)
ds_do = ds.select_bursts(select_bursts.ES, **do_sel)
bpl.plot_ES_selection(ax, **fret_sel)
bpl.plot_ES_selection(ax, **do_sel)
return ds_fret, ds_do
ax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1)
if data_id == '7d':
fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False)
do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '12d':
fret_sel = dict(E1=0.30,E2=1.2,S1=0.131,S2=0.9, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '17d':
fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '22d':
fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '27d':
fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
n_bursts_do = ds_do.num_bursts[0]
n_bursts_fret = ds_fret.num_bursts[0]
n_bursts_do, n_bursts_fret
d_only_frac = 1.*n_bursts_do/(n_bursts_do + n_bursts_fret)
print('D-only fraction:', d_only_frac)
dplot(ds_fret, hist2d_alex, scatter_alpha=0.1);
dplot(ds_do, hist2d_alex, S_max_norm=2, scatter=False);
Explanation: Burst search and selection
End of explanation
bandwidth = 0.03
E_range_do = (-0.1, 0.15)
E_ax = np.r_[-0.2:0.401:0.0002]
E_pr_do_kde = bext.fit_bursts_kde_peak(ds_do, bandwidth=bandwidth, weights='size',
x_range=E_range_do, x_ax=E_ax, save_fitter=True)
mfit.plot_mfit(ds_do.E_fitter, plot_kde=True, bins=np.r_[E_ax.min(): E_ax.max(): bandwidth])
plt.xlim(-0.3, 0.5)
print("%s: E_peak = %.2f%%" % (ds.ph_sel, E_pr_do_kde*100))
Explanation: Donor Leakage fit
End of explanation
nt_th1 = 50
dplot(ds_fret, hist_size, which='all', add_naa=False)
xlim(-0, 250)
plt.axvline(nt_th1)
Th_nt = np.arange(35, 120)
nt_th = np.zeros(Th_nt.size)
for i, th in enumerate(Th_nt):
ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th)
nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th
plt.figure()
plot(Th_nt, nt_th)
plt.axvline(nt_th1)
nt_mean = nt_th[np.where(Th_nt == nt_th1)][0]
nt_mean
Explanation: Burst sizes
End of explanation
E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size')
E_fitter = ds_fret.E_fitter
E_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
E_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(E_fitter, ax=ax[0])
mfit.plot_mfit(E_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, E_pr_fret_kde*100))
display(E_fitter.params*100)
Explanation: Fret fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
ds_fret.fit_E_m(weights='size')
Explanation: Weighted mean of $E$ of each burst:
End of explanation
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None)
Explanation: Gaussian fit (no weights):
End of explanation
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size')
E_kde_w = E_fitter.kde_max_pos[0]
E_gauss_w = E_fitter.params.loc[0, 'center']
E_gauss_w_sig = E_fitter.params.loc[0, 'sigma']
E_gauss_w_err = float(E_gauss_w_sig/np.sqrt(ds_fret.num_bursts[0]))
E_kde_w, E_gauss_w, E_gauss_w_sig, E_gauss_w_err
Explanation: Gaussian fit (using burst size as weights):
End of explanation
S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03) #weights='size', add_naa=True)
S_fitter = ds_fret.S_fitter
S_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
S_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(S_fitter, ax=ax[0])
mfit.plot_mfit(S_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, S_pr_fret_kde*100))
display(S_fitter.params*100)
S_kde = S_fitter.kde_max_pos[0]
S_gauss = S_fitter.params.loc[0, 'center']
S_gauss_sig = S_fitter.params.loc[0, 'sigma']
S_gauss_err = float(S_gauss_sig/np.sqrt(ds_fret.num_bursts[0]))
S_kde, S_gauss, S_gauss_sig, S_gauss_err
Explanation: Stoichiometry fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
S = ds_fret.S[0]
S_ml_fit = (S.mean(), S.std())
S_ml_fit
Explanation: The Maximum likelihood fit for a Gaussian population is the mean:
End of explanation
weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0], weights='size', naa=ds_fret.naa[0], gamma=1.)
S_mean = np.dot(weights, S)/weights.sum()
S_std_dev = np.sqrt(
np.dot(weights, (S - S_mean)**2)/weights.sum())
S_wmean_fit = [S_mean, S_std_dev]
S_wmean_fit
Explanation: Computing the weighted mean and weighted standard deviation we get:
End of explanation
sample = data_id
Explanation: Save data to file
End of explanation
variables = ('sample n_bursts_all n_bursts_do n_bursts_fret '
'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err S_kde S_gauss S_gauss_sig S_gauss_err '
'E_pr_do_kde nt_mean\n')
Explanation: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
End of explanation
variables_csv = variables.replace(' ', ',')
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
**{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n'
data_str = var_fmt.format(**var_dict)
print(variables_csv)
print(data_str)
# NOTE: The file name should be the notebook name but with .csv extension
with open('results/usALEX-5samples-PR-leakage-dir-ex-all-ph.csv', 'a') as f:
f.seek(0, 2)
if f.tell() == 0:
f.write(variables_csv)
f.write(data_str)
Explanation: This is just a trick to format the different variables:
End of explanation |
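A pandas-based alternative for appending the same row is sketched below; it is not part of the original notebook and assumes the variables and var_dict objects defined above.

```python
import os
import pandas as pd

out_fname = 'results/usALEX-5samples-PR-leakage-dir-ex-all-ph.csv'
row = pd.DataFrame([var_dict], columns=variables.split())
# Write the header only if the file does not exist yet.
row.to_csv(out_fname, mode='a', index=False,
           header=not os.path.exists(out_fname))
```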
15,646 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Tokenizing with TF Text
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Splitter API
The main interfaces are Splitter and SplitterWithOffsets which have single methods split and split_with_offsets. The SplitterWithOffsets variant (which extends Splitter) includes an option for getting byte offsets. This allows the caller to know which bytes in the original string the created token was created from.
The Tokenizer and TokenizerWithOffsets are specialized versions of the Splitter that provide the convenience methods tokenize and tokenize_with_offsets respectively.
Generally, for any N-dimensional input, the returned tokens are in an N+1-dimensional RaggedTensor with the inner-most dimension of tokens mapping to the original individual strings.
```python
class Splitter {
@abstractmethod
def split(self, input)
}
class SplitterWithOffsets(Splitter) {
@abstractmethod
def split_with_offsets(self, input)
}
```
There is also a Detokenizer interface. Any tokenizer implementing this interface can accept an N-dimensional ragged tensor of tokens, and normally returns an N-1-dimensional tensor or ragged tensor that has the given tokens assembled together.
python
class Detokenizer {
@abstractmethod
def detokenize(self, input)
}
Tokenizers
Below is the suite of tokenizers provided by TensorFlow Text. String inputs are assumed to be UTF-8. Please review the Unicode guide for converting strings to UTF-8.
Whole word tokenizers
These tokenizers attempt to split a string by words, and is the most intuitive way to split text.
WhitespaceTokenizer
The text.WhitespaceTokenizer is the most basic tokenizer which splits strings on ICU defined whitespace characters (eg. space, tab, new line). This is often good for quickly building out prototype models.
Step3: You may notice a shortcoming of this tokenizer: punctuation is included with the word to make up a token. To split the words and punctuation into separate tokens, the UnicodeScriptTokenizer should be used.
UnicodeScriptTokenizer
The UnicodeScriptTokenizer splits strings based on Unicode script boundaries. The script codes used correspond to International Components for Unicode (ICU) UScriptCode values. See
Step4: Subword tokenizers
Subword tokenizers can be used with a smaller vocabulary, and allow the model to have some information about novel words from the subwords that make them up.
We briefly discuss the Subword tokenization options below, but the Subword Tokenization tutorial goes more in depth and also explains how to generate the vocab files.
WordpieceTokenizer
WordPiece tokenization is a data-driven tokenization scheme which generates a set of sub-tokens. These sub-tokens may correspond to linguistic morphemes, but this is often not the case.
The WordpieceTokenizer expects the input to already be split into tokens. Because of this prerequisite, you will often want to split using the WhitespaceTokenizer or UnicodeScriptTokenizer beforehand.
Step5: After the string is split into tokens, the WordpieceTokenizer can be used to split into subtokens.
Step6: BertTokenizer
The BertTokenizer mirrors the original implementation of tokenization from the BERT paper. This is backed by the WordpieceTokenizer, but also performs additional tasks such as normalization and tokenizing to words first.
Step7: SentencepieceTokenizer
The SentencepieceTokenizer is a sub-token tokenizer that is highly configurable. This is backed by the Sentencepiece library. Like the BertTokenizer, it can include normalization and token splitting before splitting into sub-tokens.
Step8: Other splitters
UnicodeCharTokenizer
This splits a string into UTF-8 characters. It is useful for CJK languages that do not have spaces between words.
Step9: The output is Unicode codepoints. This can be also useful for creating character ngrams, such as bigrams. To convert back into UTF-8 characters.
Step10: HubModuleTokenizer
This is a wrapper around models deployed to TF Hub to make the calls easier since TF Hub currently does not support ragged tensors. Having a model perform tokenization is particularly useful for CJK languages when you want to split into words, but do not have spaces to provide a heuristic guide. At this time, we have a single segmentation model for Chinese.
Step11: It may be difficult to view the results of the UTF-8 encoded byte strings. Decode the list values to make viewing easier.
Step12: SplitMergeTokenizer
The SplitMergeTokenizer & SplitMergeFromLogitsTokenizer have a targeted purpose of splitting a string based on provided values that indicate where the string should be split. This is useful when building your own segmentation models like the previous Segmentation example.
For the SplitMergeTokenizer, a value of 0 is used to indicate the start of a new string, and the value of 1 indicates the character is part of the current string.
Step13: The SplitMergeFromLogitsTokenizer is similar, but it instead accepts logit value pairs from a neural network that predict if each character should be split into a new string or merged into the current one.
Step14: RegexSplitter
The RegexSplitter is able to segment strings at arbitrary breakpoints defined by a provided regular expression.
Step15: Offsets
When tokenizing strings, it is often desired to know where in the original string the token originated from. For this reason, each tokenizer which implements TokenizerWithOffsets has a tokenize_with_offsets method that will return the byte offsets along with the tokens. The start_offsets lists the bytes in the original string each token starts at, and the end_offsets lists the bytes immediately after the point where each token ends. To rephrase, the start offsets are inclusive and the end offsets are exclusive.
Step16: Detokenization
Tokenizers which implement the Detokenizer provide a detokenize method which attempts to combine the strings. This has the chance of being lossy, so the detokenized string may not always match exactly the original, pre-tokenized string.
Step17: TF Data
TF Data is a powerful API for creating an input pipeline for training models. Tokenizers work as expected with the API. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
!pip install -q "tensorflow-text==2.8.*"
import requests
import tensorflow as tf
import tensorflow_text as tf_text
Explanation: Tokenizing with TF Text
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/text/guide/tokenizers"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/text/blob/master/docs/guide/tokenizers.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/text/blob/master/docs/guide/tokenizers.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/text/docs/guide/tokenizers.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/google/zh_segmentation/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub models</a>
</td>
</table>
Overview
Tokenization is the process of breaking up a string into tokens. Commonly, these tokens are words, numbers, and/or punctuation. The tensorflow_text package provides a number of tokenizers available for preprocessing text required by your text-based models. By performing the tokenization in the TensorFlow graph, you will not need to worry about differences between the training and inference workflows and managing preprocessing scripts.
This guide discusses the many tokenization options provided by TensorFlow Text, when you might want to use one option over another, and how these tokenizers are called from within your model.
Setup
End of explanation
tokenizer = tf_text.WhitespaceTokenizer()
tokens = tokenizer.tokenize(["What you know you can't explain, but you feel it."])
print(tokens.to_list())
Explanation: Splitter API
The main interfaces are Splitter and SplitterWithOffsets which have single methods split and split_with_offsets. The SplitterWithOffsets variant (which extends Splitter) includes an option for getting byte offsets. This allows the caller to know which bytes in the original string the created token was created from.
The Tokenizer and TokenizerWithOffsets are specialized versions of the Splitter that provide the convenience methods tokenize and tokenize_with_offsets respectively.
Generally, for any N-dimensional input, the returned tokens are in an N+1-dimensional RaggedTensor with the inner-most dimension of tokens mapping to the original individual strings.
```python
class Splitter {
@abstractmethod
def split(self, input)
}
class SplitterWithOffsets(Splitter) {
@abstractmethod
def split_with_offsets(self, input)
}
```
There is also a Detokenizer interface. Any tokenizer implementing this interface can accept an N-dimensional ragged tensor of tokens, and normally returns an N-1-dimensional tensor or ragged tensor that has the given tokens assembled together.
python
class Detokenizer {
@abstractmethod
def detokenize(self, input)
}
Tokenizers
Below is the suite of tokenizers provided by TensorFlow Text. String inputs are assumed to be UTF-8. Please review the Unicode guide for converting strings to UTF-8.
Whole word tokenizers
These tokenizers attempt to split a string by words, and this is the most intuitive way to split text.
WhitespaceTokenizer
The text.WhitespaceTokenizer is the most basic tokenizer which splits strings on ICU defined whitespace characters (eg. space, tab, new line). This is often good for quickly building out prototype models.
End of explanation
tokenizer = tf_text.UnicodeScriptTokenizer()
tokens = tokenizer.tokenize(["What you know you can't explain, but you feel it."])
print(tokens.to_list())
Explanation: You may notice a shortcoming of this tokenizer: punctuation is included with the word to make up a token. To split the words and punctuation into separate tokens, the UnicodeScriptTokenizer should be used.
UnicodeScriptTokenizer
The UnicodeScriptTokenizer splits strings based on Unicode script boundaries. The script codes used correspond to International Components for Unicode (ICU) UScriptCode values. See: http://icu-project.org/apiref/icu4c/uscript_8h.html
In practice, this is similar to the WhitespaceTokenizer with the most apparent difference being that it will split punctuation (USCRIPT_COMMON) from language texts (eg. USCRIPT_LATIN, USCRIPT_CYRILLIC, etc) while also separating language texts from each other. Note that this will also split contraction words into separate tokens.
End of explanation
tokenizer = tf_text.WhitespaceTokenizer()
tokens = tokenizer.tokenize(["What you know you can't explain, but you feel it."])
print(tokens.to_list())
Explanation: Subword tokenizers
Subword tokenizers can be used with a smaller vocabulary, and allow the model to have some information about novel words from the subwords that make them up.
We briefly discuss the Subword tokenization options below, but the Subword Tokenization tutorial goes more in depth and also explains how to generate the vocab files.
WordpieceTokenizer
WordPiece tokenization is a data-driven tokenization scheme which generates a set of sub-tokens. These sub-tokens may correspond to linguistic morphemes, but this is often not the case.
The WordpieceTokenizer expects the input to already be split into tokens. Because of this prerequisite, you will often want to split using the WhitespaceTokenizer or UnicodeScriptTokenizer beforehand.
End of explanation
url = "https://github.com/tensorflow/text/blob/master/tensorflow_text/python/ops/test_data/test_wp_en_vocab.txt?raw=true"
r = requests.get(url)
filepath = "vocab.txt"
open(filepath, 'wb').write(r.content)
subtokenizer = tf_text.WordpieceTokenizer(filepath, token_out_type=tf.string)
subtokens = subtokenizer.tokenize(tokens)
print(subtokens.to_list())
Explanation: After the string is split into tokens, the WordpieceTokenizer can be used to split into subtokens.
End of explanation
tokenizer = tf_text.BertTokenizer(filepath, token_out_type=tf.string, lower_case=True)
tokens = tokenizer.tokenize(["What you know you can't explain, but you feel it."])
print(tokens.to_list())
Explanation: BertTokenizer
The BertTokenizer mirrors the original implementation of tokenization from the BERT paper. This is backed by the WordpieceTokenizer, but also performs additional tasks such as normalization and tokenizing to words first.
End of explanation
url = "https://github.com/tensorflow/text/blob/master/tensorflow_text/python/ops/test_data/test_oss_model.model?raw=true"
sp_model = requests.get(url).content
tokenizer = tf_text.SentencepieceTokenizer(sp_model, out_type=tf.string)
tokens = tokenizer.tokenize(["What you know you can't explain, but you feel it."])
print(tokens.to_list())
Explanation: SentencepieceTokenizer
The SentencepieceTokenizer is a sub-token tokenizer that is highly configurable. This is backed by the Sentencepiece library. Like the BertTokenizer, it can include normalization and token splitting before splitting into sub-tokens.
End of explanation
tokenizer = tf_text.UnicodeCharTokenizer()
tokens = tokenizer.tokenize(["What you know you can't explain, but you feel it."])
print(tokens.to_list())
Explanation: Other splitters
UnicodeCharTokenizer
This splits a string into UTF-8 characters. It is useful for CJK languages that do not have spaces between words.
End of explanation
characters = tf.strings.unicode_encode(tf.expand_dims(tokens, -1), "UTF-8")
bigrams = tf_text.ngrams(characters, 2, reduction_type=tf_text.Reduction.STRING_JOIN, string_separator='')
print(bigrams.to_list())
Explanation: The output is Unicode codepoints. This can also be useful for creating character ngrams, such as bigrams. To convert back into UTF-8 characters.
End of explanation
MODEL_HANDLE = "https://tfhub.dev/google/zh_segmentation/1"
segmenter = tf_text.HubModuleTokenizer(MODEL_HANDLE)
tokens = segmenter.tokenize(["新华社北京"])
print(tokens.to_list())
Explanation: HubModuleTokenizer
This is a wrapper around models deployed to TF Hub to make the calls easier since TF Hub currently does not support ragged tensors. Having a model perform tokenization is particularly useful for CJK languages when you want to split into words, but do not have spaces to provide a heuristic guide. At this time, we have a single segmentation model for Chinese.
End of explanation
def decode_list(x):
if type(x) is list:
return list(map(decode_list, x))
return x.decode("UTF-8")
def decode_utf8_tensor(x):
return list(map(decode_list, x.to_list()))
print(decode_utf8_tensor(tokens))
Explanation: It may be difficult to view the results of the UTF-8 encoded byte strings. Decode the list values to make viewing easier.
End of explanation
strings = ["新华社北京"]
labels = [[0, 1, 1, 0, 1]]
tokenizer = tf_text.SplitMergeTokenizer()
tokens = tokenizer.tokenize(strings, labels)
print(decode_utf8_tensor(tokens))
Explanation: SplitMergeTokenizer
The SplitMergeTokenizer & SplitMergeFromLogitsTokenizer have a targeted purpose of splitting a string based on provided values that indicate where the string should be split. This is useful when building your own segmentation models like the previous Segmentation example.
For the SplitMergeTokenizer, a value of 0 is used to indicate the start of a new string, and the value of 1 indicates the character is part of the current string.
End of explanation
strings = [["新华社北京"]]
labels = [[[5.0, -3.2], [0.2, 12.0], [0.0, 11.0], [2.2, -1.0], [-3.0, 3.0]]]
tokenizer = tf_text.SplitMergeFromLogitsTokenizer()
tokens = tokenizer.tokenize(strings, labels)
print(decode_utf8_tensor(tokens))
Explanation: The SplitMergeFromLogitsTokenizer is similar, but it instead accepts logit value pairs from a neural network that predict if each character should be split into a new string or merged into the current one.
End of explanation
splitter = tf_text.RegexSplitter("\s")
tokens = splitter.split(["What you know you can't explain, but you feel it."], )
print(tokens.to_list())
Explanation: RegexSplitter
The RegexSplitter is able to segment strings at arbitrary breakpoints defined by a provided regular expression.
End of explanation
tokenizer = tf_text.UnicodeScriptTokenizer()
(tokens, start_offsets, end_offsets) = tokenizer.tokenize_with_offsets(['Everything not saved will be lost.'])
print(tokens.to_list())
print(start_offsets.to_list())
print(end_offsets.to_list())
Explanation: Offsets
When tokenizing strings, it is often desired to know where in the original string the token originated from. For this reason, each tokenizer which implements TokenizerWithOffsets has a tokenize_with_offsets method that will return the byte offsets along with the tokens. The start_offsets lists the bytes in the original string each token starts at, and the end_offsets lists the bytes immediately after the point where each token ends. To rephrase, the start offsets are inclusive and the end offsets are exclusive.
End of explanation
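As a quick sanity check (a sketch, not part of the original guide), the offsets can be used to slice each token back out of the UTF-8 encoded source string:

```python
sentence = 'Everything not saved will be lost.'.encode('utf-8')
for tok, start, end in zip(tokens.to_list()[0],
                           start_offsets.to_list()[0],
                           end_offsets.to_list()[0]):
    # Each token equals the [start, end) byte slice of the original string.
    print(tok, sentence[start:end])
```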
tokenizer = tf_text.UnicodeCharTokenizer()
tokens = tokenizer.tokenize(["What you know you can't explain, but you feel it."])
print(tokens.to_list())
strings = tokenizer.detokenize(tokens)
print(strings.numpy())
Explanation: Detokenization
Tokenizers which implement the Detokenizer provide a detokenize method which attempts to combine the strings. This has the chance of being lossy, so the detokenized string may not always match exactly the original, pre-tokenized string.
End of explanation
docs = tf.data.Dataset.from_tensor_slices([['Never tell me the odds.'], ["It's a trap!"]])
tokenizer = tf_text.WhitespaceTokenizer()
tokenized_docs = docs.map(lambda x: tokenizer.tokenize(x))
iterator = iter(tokenized_docs)
print(next(iterator).to_list())
print(next(iterator).to_list())
Explanation: TF Data
TF Data is a powerful API for creating an input pipeline for training models. Tokenizers work as expected with the API.
End of explanation |
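If a downstream model needs dense inputs, the ragged tokens can be padded inside the same pipeline. A minimal sketch, assuming the tokenized_docs dataset from above:

```python
# Convert each ragged batch of tokens into a dense, padded tensor.
padded_docs = tokenized_docs.map(lambda t: t.to_tensor(default_value=b''))
for batch in padded_docs:
    print(batch.numpy())
```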
15,647 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SimpleITK Images, They're Physical Objects <a href="https
Step1: Load your first image and display it
Step2: Image Construction
There are a variety of ways to create an image.
The following components are required for a complete definition of an image
Step3: Basic Image Attributes
You can change the image origin, spacing and direction using function calls. Making such changes to an image already containing data should be done cautiously.
You can also use the dictionary-like bracket operator to make these changes, with the keywords 'origin', 'spacing', 'direction'.
Step4: Image dimension queries
Step5: What is the depth of a 2D image?
Step6: Pixel/voxel type queries
Step7: What is the dimension and size of a Vector image and its data?
Step8: Accessing Pixels and Slicing
The Image class's member functions GetPixel and SetPixel provide an ITK-like interface for pixel access.
Step9: Slicing of SimpleITK images returns a copy of the image data.
This is similar to slicing Python lists and differs from the "view" returned by slicing numpy arrays.
Step10: Draw a square on top of the logo image
Step11: We can also paste one image into the other, either using the PasteImageFilter with its procedural interface or using a more Pythonic approach with image slicing. Note that for these operations SimpleITK treats the images as arrays of pixels and not as spatial objects. In the example below the fact that the images have different spacings is ignored.
Step12: Finally, SimpleITK images also support the usage of ellipsis. Below we use both available approaches to obtain a slice.
Step13: Conversion between numpy and SimpleITK
SimpleITK and numpy indexing access is in opposite order!
SimpleITK
Step14: From numpy to SimpleITK
Remember to set the image's origin, spacing, and possibly direction cosine matrix. The default values may not match the physical dimensions of your image.
Step15: There and back again
The following code cell illustrates a situation where your code is a combination of SimpleITK methods and custom Python code which works with intensity values or labels outside of SimpleITK. This is a reasonable approach when you implement an algorithm in Python and don't care about the physical spacing of things (you are actually assuming the volume is isotropic).
Step16: Image operations
SimpleITK supports basic arithmetic operations between images, <b>taking into account their physical space</b>.
Repeatedly run this cell. Fix the error (comment out the SetDirection, then SetSpacing). Why doesn't the SetOrigin line cause a problem? How close do two physical attributes need to be in order to be considered equivalent?
Step17: Reading and Writing
SimpleITK can read and write images stored in a single file, or a set of files (e.g. DICOM series).
Images stored in the DICOM format have a meta-data dictionary associated with them, which is populated with the DICOM tags. When a DICOM series is read as a single image, the meta-data information is not available since DICOM tags are specific to each file. If you need the meta-data, you have three options
Step18: Read an image in JPEG format and cast the pixel type according to user selection.
Step19: Read a DICOM series and write it as a single mha file
Step20: Write an image series as JPEG. The WriteImage function receives a volume and a list of images names and writes the volume according to the z axis. For a displayable result we need to rescale the image intensities (default is [0,255]) since the JPEG format requires a cast to the UInt8 pixel type.
Step21: Select a specific DICOM series from a directory and only then load user selection.
Step22: DICOM photometric interpretation
Generally speaking, SimpleITK represents color images as multi-channel images independent of a color space. It is up to you to interpret the channels correctly based on additional color space knowledge prior to using them for display or any other purpose.
The following cells illustrate reading and interpretation of interesting images in DICOM format. The first is a photograph of an X-ray on a light box (yes, there are some strange things in the wild). The second is a digital X-ray. While both of these are chest X-rays they differ in image modality (0008|0060) and in Photometric Interpretation (0028|0004), color space in DICOM speak.
Things to note
Step23: The first, is a color sRGB image while an x-ray should be a single channel gray scale image. We will convert sRGB to gray scale.
Step24: Finer control
The ImageFileReader's interface provides finer control for reading, allowing us to require the use of a specific IO and allowing us to stream parts of an image to memory without reading the whole image (supported by a subset of the ImageIO components).
Selecting a Specific Image IO
SimpleITK relies on the registered ImageIOs to indicate whether they can read a file and then perform the reading. This is done automatically, going over the set of ImageIOs and inquiring whether they can read the given file. The first one that can is selected. If multiple ImageIOs can read a specific format, we do not know which one was used for the task (e.g. TIFFImageIO and LSMImageIO, which is derived from it, can both read tif files). In some cases you may want to use a specific IO, possibly one that reads the file faster, or supports a more complete feature set associated with the file format.
The next cell shows how to find out which ImageIOs are registered and specify the one we want.
Step25: Streaming Image IO
Some of the ImageIOs supported in SimpleITK allow you to stream in sub-regions of an image without the need to read the whole image into memory. This is very useful when you are memory constrained (either your images are large or your memory is limited).
The ImageIOs that support streaming include HDF5ImageIO, VTKImageIO, NiftiImageIO, MetaImageIO...
The next cell shows how to read in a sub/cropped image from a larger image. We read the central 1/3 portion of the image [1/3,2/3] of the original image.
Step27: The next cells show how to subtract two large images from each other with a smaller memory footprint than the direct approach, though the code is much more complex and slower than the direct approach
Step28: A simple way of seeing your system's memory usage is to open the appropriate monitoring program | Python Code:
import SimpleITK as sitk
# If the environment variable SIMPLE_ITK_MEMORY_CONSTRAINED_ENVIRONMENT is set, this will override the ReadImage
# function so that it also resamples the image to a smaller size (testing environment is memory constrained).
%run setup_for_testing
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from ipywidgets import interact, fixed
import os
OUTPUT_DIR = "Output"
# Utility method that either downloads data from the Girder repository or
# if already downloaded returns the file name for reading from disk (cached data).
%run update_path_to_download_script
from downloaddata import fetch_data as fdata
Explanation: SimpleITK Images, They're Physical Objects <a href="https://mybinder.org/v2/gh/InsightSoftwareConsortium/SimpleITK-Notebooks/master?filepath=Python%2F03_Image_Details.ipynb"><img style="float: right;" src="https://mybinder.org/badge_logo.svg"></a>
SimpleITK conventions:
* Image access is in x,y,z order, image.GetPixel(x,y,z) or image[x,y,z], with zero based indexing.
* If the output of an ITK filter has non-zero starting index, then the index will be set to 0, and the origin adjusted accordingly.
The unique feature of SimpleITK (derived from ITK) as a toolkit for image manipulation and analysis is that it views <b>images as physical objects occupying a bounded region in physical space</b>. In addition images can have different spacing between pixels along each axis, and the axes are not necessarily orthogonal. The following figure illustrates these concepts.
<img src="ImageOriginAndSpacing.png" style="width:700px"/><br><br>
Pixel Types
The pixel type is represented as an enumerated type. The following is a table of the enumerated list.
<table>
<tr><td>sitkUInt8</td><td>Unsigned 8 bit integer</td></tr>
<tr><td>sitkInt8</td><td>Signed 8 bit integer</td></tr>
<tr><td>sitkUInt16</td><td>Unsigned 16 bit integer</td></tr>
<tr><td>sitkInt16</td><td>Signed 16 bit integer</td></tr>
<tr><td>sitkUInt32</td><td>Unsigned 32 bit integer</td></tr>
<tr><td>sitkInt32</td><td>Signed 32 bit integer</td></tr>
<tr><td>sitkUInt64</td><td>Unsigned 64 bit integer</td></tr>
<tr><td>sitkInt64</td><td>Signed 64 bit integer</td></tr>
<tr><td>sitkFloat32</td><td>32 bit float</td></tr>
<tr><td>sitkFloat64</td><td>64 bit float</td></tr>
<tr><td>sitkComplexFloat32</td><td>complex number of 32 bit float</td></tr>
<tr><td>sitkComplexFloat64</td><td>complex number of 64 bit float</td></tr>
<tr><td>sitkVectorUInt8</td><td>Multi-component of unsigned 8 bit integer</td></tr>
<tr><td>sitkVectorInt8</td><td>Multi-component of signed 8 bit integer</td></tr>
<tr><td>sitkVectorUInt16</td><td>Multi-component of unsigned 16 bit integer</td></tr>
<tr><td>sitkVectorInt16</td><td>Multi-component of signed 16 bit integer</td></tr>
<tr><td>sitkVectorUInt32</td><td>Multi-component of unsigned 32 bit integer</td></tr>
<tr><td>sitkVectorInt32</td><td>Multi-component of signed 32 bit integer</td></tr>
<tr><td>sitkVectorUInt64</td><td>Multi-component of unsigned 64 bit integer</td></tr>
<tr><td>sitkVectorInt64</td><td>Multi-component of signed 64 bit integer</td></tr>
<tr><td>sitkVectorFloat32</td><td>Multi-component of 32 bit float</td></tr>
<tr><td>sitkVectorFloat64</td><td>Multi-component of 64 bit float</td></tr>
<tr><td>sitkLabelUInt8</td><td>RLE label of unsigned 8 bit integers</td></tr>
<tr><td>sitkLabelUInt16</td><td>RLE label of unsigned 16 bit integers</td></tr>
<tr><td>sitkLabelUInt32</td><td>RLE label of unsigned 32 bit integers</td></tr>
<tr><td>sitkLabelUInt64</td><td>RLE label of unsigned 64 bit integers</td></tr>
</table>
There is also sitkUnknown, which is used for undefined or erroneous pixel ID's. It has a value of -1.
The 64-bit integer types are not available on all distributions. When not available the value is sitkUnknown.
End of explanation
logo = sitk.ReadImage(fdata("SimpleITK.jpg"))
plt.imshow(sitk.GetArrayViewFromImage(logo))
plt.axis("off");
Explanation: Load your first image and display it
End of explanation
image_3D = sitk.Image(256, 128, 64, sitk.sitkInt16)
image_2D = sitk.Image(64, 64, sitk.sitkFloat32)
image_2D = sitk.Image([32, 32], sitk.sitkUInt32)
image_RGB = sitk.Image([128, 64], sitk.sitkVectorUInt8, 3)
Explanation: Image Construction
There are a variety of ways to create an image.
The following components are required for a complete definition of an image:
<ol>
<li>Pixel type [fixed on creation, no default]: unsigned 32 bit integer, sitkVectorUInt8, etc., see list above.</li>
<li> Sizes [fixed on creation, no default]: number of pixels/voxels in each dimension. This quantity implicitly defines the image dimension.</li>
<li> Origin [default is zero]: coordinates of the pixel/voxel with index (0,0,0) in physical units (i.e. mm).</li>
<li> Spacing [default is one]: Distance between adjacent pixels/voxels in each dimension given in physical units.</li>
<li> Direction matrix [default is identity]: mapping, rotation, between direction of the pixel/voxel axes and physical directions.</li>
</ol>
Initial pixel/voxel values are set to zero.
End of explanation
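A small check of those defaults (a sketch): a freshly constructed image reports a zero origin, unit spacing, and an identity direction matrix until you change them.

```python
print(image_3D.GetOrigin())     # (0.0, 0.0, 0.0)
print(image_3D.GetSpacing())    # (1.0, 1.0, 1.0)
print(image_3D.GetDirection())  # identity matrix, row-major
```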
image_3D.SetOrigin((78.0, 76.0, 77.0))
image_3D.SetSpacing([0.5, 0.5, 3.0])
print(f"origin: {image_3D.GetOrigin()}")
print(f"size: {image_3D.GetSize()}")
print(f"spacing: {image_3D.GetSpacing()}")
print(f"direction: {image_3D.GetDirection()}\n")
image_3D["origin"] = (2.0, 4.0, 8.0)
image_3D["spacing"] = [0.25, 0.25, 5.0]
print(f'origin: {image_3D["origin"]}')
print(f"size: {image_3D.GetSize()}")
print(f'spacing: {image_3D["spacing"]}')
print(f'direction: {image_3D["direction"]}')
Explanation: Basic Image Attributes
You can change the image origin, spacing and direction using function calls. Making such changes to an image already containing data should be done cautiously.
You can also use the dictionary-like bracket operator to make these changes, with the keywords 'origin', 'spacing', 'direction'.
End of explanation
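Because origin, spacing, and direction define the mapping to physical space, their effect can be checked directly. A short sketch using the image_3D configured above:

```python
# Map the first voxel and one of its neighbors to physical coordinates.
print(image_3D.TransformIndexToPhysicalPoint((0, 0, 0)))
print(image_3D.TransformIndexToPhysicalPoint((1, 0, 0)))
# And go back from a physical point to the closest voxel index.
print(image_3D.TransformPhysicalPointToIndex(image_3D.GetOrigin()))
```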
print(image_3D.GetDimension())
print(image_3D.GetWidth())
print(image_3D.GetHeight())
print(image_3D.GetDepth())
Explanation: Image dimension queries:
End of explanation
print(image_2D.GetSize())
print(image_2D.GetDepth())
Explanation: What is the depth of a 2D image?
End of explanation
print(image_3D.GetPixelIDValue())
print(image_3D.GetPixelIDTypeAsString())
print(image_3D.GetNumberOfComponentsPerPixel())
Explanation: Pixel/voxel type queries:
End of explanation
print(image_RGB.GetDimension())
print(image_RGB.GetSize())
print(image_RGB.GetNumberOfComponentsPerPixel())
Explanation: What is the dimension and size of a Vector image and its data?
End of explanation
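To work with one channel of a vector image as a regular scalar image, you can select it out; a brief sketch using image_RGB from above:

```python
# Extract the first channel of the vector image as a scalar image.
first_channel = sitk.VectorIndexSelectionCast(image_RGB, 0)
print(first_channel.GetNumberOfComponentsPerPixel())
print(first_channel.GetPixelIDTypeAsString())
```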
help(image_3D.GetPixel)
print(image_3D.GetPixel(0, 0, 0))
image_3D.SetPixel(0, 0, 0, 1)
print(image_3D.GetPixel(0, 0, 0))
# This can also be done using Pythonic notation.
print(image_3D[0, 0, 1])
image_3D[0, 0, 1] = 2
print(image_3D[0, 0, 1])
Explanation: Accessing Pixels and Slicing
The Image class's member functions GetPixel and SetPixel provide an ITK-like interface for pixel access.
End of explanation
# Brute force sub-sampling
logo_subsampled = logo[::2, ::2]
# Get the sub-image containing the word Simple
simple = logo[0:115, :]
# Get the sub-image containing the word Simple and flip it
simple_flipped = logo[115:0:-1, :]
n = 4
plt.subplot(n, 1, 1)
plt.imshow(sitk.GetArrayViewFromImage(logo))
plt.axis("off")
plt.subplot(n, 1, 2)
plt.imshow(sitk.GetArrayViewFromImage(logo_subsampled))
plt.axis("off")
plt.subplot(n, 1, 3)
plt.imshow(sitk.GetArrayViewFromImage(simple))
plt.axis("off")
plt.subplot(n, 1, 4)
plt.imshow(sitk.GetArrayViewFromImage(simple_flipped))
plt.axis("off");
Explanation: Slicing of SimpleITK images returns a copy of the image data.
This is similar to slicing Python lists and differs from the "view" returned by slicing numpy arrays.
End of explanation
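A quick way to convince yourself of the copy semantics (a sketch, not in the original notebook): modify a pixel in a slice and compare it with the original image.

```python
roi = logo[0:5, 0:5]
roi[0, 0] = [255, 255, 255]
# The original pixel is unchanged because roi is a copy, not a view.
print(logo[0, 0], roi[0, 0])
```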
# Version 0: get the numpy array and assign the value via broadcast - later on you will need to construct
# a new image from the array
logo_pixels = sitk.GetArrayFromImage(logo)
logo_pixels[0:10, 0:10] = [0, 255, 0]
# Version 1: generates an error, the image slicing returns a new image and you cannot assign a value to an image
# logo[0:10,0:10] = [255,0,0]
# Version 2: image slicing returns a new image, so all assignments here will not have any effect on the original
# 'logo' image
logo_subimage = logo[0:10, 0:10]
for x in range(0, 10):
for y in range(0, 10):
logo_subimage[x, y] = [255, 0, 0]
# Version 3: modify the original image, iterate and assign a value to each pixel
# for x in range(0,10):
# for y in range(0,10):
# logo[x,y] = [255,0,0]
plt.subplot(2, 1, 1)
plt.imshow(sitk.GetArrayViewFromImage(logo))
plt.axis("off")
plt.subplot(2, 1, 2)
plt.imshow(logo_pixels)
plt.axis("off");
Explanation: Draw a square on top of the logo image:
After running this cell, uncomment "Version 3" and see its effect.
End of explanation
logo = sitk.ReadImage(fdata("SimpleITK.jpg"))
sz_x = 10
sz_y = 10
color_channels = [
sitk.Image([sz_x, sz_y], sitk.sitkUInt8),
sitk.Image([sz_x, sz_y], sitk.sitkUInt8) + 255,
sitk.Image([sz_x, sz_y], sitk.sitkUInt8),
]
color_image = sitk.Compose(color_channels)
color_image.SetSpacing([0.5, 0.5])
print(logo.GetSpacing())
print(color_image.GetSpacing())
# Set sub image using the Paste function
logo = sitk.Paste(
destinationImage=logo,
sourceImage=color_image,
sourceSize=color_image.GetSize(),
sourceIndex=[0, 0],
destinationIndex=[0, 0],
)
# Set sub image using slicing.
logo[20 : 20 + sz_x, 0:sz_y] = color_image
sitk.Show(logo)
Explanation: We can also paste one image into the other, either using the PasteImageFilter with its procedural interface or using a more Pythonic approach with image slicing. Note that for these operations SimpleITK treats the images as arrays of pixels and not as spatial objects. In the example below the fact that the images have different spacings is ignored.
End of explanation
z_slice = image_3D.GetDepth() // 2
result1 = image_3D[..., z_slice]
result2 = image_3D[:, :, z_slice]
# Check whether the two slices are equivalent, same pixel content and same origin, spacing, direction cosine.
# Uncomment the following line to see what happens if the slices do not have the same origin.
# result1['origin'] = [o+1.0 for o in result1['origin']]
try:
if np.all(sitk.GetArrayViewFromImage(result1 - result2) == 0):
print("Slices equivalent.")
else:
print("Slices not equivalent (intensity differences).")
except Exception:
print("Slices not equivalent (physical differences).")
Explanation: Finally, SimpleITK images also support the usage of ellipsis. Below we use both available approaches to obtain a slice.
End of explanation
nda = sitk.GetArrayFromImage(image_3D)
print(image_3D.GetSize())
print(nda.shape)
nda = sitk.GetArrayFromImage(image_RGB)
print(image_RGB.GetSize())
print(nda.shape)
gabor_image = sitk.GaborSource(size=[64, 64], frequency=0.03)
# Getting a numpy array view on the image data doesn't copy the data
nda_view = sitk.GetArrayViewFromImage(gabor_image)
plt.imshow(nda_view, cmap=plt.cm.Greys_r)
plt.axis("off")
# Trying to assign a value to the array view will throw an exception
nda_view[0, 0] = 255
Explanation: Conversion between numpy and SimpleITK
SimpleITK and numpy indexing access is in opposite order!
SimpleITK: image[x,y,z]<br>
numpy: image_numpy_array[z,y,x]
From SimpleITK to numpy
We have two options for converting from SimpleITK to numpy:
* GetArrayFromImage(): returns a copy of the image data. You can then freely modify the data as it has no effect on the original SimpleITK image.
* GetArrayViewFromImage(): returns a view on the image data which is useful for display in a memory efficient manner. You cannot modify the data and the view will be invalid if the original SimpleITK image is deleted.
End of explanation
nda = np.zeros((10, 20, 3))
# if this is supposed to be a 3D gray scale image [x=3, y=20, z=10]
img = sitk.GetImageFromArray(nda)
print(img.GetSize())
# if this is supposed to be a 2D color image [x=20,y=10]
img = sitk.GetImageFromArray(nda, isVector=True)
print(img.GetSize())
Explanation: From numpy to SimpleITK
Remember to set the image's origin, spacing, and possibly direction cosine matrix. The default values may not match the physical dimensions of your image.
End of explanation
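For example (a sketch with made-up values), after creating the image you would typically attach the physical information yourself:

```python
img = sitk.GetImageFromArray(nda, isVector=True)  # 2D color image, size [20, 10]
img.SetOrigin((10.0, 20.0))
img.SetSpacing((0.5, 0.5))
img.SetDirection((1.0, 0.0, 0.0, 1.0))
print(img.GetOrigin(), img.GetSpacing(), img.GetDirection())
```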
def my_algorithm(image_as_numpy_array):
# res is the image result of your algorithm, has the same grid size as the original image
res = image_as_numpy_array
return res
# Starting with SimpleITK
img = sitk.ReadImage(fdata("training_001_mr_T1.mha"))
# Custom Python code working on a numpy array.
npa_res = my_algorithm(sitk.GetArrayFromImage(img))
# Converting back to SimpleITK (assumes we didn't move the image in space as we copy the information from the original)
res_img = sitk.GetImageFromArray(npa_res)
res_img.CopyInformation(img)
# Continuing to work with SimpleITK images
res_img - img
Explanation: There and back again
The following code cell illustrates a situation where your code is a combination of SimpleITK methods and custom Python code which works with intensity values or labels outside of SimpleITK. This is a reasonable approach when you implement an algorithm in Python and don't care about the physical spacing of things (you are actually assuming the volume is isotropic).
End of explanation
img1 = sitk.Image(24, 24, sitk.sitkUInt8)
img1[0, 0] = 0
img2 = sitk.Image(img1.GetSize(), sitk.sitkUInt8)
img2.SetDirection([0, 1, 0.5, 0.5])
img2.SetSpacing([0.5, 0.8])
img2.SetOrigin([0.000001, 0.000001])
img2[0, 0] = 255
img3 = img1 + img2
print(img3[0, 0])
Explanation: Image operations
SimpleITK supports basic arithmetic operations between images, <b>taking into account their physical space</b>.
Repeatedly run this cell. Fix the error (comment out the SetDirection, then SetSpacing). Why doesn't the SetOrigin line cause a problem? How close do two physical attributes need to be in order to be considered equivalent?
End of explanation
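As a hint for the last question, SimpleITK compares the physical meta-data of the operands up to a global tolerance. The sketch below assumes the static accessors on ProcessObject available in recent SimpleITK versions; treat the exact names and default values as an assumption to verify against your installed version.

```python
# Query the tolerance used when deciding whether two images occupy the
# same physical space (assumed API; verify for your SimpleITK version).
print(sitk.ProcessObject.GetGlobalDefaultCoordinateTolerance())
# It can be adjusted globally, e.g.:
# sitk.ProcessObject.SetGlobalDefaultCoordinateTolerance(1e-4)
```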
img = sitk.ReadImage(fdata("SimpleITK.jpg"))
print(img.GetPixelIDTypeAsString())
# write as PNG and BMP
sitk.WriteImage(img, os.path.join(OUTPUT_DIR, "SimpleITK.png"))
sitk.WriteImage(img, os.path.join(OUTPUT_DIR, "SimpleITK.bmp"))
Explanation: Reading and Writing
SimpleITK can read and write images stored in a single file, or a set of files (e.g. DICOM series).
Images stored in the DICOM format have a meta-data dictionary associated with them, which is populated with the DICOM tags. When a DICOM series is read as a single image, the meta-data information is not available since DICOM tags are specific to each file. If you need the meta-data, you have three options:
Using the object oriented interface's ImageSeriesReader class, configure it to load the tags using the MetaDataDictionaryArrayUpdateOn method and possibly the LoadPrivateTagsOn method if you need the private tags. Once the series is read you can access the meta-data from the series reader using the GetMetaDataKeys, HasMetaDataKey, and GetMetaData.
Using the object oriented interface's ImageFileReader, set a specific slice's file name and only read its meta-data using the ReadImageInformation method which only reads the meta-data but not the bulk pixel information. Once the meta-data is read you can access it from the file reader using the GetMetaDataKeys, HasMetaDataKey, and GetMetaData.
Using the object oriented interface's ImageFileReader, set a specific slice's file name and read it. Or using the procedural interface's, ReadImage function, read a specific file. You can then access the meta-data directly from the Image using the GetMetaDataKeys, HasMetaDataKey, and GetMetaData.
In the following cell, we read an image in JPEG format, and write it as PNG and BMP. File formats are deduced from the file extension. Appropriate pixel type is also set - you can override this and force a pixel type of your choice.
End of explanation
# Several pixel types, some make sense in this case (vector types) and some are just to show
# that the user's choice will force the pixel type even when it doesn't make sense
# (e.g. sitkVectorUInt16 or sitkUInt8).
pixel_types = {
"sitkUInt8": sitk.sitkUInt8,
"sitkUInt16": sitk.sitkUInt16,
"sitkFloat64": sitk.sitkFloat64,
"sitkVectorUInt8": sitk.sitkVectorUInt8,
"sitkVectorUInt16": sitk.sitkVectorUInt16,
"sitkVectorFloat64": sitk.sitkVectorFloat64,
}
def pixel_type_dropdown_callback(pixel_type, pixel_types_dict):
# specify the file location and the pixel type we want
img = sitk.ReadImage(fdata("SimpleITK.jpg"), pixel_types_dict[pixel_type])
print(img.GetPixelIDTypeAsString())
print(img[0, 0])
plt.imshow(sitk.GetArrayViewFromImage(img))
plt.axis("off")
interact(
pixel_type_dropdown_callback,
pixel_type=list(pixel_types.keys()),
pixel_types_dict=fixed(pixel_types),
);
Explanation: Read an image in JPEG format and cast the pixel type according to user selection.
End of explanation
data_directory = os.path.dirname(fdata("CIRS057A_MR_CT_DICOM/readme.txt"))
series_ID = "1.2.840.113619.2.290.3.3233817346.783.1399004564.515"
# Get the list of files belonging to a specific series ID.
reader = sitk.ImageSeriesReader()
# Use the functional interface to read the image series.
original_image = sitk.ReadImage(
reader.GetGDCMSeriesFileNames(data_directory, series_ID)
)
# Write the image.
output_file_name_3D = os.path.join(OUTPUT_DIR, "3DImage.mha")
sitk.WriteImage(original_image, output_file_name_3D)
# Read it back again.
written_image = sitk.ReadImage(output_file_name_3D)
# Check that the original and written image are the same.
statistics_image_filter = sitk.StatisticsImageFilter()
statistics_image_filter.Execute(original_image - written_image)
# Check that the original and written files are the same
print(
f"Max, Min differences are : {statistics_image_filter.GetMaximum()}, {statistics_image_filter.GetMinimum()}"
)
Explanation: Read a DICOM series and write it as a single mha file
End of explanation
sitk.WriteImage(
sitk.Cast(sitk.RescaleIntensity(written_image), sitk.sitkUInt8),
[
os.path.join(OUTPUT_DIR, f"slice{i:03d}.jpg")
for i in range(written_image.GetSize()[2])
],
)
Explanation: Write an image series as JPEG. The WriteImage function receives a volume and a list of images names and writes the volume according to the z axis. For a displayable result we need to rescale the image intensities (default is [0,255]) since the JPEG format requires a cast to the UInt8 pixel type.
End of explanation
data_directory = os.path.dirname(fdata("CIRS057A_MR_CT_DICOM/readme.txt"))
# Global variable 'selected_series' is updated by the interact function
selected_series = ""
file_reader = sitk.ImageFileReader()
def DICOM_series_dropdown_callback(series_to_load, series_dictionary):
global selected_series
# Print some information about the series from the meta-data dictionary
# DICOM standard part 6, Data Dictionary: http://medical.nema.org/medical/dicom/current/output/pdf/part06.pdf
file_reader.SetFileName(series_dictionary[series_to_load][0])
file_reader.ReadImageInformation()
tags_to_print = {
"0010|0010": "Patient name: ",
"0008|0060": "Modality: ",
"0008|0021": "Series date: ",
"0008|0080": "Institution name: ",
"0008|1050": "Performing physician's name: ",
}
for tag in tags_to_print:
try:
print(tags_to_print[tag] + file_reader.GetMetaData(tag))
except: # Ignore if the tag isn't in the dictionary
pass
selected_series = series_to_load
# Directory contains multiple DICOM studies/series, store
# in dictionary with key being the series ID
reader = sitk.ImageSeriesReader()
series_file_names = {}
series_IDs = reader.GetGDCMSeriesIDs(data_directory)
# Check that we have at least one series
if series_IDs:
for series in series_IDs:
series_file_names[series] = reader.GetGDCMSeriesFileNames(
data_directory, series
)
interact(
DICOM_series_dropdown_callback,
series_to_load=list(series_IDs),
series_dictionary=fixed(series_file_names),
)
else:
print("Data directory does not contain any DICOM series.")
reader.SetFileNames(series_file_names[selected_series])
img = reader.Execute()
# Display the image slice from the middle of the stack, z axis
z = int(img.GetDepth() / 2)
plt.imshow(sitk.GetArrayViewFromImage(img)[z, :, :], cmap=plt.cm.Greys_r)
plt.axis("off");
Explanation: Select a specific DICOM series from a directory and only then load user selection.
End of explanation
xrays = [sitk.ReadImage(fdata("photo.dcm")), sitk.ReadImage(fdata("cxr.dcm"))]
# We can access the image's metadata via the GetMetaData method or
# via the bracket operator, the latter is more concise.
for img in xrays:
print(f'Image Modality: {img.GetMetaData("0008|0060")}')
print(f"Number of channels: {img.GetNumberOfComponentsPerPixel()}")
    print(f'Photometric Interpretation: {img["0028|0004"]}')
# Display the image using Fiji which expects the channels to be in the RGB color space
sitk.Show(img)
Explanation: DICOM photometric interpretation
Generally speaking, SimpleITK represents color images as multi-channel images independent of a color space. It is up to you to interpret the channels correctly based on additional color space knowledge prior to using them for display or any other purpose.
The following cells illustrate reading and interpretation of interesting images in DICOM format. The first is a photograph of an X-ray on a light box (yes, there are some strange things in the wild). The second is a digital X-ray. While both of these are chest X-rays they differ in image modality (0008|0060) and in Photometric Interpretation (0028|0004), color space in DICOM speak.
Things to note:
1. When using SimpleITK to read a color DICOM image, the channel values will be transformed to the RGB color space.
2. When using SimpleITK to read a scalar image, it is assumed that the lowest intensity value is black and highest white. If the photometric interpretation tag is MONOCHROME2 (lowest value displayed as black) nothing is done. If it is MONOCHROME1 (lowest value displayed as white), the pixel values are inverted.
End of explanation
def srgb2gray(image):
# Convert sRGB image to gray scale and rescale results to [0,255]
channels = [
sitk.VectorIndexSelectionCast(image, i, sitk.sitkFloat32)
for i in range(image.GetNumberOfComponentsPerPixel())
]
# linear mapping
I = 1 / 255.0 * (0.2126 * channels[0] + 0.7152 * channels[1] + 0.0722 * channels[2])
# nonlinear gamma correction
I = (
I * sitk.Cast(I <= 0.0031308, sitk.sitkFloat32) * 12.92
+ I ** (1 / 2.4) * sitk.Cast(I > 0.0031308, sitk.sitkFloat32) * 1.055
- 0.055
)
return sitk.Cast(sitk.RescaleIntensity(I), sitk.sitkUInt8)
sitk.Show(srgb2gray(xrays[0]))
Explanation: The first is a color sRGB image, while an X-ray should be a single-channel gray scale image. We will convert sRGB to gray scale.
End of explanation
file_reader = sitk.ImageFileReader()
# Get a tuple listing all registered ImageIOs
image_ios_tuple = file_reader.GetRegisteredImageIOs()
print("The supported image IOs are: " + str(image_ios_tuple))
# Optionally, just print the reader and see which ImageIOs are registered
print("\n", file_reader)
# Specify the JPEGImageIO and read file
file_reader.SetImageIO("JPEGImageIO")
file_reader.SetFileName(fdata("SimpleITK.jpg"))
logo = file_reader.Execute()
# Unfortunately, reading a non-JPEG image will now fail
try:
file_reader.SetFileName(fdata("cthead1.png"))
ct_head = file_reader.Execute()
except RuntimeError:
print("Got a RuntimeError exception.")
# We can reset the file reader to its default behaviour so that it automatically
# selects the ImageIO
file_reader.SetImageIO("")
ct_head = file_reader.Execute()
Explanation: Finer control
The ImageFileReader's interface provides finer control for reading, allowing us to require the use of a specific IO and allowing us to stream parts of an image to memory without reading the whole image (supported by a subset of the ImageIO components).
Selecting a Specific Image IO
SimpleITK relies on the registered ImageIOs to indicate whether they can read a file and then perform the reading. This is done automatically, going over the set of ImageIOs and inquiring whether they can read the given file. The first one that can is selected. If multiple ImageIOs can read a specific format, we do not know which one was used for the task (e.g. TIFFImageIO and LSMImageIO, which is derived from it, can both read tif files). In some cases you may want to use a specific IO, possibly one that reads the file faster, or supports a more complete feature set associated with the file format.
The next cell shows how to find out which ImageIOs are registered and specify the one we want.
End of explanation
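# A hedged aside (assumes a SimpleITK version whose procedural interface accepts an
# imageIO argument, available in recent releases): the same explicit IO selection
# without constructing an ImageFileReader.
logo_via_jpeg_io = sitk.ReadImage(fdata("SimpleITK.jpg"), imageIO="JPEGImageIO")
print(logo_via_jpeg_io.GetSize())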
file_reader = sitk.ImageFileReader()
file_reader.SetFileName(fdata("vm_head_rgb.mha"))
file_reader.ReadImageInformation()
image_size = file_reader.GetSize()
start_index, extract_size = zip(
*[(int(1.0 / 3.0 * sz), int(1.0 / 3.0 * sz)) for sz in file_reader.GetSize()]
)
file_reader.SetExtractIndex(start_index)
file_reader.SetExtractSize(extract_size)
sitk.Show(file_reader.Execute())
Explanation: Streaming Image IO
Some of the ImageIOs supported in SimpleITK allow you to stream in sub-regions of an image without the need to read the whole image into memory. This is very useful when you are memory constrained (either your images are large or your memory is limited).
The ImageIOs that support streaming include HDF5ImageIO, VTKImageIO, NiftiImageIO, MetaImageIO...
The next cell shows how to read a sub/cropped region from a larger image. We read the central third, i.e. the [1/3, 2/3] portion along each axis of the original image.
End of explanation
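# A minimal sketch: ReadImageInformation reads only the header, so size, spacing and
# pixel type can be inspected without loading pixel data before choosing a sub-region.
info_reader = sitk.ImageFileReader()
info_reader.SetFileName(fdata("vm_head_rgb.mha"))
info_reader.ReadImageInformation()
print("size:", info_reader.GetSize())
print("spacing:", info_reader.GetSpacing())
print("pixel type:", sitk.GetPixelIDValueAsString(info_reader.GetPixelID()))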
def streaming_subtract(image1_file_name, image2_file_name, parts):
    """Subtract image1 from image2 using 'parts' number of sub-regions."""
file_reader = sitk.ImageFileReader()
file_reader.SetFileName(image1_file_name)
file_reader.ReadImageInformation()
image_size = file_reader.GetSize()
# Create the result image, initially empty
result_img = sitk.Image(
file_reader.GetSize(),
file_reader.GetPixelID(),
file_reader.GetNumberOfComponents(),
)
result_img.SetSpacing(file_reader.GetSpacing())
result_img.SetOrigin(file_reader.GetOrigin())
result_img.SetDirection(file_reader.GetDirection())
extract_size = list(file_reader.GetSize())
extract_size[-1] = extract_size[-1] // parts
current_index = [0] * file_reader.GetDimension()
for i in range(parts):
if i == (
parts - 1
): # last region may be smaller than the standard extract region
extract_size[-1] = image_size[-1] - current_index[-1]
file_reader.SetFileName(image1_file_name)
file_reader.SetExtractIndex(current_index)
file_reader.SetExtractSize(extract_size)
sub_image1 = file_reader.Execute()
file_reader.SetFileName(image2_file_name)
file_reader.SetExtractIndex(current_index)
file_reader.SetExtractSize(extract_size)
sub_image2 = file_reader.Execute()
# Paste the result of subtracting the two subregions into their location in the result_img
result_img = sitk.Paste(
result_img,
sub_image1 - sub_image2,
extract_size,
[0] * file_reader.GetDimension(),
current_index,
)
current_index[-1] += extract_size[-1]
return result_img
# If you have the patience and RAM you can try this with the vm_head_rgb.mha image.
image1_file_name = fdata("fib_sem_bacillus_subtilis.mha")
image2_file_name = fdata("fib_sem_bacillus_subtilis.mha")
Explanation: The next cells show how to subtract two large images from each other with a smaller memory footprint, though the code is much more complex and slower than the direct approach:
sitk.ReadImage(image1_file_name) - sitk.ReadImage(image2_file_name)
Note: The code assumes that the two images occupy the same spatial region (origin, spacing, direction cosine matrix).
End of explanation
result_img = streaming_subtract(image1_file_name, image2_file_name, parts=5)
del result_img
result_img = sitk.ReadImage(image1_file_name) - sitk.ReadImage(image2_file_name)
del result_img
Explanation: A simple way of seeing your system's memory usage is to open the appropriate monitoring program: (Windows) Resource Monitor; (Linux) top; (OS X) Activity Monitor. This will give you a rough idea of the memory used by the streaming vs. non-streaming approaches.
End of explanation |
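# An optional in-code alternative (assumes the third-party psutil package is
# installed): print the current process' resident set size around each approach
# instead of watching an external monitor.
import os
import psutil
print(f"RSS: {psutil.Process(os.getpid()).memory_info().rss / 1e6:.1f} MB")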
15,648 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Project Euler
Step1: Find the square of the sum of the first 100 natural numbers
Step2: Find and print the difference
Step3: Success! | Python Code:
sum_of_squares = sum([i ** 2 for i in range(1,101)])
Explanation: Project Euler: Problem 6
https://projecteuler.net/problem=6
The sum of the squares of the first ten natural numbers is,
$$1^2 + 2^2 + ... + 10^2 = 385$$
The square of the sum of the first ten natural numbers is,
$$(1 + 2 + ... + 10)^2 = 55^2 = 3025$$
Hence the difference between the sum of the squares of the first ten natural numbers and the square of the sum is 3025 − 385 = 2640.
Find the difference between the sum of the squares of the first one hundred natural numbers and the square of the sum.
Find the sum of the squares of the first 100 natural numbers
End of explanation
square_of_sum = (sum([i for i in range(1,101)])) ** 2
Explanation: Find the square of the sum of the first 100 natural numbers
End of explanation
difference = square_of_sum - sum_of_squares
print(difference)
Explanation: Find and print the difference
End of explanation
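# A quick cross-check added for illustration, using the closed forms
# sum = n(n+1)/2 and sum of squares = n(n+1)(2n+1)/6.
n = 100
assert difference == (n * (n + 1) // 2) ** 2 - n * (n + 1) * (2 * n + 1) // 6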
# This cell will be used for grading, leave it at the end of the notebook.
Explanation: Success!
End of explanation |
15,649 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'ec-earth3-veg-lr', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: EC-EARTH-CONSORTIUM
Source ID: EC-EARTH3-VEG-LR
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:59
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
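# Hypothetical illustration only (the choice below is a placeholder, not a statement
# about EC-EARTH3-VEG-LR): an ENUM property is answered by passing one of the valid
# choices listed above to DOC.set_value, e.g.
# DOC.set_value("OGCM")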
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how treatment of isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuary-specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? If so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean*
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean*
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embeded in the ocean model (instead of levitating) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
15,650 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Karen Yu, Nick Vasios, Thibaut Perol
AM207 Final Project
Energy Disaggregation from Non-Intrusive Load Monitoring
DISAGGREGATION USING COMBINATORIAL OPTIMIZATION
Importing Necessary Packages
Step6: The Heart of the Notebook
Step7: Importing and Loading the REDD dataset
Step8: We want to train the Combinatorial Optimization Algorithm using the data for 5 buildings and then test it against the last building. To simplify our analysis and also to enable comparison with other methods (Neural Nets, FHMM, MLE etc) we will only try to disaggregate data associated with the fridge and the microwave. However, the REDD dataset that we are using here does not contain data measurements for the fridge and microwave for all buildings. In particular, building 4 does not have measurements for the fridge. As a result, we will exclude building 4 from the dataset and we will only import the meters associated with the fridge from other buildings.
The train data set will consist of meters associated with the fridge and microwave from buildings 1,2,3 and 6. We will then test the combinatorial optimization algorithm against the aggregated data for building 5.
We first plot the time window span for all buildings
Step9: Unfortunately, due to a bug in one of the main classes of the NILMTK package, the implementation of the Combinatorial Optimization does not save the meters for the disaggregated data correctly unless the building on which we test also exists in the training set. More on this issue can be found here https
Step10: Creating MeterGroups with the desired appliances from the desired buildings
Below we define a function that is able to create a metergroup that only includes meters for the appliances that we are interested in and is also able to exclude buildings that we don't want in the meter. Also, if an appliance is requested but a meter is not found then the meter is skipped but the metergroup is created nonetheless.
Step11: Now we set the appliances that we want as well as the buildings to exclude and we create the metergroup
Step12: As we can see the Metergroup was successfully created and contains all the appliances we requested (Fridge and Microwave) in all buildings that the appliances exist apart from the ones we excluded
Correcting the MeterGroup (Necessary for the CO to work)
Now we need to perform the trick we mentioned previously. We need to also include the meter from building 5 with the Fridge and Microwave which is the building we are going to test on but we need to make sure that only a very small portion of the data is seen for this building. We already took care of that by changing the window for the data in building 5 so now we only have to include the meters for the Fridge and Microwave for building 5 from the reduced time dataset
Step13: As we can see the metergroup was updated successfully
Training
We now need to train in the Metergroup we just created. First, let us load the class for the CO
Step14: Now Let's train
Step15: Preparing the Testing Data
Now that the training is done, the only thing that we have to do is to prepare the Data for Building 5 that we want to test on and call the Disaggregation. The data set is now the remaining part of building 5 that is not seen. After that, we only keep the Main meter which contains information about the aggregated data consumption and we disaggregate.
Step16: Disaggregating the test data
The disaggregation Begins Now
Step17: OK.. Now we are all done. All that remains is to interpret the results and plot the scores..
Post Processing & Results
Step19: Resampling to align meters
Before we are able to calculate and plot the metrics we need to align the ground truth meter with the disaggregated meters. Why so? If you notice in the disaggregation method of the CO class above, you may see that by default the time sampling is changed from 3s which is the raw data to 60s. This has to happen in order to make the disaggregation more efficient computationally but also because it is impossible to disaggregate using the actual time step. So in order to compare now we have to resample the meter for the ground truth and align it
Step20: Here we just plot the disaggregated data alongside the ground truth for the Fridge
Step21: Aligning meters, Converting to Numpy and Computing Metrics
In this part of the Notebook, we call the function we previously defined to align the meters and then we convert the meters to pandas and ultimately to numpy arrays. We check if any NaN's exist (which is something possible after resampling.. Resampling errors may occur) and replace them with 0's if they do. We also compute the following metrics for each appliance
Step22: Results
Now we just plot the scores for both the Fridge and the Microwave in order to be able to visualize what is going on. We do not comment on the results in this notebook since we do this in the report. There is a separate notebook where all these results are combined along with the corresponding results from the Neural Network and the FHMM method and the total results are reported side by side to ease comparison. We plot them here as well for housekeeping although it is redundant.
F1-Score
Step23: Precision
Step24: Recall
Step25: Accuracy | Python Code:
from __future__ import print_function, division
import numpy as np
import pandas as pd
from os.path import join
import pickle
import copy
from pylab import rcParams
import matplotlib.pyplot as plt
%matplotlib inline
rcParams['figure.figsize'] = (13, 6)
import nilmtk
from nilmtk import DataSet, TimeFrame, MeterGroup, HDFDataStore
from nilmtk.utils import print_dict,find_nearest
from nilmtk.feature_detectors import cluster
from nilmtk.disaggregate import Disaggregator
from nilmtk.electric import get_vampire_power
from nilmtk.metrics import f1_score
import warnings
from warnings import warn
warnings.filterwarnings("ignore")
import seaborn as sns
# sns.set_style("white")
# Fix the seed for repeatability of experiments
SEED = 42
np.random.seed(SEED)
Explanation: Karen Yu, Nick Vasios, Thibaut Perol
AM207 Final Project
Energy Disaggregation from Non-Intrusive Load Monitoring
DISAGGREGATION USING COMBINATORIAL OPTIMIZATION
Importing Necessary Packages
End of explanation
class CombinatorialOptimisation(Disaggregator):
A Combinatorial Optimization Algorithm based on the implementation by NILMTK
This class is built upon the main Disaggregator class already implemented by NILMTK
All the methods from Disaggregator are passed in here as well since we import the class
as shown above. We should note however that Disaggregator is nothing more than a general interface
class upon which all disaggregator algorithms are built. All the methods are initialized in the
Disaggregator class but the specific implementation is based upon the method to be implemented.
In other words, even though we pass in Disaggregator, all methods will be redefined again to work with
the Combinatorial Optimization algorithm as you can see below.
Attributes
----------
model : list of dicts
Each dict has these keys:
states : list of ints (the power (Watts) used in different states)
training_metadata : ElecMeter or MeterGroup object used for training
this set of states. We need this information because we
need the appliance type (and perhaps some other metadata)
for each model.
state_combinations : 2D array
Each column is an appliance.
Each row is a possible combination of power demand values e.g.
[[0, 0, 0, 0],
[0, 0, 0, 100],
[0, 0, 50, 0],
[0, 0, 50, 100], ...]
MIN_CHUNK_LENGTH : int
def __init__(self):
self.model = []
self.state_combinations = None
self.MIN_CHUNK_LENGTH = 100
self.MODEL_NAME = 'Combinatorial Optimization'
def train(self, metergroup, num_states_dict=None, **load_kwargs):
Train using 1D CO. Places the learnt model in the `model` attribute.
Parameters
----------
metergroup : a nilmtk.MeterGroup object
num_states_dict : dict
**load_kwargs : keyword arguments passed to `meter.power_series()`
Notes
-----
* only uses first chunk for each meter (TODO: handle all chunks).
# Initializing dictionary to save the number of states
if num_states_dict is None:
num_states_dict = {}
# The CO class is only able to train in new models. We can only train once. If model exists, raise an error
if self.model:
raise RuntimeError(
"This implementation of Combinatorial Optimisation"
" does not support multiple calls to `train`.")
# How many meters do we have in the training set?
num_meters = len(metergroup.meters)
# If more than 20 then reduce the number of clusters to reduce the computational cost.
if num_meters > 20:
max_num_clusters = 2
else:
max_num_clusters = 3
print('Now training...')
print('Loop in all meters begins...')
# We now loop in all meters passed in in the training data set
# Every time, we load the data in the meter and we call the method
# --> train_on_chunk. For more info about this method please see below
for i, meter in enumerate(metergroup.submeters().meters):
#print('We now train for submeter {}'.format(meter))
# Load the time series for the power consumption for this meter
power_series = meter.power_series(**load_kwargs)
# Note that we do not effectively load until we use the next() method
# We load and save into chunk. Chunk will be used in training
chunk = power_series.next()
# Get the number of total states from the dictionary
num_total_states = num_states_dict.get(meter)
if num_total_states is not None:
num_on_states = num_total_states - 1
else:
num_on_states = None
#print('i={},num_total_states={},num_on_states={}'.format(i,meter,num_total_states,num_on_states))
# The actual training happens now. We call train_on_chunk using the time series we loaded on chunk for this meter
self.train_on_chunk(chunk, meter, max_num_clusters, num_on_states)
# Check to see if there are any more chunks.
try:
power_series.next()
except StopIteration:
pass
else:
warn("The current implementation of CombinatorialOptimisation"
" can only handle a single chunk. But there are multiple"
" chunks available. So have only trained on the"
" first chunk!")
print("Done training!")
def train_on_chunk(self, chunk, meter, max_num_clusters, num_on_states):
Train on chunk trains the Combinatorial Optimization Model based on the time series for the power consumption
passed in chunk. This method is based on the sklearn machine learning library and in particular the KMEANS
algorithm. It calls the cluster function which is imported in the beginning of this notebook. Cluster, prepares
the data in chunk so that its size is always compatible and the same and then calls the KMEANS algorithm to
perform the clustering. Function cluster returns only the centers of the clustered data which correspond to the
individual states for the given appliance/meter
# Check if we've already trained on this meter. We only allow training once on each meter
meters_in_model = [d['training_metadata'] for d in self.model]
if meter in meters_in_model:
raise RuntimeError(
"Meter {} is already in model!"
" Can't train twice on the same meter!"
.format(meter))
# Do the KMEANS clustering and return the centers
states = cluster(chunk, max_num_clusters, num_on_states)
print('\t Now Clustering in Train on Chunk')
#print('\t {}'.format(states))
# Append the clustered data to the model
self.model.append({
'states': states,
'training_metadata': meter})
def _set_state_combinations_if_necessary(self):
Get centroids
# If we import sklearn at the top of the file then auto doc fails.
if (self.state_combinations is None or
self.state_combinations.shape[1] != len(self.model)):
from sklearn.utils.extmath import cartesian
# Saving the centroids in centroids (appliance states)
centroids = [model['states'] for model in self.model]
# Function cartesian returns all possible combinations
# than can be performed using centroids
self.state_combinations = cartesian(centroids)
print()
#print('Now printing the state combinations...')
#print(cartesian(centroids))
def disaggregate(self, mains, output_datastore,
vampire_power=None, **load_kwargs):
'''Disaggregate mains according to the model learnt previously.
Parameters
----------
mains : nilmtk.ElecMeter or nilmtk.MeterGroup
output_datastore : instance of nilmtk.DataStore subclass
For storing power predictions from disaggregation algorithm.
vampire_power : None or number (watts)
If None then will automatically determine vampire power
from data. If you do not want to use vampire power then
set vampire_power = 0.
sample_period : number, optional
The desired sample period in seconds. Set to 60 by default.
sections : TimeFrameGroup, optional
Set to mains.good_sections() by default.
**load_kwargs : key word arguments
Passed to `mains.power_series(**kwargs)`
'''
# Performing default pre disaggregation checks. Checking meters etc..
load_kwargs = self._pre_disaggregation_checks(load_kwargs)
# Disaggregation defauls. Sample perios and sections
load_kwargs.setdefault('sample_period', 60)
load_kwargs.setdefault('sections', mains.good_sections())
# Initializing time frames and fetching the meter for the aggregated data
timeframes = []
building_path = '/building{}'.format(mains.building())
mains_data_location = building_path + '/elec/meter1'
data_is_available = False
# We now load the aggregated data for power consumption of the whole house in small chunks
# Every iteration of the following loop we perform the CO step to disaggregate
counter = 0
print('Disaggregation now begins...')
for chunk in mains.power_series(**load_kwargs):
counter += 1
# Check that chunk is sensible size
if len(chunk) < self.MIN_CHUNK_LENGTH:
continue
print('\t Now processing chunk {}...'.format(counter))
# Record metadata
timeframes.append(chunk.timeframe)
measurement = chunk.name
# This is where the disaggregation happens
# Vampire Power is just the minimum of the power series in this chunk
appliance_powers = self.disaggregate_chunk(chunk, vampire_power)
# Here we save the disaggregated data for this chunk in Pandas dataframe and update the
# HDF5 file we created.
for i, model in enumerate(self.model):
# Fetch the disag data for this appliance
appliance_power = appliance_powers[i]
if len(appliance_power) == 0:
continue
data_is_available = True
# Just for saving.. Nothing major happening here
cols = pd.MultiIndex.from_tuples([chunk.name])
meter_instance = model['training_metadata'].instance()
df = pd.DataFrame(
appliance_power.values, index=appliance_power.index,
columns=cols)
key = '{}/elec/meter{}'.format(building_path, meter_instance)
output_datastore.append(key, df)
# Copy mains data to disag output
mains_df = pd.DataFrame(chunk, columns=cols)
output_datastore.append(key=mains_data_location, value=mains_df)
if data_is_available:
self._save_metadata_for_disaggregation(
output_datastore=output_datastore,
sample_period=load_kwargs['sample_period'],
measurement=measurement,
timeframes=timeframes,
building=mains.building(),
meters=[d['training_metadata'] for d in self.model]
)
print('Disaggregation Completed Successfully...!!!')
def disaggregate_chunk(self, mains, vampire_power=None):
In-memory disaggregation.
Parameters
----------
mains : pd.Series
vampire_power : None or number (watts)
If None then will automatically determine vampire power
from data. If you do not want to use vampire power then
set vampire_power = 0.
Returns
-------
appliance_powers : pd.DataFrame where each column represents a
disaggregated appliance. Column names are the integer index
into `self.model` for the appliance in question.
if not self.model:
raise RuntimeError(
"The model needs to be instantiated before"
" calling `disaggregate`. The model"
" can be instantiated by running `train`.")
if len(mains) < self.MIN_CHUNK_LENGTH:
raise RuntimeError("Chunk is too short.")
# sklearn produces lots of DepreciationWarnings with PyTables
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
# Because CombinatorialOptimisation could have been trained using
# either train() or train_on_chunk(), we must
# set state_combinations here.
self._set_state_combinations_if_necessary()
# Add vampire power to the model (Min of power series of the aggregated data)
if vampire_power is None:
vampire_power = get_vampire_power(mains)
if vampire_power > 0:
print()
#print("Including vampire_power = {} watts to model...".format(vampire_power))
# How many combinations
n_rows = self.state_combinations.shape[0]
vampire_power_array = np.zeros((n_rows, 1)) + vampire_power
state_combinations = np.hstack(
(self.state_combinations, vampire_power_array))
else:
state_combinations = self.state_combinations
summed_power_of_each_combination = np.sum(state_combinations, axis=1)
# summed_power_of_each_combination is now an array where each
# value is the total power demand for each combination of states.
# Start disaggregation
# The following line finds the best combination from all the possible combinations
# Returns the index to find the best combination as well as the residual
# Uses the Find_Nearest algorithm
indices_of_state_combinations, residual_power = find_nearest(
summed_power_of_each_combination, mains.values)
# Now update the state for each appliance with the optimal one and return the list
# as Dataframe
appliance_powers_dict = {}
for i, model in enumerate(self.model):
#print()
#print("Estimating power demand for '{}'".format(model['training_metadata']))
predicted_power = state_combinations[
indices_of_state_combinations, i].flatten()
column = pd.Series(predicted_power, index=mains.index, name=i)
appliance_powers_dict[i] = column
appliance_powers = pd.DataFrame(appliance_powers_dict)
return appliance_powers
# The current implementation of the CO does not make use of the following 2 functions.
#
#
# -------------------------------------------------------------------------------------
def import_model(self, filename):
imported_model = pickle.load(open(filename, 'r'))
self.model = imported_model.model
# recreate datastores from filenames
for pair in self.model:
pair['training_metadata'].store = HDFDataStore(
pair['training_metadata'].store)
self.state_combinations = imported_model.state_combinations
self.MIN_CHUNK_LENGTH = imported_model.MIN_CHUNK_LENGTH
def export_model(self, filename):
# Can't pickle datastore, so convert to filenames
exported_model = copy.deepcopy(self)
for pair in exported_model.model:
pair['training_metadata'].store = (
pair['training_metadata'].store.store.filename)
pickle.dump(exported_model, open(filename, 'wb'))
Explanation: The Heart of the Notebook: The Combinatorial Optimization class
End of explanation
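In outline, the class above is driven with three calls. The snippet below is only a schematic preview of what the remainder of the notebook does with it; the variable names here are placeholders, not the actual objects built later.
# Schematic usage of the class defined above (placeholder names; the real calls appear later in the notebook)
co = CombinatorialOptimisation()
co.train(training_metergroup)          # clusters every submeter into a small set of power states
output = HDFDataStore('disag_output.h5', 'w')
co.disaggregate(mains_meter, output)   # writes the per-appliance estimates to the datastore
output.close()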
data_dir = r'\Users\Nick\Google Drive\PhD\Courses\Semester 2\AM207\Project'
we = DataSet(join(data_dir, 'REDD.h5'))
print('loaded ' + str(len(we.buildings)) + ' buildings')
Explanation: Importing and Loading the REDD dataset
End of explanation
for i in xrange(1,7):
print('Timeframe for building {} is {}'.format(i,we.buildings[i].elec.get_timeframe()))
Explanation: We want to train the Combinatorial Optimization Algorithm using the data for 5 buildings and then test it against the last building. To simplify our analysis and also to enable comparison with other methods (Neural Nets, FHMM, MLE etc) we will only try to disaggregate data associated with the fridge and the microwave. However, the REDD dataset that we are using here does not contain data measurements for the fridge and microwave for all buildings. In particular, building 4 does not have measurements for the fridge. As a result, we will exclude building 4 from the dataset and we will only import the meters associated with the fridge from other buildings.
The train data set will consist of meters associated with the fridge and microwave from buildings 1,2,3 and 6. We will then test the combinatorial optimization algorithm against the aggregated data for building 5.
We first plot the time window span for all buildings
End of explanation
# Data file directory
data_dir = r'\Users\Nick\Google Drive\PhD\Courses\Semester 2\AM207\Project'
# Make the Data set
Data = DataSet(join(data_dir, 'REDD.h5'))
# Make copies of the Data Set so that local changes would not affect the global dataset
Data_for_5 = DataSet(join(data_dir, 'REDD.h5'))
Data_for_rest = DataSet(join(data_dir, 'REDD.h5'))
# How many buildings in the data set?
print(' Found {} buildings in the Data Set.. Buildings Loaded successfully.'.format(len(Data.buildings)))
# This is the point that we will break the data from building 5 so that we only include a small
# portion in the training set. In fact, the line below makes sure that only a day of data is seen during training.
break_point = '2011-04-19 02:00'
# Changing the window for building 5
Data_for_5.set_window(end=break_point)
# Making a metergroup..
e = [Data_for_5.buildings[5].elec[a] for a in ['fridge','microwave']]
me = MeterGroup(e)
# The data that we pass in for training for building 5 look like this...
me.plot()
Explanation: Unfortunately, due to a bug in one of the main classes of the NILMTK package, the implementation of the Combinatorial Optimization does not save the meters for the disaggregated data correctly unless the building on which we test also exists in the training set. More on this issue can be found here https://github.com/nilmtk/nilmtk/issues/194
However, for us it makes no sense to use the same building for training and testing since we would like to compare this algorithm with the results from FHMM and Neural Networks. In order to circumvent this bug we do the following:
The main issue is that the meter for the building we would like to disaggregate must be in the training set in order to be able to disaggregate correctly. That being said, we still want to train as little as possible on the meter we want to test on, since we would like to see how the algorithm performs when a completely unknown dataset is available. In order to do that we create a metergroup comprising the following:
1) The meters for the Fridge and Microwave for all buildings but building 5, since building 5 is the building we would like to test on. Later we will see that building 4 needs to be excluded as well because there is no meter associated with the fridge for this building.
2) The meters for the Fridge and Microwave for building 5, which is the building we would like to test on, but we limit the time window to be a very small one. Doing that, we make sure that the meters are there and understood by the Combinatorial Optimization class, but at the same time, by limiting the time window to just a few hours for this building we do not provide enough data to overtrain. In other words, we only do this in order to be able to disaggregate correctly.
After we train we will test the algorithm against the data from building 5 that was not fed into the training meters. After we disaggregate we will compare with the ground truth for the same exact window.
Modifying Datasets to work with CO
End of explanation
def get_all_trainings(appliance, dataset, buildings_to_exclude):
# Filtering by appliances:
elecs = []
for app in appliance:
app_l = [app]
print ('Now loading data for ' + app + ' for all buildings in the data to create the metergroup')
print()
for building in dataset.buildings:
if building not in buildings_to_exclude:
print ('Processing Building ' + str(building) + '...')
print()
try:
elec = dataset.buildings[building].elec[app]
elecs.append(elec)
except KeyError:
print ('Appliance '+str(app)+' does not exist in this building')
print ('Building skipped...')
print ()
metergroup = MeterGroup(elecs)
return metergroup
Explanation: Creating MeterGroups with the desired appliances from the desired buildings
Below we define a function that is able to create a metergroup that only includes meters for the appliances that we are interested in and is also able to exclude buildings that we don't want in the meter. Also, if an appliance is requested but a meter is not found then the meter is skipped but the metergroup is created nonetheless.
End of explanation
applianceName = ['fridge','microwave']
buildings_to_exclude = [4,5]
metergroup = get_all_trainings(applianceName,Data_for_rest,buildings_to_exclude)
print('Now printing the Meter Group...')
print()
print(metergroup)
Explanation: Now we set the appliances that we want as well as the buildings to exclude and we create the metergroup
End of explanation
def correct_meter(Data,building,appliance,oldmeter):
# Unpack meters from the MeterGroup
meters = oldmeter.all_meters()
# Get the rest of the meters and append
for a in appliance:
meter_to_add = Data.buildings[building].elec[a]
meters.append(meter_to_add)
# Group again in a single metergroup and return
return MeterGroup(meters)
corr_metergroup = correct_meter(Data_for_5,5,applianceName,metergroup)
print('The Modified Meter is now..')
print()
print(corr_metergroup)
Explanation: As we can see the Metergroup was successfully created and contains all the appliances we requested (Fridge and Microwave) in all buildings that the appliances exist apart from the ones we excluded
Correcting the MeterGroup (Necessary for the CO to work)
Now we need to perform the trick we mentioned previously. We need to also include the meter from building 5 with the Fridge and Microwave which is the building we are going to test on but we need to make sure that only a very small portion of the data is seen for this building. We already took care of that by changing the window for the data in building 5 so now we only have to include the meters for the Fridge and Microwave for building 5 from the reduced time dataset
End of explanation
# Train
co = CombinatorialOptimisation()
Explanation: As we can see the metergroup was updated successfully
Training
We now need to train in the Metergroup we just created. First, let us load the class for the CO
End of explanation
co.train(corr_metergroup)
Explanation: Now Let's train
End of explanation
Test_Data = DataSet(join(data_dir, 'REDD.h5'))
Test_Data.set_window(start=break_point)
# The building number on which we test
building_for_testing = 5
test = Test_Data.buildings[building_for_testing].elec
mains = test.mains()
Explanation: Preparing the Testing Data
Now that the training is done, the only thing that we have to do is to prepare the Data for Building 5 that we want to test on and call the Disaggregation. The data set is now the remaining part of building 5 that is not seen. After that, we only keep the Main meter which contains information about the aggregated data consumption and we disaggregate.
End of explanation
# Disaggregate
disag_filename = join(data_dir, 'COMBINATORIAL_OPTIMIZATION.h5')
mains = test.mains()
try:
output = HDFDataStore(disag_filename, 'w')
co.disaggregate(mains, output)
except ValueError:
output.close()
output = HDFDataStore(disag_filename, 'w')
co.disaggregate(mains, output)
for meter in range(1, 2):
df1 = output.store.get('/building5/elec/meter{}'.format(meter))
df2 = we.store.store.get('/building5/elec/meter{}'.format(meter))
output.close()
Explanation: Disaggregating the test data
The disaggregation Begins Now
End of explanation
# Opening the Dataset with the Disaggregated data
disag = DataSet(disag_filename)
# Getting electric appliances and meters
disag_elec = disag.buildings[building_for_testing].elec
# We also get the electric appliances and meters for the ground truth data to compare
elec = Test_Data.buildings[building_for_testing].elec
e = [test[a] for a in applianceName]
me = MeterGroup(e)
print(me)
Explanation: OK.. Now we are all done. All that remains is to interpret the results and plot the scores..
Post Processing & Results
End of explanation
def align_two_meters(master, slave, func='when_on'):
Returns a generator of 2-column pd.DataFrames. The first column is from
`master`, the second from `slave`.
Takes the sample rate and good_periods of `master` and applies to `slave`.
Parameters
----------
master, slave : ElecMeter or MeterGroup instances
sample_period = master.sample_period()
period_alias = '{:d}S'.format(sample_period)
sections = master.good_sections()
master_generator = getattr(master, func)(sections=sections)
for master_chunk in master_generator:
if len(master_chunk) < 2:
return
chunk_timeframe = TimeFrame(master_chunk.index[0],
master_chunk.index[-1])
slave_generator = getattr(slave, func)(sections=[chunk_timeframe])
slave_chunk = next(slave_generator)
# TODO: do this resampling in the pipeline?
slave_chunk = slave_chunk.resample(period_alias)
if slave_chunk.empty:
continue
master_chunk = master_chunk.resample(period_alias)
return master_chunk,slave_chunk
Explanation: Resampling to align meters
Before we are able to calculate and plot the metrics we need to align the ground truth meter with the disaggregated meters. Why so? If you notice in the disaggregation method of the CO class above, you may see that by default the time sampling is changed from 3s which is the raw data to 60s. This has to happen in order to make the disaggregation more efficient computationally but also because it is impossible to disaggregate using the actual time step. So in order to compare now we have to resample the meter for the ground truth and align it
End of explanation
disag_elec.select(instance=18).plot()
me.select(instance=18).plot()
Explanation: Here we just plot the disaggregated data alongside the ground truth for the Fridge
End of explanation
appliances_scores = {}
for m in me.meters:
print('Processing {}...'.format(m.label()))
ground_truth = m
inst = m.instance()
prediction = disag_elec.select(instance=inst)
a = prediction.meters[0]
b = a.power_series_all_data()
pr_a,gt_a = align_two_meters(prediction.meters[0],ground_truth)
gt = gt_a.as_matrix()
pr = pr_a.as_matrix()
if np.all(np.isnan(pr)==False):
print('\t Predictions array seems to be fine...')
print('\t No Nans detected')
print()
else:
print('\t Serious error in Predictions...')
print('\t The resampled array contains Nans')
print()
gt_states_on = gt > 0.1
pr_states_on = pr > 0.1
TP = np.sum(np.logical_and(gt_states_on==True,pr_states_on[1:]==True))
FP = np.sum(np.logical_and(gt_states_on==True,pr_states_on[1:]==False))
FN = np.sum(np.logical_and(gt_states_on==False,pr_states_on[1:]==True))
TN = np.sum(np.logical_and(gt_states_on==False,pr_states_on[1:]==False))
P = np.sum(gt_states_on==True)
N = np.sum(gt_states_on==False)
recall = TP/float(TP+FN)
precision = TP/float(TP+FP)
f1 = 2*precision*recall/(precision+recall)
accuracy = (TP+TN)/float(P+N)
result = {'F1-Score':f1,
'Precision':precision,
'Recall':recall,
'Accuracy':accuracy}
appliances_scores[m.label()] = result
print(appliances_scores)
Names = ['Fridge','Microwave']
Explanation: Aligning meters, Converting to Numpy and Computing Metrics
In this part of the Notebook, we call the function we previously defined to align the meters and then we convert the meters to pandas and ultimately to numpy arrays. We check if any NaN's exist (which is something possible after resampling.. Resampling errors may occur) and replace them with 0's if they do. We also compute the following metrics for each appliance:
1) True Positive, False Positive, False Negative, True Negative
2) Precision and Recall
3) Accuracy and F1-Score
For more information about these metrics please refer to the report.
End of explanation
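For quick reference, the four scores computed in the cell above are the standard confusion-matrix quantities, written here in the same notation as the code, where $P$ and $N$ are the counts of ground-truth on and off samples:
$$
\text{precision} = \frac{TP}{TP+FP}, \qquad \text{recall} = \frac{TP}{TP+FN}, \qquad F_1 = \frac{2\,\text{precision}\cdot\text{recall}}{\text{precision}+\text{recall}}, \qquad \text{accuracy} = \frac{TP+TN}{P+N}
$$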
x = np.arange(2)
y = np.array([appliances_scores[i]['F1-Score'] for i in Names])
y[np.isnan(y)] = 0.001
f = plt.figure(figsize=(18,8))
plt.rc('font', size=20, **{'family': 'serif', 'serif': ['Computer Modern']})
plt.rc('text', usetex=True)
ax = f.add_axes([0.2,0.2,0.8,0.8])
ax.bar(x,y,align='center')
ax.set_xticks(x)
ax.set_yticks(y)
ax.set_yticklabels(y,fontsize=20)
ax.set_xticklabels(Names,fontsize=20)
ax.set_xlim([min(x)-0.5,max(x)+0.5])
plt.xlabel('Appliances',fontsize=20)
plt.ylabel('F1-Score',fontsize=20)
plt.title('Combinatorial Optimization',fontsize=22)
plt.show()
Explanation: Results
Now we just plot the scores for both the Fridge and the Microwave in order to be able to visualize what is going on. We do not comment on the results in this notebook since we do this in the report. There is a separate notebook where all these results are combined along with the corresponding results from the Neural Network and the FHMM method and the total results are reported side by side to ease comparison. We plot them here as well for housekeeping although it is redundant.
F1-Score
End of explanation
x = np.arange(2)
y = np.array([appliances_scores[i]['Precision'] for i in Names])
y[np.isnan(y)] = 0.001
f = plt.figure(figsize=(18,8))
plt.rc('font', size=20, **{'family': 'serif', 'serif': ['Computer Modern']})
plt.rc('text', usetex=True)
ax = f.add_axes([0.2,0.2,0.8,0.8])
ax.bar(x,y,align='center')
ax.set_xticks(x)
ax.set_yticks(y)
ax.set_yticklabels(y,fontsize=20)
ax.set_xticklabels(Names,fontsize=20)
ax.set_xlim([min(x)-0.5,max(x)+0.5])
plt.xlabel('Appliances',fontsize=20)
plt.ylabel('Precision',fontsize=20)
plt.title('Combinatorial Optimization',fontsize=22)
plt.show()
Explanation: Precision
End of explanation
x = np.arange(2)
y = np.array([appliances_scores[i]['Recall'] for i in Names])
y[np.isnan(y)] = 0.001
f = plt.figure(figsize=(18,8))
plt.rc('font', size=20, **{'family': 'serif', 'serif': ['Computer Modern']})
plt.rc('text', usetex=True)
ax = f.add_axes([0.2,0.2,0.8,0.8])
ax.bar(x,y,align='center')
ax.set_xticks(x)
ax.set_yticks(y)
ax.set_yticklabels(y,fontsize=20)
ax.set_xticklabels(Names,fontsize=20)
ax.set_xlim([min(x)-0.5,max(x)+0.5])
plt.xlabel('Appliances',fontsize=20)
plt.ylabel('Recall',fontsize=20)
plt.title('Combinatorial Optimization',fontsize=22)
plt.show()
Explanation: Recall
End of explanation
x = np.arange(2)
y = np.array([appliances_scores[i]['Accuracy'] for i in Names])
y[np.isnan(y)] = 0.001
f = plt.figure(figsize=(18,8))
plt.rc('font', size=20, **{'family': 'serif', 'serif': ['Computer Modern']})
plt.rc('text', usetex=True)
ax = f.add_axes([0.2,0.2,0.8,0.8])
ax.bar(x,y,align='center')
ax.set_xticks(x)
ax.set_yticks(y)
ax.set_yticklabels(y,fontsize=20)
ax.set_xticklabels(Names,fontsize=20)
ax.set_xlim([min(x)-0.5,max(x)+0.5])
plt.xlabel('Appliances',fontsize=20)
plt.ylabel('Accuracy',fontsize=20)
plt.title('Combinatorial Optimization',fontsize=22)
plt.show()
Explanation: Accuracy
End of explanation |
15,651 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">Scientific Programming in Python</h1>
<h2 align="center">Topic 5
Step4: La distancia de Hausdorff nuevamente...
In this activity we will implement the Hausdorff distance/metric once more, but this time using Cython.
The Hausdorff metric is a distance used to measure how dissimilar two given subsets are.
It has many applications, in particular for comparing the similarity between images. In the case where the sets are two-dimensional arrays, the definition is the following | Python Code:
import numba
import numpy as np
from math import sqrt
%load_ext Cython
Explanation: <h1 align="center">Scientific Programming in Python</h1>
<h2 align="center">Topic 5: Accelerating Python with Cython: Writting C in Python </h2>
Notebook created by Martín Villanueva - martin.villanueva@usm.cl - DI UTFSM - May2017.
End of explanation
@numba.jit('float64 (float64[:], float64[:])')
def metric_numba(x, y):
standard Euclidean distance
ret = x-y
ret *= ret
return np.sqrt(ret).sum()
@numba.jit('float64 (float64[:], float64[:,:])', nopython=True)
def inf_dist_numba(x, Y):
inf distance between row x and array Y
m = Y.shape[0]
inf = np.inf
for i in range(m):
dist = metric_numba(x, Y[i])
if dist < inf:
inf = dist
return inf
@numba.jit('float64 (float64[:,:], float64[:,:])', nopython=True)
def hausdorff_numba(X, Y):
Hausdorff distance between arrays X and Y
m = X.shape[0]
n = Y.shape[0]
sup1 = -1.
sup2 = -1.
for i in range(m):
inf1 = inf_dist_numba(X[i], Y)
if inf1 > sup1:
sup1 = inf1
for i in range(n):
inf2 = inf_dist_numba(Y[i], X)
if inf2 > sup2:
sup2 = inf2
return max(sup1, sup2)
Explanation: The Hausdorff distance, revisited...
In this activity we will implement the Hausdorff distance/metric again, but this time using Cython.
The Hausdorff metric is a distance used to measure how dissimilar two given subsets are.
It has many applications, in particular for comparing how similar two images are. In the case where the sets are two-dimensional arrays, the definition is the following:
Let $X \in \mathbb{R}^{m \times 3}$ and $Y \in \mathbb{R}^{n \times 3}$ be two matrices; the Hausdorff metric/distance between them is defined as:
$$
d_H(X,Y) = \max \left(\ \max_{i\leq m} \min_{j \leq n} d(X[i],Y[j]), \ \max_{j\leq n} \min_{i \leq m} d(Y[j],X[i]) \ \right)
$$
where $d$ is the classical Euclidean distance ($X[i]$ denotes the $i$-th row of $X$).
One-dimensional illustration: distance between functions.
<img src='data/hausdorff.png' style="width: 600px;">
Below, 3 functions that implement this metric using Numba are provided.
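As a quick sanity check for a Numba or Cython port, the same metric can also be written directly with NumPy/SciPy. This is a minimal reference sketch (not part of the original material), using scipy.spatial.distance.cdist:
import numpy as np
from scipy.spatial.distance import cdist
def hausdorff_reference(X, Y):
    # pairwise Euclidean distances between the rows of X and the rows of Y
    D = cdist(X, Y)
    # directed Hausdorff distances in both directions, then the maximum of the two
    return max(D.min(axis=1).max(), D.min(axis=0).max())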
End of explanation |
15,652 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: Declaring elements in a function
If we write a function that accepts one or more parameters and constructs an element, we can build plots that do things like
Step2: The function defines a number of parameters that will change the signal, but using the default parameters the function outputs a Curve like this
Step3: HoloMaps
The HoloMap is the first container type we will start working with, because it is often the starting point of a parameter exploration. HoloMaps allow exploring a parameter space sampled at specific, discrete values, and can easily be created using a dictionary comprehension. When declaring a HoloMap, just ensure the length and ordering of the key tuple matches the key dimensions
Step4: Note how the keys in our HoloMap map on to two automatically generated sliders. HoloViews supports two types of widgets by default
Step5: Apart from their simplicity and generality, one of the key features of HoloMaps is that they can be exported to a static HTML file, GIF, or video, because every combination of the sliders (parameter values) has been pre-computed already. This very convenient feature of pre-computation becomes a liability for very large or densely sampled parameter spaces, however, leading to the DynamicMap type discussed next.
Summary
HoloMaps allow declaring a parameter space
The default widgets provide a slider for numeric types and a dropdown menu for non-numeric types.
HoloMap works well for small or sparsely sampled parameter spaces, exporting to static files
DynamicMap
A [DynamicMap](http://holoviews.org/reference/containers/bokeh/DynamicMap.html) is very similar to a HoloMap except that it evaluates the function lazily. This property makes DynamicMap require a live, running Python server, not just an HTML-serving web site or email, and it may be slow if each frame is slower to compute than it is to display. However, because of these properties, DynamicMap allows exploring arbitrarily large parameter spaces, dynamically generating each element as needed to satisfy a request from the user. The key dimensions kdims must match the arguments of the function
Step6: Faceting parameter spaces
Casting
HoloMaps and DynamicMaps let you explore a multidimensional parameter space by looking at one point in that space at a time, which is often but not always sufficient. If you want to see more data at once, you can facet the HoloMap to put some data points side by side or overlaid to facilitate comparison. One easy way to do that is to cast your HoloMap into a GridSpace, NdLayout, or NdOverlay container
Step7: Faceting with methods
Using the .overlay, .grid and .layout methods we can facet multi-dimensional data by a specific dimension
Step8: Using these methods with a DynamicMap requires special attention, because a dynamic map can return an infinite number of different values along its dimensions, unlike a HoloMap. Obviously, HoloViews could not comply with such a request, but these methods are perfectly legal with DynamicMap if you also define which specific dimension values you need, using the .redim.values method
Step9: Optional
Slicing and indexing
HoloMaps and other containers also allow you to easily index or select by key, allowing you to
Step10: You can do the same using the select method | Python Code:
import numpy as np
import holoviews as hv
hv.extension('bokeh')
%opts Curve Area [width=600]
Explanation: <a href='http://www.holoviews.org'><img src="assets/hv+bk.png" alt="HV+BK logos" width="40%;" align="left"/></a>
<div style="float:right;"><h2>03. Exploration with Containers</h2></div>
In the first two sections of this tutorial we discovered how to declare static elements and compose them one by one into composite objects, allowing us to quickly visualize data as we explore it. However, many datasets contain numerous additional dimensions of data, such as the same measurement repeated across a large number of different settings or parameter values. To address these common situations, HoloViews provides containers that allow you to explore extra dimensions of your data using widgets, as animations, or by "faceting" it (splitting it into "small multiples") in various ways.
To begin with we will discover how we can quickly explore the parameters of a function by having it return an element and then evaluating the function over the parameter space.
End of explanation
def fm_modulation(f_carrier=110, f_mod=110, mod_index=1, length=0.1, sampleRate=3000):
x = np.arange(0, length, 1.0/sampleRate)
y = np.sin(2*np.pi*f_carrier*x + mod_index*np.sin(2*np.pi*f_mod*x))
return hv.Curve((x, y), kdims=['Time'], vdims=['Amplitude'])
Explanation: Declaring elements in a function
If we write a function that accepts one or more parameters and constructs an element, we can build plots that do things like:
Loading data from disk as needed
Querying data from an API
Calculating data from a mathematical function
Generating data from a simulation
As a basic example, let's declare a function that generates a frequency-modulated signal and returns a Curve element:
End of explanation
fm_modulation()
Explanation: The function defines a number of parameters that will change the signal, but using the default parameters the function outputs a Curve like this:
End of explanation
carrier_frequencies = [10, 20, 110, 220, 330]
modulation_frequencies = [110, 220, 330]
hmap = hv.HoloMap({(fc, fm): fm_modulation(fc, fm) for fc in carrier_frequencies
for fm in modulation_frequencies}, kdims=['fc', 'fm'])
hmap
Explanation: HoloMaps
The HoloMap is the first container type we will start working with, because it is often the starting point of a parameter exploration. HoloMaps allow exploring a parameter space sampled at specific, discrete values, and can easily be created using a dictionary comprehension. When declaring a HoloMap, just ensure the length and ordering of the key tuple matches the key dimensions:
End of explanation
# Exercise: Try changing the function below to return an ``Area`` or ``Scatter`` element,
# in the same way `fm_modulation` returned a ``Curve`` element.
def fm_modulation2(f_carrier=220, f_mod=110, mod_index=1, length=0.1, sampleRate=3000):
x = np.arange(0,length, 1.0/sampleRate)
y = np.sin(2*np.pi*f_carrier*x + mod_index*np.sin(2*np.pi*f_mod*x))
# Then declare a HoloMap like above and assign it to a ``exercise_hmap`` variable and display that
# Solution:
def fm_modulation2(f_carrier=220, f_mod=110, mod_index=1, length=0.1, sampleRate=3000):
x = np.arange(0,length, 1.0/sampleRate)
y = np.sin(2*np.pi*f_carrier*x + mod_index*np.sin(2*np.pi*f_mod*x))
return hv.Area((x, y), kdims=['Time'], vdims=['Amplitude']) #
carrier_frequencies = [10, 20, 110, 220, 330]
modulation_frequencies = [110, 220, 330]
exercise_hmap = hv.HoloMap({(fc, fm): fm_modulation2(fc, fm) for fc in carrier_frequencies
for fm in modulation_frequencies}, kdims=['fc', 'fm'])
exercise_hmap
Explanation: Note how the keys in our HoloMap map on to two automatically generated sliders. HoloViews supports two types of widgets by default: numeric sliders, or a dropdown selection menu for all non-numeric types. These sliders appear because a HoloMap can display only a single Element at one time, and the user must thus select which of the available elements to show at any one time.
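For instance, a key dimension holding string values would be rendered with a dropdown menu rather than a slider. A minimal sketch (not part of the original tutorial, with hypothetical waveform names):
waveforms = {'sine': np.sin, 'square': lambda p: np.sign(np.sin(p))}
times = np.arange(0, 0.1, 1.0/3000)
hv.HoloMap({(name, fc): hv.Curve((times, fn(2*np.pi*fc*times)), kdims=['Time'], vdims=['Amplitude'])
            for name, fn in waveforms.items() for fc in [110, 220, 330]},
           kdims=['waveform', 'fc'])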
End of explanation
%%opts Curve (color='red')
# Note: Sliders will not work without a live server
dmap = hv.DynamicMap(fm_modulation, kdims=['f_carrier', 'f_mod', 'mod_index'])
dmap = dmap.redim.range(f_carrier=((10, 110)), f_mod=(10, 110), mod_index=(0.1, 2))
dmap
# Exercise: Declare a DynamicMap using the function from the previous exercise and name it ``exercise_dmap``
# Note: Sliders will not work without a live server
exercise_dmap = hv.DynamicMap(fm_modulation2, kdims=['f_carrier', 'f_mod', 'mod_index'])
exercise_dmap = exercise_dmap.redim.range(f_carrier=((10, 110)), f_mod=(10, 110), mod_index=(0.1, 2))
exercise_dmap
# Exercise (Optional): Use the ``.redim.step`` method and a floating point range to modify the slider step
# Note: The mod_index slider now jumps in increments of 0.1
exercise_dmap = exercise_dmap.redim.step(mod_index=0.1)
exercise_dmap
Explanation: Apart from their simplicity and generality, one of the key features of HoloMaps is that they can be exported to a static HTML file, GIF, or video, because every combination of the sliders (parameter values) has been pre-computed already. This very convenient feature of pre-computation becomes a liability for very large or densely sampled parameter spaces, however, leading to the DynamicMap type discussed next.
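A minimal sketch of such an export (not from the original tutorial; the exact call depends on the HoloViews version, and the bokeh renderer is assumed here):
renderer = hv.renderer('bokeh')
renderer.save(hmap, 'fm_holomap')  # writes fm_holomap.html with all widget states pre-computed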
Summary
HoloMaps allow declaring a parameter space
The default widgets provide a slider for numeric types and a dropdown menu for non-numeric types.
HoloMap works well for small or sparsely sampled parameter spaces, exporting to static files
DynamicMap
A [DynamicMap](http://holoviews.org/reference/containers/bokeh/DynamicMap.html) is very similar to a HoloMap except that it evaluates the function lazily. This property makes DynamicMap require a live, running Python server, not just an HTML-serving web site or email, and it may be slow if each frame is slower to compute than it is to display. However, because of these properties, DynamicMap allows exploring arbitrarily large parameter spaces, dynamically generating each element as needed to satisfy a request from the user. The key dimensions kdims must match the arguments of the function:
End of explanation
%%opts Curve [width=150]
hv.GridSpace(hmap).opts()
# Exercise: Try casting your ``exercise_hmap`` HoloMap from the first exercise to an ``NdLayout`` or
# ``NdOverlay``, guessing from the name what the resulting organization will be before testing it.
# Solution: As an NdOverlay:
exercise_hmap.overlay()
# Exercise: Try casting your ``exercise_hmap`` HoloMap from the first exercise to an ``NdLayout`` or
# ``NdOverlay``, guessing from the name what the resulting organization will be before testing it.
# Solution: As a NdLayout:
exercise_hmap.layout()
Explanation: Faceting parameter spaces
Casting
HoloMaps and DynamicMaps let you explore a multidimensional parameter space by looking at one point in that space at a time, which is often but not always sufficient. If you want to see more data at once, you can facet the HoloMap to put some data points side by side or overlaid to facilitate comparison. One easy way to do that is to cast your HoloMap into a GridSpace, NdLayout, or NdOverlay container:
End of explanation
hmap.overlay('fm')
Explanation: Faceting with methods
Using the .overlay, .grid and .layout methods we can facet multi-dimensional data by a specific dimension:
End of explanation
%%opts Curve [width=150]
# Note: Added .opts() at the end to clear earlier setting of 'red' style option.
dmap.redim.values(f_mod=[10, 20, 30], f_carrier=[10, 20, 30]).overlay('f_mod').grid('f_carrier').opts()
%%opts Area [width=150]
# Exercise: Facet the ``exercise_dmap`` DynamicMap using ``.overlay`` and ``.grid``
# Hint: Use the .redim.values method to set discrete values for ``f_mod`` and ``f_carrier`` dimensions
# Note: This example needs the Area width to be reduced like the curve example above.
exercise_dmap.redim.values(f_mod=[10, 20, 30], f_carrier=[10, 20, 30]).overlay('f_mod').grid('f_carrier')
Explanation: Using these methods with a DynamicMap requires special attention, because a dynamic map can return an infinite number of different values along its dimensions, unlike a HoloMap. Obviously, HoloViews could not comply with such a request, but these methods are perfectly legal with DynamicMap if you also define which specific dimension values you need, using the .redim.values method:
End of explanation
%%opts Curve [width=300]
hmap[10, 110] + hmap[10, 200:].overlay() + hmap[[10, 110], 110].overlay()
Explanation: Optional
Slicing and indexing
HoloMaps and other containers also allow you to easily index or select by key, allowing you to:
select a specific key: obj[10, 110]
select a slice: obj[10, 200:]
select multiple values: obj[[10, 110], 110]
End of explanation
(hmap.select(fc=10, fm=110) +
hmap.select(fc=10, fm=(200, None)).overlay() +
hmap.select(fc=[10, 110], fm=110).overlay())
# Exercise: Try selecting two carrier frequencies and two modulation frequencies on the ``exercise_hmap``
exercise_hmap.select(fc=[10, 20], fm=[110, 220])
Explanation: You can do the same using the select method:
End of explanation |
15,653 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
I will be writing about the Extreme value theory (EVT) which was introduced to me by my brother Sudhanshu, while he was working on his internship project. I really liked the connection it has with central limit theorem (CLT). The approach allowed me to better understand central limit theorem as a way to identify distribution of a function applied to independent and identically distributed (i.i.d.) samples. For CLT, the function is the mean or sum applied to the samples. For EVT the function is max or min.
I will do a multi part series trying to explain EVT using examples from real data. In the first part, the focus is on simply understanding the core ideas and introducing the limiting distributions for EVT.
Most of us know about the central limit theorem. It states that the sample mean of i.i.d. samples from a distribution with finite variance approximately follows a normal distribution. More formally, we can write it as follows
Step1: Let us compare the distribution of means, max, and min of samples from a few distributions, e.g. uniform, normal, exponential, logistic, and powerlaw. We are going to plot the PDF as well as the CDF of the distributions. The PDF tells us the neighborhood of high probability density, while the CDF gives us a way to bound a certain percentage of the data.
Step2: As we can observe, as the number of samples $k$ increases, the estimates of the mean and of the extreme values converge to the true values. However, unlike the central limit theorem, where the variance of the estimated mean shrinks for higher $k$, for the extreme value distributions the distribution shifts to either the right (for max) or the left (for min) as $k$ increases. The variance of the extreme value distributions sometimes decreases and sometimes increases as $k$ increases. I am not aware of any theorem which proves the convergence of this variance.
For bounded distributions like uniform, exponential, and powerlaw, the extreme value converges to the bound as $k$ increases.
UPDATE | Python Code:
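# Imports assumed by the cells below (they are not shown in this excerpt):
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
%matplotlib inline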
def draw_samples(dist, n, k=1):
return dist.rvs(size=(n,k))
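# draw_samples(dist, n, k) returns an (n, k) array: n independent repetitions of k i.i.d. draws each,
# so aggregating along axis=1 below yields n realisations of the mean/max/min of k samples.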
def plot_samples(dists, aggregations, n=1000, k_max=10000):
rows, cols = len(aggregations), len(dists)
fig, ax = plt.subplots(rows, cols, figsize=(5*cols, 3*rows))
fig_cum, ax_cum = plt.subplots(rows, cols, figsize=(5*cols, 3*rows))
for i, (dist_name, dist) in enumerate(dists):
samples = draw_samples(dist, n, k_max)
dist_mean = dist.mean()
dist_std = dist.std()
for j, (agg_name, agg) in enumerate(aggregations):
for k in [int(10**p) for p in np.arange(1, np.log10(k_max)+1)]:
normed_samples = samples[:, :k]
#normed_samples = (samples[:, :k] - dist_mean)/dist_std
X = agg(normed_samples, axis=1)
#print(dist_name, agg_name, k, X.shape)
ax[j, i].hist(
X, bins=100, weights=np.ones_like(X)/X.shape[0],
cumulative=False,
histtype="step",
label=f"$k={k}$"
)
ax_cum[j, i].hist(
X, bins=100, weights=np.ones_like(X)/X.shape[0],
cumulative=True,
histtype="step",
label=f"$k={k}$"
)
if j == 0:
ax[j, i].set_title(dist_name)
ax_cum[j, i].set_title(dist_name)
if i == 0:
ax[j, i].set_ylabel(f"$p({agg_name}[X_k])$")
ax_cum[j, i].set_ylabel(f"$p({agg_name}[X_k] < x)$")
if j == rows-1 and i == cols-1:
ax[j, i].legend(loc="best", bbox_to_anchor=(1.01, 0.5))
ax_cum[j, i].legend(loc="best", bbox_to_anchor=(1.01, 0.5))
sns.despine(fig=fig, offset=10)
sns.despine(fig=fig_cum, offset=10)
fig.tight_layout()
fig_cum.tight_layout()
return (fig, ax), (fig_cum, ax_cum)
samples = draw_samples(stats.norm(), 100, 10000)
samples.shape
Explanation: I will be writing about the Extreme value theory (EVT) which was introduced to me by my brother Sudhanshu, while he was working on his internship project. I really liked the connection it has with central limit theorem (CLT). The approach allowed me to better understand central limit theorem as a way to identify distribution of a function applied to independent and identically distributed (i.i.d.) samples. For CLT, the function is the mean or sum applied to the samples. For EVT the function is max or min.
I will do a multi part series trying to explain EVT using examples from real data. In the first part, the focus is on simply understanding the core ideas and introducing the limiting distributions for EVT.
Most of us know about the central limit theorem. It states that the sample mean of i.i.d. samples from a distribution with finite variance approximately follows a normal distribution. More formally, we can write it as follows:
$$
\begin{aligned}
X_1, X_2, \ldots, X_n &\sim p(X)\\
S_n &= \frac{1}{n}\sum_{i=1}^{n} X_i\\
S_n &\sim \mathcal{N}\left(\mu, \frac{\sigma^2}{n}\right) \quad \text{(approximately, for large } n\text{)}
\end{aligned}
$$
Similarly, there exists the extreme value theory. This theory deals with the distribution of the extreme values (like the minimum or maximum) of (i.i.d.) samples from some distribution.
$$
\begin{aligned}
X_1, X_2, \ldots, X_n &\sim p(X)\\
M_n &= \max_i X_i\\
M_n &\sim \textrm{GEV}(\mu, \sigma, \xi)
\end{aligned}
$$
Surprisingly, like the central limit theorem, these values also converge on a class of distributions called the generalized extreme value distribution which characterizes three types of distributions. These distributions are:
Weibull law: $G(z)=\begin{cases}\exp\left\{-\left(-\frac{z-b}{a}\right)^{\alpha}\right\} & z<b\\ 1 & z\geq b\end{cases}$ when the distribution of $M_{n}$ has a light tail with finite upper bound. Also known as Type 3.
Gumbel law: $G(z)=\exp\left\{-\exp\left(-\frac{z-b}{a}\right)\right\}\ \text{for } z\in\mathbb{R}$, when the distribution of $M_{n}$ has an exponential tail. Also known as Type 1.
Fréchet law: $G(z)=\begin{cases}0 & z\leq b\\ \exp\left\{-\left(\frac{z-b}{a}\right)^{-\alpha}\right\} & z>b\end{cases}$ when the distribution of $M_{n}$ has a heavy tail (including polynomial decay). Also known as Type 2.
In all cases, $\alpha >0$.
[Source: https://en.wikipedia.org/wiki/Extreme_value_theory#Univariate_theory ]
The good thing about knowing the distribution of $M_n$ is that we can quantify the probability of observing a value as extreme as $M_n$. In the case of the maximum this can be computed using the cumulative distribution function (CDF) as $P(M_n < x)$. If we set a threshold on this probability (say $\delta = 99\%$), then we can build systems that are robust to the extreme value $\delta$ percent of the time. Likewise, we can flag an observation as a rare event if its CDF value exceeds $\delta$.
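As a concrete sketch of this idea (not part of the original post, with toy data), we could fit a GEV to a set of observed maxima with scipy and read the rarity threshold off the fitted CDF:
from scipy import stats
import numpy as np
block_maxima = np.random.normal(size=(1000, 100)).max(axis=1)  # toy maxima of k=100 samples each
c, loc, scale = stats.genextreme.fit(block_maxima)
gev = stats.genextreme(c, loc=loc, scale=scale)
print(gev.cdf(4.0))   # P(M_n < 4.0) under the fitted model
print(gev.ppf(0.99))  # observations above this value are "rare" at the 99% level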
Let us see this in action. First we are going to create a draw_samples function which allows us to draw samples from various distributions available in scipy.stats. We will draw samples of size $k$, and we will draw $n$ such samples.
End of explanation
dists = [
("uniform", stats.uniform()),
("norm", stats.norm()),
("expon ", stats.expon()),
("logistic", stats.logistic()),
("powerlaw", stats.powerlaw(a=5)),
]
aggregations = [
("mean", np.mean),
("max", np.max),
("min", np.min),
]
plot_samples(dists, aggregations, n=1000, k_max=10000);
Explanation: Let us compare the distribution of means, max, and min of samples from a few distributions, e.g. uniform, normal, exponential, logistic, and powerlaw. We are going to plot the PDF as well as the CDF of the distributions. The PDF tells us the neighborhood of high probability density, while the CDF gives us a way to bound a certain percentage of the data.
End of explanation
normal = stats.norm()
gumbel = stats.genextreme(c=0)
frechet = stats.genextreme(c=-1)
weibull = stats.genextreme(c=1)
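# Note: scipy's genextreme uses the shape parameter c = -xi, so c=0 gives the Gumbel case,
# c=-1 a Frechet-type (heavy upper tail) and c=+1 a Weibull-type (bounded above) distribution.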
fig, ax = plt.subplots(1,2, sharex=True, sharey=True, figsize=(15, 6))
x = np.arange(-3, 5.001, 0.001)
ax[0].plot(x, normal.pdf(x), label="normal", lw=1)
ax[0].plot(x, gumbel.pdf(x), label="gumbel", lw=1)
ax[0].plot(x, frechet.pdf(x), label="frechet", lw=1)
ax[0].plot(x, weibull.pdf(x), label="weibull", lw=1)
ax[0].axvline(x=0, lw=0.5, linestyle="--", color="k")
ax[0].set_ylabel("$P(X=x)$")
ax[0].set_xlabel("$x$")
ax[1].plot(x, normal.cdf(x), label="normal", lw=1)
ax[1].plot(x, gumbel.cdf(x), label="gumbel", lw=1)
ax[1].plot(x, frechet.cdf(x), label="frechet", lw=1)
ax[1].plot(x, weibull.cdf(x), label="weibull", lw=1)
ax[1].axvline(x=0, lw=0.5, linestyle="--", color="k")
ax[1].axhline(y=0.9, lw=0.5, linestyle="--", color="k")
ax[1].set_ylabel("$P(X\leq x)$")
ax[1].set_xlabel("$x$")
lgd = fig.legend(
*ax[1].get_legend_handles_labels(),
bbox_to_anchor=(0.8, 1.05),
bbox_transform=fig.transFigure,
ncol=4
)
for lh in lgd.legendHandles:
lh.set_linewidth(5)
sns.despine(offset=10)
fig.tight_layout()
Explanation: As we can observe, as the number of samples $k$ increases, the estimates of the mean and of the extreme values converge to the true values. However, unlike the central limit theorem, where the variance of the estimated mean shrinks for higher $k$, for the extreme value distributions the distribution shifts to either the right (for max) or the left (for min) as $k$ increases. The variance of the extreme value distributions sometimes decreases and sometimes increases as $k$ increases. I am not aware of any theorem which proves the convergence of this variance.
For bounded distributions like uniform, exponential, and powerlaw, the extreme value converges to the bound as $k$ increases.
UPDATE: I found this very similar blog post https://eranraviv.com/distribution-sample-maximum/ which explains the convergence of sample maximum for a real use case of hiring top talent. I would highly recommend reading this.
Comparing normal and GEV distributions
A good thing about these limiting GEV distributions is that they are often fat-tailed compared to the normal distribution, and also that they are skewed. This allows us to model extreme values more appropriately. We can observe this skew and fat-tailed property by comparing the normal distribution to each of the GEV distributions.
End of explanation |
15,654 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'nicam16-9d-l78', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: MIROC
Source ID: NICAM16-9D-L78
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:40
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
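# Example with placeholder values (not real authors), e.g.:
#     DOC.set_author("Jane Doe", "jane.doe@example.org")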
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
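# Example (hypothetical selection; one DOC.set_value call per selected choice is assumed):
#     DOC.set_value("water")
#     DOC.set_value("energy")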
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
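# Example (hypothetical): DOC.set_value(3600)   # i.e. a 1-hour land surface time step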
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies on snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General describe how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, specify the functions on which the snow albedo depends*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
15,655 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Goal
Simulating fullCyc Day1 control gradients
Not simulating incorporation (all 0% isotope incorp.)
Don't know how much true incorporation there is in the empirical data
Using parameters inferred from TRIMMED empirical data (fullCyc Day1 seq data), or if not available, default SIPSim parameters
Determining whether simulated taxa show a similar distribution to the empirical data
Input parameters
phyloseq.bulk file
taxon mapping file
list of genomes
fragments simulated for all genomes
bulk community richness
workflow
Creating a community file from OTU abundances in bulk soil samples
phyloseq.bulk --> OTU table --> filter to sample --> community table format
Fragment simulation
simulated_fragments --> parse out fragments for target OTUs
simulated_fragments --> parse out fragments from random genomes to obtain richness of interest
combine fragment python objects
Convert fragment lists to kde object
Add diffusion
Make incorp config file
Add isotope incorporation
Calculating BD shift from isotope incorp
Simulating gradient fractions
Simulating OTU table
Simulating PCR
Subsampling from the OTU table
Init
Step1: BD min/max
Step2: Nestly
assuming fragments already simulated
Step3: BD min/max
what is the min/max BD that we care about?
Step4: Loading non-PCR subsampled OTU tables
Step5: BD range where an OTU is detected
Do the simulated OTU BD distributions span the same BD range as the empirical data?
Step6: Assessing diversity
Assigning zeros
Step7: Plotting Shannon diversity for each
Step8: Plotting variance
Step9: Notes
spikes at low & high G+C
absence of taxa or presence of taxa at those locations?
Plotting absolute abundance distributions | Python Code:
import os
import glob
import re
import nestly
%load_ext rpy2.ipython
%load_ext pushnote
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(gridExtra)
library(phyloseq)
Explanation: Goal
Simulating fullCyc Day1 control gradients
Not simulating incorporation (all 0% isotope incorp.)
Don't know how much true incorporation there is in the empirical data
Using parameters inferred from TRIMMED empirical data (fullCyc Day1 seq data), or if not available, default SIPSim parameters
Determining whether simulated taxa show a similar distribution to the empirical data
Input parameters
phyloseq.bulk file
taxon mapping file
list of genomes
fragments simulated for all genomes
bulk community richness
workflow
Creating a community file from OTU abundances in bulk soil samples
phyloseq.bulk --> OTU table --> filter to sample --> community table format
Fragment simulation
simulated_fragments --> parse out fragments for target OTUs
simulated_fragments --> parse out fragments from random genomes to obtain richness of interest
combine fragment python objects
Convert fragment lists to kde object
Add diffusion
Make incorp config file
Add isotope incorporation
Calculating BD shift from isotope incorp
Simulating gradient fractions
Simulating OTU table
Simulating PCR
Subsampling from the OTU table
Init
End of explanation
%%R
## min G+C cutoff
min_GC = 13.5
## max G+C cutoff
max_GC = 80
## max G+C shift
max_13C_shift_in_BD = 0.036
min_BD = min_GC/100.0 * 0.098 + 1.66
max_BD = max_GC/100.0 * 0.098 + 1.66
max_BD = max_BD + max_13C_shift_in_BD
cat('Min BD:', min_BD, '\n')
cat('Max BD:', max_BD, '\n')
Explanation: BD min/max
End of explanation
workDir = '/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/'
buildDir = os.path.join(workDir, 'rep3')
R_dir = '/home/nick/notebook/SIPSim/lib/R/'
fragFile= '/home/nick/notebook/SIPSim/dev/bac_genome1147/validation/ampFrags.pkl'
nreps = 3
# building tree structure
nest = nestly.Nest()
# varying params
nest.add('rep', [x + 1 for x in xrange(nreps)])
## set params
nest.add('abs', ['1e9'], create_dir=False)
nest.add('percIncorp', [0], create_dir=False)
nest.add('percTaxa', [0], create_dir=False)
nest.add('np', [8], create_dir=False)
nest.add('subsample_dist', ['lognormal'], create_dir=False)
nest.add('subsample_mean', [9.432], create_dir=False)
nest.add('subsample_scale', [0.5], create_dir=False)
nest.add('subsample_min', [10000], create_dir=False)
nest.add('subsample_max', [30000], create_dir=False)
### input/output files
nest.add('buildDir', [buildDir], create_dir=False)
nest.add('R_dir', [R_dir], create_dir=False)
nest.add('fragFile', [fragFile], create_dir=False)
nest.add('physeqDir', [physeqDir], create_dir=False)
nest.add('physeq_bulkCore', [physeq_bulkCore], create_dir=False)
nest.add('bandwidth', [0.6], create_dir=False)
nest.add('comm_params', ['mean:-7.6836085,sigma:0.9082843'], create_dir=False)
# building directory tree
nest.build(buildDir)
# bash file to run
bashFile = os.path.join(buildDir, 'SIPSimRun.sh')
%%writefile $bashFile
#!/bin/bash
export PATH={R_dir}:$PATH
echo '#-- SIPSim pipeline --#'
echo '# converting fragments to KDE'
SIPSim fragment_KDE \
{fragFile} \
> fragsParsed_KDE.pkl
echo '# making a community file'
SIPSim KDE_info \
-t fragsParsed_KDE.pkl \
> taxon_names.txt
SIPSim communities \
--abund_dist_p {comm_params} \
taxon_names.txt \
> comm.txt
echo '# adding diffusion'
SIPSim diffusion \
fragsParsed_KDE.pkl \
--bw {bandwidth} \
--np {np} \
> fragsParsed_KDE_dif.pkl
echo '# adding DBL contamination'
SIPSim DBL \
fragsParsed_KDE_dif.pkl \
--bw {bandwidth} \
--np {np} \
> fragsParsed_KDE_dif_DBL.pkl
echo '# making incorp file'
SIPSim incorpConfigExample \
--percTaxa {percTaxa} \
--percIncorpUnif {percIncorp} \
> {percTaxa}_{percIncorp}.config
echo '# adding isotope incorporation to BD distribution'
SIPSim isotope_incorp \
fragsParsed_KDE_dif_DBL.pkl \
{percTaxa}_{percIncorp}.config \
--comm comm.txt \
--bw {bandwidth} \
--np {np} \
> fragsParsed_KDE_dif_DBL_inc.pkl
echo '# simulating gradient fractions'
SIPSim gradient_fractions \
comm.txt \
> fracs.txt
echo '# simulating an OTU table'
SIPSim OTU_table \
fragsParsed_KDE_dif_DBL_inc.pkl \
comm.txt \
fracs.txt \
--abs {abs} \
--np {np} \
> OTU_abs{abs}.txt
#-- w/ PCR simulation --#
echo '# simulating PCR'
SIPSim OTU_PCR \
OTU_abs{abs}.txt \
> OTU_abs{abs}_PCR.txt
echo '# subsampling from the OTU table (simulating sequencing of the DNA pool)'
SIPSim OTU_subsample \
--dist {subsample_dist} \
--dist_params mean:{subsample_mean},sigma:{subsample_scale} \
--min_size {subsample_min} \
--max_size {subsample_max} \
OTU_abs{abs}_PCR.txt \
> OTU_abs{abs}_PCR_sub.txt
echo '# making a wide-formatted table'
SIPSim OTU_wideLong -w \
OTU_abs{abs}_PCR_sub.txt \
> OTU_abs{abs}_PCR_sub_w.txt
echo '# making metadata (phyloseq: sample_data)'
SIPSim OTU_sampleData \
OTU_abs{abs}_PCR_sub.txt \
> OTU_abs{abs}_PCR_sub_meta.txt
#-- w/out PCR simulation --#
echo '# subsampling from the OTU table (simulating sequencing of the DNA pool)'
SIPSim OTU_subsample \
--dist {subsample_dist} \
--dist_params mean:{subsample_mean},sigma:{subsample_scale} \
--min_size {subsample_min} \
--max_size {subsample_max} \
OTU_abs{abs}.txt \
> OTU_abs{abs}_sub.txt
echo '# making a wide-formatted table'
SIPSim OTU_wideLong -w \
OTU_abs{abs}_sub.txt \
> OTU_abs{abs}_sub_w.txt
echo '# making metadata (phyloseq: sample_data)'
SIPSim OTU_sampleData \
OTU_abs{abs}_sub.txt \
> OTU_abs{abs}_sub_meta.txt
!chmod 777 $bashFile
!cd $workDir; \
nestrun --template-file $bashFile -d rep3 --log-file log.txt -j 3
%pushnote SIPsim rep3 complete
Explanation: Nestly
assuming fragments already simulated
End of explanation
%%R
## min G+C cutoff
min_GC = 13.5
## max G+C cutoff
max_GC = 80
## max G+C shift
max_13C_shift_in_BD = 0.036
min_BD = min_GC/100.0 * 0.098 + 1.66
max_BD = max_GC/100.0 * 0.098 + 1.66
max_BD = max_BD + max_13C_shift_in_BD
cat('Min BD:', min_BD, '\n')
cat('Max BD:', max_BD, '\n')
Explanation: BD min/max
what is the min/max BD that we care about?
End of explanation
OTU_files = !find $buildDir -name "OTU_abs1e9_sub.txt"
OTU_files
%%R -i OTU_files
# loading files
df.SIM = list()
for (x in OTU_files){
SIM_rep = gsub('/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/rep3/', '', x)
SIM_rep = gsub('/OTU_abs1e9_sub.txt', '', SIM_rep)
df.SIM[[SIM_rep]] = read.delim(x, sep='\t')
}
df.SIM = do.call('rbind', df.SIM)
df.SIM$SIM_rep = gsub('\\.[0-9]+$', '', rownames(df.SIM))
rownames(df.SIM) = 1:nrow(df.SIM)
df.SIM %>% head(n=3)
Explanation: Loading non-PCR subsampled OTU tables
End of explanation
comm_files = !find $buildDir -name "comm.txt"
comm_files
%%R -i comm_files
df.SIM.comm = list()
for (x in comm_files){
SIM_rep = gsub('/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/rep3/', '', x)
SIM_rep = gsub('/comm.txt', '', SIM_rep)
df.SIM.comm[[SIM_rep]] = read.delim(x, sep='\t')
}
df.SIM.comm = do.call(rbind, df.SIM.comm)
df.SIM.comm$SIM_rep = gsub('\\.[0-9]+$', '', rownames(df.SIM.comm))
rownames(df.SIM.comm) = 1:nrow(df.SIM.comm)
df.SIM.comm = df.SIM.comm %>%
rename('bulk_abund' = rel_abund_perc) %>%
mutate(bulk_abund = bulk_abund / 100)
df.SIM.comm %>% head(n=3)
%%R -w 800 -h 400
# Plotting the pre-fractionation abundances of each taxon
df.SIM.comm.s = df.SIM.comm %>%
group_by(taxon_name) %>%
summarize(median_rank = median(rank),
mean_abund = mean(bulk_abund),
sd_abund = sd(bulk_abund))
df.SIM.comm.s$taxon_name = reorder(df.SIM.comm.s$taxon_name, -df.SIM.comm.s$mean_abund)
ggplot(df.SIM.comm.s, aes(taxon_name, mean_abund,
ymin=mean_abund-sd_abund,
ymax=mean_abund+sd_abund)) +
geom_linerange(alpha=0.4) +
geom_point(alpha=0.6, size=1.2) +
scale_y_log10() +
labs(x='taxon', y='Relative abundance', title='Pre-fractionation abundance') +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_blank()
)
%%R
## joining SIP & comm (pre-fractionation)
df.SIM.j = inner_join(df.SIM, df.SIM.comm, c('library' = 'library',
'taxon' = 'taxon_name',
'SIM_rep' = 'SIM_rep')) %>%
filter(BD_mid >= min_BD,
BD_mid <= max_BD)
df.SIM.j %>% head(n=3)
%%R
# calculating BD range
df.SIM.j.f = df.SIM.j %>%
filter(count > 0) %>%
group_by(SIM_rep) %>%
mutate(max_BD_range = max(BD_mid) - min(BD_mid)) %>%
ungroup() %>%
group_by(SIM_rep, taxon) %>%
summarize(mean_bulk_abund = mean(bulk_abund),
min_BD = min(BD_mid),
max_BD = max(BD_mid),
BD_range = max_BD - min_BD,
BD_range_perc = BD_range / first(max_BD_range) * 100) %>%
ungroup()
df.SIM.j.f %>% head(n=3) %>% as.data.frame
%%R -h 300 -w 550
## plotting
ggplot(df.SIM.j.f, aes(mean_bulk_abund, BD_range_perc, color=SIM_rep)) +
geom_point(alpha=0.5, shape='O') +
scale_x_log10() +
scale_y_continuous() +
labs(x='Pre-fractionation abundance', y='% of total BD range') +
#geom_vline(xintercept=0.001, linetype='dashed', alpha=0.5) +
theme_bw() +
theme(
text = element_text(size=16),
panel.grid = element_blank(),
legend.position = 'none'
)
Explanation: BD range where an OTU is detected
Do the simulated OTU BD distributions span the same BD range as the empirical data?
End of explanation
%%R
# giving value to missing abundances
min.pos.val = df.SIM.j %>%
filter(rel_abund > 0) %>%
group_by() %>%
mutate(min_abund = min(rel_abund)) %>%
ungroup() %>%
filter(rel_abund == min_abund)
min.pos.val = min.pos.val[1,'rel_abund'] %>% as.numeric
imp.val = min.pos.val / 10
# convert numbers
df.SIM.j[df.SIM.j$rel_abund == 0, 'abundance'] = imp.val
# another closure operation
df.SIM.j = df.SIM.j %>%
group_by(SIM_rep, fraction) %>%
mutate(rel_abund = rel_abund / sum(rel_abund))
# status
cat('Below detection level abundances converted to: ', imp.val, '\n')
Explanation: Assessing diversity
Assigning zeros
End of explanation
%%R
shannon_index_long = function(df, abundance_col, ...){
# calculating shannon diversity index from a 'long' formated table
## community_col = name of column defining communities
## abundance_col = name of column defining taxon abundances
df = df %>% as.data.frame
cmd = paste0(abundance_col, '/sum(', abundance_col, ')')
df.s = df %>%
group_by_(...) %>%
mutate_(REL_abundance = cmd) %>%
mutate(pi__ln_pi = REL_abundance * log(REL_abundance),
shannon = -sum(pi__ln_pi, na.rm=TRUE)) %>%
ungroup() %>%
dplyr::select(-REL_abundance, -pi__ln_pi) %>%
distinct_(...)
return(df.s)
}
%%R -w 800 -h 300
# calculating shannon
df.SIM.shan = shannon_index_long(df.SIM.j, 'count', 'library', 'fraction') %>%
filter(BD_mid >= min_BD,
BD_mid <= max_BD)
df.SIM.shan.s = df.SIM.shan %>%
group_by(BD_bin = ntile(BD_mid, 24)) %>%
summarize(mean_BD = mean(BD_mid),
mean_shannon = mean(shannon),
sd_shannon = sd(shannon))
# plotting
p = ggplot(df.SIM.shan.s, aes(mean_BD, mean_shannon,
ymin=mean_shannon-sd_shannon,
ymax=mean_shannon+sd_shannon)) +
geom_pointrange() +
labs(x='Buoyant density', y='Shannon index') +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
p
Explanation: Plotting Shannon diversity for each
End of explanation
%%R -w 800 -h 350
df.SIM.j.var = df.SIM.j %>%
group_by(SIM_rep, fraction) %>%
mutate(variance = var(rel_abund)) %>%
ungroup() %>%
distinct(SIM_rep, fraction) %>%
select(SIM_rep, fraction, variance, BD_mid)
ggplot(df.SIM.j.var, aes(BD_mid, variance, color=SIM_rep)) +
geom_point() +
geom_line() +
theme_bw() +
theme(
text = element_text(size=16)
)
Explanation: Plotting variance
End of explanation
OTU_files = !find $buildDir -name "OTU_abs1e9.txt"
OTU_files
%%R -i OTU_files
# loading files
df.abs = list()
for (x in OTU_files){
SIM_rep = gsub('/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/rep3/', '', x)
SIM_rep = gsub('/OTU_abs1e9.txt', '', SIM_rep)
df.abs[[SIM_rep]] = read.delim(x, sep='\t')
}
df.abs = do.call('rbind', df.abs)
df.abs$SIM_rep = gsub('\\.[0-9]+$', '', rownames(df.abs))
rownames(df.abs) = 1:nrow(df.abs)
df.abs %>% head(n=3)
%%R -w 800
ggplot(df.abs, aes(BD_mid, count, fill=taxon)) +
geom_area(stat='identity', position='dodge', alpha=0.5) +
labs(x='Buoyant density', y='Subsampled community\n(absolute abundance)') +
facet_grid(SIM_rep ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none',
axis.title.y = element_text(vjust=1),
axis.title.x = element_blank()
)
%%R -w 800
p1 = ggplot(df.abs %>% filter(BD_mid < 1.7), aes(BD_mid, count, fill=taxon, color=taxon)) +
labs(x='Buoyant density', y='Subsampled community\n(absolute abundance)') +
facet_grid(SIM_rep ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none',
axis.title.y = element_text(vjust=1),
axis.title.x = element_blank()
)
p2 = p1 + geom_line(alpha=0.25) + scale_y_log10()
p1 = p1 + geom_area(stat='identity', position='dodge', alpha=0.5)
grid.arrange(p1, p2, ncol=2)
%%R -w 800
p1 = ggplot(df.abs %>% filter(BD_mid > 1.72), aes(BD_mid, count, fill=taxon, color=taxon)) +
labs(x='Buoyant density', y='Subsampled community\n(absolute abundance)') +
facet_grid(SIM_rep ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none',
axis.title.y = element_text(vjust=1),
axis.title.x = element_blank()
)
p2 = p1 + geom_line(alpha=0.25) + scale_y_log10()
p1 = p1 + geom_area(stat='identity', position='dodge', alpha=0.5)
grid.arrange(p1, p2, ncol=2)
Explanation: Notes
spikes at low & high G+C
absence of taxa or presence of taxa at those locations?
Plotting absolute abundance distributions
End of explanation |
15,656 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GPyOpt
Step1: Next, we load the data.
Step2: Now, we process the dataset. We select only the data corresponding to April 22, 2014 and remove the stations in Alaska and the US islands. The first part of the next cell was taken from this matplotlib tutorial.
Step3: The array stations_coordinates contains the longitude and latitude of the weather stations and stations_maxT contains the maximum temperature value recorded at those locations on April 22, 2014. Next we make a plot of all available stations.
Step4: Our goal is to find the coldest stations in this map using the minimum number of queries. We use the full dataset to create this objective function.
Step5: The class max_Temp returns the temperature of each station every time it is queried with the coordinates of one of the available stations. To use it for this optimization example we create an instance of it.
Step6: Our design space is now the coordinates of the weather stations. We create it
Step7: Now we create the GPyOpt object. We will initialize the process with 50 stations, assume that the data are noisy, and we won't normalize the outputs. A seed is used for reproducibility
Step8: We run the optimization for a maximum of 50 iterations
Step9: GPyOpt prints a message to say that the optimization was stopped because the same location was selected twice. Let's have a look at the results. We plot the map with the true temperature of the stations, the coldest one, and the best found location.
Step10: The coldest and the selected locations are very close. Note that, in total, only three evaluations were necesary to find this stations. Of course, different results can be found with different initilizations, models, acquisition, etc. To finish, we plot the value of the temperature in the best found station over the histogram of all temperatures. | Python Code:
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import GPyOpt
Explanation: GPyOpt: armed bandits optimization
Written by Javier Gonzalez, Amazon Research Cambridge, UK.
Last updated Monday, 22 May 2016.
In this notebook we will see how to do armed bandits optimization with GPyOpt. To do this we will use data of weather forecasts at weather stations across more than 10,000 locations in the United States. The OpenWeatherMap project provides an API service to download this information, and in that dataset it is possible to find the weather forecasts for these stations. In this notebook we will use the file target_day_20140422.dat, which contains the weather forecasts for each station in the United States for April 22, 2014. The latitude and longitude of the stations are available, as well as the forecasts for the next 7 days.
We start by loading the packages that we will need in our analysis.
End of explanation
filename='./data/target_day_20140422.dat'
f = open(filename, 'r')
contents = f.readlines()
Explanation: Next, we load the data.
End of explanation
## Create a dictionary for the forecasted temperatures at each station
forecast_dict = {}
for line in range(1, len(contents)):
line_split = contents[line].split(' ')
try:
forecast_dict[line_split[0], line_split[1]][line_split[2]] = {'MaxT':float(line_split[3]),
'MinT':float(line_split[4][:-1])}
except:
forecast_dict[line_split[0], line_split[1]] = {}
forecast_dict[line_split[0], line_split[1]][line_split[2]] = {'MaxT':float(line_split[3]),
'MinT':float(line_split[4][:-1])}
keys = forecast_dict.keys()
day_out = '0' # 0-7
temp = 'MaxT' # MaxT or MinT
temperature = []; lat = []; lon = []
for key in keys:
temperature.append(float(forecast_dict[key][day_out][temp]))
lat.append(float(key[0]))
lon.append(float(key[1]))
## Create numpy arrays for the analyisis and remove Alaska and the islands
lon = np.array(lon)
lat = np.array(lat)
sel = np.logical_and(np.logical_and(lat>24,lat<51),np.logical_and(lon> -130, lon <-65))
stations_coordinates_all = np.array([lon,lat]).T
stations_maxT_all = np.array([temperature]).T
stations_coordinates = stations_coordinates_all[sel,:]
stations_maxT = stations_maxT_all[sel,:]
# Check the total number of stations.
stations_maxT.shape[0]
Explanation: Now, we process the dataset. We select only the data corresponding to April 22, 2014 and remove the stations in Alaska and the US islands. The first part of the next cell was taken from this matplotlib tutorial.
End of explanation
plt.figure(figsize=(12,7))
sc = plt.scatter(stations_coordinates[:,0],stations_coordinates[:,1], c='b', s=2, edgecolors='none')
plt.title('US weather stations',size=25)
plt.xlabel('Longitude',size=15)
plt.ylabel('Latitude',size=15)
plt.ylim((25,50))
plt.xlim((-128,-65))
Explanation: The array stations_coordinates contains the longitude and latitude of the weather stations and stations_maxT contains the maximum temperature value recorded at those locations on April 22, 2014. Next we make a plot of all available stations.
End of explanation
# Class that defines the function to optimize given the available locations
class max_Temp(object):
def __init__(self,stations_coordinates,stations_maxT):
self.stations_coordinates = stations_coordinates
self.stations_maxT = stations_maxT
def f(self,x):
return np.dot(0.5*(self.stations_coordinates == x).sum(axis=1),self.stations_maxT)[:,None]
Explanation: Our goal is to find the coldest stations in this map using the minimum number of queries. We use the full dataset to create this objective function.
End of explanation
# Objective function given the current inputs
func = max_Temp(stations_coordinates,stations_maxT)
Explanation: The class max_Temp returns the temperature of each station every time it is queried with the coordinates of one of the available stations. To use it for this optimization example we create an instance of it.
End of explanation
domain = [{'name': 'stations', 'type': 'bandit', 'domain':stations_coordinates }] # armed bandit with the locations
Explanation: Our design space is now the coordinates of the weather stations. We create it:
End of explanation
from numpy.random import seed
seed(123)
myBopt = GPyOpt.methods.BayesianOptimization(f=func.f, # function to optimize
domain=domain,
initial_design_numdata = 5,
acquisition_type='EI',
exact_feval = True,
normalize_Y = False,
optimize_restarts = 10,
acquisition_weight = 2,
de_duplication = True)
myBopt.model.model
Explanation: Now we create the GPyOpt object. We will initialize the process with 50 stations, assume that the data are noisy, and we won't normalize the outputs. A seed is used for reproducibility
End of explanation
# Run the optimization
max_iter = 50 # evaluation budget
myBopt.run_optimization(max_iter)
Explanation: We run the optimization for a maximum of 50 iterations
End of explanation
plt.figure(figsize=(15,7))
jet = plt.cm.get_cmap('jet')
sc = plt.scatter(stations_coordinates[:,0],stations_coordinates[:,1], c=stations_maxT[:, 0], vmin=0, vmax =35, cmap=jet, s=3, edgecolors='none')
cbar = plt.colorbar(sc, shrink = 1)
cbar.set_label(temp)
plt.plot(myBopt.x_opt[0],myBopt.x_opt[1],'ko',markersize=10, label ='Best found')
plt.plot(myBopt.X[:,0],myBopt.X[:,1],'k.',markersize=8, label ='Observed stations')
plt.plot(stations_coordinates[np.argmin(stations_maxT),0],stations_coordinates[np.argmin(stations_maxT),1],'k*',markersize=15, label ='Coldest station')
plt.legend()
plt.ylim((25,50))
plt.xlim((-128,-65))
plt.title('Max. temperature: April, 22, 2014',size=25)
plt.xlabel('Longitude',size=15)
plt.ylabel('Latitude',size=15)
plt.text(-125,28,'Total stations =' + str(stations_maxT.shape[0]),size=20)
plt.text(-125,26.5,'Sampled stations ='+ str(myBopt.X.shape[0]),size=20)
Explanation: GPyOpt prints a message to say that the optimization was stopped because the same location was selected twice. Let's have a look at the results. We plot the map with the true temperature of the stations, the coldest one, and the best found location.
End of explanation
plt.figure(figsize=(8,5))
xx= plt.hist(stations_maxT,bins =50)
plt.title('Distribution of max. temperatures',size=25)
plt.vlines(min(stations_maxT),0,1000,lw=3,label='Coldest station')
plt.vlines(myBopt.fx_opt,0,1000,lw=3,linestyles=u'dotted',label='Best found')
plt.legend()
plt.xlabel('Max. temperature',size=15)
plt.ylabel('Frequency',size=15)
Explanation: The coldest and the selected locations are very close. Note that, in total, only three evaluations were necessary to find this station. Of course, different results can be found with different initializations, models, acquisition functions, etc. To finish, we plot the value of the temperature in the best found station over the histogram of all temperatures.
End of explanation |
15,657 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas Dataframes
This notebook demonstrates how systematic analysis of tally scores is possible using Pandas dataframes. A dataframe can be automatically generated using the Tally.get_pandas_dataframe(...) method. Furthermore, by linking the tally data in a statepoint file with geometry and material information from a summary file, the dataframe can be shown with user-supplied labels.
Step1: Generate Input Files
First we need to define materials that will be used in the problem. We will create three materials for the fuel, water, and cladding of the fuel pin.
Step2: With our three materials, we can now create a materials file object that can be exported to an actual XML file.
Step3: Now let's move on to the geometry. This problem will be a square array of fuel pins for which we can use OpenMC's lattice/universe feature. The basic universe will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces for fuel and clad, as well as the outer bounding surfaces of the problem.
Step4: With the surfaces defined, we can now construct a fuel pin cell from cells that are defined by intersections of half-spaces created by the surfaces.
Step5: Using the pin cell universe, we can construct a 17x17 rectangular lattice with a 1.26 cm pitch.
Step6: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
Step7: We now must create a geometry that is assigned a root universe and export it to XML.
Step8: With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 5 inactive batches and 15 minimum active batches each with 2500 particles. We also tell OpenMC to turn tally triggers on, which means it will keep running until some criterion on the uncertainty of tallies is reached.
Step9: Let us also create a plot file that we can use to verify that our pin cell geometry was created successfully.
Step10: As we can see from the plot, we have a nice array of pin cells with fuel, cladding, and water! Before we run our simulation, we need to tell the code what we want to tally. The following code shows how to create a variety of tallies.
Step11: Instantiate a fission rate mesh Tally
Step12: Instantiate a cell Tally with nuclides
Step13: Create a "distribcell" Tally. The distribcell filter allows us to tally multiple repeated instances of the same cell throughout the geometry.
Step14: Now we a have a complete set of inputs, so we can go ahead and run our simulation.
Step15: Tally Data Processing
Step16: Analyze the mesh fission rate tally
Step17: Use the new Tally data retrieval API with pure NumPy
Step18: Analyze the cell+nuclides scatter-y2 rate tally
Step19: Use the new Tally data retrieval API with pure NumPy
Step20: Analyze the distribcell tally
Step21: Use the new Tally data retrieval API with pure NumPy
Step22: Print the distribcell tally dataframe
Step23: Perform a statistical test comparing the tally sample distributions for two categories of fuel pins.
Step24: Note that the symmetry implied by the y=-x diagonal ensures that the two sampling distributions are identical. Indeed, as illustrated by the test above, for any reasonable significance level (e.g., $\alpha$=0.05) one would not reject the null hypothesis that the two sampling distributions are identical.
Next, perform the same test but with two groupings of pins which are not symmetrically identical to one another.
Step25: Note that the asymmetry implied by the y=x diagonal ensures that the two sampling distributions are not identical. Indeed, as illustrated by the test above, for any reasonable significance level (e.g., $\alpha$=0.05) one would reject the null hypothesis that the two sampling distributions are identical. | Python Code:
import glob
from IPython.display import Image
import matplotlib.pyplot as plt
import scipy.stats
import numpy as np
import pandas as pd
import openmc
%matplotlib inline
Explanation: Pandas Dataframes
This notebook demonstrates how systematic analysis of tally scores is possible using Pandas dataframes. A dataframe can be automatically generated using the Tally.get_pandas_dataframe(...) method. Furthermore, by linking the tally data in a statepoint file with geometry and material information from a summary file, the dataframe can be shown with user-supplied labels.
End of explanation
# 1.6 enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_nuclide('U235', 3.7503e-4)
fuel.add_nuclide('U238', 2.2625e-2)
fuel.add_nuclide('O16', 4.6007e-2)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_nuclide('H1', 4.9457e-2)
water.add_nuclide('O16', 2.4732e-2)
water.add_nuclide('B10', 8.0042e-6)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_nuclide('Zr90', 7.2758e-3)
Explanation: Generate Input Files
First we need to define materials that will be used in the problem. We will create three materials for the fuel, water, and cladding of the fuel pin.
End of explanation
# Instantiate a Materials collection
materials = openmc.Materials([fuel, water, zircaloy])
# Export to "materials.xml"
materials.export_to_xml()
Explanation: With our three materials, we can now create a materials file object that can be exported to an actual XML file.
End of explanation
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, r=0.39218)
clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, r=0.45720)
# Create boundary planes to surround the geometry
# Use both reflective and vacuum boundaries to make life interesting
min_x = openmc.XPlane(x0=-10.71, boundary_type='reflective')
max_x = openmc.XPlane(x0=+10.71, boundary_type='vacuum')
min_y = openmc.YPlane(y0=-10.71, boundary_type='vacuum')
max_y = openmc.YPlane(y0=+10.71, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-10.71, boundary_type='reflective')
max_z = openmc.ZPlane(z0=+10.71, boundary_type='reflective')
Explanation: Now let's move on to the geometry. This problem will be a square array of fuel pins for which we can use OpenMC's lattice/universe feature. The basic universe will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces for fuel and clad, as well as the outer bounding surfaces of the problem.
End of explanation
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel', fill=fuel,
region=-fuel_outer_radius)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad', fill=zircaloy)
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator', fill=water,
region=+clad_outer_radius)
# Create a Universe to encapsulate a fuel pin
pin_cell_universe = openmc.Universe(name='1.6% Fuel Pin', cells=[
fuel_cell, clad_cell, moderator_cell
])
Explanation: With the surfaces defined, we can now construct a fuel pin cell from cells that are defined by intersections of half-spaces created by the surfaces.
End of explanation
# Create fuel assembly Lattice
assembly = openmc.RectLattice(name='1.6% Fuel - 0BA')
assembly.pitch = (1.26, 1.26)
assembly.lower_left = [-1.26 * 17. / 2.0] * 2
assembly.universes = [[pin_cell_universe] * 17] * 17
Explanation: Using the pin cell universe, we can construct a 17x17 rectangular lattice with a 1.26 cm pitch.
End of explanation
# Create root Cell
root_cell = openmc.Cell(name='root cell', fill=assembly)
# Add boundary planes
root_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z
# Create root Universe
root_universe = openmc.Universe(name='root universe')
root_universe.add_cell(root_cell)
Explanation: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
End of explanation
# Create Geometry and export to "geometry.xml"
geometry = openmc.Geometry(root_universe)
geometry.export_to_xml()
Explanation: We now must create a geometry that is assigned a root universe and export it to XML.
End of explanation
# OpenMC simulation parameters
min_batches = 20
max_batches = 200
inactive = 5
particles = 2500
# Instantiate a Settings object
settings = openmc.Settings()
settings.batches = min_batches
settings.inactive = inactive
settings.particles = particles
settings.output = {'tallies': False}
settings.trigger_active = True
settings.trigger_max_batches = max_batches
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-10.71, -10.71, -10, 10.71, 10.71, 10.]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings.source = openmc.Source(space=uniform_dist)
# Export to "settings.xml"
settings.export_to_xml()
Explanation: With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 5 inactive batches and 15 minimum active batches each with 2500 particles. We also tell OpenMC to turn tally triggers on, which means it will keep running until some criterion on the uncertainty of tallies is reached.
End of explanation
# Instantiate a Plot
plot = openmc.Plot(plot_id=1)
plot.filename = 'materials-xy'
plot.origin = [0, 0, 0]
plot.width = [21.5, 21.5]
plot.pixels = [250, 250]
plot.color_by = 'material'
# Show plot
openmc.plot_inline(plot)
Explanation: Let us also create a plot file that we can use to verify that our pin cell geometry was created successfully.
End of explanation
# Instantiate an empty Tallies object
tallies = openmc.Tallies()
Explanation: As we can see from the plot, we have a nice array of pin cells with fuel, cladding, and water! Before we run our simulation, we need to tell the code what we want to tally. The following code shows how to create a variety of tallies.
End of explanation
# Instantiate a tally Mesh
mesh = openmc.RegularMesh(mesh_id=1)
mesh.dimension = [17, 17]
mesh.lower_left = [-10.71, -10.71]
mesh.width = [1.26, 1.26]
# Instantiate tally Filter
mesh_filter = openmc.MeshFilter(mesh)
# Instantiate energy Filter
energy_filter = openmc.EnergyFilter([0, 0.625, 20.0e6])
# Instantiate the Tally
tally = openmc.Tally(name='mesh tally')
tally.filters = [mesh_filter, energy_filter]
tally.scores = ['fission', 'nu-fission']
# Add mesh and Tally to Tallies
tallies.append(tally)
Explanation: Instantiate a fission rate mesh Tally
End of explanation
# Instantiate tally Filter
cell_filter = openmc.CellFilter(fuel_cell)
# Instantiate the tally
tally = openmc.Tally(name='cell tally')
tally.filters = [cell_filter]
tally.scores = ['scatter']
tally.nuclides = ['U235', 'U238']
# Add mesh and tally to Tallies
tallies.append(tally)
Explanation: Instantiate a cell Tally with nuclides
End of explanation
# Instantiate tally Filter
distribcell_filter = openmc.DistribcellFilter(moderator_cell)
# Instantiate tally Trigger for kicks
trigger = openmc.Trigger(trigger_type='std_dev', threshold=5e-5)
trigger.scores = ['absorption']
# Instantiate the Tally
tally = openmc.Tally(name='distribcell tally')
tally.filters = [distribcell_filter]
tally.scores = ['absorption', 'scatter']
tally.triggers = [trigger]
# Add mesh and tally to Tallies
tallies.append(tally)
# Export to "tallies.xml"
tallies.export_to_xml()
Explanation: Create a "distribcell" Tally. The distribcell filter allows us to tally multiple repeated instances of the same cell throughout the geometry.
End of explanation
# Remove old HDF5 (summary, statepoint) files
!rm statepoint.*
# Run OpenMC!
openmc.run()
Explanation: Now we have a complete set of inputs, so we can go ahead and run our simulation.
End of explanation
# We do not know how many batches were needed to satisfy the
# tally trigger(s), so find the statepoint file(s)
statepoints = glob.glob('statepoint.*.h5')
# Load the last statepoint file
sp = openmc.StatePoint(statepoints[-1])
Explanation: Tally Data Processing
End of explanation
# Find the mesh tally with the StatePoint API
tally = sp.get_tally(name='mesh tally')
# Print a little info about the mesh tally to the screen
print(tally)
Explanation: Analyze the mesh fission rate tally
End of explanation
# Get the relative error for the thermal fission reaction
# rates in the four corner pins
data = tally.get_values(scores=['fission'],
filters=[openmc.MeshFilter, openmc.EnergyFilter], \
filter_bins=[((1,1),(1,17), (17,1), (17,17)), \
((0., 0.625),)], value='rel_err')
print(data)
# Get a pandas dataframe for the mesh tally data
df = tally.get_pandas_dataframe(nuclides=False)
# Set the Pandas float display settings
pd.options.display.float_format = '{:.2e}'.format
# Print the first twenty rows in the dataframe
df.head(20)
# Create a boxplot to view the distribution of
# fission and nu-fission rates in the pins
bp = df.boxplot(column='mean', by='score')
# Extract thermal nu-fission rates from pandas
fiss = df[df['score'] == 'nu-fission']
fiss = fiss[fiss['energy low [eV]'] == 0.0]
# Extract mean and reshape as 2D NumPy arrays
mean = fiss['mean'].values.reshape((17,17))
plt.imshow(mean, interpolation='nearest')
plt.title('fission rate')
plt.xlabel('x')
plt.ylabel('y')
plt.colorbar()
Explanation: Use the new Tally data retrieval API with pure NumPy
End of explanation
# Find the cell Tally with the StatePoint API
tally = sp.get_tally(name='cell tally')
# Print a little info about the cell tally to the screen
print(tally)
# Get a pandas dataframe for the cell tally data
df = tally.get_pandas_dataframe()
# Print the first twenty rows in the dataframe
df.head(20)
Explanation: Analyze the cell+nuclides scatter-y2 rate tally
End of explanation
# Get the standard deviations the total scattering rate
data = tally.get_values(scores=['scatter'],
nuclides=['U238', 'U235'], value='std_dev')
print(data)
Explanation: Use the new Tally data retrieval API with pure NumPy
End of explanation
# Find the distribcell Tally with the StatePoint API
tally = sp.get_tally(name='distribcell tally')
# Print a little info about the distribcell tally to the screen
print(tally)
Explanation: Analyze the distribcell tally
End of explanation
# Get the relative error for the scattering reaction rates in
# the first 10 distribcell instances
data = tally.get_values(scores=['scatter'], filters=[openmc.DistribcellFilter],
filter_bins=[tuple(range(10))], value='rel_err')
print(data)
Explanation: Use the new Tally data retrieval API with pure NumPy
End of explanation
# Get a pandas dataframe for the distribcell tally data
df = tally.get_pandas_dataframe(nuclides=False)
# Print the last twenty rows in the dataframe
df.tail(20)
# Show summary statistics for absorption distribcell tally data
absorption = df[df['score'] == 'absorption']
absorption[['mean', 'std. dev.']].dropna().describe()
# Note that the maximum standard deviation does indeed
# meet the 5e-5 threshold set by the tally trigger
Explanation: Print the distribcell tally dataframe
End of explanation
# Extract tally data from pins in the pins divided along y=-x diagonal
multi_index = ('level 2', 'lat',)
lower = df[df[multi_index + ('x',)] + df[multi_index + ('y',)] < 16]
upper = df[df[multi_index + ('x',)] + df[multi_index + ('y',)] > 16]
lower = lower[lower['score'] == 'absorption']
upper = upper[upper['score'] == 'absorption']
# Perform non-parametric Mann-Whitney U Test to see if the
# absorption rates (may) come from same sampling distribution
u, p = scipy.stats.mannwhitneyu(lower['mean'], upper['mean'])
print('Mann-Whitney Test p-value: {0}'.format(p))
Explanation: Perform a statistical test comparing the tally sample distributions for two categories of fuel pins.
End of explanation
# Extract tally data from pins in the pins divided along y=x diagonal
multi_index = ('level 2', 'lat',)
lower = df[df[multi_index + ('x',)] > df[multi_index + ('y',)]]
upper = df[df[multi_index + ('x',)] < df[multi_index + ('y',)]]
lower = lower[lower['score'] == 'absorption']
upper = upper[upper['score'] == 'absorption']
# Perform non-parametric Mann-Whitney U Test to see if the
# absorption rates (may) come from same sampling distribution
u, p = scipy.stats.mannwhitneyu(lower['mean'], upper['mean'])
print('Mann-Whitney Test p-value: {0}'.format(p))
Explanation: Note that the symmetry implied by the y=-x diagonal ensures that the two sampling distributions are identical. Indeed, as illustrated by the test above, for any reasonable significance level (e.g., $\alpha$=0.05) one would not reject the null hypothesis that the two sampling distributions are identical.
Next, perform the same test but with two groupings of pins which are not symmetrically identical to one another.
End of explanation
# Extract the scatter tally data from pandas
scatter = df[df['score'] == 'scatter']
scatter['rel. err.'] = scatter['std. dev.'] / scatter['mean']
# Show a scatter plot of the mean vs. the std. dev.
scatter.plot(kind='scatter', x='mean', y='rel. err.', title='Scattering Rates')
# Plot a histogram and kernel density estimate for the scattering rates
scatter['mean'].plot(kind='hist', bins=25)
scatter['mean'].plot(kind='kde')
plt.title('Scattering Rates')
plt.xlabel('Mean')
plt.legend(['KDE', 'Histogram'])
Explanation: Note that the asymmetry implied by the y=x diagonal ensures that the two sampling distributions are not identical. Indeed, as illustrated by the test above, for any reasonable significance level (e.g., $\alpha$=0.05) one would reject the null hypothesis that the two sampling distributions are identical.
End of explanation |
15,658 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MNIST from scratch
This notebook walks through an example of training a TensorFlow model to do digit classification using the MNIST data set. MNIST is a labeled set of images of handwritten digits.
An example follows.
Step2: We're going to be building a model that recognizes these digits as 5, 0, and 4.
Imports and input data
We'll proceed in steps, beginning with importing and inspecting the MNIST data. This doesn't have anything to do with TensorFlow in particular -- we're just downloading the data archive.
Step3: Working with the images
Now we have the files, but the format requires a bit of pre-processing before we can work with it. The data is gzipped, requiring us to decompress it. And, each of the images are grayscale-encoded with values from [0, 255]; we'll normalize these to [-0.5, 0.5].
Let's try to unpack the data using the documented format
Step4: The first 10 pixels are all 0 values. Not very interesting, but also unsurprising. We'd expect most of the pixel values to be the background color, 0.
We could print all 28 * 28 values, but what we really need to do to make sure we're reading our data properly is look at an image.
Step5: The large number of 0 values correspond to the background of the image, another large mass of value 255 is black, and a mix of grayscale transition values in between.
Both the image and histogram look sensible. But, it's good practice when training image models to normalize values to be centered around 0.
We'll do that next. The normalization code is fairly short, and it may be tempting to assume we haven't made mistakes, but we'll double-check by looking at the rendered input and histogram again. Malformed inputs are a surprisingly common source of errors when developing new models.
Step6: Great -- we've retained the correct image data while properly rescaling to the range [-0.5, 0.5].
Reading the labels
Let's next unpack the test label data. The format here is similar
Step8: Indeed, the first label of the test set is 7.
Forming the training, testing, and validation data sets
Now that we understand how to read a single element, we can read a much larger set that we'll use for training, testing, and validation.
Image data
The code below is a generalization of our prototyping above that reads the entire test and training data set.
Step9: A crucial difference here is how we reshape the array of pixel values. Instead of one image that's 28x28, we now have a set of 60,000 images, each one being 28x28. We also include a number of channels, which for grayscale images as we have here is 1.
Let's make sure we've got the reshaping parameters right by inspecting the dimensions and the first two images. (Again, mangled input is a very common source of errors.)
Step11: Looks good. Now we know how to index our full set of training and test images.
Label data
Let's move on to loading the full set of labels. As is typical in classification problems, we'll convert our input labels into a 1-hot encoding over a length 10 vector corresponding to 10 digits. The vector [0, 1, 0, 0, 0, 0, 0, 0, 0, 0], for example, would correspond to the digit 1.
Step12: As with our image data, we'll double-check that our 1-hot encoding of the first few values matches our expectations.
Step13: The 1-hot encoding looks reasonable.
Segmenting data into training, test, and validation
The final step in preparing our data is to split it into three sets
Step14: Defining the model
Now that we've prepared our data, we're ready to define our model.
The comments describe the architecture, which is fairly typical of models that process image data. The raw input passes through several convolution and max pooling layers with rectified linear activations before several fully connected layers and a softmax loss for predicting the output class. During training, we use dropout.
We'll separate our model definition into three steps
Step16: Now that we've defined the variables to be trained, we're ready to wire them together into a TensorFlow graph.
We'll define a helper to do this, model, which will return copies of the graph suitable for training and testing. Note the train argument, which controls whether or not dropout is used in the hidden layer. (We want to use dropout only during training.)
Step17: Having defined the basic structure of the graph, we're ready to stamp out multiple copies for training, testing, and validation.
Here, we'll do some customizations depending on which graph we're constructing. train_prediction holds the training graph, for which we use cross-entropy loss and weight regularization. We'll adjust the learning rate during training -- that's handled by the exponential_decay operation, which is itself an argument to the MomentumOptimizer that performs the actual training.
The validation and prediction graphs are much simpler to generate -- we need only create copies of the model with the validation and test inputs and a softmax classifier as the output.
Step18: Training and visualizing results
Now that we have the training, test, and validation graphs, we're ready to actually go through the training loop and periodically evaluate loss and error.
All of these operations take place in the context of a session. In Python, we'd write something like
Step19: Now we're ready to perform operations on the graph. Let's start with one round of training. We're going to organize our training steps into batches for efficiency; i.e., training using a small set of examples at each step rather than a single example.
Step20: Let's take a look at the predictions. How did we do? Recall that the output will be probabilities over the possible classes, so let's look at those probabilities.
Step21: As expected without training, the predictions are all noise. Let's write a scoring function that picks the class with the maximum probability and compares with the example's label. We'll start by converting the probability vectors returned by the softmax into predictions we can match against the labels.
Step22: Next, we can do the same thing for our labels -- using argmax to convert our 1-hot encoding into a digit class.
Step23: Now we can compare the predicted and label classes to compute the error rate and confusion matrix for this batch.
Step25: Now let's wrap this up into our scoring function.
Step26: We'll need to train for some time to actually see useful predicted values. Let's define a loop that will go through our data. We'll print the loss and error periodically.
Here, we want to iterate over the entire data set rather than just the first batch, so we'll need to slice the data to that end.
(One pass through our training set will take some time on a CPU, so be patient if you are executing this notebook.)
Step27: The error seems to have gone down. Let's evaluate the results using the test set.
To help identify rare mispredictions, we'll include the raw count of each (prediction, label) pair in the confusion matrix.
Step28: We can see here that we're mostly accurate, with some errors you might expect, e.g., '9' is often confused as '4'.
Let's do another sanity check to make sure this matches roughly the distribution of our test set, e.g., it seems like we have fewer '5' values. | Python Code:
from __future__ import print_function
from IPython.display import Image
import base64
Image(data=base64.decodestring("iVBORw0KGgoAAAANSUhEUgAAAMYAAABFCAYAAAARv5krAAAYl0lEQVR4Ae3dV4wc1bYG4D3YYJucc8455yCSSIYrBAi4EjriAZHECyAk3rAID1gCIXGRgIvASIQr8UTmgDA5imByPpicTcYGY+yrbx+tOUWpu2e6u7qnZ7qXVFPVVbv2Xutfce+q7hlasmTJktSAXrnn8vR/3/xXmnnadg1aTfxL3/7rwfSPmT+kf/7vf098YRtK+FnaZaf/SS++OjNNathufF9caiT2v/xxqbTGki/SXyM1nODXv/r8+7Tb+r+lnxZNcEFHEG/e3LnpoINXSh/PWzxCy/F9eWjOnDlLrr/++jR16tQakgylqdOWTZOGFqX5C/5IjXNLjdt7/NTvv/+eTjnllLT//vunr776Kl100UVpueWWq8n10lOmpSmTU5o/f0Fa3DDH1ry9p0/++eefaZ999slYYPS0005LK664Yk2eJ02ekqZNnZx+XzA/LfprYgGxePHitOqqq6YZM2akyfPmzUvXXXddHceoic2EOckxDj300CzPggUL0g033NC3OKy00krDer3pppv6FgcBIjvGUkv9u5paZZVVhoHpl4Mvv/wyhfxDQ0NZ7H7EQbacPHny39Tejzj88ccfacqUKRmHEecYf0Nr8GGAQJ8gMHCMPlH0QMzmEBg4RnN4DVr3CQIDx+gTRQ/EbA6BgWM0h9egdZ8g8PeliD4RutfF/Ouvfz9OtZy8aNGiNH/+/GGWl1122XzseYuVNKtqsaI23Ghw0DYCA8doG8JqO+AUG2+8cVq4cGHaY4890vLLL5/WXXfdfI6jvPDCC3lJ8amnnkoezP3000/pl19+GThHtWpIPekYomTxFS7HnkqKjMsss0yGgFE4r62tSBFVJ02aNPyconi9V4/JwzHwT9ZNNtkkeZ6w5ZZbph133DH99ttv6ccff8zXX3nllcRRnHNfv2cNGMQWGRaOrWbUrjsGBRLAA6U4Lhoqw9h2223ztRBq6aWXzsbgvueffz4Lu9NOO2UnYTgrr7xy7tO9nOH111/Pbb744ov0ww8/jAvngAdFMvQDDjggG/0GG2yQX1GZNm1aziCCwzrrrJPl3muvvXKwePnll9M333wzHDCKWPbLMbuAkfISjnvvvXcW/emnn85lqCBqa4a65hiYR/Gk2RNGRlwm3n7ggQfmdrKD9sqJtdZaKxvCnDlz8n3Tp09PXmPYeuutc0SVNQjvnmuvvTa3efzxx9N33303PGZ5rF75DBvvqq233nrp22+/TWeddVbyikpgxCE4vQDhlQUBRfDw2esbs2fPTquvvnqviNN1PuIdJ4GErVx44YUZowsuuCB9+umn6eeff84BspmsWqljhPFDxjGGYx/lDkN33udajCoVlAjRzl4U8LjefRwnPjsXG8OJqKBd8NB1LTU5IHyCd7LJGOYXNoGjFqaGIKtrERDIDKtukfGMH/zRZa1A101+YBF44KfMYzO8VOYYjDWiukiGqc022yyXOUqdzTffPJ/z1ialeqNVxA9gi0wzlOJ5juJlR8JeddVV+ZrIKTq4ZvJp/8EHH+SU+txzz+W2SqmxVFZRplrH5DTRXmGFFdKuu+6azjjjjOzosl5g6D54CQCI4mGjhNQO5occckh2LvLTA6fqJOEnyhU6kNlkZmUuvrtNcFx77bUzhsZWXgoSsm6t4Dsa/tp2DErCmA04HAI4FLjaaqtlBhmnSKiNY4rDtHZFB6jFMMH0RVDH+nCPYxtDCFJnKkniRbDitWjTK3sykQUuMLPn3DZGX8SFnCG/fVyz5zCCBtIHTLshdzif8fERn8cKXxjCNOwCTu3Qf6yqhV4AQokiP489//zzM0DxnQYKwqAtIkko1kQzFFxvaNcJ6u3Pe+65J/cRRvDee+9lA2BInIyRff/997nNO++8k7t0vl2A6vHWynmyiPJ43WKLLbIijz/++LTddtvlTCdzwIWSg9yjxBJ0GN/DDz+c7zv77LOzbEceeWSekwVGgsOsWbNyNo0+qt7DfPvtt8/dmtvIGnPnzk3PPPPMsJ6rHrNef/BBeJA90RprrJEDcNhctMkXR/mnbccwuCjNGTbaaKMc8TBZprITxOdgOvbuKxqGz6LSJ598kseJ9Gi1CYmSv/76a3YyJZWMZJ6Ceskp8EMusihFEAyUmVaa8G2rxTNHIrd733///eH7YeaLNe5xrEzlWNF/HqQDf0Tm+GIbvYdD43MsKAIo/JDgE0G5aFfN8NaWYxiUshikqGYTTUSt0TCkjXsYNqJQQso+rgGa0vX58ccf56hQTtk+48F92rmvlnE1A0on2uKP0Yrw+Nxzzz0zn+ZhjKwRXq6vueaa2TmUiRQfS7SyNeMks9IV9vrvJOl/q622yo4Mfw5Pvm6TMclLdit6shh+YAMnq1E29tEsteUYBgMSgxa5MOAzJZcVXQs4bUR8XxhCHIwzMALCBuCcx5q0tF3u133l8XrRMchFiRYNyMxBKM/5IjZlWVzjULKwACISytIWFsi56aab5mvOKyEikmdAO/iHY+BDCRUZuoPD1e1akECyLseA7d13352DhdKak8Cmlt3U7TSl9p58FwejYK8ncAwKpDTnGDcARbWiAUjHiNEHsITSPlagpEZChcfrZzwSOfBOiQwXLuR3PjAhtwAD08iAMCO/a+5xPTIm3ALjwERf0V+c69QeT7ZujVdLDhgKBrANXAMreMESRkU7rdVPrXNtZ4xIpSLH1VdfnR3j4IMPzkbw2Wefpa+//jovo5188slZsZjArAcvFP3YY4+lSy+9NEdTdTTy0I5xHHfccfm1CH2LtuORKEqmkwVlVU+sBY+IdJRmE0zeeOONnEXuu+++7AhnnnlmWn/99XMJ5brtzTffzHMJx/o555xzkgdb0U8rRtAKrnTYqtG1Ml6teyxInHDCCdlGYByBmG2Z97ChVvFo2zEwbHCRTbqP7EDxPjN2pUBEe86AXAcsg+f10TYMSTvnRM1ulQe1wG/nHEXZZEJZUIYQ5cgWMsEgMgqclFdkdh+MbFFyuddnWMLNfTYkcuuXHlBkpFYNI3dS+mMMfCHHsZWadfUjmQVn8iLywscG21apMscQwR555JEM3KuvvpoZ5LHOmzgjAvBwzFt2/Oijj3Lm4Ayin/MU/eGHH+b2N998c/5MGSaZ44nw7OEd5Rx77LE5+1EehYXxkpes5li2K6+8Mhv8Lrvsko381ltvzcEBfvHQKh5auk9GPvHEE3NJAx+/eKL/HXbYIQcbK3nwN067xAk4s5VHdbvsx0nxrYQeKxJMZAfBA7GlRx99NC9EtCN7JY4RoPBeAHIAyrB3jpHYwqu1d02d7HpZcfqINo5dL7eJMXtxTzk2sgWFM/gcsnCakI2cFOk+523O+Qw7WaeYHYpYRp9xn4BkbPdWSfgJXYYM+ne+2xRj2sdx8EDu8rm4Ntp9pY4RSmb0CIPOAVNGoLA47yU4S2xen37ppZdy9CkLE/3lm8bJHzJbbiavt2Q9p7AkK7o
yXAZOLk7gs9c4PJC0AOE8DDyrgJkaWgYQkSPYuAdpWySfteU8HhqKouYq+io6ZfGeZo7xpbT1+jt+jGULfprpq922ePHMBibwjWVq523KVrzBsIzTaMeu1DFi0HI0YyyYtAekY5MltbRyihFJiROBKIYTwMCTWJNubwdQFCXFapK9z96mtbjgs3thFKWnUgjBzNZIya5FOyUcPG36q4LwRgZ6Ix8HtBk3tirGGU0feAkslHfk5PzBh2cXSkvtWqWOOEaRGcoSHdXDMoYn1tK8yaON0ahbCWgFS/vxSnjn5F4ItLeiFAGAzCKc7MDA1OlIjc4pLFKE7FEyxb5ZPNTbtuiv2fvrtddfOFsYXcwj8d8qv/XGq3femLvvvnvOvrIYPPEjG+PDseDbDnXcMXiyiGiyyACOPvrovN95552zV3/++ef5zVveznlEo6CICvG5l/d4JSvHP+qoo7JjKDs4PkVSGPm9HSz9W5rlPEoCQYHjVFXyRGnBOcKA28VOP/qTBWX6YnS2IKB8qYL/enyGHPbKziOOOCLj6sGeslGW8L6Y4ANr2MY99fpsdL7jjmFwkSTSr6gDVCk+tmDQedcJ5LgdwaLPbu7xjJRRNlErSsiQhVHJlOEQoh182o1wRTnharwYs3itnWP9Rd/RD5mLW5yveh/YRhYMjItyBh/wjPat8tEVx6B00RKo5513XpIl7rzzzuwEourMmTOz95uIcyBfTSXYiy++mCOrSFS1klsFrNZ9eGPoJtmeyRx00EE5cpGbIi21XnbZZbkMee2117KMHIKMIVcotVb/vXoOz6I0+URoMlVFcBFE7L1+IjNYIo6v/fo+D3tC+FCR+FHuwNUCgfOtUlccI5hnJMoIBhN1sBICqMoNNaLP3pkiFGciIIBC4HaEbRWk0dyHb3Mp/EY0I6+NsytvyKxsKhpQr8ozGpm1IZ8IbV+PyllGuyh1YBXXOQEcy6R8M5eAHzuxxX3GRvbaCKJ4aRfXrjkG5jEbk00Prxi8SZTJKmc5/PDDc5v99tsvC+hBjWtqStmD0F4Ma1foMvDtfqZMUc3/lYjMSFFW3NS7JtyyoKzSiTocHoFJHMc+MlK7Mta7n9NbATJerbEYvQWIWCVitIyaXrV3nsG7H2Y2GVcbxyj6NX+waKEPmOvbfShwtjhQDDz5Ygt/uuoY+OPtnICDEMBTWsAQUu0NBBsDEgFEWOADAiDaVRERWsCq5i34IRN+TbTJgn8KwzOFuR4KDUXW7Kyik53Ep8w/+RkxWeO5S1EM5wVABguXMGp69dk1x87D0ObdL32GHI5tsDQGHtwbm/Hw4TpnKvNY5Ge0x113DEwT3tIsIdSnDIfxcxJAevCHfE9cXcmotHXfAw88kIFUdgFjLMn4HuZRuh9FExmjRCCnZxRqcPxz8ioUVk9eRhJkPAYHV8ZVFRkjjFSfAtw222yTy2OZ0iv15fHcQ4dKaMcwsBdEEL26RzaIh5+yK7LSBGPno8yOZX+vzRhfXzZ8cRrtyzzkzpr803XHwB8wTJYIRol+VY8zqMMBbP0f+cExE1qTdbU7x3jwwQdzVBYdesExKNiEWx2MfwoOAyCbJ9uRHZvUTcPmsENhGNE4HBKOHKNqZzQu3KNfX9H1nRABQZlbNkpt4SNo4DWIIesDj9qYnwki2giWqol3330348kZLPm7xvi1Pffcc7MzhA3gy/0oeIuxWtmPiWNgNCIFYwcCAa2FA1ikJZz1aeUVsBmge9TyoqGoIqKUFdEKCFXcU0/pHJizVMUnXBiBh6IicdTTzsEOnuZkDE/2rcJI4KMf/TF+0TucwDhkZ+DGL4/nGkPGV/AIC+2RvfP6ZPTI4gu5XNM/Um7RPzuIFyn1zW7wpQ9UHj+fbOHPmDlGCOGBGIeQQfwuq0jnISBQfOHft7JEHN94Q5xF6XLFFVfkyKIEGyuiGAo3r6BIx0imcM6k+6GHHspOEQbcDq+UTl4BwRu7PstUiPEJFsa9/PLL83nXg6d2xnUvoxS5L7744uGyh/wyRpRF9YwSHsHjE088kWWADQeRFThZkTgBstensZG5h4m56oEdcAp9CwTOVUlj6hgECcGBpA6XDazeiLKhVABQAhKB3cNxbEAL4KoEppm+gjf3OMafDf+UW7zeTL/ltqIiAxBMOIIxnLOHgbFsMGQ4InhE0nJfrXw2hnIRD3SFBKmYWDfqE49woFvOzZno3NxM0HDciMjBDsjEBgLTsJHYN+qjmWtj7hjBLKFFQgL7qRz14jHHHJPBcC2M3wRPVDT5ohzZRv0Z16O/sdozAKmdopUH5kftTrzJpl+lk29CcgpLw3BgpMbwwqF/S80pGJ6xO0WM+8Ybbxw2TuOEoTYakwyovB/JKdzDMVQOHvCRzXju890fL11aGhcMqqIxdwwCRkYQDZAaE7lWBhyosQEmQM439MgffDHm0Si8EcuBC0ezcQSZVKYktzFEW+3sfQ4natRvu9eMTS9F7IvHo+m/2fb6LNuCc0WsW+mzHq9j6hgE9YCHp5tkez2EAVjlMOmyUlU2Lis8ygVR0rykyoltPZCaOY9fr32Qp50X6xi7pWCGbsHBvwLgGIcddljGxvcsjOU1GseyiKjJQWydpiqNsBlei85BfhNxeJunVCl31x0jBOMAjJ9jRC3OEERDS7QMI0qQohIYgLSq7FJuMZbi9WZA7kRbvFAWx5Dyy449mjEDG/dyDPW4VSiy2iNvBcCSUdxyyy35OYHrqJUx843j8I/qQpA074BVVdR1x+AIHCIiIGewsqIuds41tSSlOxeOFHuOQ/E+2zPEuFYVKM32U3RMvGy44YbZMTg2B2+GOIXXJcjpR9lkUy/QyZ7GUU8zAD9RCiuR0oQYVv1IMAk7qFL+rjkGg7GZQPLufffdN69QKJtkCAKKjNGu1p7gMgWDYEDRpkpAmu0rnMLehie/RavcI49Sr1ZW0w6V91ac/IsxmdHPB0U5pQ+4+TExDudNUhPufnaKIn7N6m2k9h11jKLRqP+UQJb2eHh4uYjK0LW1D0MpCq0NR4g24RTR/0hCdvM6/m14FtljeTL4D/liedFeO7LYcyh7eMGDY8X16IM8Vp9kWjj2GwWG5IZb2FKVOHTMMTCvDKBgD2Z22223bNynnnpqVrZXBFxjQDZUFJiwIqKHN8qHO+64IxvN/fffn9vG/VWC0UpfeC5uZMEbg/ctM/8SzYOxZ599Nhs4ebSx0ECpcDFvMCdRggkesoQ+zaHU0N4EgAEnue2227JTON+LgaEVDFu5h+w2Wdl33GFkEUIQqYIqdYwwbJGO8q2xOydqUiTFWpJVPzsuUwhlzzFETxlGdFSCqaMB4XwvUzgKWU3AyW4uwFns4QMbilUyxbq8p/4cw3UEB8FDGQUDx/acqB8zRS2dw5qthe3VatPKucocg6JiYu3lP2nfawvekKVITzgJQLH24QTBtPZeE2D89957b27jwZ1IwIm8R2OMWHmJ+3pxTzaK8l+HyMrgTzrppMxqOIEsGoZvz0nsyWiliRMUl2G9aOk6POyLZVUvYtBpniL4wA1m9l
VSW46BOQqKpTLK9FnUsxftvW4swssa4dkhCGFCMNfcp08lhM9KKc4h0obgsa8ShHb6Cv5DJnu8IwHB9TB852DkOlzIRV6kXbSVMfQj48BWdhE0TLr1Fe3zQR/+gRMK5yjuq4KjZccQ2SlYjexHmCnSkiLjtsesmlnpQ5naFo1A5GMAHoJxBI709ttv54ygntZWmWEcQMS9VQleRT9kNmfAG0P3HRPGbHnVudg4gEyJOAYiE0wikHAAcxHyxndO4KI/WHEK/Qzo7wjAXfaFNdurikaNtIERRTqmYIYdE2tGEs8hfJ8iFB/3xV67MCjG8NZbb6Unn3wyC+XfDxfnDxFp496qhK6qn5CDA5twK/fIRH5Gb0MMOhxCFgkKjOBoHqKEkmWvueaanG04iTHcP3CKQO0/e3ZhgceP2smqcKyKRuUYlEKhPDL+d5z1c4qVFTDnmBIZMwZ9DiKAzTmvCetPNFR7W7fXXt/KLddqTcyjr17bRybkEF5XiQhPHnMuDlF07MCB3I49l4EDxTrnfsFBJBxQbQSKeGoROqjdurWzIzoGJqRxS2KUf/rpp2flcRDRjRKVCdpFhCwz7rOVKE5z++235/7uuuuuXDq5P5yKEY0np8B3TKb9K1/vLTF0/7MiJtyRPYrq4fx+7R2e7vFDDzDyfx1goPwcUGMEYG/rFI3oGAYW0UUyimQIcRwGzbgpVsZAUTYE065xCtc5GUeSHTyg4kzKs/FKoSBljyhvTz6y2gseZAwlwgI+cNBGtpV9ZRj4BobjFY9O8g0bQcXWaRpxBE5hHuFnJ0XB6dOn56ge2QGDlK2dFSSG4b8kxVzEdSWGVxgYQLzrxJkIGgbTaUE73b9MZ/KNfIMOJpdcckndYZWmFAwv+wgydW/o8wsCK3xnz56dFzx8oxPGtk7QiI5h0FBaeGzRKYIpjDN2ig6lB9OiprmI60qNieIMIXvsQy7yotjH9eI+2hbPDY4bI8D+2JdnWTYY+iwDs78qaUTHEM0sI1pClAVMnqX9ImGQszB6DHoNOLzZNZlGRlEq9JNB9JOsRXvoxDGnsDTudwFUHTNmzMjDqEaU9xYvGgWiZnka0TEo16CeNyCM1SLtwmt5cNEoCOUa5xjQAIFWEGBP5rbKdTRr1qwcfGUMthXVTCt917pnRMdwE6ZiQm0JckADBMYCgWLwtXjTSeq/d5Y7ieag7wmDwMAxJowqB4JUicDAMapEc9DXhEFgcjxcM7vvR4on7bHS1q84WNkpUr/iEL+aOLRw4cIlQCmuIhUBmsjHlpQ9c7EmzjEsN1vd6DeCg8UVT+qRd7b6EQey8wMT+6El8RSu36xhIO8AgQYI9F94bADG4NIAgUDg/wHX+3lgThDIegAAAABJRU5ErkJggg==".encode('utf-8')), embed=True)
Explanation: MNIST from scratch
This notebook walks through an example of training a TensorFlow model to do digit classification using the MNIST data set. MNIST is a labeled set of images of handwritten digits.
An example follows.
End of explanation
import os
from six.moves.urllib.request import urlretrieve
SOURCE_URL = 'http://yann.lecun.com/exdb/mnist/'
WORK_DIRECTORY = "/tmp/mnist-data"
def maybe_download(filename):
A helper to download the data files if not present.
if not os.path.exists(WORK_DIRECTORY):
os.mkdir(WORK_DIRECTORY)
filepath = os.path.join(WORK_DIRECTORY, filename)
if not os.path.exists(filepath):
filepath, _ = urlretrieve(SOURCE_URL + filename, filepath)
statinfo = os.stat(filepath)
print('Successfully downloaded', filename, statinfo.st_size, 'bytes.')
else:
print('Already downloaded', filename)
return filepath
train_data_filename = maybe_download('train-images-idx3-ubyte.gz')
train_labels_filename = maybe_download('train-labels-idx1-ubyte.gz')
test_data_filename = maybe_download('t10k-images-idx3-ubyte.gz')
test_labels_filename = maybe_download('t10k-labels-idx1-ubyte.gz')
Explanation: We're going to be building a model that recognizes these digits as 5, 0, and 4.
Imports and input data
We'll proceed in steps, beginning with importing and inspecting the MNIST data. This doesn't have anything to do with TensorFlow in particular -- we're just downloading the data archive.
End of explanation
import gzip, binascii, struct, numpy
import matplotlib.pyplot as plt
with gzip.open(test_data_filename) as f:
# Print the header fields.
for field in ['magic number', 'image count', 'rows', 'columns']:
# struct.unpack reads the binary data provided by f.read.
# The format string '>i' decodes a big-endian integer, which
# is the encoding of the data.
print(field, struct.unpack('>i', f.read(4))[0])
# Read the first 28x28 set of pixel values.
# Each pixel is one byte, [0, 255], a uint8.
buf = f.read(28 * 28)
image = numpy.frombuffer(buf, dtype=numpy.uint8)
# Print the first few values of image.
print('First 10 pixels:', image[:10])
Explanation: Working with the images
Now we have the files, but the format requires a bit of pre-processing before we can work with it. The data is gzipped, requiring us to decompress it. And, each of the images are grayscale-encoded with values from [0, 255]; we'll normalize these to [-0.5, 0.5].
Let's try to unpack the data using the documented format:
[offset] [type] [value] [description]
0000 32 bit integer 0x00000803(2051) magic number
0004 32 bit integer 60000 number of images
0008 32 bit integer 28 number of rows
0012 32 bit integer 28 number of columns
0016 unsigned byte ?? pixel
0017 unsigned byte ?? pixel
........
xxxx unsigned byte ?? pixel
Pixels are organized row-wise. Pixel values are 0 to 255. 0 means background (white), 255 means foreground (black).
We'll start by reading the first image from the test data as a sanity check.
End of explanation
%matplotlib inline
# We'll show the image and its pixel value histogram side-by-side.
_, (ax1, ax2) = plt.subplots(1, 2)
# To interpret the values as a 28x28 image, we need to reshape
# the numpy array, which is one dimensional.
ax1.imshow(image.reshape(28, 28), cmap=plt.cm.Greys);
ax2.hist(image, bins=20, range=[0,255]);
Explanation: The first 10 pixels are all 0 values. Not very interesting, but also unsurprising. We'd expect most of the pixel values to be the background color, 0.
We could print all 28 * 28 values, but what we really need to do to make sure we're reading our data properly is look at an image.
End of explanation
# Let's convert the uint8 image to 32 bit floats and rescale
# the values to be centered around 0, between [-0.5, 0.5].
#
# We again plot the image and histogram to check that we
# haven't mangled the data.
scaled = image.astype(numpy.float32)
scaled = (scaled - (255 / 2.0)) / 255
_, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(scaled.reshape(28, 28), cmap=plt.cm.Greys);
ax2.hist(scaled, bins=20, range=[-0.5, 0.5]);
Explanation: The large number of 0 values correspond to the background of the image, another large mass of value 255 is black, and a mix of grayscale transition values in between.
Both the image and histogram look sensible. But, it's good practice when training image models to normalize values to be centered around 0.
We'll do that next. The normalization code is fairly short, and it may be tempting to assume we haven't made mistakes, but we'll double-check by looking at the rendered input and histogram again. Malformed inputs are a surprisingly common source of errors when developing new models.
End of explanation
with gzip.open(test_labels_filename) as f:
# Print the header fields.
for field in ['magic number', 'label count']:
print(field, struct.unpack('>i', f.read(4))[0])
print('First label:', struct.unpack('B', f.read(1))[0])
Explanation: Great -- we've retained the correct image data while properly rescaling to the range [-0.5, 0.5].
Reading the labels
Let's next unpack the test label data. The format here is similar: a magic number followed by a count followed by the labels as uint8 values. In more detail:
[offset] [type] [value] [description]
0000 32 bit integer 0x00000801(2049) magic number (MSB first)
0004 32 bit integer 10000 number of items
0008 unsigned byte ?? label
0009 unsigned byte ?? label
........
xxxx unsigned byte ?? label
As with the image data, let's read the first test set value to sanity check our input path. We'll expect a 7.
End of explanation
IMAGE_SIZE = 28
PIXEL_DEPTH = 255
def extract_data(filename, num_images):
Extract the images into a 4D tensor [image index, y, x, channels].
For MNIST data, the number of channels is always 1.
Values are rescaled from [0, 255] down to [-0.5, 0.5].
print('Extracting', filename)
with gzip.open(filename) as bytestream:
# Skip the magic number and dimensions; we know these values.
bytestream.read(16)
buf = bytestream.read(IMAGE_SIZE * IMAGE_SIZE * num_images)
data = numpy.frombuffer(buf, dtype=numpy.uint8).astype(numpy.float32)
data = (data - (PIXEL_DEPTH / 2.0)) / PIXEL_DEPTH
data = data.reshape(num_images, IMAGE_SIZE, IMAGE_SIZE, 1)
return data
train_data = extract_data(train_data_filename, 60000)
test_data = extract_data(test_data_filename, 10000)
Explanation: Indeed, the first label of the test set is 7.
Forming the training, testing, and validation data sets
Now that we understand how to read a single element, we can read a much larger set that we'll use for training, testing, and validation.
Image data
The code below is a generalization of our prototyping above that reads the entire test and training data set.
End of explanation
print('Training data shape', train_data.shape)
_, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(train_data[0].reshape(28, 28), cmap=plt.cm.Greys);
ax2.imshow(train_data[1].reshape(28, 28), cmap=plt.cm.Greys);
Explanation: A crucial difference here is how we reshape the array of pixel values. Instead of one image that's 28x28, we now have a set of 60,000 images, each one being 28x28. We also include a number of channels, which for grayscale images as we have here is 1.
Let's make sure we've got the reshaping parameters right by inspecting the dimensions and the first two images. (Again, mangled input is a very common source of errors.)
End of explanation
NUM_LABELS = 10
def extract_labels(filename, num_images):
Extract the labels into a 1-hot matrix [image index, label index].
print('Extracting', filename)
with gzip.open(filename) as bytestream:
# Skip the magic number and count; we know these values.
bytestream.read(8)
buf = bytestream.read(1 * num_images)
labels = numpy.frombuffer(buf, dtype=numpy.uint8)
# Convert to dense 1-hot representation.
return (numpy.arange(NUM_LABELS) == labels[:, None]).astype(numpy.float32)
train_labels = extract_labels(train_labels_filename, 60000)
test_labels = extract_labels(test_labels_filename, 10000)
Explanation: Looks good. Now we know how to index our full set of training and test images.
Label data
Let's move on to loading the full set of labels. As is typical in classification problems, we'll convert our input labels into a 1-hot encoding over a length 10 vector corresponding to 10 digits. The vector [0, 1, 0, 0, 0, 0, 0, 0, 0, 0], for example, would correspond to the digit 1.
End of explanation
print('Training labels shape', train_labels.shape)
print('First label vector', train_labels[0])
print('Second label vector', train_labels[1])
Explanation: As with our image data, we'll double-check that our 1-hot encoding of the first few values matches our expectations.
End of explanation
VALIDATION_SIZE = 5000
validation_data = train_data[:VALIDATION_SIZE, :, :, :]
validation_labels = train_labels[:VALIDATION_SIZE]
train_data = train_data[VALIDATION_SIZE:, :, :, :]
train_labels = train_labels[VALIDATION_SIZE:]
train_size = train_labels.shape[0]
print('Validation shape', validation_data.shape)
print('Train size', train_size)
Explanation: The 1-hot encoding looks reasonable.
Segmenting data into training, test, and validation
The final step in preparing our data is to split it into three sets: training, test, and validation. This isn't the format of the original data set, so we'll take a small slice of the training data and treat that as our validation set.
End of explanation
import tensorflow as tf
# We'll bundle groups of examples during training for efficiency.
# This defines the size of the batch.
BATCH_SIZE = 60
# We have only one channel in our grayscale images.
NUM_CHANNELS = 1
# The random seed that defines initialization.
SEED = 42
# This is where training samples and labels are fed to the graph.
# These placeholder nodes will be fed a batch of training data at each
# training step, which we'll write once we define the graph structure.
train_data_node = tf.placeholder(
tf.float32,
shape=(BATCH_SIZE, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS))
train_labels_node = tf.placeholder(tf.float32,
shape=(BATCH_SIZE, NUM_LABELS))
# For the validation and test data, we'll just hold the entire dataset in
# one constant node.
validation_data_node = tf.constant(validation_data)
test_data_node = tf.constant(test_data)
# The variables below hold all the trainable weights. For each, the
# parameter defines how the variables will be initialized.
conv1_weights = tf.Variable(
tf.truncated_normal([5, 5, NUM_CHANNELS, 32], # 5x5 filter, depth 32.
stddev=0.1,
seed=SEED))
conv1_biases = tf.Variable(tf.zeros([32]))
conv2_weights = tf.Variable(
tf.truncated_normal([5, 5, 32, 64],
stddev=0.1,
seed=SEED))
conv2_biases = tf.Variable(tf.constant(0.1, shape=[64]))
fc1_weights = tf.Variable( # fully connected, depth 512.
tf.truncated_normal([IMAGE_SIZE // 4 * IMAGE_SIZE // 4 * 64, 512],
stddev=0.1,
seed=SEED))
fc1_biases = tf.Variable(tf.constant(0.1, shape=[512]))
fc2_weights = tf.Variable(
tf.truncated_normal([512, NUM_LABELS],
stddev=0.1,
seed=SEED))
fc2_biases = tf.Variable(tf.constant(0.1, shape=[NUM_LABELS]))
print('Done')
Explanation: Defining the model
Now that we've prepared our data, we're ready to define our model.
The comments describe the architecture, which is fairly typical of models that process image data. The raw input passes through several convolution and max pooling layers with rectified linear activations before several fully connected layers and a softmax loss for predicting the output class. During training, we use dropout.
We'll separate our model definition into three steps:
Defining the variables that will hold the trainable weights.
Defining the basic model graph structure described above. And,
Stamping out several copies of the model graph for training, testing, and validation.
We'll start with the variables.
End of explanation
def model(data, train=False):
The Model definition.
# 2D convolution, with 'SAME' padding (i.e. the output feature map has
# the same size as the input). Note that {strides} is a 4D array whose
# shape matches the data layout: [image index, y, x, depth].
conv = tf.nn.conv2d(data,
conv1_weights,
strides=[1, 1, 1, 1],
padding='SAME')
# Bias and rectified linear non-linearity.
relu = tf.nn.relu(tf.nn.bias_add(conv, conv1_biases))
# Max pooling. The kernel size spec ksize also follows the layout of
# the data. Here we have a pooling window of 2, and a stride of 2.
pool = tf.nn.max_pool(relu,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
conv = tf.nn.conv2d(pool,
conv2_weights,
strides=[1, 1, 1, 1],
padding='SAME')
relu = tf.nn.relu(tf.nn.bias_add(conv, conv2_biases))
pool = tf.nn.max_pool(relu,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
# Reshape the feature map cuboid into a 2D matrix to feed it to the
# fully connected layers.
pool_shape = pool.get_shape().as_list()
reshape = tf.reshape(
pool,
[pool_shape[0], pool_shape[1] * pool_shape[2] * pool_shape[3]])
# Fully connected layer. Note that the '+' operation automatically
# broadcasts the biases.
hidden = tf.nn.relu(tf.matmul(reshape, fc1_weights) + fc1_biases)
# Add a 50% dropout during training only. Dropout also scales
# activations such that no rescaling is needed at evaluation time.
if train:
hidden = tf.nn.dropout(hidden, 0.5, seed=SEED)
return tf.matmul(hidden, fc2_weights) + fc2_biases
print('Done')
Explanation: Now that we've defined the variables to be trained, we're ready to wire them together into a TensorFlow graph.
We'll define a helper to do this, model, which will return copies of the graph suitable for training and testing. Note the train argument, which controls whether or not dropout is used in the hidden layer. (We want to use dropout only during training.)
End of explanation
# Training computation: logits + cross-entropy loss.
logits = model(train_data_node, True)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
labels=train_labels_node, logits=logits))
# L2 regularization for the fully connected parameters.
regularizers = (tf.nn.l2_loss(fc1_weights) + tf.nn.l2_loss(fc1_biases) +
tf.nn.l2_loss(fc2_weights) + tf.nn.l2_loss(fc2_biases))
# Add the regularization term to the loss.
loss += 5e-4 * regularizers
# Optimizer: set up a variable that's incremented once per batch and
# controls the learning rate decay.
batch = tf.Variable(0)
# Decay once per epoch, using an exponential schedule starting at 0.01.
learning_rate = tf.train.exponential_decay(
0.01, # Base learning rate.
batch * BATCH_SIZE, # Current index into the dataset.
train_size, # Decay step.
0.95, # Decay rate.
staircase=True)
# Use simple momentum for the optimization.
optimizer = tf.train.MomentumOptimizer(learning_rate,
0.9).minimize(loss,
global_step=batch)
# Predictions for the minibatch, validation set and test set.
train_prediction = tf.nn.softmax(logits)
# We'll compute them only once in a while by calling their {eval()} method.
validation_prediction = tf.nn.softmax(model(validation_data_node))
test_prediction = tf.nn.softmax(model(test_data_node))
print('Done')
Explanation: Having defined the basic structure of the graph, we're ready to stamp out multiple copies for training, testing, and validation.
Here, we'll do some customizations depending on which graph we're constructing. train_prediction holds the training graph, for which we use cross-entropy loss and weight regularization. We'll adjust the learning rate during training -- that's handled by the exponential_decay operation, which is itself an argument to the MomentumOptimizer that performs the actual training.
The validation and prediction graphs are much simpler to generate -- we need only create copies of the model with the validation and test inputs and a softmax classifier as the output.
End of explanation
# Create a new interactive session that we'll use in
# subsequent code cells.
s = tf.InteractiveSession()
# Use our newly created session as the default for
# subsequent operations.
s.as_default()
# Initialize all the variables we defined above.
tf.global_variables_initializer().run()
Explanation: Training and visualizing results
Now that we have the training, test, and validation graphs, we're ready to actually go through the training loop and periodically evaluate loss and error.
All of these operations take place in the context of a session. In Python, we'd write something like:
with tf.Session() as s:
...training / test / evaluation loop...
But, here, we'll want to keep the session open so we can poke at values as we work out the details of training. The TensorFlow API includes a function for this, InteractiveSession.
We'll start by creating a session and initializing the variables we defined above.
End of explanation
BATCH_SIZE = 60
# Grab the first BATCH_SIZE examples and labels.
batch_data = train_data[:BATCH_SIZE, :, :, :]
batch_labels = train_labels[:BATCH_SIZE]
# This dictionary maps the batch data (as a numpy array) to the
# node in the graph it should be fed to.
feed_dict = {train_data_node: batch_data,
train_labels_node: batch_labels}
# Run the graph and fetch some of the nodes.
_, l, lr, predictions = s.run(
[optimizer, loss, learning_rate, train_prediction],
feed_dict=feed_dict)
print('Done')
Explanation: Now we're ready to perform operations on the graph. Let's start with one round of training. We're going to organize our training steps into batches for efficiency; i.e., training using a small set of examples at each step rather than a single example.
End of explanation
print(predictions[0])
Explanation: Let's take a look at the predictions. How did we do? Recall that the output will be probabilities over the possible classes, so let's look at those probabilities.
End of explanation
# The highest probability in the first entry.
print('First prediction', numpy.argmax(predictions[0]))
# But, predictions is actually a list of BATCH_SIZE probability vectors.
print(predictions.shape)
# So, we'll take the highest probability for each vector.
print('All predictions', numpy.argmax(predictions, 1))
Explanation: As expected without training, the predictions are all noise. Let's write a scoring function that picks the class with the maximum probability and compares with the example's label. We'll start by converting the probability vectors returned by the softmax into predictions we can match against the labels.
End of explanation
print('Batch labels', numpy.argmax(batch_labels, 1))
Explanation: Next, we can do the same thing for our labels -- using argmax to convert our 1-hot encoding into a digit class.
End of explanation
correct = numpy.sum(numpy.argmax(predictions, 1) == numpy.argmax(batch_labels, 1))
total = predictions.shape[0]
print(float(correct) / float(total))
confusions = numpy.zeros([10, 10], numpy.float32)
bundled = zip(numpy.argmax(predictions, 1), numpy.argmax(batch_labels, 1))
for predicted, actual in bundled:
confusions[predicted, actual] += 1
plt.grid(False)
plt.xticks(numpy.arange(NUM_LABELS))
plt.yticks(numpy.arange(NUM_LABELS))
plt.imshow(confusions, cmap=plt.cm.jet, interpolation='nearest');
Explanation: Now we can compare the predicted and label classes to compute the error rate and confusion matrix for this batch.
End of explanation
def error_rate(predictions, labels):
Return the error rate and confusions.
correct = numpy.sum(numpy.argmax(predictions, 1) == numpy.argmax(labels, 1))
total = predictions.shape[0]
error = 100.0 - (100 * float(correct) / float(total))
confusions = numpy.zeros([10, 10], numpy.float32)
bundled = zip(numpy.argmax(predictions, 1), numpy.argmax(labels, 1))
for predicted, actual in bundled:
confusions[predicted, actual] += 1
return error, confusions
print('Done')
Explanation: Now let's wrap this up into our scoring function.
End of explanation
# Train over the first 1/4th of our training set.
steps = train_size // BATCH_SIZE
for step in range(steps):
# Compute the offset of the current minibatch in the data.
# Note that we could use better randomization across epochs.
offset = (step * BATCH_SIZE) % (train_size - BATCH_SIZE)
batch_data = train_data[offset:(offset + BATCH_SIZE), :, :, :]
batch_labels = train_labels[offset:(offset + BATCH_SIZE)]
# This dictionary maps the batch data (as a numpy array) to the
# node in the graph it should be fed to.
feed_dict = {train_data_node: batch_data,
train_labels_node: batch_labels}
# Run the graph and fetch some of the nodes.
_, l, lr, predictions = s.run(
[optimizer, loss, learning_rate, train_prediction],
feed_dict=feed_dict)
# Print out the loss periodically.
if step % 100 == 0:
error, _ = error_rate(predictions, batch_labels)
print('Step %d of %d' % (step, steps))
print('Mini-batch loss: %.5f Error: %.5f Learning rate: %.5f' % (l, error, lr))
print('Validation error: %.1f%%' % error_rate(
validation_prediction.eval(), validation_labels)[0])
Explanation: We'll need to train for some time to actually see useful predicted values. Let's define a loop that will go through our data. We'll print the loss and error periodically.
Here, we want to iterate over the entire data set rather than just the first batch, so we'll need to slice the data to that end.
(One pass through our training set will take some time on a CPU, so be patient if you are executing this notebook.)
End of explanation
test_error, confusions = error_rate(test_prediction.eval(), test_labels)
print('Test error: %.1f%%' % test_error)
plt.xlabel('Actual')
plt.ylabel('Predicted')
plt.grid(False)
plt.xticks(numpy.arange(NUM_LABELS))
plt.yticks(numpy.arange(NUM_LABELS))
plt.imshow(confusions, cmap=plt.cm.jet, interpolation='nearest');
for i, cas in enumerate(confusions):
for j, count in enumerate(cas):
if count > 0:
xoff = .07 * len(str(count))
plt.text(j-xoff, i+.2, int(count), fontsize=9, color='white')
Explanation: The error seems to have gone down. Let's evaluate the results using the test set.
To help identify rare mispredictions, we'll include the raw count of each (prediction, label) pair in the confusion matrix.
End of explanation
plt.xticks(numpy.arange(NUM_LABELS))
plt.hist(numpy.argmax(test_labels, 1));
Explanation: We can see here that we're mostly accurate, with some errors you might expect, e.g., '9' is often confused as '4'.
Let's do another sanity check to make sure this matches roughly the distribution of our test set, e.g., it seems like we have fewer '5' values.
End of explanation |
15,659 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mie Scattering Efficiencies
Scott Prahl
Jan 2022
If miepython is not installed, uncomment the following cell (i.e., delete the #) and run (shift-enter)
Step1: When a monochromatic plane wave is incident on a sphere, it scatters and absorbs light depending on the properties of the light and sphere. The sphere has radius $r$ and index of refraction $m=m_\mathrm{re}- j\,m_\mathrm{im}$. The sphere size parameter $x=2\pi r/\lambda$ where $\lambda$ is the wavelength of the plane wave in a vacuum.
Step2: Efficiencies
miepython.mie(m,x) calculates three dimensionless efficiencies for a sphere with complex index of refraction $m$ and dimensionless size parameter $x$
Step3: Scattering and absorption coefficients
The scattering cross section may be related to the transmission of a beam
through a dispersion of scatterers of equal size. For $\rho$ particles per
unit volume, the attenuation due to scattering is
$$
-\frac{dI}{dx} = \rho \sigma_\mathrm{sca} I
$$
The transmission is
$$
T = I/I_0 = \exp(-\rho \sigma_\mathrm{sca} x) = \exp(-\mu_s x)
$$
and the coefficients for a sphere with radius r is
$$
\mu_\mathrm{sca} = \rho \sigma_\mathrm{sca} = \rho \pi r^2 Q_\mathrm{sca}
$$
$$
\mu_\mathrm{ext} = \rho \sigma_\mathrm{ext} = \rho \pi r^2 Q_\mathrm{ext}
$$
$$
\mu_\mathrm{abs} = \rho \sigma_\mathrm{abs} = \rho \pi r^2 (Q_\mathrm{ext}-Q_\mathrm{sca})
$$
Kerker, p. 38.
Backscattering Cross Section
For plane-wave radiation incident on a scattering object or a scattering medium, the ratio of the intensity [W/sr] scattered in the direction toward the source to the incident irradiance [W/area].
So defined, the backscattering cross section has units of area per unit solid angle.
In common usage, synonymous with radar cross section, although this can be confusing because the radar cross section is $4\pi$ times the backscattering cross section as defined above and has units of area.
If $Q_{sca}$ [unitless] is the scattering efficiency then the scattering cross section $\sigma_\mathrm{sca}$ [area]
$$
\sigma_\mathrm{sca} = \pi r^2 Q_{sca}
$$
Thus if $Q_{back}$ [unitless] is the backscattering efficiency then the scattering cross section $\sigma_\mathrm{back}$ [area]
$$
\sigma_\mathrm{back} = \pi r^2 Q_{back}
$$
Now the phase function is normalized to one ($S_1(\theta)$ has units of sr$^{-0.5}$)
$$
\int_{4\pi} \frac{|S_1(\theta)|^2+|S_2(\theta)|^2}{2}\,d\Omega = 1
$$
Now since
$$
|S_1(-180^\circ)|^2=|S_2(-180^\circ)|^2=|S_1(180^\circ)|^2=|S_2(180^\circ)|^2
$$
The differential scattering cross section [area/sr] in the backwards direction will be
$$
\left. \frac{d\sigma_\mathrm{sca}}{d\Omega}\right|_{180^\circ} = \sigma_\mathrm{sca} |S_1(-180^\circ)|^2
$$
and the backscattering cross section will be $4\pi$ times this
$$
\sigma_\mathrm{back} = 4\pi \left. \frac{d\sigma_\mathrm{sca}}{d\Omega}\right|_{180^\circ} = 4\pi \sigma_\mathrm{sca} |S_1(-180^\circ)|^2
$$
Step4: Efficiencies
To create a non-dimensional quantity, the scattering efficiency may be defined as
$$
Q_\mathrm{sca} = \frac{\sigma_\mathrm{sca}}{ \pi r^2}
$$
where the scattering cross section is normalized by the geometric cross section. Thus when the scattering efficiency is unity, then the portion of the incident plane wave that is affected is equal to the cross sectional area of the sphere.
Similarly the absorption efficiency
$$
Q_\mathrm{abs} = \frac{\sigma_\mathrm{abs}}{ \pi r^2}
$$
And finally the extinction cross section is
$$
Q_{ext}=Q_{sca}+Q_{abs}
$$
where $Q_{sca}$ is the scattering efficiency and $Q_{abs}$ is the absorption
efficiency. $Q_{sca}$ and $Q_{ext}$ are determined by the
Mie scattering program and $Q_{abs}$ is obtained by subtraction.
Step5: Radiation Pressure
The radiation pressure is given by [e.g., Kerker, p. 94]
$$
Q_\mathrm{pr}=Q_\mathrm{ext}-g Q_\mathrm{sca}
$$
and is the momentum given to the scattering particle [van de Hulst, p. 13] in the direction of the incident wave. The radiation pressure cross section $\sigma_\mathrm{pr}$ is just the efficiency multiplied by the geometric cross section
$$
\sigma_\mathrm{pr} = \pi r^2 Q_\mathrm{pr}
$$
The radiation pressure cross section $\sigma_\mathrm{pr}$ can be interpreted as the area of a black wall that would receive the same force from the same incident wave. The actual force on the particle is
$$
F = E_0 \frac{\sigma_\mathrm{pr}}{c}
$$
where $E_0$ is the irradiance (W/m$^2$) on the sphere and $c$ is the velocity of the radiation in the medium. If the irradiance has N photons per geometric cross section ($\pi r^2$) then this can be rewritten as
$$
F = N \frac{h}{\lambda} \sigma_\mathrm{pr} = N \cdot \mbox{(photon momentum)} \cdot \sigma_\mathrm{pr}
$$
Step6: Graph of backscattering efficiency
van de Hulst has a nice graph of backscattering efficiency that we can replicate | Python Code:
#!pip install --user miepython
import importlib.resources
import numpy as np
import matplotlib.pyplot as plt
try:
import miepython
except ModuleNotFoundError:
print('miepython not installed. To install, uncomment and run the cell above.')
print('Once installation is successful, rerun this cell again.')
Explanation: Mie Scattering Efficiencies
Scott Prahl
Jan 2022
If miepython is not installed, uncomment the following cell (i.e., delete the #) and run (shift-enter)
End of explanation
# import the Johnson and Christy data for silver
# ag = np.genfromtxt('https://refractiveindex.info/tmp/data/main/Ag/Johnson.txt', delimiter='\t')
nname = "data/ag-Johnson.txt"
ref = importlib.resources.files('miepython').joinpath(nname)
ag = np.genfromtxt(ref, delimiter='\t')
# data is stacked so need to rearrange
N = len(ag)//2
ag_lam = ag[1:N,0]
ag_mre = ag[1:N,1]
ag_mim = ag[N+1:,1]
plt.scatter(ag_lam*1000,ag_mre,s=2,color='blue')
plt.scatter(ag_lam*1000,ag_mim,s=2,color='red')
plt.xlim(300,800)
plt.ylim(0,5)
plt.xlabel('Wavelength (nm)')
plt.ylabel('Refractive Index')
plt.text(350, 1.2, '$m_{re}$', color='blue', fontsize=14)
plt.text(350, 2.2, '$m_{im}$', color='red', fontsize=14)
plt.title('Complex Refractive Index of Silver')
plt.show()
Explanation: When a monochromatic plane wave is incident on a sphere, it scatters and absorbs light depending on the properties of the light and sphere. The sphere has radius $r$ and index of refraction $m=m_\mathrm{re}- j\,m_\mathrm{im}$. The sphere size parameter $x=2\pi r/\lambda$ where $\lambda$ is the wavelength of the plane wave in a vacuum.
End of explanation
r = 0.3 #radius in microns
geometric_cross_section = np.pi * r**2
x = 2*np.pi*r/ag_lam;
m = ag_mre - 1.0j * ag_mim
qext, qsca, qback, g = miepython.mie(m,x)
absorb = (qext - qsca) * geometric_cross_section
scatt = qsca * geometric_cross_section
extinct = qext * geometric_cross_section
plt.plot(ag_lam*1000,absorb,color='blue')
plt.plot(ag_lam*1000,scatt,color='red')
plt.plot(ag_lam*1000,extinct,color='green')
plt.text(350, 0.35,'$\sigma_{abs}$', color='blue', fontsize=14)
plt.text(350, 0.54,'$\sigma_{sca}$', color='red', fontsize=14)
plt.text(350, 0.84,'$\sigma_{ext}$', color='green', fontsize=14)
plt.xlabel("Wavelength (nm)")
plt.ylabel("Cross Section (1/microns$^2$)")
plt.title("Cross Sections for %.1f$\mu$m Silver Spheres" % (r*2))
plt.xlim(300,800)
plt.show()
Explanation: Efficiencies
miepython.mie(m,x) calculates three dimensionless efficiencies for a sphere with complex index of refraction $m$ and dimensionless size parameter $x$:
$Q_{ext}$ the extinction efficiency
$Q_{sca}$ the scattering efficiency
$Q_{back}$ the back-scattering efficiency
as well as the dimensionless average cosine of the scattering angle
$g$ scattering anisotropy.
Cross Sections
Scattering and absorption cross sections $\sigma$ have units of area and can be obtained from the efficiencies by multiplying by the geometric cross section $\pi r^2$ of the sphere.
$$
\sigma_\mathrm{sca} = \pi r^2 Q_\mathrm{sca}
$$
$$
\sigma_\mathrm{ext} = \pi r^2 Q_\mathrm{ext}
$$
$$
\sigma_\mathrm{back} = \pi r^2 Q_\mathrm{back}
$$
For example, the scattering cross section $\sigma_\mathrm{sca}$ is effective area of a the incident plane wave that interacts and produces scattered light.
Since some of the incident light may be absorbed (when $m_\mathrm{im}$ is non-zero) then there is also an area of the incident wave that is absorbed $\sigma_\mathrm{abs}$.
$$
Q_\mathrm{ext} = Q_\mathrm{abs}+Q_\mathrm{sca}
$$
and so
$$
\sigma_\mathrm{abs} = \sigma_\mathrm{ext}-\sigma_\mathrm{sca}
$$
End of explanation
lambda0 = 1 # microns
a = lambda0/10 # also microns
k = 2*np.pi/lambda0 # per micron
m = 1.5
x = a * k
geometric_cross_section = np.pi * a**2
theta = np.linspace(-180,180,180)
mu = np.cos(theta/180*np.pi)
s1,s2 = miepython.mie_S1_S2(m,x,mu)
phase = (abs(s1[0])**2+abs(s2[0])**2)/2
print(' unpolarized =',phase)
print(' |s1[-180]|**2 =',abs(s1[0]**2))
print(' |s2[-180]|**2 =',abs(s2[0]**2))
print(' |s1[ 180]|**2 =',abs(s1[179]**2))
print(' |s2[ 180]|**2 =',abs(s2[179]**2))
print()
qext, qsca, qback, g = miepython.mie(m,x)
Cback = qback * geometric_cross_section
Csca = qsca * geometric_cross_section
print(' Csca =',Csca)
print(' Cback =',Cback)
print('4*pi*Csca*p(180) =',4*np.pi*Csca*phase)
Explanation: Scattering and absorption coefficients
The scattering cross section may be related to the transmission of a beam
through a dispersion of scatterers of equal size. For $\rho$ particles per
unit volume, the attenuation due to scattering is
$$
-\frac{dI}{dx} = \rho \sigma_\mathrm{sca} I
$$
The transmission is
$$
T = I/I_0 = \exp(-\rho \sigma_\mathrm{sca} x) = \exp(-\mu_s x)
$$
and the coefficients for a sphere with radius r is
$$
\mu_\mathrm{sca} = \rho \sigma_\mathrm{sca} = \rho \pi r^2 Q_\mathrm{sca}
$$
$$
\mu_\mathrm{ext} = \rho \sigma_\mathrm{ext} = \rho \pi r^2 Q_\mathrm{ext}
$$
$$
\mu_\mathrm{abs} = \rho \sigma_\mathrm{abs} = \rho \pi r^2 (Q_\mathrm{ext}-Q_\mathrm{sca})
$$
Kerker, p. 38.
Backscattering Cross Section
For plane-wave radiation incident on a scattering object or a scattering medium, the ratio of the intensity [W/sr] scattered in the direction toward the source to the incident irradiance [W/area].
So defined, the backscattering cross section has units of area per unit solid angle.
In common usage, synonymous with radar cross section, although this can be confusing because the radar cross section is $4\pi$ times the backscattering cross section as defined above and has units of area.
If $Q_{sca}$ [unitless] is the scattering efficiency then the scattering cross section $\sigma_\mathrm{sca}$ [area]
$$
\sigma_\mathrm{sca} = \pi r^2 Q_{sca}
$$
Thus if $Q_{back}$ [unitless] is the backscattering efficiency then the scattering cross section $\sigma_\mathrm{back}$ [area]
$$
\sigma_\mathrm{back} = \pi r^2 Q_{back}
$$
Now the phase function is normalized to one ($S_1(\theta)$ has units of sr$^{-0.5}$)
$$
\int_{4\pi} \frac{|S_1(\theta)|^2+|S_2(\theta)|^2}{2}\,d\Omega = 1
$$
Now since
$$
|S_1(-180^\circ)|^2=|S_2(-180^\circ)|^2=|S_1(180^\circ)|^2=|S_2(180^\circ)|^2
$$
The differential scattering cross section [area/sr] in the backwards direction will be
$$
\left. \frac{d\sigma_\mathrm{sca}}{d\Omega}\right|_{180^\circ} = \sigma_\mathrm{sca} |S_1(-180^\circ)|^2
$$
and the backscattering cross section will be $4\pi$ times this
$$
\sigma_\mathrm{back} = 4\pi \left. \frac{d\sigma_\mathrm{sca}}{d\Omega}\right|_{180^\circ} = 4\pi \sigma_\mathrm{sca} |S_1(-180^\circ)|^2
$$
End of explanation
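As a quick aside, the attenuation relations above are easy to exercise numerically. The sketch below assumes an arbitrary number density of identical spheres, a representative scattering efficiency, and an arbitrary path length, so the numbers only illustrate the bookkeeping rather than any physical result.
# Hedged sketch: scattering coefficient and transmission for a dilute suspension.
# rho, Q_sca_demo and path_length are illustrative assumptions, not values used elsewhere.
rho = 1.0e-3                                  # spheres per cubic micron (assumed)
r_demo = 0.3                                  # sphere radius in microns
Q_sca_demo = 2.0                              # representative scattering efficiency (assumed)
mu_sca = rho * np.pi * r_demo**2 * Q_sca_demo # scattering coefficient in 1/micron
path_length = 100.0                           # path length in microns (assumed)
T = np.exp(-mu_sca * path_length)             # transmitted fraction, T = exp(-mu_sca * x)
print('mu_sca = %.3e 1/micron, T = %.3f' % (mu_sca, T))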
r = 0.3 #radius in microns
x = 2*np.pi*r/ag_lam;
m = ag_mre - 1.0j * ag_mim
qext, qsca, qback, g = miepython.mie(m,x)
plt.plot(ag_lam*1000,qext - qsca,color='blue')
plt.plot(ag_lam*1000,qsca,color='red')
plt.plot(ag_lam*1000,qext,color='green')
plt.text(350, 1.2,'$Q_{abs}$', color='blue', fontsize=14)
plt.text(350, 1.9,'$Q_{sca}$', color='red', fontsize=14)
plt.text(350, 3.0,'$Q_{ext}$', color='green', fontsize=14)
plt.xlabel("Wavelength (nm)")
plt.ylabel("Efficiency (-)")
plt.title("Mie Efficiencies for %.1f$\mu$m Silver Spheres" % (r*2))
plt.xlim(300,800)
plt.show()
Explanation: Efficiencies
To create a non-dimensional quantity, the scattering efficiency may be defined as
$$
Q_\mathrm{sca} = \frac{\sigma_\mathrm{sca}}{ \pi r^2}
$$
where the scattering cross section is normalized by the geometric cross section. Thus when the scattering efficiency is unity, then the portion of the incident plane wave that is affected is equal to the cross sectional area of the sphere.
Similarly the absorption efficiency
$$
Q_\mathrm{abs} = \frac{\sigma_\mathrm{abs}}{ \pi r^2}
$$
And finally the extinction cross section is
$$
Q_{ext}=Q_{sca}+Q_{abs}
$$
where $Q_{sca}$ is the scattering efficiency and $Q_{abs}$ is the absorption
efficiency. $Q_{sca}$ and $Q_{ext}$ are determined by the
Mie scattering program and $Q_{abs}$ is obtained by subtraction.
End of explanation
r = 0.3 #radius in microns
x = 2*np.pi*r/ag_lam;
m = ag_mre - 1.0j * ag_mim
qext, qsca, qback, g = miepython.mie(m,x)
qpr = qext - g*qsca
plt.plot(ag_lam*1000,qpr,color='blue')
plt.xlabel("Wavelength (nm)")
plt.ylabel("Efficiency $Q_{pr}$ (-)")
plt.title("Radiation Pressure Efficiency for %.1f$\mu$m Silver Spheres" % (r*2))
plt.xlim(300,800)
plt.ylim(1,2.5)
plt.show()
Explanation: Radiation Pressure
The radiation pressure is given by [e.g., Kerker, p. 94]
$$
Q_\mathrm{pr}=Q_\mathrm{ext}-g Q_\mathrm{sca}
$$
and is the momentum given to the scattering particle [van de Hulst, p. 13] in the direction of the incident wave. The radiation pressure cross section $\sigma_\mathrm{pr}$ is just the efficiency multiplied by the geometric cross section
$$
\sigma_\mathrm{pr} = \pi r^2 Q_\mathrm{pr}
$$
The radiation pressure cross section $\sigma_\mathrm{pr}$ can be interpreted as the area of a black wall that would receive the same force from the same incident wave. The actual force on the particle is
$$
F = E_0 \frac{\sigma_\mathrm{pr}}{c}
$$
where $E_0$ is the irradiance (W/m$^2$) on the sphere and $c$ is the velocity of the radiation in the medium. If the irradiance has N photons per geometric cross section ($\pi r^2$) then this can be rewritten as
$$
F = N \frac{h}{\lambda} \sigma_\mathrm{pr} = N \cdot \mbox{(photon momentum)} \cdot \sigma_\mathrm{pr}
$$
End of explanation
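To put a rough number on the force formula above, the sketch below uses an assumed irradiance and the radiation-pressure efficiency at the first tabulated wavelength; E0, the choice of wavelength index, and the vacuum value for c are all illustrative assumptions.
# Hedged sketch: radiation force on a single 0.6-micron-diameter silver sphere.
E0 = 1.0e3                              # irradiance in W/m^2 (assumed)
c_light = 3.0e8                         # speed of light in m/s (vacuum approximation)
r_m = 0.3e-6                            # sphere radius in meters
sigma_pr = np.pi * r_m**2 * qpr[0]      # radiation-pressure cross section in m^2
force = E0 * sigma_pr / c_light         # force in newtons
print('sigma_pr = %.3e m^2, F = %.3e N' % (sigma_pr, force))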
x = np.linspace(0.1,4,50)
m = 3.41-1.94j
qext, qsca, qback, g = miepython.mie(m,x)
plt.plot(x,qback)
plt.text(0.6,0,"m=3.41-1.94j")
m = 10000
qext, qsca, qback, g = miepython.mie(m,x)
plt.plot(x,qback)
plt.text(1.2,3.0,"m=10,000")
plt.xlabel("Size Parameter")
plt.ylabel(r"$Q_{back}$")
plt.title("van de Hulst Figure 61")
plt.grid(True)
plt.show()
Explanation: Graph of backscattering efficiency
van de Hulst has a nice graph of backscattering efficiency that we can replicate
End of explanation |
15,660 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
The Bureau of Economic Analysis (BEA) publishes economic statistics in a variety of formats. This document describes the BEA Data Retrieval Application Programming Interface (API) – including detailed instructions for retrieving data and meta-data published by BEA using the pyBEA package.
The pyBEA package provides a simple interface to the BEA API and includes methods for retrieving a subset of BEA statistical data, including any meta-data describing it, and loading the results into a Pandas DataFrame object for further analysis.
Data Return Format
The BEA API returns data in one of two formats
Step1: Meta-Data API Methods
The BEA API contains three methods for retrieving meta-data as follows
Step2: Example Usage
Step3: Data Retrieval API Method
The BEA API has one method for retrieving data
Step4: NIPA (National Income and Product Accounts)
This dataset contains data from the National Income and Product Accounts which include measures of the value and composition of U.S.production and the incomes generated in producing it. NIPA data is provided on a table basis; individual tables contain between fewer than 10 to more than 200 distinct data series.
Example Usage
Percent change in Real Gross Domestic Product, Annually and Quarterly for all years.
Step5: Example Usage
Personal Income, Monthly, for 2015 and 2016.
Step6: NIUnderlyingDetail (National Income and Product Accounts)
The DataSetName is NIUnderlyingDetail. This dataset contains underlying detail data from the National Income and Product Accounts which include measures of the value and composition of U.S.production and the incomes generated in producing it. NIPA Underlying Detail data is provided on a table basis; individual tables contain between fewer than 10 to more than 200 distinct data series.
Example Usage
Personal Consumption Expenditures, Current Dollars, Annually, Quarterly and Monthly for all years.
Step7: Example Usage
Auto and Truck Unit Sales, Production, Inventories, Expenditures and Price, Monthly, for 2015 and 2016
Step8: Fixed Assets
The FixedAssets dataset contains data from the standard set of Fixed Assets tables as published online.
Step9: ITA (International Transactions)
This dataset contains data on U.S. international transactions. BEA's international transactions (balance of payments) accounts include all transactions between U.S. and foreign residents.
Example Usage
Balance on goods with China for 2011 and 2012.
Step10: Example Usage
Net U.S. acquisition of portfolio investment assets (quarterly not seasonally adjusted) for 2013.
Step11: RegionalIncome
Example Usage
Fetch data on personal income for 2012 and 2013 for all counties, in JSON format
Step12: RegionalProduct
Example Usage
Real GDP for all years for all MSAs, in JSON format
Step13: Example Usage
GDP for 2012 and 2013 for selected Southeast states, for the Retail Trade industry.
Step14: InputOutput
The Input-Output Statistics are contained within a dataset called InputOutput. BEA's industry accounts are used extensively by policymakers and businesses to understand industry interactions, productivity trends, and the changing structure of the U.S. economy. The input-output accounts provide a detailed view of the interrelationships between U.S. producers and users.
Example Usage
Data from The Use of Commodities by Industries, Before Redefinitions (Producer’s Prices) sector level table for years 2010, 2011, and 2012.
Step15: Example Usage
Data for 2007 from The Make of Commodities by Industries, Before Redefinitions sector and summary level tables. | Python Code:
import pybea
Explanation: Introduction
The Bureau of Economic Analysis (BEA) publishes economic statistics in a variety of formats. This document describes the BEA Data Retrieval Application Programming Interface (API) – including detailed instructions for retrieving data and meta-data published by BEA using the pyBEA package.
The pyBEA package provides a simple interface to the BEA API and includes methods for retrieving a subset of BEA statistical data, including any meta-data describing it, and loading the results into a Pandas DataFrame object for further analysis.
Data Return Format
The BEA API returns data in one of two formats: JSON or XML (with JSON being the default). Currently the pyBEA package only supports JSON requests.
End of explanation
pybea.get_data_set_list?
pybea.get_parameter_list?
pybea.get_parameter_values?
Explanation: Meta-Data API Methods
The BEA API contains three methods for retrieving meta-data as follows:
GetDataSetList: retrieves a list of the datasets currently offered.
GetParameterList: retrieves a list of the parameters (required and optional) for a particular dataset.
GetParameterValues: retrieves a list of the valid values for a particular parameter.
Each of these methods has a corresponding function in the pybea package.
End of explanation
# replace this with your BEA data API key!
USER_ID = "YOUR_BEA_API_KEY_HERE"
# access the BEA data API...
available_datasets = pybea.get_data_set_list(USER_ID)
available_datasets
request = pybea.api.DataSetListRequest(USER_ID,
ResultFormat="JSON")
request.data_set_list
regional_income_params = pybea.get_parameter_list(USER_ID,
DataSetName='RegionalIncome',
ResultFormat="XML")
regional_income_params
request = pybea.api.ParameterListRequest(USER_ID,
DataSetName='RegionalIncome',
ResultFormat="JSON")
request.parameter_list
regional_income_geofips = pybea.get_parameter_values(USER_ID,
DataSetName='RegionalIncome',
ParameterName='GeoFips')
regional_income_geofips
request = pybea.api.ParameterValuesRequest(USER_ID,
DataSetName='RegionalIncome',
ParameterName='GeoFips')
request.parameter_values
Explanation: Example Usage
End of explanation
pybea.get_data?
Explanation: Data Retrieval API Method
The BEA API has one method for retrieving data: GetData. This method has its own function in the pybea package.
End of explanation
data = pybea.get_data(USER_ID,
DataSetName='NIPA',
TableName='T10101',
Frequency=['A', 'Q'],
Year='ALL',
ResultFormat="XML"
)
data.head()
data.tail()
Explanation: NIPA (National Income and Product Accounts)
This dataset contains data from the National Income and Product Accounts which include measures of the value and composition of U.S.production and the incomes generated in producing it. NIPA data is provided on a table basis; individual tables contain between fewer than 10 to more than 200 distinct data series.
Example Usage
Percent change in Real Gross Domestic Product, Annually and Quarterly for all years.
End of explanation
request = pybea.api.NIPARequest(USER_ID,
TableName='T20600',
Frequency='M',
Year=['2015', '2016'],
ResultFormat='JSON')
request.data.head()
data.tail()
Explanation: Example Usage
Personal Income, Monthly, for 2015 and 2016.
End of explanation
data = pybea.get_data(USER_ID,
DataSetName='NIUnderlyingDetail',
TableName='U20305',
Frequency=['A', 'Q'],
Year='ALL',
ResultFormat='XML')
data.head()
data.tail()
Explanation: NIUnderlyingDetail (National Income and Product Accounts)
The DataSetName is NIUnderlyingDetail. This dataset contains underlying detail data from the National Income and Product Accounts which include measures of the value and composition of U.S.production and the incomes generated in producing it. NIPA Underlying Detail data is provided on a table basis; individual tables contain between fewer than 10 to more than 200 distinct data series.
Example Usage
Personal Consumption Expenditures, Current Dollars, Annually, Quarterly and Monthly for all years.
End of explanation
request = pybea.api.NIUnderlyingDetailRequest(USER_ID,
TableName='U70205S',
Frequency='M',
Year=['2015', '2016'],
ResultFormat='JSON')
request.data.head()
request.data.tail()
Explanation: Example Usage
Auto and Truck Unit Sales, Production, Inventories, Expenditures and Price, Monthly, for 2015 and 2016
End of explanation
data = pybea.get_data(USER_ID,
DataSetName='FixedAssets',
TableID='16',
Year='2012',
ResultFormat='XML')
data.head()
data.tail()
Explanation: Fixed Assets
The FixedAssets dataset contains data from the standard set of Fixed Assets tables as published online.
End of explanation
data = pybea.get_data(USER_ID,
DataSetName='ITA',
Indicator='BalGds',
AreaOrCountry='China',
Frequency='A',
Year=['2011', '2012'],
ResultFormat='XML')
data.head()
Explanation: ITA (International Transactions)
This dataset contains data on U.S. international transactions. BEA's international transactions (balance of payments) accounts include all transactions between U.S. and foreign residents.
Example Usage
Balance on goods with China for 2011 and 2012.
End of explanation
data = pybea.get_data(USER_ID,
DataSetName='ITA',
Indicator='PfInvAssets',
AreaOrCountry='AllCountries',
Frequency='QNSA',
Year='2013',
ResultFormat='XML')
data.head()
Explanation: Example Usage
Net U.S. acquisition of portfolio investment assets (quarterly not seasonally adjusted) for 2013.
End of explanation
data = pybea.get_data(USER_ID,
DataSetName='RegionalIncome',
TableName='CA1',
LineCode=1,
GeoFips='COUNTY',
Year=['2012', '2013'],
ResultFormat='JSON')
data.head()
data.tail()
Explanation: RegionalIncome
Example Usage
Fetch data on personal income for 2012 and 2013 for all counties, in JSON format
End of explanation
data = pybea.get_data(USER_ID,
DataSetName='RegionalProduct',
Component="RGDP_MAN",
IndustryId=1,
GeoFips="MSA",
Year="ALL",
ResultFormat='XML')
data.head()
data.tail()
Explanation: RegionalProduct
Example Usage
Real GDP for all years for all MSAs, in JSON format
End of explanation
southeast_states = ["01000", "05000", "12000", "13000", "21000", "22000",
"28000", "37000", "45000", "47000", "51000", "54000"]
data = pybea.get_data(USER_ID,
DataSetName='RegionalProduct',
Component="GDP_sAN",
IndustryId=35,
GeoFips=southeast_states,
Year=["2012", "2013"],
ResultFormat='XML')
data.head()
data.tail()
Explanation: Example Usage
GDP for 2012 and 2013 for selected Southeast states, for the Retail Trade industry.
End of explanation
data = pybea.get_data(USER_ID,
DataSetName='InputOutput',
TableID=2,
Year=['2010', '2011', '2012', '2013'],
ResultFormat='JSON')
data.head()
data.tail()
Explanation: InputOutput
The Input-Output Statistics are contained within a dataset called InputOutput. BEA's industry accounts are used extensively by policymakers and businesses to understand industry interactions, productivity trends, and the changing structure of the U.S. economy. The input-output accounts provide a detailed view of the interrelationships between U.S. producers and users.
Example Usage
Data from The Use of Commodities by Industries, Before Redefinitions (Producer’s Prices) sector level table for years 2010, 2011, and 2012.
End of explanation
data = pybea.get_data(USER_ID,
DataSetName='InputOutput',
TableID=[46, 47],
Year='2007',
ResultFormat='JSON')
data.head()
data.tail()
Explanation: Example Usage
Data for 2007 from The Make of Commodities by Industries, Before Redefinitions sector and summary level tables.
End of explanation |
15,661 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Although the notebook in Part 1 might have seemed simplistic, it is useful for determining the effectiveness and interpretability of multiple regression compared to more advanced methods. In this Jupyter notebook, we will primarily examine an exploratory and dimensional reduction technique called Principal Component Analysis (PCA). PCA transformation works especially well when variables are highly correlated, which we found was true in the previous post.
First, the following code will load and preprocess the data mentioned in the previous blog post
Step1: PCA Modeling
PCA works by transforming the axes of a dataset. For example, the original axes of a dataset might fall along X, Y, and Z; after PCA transformation, the first axis will be a linear combination of all three variables (such as X+2Y-3Z), the second axis will be orthogonal (at a right angle) to the first axis but also a linear combination of all three variables, and so on.
New variables are created as linear combinations of the original variables. Optimal linear combinations are discovered using the eigenvectors and eigenvalues of the original data. Singular value decomposition, a technique from linear algebra, may be used to divide the original dataset into three matrices containing the eigenvalues, eigenvectors, and transformed data points of the original matrix. The eigenvectors are the transformed linear combinations, the eigenvalues describe the explained variance (informative significance) of the eigenvectors, with the highest eigenvalues being the most significant. After determining the optimal linear combinations, insignificant factors may be discarded, and the complexity of the problem is significantly reduced.
When might PCA be useful? Variables within a dataset may lie along arbitrary axes that are not necessarily convenient for modeling, depending on how the data is collected. In many cases, more accurate models can be achieved when the choice of axes is optimized. Furthermore, some variables (or transformed axes) may be removed from consideration if they do not add a meaningful amount of information to the problem. Or PCA might be used as a data exploration technique, as it helps determine which factors typically contain the same information.
The scikit-learn library may be used to perform PCA on the abalone dataset from the previous blog post. First, a model will be built with 10 principal components, the same as the number of variables.
Step2: According to the figure above, the majority of the variance within the model can be explained using only the first four principal components. Since we know that most of these variables are highly correlated, a good assumption is that PCs 5 through 10 contain mostly noise and can be removed from consideration.
The following plot will illustrate the coefficients of each principal component as a combination of the original 10 variables.
Step3: Regression Modeling
Using only the first 4 principal components, which explain the majority of the variance in the dataset, a multiple regression model can be created. The following code will remove the last 6 PCs and create a regression model.
Step4: Again, let's get a sense of how well the model performed by looking at a Y-Yhat plot and some basic performance metrics | Python Code:
import pandas as pd
import numpy as np
import seaborn as sns
from scipy import stats
import matplotlib.pyplot as plt
from sklearn import linear_model
from sklearn.decomposition import PCA
from sklearn.metrics import r2_score, mean_absolute_error
from sklearn.model_selection import train_test_split
%matplotlib inline
abaloneDF = pd.read_csv('abalone.csv', names=['Sex', 'Length', 'Diameter', 'Height',
'Whole Weight', 'Shucked Weight',
'Viscera Weight', 'Shell Weight',
'Rings'])
abaloneDF['Male'] = (abaloneDF['Sex'] == 'M').astype(int)
abaloneDF['Female'] = (abaloneDF['Sex'] == 'F').astype(int)
abaloneDF['Infant'] = (abaloneDF['Sex'] == 'I').astype(int)
abaloneDF = abaloneDF[abaloneDF['Height'] > 0]
Explanation: Although the notebook in Part 1 might have seemed simplistic, it is useful for determining the effectiveness and interpretability of multiple regression compared to more advanced methods. In this Jupyter notebook, we will primarily examine an exploratory and dimensional reduction technique called Principal Component Analysis (PCA). PCA transformation works especially well when variables are highly correlated, which we found was true in the previous post.
First, the following code will load and preprocess the data mentioned in the previous blog post:
End of explanation
dataset = abaloneDF.drop(['Rings', 'Sex'],axis=1)
#We will start with the same number of components as variables
pca_model = PCA(n_components=10)
pca_model.fit(dataset)
#Plot the explained variance
plt.plot(range(1,11),pca_model.explained_variance_ratio_);
plt.xlabel('Principal Component');
plt.ylabel('Percentage Explained Variance');
Explanation: PCA Modeling
PCA works by transforming the axes of a dataset. For example, the original axes of a dataset might fall along X, Y, and Z; after PCA transformation, the first axis will be a linear combination of all three variables (such as X+2Y-3Z), the second axis will be orthogonal (at a right angle) to the first axis but also a linear combination of all three variables, and so on.
New variables are created as linear combinations of the original variables. Optimal linear combinations are discovered using the eigenvectors and eigenvalues of the original data. Singular value decomposition, a technique from linear algebra, may be used to divide the original dataset into three matrices containing the eigenvalues, eigenvectors, and transformed data points of the original matrix. The eigenvectors are the transformed linear combinations, the eigenvalues describe the explained variance (informative significance) of the eigenvectors, with the highest eigenvalues being the most significant. After determining the optimal linear combinations, insignificant factors may be discarded, and the complexity of the problem is significantly reduced.
When might PCA be useful? Variables within a dataset may lie along arbitrary axes that are not necessarily convenient for modeling, depending on how the data is collected. In many cases, more accurate models can be achieved when the choice of axes is optimized. Furthermore, some variables (or transformed axes) may be removed from consideration if they do not add a meaningful amount of information to the problem. Or PCA might be used as a data exploration technique, as it helps determine which factors typically contain the same information.
The scikit-learn library may be used to perform PCA on the abalone dataset from the previous blog post. First, a model will be built with 10 principal components, the same as the number of variables.
End of explanation
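As a sanity check on the singular value decomposition connection described above, the short sketch below centers the data and verifies that an SVD reproduces the explained-variance ratios reported by scikit-learn; it is only a cross-check of the idea, not part of the analysis itself.
#Hedged cross-check: PCA explained variance via numpy's SVD of the centered data
X_centered = dataset.values - dataset.values.mean(axis=0)
U_svd, S_svd, Vt_svd = np.linalg.svd(X_centered, full_matrices=False)
svd_explained = S_svd**2 / np.sum(S_svd**2)
#Expected to print True - the ratios match pca_model.explained_variance_ratio_
print(np.allclose(svd_explained, pca_model.explained_variance_ratio_))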
df = pd.DataFrame(data=pca_model.components_)
df.index = dataset.columns
dfRed = df.iloc[:, 0:4]  # same four columns as the original df.ix[:, 0:3]; .ix has been removed from pandas
dfRed.columns = range(1,5)
dfRed.plot.bar();
plt.ylabel('Coefficient');
Explanation: According to the figure above, the majority of the variance within the model can be explained using only the first four principal components. Since we know that most of these variables are highly correlated, a good assumption is that PCs 5 through 10 contain mostly noise and can be removed from consideration.
The following plot will illustrate the coefficients of each principal component as a combination of the original 10 variables.
End of explanation
#Remove the last 6 PCs
red_PCA = PCA(n_components=4)
red_PCA.fit(dataset)
rings = abaloneDF['Rings'].values.reshape(len(abaloneDF),1)
red_data = np.hstack([red_PCA.transform(dataset),rings])
red_df = pd.DataFrame(red_data,columns=['PC1','PC2','PC3','PC4','Rings'])
train, test = train_test_split(red_df,train_size=0.7)
xtrain = train.drop(['Rings'],axis=1)
ytrain = train['Rings']
xtest = test.drop(['Rings'],axis=1)
ytest = test['Rings']
regr = linear_model.LinearRegression()
regr.fit(xtrain, ytrain)
#Take a look at the regression coefficients
dict(zip(list(xtrain.columns),regr.coef_))
Explanation: Regression Modeling
Using only the first 4 principal components, which explain the majority of the variance in the dataset, a multiple regression model can be created. The following code will remove the last 6 PCs and create a regression model.
End of explanation
#Same function as in Part 1:
def plot_yyhat(ytest,ypred):
r2 = r2_score(ytest, ypred )
mae = mean_absolute_error(ytest, ypred)
absmin = min([ytest.min(),ypred.min()])
absmax = max([ytest.max(),ypred.max()])
ax = plt.axes()
ax.scatter(ytest,ypred)
ax.set_title('Y vs. YHat')
ax.axis([absmin, absmax, absmin, absmax])
ax.plot([absmin, absmax], [absmin, absmax],c="k")
ax.set_ylabel('Predicted Rings')
ax.set_xlabel('Actual Rings')
#Plot the text box
props = dict(boxstyle='round', facecolor='wheat', alpha=0.5)
textStr = '$MAE=%.3f$\n$R2=%.3f$' % (mae, r2)
ax.text(0.05, 0.95, textStr, transform=ax.transAxes, fontsize=14,
verticalalignment='top', bbox=props);
ypred = regr.predict(xtest)
plot_yyhat(ytest,ypred)
Explanation: Again, let's get a sense of how well the model performed by looking at a Y-Yhat plot and some basic performance metrics:
End of explanation |
15,662 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TryAlgo Maps in Paris
Here is a demo of the tryalgo package over Paris' graph.
We are going to display a shortest path from Gare de Lyon to Place d'Italie.
Let's first store the graph in an adjacency list.
Step1: How many nodes?
Step2: Which means the node 0 leads to the node 1079 with cost 113 and so on.
Step3: Geolocation using geopy
Step5: We need a function that provides the index of the closest node in the graph of Paris. The distance between two pairs of latitude and longitude is given by the following haversine function
Step6: Visualization using Folium
Step7: Pathfinding using tryalgo
Step8: To finish, let's display the path. | Python Code:
with open('paris.txt') as f:
lines = f.read().splitlines()
N, M, T, C, S = map(int, lines[0].split())
paris_coords = []
for i in range(1, N + 1):
paris_coords.append(list(map(float, lines[i].split()))) # Read coords
paris = {node: {} for node in range(N)}
for i in range(N + 1, N + M + 1):
start, end, nb_directions, duration, length = map(int, lines[i].split())
paris[start][end] = length
if nb_directions == 2:
paris[end][start] = length
Explanation: TryAlgo Maps in Paris
Here is a demo of the tryalgo package over Paris' graph.
We are going to display a shortest path from Gare de Lyon to Place d'Italie.
Let's first store the graph in an adjacency list.
End of explanation
len(paris)
paris[0]
Explanation: How many nodes?
End of explanation
%matplotlib inline
from matplotlib import pyplot as plt
x = [point[0] for point in paris_coords]
y = [point[1] for point in paris_coords]
plt.scatter(x, y, marker='.', s=1)
Explanation: Which means the node 0 leads to the node 1079 with cost 113 and so on.
End of explanation
from geopy.geocoders import Nominatim
geocoder = Nominatim(user_agent='tryalgo')
start = geocoder.geocode("Gare de Lyon, Paris")
end = geocoder.geocode("Porte d'Italie, Paris")
start.longitude, start.latitude
Explanation: Geolocation using geopy
End of explanation
from math import radians, cos, sin, asin, sqrt
def haversine(lon1, lat1, lon2, lat2):
"""Calculate the great circle distance between two points
on the earth (specified in decimal degrees)."""
# convert decimal degrees to radians
lon1, lat1, lon2, lat2 = map(radians, [lon1, lat1, lon2, lat2])
# haversine formula
dlon = lon2 - lon1
dlat = lat2 - lat1
a = sin(dlat/2)**2 + cos(lat1) * cos(lat2) * sin(dlon/2)**2
c = 2 * asin(sqrt(a))
r = 6371 # Radius of earth in kilometers. Use 3956 for miles
return c * r
def closest_node(coords, location):
dmin = float('inf')
closest = None
for i in range(len(coords)):
d = haversine(coords[i][1], coords[i][0], location.longitude, location.latitude)
if d < dmin:
closest = i
dmin = d
return closest
Explanation: We need a function that provides the index of the closest node in the graph of Paris. The distance between two pairs of latitude and longitude is given by the following haversine function:
End of explanation
import folium
paris_viz = folium.Map(location=(48.8330293, 2.3618845), tiles='Stamen Watercolor', zoom_start=13)
paris_viz
Explanation: Visualization using Folium
End of explanation
from tryalgo.dijkstra import dijkstra
source = closest_node(paris_coords, start)
target = closest_node(paris_coords, end)
dist, prec = dijkstra(paris, paris, source, target)
# Let's build the path
path = [target]
node = target
while prec[node] is not None:
node = prec[node]
path.append(node)
print('Path found with', len(path), 'nodes:', path[::-1])
Explanation: Pathfinding using tryalgo
End of explanation
from folium.features import PolyLine
paris_viz.add_child(PolyLine(map(lambda node: paris_coords[node], path)))
paris_viz
# We can also save it to a file
# paris_viz.save('pathfinding_in_paris.html')
# from IPython.display import IFrame
# IFrame('pathfinding_in_paris.html', width='100%', height=510)
Explanation: To finish, let's display the path.
End of explanation |
15,663 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparison between the magnetic field produced by a triaxial ellipsoid and a sphere
Import the required modules and functions
Step1: Set some parameters for modelling
Step2: Triaxial ellipsoid versus sphere
This test compares the total-field anomalies produced by a triaxial ellipsoid with that produced by a sphere. The ellipsoid has semi-axes $a$, $b$, and $c$ equal to 500.1 m, 500 m, and 499.9 m, respectively, and the sphere has a radius equal to the intermediate semi-axis $b$. Both bodies are centered at the point (0, 0, 1000) and have the same magnetization.
Triaxial ellipsoid
Step3: Sphere
Step4: Total-field anomalies
Step5: Field components | Python Code:
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.colors import BoundaryNorm
from matplotlib.ticker import MaxNLocator
from fatiando import gridder, utils
from fatiando.gravmag import sphere
from fatiando.mesher import Sphere
import triaxial_ellipsoid
from mesher import TriaxialEllipsoid
# Set some plot parameters
from matplotlib import rcParams
rcParams['figure.dpi'] = 300.
rcParams['font.size'] = 6
rcParams['xtick.labelsize'] = 'medium'
rcParams['ytick.labelsize'] = 'medium'
rcParams['axes.labelsize'] = 'large'
rcParams['legend.fontsize'] = 'medium'
rcParams['savefig.dpi'] = 300.
Explanation: Comparison between the magnetic field produced by a triaxial ellipsoid and a sphere
Import the required modules and functions
End of explanation
# The local-geomagnetic field
F, inc, dec = 60000, 50, 20
# Create a regular grid at z = 0 m
shape = (50, 50)
area = [-5000, 5000, -4000, 6000]
xp, yp, zp = gridder.regular(area, shape, z=0)
Explanation: Set some parameters for modelling
End of explanation
ellipsoid = TriaxialEllipsoid(0, 0, 1000, 500.1, 500, 499.9, 40, -60, 180,
{'principal susceptibilities': [0.01, 0.01, 0.01],
'susceptibility angles': [-40, 90, 7],
'remanent magnetization': [0.7, -7, 10]})
magnetization = triaxial_ellipsoid.magnetization(ellipsoid, F, inc, dec, demag=True)
magnetization
Explanation: Triaxial ellipsoid versus sphere
This test compares the total-field anomalies produced by a triaxial ellipsoid with that produced by a sphere. The ellipsoid has semi-axes $a$, $b$, and $c$ equal to 500.1 m, 500 m, and 499.9 m, respectively, and the sphere has a radius equal to the intermediate semi-axis $b$. Both bodies are centered at the point (0, 0, 1000) and have the same magnetization.
Triaxial ellipsoid
End of explanation
spherical_body = Sphere(ellipsoid.x, ellipsoid.y, ellipsoid.z,
ellipsoid.intermediate_axis,
{'magnetization': magnetization})
spherical_body.props['magnetization']
Explanation: Sphere
End of explanation
# total-field anomaly produced by the ellipsoid (in nT)
tf_t = triaxial_ellipsoid.tf(xp, yp, zp, [ellipsoid],
F, inc, dec)
# total-field anomaly produced by the sphere (in nT)
tf_s = sphere.tf(xp, yp, zp, [spherical_body], inc, dec)
# residuals
tf_r = tf_t - tf_s
plt.figure(figsize=(3.15, 7))
plt.axis('scaled')
ranges = np.max(np.abs([np.min(tf_t), np.max(tf_t),
np.min(tf_s), np.max(tf_s)]))
levels = MaxNLocator(nbins=20).tick_values(-ranges, ranges)
cmap = plt.get_cmap('RdBu_r')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
plt.subplot(3,1,1)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
tf_t.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
cbar = plt.colorbar()
plt.annotate(s='(a)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
plt.subplot(3,1,2)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
tf_s.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
plt.colorbar()
plt.annotate(s='(b)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
ranges = np.max(np.abs([np.min(tf_r), np.max(tf_r)]))
levels = MaxNLocator(nbins=20).tick_values(-ranges, ranges)
cmap = plt.get_cmap('RdBu_r')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
plt.subplot(3,1,3)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
tf_r.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlabel('y (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
plt.colorbar()
plt.annotate(s='(c)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
plt.tight_layout()
plt.show()
Explanation: Total-field anomalies
End of explanation
# field components produced by the ellipsoid (in nT)
bx_t = triaxial_ellipsoid.bx(xp, yp, zp, [ellipsoid],
F, inc, dec)
by_t = triaxial_ellipsoid.by(xp, yp, zp, [ellipsoid],
F, inc, dec)
bz_t = triaxial_ellipsoid.bz(xp, yp, zp, [ellipsoid],
F, inc, dec)
bt = [bx_t, by_t, bz_t]
# field components produced by the sphere (in nT)
bx_s = sphere.bx(xp, yp, zp, [spherical_body])
by_s = sphere.by(xp, yp, zp, [spherical_body])
bz_s = sphere.bz(xp, yp, zp, [spherical_body])
bs = [bx_s, by_s, bz_s]
# residuals
bx_r = bx_t - bx_s
by_r = by_t - by_s
bz_r = bz_t - bz_s
br = [bx_r, by_r, bz_r]
plt.figure(figsize=(3.15, 7))
plt.axis('scaled')
ranges = np.max(np.abs([np.min(bx_t), np.max(bx_t),
np.min(bx_s), np.max(bx_s)]))
levels = MaxNLocator(nbins=20).tick_values(-ranges, ranges)
cmap = plt.get_cmap('RdBu_r')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
plt.subplot(3,1,1)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
bx_t.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
cbar = plt.colorbar()
plt.annotate(s='(a)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
plt.subplot(3,1,2)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
bx_s.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
plt.colorbar()
plt.annotate(s='(b)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
ranges = np.max(np.abs([np.min(bx_r), np.max(bx_r)]))
levels = MaxNLocator(nbins=20).tick_values(-ranges, ranges)
cmap = plt.get_cmap('RdBu_r')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
plt.subplot(3,1,3)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
bx_r.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlabel('y (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
plt.colorbar()
plt.annotate(s='(c)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
plt.tight_layout()
plt.show()
plt.figure(figsize=(3.15, 7))
plt.axis('scaled')
ranges = np.max(np.abs([np.min(by_t), np.max(by_t),
np.min(by_s), np.max(by_s)]))
levels = MaxNLocator(nbins=20).tick_values(-ranges, ranges)
cmap = plt.get_cmap('RdBu_r')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
plt.subplot(3,1,1)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
by_t.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
cbar = plt.colorbar()
plt.annotate(s='(a)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
plt.subplot(3,1,2)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
by_s.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
plt.colorbar()
plt.annotate(s='(b)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
ranges = np.max(np.abs([np.min(by_r), np.max(by_r)]))
levels = MaxNLocator(nbins=20).tick_values(-ranges, ranges)
cmap = plt.get_cmap('RdBu_r')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
plt.subplot(3,1,3)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
by_r.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlabel('y (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
plt.colorbar()
plt.annotate(s='(c)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
plt.tight_layout()
plt.show()
plt.figure(figsize=(3.15, 7))
plt.axis('scaled')
ranges = np.max(np.abs([np.min(bz_t), np.max(bz_t),
np.min(bz_s), np.max(bz_s)]))
levels = MaxNLocator(nbins=20).tick_values(-ranges, ranges)
cmap = plt.get_cmap('RdBu_r')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
plt.subplot(3,1,1)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
bz_t.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
cbar = plt.colorbar()
plt.annotate(s='(a)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
plt.subplot(3,1,2)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
bz_s.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
plt.colorbar()
plt.annotate(s='(b)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
ranges = np.max(np.abs([np.min(bz_r), np.max(bz_r)]))
levels = MaxNLocator(nbins=20).tick_values(-ranges, ranges)
cmap = plt.get_cmap('RdBu_r')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
plt.subplot(3,1,3)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
bz_r.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlabel('y (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
plt.colorbar()
plt.annotate(s='(c)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
plt.tight_layout()
plt.show()
Explanation: Field components
End of explanation |
15,664 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Diffusion equation - grid decomposition with MPI
Problem
Step1: non-contiguous slice
Step2: N - slices | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import os
print os.getenv("HOME")
wd = os.path.join( os.getenv("HOME"),"mpi_tmpdir")
if not os.path.isdir(wd):
os.mkdir(wd)
os.chdir(wd)
print "WD is now:",os.getcwd()
%%writefile mpi002.py
from mpi4py import MPI
import numpy as np
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
def numpy_diff2d(u,dx2,dy2,c):
A = (1.0-2.0*(c/dx2+c/dy2))
u[1:-1,1:-1] =A*u[1:-1,1:-1] + c/dy2*(u[2:,1:-1] + u[:-2,1:-1]) + \
c/dx2*(u[1:-1,2:] + u[1:-1,:-2])
N=52
Niter=211
dx = 0.1
dy = 0.1
dx2 = dx*dx
dy2 = dy*dy
dt = 0.01
D = 0.1
c = D*dt
u = np.zeros([N, N])
if rank == 0:
u[-2,u.shape[1]/2] = 1.0/np.sqrt(dx2*dy2)
print "CLF = ",c/dx2,c/dy2
for i in range(Niter):
if rank == 0:
comm.Send([u[-2,:], MPI.FLOAT], dest=1)
comm.Recv([u[-1,:], MPI.FLOAT], source=1)
elif rank == 1:
comm.Recv([u[0,:], MPI.FLOAT], source=0)
comm.Send([u[1,:], MPI.FLOAT], dest=0)
numpy_diff2d(u,dx2,dy2,c)
#np.savez("udata%04d"%rank, u=u)
U = comm.gather(u[1:-1,1:-1])
if rank==0:
np.savez("Udata", U=U)
!mpirun -n 2 python mpi002.py
data = np.load("Udata.npz")
plt.imshow(np.vstack(data['U']))
print data['U'].shape
!pwd
Explanation: Diffusion equation - grid decomposition with MPI
Problem:
We want to solve the diffusion equation using many processors.
We divide the grid into regions and solve the equation independently within each region. After every time step, MPI communication is used to exchange information about the adjacent boundaries of neighbouring regions.
End of explanation
%%writefile mpi003.py
from mpi4py import MPI
import numpy as np
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
def numpy_diff2d(u,dx2,dy2,c):
A = (1.0-2.0*(c/dx2+c/dy2))
u[1:-1,1:-1] =A*u[1:-1,1:-1] + c/dy2*(u[2:,1:-1] + u[:-2,1:-1]) + \
c/dx2*(u[1:-1,2:] + u[1:-1,:-2])
N=52
Niter=211
dx = 0.1
dy = 0.1
dx2 = dx*dx
dy2 = dy*dy
dt = 0.01
D = 0.1
c = D*dt
u = np.zeros([N, N])
if rank == 0:
u[u.shape[1]/2,-2] = 1.0/np.sqrt(dx2*dy2)
print "CLF = ",c/dx2,c/dy2
for i in range(Niter):
if rank == 0:
OUT = u[:,-2].copy()
IN = np.empty_like(OUT)
comm.Send([OUT, MPI.FLOAT], dest=1)
comm.Recv([IN, MPI.FLOAT], source=1)
u[:,-1] = IN
elif rank == 1:
OUT = u[:,1].copy()
IN = np.empty_like(OUT)
comm.Recv([IN, MPI.FLOAT], source=0)
comm.Send([OUT, MPI.FLOAT], dest=0)
u[:,0] = IN
numpy_diff2d(u,dx2,dy2,c)
np.savez("udata%04d"%rank, u=u)
!mpirun -n 2 python mpi003.py
u1 = np.load('udata0000.npz')['u']
u2 = np.load('udata0001.npz')['u']
plt.imshow(np.hstack([u1[:,:-1],u2[:,1:]]))
Explanation: non-contiguous slice
End of explanation
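A brief aside on why mpi003.py copies the column slices before sending them: a column view of a C-ordered array is strided rather than contiguous, and the buffer-style Send used above expects one contiguous block of memory. The check below is purely illustrative.
# Hedged illustration: column slices are strided views, hence the .copy() before Send.
u_demo = np.zeros((4, 4))
print u_demo[:, 1].flags['C_CONTIGUOUS'] # False - a strided view
print u_demo[:, 1].copy().flags['C_CONTIGUOUS'] # True - safe to pass as an MPI buffer
# Row slices such as u_demo[1, :] are already contiguous, which is why mpi002.py
# could send them without copying.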
%%writefile mpi004.py
from mpi4py import MPI
import numpy as np
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
Nproc = comm.size
def numpy_diff2d(u,dx2,dy2,c):
A = (1.0-2.0*(c/dx2+c/dy2))
u[1:-1,1:-1] = A*u[1:-1,1:-1] + c/dy2*(u[2:,1:-1] + u[:-2,1:-1]) + \
c/dx2*(u[1:-1,2:] + u[1:-1,:-2])
N = 16*128
Nx = N
Ny = N/Nproc
Niter=200
dx = 0.1
dy = 0.1
dx2 = dx*dx
dy2 = dy*dy
dt = 0.01
D = 0.2
c = D*dt
u = np.zeros([Ny, Nx])
if rank == 0:
u[-2,u.shape[1]/2] = 1.0/np.sqrt(dx2*dy2)
print "CLF = ",c/dx2,c/dy2
t0 = MPI.Wtime()
for i in range(Niter):
if Nproc>1:
if rank == 0:
comm.Send([u[-2,:], MPI.FLOAT], dest=1)
if rank >0 and rank < Nproc-1:
comm.Recv([u[0,:], MPI.FLOAT], source=rank-1)
comm.Send([u[-2,:], MPI.FLOAT], dest=rank+1)
if rank == Nproc - 1:
comm.Recv([u[0,:], MPI.FLOAT], source=Nproc-2)
comm.Send([u[1,:], MPI.FLOAT], dest=Nproc-2)
if rank >0 and rank < Nproc-1:
comm.Recv([u[-1,:], MPI.FLOAT], source=rank+1)
comm.Send([u[1,:], MPI.FLOAT], dest=rank-1)
if rank == 0:
comm.Recv([u[-1,:], MPI.FLOAT], source=1)
#print rank
comm.Barrier()
numpy_diff2d(u,dx2,dy2,c)
t1 = MPI.Wtime()
print rank,t1-t0
#np.savez("udata%04d"%rank, u=u)
if Nproc>1:
U = comm.gather(u[1:-1,1:-1])
if rank==0:
np.savez("Udata", U=U)
!mpirun -H gpu2,gpu3 python mpi004.py
!mpirun -n 4 python mpi004.py
data = np.load("Udata.npz")
plt.imshow(np.vstack(data['U']))
print data['U'].shape
a = np.arange(0,16).reshape(4,4)
b = a[:,2]
c = a[2,:]
np.may_share_memory(a,b),np.may_share_memory(a,c)
a.flags
b.flags
c.flags
a=np.array(range(6))
b = a[2:4]
b=666
print a
np.may_share_memory?
Explanation: N - slices
End of explanation |
15,665 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This is a notebook developed by Dustin Lang and Phil Marshall in Spring 2015, and distributed by them under the GPLv2 licence. We can pull bits out of this and make good use of them in some expanded notebooks, before removing this original.
Step1: Fitting a straight line
Step2: Least squares fitting
An industry standard
Step3: Evaluating posterior probability on a grid
This procedure will get us the Bayesian solution to the problem - not an estimator, but a probability distribution for the parameters m and b. This PDF captures the uncertainty in the model parameters given the data. For simple, 2-dimensional parameter spaces like this one, evaluating on a grid is not a bad way to go. We'll see that the least squares solution lies at the peak of the posterior PDF - for a certain set of assumptions about the data and the model.
Step4: MCMC Sampling
In problems with higher dimensional parameter spaces, we need a more efficient way of approximating the posterior PDF - both when characterizing it in the first place, and then when doing integrals over that PDF (to get the marginalized PDFs for the parameters, or to compress them in to single numbers with uncertainties that can be easily reported). In most applications it's sufficient to approximate a PDF with a (relatively) small number of samples drawn from it; MCMC is a procedure for drawing samples from PDFs.
Step5: Model checking
How do we know if our model is any good? There are two properties that "good" models have
Step6: Note that we did not have to look up the "chi squared distribution" - we can simply compute the posterior predictive distribution given our generative model.
Intrinsic scatter
Now let's add an extra parameter to the model
Step7: Evidence comparison for intrinsic scatter
The other virtue we are looking for in a model is efficiency. Not because this is intrinsically a good thing, but rather because the data might prefer it. We can compare two models, given the data, with their relative posterior probabilities. This is not easy, because we have to specify their prior probabilities, and then compute the probability of getting the data given each model (the "Evidence") - but under certain assumptions we can do this.
Evidence computation is famously difficult - the simplest way is by simple Monte Carlo. We draw prior samples, and weight each one by the likelihood function. It's inefficient, but sometimes that doesn't matter.
Step8: In this case there is very little to choose between the two models. Both provide comparably good fits to the data, so the only thing working against the scatter model is its extra parameter. However, the prior for s is very well -matched to the data (uniform in log s corresponds to a 1/s distribution, favoring small values, and so there is not a very big "Occam's Razor" factor in the evidence. Both models are appropriate for this dataset.
Incidentally, let's look at a possible approximation for the evidence - the posterior mean log likelihood from our MCMC chains
Step9: The difference between the posterior mean log likelihood and the Evidence is the Shannon information gained when we updated the prior into the posterior. In both cases we gained about 2 bits of information - perhaps corresponding to approximately 2 good measurements (regardless of the number of parameters being inferred)? | Python Code:
from straightline_utils import *
%matplotlib inline
from matplotlib import rcParams
rcParams['savefig.dpi'] = 100
Explanation: This is a notebook developed by Dustin Lang and Phil Marshall in Spring 2015, and distributed by them under the GPLv2 licence. We can pull bits out of this and make good use of them in some expanded notebooks, before removing this original.
End of explanation
(x,y,sigmay) = get_data_no_outliers()
plot_yerr(x, y, sigmay)
Explanation: Fitting a straight line
End of explanation
# Linear algebra: weighted least squares
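# Design matrix: column 0 holds 1/sigma (intercept term) and column 1 holds x/sigma (slope term),
# so solving A . theta ~ y/sigma in the least-squares sense gives the inverse-variance weighted fit.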
N = len(x)
A = np.zeros((N,2))
A[:,0] = 1. / sigmay
A[:,1] = x / sigmay
b = y / sigmay
theta,nil,nil,nil = np.linalg.lstsq(A, b)
plot_yerr(x, y, sigmay)
b_ls,m_ls = theta
print 'Least Squares (maximum likelihood) estimator:', b_ls,m_ls
plot_line(m_ls, b_ls);
Explanation: Least squares fitting
An industry standard: find the slope and intercept that minimize the mean square residual. Since the data depend linearly on the parameters, the least squares solution can be found by a matrix inversion and multiplication, conveniently packed in numpy.linalg.
End of explanation
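# Optional sanity check: np.polyfit with weights 1/sigma should reproduce the same
# weighted least-squares solution as the linear-algebra route above.
m_check, b_check = np.polyfit(x, y, 1, w=1./sigmay)
print 'np.polyfit cross-check (b, m):', b_check, m_check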
def straight_line_log_likelihood(x, y, sigmay, m, b):
'''
Returns the log-likelihood of drawing data values *y* at
known values *x* given Gaussian measurement noise with standard
deviation with known *sigmay*, where the "true" y values are
*y_t = m * x + b*
x: list of x coordinates
y: list of y coordinates
sigmay: list of y uncertainties
m: scalar slope
b: scalar line intercept
Returns: scalar log likelihood
'''
return (np.sum(np.log(1./(np.sqrt(2.*np.pi) * sigmay))) +
np.sum(-0.5 * (y - (m*x + b))**2 / sigmay**2))
def straight_line_log_prior(m, b):
return 0.
def straight_line_log_posterior(x,y,sigmay, m,b):
return (straight_line_log_likelihood(x,y,sigmay, m,b) +
straight_line_log_prior(m, b))
# Evaluate log P(m,b | x,y,sigmay) on a grid.
# Set up grid
mgrid = np.linspace(mlo, mhi, 100)
bgrid = np.linspace(blo, bhi, 101)
log_posterior = np.zeros((len(mgrid),len(bgrid)))
# Evaluate log probability on grid
for im,m in enumerate(mgrid):
for ib,b in enumerate(bgrid):
log_posterior[im,ib] = straight_line_log_posterior(x, y, sigmay, m, b)
# Convert to probability density and plot
posterior = np.exp(log_posterior - log_posterior.max())
plt.imshow(posterior, extent=[blo,bhi, mlo,mhi],cmap='Blues',
interpolation='nearest', origin='lower', aspect=(bhi-blo)/(mhi-mlo),
vmin=0, vmax=1)
plt.contour(bgrid, mgrid, posterior, pdf_contour_levels(posterior), colors='k')
i = np.argmax(posterior)
i,j = np.unravel_index(i, posterior.shape)
print 'Grid maximum posterior values:', bgrid[j], mgrid[i]
plt.title('Straight line: posterior PDF for parameters');
plt.plot(b_ls, m_ls, 'w+', ms=12, mew=4);
plot_mb_setup();
Explanation: Evaluating posterior probability on a grid
This procedure will get us the Bayesian solution to the problem - not an estimator, but a probability distribution for the parameters m and b. This PDF captures the uncertainty in the model parameters given the data. For simple, 2-dimensional parameter spaces like this one, evaluating on a grid is not a bad way to go. We'll see that the least squares solution lies at the peak of the posterior PDF - for a certain set of assumptions about the data and the model.
End of explanation
def straight_line_posterior(x, y, sigmay, m, b):
return np.exp(straight_line_log_posterior(x, y, sigmay, m, b))
# initial m, b
m,b = 2, 0
# step sizes
mstep, bstep = 0.1, 10.
# how many steps?
nsteps = 10000
chain = []
probs = []
naccept = 0
print 'Running MH for', nsteps, 'steps'
# First point:
L_old = straight_line_log_likelihood(x, y, sigmay, m, b)
p_old = straight_line_log_prior(m, b)
prob_old = np.exp(L_old + p_old)
for i in range(nsteps):
# step
mnew = m + np.random.normal() * mstep
bnew = b + np.random.normal() * bstep
# evaluate probabilities
# prob_new = straight_line_posterior(x, y, sigmay, mnew, bnew)
L_new = straight_line_log_likelihood(x, y, sigmay, mnew, bnew)
p_new = straight_line_log_prior(mnew, bnew)
prob_new = np.exp(L_new + p_new)
if (prob_new / prob_old > np.random.uniform()):
# accept
m = mnew
b = bnew
L_old = L_new
p_old = p_new
prob_old = prob_new
naccept += 1
else:
# Stay where we are; m,b stay the same, and we append them
# to the chain below.
pass
chain.append((b,m))
probs.append((L_old,p_old))
print 'Acceptance fraction:', naccept/float(nsteps)
# Pull m and b arrays out of the Markov chain and plot them:
mm = [m for b,m in chain]
bb = [b for b,m in chain]
# Scatterplot of m,b posterior samples
plt.clf()
plt.contour(bgrid, mgrid, posterior, pdf_contour_levels(posterior), colors='k')
plt.gca().set_aspect((bhi-blo)/(mhi-mlo))
plt.plot(bb, mm, 'b.', alpha=0.1)
plot_mb_setup()
plt.show()
# 1 and 2D marginalised distributions:
import triangle
triangle.corner(chain, labels=['b','m'], range=[(blo,bhi),(mlo,mhi)],quantiles=[0.16,0.5,0.84],
show_titles=True, title_args={"fontsize": 12},
plot_datapoints=True, fill_contours=True, levels=[0.68, 0.95], color='b', bins=40, smooth=1.0);
plt.show()
# Traces, for convergence inspection:
plt.clf()
plt.subplot(2,1,1)
plt.plot(mm, 'k-')
plt.ylim(mlo,mhi)
plt.ylabel('m')
plt.subplot(2,1,2)
plt.plot(bb, 'k-')
plt.ylabel('b')
plt.ylim(blo,bhi)
plt.show()
Explanation: MCMC Sampling
In problems with higher dimensional parameter spaces, we need a more efficient way of approximating the posterior PDF - both when characterizing it in the first place, and then when doing integrals over that PDF (to get the marginalized PDFs for the parameters, or to compress them in to single numbers with uncertainties that can be easily reported). In most applications it's sufficient to approximate a PDF with a (relatively) small number of samples drawn from it; MCMC is a procedure for drawing samples from PDFs.
End of explanation
# Posterior predictive check, in data space
X = np.array(xlimits)
for i in (np.random.rand(100)*len(chain)).astype(int):
b,m = chain[i]
plt.plot(X, b+X*m, 'b-', alpha=0.1)
plot_line(m_ls, b_ls);
plot_yerr(x, y, sigmay)
def test_statistic(x,y,sigmay,b_ls,m_ls):
return np.sum((y - m_ls*x - b_ls)**2.0/sigmay**2.0)/(len(y)-2)
def mean_test_stat(x,y,sigmay):
pass
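# Posterior predictive distribution of the test statistic: for each posterior sample (b, m),
# draw replica data yp from the model and recompute T.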
T = np.zeros(len(chain))
for k,(b,m) in enumerate(chain):
yp = b + m*x + np.random.randn(len(x)) * sigmay
T[k] = test_statistic(x,yp,sigmay,b_ls,m_ls)
Td = test_statistic(x,y, sigmay, b_ls, m_ls)
plt.hist(T, 100, histtype='step', color='b', lw=2, range=(0,4))
plt.axvline(Td, color='k', linestyle='--', lw=2)
plt.xlabel('Test statistic')
plt.ylabel('Posterior predictive distribution');
Explanation: Model checking
How do we know if our model is any good? There are two properties that "good" models have: the first is accuracy, and the second is efficiency. Accurate models generate data that is like the observed data. What does this mean? First we have to define what similarity is, in this context. Visual impression is one very important way. Test statistics that capture relevant features of the data are another. Let's look at the posterior predictive distributions for the datapoints, and for a particularly interesting test statistic, the reduced chi-squared.
End of explanation
def straight_line_with_scatter_log_likelihood(x, y, sigmay, m, b, log_s):
'''
Returns the log-likelihood of drawing data values *y* at
known values *x* given Gaussian measurement noise with standard
deviation with known *sigmay*, where the "true" y values have
been drawn from N(mean=m * x + b, variance=(s^2)).
x: list of x coordinates
y: list of y coordinates
sigmay: list of y uncertainties
m: scalar slope
b: scalar line intercept
s: intrinsic scatter, Gaussian std.dev
Returns: scalar log likelihood
'''
s = np.exp(log_s)
V = sigmay**2 + s**2
return (np.sum(np.log(1./(np.sqrt(2.*np.pi*V)))) +
np.sum(-0.5 * (y - (m*x + b))**2 / V))
def straight_line_with_scatter_log_prior(m, b, log_s):
if log_s < np.log(slo) or log_s > np.log(shi):
return -np.inf
return 0.
def straight_line_with_scatter_log_posterior(x,y,sigmay, m,b,log_s):
return (straight_line_with_scatter_log_likelihood(x,y,sigmay,m,b,log_s) +
straight_line_with_scatter_log_prior(m,b,log_s))
def straight_line_with_scatter_posterior(x,y,sigmay,m,b,log_s):
return np.exp(straight_line_with_scatter_log_posterior(x,y,sigmay,m,b,log_s))
# initial m, b, s
m,b,log_s = 2, 20, 0.
# step sizes
mstep, bstep, log_sstep = 1., 10., 1.
# how many steps?
nsteps = 30000
schain = []
sprobs = []
naccept = 0
print 'Running MH for', nsteps, 'steps'
L_old = straight_line_with_scatter_log_likelihood(x, y, sigmay, m, b, log_s)
p_old = straight_line_with_scatter_log_prior(m, b, log_s)
prob_old = np.exp(L_old + p_old)
for i in range(nsteps):
# step
mnew = m + np.random.normal() * mstep
bnew = b + np.random.normal() * bstep
log_snew = log_s + np.random.normal() * log_sstep
# evaluate probabilities
# prob_new = straight_line_with_scatter_posterior(x, y, sigmay, mnew, bnew, log_snew)
L_new = straight_line_with_scatter_log_likelihood(x, y, sigmay, mnew, bnew, log_snew)
p_new = straight_line_with_scatter_log_prior(mnew, bnew, log_snew)
prob_new = np.exp(L_new + p_new)
if (prob_new / prob_old > np.random.uniform()):
# accept
m = mnew
b = bnew
log_s = log_snew
L_old = L_new
p_old = p_new
prob_old = prob_new
naccept += 1
else:
# Stay where we are; m,b stay the same, and we append them
# to the chain below.
pass
schain.append((b,m,np.exp(log_s)))
sprobs.append((L_old,p_old))
print 'Acceptance fraction:', naccept/float(nsteps)
# Histograms:
import triangle
slo,shi = [0,10]
triangle.corner(schain, labels=['b','m','s'], range=[(blo,bhi),(mlo,mhi),(slo,shi)],quantiles=[0.16,0.5,0.84],
show_titles=True, title_args={"fontsize": 12},
plot_datapoints=True, fill_contours=True, levels=[0.68, 0.95], color='b', bins=20, smooth=1.0);
plt.show()
# Traces:
plt.clf()
plt.subplot(3,1,1)
plt.plot([b for b,m,s in schain], 'k-')
plt.ylabel('b')
plt.subplot(3,1,2)
plt.plot([m for b,m,s in schain], 'k-')
plt.ylabel('m')
plt.subplot(3,1,3)
plt.plot([s for b,m,s in schain], 'k-')
plt.ylabel('s')
plt.show()
Explanation: Note that we did not have to look up the "chi squared distribution" - we can simply compute the posterior predictive distribution given our generative model.
Intrinsic scatter
Now let's add an extra parameter to the model: intrinsic scatter. What does this mean? We imagine the model y values to be drawn from a PDF that is conditional on m, b and also s, the intrinsic scatter of the population. This scatter parameter can be inferred from the data as well: in this simple case we can introduce it, along with a "true" y value for each data point, and analytically marginalize over the "true" y's. This yields a new likelihood function, which looks as though it has an additional source of uncertainty in the y values - which is what scatter is.
End of explanation
# Draw a buttload of prior samples and hope for the best
N=50000
mm = np.random.uniform(mlo,mhi, size=N)
bb = np.random.uniform(blo,bhi, size=N)
slo,shi = [0.001,10]
log_slo = np.log(slo)
log_shi = np.log(shi)
log_ss = np.random.uniform(log_slo, log_shi, size=N)
log_likelihood_vanilla = np.zeros(N)
log_likelihood_scatter = np.zeros(N)
for i in range(N):
log_likelihood_vanilla[i] = straight_line_log_likelihood(x, y, sigmay, mm[i], bb[i])
log_likelihood_scatter[i] = straight_line_with_scatter_log_likelihood(x, y, sigmay, mm[i], bb[i], log_ss[i])
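# Log-sum-exp with the maximum factored out, so the very negative log-likelihoods
# do not all underflow to zero when exponentiated.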
def logsum(x):
mx = x.max()
return np.log(np.sum(np.exp(x - mx))) + mx
log_evidence_vanilla = logsum(log_likelihood_vanilla) - np.log(N)
log_evidence_scatter = logsum(log_likelihood_scatter) - np.log(N)
print 'Log evidence vanilla:', log_evidence_vanilla
print 'Log evidence scatter:', log_evidence_scatter
print 'Odds ratio in favour of the vanilla model:', np.exp(log_evidence_vanilla - log_evidence_scatter)
Explanation: Evidence comparison for intrinsic scatter
The other virtue we are looking for in a model is efficiency. Not because this is intrinsically a good thing, but rather because the data might prefer it. We can compare two models, given the data, with their relative posterior probabilities. This is not easy, because we have to specify their prior probabilities, and then compute the probability of getting the data given each model (the "Evidence") - but under certain assumptions we can do this.
Evidence computation is famously difficult - the simplest way is by simple Monte Carlo. We draw prior samples, and weight each one by the likelihood function. It's inefficient, but sometimes that doesn't matter.
End of explanation
mean_log_L_vanilla = np.average(np.atleast_1d(probs).T[0])
mean_log_L_scatter = np.average(np.atleast_1d(sprobs).T[0])
print "No scatter: Evidence, mean log L, difference: ",log_evidence_vanilla,mean_log_L_vanilla,(mean_log_L_vanilla - log_evidence_vanilla)
print " Scatter: Evidence, mean log L, difference: ",log_evidence_scatter,mean_log_L_scatter,(mean_log_L_scatter - log_evidence_scatter)
Explanation: In this case there is very little to choose between the two models. Both provide comparably good fits to the data, so the only thing working against the scatter model is its extra parameter. However, the prior for s is very well-matched to the data (uniform in log s corresponds to a 1/s distribution, favoring small values), and so there is not a very big "Occam's Razor" factor in the evidence. Both models are appropriate for this dataset.
Incidentally, let's look at a possible approximation for the evidence - the posterior mean log likelihood from our MCMC chains:
End of explanation
def likelihood_outliers((m, b, pbad), (x, y, sigmay, sigmabad)):
return np.prod(pbad * 1./(np.sqrt(2.*np.pi)*sigmabad) *
np.exp(-y**2 / (2.*sigmabad**2))
+ (1.-pbad) * (1./(np.sqrt(2.*np.pi)*sigmay)
* np.exp(-(y-(m*x+b))**2/(2.*sigmay**2))))
def prior_outliers((m, b, pbad)):
if pbad < 0:
return 0
if pbad > 1:
return 0
return 1.
def prob_outliers((m,b,pbad), x,y,sigmay,sigmabad):
return (likelihood_outliers((m,b,pbad), (x,y,sigmay,sigmabad)) *
prior_outliers((m,b,pbad)))
x,y,sigmay = data1.T
sigmabad = np.std(y)
prob_args = (x,y,sigmay,sigmabad)
mstep = 0.1
bstep = 1.
pbadstep = 0.01
proposal_args = ((mstep, bstep,pbadstep),)
m,b,pbad = 2.2, 30, 0.1
mh(prob_outliers, prob_args, gaussian_proposal, proposal_args,
(m,b,pbad), 100000);
def likelihood_t((m, b, nu), (x, y, sigmay)):
    # Student-t likelihood with nu degrees of freedom: heavier tails than a Gaussian,
    # so occasional outliers are penalized less harshly.
    from math import gamma
    z = (y - (m*x + b)) / sigmay
    norm = gamma((nu + 1.) / 2.) / (gamma(nu / 2.) * np.sqrt(nu * np.pi) * sigmay)
    return np.prod(norm * (1. + z**2 / nu)**(-(nu + 1.) / 2.))
def complexity_brewer_likelihood((m, b, q), (x, y, sigmay)):
# q: quadratic term
if q < 0:
q = 0
else:
k = 0.01
q = -k * np.log(1 - q)
return np.prod(np.exp(-(y-(b+m*x+q*(x - 150)**2))**2/(2.*sigmay**2)))
def complexity_brewer_prior((m,b,q)):
if q < -1:
return 0.
return 1.
def complexity_brewer_prob(params, *args):
return complexity_brewer_prior(params) * complexity_brewer_likelihood(params, args)
x,y,sigmay = get_data_no_outliers()
print 'x', x.min(), x.max()
print 'y', y.min(), y.max()
y = y + (0.001 * (x-150)**2)
prob_args = (x,y,sigmay)
mstep = 0.1
bstep = 1.
qstep = 0.1
proposal_args = ((mstep, bstep, qstep),)
m,b,q = 2.2, 30, 0.
plt.errorbar(x, y, fmt='.', yerr=sigmay)
plt.show()
mh(complexity_brewer_prob, prob_args, gaussian_proposal, proposal_args,
(m,b,q), 10000, pnames=['m','b','q']);
Explanation: The difference between the posterior mean log likelihood and the Evidence is the Shannon information gained when we updated the prior into the posterior. In both cases we gained about 2 bits of information - perhaps corresponding to approximately 2 good measurements (regardless of the number of parameters being inferred)?
End of explanation |
15,666 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load airports of each country
Step1: record schedules for 2 weeks, then augment count with weekly flight numbers.
seasonal and seasonal charter will count as once per week for 3 months, so 12/52 per week. TGM separate, since its history is in the past.
Step2: parse Arrivals
Step3: parse Departures
Step4: for c in AP | Python Code:
L=json.loads(file('../json/L.json','r').read())
M=json.loads(file('../json/M.json','r').read())
N=json.loads(file('../json/N.json','r').read())
import requests
AP={}
for c in M:
if c not in AP:AP[c]={}
for i in range(len(L[c])):
AP[c][N[c][i]]=L[c][i]
sch={}
Explanation: Load airports of each country
End of explanation
baseurl='https://www.airportia.com/'
import requests, urllib2
SC={}
Explanation: Record schedules for 2 weeks, then augment the counts with weekly flight numbers.
Seasonal and seasonal-charter flights count as once per week for 3 months, i.e. with a weight of 12/52 per week. TGM is handled separately, since its history is in the past.
End of explanation
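# Scrape the arrivals tables of every airport in every country, one day at a time over March 4-31.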
for c in AP:
print c
airportialinks=AP[c]
sch={}
for i in airportialinks:
print i,
if i not in sch:sch[i]={}
#march 4-31 = 4 weeks
for d in range (4,32):
if d not in sch[i]:
try:
#capture token
url=baseurl+airportialinks[i]+'arrivals/201703'+str(d)
s = requests.Session()
cookiesopen = s.get(url)
cookies=str(s.cookies)
fcookies=[[k[:k.find('=')],k[k.find('=')+1:k.find(' for ')]] for k in cookies[cookies.find('Cookie '):].split('Cookie ')[1:]]
#push token
opener = urllib2.build_opener()
for k in fcookies:
opener.addheaders.append(('Cookie', k[0]+'='+k[1]))
#read html
m=s.get(url).content
sch[i][url]=pd.read_html(m)[0]
except: pass #print 'no tables',i,d
print
SC[c]=sch
Explanation: parse Arrivals
End of explanation
SD={}
for c in AP:
print c
airportialinks=AP[c]
sch={}
for i in airportialinks:
print i,
if i not in sch:sch[i]={}
#march 4-31 = 4 weeks
for d in range (4,32):
if d not in sch[i]:
try:
#capture token
url=baseurl+airportialinks[i]+'departures/201703'+str(d)
s = requests.Session()
cookiesopen = s.get(url)
cookies=str(s.cookies)
fcookies=[[k[:k.find('=')],k[k.find('=')+1:k.find(' for ')]] for k in cookies[cookies.find('Cookie '):].split('Cookie ')[1:]]
#push token
opener = urllib2.build_opener()
for k in fcookies:
opener.addheaders.append(('Cookie', k[0]+'='+k[1]))
#read html
m=s.get(url).content
sch[i][url]=pd.read_html(m)[0]
except: pass #print 'no tables',i,d
print
SD[c]=sch
SC
Explanation: parse Departures
End of explanation
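# Concatenate the per-airport, per-day schedule tables into a single dataframe,
# tagging each row with its airport ('To') and the source page ('Date').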
mdf=pd.DataFrame()
for i in sch:
for d in sch[i]:
df=sch[i][d].drop(sch[i][d].columns[3:],axis=1).drop(sch[i][d].columns[0],axis=1)
df['To']=i
df['Date']=d
mdf=pd.concat([mdf,df])
mdf=mdf.replace('Hahn','Frankfurt')
mdf=mdf.replace('Hahn HHN','Frankfurt HHN')
mdf['City']=[i[:i.rfind(' ')] for i in mdf['From']]
mdf['Airport']=[i[i.rfind(' ')+1:] for i in mdf['From']]
file("mdf_ae_arrv.json",'w').write(json.dumps(mdf.reset_index().to_json()))
len(mdf)
airlines=set(mdf['Airline'])
cities=set(mdf['City'])
file("cities_ae_arrv.json",'w').write(json.dumps(list(cities)))
file("airlines_ae_arrv.json",'w').write(json.dumps(list(airlines)))
citycoords={}
for i in cities:
if i not in citycoords:
if i==u'Birmingham': z='Birmingham, UK'
elif i==u'Valencia': z='Valencia, Spain'
elif i==u'Naples': z='Naples, Italy'
elif i==u'St. Petersburg': z='St. Petersburg, Russia'
elif i==u'Bristol': z='Bristol, UK'
elif i==u'Victoria': z='Victoria, Seychelles'
elif i==u'Washington': z='Washington, DC'
elif i==u'Odessa': z='Odessa, Ukraine'
else: z=i
citycoords[i]=Geocoder(apik).geocode(z)
print i
citysave={}
for i in citycoords:
citysave[i]={"coords":citycoords[i][0].coordinates,
"country":citycoords[i][0].country}
file("citysave_ae_arrv.json",'w').write(json.dumps(citysave))
Explanation: for c in AP:
print c
airportialinks=AP[c]
sch={}
for i in airportialinks:
print i,
if i not in sch:sch[i]={}
#march 4-31 = 4 weeks
for d in range (4,32):
if d not in sch[i]:
try:
#capture token
url=baseurl+airportialinks[i]+'arrivals/201703'+str(d)
s = requests.Session()
cookiesopen = s.get(url)
cookies=str(s.cookies)
fcookies=[[k[:k.find('=')],k[k.find('=')+1:k.find(' for ')]] for k in cookies[cookies.find('Cookie '):].split('Cookie ')[1:]]
#push token
opener = urllib2.build_opener()
for k in fcookies:
opener.addheaders.append(('Cookie', k[0]+'='+k[1]))
#read html
m=s.get(url).content
sch[i][url]=pd.read_html(m)[0]
except: pass #print 'no tables',i,d
print
SC[c]=sch
End of explanation |
15,667 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Forwards & backwards flows
This recipe demonstrates how forwards and backwards flows work.
For demonstration, the CSV data is written directly in the cell below -- in practice you would want to load data a file.
Step2: Here is one structure, with nodes b and c both in the same vertical slice
Step3: Alternatively, if b is moved to the right, extra hidden waypoints are automatically added to get the b--c flow back to the left of c | Python Code:
import pandas as pd
from io import StringIO
flows = pd.read_csv(StringIO(
source,target,type,value
a,b,main,2
a,c,main,1
c,d,main,3
b,c,back,2
))
flows
Explanation: Forwards & backwards flows
This recipe demonstrates how forwards and backwards flows work.
For demonstration, the CSV data is written directly in the cell below -- in practice you would want to load data a file.
End of explanation
from floweaver import *
# Set the default size to fit the documentation better.
size = dict(width=570, height=300)
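# The 'back' waypoint (direction='L') carries the reverse b -> c flow leftwards so it can re-enter c from the left.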
nodes = {
'a': ProcessGroup(['a']),
'b': ProcessGroup(['b']),
'c': ProcessGroup(['c']),
'd': ProcessGroup(['d']),
'back': Waypoint(direction='L'),
}
bundles = [
Bundle('a', 'b'),
Bundle('a', 'c'),
Bundle('b', 'c', waypoints=['back']),
Bundle('c', 'd'),
Bundle('c', 'b'),
]
ordering = [
[['a'], []],
[['b', 'c'], ['back']],
[['d'], []],
]
sdd = SankeyDefinition(nodes, bundles, ordering)
weave(sdd, flows).to_widget(**size)
Explanation: Here is one structure, with nodes b and c both in the same vertical slice:
End of explanation
bundles = [
Bundle('a', 'b'),
Bundle('a', 'c'),
Bundle('b', 'c'),
Bundle('c', 'd'),
Bundle('c', 'b'),
]
ordering = [
[['a'], []],
[['c'], ['back']],
[['b', 'd'], []],
]
sdd = SankeyDefinition(nodes, bundles, ordering)
weave(sdd, flows).to_widget(**size)
Explanation: Alternatively, if b is moved to the right, extra hidden waypoints are automatically added to get the b--c flow back to the left of c:
End of explanation |
15,668 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
KBMOD Demo
The purpose of this demo is to showcase how KBMOD can be used to search through images for moving objects. The images used here are from the Subaru telescope and were processed using the LSST Science Pipelines.
Step1: searchImage contains the tools for constructing the likelihood images and performing the KBMOD detection algorithm. It saves these results to file and analysis and identification of the objects takes place using the analyzeImage tools.
Step2: First we create a mask to use in the images. Since some moving objects are bright enough to be masked in some of the images we set a fraction of the images that a pixel has to be masked in order to add it to the master mask. Below that setting is 75% of the images.
Step3: Next we use the mask and original images to build the Psi and Phi likelihood images.
Step4: Here we calculate the image time from the header and also load the MJD values of each image in order to calculate orbit details later.
Step5: Finally, we load the images themselves with the mask on top in order to build the postage stamps we will use to look at the potential objects we discover.
Step6: We also need the wcs for the images.
Step7: Our algorithm uses the PSF kernel from the LSST data management pipeline, but below we show that is it comparable to a 2-d Gaussian with a sigma of 1 pixel.
Step8: Here we set up a grid of angles from the ecliptic of -15 to 15 degrees and velocities ranging from 0.4 to 4.5 arcsec per hour in order to line up with the grid used in Fraser and Kavelaars (2009).
Step9: We currently have trouble with slower trajectories so we set a lower bound on the velocity for the demo.
Step10: We now have everything we need set up to begin searching in likelihood space for the images. We use the method findObjectsEcliptic now. Below we are setup to search a section of the images between pixels 1024 and 2048 in both the x and y axes.
In order to filter out results that start in the same place and travel along similar trajectories we cluster results in a 4-dimensional space
Step11: Now we will load the results and try to find the best possible objects. For this we use the methods in analyzeImage.
Step12: We sort these results by the ratio of maximum brightness within an aperture centered on a coadded postage stamp along the trajectory to the maximum brightness outside of this aperture. A stationary unmasked object that is in the results will have a streak of fairly uniform brightness along the trajectory's slope in a coadded postage stamp while a moving object will have a bright center since the single images that make the coadd of a trajectory move along with the object.
Step13: Notice that the best_targets array that comes out of the sorting algorithm actually rearranged the results. Here we plot the most likely objects and see that the sorting algorithm gave us an actual asteroid as the most likely object while the highest likelihood object was moved down the rankings (the best_targets output above shows it is the third in the images below).
Step14: We have also written methods to plot information related to a trajectory. Below we plot its path through one of the images and its light curve.
Step15: Finally, we can output the coordinates of the object throughout its trajectory and use this as input to orbit fitting software such as that of Bernstein and Khushalani (2000). | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from searchImage import searchImage
from analyzeImage import analyzeImage
%matplotlib inline
%load_ext autoreload
%autoreload 2
Explanation: KBMOD Demo
The purpose of this demo is to showcase how KBMOD can be used to search through images for moving objects. The images used here are from the Subaru telescope and were processed using the LSST Science Pipelines.
End of explanation
si = searchImage()
Explanation: searchImage contains the tools for constructing the likelihood images and performing the KBMOD detection algorithm. It saves these results to file; analysis and identification of the objects then takes place using the analyzeImage tools.
End of explanation
mask = si.createMask('data_repo/chip_0/', 0.75)
fig = plt.figure(figsize=(12,12))
plt.imshow(mask, origin='lower', cmap=plt.cm.Greys_r)
plt.xlabel('X Pixels')
plt.ylabel('Y Pixels')
Explanation: First we create a mask to use in the images. Since some moving objects are bright enough to be masked in some of the images we set a fraction of the images that a pixel has to be masked in order to add it to the master mask. Below that setting is 75% of the images.
End of explanation
psi_array, phi_array = si.calcPsiPhi('data_repo/chip_0', mask)
Explanation: Next we use the mask and original images to build the Psi and Phi likelihood images.
End of explanation
image_times, image_mjd = si.loadImageTimes('data_repo/chip_0')
Explanation: Here we calculate the image time from the header and also load the MJD values of each image in order to calculate orbit details later.
End of explanation
im_array = si.loadMaskedImages('data_repo/chip_0', mask)
Explanation: Finally, we load the images themselves with the mask on top in order to build the postage stamps we will use to look at the potential objects we discover.
End of explanation
wcs_list = si.loadWCSList('data_repo/chip_0')
Explanation: We also need the wcs for the images.
End of explanation
psf_array = si.loadPSF('data_repo/chip_0')
from createImage import createImage as ci
#Create a 2-d Gaussian with sigma 1 pixel in x and y directions and in the center of a 41 x 41 pixel grid with
#total flux equal to 1.0
gauss2d = ci().createGaussianSource([20., 20.], [1., 1.], [41., 41.], 1.)
fig = plt.figure(figsize=(12,12))
fig.add_subplot(1,2,1)
plt.imshow(psf_array[0], origin='lower', interpolation='None')#, vmin = -111, vmax = 119, cmap=plt.cm.Greys_r)
plt.title('LSST DM Kernel')
plt.colorbar()
fig.add_subplot(1,2,2)
plt.imshow(gauss2d, origin='lower', interpolation='None')
plt.title('2-D Gaussian')
plt.colorbar()
Explanation: Our algorithm uses the PSF kernel from the LSST data management pipeline, but below we show that it is comparable to a 2-d Gaussian with a sigma of 1 pixel.
End of explanation
angles = [-15., -7.5, 0, 7.5, 15]
rates = np.arange(0.4, 4.5, .2277)
para_steps = []
perp_steps = []
for rate in rates:
for angle in angles:
para_steps.append(rate*np.cos(np.radians(angle)))
perp_steps.append(rate*np.sin(np.radians(angle)))
para_steps = -1.*np.array(para_steps)
perp_steps = np.array(perp_steps)
Explanation: Here we set up a grid of angles from the ecliptic of -15 to 15 degrees and velocities ranging from 0.4 to 4.5 arcsec per hour in order to line up with the grid used in Fraser and Kavelaars (2009).
End of explanation
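# Keep only the faster part of the grid: the first 45 entries (the 9 slowest rates x 5 angles) are dropped.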
para_fast = para_steps[45:]
perp_fast = perp_steps[45:]
vel_grid = np.array([para_fast, perp_fast]).T
Explanation: We currently have trouble with slower trajectories so we set a lower bound on the velocity for the demo.
End of explanation
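# Search the chip one 1024x1024 pixel quadrant at a time (only quadrant (0, 0) in this demo).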
x_quad_size = 1024
y_quad_size = 1024
x_offset = 1024
y_offset = 1024
quad_results = {}
for quadrant_x in range(0,1):
for quadrant_y in range(0,1):
x_range = [x_offset+x_quad_size*quadrant_x, x_offset+x_quad_size*(quadrant_x+1)]
y_range = [y_offset+y_quad_size*quadrant_y, y_offset+y_quad_size*(quadrant_y+1)]
topResults = si.findObjectsEcliptic(psi_array, # The psi images
phi_array, # The phi images
vel_grid, # The velocity search grid
2.0, # The likelihood threshold
image_times, # The times after image 1, each image was taken.
[wcs_list[0]]*55, # The wcs values
xRange = x_range, # The x pixel coordinate range
yRange = y_range, # The y pixel coordinate range
out_file='results/chip_0/quad_%i_%i_test_test.txt' % (quadrant_x, quadrant_y))
Explanation: We now have everything we need set up to begin searching in likelihood space for the images. We use the method findObjectsEcliptic now. Below we are setup to search a section of the images between pixels 1024 and 2048 in both the x and y axes.
In order to filter out results that start in the same place and travel along similar trajectories we cluster results in a 4-dimensional space: x starting position, y starting position, total velocity and slope.
End of explanation
results = np.genfromtxt('results/chip_0/quad_0_0_test_test.txt', names=True)
ai = analyzeImage()
Explanation: Now we will load the results and try to find the best possible objects. For this we use the methods in analyzeImage.
End of explanation
best_targets = ai.sortCluster(results, im_array, image_times)
print best_targets
Explanation: We sort these results by the ratio of maximum brightness within an aperture centered on a coadded postage stamp along the trajectory to the maximum brightness outside of this aperture. A stationary unmasked object that is in the results will have a streak of fairly uniform brightness along the trajectory's slope in a coadded postage stamp while a moving object will have a bright center since the single images that make the coadd of a trajectory move along with the object.
End of explanation
fig = plt.figure(figsize=(18,12))
i=0
for imNum in range(5):
fig.add_subplot(1,5,imNum+1)
try:
plt.imshow(ai.createPostageStamp(im_array,
list(results[['t0_x', 't0_y']][best_targets][imNum]),
list(results[['v_x', 'v_y']][best_targets][imNum]),
image_times, [25., 25.])[0],
origin='lower', cmap=plt.cm.Greys_r, interpolation='None')
plt.title(str(' [x,y] = ' + str(list(results[['t0_x', 't0_y']][best_targets][imNum]))))
except:
continue
plt.tight_layout()
Explanation: Notice that the best_targets array that comes out of the sorting algorithm actually rearranged the results. Here we plot the most likely objects and see that the sorting algorithm gave us an actual asteroid as the most likely object while the highest likelihood object was moved down the rankings (the best_targets output above shows it is the third in the images below).
End of explanation
return_result = best_targets[0]
fig = plt.figure(figsize=(12,12))
ax = ai.plotTrajectory(results[return_result],
image_times, im_array[0], im_plot_args={'vmin':-111, 'vmax':111})
plt.xlabel('Pixel X')
plt.ylabel('Pixel Y')
fig = plt.figure()
ax = ai.plotLightCurves(im_array, results[return_result], image_times)
Explanation: We have also written methods to plot information related to a trajectory. Below we plot its path through one of the images and its light curve.
End of explanation
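# The trailing arguments are interpreted here as per-epoch positional uncertainties (0.1) and
# MPC observatory code 568 (Mauna Kea, where Subaru is located), as required by the orbit-fitting input.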
test_coords = ai.return_ra_dec(np.array(list(results[['t0_x', 't0_y']][return_result])),
np.array(list(results[['v_x', 'v_y']][return_result])),
image_times, image_mjd, wcs_list[0], np.ones(len(image_times))*.1, 568)
test_coords
Explanation: Finally, we can output the coordinates of the object throughout its trajectory and use this as input to orbit fitting software such as that of Bernstein and Khushalani (2000).
End of explanation |
15,669 | Given the following text description, write Python code to implement the functionality described.
Description:
Given a string representing a space separated lowercase letters, return a dictionary
of the letter with the most repetition and containing the corresponding count.
If several letters have the same occurrence, return all of them.
Example:
histogram('a b c') == {'a': 1, 'b': 1, 'c': 1}
histogram('a b b a') == {'a': 2, 'b': 2}
histogram('a b c a b') == {'a': 2, 'b': 2}
histogram('b b b b a') == {'b': 4}
histogram('') == {}
| Python Code:
def histogram(test):
dict1={}
list1=test.split(" ")
t=0
for i in list1:
if(list1.count(i)>t) and i!='':
t=list1.count(i)
if t>0:
for i in list1:
if(list1.count(i)==t):
dict1[i]=t
return dict1 |
15,670 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Convolutional Neural Network for MNIST Classification. This solution got me a score of 0.98 on the leaderboard.
Note
Step1: Simulation Constants
Download notebook and use commented out values for better performance.
Step2: Import Data
prepare data by
applying 1-hot encoding
Step3: Model
Let's now build a network with two convolutional layers, followed by one fully connected layer. Since this is computationally pretty expensive, we'll limit the depth and number of fully connected nodes for this online notebook.
We initialize the input data with placeholders
Step4: We choose a 4 layered network consisting of 2 convolutional layers with weights and biases (w1, b1) and (w2,b2), followed by a fully connected hidden layer (w3,b3) with #HIDDEN hidden neurons and an output layer (w4, b4) with 10 output nodes (one-hot encoding).
We initialize the weights and biases such that the kernel has a patch size of PATCH and the depth of the second convolutional layer is twice the depth of the first convolutional layer (DEPTH). For the rest, the fully connected hidden layer has HIDDEN hidden neurons.
Step5: We use the categorical cross entropy loss for training the model.
As optimizer we could use a Gradient Descent optimizer [with or without decaying learning rate] or one of the more sophisticated (and easier to optimize) optimizers like Adam or RMSProp
Step6: Train
open the session
Step7: Run the session (Run this cell again if the desired accuracy is not yet reached).
Step8: Visualize the training history
Step9: Results
Step10: Make a prediction about the test labels
Step11: Plot an example
Step12: Submission
Step13: Close Session
(note | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn.model_selection import ShuffleSplit
from sklearn.preprocessing import StandardScaler, LabelEncoder, OneHotEncoder
Explanation: A Convolutional Neural Network for MNIST Classification. This solution got me a score of 0.98 on the leaderboard.
Note: this solution is loosely based on the official tensorflow tutorial.
Packages and Imports
End of explanation
LABELS = 10 # Number of different types of labels (1-10)
WIDTH = 28 # width / height of the image
CHANNELS = 1 # Number of colors in the image (greyscale)
VALID = 10000 # Validation data size
STEPS = 3500 #20000 # Number of steps to run
BATCH = 100 # Stochastic Gradient Descent batch size
PATCH = 5 # Convolutional Kernel size
DEPTH = 8 #32 # Convolutional Kernel depth size == Number of Convolutional Kernels
HIDDEN = 100 #1024 # Number of hidden neurons in the fully connected layer
LR = 0.001 # Learning rate
Explanation: Simulation Constants
Download notebook and use commented out values for better performance.
End of explanation
data = pd.read_csv('../input/train.csv') # Read csv file in pandas dataframe
labels = np.array(data.pop('label')) # Remove the labels as a numpy array from the dataframe
labels = LabelEncoder().fit_transform(labels)[:, None]
labels = OneHotEncoder().fit_transform(labels).todense()
data = StandardScaler().fit_transform(np.float32(data.values)) # Convert the dataframe to a numpy array
data = data.reshape(-1, WIDTH, WIDTH, CHANNELS) # Reshape the data into 42000 2d images
train_data, valid_data = data[:-VALID], data[-VALID:]
train_labels, valid_labels = labels[:-VALID], labels[-VALID:]
print('train data shape = ' + str(train_data.shape) + ' = (TRAIN, WIDTH, WIDTH, CHANNELS)')
print('labels shape = ' + str(labels.shape) + ' = (TRAIN, LABELS)')
Explanation: Import Data
prepare data by
applying 1-hot encoding: 1 = [1,0,0...0], 2 = [0,1,0...0] ...
reshaping into image shape: (# images, # vertical height, # horizontal width, # colors)
splitting data into train and validation set.
End of explanation
tf_data = tf.placeholder(tf.float32, shape=(None, WIDTH, WIDTH, CHANNELS))
tf_labels = tf.placeholder(tf.float32, shape=(None, LABELS))
Explanation: Model
Let's now build a network with two convolutional layers, followed by one fully connected layer. Since this is computationally pretty expensive, we'll limit the depth and number of fully connected nodes for this online notebook.
We initialize the input data with placeholders
End of explanation
w1 = tf.Variable(tf.truncated_normal([PATCH, PATCH, CHANNELS, DEPTH], stddev=0.1))
b1 = tf.Variable(tf.zeros([DEPTH]))
w2 = tf.Variable(tf.truncated_normal([PATCH, PATCH, DEPTH, 2*DEPTH], stddev=0.1))
b2 = tf.Variable(tf.constant(1.0, shape=[2*DEPTH]))
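# After two 2x2 max-pool layers the 28x28 image is reduced to 7x7 (WIDTH // 4 per side) with
# 2*DEPTH feature maps, which fixes the flattened input size of the fully connected layer below.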
w3 = tf.Variable(tf.truncated_normal([WIDTH // 4 * WIDTH // 4 * 2*DEPTH, HIDDEN], stddev=0.1))
b3 = tf.Variable(tf.constant(1.0, shape=[HIDDEN]))
w4 = tf.Variable(tf.truncated_normal([HIDDEN, LABELS], stddev=0.1))
b4 = tf.Variable(tf.constant(1.0, shape=[LABELS]))
def logits(data):
# Convolutional layer 1
x = tf.nn.conv2d(data, w1, [1, 1, 1, 1], padding='SAME')
x = tf.nn.max_pool(x, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')
x = tf.nn.relu(x + b1)
# Convolutional layer 2
x = tf.nn.conv2d(x, w2, [1, 1, 1, 1], padding='SAME')
x = tf.nn.max_pool(x, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')
x = tf.nn.relu(x + b2)
# Fully connected layer
x = tf.reshape(x, (-1, WIDTH // 4 * WIDTH // 4 * 2*DEPTH))
x = tf.nn.relu(tf.matmul(x, w3) + b3)
return tf.matmul(x, w4) + b4
# Prediction:
tf_pred = tf.nn.softmax(logits(tf_data))
Explanation: We choose a 4 layered network consisting of 2 convolutional layers with weights and biases (w1, b1) and (w2,b2), followed by a fully connected hidden layer (w3,b3) with #HIDDEN hidden neurons and an output layer (w4, b4) with 10 output nodes (one-hot encoding).
We initialize the weights and biases such that the kernel has a patch size of PATCH and the depth of the second convolutional layer is twice the depth of the first convolutional layer (DEPTH). For the rest, the fully connected hidden layer has HIDDEN hidden neurons.
End of explanation
tf_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits(tf_data),
labels=tf_labels))
tf_acc = 100*tf.reduce_mean(tf.to_float(tf.equal(tf.argmax(tf_pred, 1), tf.argmax(tf_labels, 1))))
#tf_opt = tf.train.GradientDescentOptimizer(LR)
#tf_opt = tf.train.AdamOptimizer(LR)
tf_opt = tf.train.RMSPropOptimizer(LR)
tf_step = tf_opt.minimize(tf_loss)
Explanation: We use the categorical cross entropy loss for training the model.
As optimizer we could use a Gradient Descent optimizer [with or without decaying learning rate] or one of the more sophisticated (and easier to optimize) optimizers like Adam or RMSProp
End of explanation
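# If the decaying-learning-rate variant mentioned above were wanted, one sketch looks like this
# (the decay_steps / decay_rate values are illustrative guesses, not tuned settings):
global_step = tf.Variable(0, trainable=False)
decayed_lr = tf.train.exponential_decay(LR, global_step, decay_steps=1000, decay_rate=0.95, staircase=True)
tf_step_decay = tf.train.GradientDescentOptimizer(decayed_lr).minimize(tf_loss, global_step=global_step)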
init = tf.global_variables_initializer()
session = tf.Session()
session.run(init)
Explanation: Train
open the session
End of explanation
ss = ShuffleSplit(n_splits=STEPS, train_size=BATCH)
ss.get_n_splits(train_data, train_labels)
history = [(0, np.nan, 10)] # Initial Error Measures
for step, (idx, _) in enumerate(ss.split(train_data,train_labels), start=1):
fd = {tf_data:train_data[idx], tf_labels:train_labels[idx]}
session.run(tf_step, feed_dict=fd)
if step%500 == 0:
fd = {tf_data:valid_data, tf_labels:valid_labels}
valid_loss, valid_accuracy = session.run([tf_loss, tf_acc], feed_dict=fd)
history.append((step, valid_loss, valid_accuracy))
print('Step %i \t Valid. Acc. = %f'%(step, valid_accuracy), end='\n')
Explanation: Run the session (Run this cell again if the desired accuracy is not yet reached).
End of explanation
steps, loss, acc = zip(*history)
fig = plt.figure()
plt.title('Validation Loss / Accuracy')
ax_loss = fig.add_subplot(111)
ax_acc = ax_loss.twinx()
plt.xlabel('Training Steps')
plt.xlim(0, max(steps))
ax_loss.plot(steps, loss, '-o', color='C0')
ax_loss.set_ylabel('Log Loss', color='C0');
ax_loss.tick_params('y', colors='C0')
ax_loss.set_ylim(0.01, 0.5)
ax_acc.plot(steps, acc, '-o', color='C1')
ax_acc.set_ylabel('Accuracy [%]', color='C1');
ax_acc.tick_params('y', colors='C1')
ax_acc.set_ylim(1,100)
plt.show()
Explanation: Visualize the training history:
End of explanation
test = pd.read_csv('../input/test.csv') # Read csv file in pandas dataframe
test_data = StandardScaler().fit_transform(np.float32(test.values)) # Convert the dataframe to a numpy array
test_data = test_data.reshape(-1, WIDTH, WIDTH, CHANNELS) # Reshape the data into 42000 2d images
Explanation: Results
End of explanation
test_pred = session.run(tf_pred, feed_dict={tf_data:test_data})
test_labels = np.argmax(test_pred, axis=1)
Explanation: Make a prediction about the test labels
End of explanation
k = 0 # Try different image indices k
print("Label Prediction: %i"%test_labels[k])
fig = plt.figure(figsize=(2,2)); plt.axis('off')
plt.imshow(test_data[k,:,:,0]); plt.show()
Explanation: Plot an example:
End of explanation
submission = pd.DataFrame(data={'ImageId':(np.arange(test_labels.shape[0])+1), 'Label':test_labels})
submission.to_csv('submission.csv', index=False)
submission.tail()
Explanation: Submission
End of explanation
#session.close()
Explanation: Close Session
(note: once the session is closed, the training cell cannot be run again...)
End of explanation |
15,671 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Text Classification of Movie Reviews
Unpack data - this only works on linux and (maybe?) OS X. Unpack using 7zip on Windows.
Step1: N-Grams | Python Code:
#! tar -xf data/aclImdb.tar.bz2 --directory data
from sklearn.datasets import load_files
reviews_train = load_files("data/aclImdb/train/")
text_train, y_train = reviews_train.data, reviews_train.target
print("Number of documents in training data: %d" % len(text_train))
print(np.bincount(y_train))
reviews_test = load_files("data/aclImdb/test/")
text_test, y_test = reviews_test.data, reviews_test.target
print("Number of documents in test data: %d" % len(text_test))
print(np.bincount(y_test))
from IPython.display import HTML
print(text_train[1])
HTML(text_train[1].decode("utf-8"))
print(y_train[1])
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer()
cv.fit(text_train)
len(cv.vocabulary_)
print(cv.get_feature_names()[:50])
print(cv.get_feature_names()[50000:50050])
X_train = cv.transform(text_train)
X_train
print(text_train[19726])
X_train[19726].nonzero()[1]
X_test = cv.transform(text_test)
from sklearn.svm import LinearSVC
svm = LinearSVC()
svm.fit(X_train, y_train)
svm.score(X_train, y_train)
svm.score(X_test, y_test)
def visualize_coefficients(classifier, feature_names, n_top_features=25):
# get coefficients with large absolute values
coef = classifier.coef_.ravel()
positive_coefficients = np.argsort(coef)[-n_top_features:]
negative_coefficients = np.argsort(coef)[:n_top_features]
interesting_coefficients = np.hstack([negative_coefficients, positive_coefficients])
# plot them
plt.figure(figsize=(15, 5))
colors = ["red" if c < 0 else "blue" for c in coef[interesting_coefficients]]
plt.bar(np.arange(2 * n_top_features), coef[interesting_coefficients], color=colors)
feature_names = np.array(feature_names)
plt.subplots_adjust(bottom=0.3)
plt.xticks(np.arange(1, 1 + 2 * n_top_features), feature_names[interesting_coefficients], rotation=60, ha="right");
visualize_coefficients(svm, cv.get_feature_names())
svm = LinearSVC(C=0.001)
svm.fit(X_train, y_train)
visualize_coefficients(svm, cv.get_feature_names())
from sklearn.pipeline import make_pipeline
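# Keeping the vectorizer and classifier in one pipeline means each cross-validation fold
# re-fits the vocabulary on its own training split, avoiding leakage from the held-out fold.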
text_pipe = make_pipeline(CountVectorizer(), LinearSVC())
text_pipe.fit(text_train, y_train)
text_pipe.score(text_test, y_test)
from sklearn.grid_search import GridSearchCV
import time
start = time.time()
param_grid = {'linearsvc__C': np.logspace(-5, 0, 6)}
grid = GridSearchCV(text_pipe, param_grid, cv=5)
grid.fit(text_train, y_train)
print(time.time() - start)
grid.best_score_
def plot_grid_1d(grid_search_cv, ax=None):
if ax is None:
ax = plt.gca()
if len(grid_search_cv.param_grid.keys()) > 1:
raise ValueError("More then one parameter found. Can't do 1d plot.")
score_means, score_stds = zip(*[(np.mean(score.cv_validation_scores), np.std(score.cv_validation_scores))
for score in grid_search_cv.grid_scores_])
score_means, score_stds = np.array(score_means), np.array(score_stds)
parameters = next(grid_search_cv.param_grid.values().__iter__())
artists = []
artists.extend(ax.plot(score_means))
artists.append(ax.fill_between(range(len(parameters)), score_means - score_stds,
score_means + score_stds, alpha=0.2, color="b"))
ax.set_xticklabels(parameters)
plot_grid_1d(grid)
grid.best_params_
visualize_coefficients(grid.best_estimator_.named_steps['linearsvc'],
grid.best_estimator_.named_steps['countvectorizer'].get_feature_names())
grid.best_score_
grid.score(text_test, y_test)
Explanation: Text Classification of Movie Reviews
Unpack data - this only works on linux and (maybe?) OS X. Unpack using 7zip on Windows.
End of explanation
text_pipe = make_pipeline(CountVectorizer(), LinearSVC())
param_grid = {'linearsvc__C': np.logspace(-3, 2, 6),
"countvectorizer__ngram_range": [(1, 1), (1, 2)]}
grid = GridSearchCV(text_pipe, param_grid, cv=5)
grid.fit(text_train, y_train)
scores = np.array([score.mean_validation_score for score in grid.grid_scores_]).reshape(2, -1)
plt.matshow(scores)
plt.ylabel("n-gram range")
plt.yticks(range(2), param_grid["countvectorizer__ngram_range"])
plt.xlabel("C")
plt.xticks(range(6), param_grid["linearsvc__C"]);
plt.colorbar()
grid.best_params_
visualize_coefficients(grid.best_estimator_.named_steps['linearsvc'],
grid.best_estimator_.named_steps['countvectorizer'].get_feature_names())
grid.score(text_test, y_test)
Explanation: N-Grams
End of explanation |
15,672 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading and writing data -- Tour of Beam
So far we've learned some of the basic transforms like
Map,
FlatMap,
Filter,
Combine, and
GroupByKey.
These allow us to transform data in any way, but so far we've used
Create
to get data from an in-memory
iterable, like a list.
This works well for experimenting with small datasets. For larger datasets we can use Source transforms to read data and Sink transforms to write data.
If there are no built-in Source or Sink transforms, we can also easily create our custom I/O transforms.
Let's create some data files and see how we can read them in Beam.
Step1: Reading from text files
We can use the
ReadFromText
transform to read text files into str elements.
It takes a
glob pattern
as an input, and reads all the files that match that pattern.
It returns one element for each line in the file.
For example, in the pattern data/*.txt, the * is a wildcard that matches anything. This pattern matches all the files in the data/ directory with a .txt extension.
Step2: Writing to text files
We can use the
WriteToText transform to write str elements into text files.
It takes a file path prefix as an input, and it writes the all str elements into one or more files with filenames starting with that prefix. You can optionally pass a file_name_suffix as well, usually used for the file extension. Each element goes into its own line in the output files.
Step3: Reading data
Your data might reside in various input formats. Take a look at the
Built-in I/O Transforms
page for a list of all the available I/O transforms in Beam.
If none of those work for you, you might need to create your own input transform.
ℹ️ For a more in-depth guide, take a look at the
Developing a new I/O connector page.
Reading from an iterable
The easiest way to create elements is using
FlatMap.
A common way is having a generator function. This could take an input and expand it into a large amount of elements. The nice thing about generators is that they don't have to fit everything into memory like a list, they simply
yield
elements as they process them.
For example, let's define a generator called count, that yields the numbers from 0 to n. We use Create for the initial n value(s) and then exapand them with FlatMap.
Step4: Creating an input transform
For a nicer interface, we could abstract the Create and the FlatMap into a custom PTransform. This would give a more intuitive way to use it, while hiding the inner workings.
We need a new class that inherits from beam.PTransform. We can do this more conveniently with the
beam.ptransform_fn decorator.
The PTransform function takes the input PCollection as the first argument, and any other inputs from the generator function, like n, can be arguments to the PTransform as well. The original generator function can be defined locally within the PTransform.
Finally, we apply the Create and FlatMap transforms and return a new PCollection.
We can also, optionally, add type hints with the with_input_types and with_output_types decorators. They serve both as documentation, and are a way to ensure your data types are consistent throughout your pipeline. This becomes more useful as the complexity grows.
Since our PTransform is expected to be the first transform in the pipeline, it doesn't receive any inputs. We can mark it as the beginning with the PBegin type hint.
Finally, to enable type checking, you can pass --type_check_additional=all when running your pipeline. Alternatively, you can also pass it directly to PipelineOptions if you want them enabled by default. To learn more about pipeline options, see Configuring pipeline options.
Step5: Example
Step6: Example
Step7: We could use a FlatMap transform to receive a SQL query and yield each result row, but that would mean creating a new database connection for each query. If we generated a large number of queries, creating that many connections could be a bottleneck.
It would be nice to create the database connection only once for each worker, and every query could use the same connection if needed.
We can use a
custom DoFn transform
for this. It allows us to open and close resources, like the database connection, only once per DoFn instance by using the setup and teardown methods.
ℹ️ It should be safe to read from a database with multiple concurrent processes using the same connection, but only one process should be writing at once.
Step8: Writing data
You might want to write your data in various output formats. Take a look at the
Built-in I/O Transforms
page for a list of all the available I/O transforms in Beam.
If none of those work for you, you might need to create your own output transform.
ℹ️ For a more in-depth guide, take a look at the
Developing a new I/O connector page.
Creating an output transform
The most straightforward way to write data would be to use a Map transform to write each element into our desired output format. In most cases, however, this would result in a lot of overhead creating, connecting to, and/or deleting resources.
Instead, most data services are optimized to write batches of elements at a time. Batch writes only connects to the service once, and can load many elements at a time.
Here, we discuss two common ways of batching elements for optimized writes
Step9: Writing windows of elements
If the order of the elements is important, we could batch the elements by windows. This could be useful in streaming pipelines, where we have an indefinite number of incoming elements and we would like to write windows as they are being processed.
ℹ️ For more information about windows and triggers, check the Windowing page.
We use a
custom DoFn transform
to extract the window start time and end time.
We use this for the file names of the output files. | Python Code:
# Install apache-beam with pip.
!pip install --quiet apache-beam
# Create a directory for our data files.
!mkdir -p data
%%writefile data/my-text-file-1.txt
This is just a plain text file, UTF-8 strings are allowed 🎉.
Each line in the file is one element in the PCollection.
%%writefile data/my-text-file-2.txt
There are no guarantees on the order of the elements.
ฅ^•ﻌ•^ฅ
%%writefile data/penguins.csv
species,culmen_length_mm,culmen_depth_mm,flipper_length_mm,body_mass_g
0,0.2545454545454545,0.6666666666666666,0.15254237288135594,0.2916666666666667
0,0.26909090909090905,0.5119047619047618,0.23728813559322035,0.3055555555555556
1,0.5236363636363636,0.5714285714285713,0.3389830508474576,0.2222222222222222
1,0.6509090909090909,0.7619047619047619,0.4067796610169492,0.3333333333333333
2,0.509090909090909,0.011904761904761862,0.6610169491525424,0.5
2,0.6509090909090909,0.38095238095238104,0.9830508474576272,0.8333333333333334
Explanation: Reading and writing data -- Tour of Beam
So far we've learned some of the basic transforms like
Map,
FlatMap,
Filter,
Combine, and
GroupByKey.
These allow us to transform data in any way, but so far we've used
Create
to get data from an in-memory
iterable, like a list.
This works well for experimenting with small datasets. For larger datasets we can use Source transforms to read data and Sink transforms to write data.
If there are no built-in Source or Sink transforms, we can also easily create our custom I/O transforms.
Let's create some data files and see how we can read them in Beam.
End of explanation
import apache_beam as beam
input_files = 'data/*.txt'
with beam.Pipeline() as pipeline:
(
pipeline
| 'Read files' >> beam.io.ReadFromText(input_files)
| 'Print contents' >> beam.Map(print)
)
Explanation: Reading from text files
We can use the
ReadFromText
transform to read text files into str elements.
It takes a
glob pattern
as an input, and reads all the files that match that pattern.
It returns one element for each line in the file.
For example, in the pattern data/*.txt, the * is a wildcard that matches anything. This pattern matches all the files in the data/ directory with a .txt extension.
End of explanation
import apache_beam as beam
output_file_name_prefix = 'outputs/file'
with beam.Pipeline() as pipeline:
(
pipeline
| 'Create file lines' >> beam.Create([
'Each element must be a string.',
'It writes one element per line.',
'There are no guarantees on the line order.',
'The data might be written into multiple files.',
])
| 'Write to files' >> beam.io.WriteToText(
output_file_name_prefix,
file_name_suffix='.txt')
)
# Lets look at the output files and contents.
!head outputs/file*.txt
Explanation: Writing to text files
We can use the
WriteToText transform to write str elements into text files.
It takes a file path prefix as an input, and it writes all the str elements into one or more files with filenames starting with that prefix. You can optionally pass a file_name_suffix as well, usually used for the file extension. Each element goes into its own line in the output files.
End of explanation
import apache_beam as beam
from typing import Iterable
def count(n: int) -> Iterable[int]:
for i in range(n):
yield i
n = 5
with beam.Pipeline() as pipeline:
(
pipeline
| 'Create inputs' >> beam.Create([n])
| 'Generate elements' >> beam.FlatMap(count)
| 'Print elements' >> beam.Map(print)
)
Explanation: Reading data
Your data might reside in various input formats. Take a look at the
Built-in I/O Transforms
page for a list of all the available I/O transforms in Beam.
If none of those work for you, you might need to create your own input transform.
ℹ️ For a more in-depth guide, take a look at the
Developing a new I/O connector page.
Reading from an iterable
The easiest way to create elements is using
FlatMap.
A common way is having a generator function. This could take an input and expand it into a large number of elements. The nice thing about generators is that they don't have to fit everything into memory like a list; they simply
yield
elements as they process them.
For example, let's define a generator called count that yields the numbers from 0 to n. We use Create for the initial n value(s) and then expand them with FlatMap.
End of explanation
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from typing import Iterable
@beam.ptransform_fn
@beam.typehints.with_input_types(beam.pvalue.PBegin)
@beam.typehints.with_output_types(int)
def Count(pbegin: beam.pvalue.PBegin, n: int) -> beam.PCollection[int]:
def count(n: int) -> Iterable[int]:
for i in range(n):
yield i
return (
pbegin
| 'Create inputs' >> beam.Create([n])
| 'Generate elements' >> beam.FlatMap(count)
)
n = 5
options = PipelineOptions(flags=[], type_check_additional='all')
with beam.Pipeline(options=options) as pipeline:
(
pipeline
| f'Count to {n}' >> Count(n)
| 'Print elements' >> beam.Map(print)
)
Explanation: Creating an input transform
For a nicer interface, we could abstract the Create and the FlatMap into a custom PTransform. This would give a more intuitive way to use it, while hiding the inner workings.
We need a new class that inherits from beam.PTransform. We can do this more conveniently with the
beam.ptransform_fn decorator.
The PTransform function takes the input PCollection as the first argument, and any other inputs from the generator function, like n, can be arguments to the PTransform as well. The original generator function can be defined locally within the PTransform.
Finally, we apply the Create and FlatMap transforms and return a new PCollection.
We can also, optionally, add type hints with the with_input_types and with_output_types decorators. They serve both as documentation, and are a way to ensure your data types are consistent throughout your pipeline. This becomes more useful as the complexity grows.
Since our PTransform is expected to be the first transform in the pipeline, it doesn't receive any inputs. We can mark it as the beginning with the PBegin type hint.
Finally, to enable type checking, you can pass --type_check_additional=all when running your pipeline. Alternatively, you can also pass it directly to PipelineOptions if you want them enabled by default. To learn more about pipeline options, see Configuring pipeline options.
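For example, the same setting can be forwarded through command-line style flags when building the options; this sketch is equivalent to the type_check_additional argument used in the code above:
options = PipelineOptions(flags=['--type_check_additional=all'])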
End of explanation
import apache_beam as beam
from apache_beam.io.filesystems import FileSystems as beam_fs
from apache_beam.options.pipeline_options import PipelineOptions
import codecs
import csv
from typing import Dict, Iterable, List
@beam.ptransform_fn
@beam.typehints.with_input_types(beam.pvalue.PBegin)
@beam.typehints.with_output_types(Dict[str, str])
def ReadCsvFiles(pbegin: beam.pvalue.PBegin, file_patterns: List[str]) -> beam.PCollection[Dict[str, str]]:
def expand_pattern(pattern: str) -> Iterable[str]:
for match_result in beam_fs.match([pattern])[0].metadata_list:
yield match_result.path
def read_csv_lines(file_name: str) -> Iterable[Dict[str, str]]:
with beam_fs.open(file_name) as f:
# Beam reads files as bytes, but csv expects strings,
# so we need to decode the bytes into utf-8 strings.
for row in csv.DictReader(codecs.iterdecode(f, 'utf-8')):
yield dict(row)
return (
pbegin
| 'Create file patterns' >> beam.Create(file_patterns)
| 'Expand file patterns' >> beam.FlatMap(expand_pattern)
| 'Read CSV lines' >> beam.FlatMap(read_csv_lines)
)
input_patterns = ['data/*.csv']
options = PipelineOptions(flags=[], type_check_additional='all')
with beam.Pipeline(options=options) as pipeline:
(
pipeline
| 'Read CSV files' >> ReadCsvFiles(input_patterns)
| 'Print elements' >> beam.Map(print)
)
Explanation: Example: Reading CSV files
Let's say we want to read CSV files to get elements as Python dictionaries. We like how ReadFromText expands a file pattern, but we might want to allow for multiple patterns as well.
We create a ReadCsvFiles transform, which takes a list of file_patterns as input. It expands all the glob patterns, and then, for each file name it reads each row as a dict using the
csv.DictReader class from the csv module.
We could use the open function to open a local file, but Beam already supports several different file systems besides local files.
To leverage that, we can use the apache_beam.io.filesystems module.
ℹ️ The open
function from the Beam filesystem reads bytes,
it's roughly equivalent to opening a file in rb mode.
To write a file, you would use
create instead.
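A minimal sketch of writing through the Beam filesystem (the file name here is just an example); note that create works with bytes:
with beam_fs.create('data/example-output.txt') as f:
  f.write('hello from the Beam filesystem\n'.encode('utf-8'))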
End of explanation
#@title Creating the SQLite database
import sqlite3
database_file = "moon-phases.db" #@param {type:"string"}
with sqlite3.connect(database_file) as db:
cursor = db.cursor()
# Create the moon_phases table.
cursor.execute('''
CREATE TABLE IF NOT EXISTS moon_phases (
id INTEGER PRIMARY KEY,
phase_emoji TEXT NOT NULL,
peak_datetime DATETIME NOT NULL,
phase TEXT NOT NULL)''')
# Truncate the table if it's already populated.
cursor.execute('DELETE FROM moon_phases')
# Insert some sample data.
insert_moon_phase = 'INSERT INTO moon_phases(phase_emoji, peak_datetime, phase) VALUES(?, ?, ?)'
cursor.execute(insert_moon_phase, ('🌕', '2017-12-03 15:47:00', 'Full Moon'))
cursor.execute(insert_moon_phase, ('🌗', '2017-12-10 07:51:00', 'Last Quarter'))
cursor.execute(insert_moon_phase, ('🌑', '2017-12-18 06:30:00', 'New Moon'))
cursor.execute(insert_moon_phase, ('🌓', '2017-12-26 09:20:00', 'First Quarter'))
cursor.execute(insert_moon_phase, ('🌕', '2018-01-02 02:24:00', 'Full Moon'))
cursor.execute(insert_moon_phase, ('🌗', '2018-01-08 22:25:00', 'Last Quarter'))
cursor.execute(insert_moon_phase, ('🌑', '2018-01-17 02:17:00', 'New Moon'))
cursor.execute(insert_moon_phase, ('🌓', '2018-01-24 22:20:00', 'First Quarter'))
cursor.execute(insert_moon_phase, ('🌕', '2018-01-31 13:27:00', 'Full Moon'))
# Query for the data in the table to make sure it's populated.
cursor.execute('SELECT * FROM moon_phases')
for row in cursor.fetchall():
print(row)
Explanation: Example: Reading from a SQLite database
Let's begin by creating a small local SQLite database file.
Run the "Creating the SQLite database" cell to create a new SQLite3 database with the filename you choose. You can double-click it to see the source code if you want.
End of explanation
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
import sqlite3
from typing import Dict, Iterable, List, Tuple
class SQLiteSelect(beam.DoFn):
def __init__(self, database_file: str):
self.database_file = database_file
self.connection = None
def setup(self):
self.connection = sqlite3.connect(self.database_file)
def process(self, query: Tuple[str, List[str]]) -> Iterable[Dict[str, str]]:
table, columns = query
cursor = self.connection.cursor()
cursor.execute(f"SELECT {','.join(columns)} FROM {table}")
for row in cursor.fetchall():
yield dict(zip(columns, row))
def teardown(self):
self.connection.close()
@beam.ptransform_fn
@beam.typehints.with_input_types(beam.pvalue.PBegin)
@beam.typehints.with_output_types(Dict[str, str])
def SelectFromSQLite(
pbegin: beam.pvalue.PBegin,
database_file: str,
queries: List[Tuple[str, List[str]]],
) -> beam.PCollection[Dict[str, str]]:
return (
pbegin
| 'Create None' >> beam.Create(queries)
| 'SQLite SELECT' >> beam.ParDo(SQLiteSelect(database_file))
)
queries = [
# (table_name, [column1, column2, ...])
('moon_phases', ['phase_emoji', 'peak_datetime', 'phase']),
('moon_phases', ['phase_emoji', 'phase']),
]
options = PipelineOptions(flags=[], type_check_additional='all')
with beam.Pipeline(options=options) as pipeline:
(
pipeline
| 'Read from SQLite' >> SelectFromSQLite(database_file, queries)
| 'Print rows' >> beam.Map(print)
)
Explanation: We could use a FlatMap transform to receive a SQL query and yield each result row, but that would mean creating a new database connection for each query. If we generated a large number of queries, creating that many connections could be a bottleneck.
It would be nice to create the database connection only once for each worker, and every query could use the same connection if needed.
We can use a
custom DoFn transform
for this. It allows us to open and close resources, like the database connection, only once per DoFn instance by using the setup and teardown methods.
ℹ️ It should be safe to read from a database with multiple concurrent processes using the same connection, but only one process should be writing at once.
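If a resource only needs to live for a single bundle of elements rather than for the whole DoFn instance, the start_bundle and finish_bundle hooks can be used instead. A rough sketch:
class BundleBuffer(beam.DoFn):
  def start_bundle(self):
    self.rows = []  # per-bundle scratch space

  def process(self, element):
    self.rows.append(element)
    yield element

  def finish_bundle(self):
    self.rows = None  # release per-bundle resources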
End of explanation
import apache_beam as beam
from apache_beam.io.filesystems import FileSystems as beam_fs
from apache_beam.options.pipeline_options import PipelineOptions
import os
import uuid
from typing import Iterable
@beam.ptransform_fn
@beam.typehints.with_input_types(str)
@beam.typehints.with_output_types(beam.pvalue.PDone)
def WriteBatchesToFiles(
pcollection: beam.PCollection[str],
file_name_prefix: str,
file_name_suffix: str = '.txt',
batch_size: int = 100,
) -> beam.pvalue.PDone:
def expand_pattern(pattern):
for match_result in beam_fs.match([pattern])[0].metadata_list:
yield match_result.path
def write_file(lines: Iterable[str]):
file_name = f"{file_name_prefix}-{uuid.uuid4().hex}{file_name_suffix}"
with beam_fs.create(file_name) as f:
for line in lines:
f.write(f"{line}\n".encode('utf-8'))
# Remove existing files matching the output file_name pattern.
for path in expand_pattern(f"{file_name_prefix}*{file_name_suffix}"):
os.remove(path)
return (
pcollection
# For simplicity we key with `None` and discard it.
| 'Key with None' >> beam.WithKeys(lambda _: None)
| 'Group into batches' >> beam.GroupIntoBatches(batch_size)
| 'Discard key' >> beam.Values()
| 'Write file' >> beam.Map(write_file)
)
output_file_name_prefix = 'outputs/batch'
options = PipelineOptions(flags=[], type_check_additional='all')
with beam.Pipeline(options=options) as pipeline:
(
pipeline
| 'Create file lines' >> beam.Create([
'Each element must be a string.',
'It writes one element per line.',
'There are no guarantees on the line order.',
'The data might be written into multiple files.',
])
| 'Write batches to files' >> WriteBatchesToFiles(
file_name_prefix=output_file_name_prefix,
file_name_suffix='.txt',
batch_size=3,
)
)
# Let's look at the output files and contents.
!head outputs/batch*.txt
Explanation: Writing data
You might want to write your data in various output formats. Take a look at the
Built-in I/O Transforms
page for a list of all the available I/O transforms in Beam.
If none of those work for you, you might need to create your own output transform.
ℹ️ For a more in-depth guide, take a look at the
Developing a new I/O connector page.
Creating an output transform
The most straightforward way to write data would be to use a Map transform to write each element into our desired output format. In most cases, however, this would result in a lot of overhead creating, connecting to, and/or deleting resources.
Instead, most data services are optimized to write batches of elements at a time. Batch writes only connect to the service once and can load many elements at a time.
Here, we discuss two common ways of batching elements for optimized writes: fixed-sized batches, and
windows
of elements.
Writing fixed-sized batches
If the order of the elements is not important, we can simply create fixed-sized batches and write those independently.
We can use
GroupIntoBatches
to get fixed-sized batches. Note that it expects (key, value) pairs. Since GroupIntoBatches is an aggregation, all the elements in a batch must fit into memory for each worker.
ℹ️ GroupIntoBatches requires a (key, value) pair. For simplicity, this example uses a placeholder None key and discards it later. Depending on your data, there might be a key that makes more sense. Using a balanced key, where each key contains around the same number of elements, may help parallelize the batching process.
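With a real key, the same batching looks roughly like this (a sketch with made-up (key, value) pairs):
with beam.Pipeline() as pipeline:
  (
      pipeline
      | 'Create keyed values' >> beam.Create(
          [('even', 2), ('even', 4), ('odd', 1), ('odd', 3)])
      | 'Batch per key' >> beam.GroupIntoBatches(2)
      | 'Print batches' >> beam.Map(print)
  )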
Let's create something similar to WriteToText but keep it simple with a unique identifier in the file name instead of the file count.
To write a file using the Beam filesystems module, we need to use create, which writes bytes into the file.
ℹ️ To read a file, use the open
function instead.
For the output type hint, we can use PDone to indicate this is the last transform in a given pipeline.
End of explanation
import apache_beam as beam
from apache_beam.io.filesystems import FileSystems as beam_fs
from apache_beam.options.pipeline_options import PipelineOptions
from datetime import datetime
import time
from typing import Any, Dict, Iterable
def unix_time(time_str: str) -> float:
return time.mktime(time.strptime(time_str, '%Y-%m-%d %H:%M:%S'))
class WithWindowInfo(beam.DoFn):
def process(self, element: Any, window=beam.DoFn.WindowParam) -> Iterable[Dict[str, Any]]:
yield {
'element': element,
'window_start': window.start.to_utc_datetime(),
'window_end': window.end.to_utc_datetime(),
}
@beam.ptransform_fn
@beam.typehints.with_input_types(str)
@beam.typehints.with_output_types(beam.pvalue.PDone)
def WriteWindowsToFiles(
pcollection: beam.PCollection[str],
file_name_prefix: str,
file_name_suffix: str = '.txt',
) -> beam.pvalue.PDone:
def write_file(batch: Dict[str, Any]):
start_date = batch['window_start'].date()
start_time = batch['window_start'].time()
end_time = batch['window_end'].time()
file_name = f"{file_name_prefix}-{start_date}-{start_time}-{end_time}{file_name_suffix}"
with beam_fs.create(file_name) as f:
for x in batch['element']:
f.write(f"{x}\n".encode('utf-8'))
return (
pcollection
| 'Group all per window' >> beam.GroupBy(lambda _: None)
| 'Discard key' >> beam.Values()
| 'Get window info' >> beam.ParDo(WithWindowInfo())
| 'Write files' >> beam.Map(write_file)
)
output_file_name_prefix = 'outputs/window'
window_size_sec = 5 * 60 # 5 minutes
options = PipelineOptions(flags=[], type_check_additional='all')
with beam.Pipeline(options=options) as pipeline:
(
pipeline
| 'Create elements' >> beam.Create([
{'timestamp': unix_time('2020-03-19 08:49:00'), 'event': 'login'},
{'timestamp': unix_time('2020-03-19 08:49:20'), 'event': 'view_account'},
{'timestamp': unix_time('2020-03-19 08:50:00'), 'event': 'view_orders'},
{'timestamp': unix_time('2020-03-19 08:51:00'), 'event': 'track_order'},
{'timestamp': unix_time('2020-03-19 09:00:00'), 'event': 'logout'},
])
| 'With timestamps' >> beam.Map(
lambda x: beam.window.TimestampedValue(x, x['timestamp']))
| 'Fixed-sized windows' >> beam.WindowInto(
beam.window.FixedWindows(window_size_sec))
| 'To string' >> beam.Map(
lambda x: f"{datetime.fromtimestamp(x['timestamp'])}: {x['event']}")
| 'Write windows to files' >> WriteWindowsToFiles(
file_name_prefix=output_file_name_prefix,
file_name_suffix='.txt',
)
)
# Let's look at the output files and contents.
!head outputs/window*.txt
Explanation: Writing windows of elements
If the order of the elements is important, we could batch the elements by windows. This could be useful in streaming pipelines, where we have an indefinite number of incoming elements and we would like to write windows as they are being processed.
ℹ️ For more information about windows and triggers, check the Windowing page.
We use a
custom DoFn transform
to extract the window start time and end time.
We use this for the file names of the output files.
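The same mechanism also gives access to each element's timestamp, in case you want it in the file contents as well. A small sketch:
class WithTimestampInfo(beam.DoFn):
  def process(self, element, timestamp=beam.DoFn.TimestampParam):
    yield {'element': element, 'timestamp': timestamp.to_utc_datetime()}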
End of explanation |
15,673 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div align="right">Python 3.6 Jupyter Notebook</div>
Graph signal processing
<br>
<div class="alert alert-warning">
<b>Note</b>
Step1: First, simple filtering on a noisy graph signal will be demonstrated. This is based on an example in an article by Nathanael Perraudin et al. (2016) titled "GSPBOX | Python Code:
%%html
%matplotlib inline
import matplotlib
#import pygsp #Uncomment if you have pygsp installed.
import numpy as np
import matplotlib.pylab as plt
import networkx as nx
import pandas as pd
plt.rcParams['figure.figsize'] = (6, 6)
Explanation: <div align="right">Python 3.6 Jupyter Notebook</div>
Graph signal processing
<br>
<div class="alert alert-warning">
<b>Note</b>:<br> You have been provided with the code required to execute the content of this notebook. Due to the time required for installation, and the strain on the virtual analysis environment, the output has been included as static images, and you should not attempt to execute the code cells.
</div>
Notebook introduction
The video content highlighted the importance of understanding graphs as data representation objects that can capture and describe relationships between data entities. Applications of graphs extend across numerous network types, including transportation, geographical, and social networks. The weight associated with each edge in the graph often represents the similarity between the two vertices it connects, or the strength of such a relationship. The earlier notebooks in this module demonstrated how to explore and exploit edge structure (connectedness) properties in order to understand the structure of the graphs. You were then able to use this knowledge in clustering, and the identification of communities in graphs (using graph partitioning algorithms). Besides the information about the relationship between connected components, there is an ever-increasing amount of information about the components themselves as a result of big data. Therefore, a field of research and application (graph signal processing), which is based on a simple model for graph-structured data (a graph signal), is emerging.
The data on these graphs can be visualized as a finite collection of samples, with one sample at each vertex in the graph, and each such sample described by a scalar value. The collection of these scalar samples, defined on each vertex of a graph, is referred to as a graph signal. The figure below shows an example of a graph signal, where each bar represents a random positive value generated on the vertices of a Petersen graph (Shuman et al. 2013). Thus, the relational structure is now paired with measurements on the nodes of the network.
Graph signal processing can be considered a generalization of the classical signal processing framework in the graph spectral domain. As discussed in the video content, just as the frequency-based domain representation of a signal decomposes a signal into harmonics of varying frequencies, so too does graph signal processing demonstrate how fast a graph signal changes with respect to graph topology. For example, this can be used in tracking the shifts of personal preferences between friends in a social network.
In graph signal processing, the graph Laplacian matrix is the core operator. The spectral decomposition of this matrix results in eigenspaces. This approach is similar to the use of sinusoidal functions in classical frequency analysis. Graph-based signal processing can be used in compression, denoising, interpolation, and many other applications.
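As a rough illustration outside of pygsp (using only the networkx and numpy imports in the code cell), the Laplacian of a small graph and its spectrum, the basis that graph signal processing builds on, can be inspected directly:
G_nx = nx.petersen_graph()
L = nx.laplacian_matrix(G_nx).toarray().astype(float)
eigenvalues, eigenvectors = np.linalg.eigh(L)  # graph "frequencies" and Fourier-like basis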
Example applications of graph signal processing include the following:
- Sensor networks: This is in terms of the relative positions of sensors, and temperature. Does temperature vary smoothly?
- Social network analysis: This analysis can be done on aspects such as friendship, relationship, and age. Are friends of similar ages?
- Image processing: This is in terms of pixel positions and similarity, pixel values, discontinuities, and smoothness.
- Mobility inference: This inference can enable an understanding of people’s behaviors, while simultaneously protecting their privacy.
Next, this notebook will demonstrate simple filtering applications based on graph signal processing. This material is primarily for review purposes only. A good mathematical foundation in linear algebra and calculus is required for a more thorough treatment of this topic.
Note:
The examples that follow make use of PYGSP, the graph signal processing Python module. This module is not installed because it may cause instability of your AWS instance. The code used to generate the results has been provided for you to review separately.
1. Generic example of graph signal processing
Load libraries
End of explanation
%%html
## Create a graph.
N = 100 # number of nodes.
G = pygsp.graphs.Sensor(N)
## Compute the Fourier basis.
G.compute_fourier_basis()
## Create a smooth signal with noise.
## The second Eigenvector of the Laplacian matrix, often called the Fiedler vector,
# can be considered as a smooth graph signal.
x = G.U[:, 1]
y = x + np.random.normal(scale=1/np.sqrt(N), size=N)
## Select a filter.
filter = pygsp.filters.Expwin(G, 0.1)
## Filter the noise.
s = filter.analysis(y)
## Display the original signal.
G.plot_signal(x, default_qtg=False, plot_name='original_signal',savefig=True)
## Display the noisy signal.
G.plot_signal(y, default_qtg=False, plot_name='noisy_signal',savefig=True)
## Display the filtered signal.
G.plot_signal(s, default_qtg=False, plot_name='filtered_signal',savefig=True)
Explanation: First, simple filtering on a noisy graph signal will be demonstrated. This is based on an example in an article by Nathanael Perraudin et al. (2016) titled "GSPBOX: A toolbox for signal processing on graphs".
Note:
The code below is included for review purposes only. As a safeguard, the code cell will output HTML when run.
End of explanation |
15,674 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
WavefunctionPlot
The WavefunctionPlot class will help you very easily generate and display wavefunctions from a Hamiltonian or any other source. If you already have your wavefunction in a grid, you can use GridPlot.
<div class="alert alert-info">
Note
`WavefunctionPlot` is just an extension of `GridPlot`, so everything in [the GridPlot notebook](./GridPlot.html) applies and this notebook **will only display the additional features**.
</div>
Step1: Generating wavefunctions from a hamiltonian
We will create a toy graphene tight binding hamiltonian, but you could have read the Hamiltonian from any source. Note that your hamiltonian needs to contain the corresponding geometry with the right orbitals, otherwise we have no idea what the shape of the wavefunction is.
Step2: Now that we have our hamiltonian, plotting a wavefunction is as simple as
Step3: That truly is an ugly wavefunction.
Selecting the wavefunction
By default, WavefunctionPlot gives you the first wavefunction at the gamma point. You can control this behavior by tuning the i and k settings.
For example, to get the second wavefunction at the gamma point
Step4: You can also select the spin with the spin setting (if you have, of course, a spin polarized Hamiltonian).
<div class="alert alert-info">
Note
If you update the **number of the wavefunction, the eigenstates are already calculated**, so there's no need to recalculate them. However, changing the **k point** or the **spin component** will trigger a **recalculation of the eigenstates**.
</div>
Grid precision
The wavefunction is projected in a grid, and how fine that grid is will determine the resolution. You can control this with the grid_prec setting, which accepts the grid precision in Angstrom. Let's check the difference in 2D, where it will be best appreciated
Step5: Much better, isn't it? Notice how it didn't look that bad in 3D, because the grid is smooth, so its values are nicely interpolated. You can also appreciate this by setting zsmooth to "best" in 2D, which does an "OK job" at guessing the values.
Step6: <div class="alert alert-warning">
Warning
Keep in mind that a finer grid will **occupy more memory and take more time to generate and render**, and sometimes it might be unnecessary to make your grid very fine, especially if it's smooth.
</div>
GridPlot settings
As stated at the beginning of this notebook, you have all the power of GridPlot available to you. Therefore you can, for example, display supercells of the resulting wavefunctions (please don't tile the hamiltonian!
Step7: We hope you enjoyed what you learned!
This next cell is just to create the thumbnail for the notebook in the docs | Python Code:
import sisl
import sisl.viz
Explanation: WavefunctionPlot
The WavefunctionPlot class will help you very easily generate and display wavefunctions from a Hamiltonian or any other source. If you already have your wavefunction in a grid, you can use GridPlot.
<div class="alert alert-info">
Note
`WavefunctionPlot` is just an extension of `GridPlot`, so everything in [the GridPlot notebook](./GridPlot.html) applies and this notebook **will only display the additional features**.
</div>
End of explanation
import numpy as np
r = np.linspace(0, 3.5, 50)
f = np.exp(-r)
orb = sisl.AtomicOrbital('2pzZ', (r, f))
geom = sisl.geom.graphene(orthogonal=True, atoms=sisl.Atom(6, orb))
geom = geom.move([0, 0, 5])
H = sisl.Hamiltonian(geom)
H.construct([(0.1, 1.44), (0, -2.7)], )
Explanation: Generating wavefunctions from a hamiltonian
We will create a toy graphene tight binding hamiltonian, but you could have read the Hamiltonian from any source. Note that your hamiltonian needs to contain the corresponding geometry with the right orbitals, otherwise we have no idea what the shape of the wavefunction is.
End of explanation
H.plot.wavefunction()
Explanation: Now that we have our hamiltonian, plotting a wavefunction is as simple as:
End of explanation
plot = H.plot.wavefunction(i=2, k=(0, 0, 0))
plot
Explanation: That truly is an ugly wavefunction.
Selecting the wavefunction
By default, WavefunctionPlot gives you the first wavefunction at the gamma point. You can control this behavior by tuning the i and k settings.
For example, to get the second wavefunction at the gamma point:
End of explanation
plot.update_settings(axes="xy", k=(0,0,0), transforms=["square"]) # by default grid_prec is 0.2 Ang
plot.update_settings(grid_prec=0.05)
Explanation: You can also select the spin with the spin setting (if you have, of course, a spin polarized Hamiltonian).
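For a spin polarized Hamiltonian this could look something like the following sketch (here 0 is assumed to select the first spin channel):
plot.update_settings(i=2, spin=0)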
<div class="alert alert-info">
Note
If you update the **number of the wavefunction, the eigenstates are already calculated**, so there's no need to recalculate them. However, changing the **k point** or the **spin component** will trigger a **recalculation of the eigenstates**.
</div>
Grid precision
The wavefunction is projected in a grid, and how fine that grid is will determine the resolution. You can control this with the grid_prec setting, which accepts the grid precision in Angstrom. Let's check the difference in 2D, where it will be best appreciated:
End of explanation
plot.update_settings(grid_prec=0.2, zsmooth="best")
Explanation: Much better, isn't it? Notice how it didn't look that bad in 3D, because the grid is smooth, so its values are nicely interpolated. You can also appreciate this by setting zsmooth to "best" in 2D, which does an "OK job" at guessing the values.
End of explanation
plot.update_settings(axes="xyz", nsc=[2,2,1], grid_prec=0.1, transforms=[],
isos=[
{"val": -0.07, "opacity": 1, "color": "salmon"},
{"val": 0.07, "opacity": 0.7, "color": "blue"}
],
geom_kwargs={"atoms_style": dict(color=["orange", "red", "green", "pink"])},
)
Explanation: <div class="alert alert-warning">
Warning
Keep in mind that a finer grid will **occupy more memory and take more time to generate and render**, and sometimes it might be unnecessary to make your grid very fine, especially if it's smooth.
</div>
GridPlot settings
As stated at the beginning of this notebook, you have all the power of GridPlot available to you. Therefore you can, for example, display supercells of the resulting wavefunctions (please don't tile the hamiltonian! :)).
End of explanation
thumbnail_plot = plot
if thumbnail_plot:
thumbnail_plot.show("png")
Explanation: We hope you enjoyed what you learned!
This next cell is just to create the thumbnail for the notebook in the docs
End of explanation |
15,675 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Xarray with Dask Arrays
<img src="images/dataset-diagram-logo.png"
align="right"
width="66%"
alt="Xarray Dataset">
Xarray is an open source project and Python package that extends the labeled data functionality of Pandas to N-dimensional array-like datasets. It shares a similar API to NumPy and Pandas and supports both Dask and NumPy arrays under the hood.
Step1: Start Dask Client for Dashboard
Starting the Dask Client is optional. It will provide a dashboard which
is useful to gain insight on the computation.
The link to the dashboard will become visible when you create the client below. We recommend having it open on one side of your screen while using your notebook on the other side. This can take some effort to arrange your windows, but seeing them both at the same time is very useful when learning.
Step2: Open a sample dataset
We will use some of xarray's tutorial data for this example. By specifying the chunk shape, xarray will automatically create Dask arrays for each data variable in the Dataset. In xarray, Datasets are dict-like containers of labeled arrays, analogous to the pandas.DataFrame. Note that we're taking advantage of xarray's dimension labels when specifying chunk shapes.
Step3: Quickly inspecting the Dataset above, we'll note that this Dataset has three dimensions akin to axes in NumPy (lat, lon, and time), three coordinate variables akin to pandas.Index objects (also named lat, lon, and time), and one data variable (air). Xarray also holds Dataset-specific metadata as attributes.
Step4: Each data variable in xarray is called a DataArray. These are the fundamental labeled array objects in xarray. Much like the Dataset, DataArrays also have dimensions and coordinates that support many of its label-based operations.
Step5: Accessing the underlying array of data is done via the data property. Here we can see that we have a Dask array. If this array were to be backed by a NumPy array, this property would point to the actual values in the array.
Use Standard Xarray Operations
In almost all cases, operations using xarray objects are identical, regardless of whether the underlying data is stored as a Dask array or a NumPy array.
Step6: Call .compute() or .load() when you want your result as a xarray.DataArray with data stored as NumPy arrays.
If you started Client() above then you may want to watch the status page during computation.
Step7: Persist data in memory
If you have the available RAM for your dataset then you can persist data in memory.
This allows future computations to be much faster.
Step8: Time Series Operations
Because we have a datetime index time-series operations work efficiently. Here we demo the use of xarray's resample method
Step9: and rolling window operations
Step10: Since xarray stores each of its coordinate variables in memory, slicing by label is trivial and entirely lazy.
Step11: Custom workflows and automatic parallelization
Almost all of xarray’s built-in operations work on Dask arrays. If you want to use a function that isn’t wrapped by xarray, one option is to extract Dask arrays from xarray objects (.data) and use Dask directly.
Another option is to use xarray's apply_ufunc() function, which can automate embarrassingly parallel "map" type operations where a function written for processing NumPy arrays should be repeatedly applied to xarray objects containing Dask arrays. It works similarly to dask.array.map_blocks() and dask.array.atop(), but without requiring an intermediate layer of abstraction.
Here we show an example using NumPy operations and a fast function from bottleneck, which we use to calculate Spearman’s rank-correlation coefficient
Step12: In the examples above, we were working with an some air temperature data. For this example, we'll calculate the spearman correlation using the raw air temperature data with the smoothed version that we also created (da_smooth). For this, we'll also have to rechunk the data ahead of time. | Python Code:
%matplotlib inline
from dask.distributed import Client
import xarray as xr
Explanation: Xarray with Dask Arrays
<img src="images/dataset-diagram-logo.png"
align="right"
width="66%"
alt="Xarray Dataset">
Xarray is an open source project and Python package that extends the labeled data functionality of Pandas to N-dimensional array-like datasets. It shares a similar API to NumPy and Pandas and supports both Dask and NumPy arrays under the hood.
End of explanation
client = Client(n_workers=8, threads_per_worker=2, memory_limit='1GB')
client
Explanation: Start Dask Client for Dashboard
Starting the Dask Client is optional. It will provide a dashboard which
is useful to gain insight on the computation.
The link to the dashboard will become visible when you create the client below. We recommend having it open on one side of your screen while using your notebook on the other side. This can take some effort to arrange your windows, but seeing them both at the same time is very useful when learning.
End of explanation
ds = xr.tutorial.open_dataset('air_temperature',
chunks={'lat': 25, 'lon': 25, 'time': -1})
ds
Explanation: Open a sample dataset
We will use some of xarray's tutorial data for this example. By specifying the chunk shape, xarray will automatically create Dask arrays for each data variable in the Dataset. In xarray, Datasets are dict-like containers of labeled arrays, analogous to the pandas.DataFrame. Note that we're taking advantage of xarray's dimension labels when specifying chunk shapes.
End of explanation
da = ds['air']
da
Explanation: Quickly inspecting the Dataset above, we'll note that this Dataset has three dimensions akin to axes in NumPy (lat, lon, and time), three coordinate variables akin to pandas.Index objects (also named lat, lon, and time), and one data variable (air). Xarray also holds Dataset-specific metadata as attributes.
End of explanation
da.data
Explanation: Each data variable in xarray is called a DataArray. These are the fundamental labeled array objects in xarray. Much like the Dataset, DataArrays also have dimensions and coordinates that support many of its label-based operations.
End of explanation
da2 = da.groupby('time.month').mean('time')
da3 = da - da2
da3
Explanation: Accessing the underlying array of data is done via the data property. Here we can see that we have a Dask array. If this array were to be backed by a NumPy array, this property would point to the actual values in the array.
Use Standard Xarray Operations
In almost all cases, operations using xarray objects are identical, regardless of whether the underlying data is stored as a Dask array or a NumPy array.
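For example, NumPy-style arithmetic on the Dask-backed DataArray stays lazy until you explicitly compute it. A small sketch:
anomaly = (da - da.mean('time')) / da.std('time')  # still a lazy, Dask-backed DataArray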
End of explanation
computed_da = da3.load()
type(computed_da.data)
computed_da
Explanation: Call .compute() or .load() when you want your result as a xarray.DataArray with data stored as NumPy arrays.
If you started Client() above then you may want to watch the status page during computation.
End of explanation
da = da.persist()
Explanation: Persist data in memory
If you have the available RAM for your dataset then you can persist data in memory.
This allows future computations to be much faster.
End of explanation
da.resample(time='1w').mean('time').std('time')
da.resample(time='1w').mean('time').std('time').load().plot(figsize=(12, 8))
Explanation: Time Series Operations
Because we have a datetime index time-series operations work efficiently. Here we demo the use of xarray's resample method:
End of explanation
da_smooth = da.rolling(time=30).mean().persist()
da_smooth
Explanation: and rolling window operations:
End of explanation
%time da.sel(time='2013-01-01T18:00:00')
%time da.sel(time='2013-01-01T18:00:00').load()
Explanation: Since xarray stores each of its coordinate variables in memory, slicing by label is trivial and entirely lazy.
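Selecting a whole range of labels works the same way and also stays lazy, for example (a sketch):
week = da.sel(time=slice('2013-01-01', '2013-01-07'))
week.mean('time')  # still lazy until .load() or .compute()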
End of explanation
import numpy as np
import xarray as xr
import bottleneck
def covariance_gufunc(x, y):
return ((x - x.mean(axis=-1, keepdims=True))
* (y - y.mean(axis=-1, keepdims=True))).mean(axis=-1)
def pearson_correlation_gufunc(x, y):
return covariance_gufunc(x, y) / (x.std(axis=-1) * y.std(axis=-1))
def spearman_correlation_gufunc(x, y):
x_ranks = bottleneck.rankdata(x, axis=-1)
y_ranks = bottleneck.rankdata(y, axis=-1)
return pearson_correlation_gufunc(x_ranks, y_ranks)
def spearman_correlation(x, y, dim):
return xr.apply_ufunc(
spearman_correlation_gufunc, x, y,
input_core_dims=[[dim], [dim]],
dask='parallelized',
output_dtypes=[float])
Explanation: Custom workflows and automatic parallelization
Almost all of xarray’s built-in operations work on Dask arrays. If you want to use a function that isn’t wrapped by xarray, one option is to extract Dask arrays from xarray objects (.data) and use Dask directly.
Another option is to use xarray's apply_ufunc() function, which can automate embarrassingly parallel "map" type operations where a function written for processing NumPy arrays should be repeatedly applied to xarray objects containing Dask arrays. It works similarly to dask.array.map_blocks() and dask.array.atop(), but without requiring an intermediate layer of abstraction.
Here we show an example using NumPy operations and a fast function from bottleneck, which we use to calculate Spearman’s rank-correlation coefficient:
End of explanation
corr = spearman_correlation(da.chunk({'time': -1}),
da_smooth.chunk({'time': -1}),
'time')
corr
corr.plot(figsize=(12, 8))
Explanation: In the examples above, we were working with some air temperature data. For this example, we'll calculate the Spearman correlation using the raw air temperature data with the smoothed version that we also created (da_smooth). For this, we'll also have to rechunk the data ahead of time.
End of explanation |
15,676 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sample for KFServing SDK v1beta1
This is a sample for KFServing SDK v1beta1.
The notebook shows how to use KFServing SDK to create, get and delete InferenceService.
Step1: Define the namespace where the InferenceService needs to be deployed. If not specified, the function below sets the namespace to the one where the SDK is running in the cluster; otherwise it deploys to the default namespace.
Step2: Define InferenceService
First define the default endpoint spec, and then define the InferenceService based on the endpoint spec.
Step3: Create InferenceService
Call KFServingClient to create InferenceService.
Step4: Check the InferenceService
Step5: Patch the InferenceService and define Canary Traffic Percent
Step6: Check the InferenceService after Patching
Step7: Delete the InferenceService | Python Code:
from kubernetes import client
from kfserving import KFServingClient
from kfserving import constants
from kfserving import utils
from kfserving import V1beta1InferenceService
from kfserving import V1beta1InferenceServiceSpec
from kfserving import V1beta1PredictorSpec
from kfserving import V1beta1TFServingSpec
Explanation: Sample for KFServing SDK v1beta1
This is a sample for KFServing SDK v1beta1.
The notebook shows how to use KFServing SDK to create, get and delete InferenceService.
End of explanation
#namespace = utils.get_default_target_namespace()
namespace = 'kfserving-test'
Explanation: Define the namespace where the InferenceService needs to be deployed. If not specified, the function below sets the namespace to the one where the SDK is running in the cluster; otherwise it deploys to the default namespace.
End of explanation
kfserving_version = 'v1beta1'  # API version string for this v1beta1 sample
api_version = constants.KFSERVING_GROUP + '/' + kfserving_version
isvc = V1beta1InferenceService(api_version=api_version,
kind=constants.KFSERVING_KIND,
metadata=client.V1ObjectMeta(
name='flower-sample', namespace=namespace),
spec=V1beta1InferenceServiceSpec(
predictor=V1beta1PredictorSpec(
tensorflow=(V1beta1TFServingSpec(
storage_uri='gs://kfserving-samples/models/tensorflow/flowers'))))
)
Explanation: Define InferenceService
First define the default endpoint spec, and then define the InferenceService based on the endpoint spec.
End of explanation
KFServing = KFServingClient()
KFServing.create(isvc)
Explanation: Create InferenceService
Call KFServingClient to create InferenceService.
End of explanation
KFServing.get('flower-sample', namespace=namespace, watch=True, timeout_seconds=120)
Explanation: Check the InferenceService
End of explanation
isvc = V1beta1InferenceService(api_version=api_version,
kind=constants.KFSERVING_KIND,
metadata=client.V1ObjectMeta(
name='flower-sample', namespace=namespace),
spec=V1beta1InferenceServiceSpec(
predictor=V1beta1PredictorSpec(
canary_traffic_percent=20,
tensorflow=(V1beta1TFServingSpec(
storage_uri='gs://kfserving-samples/models/tensorflow/flowers-2'))))
)
KFServing.patch('flower-sample', isvc, namespace=namespace)
Explanation: Patch the InferenceService and define Canary Traffic Percent
End of explanation
KFServing.wait_isvc_ready('flower-sample', namespace=namespace)
KFServing.get('flower-sample', namespace=namespace, watch=True)
Explanation: Check the InferenceService after Patching
End of explanation
KFServing.delete('flower-sample', namespace=namespace)
Explanation: Delete the InferenceService
End of explanation |
15,677 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Neural networks
Step1: Let's load the data
Step2: Alternatively, the data can be loaded directly from the UCI repository using the urllib library.
Step3: Let's extract the target variable from the data. The classes in this problem are imbalanced
Step4: Two-layer neural network
A two-layer neural network is a recognition function that can be written as the following superposition
Step5: Let's initialize the main parameters of the problem
Step6: Let's initialize the ClassificationDataSet data structure used by pybrain. Its constructor takes two arguments
Step7: Let's initialize the two-layer network and optimize its parameters. The initialization arguments are
Step8: Let's optimize the network parameters. The plot below shows the convergence of the error function on the training/validation parts.
Step9: Let's compute the share of wrong answers on the training and test sets.
Step10: Task. Finding the optimal number of neurons.
In this task you need to study how the error on the test set depends on the number of neurons in the hidden layer of the network. The numbers of neurons to try are stored in the vector
hidden_neurons_num = [50, 100, 200, 500, 700, 1000]
For a fixed train/test split, compute the share of wrong answers (classification errors) on the training/test sets as a function of the number of neurons in the hidden layer. Store the results in the arrays res_train_vec and res_test_vec, respectively. Use the plot_classification_error function to plot the training/test errors against the number of neurons. Are the error curves increasing/decreasing? At what number of neurons is the classification error minimal?
С помощью функции write_answer_nn запишите в выходной файл число | Python Code:
# Выполним инициализацию основных используемых модулей
%matplotlib inline
import random
import matplotlib.pyplot as plt
from sklearn.preprocessing import normalize
import numpy as np
Explanation: Neural networks: how the error and learning capacity depend on the number of neurons
In this assignment you will tune a two-layer neural network for a multiclass classification problem. You are asked to load and split the input data, train the network, and compute the classification error, and then to determine the optimal number of neurons in the hidden layer. The number of neurons should be chosen so that the model is, on the one hand, not too complex and, on the other hand, gives a sufficiently accurate prediction without overfitting. The goal of the assignment is to show how the accuracy and learning capacity of the network depend on its complexity.
To solve the multiclass classification problem we will use the neural network library pybrain. The library contains the main modules for initializing a two-layer feedforward network, estimating its parameters with the backpropagation method, and computing the error.
The pybrain library can be installed with the standard pip package manager:
pip install pybrain
Other installation options are described in the documentation.
The data
We consider the problem of predicting wine quality from its physico-chemical properties [1]. The data are publicly available in the UCI repository and contain 1599 samples of red wine described by 11 features, including acidity and the percentage of sugar and alcohol. In addition, each object is assigned a quality score on a scale from 0 to 10. The task is to predict the wine quality score from the original feature description.
[1] P. Cortez, A. Cerdeira, F. Almeida, T. Matos and J. Reis. Modeling wine preferences by data mining from physicochemical properties. In Decision Support Systems, Elsevier, 47(4):547-553, 2009.
End of explanation
with open('winequality-red.csv') as f:
f.readline() # пропуск заголовочной строки
data = np.loadtxt(f, delimiter=';')
Explanation: Let's load the data
End of explanation
import urllib
# URL for the Wine Quality Data Set (UCI Machine Learning Repository)
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv"
# загрузка файла
f = urllib.urlopen(url)
f.readline() # пропуск заголовочной строки
data = np.loadtxt(f, delimiter=';')
Explanation: Alternatively, the data can be loaded directly from the UCI repository using the urllib library.
End of explanation
TRAIN_SIZE = 0.7 # Разделение данных на обучающую и контрольную части в пропорции 70/30%
from sklearn.cross_validation import train_test_split
y = data[:, -1]
np.place(y, y < 5, 5)
np.place(y, y > 7, 7)
y -= min(y)
X = data[:, :-1]
X = normalize(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=TRAIN_SIZE, random_state=0)
Explanation: Let's extract the target variable from the data. The classes in this problem are imbalanced: most objects have a quality score between 5 and 7. We convert the problem to a three-class one: objects with a quality score below five get a score of 5, and objects with a quality score above seven get a score of 7.
End of explanation
from pybrain.datasets import ClassificationDataSet # Структура данных pybrain
from pybrain.tools.shortcuts import buildNetwork
from pybrain.supervised.trainers import BackpropTrainer
from pybrain.structure.modules import SoftmaxLayer
from pybrain.utilities import percentError
Explanation: Two-layer neural network
A two-layer neural network is a recognition function that can be written as the following superposition:
$f(x,W)=h^{(2)}\left(\sum\limits_{i=1}^D w_i^{(2)}h^{(1)}\left(\sum\limits_{j=1}^n w_{ji}^{(1)}x_j+b_i^{(1)}\right)+b^{(2)}\right)$, where
$x$ is the input object (a wine sample described by 11 features) and $x_j$ is the corresponding feature,
$n$ is the number of neurons in the input layer, equal to the number of features,
$D$ is the number of neurons in the hidden layer,
$w_i^{(2)}, w_{ji}^{(1)}, b_i^{(1)}, b^{(2)}$ are the network parameters (the neuron weights and biases),
$h^{(1)}, h^{(2)}$ are activation functions.
A linear activation function is used in the hidden layer. The output layer uses the softmax activation function, which generalizes the sigmoid function to the multiclass case:
$y_k=\text{softmax}_k(a_1,...,a_K)=\frac{\exp(a_k)}{\sum_{k=1}^K\exp(a_k)}.$
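As a small illustrative sketch (not part of the assignment itself), the softmax of a score vector can be computed with numpy as follows:
def softmax(a):
    e = np.exp(a - np.max(a))  # subtract the max for numerical stability
    return e / e.sum()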
Tuning the network parameters
The optimal network parameters $W_{opt}$ are found by minimizing the error function:
$W_{opt}=\arg\min\limits_{W}L(W)+\lambda\|W\|^2$.
Here $L(W)$ is the multiclass classification error function,
$L(W)=-\sum^N_{n=1}\sum^K_{k=1} t_{kn}\log(y_{kn}),$
where $t_{kn}$ are the binary-encoded class labels, $K$ is the number of labels, and $N$ is the number of objects,
and $\lambda\|W\|^2$ is a regularization term that controls the total weight of the network parameters and prevents overfitting.
The parameters are optimized with the backpropagation algorithm.
Let's import the main modules: ClassificationDataSet -- the pybrain data structure, buildNetwork -- neural network initialization, BackpropTrainer -- parameter optimization via backpropagation, SoftmaxLayer -- the softmax function used in the output layer, and percentError -- the classification error (share of wrong answers).
End of explanation
# Определение основных констант
HIDDEN_NEURONS_NUM = 100 # Количество нейронов, содержащееся в скрытом слое сети
MAX_EPOCHS = 100 # Максимальное число итераций алгоритма оптимизации параметров сети
Explanation: Let's initialize the main parameters of the problem: HIDDEN_NEURONS_NUM -- the number of neurons in the hidden layer, MAX_EPOCHS -- the maximum number of iterations of the optimization algorithm
End of explanation
# Конвертация данных в структуру ClassificationDataSet
# Обучающая часть
ds_train = ClassificationDataSet(np.shape(X)[1], nb_classes=len(np.unique(y_train)))
# Первый аргумент -- количество признаков np.shape(X)[1], второй аргумент -- количество меток классов len(np.unique(y_train)))
ds_train.setField('input', X_train) # Инициализация объектов
ds_train.setField('target', y_train[:, np.newaxis]) # Инициализация ответов; np.newaxis создает вектор-столбец
ds_train._convertToOneOfMany( ) # Бинаризация вектора ответов
# Контрольная часть
ds_test = ClassificationDataSet(np.shape(X)[1], nb_classes=len(np.unique(y_train)))
ds_test.setField('input', X_test)
ds_test.setField('target', y_test[:, np.newaxis])
ds_test._convertToOneOfMany( )
ds_train1 = ClassificationDataSet(np.shape(X)[1], nb_classes=len(np.unique(y_train)))
# Первый аргумент -- количество признаков np.shape(X)[1], второй аргумент -- количество меток классов len(np.unique(y_train)))
ds_train1.setField('input', X_train) # Инициализация объектов
ds_train1.setField('target', y_train[:, np.newaxis]) # Инициализация ответов; np.newaxis создает вектор-столбец
print ds_train1
print ds_train
ds_train['target'].max()
Explanation: Let's initialize the ClassificationDataSet data structure used by pybrain. Its constructor takes two arguments: the number of features np.shape(X)[1] and the number of distinct class labels len(np.unique(y)).
In addition, we binarize the target variable with the _convertToOneOfMany( ) function and split the data into training and test parts.
End of explanation
np.random.seed(0) # Зафиксируем seed для получения воспроизводимого результата
# Построение сети прямого распространения (Feedforward network)
net = buildNetwork(ds_train.indim, HIDDEN_NEURONS_NUM, ds_train.outdim, outclass=SoftmaxLayer)
# ds.indim -- количество нейронов входного слоя, равне количеству признаков
# ds.outdim -- количество нейронов выходного слоя, равное количеству меток классов
# SoftmaxLayer -- функция активации, пригодная для решения задачи многоклассовой классификации
init_params = np.random.random((len(net.params))) # Инициализируем веса сети для получения воспроизводимого результата
net._setParameters(init_params)
Explanation: Let's initialize the two-layer network and optimize its parameters. The initialization arguments are:
ds.indim -- the number of neurons in the input layer, equal to the number of features (11 in our case),
HIDDEN_NEURONS_NUM -- the number of neurons in the hidden layer,
ds.outdim -- the number of neurons in the output layer, equal to the number of distinct class labels (3 in our case),
SoftmaxLayer -- the softmax function used in the output layer to solve the multiclass classification problem.
End of explanation
random.seed(0)
# Модуль настройки параметров pybrain использует модуль random; зафиксируем seed для получения воспроизводимого результата
trainer = BackpropTrainer(net, dataset=ds_train) # Инициализируем модуль оптимизации
err_train, err_val = trainer.trainUntilConvergence(maxEpochs=MAX_EPOCHS)
line_train = plt.plot(err_train, 'b', err_val, 'r') # Построение графика
xlab = plt.xlabel('Iterations')
ylab = plt.ylabel('Error')
Explanation: Let's optimize the network parameters. The plot below shows the convergence of the error function on the training/validation parts.
End of explanation
res_train = net.activateOnDataset(ds_train).argmax(axis=1) # Подсчет результата на обучающей выборке
print 'Error on train: ', percentError(res_train, ds_train['target'].argmax(axis=1)), '%' # Подсчет ошибки
res_test = net.activateOnDataset(ds_test).argmax(axis=1) # Подсчет результата на тестовой выборке
print 'Error on test: ', percentError(res_test, ds_test['target'].argmax(axis=1)), '%' # Подсчет ошибки
Explanation: Let's compute the share of wrong answers (error rate) on the training and test sets.
End of explanation
%%time
random.seed(0) # Зафиксируем seed для получния воспроизводимого результата
np.random.seed(0)
def plot_classification_error(hidden_neurons_num, res_train_vec, res_test_vec):
# hidden_neurons_num -- массив размера h, содержащий количество нейронов, по которому предполагается провести перебор,
# hidden_neurons_num = [50, 100, 200, 500, 700, 1000];
# res_train_vec -- массив размера h, содержащий значения доли неправильных ответов классификации на обучении;
# res_train_vec -- массив размера h, содержащий значения доли неправильных ответов классификации на контроле
plt.figure()
plt.plot(hidden_neurons_num, res_train_vec)
plt.plot(hidden_neurons_num, res_test_vec, '-r')
def write_answer_nn(optimal_neurons_num):
with open("nnets_answer1.txt", "w") as fout:
fout.write(str(optimal_neurons_num))
hidden_neurons_num = [50, 100, 200, 500, 700, 1000]
res_train_vec = list()
res_test_vec = list()
i = 0
for nnum in hidden_neurons_num:
# Put your code here
net = buildNetwork(ds_train.indim, nnum, ds_train.outdim, outclass=SoftmaxLayer)
init_params = np.random.random((len(net.params))) # Инициализируем веса сети для получения воспроизводимого результата
net._setParameters(init_params)
trainer = BackpropTrainer(net, dataset=ds_train) # Инициализируем модуль оптимизации
err_train, err_val = trainer.trainUntilConvergence(maxEpochs=MAX_EPOCHS)
res_train_vec.append(percentError(net.activateOnDataset(ds_train).argmax(axis=1),ds_train['target'].argmax(axis=1))) # Подсчет результата на обучающей выборке
res_test_vec.append(percentError(net.activateOnDataset(ds_test).argmax(axis=1), ds_test['target'].argmax(axis=1)))
print min(err_train), min(err_val)
# Постройте график зависимости ошибок на обучении и контроле в зависимости от количества нейронов
plot_classification_error(hidden_neurons_num, res_train_vec, res_test_vec)
# Запишите в файл количество нейронов, при котором достигается минимум ошибки на контроле
write_answer_nn(hidden_neurons_num[res_test_vec.index(min(res_test_vec))])
print res_train_vec
Explanation: Задание. Определение оптимального числа нейронов.
В задании требуется исследовать зависимость ошибки на контрольной выборке в зависимости от числа нейронов в скрытом слое сети. Количество нейронов, по которому предполагается провести перебор, записано в векторе
hidden_neurons_num = [50, 100, 200, 500, 700, 1000]
Для фиксированного разбиения на обучающую и контрольную части подсчитайте долю неправильных ответов (ошибок) классификации на обучении/контроле в зависимости от количества нейронов в скрытом слое сети. Запишите результаты в массивы res_train_vec и res_test_vec, соответственно. С помощью функции plot_classification_error постройте график зависимости ошибок на обучении/контроле от количества нейронов. Являются ли графики ошибок возрастающими/убывающими? При каком количестве нейронов достигается минимум ошибок классификации?
С помощью функции write_answer_nn запишите в выходной файл число: количество нейронов в скрытом слое сети, для которого достигается минимум ошибки классификации на контрольной выборке.
End of explanation |
15,678 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 6
Step1: Rapidgram
The date
Step4: Question 1
Step7: Question 2
Step10: Question 3
Step13: Question 4
Step15: Question 5
Step18: Do you think this query will work as intended? Why or why not? Try designing a better query below
Step20: Question 6
Step23: Using the generate_series view, get a sample of ten students, weighted in this manner.
Step26: Question 7 | Python Code:
!pip install ipython-sql
%load_ext sql
%sql sqlite:///./lab06.sqlite
import sqlalchemy
engine = sqlalchemy.create_engine("sqlite:///lab06.sqlite")
connection = engine.connect()
!pip install -U okpy
from client.api.notebook import Notebook
ok = Notebook('lab06.ok')
Explanation: Lab 6: SQL
End of explanation
%%sql
DROP TABLE IF EXISTS users;
DROP TABLE IF EXISTS follows;
CREATE TABLE users (
USERID INT NOT NULL,
NAME VARCHAR (256) NOT NULL,
YEAR FLOAT NOT NULL,
PRIMARY KEY (USERID)
);
CREATE TABLE follows (
USERID INT NOT NULL,
FOLLOWID INT NOT NULL,
PRIMARY KEY (USERID, FOLLOWID)
);
%%capture
count = 0
users = ["Ian", "Daniel", "Sarah", "Kelly", "Sam", "Alison", "Henry", "Joey", "Mark", "Joyce", "Natalie", "John"]
years = [1, 3, 4, 3, 4, 2, 5, 2, 1, 3, 4, 2]
for username, year in zip(users, years):
count += 1
%sql INSERT INTO users VALUES ($count, '$username', $year);
%%capture
follows = [0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1,
0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1,
0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1,
1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1,
0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0,
0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1,
1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0,
1, 1, 0, 1]
for i in range(12):
for j in range(12):
if i != j and follows[i + j*12]:
%sql INSERT INTO follows VALUES ($i+1, $j+1);
Explanation: Rapidgram
The date: March, 2017. All of the students at Berkeley are obsessed with the hot new social networking app, Rapidgram, where users can share text and image posts. You've been hired as Rapidgram's very first Data Scientist, in charge of analyzing their petabyte-scale user data, in order to sell it to credit card companies (I mean, they had to monetize somehow). But before you get into that, you need to learn more about their database schema.
First, run the next few cells to generate a snapshot of their data. It will be saved locally as the file lab06.sqlite.
End of explanation
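Before diving in, a quick peek at the raw tables can help; this sanity check is an editorial addition (not part of the graded lab) and only uses the users and follows tables created above.
%sql SELECT * FROM users LIMIT 5;
%sql SELECT * FROM follows LIMIT 5;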
q1 =
...
%sql $q1
#SOLUTION
q1 =
SELECT COUNT(*) FROM follows, users
WHERE users.name="Joey"
AND (users.userid=follows.followid)
%sql $q1
q1_answer = connection.execute(q1).fetchall()
_ = ok.grade('q1')
_ = ok.backup()
Explanation: Question 1: Joey's Followers
How many people follow Joey?
End of explanation
q2 =
...
%sql $q2
#SOLUTION
q2 =
SELECT COUNT(*) FROM follows, users
WHERE users.name="Joey"
AND (users.userid=follows.userid)
%sql $q2
q2_answer = connection.execute(q2).fetchall()
_ = ok.grade('q2')
_ = ok.backup()
Explanation: Question 2: I Ain't no Followback Girl
How many people does Joey follow?
End of explanation
q3 =
...
%sql $q3
#SOLUTION
q3 =
SELECT u1.name
FROM follows, users as u1, users as u2
WHERE follows.userid=u1.userid
AND follows.followid=u2.userid
AND u2.name="Joey"
%sql $q3
q3_answer = connection.execute(q3).fetchall()
_ = ok.grade('q3')
_ = ok.backup()
Explanation: Question 3: Know your Audience
What are the names of Joey's followers?
End of explanation
q4 =
...
%sql $q4
#SOLUTION
q4 =
SELECT name, COUNT(*) as friends
FROM follows, users
WHERE follows.followid=users.userid
GROUP BY name
ORDER BY friends DESC
LIMIT 5
%sql $q4
q4_answer = connection.execute(q4).fetchall()
_ = ok.grade('q4')
_ = ok.backup()
Explanation: Question 4: Popularity Contest
How many followers does each user have? You'll need to use GROUP BY to solve this. List only the top 5 users by number of followers.
End of explanation
q5a =
SELECT u1.name as follower, u2.name as followee
FROM follows, users as u1, users as u2
WHERE follows.userid=u1.userid
AND follows.followid=u2.userid
AND RANDOM() < 0.33
Explanation: Question 5: Randomness
Rapidgram wants to get a random sample of their userbase. Specifically, they want to look at exactly one-third of the follow-relations in their data. A Rapidgram engineer suggests the following SQL query:
End of explanation
q5b =
...
%sql $q5b
#SOLUTION
q5b =
SELECT u1.name as follower, u2.name as followee
FROM follows, users as u1, users as u2
WHERE follows.userid=u1.userid
AND follows.followid=u2.userid
ORDER BY RANDOM() LIMIT 72*1/3
%sql $q5b
q5_answers = [connection.execute(q5b).fetchall() for _ in range(100)]
_ = ok.grade('q5')
_ = ok.backup()
Explanation: Do you think this query will work as intended? Why or why not? Try designing a better query below:
End of explanation
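One way to see the problem empirically (an added sketch, not part of the graded lab): re-run the engineer's q5a filter several times and compare the row counts. The count changes from run to run, and note that SQLite's RANDOM() returns a large signed integer rather than a value in [0, 1], so the < 0.33 comparison does not keep anything close to one-third of the rows.
counts = [len(connection.execute(q5a).fetchall()) for _ in range(20)]
print(min(counts), max(counts))   # the number of returned rows varies on every run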
q6a =
WITH RECURSIVE generate_series(value) AS (
SELECT 0
UNION ALL
SELECT value+1 FROM generate_series
WHERE value+1<=10
)
SELECT value
FROM generate_series
%sql $q6a
Explanation: Question 6: More Randomness
Rapidgram leadership wants to give more priority to more experienced users, so they decide to weight a survey of users towards students who have spent a greater number of years at Berkeley. They want to take a sample of 10 students, weighted such that a student's chance of being in the sample is proportional to their number of years spent at Berkeley - for instance, a student with 6 years has three times the chance of a student with 2 years, who has twice the chance of a student with only one year.
To take this sample, they've provided you with a helpful temporary view. You can run the cell below to see its functionality.
End of explanation
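For intuition only (an added sketch, not required by the lab), here is a rough numpy analogue of the same weighting, reusing the users and years lists defined earlier; like the SQL query, it can list the same student more than once.
import numpy as np
weighted_years = np.array([1, 3, 4, 3, 4, 2, 5, 2, 1, 3, 4, 2], dtype=float)
weighted_names = ["Ian", "Daniel", "Sarah", "Kelly", "Sam", "Alison", "Henry", "Joey", "Mark", "Joyce", "Natalie", "John"]
print(np.random.choice(weighted_names, size=10, p=weighted_years / weighted_years.sum()))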
q6b =
WITH RECURSIVE generate_series(value) AS (
SELECT 0
UNION ALL
SELECT value+1 FROM generate_series
WHERE value+1<=12
)
SELECT name
FROM ...
WHERE ...
ORDER BY ...
LIMIT 10
%sql $q6b
#SOLUTION
q6b =
WITH RECURSIVE generate_series(value) AS (
SELECT 0
UNION ALL
SELECT value+1 FROM generate_series
WHERE value+1<=12
)
SELECT name
FROM generate_series, users
WHERE value < year
ORDER BY RANDOM()
LIMIT 10
%sql $q6b
q6_answers = [connection.execute(q6b).fetchall() for _ in range(100)]
_ = ok.grade('q6')
_ = ok.backup()
Explanation: Using the generate_series view, get a sample of ten students, weighted in this manner.
End of explanation
q7 =
SELECT name FROM (
SELECT ...
)
WHERE year > avg_follower_years
%sql $q7
#SOLUTION
q7 =
SELECT name FROM
(SELECT u1.name, u1.year, AVG(u2.year) as avg_follower_years
FROM follows, users as u1, users as u2
WHERE follows.userid=u1.userid
AND follows.followid=u2.userid
GROUP BY u1.name)
WHERE year > avg_follower_years
%sql $q7
q7_answer = connection.execute(q7).fetchall()
_ = ok.grade('q7')
_ = ok.backup()
_ = ok.grade_all()
_ = ok.submit()
Explanation: Question 7: Older and Wiser (challenge)
List every person who has been at Berkeley longer - that is, their year is greater - than their average follower.
End of explanation |
15,679 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Index - Back - Next
Step1: Building a Custom Widget - Email widget
The widget framework is built on top of the Comm framework (short for communication). The Comm framework is a framework that allows the kernel to send/receive JSON messages to/from the front end (as seen below).
To create a custom widget, you need to define the widget both in the browser and in the python kernel.
Building a Custom Widget
To get started, you'll create a simple email widget.
Python Kernel
DOMWidget and Widget
To define a widget, you must inherit from the Widget or DOMWidget base class. If you intend for your widget to be displayed in the Jupyter notebook, you'll want to inherit from the DOMWidget. The DOMWidget class itself inherits from the Widget class. The Widget class is useful for cases in which the Widget is not meant to be displayed directly in the notebook, but instead as a child of another rendering environment. For example, if you wanted to create a three.js widget (a popular WebGL library), you would implement the rendering window as a DOMWidget and any 3D objects or lights meant to be rendered in that window as Widgets.
_view_name
Inheriting from the DOMWidget does not tell the widget framework what front end widget to associate with your back end widget.
Instead, you must tell it yourself by defining specially named trait attributes, _view_name, _view_module, and _view_module_version (as seen below) and optionally _model_name and _model_module.
Step2: sync=True traitlets
Traitlets is an IPython library for defining type-safe properties on configurable objects. For this tutorial you do not need to worry about the configurable piece of the traitlets machinery. The sync=True keyword argument tells the widget framework to handle synchronizing that value to the browser. Without sync=True, attributes of the widget won't be synchronized with the front-end.
Other traitlet types
Unicode, used for _view_name, is not the only traitlet type; there are many more, some of which are listed below
Step3: Define the view
Next, define your widget view class. Inherit from the DOMWidgetView by using the .extend method.
Step4: Render method
Lastly, override the base render method of the view to define custom rendering logic. A handle to the widget's default DOM element can be acquired via this.el. The el property is the DOM element associated with the view.
Step5: Test
You should be able to display your widget just like any other widget now.
Step6: Making the widget stateful
There is not much that you can do with the above example that you can't do with the IPython display framework. To change this, you will make the widget stateful. Instead of displaying a static "example@example.com" email address, it will display an address set by the back end. First you need to add a traitlet in the back end. Use the name of value to stay consistent with the rest of the widget framework and to allow your widget to be used with interact.
We want to prevent the user from entering an invalid email address, so we need a validator using traitlets.
Step7: Accessing the model from the view
To access the model associated with a view instance, use the model property of the view. get and set methods are used to interact with the Backbone model. get is trivial; however, you have to be careful when using set. After calling the model's set you need to call the view's touch method. This associates the set operation with a particular view so output will be routed to the correct cell. The model also has an on method, which allows you to listen to events triggered by the model (like value changes).
Rendering model contents
By replacing the string literal with a call to model.get, the view will now display the value of the back end upon display. However, it will not update itself to a new value when the value changes.
Step8: Dynamic updates
To get the view to update itself dynamically, register a function to update the view's value when the model's value property changes. This can be done using the model.on method. The on method takes three parameters, an event name, callback handle, and callback context. The Backbone event named change will fire whenever the model changes. By appending
Step9: This allows us to update the value from the Python kernel to the views. Now to get the value updated from the front-end to the Python kernel (when the input is not disabled) we can do it using the model.set method.
Step10: Test | Python Code:
from __future__ import print_function
Explanation: Index - Back - Next
End of explanation
from traitlets import Unicode, Bool, validate, TraitError
from ipywidgets import DOMWidget, register
@register
class Email(DOMWidget):
_view_name = Unicode('EmailView').tag(sync=True)
_view_module = Unicode('email_widget').tag(sync=True)
_view_module_version = Unicode('0.1.0').tag(sync=True)
Explanation: Building a Custom Widget - Email widget
The widget framework is built on top of the Comm framework (short for communication). The Comm framework is a framework that allows the kernel to send/receive JSON messages to/from the front end (as seen below).
To create a custom widget, you need to define the widget both in the browser and in the python kernel.
Building a Custom Widget
To get started, you'll create a simple email widget.
Python Kernel
DOMWidget and Widget
To define a widget, you must inherit from the Widget or DOMWidget base class. If you intend for your widget to be displayed in the Jupyter notebook, you'll want to inherit from the DOMWidget. The DOMWidget class itself inherits from the Widget class. The Widget class is useful for cases in which the Widget is not meant to be displayed directly in the notebook, but instead as a child of another rendering environment. For example, if you wanted to create a three.js widget (a popular WebGL library), you would implement the rendering window as a DOMWidget and any 3D objects or lights meant to be rendered in that window as Widgets.
_view_name
Inheriting from the DOMWidget does not tell the widget framework what front end widget to associate with your back end widget.
Instead, you must tell it yourself by defining specially named trait attributes, _view_name, _view_module, and _view_module_version (as seen below) and optionally _model_name and _model_module.
End of explanation
%%javascript
define('email_widget', ["@jupyter-widgets/base"], function(widgets) {
});
Explanation: sync=True traitlets
Traitlets is an IPython library for defining type-safe properties on configurable objects. For this tutorial you do not need to worry about the configurable piece of the traitlets machinery. The sync=True keyword argument tells the widget framework to handle synchronizing that value to the browser. Without sync=True, attributes of the widget won't be synchronized with the front-end.
Other traitlet types
Unicode, used for _view_name, is not the only traitlet type; there are many more, some of which are listed below:
Any
Bool
Bytes
CBool
CBytes
CComplex
CFloat
CInt
CLong
CRegExp
CUnicode
CaselessStrEnum
Complex
Dict
DottedObjectName
Enum
Float
FunctionType
Instance
InstanceType
Int
List
Long
Set
TCPAddress
Tuple
Type
Unicode
Union
Not all of these traitlets can be synchronized across the network, only the JSON-able traits and Widget instances will be synchronized.
Front end (JavaScript)
Models and views
The IPython widget framework front end relies heavily on Backbone.js. Backbone.js is an MVC (model view controller) framework. Widgets defined in the back end are automatically synchronized with generic Backbone.js models in the front end. The traitlets are added to the front end instance automatically on first state push. The _view_name trait that you defined earlier is used by the widget framework to create the corresponding Backbone.js view and link that view to the model.
Import @jupyter-widgets/base
You first need to import the @jupyter-widgets/base module. To import modules, use the define method of require.js (as seen below).
End of explanation
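As an illustration only (not part of the original tutorial; the widget and module names below are made up), several of the traitlet types listed above can be synchronized exactly like Unicode by tagging them with sync=True:
from traitlets import Unicode, Bool, Int, List
from ipywidgets import DOMWidget
class ExampleWidget(DOMWidget):
    _view_name = Unicode('ExampleView').tag(sync=True)        # hypothetical view name
    _view_module = Unicode('example_widget').tag(sync=True)   # hypothetical front-end module
    _view_module_version = Unicode('0.1.0').tag(sync=True)
    enabled = Bool(True).tag(sync=True)   # JSON-able traits like Bool, Int and List synchronize fine
    count = Int(0).tag(sync=True)
    tags = List([]).tag(sync=True)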
%%javascript
require.undef('email_widget');
define('email_widget', ["@jupyter-widgets/base"], function(widgets) {
// Define the EmailView
var EmailView = widgets.DOMWidgetView.extend({
});
return {
EmailView: EmailView
}
});
Explanation: Define the view
Next, define your widget view class. Inherit from the DOMWidgetView by using the .extend method.
End of explanation
%%javascript
require.undef('email_widget');
define('email_widget', ["@jupyter-widgets/base"], function(widgets) {
var EmailView = widgets.DOMWidgetView.extend({
// Render the view.
render: function() {
this.email_input = document.createElement('input');
this.email_input.type = 'email';
this.email_input.value = 'example@example.com';
this.email_input.disabled = true;
this.el.appendChild(this.email_input);
},
});
return {
EmailView: EmailView
};
});
Explanation: Render method
Lastly, override the base render method of the view to define custom rendering logic. A handle to the widget's default DOM element can be acquired via this.el. The el property is the DOM element associated with the view.
End of explanation
Email()
Explanation: Test
You should be able to display your widget just like any other widget now.
End of explanation
from traitlets import Unicode, Bool, validate, TraitError
from ipywidgets import DOMWidget, register
@register
class Email(DOMWidget):
_view_name = Unicode('EmailView').tag(sync=True)
_view_module = Unicode('email_widget').tag(sync=True)
_view_module_version = Unicode('0.1.0').tag(sync=True)
# Attributes
value = Unicode('example@example.com', help="The email value.").tag(sync=True)
disabled = Bool(False, help="Enable or disable user changes.").tag(sync=True)
# Basic validator for the email value
@validate('value')
def _valid_value(self, proposal):
if proposal['value'].count("@") != 1:
raise TraitError('Invalid email value: it must contain an "@" character')
if proposal['value'].count(".") == 0:
raise TraitError('Invalid email value: it must contain at least one "." character')
return proposal['value']
Explanation: Making the widget stateful
There is not much that you can do with the above example that you can't do with the IPython display framework. To change this, you will make the widget stateful. Instead of displaying a static "example@example.com" email address, it will display an address set by the back end. First you need to add a traitlet in the back end. Use the name value to stay consistent with the rest of the widget framework and to allow your widget to be used with interact.
We want to prevent the user from entering an invalid email address, so we need a validator using traitlets.
End of explanation
%%javascript
require.undef('email_widget');
define('email_widget', ["@jupyter-widgets/base"], function(widgets) {
var EmailView = widgets.DOMWidgetView.extend({
// Render the view.
render: function() {
this.email_input = document.createElement('input');
this.email_input.type = 'email';
this.email_input.value = this.model.get('value');
this.email_input.disabled = this.model.get('disabled');
this.el.appendChild(this.email_input);
},
});
return {
EmailView: EmailView
};
});
Email(value='john.doe@domain.com', disabled=True)
Explanation: Accessing the model from the view
To access the model associated with a view instance, use the model property of the view. get and set methods are used to interact with the Backbone model. get is trivial; however, you have to be careful when using set. After calling the model's set you need to call the view's touch method. This associates the set operation with a particular view so output will be routed to the correct cell. The model also has an on method, which allows you to listen to events triggered by the model (like value changes).
Rendering model contents
By replacing the string literal with a call to model.get, the view will now display the value of the back end upon display. However, it will not update itself to a new value when the value changes.
End of explanation
%%javascript
require.undef('email_widget');
define('email_widget', ["@jupyter-widgets/base"], function(widgets) {
var EmailView = widgets.DOMWidgetView.extend({
// Render the view.
render: function() {
this.email_input = document.createElement('input');
this.email_input.type = 'email';
this.email_input.value = this.model.get('value');
this.email_input.disabled = this.model.get('disabled');
this.el.appendChild(this.email_input);
// Python -> JavaScript update
this.model.on('change:value', this.value_changed, this);
this.model.on('change:disabled', this.disabled_changed, this);
},
value_changed: function() {
this.email_input.value = this.model.get('value');
},
disabled_changed: function() {
this.email_input.disabled = this.model.get('disabled');
},
});
return {
EmailView: EmailView
};
});
Explanation: Dynamic updates
To get the view to update itself dynamically, register a function to update the view's value when the model's value property changes. This can be done using the model.on method. The on method takes three parameters, an event name, callback handle, and callback context. The Backbone event named change will fire whenever the model changes. By appending :value to it, you tell Backbone to only listen to the change event of the value property (as seen below).
End of explanation
%%javascript
require.undef('email_widget');
define('email_widget', ["@jupyter-widgets/base"], function(widgets) {
var EmailView = widgets.DOMWidgetView.extend({
// Render the view.
render: function() {
this.email_input = document.createElement('input');
this.email_input.type = 'email';
this.email_input.value = this.model.get('value');
this.email_input.disabled = this.model.get('disabled');
this.el.appendChild(this.email_input);
// Python -> JavaScript update
this.model.on('change:value', this.value_changed, this);
this.model.on('change:disabled', this.disabled_changed, this);
// JavaScript -> Python update
this.email_input.onchange = this.input_changed.bind(this);
},
value_changed: function() {
this.email_input.value = this.model.get('value');
},
disabled_changed: function() {
this.email_input.disabled = this.model.get('disabled');
},
input_changed: function() {
this.model.set('value', this.email_input.value);
this.model.save_changes();
},
});
return {
EmailView: EmailView
};
});
Explanation: This lets us push value updates from the Python kernel to the views. To propagate a value entered in the front end back to the Python kernel (when the input is not disabled), we use the model.set method followed by save_changes.
End of explanation
email = Email(value='john.doe@domain.com', disabled=False)
email
email.value
email.value = 'jane.doe@domain.com'
Explanation: Test
End of explanation |
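As a small extra (not in the original notebook), the Python side can also react whenever value changes (for example, after the user edits the input box in the front end) by using the standard ipywidgets observe mechanism:
def on_value_change(change):
    # change['old'] and change['new'] hold the previous and updated email addresses
    print('email changed from', change['old'], 'to', change['new'])
email.observe(on_value_change, names='value')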
15,680 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2017 Google LLC.
Step1: # TensorFlow programming concepts
Learning objectives
Step2: Don't forget to run the preceding code block (the import statements).
Other common import statements
Step3: ## Exercise
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2017 Google LLC.
End of explanation
import tensorflow as tf
Explanation: # TensorFlow programming concepts
Learning objectives:
* Learn the basics of the TensorFlow programming model, in particular the following concepts:
    * Tensors
    * Operations
    * Graphs
    * Sessions
* Build a simple TensorFlow program that creates a default graph, and a session to run that graph
Note: Please read this tutorial carefully. The TensorFlow programming model probably differs from the ones you have encountered so far, so it may not be as intuitive as you might expect.
## Overview of the concepts
The term TensorFlow is derived from the word tensors, which are arrays of arbitrary dimensionality. TensorFlow lets you work with tensors of very high dimension. That said, you will most often work with one or more of the following low-dimensional tensors:
A scalar is a zero-dimensional array (a rank-0 tensor). Example: \'Hello\' or 5.
A vector is a one-dimensional array (a rank-1 tensor). Example: [2, 3, 5, 7, 11] or [5].
A matrix is a two-dimensional array (a rank-2 tensor). Example: [[3.1, 8.2, 5.9][4.3, -2.7, 6.5]].
You create, delete and manipulate tensors by means of operations. In a typical TensorFlow program, the lines of code are essentially operations.
A TensorFlow graph (also called a computation graph or dataflow graph) is the graph representation of a data structure. Many TensorFlow programs consist of a single graph, but it is perfectly possible to create several. The nodes of the graph represent operations, while the edges represent tensors. Tensors flow from node to node, undergoing an operation at each step. The output tensor of one operation often becomes the input tensor of the next. TensorFlow uses a lazy execution model: nodes are only computed when needed, based on the needs of the associated nodes.
Tensors are stored in the graph as constants or variables. As you might guess, constants are tensors whose value is fixed, while variables are tensors whose value can change. What may be less obvious is that constants and variables are themselves just more operations in the graph. A constant is an operation that always returns the same tensor value, and a variable is an operation that returns whichever tensor has been assigned to it.
To define a constant, use the tf.constant operator and pass in its value. For example:
x = tf.constant([5.2])
Similarly, the following code creates a variable:
y = tf.Variable([5])
You can also create the variable first and then assign it a value (a default value must be defined):
y = tf.Variable([0])
y = y.assign([5])
Once you have defined some constants or variables, you can combine them with other operations (for example, tf.add). When evaluated, tf.add calls the tf.constant or tf.Variable operations to get their values, and then returns a new tensor equal to the sum of those values.
Graphs must be run inside a TensorFlow session, which holds their state:
with tf.Session() as sess:
  initialization = tf.global_variables_initializer()
  print(y.eval())
tf.Variable operations must be initialized explicitly by calling tf.global_variables_initializer at the start of the session, as shown above.
Note: The graphs of a session can be run on several machines (provided the program is run on a distributed computation framework). For more information, see the Distributed TensorFlow page.
Summary
TensorFlow programming is essentially a two-step process:
1. Assemble constants, variables and operations into a graph
2. Evaluate those constants, variables and operations in a session
## Creating a simple TensorFlow program
Let's see how to code a simple TensorFlow program that adds two constants.
### Define import statements
As with almost all Python programs, the first step is to define the import statements.
Of course, the imports to use depend on the features the TensorFlow program needs to access. At a minimum, every TensorFlow program must contain the import tensorflow statement:
End of explanation
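The next cell uses only constants; the sketch below additionally shows a variable being initialized and assigned, combining the ideas above. It is an editorial example written against the same TensorFlow 1.x API used in this Colab (sessions and tf.global_variables_initializer no longer exist in TensorFlow 2.x).
import tensorflow as tf
g = tf.Graph()
with g.as_default():
  x = tf.constant([5.2])
  y = tf.Variable([0])
  assign_y = y.assign([5])
  total = tf.add(x, tf.cast(y, tf.float32))
  with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(assign_y)
    print(sess.run(total))  # [10.2]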
from __future__ import print_function
import tensorflow as tf
# Create a graph.
g = tf.Graph()
# Establish the graph as the "default" graph.
with g.as_default():
# Assemble a graph consisting of the following three operations:
# * Two tf.constant operations to create the operands.
# * One tf.add operation to add the two operands.
x = tf.constant(8, name="x_const")
y = tf.constant(5, name="y_const")
sum = tf.add(x, y, name="x_y_sum")
# Now create a session.
# The session will run the default graph.
with tf.Session() as sess:
print(sum.eval())
Explanation: Don't forget to run the preceding code block (the import statements).
Other common import statements:
import matplotlib.pyplot as plt # Dataset visualization.
import numpy as np              # Low-level numerical Python library.
import pandas as pd             # Higher-level numerical Python library.
TensorFlow provides a default graph. However, we recommend creating your own graph to make it easier to track its state (you may want to use a different graph per cell, for example).
End of explanation
# Create a graph.
g = tf.Graph()
# Establish our graph as the "default" graph.
with g.as_default():
# Assemble a graph consisting of three operations.
# (Creating a tensor is an operation.)
x = tf.constant(8, name="x_const")
y = tf.constant(5, name="y_const")
sum = tf.add(x, y, name="x_y_sum")
# Task 1: Define a third scalar integer constant z.
z = tf.constant(4, name="z_const")
# Task 2: Add z to `sum` to yield a new sum.
new_sum = tf.add(sum, z, name="x_y_z_sum")
# Now create a session.
# The session will run the default graph.
with tf.Session() as sess:
# Task 3: Ensure the program yields the correct grand total.
print(new_sum.eval())
Explanation: ## Exercise: Add a third operand
Modify the code above to add three integers instead of two:
Define a third scalar integer constant, z, and assign it the value 4.
Add z to sum to produce a new sum.
Hint: See the API documentation for tf.add() for more information on its function signature.
Re-run the modified code block. Do you get the correct grand total?
### Solution
Click below to see the solution.
End of explanation |
15,681 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
US Treasury Yield Curve Animation
The notebook uses daily US Treasury yield data from FRED (https
Step1: Download Data and Merge into DataFrame
Step2: Construct Figure
Step3: Create Animation and Save
Note ffmpeg (https
Step4: Print Time to Run | Python Code:
import matplotlib
matplotlib.use("Agg")
import fredpy as fp
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('classic')
import matplotlib.animation as animation
import os
import time
# Approximately when the program started
start_time = time.time()
# start and end dates
start_date = '1965-01-01'
end_date = '2100-01-01'
file_name = '../video/US_Treasury_Yield_Curve_Animation'
Explanation: US Treasury Yield Curve Animation
The notebook uses daily US Treasury yield data from FRED (https://fred.stlouisfed.org/) to construct an animated visualization of the US Treasury yield curve from January 1965 through the present. Data are downloaded using the fredpy module (https://github.com/letsgoexploring/fredpy-package).
Preliminaries
End of explanation
# Download data into Fred objects
y1m= fp.series('DTB4WK')
y3m= fp.series('DTB3')
y6m= fp.series('DTB6')
y1 = fp.series('DGS1')
y5 = fp.series('DGS5')
y10= fp.series('DGS10')
y20= fp.series('DGS20')
y30= fp.series('DGS30')
# Give the series names
y1m.data.name = '1 mo'
y3m.data.name = '3 mo'
y6m.data.name = '6 mo'
y1.data.name = '1 yr'
y5.data.name = '5 yr'
y10.data.name = '10 yr'
y20.data.name = '20 yr'
y30.data.name = '30 yr'
yields = pd.concat([y1m.data,y3m.data,y6m.data,y1.data,y5.data,y10.data,y20.data,y30.data],axis=1)
yields = yields.loc[start_date:end_date]
yields = yields.dropna(thresh=1)
N = len(yields.index)
print('Date range: '+yields.index[0].strftime('%b %d, %Y')+' to '+yields.index[-1].strftime('%b %d, %Y'))
Explanation: Download Data and Merge into DataFrame
End of explanation
# Initialize figure
fig = plt.figure(figsize=(16,9))
ax = fig.add_subplot(1, 1, 1)
line, = ax.plot([], [], lw=8)
ax.grid()
ax.set_xlim(0,7)
ax.set_ylim(0,18)
ax.set_xticks(range(8))
ax.set_yticks([2,4,6,8,10,12,14,16,18])
xlabels = ['1m','3m','6m','1y','5y','10y','20y','30y']
ylabels = [2,4,6,8,10,12,14,16,18]
ax.set_xticklabels(xlabels,fontsize=20)
ax.set_yticklabels(ylabels,fontsize=20)
figure_title = 'U.S. Treasury Bond Yield Curve'
figure_xlabel = 'Time to maturity'
figure_ylabel = 'Percent'
plt.text(0.5, 1.03, figure_title,horizontalalignment='center',fontsize=30,transform = ax.transAxes)
plt.text(0.5, -.1, figure_xlabel,horizontalalignment='center',fontsize=25,transform = ax.transAxes)
plt.text(-0.05, .5, figure_ylabel,horizontalalignment='center',fontsize=25,rotation='vertical',transform = ax.transAxes)
ax.text(5.75,.25, 'Created by Brian C Jenkins',fontsize=11, color='black',alpha=0.5)#,
dateText = ax.text(0.975, 16.625, '',fontsize=18,horizontalalignment='right')
Explanation: Construct Figure
End of explanation
# Initialization function
def init_func():
line.set_data([], [])
return line,
# The animation function
def animate(i):
global yields
x = [0,1,2,3,4,5,6,7]
y = yields.iloc[i]
line.set_data(x, y)
dateText.set_text(yields.index[i].strftime('%b %d, %Y'))
return line ,dateText
# Set up the writer
Writer = animation.writers['ffmpeg']
writer = Writer(fps=25, metadata=dict(artist='Brian C Jenkins'), bitrate=3000)
# Make the animation
anim = animation.FuncAnimation(fig, animate, init_func=init_func,frames=N, interval=20, blit=True)
# Create a directory called 'Video' in the parent directory if it doesn't exist
try:
os.mkdir('../Video')
except:
pass
# Save the animation as .mp4
anim.save(file_name+'.mp4', writer = writer)
# Convert the .mp4 to .ogv
# os.system('ffmpeg -i '+file_name+'.mp4 -acodec libvorbis -ac 2 -ab 128k -ar 44100 -b:v 1800k '+file_name+'.ogv')
Explanation: Create Animation and Save
Note ffmpeg (https://ffmpeg.org/) is required to save the animation as an mp4 or ogv file.
End of explanation
# Print runtime
seconds = time.time() - start_time
m, s = divmod(seconds, 60)
h, m = divmod(m, 60)
print("%dh %02dm %02ds"% (h, m, s))
Explanation: Print Time to Run
End of explanation |
15,682 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Week 05 - Numpy II and Scipy
Today's Agenda
Numpy II
Scipy
Numpy II
Last time in Week 05, we covered Numpy and Matplotlib. This time we will be focusing on more advanced concepts of Numpy.
Step1: Review
As a review, let's explore some of the concepts that were introduced last time, in Numpy I.
Create 1D-arrays
We introduced how to create a 1D-array
Step2: Handling arrays
These are just a few of the different ways to create numpy arrays.
You can also use functions like np.max() and np.min() to get the maximum and minimum values, respectively.
Step3: Apply mathematical functions
Step4: Conditionals
Find the indices of the elements in an array that meet some criteria.
In this example, we're finding all the elements that are within 100 and 500 in array "zz".
Step5: Manipulating Arrays
There are a lot of things we can do to a numpy array.
Step6: We can get the overall size and shape of the array.
We can use the functions numpy.size and numpy.shape to get the total number of elements in an array and the shape of the array, respectively.
Step7: You can also transpose array A.
Step8: Why are Numpy arrays better than lists
Step9: linspace and logspace
We use these functions to create ordered lists, separated by intervals in real- and log-space.
Step10: Array of 25 elements from $10^{0}$ to $10^{3}$, with base of 10.
Step11: Creating an array of 11 elements from $e^{0}$ to $e^{10}$, with the base == numpy.e
Step12: Random Data
Step13: Arrays of zeros and ones.
Step14: You can use these to populate other arrays
Step15: Diagonals
You can also construct an array with another array as the diagonal
Step16: Indexing
You can choose which values to select.
Normally, you select the rows first, and then the cols of a numpy.ndarray.
Step17: Selecting the 1st row
Step18: The 2nd column
Step19: Select a range of columns and rows
Step20: You can easily use this to create a mask, for when you are cleaning your data.
Step21: Applying the mask from $A \to B$
Step22: Binning your data
This is probably one of the best functions of Numpy.
You can use this to bin your data, and calculate means, standard deviations, etc.
numpy.digitize
Step23: Now I want to bin my data and calculate the mean for each bin
Step24: Calculating the mean for each of the bins
Step29: You can put all of this into a function that estimates errors and more...
Step30: Example of using these functions
Step31: With this function, it is really easy to apply statistics on binned data, as well as to estimate errors on the data.
Reshaping, resizing and stacking arrays
One can always modify the shape of a numpy.ndarray, as well as append it to a pre-existing array.
Step32: np.concatenate
You can also concatenate different arrays
Step33: Copy and "Deep Copy"
Sometimes it is important to create new copies of arrays and other objects. For this reason, one uses numpy.copy to create new copies of arrays
Step34: If we make any changes to B, A will also be affected by this change.
Step35: To get a completely independent, new object, you would use
Step36: The array A was not affected by this change. This is important when you're constantly re-defining new arrays
Scipy - Library of Scientific Algorithms for Python
SciPy provides a large number of higher-level scientific algorithms.
It includes
Step37: Interpolation
You can use Scipy to interpolate your data.
You would use the interp1d function to interpolate your function.
Step38: KD-Trees
You can also use SciPy to calculate KD-Trees for a set of points
Step39: Let's say we want to know how many points are within distances of 30 and 50 from other points. To know this, you construct a KD-Tree
Step40: Let's say you want to get the distances to the Nth-nearest neighbor.
Step41: You can also get the indices
Step42: The first column corresponds to the point itself.
You can also find pairs that are separated by at most a distance r | Python Code:
# Loading modules
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
Explanation: Week 05 - Numpy II and Scipy
Today's Agenda
Numpy II
Scipy
Numpy II
Last time in Week 05, we covered Numpy and Matplotlib. This time we will be focusing on more advanced concepts of Numpy.
End of explanation
x = np.array([1,2,3,5,6,7,8,10],dtype=float)
x
y = np.arange(10)
y
z = np.linspace(0,100,50)
z
h = np.random.randn(100)
h
Explanation: Review
As a review, let's explore some of the concepts that were introduced last time, in Numpy I.
Create 1D-arrays
We introduced how to create a 1D-array
End of explanation
print('Min X: {0:.3f} \t Max X: {1:.3f}'.format(np.min(x), np.max(x)) )
Explanation: Handling arrays
These are just a few of the different ways to create numpy arrays.
You can also use functions like np.max() and np.min() to get the maximum and minimum values, respectively.
End of explanation
zz = x**2 + 3*x**3
zz
Explanation: Apply mathematical functions
End of explanation
zz_idx = np.where((zz>= 100)&(zz <= 500))[0]
print('zz_idx: {0}'.format(zz_idx))
zz[zz_idx]
Explanation: Conditionals
Find the indices of the elements in an array that meet some criteria.
In this example, we're finding all the elements that are within 100 and 500 in array "zz".
End of explanation
h1 = np.random.randint(10, 50, 50)
h1
Explanation: Manipulating Arrays
There are a lot of things we can do to a numpy array.
End of explanation
np.size(h1)
h1.shape
A = np.array([[1,2,3,4,5],
[6,7,8,9,10],
[12,13,14,16,17],
[13,45,67,89,90] ])
A
np.shape(A)
Explanation: We can get the overall size and shape of the array.
We can use the functions numpy.size and numpy.shape to get the total number of elements in an array and the shape of the array, respectively.
End of explanation
A_t = np.transpose(A)
A_t
Explanation: You can also transpose array A.
End of explanation
np.arange(0,10,1)
np.arange(0,20,5)
np.arange(-40,21,10)
Explanation: Why are Numpy arrays better than lists:
- Python lists are very general.
- Lists do not support matrix and dot multiplications, etc.
- Numpy arrays are memory efficient.
- Numpy arrays are statically typed and homogeneous.
- They are fast at mathematical functions.
- They can be used in compiled languages, e.g. C and Fortran.
Array-generating functions
For large arrays it is impractical to initialize the data manually, using normal Python lists. Instead, we can use many of the Numpy functions to generate arrays of different forms.
numpy.arange
We use this one to create a sequence of ordered elements
End of explanation
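A quick, illustrative (not rigorous) way to see the speed claim is to time the same element-wise operation on a plain Python list and on a numpy array:
import time
big_list = list(range(1000000))
big_arr = np.arange(1000000)
t0 = time.time()
squares_list = [v**2 for v in big_list]   # pure-Python loop
t1 = time.time()
squares_arr = big_arr**2                  # vectorized numpy operation
t2 = time.time()
print('list: {0:.4f} s, numpy: {1:.4f} s'.format(t1 - t0, t2 - t1))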
B = np.linspace(0,50)
B
B = np.linspace(0,100, 20)
B
Explanation: linspace and logspace
We use these functions to create ordered lists, separated by intervals in real- and log-space.
End of explanation
B = np.logspace(0,3,25)
B
Explanation: Array of 25 elements from $10^{0}$ to $10^{3}$, with base of 10.
End of explanation
B = np.logspace(0,10,11, base=np.e)
B
Explanation: Creating an array of 11 elements from $e^{0}$ to $e^{10}$, with the base == numpy.e
End of explanation
from numpy import random
# Uniform random numbers in [0,1]
random.rand(5,5)
# 20 Random integers from 10 to 30
random.randint(10,30,20)
Explanation: Random Data
End of explanation
np.zeros(20)
Explanation: Arrays of zeros and ones.
End of explanation
nelem = 10
C = np.ones(10)
C
for ii in range(C.size):
C[ii] = random.rand()
C
Explanation: You can use these to populate other arrays
End of explanation
np.diag(random.randint(10,20,5))
Explanation: Diagonals
You can also construct an array with another array as the diagonal
End of explanation
M = random.rand(10,5)
M
Explanation: Indexing
You can choose which values to select.
Normally, you select the rows first, and then the cols of a numpy.ndarray.
End of explanation
M[1,:]
Explanation: Selecting the 1st row
End of explanation
M[:,1]
Explanation: The 2nd column
End of explanation
M[1:3, 2:4]
Explanation: Select a range of columns and rows
End of explanation
A = random.rand(3,3)
np.fill_diagonal(A, np.nan)
A
B = np.arange(0,9).reshape((3,3))
B
Explanation: You can easily use this to create a mask, for when you are cleaning your data.
End of explanation
A_mask = np.isfinite(A)
A_mask
B[A_mask]
Explanation: Applying the mask from $A \to B$
End of explanation
# Creating my bin edges
bins = np.arange(0,13)
bins
# Generating Data
data = 10*random.rand(100)
data
Explanation: Binning your data
This is probably one of the best functions of Numpy.
You can use this to bin your data, and calculate means, standard deviations, etc.
numpy.digitize
End of explanation
# Defining statistical function to use
stat_func = np.nanmean
# Binning the data
data_bins = np.digitize(data, bins)
data_bins
Explanation: Now I want to bin my data and calculate the mean for each bin
End of explanation
failval = -10
bins_stat = np.array([stat_func(data[data_bins == ii]) \
if len(data[data_bins == ii]) > 0 \
else failval \
for ii in range(1,len(bins))])
bins_stat = np.asarray(bins_stat)
bins_stat
Explanation: Calculating the mean for each of the bins
End of explanation
import math
def myceil(x, base=10):
Returns the upper-bound integer of 'x' in base 'base'.
Parameters
----------
x: float
number to be approximated to closest number to 'base'
base: float
base used to calculate the closest 'largest' number
Returns
-------
n_high: float
Closest float number to 'x', i.e. upper-bound float.
Example
-------
>>>> myceil(12,10)
20
>>>>
>>>> myceil(12.05, 0.1)
12.10000
n_high = float(base*math.ceil(float(x)/base))
return n_high
def myfloor(x, base=10):
Returns the lower-bound integer of 'x' in base 'base'
Parameters
----------
x: float
number to be approximated to closest number of 'base'
base: float
base used to calculate the closest 'smallest' number
Returns
-------
n_low: float
Closest float number to 'x', i.e. lower-bound float.
Example
-------
>>>> myfloor(12, 5)
>>>> 10
n_low = float(base*math.floor(float(x)/base))
return n_low
def Bins_array_create(arr, base=10):
Generates array between [arr.min(), arr.max()] in steps of `base`.
Parameters
----------
arr: array_like, Shape (N,...), One-dimensional
Array of numerical elements
base: float, optional (default=10)
Interval between bins
Returns
-------
bins_arr: array_like
Array of bin edges for given arr
base = float(base)
arr = np.array(arr)
assert(arr.ndim==1)
arr_min = myfloor(arr.min(), base=base)
arr_max = myceil( arr.max(), base=base)
bins_arr = np.arange(arr_min, arr_max+0.5*base, base)
return bins_arr
def Mean_std_calc_one_array(x1, y1, arr_len=0, statfunc=np.nanmean,
failval=np.nan, error='std',
base=10.):
Calculates statistics of two arrays, e.g. scatter,
error in `statfunc`, etc.
Parameters
----------
x1: array-like, shape (N,)
array of x-values
y1: array-like, shape (N,)
array of y-values
arr_len: int, optional (default = 0)
minimum number of elements in the bin
statfunc: numpy function, optional (default = numpy.nanmean)
statistical function used to evaluate the bins
failval: int or float, optional (default = numpy.nan)
Number to use to replace when the number of elements in the
bin is smaller than `arr_len`
error: string, optional (default = 'std')
type of error to evaluate
Options:
- 'std': Evaluates the standard deviation of the bin
- 'stat': Evaluates the error in the mean/median of each bin
- 'none': Does not calculate the error in `y1`
base: float
Value of bin width in units of that of `x1`
Returns
--------
x1_stat: array-like, shape (N,)
`stat_func` of each bin in `base` spacings for x1
y1_stat: array-like, shape (N,)
`stat_func` of each bin in `base` spacings for y1
x1 = np.asarray(x1)
y1 = np.asarray(y1)
assert((x1.ndim==1) & (y1.ndim==1))
assert((x1.size >0) & (y1.size>0))
n_elem = len(x1)
## Computing Bins
x1_bins = Bins_array_create(x1, base=base)
x1_digit = np.digitize(x1, x1_bins)
## Computing Statistics in bins
x1_stat = np.array([statfunc(x1[x1_digit==ii])
if len(x1[x1_digit==ii])>arr_len
else failval
for ii in range(1,x1_bins.size)])
y1_stat = np.array([statfunc(y1[x1_digit==ii])
if len(y1[x1_digit==ii])>arr_len
else failval
for ii in range(1,x1_bins.size)])
## Computing error in the data
if error=='std':
stat_err = np.nanstd
y1_err = np.array([stat_err(y1[x1_digit==ii])
if len(y1[x1_digit==ii])>arr_len
else failval
for ii in range(1,x1_bins.size)])
if error!='none':
y1_err = np.array([stat_err(y1[x1_digit==ii])/np.sqrt(len(y1[x1_digit==ii]))
if len(y1[x1_digit==ii])>arr_len
else failval
for ii in range(1,x1_bins.size)])
if (stat_func==np.median) or (stat_func==np.nanmedian):
y1_err *= 1.253
else:
y1_err = np.zeros(y1.stat.size)
return x1_stat, y1_stat, y1_err
Explanation: You can put all of this into a function that estimates errors and more...
End of explanation
import numpy as np
# Defining arrays
x_arr = np.arange(100)
y_arr = 50*np.random.randn(x_arr.size)
# Computing mean and error in the mean for `x_arr` and `y_arr`
x_stat, y_stat, y_err = Mean_std_calc_one_array(x_arr, y_arr,
statfunc=np.nanmean,
failval=np.nan,
base=10)
x_stat2, y_stat2, y_err2 = Mean_std_calc_one_array(x_arr, y_arr,
statfunc=np.nanmedian,
failval=np.nan,
base=10)
plt.style.use('seaborn-notebook')
plt.clf()
plt.close()
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111,facecolor='white')
ax.plot(x_arr, y_arr, 'ro', label='Data')
ax.errorbar(x_stat, y_stat, yerr=y_err, color='blue', marker='o',
linestyle='--',label='Mean')
ax.errorbar(x_stat2, y_stat2, yerr=y_err2, color='green', marker='o',
linestyle='--',label='Median')
ax.set_xlabel('X axis', fontsize=20)
ax.set_ylabel('Y axis', fontsize=20)
ax.set_title('Data and the Binned Data', fontsize=24)
plt.legend(fontsize=20)
plt.show()
Explanation: Example of using these functions:
End of explanation
A = np.array([[n+m*10 for n in range(5)] for m in range(5)])
A
n, m = A.shape
B = A.reshape((1,n*m))
B
A_f = A.flatten()
A_f
C = random.rand(A.size)
C
C.shape
# Stacking the two arrays
D = np.column_stack((A_f,C))
D
# Selecting from 3rd to 11th row
D[2:10]
Explanation: With this function, it is really easy to apply statistics on binned data, as well as to estimate errors on the data.
Reshaping, resizing and stacking arrays
One can always modify the shape of a numpy.ndarray, as well as append it to a pre-existing array.
End of explanation
a = np.array([[1, 2], [3, 4]])
b = np.array([[5,6]])
np.concatenate((a,b))
np.concatenate((a,b.T), axis=1)
Explanation: np.concatenate
You can also concatenate different arrays
End of explanation
A = np.array([[1, 2], [3, 4]])
A
# `B` is now referring to the same array data as `A`
B = A
Explanation: Copy and "Deep Copy"
Sometimes it is important to create new copies of arrays and other objects. For this reason, one uses numpy.copy to create new copies of arrays
End of explanation
B[0,0] = 10
B
A
Explanation: If we make any changes to B, A will also be affected by this change.
End of explanation
B = np.copy(A)
# Modifying `B`
B[0,0] = -5
B
A
Explanation: To get a completely independent, new object, you would use:
End of explanation
import scipy as sc
Explanation: The array A was not affected by this change. This is important when you're constantly re-defining new arrays
Scipy - Library of Scientific Algorithms for Python
SciPy provides a large number of higher-level scientific algorithms.
It includes:
- Special Functions
- Integration
- Optimization
- Interpolation
- Fourier Transforms
- Signal Processing
- Linear Algebra
- Statistics
- Multi-dimensional image processing
End of explanation
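As a tiny taste of the list above (an added sketch; the rest of the session focuses on interpolation and KD-Trees), the integration and optimization sub-packages can be used like this:
from scipy import integrate, optimize
# Numerical integration: integral of sin(x) from 0 to pi (exact answer is 2)
area, err = integrate.quad(np.sin, 0, np.pi)
print(area)
# Optimization: minimum of (x - 3)^2, starting the search at x = 0
res = optimize.minimize(lambda x: (x - 3)**2, x0=0.0)
print(res.x)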
from scipy.interpolate import interp1d
def f(x):
return np.sin(x)
n = np.arange(0, 10)
x = np.linspace(0, 9, 100)
y_meas = f(n) + 0.1 * np.random.randn(len(n)) # simulate measurement with noise
y_real = f(x)
linear_interpolation = interp1d(n, y_meas)
y_interp1 = linear_interpolation(x)
cubic_interpolation = interp1d(n, y_meas, kind='cubic')
y_interp2 = cubic_interpolation(x)
fig, ax = plt.subplots(figsize=(15,6))
ax.set_facecolor('white')
ax.plot(n, y_meas, 'bs', label='noisy data')
ax.plot(x, y_real, 'k', lw=2, label='true function')
ax.plot(x, y_interp1, 'r', label='linear interp')
ax.plot(x, y_interp2, 'g', label='cubic interp')
ax.legend(loc=3, prop={'size':20});
ax.tick_params(axis='both', which='major', labelsize=20)
ax.tick_params(axis='both', which='minor', labelsize=15)
Explanation: Interpolation
You can use Scipy to interpolate your data.
You would use the interp1d function to interpolate your function.
End of explanation
Lbox = 250.
Npts = 1000
# Creating cartesian coordinates
x = np.random.uniform(0, Lbox, Npts)
y = np.random.uniform(0, Lbox, Npts)
z = np.random.uniform(0, Lbox, Npts)
sample1 = np.vstack([x, y, z]).T
sample1
sample1.shape
Explanation: KD-Trees
You can also use SciPy to calculate KD-Trees for a set of points
End of explanation
from scipy.spatial import cKDTree
# Initializing KDTree
KD_obj = cKDTree(sample1)
N_neighbours = cKDTree.count_neighbors(KD_obj, KD_obj, 50) - \
cKDTree.count_neighbors(KD_obj, KD_obj, 30)
print("Number of Neighbours: {0}".format(N_neighbours))
Explanation: Let's say we want to know how many points are within distances of 30 and 50 from other points. To know this, you construct a KD-Tree
End of explanation
k_nearest = 4
dist_k, dist_k_idx = cKDTree.query(KD_obj, sample1, k_nearest)
dist_k
Explanation: Let's say you want to get the distances to the Nth-nearest neighbor.
End of explanation
dist_k_idx
Explanation: You can also get the indices
End of explanation
pairs = KD_obj.query_ball_tree(KD_obj, 30)
pairs[0:10]
Explanation: The first column corresponds to the point itself.
You can also find pairs that are separated by at most a distance r
End of explanation |
15,683 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learning About the Singular Value Decomposition
This notebook explores the Singular Value Decomposition (SVD)
Step1: Start With A Simple Matrix
Step2: Now Take the SVD
Step3: Check U, S and V^T Do Actually Reconstruct A | Python Code:
# import numpy for SVD function
import numpy
# import matplotlib.pyplot for visualising arrays
import matplotlib.pyplot as plt
Explanation: Learning About the Singular Value Decomposition
This notebook explores the Singular Value Decomposition (SVD)
End of explanation
# create a really simple matrix
A = numpy.array([[-1,1], [1,1]])
# and show it
print("A = \n", A)
# plot the array
p = plt.subplot(111)
p.axis('scaled'); p.axis([-2, 2, -2, 2]); p.axhline(y=0, color='lightgrey'); p.axvline(x=0, color='lightgrey')
p.set_yticklabels([]); p.set_xticklabels([])
p.set_title("A")
p.plot(A[0,],A[1,],'ro')
plt.show()
Explanation: Start With A Simple Matrix
End of explanation
# break it down into an SVD
U, s, VT = numpy.linalg.svd(A, full_matrices=False)
S = numpy.diag(s)
# what are U, S and V
print("U =\n", U, "\n")
print("S =\n", S, "\n")
print("V^T =\n", VT, "\n")
for px in [(131,U, "U"), (132,S, "S"), (133,VT, "VT")]:
subplot = px[0]
matrix = px[1]
matrix_name = px[2]
p = plt.subplot(subplot)
p.axis('scaled'); p.axis([-2, 2, -2, 2]); p.axhline(y=0, color='lightgrey'); p.axvline(x=0, color='lightgrey')
p.set_yticklabels([]); p.set_xticklabels([])
p.set_title(matrix_name)
p.plot(matrix[0,],matrix[1,],'ro')
pass
plt.show()
Explanation: Now Take the SVD
End of explanation
# rebuild A2 from U.S.V
A2 = numpy.dot(U,numpy.dot(S,VT))
print("A2 = \n", A2)
# plot the reconstructed A2
p = plt.subplot(111)
p.axis('scaled'); p.axis([-2, 2, -2, 2]); p.axhline(y=0, color='lightgrey'); p.axvline(x=0, color='lightgrey')
p.set_yticklabels([]); p.set_xticklabels([])
p.set_title("A2")
p.plot(A2[0,],A2[1,],'ro')
plt.show()
Explanation: Check U, S and V^T Do Actually Reconstruct A
End of explanation |
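A quick numerical check (an added example, not in the original notebook) confirms that the product reproduces A up to floating-point error:
print(numpy.allclose(A, A2))           # True when U.S.V^T reproduces A
print(numpy.max(numpy.abs(A - A2)))    # largest element-wise difference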
15,684 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Análisis de redes en Python (Networkx)
Bienvenidos
Step3: Ahora le vamos a agregar nodos
Step7: Borremos los nodos
Step10: Vamos a darle nombre!
Se pueden asignar atributos a la gráfica como a cada uno de sus elementos. Los nodos pueden ser también strings!
Step11: Ups!! ahora hay que borrar esos nodos!.. como lo hacemos??
Tip
Step12: Nos faltan más miembros del grupo y sus datos... como lo hacemos?
Ejemplo
Step13: ahora vamos con los vertices!
Step14: Edge Attributes
Podemos añadir y manipular los atributos de los vertices usando add_edge(), add_edges_from(), subscript notation, or G.edge.
Step15: ¿Entonces quien quedó con quien?
Step16: ¿Que más le podemos preguntar a nuestra gráfica?
La distribución de conexiones (o vecinos) P (k)
Step17: Podemos usar las funciones y algoritmos dentro de otras funciones
Step18: También podemos generar la matriz adjacente a otros formatos
Step19: Tambien podemos importar a partir de varias estructuras de datos
Por ejemplo
Step20: O de diferentes tipos de archivos como por ejemplo ".graphml" o ".leda"
http
Step21: Ejercicio
Ahora podemos practicar leyendo y analizando el archivo
Step22: Clásicas gráficas pequeñas
Step23: Generadores de gráficas estocásticas, e.g.
Step24: Dibujemos la gráfica
Step25: A dibujar!!!
Ahora elijan una gráfica de las que generamos anteriormente y muestrenla a los demás | Python Code:
%matplotlib inline
import networkx as nx
import matplotlib.pyplot as gr
import scipy as sc
G=nx.Graph()
G.graph
Explanation: Network analysis in Python (Networkx)
Welcome :D
In this session we will learn to use the "Networkx" library (http://networkx.readthedocs.io/en/networkx-1.11/index.html), a Python library for the creation, manipulation and study of the structure, dynamics and function of complex networks.
What is a complex network
Complex networks are sets of many connected nodes that interact in some way. The nodes of a network are also called vertices or elements, and we will denote them by the symbols v1, v2, ..., vN, where N is the total number of nodes in the network. If a node vi is connected to another node vj, this connection is represented by an ordered pair (vi, vj).
Examples of networks
Social networks
Computer networks
Biological networks
etc etc etc
Let's get started!
Let's create an empty graph
End of explanation
One at a time
G.add_node(1)
Or as a list
G.add_nodes_from([2,3,4,5])
G.node
Explanation: Now let's add some nodes
End of explanation
One at a time
G.remove_node(3)
G.node
All of them
G.clear()
G.node
We can add nodes from a variable
H=[5, 6, 7, 8, 9, 12]
G.add_nodes_from(H)
G.node
H=nx.path_graph(10)
G.add_nodes_from(H)
G.node
#print (H)
Explanation: Let's delete the nodes
End of explanation
GPL=nx.Graph(Name=["Pyladies"], librería= "Scipy")
GPL.graph
Now let's edit the attribute
GPL.graph['librería']='Networkx'
GPL.graph
Let's start adding the nodes
GPL.add_node('Erika')
GPL.add_nodes_from("Ale", edad = 30 ) # what happens if we do this???
GPL.nodes(data=True)
Explanation: Let's give it a name!
Attributes can be assigned to the graph as well as to each of its elements. Nodes can also be strings!
End of explanation
GPL.remove_nodes_from(['A','l','e'])
GPL.node
GPL.add_node("Ale", edad = 30 )
GPL.node["Ale"]
GPL.node['Ale']['entidad'] = 'IFC'
GPL.node
GPL.number_of_nodes()
Explanation: Oops!! Now we have to delete those nodes!.. how do we do it??
Tip: remove_nodes_from
End of explanation
GPL.add_node('Jane Doe', edad = 25, entidad = 'Ciencias')
GPL.nodes(data=True)
Explanation: We are still missing more group members and their data... how do we add them?
Example:
End of explanation
GPL.add_edge('Ale','Erika')
GPL.edge
e=('Erika','Jane Doe')
GPL.add_edge(*e)
GPL.edge
GPL.add_edges_from([('Jane Doe','Ale'),('Ale','Marco')])
GPL.number_of_edges()
# But what happened here? Marco was not included in the nodes of the Pyladies group
GPL.node
Explanation: Now let's move on to the edges!
End of explanation
GPL.add_edge(2,5, weight=2)
GPL.add_edges_from([('Erika','Jane Doe'),('Jane Doe','Ale')], color='red')
GPL.add_edges_from([('Ale','Marco',{'color':'blue'}), ('Erika','Erin',{'weight':8}), (3,5,{'weight':8})])
GPL['Ale']['Erika']['weight'] = 3
GPL.edge['Ale']['Marco']['weight'] = 4
GPL.edge
Explanation: Edge Attributes
We can add and manipulate edge attributes using add_edge(), add_edges_from(), subscript notation, or G.edge.
End of explanation
GPL.neighbors('Erika')
GPL.degree('Erika')
Explanation: So who ended up connected to whom?
End of explanation
nx.info(GPL)
#nx.info(GPL, 'Ale')
nx.get_node_attributes(GPL,'Erika')
nx.clustering(GPL)
nx.degree(GPL)
nx.degree_histogram(GPL)
Explanation: What else can we ask our graph?
The degree (or neighbor) distribution P(k): the probability that a randomly chosen node has k connections (or neighbors). (degree distribution)
For example, in a network of sexual contacts, P(k) is the probability that a randomly chosen person in a society has had k distinct sexual partners over their lifetime.
The clustering coefficient (C): the probability that two nodes that are both directly connected to a third node are also connected to each other. (clustering coefficient)
For example, in a friendship network, it is the probability that two of my friends are themselves friends with each other.
The minimum path length between two nodes vi and vj: the minimum number of "hops" needed to get from a node vi of the network to another node vj. (path length)
The average path length of the network (L): the average of the minimum path lengths over all possible pairs of nodes (vi, vj) in the network.
Algorithms and functions
http://networkx.readthedocs.io/en/networkx-1.11/reference/algorithms.html
http://networkx.readthedocs.io/en/networkx-1.11/reference/functions.html
We can use several algorithms and functions that give us information about the network, for example:
average_clustering(G, nodes=None, weight=None, count_zeros=True): the clustering coefficient.
Compute the average clustering coefficient for the graph G.
shortest_path_length(G[, source, target, weight]): finds the shortest path between nodes.
Compute shortest path lengths in the graph.
degree_histogram(G) Return a list of the frequency of each degree value.
info(G[, n]) Print short summary of information for the graph G or the node n.
Examples:
End of explanation
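As a short added sketch of my own (G_demo is an assumed example graph, not part of the original notebook), each of the quantities defined above maps to a single networkx call:
# Added sketch: compute P(k), C and L on a small example graph
G_demo = nx.erdos_renyi_graph(50, 0.1, seed=1)        # assumed example graph
print(nx.degree_histogram(G_demo))                    # counts per degree k, i.e. the unnormalized P(k)
print(nx.average_clustering(G_demo))                  # clustering coefficient C
if nx.is_connected(G_demo):
    print(nx.average_shortest_path_length(G_demo))    # average path length L (needs a connected graph)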
degree=list();
clustering=list();
for n in sc.arange(len(GPL.node)): # Some properties of the graph
degree.append(sorted(nx.degree(GPL,[n]).values())) # así nos regresa solo los valores de grado
#degree.append(nx.degree(GPL,[n]))
#así nos regresa los nodos con su correspondiente valor de grado de conectividad
clustering.append(nx.clustering(GPL,[n]))
print(degree)
print(clustering)
# What happened here?... it is not accessing the information of the nodes that are strings rather than numbers... be careful with that!
G=nx.Graph()
e=[('a','b',0.3),('b','c',0.9),('a','c',0.5),('c','d',1.2)]
G.add_weighted_edges_from(e)
nx.dijkstra_path(G,'a','d') # finds the shortest weighted path between the nodes
Explanation: We can use the functions and algorithms inside other functions
End of explanation
A = nx.to_scipy_sparse_matrix(GPL)
print(A.todense())
A = nx.to_numpy_matrix(GPL)
print(A)
A = nx.to_pandas_dataframe(GPL)
print(A)
Explanation: We can also export the adjacency matrix to other formats
End of explanation
import pandas as pd
import numpy as np
r = np.random.RandomState(seed=5)
ints = r.randint(1, 10, size=(3,2))
a = ['A', 'B', 'C']
b = ['D', 'A', 'E']
df = pd.DataFrame(ints, columns=['weight', 'cost'])
df['a'] = a
df['b'] = b
df
G=nx.from_pandas_dataframe(df, 'a', 'b', ['weight', 'cost'])
nx.info(G)
G['E']['C']['cost']
Explanation: We can also import from several data structures
For example:
from_pandas_dataframe(df, source, target, edge_attr=None, create_using=None)
End of explanation
RB=nx.read_leda('rhesus_brain_2.leda')
#RB=nx.read_graphml('rhesus_brain_2.graphml')
nx.info(RB)
RB.node
nx.is_directed(RB)
RB.neighbors('VIP')
RB.node['VIP']
RB.edge['VIP']
nx.flow_hierarchy(RB)
#Returns the flow hierarchy of a directed network.
#Flow hierarchy is defined as the fraction of edges not participating in cycles in a directed graph
largest_sc = sorted(nx.strongly_connected_components(RB))
#largest_sc = max(nx.strongly_connected_components(RB))
largest_sc
AttarctComp=sorted(nx.attracting_components(RB))
AttarctComp
degree=(nx.degree_histogram(RB))
degreeX=(sc.arange(len(degree)))
gr.bar(degreeX, degree)
gr.title("Degree Histogram")
gr.xlabel("k")
gr.ylabel("Frequency")
fig = gr.gcf()
degree_sequence=sorted(degree,reverse=True) # degree sequence
#print "Degree sequence", degree_sequence
dmax=max(degree_sequence)
gr.loglog(degree_sequence,'b-',marker='o')
gr.title("Degree rank plot")
gr.ylabel("degree")
gr.xlabel("rank")
#gr.savefig("degree_histogram.png")
gr.show()
Explanation: Or from different file types, for example ".graphml" or ".leda"
http://networkx.readthedocs.io/en/networkx-1.11/reference/readwrite.html
The file we will use for this example contains information about the connectivity between the brain areas of a rhesus monkey.
Files like this one can be downloaded from http://openconnecto.me/graph-services/download/
End of explanation
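As an added aside (the file name below is my own arbitrary choice), writing and re-reading one of these formats is symmetric:
# Added sketch: write a small graph to GraphML and read it back
Gtmp = nx.path_graph(5)
nx.write_graphml(Gtmp, 'example.graphml')
Gback = nx.read_graphml('example.graphml')
nx.info(Gback)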
W=nx.wheel_graph(10)
W.edge
G=nx.complete_graph(8)
G.edge
Explanation: Exercise
Now we can practice by reading and analyzing the file: mixed.species_brain_1.graphml
Graph generators
For our exercises we can use the network generators that are already available in Networkx, for example:
barbell_graph(m1, m2[, create_using]) Return the Barbell Graph: two complete graphs connected by a path.
complete_graph(n[, create_using]) Return the complete graph K_n with n nodes.
complete_multipartite_graph(*block_sizes) Returns the complete multipartite graph with the specified block sizes.
dorogovtsev_goltsev_mendes_graph(n[, ...]) Return the hierarchically constructed Dorogovtsev-Goltsev-Mendes graph.
grid_graph(dim[, periodic]) Return the n-dimensional grid graph.
hypercube_graph(n) Return the n-dimensional hypercube.
lollipop_graph(m, n[, create_using]) Return the Lollipop Graph; Km connected to Pm.
path_graph(n[, create_using]) Return the Path graph P_n of n nodes linearly connected by n-1 edges.
wheel_graph(n[, create_using]) Return the wheel graph: a single hub node connected to each node of the (n-1)-node cycle graph
End of explanation
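A brief added example (the parameters are my own choice) using two of the generators listed above:
bar = nx.barbell_graph(5, 2)     # two complete 5-node graphs joined by a 2-node path
lol = nx.lollipop_graph(6, 3)    # K6 connected to a 3-node path
print(nx.info(bar))
print(nx.info(lol))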
petersen=nx.petersen_graph()
tutte=nx.tutte_graph()
maze=nx.sedgewick_maze_graph()
tet=nx.tetrahedral_graph()
Explanation: Classic small graphs
End of explanation
er=nx.erdos_renyi_graph(100,0.15)
ws=nx.watts_strogatz_graph(30,3,0.1)
ba=nx.barabasi_albert_graph(100,5)
red=nx.random_lobster(100,0.9,0.9)
Explanation: Stochastic graph generators, e.g.
End of explanation
ba=nx.barabasi_albert_graph(40,5)
nx.draw(ba, node_size=80,node_color="blue", alpha=0.5)
#nx.draw_circular(ba, node_size=80,node_color="blue", alpha=0.5)
gr.title('Barabasi Albert Graph', fontsize= 18) # matplotlib is required... here it is imported as gr
gr.show()
ba=nx.barabasi_albert_graph(40,5)
#nx.draw(ba, node_size=80,node_color="blue", alpha=0.5)
nx.draw_circular(ba, node_size=80,node_color="blue", alpha=0.5)
gr.title('Barabasi Albert Graph', fontsize= 18) # matplotlib is required... here it is imported as gr
gr.show()
nx.average_shortest_path_length(ba)
nx.draw(ws, node_size=80,node_color="blue", alpha=0.5)
gr.title('Watts Strogatz Graph', fontsize= 18) # matplotlib is required... here it is imported as gr
gr.show()
nx.average_shortest_path_length(ws)
Explanation: Let's draw the graph
End of explanation
nx.draw(W, node_size=50,node_color="green", alpha=0.5)
gr.title('Wheel graph', fontsize= 18)
gr.show()
nx.average_shortest_path_length(W)
Hyp=nx.hypercube_graph(6)
nx.draw(Hyp, node_size=50,node_color="orange", alpha=0.5)
gr.title('Otra gráfica', fontsize= 18)
gr.show()
lollipop=nx.lollipop_graph(10,20)
nx.draw(lollipop, node_size=50,node_color="orange", alpha=0.5)
gr.title('Otra gráfica', fontsize= 18)
gr.show()
nx.average_shortest_path_length(lollipop)
nx.draw(GPL, node_size=100, node_color="pink", alpha=0.5)
gr.title('Pyladies graph', fontsize= 18)
gr.show()
nx.average_shortest_path_length(GPL)
nx.draw_random(RB, node_size=100, node_color="RED", alpha=0.5)
gr.title('RHESUS BRAIN graph', fontsize= 18)
gr.show()
nx.average_shortest_path_length(RB)
Explanation: Time to draw!!!
Now choose one of the graphs we generated earlier and show it to the others
End of explanation |
15,685 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook I do first some data cleaning, then I perform some basic analysis of the dataset (no. cases per year, correlation population/no. of cases etc...).
<p>
I analyze finally the simplified municipalities indicators. In particular, I use Lasso Regression to find out the features that are most correlated to the no. of cases. I also analyze the features themself, to see how they are correlated to each other and how they are clustered together.
# Data preparation
Step1: Data loading
Some checking first
The following are the dataset files containing KPI data. They have to be loaded and concatenated as a single pandas dataframe.
Step2: I want to be sure that headers are consistent for all KPI files. I raise an exception if that is not the case.
Step3: I load now the other CSV files as pandas dataframes.
Step4: Data cleaning
After some manual data wrangling, I've noticed that KPI files contain the same header line multiple times. I guess each file is a concationation of many original smaller files. I will remove this lines manually.
Step5: The value field contains sometimes the string "None". For the purposes of this analysis it should be fine to set it to zero. Then I convert both period and value to numeric types. They will be used below during the analysis.
Step6: Check that municipality names of KPIs data match those of simplified municipality indicators (SMIs).
Step7: Data Analisys
Misc stats
Step8: Municipalities with the highest number of cases
Step9: Cases by year
Step10: Histogram no. of cases by municipalities
Step11: Correlation population / no. of cases
I show some misc info about population and no. of cases and then I compute and show their correlation.
Step12: Mean value of KPIs
I plot here the mean value of KPIs aggregated by year. I see some similarities with the "Cases by year" plot. Are KPI values and no. of cases correlated? I would have expected the opposite.
Step13: Features analysis
Let's choose a subset of SMIs features that make sense.
Step14: Lasso regression
Using Lasso regression on selected features of the simplified KPIs dataframe to compute the importance of each single feature.
Step15: Bar chart of importance by feature
The most positively correlated features are "reportedCrimeVandalism", "foreignBorn" and "hasEducation", the most negatively correlated are "populationShare65plus", "longitude" (but why?), "populationChange", "youngUnskilled" (they didn't go to school?). But "youthUnemployment" and "motorcycles"?
Step16: Correlation of SMIs
I plot the correlation between the SMIs features as a heatmap.
Step17: I show in the following table the most important correlations, filtering out auto-correlations. Obvious
Step18: Features clusters as clustermap | Python Code:
import pandas as pd
import numpy as np
from sklearn import linear_model
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(style="white", color_codes=True)
%matplotlib inline
Explanation: In this notebook I do first some data cleaning, then I perform some basic analysis of the dataset (no. cases per year, correlation population/no. of cases etc...).
<p>
Finally, I analyze the simplified municipality indicators. In particular, I use Lasso regression to find out which features are most correlated with the no. of cases. I also analyze the features themselves, to see how they are correlated to each other and how they are clustered together.
# Data preparation
End of explanation
data_dir = './'
kpi_files = [data_dir + kpi for kpi in ['kpis_1998_2003.csv',
'kpis_2004_2008.csv',
'kpis_2009_2011.csv',
'kpis_2012_2013.csv']]
Explanation: Data loading
Some checking first
The following are the dataset files containing KPI data. They have to be loaded and concatenated as a single pandas dataframe.
End of explanation
# Check that all headers are the same
curHeaders = None
for csv in kpi_files:
with open(csv, 'r') as f:
for line in f:
if curHeaders is None:
curHeaders = line.strip()
elif curHeaders != line.strip():
raise Exception('KPI headers mismatch')
break
kpis = pd.concat([pd.read_csv(f) for f in kpi_files])
'Total number of KPIs: {}'.format(len(kpis.index))
Explanation: I want to be sure that headers are consistent for all KPI files. I raise an exception if that is not the case.
End of explanation
municipality_indicators = pd.read_csv(data_dir + 'municipality_indicators.csv')
simplified_municipality_indicators = pd.read_csv(data_dir + 'simplified_municipality_indicators.csv')
school_fire_cases = pd.read_csv(data_dir + 'school_fire_cases_1998_2014.csv')
Explanation: I load now the other CSV files as pandas dataframes.
End of explanation
# Rows to be removed will have field 'kpi' equal to string 'kpi', 'period' equal to 'period'
# and so on. One single check on the first attribute should be enough.
kpis = kpis[kpis['kpi'] != 'kpi']
'Total number of KPIs after cleaning: {}'.format(len(kpis.index))
Explanation: Data cleaning
After some manual data wrangling, I've noticed that KPI files contain the same header line multiple times. I guess each file is a concationation of many original smaller files. I will remove this lines manually.
End of explanation
print(kpis.dtypes)
kpis['period'] = kpis['period'].astype(int)
kpis['value'] = kpis['value'].replace(['None'], [0.]).astype(float)
print(kpis.dtypes)
Explanation: The value field contains sometimes the string "None". For the purposes of this analysis it should be fine to set it to zero. Then I convert both period and value to numeric types. They will be used below during the analysis.
End of explanation
names_simplified_municipality_indicators = set(simplified_municipality_indicators['name'])
names_kpis = set(kpis['municipality_name'])
assert names_simplified_municipality_indicators ^ names_kpis == set()
municipality_types = set(simplified_municipality_indicators['municipalityType'])
'Number of municipality types = {}'.format(len(municipality_types))
Explanation: Check that municipality names of KPIs data match those of simplified municipality indicators (SMIs).
End of explanation
total_fire_cases = school_fire_cases['Cases'].sum()
cases_years = school_fire_cases['Year']
print('Number of unique years = {}'.format(len(cases_years.unique())))
period_desc = '{}-{}'.format(cases_years.min(), cases_years.max())
print('Number of total fire cases in period {} = {}'.format(period_desc, total_fire_cases))
print('Total number of municipalities = {}'.format(len(school_fire_cases['Municipality'].unique())))
Explanation: Data Analisys
Misc stats
End of explanation
total_cases_by_municipality = school_fire_cases.groupby('Municipality').sum()['Cases'].sort_values(ascending=False)
max_cases_per_year = school_fire_cases.sort_values(by='Cases', ascending=False) \
.groupby('Year', as_index=False) \
.first()
print('The following municipalities were the ones with the highest number of cases during the period {}:\n{}' \
.format(period_desc, max_cases_per_year['Municipality'].unique()))
piechart_data = total_cases_by_municipality[:20]
others = total_cases_by_municipality[20:]
piechart_data.set_value('Others', others.sum())
f, ax = plt.subplots(figsize=(11, 4))
plt.axis('equal');
plt.pie(piechart_data, labels=piechart_data.index);
Explanation: Municipalities with the highest number of cases
End of explanation
cases_by_year = school_fire_cases.groupby('Year')
f, ax = plt.subplots(figsize=(11, 4))
plt.xlabel('Year')
plt.ylabel('No. of cases')
_ = plt.plot(cases_by_year.sum()['Cases'])
Explanation: Cases by year
End of explanation
print('Average cases = {}, standard deviation = {}, median = {}, 75th percentile = {}'.format(total_cases_by_municipality.mean(),
total_cases_by_municipality.std(),
total_cases_by_municipality.quantile(.5),
total_cases_by_municipality.quantile(.75)))
f, ax = plt.subplots(figsize=(11, 4))
plt.xlabel('No. of cases')
plt.ylabel('No. of municipalities')
_ = plt.hist(total_cases_by_municipality, bins=100)
Explanation: Histogram no. of cases by municipalities
End of explanation
population = school_fire_cases['Population']
print('Max population = {}, min population = {}'.format(population.max(), population.min()))
cases = school_fire_cases['Cases']
print('Max cases = {}, min cases = {}'.format(cases.max(), cases.min()))
reg = linear_model.LinearRegression()
features = np.array([[pp] for pp in population.values])
targets = np.array([[cc] for cc in cases.values])
reg.fit(features, targets)
print('Slope = {}, intercept = {}, score (R^2) = {}'.format(reg.coef_[0], reg.intercept_, reg.score(features, targets)))
f, ax = plt.subplots(figsize=(11, 4))
plt.xlim([0, 1000000])
plt.ylim([0,60])
plt.scatter(population, cases)
_ = plt.plot(features, reg.predict(features), color='r')
Explanation: Correlation population / no. of cases
I show some misc info about population and no. of cases and then I compute and show their correlation.
End of explanation
kpis_by_municipality = kpis['value'].groupby(kpis['municipality_name'])
kpis_by_period = kpis['value'].groupby(kpis['period'])
f, ax = plt.subplots(figsize=(11, 4))
plt.xlabel('Year')
plt.ylabel('Mean KPI value')
_ = plt.plot(kpis_by_period.mean())
Explanation: Mean value of KPIs
I plot here the mean value of KPIs aggregated by year. I see some similarities with the "Cases by year" plot. Are KPI values and no. of cases correlated? I would have expected the opposite.
End of explanation
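Since the question above is left open, here is a small check I added (not part of the original analysis) of the correlation between the yearly mean KPI value and the yearly number of cases:
yearly_kpi = kpis_by_period.mean()
yearly_cases = cases_by_year.sum()['Cases']
common_years = yearly_kpi.index.intersection(yearly_cases.index)
r = np.corrcoef(yearly_kpi.loc[common_years], yearly_cases.loc[common_years])[0, 1]
print('Correlation between yearly mean KPI and yearly number of cases: {:.2f}'.format(r))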
mun_indicators_features_list = ['medianIncome',
'youthUnemployment2010',
'youthUnemployment2013',
'unemploymentChange',
'reportedCrime',
'populationChange',
'hasEducation',
'asylumCosts',
'urbanDegree',
'foreignBorn',
'reportedCrimeVandalism',
'youngUnskilled',
'latitude',
'longitude',
'population',
'populationShare65plus',
'refugees',
'rentalApartments',
'fokusRanking',
'foretagsklimatRanking',
'cars',
'motorcycles',
'tractors',
'snowmobiles']
mun_indicators_features = simplified_municipality_indicators.loc[:, mun_indicators_features_list].as_matrix()
y_cases = [total_cases_by_municipality[m] for m in simplified_municipality_indicators['name']]
Explanation: Features analysis
Let's choose a subset of SMIs features that make sense.
End of explanation
lasso = linear_model.Lasso(alpha=0.1)
lasso.fit(mun_indicators_features, y_cases)
lasso.coef_
features_by_coef = sorted(zip(mun_indicators_features_list, lasso.coef_), key=lambda tup: tup[1], reverse=True)
chart_x = [t[0] for t in features_by_coef]
chart_y = [t[1] for t in features_by_coef]
Explanation: Lasso regression
Using Lasso regression on selected features of the simplified KPIs dataframe to compute the importance of each single feature.
End of explanation
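One caveat I would add here: Lasso coefficients depend on the scale of each feature, so it can be worth repeating the fit on standardized features. A minimal sketch, reusing the same alpha:
from sklearn.preprocessing import StandardScaler
scaled_features = StandardScaler().fit_transform(mun_indicators_features)
lasso_scaled = linear_model.Lasso(alpha=0.1)
lasso_scaled.fit(scaled_features, y_cases)
sorted(zip(mun_indicators_features_list, lasso_scaled.coef_), key=lambda t: abs(t[1]), reverse=True)[:5]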
f, ax = plt.subplots(figsize=(11, 4))
plt.xticks(range(len(chart_x)), chart_x)
locs, labels = plt.xticks()
plt.setp(labels, rotation=90)
_ = ax.bar(range(len(chart_x)), chart_y, 0.3, color="blue")
Explanation: Bar chart of importance by feature
The most positively correlated features are "reportedCrimeVandalism", "foreignBorn" and "hasEducation", the most negatively correlated are "populationShare65plus", "longitude" (but why?), "populationChange", "youngUnskilled" (they didn't go to school?). But "youthUnemployment" and "motorcycles"?
End of explanation
indicators_and_cases = simplified_municipality_indicators.loc[:, mun_indicators_features_list]
cor_mat = simplified_municipality_indicators.loc[:, mun_indicators_features_list].corr()
f, ax = plt.subplots(figsize=(15, 12))
sns.heatmap(cor_mat,linewidths=.5, ax=ax);
Explanation: Correlation of SMIs
I plot the correlation between the SMIs features as a heatmap.
End of explanation
threshold = 0.7
important_corrs = (cor_mat[abs(cor_mat) > threshold][cor_mat != 1.0]) \
.unstack().dropna().to_dict()
unique_important_corrs = pd.DataFrame(
list(set([(tuple(sorted(key)), important_corrs[key]) \
for key in important_corrs])), columns=['attribute pair', 'correlation'])
# sorted by absolute value
unique_important_corrs = unique_important_corrs.ix[
abs(unique_important_corrs['correlation']).argsort()[::-1]]
unique_important_corrs
Explanation: I show in the following table the most important correlations, filtering out auto-correlations. Obvious: latitude/snowmobiles, tractors/urbanDegree, youthUnemployment2010/youthUnemployment2013. Interesting: hasEducation/mediaIncome, populationChange, populationShare65plus (negative).
End of explanation
# See https://www.kaggle.com/cast42/santander-customer-satisfaction/exploring-features
import matplotlib.patches as patches
from scipy.cluster import hierarchy
from scipy.stats.mstats import mquantiles
from scipy.cluster.hierarchy import dendrogram, linkage
from sklearn.preprocessing import scale
from sklearn.preprocessing import StandardScaler
# scale to mean 0, variance 1
train_std = pd.DataFrame(scale(indicators_and_cases))
train_std.columns = indicators_and_cases.columns
m = train_std.corr()
l = linkage(m, 'ward')
mclust = sns.clustermap(m,
linewidths=0,
cmap=plt.get_cmap('RdBu'),
vmax=1,
vmin=-1,
figsize=(14, 14),
row_linkage=l,
col_linkage=l)
# http://stackoverflow.com/a/34697479/297313
_ = plt.setp(mclust.ax_heatmap.yaxis.get_majorticklabels(), rotation=0)
Explanation: Features clusters as clustermap
End of explanation |
15,686 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling and Simulation in Python
Copyright 2017 Allen Downey
License
Step1: Low pass filter
Step2: Now I'll create a Params object to contain the quantities we need. Using a Params object is convenient for grouping the system parameters in a way that's easy to read (and double-check).
Step4: Now we can pass the Params object make_system which computes some additional parameters and defines init.
make_system uses the given radius to compute area and the given v_term to compute the drag coefficient C_d.
Step5: Let's make a System
Step7: Here's the slope function,
Step8: As always, let's test the slope function with the initial conditions.
Step9: And then run the simulation.
Step10: Here are the results.
Step11: Here's the plot of position as a function of time.
Step12: And velocity as a function of time | Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
Explanation: Modeling and Simulation in Python
Copyright 2017 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
with units_off():
for i, name in enumerate(dir(UNITS)):
unit = getattr(UNITS, name)
try:
res = 1*unit - 1
if res == 0:
print(name, 1*unit - 1)
except TypeError:
pass
if i > 10000:
break
with units_off():
print(2 * UNITS.farad - 1)
with units_off():
print(2 * UNITS.volt - 1)
with units_off():
print(2 * UNITS.newton - 1)
mN = UNITS.gram * UNITS.meter / UNITS.second**2
with units_off():
print(2 * mN - 1)
Explanation: Low pass filter
End of explanation
params = Params(
R1 = 1e6, # ohm
C1 = 1e-9, # farad
A = 5, # volt
f = 1000, # Hz
)
Explanation: Now I'll create a Params object to contain the quantities we need. Using a Params object is convenient for grouping the system parameters in a way that's easy to read (and double-check).
End of explanation
def make_system(params):
    """Makes a System object for the given conditions.
    params: Params object
    returns: System object
    """
unpack(params)
init = State(V_out = 0)
omega = 2 * np.pi * f
tau = R1 * C1
cutoff = 1 / R1 / C1
t_end = 3 / f
return System(params, init=init, t_end=t_end,
omega=omega, cutoff=cutoff)
Explanation: Now we can pass the Params object to make_system, which computes some additional parameters and defines init.
make_system uses the given R1 and C1 to compute the time constant and cutoff frequency, and the given f to set the end time of the simulation.
End of explanation
system = make_system(params)
Explanation: Let's make a System
End of explanation
def slope_func(state, t, system):
    """Compute derivatives of the state.
    state: output voltage, V_out
    t: time
    system: System object
    returns: derivative of V_out
    """
V_out, = state
unpack(system)
V_in = A * np.cos(omega * t)
V_R1 = V_in - V_out
I_R1 = V_R1 / R1
I_C1 = I_R1
dV_out = I_C1 / C1
return dV_out
Explanation: Here's the slope function,
End of explanation
slope_func(system.init, 0, system)
Explanation: As always, let's test the slope function with the initial conditions.
End of explanation
ts = linspace(0, system.t_end, 301)
results, details = run_ode_solver(system, slope_func, t_eval=ts)
details
Explanation: And then run the simulation.
End of explanation
# results
Explanation: Here are the results.
End of explanation
def plot_results(results):
xs = results.V_out.index
ys = results.V_out.values
t_end = get_last_label(results)
if t_end < 10:
xs *= 1000
xlabel = 'Time (ms)'
else:
xlabel = 'Time (s)'
plot(xs, ys)
decorate(xlabel=xlabel,
ylabel='$V_{out}$ (volt)',
legend=False)
plot_results(results)
Explanation: Here's the plot of the output voltage as a function of time.
End of explanation
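As a rough added cross-check (my own, not in the original notebook), the late-time amplitude can be compared with the analytic gain of a first-order RC low-pass filter, A / sqrt(1 + (omega * R1 * C1)**2):
analytic_amp = system.A / np.sqrt(1 + (system.omega * system.R1 * system.C1)**2)
simulated_amp = results.V_out.iloc[len(results)//2:].abs().max()  # crude: look only at the second half of the run
print(analytic_amp, simulated_amp)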
fs = [1, 10, 100, 1000, 10000, 100000]
for i, f in enumerate(fs):
system = make_system(Params(params, f=f))
ts = linspace(0, system.t_end, 301)
results, details = run_ode_solver(system, slope_func, t_eval=ts)
subplot(3, 2, i+1)
plot_results(results)
Explanation: And here are the results for a range of input frequencies, showing how the filter attenuates signals above the cutoff:
End of explanation |
15,687 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<span style="color
Step1: Short Tutorial
Step2: The tree table
In the treeslider tool you have the option to collapse individuals from the same species or population into a single taxon using an imap dictionary. While that can be useful for reducing missing data or generating tree plots...
Step3: The tree table
Step4: Enter window and slide arguments
Here I select the scaffold Qrob_Chr03 (scaffold_idx=2), and run 2Mb windows (window_size) non-overlapping (2Mb slide_size) across the entire scaffold. I use the default inference method "raxml", and modify its default arguments to run 100 bootstrap replicates. More details on modifying raxml params later. I set for it to skip windows with <10 SNPs (minsnps), and to filter sites within windows (mincov) to only include those that have coverage across all 9 clades, with samples grouped into clades using an imap dictionary.
Step5: The tree inference command
You can examine the command that will be called on each genomic window. By modifying the inference_args above we can modify this string. See examples later in this tutorial.
Step6: Run tree inference jobs in parallel
To run the command on every window across all available cores call the .run() command. This will automatically save checkpoints to a file of the tree_table as it runs, and can be restarted later if it interrupted.
Step7: The tree table
Our goal is to fill the .tree_table, a pandas DataFrame where rows are genomic windows and the information content of each window is recorded, and a newick string tree is inferred and filled in for each. The tree table is also saved as a CSV formatted file in the workdir. You can re-load it later using Pandas. Below I demonstrate how to plot results from the tree_able. To examine how phylogenetic relationships vary across the genome see also the clade_weights() tool, which takes the tree_table as input.
Step8: <h3><span style="color
Step9: Draw cloud tree
Using toytree you can easily draw a cloud tree of overlapping gene trees to visualize discordance. These typically look much better if you root the trees, order tips by their consensus tree order, and do not use edge lengths. See below for an example, and see the toytree documentation.
Step10: <h3><span style="color | Python Code:
# conda install ipyrad -c bioconda
# conda install toyplot -c eaton-lab
import ipyrad.analysis as ipa
import toyplot
from ipyrad.analysis.clade_weights import *
Explanation: <span style="color:gray">ipyrad-analysis toolkit:</span> clade_weights
<h5><span style="color:red">(Reference only method)</span></h5>
The clade_weights tool is designed to analyze a tree_table, which can be generated using the treeslider ipyrad-analysis tool. This tool can quantify and generate plots of clade supports in sliding windows along the genome. This is similar to the TWISST software, for showing how different gene tree patterns vary along the genome.
Key features:
Required software
End of explanation
# the path to your CSV
data = "./analysis-treeslider/test.tree_table.csv"
# check scaffold idx (row) against scaffold names
self = ipa.clade_weights(
data=data,
name="test",
workdir="analysis-clade_weights",
imap={
"SE": ["virg", "mini", "gemi"],
"CA": ["sagr", "oleo"],
"WE": ["bran", "fusi-N", "fusi-S"],
#"REF": ["reference"],
},
minsupport=50,
)
Explanation: Short Tutorial:
Load the data
End of explanation
self.tree_table.head()
self.run(auto=True, force=True)
toyplot.plot(
self.clade_weights.rolling(5, win_type="boxcar", center=True).mean(),
height=300,
opacity=0.7,
);
# make empty clade weights table
# (this exploratory block also uses itertools, pandas, and toytree directly, so import them here)
import itertools
import pandas as pd
import toytree
self.clade_weights = pd.DataFrame(
{i: [0.] * self.tree_table.shape[0] for i in self.imap.keys()}
)
treelist = self.tree_table.tree[:10].tolist()
clades = self.imap
idx = 0
# iterate over trees
for tidx, tree in enumerate(treelist):
# get tips for this subtree
tree = toytree.tree(tree)
tips = set(tree.get_tip_labels())
# iterate over clades to test
for name, clade in clades.items():
idx = 0
tsum = 0
iclade = tips.intersection(set(clade))
isamp = itertools.combinations(iclade, 2)
oclade = tips.difference(iclade)
osamp = itertools.combinations(oclade, 2)
# iterate over quartets
for ipair in isamp:
for opair in osamp:
quartet = set(list(ipair) + list(opair))
todrop = set(tree.get_tip_labels()) - quartet
dt = tree.drop_tips(todrop)
tsum += clade_true(dt.unroot(), iclade)
idx += 1
print(tips)
clade_weights(treelist, clades, idx)
Explanation: The tree table
In the treeslider tool you have the option to collapse individuals from the same species or population into a single taxon using an imap dictionary. While that can be useful for reducing missing data or generating tree plots...
End of explanation
weights.tree_table.head()
weights.run(auto=True)
weights.clade_weights
Explanation: The tree table
End of explanation
# select a scaffold idx, start, and end positions
ts = ipa.treeslider(
name="test",
data="/home/deren/Downloads/ref_pop2.seqs.hdf5",
workdir="analysis-treeslider",
scaffold_idxs=2,
window_size=2000000,
slide_size=2000000,
inference_method="raxml",
inference_args={"N": 100, "T": 4},
minsnps=10,
mincov=9,
imap={
"reference": ["reference"],
"virg": ["TXWV2", "LALC2", "SCCU3", "FLSF33", "FLBA140"],
"mini": ["FLSF47", "FLMO62", "FLSA185", "FLCK216"],
"gemi": ["FLCK18", "FLSF54", "FLWO6", "FLAB109"],
"bran": ["BJSL25", "BJSB3", "BJVL19"],
"fusi-N": ["TXGR3", "TXMD3"],
"fusi-S": ["MXED8", "MXGT4"],
"sagr": ["CUVN10", "CUCA4", "CUSV6", "CUMM5"],
"oleo": ["CRL0030", "HNDA09", "BZBB1", "MXSA3017", "CRL0001"],
},
)
Explanation: Enter window and slide arguments
Here I select the scaffold Qrob_Chr03 (scaffold_idx=2), and run 2Mb windows (window_size) non-overlapping (2Mb slide_size) across the entire scaffold. I use the default inference method "raxml", and modify its default arguments to run 100 bootstrap replicates. More details on modifying raxml params later. I set for it to skip windows with <10 SNPs (minsnps), and to filter sites within windows (mincov) to only include those that have coverage across all 9 clades, with samples grouped into clades using an imap dictionary.
End of explanation
# this is the tree inference command that will be used
ts.show_inference_command()
Explanation: The tree inference command
You can examine the command that will be called on each genomic window. By modifying the inference_args above we can modify this string. See examples later in this tutorial.
End of explanation
ts.run(auto=True, force=True)
Explanation: Run tree inference jobs in parallel
To run the command on every window across all available cores call the .run() command. This will automatically save checkpoints to a file of the tree_table as it runs, and can be restarted later if it interrupted.
End of explanation
# the tree table is automatically saved to disk as a CSV during .run()
ts.tree_table.head()
Explanation: The tree table
Our goal is to fill the .tree_table, a pandas DataFrame where rows are genomic windows and the information content of each window is recorded, and a newick string tree is inferred and filled in for each. The tree table is also saved as a CSV formatted file in the workdir. You can re-load it later using Pandas. Below I demonstrate how to plot results from the tree_able. To examine how phylogenetic relationships vary across the genome see also the clade_weights() tool, which takes the tree_table as input.
End of explanation
# filter to only windows with >50 SNPS
trees = ts.tree_table[ts.tree_table.snps > 50].tree.tolist()
# load all trees into a multitree object
mtre = toytree.mtree(trees)
# root trees and collapse nodes with <50 bootstrap support
mtre.treelist = [
i.root("reference").collapse_nodes(min_support=50)
for i in mtre.treelist
]
# draw the first 12 trees in a grid
mtre.draw_tree_grid(
nrows=3, ncols=4, start=0,
tip_labels_align=True,
tip_labels_style={"font-size": "9px"},
);
Explanation: <h3><span style="color:red">Advanced</span>: Plots tree results </h3>
Examine multiple trees
You can select trees from the .tree column of the tree_table and plot them one by one using toytree, or any other tree drawing tool. Below I use toytree to draw a grid of the first 12 trees.
End of explanation
# filter to only windows with >50 SNPS (this could have been done in run)
trees = ts.tree_table[ts.tree_table.snps > 50].tree.tolist()
# load all trees into a multitree object
mtre = toytree.mtree(trees)
# root trees
mtre.treelist = [i.root("reference") for i in mtre.treelist]
# infer a consensus tree to get best tip order
ctre = mtre.get_consensus_tree()
# draw the first 12 trees in a grid
mtre.draw_cloud_tree(
width=400,
height=400,
fixed_order=ctre.get_tip_labels(),
use_edge_lengths=False,
);
Explanation: Draw cloud tree
Using toytree you can easily draw a cloud tree of overlapping gene trees to visualize discordance. These typically look much better if you root the trees, order tips by their consensus tree order, and do not use edge lengths. See below for an example, and see the toytree documentation.
End of explanation
# select a scaffold idx, start, and end positions
ts = ipa.treeslider(
name="chr1_w500K_s100K",
data=data,
workdir="analysis-treeslider",
scaffold_idxs=[0, 1, 2],
window_size=500000,
slide_size=100000,
minsnps=10,
inference_method="raxml",
inference_args={"m": "GTRCAT", "N": 10, "f": "d", 'x': None},
)
# this is the tree inference command that will be used
ts.show_inference_command()
Explanation: <h3><span style="color:red">Advanced</span>: Modify the raxml command</h3>
In this analysis I entered multiple scaffolds to create windows across each scaffold. I also entered a smaller slide size than window size so that windows are partially overlapping. The raxml command string was modified to perform 10 full searches with no bootstraps.
End of explanation |
15,688 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Imagine that you are at a casino, and you find an unbalanced roulette wheel that looks like the following
Step1: After spending a significant amount of time spinning the wheel, you feel a little unsatisfied. Sure, you found the expected payout, but there's a nagging feeling that your answer might have been much more accurate if you just had more money to spend. In fact, you're not even completely sure that last two digits are correct (see the N^(-1/2) rule from the previous blog post).
Let's see what the answer would have been if you instead had $50 million to spend
Step2: Much more satisfying. You're now fairly confident of the answer within about a hundredths of a cent. Unfortunately, you don't actually have $50 million to spend (and you certainly don't have the time for 100 million spins). Is there a way to increase the accuracy (i.e. reduce the variance) of the simulation without spending more money (i.e. requiring more histories)?
Fortunately, there's a number of methods in Monte Carlo simulation called "variance reduction" methods which can be used for this purpose. In this notebook, we will be examining a method called "weight correction". Essentially, instead of choosing values from the original probability distribution, we will choose them from a new probability distribution that we specify.
This is similar to the idea of "class balancing" in classification. We can choose a new probability distribution in such a way as to increase the likelihood that the (originally) less likely choices are selected. This will effectively decrease the variance of the final result. Once we choose a new probability distribution, we must adjust the weight of each choice based on the following formula, so the results will be unbiased
Step3: Based on the simulation above, we got an answer that was closer to the "true" value of 0.315, even though we used the same number of spins (1000). (Since Monte Carlo simulation is stochastic, it's possible that if you re-run this notebook, you might get an answer that's farther away from the "true" value, but the weight correction method will more consistently give an answer that's closer to the true value)
However, the results aren't perfect. Can we improve the selection of P_New? It turns out that we can, if we select each P_New according to the following formula | Python Code:
import random
from numba import jit
# Monte Carlo simulation function. This is defined as
# a function so the numba library can be used to speed
# up execution. Otherwise, this would run much slower.
# p1 is the probability of the first area, and s1 is the
# score of the first area, and so on. The probabilities
# are cumulative.
@jit
def MCHist(n_hist, p1, s1, p2, s2, p3, s3, p4, s4):
money = 0
for n in range(1, n_hist):
x = random.random()
if x <= p1:
money += s1
elif x <= (p1 + p2):
money += s2
elif x <= (p1 + p2 + p3):
money += s3
elif x <= (p1 + p2 + p3 + p4):
money += s4
return money
# Run the simulation, iterating over each number of
# histories in the num_hists array. Don't cheat and look
# at these probabilities!! "You" don't know them yet.
num_hist = 1e3 # $500
results = MCHist(num_hist, 0.05, 1, 0.3, 0.3, 0.15, 0.5, 0.5, 0.2)
payout = round(results / num_hist, 3)
print('Expected payout per spin is ${}'.format(payout))
Explanation: Imagine that you are at a casino, and you find an unbalanced roulette wheel that looks like the following:
[Image in Blog Post]
You notice a sign next to the wheel that mentions the price per spin is 50 cents. Instead of selecting a colored area and receiving a payout if you made the correct choice, this game always gives a payout, based on which area the ball falls upon. Each of the colored areas will give the following payout:
1 dollar
30 cents
50 cents
20 cents
As you can see, if the ball lands on area 2 or 4, you'll receive less money than you started with. If it lands on area 3, you'll get your money back, and if it lands on 1, you'll receive twice the money you started with.
Luckily, you happen to have $500 to spend on this game, and you're willing to try your luck for 1000 spins. You're curious what the expected payoff of the game will be. It's clear that each area has a different probability of being selected each time, but the sign didn't mention exactly what the probabilities are. Sure, you could make some guesses about the probability based on the size of each area, but where's the fun in that?
Let's run this problem as a Monte Carlo simulation:
End of explanation
num_hist2 = 1e8 # $50 million
results2 = MCHist(num_hist2, 0.05, 1, 0.3, 0.3, 0.15, 0.5, 0.5, 0.2)
payout2 = round(results2 / num_hist2, 3)
print('Expected payout per spin is ${}'.format(payout2))
Explanation: After spending a significant amount of time spinning the wheel, you feel a little unsatisfied. Sure, you found the expected payout, but there's a nagging feeling that your answer might have been much more accurate if you just had more money to spend. In fact, you're not even completely sure that the last two digits are correct (see the N^(-1/2) rule from the previous blog post).
Let's see what the answer would have been if you instead had $50 million to spend:
End of explanation
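To make the N^(-1/2) remark concrete, here is an added estimate of the standard error of the per-spin payout, using the probabilities and scores that appear in the MCHist calls above:
import numpy as np
p = np.array([0.05, 0.30, 0.15, 0.50])   # area probabilities used above
s = np.array([1.00, 0.30, 0.50, 0.20])   # payouts per area
mean = np.sum(p * s)
var = np.sum(p * s**2) - mean**2
for N in (1e3, 1e8):
    print('N = {:.0e}: expected payout {:.3f} +/- {:.5f}'.format(N, mean, np.sqrt(var / N)))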
num_hist3 = 1e3 # $500
results3 = MCHist(num_hist3, 0.25, 0.2, 0.25, 0.36, 0.25, 0.3, 0.25, 0.4)
payout3 = round(results3 / num_hist3, 5)
print('Expected payout per spin is ${}'.format(payout3))
Explanation: Much more satisfying. You're now fairly confident of the answer to within about a hundredth of a cent. Unfortunately, you don't actually have $50 million to spend (and you certainly don't have the time for 100 million spins). Is there a way to increase the accuracy (i.e. reduce the variance) of the simulation without spending more money (i.e. requiring more histories)?
Fortunately, there are a number of methods in Monte Carlo simulation called "variance reduction" methods which can be used for this purpose. In this notebook, we will be examining a method called "weight correction". Essentially, instead of choosing values from the original probability distribution, we will choose them from a new probability distribution that we specify.
This is similar to the idea of "class balancing" in classification. We can choose a new probability distribution in such a way as to increase the likelihood that the (originally) less likely choices are selected. This will effectively decrease the variance of the final result. Once we choose a new probability distribution, we must adjust the weight of each choice based on the following formula, so the results will be unbiased:
weight_correction = p(x) / p'(x)
where p(x) is the original probability distribution and p'(x) is the new probability distribution. This formula is often expressed using the mnemonic "shoulda over did". Be careful that every value of x is still possible in the new probability distribution.
Let's examine this in practice. What if we choose a probability distribution such that each of the areas in the roulette wheel are equally likely to be chosen? Assume that the roulette operator is nice and allows you to select the results of the roulette so this situation could actually occur in real life. After all, no matter what, the house still wins, right?
The operator must also tell you the probabilities for this method to work. Note that you can calculate the analytic expected value now:
(0.05)(1) + (0.3)(0.3) + (0.15)(0.5) + (0.5)(0.2) = 0.315
But you're still curious. Will your "weight correction" method work?
For each of the areas, we can find the weight by dividing P_Orig by P_New:
| Area | P_Orig | Score_Orig | P_New | Weight | Score_New |
| -----|--------|------------|-------|--------|-----------|
| 1 | 0.05 | 1.0 | 0.25 | 0.2 | 0.2 |
| 2 | 0.30 | 0.3 | 0.25 | 1.2 | 0.36 |
| 3 | 0.15 | 0.5 | 0.25 | 0.6 | 0.3 |
| 4 | 0.50 | 0.2 | 0.25 | 2 | 0.4 |
Let's rerun the Monte Carlo analysis, using the new probability distribution and the new scoring system, but still using only $500 (1000 spins):
End of explanation
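For reference, here is a generic sketch of the weight-corrected estimator (a helper of my own, not from the original post); it samples areas from the new distribution and multiplies each payout by the "shoulda over did" weight, which is what the modified MCHist call below bakes into its scores:
def weighted_payout(n_hist, p_orig, p_new, scores):
    # sample from p_new, then score each spin by payout * (p_orig / p_new)
    total = 0.0
    for _ in range(int(n_hist)):
        x = random.random()
        cum = 0.0
        for p_o, p_n, s in zip(p_orig, p_new, scores):
            cum += p_n
            if x <= cum:
                total += s * (p_o / p_n)
                break
    return total / n_hist
weighted_payout(1e3, [0.05, 0.3, 0.15, 0.5], [0.25, 0.25, 0.25, 0.25], [1, 0.3, 0.5, 0.2])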
num_hist4 = 1e3 # $500
results4 = MCHist(num_hist4, 0.159, 0.315, 0.286, 0.315, 0.238, 0.315, 0.317, 0.315)
payout4 = round(results4 / num_hist4, 3)
print('Expected payout per spin is ${}'.format(payout4))
Explanation: Based on the simulation above, we got an answer that was closer to the "true" value of 0.315, even though we used the same number of spins (1000). (Since Monte Carlo simulation is stochastic, it's possible that if you re-run this notebook, you might get an answer that's farther away from the "true" value, but the weight correction method will more consistently give an answer that's closer to the true value)
However, the results aren't perfect. Can we improve the selection of P_New? It turns out that we can, if we select each P_New according to the following formula:
P_New = (P_Orig)(Score_Orig) / (TotalP_New)
Where TotalP_New is the sum of all (P_Orig)(Score_Orig) for all choices.
Here's the table for our roulette example:
| Area | P_Orig | Score_Orig | P_New | Weight | Score_New |
| -----|--------|------------|--------------------|----------|-----------|
| 1 | 0.05 | 1.0 | 0.050/0.315=0.159 | 0.315 | 0.315 |
| 2 | 0.30 | 0.3 | 0.090/0.315=0.286 | 1.049 | 0.315 |
| 3 | 0.15 | 0.5 | 0.075/0.315=0.238 | 0.630 | 0.315 |
| 4 | 0.50 | 0.2 | 0.100/0.315=0.317 | 1.577 | 0.315 |
What a strange coincidence. Our scores are now all equal to the same value, which just happens to be the analytical solution that we calculated earlier.
As you might expect, if we run the Monte Carlo simulation again, no matter which area is chosen, we get the same score, so the only possible result is 0.315:
End of explanation |
15,689 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step2: Training checkpoints
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step3: Saving from tf.keras training APIs
See the tf.keras guide on saving and
restoring.
tf.keras.Model.save_weights saves a TensorFlow checkpoint.
Step5: Writing checkpoints
The persistent state of a TensorFlow model is stored in tf.Variable objects. These can be constructed directly, but are often created through high-level APIs like tf.keras.layers or tf.keras.Model.
The easiest way to manage variables is by attaching them to Python objects, then referencing those objects.
Subclasses of tf.train.Checkpoint, tf.keras.layers.Layer, and tf.keras.Model automatically track variables assigned to their attributes. The following example constructs a simple linear model, then writes checkpoints which contain values for all of the model's variables.
You can easily save a model-checkpoint with Model.save_weights.
Manual checkpointing
Setup
To help demonstrate all the features of tf.train.Checkpoint, define a toy dataset and optimization step
Step6: Create the checkpoint objects
Use a tf.train.Checkpoint object to manually create a checkpoint, where the objects you want to checkpoint are set as attributes on the object.
A tf.train.CheckpointManager can also be helpful for managing multiple checkpoints.
Step7: Train and checkpoint the model
The following training loop creates an instance of the model and of an optimizer, then gathers them into a tf.train.Checkpoint object. It calls the training step in a loop on each batch of data, and periodically writes checkpoints to disk.
Step8: Restore and continue training
After the first training cycle you can pass a new model and manager, but pick up training exactly where you left off
Step9: The tf.train.CheckpointManager object deletes old checkpoints. Above it's configured to keep only the three most recent checkpoints.
Step10: These paths, e.g. './tf_ckpts/ckpt-10', are not files on disk. Instead they are prefixes for an index file and one or more data files which contain the variable values. These prefixes are grouped together in a single checkpoint file ('./tf_ckpts/checkpoint') where the CheckpointManager saves its state.
Step11: <a id="loading_mechanics"/>
Loading mechanics
TensorFlow matches variables to checkpointed values by traversing a directed graph with named edges, starting from the object being loaded. Edge names typically come from attribute names in objects, for example the "l1" in self.l1 = tf.keras.layers.Dense(5). tf.train.Checkpoint uses its keyword argument names, as in the "step" in tf.train.Checkpoint(step=...).
The dependency graph from the example above looks like this
Step12: The dependency graph for these new objects is a much smaller subgraph of the larger checkpoint you wrote above. It includes only the bias and a save counter that tf.train.Checkpoint uses to number checkpoints.
restore returns a status object, which has optional assertions. All of the objects created in the new Checkpoint have been restored, so status.assert_existing_objects_matched passes.
Step13: There are many objects in the checkpoint which haven't matched, including the layer's kernel and the optimizer's variables. status.assert_consumed only passes if the checkpoint and the program match exactly, and would throw an exception here.
Deferred restorations
Layer objects in TensorFlow may defer the creation of variables to their first call, when input shapes are available. For example, the shape of a Dense layer's kernel depends on both the layer's input and output shapes, and so the output shape required as a constructor argument is not enough information to create the variable on its own. Since calling a Layer also reads the variable's value, a restore must happen between the variable's creation and its first use.
To support this idiom, tf.train.Checkpoint defers restores which don't yet have a matching variable.
Step14: Manually inspecting checkpoints
tf.train.load_checkpoint returns a CheckpointReader that gives lower level access to the checkpoint contents. It contains mappings from each variable's key, to the shape and dtype for each variable in the checkpoint. A variable's key is its object path, like in the graphs displayed above.
Note
Step15: So if you're interested in the value of net.l1.kernel you can get the value with the following code
Step16: It also provides a get_tensor method allowing you to inspect the value of a variable
Step17: Object tracking
Checkpoints save and restore the values of tf.Variable objects by "tracking" any variable or trackable object set in one of its attributes. When executing a save, variables are gathered recursively from all of the reachable tracked objects.
As with direct attribute assignments like self.l1 = tf.keras.layers.Dense(5), assigning lists and dictionaries to attributes will track their contents.
Step18: You may notice wrapper objects for lists and dictionaries. These wrappers are checkpointable versions of the underlying data-structures. Just like the attribute based loading, these wrappers restore a variable's value as soon as it's added to the container. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
import tensorflow as tf
class Net(tf.keras.Model):
  """A simple linear model."""
def __init__(self):
super(Net, self).__init__()
self.l1 = tf.keras.layers.Dense(5)
def call(self, x):
return self.l1(x)
net = Net()
Explanation: Training checkpoints
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/checkpoint"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/checkpoint.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/checkpoint.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/checkpoint.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
The phrase "Saving a TensorFlow model" typically means one of two things:
Checkpoints, OR
SavedModel.
Checkpoints capture the exact value of all parameters (tf.Variable objects) used by a model. Checkpoints do not contain any description of the computation defined by the model and thus are typically only useful when source code that will use the saved parameter values is available.
The SavedModel format on the other hand includes a serialized description of the computation defined by the model in addition to the parameter values (checkpoint). Models in this format are independent of the source code that created the model. They are thus suitable for deployment via TensorFlow Serving, TensorFlow Lite, TensorFlow.js, or programs in other programming languages (the C, C++, Java, Go, Rust, C# etc. TensorFlow APIs).
This guide covers APIs for writing and reading checkpoints.
Setup
End of explanation
net.save_weights('easy_checkpoint')
Explanation: Saving from tf.keras training APIs
See the tf.keras guide on saving and
restoring.
tf.keras.Model.save_weights saves a TensorFlow checkpoint.
End of explanation
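To confirm this wrote TensorFlow-format checkpoint files (an added sketch; the file names follow from the prefix used above):
import os
sorted(f for f in os.listdir('.') if f.startswith('easy_checkpoint'))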
def toy_dataset():
inputs = tf.range(10.)[:, None]
labels = inputs * 5. + tf.range(5.)[None, :]
return tf.data.Dataset.from_tensor_slices(
dict(x=inputs, y=labels)).repeat().batch(2)
def train_step(net, example, optimizer):
  """Trains `net` on `example` using `optimizer`."""
with tf.GradientTape() as tape:
output = net(example['x'])
loss = tf.reduce_mean(tf.abs(output - example['y']))
variables = net.trainable_variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
return loss
Explanation: Writing checkpoints
The persistent state of a TensorFlow model is stored in tf.Variable objects. These can be constructed directly, but are often created through high-level APIs like tf.keras.layers or tf.keras.Model.
The easiest way to manage variables is by attaching them to Python objects, then referencing those objects.
Subclasses of tf.train.Checkpoint, tf.keras.layers.Layer, and tf.keras.Model automatically track variables assigned to their attributes. The following example constructs a simple linear model, then writes checkpoints which contain values for all of the model's variables.
You can easily save a model-checkpoint with Model.save_weights.
Manual checkpointing
Setup
To help demonstrate all the features of tf.train.Checkpoint, define a toy dataset and optimization step:
End of explanation
opt = tf.keras.optimizers.Adam(0.1)
dataset = toy_dataset()
iterator = iter(dataset)
ckpt = tf.train.Checkpoint(step=tf.Variable(1), optimizer=opt, net=net, iterator=iterator)
manager = tf.train.CheckpointManager(ckpt, './tf_ckpts', max_to_keep=3)
Explanation: Create the checkpoint objects
Use a tf.train.Checkpoint object to manually create a checkpoint, where the objects you want to checkpoint are set as attributes on the object.
A tf.train.CheckpointManager can also be helpful for managing multiple checkpoints.
End of explanation
def train_and_checkpoint(net, manager):
ckpt.restore(manager.latest_checkpoint)
if manager.latest_checkpoint:
print("Restored from {}".format(manager.latest_checkpoint))
else:
print("Initializing from scratch.")
for _ in range(50):
example = next(iterator)
loss = train_step(net, example, opt)
ckpt.step.assign_add(1)
if int(ckpt.step) % 10 == 0:
save_path = manager.save()
print("Saved checkpoint for step {}: {}".format(int(ckpt.step), save_path))
print("loss {:1.2f}".format(loss.numpy()))
train_and_checkpoint(net, manager)
Explanation: Train and checkpoint the model
The following training loop creates an instance of the model and of an optimizer, then gathers them into a tf.train.Checkpoint object. It calls the training step in a loop on each batch of data, and periodically writes checkpoints to disk.
End of explanation
opt = tf.keras.optimizers.Adam(0.1)
net = Net()
dataset = toy_dataset()
iterator = iter(dataset)
ckpt = tf.train.Checkpoint(step=tf.Variable(1), optimizer=opt, net=net, iterator=iterator)
manager = tf.train.CheckpointManager(ckpt, './tf_ckpts', max_to_keep=3)
train_and_checkpoint(net, manager)
Explanation: Restore and continue training
After the first training cycle you can pass a new model and manager, but pick up training exactly where you left off:
End of explanation
print(manager.checkpoints) # List the three remaining checkpoints
Explanation: The tf.train.CheckpointManager object deletes old checkpoints. Above it's configured to keep only the three most recent checkpoints.
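If a different retention policy is wanted, the manager exposes a couple of knobs; a hedged sketch (both keyword arguments exist on tf.train.CheckpointManager):
# Hedged sketch: keep five recent checkpoints, plus one every two hours.
manager = tf.train.CheckpointManager(
    ckpt, './tf_ckpts', max_to_keep=5, keep_checkpoint_every_n_hours=2)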
End of explanation
!ls ./tf_ckpts
Explanation: These paths, e.g. './tf_ckpts/ckpt-10', are not files on disk. Instead they are prefixes for an index file and one or more data files which contain the variable values. These prefixes are grouped together in a single checkpoint file ('./tf_ckpts/checkpoint') where the CheckpointManager saves its state.
End of explanation
to_restore = tf.Variable(tf.zeros([5]))
print(to_restore.numpy()) # All zeros
fake_layer = tf.train.Checkpoint(bias=to_restore)
fake_net = tf.train.Checkpoint(l1=fake_layer)
new_root = tf.train.Checkpoint(net=fake_net)
status = new_root.restore(tf.train.latest_checkpoint('./tf_ckpts/'))
print(to_restore.numpy()) # This gets the restored value.
Explanation: Loading mechanics
TensorFlow matches variables to checkpointed values by traversing a directed graph with named edges, starting from the object being loaded. Edge names typically come from attribute names in objects, for example the "l1" in self.l1 = tf.keras.layers.Dense(5). tf.train.Checkpoint uses its keyword argument names, as in the "step" in tf.train.Checkpoint(step=...).
The dependency graph from the example above looks like this:
The optimizer is in red, regular variables are in blue, and the optimizer slot variables are in orange. The other nodes—for example, representing the tf.train.Checkpoint—are in black.
Slot variables are part of the optimizer's state, but are created for a specific variable. For example, the 'm' edges above correspond to momentum, which the Adam optimizer tracks for each variable. Slot variables are only saved in a checkpoint if the variable and the optimizer would both be saved, thus the dashed edges.
Calling restore on a tf.train.Checkpoint object queues the requested restorations, restoring variable values as soon as there's a matching path from the Checkpoint object. For example, you can load just the bias from the model you defined above by reconstructing one path to it through the network and the layer.
End of explanation
status.assert_existing_objects_matched()
Explanation: The dependency graph for these new objects is a much smaller subgraph of the larger checkpoint you wrote above. It includes only the bias and a save counter that tf.train.Checkpoint uses to number checkpoints.
restore returns a status object, which has optional assertions. All of the objects created in the new Checkpoint have been restored, so status.assert_existing_objects_matched passes.
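When only part of a checkpoint is restored on purpose, the status object can also be told not to warn about the unrestored remainder; a minimal sketch using expect_partial():
# Hedged sketch: silence warnings about values deliberately left unrestored.
partial_status = new_root.restore(
    tf.train.latest_checkpoint('./tf_ckpts/')).expect_partial()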
End of explanation
deferred_restore = tf.Variable(tf.zeros([1, 5]))
print(deferred_restore.numpy()) # Not restored; still zeros
fake_layer.kernel = deferred_restore
print(deferred_restore.numpy()) # Restored
Explanation: There are many objects in the checkpoint which haven't matched, including the layer's kernel and the optimizer's variables. status.assert_consumed only passes if the checkpoint and the program match exactly, and would throw an exception here.
Deferred restorations
Layer objects in TensorFlow may defer the creation of variables to their first call, when input shapes are available. For example, the shape of a Dense layer's kernel depends on both the layer's input and output shapes, and so the output shape required as a constructor argument is not enough information to create the variable on its own. Since calling a Layer also reads the variable's value, a restore must happen between the variable's creation and its first use.
To support this idiom, tf.train.Checkpoint defers restores which don't yet have a matching variable.
End of explanation
reader = tf.train.load_checkpoint('./tf_ckpts/')
shape_from_key = reader.get_variable_to_shape_map()
dtype_from_key = reader.get_variable_to_dtype_map()
sorted(shape_from_key.keys())
Explanation: Manually inspecting checkpoints
tf.train.load_checkpoint returns a CheckpointReader that gives lower level access to the checkpoint contents. It contains mappings from each variable's key, to the shape and dtype for each variable in the checkpoint. A variable's key is its object path, like in the graphs displayed above.
Note: There is no higher-level structure to the checkpoint. It only knows the paths and values of the variables, and has no concept of models, layers, or how they are connected.
End of explanation
key = 'net/l1/kernel/.ATTRIBUTES/VARIABLE_VALUE'
print("Shape:", shape_from_key[key])
print("Dtype:", dtype_from_key[key].name)
Explanation: So if you're interested in net.l1.kernel, you can look up its key, shape, and dtype with the following code:
End of explanation
reader.get_tensor(key)
Explanation: It also provides a get_tensor method allowing you to inspect the value of a variable:
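A related convenience, mentioned here as an aside rather than as part of the excerpt, is tf.train.list_variables, which returns every (key, shape) pair in a single call:
# Hedged sketch: one-call summary of all variable keys and shapes.
for name, shape in tf.train.list_variables('./tf_ckpts/'):
    print(name, shape)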
End of explanation
save = tf.train.Checkpoint()
save.listed = [tf.Variable(1.)]
save.listed.append(tf.Variable(2.))
save.mapped = {'one': save.listed[0]}
save.mapped['two'] = save.listed[1]
save_path = save.save('./tf_list_example')
restore = tf.train.Checkpoint()
v2 = tf.Variable(0.)
assert 0. == v2.numpy() # Not restored yet
restore.mapped = {'two': v2}
restore.restore(save_path)
assert 2. == v2.numpy()
Explanation: Object tracking
Checkpoints save and restore the values of tf.Variable objects by "tracking" any variable or trackable object set in one of its attributes. When executing a save, variables are gathered recursively from all of the reachable tracked objects.
As with direct attribute assignments like self.l1 = tf.keras.layers.Dense(5), assigning lists and dictionaries to attributes will track their contents.
End of explanation
restore.listed = []
print(restore.listed) # ListWrapper([])
v1 = tf.Variable(0.)
restore.listed.append(v1) # Restores v1, from restore() in the previous cell
assert 1. == v1.numpy()
Explanation: You may notice wrapper objects for lists and dictionaries. These wrappers are checkpointable versions of the underlying data structures. Just like attribute-based loading, these wrappers restore a variable's value as soon as it's added to the container.
End of explanation |
15,690 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Planning Observations
Planning of observations is currently a weakness in python. We have the following packages to perform this task
Step1: In order to know the altitude and azimuth of a fixed target in the sky we will mainly need to know
Step2: You can also search by its name if it is in CDS...
Step3: Now we should specify where the observer will be on Earth
Step4: But maybe we are picky and want the exact location (or specify a different location that is not present in the database...)
Step5: Finally we need to set up the time, which by default is set in UTC.
Step6: Let's ask python if we can see the Nebula tonight from la palma
Step7: We assume that at 11.30 pm will be dark, but let's make sure...
Step8: You might get an IERS warning (International Earth Rotation and Reference Systems Service) to update the Earth Location.
For more info
Step9: Calculate rise/set/meridian transit times
It can also provide information about all the twilight times
Step10: By default it set's the nearest sunset but you can specify also next or previous.
Step11: Similarly, we can ask when the target will be raising or setting
Step12: Calculate alt/az positions for targets and Airmass
With this information we can also ask what is the Altitute and Azimuth of our target at that specific time
Step13: With the integrated sec function we can easily get the Airmass
Step14: We can now aim to make an altitude plot scanning the altitude of our target every hour
Step15: Fortunately, there is a function that does it (much faster) within the day around the date we provide
Step16: We can also give a range of dates to focus on a specific region of time (dark time)
Step17: Making sky charts
Step18: Finder Chart Image
Astroplan also provides the option to display sky charts from a list of surveys (but it goes really slow..) | Python Code:
%matplotlib inline
import numpy as np
import math
import matplotlib.pyplot as plt
import seaborn
from astropy.io import fits
from astropy import units as u
from astropy.coordinates import SkyCoord
plt.rcParams['figure.figsize'] = (12, 8)
plt.rcParams['font.size'] = 14
plt.rcParams['lines.linewidth'] = 2
plt.rcParams['xtick.labelsize'] = 13
plt.rcParams['ytick.labelsize'] = 13
plt.rcParams['axes.titlesize'] = 14
plt.rcParams['legend.fontsize'] = 13
Explanation: Planning Observations
Planning of observations is currently a weakness in Python. We have the following packages to perform this task:
pyephem - works well, but is no longer maintained and somewhat obsolete
astropy - has basic operations (i.e. compute altaz of sources) but no high-level functionality
astroplan - aims to provide many interesting and useful features, but it is at a very early stage and some features do not work (and it is slow!)
Here we will mostly review astroplan, as we believe it will be adopted by astropy and may well become the reference.
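As a practical aside (an assumption about your environment, not part of the original material), astroplan is typically installed from PyPI and exposes a version string you can check:
# Hedged sketch: install (uncomment if needed) and report the version.
# !pip install astroplan
import astroplan
print(astroplan.__version__)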
End of explanation
from astropy.coordinates import SkyCoord
from astroplan import FixedTarget
coordinates = SkyCoord('18h18m48.0s', '-13d49m0.0s', frame='icrs')  # plain ASCII '-' so the declination string parses
eagle_nebula = FixedTarget(name='M16', coord=coordinates)
print (eagle_nebula)
Explanation: In order to know the altitude and azimuth of a fixed target in the sky we will mainly need to know:
The location of the target (on Sky)
The location of the observer (on Earth)
The time
Let's define first the target we want to observe:
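If you prefer decimal degrees, the same target can be built as in the hedged sketch below; the numbers are approximate conversions of the sexagesimal values:
# Hedged sketch: M16 from (approximate) decimal-degree coordinates.
coordinates_deg = SkyCoord(274.7, -13.8167, unit='deg', frame='icrs')
eagle_nebula_deg = FixedTarget(name='M16', coord=coordinates_deg)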
End of explanation
eagle_nebula = FixedTarget.from_name('M16')
print (eagle_nebula)
Explanation: You can also search by its name if it is in CDS...
End of explanation
from astroplan import Observer
observer = Observer.at_site('lapalma')
print (observer)
Explanation: Now we should specify where the observer will be on Earth:
End of explanation
import astropy.units as u
from astropy.coordinates import EarthLocation
#from pytz import timezone
from astroplan import Observer
longitude = '-17d52m54s'
latitude = '28d45m38s'
elevation = 2344 * u.m
location = EarthLocation.from_geodetic(longitude, latitude, elevation)
observer = Observer(name='WHT',
location=location,
pressure=0.615 * u.bar,
relative_humidity=0.04,
temperature=18 * u.deg_C,
#timezone=timezone('US/Hawaii'),
description="Our beloved William Herschel Telescope")
print (observer)
Explanation: But maybe we are picky and want the exact location (or specify a different location that is not present in the database...)
End of explanation
from astropy.time import Time
time = Time('2017-09-15 23:30:00')
Explanation: Finally we need to set up the time, which by default is set in UTC.
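Other constructions work too; a hedged sketch (the Julian date below is only an approximate equivalent of the same instant):
# Hedged sketch: alternative ways to build a Time object.
from astropy.time import Time
time_now = Time.now()                     # current UTC time
time_jd = Time(2458012.479, format='jd')  # roughly 2017-09-15 23:30 UTC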
End of explanation
observer.target_is_up(time, eagle_nebula)
Explanation: Let's ask Python whether we can see the nebula tonight from La Palma:
End of explanation
observer.is_night(time)
Explanation: We assume that it will be dark at 11.30 pm, but let's make sure...
End of explanation
from astroplan import download_IERS_A
download_IERS_A()
Explanation: You might get an IERS warning (International Earth Rotation and Reference Systems Service) to update the Earth Location.
For more info: http://astroplan.readthedocs.io/en/latest/faq/iers.html
Let's do it:
End of explanation
observer.sun_set_time(time, which='nearest').iso
Explanation: Calculate rise/set/meridian transit times
It can also provide information about all the twilight times:
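The heading also mentions meridian transit; a hedged sketch of the corresponding call (this method exists on astroplan's Observer):
# Hedged sketch: time at which the target crosses the local meridian.
observer.target_meridian_transit_time(time, eagle_nebula, which='nearest').iso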
End of explanation
observer.sun_set_time(time, which='next').iso
observer.twilight_evening_civil(time, which='nearest').iso
observer.twilight_evening_nautical(time, which='nearest').iso
observer.twilight_evening_astronomical(time, which='nearest').iso
Explanation: By default it returns the nearest sunset, but you can also specify next or previous.
End of explanation
observer.target_rise_time(time, eagle_nebula).iso
observer.target_set_time(time, eagle_nebula).iso
Explanation: Similarly, we can ask when the target will be rising or setting:
End of explanation
altaz_eagle = observer.altaz(time, eagle_nebula)
altaz_eagle.alt, altaz_eagle.az
Explanation: Calculate alt/az positions for targets and Airmass
With this information we can also ask what the altitude and azimuth of our target are at that specific time
End of explanation
altaz_eagle.secz
Explanation: With the integrated sec function we can easily get the Airmass
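As a sanity check, the same number can be reproduced by hand from the plane-parallel approximation airmass = sec(z) = 1 / cos(90 deg - altitude); a hedged sketch:
# Hedged sketch: plane-parallel airmass from the altitude just computed.
import numpy as np
zenith_angle_deg = 90.0 - altaz_eagle.alt.deg
print(1.0 / np.cos(np.radians(zenith_angle_deg)))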
End of explanation
from astropy.time import TimeDelta
time_list = []
airmass_list = []
current_time = observer.sun_set_time(time, which='nearest')
while current_time < observer.sun_rise_time(time, which='nearest'):
current_altaz = observer.altaz(current_time, eagle_nebula)
if current_altaz.alt > 0:
airmass_list.append(current_altaz.alt.value)
else:
airmass_list.append(0)
time_list.append(current_time.datetime)
current_time += TimeDelta(3600, format='sec')
plt.plot(time_list, airmass_list)
Explanation: We can now aim to make an altitude plot scanning the altitude of our target every hour
End of explanation
from astroplan.plots import plot_airmass
middle_of_the_night = Time('2017-09-16 01:00:00')
plot_airmass(targets=eagle_nebula,
observer=observer,
time=middle_of_the_night,
#brightness_shading=True,
#altitude_yaxis=True
)
plt.legend()
Explanation: Fortunately, there is a function that does it (much faster) within the day around the date we provide:
End of explanation
from astroplan.plots import dark_style_sheet
start_time = observer.sun_set_time(time, which='nearest')
end_time = observer.sun_rise_time(time, which='nearest')
delta_t = end_time - start_time
observe_time = start_time + delta_t*np.linspace(0, 1, 75)
andromeda = FixedTarget.from_name('M31')
pleiades = FixedTarget.from_name('M45')
some_nice_stuff_to_look_tonight = [eagle_nebula, andromeda, pleiades]
plot_airmass(targets=some_nice_stuff_to_look_tonight,
observer=observer,
time=observe_time,
brightness_shading=True,
altitude_yaxis=True,
#style_sheet=dark_style_sheet
)
plt.legend()
Explanation: We can also give a range of dates to focus on a specific region of time (dark time)
End of explanation
from astroplan.plots import plot_sky
plot_sky(eagle_nebula, observer, middle_of_the_night)
plot_sky(pleiades, observer, middle_of_the_night)
plot_sky(andromeda, observer, middle_of_the_night)
plt.legend()
observe_time = Time('2000-03-15 17:00:00')
observe_time = observe_time + np.linspace(-4, 5, 10)*u.hour
plot_sky(pleiades, observer, observe_time)
plt.legend(loc='center left', bbox_to_anchor=(1.25, 0.5))
plt.show()
Explanation: Making sky charts
End of explanation
from astroplan.plots import plot_finder_image
plot_finder_image(eagle_nebula, survey='DSS', log=True)
Explanation: Finder Chart Image
Astroplan also provides the option to display finder charts from a list of sky surveys (but it is really slow...)
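If you want to keep the chart, the figure can be written to disk right after it is drawn; a hedged sketch (the filename is illustrative):
# Hedged sketch: redraw the finder chart and save it to a PNG file.
plot_finder_image(eagle_nebula, survey='DSS', log=True)
plt.savefig('m16_finder_chart.png', dpi=150, bbox_inches='tight')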
End of explanation |
15,691 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'ukesm1-0-ll', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: MOHC
Source ID: UKESM1-0-LL
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:15
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
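For illustration only, a free-text STRING property is filled in by passing the text to DOC.set_value; the value below is a hypothetical placeholder, not MOHC's actual entry:
# Hypothetical placeholder value, for illustration only:
# DOC.set_value("Coupled AOGCM/ESM overview text goes here")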
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
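For illustration, a BOOLEAN property takes a bare True or False; the value below is hypothetical, not the model's documented answer:
# Hypothetical value, for illustration only:
# DOC.set_value(True)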
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
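For illustration, an ENUM with cardinality 1.N takes one (or more) of the listed choices; the selection below is hypothetical, not the model's documented answer:
# Hypothetical selection, for illustration only ("Y" is one of the valid choices):
# DOC.set_value("Y")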
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
15,692 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Find MEG reference channel artifacts
Use ICA decompositions of MEG reference channels to remove intermittent noise.
Many MEG systems have an array of reference channels which are used to detect
external magnetic noise. However, standard techniques that use reference
channels to remove noise from standard channels often fail when noise is
intermittent. The technique described here (using ICA on the reference
channels) often succeeds where the standard techniques do not.
There are two algorithms to choose from
Step1: Read raw data, cropping to 5 minutes to save memory
Step2: Note that even though standard noise removal has already
been applied to these data, much of the noise in the reference channels
(bottom of the plot) can still be seen in the standard channels.
Step3: The PSD of these data show the noise as clear peaks.
Step4: Run the "together" algorithm.
Step5: Cleaned data
Step6: Now try the "separate" algorithm.
Step7: Cleaned raw data traces
Step8: Cleaned raw data PSD | Python Code:
# Authors: Jeff Hanna <jeff.hanna@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne import io
from mne.datasets import refmeg_noise
from mne.preprocessing import ICA
import numpy as np
print(__doc__)
data_path = refmeg_noise.data_path()
Explanation: Find MEG reference channel artifacts
Use ICA decompositions of MEG reference channels to remove intermittent noise.
Many MEG systems have an array of reference channels which are used to detect
external magnetic noise. However, standard techniques that use reference
channels to remove noise from standard channels often fail when noise is
intermittent. The technique described here (using ICA on the reference
channels) often succeeds where the standard techniques do not.
There are two algorithms to choose from: separate and together (default). In
the "separate" algorithm, two ICA decompositions are made: one on the reference
channels, and one on reference + standard channels. The reference + standard
channel components which correlate with the reference channel components are
removed.
In the "together" algorithm, a single ICA decomposition is made on reference +
standard channels, and those components whose weights are particularly heavy
on the reference channels are removed.
This technique is fully described and validated in :footcite:HannaEtAl2020
End of explanation
raw_fname = data_path + '/sample_reference_MEG_noise-raw.fif'
raw = io.read_raw_fif(raw_fname).crop(300, 600).load_data()
Explanation: Read raw data, cropping to 5 minutes to save memory
End of explanation
select_picks = np.concatenate(
(mne.pick_types(raw.info, meg=True)[-32:],
mne.pick_types(raw.info, meg=False, ref_meg=True)))
plot_kwargs = dict(
duration=100, order=select_picks, n_channels=len(select_picks),
scalings={"mag": 8e-13, "ref_meg": 2e-11})
raw.plot(**plot_kwargs)
Explanation: Note that even though standard noise removal has already
been applied to these data, much of the noise in the reference channels
(bottom of the plot) can still be seen in the standard channels.
End of explanation
raw.plot_psd(fmax=30)
Explanation: The PSD of these data show the noise as clear peaks.
End of explanation
raw_tog = raw.copy()
ica_kwargs = dict(
method='picard',
fit_params=dict(tol=1e-4), # use a high tol here for speed
)
all_picks = mne.pick_types(raw_tog.info, meg=True, ref_meg=True)
ica_tog = ICA(n_components=60, allow_ref_meg=True, **ica_kwargs)
ica_tog.fit(raw_tog, picks=all_picks)
# low threshold (2.0) here because of cropped data, entire recording can use
# a higher threshold (2.5)
bad_comps, scores = ica_tog.find_bads_ref(raw_tog, threshold=2.0)
# Plot scores with bad components marked.
ica_tog.plot_scores(scores, bad_comps)
# Examine the properties of removed components. It's clear from the time
# courses and topographies that these components represent external,
# intermittent noise.
ica_tog.plot_properties(raw_tog, picks=bad_comps)
# Remove the components.
raw_tog = ica_tog.apply(raw_tog, exclude=bad_comps)
Explanation: Run the "together" algorithm.
End of explanation
raw_tog.plot_psd(fmax=30)
Explanation: Cleaned data:
End of explanation
raw_sep = raw.copy()
# Do ICA only on the reference channels.
ref_picks = mne.pick_types(raw_sep.info, meg=False, ref_meg=True)
ica_ref = ICA(n_components=2, allow_ref_meg=True, **ica_kwargs)
ica_ref.fit(raw_sep, picks=ref_picks)
# Do ICA on both reference and standard channels. Here, we can just reuse
# ica_tog from the section above.
ica_sep = ica_tog.copy()
# Extract the time courses of these components and add them as channels
# to the raw data. Think of them the same way as EOG/EKG channels, but instead
# of giving info about eye movements/cardiac activity, they give info about
# external magnetic noise.
ref_comps = ica_ref.get_sources(raw_sep)
for c in ref_comps.ch_names: # they need to have REF_ prefix to be recognised
ref_comps.rename_channels({c: "REF_" + c})
raw_sep.add_channels([ref_comps])
# Now that we have our noise channels, we run the separate algorithm.
bad_comps, scores = ica_sep.find_bads_ref(raw_sep, method="separate")
# Plot scores with bad components marked.
ica_sep.plot_scores(scores, bad_comps)
# Examine the properties of removed components.
ica_sep.plot_properties(raw_sep, picks=bad_comps)
# Remove the components.
raw_sep = ica_sep.apply(raw_sep, exclude=bad_comps)
Explanation: Now try the "separate" algorithm.
End of explanation
raw_sep.plot(**plot_kwargs)
Explanation: Cleaned raw data traces:
End of explanation
raw_sep.plot_psd(fmax=30)
Explanation: Cleaned raw data PSD:
End of explanation |
15,693 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
verify pyEMU results with the henry problem
Step1: instaniate pyemu object and drop prior info. Then reorder the jacobian and save as binary. This is needed because the pest utilities require strict order between the control file and jacobian
Step2: extract and save the forecast sensitivity vectors
Step3: save the prior parameter covariance matrix as an uncertainty file
Step4: PRECUNC7
write a response file to feed stdin to predunc7
Step5: load the posterior matrix written by predunc7
Step6: The cumulative difference between the two posterior matrices
Step7: A few more metrics ...
Step8: PREDUNC1
write a response file to feed stdin. Then run predunc1 for each forecast
Step9: organize the pyemu results into a structure for comparison
Step10: compare the results
Step11: PREDVAR1b
write the nessecary files to run predvar1b
Step12: now for pyemu
Step13: generate some plots to verify
Step14: Identifiability
Step15: cheap plot to verify | Python Code:
%matplotlib inline
import os
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import pyemu
Explanation: Verify pyEMU results with the Henry problem
End of explanation
la = pyemu.Schur("freyberg.jcb",verbose=False,forecasts=[])
la.drop_prior_information()
jco_ord = la.jco.get(la.pst.obs_names,la.pst.par_names)
ord_base = "freyberg_ord"
jco_ord.to_binary(ord_base + ".jco")
la.pst.write(ord_base+".pst")
Explanation: Instantiate the pyEMU object and drop the prior information. Then reorder the Jacobian and save it as binary. This is needed because the PEST utilities require strict ordering between the control file and the Jacobian.
End of explanation
pv_names = []
predictions = ["sw_gw_0","sw_gw_1","or28c05_0","or28c05_1"]
for pred in predictions:
pv = jco_ord.extract(pred).T
pv_name = pred + ".vec"
pv.to_ascii(pv_name)
pv_names.append(pv_name)
Explanation: extract and save the forecast sensitivity vectors
End of explanation
prior_uncfile = "pest.unc"
la.parcov.to_uncfile(prior_uncfile,covmat_file=None)
Explanation: save the prior parameter covariance matrix as an uncertainty file
End of explanation
post_mat = "post.cov"
post_unc = "post.unc"
args = [ord_base + ".pst","1.0",prior_uncfile,
post_mat,post_unc,"1"]
pd7_in = "predunc7.in"
f = open(pd7_in,'w')
f.write('\n'.join(args)+'\n')
f.close()
out = "pd7.out"
pd7 = os.path.join("i64predunc7.exe")
os.system(pd7 + " <" + pd7_in + " >"+out)
for line in open(out).readlines():
print(line)
Explanation: PREDUNC7
Write a response file to feed stdin to PREDUNC7
End of explanation
post_pd7 = pyemu.Cov.from_ascii(post_mat)
la_ord = pyemu.Schur(jco=ord_base+".jco",predictions=predictions)
post_pyemu = la_ord.posterior_parameter
#post_pyemu = post_pyemu.get(post_pd7.row_names)
Explanation: load the posterior matrix written by predunc7
End of explanation
post_pd7.x
post_pyemu.x
delta = (post_pd7 - post_pyemu).x
(post_pd7 - post_pyemu).to_ascii("delta.cov")
print(delta.sum())
print(delta.max(),delta.min())
delta = np.ma.masked_where(np.abs(delta) < 0.0001,delta)
plt.imshow(delta)
df = (post_pd7 - post_pyemu).to_dataframe().apply(np.abs)
df /= la_ord.pst.parameter_data.parval1
df *= 100.0
print(df.max())
delta
Explanation: The cumulative difference between the two posterior matrices:
End of explanation
print((delta.sum()/post_pyemu.x.sum()) * 100.0)
print(np.abs(delta).sum())
Explanation: A few more metrics ...
End of explanation
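# A scale-independent summary of the mismatch (a sketch, reusing the matrices loaded above):
# relative Frobenius-norm difference between the two posterior covariance matrices.
print(np.linalg.norm((post_pd7 - post_pyemu).x) / np.linalg.norm(post_pyemu.x))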
args = [ord_base + ".pst", "1.0", prior_uncfile, None, "1"]
pd1_in = "predunc1.in"
pd1 = os.path.join("i64predunc1.exe")
pd1_results = {}
for pv_name in pv_names:
args[3] = pv_name
f = open(pd1_in, 'w')
f.write('\n'.join(args) + '\n')
f.close()
out = "predunc1" + pv_name + ".out"
os.system(pd1 + " <" + pd1_in + ">" + out)
f = open(out,'r')
for line in f:
if "pre-cal " in line.lower():
pre_cal = float(line.strip().split()[-2])
elif "post-cal " in line.lower():
post_cal = float(line.strip().split()[-2])
f.close()
pd1_results[pv_name.split('.')[0].lower()] = [pre_cal, post_cal]
Explanation: PREDUNC1
write a response file to feed stdin. Then run predunc1 for each forecast
End of explanation
# save the results for verification testing
pd.DataFrame(pd1_results).to_csv("predunc1_results.dat")
pyemu_results = {}
for pname in la_ord.prior_prediction.keys():
pyemu_results[pname] = [np.sqrt(la_ord.prior_prediction[pname]),
np.sqrt(la_ord.posterior_prediction[pname])]
Explanation: organize the pyemu results into a structure for comparison
End of explanation
f = open("predunc1_textable.dat",'w')
for pname in pd1_results.keys():
print(pname)
f.write(pname+"&{0:6.5f}&{1:6.5}&{2:6.5f}&{3:6.5f}\\\n"\
.format(pd1_results[pname][0],pyemu_results[pname][0],
pd1_results[pname][1],pyemu_results[pname][1]))
print("prior",pname,pd1_results[pname][0],pyemu_results[pname][0])
print("post",pname,pd1_results[pname][1],pyemu_results[pname][1])
f.close()
Explanation: compare the results:
End of explanation
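# Optional sketch: per-forecast percent differences between the PREDUNC1 and pyEMU
# standard deviations, reusing the pd1_results and pyemu_results dicts built above.
for pname in pd1_results.keys():
    prior_pct = 100.0 * abs(pd1_results[pname][0] - pyemu_results[pname][0]) / pd1_results[pname][0]
    post_pct = 100.0 * abs(pd1_results[pname][1] - pyemu_results[pname][1]) / pd1_results[pname][1]
    print(pname, prior_pct, post_pct)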
f = open("pred_list.dat",'w')
out_files = []
for pv in pv_names:
out_name = pv+".predvar1b.out"
out_files.append(out_name)
f.write(pv+" "+out_name+"\n")
f.close()
args = [ord_base+".pst","1.0","pest.unc","pred_list.dat"]
for i in range(36):
args.append(str(i))
args.append('')
args.append("n")
args.append("n")
args.append("y")
args.append("n")
args.append("n")
f = open("predvar1b.in", 'w')
f.write('\n'.join(args) + '\n')
f.close()
os.system("predvar1b.exe <predvar1b.in")
pv1b_results = {}
for out_file in out_files:
pred_name = out_file.split('.')[0]
f = open(out_file,'r')
for _ in range(3):
f.readline()
arr = np.loadtxt(f)
pv1b_results[pred_name] = arr
Explanation: PREDVAR1B
Write the necessary files to run PREDVAR1B
End of explanation
omitted_parameters = [pname for pname in la.pst.parameter_data.parnme if pname.startswith("wf")]
la_ord_errvar = pyemu.ErrVar(jco=ord_base+".jco",
predictions=predictions,
omitted_parameters=omitted_parameters,
verbose=False)
df = la_ord_errvar.get_errvar_dataframe(np.arange(36))
df
Explanation: now for pyemu
End of explanation
fig = plt.figure(figsize=(6,8))
max_idx = 13
idx = np.arange(max_idx)
for ipred,pred in enumerate(predictions):
arr = pv1b_results[pred][:max_idx,:]
first = df[("first", pred)][:max_idx]
second = df[("second", pred)][:max_idx]
third = df[("third", pred)][:max_idx]
ax = plt.subplot(len(predictions),1,ipred+1)
#ax.plot(arr[:,1],color='b',dashes=(6,6),lw=4,alpha=0.5)
#ax.plot(first,color='b')
#ax.plot(arr[:,2],color='g',dashes=(6,4),lw=4,alpha=0.5)
#ax.plot(second,color='g')
#ax.plot(arr[:,3],color='r',dashes=(6,4),lw=4,alpha=0.5)
#ax.plot(third,color='r')
ax.scatter(idx,arr[:,1],marker='x',s=40,color='g',
label="PREDVAR1B - first term")
ax.scatter(idx,arr[:,2],marker='x',s=40,color='b',
label="PREDVAR1B - second term")
ax.scatter(idx,arr[:,3],marker='x',s=40,color='r',
label="PREDVAR1B - third term")
ax.scatter(idx,first,marker='o',facecolor='none',
s=50,color='g',label='pyEMU - first term')
ax.scatter(idx,second,marker='o',facecolor='none',
s=50,color='b',label="pyEMU - second term")
ax.scatter(idx,third,marker='o',facecolor='none',
s=50,color='r',label="pyEMU - third term")
ax.set_ylabel("forecast variance")
ax.set_title("forecast: " + pred)
if ipred == len(predictions) -1:
ax.legend(loc="lower center",bbox_to_anchor=(0.5,-0.75),
scatterpoints=1,ncol=2)
ax.set_xlabel("singular values")
else:
ax.set_xticklabels([])
#break
plt.savefig("predvar1b_ver.eps")
Explanation: generate some plots to verify
End of explanation
cmd_args = [os.path.join("i64identpar.exe"),ord_base,"5",
"null","null","ident.out","/s"]
cmd_line = ' '.join(cmd_args)+'\n'
print(cmd_line)
print(os.getcwd())
os.system(cmd_line)
identpar_df = pd.read_csv("ident.out",delim_whitespace=True)
la_ord_errvar = pyemu.ErrVar(jco=ord_base+".jco",
predictions=predictions,
verbose=False)
df = la_ord_errvar.get_identifiability_dataframe(5)
df
Explanation: Identifiability
End of explanation
diff = identpar_df["identifiability"].values - df["ident"].values
diff.max()
fig = plt.figure()
ax = plt.subplot(111)
axt = plt.twinx()
ax.plot(identpar_df["identifiability"])
ax.plot(df.ident.values)
ax.set_xlim(-10,600)
diff = identpar_df["identifiability"].values - df["ident"].values
#print(diff)
axt.plot(diff)
axt.set_ylim(-1,1)
ax.set_xlabel("parameter")
ax.set_ylabel("identifiability")
axt.set_ylabel("difference")
Explanation: cheap plot to verify
End of explanation |
15,694 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Recreate multilabel example from scikit-learn.org
This example simulates a multi-label document classification problem. The dataset is generated randomly based on the following process
Step1: Play with Multilabel classification format and f1-score | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_multilabel_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
from sklearn.preprocessing import LabelBinarizer
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA
def plot_hyperplane(clf, min_x, max_x, linestyle, label):
# get the separating hyperplane
w = clf.coef_[0]
a = -w[0] / w[1]
xx = np.linspace(min_x - 5, max_x + 5)
yy = a * xx - (clf.intercept_[0]) / w[1]
plt.plot(xx, yy, linestyle, label=label)
def plot_subfigure(X, Y, subplot, title, transform):
if transform == "pca":
X = PCA(n_components=2).fit_transform(X)
elif transform == "cca":
X = CCA(n_components=2).fit(X, Y).transform(X)
else:
raise ValueError
min_x = np.min(X[:, 0])
max_x = np.max(X[:, 0])
min_y = np.min(X[:, 1])
max_y = np.max(X[:, 1])
classif = OneVsRestClassifier(SVC(kernel='linear'))
classif.fit(X, Y)
plt.subplot(2, 2, subplot)
plt.title(title)
zero_class = np.where(Y[:, 0])
one_class = np.where(Y[:, 1])
plt.scatter(X[:, 0], X[:, 1], s=40, c='gray')
plt.scatter(X[zero_class, 0], X[zero_class, 1], s=160, edgecolors='b',
facecolors='none', linewidths=2, label='Class 1')
plt.scatter(X[one_class, 0], X[one_class, 1], s=80, edgecolors='orange',
facecolors='none', linewidths=2, label='Class 2')
plot_hyperplane(classif.estimators_[0], min_x, max_x, 'k--',
'Boundary\nfor class 1')
plot_hyperplane(classif.estimators_[1], min_x, max_x, 'k-.',
'Boundary\nfor class 2')
plt.xticks(())
plt.yticks(())
plt.xlim(min_x - .5 * max_x, max_x + .5 * max_x)
plt.ylim(min_y - .5 * max_y, max_y + .5 * max_y)
if subplot == 2:
plt.xlabel('First principal component')
plt.ylabel('Second principal component')
plt.legend(loc='upper left')
plt.figure(figsize=(8,6))
X, Y = make_multilabel_classification(n_classes=2, n_labels=1,
allow_unlabeled=True, random_state=1)
plot_subfigure(X, Y, 1, "With unlabeled samples + CCA", "cca")
plot_subfigure(X, Y, 2, "With unlabeled samples + PCA", "pca")
X, Y = make_multilabel_classification(n_classes=2, n_labels=1,
allow_unlabeled=False, random_state=1)
plot_subfigure(X, Y, 3, "Without unlabeled samples + CCA", 'cca')
plot_subfigure(X, Y, 4, "Without unlabeled samples + PCA", 'pca')
plt.subplots_adjust(.04, .02, .97, .94, .09, .2)
plt.show()
Explanation: Recreate multilabel example from scikit-learn.org
This example simulates a multi-label document classification problem. The dataset is generated randomly based on the following process:
pick the number of labels: n ~ Poisson(n_labels)
n times, choose a class c: c ~ Multinomial (theta)
pick the document length: k ~ Poisson(length)
k times, choose a word: w ~ Multinomial(theta_c)
In the above process, rejection sampling is used to make sure that n is more than 2, and that the document length is never zero. Likewise, we reject classes which have already been chosen. The documents that are assigned to both classes are plotted surrounded by two colored circles.
The classification is performed by projecting to the first two principal components found by PCA and CCA for visualisation purposes, followed by using the sklearn.multiclass.OneVsRestClassifier metaclassifier using two SVCs with linear kernels to learn a discriminative model for each class. Note that PCA is used to perform an unsupervised dimensionality reduction, while CCA is used to perform a supervised one.
Note: in the plot, "unlabeled samples" does not mean that we don't know the labels (as in semi-supervised learning) but that the samples simply do not have a label.
End of explanation
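# A quick numeric check of the generative process described above (illustrative;
# X_chk and Y_chk are throwaway names not used elsewhere in this notebook):
X_chk, Y_chk = make_multilabel_classification(n_classes=2, n_labels=1,
                                              allow_unlabeled=True, random_state=1)
print(Y_chk.sum(axis=1).mean())        # average number of labels per sample
print((Y_chk.sum(axis=1) == 0).sum())  # number of unlabeled samples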
from sklearn.preprocessing import MultiLabelBinarizer
y_true = [[2,3,4], [2], [0,1,3], [0,1,2,3,4], [0,1,2]]
Y_true = MultiLabelBinarizer().fit_transform(y_true)
Y_true
y_pred = [[2,3], [2], [0,1,3], [0,1,3], [0,1,2]]
Y_pred = MultiLabelBinarizer(classes=[0,1,2,3,4]).fit_transform(y_pred)
from sklearn.metrics import f1_score
f1_score(y_pred=Y_pred, y_true=Y_true, average='macro')
Y_pred
Explanation: Play with Multilabel classification format and f1-score
End of explanation |
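# For comparison, the same call with other averaging schemes (illustrative):
print(f1_score(y_pred=Y_pred, y_true=Y_true, average='micro'))
print(f1_score(y_pred=Y_pred, y_true=Y_true, average='samples'))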
15,695 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
In this notebook, we will walk through a hacker's approach to statistical thinking, as applied to network analysis.
Statistics in a Nutshell
All of statistics can be broken down into two activities
Step1: Exercise
Compute some basic descriptive statistics about the graph, namely
Step2: How are protein-protein networks formed? Are they formed by an Erdos-Renyi process, or something else?
In the G(n, p) model, a graph is constructed by connecting nodes randomly. Each edge is included in the graph with probability p independent from every other edge.
If protein-protein networks are formed by an E-R process, then we would expect that properties of the protein-protein graph would look statistically similar to those of an actual E-R graph.
Exercise
Make an ECDF of the degree centralities for the protein-protein interaction graph, and the E-R graph.
- The construction of an E-R graph requires a value for n and p.
- A reasonable number for n is the number of nodes in our protein-protein graph.
- A reasonable value for p might be the density of the protein-protein graph.
Step3: From visualizing these two distributions, it is clear that they look very different. How do we quantify this difference, and statistically test whether the protein-protein graph could have arisen under an Erdos-Renyi model?
One thing we might observe is that the variance, that is the "spread" around the mean, differs between the E-R model compared to our data. Therefore, we can compare variance of the data to the distribtion of variances under an E-R model.
This is essentially following the logic of statistical inference by 'hacking' (not to be confused with the statistical bad practice of p-hacking).
Exercise
Fill in the skeleton code below to simulate 100 E-R graphs.
Step4: Visually, it should be quite evident that the protein-protein graph did not come from an E-R distribution. Statistically, we can also use the hypothesis test procedure to quantitatively test this, using our simulated E-R data.
Step5: Another way to do this is to use the 2-sample Kolmogorov-Smirnov test implemented in the scipy.stats module. From the docs
Step6: Exercise
Now, conduct the K-S test for one synthetic graph and the data. | Python Code:
# Read in the data.
# Note from above that we have to skip the first two rows, and that there's no header column,and that the edges are
# delimited by spaces in between the nodes. Hence the syntax below:
G = cf.load_propro_network()
Explanation: Introduction
In this notebook, we will walk through a hacker's approach to statistical thinking, as applied to network analysis.
Statistics in a Nutshell
All of statistics can be broken down into two activities:
Descriptively summarizing data. (a.k.a. "descriptive statistics")
Figuring out whether something happened by random chance. (a.k.a. "inferential statistics")
Descriptive Statistics
Centrality measures: mean, median, mode
Variance measures: inter-quartile range (IQR), variance and standard deviation
Inferential Statistics
Models of Randomness (see below)
Hypothesis Testing
Fitting Statistical Models
Load Data
Let's load a protein-protein interaction network dataset.
This undirected network contains protein interactions in yeast. Research showed that proteins with a high degree were more important for the survival of the yeast than others. A node represents a protein and an edge represents a metabolic interaction between two proteins. The network contains loops.
End of explanation
# Number of nodes:
# Number of edges:
# Graph density:
# Degree centrality distribution:
Explanation: Exercise
Compute some basic descriptive statistics about the graph, namely:
the number of nodes,
the number of edges,
the graph density,
the distribution of degree centralities in the graph,
End of explanation
ppG_deg_centralities = _______
plt.plot(*ecdf(__________))
erG = nx.erdos_renyi_graph(n=_______, p=________)
erG_deg_centralities = __________
plt.plot(*ecdf(erG_deg_centralities))
plt.show()
Explanation: How are protein-protein networks formed? Are they formed by an Erdos-Renyi process, or something else?
In the G(n, p) model, a graph is constructed by connecting nodes randomly. Each edge is included in the graph with probability p independent from every other edge.
If protein-protein networks are formed by an E-R process, then we would expect that properties of the protein-protein graph would look statistically similar to those of an actual E-R graph.
Exercise
Make an ECDF of the degree centralities for the protein-protein interaction graph, and the E-R graph.
- The construction of an E-R graph requires a value for n and p.
- A reasonable number for n is the number of nodes in our protein-protein graph.
- A reasonable value for p might be the density of the protein-protein graph.
End of explanation
# 1. Generate 100 E-R graph degree centrality variance measurements and store them.
# Takes ~50 seconds or so.
n_sims = ______
er_vars = np.zeros(________) # variances for n simulated E-R graphs.
for i in range(n_sims):
erG = nx.erdos_renyi_graph(n=____________, p=____________)
erG_deg_centralities = __________
er_vars[i] = np.var(__________)
# 2. Compute the test statistic that is going to be used for the hypothesis test.
ppG_var = np.var(______________)
# Do a quick visual check
n, bins, patches = plt.hist(er_vars)
plt.vlines(ppG_var, ymin=0, ymax=max(n), color='red', lw=2)
Explanation: From visualizing these two distributions, it is clear that they look very different. How do we quantify this difference, and statistically test whether the protein-protein graph could have arisen under an Erdos-Renyi model?
One thing we might observe is that the variance, that is the "spread" around the mean, differs between the E-R model and our data. Therefore, we can compare the variance of the data to the distribution of variances under an E-R model.
This is essentially following the logic of statistical inference by 'hacking' (not to be confused with the statistical bad practice of p-hacking).
Exercise
Fill in the skeleton code below to simulate 100 E-R graphs.
End of explanation
# Conduct the hypothesis test.
ppG_var > np.percentile(er_vars, 99) # we can only use the 99th percentile, because there are only 100 data points.
Explanation: Visually, it should be quite evident that the protein-protein graph did not come from an E-R distribution. Statistically, we can also use the hypothesis test procedure to quantitatively test this, using our simulated E-R data.
End of explanation
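# An equivalent way to report the test is an empirical p-value: the fraction of
# simulated E-R variances at least as extreme as the observed one (a sketch).
p_emp = (er_vars >= ppG_var).mean()
print(p_emp)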
# Scenario 1: Data come from the same distributions.
# Notice the size of the p-value.
dist1 = npr.random(size=(100))
dist2 = npr.random(size=(100))
ks_2samp(dist1, dist2)
# Note how the p-value, which ranges between 0 and 1, is likely to be greater than a commonly-accepted
# threshold of 0.05
# Scenario 2: Data come from different distributions.
# Note the size of the KS statistic, and the p-value.
dist1 = norm(3, 1).rvs(100)
dist2 = norm(5, 1).rvs(100)
ks_2samp(dist1, dist2)
# Note how the p-value is likely to be less than 0.05, and even more stringent cut-offs of 0.01 or 0.001.
Explanation: Another way to do this is to use the 2-sample Kolmogorov-Smirnov test implemented in the scipy.stats module. From the docs:
This tests whether 2 samples are drawn from the same distribution. Note
that, like in the case of the one-sample K-S test, the distribution is
assumed to be continuous.
This is the two-sided test, one-sided tests are not implemented.
The test uses the two-sided asymptotic Kolmogorov-Smirnov distribution.
If the K-S statistic is small or the p-value is high, then we cannot
reject the hypothesis that the distributions of the two samples
are the same.
As an example to convince yourself that this test works, run the synthetic examples below.
End of explanation
# Now try it on the data distribution
ks_2samp(___________________, ___________________)
Explanation: Exercise
Now, conduct the K-S test for one synthetic graph and the data.
End of explanation |
15,696 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CDR EDA
First, import relevant libraries
Step1: Then, load the data (takes a few moments)
Step2: The code below creates a calls-per-person frequency distribution, which is the first thing we want to see.
Step3: Plot this distribution. This shows that 19344 people made 1 call over the 4 months, 36466 people made 2 calls over the 4 months, 41900 people made 3 calls over the 4 months, etc.
Step4: It might be more helpful to look at a cumulative distribution curve, from which we can read off quantiles (e.g., this percentage of the people in the data set had x or more calls, x or fewer calls). Specifically, 10% of people have 3 or fewer calls over the entire period, 25% have 7 of fewer, 33% have 10 or fewer, 50% have 17 of fewer calls, etc., all the way up to 90% of people having 76 or fewer calls.
Step5: We also want to look at the number of unique lat-long addresses, which will (roughly) correspond to either where cell phone towers are, and/or the level of truncation. This takes too long in pandas, so we use postgres, piping the results of the query,
\o towers_with_counts.txt
select lat, lon, count(*) as calls, count(distinct cust_id) as users, count(distinct date_trunc('day', date_time_m) ) as days from optourism.cdr_foreigners group by lat, lon order by calls desc;
\q
into the file towers_with_counts.txt. This is followed by the bash command
cat towers_with_counts.txt | sed s/\ \|\ /'\t'/g | sed s/\ //g | sed 2d > towers_with_counts2.txt
to clean up the postgres output format.
Step6: Do the same thing as above.
Step7: Unlike the previous plot, this is not very clean at all, making the cumulative distribution plot critical.
Step8: Now, we want to look at temporal data. First, convert the categorical date_time_m to a datetime object; then, extract the date component. | Python Code:
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
Explanation: CDR EDA
First, import relevant libraries:
End of explanation
# Load data
df = pd.read_csv("./aws-data/firence_foreigners_3days_past_future.csv", header=None)
df.columns = ['lat', 'lon', 'date_time_m', 'home_region', 'cust_id', 'in_florence']
df.head()
# np.max(data.date_time_m) # max date : '2016-09-30
# np.min(data.date_time_m) # min date: 2016-06-07
# Convert the categorical `date_time_m` to a datetime object; then, extract the date component.
df['datetime'] = pd.to_datetime(df['date_time_m'], format='%Y-%m-%d %H:%M:%S')
df['date'] = df['datetime'].dt.floor('d') # Faster than df['datetime'].dt.date
Explanation: Then, load the data (takes a few moments):
End of explanation
fr = df['cust_id'].value_counts().to_frame()['cust_id'].value_counts().to_frame()
fr.columns = ['frequency']
fr.index.name = 'calls'
fr.reset_index(inplace=True)
fr = fr.sort_values('calls')
fr['cumulative'] = fr['frequency'].cumsum()/fr['frequency'].sum()
fr.head()
Explanation: The code below creates a calls-per-person frequency distribution, which is the first thing we want to see.
End of explanation
fr.plot(x='calls', y='frequency', style='o-', logx=True, figsize = (10, 10))
plt.axvline(5,ls='dotted')
plt.ylabel('Number of people')
plt.title('Number of people placing or receiving x number of calls over 4 months')
Explanation: Plot this distribution. This shows that 19344 people made 1 call over the 4 months, 36466 people made 2 calls over the 4 months, 41900 people made 3 calls over the 4 months, etc.
End of explanation
fr.plot(x='calls', y='cumulative', style='o-', logx=True, figsize = (10, 10))
plt.axhline(1.0,ls='dotted',lw=.5)
plt.axhline(.90,ls='dotted',lw=.5)
plt.axhline(.75,ls='dotted',lw=.5)
plt.axhline(.67,ls='dotted',lw=.5)
plt.axhline(.50,ls='dotted',lw=.5)
plt.axhline(.33,ls='dotted',lw=.5)
plt.axhline(.25,ls='dotted',lw=.5)
plt.axhline(.10,ls='dotted',lw=.5)
plt.axhline(0.0,ls='dotted',lw=.5)
plt.axvline(max(fr['calls'][fr['cumulative']<.90]),ls='dotted',lw=.5)
plt.ylabel('Cumulative fraction of people')
plt.title('Cumulative fraction of people placing or receiving x number of calls over 4 months')
Explanation: It might be more helpful to look at a cumulative distribution curve, from which we can read off quantiles (e.g., this percentage of the people in the data set had x or more calls, x or fewer calls). Specifically, 10% of people have 3 or fewer calls over the entire period, 25% have 7 of fewer, 33% have 10 or fewer, 50% have 17 of fewer calls, etc., all the way up to 90% of people having 76 or fewer calls.
End of explanation
df2 = pd.read_table("./aws-data/towers_with_counts2.txt")
df2.head()
Explanation: We also want to look at the number of unique lat-long addresses, which will (roughly) correspond to either where cell phone towers are, and/or the level of truncation. This takes too long in pandas, so we use postgres, piping the results of the query,
\o towers_with_counts.txt
select lat, lon, count(*) as calls, count(distinct cust_id) as users, count(distinct date_trunc('day', date_time_m) ) as days from optourism.cdr_foreigners group by lat, lon order by calls desc;
\q
into the file towers_with_counts.txt. This is followed by the bash command
cat towers_with_counts.txt | sed s/\ \|\ /'\t'/g | sed s/\ //g | sed 2d > towers_with_counts2.txt
to clean up the postgres output format.
End of explanation
fr2 = df2['count'].value_counts().to_frame()
fr2.columns = ['frequency']
fr2.index.name = 'count'
fr2.reset_index(inplace=True)
fr2 = fr2.sort_values('count')
fr2['cumulative'] = fr2['frequency'].cumsum()/fr2['frequency'].sum()
fr2.head()
fr2.plot(x='count', y='frequency', style='o-', logx=True, figsize = (10, 10))
# plt.axvline(5,ls='dotted')
plt.ylabel('Number of cell towers')
plt.title('Number of towers with x number of calls placed or received over 4 months')
Explanation: Do the same thing as above.
End of explanation
fr2.plot(x='count', y='cumulative', style='o-', logx=True, figsize = (10, 10))
plt.axhline(0.1,ls='dotted',lw=.5)
plt.axvline(max(fr2['count'][fr2['cumulative']<.10]),ls='dotted',lw=.5)
plt.axhline(0.5,ls='dotted',lw=.5)
plt.axvline(max(fr2['count'][fr2['cumulative']<.50]),ls='dotted',lw=.5)
plt.axhline(0.9,ls='dotted',lw=.5)
plt.axvline(max(fr2['count'][fr2['cumulative']<.90]),ls='dotted',lw=.5)
plt.ylabel('Cumulative fraction of cell towers')
plt.title('Cumulative fraction of towers with x number of calls placed or received over 4 months')
Explanation: Unlike the previous plot, this is not very clean at all, making the cumulative distribution plot critical.
End of explanation
df['datetime'] = pd.to_datetime(df['date_time_m'], format='%Y-%m-%d %H:%M:%S')
df['date'] = df['datetime'].dt.floor('d') # Faster than df['datetime'].dt.date
df2 = df.groupby(['cust_id','date']).size().to_frame()
df2.columns = ['count']
df2.index.name = 'date'
df2.reset_index(inplace=True)
df2.head(20)
df3 = (df2.groupby('cust_id')['date'].max() - df2.groupby('cust_id')['date'].min()).to_frame()
df3['calls'] = df2.groupby('cust_id')['count'].sum()
df3.columns = ['days','calls']
df3['days'] = df3['days'].dt.days
df3.head()
fr = df['cust_id'].value_counts().to_frame()['cust_id'].value_counts().to_frame()
# plt.scatter(np.log(df3['days']), np.log(df3['calls']))
# plt.show()
fr.plot(x='calls', y='freq', style='o', logx=True, logy=True)
x=np.log(fr['calls'])
y=np.log(1-fr['freq'].cumsum()/fr['freq'].sum())
plt.plot(x, y, 'r-')
# How many home_Regions
np.count_nonzero(data['home_region'].unique())
# How many customers
np.count_nonzero(data['cust_id'].unique())
# How many Nulls are there in the customer ID column?
df['cust_id'].isnull().sum()
# How many missing data are there in the customer ID?
len(df['cust_id']) - df['cust_id'].count()
df['cust_id'].unique()
data_italians = pd.read_csv("./aws-data/firence_italians_3days_past_future_sample_1K_custs.csv", header=None)
data_italians.columns = ['lat', 'lon', 'date_time_m', 'home_region', 'cust_id', 'in_florence']
regions = np.array(data_italians['home_region'].unique())
regions
'Sardegna' in data['home_region']
Explanation: Now, we want to look at temporal data. First, convert the categorical date_time_m to a datetime object; then, extract the date component.
End of explanation |
15,697 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simplifica tu vida con sistemas complejos y algoritmos genéticos
Parte 3 - El dilema Exploración - Explotación
Step1: Supongamos que esta curva representa a una función cuyo máximo buscamos, y supongamos que el eje x representa parámetros de los que la función depende.
Step2: Supongamos que con un algoritmo hemos encontrado un punto alto, pero que corresponde a un óptimo local, por ejemplo
Step3: El dilema Exploración-Explotación hace referencia a a dos fuerzas contrapuestas que necesitamos equilibrar cuidadosamente cuando usemos estos tipos de algoritmos.
La Exploración se refiere a buscar soluciones alejadas de lo que tenemos, abrir nuestro abanico de búsqueda.
Nos permite escapar de máximos locales y encontrar el global.
Nos permite encontrar soluciones atípicas y novedosas a problemas complicados.
Demasiada exploración nos impedirá guardar nuestras soluciones y refinarlas, y tendremos a nuestro algoritmo saltando de un lado a otro sin sacar nada en claro.
La Explotación se refiere a la capacidad de nuestro algoritmo de mantener las soluciones buenas que ha encontrado y refinarlas, buscando en entornos cercanos.
Nos permite encontrar máximos de la función y mantenerlos.
Demasiada Explotación nos bloqueará en máximos locales y nos impedirá encontrar el global.
Step4: Este tipo de estrategias se modulan mediante todos los parámetros de los algoritmos, pero quizás el parámetro que más claramente influye en este equilibrio es el de la mutación en los algoritmos genéticos
Step5: Supongamos que tenemos el siguiente laberinto, al que accedemos por la izquierda y que queremos resolver
Step6: En el ejercicio se detalla más el proceso, llamemos aquí simplemente al algoritmo genético que lo resuelve
Step7: Lo más probable es que hayas obtenido una solución o un camino cerrado en un bucle. Puedes ejecutar la celda superior varias veces para hecerte una idea aproximada de con qué frecuencia aparece cada situación. Pero, ¿por qué aparecen estos bucles?
Examinemos qué aspecto tiene una solución
Step8: La respuesta a por qué se forman bucles está en cómo se define la función de fitness o puntuación de cada camino
Step9: Prueba e ejecutarlo varias veces. ¿Notas si ha cambiado la cantidad de bucles?
Por último, veamos que ocurre si potenciamos la exploración demasiado | Python Code:
%matplotlib inline
import numpy as np # Usaremos arrays
import matplotlib.pyplot as plt # Para pintar resultados
Explanation: Simplifica tu vida con sistemas complejos y algoritmos genéticos
Parte 3 - El dilema Exploración - Explotación: feedback positivo y negativo
Cuando usamos algoritmos genéticos y sistemas complejos, en general, estaremos buscando optimizar funciones muy complicadas, de varios parámetros, a menudo incluso implícitas (como la optimización de un avión mediante CFD). Estas funciones normalmente tendrán óptimos locales, soluciones buenas, pero que no son el máximo global, la mejor solución, que es lo que buscamos.
Hagamos un pequeño esquema para verlo claramente!
End of explanation
x = np.linspace(0,50,500)
y = np.sin(x) * np.sin(x/17)
plt.figure(None, figsize=(10,5))
plt.ylim(-1.1, 1.1)
plt.plot(x,y)
Explanation: Supongamos que esta curva representa a una función cuyo máximo buscamos, y supongamos que el eje x representa parámetros de los que la función depende.
End of explanation
plt.figure(None, figsize=(10,5))
plt.ylim(-1.1, 1.1)
plt.plot(x,y)
plt.plot([21,21],[0,1],'r--')
plt.plot(21, 0.75, 'ko')
Explanation: Supongamos que con un algoritmo hemos encontrado un punto alto, pero que corresponde a un óptimo local, por ejemplo:
End of explanation
# EJEMPLO DE RESULTADO CON DEMASIADA EXPLORACIÓN: NO SE ENCUENTRA NADA
x2 = np.array([7,8,12,28,31,35,40,49])
y2 = np.sin(x2) * np.sin(x2/17)
plt.figure(None, figsize=(10,5))
plt.ylim(-1.1, 1.1)
plt.plot(x,y)
plt.plot([21,21],[0,1],'r--')
plt.plot(21, 0.75, 'ko')
plt.plot(x2, y2, 'go')
# EJEMPLO DE RESULTADO CON DEMASIADA EXPLOTACIÓN: SÓLO SE LLEGA AL LOCAL
x2 = np.linspace(20.2, 21, 10)
y2 = np.sin(x2) * np.sin(x2/17)
plt.figure(None, figsize=(10,5))
plt.ylim(-1.1, 1.1)
plt.plot(x,y)
plt.plot([21,21],[0,1],'r--')
plt.plot(21, 0.75, 'ko')
plt.plot(x2, y2, 'go')
Explanation: El dilema Exploración-Explotación hace referencia a a dos fuerzas contrapuestas que necesitamos equilibrar cuidadosamente cuando usemos estos tipos de algoritmos.
La Exploración se refiere a buscar soluciones alejadas de lo que tenemos, abrir nuestro abanico de búsqueda.
Nos permite escapar de máximos locales y encontrar el global.
Nos permite encontrar soluciones atípicas y novedosas a problemas complicados.
Demasiada exploración nos impedirá guardar nuestras soluciones y refinarlas, y tendremos a nuestro algoritmo saltando de un lado a otro sin sacar nada en claro.
La Explotación se refiere a la capacidad de nuestro algoritmo de mantener las soluciones buenas que ha encontrado y refinarlas, buscando en entornos cercanos.
Nos permite encontrar máximos de la función y mantenerlos.
Demasiada Explotación nos bloqueará en máximos locales y nos impedirá encontrar el global.
End of explanation
#Usaremos el paquete en el ejercicio del laberinto
import Ejercicios.Laberinto.laberinto.laberinto as lab
ag = lab.ag
Explanation: Este tipo de estrategias se modulan mediante todos los parámetros de los algoritmos, pero quizás el parámetro que más claramente influye en este equilibrio es el de la mutación en los algoritmos genéticos: Reduciendo el índice de mutación potenciaremos la explotación, mientras que si lo aumentamos, potenciamos la exploración.
Ejemplo: Laberinto
End of explanation
mapa1 = lab.Map()
mapa1.draw_tablero()
Explanation: Supongamos que tenemos el siguiente laberinto, al que accedemos por la izquierda y que queremos resolver:
End of explanation
mapa1 = lab.Map()
lab.avanzar(mapa1)
lab.draw_all(mapa1)
Explanation: En el ejercicio se detalla más el proceso, llamemos aquí simplemente al algoritmo genético que lo resuelve:
End of explanation
mapa1.list_caminos[0].draw_directions()
mapa1.list_caminos[0].draw_path(0.7)
Explanation: Lo más probable es que hayas obtenido una solución o un camino cerrado en un bucle. Puedes ejecutar la celda superior varias veces para hecerte una idea aproximada de con qué frecuencia aparece cada situación. Pero, ¿por qué aparecen estos bucles?
Examinemos qué aspecto tiene una solución:
Cada casilla contiene una flecha que indica cuál es la siguiente casilla a la que cruzar. Esto es lo que se describe en el genoma de cada camino.
Si la casilla apunta a una pared, el programa intentará cruzar de todos modos a una casilla aleatoria diferente.
End of explanation
mapa1 = lab.Map(veneno=1)
lab.avanzar(mapa1)
lab.draw_all(mapa1)
Explanation: La respuesta a por qué se forman bucles está en cómo se define la función de fitness o puntuación de cada camino:
Se recorren 50 casillas, intentando seguir el camino que determinan las flechas
Cada vez que se choca con una pared, o que se vuelve a la casilla anterior (por ejemplo, si dos flechas se apuntan mutuamente), se pierden puntos.
Se obtiene una puntuación mejor cuanto más a la derecha acabe el caminante.
Se obtiene una gran bonificación si se llega a la salida
En este ejercicio, un bucle es un optimo local: Al no chocarse con nada al recorrerlo, la puntuación es mejor que la de caminos ligeramente diferentes, que terminarían chocando con las paredes varias veces.
Sin embargo, no es la solución que buscamos. Tenemos que potenciar la exploración lejos de estos máximos locales.
Una manera de hacerlo es con feromonas, parecido a lo que hicimos con las hormigas.
Supongamos que cada persona que camina por el laberinto, deja por cada casilla por la que pasa un olor desagradable, que hace que los que vuelvan a pasar por allí intenten evitar ese camino. La manera de implementar esto en el algoritmo es añadir un rastro de feromonas, y luego tener en cuenta la cantidad de feromonas encontradas al calcular la puntuación. ¿Cómo crees que eso afectaría a los bucles?
Probémoslo!
End of explanation
mapa1 = lab.Map(veneno=100)
lab.avanzar(mapa1)
lab.draw_all(mapa1)
Explanation: Prueba e ejecutarlo varias veces. ¿Notas si ha cambiado la cantidad de bucles?
Por último, veamos que ocurre si potenciamos la exploración demasiado:
End of explanation |
15,698 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Adapters example
Demonstration using the networkx adapter. This example requires installing the networkx library before continuing
Step1: Construct data and a simple Mapper
Step2: Convert Mapper to a networkx graph
We can easily convert the graph to a networkx graph representation. This enables us to use many of the commonly provided algorithms and visualization methods. | Python Code:
import kmapper
from sklearn import datasets
import networkx as nx
Explanation: Adapters example
Demonstration using the networkx adapter. This example requires installing the networkx library before continuing:
pip install networkx
End of explanation
data = datasets.make_circles(n_samples=1000)[0]
km = kmapper.KeplerMapper()
lens = km.project(data)
graph = km.map(X=data, lens=lens)
Explanation: Construct data and a simple Mapper
End of explanation
nx_graph = kmapper.adapter.to_nx(graph)
nx.draw(nx_graph)
Explanation: Convert Mapper to a networkx graph
We can easily convert the graph to a networkx graph representation. This enables us to use many of the commonly provided algorithms and visualization methods.
End of explanation |
15,699 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Testing string match functions
Note
Step1: Cosine Similarity
Step2: String comparison using cosine similarity https
Step3: Cosine works fine with whole words and word transposition but will start to trip up on CO vs COMPANY and when too much extraneous text is introduced.
DIFFLIB (Python Module)
String comparison using difflib - https
Step4: May have some issues where it comes to partial string matches http
Step5: Also wants to use python-Levenshtein to improve speed, but install failed on gcc - will complain below
Step6: Fuzzywuzzy has an interesting "process" function
Step7: taken from https
Step8: Note that soundex and nysiis both appear to just take the first word | Python Code:
# Note - these lines added to make it work with shared Jupyter Hub instance,
# modifying the system path so that locally installed modules installed with the shell commands below will be found -
# they would need to be modified for your instance, or to install the modules normally remove the --user param
# import sys
# import os
# sys.path.append(os.path.abspath("/...path to your local module install dir..."))
# these are the values we want to test
text1 = 'General Electric Company'
text2 = 'General Electric Co Inc'
Explanation: Testing string match functions
Note: These examples were built with an Anaconda distro, on a Python 3.x kernel. External modules were installed locally using --user option (since these were being run on a shared Jupyter Hub instance)
End of explanation
import re, math
from collections import Counter
Explanation: Cosine Similarity
End of explanation
WORD = re.compile(r'\w+')
def get_cosine(vec1, vec2):
intersection = set(vec1.keys()) & set(vec2.keys())
numerator = sum([vec1[x] * vec2[x] for x in intersection])
sum1 = sum([vec1[x]**2 for x in vec1.keys()])
sum2 = sum([vec2[x]**2 for x in vec2.keys()])
denominator = math.sqrt(sum1) * math.sqrt(sum2)
if not denominator:
return 0.0
else:
return float(numerator) / denominator
def text_to_vector(text):
words = WORD.findall(text)
return Counter(words)
vector1 = text_to_vector(text1)
vector2 = text_to_vector(text2)
cosine = get_cosine(vector1, vector2)
print ('Cosine:', cosine)
Explanation: String comparison using cosine similarity https://en.wikipedia.org/wiki/Cosine_similarity
Code sample copypasta from Stack Overflow: http://stackoverflow.com/questions/15173225/how-to-calculate-cosine-similarity-given-2-sentence-strings-python
End of explanation
import difflib
from difflib import SequenceMatcher
m = SequenceMatcher(None, text1, text2)
print (m.ratio())
Explanation: Cosine works fine with whole words and word transposition but will start to trip up on CO vs COMPANY and when too much extraneous text is introduced.
DIFFLIB (Python Module)
String comparison using difflib - https://docs.python.org/3/library/difflib.html
End of explanation
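# A quick illustration of that weakness, reusing the helpers defined above
# (the abbreviated string is made up for demonstration):
short_vec = text_to_vector('General Electric Co')
full_vec = text_to_vector('General Electric Company')
print(get_cosine(short_vec, full_vec))  # 'Co' and 'Company' are different tokens, so the score drops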
## %%sh
## pip install fuzzywuzzy --user
Explanation: May have some issues where it comes to partial string matches http://chairnerd.seatgeek.com/fuzzywuzzy-fuzzy-string-matching-in-python/
FuzzyWuzzy (Python Module)
Background on FuzzyWuzzy - http://chairnerd.seatgeek.com/fuzzywuzzy-fuzzy-string-matching-in-python/
Package install should only need to be done once, unless the cluster was reset - this will install locally (using --user parameter), so variables need to be set
End of explanation
## %%sh
## pip install python-Levenshtein --user
from fuzzywuzzy import fuzz
from fuzzywuzzy import process
print (fuzz.ratio(text1, text2))
print (fuzz.partial_ratio(text1, text2))
print (fuzz.token_sort_ratio(text1, text2))
print (fuzz.token_set_ratio(text1, text2))
Explanation: fuzzywuzzy also wants to use python-Levenshtein to improve speed, but that install failed on gcc - it will complain below
End of explanation
import numpy as np
# Jaccard Similarity J (A,B) = | Intersection (A,B) | / | Union (A,B) |
def compute_jaccard_similarity_score(x, y):
intersection_cardinality = len(set(x).intersection(set(y)))
union_cardinality = len(set(x).union(set(y)))
return intersection_cardinality / float(union_cardinality)
score = compute_jaccard_similarity_score(text1, text2)
print ("Jaccard Similarity Score: ",score)
Explanation: Fuzzywuzzy has an interesting "process" function:
choices = ["Atlanta Falcons", "New York Jets", "New York Giants", "Dallas Cowboys"]
process.extract("new york jets", choices, limit=2)
[('New York Jets', 100), ('New York Giants', 78)]
process.extractOne("cowboys", choices)
("Dallas Cowboys", 90)
Jaccard
End of explanation
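# The "process" example quoted above, as runnable code (uses the fuzzywuzzy.process
# module imported earlier):
choices = ["Atlanta Falcons", "New York Jets", "New York Giants", "Dallas Cowboys"]
print(process.extract("new york jets", choices, limit=2))
print(process.extractOne("cowboys", choices))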
## %%sh
## pip install jellyfish --user
import jellyfish
jellyfish.levenshtein_distance(text1,text2)
jellyfish.damerau_levenshtein_distance(text1,text2)
jellyfish.jaro_distance(text1,text2)
jellyfish.jaro_winkler(text1,text2)
jellyfish.match_rating_comparison(text1,text2)
jellyfish.hamming_distance(text1,text2)
jellyfish.soundex(text1)
jellyfish.soundex(text2)
soundexenc = ''
sentence=text1.split()
for word in sentence:
soundexenc = soundexenc+' '+jellyfish.soundex(word)
print(soundexenc)
jellyfish.metaphone(text1)
jellyfish.metaphone(text2)
jellyfish.metaphone(text1) == jellyfish.metaphone(text2)
jellyfish.nysiis(text1)
jellyfish.nysiis(text2)
Explanation: taken from https://codegists.com/code/python%20jaccard/
Jellyfish
Testing Jellyfish library, with the following algorithms
String comparison:
Levenshtein Distance
Damerau-Levenshtein Distance
Jaro Distance
Jaro-Winkler Distance
Match Rating Approach Comparison
Hamming Distance
Phonetic encoding:
American Soundex
Metaphone
NYSIIS (New York State Identification and Intelligence System)
Match Rating Codex
https://github.com/jamesturk/jellyfish
End of explanation
jellyfish.nysiis(text1) == jellyfish.nysiis(text2)
nysiisenc = ''
sentence=text2.split()
for word in sentence:
nysiisenc = nysiisenc+' '+jellyfish.nysiis(word)
print(nysiisenc)
jellyfish.match_rating_codex(text1)
jellyfish.match_rating_codex(text2)
Explanation: Note that soundex and nysiis both appear to just take the first word
End of explanation |
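# Encoding word by word, as done above for soundex and nysiis (illustrative):
mrcenc = ''
for word in text1.split():
    mrcenc = mrcenc + ' ' + jellyfish.match_rating_codex(word)
print(mrcenc)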