Unnamed: 0 | cleaned_code | cleaned_prompt
int64 | string | string
---|---|---|
14,800 | <ASSISTANT_TASK:>
Python Code:
factors(689)
max_seq_len = 682
#full_train_size = 55820
#train_size = 55800
#small_train_size = 6000 #just because of performance reasons, no statistics behind this decision
#test_size = 6200
data_path = '../../../../Dropbox/data'
phae_path = data_path + '/price_hist_autoencoder'
csv_in = '../price_history_03_seq_start_suddens_trimmed.csv'
assert path.isfile(csv_in)
npz_unprocessed = phae_path + '/price_history_full_seqs.npz'
assert path.isfile(npz_unprocessed)
npz_dates = phae_path + '/price_history_full_seqs_dates.npz'
assert path.isfile(npz_dates)
npz_train = phae_path + '/price_history_seqs_dates_normed_train.npz'
assert path.isfile(npz_train)
npz_test = phae_path + '/price_history_seqs_dates_normed_test.npz'
assert path.isfile(npz_test)
npz_path = npz_train[:-len('_train.npz')]
for key, val in np.load(npz_train).iteritems():
print key, ",", val.shape
dp = PriceHistoryAutoEncDataProvider(npz_path=npz_path, batch_size=53, with_EOS=False)
for data in dp.datalist:
print data.shape
# for item in dp.next():
# print item.shape
# model = PriceHistoryAutoencoder(rng=random_state, dtype=dtype, config=config)
# graph = model.getGraph(batch_size=53,
# enc_num_units = 10,
# dec_num_units = 10,
# ts_len=max_seq_len)
#show_graph(graph)
def experiment():
return model.run(npz_path=npz_path,
epochs=2,
batch_size = 53,
enc_num_units = 400,
dec_num_units = 400,
ts_len=max_seq_len,
learning_rate = 1e-4,
preds_gather_enabled = False,
)
dyn_stats_dic = experiment()
dyn_stats_dic['dyn_stats'].plotStats()
plt.show()
dyn_stats_dic['dyn_stats_diff'].plotStats()
plt.show()
model = PriceHistoryAutoencoder(rng=random_state, dtype=dtype, config=config)
npz_test = npz_path + '_test.npz'
assert path.isfile(npz_test)
path.abspath(npz_test)
def experiment():
return model.run(npz_path=npz_path,
epochs=50,
batch_size = 53,
enc_num_units = 450,
dec_num_units = 450,
ts_len=max_seq_len,
learning_rate = 1e-3,
preds_gather_enabled = True,
)
#%%time
# dyn_stats_dic, preds_dict, targets, twods = experiment()
dyn_stats_dic, preds_dict, targets, twods = get_or_run_nn(experiment, filename='035_autoencoder_001',
nn_runs_folder = data_path + "/nn_runs")
dyn_stats_dic['dyn_stats'].plotStats()
plt.show()
dyn_stats_dic['dyn_stats_diff'].plotStats()
plt.show()
r2_scores = [r2_score(y_true=targets[ind], y_pred=preds_dict[ind])
for ind in range(len(targets))]
ind = np.argmin(r2_scores)
ind
reals = targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
#sns.tsplot(data=dp.inputs[ind].flatten())
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(targets[ind], preds_dict[ind])[0]
for ind in range(len(targets))]
np.mean(dtw_scores)
coint(preds, reals)
cur_ind = np.random.randint(len(targets))
reals = targets[cur_ind]
preds = preds_dict[cur_ind]
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b', label='reals')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
twod_arr = np.array(twods.values())
twod_arr.shape
plt.figure(figsize=(16,7))
plt.plot(twod_arr[:, 0], twod_arr[:, 1], 'r.')
plt.title('two dimensional representation of our time series after dimensionality reduction')
plt.xlabel('first dimension')
plt.ylabel('second dimension')
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1 - collect data
Step2: Step 2 - Build model
Step3: targets
Step4: Quick test run
Step5: Step 3 training the network
Step6: Conclusion
|
14,801 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
# All the imports
from __future__ import print_function, division
import pom3_ga, sys
import pickle
# TODO 1: Enter your unity ID here
__author__ = "pwang13"
def normalize(problem, points):
    """Normalize all the objectives
    in each point and return them"""
meta = problem.objectives
all_objs = []
for point in points:
objs = []
for i, o in enumerate(problem.evaluate(point)):
low, high = meta[i].low, meta[i].high
# TODO 3: Normalize 'o' between 'low' and 'high'; Then add the normalized value to 'objs'
if high == low:
objs.append(0)
continue
objs.append((o - low) / (high - low))
all_objs.append(objs)
return all_objs
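# Quick illustration (added for clarity, not part of the original notebook) of why
# we normalize: objectives measured on very different scales would otherwise dominate
# any distance-based measure (spread, IGD). Min-max scaling maps each objective onto
# [0, 1], exactly as done above with (o - low) / (high - low).
example_raw = [250.0, 500.0, 1000.0]
example_low, example_high = min(example_raw), max(example_raw)
print([(x - example_low) / (example_high - example_low) for x in example_raw])
# -> [0.0, 0.333..., 1.0]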
# Performing experiments for [5, 10, 50] generations.
problem = pom3_ga.POM3()
pop_size = 10
repeats = 10
test_gens = [5, 10, 50]
def save_data(file_name, data):
    """Save 'data' to 'file_name.pkl'"""
with open(file_name + ".pkl", 'wb') as f:
pickle.dump(data, f, pickle.HIGHEST_PROTOCOL)
def load_data(file_name):
    """Retrieve data from 'file_name.pkl'"""
with open(file_name + ".pkl", 'rb') as f:
return pickle.load(f)
def build(problem, pop_size, repeats, test_gens):
    """Repeat the experiment for 'repeats' number of repeats for each value in 'test_gens'"""
tests = {t: [] for t in test_gens}
tests[0] = [] # For Initial Population
for _ in range(repeats):
init_population = pom3_ga.populate(problem, pop_size)
pom3_ga.say(".")
for gens in test_gens:
tests[gens].append(normalize(problem, pom3_ga.ga(problem, init_population, retain_size=pop_size, gens=gens)[1]))
tests[0].append(normalize(problem, init_population))
print("\nCompleted")
return tests
# Repeat Experiments
# tests = build(problem, pop_size, repeats, test_gens)
# Save Experiment Data into a file
# save_data("dump", tests)
# Load the experimented data from dump.
tests = load_data("dump")
print(tests.keys())
def make_reference(problem, *fronts):
    """Make a reference set comparing all the fronts.
    Here the comparison we use is bdom. It can
    be altered to use cdom as well"""
retain_size = len(fronts[0])
reference = []
for front in fronts:
reference+=front
def bdom(one, two):
        """Return True if 'one' dominates 'two'
        else return False
        :param one - [pt1_obj1, pt1_obj2, pt1_obj3, pt1_obj4]
        :param two - [pt2_obj1, pt2_obj2, pt2_obj3, pt2_obj4]"""
dominates = False
for i, obj in enumerate(problem.objectives):
gt, lt = pom3_ga.gt, pom3_ga.lt
better = lt if obj.do_minimize else gt
# TODO 3: Use the varaibles declared above to check if one dominates two
if (better(one[i], two[i])):
dominates = True
elif one[i] != two[i]:
return False
return dominates
def fitness(one, dom):
return len([1 for another in reference if dom(one, another)])
fitnesses = []
for point in reference:
fitnesses.append((fitness(point, bdom), point))
reference = [tup[1] for tup in sorted(fitnesses, reverse=True)]
return reference[:retain_size]
assert len(make_reference(problem, tests[5][0], tests[10][0], tests[50][0])) == len(tests[5][0])
def eucledian(one, two):
    """Compute Eucledian Distance between
    2 vectors. We assume the input vectors
    are normalized.
    :param one: Vector 1
    :param two: Vector 2
    :return:"""
# TODO 4: Code up the eucledian distance. https://en.wikipedia.org/wiki/Euclidean_distance
return (sum([(o - t) ** 2 for o, t in zip(one, two)]) / len(one)) ** 0.5
def sort_solutions(solutions):
    """Sort a list of lists before computing spread"""
def sorter(lst):
m = len(lst)
weights = reversed([10 ** i for i in xrange(m)])
return sum([element * weight for element, weight in zip(lst, weights)])
return sorted(solutions, key=sorter)
def closest(one, many):
min_dist = sys.maxint
closest_point = None
for this in many:
dist = eucledian(this, one)
if dist < min_dist:
min_dist = dist
closest_point = this
return min_dist, closest_point
def spread(obtained, ideals):
    """Calculate the spread (a.k.a diversity)
    for a set of solutions"""
s_obtained = sort_solutions(obtained)
s_ideals = sort_solutions(ideals)
d_f = closest(s_ideals[0], s_obtained)[0]
d_l = closest(s_ideals[-1], s_obtained)[0]
n = len(s_ideals)
distances = []
for i in range(len(s_obtained)-1):
distances.append(eucledian(s_obtained[i], s_obtained[i+1]))
d_bar = sum(distances) / len(distances)
# TODO 5: Compute the value of spread using the definition defined in the previous cell.
d_sum = sum([abs(d_i - d_bar) for d_i in distances])
delta = (d_f + d_l + d_sum) / (d_f + d_l + (n - 1)*d_bar)
return delta
ref = make_reference(problem, tests[5][0], tests[10][0], tests[50][0])
print(spread(tests[5][0], ref))
print(spread(tests[10][0], ref))
print(spread(tests[50][0], ref))
def igd(obtained, ideals):
    """Compute the IGD for a
    set of solutions
    :param obtained: Obtained pareto front
    :param ideals: Ideal pareto front
    :return:"""
# TODO 6: Compute the value of IGD using the definition defined in the previous cell.
igd_val = sum([closest(ideal, obtained)[0] for ideal in ideals]) / len(ideals)
# igd_val = 0
return igd_val
ref = make_reference(problem, tests[5][0], tests[10][0], tests[50][0])
print(igd(tests[5][0], ref))
print(igd(tests[10][0], ref))
print(igd(tests[50][0], ref))
import sk
sk = reload(sk)
def format_for_sk(problem, data, measure):
    """Convert the experiment data into the format
    required for sk.py and compute the desired
    'measure' for all the data."""
gens = data.keys()
reps = len(data[gens[0]])
measured = {gen:["gens_%d"%gen] for gen in gens}
for i in range(reps):
ref_args = [data[gen][i] for gen in gens]
ref = make_reference(problem, *ref_args)
for gen in gens:
measured[gen].append(measure(data[gen][i], ref))
return measured
def report(problem, tests, measure):
measured = format_for_sk(problem, tests, measure).values()
sk.rdivDemo(measured)
print("*** IGD ***")
report(problem, tests, igd)
print("\n*** Spread ***")
report(problem, tests, spread)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: To compute most measures, the data (i.e. the objectives) is normalized. Normalization is scaling the data between 0 and 1. Why do we normalize?
Step10: Data Format
Step13: Reference Set
Step17: Spread
Step20: IGD = inter-generational distance; i.e. how good are you compared to the best known?
|
14,802 | <ASSISTANT_TASK:>
Python Code:
class Stack:
def __init__(self):
self.items = []
def isEmpty(self):
return self.items == []
def push(self, item):
self.items.append(item)
def pop(self):
return self.items.pop()
def peek(self):
return self.items[len(self.items)-1]
def size(self):
return len(self.items)
s=Stack()
print(s.isEmpty())
s.push(4)
s.push('dog')
print(s.peek())
s.push(True)
print(s.size())
print(s.isEmpty())
s.push(8.4)
print(s.items)
print(s.pop())
print(s.pop())
print(s.size())
class Queue:
def __init__(self):
self.items = []
def add(self,item):
self.items.append( item )
def remove(self):
self.items.remove( self.items[0])
def peek(self):
return self.items[0]
def isEmpty(self):
return self.items == []
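# Brief usage sketch (added for clarity, not in the original notebook): the Queue
# above releases items in FIFO order, in contrast to the Stack's LIFO order
# demonstrated above.
q = Queue()
q.add(1)
q.add(2)
q.add(3)
print(q.peek())    # 1 -- the earliest item added is served first
q.remove()
print(q.peek())    # 2
print(q.isEmpty()) # False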
def parChecker(symbolString):
s = Stack()
balanced = True
index = 0
while index < len(symbolString) and balanced:
symbol = symbolString[index]
if symbol == "(":
s.push(symbol)
else:
if s.isEmpty():
balanced = False
else:
s.pop()
index = index + 1
if balanced and s.isEmpty():
return True
else:
return False
print(parChecker('((()))'))
print(parChecker('(()'))
def parChekcer(symbolString):
s = Stack()
balanced = True
index = 0
while index < len(symbolString) and balanced:
symbol = symbolString[index]
if symbol in "([{":
s.push(symbol)
else:
if s.isEmpty():
balanced = False
else:
top = s.pop()
if not matches(top,symbol):
balanced = False
index = index + 1
if balanced and s.isEmpty():
return True
else:
return False
def matches(open,close):
opens = "([{"
closers = ")]}"
return opens.index(open) == closers.index(close)
print(parChecker('{{([][])}()}') )
print(parChecker('[{()]'))
def divideBy2(decNumber):
remstack = Stack()
while decNumber >0:
rem = decNumber % 2
remstack.push(rem)
decNumber = decNumber // 2
binString = ""
while not remstack.isEmpty():
binString = binString + str(remstack.pop())
return binString
print(divideBy2(42))
divideBy2(233)
def baseConverter(decNumber,base):
digits = "0123456789ABCDEF"
remstack=Stack()
while decNumber >0:
rem = decNumber % base
remstack.push(rem)
decNumber = decNumber // base
newString = ""
while not remstack.isEmpty():
newString = newString + digits[remstack.pop()]
return newString
print(baseConverter(25,2))
print(baseConverter(25,16))
print(baseConverter(25,8))
print(baseConverter(256,16))
print(baseConverter(26,26))
# Node class
class Node:
# Function to initialize the node object
def __init__(self,data):
self.data = data # Assign data
self.next= None # Initialize
# next as null
# Linked List class
class LinkedList:
# Function to initialize the Linked
# List object
def __init__(self):
self.head = None
# This function prints contents of linked list
# starting from head
# traversal of a linked list
def printList(self):
temp = self.head
while (temp):
            print(temp.data)
temp = temp.next
# Start with the empty list
llist = LinkedList()
llist.head = Node(1)
second = Node(2)
third = Node(3)
llist.head.next = second; # Link 1st node with second
second.next = third
llist.printList()
class Node:
def __init__(self,val):
self.val = val
self.next = None
class LinkedList:
def __init__(self,val=None):
if val is None:
self.head = None
else:
self.head = Node(val)
def insertEnd(self,val):
temp = self.head
while(temp.next): # check if temp has a next
temp = temp.next # keep traversing
temp.next = Node(val)
def printList(self):
temp = self.head
while (temp): # stop when temp is a None, which could happen with next in Node
            print(temp.val)
temp = temp.next
llist = LinkedList(1)
llist.printList()
llist.insertEnd(2)
llist.printList()
llist.insertEnd(4)
llist.printList()
llist.insertEnd(2)
llist.printList()
llist.insertEnd(6)
llist.printList()
llist.insertEnd(7)
llist.printList()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: cf. 3 Stacks and Queues, Cracking the Coding Interview, 6th Ed., McDowell, stack uses LIFO - as in a stack of dinner plates, the most recent item added to the stack is the 1st item to be removed.
Step2: cf. 3.6. Simple Balanced Parentheses
Step3: Linked List
|
14,803 | <ASSISTANT_TASK:>
Python Code:
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
sys.path.append(module_path + '/rl_coach')
from typing import Union
import numpy as np
from rl_coach.agents.ddpg_agent import DDPGAgent, DDPGAgentParameters, DDPGAlgorithmParameters
from rl_coach.spaces import SpacesDefinition
from rl_coach.core_types import RunPhase
class HACDDPGAlgorithmParameters(DDPGAlgorithmParameters):
def __init__(self):
super().__init__()
self.sub_goal_testing_rate = 0.5
self.time_limit = 40
class HACDDPGAgentParameters(DDPGAgentParameters):
def __init__(self):
super().__init__()
        self.algorithm = HACDDPGAlgorithmParameters()
class HACDDPGAgent(DDPGAgent):
def __init__(self, agent_parameters, parent: Union['LevelManager', 'CompositeAgent']=None):
super().__init__(agent_parameters, parent)
self.sub_goal_testing_rate = self.ap.algorithm.sub_goal_testing_rate
self.graph_manager = None
def choose_action(self, curr_state):
        # the top level decides, for each of its generated sub-goals, whether all the
        # layers beneath it are in a sub-goal testing phase
graph_manager = self.parent_level_manager.parent_graph_manager
if self.ap.is_a_highest_level_agent:
graph_manager.should_test_current_sub_goal = np.random.rand() < self.sub_goal_testing_rate
if self.phase == RunPhase.TRAIN:
if graph_manager.should_test_current_sub_goal:
self.exploration_policy.change_phase(RunPhase.TEST)
else:
self.exploration_policy.change_phase(self.phase)
action_info = super().choose_action(curr_state)
return action_info
def update_transition_before_adding_to_replay_buffer(self, transition):
graph_manager = self.parent_level_manager.parent_graph_manager
# deal with goals given from a higher level agent
if not self.ap.is_a_highest_level_agent:
transition.state['desired_goal'] = self.current_hrl_goal
transition.next_state['desired_goal'] = self.current_hrl_goal
self.distance_from_goal.add_sample(self.spaces.goal.distance_from_goal(
self.current_hrl_goal, transition.next_state))
goal_reward, sub_goal_reached = self.spaces.goal.get_reward_for_goal_and_state(
self.current_hrl_goal, transition.next_state)
transition.reward = goal_reward
transition.game_over = transition.game_over or sub_goal_reached
# each level tests its own generated sub goals
if not self.ap.is_a_lowest_level_agent and graph_manager.should_test_current_sub_goal:
_, sub_goal_reached = self.spaces.goal.get_reward_for_goal_and_state(
transition.action, transition.next_state)
sub_goal_is_missed = not sub_goal_reached
if sub_goal_is_missed:
transition.reward = -self.ap.algorithm.time_limit
return transition
def set_environment_parameters(self, spaces: SpacesDefinition):
super().set_environment_parameters(spaces)
if self.ap.is_a_highest_level_agent:
# the rest of the levels already have an in_action_space set to be of type GoalsSpace, thus they will have
# their GoalsSpace set to the in_action_space in agent.set_environment_parameters()
self.spaces.goal = self.spaces.action
self.spaces.goal.set_target_space(self.spaces.state[self.spaces.goal.goal_name])
if not self.ap.is_a_highest_level_agent:
self.spaces.reward.reward_success_threshold = self.spaces.goal.reward_type.goal_reaching_reward
from rl_coach.architectures.tensorflow_components.layers import Dense
from rl_coach.base_parameters import VisualizationParameters, EmbeddingMergerType, EmbedderScheme
from rl_coach.architectures.embedder_parameters import InputEmbedderParameters
from rl_coach.memories.episodic.episodic_hindsight_experience_replay import HindsightGoalSelectionMethod, \
EpisodicHindsightExperienceReplayParameters
from rl_coach.memories.episodic.episodic_hrl_hindsight_experience_replay import \
EpisodicHRLHindsightExperienceReplayParameters
from rl_coach.memories.memory import MemoryGranularity
from rl_coach.spaces import GoalsSpace, ReachingGoal
from rl_coach.exploration_policies.ou_process import OUProcessParameters
from rl_coach.core_types import EnvironmentEpisodes, EnvironmentSteps, RunPhase, TrainingSteps
time_limit = 1000
polar_coordinates = False
distance_from_goal_threshold = np.array([0.075, 0.075, 0.75])
goals_space = GoalsSpace('achieved_goal',
ReachingGoal(default_reward=-1, goal_reaching_reward=0,
distance_from_goal_threshold=distance_from_goal_threshold),
lambda goal, state: np.abs(goal - state)) # raw L1 distance
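# Illustrative check (an assumption added for clarity, not from the original tutorial):
# with the raw L1 distance above, a sub-goal is treated as reached when every component
# of |goal - state| falls within distance_from_goal_threshold.
example_goal = np.array([0.10, 0.20, 0.50])
example_state = np.array([0.15, 0.18, 1.00])
print(np.abs(example_goal - example_state) <= distance_from_goal_threshold)
# -> [ True  True  True ], i.e. this state would satisfy the sub-goal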
top_agent_params = HACDDPGAgentParameters()
# memory - Hindsight Experience Replay
top_agent_params.memory = EpisodicHRLHindsightExperienceReplayParameters()
top_agent_params.memory.max_size = (MemoryGranularity.Transitions, 10000000)
top_agent_params.memory.hindsight_transitions_per_regular_transition = 3
top_agent_params.memory.hindsight_goal_selection_method = HindsightGoalSelectionMethod.Future
top_agent_params.memory.goals_space = goals_space
top_agent_params.algorithm.num_consecutive_playing_steps = EnvironmentEpisodes(32)
top_agent_params.algorithm.num_consecutive_training_steps = 40
top_agent_params.algorithm.num_steps_between_copying_online_weights_to_target = TrainingSteps(40)
# exploration - OU process
top_agent_params.exploration = OUProcessParameters()
top_agent_params.exploration.theta = 0.1
# actor - note that the default middleware is overriden with 3 dense layers
top_actor = top_agent_params.network_wrappers['actor']
top_actor.input_embedders_parameters = {'observation': InputEmbedderParameters(scheme=EmbedderScheme.Empty),
'desired_goal': InputEmbedderParameters(scheme=EmbedderScheme.Empty)}
top_actor.middleware_parameters.scheme = [Dense([64])] * 3
top_actor.learning_rate = 0.001
top_actor.batch_size = 4096
# critic - note that the default middleware is overriden with 3 dense layers
top_critic = top_agent_params.network_wrappers['critic']
top_critic.input_embedders_parameters = {'observation': InputEmbedderParameters(scheme=EmbedderScheme.Empty),
'action': InputEmbedderParameters(scheme=EmbedderScheme.Empty),
'desired_goal': InputEmbedderParameters(scheme=EmbedderScheme.Empty)}
top_critic.embedding_merger_type = EmbeddingMergerType.Concat
top_critic.middleware_parameters.scheme = [Dense([64])] * 3
top_critic.learning_rate = 0.001
top_critic.batch_size = 4096
from rl_coach.schedules import ConstantSchedule
from rl_coach.exploration_policies.e_greedy import EGreedyParameters
bottom_agent_params = HACDDPGAgentParameters()
bottom_agent_params.algorithm.in_action_space = goals_space
bottom_agent_params.memory = EpisodicHindsightExperienceReplayParameters()
bottom_agent_params.memory.max_size = (MemoryGranularity.Transitions, 12000000)
bottom_agent_params.memory.hindsight_transitions_per_regular_transition = 4
bottom_agent_params.memory.hindsight_goal_selection_method = HindsightGoalSelectionMethod.Future
bottom_agent_params.memory.goals_space = goals_space
bottom_agent_params.algorithm.num_consecutive_playing_steps = EnvironmentEpisodes(16 * 25) # 25 episodes is one true env episode
bottom_agent_params.algorithm.num_consecutive_training_steps = 40
bottom_agent_params.algorithm.num_steps_between_copying_online_weights_to_target = TrainingSteps(40)
bottom_agent_params.exploration = EGreedyParameters()
bottom_agent_params.exploration.epsilon_schedule = ConstantSchedule(0.2)
bottom_agent_params.exploration.evaluation_epsilon = 0
bottom_agent_params.exploration.continuous_exploration_policy_parameters = OUProcessParameters()
bottom_agent_params.exploration.continuous_exploration_policy_parameters.theta = 0.1
# actor
bottom_actor = bottom_agent_params.network_wrappers['actor']
bottom_actor.input_embedders_parameters = {'observation': InputEmbedderParameters(scheme=EmbedderScheme.Empty),
'desired_goal': InputEmbedderParameters(scheme=EmbedderScheme.Empty)}
bottom_actor.middleware_parameters.scheme = [Dense([64])] * 3
bottom_actor.learning_rate = 0.001
bottom_actor.batch_size = 4096
# critic
bottom_critic = bottom_agent_params.network_wrappers['critic']
bottom_critic.input_embedders_parameters = {'observation': InputEmbedderParameters(scheme=EmbedderScheme.Empty),
'action': InputEmbedderParameters(scheme=EmbedderScheme.Empty),
'desired_goal': InputEmbedderParameters(scheme=EmbedderScheme.Empty)}
bottom_critic.embedding_merger_type = EmbeddingMergerType.Concat
bottom_critic.middleware_parameters.scheme = [Dense([64])] * 3
bottom_critic.learning_rate = 0.001
bottom_critic.batch_size = 4096
agents_params = [top_agent_params, bottom_agent_params]
from rl_coach.environments.gym_environment import Mujoco
from rl_coach.environments.environment import SelectedPhaseOnlyDumpMethod
from rl_coach.graph_managers.hrl_graph_manager import HRLGraphManager
from rl_coach.graph_managers.graph_manager import ScheduleParameters
env_params = Mujoco()
env_params.level = "rl_coach.environments.mujoco.pendulum_with_goals:PendulumWithGoals"
env_params.additional_simulator_parameters = {"time_limit": time_limit,
"random_goals_instead_of_standing_goal": False,
"polar_coordinates": polar_coordinates,
"goal_reaching_thresholds": distance_from_goal_threshold}
env_params.frame_skip = 10
env_params.custom_reward_threshold = -time_limit + 1
vis_params = VisualizationParameters()
vis_params.video_dump_methods = [SelectedPhaseOnlyDumpMethod(RunPhase.TEST)]
vis_params.dump_mp4 = False
vis_params.native_rendering = False
schedule_params = ScheduleParameters()
schedule_params.improve_steps = EnvironmentEpisodes(40 * 4 * 64) # 40 epochs
schedule_params.steps_between_evaluation_periods = EnvironmentEpisodes(4 * 64) # 4 small batches of 64 episodes
schedule_params.evaluation_steps = EnvironmentEpisodes(64)
schedule_params.heatup_steps = EnvironmentSteps(0)
graph_manager = HRLGraphManager(agents_params=agents_params, env_params=env_params,
schedule_params=schedule_params, vis_params=vis_params,
consecutive_steps_to_run_each_level=EnvironmentSteps(40))
graph_manager.visualization_parameters.render = True
from rl_coach.base_parameters import TaskParameters, Frameworks
log_path = '../experiments/pendulum_hac'
if not os.path.exists(log_path):
os.makedirs(log_path)
task_parameters = TaskParameters(framework_type=Frameworks.tensorflow,
evaluate_only=False,
experiment_path=log_path)
task_parameters.__dict__['checkpoint_save_secs'] = None
task_parameters.__dict__['verbosity'] = 'low'
graph_manager.create_graph(task_parameters)
graph_manager.improve()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's define the HAC algorithm and agent parameters.
Step2: Now we'll define the agent itself - HACDDPGAgent - which subclasses the DDPG agent class. The main difference between the DDPG agent and the HACDDPGAgent is the subgoal a higher level agent defines to a lower level agent, hence the overrides of the DDPG Agent functions.
Step3: The Preset
Step4: The bottom agent
Step5: Now we define the parameters of all the agents in the hierarchy from top to bottom
Step6: Define the environment, visualization and schedule parameters. The schedule parameters refer to the top level agent.
Step7: Lastly, we create a HRLGraphManager that will execute the hierarchical agent we defined according to the parameters.
Step8: Running the Preset
|
14,804 | <ASSISTANT_TASK:>
Python Code::
from sklearn.model_selection import GridSearchCV
import xgboost as xgb
# create a dictionary containing the hyperparameters
# to tune and the range of values to try
PARAMETERS = {"subsample":[0.75, 1],
"colsample_bytree":[0.75, 1],
"max_depth":[2, 6],
"min_child_weight":[1, 5],
"learning_rate":[0.1, 0.01]}
# create a validation set which will be used for early stopping
eval_set = [(X_val, y_val)]
# initialise an XGBoost classifier, set the number of estimators,
# evaluation metric & early stopping rounds
estimator = xgb.XGBClassifier(n_estimators=100,
n_jobs=-1,
eval_metric='logloss',
early_stopping_rounds=10)
# initialise GridSearchCV model by passing the XGB classifier we
# initialised in the last step along with the dictionary of parameters
# and values to try. We also set the number of folds to validate over
# along with the scoring metric to use
model = GridSearchCV(estimator=estimator,
param_grid=PARAMETERS,
cv=3,
scoring="neg_log_loss")
# fit model
model.fit(X_train,
y_train,
eval_set=eval_set,
verbose=0)
# print out the best hyperparameters
print(model.best_params_)
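# Possible follow-up (sketch only): GridSearchCV refits the best configuration on the
# full training data by default, so the tuned model can be evaluated directly.
# X_test / y_test are assumed to exist as a held-out split; they are not defined in
# this snippet.
best_model = model.best_estimator_
print(best_model.score(X_test, y_test))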
<END_TASK>
| <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
14,805 | <ASSISTANT_TASK:>
Python Code:
#Import libraries
import sys, os
import pandas as pd
import numpy as np
#Get file names; these files are created by the CreateUsageTable.py and CreateSupplyTable.py respectively
dataDir = '../../Data'
tidyuseFN = dataDir + os.sep + "UsageDataTidy.csv"
tidysupplyFN = dataDir + os.sep + "SupplyTableTidy.csv"
outCountyFN = dataDir + os.sep + "WaterByCounty.csv"
outStateFN = dataDir + os.sep + "WaterByState.csv"
outNationFN = dataDir + os.sep + "WaterBalanceData.csv"
#Read in the usage table from the csv file
dfUse = pd.read_csv(tidyuseFN,dtype={'FIPS':np.str})
#Remove rows with irrigation and thermoelectric sub-classes
#dropValues = ['Irrigation_Crop', 'Irrigation_Golf','ThermoElec_OnceThru', 'ThermoElec_Recirc']
dropValues = ['Irrigation','ThermoElec']
dfUse = dfUse[~dfUse['UseClass'].isin(dropValues)]
#Convert amounts from MGal/day to MGal/year
dfUse['Amount'] = dfUse['Amount'] * 365
#Add STATEFIPS column to dfUse (as left most 2 characters of FIPS values)
dfUse['STATEFIPS'] = dfUse['FIPS'].str[:2]
#Pivot on YEAR and FIPS listing usage in sector/source categories
dfUseFIPS = dfUse.pivot_table(index=['YEAR','STATE','FIPS'],
values='Amount',
aggfunc='sum',
columns=['UseClass','SrcClass'])
#Flatten hierarchical column names
dfUseFIPS.columns = ['_'.join(col).strip() for col in dfUseFIPS.columns.values]
#Remove indices so values are available as columns
dfUseFIPS.reset_index(inplace=True)
dfUseFIPS.head(2)
#Read in the supply table from the csv file
dfSupply = pd.read_csv(tidysupplyFN,dtype={'FIPS':np.str,'STATEFIPS':np.str})
#Compute supply as precipitation - evapotranspiration
#(See https://www.fs.fed.us/rm/value/docs/spatial_distribution_water_supply.pdf)
# * Could also use total_runoff
# * Values are in mm/year and need to be adjusted to MGal/year by multiplying by weighted area
dfSupply['Supply'] = dfSupply['pr'] - dfSupply['et']
#Summarize supply on YEAR and FIPS
'''We take the mean mm/year across points in a county and then
multiply by county area to get volume (mm * m3). These values
then need to be converted to MGal to give MGal/year
'''
#Compute mean runoff and supply on year and county
dfSupplyFIPS = dfSupply.groupby(('YEAR','STATEFIPS','FIPS','Area'))['total_runoff','Supply'].mean()
#Reset the index so Year, StateFIPS, FIPS, and AREA become columns again
dfSupplyFIPS.reset_index(inplace=True)
#Convert mm/Year * county area (m2) into MGal/year - to match use values
''' m = [mm] / 1000;
m * [m2] = m3;
[m3] / 3785.41178 = 1 MGal'''
for param in ('total_runoff','Supply'):
dfSupplyFIPS[param] = (dfSupplyFIPS[param] / 1000.0) * dfSupplyFIPS.Area / 3785.41178
dfSupplyFIPS.head(2)
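# Worked example of the conversion above, with purely illustrative numbers:
# 500 mm/year of net supply over a 1e9 m2 county
# -> 0.5 m * 1e9 m2 = 5e8 m3 -> 5e8 / 3785.41178 ~= 1.32e5 MGal/year
print((500 / 1000.0) * 1e9 / 3785.41178)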
dfSupplyFIPS.columns.values
#Merge the two tables on YEAR and FIPS columns
dfAll = pd.merge(dfUseFIPS, dfSupplyFIPS, how='outer', on=['YEAR','FIPS'])
dfAll.head(2)
#Export to csv
dfAll.to_csv(outCountyFN, index=False, encoding='utf8')
#Group by YEAR
dfUS = dfAll.groupby('YEAR').sum()
dfUS.head()
dfUS.reset_index(inplace=True)
dfUSm = pd.melt(dfUS,id_vars='YEAR',var_name='Group',value_name='MGal')
dfUSm.to_csv(outNationFN,index=False)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Summarize USE table by county
Step2: Import and summarize supply table by county
Step3: Join Use and Supply Tables on Year and FIPS
Step4: Summarize for entire US
|
14,806 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from __future__ import print_function # only necessary if using Python 2.x
import numpy as np
from pyshtools import SHCoeffs
lmax = 30
coeffs = SHCoeffs.from_zeros(lmax)
coeffs.set_coeffs(values=[1], ls=[10], ms=[0])
grid = coeffs.expand()
fig, ax = grid.plot3d(elevation=20, azimuth=30)
ldata = 30
degrees = np.arange(ldata+1, dtype=float)
degrees[0] = np.inf
power = degrees**(-2)
coeffs2 = SHCoeffs.from_random(power)
grid2 = coeffs2.expand()
fig, ax = grid2.plot3d(elevation=20, azimuth=30)
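# Optional sanity check (a sketch, assuming your pyshtools version provides
# SHCoeffs.spectrum()): the power per degree of the random realization should
# roughly follow the prescribed l**(-2) law.
import matplotlib.pyplot as plt
power_out = coeffs2.spectrum()
degrees_plot = np.arange(1, ldata + 1, dtype=float)
plt.loglog(degrees_plot, power_out[1:], label='realization')
plt.loglog(degrees_plot, degrees_plot**(-2), label='l**-2')
plt.xlabel('spherical harmonic degree')
plt.ylabel('power')
plt.legend()
plt.show()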
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To plot the data, we first expand it on a grid, and then use the method plot3d()
Step2: Let's try a somewhat more complicated function. Here we will calculate a random realization of a process whose power spectrum follows a power law with exponent -2
|
14,807 | <ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
# if using a Jupyter notebook, includue:
%matplotlib inline
mu = 80
sigma = 7
x = np.random.normal(mu, sigma, size=200)
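# For reference, the keyword form of the call above is
# np.random.normal(loc=mean, scale=standard_deviation, size=number_of_samples);
# the line below is equivalent to the positional call used here.
x_keyword = np.random.normal(loc=mu, scale=sigma, size=200)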
plt.hist(x, 20,
density=True,
histtype='bar',
facecolor='b',
alpha=0.5)
plt.show()
import matplotlib.pyplot as plt
# if using a Jupyter notebook, include:
%matplotlib inline
commute_times = [23, 25, 40, 35, 36, 47, 33, 28, 48, 34,
20, 37, 36, 23, 33, 36, 20, 27, 50, 34,
47, 18, 28, 52, 21, 44, 34, 13, 40, 49]
plt.hist(commute_times, 5)
plt.show()
bin_edges = [0,15,30,45,60]
plt.hist(commute_times,
bins=bin_edges,
density=False,
histtype='bar',
color='b',
edgecolor='k',
alpha=0.5)
plt.xlabel('Commute time (min)')
plt.xticks([0,15,30,45,60])
plt.ylabel('Number of commuters')
plt.title('Histogram of commute times')
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For our dataset, let's define a mean (average) mu = 80 and a standard deviation (spread) sigma = 7. Then we'll use numpy's np.random.normal() function to produce an array of random numbers with a normal distribution. 200 random numbers is a sufficient quantity to plot. The general format of the np.random.normal() function is below
Step2: Matplotlib's plt.hist() function produces histogram plots. The first positional argument passed to plt.hist() is a list or array of values, the second positional argument denotes the number of bins on the histogram.
Step3: Our next histogram example involves a list of commute times. Suppose the following commute times were recorded in a survey
Step4: Now we'll call plt.hist() and include our commute_times list and specify 5 bins.
Step5: If we want our bins to have specific bin ranges, we can specify a list or array of bin edges in the keyword argument bins=. Let's also add some axis labels and a title to the histogram. A table of some keyword arguments used with plt.hist() is below
|
14,808 | <ASSISTANT_TASK:>
Python Code:
# server = subprocess.Popen(['python', '../go_persistent_server.py'])
# time.sleep(3)
# web = subprocess.Popen(['python', '../go_web.py'])
# time.sleep(3)
web_interface = WebInterface()
results = web_interface.add_trigger(
'junk', 'insert_ts', None, 'db:one:ts')
assert results[0] == 200
print(results)
results = web_interface.add_trigger(
'stats', 'insert_ts', ['mean', 'std'], None)
assert results[0] == 200
print(results[1])
def tsmaker(m, s, j):
'''
Helper function: randomly generates a time series for testing.
Parameters
----------
m : float
Mean value for generating time series data
s : float
Standard deviation value for generating time series data
j : float
Quantifies the "jitter" to add to the time series data
Returns
-------
A time series and associated meta data.
'''
# generate metadata
meta = {}
meta['order'] = int(np.random.choice(
[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5]))
meta['blarg'] = int(np.random.choice([1, 2]))
# generate time series data
t = np.arange(0.0, 1.0, 0.01)
v = norm.pdf(t, m, s) + j * np.random.randn(ts_length)
# return time series and metadata
return meta, TimeSeries(t, v)
mus = np.random.uniform(low=0.0, high=1.0, size=50)
sigs = np.random.uniform(low=0.05, high=0.4, size=50)
jits = np.random.uniform(low=0.05, high=0.2, size=50)
ts_length = 100
# initialize dictionaries for time series and their metadata
tsdict = {}
metadict = {}
for i, m, s, j in zip(range(50), mus, sigs, jits):
meta, tsrs = tsmaker(m, s, j)
# the primary key format is ts-1, ts-2, etc
pk = "ts-{}".format(i)
tsdict[pk] = tsrs
meta['vp'] = False # augment metadata with a boolean asking if this is a VP.
metadict[pk] = meta
vpkeys = ["ts-{}".format(i) for i in np.random.choice(range(50), size=5, replace=False)]
for i in range(5):
# add 5 triggers to upsert distances to these vantage points
# data = json.dumps({'proc':'corr', 'onwhat':'insert_ts', 'target':["d_vp-{}".format(i)], 'arg':tsdict[vpkeys[i]].to_json()})
# r = requests.post(self.web_url+'/add_trigger', data)
r = web_interface.add_trigger('corr', 'insert_ts', ["d_vp-{}".format(i)], tsdict[vpkeys[i]].to_json())
assert(r[0] == 200)
# change the metadata for the vantage points to have meta['vp']=True
metadict[vpkeys[i]]['vp'] = True
for k in tsdict:
results = web_interface.insert_ts(k, tsdict[k])
assert results[0] == 200
# upsert meta
results = web_interface.upsert_meta(k, metadict[k])
assert results[0] == 200
results = web_interface.add_trigger(
'junk', 'insert_ts', None, 'db:one:ts')
results
# ==========================================
# However if it's not first time to insert these keys,
# insert_ts will return TSDBStatus.INVALID_KEY
# ==========================================
# pick a random pk
idx = np.random.choice(list(tsdict.keys()))
# check that the time series is there now
results = web_interface.select({"primary_key": idx})
assert results[0] == 200
# delete an existing time series
results = web_interface.delete_ts(idx)
assert results[0] == 200
# check that the time series is no longer there
results = web_interface.select({"md":{"order": 1}, "fields":["ts"], "additional":{"sort_by":"-order"}})
assert results[0] == 200
# add the time series back in
results = web_interface.insert_ts(idx, tsdict[idx])
assert results[0] == 200
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step8: Having set up the triggers, now insert the time series, and upsert the metadata
|
14,809 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
dftrain = pd.read_csv('data/bike_sharing_train.csv')
dfval = pd.read_csv('data/bike_sharing_val.csv')
dftest = pd.read_csv('data/bike_sharing_test.csv')
ntrain = len(dftrain)
nval = len(dftrain) + len(dfval)
df = pd.concat([dftrain,dfval,dftest])
print('\nSummary - dataframe completo:\n')
print(df.describe())
df.head()
import matplotlib.pyplot as plt
for column in ['season','workingday','weather','temp', 'atemp', 'humidity', 'windspeed','casual','registered']:
g = df.groupby(column)['count','registered','casual'].mean()
if len(g) > 4:
g.plot()
else:
g.plot.bar()
plt.show()
df['hour'] = pd.to_datetime(df['datetime']).apply(lambda x: x.strftime('%H'))
df['hour'] = pd.to_numeric(df['hour'])
df['day'] = pd.to_datetime(df['datetime']).apply(lambda x: x.strftime('%d'))
df['day'] = pd.to_numeric(df['day'])
df['month'] = pd.to_datetime(df['datetime']).apply(lambda x: x.strftime('%m'))
df['month'] = pd.to_numeric(df['month'])
df['year'] = pd.to_datetime(df['datetime']).apply(lambda x: x.strftime('%Y'))
df['year'] = pd.to_numeric(df['year'])
for column in ['hour','day','month','year']:
g = df.groupby(column)['count'].mean()
g.plot('bar')
plt.show()
Xdf=df.loc[:,['season','holiday','workingday','weather','temp','atemp','humidity','windspeed','hour']]
Ydf=df.loc[:,'count']
X_train = Xdf[0:ntrain].values
X_val = Xdf[ntrain:nval].values
X_test = Xdf[nval:].values
Y_train = Ydf[0:ntrain].values
Y_val = Ydf[ntrain:nval].values
Y_test = Ydf[nval:].values
from sklearn.tree import DecisionTreeRegressor
import matplotlib.pyplot as plt
def eval_bikemodel(y_predict,y_true,**kwargs):
diff = np.log(y_predict+1.0) - np.log(y_true+1.0)
return np.sqrt(np.sum(np.square(diff))/len(y_predict))
model = DecisionTreeRegressor(random_state=0)
model.fit(X_train,Y_train)
score_test = model.score(X_test,Y_test)
print("SCORE TEST=%f"%score_test)
Y_pred_train = model.predict(X_train)
Y_pred_val = model.predict(X_val)
Y_pred_test = model.predict(X_test)
kagg_train = eval_bikemodel(Y_pred_train,Y_train)
kagg_val = eval_bikemodel(Y_pred_val,Y_val)
kagg_test = eval_bikemodel(Y_pred_test,Y_test)
print("KAGG EVAL TRAIN =%f"%kagg_train)
print("KAGG EVAL TEST =%f"%kagg_test)
plt.plot(Y_test,Y_pred_test,'.')
plt.show()
from sklearn.model_selection import cross_val_score
from sklearn.metrics import make_scorer
def train_tree(X,Y,crit=["mse","mae"],max_depth=30,min_depth=5):
scores = []
count,current = len(crit)*(max_depth-min_depth+1),1
for d in range(min_depth,max_depth+1):
for c in crit:
model = DecisionTreeRegressor(random_state=0,max_depth=d,criterion=c)
score = 0#cross_val_score(model, X,Y).mean()
score_bike = cross_val_score(model, X,Y, scoring=make_scorer(eval_bikemodel)).mean()
scores.append([model, score, score_bike, c, d])
# print("\t Score:%f \t BikeScore:%f \t Max Depth: %d \t Criterion: %s"%(score,score_bike,d,c))
print ("%d/%d"%(current,count), end="\r")
current+=1
return scores
scores = train_tree(np.concatenate([X_train,X_val]),np.concatenate([Y_train,Y_val]),max_depth=20,min_depth=5)
def top_scores(scores):
sorted_scores = sorted(scores, key=(lambda x: -x[2]), reverse=False)
print("Top 10 models:")
for s in sorted_scores[-10:]:
print(" Score:%f \t BikeScore:%f \t Max Depth: %d \t Criterion: %s"%(s[1],s[2],s[4],s[3]))
return sorted_scores[-1]
t=top_scores(scores)
t[0].fit(np.concatenate([X_train,X_val]),np.concatenate([Y_train,Y_val]))
print("Top BikeScore Test: %f"%eval_bikemodel(t[0].predict(X_test),Y_test))
df['cday'] = pd.to_datetime(df['datetime']).dt.dayofweek # 0: Monday, 6: Sunday
df['cday'] = pd.to_numeric(df['cday'])
df['cyear'] = pd.to_datetime(df['datetime']).apply(lambda x: x.strftime('%Y'))
df['cyear'] = pd.to_numeric(df['cyear'])
df['cmonth'] = pd.to_datetime(df['datetime']).apply(lambda x: x.strftime('%m'))
df['cmonth'] = pd.to_numeric(df['cmonth'])
df['ghour'] = df['hour'].apply(lambda x: int(x/3))
df['gatemp'] = df['atemp'].apply(lambda x: int(x/3))
df['holiday'] = pd.to_numeric(df['holiday'])
df['workingday'] = pd.to_numeric(df['workingday'])
df['tday'] = df['holiday'] + 2*df['workingday']
df['humidity'] = pd.to_numeric(df['humidity'])
Xdf=df.loc[:,['season','weather','humidity','cday','hour','cyear','tday','gatemp']]
Xdf.head()
X_train2 = Xdf[0:nval].values
Y_train2 = Ydf[0:nval].values
X_test2 = Xdf[nval:].values
Y_test2 = Ydf[nval:].values
scores2 = train_tree(X_train2,Y_train2,crit=["mse","mae"],max_depth=15,min_depth=5)
t=top_scores(scores2)
t[0].fit(X_train2,Y_train2)
print("Top BikeScore Test: %f"%eval_bikemodel(t[0].predict(X_test2),Y_test2))
def eval_bikemodel_log(y_predict,y_true,**kwargs):
y_predict = np.exp(y_predict)-1
y_true = np.exp(y_true)-1
diff = np.log(y_predict+1.0) - np.log(y_true+1.0)
return np.sqrt(np.sum(np.square(diff))/len(y_predict))
df['day_weekend'] = df['tday'].apply(lambda x: x==0)
df['day_holiday'] = df['tday'].apply(lambda x: x==1)
df['day_work'] = df['tday'].apply(lambda x: x==2)
df['season_spr'] = df['season'].apply(lambda x: x==1)
df['season_sum'] = df['season'].apply(lambda x: x==2)
df['season_fal'] = df['season'].apply(lambda x: x==3)
df['season_win'] = df['season'].apply(lambda x: x==4)
df['weather_clear'] = df['weather'].apply(lambda x: x==1)
df['weather_mist'] = df['weather'].apply(lambda x: x==2)
df['weather_snow'] = df['weather'].apply(lambda x: x==3)
df['weather_rain'] = df['weather'].apply(lambda x: x==4)
Xdf=pd.get_dummies(df[['humidity','windspeed','hour','atemp','tday','season','weather','cmonth','cyear']],
columns=['hour','season','weather','tday','cmonth','cyear'])
Ydf=np.log(df.loc[:,'count']+1)
X_train = Xdf[0:ntrain].values
X_val = Xdf[ntrain:nval].values
X_test = Xdf[nval:].values
Y_train = Ydf[0:ntrain].values
Y_val = Ydf[ntrain:nval].values
Y_test = Ydf[nval:].values
from sklearn.preprocessing import StandardScaler
scalerX = StandardScaler()
X_train = scalerX.fit_transform(X_train)
X_val = scalerX.fit_transform(X_val)
X_test = scalerX.transform(X_test)
from sklearn.svm import SVR
model = SVR()
model.fit(X_train,Y_train)
Y_pred_train = model.predict(X_train)
Y_pred_val = model.predict(X_val)
Y_pred_test = model.predict(X_test)
print("Top BikeScore Test: %f"%eval_bikemodel_log(Y_pred_test,Y_test))
def train_svm(X,Y,kernel=['linear','poly','rbf','sigmoid','precomputed'],max_c=20,min_c=1):
scores = []
c_range = range(min_c,max_c+1)
e_range = [0.01,0.05,0.1,0.5,1,10]
count,current = len(kernel)*len(c_range)*len(e_range),1
for c in c_range:
for k in kernel:
for e in e_range:
model = SVR(C=c,epsilon=e,kernel=k)
score_bike = cross_val_score(model, X,Y, scoring=make_scorer(eval_bikemodel_log)).mean()
scores.append([model, 0, score_bike, c, k, e])
print ("%d/%d"%(current,count), end="\r")
current+=1
return scores
def top_scores_svm(scores):
sorted_scores = sorted(scores, key=(lambda x: -x[2]), reverse=False)
print("Top 10 models:")
for s in sorted_scores[-10:]:
print(" BikeScore:%f \t C: %d \t kernel: %s \t e: %f"%(s[2],s[3],s[4],s[5]))
return sorted_scores[-1]
scores_svm = train_svm(np.concatenate([X_train,X_val]),np.concatenate([Y_train,Y_val]),kernel=['rbf'],max_c=5)
t=top_scores_svm(scores_svm)
model = SVR(C=t[3],kernel=t[4],epsilon=t[5])
model.fit(np.concatenate([X_train,X_val]),np.concatenate([Y_train,Y_val]))
Y_pred_test_svr = model.predict(X_test)
print("Top BikeScore Test: %f"%eval_bikemodel_log(Y_pred_test_svr,Y_test))
# Ydf=df.ix[:,'count'] # total demand
Ydf_r=np.log(df['registered']+1) # registered demand
Yr_train = Ydf_r[0:ntrain].values
Yr_val = Ydf_r[ntrain:nval].values
Yr_test = Ydf_r[nval:].values
Ydf_c=np.log(df['casual']+1) # casual demand
Yc_train = Ydf_c[0:ntrain].values
Yc_val = Ydf_c[ntrain:nval].values
Yc_test = Ydf_c[nval:].values
model_r = SVR(C=t[3],kernel=t[4],epsilon=t[5])
model_c = SVR(C=t[3],kernel=t[4],epsilon=t[5])
model_r.fit(np.concatenate([X_train,X_val]),np.concatenate([Yr_train,Yr_val]))
model_c.fit(np.concatenate([X_train,X_val]),np.concatenate([Yc_train,Yc_val]))
Yr_pred_test = model_r.predict(X_test)
Yc_pred_test = model_c.predict(X_test)
print("Top BikeScore Test registered: %f"%eval_bikemodel_log(Yr_pred_test,Yr_test))
print("Top BikeScore Test casual : %f"%eval_bikemodel_log(Yc_pred_test,Yc_test))
print("Top BikeScore Test count : %f"%eval_bikemodel_log(Yr_pred_test+Yc_pred_test,Y_test))
from sklearn.ensemble import RandomForestRegressor
def train_random_forest(Xtrain,Ytrain,Xeval,Yval, estimators, depth):
model = RandomForestRegressor(n_estimators=estimators,max_depth=depth, random_state=0)
model.fit(Xtrain, Ytrain)
Yeval_pred = model.predict(Xeval)
return (model,eval_bikemodel_log(Yeval_pred,Yval),estimators,depth)
def top_scores_rfr(scores):
sorted_scores = sorted(scores, key=(lambda x: -x[1]), reverse=False)
print("Top 10 models:")
for s in sorted_scores[-10:]:
print(" BikeScore:%f \t estimators: %d \t depth: %d"%(s[1],s[2],s[3]))
return sorted_scores[-1]
scores = []
count,current=20*20,1
for e in range(1,20):
for d in range(1,20):
scores.append(train_random_forest(X_train,Y_train,X_val,Y_val, e, d))
print ("%d/%d"%(current,count), end="\r")
current+=1
t=top_scores_rfr(scores)
t[0].fit(X_train,Y_train)
Ytest_pred = t[0].predict(X_test)
print("Top BikeScore Test: %f"%eval_bikemodel_log(Ytest_pred,Y_test))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: b) Train a regression tree to solve the problem using default parameters. To this end, build a matrix $X_{train}$ of shape $n_{train} \times d_1$ containing the training data in its rows, selecting the columns you want (or are able) to use for training. Also implement the evaluation function defined earlier for this problem. Evaluate the regression tree fitted to the training data on both the training and test sets. Build a plot comparing the predictions against the real values. At this point you should have a model with a score around 0.59, which would place you at roughly position 2140 in the competition.
Step2: c) Improve the regression tree defined in the previous point by modifying the model's hyper-parameters. For example, since these models tend to overfit, you could try limiting the depth of the tree (why should this help?). Naturally, it is absolutely forbidden to make this kind of decision based on the test results. You must make these choices by evaluating on the validation set. If you prefer not to use it and instead implement cross-validation or another automatic technique, you gain the advantage of being able to use the validation set as part of the training data. With these modifications you should be able to improve your ranking by about 300 positions.
Step3: Cross-validation was used so as not to rely on a single evaluation, which could turn out "good" purely by luck.
Step4: e) Train a non-linear SVM to solve the problem, measuring the effect of the different representations discovered up to this point. One important detail is that before training the SVM it is advisable to apply two additional kinds of data pre-processing
Step5: f) Improve the SVM defined in the previous point by modifying the machine's hyper-parameters ($C$, $\epsilon$ or the kernel function itself). Naturally, it is absolutely forbidden to make this kind of design decision by looking at the test results. You must make these choices by evaluating on the validation set. If you prefer not to use it and instead implement cross-validation or another automatic technique, you gain the advantage of being able to use the validation set as part of the training data.
Step6: g) Evaluate the effect of using the validation dataset for training and selecting the structural parameters of the classification tree and the SVM via cross-validation. Example code for this was provided in assignments 1 and 2, but it is attached again below
Step7: i) Evaluate the effect of using a generic algorithm for ensembling learning machines to predict the total bike demand. You may experiment with a single technique (e.g. Random Forest); discuss how results evolve as the number of machines increases.
|
14,810 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
train = pd.read_csv('titanic_train.csv')
train.head()
sns.heatmap(train.isnull(),yticklabels=False,cbar=False,cmap='viridis')
sns.set_style('whitegrid')
sns.countplot(x='Survived',data=train,palette='RdBu_r')
sns.set_style('whitegrid')
sns.countplot(x='Survived',hue='Sex',data=train,palette='RdBu_r')
sns.set_style('whitegrid')
sns.countplot(x='Survived',hue='Pclass',data=train,palette='rainbow')
sns.distplot(train['Age'].dropna(),kde=False,color='darkred',bins=30)
train['Age'].hist(bins=30,color='darkred',alpha=0.7)
sns.countplot(x='SibSp',data=train)
train['Fare'].hist(color='green',bins=40,figsize=(8,4))
plt.figure(figsize=(12, 7))
sns.boxplot(x='Pclass',y='Age',data=train,palette='winter')
def impute_age(cols):
Age = cols[0]
Pclass = cols[1]
if pd.isnull(Age):
if Pclass == 1:
return 37
elif Pclass == 2:
return 29
else:
return 24
else:
return Age
train['Age'] = train[['Age','Pclass']].apply(impute_age,axis=1)
sns.heatmap(train.isnull(),yticklabels=False,cbar=False,cmap='viridis')
train.drop('Cabin',axis=1,inplace=True)
train.head()
train.dropna(inplace=True)
train.info()
sex = pd.get_dummies(train['Sex'],drop_first=True)
embark = pd.get_dummies(train['Embarked'],drop_first=True)
train.drop(['Sex','Embarked','Name','Ticket'],axis=1,inplace=True)
train = pd.concat([train,sex,embark],axis=1)
train.head()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(train.drop('Survived',axis=1),
train['Survived'], test_size=0.30,
random_state=101)
from sklearn.linear_model import LogisticRegression
logmodel = LogisticRegression()
logmodel.fit(X_train,y_train)
predictions = logmodel.predict(X_test)
from sklearn.metrics import classification_report
print(classification_report(y_test,predictions))
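# Optional extra check (sketch): a confusion matrix complements the
# precision/recall summary printed above.
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test, predictions))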
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Data
Step2: Exploratory Data Analysis
Step3: Roughly 20 percent of the Age data is missing. The proportion of Age missing is likely small enough for reasonable replacement with some form of imputation. Looking at the Cabin column, it looks like we are just missing too much of that data to do something useful with at a basic level. We'll probably drop this later, or change it to another feature like "Cabin Known
Step4: Data Cleaning
Step5: We can see the wealthier passengers in the higher classes tend to be older, which makes sense. We'll use these average age values to impute based on Pclass for Age.
Step6: Now apply that function!
Step7: Now let's check that heat map again!
Step8: Great! Let's go ahead and drop the Cabin column and the row in Embarked that is NaN.
Step9: Converting Categorical Features
Step10: Great! Our data is ready for our model!
Step11: Training and Predicting
Step12: Let's move on to evaluate our model!
|
14,811 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install --quiet neural-structured-learning
import matplotlib.pyplot as plt
import neural_structured_learning as nsl
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
class HParams(object):
def __init__(self):
self.input_shape = [28, 28, 1]
self.num_classes = 10
self.conv_filters = [32, 64, 64]
self.kernel_size = (3, 3)
self.pool_size = (2, 2)
self.num_fc_units = [64]
self.batch_size = 32
self.epochs = 5
self.adv_multiplier = 0.2
self.adv_step_size = 0.2
self.adv_grad_norm = 'infinity'
HPARAMS = HParams()
datasets = tfds.load('mnist')
train_dataset = datasets['train']
test_dataset = datasets['test']
IMAGE_INPUT_NAME = 'image'
LABEL_INPUT_NAME = 'label'
def normalize(features):
features[IMAGE_INPUT_NAME] = tf.cast(
features[IMAGE_INPUT_NAME], dtype=tf.float32) / 255.0
return features
def convert_to_tuples(features):
return features[IMAGE_INPUT_NAME], features[LABEL_INPUT_NAME]
def convert_to_dictionaries(image, label):
return {IMAGE_INPUT_NAME: image, LABEL_INPUT_NAME: label}
train_dataset = train_dataset.map(normalize).shuffle(10000).batch(HPARAMS.batch_size).map(convert_to_tuples)
test_dataset = test_dataset.map(normalize).batch(HPARAMS.batch_size).map(convert_to_tuples)
def build_base_model(hparams):
  """Builds a model according to the architecture defined in `hparams`."""
inputs = tf.keras.Input(
shape=hparams.input_shape, dtype=tf.float32, name=IMAGE_INPUT_NAME)
x = inputs
for i, num_filters in enumerate(hparams.conv_filters):
x = tf.keras.layers.Conv2D(
num_filters, hparams.kernel_size, activation='relu')(
x)
if i < len(hparams.conv_filters) - 1:
# max pooling between convolutional layers
x = tf.keras.layers.MaxPooling2D(hparams.pool_size)(x)
x = tf.keras.layers.Flatten()(x)
for num_units in hparams.num_fc_units:
x = tf.keras.layers.Dense(num_units, activation='relu')(x)
pred = tf.keras.layers.Dense(hparams.num_classes)(x)
model = tf.keras.Model(inputs=inputs, outputs=pred)
return model
base_model = build_base_model(HPARAMS)
base_model.summary()
base_model.compile(
optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['acc'])
base_model.fit(train_dataset, epochs=HPARAMS.epochs)
results = base_model.evaluate(test_dataset)
named_results = dict(zip(base_model.metrics_names, results))
print('\naccuracy:', named_results['acc'])
adv_config = nsl.configs.make_adv_reg_config(
multiplier=HPARAMS.adv_multiplier,
adv_step_size=HPARAMS.adv_step_size,
adv_grad_norm=HPARAMS.adv_grad_norm
)
base_adv_model = build_base_model(HPARAMS)
adv_model = nsl.keras.AdversarialRegularization(
base_adv_model,
label_keys=[LABEL_INPUT_NAME],
adv_config=adv_config
)
train_set_for_adv_model = train_dataset.map(convert_to_dictionaries)
test_set_for_adv_model = test_dataset.map(convert_to_dictionaries)
adv_model.compile(
optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['acc'])
adv_model.fit(train_set_for_adv_model, epochs=HPARAMS.epochs)
results = adv_model.evaluate(test_set_for_adv_model)
named_results = dict(zip(adv_model.metrics_names, results))
print('\naccuracy:', named_results['sparse_categorical_accuracy'])
reference_model = nsl.keras.AdversarialRegularization(
base_model, label_keys=[LABEL_INPUT_NAME], adv_config=adv_config)
reference_model.compile(
optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['acc'])
models_to_eval = {
'base': base_model,
'adv-regularized': adv_model.base_model
}
metrics = {
name: tf.keras.metrics.SparseCategoricalAccuracy()
for name in models_to_eval.keys()
}
perturbed_images, labels, predictions = [], [], []
for batch in test_set_for_adv_model:
perturbed_batch = reference_model.perturb_on_batch(batch)
# Clipping makes perturbed examples have the same range as regular ones.
perturbed_batch[IMAGE_INPUT_NAME] = tf.clip_by_value(
perturbed_batch[IMAGE_INPUT_NAME], 0.0, 1.0)
y_true = perturbed_batch.pop(LABEL_INPUT_NAME)
perturbed_images.append(perturbed_batch[IMAGE_INPUT_NAME].numpy())
labels.append(y_true.numpy())
predictions.append({})
for name, model in models_to_eval.items():
y_pred = model(perturbed_batch)
metrics[name](y_true, y_pred)
predictions[-1][name] = tf.argmax(y_pred, axis=-1).numpy()
for name, metric in metrics.items():
print('%s model accuracy: %f' % (name, metric.result().numpy()))
batch_index = 0
batch_image = perturbed_images[batch_index]
batch_label = labels[batch_index]
batch_pred = predictions[batch_index]
batch_size = HPARAMS.batch_size
n_col = 4
n_row = (batch_size + n_col - 1) // n_col
print('accuracy in batch %d:' % batch_index)
for name, pred in batch_pred.items():
print('%s model: %d / %d' % (name, np.sum(batch_label == pred), batch_size))
plt.figure(figsize=(15, 15))
for i, (image, y) in enumerate(zip(batch_image, batch_label)):
y_base = batch_pred['base'][i]
y_adv = batch_pred['adv-regularized'][i]
plt.subplot(n_row, n_col, i+1)
plt.title('true: %d, base: %d, adv: %d' % (y, y_base, y_adv))
plt.imshow(tf.keras.utils.array_to_img(image), cmap='gray')
plt.axis('off')
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Adversarial regularization for image classification
Step2: Import libraries. We abbreviate neural_structured_learning to nsl.
Step3: Hyperparameters
Step4: MNIST dataset
Step5: To make the model numerically stable, we normalize the pixel values to [0, 1]
Step7: Base model
Step8: Next we train and evaluate the base model.
Step9: We can see that the base model achieves 99% accuracy on the test set. We will
Step10: Now we can wrap a base model with AdversarialRegularization. Here we create a
Step11: Next we compile, train, and evaluate the
Step12: We can see that the adversarial-regularized model also performs very well (99%
Step13: We collect in a dictionary the models to be evaluted, and also create a metric
Step14: Here is the loop to generate perturbed examples and to evaluate models with
Step15: We can see that the accuracy of the base model drops dramatically (from 99% to
|
14,812 | <ASSISTANT_TASK:>
Python Code:
PROJECT_ID = "YOUR PROJECT ID"
BUCKET_NAME = "gs://YOUR BUCKET NAME"
REGION = "YOUR REGION"
SERVICE_ACCOUNT = "YOUR SERVICE ACCOUNT"
content_name = "pt-img-cls-gpu-cust-cont-torchserve"
gcs_output_uri_prefix = f"{BUCKET_NAME}/{content_name}"
! gsutil ls $gcs_output_uri_prefix
! curl -O https://raw.githubusercontent.com/alvarobartt/pytorch-model-serving/master/images/sample.jpg
! ls sample.jpg
%run convert_b64.py
! ls sample_b64.json
! gsutil cp -r $gcs_output_uri_prefix/model ./model_server/
! ls ./model_server/model/
! cd model_server && torch-model-archiver \
--model-name antandbee \
--version 1.0 \
--serialized-file ./model/antandbee.pth \
--model-file ./model.py \
--handler ./handler.py \
--extra-files ./index_to_name.json \
-f
! ls model_server/antandbee.mar
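# The .mar archive bundles the serialized weights (--serialized-file), the model
# definition (--model-file), the request handler, and the extra files listed above,
# so TorchServe can load the model from this single file.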
hostname = "gcr.io"
tag = "latest"
model_name = "antandbee"
image_name_serve = content_name + "-" + model_name
custom_container_image_uri_serve = f"{hostname}/{PROJECT_ID}/{image_name_serve}:{tag}"
! cd model_server && docker build -t $custom_container_image_uri_serve -f Dockerfile .
! rm -rf ./model_server/model/
! docker run \
--rm -it \
-d \
--name ts_antandbee \
-p 8080:8080 \
-p 8081:8081 \
$custom_container_image_uri_serve
! curl http://localhost:8080/ping
! curl http://127.0.0.1:8081/models/antandbee
! curl -X POST \
-H "Content-Type: application/json; charset=utf-8" \
-d @sample_b64.json \
localhost:8080/predictions/antandbee
! docker stop ts_antandbee
! docker push $custom_container_image_uri_serve
! gcloud container images list --repository $hostname/$PROJECT_ID
! pip install -r requirements.txt
from google.cloud import aiplatform
aiplatform.init(
project=PROJECT_ID,
staging_bucket=BUCKET_NAME,
location=REGION,
)
model_display_name = image_name_serve
model = aiplatform.Model.upload(
display_name=model_display_name,
serving_container_image_uri=custom_container_image_uri_serve,
serving_container_ports=[8080],
serving_container_predict_route=f"/predictions/{model_name}",
serving_container_health_route="/ping",
)
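# The predict route points the Vertex endpoint at TorchServe's /predictions/<model-name>
# inference API and the health route at its /ping endpoint -- the same endpoints
# exercised with curl against the local container above.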
endpoint = model.deploy(
machine_type="n1-standard-4",
)
endpoint.resource_name
import base64
def convert_b64(input_file_name):
"""Open image and convert it to Base64"""
with open(input_file_name, "rb") as input_file:
jpeg_bytes = base64.b64encode(input_file.read()).decode("utf-8")
return jpeg_bytes
image_file_name = "./sample.jpg"
instance = {"data": {"b64": convert_b64(image_file_name)}}
prediction = endpoint.predict(instances=[instance])
print(prediction)
! gsutil rm -rf $gcs_output_uri_prefix
! rm sample.jpg
! rm sample_b64.json
! rm model_server/antandbee.mar
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Training Artifact
Step2: Vertex Prediction using Custom TorchServe Container
Step3: Model Archive for TorchServe
Step4: Option
Step5: Initialize Vertex SDK
Step6: Create a Vertex Model with Custom TorchServe Container
Step8: Create a Vertex Endpoint for Online Prediction
Step9: Clean Up
|
14,813 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import itertools
import urllib2
import scipy.stats as stats
%matplotlib inline
np.set_printoptions(precision=3, threshold=1000000, suppress=True)
np.random.seed(1)
alpha = .025
url = ('https://raw.githubusercontent.com/Upward-Spiral-Science'
'/data/master/syn-density/output.csv')
data = urllib2.urlopen(url)
csv = np.genfromtxt(data, delimiter=",")[1:] # don't want first row (labels)
num_samples = 150 # how many different sized samples to draw
N = np.sum(csv[:, -1]) # total data set size
L = np.unique(csv[:, 2]) # list of labels
print L.shape
# sample sizes to iterate over
sample_sizes = np.logspace(1.0, 5.0, num=num_samples, base=10.0)
print sample_sizes
# simulated sampling under the null
repeats = 100 # how many repitions per sample size
pi_null = np.array([1.0/float(len(L))]*len(L)) # pi vector under the null (uniform probs)
power_null = []
for s in sample_sizes:
power = 0
E_i = pi_null*s # expected per label
for r in xrange(repeats):
null_data = np.random.multinomial(s, pi_null)
chi_sq = stats.chisquare(null_data, E_i)
p_value = chi_sq[1]
# can we reject the null hypothesis
if p_value < alpha:
power = power + 1
power_null.append(float(power)/float(repeats))
# simulated sampling under alternate
repeats = 100 # how many repitions per sample size
power_alt = []
pi_alt = np.random.rand(len(L)) # create a pi vector (random probabilities)
pi_alt = pi_alt/np.sum(pi_alt) # normalize
for s in sample_sizes:
power = 0
E_i = pi_null*s # all labels have equal expectancy
for r in xrange(repeats):
alt_data = np.random.multinomial(s, pi_alt) # use pi vector to gen data
chi_sq = stats.chisquare(alt_data, E_i)
p_value = chi_sq[1]
# can we reject the null hypothesis
if p_value < alpha:
power = power + 1
power_alt.append(float(power)/float(repeats))
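# Each point is the fraction of simulated datasets where the chi-square test rejects at
# level alpha. Under the null this rejection rate should hover near alpha (the type I
# error rate); under the alternative it is the power, which climbs toward 1 as the
# sample size grows.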
plt.scatter(sample_sizes, power_null, hold=True, label='null', s=4)
plt.scatter(sample_sizes, power_alt, color='green', hold=True, label='alt', s=4)
plt.xlabel('sample size')
plt.xscale('log')
plt.ylabel('power')
plt.axhline(alpha, color='red', linestyle='--', label='alpha')
plt.legend(loc=5)
plt.show()
from __future__ import division
csv = csv[np.where(csv[:, -2] != 0)]
X = csv[:, -1]
density = csv[:, -1]/csv[:,-2]
# get average density (probability)
avg = np.average(density)
# expected values are everage probability multipled by unmasked per bin
E = csv[:, -2]*avg
print X[:50]
print E[:50]
print stats.chisquare(X, E)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 4a
Step2: Step 4b
Step3: Step 5
Step4: Apply to data
|
14,814 | <ASSISTANT_TASK:>
Python Code:
# setup
from pyrise import products as prod
obsid = prod.OBSERVATION_ID('PSP_003072_0985')
# test orbit number
assert obsid.orbit == '003072'
# test setting orbit property
obsid.orbit = 4080
assert obsid.orbit == '004080'
# test repr
assert obsid.__repr__() == 'PSP_004080_0985'
# test targetcode
assert obsid.targetcode == '0985'
# test setting targetcode property
obsid.targetcode = '0980'
assert obsid.targetcode == '0980'
assert obsid.__repr__() == 'PSP_004080_0980'
# test phase
assert obsid.phase == 'PSP'
# test upper orbit folder
assert obsid.get_upper_orbit_folder() == 'ORB_004000_004099'
# test storage path stem
assert obsid.storage_path_stem == 'PSP/ORB_004000_004099/PSP_004080_0980'
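# Taken together, the tests above pin down the observation ID layout:
# PHASE_ORBIT_TARGETCODE (e.g. PSP_004080_0980); the phase and orbit determine the
# ORB_xxxxxx_xxxxxx upper orbit folder and the storage path stem.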
pid = prod.PRODUCT_ID('PSP_003072_0985')
pid
pid.kind = 'RED'
pid
pid.s
pid.storage_stem
pid.label_fname
pid.label_path
pid.jp2_fname
pid.jp2_path
for item in dir(pid):
if not item.startswith('__'):
print(item,':')
print(getattr(pid, item))
print()
spid = prod.SOURCE_PRODUCT_ID('PSP_003092_0985_RED4_0')
spid
spid.channel = 1
spid
spid.ccd
for i in dir(spid):
if not i.startswith('__'):
print(i,':')
print(getattr(spid, i))
print()
spid.pid.storage_stem
spid.pid.edr_storage_stem
spid.fpath
hiurl = prod.HiRISE_URL(spid.fpath)
hiurl.url
hiurl.path
pid.label_path
pid.obsid
pid
prod.RED_PRODUCT_ID(pid.obsid.s, 4, 1).furl
prod.RED_PRODUCT_ID(pid.obsid.s, 4,1)
from pyrise import downloads
obsid = 'PSP_003092_0985'
downloads.download_RED_product(obsid, 4, 0)
red_pid = prod.RED_PRODUCT_ID(pid.obsid.s, 4,1)
red_pid.fname
pid
name = obsid + '_RED'
channels = [4, 5]
ccds = [0, 1]
for channel in channels:
for ccd in ccds:
print(f'{name}{channel}_0.cub')
sid = prod.RED_PRODUCT_ID(obsid, 4,0)
sid.pid.label_url
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: PRODUCT_ID
Step2: SOURCE_PRODUCT_ID
Step3: http
Step4: HiRISE_URL
Step5: others
|
14,815 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
import tensorflow_datasets as tfds
import os
DIRECTORY_URL = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/'
FILE_NAMES = ['cowper.txt', 'derby.txt', 'butler.txt']
for name in FILE_NAMES:
text_dir = tf.keras.utils.get_file(name, origin=DIRECTORY_URL+name)
parent_dir = os.path.dirname(text_dir)
parent_dir
def labeler(example, index):
return example, tf.cast(index, tf.int64)
labeled_data_sets = []
for i, file_name in enumerate(FILE_NAMES):
lines_dataset = tf.data.TextLineDataset(os.path.join(parent_dir, file_name))
labeled_dataset = lines_dataset.map(lambda ex: labeler(ex, i))
labeled_data_sets.append(labeled_dataset)
BUFFER_SIZE = 50000
BATCH_SIZE = 64
TAKE_SIZE = 5000
all_labeled_data = labeled_data_sets[0]
for labeled_dataset in labeled_data_sets[1:]:
all_labeled_data = all_labeled_data.concatenate(labeled_dataset)
all_labeled_data = all_labeled_data.shuffle(
BUFFER_SIZE, reshuffle_each_iteration=False)
for ex in all_labeled_data.take(5):
print(ex)
tokenizer = tfds.features.text.Tokenizer()
vocabulary_set = set()
for text_tensor, _ in all_labeled_data:
some_tokens = tokenizer.tokenize(text_tensor.numpy())
vocabulary_set.update(some_tokens)
vocab_size = len(vocabulary_set)
vocab_size
encoder = tfds.features.text.TokenTextEncoder(vocabulary_set)
example_text = next(iter(all_labeled_data))[0].numpy()
print(example_text)
encoded_example = encoder.encode(example_text)
print(encoded_example)
def encode(text_tensor, label):
encoded_text = encoder.encode(text_tensor.numpy())
return encoded_text, label
def encode_map_fn(text, label):
# py_func doesn't set the shape of the returned tensors.
encoded_text, label = tf.py_function(encode,
inp=[text, label],
Tout=(tf.int64, tf.int64))
# `tf.data.Datasets` work best if all components have a shape set
# so set the shapes manually:
encoded_text.set_shape([None])
label.set_shape([])
return encoded_text, label
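# The TFDS encoder runs eagerly on Python strings, so it has to be wrapped in
# tf.py_function before Dataset.map (which otherwise traces a graph) can call it.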
all_encoded_data = all_labeled_data.map(encode_map_fn)
train_data = all_encoded_data.skip(TAKE_SIZE).shuffle(BUFFER_SIZE)
train_data = train_data.padded_batch(BATCH_SIZE)
test_data = all_encoded_data.take(TAKE_SIZE)
test_data = test_data.padded_batch(BATCH_SIZE)
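# padded_batch pads every example in a batch with zeros up to the longest sequence in
# that batch; that padding token is why vocab_size is incremented by one a few cells below.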
sample_text, sample_labels = next(iter(test_data))
sample_text[0], sample_labels[0]
vocab_size += 1
model = tf.keras.Sequential()
model.add(tf.keras.layers.Embedding(vocab_size, 64))
model.add(tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)))
# one or more densely connected layers
# edit the list in the `for` line to experiment with the layer sizes
for units in [64, 64]:
model.add(tf.keras.layers.Dense(units, activation='relu'))
# output layer; the first argument is the number of labels
model.add(tf.keras.layers.Dense(3, activation='softmax'))
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(train_data, epochs=3, validation_data=test_data)
eval_loss, eval_acc = model.evaluate(test_data)
print('\nEval loss: {}, Eval accuracy: {}'.format(eval_loss, eval_acc))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load text data with tf.data
Step2: The three versions of the translation come from
Step3: Load the text into datasets
Step4: Merge these labeled datasets into a single dataset, then shuffle it.
Step5: You can use tf.data.Dataset.take together with print to see what the (example, label) pairs look like. The numpy attribute shows each Tensor's value.
Step6: Encode the text as numbers
Step7: Encode a sample
Step8: You can try running this line of code and inspect the output.
Step9: Now run the encoder over the dataset (by wrapping it in tf.py_function and passing it to the dataset's map method).
Step10: Split the dataset into training and test sets, and batch them
Step11: Now test_data and train_data are collections of batches rather than collections of (example, label) pairs. Each batch is a pair of (many examples, many labels) represented as arrays.
Step12: Because we introduced a new token for encoding (the zero used for padding), the vocabulary size increases by one.
Step13: Build the model
Step14: The first layer converts the integer representations into dense vector embeddings. See the Word Embeddings tutorial for more details.
Step15: The next layer is an LSTM layer, which lets the model interpret words in their context. The bidirectional wrapper on the LSTM helps the model relate each data point to the points before and after it.
Step16: Finally, we have one or more densely connected layers, the last of which is the output layer. The output layer produces the probability that a sample belongs to each label, and the label with the highest probability is the final prediction.
Step17: Finally, compile the model. For a softmax classification model, sparse_categorical_crossentropy is the usual loss function. You can try other optimizers, but adam is the most common.
Step18: Train the model
|
14,816 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
from SimPEG import Mesh
import matplotlib.pyplot as plt
%matplotlib inline
plt.set_cmap(plt.get_cmap('viridis')) # use a nice colormap!
# define a 1D mesh
mesh1D = Mesh.TensorMesh([5]) # with 5 cells
fig, ax = plt.subplots(1,1, figsize=(12,2))
ax.plot(mesh1D.gridN, np.zeros(mesh1D.nN),'-k',marker='|',markeredgewidth=2, markersize=16)
ax.plot(mesh1D.gridCC,np.zeros(mesh1D.nC),'o')
ax.plot(mesh1D.gridFx,np.zeros(mesh1D.nFx),'>')
ax.set_title('1D Mesh')
# and define a vector of fluxes that live on the faces of the 1D mesh
face_vec = np.r_[0., 1., 2., 2., 1., 0.] # vector of fluxes that live on the faces of the mesh
print("The flux on the faces is {}".format(face_vec))
plt.plot(mesh1D.gridFx, face_vec, '-o')
plt.ylim([face_vec.min()-0.5, face_vec.max()+0.5])
plt.grid(which='both')
plt.title('face_vec');
# We can take the divergence over the entire mesh by looping over each cell
div_face_vec = np.zeros(mesh1D.nC) # allocate for each cell
for i in range(mesh1D.nC): # loop over each cell and
div_face_vec[i] = 1.0/mesh1D.vol[i] * (-face_vec[i] + face_vec[i+1])
print("The face div of the 1D flux is {}".format(div_face_vec))
faceDiv = np.zeros([mesh1D.nC, mesh1D.nF]) # allocate space for a face div matrix
for i in range(mesh1D.nC): # loop over each cell
faceDiv[i, [i, i+1]] = 1.0/mesh1D.vol[i] * np.r_[-1,+1]
print("The 1D face div matrix for this mesh is \n{}".format(faceDiv))
assert np.all( faceDiv.dot(face_vec) == div_face_vec ) # make sure we get the same result!
print("\nThe face div of the 1D flux is still {}!".format(div_face_vec))
"There are {nnz} zeros (too many!) that we are storing".format(nnz = np.sum(faceDiv == 0))
import scipy.sparse as sp
from SimPEG.Utils import sdiag # we are often building sparse diagonal matrices, so we made a function in SimPEG!
# construct differencing matrix with diagonals -1, +1
sparse_diff = sp.spdiags((np.ones((mesh1D.nC+1, 1))*[-1, 1]).T, [0, 1], mesh1D.nC, mesh1D.nC+1, format="csr")
print("the sparse differencing matrix is \n{}".format(sparse_diff.todense()))
# account for the volume
faceDiv_sparse = sdiag(1./mesh1D.vol) * sparse_diff # account for volume
print("\n and the face divergence is \n{}".format(faceDiv_sparse.todense()))
print("\n but now we are only storing {nnz} nonzeros".format(nnz=faceDiv_sparse.nnz))
assert np.all(faceDiv_sparse.dot(face_vec) == div_face_vec)
print("\n and we get the same answer! {}".format(faceDiv_sparse * face_vec))
print(mesh1D.faceDiv * face_vec) # and still gives us the same answer!
mesh2D = Mesh.TensorMesh([100,80])
mesh2D.plotGrid()
plt.axis('tight');
jx_fct = lambda x, y: -np.sin(2.*np.pi*x)
jy_fct = lambda x, y: -np.sin(2.*np.pi*y)
jx_vec = jx_fct(mesh2D.gridFx[:,0], mesh2D.gridFx[:,1])
jy_vec = jy_fct(mesh2D.gridFy[:,0], mesh2D.gridFy[:,1])
j_vec = np.r_[jx_vec, jy_vec]
print("There are {nFx} x-faces and {nFy} y-faces, so the length of the "
"face function, j, is {lenj}".format(
nFx=mesh2D.nFx,
nFy=mesh2D.nFy,
lenj=len(j_vec)
))
plt.colorbar(mesh2D.plotImage(j_vec, 'F', view='vec')[0])
small_mesh2D = Mesh.TensorMesh([3,4])
print("Each y-face is {} entries apart".format(small_mesh2D.nCx))
print("and the total number of x-faces is {}".format(small_mesh2D.nFx))
print("So in the first row of the faceDiv, we have non-zero entries at \n{}".format(
small_mesh2D.faceDiv[0,:]))
fig, ax = plt.subplots(1,2, figsize=(12,4))
# plot the non-zero entries in the faceDiv
ax[0].spy(small_mesh2D.faceDiv, ms=2)
ax[0].set_xlabel('2D faceDiv')
small_mesh2D.plotGrid(ax=ax[1])
# Number the faces and plot. (We should really add this to SimPEG... pull request anyone!?)
xys = zip(
small_mesh2D.gridFx[:,0],
small_mesh2D.gridFx[:,1],
range(small_mesh2D.nFx)
)
for x,y,ii in xys:
ax[1].plot(x, y, 'r>')
ax[1].text(x+0.01, y-0.02, ii, color='r')
xys = zip(
small_mesh2D.gridFy[:,0],
small_mesh2D.gridFy[:,1],
range(small_mesh2D.nFy)
)
for x,y,ii in xys:
ax[1].plot(x, y, 'g^')
ax[1].text(x-0.02, y+0.02, ii+small_mesh2D.nFx, color='g')
ax[1].set_xlim((-0.1,1.1));
ax[1].set_ylim((-0.1,1.1));
# small_mesh2D.faceDiv?? # check out the code!
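# A rough sketch (an assumption about the internals, not SimPEG's actual source) of the
# Kronecker-product structure behind the 2D face divergence: x-faces are differenced
# along x within each row of cells, y-faces along y within each column, and the two
# blocks sit side by side.
ddx = lambda n: sp.spdiags((np.ones((n+1, 1))*[-1, 1]).T, [0, 1], n, n+1, format="csr")
nx, ny = small_mesh2D.nCx, small_mesh2D.nCy
Dx = sp.kron(sp.identity(ny), ddx(nx))   # acts on the x-faces, which are numbered first
Dy = sp.kron(ddx(ny), sp.identity(nx))   # acts on the y-faces
D_structure = sp.hstack([Dx, Dy])        # same sparsity pattern as small_mesh2D.faceDiv
# (the real operator also scales by face areas and 1/cell-volume)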
Div_j = mesh2D.faceDiv * j_vec
fig, ax = plt.subplots(1,2, figsize=(8,4))
plt.colorbar(mesh2D.plotImage(j_vec, 'F', view='vec', ax=ax[0])[0],ax=ax[0])
plt.colorbar(mesh2D.plotImage(Div_j, ax=ax[1])[0],ax=ax[1])
ax[0].set_title('j')
ax[1].set_title('Div j')
plt.tight_layout()
# from earlier
# jx_fct = lambda x, y: -np.sin(2*np.pi*x)
# jy_fct = lambda x, y: -np.sin(2*np.pi*y)
sol = lambda x, y: -2*np.pi*(np.cos(2*np.pi*x)+np.cos(2*np.pi*y))
cont_div_j = sol(mesh2D.gridCC[:,0], mesh2D.gridCC[:,1])
Div_j = mesh2D.faceDiv * j_vec
fig, ax = plt.subplots(1,2, figsize=(8,4))
plt.colorbar(mesh2D.plotImage(Div_j, ax=ax[0])[0],ax=ax[0])
plt.colorbar(mesh2D.plotImage(cont_div_j, ax=ax[1])[0],ax=ax[1])
ax[0].set_title('Discrete Div j')
ax[1].set_title('Continuous Div j')
plt.tight_layout()
import unittest
from SimPEG.Tests import OrderTest
jx = lambda x, y: -np.sin(2*np.pi*x)
jy = lambda x, y: -np.sin(2*np.pi*y)
sol = lambda x, y: -2*np.pi*(np.cos(2*np.pi*x)+np.cos(2*np.pi*y))
class Testify(OrderTest):
meshDimension = 2
def getError(self):
j = np.r_[jx(self.M.gridFx[:,0], self.M.gridFx[:,1]),
jy(self.M.gridFy[:,0], self.M.gridFy[:,1])]
num = self.M.faceDiv * j # numeric answer
ans = sol(self.M.gridCC[:,0], self.M.gridCC[:,1]) # note M is a 2D mesh
return np.linalg.norm((num - ans), np.inf) # look at the infinity norm
# (as we refine the mesh, the number of cells
# changes, so need to be careful if using a 2-norm)
def test_order(self):
self.orderTest()
# This just runs the unittest:
suite = unittest.TestLoader().loadTestsFromTestCase( Testify )
unittest.TextTestRunner().run( suite );
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Over a single cell, the divergence is
Step2: Doing it as a for loop is easy to program for the first time,
Step3: the above is still a loop... (and python is not a fan of loops).
Step4: Working in Sparse
Step5: In SimPEG, this is stored as the faceDiv property on the mesh
Step6: Moving to 2D
Step7: We define 2 face functions, one in the x-direction and one in the y-direction. Here, we choose to work with sine functions as the continuous divergence is easy to compute, meaning we can test it!
Step8: But first... what does the matrix look like?
Step9: Now, let's look at the matrix structure
Step10: How did we construct the matrix? - Kronecker products.
Step11: Now that we have a discrete divergence, lets check out the divergence of the face function we defined earlier.
Step12: Are we right??
Step13: Those look similar
|
14,817 | <ASSISTANT_TASK:>
Python Code:
from beakerx import *
import pandas as pd
import numpy as np
tableRows = pd.read_csv('../resources/data/interest-rates.csv')
Plot(title="Title",
xLabel="Horizontal",
yLabel="Vertical",
initWidth=500,
initHeight=200)
x = [1, 4, 6, 8, 10]
y = [3, 6, 4, 5, 9]
pp = Plot(title='Bars, Lines, Points and 2nd yAxis',
xLabel="xLabel",
yLabel="yLabel",
legendLayout=LegendLayout.HORIZONTAL,
legendPosition=LegendPosition.RIGHT,
omitCheckboxes=True)
pp.add(YAxis(label="Right yAxis"))
pp.add(Bars(displayName="Bar",
x=[1,3,5,7,10],
y=[100, 120,90,100,80],
width=1))
pp.add(Line(displayName="Line",
x=x,
y=y,
width=6,
yAxis="Right yAxis"))
pp.add(Points(x=x,
y=y,
size=10,
shape=ShapeType.DIAMOND,
yAxis="Right yAxis"))
plot = Plot(title= "Setting line properties")
ys = [0, 1, 6, 5, 2, 8]
ys2 = [0, 2, 7, 6, 3, 8]
plot.add(Line(y= ys, width= 10, color= Color.red))
plot.add(Line(y= ys, width= 3, color= Color.yellow))
plot.add(Line(y= ys, width= 4, color= Color(33, 87, 141), style= StrokeType.DASH, interpolation= 0))
plot.add(Line(y= ys2, width= 2, color= Color(212, 57, 59), style= StrokeType.DOT))
plot.add(Line(y= [5, 0], x= [0, 5], style= StrokeType.LONGDASH))
plot.add(Line(y= [4, 0], x= [0, 5], style= StrokeType.DASHDOT))
plot = Plot(title= "Changing Point Size, Color, Shape")
y1 = [6, 7, 12, 11, 8, 14]
y2 = [4, 5, 10, 9, 6, 12]
y3 = [2, 3, 8, 7, 4, 10]
y4 = [0, 1, 6, 5, 2, 8]
plot.add(Points(y= y1))
plot.add(Points(y= y2, shape= ShapeType.CIRCLE))
plot.add(Points(y= y3, size= 8.0, shape= ShapeType.DIAMOND))
plot.add(Points(y= y4, size= 12.0, color= Color.orange, outlineColor= Color.red))
plot = Plot(title= "Changing point properties with list")
cs = [Color.black, Color.red, Color.orange, Color.green, Color.blue, Color.pink]
ss = [6.0, 9.0, 12.0, 15.0, 18.0, 21.0]
fs = [False, False, False, True, False, False]
plot.add(Points(y= [5] * 6, size= 12.0, color= cs))
plot.add(Points(y= [4] * 6, size= 12.0, color= Color.gray, outlineColor= cs))
plot.add(Points(y= [3] * 6, size= ss, color= Color.red))
plot.add(Points(y= [2] * 6, size= 12.0, color= Color.black, fill= fs, outlineColor= Color.black))
plot = Plot()
y1 = [1.5, 1, 6, 5, 2, 8]
cs = [Color.black, Color.red, Color.gray, Color.green, Color.blue, Color.pink]
ss = [StrokeType.SOLID, StrokeType.SOLID, StrokeType.DASH, StrokeType.DOT, StrokeType.DASHDOT, StrokeType.LONGDASH]
plot.add(Stems(y= y1, color= cs, style= ss, width= 5))
plot = Plot(title= "Setting the base of Stems")
ys = [3, 5, 2, 3, 7]
y2s = [2.5, -1.0, 3.5, 2.0, 3.0]
plot.add(Stems(y= ys, width= 2, base= y2s))
plot.add(Points(y= ys))
plot = Plot(title= "Bars")
cs = [Color(255, 0, 0, 128)] * 5 # transparent bars
cs[3] = Color.red # set color of a single bar, solid colored bar
plot.add(Bars(x= [1, 2, 3, 4, 5], y= [3, 5, 2, 3, 7], color= cs, outlineColor= Color.black, width= 0.3))
plot = Plot(title= "Pandas line")
plot.add(Line(y= tableRows.y1, width= 2, color= Color(216, 154, 54)))
plot.add(Line(y= tableRows.y10, width= 2, color= Color.lightGray))
plot
plot = Plot(title= "Pandas Series")
plot.add(Line(y= pd.Series([0, 6, 1, 5, 2, 4, 3]), width=2))
plot = Plot(title= "Bars")
cs = [Color(255, 0, 0, 128)] * 7 # transparent bars
cs[3] = Color.red # set color of a single bar, solid colored bar
plot.add(Bars(pd.Series([0, 6, 1, 5, 2, 4, 3]), color= cs, outlineColor= Color.black, width= 0.3))
ch = Crosshair(color=Color.black, width=2, style=StrokeType.DOT)
plot = Plot(crosshair=ch)
y1 = [4, 8, 16, 20, 32]
base = [2, 4, 8, 10, 16]
cs = [Color.black, Color.orange, Color.gray, Color.yellow, Color.pink]
ss = [StrokeType.SOLID,
StrokeType.SOLID,
StrokeType.DASH,
StrokeType.DOT,
StrokeType.DASHDOT,
StrokeType.LONGDASH]
plot.add(Area(y=y1, base=base, color=Color(255, 0, 0, 50)))
plot.add(Stems(y=y1, base=base, color=cs, style=ss, width=5))
plot = Plot()
y = [3, 5, 2, 3]
x0 = [0, 1, 2, 3]
x1 = [3, 4, 5, 8]
plot.add(Area(x= x0, y= y))
plot.add(Area(x= x1, y= y, color= Color(128, 128, 128, 50), interpolation= 0))
p = Plot()
p.add(Line(y= [3, 6, 12, 24], displayName= "Median"))
p.add(Area(y= [4, 8, 16, 32], base= [2, 4, 8, 16],
color= Color(255, 0, 0, 50), displayName= "Q1 to Q3"))
ch = Crosshair(color= Color(255, 128, 5), width= 2, style= StrokeType.DOT)
pp = Plot(crosshair= ch, omitCheckboxes= True,
legendLayout= LegendLayout.HORIZONTAL, legendPosition= LegendPosition.TOP)
x = [1, 4, 6, 8, 10]
y = [3, 6, 4, 5, 9]
pp.add(Line(displayName= "Line", x= x, y= y, width= 3))
pp.add(Bars(displayName= "Bar", x= [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], y= [2, 2, 4, 4, 2, 2, 0, 2, 2, 4], width= 0.5))
pp.add(Points(x= x, y= y, size= 10))
p = Plot ()
p.add(Line(y=[-1, 1]))
p.add(ConstantLine(x=0.65, style=StrokeType.DOT, color=Color.blue))
p.add(ConstantLine(y=0.1, style=StrokeType.DASHDOT, color=Color.blue))
p.add(ConstantLine(x=0.3, y=0.4, color=Color.gray, width=5, showLabel=True))
Plot().add(Line(y=[-3, 1, 3, 4, 5])).add(ConstantBand(x=[1, 2], y=[1, 3]))
p = Plot()
p.add(Line(x= [-3, 1, 2, 4, 5], y= [4, 2, 6, 1, 5]))
p.add(ConstantBand(x= ['-Infinity', 1], color= Color(128, 128, 128, 50)))
p.add(ConstantBand(x= [1, 2]))
p.add(ConstantBand(x= [4, 'Infinity']))
from decimal import Decimal
pos_inf = Decimal('Infinity')
neg_inf = Decimal('-Infinity')
print (pos_inf)
print (neg_inf)
from beakerx.plot import Text as BeakerxText
plot = Plot()
xs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
ys = [8.6, 6.1, 7.4, 2.5, 0.4, 0.0, 0.5, 1.7, 8.4, 1]
def label(i):
if ys[i] > ys[i+1] and ys[i] > ys[i-1]:
return "max"
if ys[i] < ys[i+1] and ys[i] < ys[i-1]:
return "min"
if ys[i] > ys[i-1]:
return "rising"
if ys[i] < ys[i-1]:
return "falling"
return ""
for i in xs:
i = i - 1
if i > 0 and i < len(xs)-1:
plot.add(BeakerxText(x= xs[i], y= ys[i], text= label(i), pointerAngle= -i/3.0))
plot.add(Line(x= xs, y= ys))
plot.add(Points(x= xs, y= ys))
plot = Plot(title= "Setting 2nd Axis bounds")
ys = [0, 2, 4, 6, 15, 10]
ys2 = [-40, 50, 6, 4, 2, 0]
ys3 = [3, 6, 3, 6, 70, 6]
plot.add(YAxis(label="Spread"))
plot.add(Line(y= ys))
plot.add(Line(y= ys2, yAxis="Spread"))
plot.setXBound([-2, 10])
#plot.setYBound(1, 5)
plot.getYAxes()[0].setBound(1,5)
plot.getYAxes()[1].setBound(3,6)
plot
plot = Plot(title= "Setting 2nd Axis bounds")
ys = [0, 2, 4, 6, 15, 10]
ys2 = [-40, 50, 6, 4, 2, 0]
ys3 = [3, 6, 3, 6, 70, 6]
plot.add(YAxis(label="Spread"))
plot.add(Line(y= ys))
plot.add(Line(y= ys2, yAxis="Spread"))
plot.setXBound([-2, 10])
plot.setYBound(1, 5)
plot
import time
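# current_milli_time is not defined in this row; a minimal helper (an assumption) so the cell runs:
current_milli_time = lambda: int(round(time.time() * 1000))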
millis = current_milli_time()
hour = round(1000 * 60 * 60)
xs = []
ys = []
for i in range(11):
xs.append(millis + hour * i)
ys.append(i)
plot = TimePlot(timeZone="America/New_York")
# list of milliseconds
plot.add(Points(x=xs, y=ys, size=10, displayName="milliseconds"))
plot = TimePlot()
plot.add(Line(x=tableRows['time'], y=tableRows['m3']))
y = pd.Series([7.5, 7.9, 7, 8.7, 8, 8.5])
dates = [np.datetime64('2015-02-01'),
np.datetime64('2015-02-02'),
np.datetime64('2015-02-03'),
np.datetime64('2015-02-04'),
np.datetime64('2015-02-05'),
np.datetime64('2015-02-06')]
plot = TimePlot()
plot.add(Line(x=dates, y=y))
y = pd.Series([7.5, 7.9, 7, 8.7, 8, 8.5])
dates = pd.Series(['2015-02-01',
'2015-02-02',
'2015-02-03',
'2015-02-04',
'2015-02-05',
'2015-02-06']
, dtype='datetime64[ns]')
plot = TimePlot()
plot.add(Line(x=dates, y=y))
import datetime
y = pd.Series([7.5, 7.9, 7, 8.7, 8, 8.5])
dates = [datetime.date(2015, 2, 1),
datetime.date(2015, 2, 2),
datetime.date(2015, 2, 3),
datetime.date(2015, 2, 4),
datetime.date(2015, 2, 5),
datetime.date(2015, 2, 6)]
plot = TimePlot()
plot.add(Line(x=dates, y=y))
import datetime
y = pd.Series([7.5, 7.9, 7, 8.7, 8, 8.5])
dates = [datetime.datetime(2015, 2, 1),
datetime.datetime(2015, 2, 2),
datetime.datetime(2015, 2, 3),
datetime.datetime(2015, 2, 4),
datetime.datetime(2015, 2, 5),
datetime.datetime(2015, 2, 6)]
plot = TimePlot()
plot.add(Line(x=dates, y=y))
millis = current_milli_time()
nanos = millis * 1000 * 1000
xs = []
ys = []
for i in range(11):
xs.append(nanos + 7 * i)
ys.append(i)
nanoplot = NanoPlot()
nanoplot.add(Points(x=xs, y=ys))
y1 = [1,5,3,2,3]
y2 = [7,2,4,1,3]
p = Plot(title='Plot with XYStacker', initHeight=200)
a1 = Area(y=y1, displayName='y1')
a2 = Area(y=y2, displayName='y2')
stacker = XYStacker()
p.add(stacker.stack([a1, a2]))
SimpleTimePlot(tableRows, ["y1", "y10"], # column names
timeColumn="time", # time is default value for a timeColumn
yLabel="Price",
displayNames=["1 Year", "10 Year"],
colors = [[216, 154, 54], Color.lightGray],
displayLines=True, # draw lines (true by default)
displayPoints=False) # draw points (false by default)
#time column base on DataFrame index
tableRows.index = tableRows['time']
SimpleTimePlot(tableRows, ['m3'])
rng = pd.date_range('1/1/2011', periods=72, freq='H')
ts = pd.Series(np.random.randn(len(rng)), index=rng)
df = pd.DataFrame(ts, columns=['y'])
SimpleTimePlot(df, ['y'])
p = TimePlot(xLabel= "Time", yLabel= "Interest Rates")
p.add(YAxis(label= "Spread", upperMargin= 4))
p.add(Area(x= tableRows.time, y= tableRows.spread, displayName= "Spread",
yAxis= "Spread", color= Color(180, 50, 50, 128)))
p.add(Line(x= tableRows.time, y= tableRows.m3, displayName= "3 Month"))
p.add(Line(x= tableRows.time, y= tableRows.y10, displayName= "10 Year"))
import math
points = 100
logBase = 10
expys = []
xs = []
for i in range(0, points):
xs.append(i / 15.0)
expys.append(math.exp(xs[i]))
cplot = CombinedPlot(xLabel= "Linear")
logYPlot = Plot(title= "Linear x, Log y", yLabel= "Log", logY= True, yLogBase= logBase)
logYPlot.add(Line(x= xs, y= expys, displayName= "f(x) = exp(x)"))
logYPlot.add(Line(x= xs, y= xs, displayName= "g(x) = x"))
cplot.add(logYPlot, 4)
linearYPlot = Plot(title= "Linear x, Linear y", yLabel= "Linear")
linearYPlot.add(Line(x= xs, y= expys, displayName= "f(x) = exp(x)"))
linearYPlot.add(Line(x= xs, y= xs, displayName= "g(x) = x"))
cplot.add(linearYPlot,4)
cplot
plot = Plot(title= "Log x, Log y", xLabel= "Log", yLabel= "Log",
logX= True, xLogBase= logBase, logY= True, yLogBase= logBase)
plot.add(Line(x= xs, y= expys, displayName= "f(x) = exp(x)"))
plot.add(Line(x= xs, y= xs, displayName= "f(x) = x"))
plot
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plot items
Step2: Lines, Points with Pandas
Step3: Areas, Stems and Crosshair
Step4: Constant Lines, Constant Bands
Step5: TimePlot
Step6: numpy datatime64
Step7: Timestamp
Step8: Datetime and date
Step9: NanoPlot
Step10: Stacking
Step11: SimpleTime Plot
Step12: Second Y Axis
Step13: Combined Plot
|
14,818 | <ASSISTANT_TASK:>
Python Code:
# Run some setup code for this notebook.
import sys
import os
sys.path.append('..')
import graphlab
products = graphlab.SFrame('datasets/')
products['sentiment']
products.head(10)['name']
print '# of positive reviews =', len(products[products['sentiment']==1])
print '# of negative reviews =', len(products[products['sentiment']==-1])
import json
with open('important_words.json', 'r') as f: # Reads the list of most frequent words
important_words = json.load(f)
important_words = [str(s) for s in important_words]
print important_words
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
products['review_clean'] = products['review'].apply(remove_punctuation)
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
products['perfect']
products['contains_perfect'] = products['perfect'].apply(lambda i: 1 if i>=1 else 0)
print(products['contains_perfect'])
print(products['perfect'])
print(products['contains_perfect'].sum())
import numpy as np
from algorithms.sframe_get_numpy_data import get_numpy_data
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array)
# Warning: This may take a few minutes...
feature_matrix, sentiment = get_numpy_data(products, important_words, 'sentiment')
feature_matrix.shape
sentiment
'''
produces probabilistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
score = np.dot(feature_matrix, coefficients)
predictions = 1.0 / (1 + np.exp(-score))
return predictions
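# Logistic link function: P(y_i = +1 | x_i, w) = 1 / (1 + exp(-w.dot(h(x_i)))).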
dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]])
dummy_coefficients = np.array([1., 3., -1.])
correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.) ] )
correct_predictions = np.array( [ 1./(1+np.exp(-correct_scores[0])), 1./(1+np.exp(-correct_scores[1])) ] )
print 'The following outputs must match '
print '------------------------------------------------'
print 'correct_predictions =', correct_predictions
print 'output of predict_probability =', predict_probability(dummy_feature_matrix, dummy_coefficients)
def feature_derivative(errors, feature):
derivative = np.dot(errors, feature)
return derivative
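# Derivative of the log likelihood with respect to w_j:
# sum_i h_j(x_i) * (1[y_i = +1] - P(y_i = +1 | x_i, w)), i.e. the feature column dotted with the errors.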
def compute_log_likelihood(feature_matrix, sentiment, coefficients):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
logexp = np.log(1. + np.exp(-scores))
# Simple check to prevent overflow
mask = np.isinf(logexp)
logexp[mask] = -scores[mask]
lp = np.sum((indicator-1)*scores - logexp)
return lp
dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]])
dummy_coefficients = np.array([1., 3., -1.])
dummy_sentiment = np.array([-1, 1])
correct_indicators = np.array( [ -1==+1, 1==+1 ] )
correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.) ] )
correct_first_term = np.array( [ (correct_indicators[0]-1)*correct_scores[0], (correct_indicators[1]-1)*correct_scores[1] ] )
correct_second_term = np.array( [ np.log(1. + np.exp(-correct_scores[0])), np.log(1. + np.exp(-correct_scores[1])) ] )
correct_ll = sum( [ correct_first_term[0]-correct_second_term[0], correct_first_term[1]-correct_second_term[1] ] )
print 'The following outputs must match '
print '------------------------------------------------'
print 'correct_log_likelihood =', correct_ll
print 'output of compute_log_likelihood =', compute_log_likelihood(dummy_feature_matrix, dummy_sentiment, dummy_coefficients)
from math import sqrt
def logistic_regression(feature_matrix, sentiment, initial_coefficients, step_size, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in xrange(max_iter):
predictions = predict_probability(feature_matrix, coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
# feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j].
derivative = feature_derivative(errors, feature_matrix[:,j])
coefficients[j] = coefficients[j] + step_size*derivative
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood(feature_matrix, sentiment, coefficients)
print 'iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp)
return coefficients
coefficients = logistic_regression(feature_matrix, sentiment, initial_coefficients=np.zeros(194),
step_size=1e-7, max_iter=301)
scores = np.dot(feature_matrix, coefficients)
class_probabilities = 1.0 / (1 + np.exp(-scores) )
print(class_probabilities)
print( 'Number of positive sentiment predicted is {}'.format( (class_probabilities >= 0.5).sum()) )
class_predictions = np.copy(class_probabilities)
class_predictions[class_predictions >= 0.5] = +1
class_predictions[class_predictions < 0.5] = -1  # sentiment labels are +1/-1, not 1/0
print(class_predictions)
print(len(sentiment))
print(len(class_predictions))
num_mistakes = (sentiment != class_predictions).sum() # YOUR CODE HERE
accuracy = 1 - (1.0*num_mistakes / len(sentiment)) # YOUR CODE HERE
print "-----------------------------------------------------"
print '# Reviews correctly classified =', len(products) - num_mistakes
print '# Reviews incorrectly classified =', num_mistakes
print '# Reviews total =', len(products)
print "-----------------------------------------------------"
print 'Accuracy = %.2f' % accuracy
coefficients = list(coefficients[1:]) # exclude intercept
word_coefficient_tuples = [(word, coefficient) for word, coefficient in zip(important_words, coefficients)]
word_coefficient_tuples = sorted(word_coefficient_tuples, key=lambda x:x[1], reverse=True)
word_coefficient_tuples[:10]
word_coefficient_tuples[-10:]
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load review dataset
Step2: One column of this dataset is 'sentiment', corresponding to the class label with +1 indicating a review with positive sentiment and -1 indicating one with negative sentiment.
Step3: Let us quickly explore more of this dataset. The 'name' column indicates the name of the product. Here we list the first 10 products in the dataset. We then count the number of positive and negative reviews.
Step4: Note
Step5: Now, we will perform 2 simple data transformations
Step6: Now we proceed with Step 2. For each word in important_words, we compute a count for the number of times the word occurs in the review. We will store this count in a separate column (one for each word). The result of this feature processing is a single column for each word in important_words which keeps a count of the number of times the respective word occurs in the review text.
Step7: The SFrame products now contains one column for each of the 193 important_words. As an example, the column perfect contains a count of the number of times the word perfect occurs in each of the reviews.
Step8: Now, write some code to compute the number of product reviews that contain the word perfect.
Step9: Quiz Question. How many reviews contain the word perfect?
Step10: Convert SFrame to NumPy array
Step11: We now provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned
Step12: Let us convert the data into NumPy arrays.
Step13: Now, let us see what the sentiment column looks like
Step14: Estimating conditional probability with link function
Step15: Aside. How the link function works with matrix algebra
Step16: Compute derivative of log likelihood with respect to a single coefficient
Step17: In the main lecture, our focus was on the likelihood. In the advanced optional video, however, we introduced a transformation of this likelihood---called the log likelihood---that simplifies the derivation of the gradient and is more numerically stable. Due to its numerical stability, we will use the log likelihood instead of the likelihood to assess the algorithm.
Step18: Checkpoint
Step19: Taking gradient steps
Step20: Now, let us run the logistic regression solver.
Step21: Predicting sentiments
Step22: Now, complete the following code block for Step 2 to compute the class predictions using the scores obtained above
Step23: Measuring accuracy
Step24: Which words contribute most to positive & negative sentiments?
Step25: Now, word_coefficient_tuples contains a sorted list of (word, coefficient_value) tuples. The first 10 elements in this list correspond to the words that are most positive.
Step26: Ten "most negative" words
|
14,819 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np
def find_peaks(x):
a = np.array(x)
l = []
for i in range(len(a)):
if i == 0 and a[i] > a[i+1]:
l.append(i)
elif i == len(a)-1 and a[i]> a[i-1]:
l.append(i)
elif i > 0 and i < len(a)-1:
if a[i]>a[i-1] and a[i] > a[i+1]:
l.append(i)
return l
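# Endpoints count as peaks when they exceed their single neighbour; interior points are
# peaks only when strictly greater than both neighbours.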
p1 = find_peaks([2,0,1,0,2,0,1])
assert np.allclose(p1, np.array([0,2,4,6]))
p2 = find_peaks(np.array([0,1,2,3]))
assert np.allclose(p2, np.array([3]))
p3 = find_peaks([3,2,1,0])
assert np.allclose(p3, np.array([0]))
from sympy import pi, N
pi_digits_str = str(N(pi, 10001))[2:]
a = [i for i in pi_digits_str]
b = find_peaks(a)
plt.hist(np.diff(b),10, align = 'left')
plt.title("Local Max");
plt.xlabel("Difference");
plt.ylabel("Freq");
assert True # use this for grading the pi digits histogram
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Peak finding
Step2: Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following
|
14,820 | <ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from mpl_toolkits.basemap import Basemap
from shapely.geometry import Point, Polygon, MultiPoint, MultiPolygon
from shapely.prepared import prep
import fiona
from matplotlib.collections import PatchCollection
from descartes import PolygonPatch
import json
import datetime
from IPython.display import Image
with open('LocationHistory.json', 'r') as fh:
raw = json.loads(fh.read())
# use ld as an abbreviation for location data
ld = pd.DataFrame(raw['locations'])
del raw #free up some memory
# convert to typical units
ld['latitudeE7'] = ld['latitudeE7']/float(1e7)
ld['longitudeE7'] = ld['longitudeE7']/float(1e7)
ld['timestampMs'] = ld['timestampMs'].map(lambda x: float(x)/1000) #to seconds
ld['datetime'] = ld.timestampMs.map(datetime.datetime.fromtimestamp)
# Rename fields based on the conversions we just did
ld.rename(columns={'latitudeE7':'latitude', 'longitudeE7':'longitude', 'timestampMs':'timestamp'}, inplace=True)
ld = ld[ld.accuracy < 1000] #Ignore locations with accuracy estimates over 1000m
ld.reset_index(drop=True, inplace=True)
degrees_to_radians = np.pi/180.0
ld['phi'] = (90.0 - ld.latitude) * degrees_to_radians
ld['theta'] = ld.longitude * degrees_to_radians
# Compute distance between two GPS points on a unit sphere
ld['distance'] = np.arccos(
np.sin(ld.phi)*np.sin(ld.phi.shift(-1)) * np.cos(ld.theta - ld.theta.shift(-1)) +
np.cos(ld.phi)*np.cos(ld.phi.shift(-1))
) * 6378.100 # radius of earth in km
ld['speed'] = ld.distance/(ld.timestamp - ld.timestamp.shift(-1))*3600 #km/hr
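# 'distance' is the great-circle distance between consecutive fixes (unit-sphere arc
# scaled by Earth's radius, in km); dividing by the time gap in seconds and multiplying
# by 3600 converts it to km/h.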
flightdata = pd.DataFrame(data={'endlat':ld.latitude,
'endlon':ld.longitude,
'enddatetime':ld.datetime,
'distance':ld.distance,
'speed':ld.speed,
'startlat':ld.shift(-1).latitude,
'startlon':ld.shift(-1).longitude,
'startdatetime':ld.shift(-1).datetime,
}).reset_index(drop=True)
def distance_on_unit_sphere(lat1, long1, lat2, long2):
# http://www.johndcook.com/python_longitude_latitude.html
# Convert latitude and longitude to spherical coordinates in radians.
degrees_to_radians = np.pi/180.0
# phi = 90 - latitude
phi1 = (90.0 - lat1)*degrees_to_radians
phi2 = (90.0 - lat2)*degrees_to_radians
# theta = longitude
theta1 = long1*degrees_to_radians
theta2 = long2*degrees_to_radians
cos = (np.sin(phi1)*np.sin(phi2)*np.cos(theta1 - theta2) +
np.cos(phi1)*np.cos(phi2))
arc = np.arccos( cos )
# Remember to multiply arc by the radius of the earth
# in your favorite set of units to get length.
return arc
flights = flightdata[(flightdata.speed > 40) & (flightdata.distance > 80)].reset_index()
# Combine instances of flight that are directly adjacent
# Find the indices of flights that are directly adjacent
_f = flights[flights['index'].diff() == 1]
adjacent_flight_groups = np.split(_f, (_f['index'].diff() > 1).nonzero()[0])
# Now iterate through the groups of adjacent flights and merge their data into
# one flight entry
for flight_group in adjacent_flight_groups:
idx = flight_group.index[0] - 1 #the index of flight termination
flights.ix[idx, ['startlat', 'startlon', 'startdatetime']] = [flight_group.iloc[-1].startlat,
flight_group.iloc[-1].startlon,
flight_group.iloc[-1].startdatetime]
# Recompute total distance of flight
flights.ix[idx, 'distance'] = distance_on_unit_sphere(flights.ix[idx].startlat,
flights.ix[idx].startlon,
flights.ix[idx].endlat,
flights.ix[idx].endlon)*6378.1
# Now remove the "flight" entries we don't need anymore.
flights = flights.drop(_f.index).reset_index(drop=True)
# Finally, we can be confident that we've removed instances of flights broken up by
# GPS data points during flight. We can now be more liberal in our constraints for what
# constitutes flight. Let's remove any instances below 200km as a final measure.
flights = flights[flights.distance > 200].reset_index(drop=True)
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from mpl_toolkits.basemap import Basemap
fig = plt.figure(figsize=(18,12))
# Plotting across the international dateline is tough. One option is to break up flights
# by hemisphere. Otherwise, you'd need to plot using a different projection like 'robin'
# and potentially center on the Int'l Dateline (lon_0=-180)
# flights = flights[(flights.startlon < 0) & (flights.endlon < 0)]# Western Hemisphere Flights
# flights = flights[(flights.startlon > 0) & (flights.endlon > 0)] # Eastern Hemisphere Flights
xbuf = 0.2
ybuf = 0.35
minlat = np.min([flights.endlat.min(), flights.startlat.min()])
minlon = np.min([flights.endlon.min(), flights.startlon.min()])
maxlat = np.max([flights.endlat.max(), flights.startlat.max()])
maxlon = np.max([flights.endlon.max(), flights.startlon.max()])
width = maxlon - minlon
height = maxlat - minlat
m = Basemap(llcrnrlon=minlon - width* xbuf,
llcrnrlat=minlat - height*ybuf,
urcrnrlon=maxlon + width* xbuf,
urcrnrlat=maxlat + height*ybuf,
projection='merc',
resolution='l',
lat_0=minlat + height/2,
lon_0=minlon + width/2,)
m.drawmapboundary(fill_color='#EBF4FA')
m.drawcoastlines()
m.drawstates()
m.drawcountries()
m.fillcontinents()
for idx, f in flights.iterrows():
m.drawgreatcircle(f.startlon, f.startlat, f.endlon, f.endlat, linewidth=3, alpha=0.4, color='b' )
m.plot(*m(f.startlon, f.startlat), color='g', alpha=0.8, marker='o')
m.plot(*m(f.endlon, f.endlat), color='r', alpha=0.5, marker='o' )
fig.text(0.125, 0.18, "Data collected from 2013-2016 on Android \nPlotted using Python, Basemap",
ha='left', color='#555555', style='italic')
fig.text(0.125, 0.15, "kivanpolimis.com", color='#555555', fontsize=16, ha='left')
plt.savefig('flights.png', dpi=150, frameon=False, transparent=False, bbox_inches='tight', pad_inches=0.2)
Image(filename='flights.png')
flights.distance.sum()*.621371 # distance column is in km, convert to miles
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Wrangling
Step2: data manipulation
Step3: calculate speed during trips (in km/hr)
Step4: Make a new dataframe containing the difference in location between each pair of points.
Step5: Now flightdata contains a comparison of each adjacent GPS location.
Step6: Flight algorithm
Step7: This algorithm worked 100% of the time for me - no false positives or negatives. But the adjacency-criteria of the algorithm is fairly brittle. The core of it centers around the assumption that inter-flight GPS data will be directly adjacent to one another. That's why the initial screening on line 1 of the previous cell had to be so liberal.
Step8: You can draw entertaining conclusions from the flight visualization. For instance, you can see some popular layover locations, all those lines in/out of Seattle, plus a recent trip to Germany. And Basemap has made it so simple for us - no Shapefiles to import because all map information is included in the Basemap module.
|
14,821 | <ASSISTANT_TASK:>
Python Code:
# RUN THIS CELL FIRST!!!
import time
from pydrill.client import PyDrill
import psycopg2
import pandas as pd
drill = PyDrill(host='128.206.116.250', port=8048)
if not drill.is_active():
raise ImproperlyConfigured('Please run Drill first')
# Start the Timer
start = time.perf_counter()
rows = drill.query("SELECT * FROM dfs.datasets.`twitter/tweet.json` WHERE text LIKE 'RT%' LIMIT 100000")
end = time.perf_counter()
# How long did this look up take?
print("Time to for PyDrill:")
print(end - start)
print('------------------')
# pandas dataframe
df = rows.to_dataframe()
df.head()
statement = """
SELECT *
FROM twitter.tweet
WHERE text LIKE 'RT%'
LIMIT 100000;
"""
start = time.perf_counter()
try:
connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu' password='readonly'"
# use our connection values to establish a connection
conn = psycopg2.connect(connect_str)
cursor = conn.cursor()
cursor.execute(statement)
column_names = [desc[0] for desc in cursor.description]
rows = cursor.fetchall()
except Exception as e:
print("Uh oh, can't connect. Invalid dbname, user or password?")
print(e)
end = time.perf_counter()
# How long did this look up take?
print("Time to for Postgres:")
print(end - start)
print('------------------')
# create dictionary from the rows and column names
job = {}
for i in list(range(len(column_names))):
job['{}'.format(column_names[i])] = [x[i] for x in rows]
# turn dictionary into a data frame
df = pd.DataFrame(job)
df.head()
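# A more direct alternative (rows is a list of tuples from the psycopg2 cursor):
# df = pd.DataFrame(rows, columns=column_names)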
statement = """
SELECT COUNT(*)
FROM
(SELECT DISTINCT text
FROM
(SELECT text
FROM dfs.datasets.`twitter/tweet.json`
WHERE text LIKE 'RT%'
LIMIT 10000) AS retweets) AS retweet_count
"""
drill = PyDrill(host='128.206.116.250', port=8048)
if not drill.is_active():
raise ImproperlyConfigured('Please run Drill first')
# Start the Timer
start = time.perf_counter()
rows = drill.query(statement)
end = time.perf_counter()
# How long did this look up take?
print("Time to for PyDrill:")
print(end - start)
print('------------------')
# pandas dataframe
df = rows.to_dataframe()
df.head()
statement = """
SELECT COUNT(*)
FROM
(SELECT DISTINCT text
FROM
(SELECT text
FROM twitter.tweet
WHERE text
LIKE 'RT%'
LIMIT 10000) AS retweets) AS retweet_count
"""
start = time.perf_counter()
try:
connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu' password='readonly'"
# use our connection values to establish a connection
conn = psycopg2.connect(connect_str)
cursor = conn.cursor()
cursor.execute(statement)
column_names = [desc[0] for desc in cursor.description]
rows = cursor.fetchall()
except Exception as e:
print("Uh oh, can't connect. Invalid dbname, user or password?")
print(e)
end = time.perf_counter()
# How long did this look up take?
print("Time to for Postgres:")
print(end - start)
print('------------------')
# create dictionary from the rows and column names
job = {}
for i in list(range(len(column_names))):
job['{}'.format(column_names[i])] = [x[i] for x in rows]
# turn dictionary into a data frame
df = pd.DataFrame(job)
df.head()
statement = """
SELECT DISTINCT text, COUNT(*)
FROM
(SELECT text
FROM twitter.tweet
WHERE text LIKE 'RT%'
LIMIT 10000) AS retweets
GROUP BY text
ORDER BY count DESC;
"""
start = time.perf_counter()
try:
connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu' password='readonly'"
# use our connection values to establish a connection
conn = psycopg2.connect(connect_str)
cursor = conn.cursor()
cursor.execute(statement)
column_names = [desc[0] for desc in cursor.description]
rows = cursor.fetchall()
except Exception as e:
print("Uh oh, can't connect. Invalid dbname, user or password?")
print(e)
end = time.perf_counter()
# How long did this look up take?
print("Time to for Postgres:")
print(end - start)
print('------------------')
# create dictionary from the rows and column names
job = {}
for i in list(range(len(column_names))):
job['{}'.format(column_names[i])] = [x[i] for x in rows]
# turn dictionary into a data frame
df = pd.DataFrame(job)
df
statement = """
SELECT DISTINCT text, iso_language, COUNT(*)
FROM
(SELECT text, iso_language
FROM twitter.tweet
WHERE text LIKE 'RT%'
LIMIT 10000) AS retweets
GROUP BY text, iso_language
ORDER BY count DESC;
"""
start = time.perf_counter()
try:
connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu' password='readonly'"
# use our connection values to establish a connection
conn = psycopg2.connect(connect_str)
cursor = conn.cursor()
cursor.execute(statement)
column_names = [desc[0] for desc in cursor.description]
rows = cursor.fetchall()
except Exception as e:
print("Uh oh, can't connect. Invalid dbname, user or password?")
print(e)
end = time.perf_counter()
# How long did this look up take?
print("Time to for Postgres:")
print(end - start)
print('------------------')
# create dictionary from the rows and column names
job = {}
for i in list(range(len(column_names))):
job['{}'.format(column_names[i])] = [x[i] for x in rows]
# turn dictionary into a data frame
df = pd.DataFrame(job)
df
statement = """
SELECT DISTINCT job_id, text, iso_language, COUNT(*)
FROM
(SELECT job_id, text, iso_language
FROM twitter.tweet
WHERE text LIKE 'RT%' AND job_id >= 255
LIMIT 10000) AS retweets
GROUP BY job_id, text, iso_language
ORDER BY count DESC;
"""
start = time.perf_counter()
try:
connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu' password='readonly'"
# use our connection values to establish a connection
conn = psycopg2.connect(connect_str)
cursor = conn.cursor()
cursor.execute(statement)
column_names = [desc[0] for desc in cursor.description]
rows = cursor.fetchall()
except Exception as e:
print("Uh oh, can't connect. Invalid dbname, user or password?")
print(e)
end = time.perf_counter()
# How long did this look up take?
print("Time to for Postgres:")
print(end - start)
print('------------------')
# create dictionary from the rows and column names
job = {}
for i in list(range(len(column_names))):
job['{}'.format(column_names[i])] = [x[i] for x in rows]
# turn dictionary into a data frame
df = pd.DataFrame(job)
df
statement = """
SELECT DISTINCT job_id, text, iso_language, COUNT(*)
FROM
(SELECT job_id, text, iso_language
FROM twitter.tweet
WHERE lower(text)
LIKE 'rt%#royals%' AND job_id >= 255 LIMIT 100) AS retweets
GROUP BY job_id, text, iso_language
ORDER BY count DESC;
"""
start = time.perf_counter()
try:
connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu' password='readonly'"
# use our connection values to establish a connection
conn = psycopg2.connect(connect_str)
cursor = conn.cursor()
cursor.execute(statement)
column_names = [desc[0] for desc in cursor.description]
rows = cursor.fetchall()
except Exception as e:
print("Uh oh, can't connect. Invalid dbname, user or password?")
print(e)
end = time.perf_counter()
# How long did this look up take?
print("Time to for Postgres:")
print(end - start)
print('------------------')
# create dictionary from the rows and column names
job = {}
for i in list(range(len(column_names))):
job['{}'.format(column_names[i])] = [x[i] for x in rows]
# turn dictionary into a data frame
df = pd.DataFrame(job)
df
statement = """
SELECT DISTINCT description, iso_language, COUNT(*)
FROM
(SELECT h.description, t.job_id, t.text, t.iso_language
FROM twitter.tweet t, twitter.job h
WHERE lower(t.text) LIKE 'rt%#royals%' AND h.job_id >= 255 and h.job_id = t.job_id LIMIT 100) AS retweets
GROUP BY description, iso_language
ORDER BY count DESC;
"""
start = time.perf_counter()
try:
connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu' password='readonly'"
# use our connection values to establish a connection
conn = psycopg2.connect(connect_str)
cursor = conn.cursor()
cursor.execute(statement)
column_names = [desc[0] for desc in cursor.description]
rows = cursor.fetchall()
except Exception as e:
print("Uh oh, can't connect. Invalid dbname, user or password?")
print(e)
end = time.perf_counter()
# How long did this look up take?
print("Time to for Postgres:")
print(end - start)
print('------------------')
# create dictionary from the rows and column names
job = {}
for i in list(range(len(column_names))):
job['{}'.format(column_names[i])] = [x[i] for x in rows]
# turn dictionary into a data frame
df = pd.DataFrame(job)
df
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pydrill
Step3: Postgres
Step5: Simple enough. Just as a reminder, the % operator in SQL matches any or no characters.
Step7: Postgres
Step9: Keep in mind that LIMIT works independently of ORDER BY so the results of the queries may be slightly different each time you run it since we aren't taking into account all rows in the database.
Step12: We can take this even further and circle back to the linguistic component: by adding another column to the GROUP BY clause, we can find the count of the most popular retweets per language.
Step14: We can further limit our query by a certain hashtag. It may be of interest to find the most common retweet by a certain topic where a topic is defined by a tweet hashtag. We have been collecting data for the duration of the 2017 MLB season, so if you are a baseball fan, we can extract the most retweeted tweets for a particular baseball team.
Step16: You will notice that we used lower(text) in the query. This converts all characters to lowercase so that letter case does not matter when looking for a simple hashtag (see the ILIKE sketch below).
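An alternative worth knowing (a sketch, not part of the original exercise): PostgreSQL's ILIKE operator matches case-insensitively on its own, so the lower() call can be dropped. The query below is an assumed equivalent of the hashtag search above.
# Sketch: ILIKE is PostgreSQL's case-insensitive LIKE, so lower(text) is not needed.
alt_statement = """
SELECT job_id, text, iso_language, COUNT(*)
FROM twitter.tweet
WHERE text ILIKE 'rt%#royals%' AND job_id >= 255
GROUP BY job_id, text, iso_language
ORDER BY count DESC
LIMIT 100;
"""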
|
14,822 | <ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
#export
from exp.nb_00 import *
import operator
def test(a,b,cmp,cname=None):
if cname is None: cname=cmp.__name__
assert cmp(a,b),f"{cname}:\n{a}\n{b}"
def test_eq(a,b): test(a,b,operator.eq,'==')
test_eq(TEST,'test')
# To run tests in console:
# ! python run_notebook.py 01_matmul.ipynb
#export
from pathlib import Path
from IPython.core.debugger import set_trace
from fastai import datasets
import pickle, gzip, math, torch, matplotlib as mpl
import matplotlib.pyplot as plt
from torch import tensor
MNIST_URL='http://deeplearning.net/data/mnist/mnist.pkl'
path = datasets.download_data(MNIST_URL, ext='.gz'); path
with gzip.open(path, 'rb') as f:
((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding='latin-1')
x_train,y_train,x_valid,y_valid = map(tensor, (x_train,y_train,x_valid,y_valid))
n,c = x_train.shape
x_train, x_train.shape, y_train, y_train.shape, y_train.min(), y_train.max()
assert n==y_train.shape[0]==50000
test_eq(c,28*28)
test_eq(y_train.min(),0)
test_eq(y_train.max(),9)
mpl.rcParams['image.cmap'] = 'gray'
img = x_train[0]
img.view(28,28).type()
plt.imshow(img.view((28,28)));
weights = torch.randn(784,10)
bias = torch.zeros(10)
def matmul(a,b):
ar,ac = a.shape # n_rows * n_cols
br,bc = b.shape
assert ac==br
c = torch.zeros(ar, bc)
for i in range(ar):
for j in range(bc):
for k in range(ac): # or br
c[i,j] += a[i,k] * b[k,j]
return c
m1 = x_valid[:5]
m2 = weights
m1.shape,m2.shape
%time t1=matmul(m1, m2)
t1.shape
len(x_train)
a = tensor([10., 6, -4])
b = tensor([2., 8, 7])
a,b
a + b
(a < b).float().mean()
m = tensor([[1., 2, 3], [4,5,6], [7,8,9]]); m
(m*m).sum().sqrt()
def matmul(a,b):
ar,ac = a.shape
br,bc = b.shape
assert ac==br
c = torch.zeros(ar, bc)
for i in range(ar):
for j in range(bc):
# Any trailing ",:" can be removed
c[i,j] = (a[i,:] * b[:,j]).sum()
return c
%timeit -n 10 _=matmul(m1, m2)
890.1/5
#export
def near(a,b): return torch.allclose(a, b, rtol=1e-3, atol=1e-5)
def test_near(a,b): test(a,b,near)
test_near(t1,matmul(m1, m2))
a
a > 0
a + 1
m
2*m
c = tensor([10.,20,30]); c
m
m.shape,c.shape
m + c
c + m
t = c.expand_as(m)
t
m + t
t.storage()
t.stride(), t.shape
c.unsqueeze(0)
c.unsqueeze(1)
m
c.shape, c.unsqueeze(0).shape,c.unsqueeze(1).shape
c.shape, c[None].shape,c[:,None].shape
c[None].shape,c[...,None].shape
c[:,None].expand_as(m)
m + c[:,None]
c[:,None]
def matmul(a,b):
ar,ac = a.shape
br,bc = b.shape
assert ac==br
c = torch.zeros(ar, bc)
for i in range(ar):
# c[i,j] = (a[i,:] * b[:,j]).sum() # previous
c[i] = (a[i ].unsqueeze(-1) * b).sum(dim=0)
return c
%timeit -n 10 _=matmul(m1, m2)
885000/277
test_near(t1, matmul(m1, m2))
c[None,:]
c[None,:].shape
c[:,None]
c[:,None].shape
c[None,:] * c[:,None]
c[None] > c[:,None]
# c[i,j] += a[i,k] * b[k,j]
# c[i,j] = (a[i,:] * b[:,j]).sum()
def matmul(a,b): return torch.einsum('ik,kj->ij', a, b)
%timeit -n 10 _=matmul(m1, m2)
885000/55
test_near(t1, matmul(m1, m2))
%timeit -n 10 t2 = m1.matmul(m2)
# time comparison vs pure python:
885000/18
t2 = m1@m2
test_near(t1, t2)
m1.shape,m2.shape
!python notebook2script.py 01_matmul.ipynb
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Jump_to lesson 8 video
Step2: Get data
Step3: Initial python model
Step4: Matrix multiplication
Step5: This is kinda slow - what if we could speed it up by 50,000 times? Let's try!
Step6: Elementwise ops
Step7: Frobenius norm
Step8: Elementwise matmul
Step9: Broadcasting
Step10: How are we able to do a > 0? 0 is being broadcast to have the same dimensions as a.
Step11: Broadcasting a vector to a matrix
Step12: We don't really copy the rows, but it looks as if we did. In fact, the rows are given a stride of 0.
Step13: You can index with the special value [None] or use unsqueeze() to convert a 1-dimensional array into a 2-dimensional array (although one of those dimensions has value 1).
Step14: You can always skip trailing ':'s when indexing, and '...' stands for all preceding dimensions.
Step15: Matmul with broadcasting
Step16: Broadcasting Rules
Step17: When operating on two arrays/tensors, Numpy/PyTorch compares their shapes element-wise. It starts with the trailing dimensions, and works its way forward. Two dimensions are compatible when they are equal, or one of them is 1 (see the short example after this list).
Step18: pytorch op
Step19: Export
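To make the broadcasting rules above concrete, here is a minimal PyTorch sketch (not part of the original notebook):
import torch
a = torch.ones(3, 1)   # shape (3, 1)
b = torch.arange(4.)   # shape (4,)
# Trailing dims: 1 vs 4 -> stretched to 4; leading dims: 3 vs (missing) -> 3
print((a + b).shape)   # torch.Size([3, 4])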
|
14,823 | <ASSISTANT_TASK:>
Python Code:
# enable plotting in notebook
%matplotlib notebook
from simulation_results import example_simulations
import physical_validation
simulation_nvt_vrescale_low = example_simulations.get(
"900 water molecules, NVT at 298K with v-rescale thermostat"
)
num_molecules = 900
simulation_data_nvt_low = physical_validation.data.SimulationData(
# Example simulations were performed using GROMACS
units=physical_validation.data.UnitData.units("GROMACS"),
ensemble=physical_validation.data.EnsembleData(
ensemble="NVT",
natoms=num_molecules * 3,
volume=3.01125 ** 3,
temperature=298.15,
),
observables=physical_validation.data.ObservableData(
# This test requires only the potential energy
potential_energy=simulation_nvt_vrescale_low["potential energy"]
),
)
physical_validation.ensemble.estimate_interval(
data=simulation_data_nvt_low,
)
simulation_nvt_vrescale_high = example_simulations.get(
"900 water molecules, NVT at 308K with v-rescale thermostat"
)
simulation_data_nvt_high = physical_validation.data.SimulationData(
# Example simulations were performed using GROMACS
units=physical_validation.data.UnitData.units("GROMACS"),
ensemble=physical_validation.data.EnsembleData(
ensemble="NVT",
natoms=num_molecules * 3,
volume=3.01125 ** 3,
temperature=308.15,
),
observables=physical_validation.data.ObservableData(
# This test requires only the potential energy
potential_energy=simulation_nvt_vrescale_high["potential energy"]
),
)
physical_validation.ensemble.check(
data_sim_one=simulation_data_nvt_low,
data_sim_two=simulation_data_nvt_high,
screen=True,
)
simulation_nvt_berendsen_low = example_simulations.get(
"900 water molecules, NVT at 298K with Berendsen thermostat"
)
simulation_data_nvt_low.observables = physical_validation.data.ObservableData(
potential_energy=simulation_nvt_berendsen_low["potential energy"]
)
simulation_nvt_berendsen_high = example_simulations.get(
"900 water molecules, NVT at 308K with Berendsen thermostat"
)
simulation_data_nvt_high.observables = physical_validation.data.ObservableData(
potential_energy=simulation_nvt_berendsen_high["potential energy"]
)
physical_validation.ensemble.check(
data_sim_one=simulation_data_nvt_low,
data_sim_two=simulation_data_nvt_high,
screen=True,
)
simulation_npt_low = example_simulations.get(
"900 water molecules, NPT at 298K and 1bar, using v-rescale and Parrinello-Rahman"
)
num_molecules = 900
simulation_data_npt_low = physical_validation.data.SimulationData(
# Example simulations were performed using GROMACS
units=physical_validation.data.UnitData.units("GROMACS"),
ensemble=physical_validation.data.EnsembleData(
ensemble="NPT",
natoms=num_molecules * 3,
pressure=1.0,
temperature=298.15,
),
observables=physical_validation.data.ObservableData(
# This test requires the potential energy and the volume
potential_energy=simulation_npt_low["potential energy"],
volume=simulation_npt_low["volume"],
),
)
physical_validation.ensemble.estimate_interval(data=simulation_data_npt_low)
simulation_npt_high = example_simulations.get(
"900 water molecules, NPT at 308K and 101bar, using v-rescale and Parrinello-Rahman"
)
num_molecules = 900
simulation_data_npt_high = physical_validation.data.SimulationData(
# Example simulations were performed using GROMACS
units=physical_validation.data.UnitData.units("GROMACS"),
ensemble=physical_validation.data.EnsembleData(
ensemble="NPT",
natoms=num_molecules * 3,
pressure=101.0,
temperature=308.15,
),
observables=physical_validation.data.ObservableData(
# This test requires the potential energy and the volume
potential_energy=simulation_npt_high["potential energy"],
volume=simulation_npt_high["volume"],
),
)
physical_validation.ensemble.check(
data_sim_one=simulation_data_npt_low,
data_sim_two=simulation_data_npt_high,
)
simulation_muvt_low = example_simulations.get(
"GCMC Difluoromethane vapor, muVT at 300K and -37.5kJ/mol"
)
simulation_data_muvt_low = physical_validation.data.SimulationData(
# Example simulations were performed in units compatible with GROMACS units
units=physical_validation.data.UnitData.units("GROMACS"),
ensemble=physical_validation.data.EnsembleData(
ensemble="muVT",
mu=-37.5,
volume=512,
temperature=300,
),
observables=physical_validation.data.ObservableData(
# This test requires the potential energy and the volume
potential_energy=simulation_muvt_low["potential energy"],
number_of_species=simulation_muvt_low["number of species"],
),
)
physical_validation.ensemble.estimate_interval(data=simulation_data_muvt_low)
simulation_muvt_high = example_simulations.get(
"GCMC Difluoromethane vapor, muVT at 300K and -37.0kJ/mol"
)
simulation_data_muvt_high = physical_validation.data.SimulationData(
# Example simulations were performed in units compatible with GROMACS units
units=physical_validation.data.UnitData.units("GROMACS"),
ensemble=physical_validation.data.EnsembleData(
ensemble="muVT",
mu=-37.0,
volume=512,
temperature=300,
),
observables=physical_validation.data.ObservableData(
# This test requires the potential energy and the volume
potential_energy=simulation_muvt_high["potential energy"],
number_of_species=simulation_muvt_high["number of species"],
),
)
physical_validation.ensemble.check(
data_sim_one=simulation_data_muvt_low,
data_sim_two=simulation_data_muvt_high,
screen=True
)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The results imported here are the time series of kinetic and potential energy from example simulations, which are loaded from the bundled example_simulations module.
Step2: Check NVT simulations
Step3: It is not trivial to decide at which temperature to perform a second simulation.
Step4: The second simulation available in our example set was performed at 308.15K, which
Step5: Using both simulation data objects, we can now check the ensemble sampled
Step6: By default, the ensemble check estimates the distance in temperature between the two sampled ensembles using a maximum likelihood approach. This distance estimate is expected to be close to the true value. As a rule of thumb, if the true interval is not within about 2-3 standard deviations of the estimated interval, the trajectory is unlikely to have been sampled from the expected ensemble. The quantiles (number of standard deviations) of the difference between the true value and the estimate are returned from the test as a machine-readable result.
Step7: The check confirms that the ensemble sampled with the Berendsen thermostat does not behave as expected when the temperature changes. The estimated temperature interval is around 15 standard deviations away from the true value, which makes it easy to reject the hypothesis that the potential energy was sampled from the correct ensemble.
Step8: As in the NVT case, we can use this simulation to have physical_validation suggesting a state point to perform the second simulation in.
Step9: The rule of thumb suggests that a second state point with a temperature difference of about 7.8 K and a pressure difference of about 315 bar would be optimal. The second simulation available in our example set was performed at 308.15 K and 101 bar, i.e. at a distance of 10 K and 100 bar. According to the physical_validation estimate, the pressure difference should be somewhat larger for optimal error detection, but the check remains valid with this choice of state points.
Step10: Using both simulation data objects, we can now check the ensemble sampled by our simulations. Note that plotting is not available for NPT simulations which differ in both temperature and pressure, since the 2-dimensional plot would be very hard to interpret.
Step11: The ensemble check now prints both the estimated temperature and pressure intervals. We note that in both cases, the true value is within less than a standard deviation, which means that the null hypothesis of sampling the expected ensemble stands.
Step12: As for the other ensembles, we can use this simulation to have physical_validation suggesting a state point to perform the second simulation in.
Step13: The rule of thumb suggests that a second state point with chemical potential difference of about 0.6 kJ/mol would be optimal. For the temperature, the suggestion of a second state point varies depending on whether a 1D fit (simulations varying only in their target temperature) or a 2D fit (simulations varying in both their target temperature and the chemical potential) is to be performed.
Step14: Using both simulation data objects, we can now check the ensemble sampled by our simulations. We are using screen=True to display a result plot on screen. See argument filename to print that same plot to file.
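Because the check also returns a machine-readable result (the deviation quantiles described above), it can be used programmatically. The following is only a sketch, under the assumption that check() returns that list of quantiles:
# Assumption (per the description above): check() returns the list of deviation quantiles.
quantiles = physical_validation.ensemble.check(
    data_sim_one=simulation_data_muvt_low,
    data_sim_two=simulation_data_muvt_high,
)
# Rule of thumb: within ~2-3 standard deviations is consistent with the expected ensemble.
print("consistent" if all(q <= 3 for q in quantiles) else "deviation > 3 sigma - investigate")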
|
14,824 | <ASSISTANT_TASK:>
Python Code:
# -- inputs
X_test[0]
# -- predicted output (using Keras)
yhat[0]
from tensorflow.core.framework import graph_pb2
# -- read in the graph
f = open("models/graph.pb", "rb")
graph_def = graph_pb2.GraphDef()
graph_def.ParseFromString(f.read())
import tensorflow as tf
# -- actually import the graph described by graph_def
tf.import_graph_def(graph_def, name = '')
for node in graph_def.node:
print(node.name)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Inspect the protobuf containing the model's architecture and logic
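A possible next step (sketch only; the tensor names are placeholders that must be replaced with names from the printout above) is to fetch tensors from the imported graph and run the network:
# 'input_1:0' and 'output_node0:0' are assumed names -- substitute the real ones printed above.
graph = tf.get_default_graph()
x_in = graph.get_tensor_by_name('input_1:0')
y_out = graph.get_tensor_by_name('output_node0:0')
with tf.Session(graph=graph) as sess:
    yhat_tf = sess.run(y_out, feed_dict={x_in: X_test[:1]})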
|
14,825 | <ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
with_eos = source_vocab_to_int["<EOS>"]
source2id = [[source_vocab_to_int[word] for word in sentence.split()] for sentence in source_text.split('\n')]
target2id = [[target_vocab_to_int[word] for word in sentence.split()] + [with_eos] for sentence in target_text.split('\n')]
return source2id, target2id
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
# TODO: Implement Function
_input = tf.placeholder(tf.int32, [None, None], name="input")
targets = tf.placeholder(tf.int32, [None, None])
lr = tf.placeholder(tf.float32, None, name="learning_rate")
keep_prob = tf.placeholder(tf.float32, None, name="keep_prob")
ts_length = tf.placeholder(tf.int32, [None], name="target_sequence_length")
max_ts_length = tf.reduce_max(ts_length, name="max_target_len")
ss_length = tf.placeholder(tf.int32, [None], name="source_sequence_length")
return _input, targets, lr, keep_prob, ts_length, max_ts_length, ss_length
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
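# Drop the last token of every target sequence and prepend the '<GO>' id, so the decoder
# input during training becomes '<GO> w1 w2 ...' while the target stays 'w1 w2 ... <EOS>'.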
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
return tf.concat([tf.fill([batch_size, 1], target_vocab_to_int.get("<GO>")), ending], 1)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
embedding = tf.contrib.layers.embed_sequence(
ids=rnn_inputs,
vocab_size=source_vocab_size,
embed_dim=encoding_embedding_size
)
def enc_cell(rnn_size):
return tf.contrib.rnn.LSTMCell(
num_units=rnn_size,
initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)
) # increased accuracy, see Sutskever et al., 2014
stacked_cell = tf.contrib.rnn.MultiRNNCell([enc_cell(rnn_size) for _ in range(num_layers)])
stacked_cell_with_drop = tf.contrib.rnn.DropoutWrapper(stacked_cell, output_keep_prob=keep_prob)
output, state = tf.nn.dynamic_rnn(stacked_cell_with_drop,
embedding,
sequence_length=source_sequence_length,
dtype=tf.float32)
return output, state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,
sequence_length=target_sequence_length,
time_major=False)
dec_cell_with_drop = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)
training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell_with_drop, training_helper, encoder_state, output_layer)
training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder,
impute_finished=True,
maximum_iterations=max_summary_length)
return training_decoder_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens')
inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,
start_tokens,
end_of_sequence_id)
dec_cell_with_drop = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)
inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell_with_drop, inference_helper, encoder_state, output_layer)
inference_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,
impute_finished=True,
maximum_iterations=max_target_sequence_length)
return inference_decoder_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
def make_cell(rnn_size):
dec_cell = tf.contrib.rnn.LSTMCell(rnn_size,
initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
return dec_cell
# Decoder Embedding
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
# Layer for mapping outputs to targets
output_layer = Dense(target_vocab_size,
kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))
# Tokens as int
start_token_id = target_vocab_to_int.get("<GO>")
end_token_id = target_vocab_to_int.get("<EOS>")
with tf.variable_scope("decode"):
#2. Creating the LSTM Cell
dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])
train_decoder_output = decoding_layer_train(
encoder_state, dec_cell, dec_embed_input, target_sequence_length,
max_target_sequence_length, output_layer, keep_prob)
with tf.variable_scope("decode", reuse=True):
infer_decoder_ouput = decoding_layer_infer(
encoder_state, dec_cell, dec_embeddings, start_token_id, end_token_id,
max_target_sequence_length, target_vocab_size, output_layer, batch_size, keep_prob)
return train_decoder_output, infer_decoder_ouput
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# encoder does its own embed_sequence
_, enc_state = encoding_layer(input_data,
rnn_size,
num_layers,
keep_prob,
source_sequence_length,
source_vocab_size,
enc_embedding_size)
# Process target data for encoders
decoder_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
# Passing encoder state and decoder_inputs as is, since embedding is done inside decoding_layer
training_decoder_output, inference_decoder_output = decoding_layer(
dec_input=decoder_input,
encoder_state=enc_state,
target_sequence_length=target_sequence_length,
max_target_sequence_length=tf.reduce_max(target_sequence_length),
rnn_size=rnn_size,
num_layers=num_layers,
target_vocab_to_int=target_vocab_to_int,
target_vocab_size=target_vocab_size,
batch_size=batch_size,
keep_prob=keep_prob,
decoding_embedding_size=dec_embedding_size)
return training_decoder_output, inference_decoder_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = 3
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 512
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 256
decoding_embedding_size = 256
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.5
display_step = 100
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
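# Note: the source sequences are fed to the encoder reversed (tf.reverse along the time
# axis below), a trick from Sutskever et al. (2014) that can make optimization easier.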
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# Map out-of-vocabulary words to the '<UNK>' token id instead of hard-coding 2
unk_id = vocab_to_int['<UNK>']
return [vocab_to_int.get(word, unk_id) for word in sentence.lower().split()]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
translate_sentence = 'i like to work with robots .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Function
Step8: Preprocess all the data and save it
Step10: Check Point
Step12: Check the Version of TensorFlow and Access to GPU
Step15: Build the Neural Network
Step18: Process Decoder Input
Step21: Encoding
Step24: Decoding - Training
Step27: Decoding - Inference
Step30: Build the Decoding Layer
Step33: Build the Neural Network
Step34: Neural Network Training
Step36: Build the Graph
Step40: Batch and pad the source and target sequences
Step43: Train
Step45: Save Parameters
Step47: Checkpoint
Step50: Sentence to Sequence
Step52: Translate
|
14,826 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
from pylab import *
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
# Now moving on to the SFR-M*-Size analysis
%run ~/Dropbox/pythonCode/LCSanalyzeblue.py
# using John Moustakas's stellar mass estimates
figure()
plot(s.s.ABSMAG[:,4][s.blueflag2],s.logstellarmass[s.blueflag2],'bo')
xlabel('$M_r$')
ylabel('$ log_{10}(M_*) $')
# r-band limit
rlim=17.7
# distance modulus to Hercules, the furthest cluster
mM=35.97
# absolute mag limit corresponding to r=17.7
Mr=rlim-mM
axvline(x=Mr,ls='--',color='r')
axis([-20,-16,8.5,10.5])
axhline(y=minmass)
s.plotsalimcolormag()
# For blue galaxies only
s.plotSFRStellarmassSizeBlue(blueflag=True,plotbadfits=False)
# to compare size distributions
print 'comparing size ratios for field vs cluster'
f1=s.bluesampleflag & ~s.membflag & ~s.agnflag
m1=s.bluesampleflag & s.membflag & ~s.agnflag
t=ks(s.s.SIZE_RATIO[f1],s.s.SIZE_RATIO[m1])
print 'mean of field = %5.2f +/- %5.2f'%(mean(s.s.SIZE_RATIO[f1]),std(s.s.SIZE_RATIO[f1])/sqrt(1.*sum(f1)))
print 'mean of clust = %5.2f +/- %5.2f'%(mean(s.s.SIZE_RATIO[m1]),std(s.s.SIZE_RATIO[m1])/sqrt(1.*sum(m1)))
s.printsizeblue()
s.calc_size_starburst()
# comparing sizes for sample with Coma removed
nc.plotSFRStellarmassSizeBlue(blueflag=True,plotbadfits=False)
nc.printsize()
ncf1=nc.bluesampleflag & ~nc.membflag & ~nc.agnflag
ncm1=nc.bluesampleflag & nc.membflag & ~nc.agnflag
t=ks(nc.s.SIZE_RATIO[ncf1],nc.s.SIZE_RATIO[ncm1])
# stellar mass
print 'comparing stellar mass for field vs cluster'
f1=s.bluesampleflag & ~s.membflag & ~s.agnflag
m1=s.bluesampleflag & s.membflag & ~s.agnflag
t=ks(s.logstellarmass[f1],s.logstellarmass[m1])
# B/T
print ''
print 'comparing B/T for field vs cluster'
f1=s.bluesampleflag & ~s.membflag & ~s.agnflag & s.gim2dflag
m1=s.bluesampleflag & s.membflag & ~s.agnflag & s.gim2dflag
t=ks(s.s.B_T_r[f1],s.s.B_T_r[m1])
# ssfr
print ''
print 'comparing sSFR for field vs cluster'
f1=s.bluesampleflag & ~s.membflag & ~s.agnflag
m1=s.bluesampleflag & s.membflag & ~s.agnflag
t=ks(log10(s.ssfr[f1]),log10(s.ssfr[m1]))
# B/A
print ''
print 'comparing B/A for field vs cluster'
f1=s.bluesampleflag & ~s.membflag & ~s.agnflag
m1=s.bluesampleflag & s.membflag & ~s.agnflag
t=ks((s.s.SERSIC_BA[f1]),(s.s.SERSIC_BA[m1]))
# ir surface brightness
print ''
print 'comparing $L{IR}/R_e(24)^2$ for field vs cluster'
f1=s.bluesampleflag & ~s.membflag & ~s.agnflag
m1=s.bluesampleflag & s.membflag & ~s.agnflag
t=ks(log10(s.sigma_ir[f1]),log10(s.sigma_ir[m1]))
# size
print ''
print 'comparing Re(24)/Re(r) for field vs cluster'
f1=s.bluesampleflag & ~s.membflag & ~s.agnflag
m1=s.bluesampleflag & s.membflag & ~s.agnflag
t=ks((s.s.SIZE_RATIO[f1]),(s.s.SIZE_RATIO[m1]))
figure(figsize=(12,5))
subplot(1,2,1)
subplots_adjust(wspace=.5)
pcolor=s.s.CLUSTER_LX
pcolorlabel='$log_{10}(L_X)$'
#pcolor=s.s.SIGMA_5
#pcolor=sqrt(s.s.DR_R200**2 + s.s.DELTA_V**2)
#pcolor=10.**s.logstellarmass
#pcolor=s.massdensity
#pcolor=s.s.B_T_r
#pcolorlabel='$log_{10}(B/T)$'
f=s.bluesampleflag & ~s.agnflag & s.membflag
scatter(log10(s.sigma_ir[f]),log10(s.ssfr[f]*1.e9),c=log10(pcolor[f]),s=60)
f=s.bluesampleflag & ~s.agnflag & ~s.membflag
scatter(log10(s.sigma_ir[f]),log10(s.ssfr[f]*1.e9),c=log10(pcolor[f]),s=60,marker='^')
#f=s.bluesampleflag & ~s.agnflag & s.fieldflag
#plot(log10(s.sigma_ir[f]),log10(s.ssfr[f]*1.e9),'ks',mfc='None',label='Field')
colorbar(fraction=.08,label=pcolorlabel)
xlabel('$log_{10}(\Sigma_{ir}) $')
ylabel('$sSFR/Gyr$')
subplot(1,2,2)
#pcolor=s.s.CLUSTER_LX
#pcolorlabel='$log_{10}(L_X)$'
#pcolor=s.s.SIGMA_5
#pcolor=sqrt(s.s.DR_R200**2 + s.s.DELTA_V**2)
#pcolor=10.**s.logstellarmass
#pcolor=s.massdensity
pcolor=s.s.B_T_r
pcolorlabel='$log_{10}(B/T)$'
f=s.bluesampleflag & ~s.agnflag & s.membflag & s.gim2dflag
scatter(log10(s.sigma_ir[f]),log10(s.ssfr[f]*1.e9),c=log10(pcolor[f]),s=60)
f=s.bluesampleflag & ~s.agnflag & ~s.membflag & s.gim2dflag
scatter(log10(s.sigma_ir[f]),log10(s.ssfr[f]*1.e9),c=log10(pcolor[f]),s=60,marker='^')
#f=s.bluesampleflag & ~s.agnflag & s.fieldflag
#plot(log10(s.sigma_ir[f]),log10(s.ssfr[f]*1.e9),'ks',mfc='None',label='Field')
colorbar(fraction=.08,label=pcolorlabel)
xlabel('$log_{10}(\Sigma_{ir}) $')
#ylabel('$sSFR/Gyr$')
figure()
#pcolor=s.s.CLUSTER_LX
#pcolorlabel='$log_{10}(L_X)$'
#pcolor=s.s.SIGMA_5
#pcolor=sqrt(s.s.DR_R200**2 + s.s.DELTA_V**2)
#pcolor=10.**s.logstellarmass
#pcolor=s.massdensity
pcolor=s.s.B_T_r
pcolorlabel='$log_{10}(B/T)$'
f=s.bluesampleflag & ~s.agnflag & s.membflag & s.gim2dflag
scatter(log10(s.sigma_ir[f]),log10(s.ssfr[f]*1.e9),c=log10(pcolor[f]),s=60,label='Cluster')
f=s.bluesampleflag & ~s.agnflag & ~s.membflag & s.gim2dflag
scatter(log10(s.sigma_ir[f]),log10(s.ssfr[f]*1.e9),c=log10(pcolor[f]),s=60,marker='^',label='Field')
#f=s.bluesampleflag & ~s.agnflag & s.fieldflag
#plot(log10(s.sigma_ir[f]),log10(s.ssfr[f]*1.e9),'ks',mfc='None',label='Field')
colorbar(fraction=.08,label=pcolorlabel)
xlabel('$log_{10}(\Sigma_{ir}) $')
ylabel('$log_{10}(sSFR/Gyr)$')
legend(scatterpoints=1,loc='upper left')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As of 1/6/16, we need to make one more pass through the sample and remove galaxies that are blended with a nearby companion. Not sure if people think the numbers in each panel are useful.
Step2: RESULT
Step3: SFR-Mass-Size for Blue Galaxies only
Step4: RESULT
Step5: The Impact of Coma
Step6: RESULT
Step7: RESULT
|
14,827 | <ASSISTANT_TASK:>
Python Code:
# Import py_entitymatching package
import py_entitymatching as em
import os
import pandas as pd
# Get the datasets directory
datasets_dir = em.get_install_path() + os.sep + 'datasets'
# Get the paths of the input tables
path_A = datasets_dir + os.sep + 'person_table_A.csv'
path_B = datasets_dir + os.sep + 'person_table_B.csv'
# Read the CSV files and set 'ID' as the key attribute
A = em.read_csv_metadata(path_A, key='ID')
B = em.read_csv_metadata(path_B, key='ID')
# Get features (for blocking)
feature_table = em.get_features_for_blocking(A, B, validate_inferred_attr_types=False)
# Get features (for matching)
# feature_table = em.get_features_for_matching(A, B)
type(feature_table)
feature_table.head()
# Drop first row
feature_table = feature_table.drop(0)
feature_table.head()
# Remove all features except those involving 'name' (i.e. include only the features whose left attribute is name)
feature_table = feature_table[feature_table.left_attribute=='name']
feature_table
# Remove all features except those involving jaccard (i.e. include only the features whose sim function is jaccard)
feature_table = feature_table[feature_table.simfunction=='jaccard']
feature_table
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then, read the (sample) input tables for blocking purposes
Step2: Removing Features from Feature Table
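For reference, the two filtering steps can also be combined into a single boolean mask; this is plain pandas and is shown here only as a sketch:
# Equivalent single-step filter: keep only jaccard features defined on the 'name' attribute
mask = (feature_table.left_attribute == 'name') & (feature_table.simfunction == 'jaccard')
feature_table = feature_table[mask]
feature_table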
|
14,828 | <ASSISTANT_TASK:>
Python Code:
# Import the needed packages, SymPy
import sympy as sp
from sympy import init_printing
init_printing()
# Define the variables
# Complex variable
s = sp.symbols('s')
# FOTD Coeffficients
T1,T2,T3,T4 = sp.symbols('T_1 T_2 T_3 T_4')
K1,K2,K3,K4 = sp.symbols('K_1 K_2 K_3 K_4')
# Time Delay Coefficients
L1,L2,L3,L4 = sp.symbols('L_1 L_2 L_3 L_4')
# Vectorize
TV = [T1,T2,T3,T4]
KV = [K1,K2,K3,K4]
LV = [L1,L2,L3,L4]
# Define a FOTD
def FOTD(K,T,L):
return K/(T*s+1) * sp.exp(-L*s)
#Define a Matrix of FOTD
G = sp.zeros(2)
for i in range(0,4):
G[i]= FOTD(KV[i],TV[i],LV[i])
#Get the DC Gain and invert it for Static Decoupling
G0 = G.subs(s,0)
D = sp.simplify(G0**-1)
#Get the Q Matrix -> the static decoupled system
Q = sp.simplify(G*D)
Q
# Get the Taylor series Expansion for the Interaction of Q approx I + O(s)
k12 = sp.simplify(Q[1].series(s,0,2).removeO())/s
k21 = sp.simplify(Q[2].series(s,0,2).removeO())/s
# Print the Interaction of Input 2 on Output 1 in small frequencies
k12
# Print the Interaction of Input 1 on Output 2 in small frequencies
k21
# Define the symbols for interaction
k1,k2 = sp.symbols('k_1 k_2', real = True)
c1,c2 = sp.symbols('c_1 c_2',real = True)
# Define symbols for the decoupling controller --> SMALL LETTER
ki1,ki2 = sp.symbols('k_{i1} k_{i2}',real = True)
kp1,kp2 = sp.symbols('k_{p1} k_{p2}',real = True)
b = sp.symbols('b',real = True)
# Define the symbols for the regular controller
KI1,KI2,KI3,KI4 = sp.symbols('K_{I1} K_{I2} K_{I3} K_{I4}',real = True)
KP1,KP2,KP3,KP4 = sp.symbols('K_{P1} K_{P2} K_{P3} K_{P4}',real = True)
B = sp.symbols('B',real = True)
# Vectorize
kV = [k1,k2]
cV = [c1,c2]
kiV = [ki1,ki2]
kpV = [kp1,kp2]
solvar =[ki1,ki2,kp1,kp2]
KIV = [KI1,KI2,KI3,KI4]
KPV = [KP1,KP2,KP3,KP4]
# Create a symbolic Controller
def PI(I,P,S):
# Here S is setpointweight
return P*S+I/s
# Create an decoupling controller -> Design Space Q
C_D = sp.diag(PI(ki1,kp1,1),PI(ki2,kp2,1))
# Create the regular controller -> Design Space G
C_R = sp.zeros(2)
for i in range(0,4):
C_R[i]= PI(KIV[i],KPV[i],1)
C_RD = G.subs(s,0) * C_R
# Get substituion Rules -> make a list of equations
EQ = []
for i in range(0,4):
EQ.append(C_D[i]- C_RD[i])
# Block one
# Solve for the decoupling controllers
kiEQ = [EQ[0],EQ[3]]
kisol = sp.solve(kiEQ,[C_D[0],C_D[3]],dict=True)
#Block two
# Solve for the ratio of the regular controllers
KEQ = [EQ[1],EQ[2]]
# Choose to substitute the minor diagonals -> solve for the other controllers
Ksol = sp.solve(KEQ,[C_R[1],C_R[2]], dict=True)
# Substitute the minor diagonals in the solution for ki
# We have a system of polynomials in s; Hence, we can get the right ratio by using coefficient comparision
# Regular controller
# First minor diagonal
a = C_R[1] # Chose controller from input 2 to output 1
expr = sp.simplify(Ksol[0][a]).expand()
# Get the coefficients corresponding to 1/s
I12 = expr.coeff('1/s');
# Get the coefficients corresponding to Proportional Gain
P12 = sp.simplify(expr-I12/s)
# Next controller
a = C_R[2]
expr = sp.simplify(Ksol[0][a]).expand()
# Get the coefficients corresponding to 1/s
I21= expr.coeff('1/s')
# Get the coefficients corresponding to Proportional Gain
P21 = sp.simplify(expr-I21/s)
Ksol[0]
# Decoupling Controller 1
# Set the key of the dictionary of solutions
a = C_D[0]
# Substitute the regular controllers in the current solutions
sub1 = kisol[0][a].subs(KPV[1],P12).subs(KPV[2],P21).subs(KIV[1],I12).subs(KIV[2],I21)
# Simplify the equation
sub1 = sp.simplify(sub1)
# Decoupling Controller 2
# Set the key of the dictionary of solutions
a = C_D[3]
# Substitute the regular controllers in the current solutions
sub2 = kisol[0][a].subs(KPV[1],P12).subs(KPV[2],P21).subs(KIV[1],I12).subs(KIV[2],I21)
# Simplify the equation
sub2 = sp.simplify(sub2)
# First Controller
sub1
# Second Controller
sub2
# Decoupling controller 1 -> We can directly see the proportional and integral coeffcients
sp.simplify(sub1-sub1.coeff('1/s')/s),sp.simplify(sub1.coeff('1/s'))
# Decoupling controller 2 -> We can directly see the proportional and integral coeffcients
sp.simplify(sub2-sub2.coeff('1/s')/s),sp.simplify(sub2.coeff('1/s'))
# Get the upper limit of the interaction in terms of the original coeffients
gamma1 = sp.Abs(k12*s*sp.simplify(sub2.coeff('1/s')/s)+s*k12*b*(sp.simplify(sub2-sub2.coeff('1/s')/s)))
gamma1 = sp.simplify(gamma1)
gamma2 = sp.Abs(k21*s*sp.simplify(sub1.coeff('1/s')/s)+s*k21*b*sp.simplify(sub1-sub1.coeff('1/s')/s))
gamma2 = sp.simplify(sp.simplify(gamma2))
gamma1
gamma2
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Interpretation
Step2: We now have a system of 4 Equations we can set to zero. We have to solve for four variables, the parameter of the decoupling controller, to get the desired connection from the heueristic rules for FOTD Systems to the more complex system.
Step3: We now have a relation between the choosen (main loop) controller and the other (coupling loop) controller, given by the following relationships
Step4: We use this relationship to substitute the controller in the relationship for the decoupling controller.
Step5: We get the decoupled controller - original designed by using Q - as a function of the choosen (main loop) controller
Step6: Use that knowledge to derive the proportional and integral gains of the controller by comparison of the coefficients
Step7: Deriving the maximal coupling
|
14,829 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
from IPython.display import SVG
s = """
<svg width="100" height="100">
<circle cx="50" cy="50" r="20" fill="aquamarine" />
</svg>
"""
SVG(s)
def draw_circle(width=100, height=100, cx=25, cy=25, r=5, fill='red'):
Draw an SVG circle.
Parameters
----------
width : int
The width of the svg drawing area in px.
height : int
The height of the svg drawing area in px.
cx : int
The x position of the center of the circle in px.
cy : int
The y position of the center of the circle in px.
r : int
The radius of the circle in px.
fill : str
The fill color of the circle.
circle = """
<svg width="%d" height="%d">
<circle cx="%d" cy="%d" r="%d" fill="%s"/>
</svg>
""" % (width, height, cx, cy, r, fill)  # %-formatting inserts the parameters into the SVG string
display(SVG(circle))
draw_circle(cx=10, cy=10, r=10, fill='blue')
assert True # leave this to grade the draw_circle function
w=interactive(draw_circle,width=fixed(300),height=fixed(300), cx=(0,300,10),cy=(0,300,10), r=(0,50,5), fill='red')
c = w.children
assert c[0].min==0 and c[0].max==300
assert c[1].min==0 and c[1].max==300
assert c[2].min==0 and c[2].max==50
assert c[3].value=='red'
display(w)
assert True # leave this to grade the display of the widget
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Interact with SVG display
Step5: Write a function named draw_circle that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the IPython.display.SVG object and IPython.display.display function.
Step6: Use interactive to build a user interface for exploring the draw_circle function
Step7: Use the display function to show the widgets created by interactive
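Equivalently (a small sketch, not required by the exercise), interact builds and displays the same UI in a single call, without a separate display() step:
# interact(...) creates the widgets and shows them immediately.
interact(draw_circle, width=fixed(300), height=fixed(300), cx=(0, 300, 10), cy=(0, 300, 10), r=(0, 50, 5), fill='red');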
|
14,830 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.interpolate import interp1d
with np.load('trajectory.npz') as data:
t = data['t']
x = data['x']
y = data['y']
print(t,x,y)
assert isinstance(x, np.ndarray) and len(x)==40
assert isinstance(y, np.ndarray) and len(y)==40
assert isinstance(t, np.ndarray) and len(t)==40
x_interp = interp1d(t,x, kind='cubic')
y_interp = interp1d(t,y, kind='cubic')
newt = np.linspace(np.min(t), np.max(t), 200)
newx = x_interp(newt)
newy = y_interp(newt)
assert newt[0]==t.min()
assert newt[-1]==t.max()
assert len(newt)==200
assert len(newx)==200
assert len(newy)==200
f = plt.figure(figsize=(12,12))
ax = plt.subplot(111)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.plot(x, y, 'bo')
plt.plot(newx, newy, 'r-')
plt.title("Trajectory of a Particle")
plt.xlabel("X")
plt.ylabel("Y");
assert True # leave this to grade the trajectory plot
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2D trajectory interpolation
Step2: Use these arrays to create interpolated functions $x(t)$ and $y(t)$. Then use those functions to create the following arrays
Step3: Make a parametric plot of ${x(t),y(t)}$ that shows the interpolated values and the original points
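As a quick sanity check (a sketch, not part of the assignment), a cubic interp1d must reproduce the original samples at the original time points:
# The interpolant passes through the original data points (up to floating-point error).
assert np.allclose(x_interp(t), x)
assert np.allclose(y_interp(t), y)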
|
14,831 | <ASSISTANT_TASK:>
Python Code:
%run "../src/start_session.py"
%run "../src/recurrences.py"
%run "../src/sums.py"
from oeis import oeis_search, ListData
import knowledge
sys.setrecursionlimit(10000000)
s = oeis_search(id=45)
s(data_only=True, data_representation=ListData(upper_limit=20))
with bind(IndexedBase('f'), single=True) as f:
fibonacci_rec_spec = recurrence_spec(recurrence_eq=Eq(f[n+2],f[n+1]+f[n]), recurrence_symbol=f, variables=[n])
fibonacci_rec_spec
unfolded = fibonacci_rec_spec.unfold(depth=5)
unfolded
unfolded.involute(depth=1)
unfolded.involute()
subsumed=unfolded.subsume()
subsumed
im=knowledge.fibonacci_numbers_inverted_mapping(start=2, limit=20)
im
subsumed.subs(im)
unfolded.instantiate(strategy=raw(substitutions={n:20}))
instantiated = unfolded.instantiate(strategy=based(arity=unary_indexed()))
instantiated
almost_valued = instantiated.subsume(additional_terms={f[0]:Integer(0), f[1]:Integer(1)})
almost_valued
almost_valued.involute()
ipython_latex_description(fibonacci_rec_spec, depths=range(13), arity=unary_indexed())
m, v, r, eqs = fibonacci_rec_spec.matrix_vector_product(depth=10, arity=unary_indexed(),
segment=[n-k for k in range(-2, 19)])
latex_array_src(eqs)
m, v, r
example={n:30}
to_check = Eq(m*v.subs(example), r.subs(example))
to_check
to_check.subs(knowledge.fibonacci_numbers(), simultaneous=True)
m, v, r, eqs = fibonacci_rec_spec.matrix_vector_product(depth=10, arity=unary_indexed(),
segment=[Integer(k) for k in range(0, 11)],
based_instantiation=True)
latex_array_src(eqs)
m,v,r
m = symbols('m')
thm = Eq(f[n], Sum(binomial(m,k)*f[n-2*m+k], (k,0,m)))
thm
benjamin_quinn_thm = thm.subs({n:n+m})
benjamin_quinn_thm
benjamin_quinn_thm.subs({m:n})
def expand_sum_in_eq(eq_term, in_lhs=False, in_rhs=True):
lhs, rhs = eq_term.lhs, eq_term.rhs
return Eq(expand_Sum(lhs) if in_lhs else lhs, expand_Sum(rhs) if in_rhs else rhs)
expanded_eq = expand_sum_in_eq(thm.subs({m:20}))
expanded_eq
subsed = expanded_eq.subs(n, 40)
subsed
subsed.subs(knowledge.fibonacci_numbers())
fibs = knowledge.fibonacci_numbers()
f2_diag = f[0]
f4_diag = f[0]+f[1]
f6_diag = f[0]+2*f[1]
f8_diag = f[0]+3*f[1]+f[2]
f10_diag = f[0]+4*f[1]+3*f[2]
f12_diag = f[0]+5*f[1]+6*f[2]+f[3]
f14_diag = f[0]+6*f[1]+10*f[2]+4*f[3]
f16_diag = f[0]+7*f[1]+15*f[2]+10*f[3]+f[4]
f18_diag = f[0]+8*f[1]+21*f[2]+20*f[3]+5*f[4]
f20_diag = f[0]+9*f[1]+28*f[2]+35*f[3]+15*f[4]+f[5]
f22_diag = f[0]+10*f[1]+36*f[2]+56*f[3]+35*f[4]+6*f[5]
f24_diag = f[0]+11*f[1]+45*f[2]+84*f[3]+70*f[4]+21*f[5]+f[6]
f26_diag = f[0]+12*f[1]+55*f[2]+120*f[3]+126*f[4]+56*f[5]+7*f[6]
diagonals = [Eq(f2_diag, f2_diag.subs(fibs)),
Eq(f4_diag, f4_diag.subs(fibs)),
Eq(f6_diag, f6_diag.subs(fibs)),
Eq(f8_diag, f8_diag.subs(fibs)),
Eq(f10_diag, f10_diag.subs(fibs)),
Eq(f12_diag, f12_diag.subs(fibs)),
Eq(f14_diag, f14_diag.subs(fibs)),
Eq(f16_diag, f16_diag.subs(fibs)),
Eq(f18_diag, f18_diag.subs(fibs)),
Eq(f20_diag, f20_diag.subs(fibs)),
Eq(f22_diag, f22_diag.subs(fibs)),
Eq(f24_diag, f24_diag.subs(fibs)),
Eq(f26_diag, f26_diag.subs(fibs))]
latex_array_src(diagonals)
def A059633_gf(x):
return 1/(1-2*x+x**3-x**4)
A059633_gf(t).series(t,n=14)
def do_memberwise_on_eqs(an_eq, another_eq, operator=lambda x, y: Add(x,y,evaluate=True)):
return Eq(operator(an_eq.lhs, another_eq.lhs), operator(an_eq.rhs, another_eq.rhs))
def swap_eq(eq_term): return Eq(eq_term.rhs, eq_term.lhs)
reduce(do_memberwise_on_eqs, eqs)
even_subscript_fibonacci_eq = Eq(f[2*n+1]-1, Sum(f[2*k],(k,0,n)))
even_subscript_fibonacci_eq
example = even_subscript_fibonacci_eq.subs(n,21)
example_expanded = expand_sum_in_eq(example)
example_expanded, example_expanded.subs(fibs)
from itertools import accumulate
enum_range = range(0,50,2)
def worker(accumulated_pair, current_pair):
index, current = current_pair
_, accumulated = accumulated_pair
summed_eq = do_memberwise_on_eqs(accumulated, current)
return index, summed_eq
def subs_fib_thm(pair):
index, current_eq = pair
expanded_thm = expand_sum_in_eq(even_subscript_fibonacci_eq.subs({n:Integer(index)/2}))
return index, current_eq, expanded_thm
#def eq_to_subs_dict(eq_term): return {eq_term.lhs:eq_term.rhs}
def apply_subs_on_lhs(triple):
index, current_eq, thm = triple
return current_eq.subs({thm.rhs:thm.lhs})
def latex_array_env_of_eqs(mapped):
from string import Template
template = Template(r"\begin{array}{c}$content\end{array}")
return template.substitute(content="\n".join(mapped))
triangle = accumulate(zip(enum_range, eqs), worker)
triangle = map(subs_fib_thm, triangle)
triangle = list(map(apply_subs_on_lhs, triangle))
triangle[0] = Eq(f[1]-1,f[0])
latex_array_src(triangle)
all(map(lambda eq: eq.subs(fibs), triangle))
to_matrix_notation(triangle, f , range(0, 11))
triangle = [eq.subs({f[0]:fibs[f[0]]}, simultaneous=True) for eq in triangle[1:]]
latex_array_src(triangle)
to_matrix_notation(triangle, f, range(1,11))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: OEIS
Step2: Recurrence
Step3: Unfolding
Step4: Involution
Step5: Subsuming
Step6: We can abstract the following conjecture
Step7: Instantiation
Step8: Based
Step9: Computing
Step10: Collecting
Step11: Spec as symbolic matrix-vector product
Step12: Spec as based matrix-vector product
Step13: Identities and conjectures
Step14: which is known but left as an exercise by Benjamin and Quinn, in the following form
Step15: instantiating $m=n$ we get a closed representation of the matrix-vector product seen before
Step16: A little application of the previous identity is in the following cells
Step17: symbolic expansion of the Sum object
Step18: instantiate to let $f_{0}$ appears in the identity (our concept of based instantiation)
Step19: finally, check the consistency using the sequence of Fibonacci numbers
Step20: Looking at bottom left to top right diagonal sums
Step21: we build the sum for each diagonal, manually
Step22: looking at the sequence composed of the coefficients in the rhs of each equation, we recognize A059633. According to the reference, the sequence is built by the following recurrence
Step24: therefore we can conjecture the following identity
|
14,832 | <ASSISTANT_TASK:>
Python Code:
crisisInfo = {
"boston": {
"name": "Boston Marathon Bombing",
"time": 1366051740, # Timestamp in seconds since 1/1/1970, UTC
# 15 April 2013, 14:49 EDT -> 18:49 UTC
"directory": "boston",
"keywords": ["boston", "exploision", "bomb", "marathon"],
"box": { # Bounding box for geographic limits
"lowerLeftLon": -124.848974,
"lowerLeftLat": 24.396308,
"upperRightLon": -66.885444,
"upperRightLat": 49.384358,
}
},
"paris_hebdo": {
"name": "Charlie Hebdo Attack",
"time": 1420626600, # Timestamp in seconds since 1/1/1970, UTC
# 7 January 2015, 11:30 CET -> 10:30 UTC
"directory": "paris_hebdo",
"keywords": ["paris", "hebdo"],
"box": {
"lowerLeftLon": -5.1406,
"lowerLeftLat": 41.33374,
"upperRightLon": 9.55932,
"upperRightLat": 51.089062,
}
},
"nepal": {
"name": "Nepal Earthquake",
"time": 1429942286, # Timestamp in seconds since 1/1/1970, UTC
# 25 April 2015, 6:11:26 UTC
"directory": "nepal",
"keywords": ["nepal", "earthquake", "quake", "nsgs"],
"box": {
"lowerLeftLon": 80.0562,
"lowerLeftLat": 26.3565,
"upperRightLon": 88.1993,
"upperRightLat": 30.4330,
}
},
"paris_nov": {
"name": "Paris November Attacks",
"time": 1447446000, # Timestamp in seconds since 1/1/1970, UTC
# 13 November 2015, 20:20 UTC to 23:58 UTC
"directory": "paris_nov",
"keywords": ["paris", "shots", "explosion"],
"box": {
"lowerLeftLon": -5.1406,
"lowerLeftLat": 41.33374,
"upperRightLon": 9.55932,
"upperRightLat": 51.089062,
}
},
"brussels": {
"name": "Brussels Transit Attacks",
"time": 1458629880, # Timestamp in seconds since 1/1/1970, UTC
# 22 March 2016, 6:58 UTC to 08:11 UTC
"directory": "brussels",
"keywords": ["brussels", "bomb", "belgium", "explosion"],
"box": {
"lowerLeftLon": 2.54563,
"lowerLeftLat": 49.496899,
"upperRightLon": 6.40791,
"upperRightLat": 51.5050810,
}
},
}
print ("Available Crisis Names:")
for k in sorted(crisisInfo.keys()):
print ("\t", k)
# Replace the name below with your selected crisis
selectedCrisis = "nepal"
# Determine host-specific location of data
tweetDirectory = crisisInfo[selectedCrisis]["directory"]
tweetGlobPath = os.path.join("..", "00_data", tweetDirectory, "statuses.log.*.gz")
print ("Reading files from:", tweetGlobPath)
# Dictionary for mapping dates to data
frequencyMap = {}
# For counting tweets
globalTweetCounter = 0
# Twitter's time format, for parsing the created_at date
timeFormat = "%a %b %d %H:%M:%S +0000 %Y"
reader = codecs.getreader("utf-8")
for tweetFilePath in glob.glob(tweetGlobPath):
print ("Reading File:", tweetFilePath)
for line in gzip.open(tweetFilePath, 'rb'):
# Try to read tweet JSON into object
tweetObj = None
try:
tweetObj = json.loads(reader.decode(line)[0])
except Exception as e:
continue
# Deleted status messages and protected status must be skipped
if ( "delete" in tweetObj.keys() or "status_withheld" in tweetObj.keys() ):
continue
# Try to extract the time of the tweet
try:
currentTime = datetime.datetime.strptime(tweetObj['created_at'], timeFormat)
except:
print (line)
raise
currentTime = currentTime.replace(second=0)
# Increment tweet count
globalTweetCounter += 1
# If our frequency map already has this time, use it, otherwise add
if ( currentTime in frequencyMap.keys() ):
timeMap = frequencyMap[currentTime]
timeMap["count"] += 1
timeMap["list"].append(tweetObj)
else:
frequencyMap[currentTime] = {"count":1, "list":[tweetObj]}
# Fill in any gaps
times = sorted(frequencyMap.keys())
firstTime = times[0]
lastTime = times[-1]
thisTime = firstTime
# We want to look at per-minute data, so we fill in any missing minutes
timeIntervalStep = datetime.timedelta(0, 60) # Time step in seconds
while ( thisTime <= lastTime ):
if ( thisTime not in frequencyMap.keys() ):
frequencyMap[thisTime] = {"count":0, "list":[]}
thisTime = thisTime + timeIntervalStep
print ("Processed Tweet Count:", globalTweetCounter)
import matplotlib.pyplot as plt
crisisMoment = crisisInfo[selectedCrisis]["time"]
crisisTime = datetime.datetime.utcfromtimestamp(crisisMoment)
crisisTime = crisisTime.replace(second=0)
print ("Crisis Time:", crisisTime)
fig, ax = plt.subplots()
fig.set_size_inches(11, 8.5)
plt.title("Tweet Frequency")
# Sort the times into an array for future use
sortedTimes = sorted(frequencyMap.keys())
# What time span do these tweets cover?
print ("Time Frame:", sortedTimes[0], sortedTimes[-1])
# Get a count of tweets per minute
postFreqList = [frequencyMap[x]["count"] for x in sortedTimes]
# We'll have ticks every few minutes (more clutters the graph)
smallerXTicks = range(0, len(sortedTimes), 10)
plt.xticks(smallerXTicks, [sortedTimes[x] for x in smallerXTicks], rotation=90)
# Plot the post frequency
yData = [x if x > 0 else 0 for x in postFreqList]
ax.plot(range(len(frequencyMap)), yData, color="blue", label="Posts")
crisisXCoord = sortedTimes.index(crisisTime)
ax.scatter([crisisXCoord], [np.mean(yData)], c="r", marker="x", s=100, label="Crisis")
ax.grid(b=True, which=u'major')
ax.legend()
plt.show()
# Create maps for holding counts and tweets for each user
globalUserCounter = {}
globalUserMap = {}
# Iterate through the time stamps
for t in sortedTimes:
timeObj = frequencyMap[t]
# For each tweet, pull the screen name and add it to the list
for tweet in timeObj["list"]:
user = tweet["user"]["screen_name"]
if ( user not in globalUserCounter ):
globalUserCounter[user] = 1
globalUserMap[user] = [tweet]
else:
globalUserCounter[user] += 1
globalUserMap[user].append(tweet)
print ("Unique Users:", len(globalUserCounter.keys()))
sortedUsers = sorted(globalUserCounter, key=globalUserCounter.get, reverse=True)
print ("Top Ten Most Prolific Users:")
for u in sortedUsers[:10]:
print (u, globalUserCounter[u],
"\n\t", "Random Tweet:", globalUserMap[u][0]["text"], "\n----------")
# A map for hashtag counts
hashtagCounter = {}
# For each minute, pull the list of hashtags and add to the counter
for t in sortedTimes:
timeObj = frequencyMap[t]
for tweet in timeObj["list"]:
hashtagList = tweet["entities"]["hashtags"]
for hashtagObj in hashtagList:
# We lowercase the hashtag to avoid duplicates (e.g., #MikeBrown vs. #mikebrown)
hashtagString = hashtagObj["text"].lower()
if ( hashtagString not in hashtagCounter ):
hashtagCounter[hashtagString] = 1
else:
hashtagCounter[hashtagString] += 1
print ("Unique Hashtags:", len(hashtagCounter.keys()))
sortedHashtags = sorted(hashtagCounter, key=hashtagCounter.get, reverse=True)
print ("Top Twenty Hashtags:")
for ht in sortedHashtags[:20]:
print ("\t", "#" + ht, hashtagCounter[ht])
# A map for hashtag counts
urlCounter = {}
# For each minute, pull the list of hashtags and add to the counter
for t in sortedTimes:
timeObj = frequencyMap[t]
for tweet in timeObj["list"]:
urlList = tweet["entities"]["urls"]
for url in urlList:
urlStr = url["url"]
if ( urlStr not in urlCounter ):
urlCounter[urlStr] = 1
else:
urlCounter[urlStr] += 1
print ("Unique URLs:", len(urlCounter.keys()))
sortedUrls = sorted(urlCounter, key=urlCounter.get, reverse=True)
print ("Top Twenty URLs:")
for url in sortedUrls[:20]:
print ("\t", url, urlCounter[url])
print ("Top Expanded URLs:")
for url in sortedUrls[:10]:
try:
r = requests.get(url)
realUrl = r.url
print ("\t", url, urlCounter[url], "->", realUrl)
except:
print ("\t", url, urlCounter[url], "->", "UNKNOWN Failure")
# A map for mention counts
mentionCounter = {}
# For each minute, pull the list of mentions and add to the counter
for t in sortedTimes:
timeObj = frequencyMap[t]
for tweet in timeObj["list"]:
mentions = tweet["entities"]["user_mentions"]
for mention in mentions:
mentionStr = mention["screen_name"]
if ( mentionStr not in mentionCounter ):
mentionCounter[mentionStr] = 1
else:
mentionCounter[mentionStr] += 1
print ("Unique Mentions:", len(mentionCounter.keys()))
sortedMentions = sorted(mentionCounter, key=mentionCounter.get, reverse=True)
print ("Top Twenty Mentions:")
for mention in sortedMentions[:20]:
print ("\t", mention, mentionCounter[mention])
# A map for media counts
mediaCounter = {}
# For each minute, pull the list of media and add to the counter
for t in sortedTimes:
timeObj = frequencyMap[t]
for tweet in timeObj["list"]:
if ( "media" not in tweet["entities"] ):
continue
mediaList = tweet["entities"]["media"]
for media in mediaList:
mediaStr = media["media_url"]
if ( mediaStr not in mediaCounter ):
mediaCounter[mediaStr] = 1
else:
mediaCounter[mediaStr] += 1
print ("Unique Media:", len(mediaCounter.keys()))
sortedMedia = sorted(mediaCounter, key=mediaCounter.get, reverse=True)
print ("Top Twenty Media:")
for media in sortedMedia[:20]:
print ("\t", media, mediaCounter[media])
# A map for media counts
tweetRetweetCountMap = {}
rtList = []
# For each minute, pull the list of hashtags and add to the counter
for t in sortedTimes:
timeObj = frequencyMap[t]
for tweet in timeObj["list"]:
tweetId = tweet["id_str"]
rtCount = tweet["retweet_count"]
if ( "retweeted_status" in tweet ):
tweetId = tweet["retweeted_status"]["id_str"]
rtCount = tweet["retweeted_status"]["retweet_count"]
tweetRetweetCountMap[tweetId] = rtCount
rtList.append(rtCount)
sortedRetweets = sorted(tweetRetweetCountMap, key=tweetRetweetCountMap.get, reverse=True)
print ("Top Ten Retweets:")
for tweetId in sortedRetweets[:10]:
thisTweet = None
for t in reversed(sortedTimes):
for tweet in frequencyMap[t]["list"]:
if ( tweet["id_str"] == tweetId ):
thisTweet = tweet
break
if ( "retweeted_status" in tweet and tweet["retweeted_status"]["id_str"] == tweetId ):
thisTweet = tweet["retweeted_status"]
break
if ( thisTweet is not None ):
break
print ("\t", tweetId, tweetRetweetCountMap[tweetId], thisTweet["text"])
print ("Top Ten RECENT Retweets:")
foundTweets = 0
for tweetId in sortedRetweets:
thisTweet = None
# Find the most recent copy of the tweet
for t in reversed(sortedTimes):
for tweet in frequencyMap[t]["list"]:
if ( tweet["id_str"] == tweetId ):
thisTweet = tweet
break
if ( "retweeted_status" in tweet and tweet["retweeted_status"]["id_str"] == tweetId ):
thisTweet = tweet["retweeted_status"]
break
if ( thisTweet is not None ):
break
createdTime = datetime.datetime.strptime(thisTweet['created_at'], timeFormat)
# If tweet creation time is before the crisis, assume irrelevant
if ( createdTime < crisisTime ):
continue
print ("\t", tweetId, tweetRetweetCountMap[tweetId], thisTweet["text"])
foundTweets += 1
if ( foundTweets > 10 ):
break
# What keywords are we interested in?
targetKeywords = crisisInfo[selectedCrisis]["keywords"]
# Build an empty map for each keyword we are seaching for
targetCounts = {x:[] for x in targetKeywords}
totalCount = []
# For each minute, pull the tweet text and search for the keywords we want
for t in sortedTimes:
timeObj = frequencyMap[t]
# Temporary counter for this minute
localTargetCounts = {x:0 for x in targetKeywords}
localTotalCount = 0
for tweetObj in timeObj["list"]:
tweetString = tweetObj["text"].lower()
localTotalCount += 1
# Add to the counter if the target keyword is in this tweet
for keyword in targetKeywords:
if ( keyword in tweetString ):
localTargetCounts[keyword] += 1
# Add the counts for this minute to the main counter
totalCount.append(localTotalCount)
for keyword in targetKeywords:
targetCounts[keyword].append(localTargetCounts[keyword])
# Now plot the total frequency and frequency of each keyword
fig, ax = plt.subplots()
fig.set_size_inches(11, 8.5)
plt.title("Tweet Frequency")
plt.xticks(smallerXTicks, [sortedTimes[x] for x in smallerXTicks], rotation=90)
ax.semilogy(range(len(frequencyMap)), totalCount, label="Total")
ax.scatter([crisisXCoord], [100], c="r", marker="x", s=100, label="Crisis")
for keyword in targetKeywords:
ax.semilogy(range(len(frequencyMap)), targetCounts[keyword], label=keyword)
ax.legend()
ax.grid(b=True, which=u'major')
plt.show()
# A frequency map for timestamps to geo-coded tweets
geoFrequencyMap = {}
geoCount = 0
# Save only those tweets with tweet['coordinate']['coordinate'] entity
for t in sortedTimes:
geos = list(filter(lambda tweet: tweet["coordinates"] != None and
"coordinates" in tweet["coordinates"],
frequencyMap[t]["list"]))
geoCount += len(geos)
# Add to the timestamp map
geoFrequencyMap[t] = {"count": len(geos), "list": geos}
print ("Number of Geo Tweets:", geoCount)
fig, ax = plt.subplots()
fig.set_size_inches(11, 8.5)
plt.title("Geo Tweet Frequency")
gpsFreqList = [geoFrequencyMap[x]["count"] for x in sortedTimes]
postFreqList = [frequencyMap[x]["count"] for x in sortedTimes]
plt.xticks(smallerXTicks, [sortedTimes[x] for x in smallerXTicks], rotation=90)
xData = range(len(geoFrequencyMap))
gpsYData = [x if x > 0 else 0 for x in gpsFreqList]
freqYData = [x if x > 0 else 0 for x in postFreqList]
ax.semilogy(xData, freqYData, color="blue", label="Posts")
ax.semilogy(xData, gpsYData, color="green", label="GPS Posts")
ax.scatter([crisisXCoord], [100], c="r", marker="x", s=100, label="Crisis")
ax.grid(b=True, which=u'major')
ax.legend()
plt.show()
import matplotlib
import functools
from mpl_toolkits.basemap import Basemap
# Create a list of all geo-coded tweets
tmpGeoList = [geoFrequencyMap[t]["list"] for t in sortedTimes]
geoTweets = functools.reduce(lambda x, y: x + y, tmpGeoList)
# For each geo-coded tweet, extract its GPS coordinates
geoCoord = [x["coordinates"]["coordinates"] for x in geoTweets]
# Now we build a map of the world using Basemap
land_color = 'lightgray'
water_color = 'lightblue'
fig, ax = plt.subplots(figsize=(24,24))
worldMap = Basemap(projection='merc', llcrnrlat=-80, urcrnrlat=80,
llcrnrlon=-180, urcrnrlon=180, resolution='l')
worldMap.fillcontinents(color=land_color, lake_color=water_color, zorder=1)
worldMap.drawcoastlines()
worldMap.drawparallels(np.arange(-90.,120.,30.))
worldMap.drawmeridians(np.arange(0.,420.,60.))
worldMap.drawmapboundary(fill_color=water_color, zorder=0)
ax.set_title('World Tweets')
# Convert points from GPS coordinates to (x,y) coordinates
convPoints = [worldMap(p[0], p[1]) for p in geoCoord]
x = [p[0] for p in convPoints]
y = [p[1] for p in convPoints]
worldMap.scatter(x, y, s=100, marker='x', color="red", zorder=2)
plt.show()
# Get the bounding box for our crisis
bBox = crisisInfo[selectedCrisis]["box"]
fig, ax = plt.subplots(figsize=(11,8.5))
# Create a new map to hold the shape file data
targetMap = Basemap(llcrnrlon=bBox["lowerLeftLon"],
llcrnrlat=bBox["lowerLeftLat"],
urcrnrlon=bBox["upperRightLon"],
urcrnrlat=bBox["upperRightLat"],
projection='merc',
resolution='i', area_thresh=10000)
targetMap.fillcontinents(color=land_color, lake_color=water_color,
zorder=1)
targetMap.drawcoastlines()
targetMap.drawparallels(np.arange(-90.,120.,30.))
targetMap.drawmeridians(np.arange(0.,420.,60.))
targetMap.drawmapboundary(fill_color=water_color, zorder=0)
targetMap.drawcountries()
# Now we build the polygon for filtering
# Convert from lon, lat of lower-left to x,y coordinates
llcCoord = targetMap(bBox["lowerLeftLon"], bBox["lowerLeftLat"])
# Same for upper-right corner
urcCoord = targetMap(bBox["upperRightLon"], bBox["upperRightLat"])
# Now make the polygon we'll us for filtering
boxPoints = np.array([[llcCoord[0], llcCoord[1]],
[llcCoord[0], urcCoord[1]],
[urcCoord[0], urcCoord[1]],
[urcCoord[0], llcCoord[1]]])
boundingBox = matplotlib.patches.Polygon(boxPoints)
# Maps of timestamps to tweets for inside/outside Ferguson
inTargetFreqMap = {}
plottablePointsX = []
plottablePointsY = []
# For each geo-coded tweet, extract coordinates and convert
# them to the Basemap space
for t in sortedTimes:
geos = geoFrequencyMap[t]["list"]
convPoints = [(targetMap(tw["coordinates"]["coordinates"][0], tw["coordinates"]["coordinates"][1]), tw) for tw in geos]
# Local counters for this time
inTargetFreqMap[t] = {"count": 0, "list": []}
# For each point, check if it is within the bounding box or not
for point in convPoints:
x = point[0][0]
y = point[0][1]
if ( boundingBox.contains_point((x, y))):
inTargetFreqMap[t]["list"].append(point[1])
plottablePointsX.append(x)
plottablePointsY.append(y)
# Plot points in our target
targetMap.scatter(plottablePointsX, plottablePointsY, s=100, marker='x', color="red", zorder=2)
# Count the number of tweets that fall in the area
targetTweetCount = np.sum([len(inTargetFreqMap[t]["list"]) for t in sortedTimes])
print ("Tweets in Target Area:", targetTweetCount)
print ("Tweets outside:", (geoCount - targetTweetCount))
plt.show()
# Merge our list of relevant tweets
geoRelevantTweets = [tw for x in sortedTimes for tw in inTargetFreqMap[x]["list"]]
print("Time of Crisis:", crisisTime)
# Print the first few tweets
for tweet in geoRelevantTweets[:10]:
print("Tweet By:", tweet["user"]["screen_name"])
print("\t", "Tweet Text:", tweet["text"])
print("\t", "Tweet Time:", tweet["created_at"])
print("\t", "Source:", tweet["source"])
print("\t", "Retweets:", tweet["retweet_count"])
print("\t", "Favorited:", tweet["favorite_count"])
print("\t", "Twitter's Guessed Language:", tweet["lang"])
if ( "place" in tweet ):
print("\t", "Tweet Location:", tweet["place"]["full_name"])
print("-----")
from IPython.display import display
from IPython.display import Image
geoTweetsWithMedia = list(filter(lambda tweet: "media" in tweet["entities"], geoRelevantTweets))
print ("Tweets with Media:", len(geoTweetsWithMedia))
if ( len(geoTweetsWithMedia) == 0 ):
print ("Sorry, not tweets with media...")
for tweet in geoTweetsWithMedia:
imgUrl = tweet["entities"]["media"][0]["media_url"]
print (tweet["text"])
display(Image(url=imgUrl))
# What keywords are we interested in?
targetKeywords = crisisInfo[selectedCrisis]["keywords"]
# Map for storing topically relevant data
topicRelevantMap = {}
# For each minute, pull the tweet text and search for the keywords we want
for t in sortedTimes:
timeObj = frequencyMap[t]
topicRelevantMap[t] = {"count": 0, "list": []}
for tweetObj in timeObj["list"]:
tweetString = tweetObj["text"].lower()
# Add to the counter if the target keyword is in this tweet
for keyword in targetKeywords:
if ( keyword.lower() in tweetString ):
topicRelevantMap[t]["list"].append(tweetObj)
topicRelevantMap[t]["count"] += 1
break
# Now plot the total frequency and frequency of each keyword
fig, ax = plt.subplots()
fig.set_size_inches(11, 8.5)
plt.title("Tweet Frequency")
plt.xticks(smallerXTicks, [sortedTimes[x] for x in smallerXTicks], rotation=90)
ax.semilogy(range(len(frequencyMap)), totalCount, label="Total")
ax.scatter([crisisXCoord], [100], c="r", marker="x", s=100, label="Crisis")
relYData = [topicRelevantMap[t]["count"] for t in sortedTimes]
ax.semilogy(range(len(relYData)), relYData, label="Relevant")
ax.legend()
ax.grid(b=True, which=u'major')
plt.show()
allTweets = [x for t in sortedTimes for x in topicRelevantMap[t]["list"]]
# get the top retweeted tweets
onlyRetweets = list(filter(lambda x: "retweeted_status" in x, allTweets))  # materialize so it can be reused below
topTweets = sorted(onlyRetweets, key=lambda x: x["retweeted_status"]["retweet_count"],
reverse=True)[:10]
print("Top Retweets:")
for x in topTweets:
print(x["id"], x["user"]["screen_name"], x["retweeted_status"]["retweet_count"], x["text"])
# get tweets from users with the msot followers
topTweets = sorted(allTweets, key=lambda x: x["user"]["followers_count"], reverse=True)[:10]
print()
print("Top Accounts:")
for x in topTweets:
print(x["id"], x["user"]["screen_name"], x["user"]["followers_count"], x["text"])
# get the top retweeted tweets but only from verified accounts
verifiedTweets = filter(lambda x: x["retweeted_status"]["user"]["verified"], onlyRetweets)
topTweets = sorted(verifiedTweets, key=lambda x: x["retweeted_status"]["retweet_count"],
reverse=True)[:10]
print()
print("Top Retweets from Verified Accounts:")
for x in topTweets:
print(x["id"], x["user"]["screen_name"], x["retweet_count"], x["text"])
# A frequency map for timestamps to geo-coded tweets
relGeoFreqMap = {}
relGeoCount = 0
# Save only those tweets with tweet['coordinate']['coordinate'] entity
for t in sortedTimes:
geos = list(filter(lambda tweet: tweet["coordinates"] != None and
"coordinates" in tweet["coordinates"],
topicRelevantMap[t]["list"]))
relGeoCount += len(geos)
# Add to the timestamp map
relGeoFreqMap[t] = {"count": len(geos), "list": geos}
print ("Number of Relevant Geo Tweets:", relGeoCount)
# Create a list of all geo-coded tweets
tmpGeoList = [relGeoFreqMap[t]["list"] for t in sortedTimes]
relGeoTweets = functools.reduce(lambda x, y: x + y, tmpGeoList)
# For each geo-coded tweet, extract its GPS coordinates
relGeoCoord = [x["coordinates"]["coordinates"] for x in relGeoTweets]
fig, ax = plt.subplots(figsize=(24,24))
worldMap = Basemap(projection='merc', llcrnrlat=-80, urcrnrlat=80,
llcrnrlon=-180, urcrnrlon=180, resolution='l')
worldMap.fillcontinents(color=land_color, lake_color=water_color, zorder=1)
worldMap.drawcoastlines()
worldMap.drawparallels(np.arange(-90.,120.,30.))
worldMap.drawmeridians(np.arange(0.,420.,60.))
worldMap.drawmapboundary(fill_color=water_color, zorder=0)
worldMap.drawcountries()
ax.set_title('Global Relevant Tweets')
# Convert points from GPS coordinates to (x,y) coordinates
allConvPoints = [worldMap(p[0], p[1]) for p in geoCoord]
x = [p[0] for p in allConvPoints]
y = [p[1] for p in allConvPoints]
worldMap.scatter(x, y, s=100, marker='x', color="blue", zorder=2)
# Convert points from GPS coordinates to (x,y) coordinates
relConvPoints = [worldMap(p[0], p[1]) for p in relGeoCoord]
x = [p[0] for p in relConvPoints]
y = [p[1] for p in relConvPoints]
worldMap.scatter(x, y, s=100, marker='x', color="red", zorder=2)
plt.show()
from textblob import TextBlob
# Sentiment values
polarVals = []
objVals = []
# For each minute, pull the tweet text and search for the keywords we want
for t in sortedTimes:
timeObj = topicRelevantMap[t]
# For calculating averages
localPolarVals = []
localObjVals = []
for tweetObj in timeObj["list"]:
tweetString = tweetObj["text"].lower()
blob = TextBlob(tweetString)
polarity = blob.sentiment.polarity
objectivity = blob.sentiment.subjectivity
localPolarVals.append(polarity)
localObjVals.append(objectivity)
# Add data to the polarity and objectivity measure arrays
if ( len(timeObj["list"]) > 10 ):
polarVals.append(np.mean(localPolarVals))
objVals.append(np.mean(localObjVals))
else:
polarVals.append(0.0)
objVals.append(0.0)
# Now plot this sentiment data
fig, ax = plt.subplots()
fig.set_size_inches(11, 8.5)
plt.title("Sentiment")
plt.xticks(smallerXTicks, [sortedTimes[x] for x in smallerXTicks], rotation=90)
xData = range(len(sortedTimes))
ax.scatter([crisisXCoord], [0], c="r", marker="x", s=100, label="Crisis")
# Polarity is scaled [-1, 1], for negative and positive polarity
ax.plot(xData, polarVals, label="Polarity")
# Subjetivity is scaled [0, 1], with 0 = objective, 1 = subjective
ax.plot(xData, objVals, label="Subjectivity")
ax.legend()
ax.grid(b=True, which=u'major')
plt.show()
import nltk
nltk.download("vader_lexicon")
import nltk.sentiment.util
import nltk.sentiment.vader
vader = nltk.sentiment.vader.SentimentIntensityAnalyzer()
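# Note (added): polarity_scores() returns a dict with 'neg', 'neu', 'pos', and 'compound' entries;
# only the normalized 'compound' score (in [-1, 1]) is used below.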
# Sentiment values
polarVals = []
# For each minute, pull the tweet text and search for the keywords we want
for t in sortedTimes:
timeObj = topicRelevantMap[t]
# For calculating averages
localPolarVals = []
for tweetObj in timeObj["list"]:
tweetString = tweetObj["text"].lower()
polarity = vader.polarity_scores(tweetString)["compound"]
localPolarVals.append(polarity)
# Add data to the polarity and objectivity measure arrays
if ( len(timeObj["list"]) > 10 ):
polarVals.append(np.mean(localPolarVals))
else:
polarVals.append(0.0)
# Now plot this sentiment data
fig, ax = plt.subplots()
fig.set_size_inches(11, 8.5)
plt.title("Sentiment")
plt.xticks(smallerXTicks, [sortedTimes[x] for x in smallerXTicks], rotation=90)
xData = range(len(sortedTimes))
ax.scatter([crisisXCoord], [0], c="r", marker="x", s=100, label="Crisis")
# Polarity is scaled [-1, 1], for negative and positive polarity
ax.plot(xData, polarVals, label="Polarity")
ax.legend()
ax.grid(b=True, which=u'major')
plt.ylim((-0.3, 0.55))
plt.show()
# Gotta pull in a bunch of packages for this
import gensim.models.ldamulticore
import gensim.matutils
import sklearn.cluster
import sklearn.feature_extraction
import sklearn.feature_extraction.text
import sklearn.metrics
import sklearn.preprocessing
nltk.download("stopwords")
from nltk.corpus import stopwords
# Get all tweets and conver to lowercase
allTweetText = [x["text"].lower() for t in sortedTimes for x in topicRelevantMap[t]["list"]]
print ("All Tweet Count:", len(allTweetText))
enStop = stopwords.words('english')
esStop = stopwords.words('spanish')
# Skip stop words, retweet signs, @ symbols, and URL headers
stopList = enStop + esStop + ["http", "https", "rt", "@", ":", "co"]
vectorizer = sklearn.feature_extraction.text.CountVectorizer(strip_accents='unicode',
tokenizer=None,
token_pattern='(?u)#?\\b\\w+[\'-]?\\w+\\b',
stop_words=stopList)
# Analyzer
analyze = vectorizer.build_analyzer()
# Create a vectorizer for all our content
vectorizer.fit(allTweetText)
# Get all the words in our text
names = vectorizer.get_feature_names()
# Create a map for vectorizer IDs to words
id2WordDict = dict(zip(range(len(vectorizer.get_feature_names())), names))
# Create a corpus for
corpus = vectorizer.transform(allTweetText)
gsCorpus = gensim.matutils.Sparse2Corpus(corpus, documents_columns=False)
lda = gensim.models.LdaMulticore(gsCorpus,
id2word=id2WordDict,
num_topics=20,
passes=2) # ++ passes for better results
ldaTopics = lda.show_topics(num_topics=10,
num_words=10,
formatted=False)
for (i, tokenList) in ldaTopics:
print ("Topic %d:" % i, ' '.join([pair[0] for pair in tokenList]))
docArrays = filter(lambda x: len(x) > 4, [y for x in allTweetText for y in analyze(x)])
fd = nltk.FreqDist(docArrays)
print ("Most common from analyzer:")
for x in fd.most_common(20):
print (x[0], x[1])
import networkx as nx
# We'll use a directed graph since mentions/retweets are directional
graph = nx.DiGraph()
for tweet in [x for t in sortedTimes for x in topicRelevantMap[t]["list"]]:
userName = tweet["user"]["screen_name"]
graph.add_node(userName)
mentionList = tweet["entities"]["user_mentions"]
for otherUser in mentionList:
otherUserName = otherUser["screen_name"]
if ( graph.has_node(otherUserName) == False ):
graph.add_node(otherUserName)
graph.add_edge(userName, otherUserName)
print ("Number of Users:", len(graph.node))
# Now we prune for performance reasons
# remove all nodes with few edges
nodeList = [n for n,d in graph.degree_iter() if d<2]
graph.remove_nodes_from(nodeList)
print ("Number of Remaining Users:", len(graph.node))
# THis may take a while
pageRankList = nx.pagerank_numpy(graph)
highRankNodes = sorted(pageRankList.keys(), key=pageRankList.get, reverse=True)
for x in highRankNodes[:20]:
print (x, pageRankList[x])
plt.figure(figsize=(8,8))
pos = nx.spring_layout(graph, scale=100, iterations=100, k=0.2)
nx.draw(graph,
pos,
node_color='#A0CBE2',
width=1,
with_labels=False,
node_size=50)
hrNames = highRankNodes[:20]
hrDict = dict(zip(hrNames, hrNames))
hrValues = [pageRankList[x] for x in hrNames]
nx.draw_networkx_nodes(graph,pos,nodelist=hrNames,
node_size=200,
node_color=hrValues,
cmap=plt.cm.Reds_r)
nx.draw_networkx_labels(graph,
pos,
labels=hrDict,
fontsize=36,
font_color="g")
plt.axis('off')
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Choose Your Crisis
Step2: <hr>
Step3: <hr>
Step4: Top Twitter Users
Step5: Many of these tweets are not relevant to the event at hand.
Step6: We can do the same with URLs to find the most shared URL.
Step7: Note how each URL is shortened using Twitter's shortener.
Step8: Since URLs and Hashtags are both entities, we can do the same for other entities, like mentions and media.
Step9: We can some data is relevant, both in pictures and in hashtags and URLs.
Step10: Retweets seem to be dominated by recent elements.
Step11: Event Detection w/ Keyword Frequency
Step12: <hr>
Step13: GPS Frequency
Step14: Plotting GPS Data
Step15: Filtering By Location
Step16: Geographically Relevant Tweet Content
Step17: Media from Within Target
Step18: Topic 6
Step19: Highly Important Relevant Tweets
Step20: Quick Geo-data Comparison
Step21: Observation
Step22: Sentiment Analysis with Vader
Step23: Topic 7
Step24: We first extract all relevant tweets' text for building our models.
Step25: Now we build a list of stop words (words we don't care about) and build a feature generator (the vectorizer) that assigns integer keys to tokens and counts the number of each token.
Step26: We then use the vectorizer to transform our tweet text into a feature set, which essentially is a table with rows of tweets, columns for each keyword, and each cell is the number of times that keyword appears in that tweet.
Step27: We can also be a little more strict and get rid of some noise by looking only at words with more than X characters.
Step28: Topic 8
Step29: Central Users
|
14,833 | <ASSISTANT_TASK:>
Python Code:
b = phoebe.default_binary()
# set parameter values
b.set_value('q', value = 0.6)
b.set_value('incl', component='binary', value = 84.5)
b.set_value('ecc', 0.2)
b.set_value('per0', 63.7)
b.set_value('sma', component='binary', value= 7.3)
b.set_value('vgamma', value= -32.84)
# add an rv dataset
b.add_dataset('rv', compute_phases=phoebe.linspace(0,1,101))
#compute the model
b.run_compute()
# extract the arrays from the model that we'll use as observables in the next step and add noise to the rvs
times = b.get_value('times', context='model', component='primary', dataset='rv01')
np.random.seed(0) # to ensure reproducibility with added noise
rvs1 = b.get_value('rvs', component='primary', context='model', dataset='rv01') + np.random.normal(size=times.shape)
rvs2 = b.get_value('rvs', component='secondary', context='model', dataset='rv01') + np.random.normal(size=times.shape)
sigmas_rv = np.ones_like(times) * 2
b = phoebe.default_binary()
b.add_dataset('rv')
b.set_value_all('times', dataset='rv01', value = times)
b.set_value('rvs', component='primary', dataset='rv01', value = rvs1)
b.set_value('rvs', component='secondary', dataset='rv01', value = rvs2)
b.set_value_all('sigmas', dataset='rv01', value = sigmas_rv)
b.run_compute()
b.plot(legend=True, show=True)
b.add_solver('estimator.rv_geometry', solver='rvgeom')
print(b['rvgeom'])
b.run_solver('rvgeom', solution='rvgeom_solution')
print(b['rvgeom_solution'])
print(b['value@fitted_twigs@rvgeom_solution'])
print(b['value@fitted_values@rvgeom_solution'])
b.flip_constraint('asini@binary', solve_for='sma@binary')
b.adopt_solution('rvgeom_solution')
b.run_compute()
b.plot(x='phase', show=True)
b = phoebe.default_binary()
b.add_dataset('rv', component='primary', times=times, rvs = rvs1, sigmas=sigmas_rv)
b.run_compute()
b.plot(legend=True, show=True)
b.add_solver('estimator.rv_geometry', solver='rvgeom')
b.run_solver('rvgeom', solution='rvgeom_solution')
print(b['value@fitted_twigs@rvgeom_solution'])
print(b['value@fitted_values@rvgeom_solution'])
b.flip_constraint('asini@primary', solve_for='sma@binary')
b.adopt_solution('rvgeom_solution')
b.run_compute()
b.plot(x='phase', show=True)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initialize the bundle
Step2: rv_geometry
Step3: The rv_geometry estimator is meant to provide an efficient starting point for q, vgamma, asini, esinw and ecosw. Similar to the light curve estimators, it will by default bin the input data if the number of data points is larger than phase_nbins and will expose the analytical (in this case, Keplerian orbit) models that were fit to the data.
Step4: The solution, as expected returns the fitted values and the analytic models we fit to get them, which can be turned off by setting expose_model to False. Let's inspect the fitted twigs and values before adopting the solution
Step5: As we can see all values look okay, and we have asini@binary in the twigs, which means we'll need to flip the asini constraint to be able to set it with adopt_solution()
Step6: * one RV
Step7: If we compare the fitted_twigs from this solution with our two-RV solution, we'll notice two things
|
14,834 | <ASSISTANT_TASK:>
Python Code:
!ls | grep "mo"
!wc -l anonymous-msweb-preprocessed.data && echo
!head anonymous-msweb-preprocessed.data
!cp anonymous-msweb-preprocessed.data log.txt
!cat mostFrequentVisitors.txt | cut -f 1,2 -d',' > urls.txt
!wc -l urls.txt && echo
!head urls.txt
%%writefile join.py
from mrjob.job import MRJob
from mrjob.step import MRStep
# Avoid broken pipe error
from signal import signal, SIGPIPE, SIG_DFL
signal(SIGPIPE,SIG_DFL)
class Join(MRJob):
def configure_options(self):
super(Join, self).configure_options()
self.add_passthrough_option(
'--join',
default="left",
help="Options: left, inner, right")
def mapper_init(self):
self.join = self.options.join
self.urls_used = set()
self.urls = {}
try:
open("urls.txt")
filename = "urls.txt"
except FileNotFoundError:
filename = "limited_urls.txt"
with open(filename) as urls:
for line in urls:
url, key = line.strip().replace('"',"").split(",")
self.urls[key] = url
def mapper(self, _, lines):
try:
url = lines[2:6]
if self.join in ["inner", "left"]:
yield (lines, self.urls[url])
elif self.join in ["right"]:
yield (self.urls[url], lines)
self.urls_used.add(url)
except KeyError:
if self.join in ["inner", "right"]:
pass
else:
yield (lines, "None")
def mapper_final(self):
for key, value in self.urls.items():
if key not in self.urls_used:
yield (self.urls[key], "*")
def reducer(self, url, values):
quick_stash = 0
for val in values:
if val != "*":
quick_stash += 1
yield (val, url)
if quick_stash == 0:
yield ("None", url)
def steps(self):
join = self.options.join
if join in ["inner", "left"]:
mrsteps = [MRStep(mapper_init=self.mapper_init,
mapper=self.mapper)]
if join == "right":
mrsteps = [MRStep(mapper_init=self.mapper_init,
mapper=self.mapper,
mapper_final=self.mapper_final,
reducer=self.reducer)]
return mrsteps
if __name__ == "__main__":
Join.run()
!head -n 5 urls.txt > limited_urls.txt
!head log.txt | python join.py --file limited_urls.txt --join left -q
!head log.txt | python join.py --file limited_urls.txt --join inner -q
!head -n 100 log.txt | python join.py --file urls.txt --join right -r local -q | head -n 15
!head -n 50 log.txt | python join.py --file limited_urls.txt --join right -r local -q
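# Recap of the three join modes implemented above (a reading of the code, not captured notebook output):
#   --join left  : every log line is emitted, paired with its URL string from urls.txt, or with "None"
#                  when the key is not present in urls.txt
#   --join inner : only log lines whose key appears in urls.txt are emitted
#   --join right : every URL in urls.txt is emitted, paired with its matching log lines, or with
#                  "None" when it was never visited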
%%writefile mini_5gram.txt
A BILL FOR ESTABLISHING RELIGIOUS 59 59 54
A Biography of General George 92 90 74
A Case Study in Government 102 102 78
A Case Study of Female 447 447 327
A Case Study of Limited 55 55 43
A Child's Christmas in Wales 1099 1061 866
A Circumstantial Narrative of the 62 62 50
A City by the Sea 62 60 49
A Collection of Fairy Tales 123 117 80
A Collection of Forms of 116 103 82
%%writefile atlas.txt
atlas boon 50 50 50
boon cava dipped 10 10 10
atlas dipped 15 15 15
with open("mini_stripes.txt", "w") as f:
f.writelines([
'"DocA"\t{"X":20, "Y":30, "Z":5}\n',
'"DocB"\t{"X":100, "Y":20}\n',
'"DocC"\t{"M":5, "N":20, "Z":5, "Y":1}\n'
])
!cat mini_stripes.txt
%%writefile MakeStripes.py
from mrjob.job import MRJob
from collections import Counter
class MakeStripes(MRJob):
def mapper(self, _, lines):
terms, term_count, page_count, book_count = lines.split("\t")
terms = terms.split()
term_count = int(term_count)
for item in terms:
yield (item, {term:term_count for term in terms if term != item})
def combiner(self, keys, values):
values_sum = Counter()
for val in values:
values_sum += Counter(val)
yield keys, dict(values_sum)
def reducer(self, keys, values):
values_sum = Counter()
for val in values:
values_sum += Counter(val)
yield keys, dict(values_sum)
if __name__ == "__main__":
MakeStripes.run()
%%writefile atlas_desired_results.txt
"atlas" {"dipped": 15, "boon": 50}
"boon" {"atlas": 50, "dipped": 10, "cava": 10}
"cava" {"dipped": 10, "boon": 10}
"dipped" {"atlas": 15, "boon": 10, "cava": 10}
!cat atlas.txt | python MakeStripes.py -q > atlas_stripes.txt
!cat atlas_stripes.txt
%%writefile InvertIndex.py
from mrjob.job import MRJob
from mrjob.protocol import JSONProtocol
from collections import Counter
class InvertIndex(MRJob):
MRJob.input_protocol = JSONProtocol
def mapper(self, key, words):
n_words = len(words)
for word in words:
yield (word, {key:n_words})
def combiner(self, keys, values):
values_sum = Counter()
for val in values:
values_sum += Counter(val)
yield keys, dict(values_sum)
def reducer(self, keys, values):
values_sum = Counter()
for val in values:
values_sum += Counter(val)
yield keys, dict(values_sum)
if __name__ == "__main__":
InvertIndex.run()
!cat mini_stripes.txt | python InvertIndex.py -q > mini_stripes_inverted.txt
!cat mini_stripes_inverted.txt
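# For reference (worked out by hand from mini_stripes.txt; the cell output itself is not captured here),
# the inverted index maps each word to {document: total words in that document}:
#   "M"  {"DocC": 4}
#   "N"  {"DocC": 4}
#   "X"  {"DocA": 3, "DocB": 2}
#   "Y"  {"DocA": 3, "DocB": 2, "DocC": 4}
#   "Z"  {"DocA": 3, "DocC": 4}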
!cat atlas_stripes.txt | python InvertIndex.py -q > atlas_inverted.txt
!cat atlas_inverted.txt
%%writefile Similarity.py
from mrjob.job import MRJob
from mrjob.protocol import JSONProtocol
from itertools import combinations
from statistics import mean
class Similarity(MRJob):
MRJob.input_protocol = JSONProtocol
def mapper(self, key_term, docs):
doc_names = docs.keys()
for doc_pairs in combinations(sorted(list(doc_names)), 2):
yield (doc_pairs, 1)
for name in doc_names:
yield (name, 1)
def combiner(self, key, value):
yield (key, sum(value))
def reducer_init(self):
self.words = {}
self.results = []
def reducer(self, doc_or_docs, count):
if isinstance(doc_or_docs, str):
self.words[doc_or_docs] = sum(count)
else:
d1, d2 = doc_or_docs
d1_n_words, d2_n_words = self.words[d1], self.words[d2]
intersection = sum(count)
jaccard = round(intersection/(d1_n_words + d2_n_words - intersection), 3)
cosine = round(intersection/(d1_n_words**.5 * d2_n_words**.5), 3)
dice = round(2*intersection/(d1_n_words + d2_n_words), 3)
overlap = round(intersection/min(d1_n_words, d2_n_words), 3)
average = round(mean([jaccard, cosine, dice, overlap]), 3)
self.results.append([doc_or_docs, {"jacc":jaccard, "cos":cosine,
"dice":dice, "ol":overlap, "ave":average}])
def reducer_final(self):
for doc, result in sorted(self.results, key=lambda x: x[1]["ave"], reverse=True):
yield (doc, result)
if __name__ == "__main__":
Similarity.run()
!cat mini_stripes_inverted.txt | python Similarity.py -q --jobconf mapred.reduce.tasks=1
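# Worked example for one pair (computed by hand from the inverted index above, for reference):
# DocA has 3 words, DocB has 2, and they share 2 of them ("X" and "Y"), so
#   jaccard = 2/(3+2-2) = 0.667, cosine = 2/sqrt(3*2) = 0.816,
#   dice = 2*2/(3+2) = 0.8,      overlap = 2/min(3,2) = 1.0,
# giving an average of about 0.821, which is why (DocA, DocB) should come out ranked first.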
!cat atlas_inverted.txt | python Similarity.py -q --jobconf mapred.reduce.tasks=1
!cat atlas-boon-systems-test.txt | python MakeStripes.py -q | python InvertIndex.py -q | python Similarity.py -q --jobconf mapred.reduce.tasks=1
%%writefile GetIndexandOtherWords.py
import heapq
from re import findall
from mrjob.job import MRJob
from mrjob.step import MRStep
class TopList(list):
def __init__(self, max_size, num_position=0):
        """
        Just like a list, except the append method adds the new value to the
        list only if it is larger than the smallest value (or if the size of
        the list is less than max_size).
        If each element of the list is an int or float, uses that value for
        comparison. If the elements in the list are lists or tuples, uses the
        list_position element of the list or tuple for the comparison.
        """
self.max_size = max_size
self.pos = num_position
def _get_key(self, x):
return x[self.pos] if isinstance(x, (list, tuple)) else x
def append(self, val):
if len(self) < self.max_size:
heapq.heappush(self, val)
elif self._get_key(self[0]) < self._get_key(val):
heapq.heapreplace(self, val)
def final_sort(self):
return sorted(self, key=self._get_key, reverse=True)
class GetIndexandOtherWords(MRJob):
    """
    Usage: python GetIndexandOtherWords.py --index-range 9000-10000 --top-n-words 10000 --use-term-counts True

    Given n-gram formatted data, outputs a file of the form:

    index    term
    index    term
    ...
    word     term
    word     term
    ...

    Where there would be 1001 index words and 10000 total words. Each word would be ranked based
    on either the term count listed in the Google n-gram data (i.e. the counts found in the
    underlying books) or the ranks would be based on the word count of the n-grams in the actual
    dataset (i.e. ignore the numbers/counts associated with each n-gram and count each n-gram
    exactly once).
    """
def configure_options(self):
super(GetIndexandOtherWords, self).configure_options()
self.add_passthrough_option(
'--index-range',
default="9-10",
help="Specify the range of the index words. ex. 9-10 means the ninth and " +
"tenth most popular words will serve as the index")
self.add_passthrough_option(
'--top-n-words',
default="10",
help="Specify the number of words to output in all")
self.add_passthrough_option(
'--use-term-counts',
default="True",
choices=["True","False"],
help="When calculating the most frequent words, choose whether to count " +
"each word based on the term counts reported by Google or just based on " +
"the number of times the word appears in an n-gram")
self.add_passthrough_option(
'--return-counts',
default="False",
choices=["True","False"],
help="The final output includes the counts of each word")
def mapper_init(self):
# Ensure command line options are sane
top_n_words = int(self.options.top_n_words)
last_index_word = int(self.options.index_range.split("-")[1])
if top_n_words < last_index_word:
            raise ValueError("--top-n-words value (currently %d) must be equal to or greater than "
                             "--index-range value (currently %d)." % (top_n_words, last_index_word))
self.stop_words = set(['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves',
'you', 'your', 'yours', 'yourself', 'yourselves', 'he', 'him',
'his', 'himself', 'she', 'her', 'hers', 'herself', 'it',
'its', 'itself', 'they', 'them', 'their', 'theirs',
'themselves', 'what', 'which', 'who', 'whom', 'this', 'that',
'these', 'those', 'am', 'is', 'are', 'was', 'were', 'be',
'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does',
'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or',
'because', 'as', 'until', 'while', 'of', 'at', 'by', 'for',
'with', 'about', 'against', 'between', 'into', 'through',
'during', 'before', 'after', 'above', 'below', 'to', 'from',
'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under',
'again', 'further', 'then', 'once', 'here', 'there', 'when',
'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few',
'more', 'most', 'other', 'some', 'such', 'no', 'nor', 'not',
'only', 'own', 'same', 'so', 'than', 'too', 'very', 's', 't',
'can', 'will', 'just', 'don', 'should', 'now'])
def mapper(self, _, lines):
terms, term_count, page_count, book_count = lines.split("\t")
# Either use the ngram term count for the count or count each word just once
if self.options.use_term_counts == "True":
term_count = int(term_count)
else:
term_count = 1
# Iterate through each term. Skip stop words
for term in findall(r'[a-z]+', terms.lower()):
if term in self.stop_words:
pass
else:
yield (term, term_count)
def combiner(self, term, counts):
yield (term, sum(counts))
def reducer_init(self):
        """
        Accumulates the top X words and yields them. Note: should only use if
        you want to emit a reasonable amount of top words (i.e. an amount that
        could fit on a single computer.)
        """
self.top_n_words = int(self.options.top_n_words)
self.TopTerms = TopList(self.top_n_words, num_position=1)
def reducer(self, term, counts):
self.TopTerms.append((term, sum(counts)))
def reducer_final(self):
for pair in self.TopTerms:
yield pair
def mapper_single_key(self, term, count):
        """Send all the data to a single reducer"""
yield (1, (term, count))
def reducer_init_top_vals(self):
# Collect top words
self.top_n_words = int(self.options.top_n_words)
self.TopTerms = TopList(self.top_n_words, num_position=1)
# Collect index words
self.index_range = [int(num) for num in self.options.index_range.split("-")]
self.index_low, self.index_high = self.index_range
# Control if output shows counts or just words
self.return_counts = self.options.return_counts == "True"
def reducer_top_vals(self, _, terms):
for term in terms:
self.TopTerms.append(term)
def reducer_final_top_vals(self):
TopTerms = self.TopTerms.final_sort()
if self.return_counts:
# Yield index words
for term in TopTerms[self.index_low-1:self.index_high]:
yield ("index", term)
# Yield all words
for term in TopTerms:
yield ("words", term)
else:
# Yield index words
for term in TopTerms[self.index_low-1:self.index_high]:
yield ("index", term[0])
# Yield all words
for term in TopTerms:
yield ("words", term[0])
def steps(self):
        """
        Step one: Yield top n-words from each reducer. Means dataset size is
                  n-words * num_reducers. Guarantees overall top n-words are
                  sent to the next step.
        """
mr_steps = [MRStep(mapper_init=self.mapper_init,
mapper=self.mapper,
combiner=self.combiner,
reducer_init=self.reducer_init,
reducer_final=self.reducer_final,
reducer=self.reducer),
MRStep(mapper=self.mapper_single_key,
reducer_init=self.reducer_init_top_vals,
reducer=self.reducer_top_vals,
reducer_final=self.reducer_final_top_vals)]
return mr_steps
if __name__ == "__main__":
GetIndexandOtherWords.run()
!cat mini_5gram.txt | python GetIndexandOtherWords.py --index-range 16-20 \
--top-n-words 20 \
--return-counts False \
--use-term-counts True \
-q > vocabs
!cat vocabs
!cat mini_5gram.txt | python GetIndexandOtherWords.py --index-range 16-20 \
--top-n-words 20 \
--return-counts True \
--use-term-counts True \
-q
%%writefile MakeStripes.py
from mrjob.job import MRJob
from collections import Counter
from sys import stderr
from re import findall
class MakeStripes(MRJob):
def mapper_init(self):
        """Read in index words and word list."""
self.stripes = {}
self.indexlist, self.wordslist = [],[]
with open('vocabs', 'r') as vocabFile:
for line in vocabFile:
word_type, word = line.replace('"', '').split()
if word_type == 'index':
self.indexlist.append(word)
else:
self.wordslist.append(word)
# Convert to sets to make lookups faster
self.indexlist = set(self.indexlist)
self.wordslist = set(self.wordslist)
def mapper(self, _, lines):
        """Make stripes using index and words list"""
terms, term_count, page_count, book_count = lines.split("\t")
term_count = int(term_count)
terms = findall(r'[a-z]+', terms.lower())
for item in terms:
if item in self.indexlist:
for val in terms:
if val != item and val in self.wordslist:
yield item, {val:term_count}
def combiner(self, keys, values):
values_sum = Counter()
for val in values:
values_sum += Counter(val)
yield keys, dict(values_sum)
def reducer(self, keys, values):
values_sum = Counter()
for val in values:
values_sum += Counter(val)
yield keys, dict(values_sum)
if __name__ == "__main__":
MakeStripes.run()
!python MakeStripes.py --file vocabs mini_5gram.txt -q
%%writefile InvertIndex.py
from mrjob.job import MRJob
from mrjob.protocol import JSONProtocol
from collections import Counter
class InvertIndex(MRJob):
MRJob.input_protocol = JSONProtocol
def mapper(self, key, words):
        """Convert each stripe to inverted index"""
n_words = len(words)
for word in words:
yield (word, {key:n_words})
def combiner(self, keys, values):
values_sum = Counter()
for val in values:
values_sum += Counter(val)
yield keys, dict(values_sum)
def reducer(self, keys, values):
values_sum = Counter()
for val in values:
values_sum += Counter(val)
yield keys, dict(values_sum)
if __name__ == "__main__":
InvertIndex.run()
!python MakeStripes.py --file vocabs mini_5gram.txt -q | python InvertIndex.py -q
%%writefile Similarity.py
from mrjob.job import MRJob
from mrjob.protocol import JSONProtocol
from itertools import combinations
class Similarity(MRJob):
MRJob.input_protocol = JSONProtocol
def mapper(self, key_term, docs):
        """
        Make co-occurrence keys for each pair of documents in the inverted
        index and make keys representing each document.
        """
doc_names = docs.keys()
for doc_pairs in combinations(sorted(list(doc_names)), 2):
yield (doc_pairs, 1)
for name in doc_names:
yield (name, 1)
def combiner(self, key, value):
yield (key, sum(value))
### Custom partitioner code goes here
def reducer_init(self):
self.words = {}
self.results = []
def reducer(self, doc_or_docs, count):
if isinstance(doc_or_docs, str):
self.words[doc_or_docs] = sum(count)
else:
d1, d2 = doc_or_docs
d1_n_words, d2_n_words = self.words[d1], self.words[d2]
intersection = float(sum(count))
jaccard = round(intersection/(d1_n_words + d2_n_words - intersection), 3)
cosine = round(intersection/(d1_n_words**.5 * d2_n_words**.5), 3)
dice = round(2*intersection/(d1_n_words + d2_n_words), 3)
overlap = round(intersection/min(d1_n_words, d2_n_words), 3)
average = round(sum([jaccard, cosine, dice, overlap])/4.0, 3)
self.results.append([doc_or_docs, {"jacc":jaccard, "cos":cosine,
"dice":dice, "ol":overlap, "ave":average}])
def reducer_final(self):
for doc, result in sorted(self.results, key=lambda x: x[1]["ave"], reverse=True):
yield (doc, result)
if __name__ == "__main__":
Similarity.run()
!python MakeStripes.py --file vocabs mini_5gram.txt -q | python InvertIndex.py -q | python Similarity.py -q --jobconf mapred.reduce.tasks=1
%%time
!cat Temp_data/googlebooks-eng-all-5gram-20090715-* | wc
int(57432975*(200/185))
%%time
!cat Temp_data/googlebooks-eng-all-5gram-20090715-9* | wc
3435179/62089702
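# Rough interpretation (added, assuming the two numbers are the line counts from the wc calls above):
# the "9*" files hold about 3.4M of an estimated ~62M 5-grams, i.e. roughly 5.5% of the full dataset,
# and this is the sample used for the timing runs below.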
%%time
!cat Temp_data/googlebooks-eng-all-5gram-20090715-9* | python GetIndexandOtherWords.py --index-range 9001-10000 \
--top-n-words 10000 \
--return-counts True \
--use-term-counts False \
-q > vocabs
!head vocabs
%%time
!cat Temp_data/googlebooks-eng-all-5gram-20090715-9* | python GetIndexandOtherWords.py --index-range 9001-10000 \
--top-n-words 10000 \
--return-counts True \
--use-term-counts True \
-q > vocabs
!head vocabs
%%time
!cat Temp_data/googlebooks-eng-all-5gram-20090715-9* | python GetIndexandOtherWords.py --index-range 9001-10000 \
--top-n-words 10000 \
--return-counts False \
--use-term-counts True \
-q > vocabs
!head vocabs
%%time
!cat Temp_data/googlebooks-eng-all-5gram-20090715-9* | python MakeStripes.py --file vocabs -q | python InvertIndex.py -q | python Similarity.py -q --jobconf mapred.reduce.tasks=1 > similarities.txt
!head similarities.txt
minutes_for_small_job = 3
n_small_jobs_in_big_job = 200/11
total_minutes_one_computer = minutes_for_small_job*n_small_jobs_in_big_job
computers_in_cluster = 50
total_minutes_for_cluster = total_minutes_one_computer/computers_in_cluster
total_minutes_for_cluster
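# For reference (added): this works out to 3 * (200/11) / 50, i.e. about 1.09 minutes on the cluster
# under the rough scaling assumptions above.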
%%writefile CustomPartitioner.py
from mrjob.job import MRJob
from sys import stderr
import numpy as np
from operator import itemgetter
from random import random
class CustomPartitioner(MRJob):
def __init__(self, *args, **kwargs):
super(CustomPartitioner, self).__init__(*args, **kwargs)
self.N = 30
self.NUM_REDUCERS = 4
def mapper_init(self):
def makeKeyHash(key, num_reducers):
byteof = lambda char: int(format(ord(char), 'b'), 2)
current_hash = 0
for c in key:
current_hash = (current_hash * 31 + byteof(c))
return current_hash % num_reducers
# printable ascii characters, starting with 'A'
keys = [str(chr(i)) for i in range(65,65+self.NUM_REDUCERS)]
partitions = []
for key in keys:
partitions.append([key, makeKeyHash(key, self.NUM_REDUCERS)])
parts = sorted(partitions,key=itemgetter(1))
self.partition_keys = list(np.array(parts)[:,0])
self.partition_file = np.arange(0,self.N,self.N/(self.NUM_REDUCERS))[::-1]
print((keys, partitions, parts, self.partition_keys, self.partition_file), file=stderr)
def mapper(self, _, lines):
terms, term_count, page_count, book_count = lines.split("\t")
terms = terms.split()
term_count = int(term_count)
for item in terms:
yield (item, term_count)
for item in ["A", "B", "H", "I"]:
yield (item, 0)
def reducer_init(self):
self.reducer_unique_key = int(random()*900000+100000)
def reducer(self, keys, values):
yield (self.reducer_unique_key, (keys, sum(values)))
if __name__ == "__main__":
CustomPartitioner.run()
!cat atlas.txt | python CustomPartitioner.py -r local -q
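# Note (added): makeKeyHash above mirrors a Java-style String.hashCode (multiply-by-31 accumulator),
# so the single-character prefixes "A", "B", "H", "I" hash to predictable partitions; the printout in
# mapper_init shows which prefix would land on which reducer.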
%%writefile ngram.py
from mrjob.job import MRJob
from collections import Counter
import operator
class NGram(MRJob):
def mapper_init(self):
self.length = 0
self.longest = 0
self.distribution = Counter()
def mapper(self, _, lines):
# extract word/count sets
ngram, count, pages, _ = lines.split("\t")
count, pages = int(count), int(pages)
# loop to count word length
words = ngram.lower().split()
for w in words:
yield (w, {'count':count, 'pages':pages})
# Count of ngram length
n_gram_character_count = len(ngram)
yield n_gram_character_count, count
# determine if longest word on mapper
if n_gram_character_count > self.length:
self.length = n_gram_character_count
self.longest = [words, n_gram_character_count]
yield (self.longest)
def combiner(self, word, counts):
if isinstance(word,str):
count = 0
pages = 0
for x in counts:
count += x['count']
pages += x['pages']
yield word, {'count':count,'pages':pages}
#aggregate counts
elif isinstance(word,int):
yield word, sum(counts)
#yield long ngrams
else:
for x in counts:
yield word, x
def reducer_init(self):
self.longest = []
self.length = 0
self.counts = Counter()
self.pages = Counter()
self.distribution = Counter()
def reducer(self, key, values):
# use Counter word totals
for val in values:
if isinstance(key,str):
self.counts += Counter({key:val['count']})
self.pages += Counter({key:val['pages']})
# aggregate distribution numbers
elif isinstance(key,int):
self.distribution += Counter({key:val})
else:
# Determine if longest ngram on reducer
if val > self.length:
self.longest = [key, val]
self.length = val
def reducer_final(self):
# yield density calculation
for x in sorted(self.counts):
yield ('mrj_dens',{x:(1.*self.counts[x]/self.pages[x])})
# Use most_common counter function
for x in self.counts.most_common(10):
yield x
# return longest item
if self.longest:
yield self.longest
# yield distribution values
for x in self.distribution:
yield ('mrj_dist', {x:self.distribution[x]})
if __name__ == "__main__":
NGram.run()
!python ngram.py --jobconf mapred.reduce.tasks=1 < googlebooks-eng-all-5gram-20090715-0-filtered-first-10-lines.txt -q > dataout.txt
!cat dataout.txt
%matplotlib inline
import json
import operator
import numpy as np
import matplotlib.pyplot as plt
# sorted density list
def density(data):
x = data
sorted_x = sorted(x.items(), key=operator.itemgetter(1), reverse=True)
    print(sorted_x[:20])
# distribution plot
def distribution(data):
plt.scatter(data.keys(), data.values(), alpha=0.5)
plt.show()
# loader
def driver():
datain = open('dataout.txt','r')
densdata = {}
distdata = {}
# clean the mess I made
for line in datain:
parts = line.split('\t')
temp = parts[1][1:-2].replace('"', '').split(':')
mrj_val = parts[0].replace('"', '')
if mrj_val == "mrj_dens":
densdata[temp[0]]=float(temp[1])
elif mrj_val == "mrj_dist":
distdata[int(temp[0])]=int(temp[1])
#Execute density sort
density(densdata)
#Execute distribution plot
distribution(distdata)
driver()
%%writefile NLTKBenchMark.py
import nltk
import json
import numpy as np
from nltk.corpus import wordnet as wn
from mrjob.job import MRJob
from mrjob.step import MRStep
class NLTKBenchMark(MRJob):
def mapper(self, _, lines):
#parse the output file and identify the pair of words
pair, avg = lines.split("\t")
pair = json.loads(pair)
word1, word2 = pair[0], pair[1]
hit = 0
#for each word, extract the list of synonyms from nltk corpus, convert to set to remove duplicates
syn1 = set([l.name() for s in wn.synsets(word1) for l in s.lemmas()])
syn2 = set([l.name() for s in wn.synsets(word2) for l in s.lemmas()])
        #keep track of words that have no synonym using '~nosyn'
if len(syn1) == 0:
yield '~nosyn', [word1]
if len(syn2) == 0:
yield '~nosyn', [word2]
'''
for each occurence of word, increment the count
for word A, synset is the number of synonyms of the other word B
this value is used for calculating recall
this method becomes confusing/problematic if a word appears multiple times in the final output
if there is a hit for word A, set the hit to 1, and set the hit for the other word B to 0 (to avoid double count)
if there is not a hit for A and B, set the hit to 0 for both
'''
if word2 in syn1:
yield word2, {'hit':1, 'count':1, 'synset':len(syn1)}
yield word1, {'hit':0, 'count':1, 'synset':len(syn2)}
elif word1 in syn2:
yield word1, {'hit':1, 'count':1, 'synset':len(syn2)}
yield word2, {'hit':0, 'count':1, 'synset':len(syn1)}
else:
yield word1, {'hit':0, 'count':1, 'synset':len(syn2)}
yield word2, {'hit':0, 'count':1, 'synset':len(syn1)}
def combiner(self, term, values):
#combine '~nosyn' into a bigger list and yield the list
if term == '~nosyn':
nosynList = []
for value in values:
nosynList = nosynList+value
yield term, nosynList
else:
counters = {'hit':0, 'count':0, 'synset':0}
for value in values:
counters['hit'] += value['hit']
counters['count'] += value['count']
counters['synset'] = value['synset']
yield term, counters
def reducer_init(self):
self.plist = []
self.rlist = []
self.flist = []
def reducer(self, term, values):
#yield the final list of words that have no synonym
if term == '~nosyn':
nosynList = []
for value in values:
nosynList = nosynList+value
yield term, nosynList
else:
counters = {'hit':0.0, 'count':0.0, 'synset':0.0}
precision, recall, F1 = 0,0,0
for value in values:
counters['hit'] += value['hit']
counters['count'] += value['count']
counters['synset'] = value['synset']
if counters['hit'] > 0 and counters['synset'] > 0:
precision = float(counters['hit'])/float(counters['count'])
recall = float(counters['hit'])/float(counters['synset'])
F1 = 2*precision*recall/(precision+recall)
self.plist.append(precision)
self.rlist.append(recall)
self.flist.append(F1)
yield term, counters
elif counters['synset'] > 0:
self.plist.append(precision)
self.rlist.append(recall)
self.flist.append(F1)
yield term, counters
def reducer_final(self):
#compute the mean of all collected measurements
yield 'precision', np.mean(self.plist)
yield 'recall', np.mean(self.rlist)
yield 'F1', np.mean(self.flist)
if __name__ == "__main__":
NLTKBenchMark.run()
!python NLTKBenchMark.py nltk_bench_sample.txt
''' Performance measures '''
from __future__ import division
import numpy as np
import json
import nltk
from nltk.corpus import wordnet as wn
import sys
#print all the synset element of an element
def synonyms(string):
syndict = {}
for i,j in enumerate(wn.synsets(string)):
syns = j.lemma_names()
for syn in syns:
syndict.setdefault(syn,1)
return syndict.keys()
hits = []
TP = 0
FP = 0
TOTAL = 0
flag = False # so we don't double count, but at the same time don't miss hits
## For this part we can use one of three outputs. They are all the same, but were generated differently
# 1. the top 1000 from the full sorted dataset -> sortedSims[:1000]
# 2. the top 1000 from the partial sort aggragate file -> sims2/top1000sims
# 3. the top 1000 from the total order sort file -> head -1000 sims_parts/part-00004
top1000sims = []
with open("nltk_bench_sample.txt","r") as f:
for line in f.readlines():
line = line.strip()
lisst, avg = line.split("\t")
lisst = eval(lisst)
lisst.append(avg)
top1000sims.append(lisst)
measures = {}
not_in_wordnet = []
for line in top1000sims:
TOTAL += 1
words=line[0:2]
for word in words:
if word not in measures:
measures[word] = {"syns":0,"opps": 0,"hits":0}
measures[word]["opps"] += 1
syns0 = synonyms(words[0])
measures[words[1]]["syns"] = len(syns0)
if len(syns0) == 0:
not_in_wordnet.append(words[0])
if words[1] in syns0:
TP += 1
hits.append(line)
flag = True
measures[words[1]]["hits"] += 1
syns1 = synonyms(words[1])
measures[words[0]]["syns"] = len(syns1)
if len(syns1) == 0:
not_in_wordnet.append(words[1])
if words[0] in syns1:
if flag == False:
TP += 1
hits.append(line)
measures[words[0]]["hits"] += 1
flag = False
precision = []
recall = []
f1 = []
for key in measures:
p,r,f = 0,0,0
if measures[key]["hits"] > 0 and measures[key]["syns"] > 0:
p = measures[key]["hits"]/measures[key]["opps"]
r = measures[key]["hits"]/measures[key]["syns"]
f = 2 * (p*r)/(p+r)
# For calculating measures, only take into account words that have synonyms in wordnet
if measures[key]["syns"] > 0:
precision.append(p)
recall.append(r)
f1.append(f)
# Take the mean of each measure
print "—"*110
print "Number of Hits:",TP, "out of top",TOTAL
print "Number of words without synonyms:",len(not_in_wordnet)
print "—"*110
print "Precision\t", np.mean(precision)
print "Recall\t\t", np.mean(recall)
print "F1\t\t", np.mean(f1)
print "—"*110
print "Words without synonyms:"
print "-"*100
for word in not_in_wordnet:
print synonyms(word),word
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Count lines in log dataset. View the first 10 lines. Rename data to log.txt
Step2: Convert the output of 4.4 to be just url and url_id. Save as urls.txt.
Step3: The urls.txt file is much smaller than the log.txt data and should be what is loaded into memory. This means it would be the right-side table in a left-side join.
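A minimal sketch of that replicated (map-side) left join. The file layouts here are assumptions for illustration only: urls.txt is taken to be tab-delimited as url<TAB>url_id and shipped to every mapper's working directory (for example with mrjob's --file option), and the log line's second field is taken to be the url_id.
from mrjob.job import MRJob
class LeftJoinSketch(MRJob):
    def mapper_init(self):
        # the small table fits in memory: url_id -> url
        self.urls = {}
        with open('urls.txt') as f:
            for line in f:
                url, url_id = line.strip().split('\t')
                self.urls[url_id] = url
    def mapper(self, _, line):
        fields = line.strip().split()
        url_id = fields[1]
        # left join: keep every log record, even when no url matches (None)
        yield url_id, [self.urls.get(url_id), fields]
if __name__ == "__main__":
    LeftJoinSketch.run()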
Step4: Make a file with only the first five urls to test left and inner join.
Step5: Using the first ten lines of the log file and left joining it to the first five lines of the urls file, we see that some of the urls are returned as "None." This is correct behavior.
Step6: Performing the same operation, but with an inner join, we see the lines that were "None" are dropped.
Step7: To prove the right-side join works, we can only use the first 100 log entries. We see that urls without corresponding log entries are listed as "None" and that all urls are returned in alphabetical order.
Step8: By using the limited urls file, we see that only five urls are returned and every logged page visit to those pages are returned (at least within the first 50 log entries).
Step9: HW5.3 <a name="1.3"></a> Systems tests on n-grams dataset (Phase1) and full experiment (Phase 2)
Step10: atlas.txt
Step11: mini_stripes.txt
Step12: TASK
Step13: Desired result
Step14: Actual result
Step15: Actual result matches desired result
Step16: Desired result
Step17: The tests pass
Step18: Desired results
Step19: The numbers calculated exactly match the systems test except for the average calculations of the mini_stripes set. In this instance, the systems test calculations are not correct.
Step26: PHASE 2
Step27: Test getting the index and other valid words excluding stop words on the mini_5gram.txt dataset. Return the top 10 most common words (based on the term counts) and mark the ninth and tenth most common words as the index words.
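A minimal local sketch of that selection logic using collections.Counter (the variable names are hypothetical; the actual job performs this inside its MRJob reducers):
from collections import Counter
term_counts = Counter()   # term -> count, filled while scanning the 5-grams
# ... counting happens here ...
top10 = term_counts.most_common(10)
index_words = [w for w, _ in top10[8:10]]      # ninth and tenth most common words
other_valid_words = [w for w, _ in top10[:8]]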
Step28: To spot check the results, view the term counts of each word.
Step31: 3. HW5.3.2 Full-scale experiment
Step33: <a name="2.3">
Step35: <a name="2.4">
Step36: Doesn't return anything because there are no co-occurring words in the inverted index.
Step37: Time the full calculations a slightly larger dataset
Step38: Because there are 200 files that make up the 5-gram dataset (at least that is what I thought I heard), the true line count of the dataset is about
Step39: We are going to operate on a subset of this data.
Step40: This sample of the data is only a few percent (about 5.5%) of the full size of the dataset.
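The percentage quoted here is simply the ratio of files used to files available (11 of the roughly 200 5-gram files, as described a few steps below):
print 11 / 200.0 * 100   # 5.5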
Step41: Create index and words to use
Step42: Here we will return the term count of each word (not the 5gram-based count).
Step43: This code is very similar to what we would run on the full dataset
Step44: Make stripes, invert index, and calculate similarities. Print top similarities.
Step45: It takes about 3 minutes to run this code. The code processes 11 out of 200 files. It currently uses one machine. If the cluster has 50 machines available, we would expect it to take only a few minutes for these core operations to run.
Step46: Experimental code on custom partitioner
Step47: Results of experiment so far
Step48: <a name="2.6">
|
14,835 | <ASSISTANT_TASK:>
Python Code:
from time import time
import numpy as np
import matplotlib.pyplot as plt
from collections import deque
import random
%matplotlib inline
def benchmark(counts):
def times(f):
def ret():
timings = []
for c in counts:
start = time()
f(c)
timings.append(time() - start)
return timings
return ret
return times
counts = [int(x) for x in np.logspace(4, 7, num=40)]
@benchmark(counts)
def accumulate_strings(n):
s = ''
for i in range(n):
s += 'a'
@benchmark(counts)
def list_join(n):
l = []
for i in range(n):
l.append('a')
''.join(l)
plt.plot(counts, accumulate_strings())
plt.plot(counts, list_join())
plt.legend(['s+=', "''.join()"], loc='best')
counts = [int(x) for x in np.logspace(4, 6)]
@benchmark(counts)
def add_remove_deque(n):
q = deque()
for i in range(n):
q.append(i)
for i in range(n):
q.pop()
@benchmark(counts)
def add_remove_list(n):
l = []
for i in range(n):
l.append(i)
for i in range(n):
l.pop()
plt.plot(counts, add_remove_deque())
plt.plot(counts, add_remove_list())
plt.legend(['deque', 'list'], loc='best')
@benchmark(counts)
def just_pass(n):
l = []
for i in range(n):
pass
for i in range(n):
pass
plt.plot(counts, add_remove_deque())
plt.plot(counts, add_remove_list())
plt.plot(counts, just_pass())
plt.legend(['deque', 'list', 'pass'], loc='best')
counts = [int(x) for x in np.logspace(3, 5)]
@benchmark(counts)
def append_pop0_deque(n):
q = deque()
for i in range(n):
q.append(i)
for i in range(n):
q.popleft()
@benchmark(counts)
def append_pop0_list(n):
l = []
for i in range(n):
l.append(i)
for i in range(n):
l.pop(0)
plt.plot(counts, append_pop0_deque())
plt.plot(counts, append_pop0_list())
plt.legend(['deque', 'list'], loc='best')
counts = [int(x) for x in np.logspace(3, 5)]
@benchmark(counts)
def accumulate_right(n):
s = ''
for i in range(n):
s = s + 'a'
@benchmark(counts)
def accumulate_left(n):
s = ''
for i in range(n):
s = 'a' + s
plt.plot(counts, accumulate_right())
plt.plot(counts, accumulate_left())
plt.legend(["s = s + 'a'", "s = 'a' + s"], loc='upper left')
counts = [int(x) for x in np.logspace(3, 5)]
@benchmark(counts)
def accumulate_bytes(n):
s = b''
for i in range(n):
s += b'a'
@benchmark(counts)
def join_bytes(n):
l = []
for i in range(n):
l.append(b'a')
b''.join(l)
plt.plot(counts, accumulate_bytes())
plt.plot(counts, join_bytes())
plt.legend(['accumulate', 'join'], loc='upper left')
counts = [int(x) for x in np.logspace(3, 6)]
@benchmark(counts)
def accumulate_1ref(n):
s = ''
for i in range(n):
s += 'a'
@benchmark(counts)
def accumulate_2ref(n):
s = ''
blah = s
q = s
for i in range(n):
s += 'a'
plt.plot(counts, accumulate_1ref())
plt.plot(counts, accumulate_2ref())
plt.legend(['1ref', '2ref'], loc='best')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's start with the obvious
Step2: Looks pretty linear to me. Apparently string += runs in O(1). That was honestly a shocking discovery to me. I've been telling people to avoid string += for years!
Step3: Seems like they're both linear time. Hey, how much of that is overhead?
Step4: Not that much. What if we insert to the end and take from the beginning?
Step5: Finally we see a quadratic function. Seems that lists are fine with adding and removing from one end, but they're really bad when you need to pop(0) repeatedly.
Step6: Certainly seems so. That one is a little strange. If x and y are strings, x + y runs in O(len(y)).
|
14,836 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import sys
sys.path.append('../..')
from matplotlib import pylab
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
pylab.rcParams['figure.figsize'] = 12, 10
import functools
import numpy
import scipy
import scipy.special
from crocodile.clean import *
from crocodile.synthesis import *
from crocodile.simulate import *
from util.visualize import *
from arl.test_support import create_named_configuration
vlas = create_named_configuration('VLAA_north')
ha_range = numpy.arange(numpy.radians(0),
numpy.radians(90),
numpy.radians(90 / 36))
dec = numpy.radians(90)
vobs = xyz_to_baselines(vlas.data['xyz'], ha_range, dec)
# Wavelength: 5 metres
wvl=5
uvw = vobs / wvl
ax = plt.figure().add_subplot(111, projection='3d')
ax.scatter(uvw[:,0], uvw[:,1] , uvw[:,2])
max_uvw = numpy.amax(uvw)
ax.set_xlabel('U [$\lambda$]'); ax.set_xlim((-max_uvw, max_uvw))
ax.set_ylabel('V [$\lambda$]'); ax.set_ylim((-max_uvw, max_uvw))
ax.set_zlabel('W [$\lambda$]'); ax.set_zlim((-max_uvw, max_uvw))
ax.view_init(20, 20);
import itertools
theta = 2*0.01
lam = 18000
grid_size = int(theta * lam)
image = numpy.zeros((grid_size, grid_size))
vis = numpy.zeros(len(uvw), dtype=complex)
for l,m in theta/10 * numpy.array(list(itertools.product(range(-3, 4), range(-3, 4)))):
vis += 1.0*simulate_point(uvw, l, m)
image[grid_size//2 + int(m*lam),
grid_size//2 + int(l*lam)] += 1.0
show_image(image, "image", theta)
vis_simple = do_predict(theta, lam, uvw, None, image, simple_predict)
print("Simple: ", numpy.sum(numpy.abs(vis_simple - vis)**2) / numpy.sum(numpy.abs(vis)**2))
oversample = 1024
supports = numpy.arange(2,14)
mrange = numpy.arange(0.9, 2.0, 0.1)
conv_errs = []
for support in supports:
condition = support
aaf = anti_aliasing_function(grid_size, 0, support)
kv1 = kernel_oversample(aaf, oversample, support)
kv1 /= numpy.sum(kv1[0])
vis_conv = do_predict(theta, lam, uvw, None, image/numpy.outer(aaf,aaf), conv_predict, kv=kv1)
conv_errs.append(numpy.sum(numpy.abs(vis_conv - vis)**2))
print("Convolution %dx%d: " % (support, support),
numpy.sum(numpy.abs(vis_conv - vis)**2) / numpy.sum(numpy.abs(vis)**2),
" (mean off-centre", numpy.abs(1-numpy.mean(vis_conv / vis)),")")
# Show how error changes with support
plt.semilogy(supports, conv_errs / numpy.sum(numpy.abs(vis)**2))
plt.xlabel("Support, PSWF c"); plt.ylabel("Error"); plt.show()
# Show error distribution
plt.scatter(uvw[:,0], uvw[:,1], c=numpy.abs(vis_conv - vis))
plt.scatter(-uvw[:,0], -uvw[:,1], c=numpy.abs(vis_conv - vis));
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate baseline coordinates for an observation with a hypothetical north-pole VLA over 6 hours, with a visibility recorded every 10 minutes. The phase center is fixed in the zenith. This results in constant $w$-values of basically zero.
Step2: We then make an image with a grid of sources to generate visibilities from. For reference we also use a direct Fourier Transform to generate visibilities
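For orientation, the direct evaluation referred to here is the usual interferometric measurement equation for a sky of point sources with brightness $B(l,m)$ (written here only as a reminder; the exact sign convention is whatever simulate_point uses):
$$V(u,v,w) = \sum_{l,m} B(l,m)\, e^{-2\pi i\left[u l + v m + w\left(\sqrt{1-l^2-m^2}-1\right)\right]}$$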
Step3: Now we can attempt to generate visibilities from the image. The quality of this depends quite a bit on the quality of the used anti-aliasing function
|
14,837 | <ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
tf.__version__
from tensorflow.contrib import keras
from keras.datasets import cifar100
(X_train, Y_train), (X_test, Y_test) = cifar100.load_data(label_mode='fine')
from keras import backend as K
img_rows, img_cols = 32, 32
if K.image_data_format() == 'channels_first':
shape_ord = (3, img_rows, img_cols)
else: # channel_last
shape_ord = (img_rows, img_cols, 3)
shape_ord
X_train.shape
import numpy as np
nb_classes = len(np.unique(Y_train))
from keras.applications import vgg16
from keras.layers import Input
vgg16_model = vgg16.VGG16(weights='imagenet', include_top=False,
input_tensor=Input(shape_ord))
vgg16_model.summary()
for layer in vgg16_model.layers:
layer.trainable = False # freeze layer
from keras.layers.core import Dense, Dropout, Flatten
from keras.layers.normalization import BatchNormalization
x = Flatten(input_shape=vgg16_model.output.shape)(vgg16_model.output)
x = Dense(4096, activation='relu', name='ft_fc1')(x)
x = Dropout(0.5)(x)
x = BatchNormalization()(x)
predictions = Dense(nb_classes, activation = 'softmax')(x)
from keras.models import Model
#create graph of your new model
model = Model(inputs=vgg16_model.input, outputs=predictions)
#compile the model
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
from keras.callbacks import TensorBoard
## one-hot Encoding of labels (1 to 100 classes)
from keras.utils import np_utils
Y_train.shape
Y_train = np_utils.to_categorical(Y_train)
Y_train.shape
def generate_batches(X, Y, batch_size=128):
    # The generator has to yield batches indefinitely for fit_generator
    start = 0
    while True:
        yield (X[start:start+batch_size], Y[start:start+batch_size])
        start += batch_size
        if start >= X.shape[0]:   # wrap around so every epoch sees the whole set
            start = 0
batch_size = 64
steps_per_epoch = np.floor(X_train.shape[0] / batch_size)
model.fit_generator(generate_batches(X_train, Y_train, batch_size=batch_size),
steps_per_epoch=steps_per_epoch, epochs=20, verbose=1,
callbacks=[TensorBoard(log_dir='./tf_logs', histogram_freq=10,
write_graph=True, write_images=True,
embeddings_freq=10,
embeddings_layer_names=['block1_conv2',
'block5_conv1',
'ft_fc1'],
embeddings_metadata=None)])
%%bash
python -m tensorflow.tensorboard --logdir=./tf_logs
import operator
import threading
from functools import reduce
import keras
import keras.backend as K
from keras.engine import Model
import numpy as np
import tensorflow as tf
import time
from keras.layers import Conv2D
from tqdm import tqdm
def prod(factors):
return reduce(operator.mul, factors, 1)
TRAINING = True
with K.get_session() as sess:
shp = [10, 200, 200, 3]
shp1 = [10, 7, 7, 80]
inp = K.placeholder(shp)
inp1 = K.placeholder(shp1)
queue = tf.FIFOQueue(20, [tf.float32, tf.float32], [shp, shp1])
x1, y1 = queue.dequeue()
enqueue = queue.enqueue([inp, inp1])
model = keras.applications.ResNet50(False, "imagenet", x1, shp[1:])
for i in range(3):
model.layers.pop()
model.layers[-1].outbound_nodes = []
model.outputs = [model.layers[-1].output]
output = model.outputs[0] # 7x7
# Reduce filter size to avoid OOM
output = Conv2D(32, (1, 1), padding="same", activation='relu')(output)
output3 = Conv2D(5 * (4 + 11 + 1), (1, 1), padding="same", activation='relu')(
output) # YOLO output B (4 + nb_class +1)
cost = tf.reduce_sum(tf.abs(output3 - y1))
optimizer = tf.train.RMSPropOptimizer(0.001).minimize(cost)
sess.run(tf.global_variables_initializer())
def get_input():
# Super long processing I/O bla bla bla
return np.arange(prod(shp)).reshape(shp).astype(np.float32), np.arange(prod(shp1)).reshape(shp1).astype(
np.float32)
def generate(coord, enqueue_op):
while not coord.should_stop():
inp_feed, inp1_feed = get_input()
sess.run(enqueue_op, feed_dict={inp: inp_feed, inp1: inp1_feed})
start = time.time()
for i in tqdm(range(10)): # EPOCH
for j in range(30): # Batch
x,y = get_input()
optimizer_, s = sess.run([optimizer, queue.size()],
feed_dict={x1:x,y1:y, K.learning_phase(): int(TRAINING)})
print("Took : ", time.time() - start)
coordinator = tf.train.Coordinator()
threads = [threading.Thread(target=generate, args=(coordinator, enqueue)) for i in range(10)]
for t in threads:
t.start()
start = time.time()
for i in tqdm(range(10)): # EPOCH
for j in range(30): # Batch
optimizer_, s = sess.run([optimizer, queue.size()],
feed_dict={K.learning_phase(): int(TRAINING)})
print("Took : ", time.time() - start)
def clear_queue(queue, threads):
while any([t.is_alive() for t in threads]):
_, s = sess.run([queue.dequeue(), queue.size()])
print(s)
coordinator.request_stop()
clear_queue(queue, threads)
coordinator.join(threads)
print("DONE Queue")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Tensorboard Integration
Step2: TensorBoard Callback
Step4: ```python
Step5: Running TensorBoard
Step6: tf.Queue integration with Keras
|
14,838 | <ASSISTANT_TASK:>
Python Code:
%pylab inline
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
names = ["Name_2MASS","Name_alt","Spectral_Type","T_eff","AJ","L_bol","IMF"]
tbl6 = pd.read_csv("http://iopscience.iop.org/0067-0049/173/1/104/fulltext/71585.tb6.txt",
sep='\t', names=names, na_values='\ldots')
tbl6.head()
upper_limits = (tbl6.T_eff == "\\leq 2400")
measurements = ~upper_limits
print "There are {} upper limits and {} measurements".format(upper_limits.sum(), measurements.sum())
sns.set_context("notebook", font_scale=1.5)
plt.plot(tbl6.T_eff[measurements], tbl6.L_bol[measurements], '.')
#plt.errorbar([2400.0]*upper_limits.sum(), tbl6.L_bol[upper_limits], xerr=0.0001, yerr=None, xlolims=True)
plt.ylabel(r"$L/L_{sun}$")
plt.xlabel(r"$T_{eff} (K)$")
plt.yscale("log")
plt.title("Luhman 2007 Chamaeleon I Members")
plt.xlim(5000,2000)
from astroquery.simbad import Simbad
import astropy.coordinates as coord
import astropy.units as u
customSimbad = Simbad()
customSimbad.add_votable_fields('otype', 'id', 'sptype')
query_list = '2MASS'+tbl6.Name_2MASS.dropna().values
result = customSimbad.query_objects(query_list, verbose=True)
print "There were {} sources queried, and {} sources found.".format(len(query_list), len(result))
if len(query_list) == len(result):
print "Hooray! Everything matched"
else:
print "Which ones were not found?"
result[0:2]
tbl6.to_csv("../data/Luhman2007/tbl6.csv", sep="\t")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Table 6 - Derived Properties for Members of Chamaeleon I
Step2: Custom analysis
Step3: Save data tables locally.
|
14,839 | <ASSISTANT_TASK:>
Python Code:
!python3 -m pip install pip --upgrade --quiet --user
!python3 -m pip install kfp --upgrade --quiet --user
!python3 -m pip install tfx==0.21.2 --quiet --user
# Set `PATH` to include user python binary directory and a directory containing `skaffold`.
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
import os
from typing import Text
import kfp
import tensorflow_model_analysis as tfma
from tfx.components import Evaluator
from tfx.components import CsvExampleGen
from tfx.components import ExampleValidator
from tfx.components import Pusher
from tfx.components import SchemaGen
from tfx.components import StatisticsGen
from tfx.components import Trainer
from tfx.components import Transform
from tfx.orchestration import data_types
from tfx.orchestration import pipeline
from tfx.orchestration.kubeflow import kubeflow_dag_runner
from tfx.proto import pusher_pb2
from tfx.utils.dsl_utils import external_input
# In TFX MLMD schema, pipeline name is used as the unique id of each pipeline.
# Assigning workflow ID as part of pipeline name allows the user to bypass
# some schema checks which are redundant for experimental pipelines.
pipeline_name = 'taxi_pipeline_with_parameters'
# Path of pipeline data root, should be a GCS path.
# Note that when running on KFP, the pipeline root is always a runtime parameter.
# The value specified here will be its default.
pipeline_root = os.path.join('gs://{{kfp-default-bucket}}', 'tfx_taxi_simple',
kfp.dsl.RUN_ID_PLACEHOLDER)
# Location of input data, should be a GCS path under which there is a csv file.
data_root_param = data_types.RuntimeParameter(
name='data-root',
default='gs://ml-pipeline-playground/tfx_taxi_simple/data',
ptype=Text,
)
# Path to the module file, GCS path.
# Module file is one of the recommended way to provide customized logic for component
# includeing Trainer and Transformer.
# See https://github.com/tensorflow/tfx/blob/93ea0b4eda5a6000a07a1e93d93a26441094b6f5/tfx/components/trainer/component.py#L38
taxi_module_file_param = data_types.RuntimeParameter(
name='module-file',
default='gs://ml-pipeline-playground/tfx_taxi_simple/modules/taxi_utils.py',
ptype=Text,
)
# Number of epochs in training.
train_steps = data_types.RuntimeParameter(
name='train-steps',
default=10,
ptype=int,
)
# Number of epochs in evaluation.
eval_steps = data_types.RuntimeParameter(
name='eval-steps',
default=5,
ptype=int,
)
# The input data location is parameterized by _data_root_param
examples = external_input(data_root_param)
example_gen = CsvExampleGen(input=examples)
statistics_gen = StatisticsGen(examples=example_gen.outputs['examples'])
infer_schema = SchemaGen(
statistics=statistics_gen.outputs['statistics'], infer_feature_shape=False)
validate_stats = ExampleValidator(
statistics=statistics_gen.outputs['statistics'],
schema=infer_schema.outputs['schema'])
# The module file used in Transform and Trainer component is paramterized by
# _taxi_module_file_param.
transform = Transform(
examples=example_gen.outputs['examples'],
schema=infer_schema.outputs['schema'],
module_file=taxi_module_file_param)
# The numbers of steps in train_args are specified as RuntimeParameter with
# name 'train-steps' and 'eval-steps', respectively.
trainer = Trainer(
module_file=taxi_module_file_param,
transformed_examples=transform.outputs['transformed_examples'],
schema=infer_schema.outputs['schema'],
transform_graph=transform.outputs['transform_graph'],
train_args={'num_steps': train_steps},
eval_args={'num_steps': eval_steps})
# Set the TFMA config for Model Evaluation and Validation.
eval_config = tfma.EvalConfig(
model_specs=[
# Using signature 'eval' implies the use of an EvalSavedModel. To use
# a serving model remove the signature to defaults to 'serving_default'
# and add a label_key.
tfma.ModelSpec(signature_name='eval')
],
metrics_specs=[
tfma.MetricsSpec(
# The metrics added here are in addition to those saved with the
# model (assuming either a keras model or EvalSavedModel is used).
# Any metrics added into the saved model (for example using
# model.compile(..., metrics=[...]), etc) will be computed
# automatically.
metrics=[
tfma.MetricConfig(class_name='ExampleCount')
],
# To add validation thresholds for metrics saved with the model,
# add them keyed by metric name to the thresholds map.
thresholds = {
'binary_accuracy': tfma.MetricThreshold(
value_threshold=tfma.GenericValueThreshold(
lower_bound={'value': 0.5}),
change_threshold=tfma.GenericChangeThreshold(
direction=tfma.MetricDirection.HIGHER_IS_BETTER,
absolute={'value': -1e-10}))
}
)
],
slicing_specs=[
# An empty slice spec means the overall slice, i.e. the whole dataset.
tfma.SlicingSpec(),
# Data can be sliced along a feature column. In this case, data is
# sliced along feature column trip_start_hour.
tfma.SlicingSpec(feature_keys=['trip_start_hour'])
])
# The name of slicing column is specified as a RuntimeParameter.
evaluator = Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
eval_config=eval_config)
pusher = Pusher(
model=trainer.outputs['model'],
model_blessing=evaluator.outputs['blessing'],
push_destination=pusher_pb2.PushDestination(
filesystem=pusher_pb2.PushDestination.Filesystem(
base_directory=os.path.join(
str(pipeline.ROOT_PARAMETER), 'model_serving'))))
# Create the DSL pipeline object.
# This pipeline obj carries the business logic of the pipeline, but no runner-specific information
# was included.
dsl_pipeline = pipeline.Pipeline(
pipeline_name=pipeline_name,
pipeline_root=pipeline_root,
components=[
example_gen, statistics_gen, infer_schema, validate_stats, transform,
        trainer, evaluator, pusher
],
enable_cache=True,
beam_pipeline_args=['--direct_num_workers=%d' % 0],
)
# Specify a TFX docker image. For the full list of tags please see:
# https://hub.docker.com/r/tensorflow/tfx/tags
tfx_image = 'gcr.io/tfx-oss-public/tfx:0.21.2'
config = kubeflow_dag_runner.KubeflowDagRunnerConfig(
kubeflow_metadata_config=kubeflow_dag_runner
.get_default_kubeflow_metadata_config(),
tfx_image=tfx_image)
kfp_runner = kubeflow_dag_runner.KubeflowDagRunner(config=config)
# KubeflowDagRunner compiles the DSL pipeline object into KFP pipeline package.
# By default it is named <pipeline_name>.tar.gz
kfp_runner.run(dsl_pipeline)
run_result = kfp.Client(
host='1234567abcde-dot-us-central2.pipelines.googleusercontent.com' # Put your KFP endpoint here
).create_run_from_pipeline_package(
pipeline_name + '.tar.gz',
arguments={
# Uncomment following lines in order to use custom GCS bucket/module file/training data.
# 'pipeline-root': 'gs://<your-gcs-bucket>/tfx_taxi_simple/' + kfp.dsl.RUN_ID_PLACEHOLDER,
# 'module-file': '<gcs path to the module file>', # delete this line to use default module file.
# 'data-root': '<gcs path to the data>' # delete this line to use default data.
})
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note
Step2: In this example we'll need TFX SDK later than 0.21 to leverage the RuntimeParameter feature.
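A quick way to confirm the installed SDK meets that requirement (a small sketch, run in the same environment as the pip installs above):
import pkg_resources
print(pkg_resources.get_distribution('tfx').version)   # expect 0.21.2 as pinned above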
Step3: TFX Components
|
14,840 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set() # setting seaborn default for plots
train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')
train.head()
train.shape
train.describe()
train.describe(include=['O'])
train.info()
train.isnull().sum()
test.shape
test.head()
test.info()
test.isnull().sum()
survived = train[train['Survived'] == 1]
not_survived = train[train['Survived'] == 0]
print ("Survived: %i (%.1f%%)"%(len(survived), float(len(survived))/len(train)*100.0))
print ("Not Survived: %i (%.1f%%)"%(len(not_survived), float(len(not_survived))/len(train)*100.0))
print ("Total: %i"%len(train))
train.Pclass.value_counts()
train.groupby('Pclass').Survived.value_counts()
train[['Pclass', 'Survived']].groupby(['Pclass'], as_index=False).mean()
#train.groupby('Pclass').Survived.mean().plot(kind='bar')
sns.barplot(x='Pclass', y='Survived', data=train)
train.Sex.value_counts()
train.groupby('Sex').Survived.value_counts()
train[['Sex', 'Survived']].groupby(['Sex'], as_index=False).mean()
#train.groupby('Sex').Survived.mean().plot(kind='bar')
sns.barplot(x='Sex', y='Survived', data=train)
tab = pd.crosstab(train['Pclass'], train['Sex'])
print (tab)
tab.div(tab.sum(1).astype(float), axis=0).plot(kind="bar", stacked=True)
plt.xlabel('Pclass')
plt.ylabel('Percentage')
sns.factorplot('Sex', 'Survived', hue='Pclass', size=4, aspect=2, data=train)
sns.factorplot(x='Pclass', y='Survived', hue='Sex', col='Embarked', data=train)
train.Embarked.value_counts()
train.groupby('Embarked').Survived.value_counts()
train[['Embarked', 'Survived']].groupby(['Embarked'], as_index=False).mean()
#train.groupby('Embarked').Survived.mean().plot(kind='bar')
sns.barplot(x='Embarked', y='Survived', data=train)
train.Parch.value_counts()
train.groupby('Parch').Survived.value_counts()
train[['Parch', 'Survived']].groupby(['Parch'], as_index=False).mean()
#train.groupby('Parch').Survived.mean().plot(kind='bar')
sns.barplot(x='Parch', y='Survived', ci=None, data=train) # ci=None will hide the error bar
train.SibSp.value_counts()
train.groupby('SibSp').Survived.value_counts()
train[['SibSp', 'Survived']].groupby(['SibSp'], as_index=False).mean()
#train.groupby('SibSp').Survived.mean().plot(kind='bar')
sns.barplot(x='SibSp', y='Survived', ci=None, data=train) # ci=None will hide the error bar
fig = plt.figure(figsize=(15,5))
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
sns.violinplot(x="Embarked", y="Age", hue="Survived", data=train, split=True, ax=ax1)
sns.violinplot(x="Pclass", y="Age", hue="Survived", data=train, split=True, ax=ax2)
sns.violinplot(x="Sex", y="Age", hue="Survived", data=train, split=True, ax=ax3)
total_survived = train[train['Survived']==1]
total_not_survived = train[train['Survived']==0]
male_survived = train[(train['Survived']==1) & (train['Sex']=="male")]
female_survived = train[(train['Survived']==1) & (train['Sex']=="female")]
male_not_survived = train[(train['Survived']==0) & (train['Sex']=="male")]
female_not_survived = train[(train['Survived']==0) & (train['Sex']=="female")]
plt.figure(figsize=[15,5])
plt.subplot(111)
sns.distplot(total_survived['Age'].dropna().values, bins=range(0, 81, 1), kde=False, color='blue')
sns.distplot(total_not_survived['Age'].dropna().values, bins=range(0, 81, 1), kde=False, color='red', axlabel='Age')
plt.figure(figsize=[15,5])
plt.subplot(121)
sns.distplot(female_survived['Age'].dropna().values, bins=range(0, 81, 1), kde=False, color='blue')
sns.distplot(female_not_survived['Age'].dropna().values, bins=range(0, 81, 1), kde=False, color='red', axlabel='Female Age')
plt.subplot(122)
sns.distplot(male_survived['Age'].dropna().values, bins=range(0, 81, 1), kde=False, color='blue')
sns.distplot(male_not_survived['Age'].dropna().values, bins=range(0, 81, 1), kde=False, color='red', axlabel='Male Age')
plt.figure(figsize=(15,6))
sns.heatmap(train.drop('PassengerId',axis=1).corr(), vmax=0.6, square=True, annot=True)
train_test_data = [train, test] # combining train and test dataset
for dataset in train_test_data:
dataset['Title'] = dataset.Name.str.extract(' ([A-Za-z]+)\.')
train.head()
pd.crosstab(train['Title'], train['Sex'])
for dataset in train_test_data:
dataset['Title'] = dataset['Title'].replace(['Lady', 'Countess','Capt', 'Col', \
'Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Other')
dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss')
dataset['Title'] = dataset['Title'].replace('Ms', 'Miss')
dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs')
train[['Title', 'Survived']].groupby(['Title'], as_index=False).mean()
title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Other": 5}
for dataset in train_test_data:
dataset['Title'] = dataset['Title'].map(title_mapping)
dataset['Title'] = dataset['Title'].fillna(0)
train.head()
for dataset in train_test_data:
dataset['Sex'] = dataset['Sex'].map( {'female': 1, 'male': 0} ).astype(int)
train.head()
train.Embarked.unique()
train.Embarked.value_counts()
for dataset in train_test_data:
dataset['Embarked'] = dataset['Embarked'].fillna('S')
train.head()
for dataset in train_test_data:
#print(dataset.Embarked.unique())
dataset['Embarked'] = dataset['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2} ).astype(int)
train.head()
for dataset in train_test_data:
age_avg = dataset['Age'].mean()
age_std = dataset['Age'].std()
age_null_count = dataset['Age'].isnull().sum()
age_null_random_list = np.random.randint(age_avg - age_std, age_avg + age_std, size=age_null_count)
dataset['Age'][np.isnan(dataset['Age'])] = age_null_random_list
dataset['Age'] = dataset['Age'].astype(int)
train['AgeBand'] = pd.cut(train['Age'], 5)
print (train[['AgeBand', 'Survived']].groupby(['AgeBand'], as_index=False).mean())
train.head()
for dataset in train_test_data:
dataset.loc[ dataset['Age'] <= 16, 'Age'] = 0
dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 32), 'Age'] = 1
dataset.loc[(dataset['Age'] > 32) & (dataset['Age'] <= 48), 'Age'] = 2
dataset.loc[(dataset['Age'] > 48) & (dataset['Age'] <= 64), 'Age'] = 3
dataset.loc[ dataset['Age'] > 64, 'Age'] = 4
train.head()
for dataset in train_test_data:
dataset['Fare'] = dataset['Fare'].fillna(train['Fare'].median())
train['FareBand'] = pd.qcut(train['Fare'], 4)
print (train[['FareBand', 'Survived']].groupby(['FareBand'], as_index=False).mean())
train.head()
for dataset in train_test_data:
dataset.loc[ dataset['Fare'] <= 7.91, 'Fare'] = 0
dataset.loc[(dataset['Fare'] > 7.91) & (dataset['Fare'] <= 14.454), 'Fare'] = 1
dataset.loc[(dataset['Fare'] > 14.454) & (dataset['Fare'] <= 31), 'Fare'] = 2
dataset.loc[ dataset['Fare'] > 31, 'Fare'] = 3
dataset['Fare'] = dataset['Fare'].astype(int)
train.head()
for dataset in train_test_data:
dataset['FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1
print (train[['FamilySize', 'Survived']].groupby(['FamilySize'], as_index=False).mean())
for dataset in train_test_data:
dataset['IsAlone'] = 0
dataset.loc[dataset['FamilySize'] == 1, 'IsAlone'] = 1
print (train[['IsAlone', 'Survived']].groupby(['IsAlone'], as_index=False).mean())
train.head(1)
test.head(1)
features_drop = ['Name', 'SibSp', 'Parch', 'Ticket', 'Cabin', 'FamilySize']
train = train.drop(features_drop, axis=1)
test = test.drop(features_drop, axis=1)
train = train.drop(['PassengerId', 'AgeBand', 'FareBand'], axis=1)
train.head()
test.head()
X_train = train.drop('Survived', axis=1)
y_train = train['Survived']
X_test = test.drop("PassengerId", axis=1).copy()
X_train.shape, y_train.shape, X_test.shape
# Importing Classifier Modules
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
clf = LogisticRegression()
clf.fit(X_train, y_train)
y_pred_log_reg = clf.predict(X_test)
acc_log_reg = round( clf.score(X_train, y_train) * 100, 2)
print (str(acc_log_reg) + ' percent')
clf = SVC()
clf.fit(X_train, y_train)
y_pred_svc = clf.predict(X_test)
acc_svc = round(clf.score(X_train, y_train) * 100, 2)
print (acc_svc)
clf = LinearSVC()
clf.fit(X_train, y_train)
y_pred_linear_svc = clf.predict(X_test)
acc_linear_svc = round(clf.score(X_train, y_train) * 100, 2)
print (acc_linear_svc)
clf = KNeighborsClassifier(n_neighbors = 3)
clf.fit(X_train, y_train)
y_pred_knn = clf.predict(X_test)
acc_knn = round(clf.score(X_train, y_train) * 100, 2)
print (acc_knn)
clf = DecisionTreeClassifier()
clf.fit(X_train, y_train)
y_pred_decision_tree = clf.predict(X_test)
acc_decision_tree = round(clf.score(X_train, y_train) * 100, 2)
print (acc_decision_tree)
clf = RandomForestClassifier(n_estimators=100)
clf.fit(X_train, y_train)
y_pred_random_forest = clf.predict(X_test)
acc_random_forest = round(clf.score(X_train, y_train) * 100, 2)
print (acc_random_forest)
clf = GaussianNB()
clf.fit(X_train, y_train)
y_pred_gnb = clf.predict(X_test)
acc_gnb = round(clf.score(X_train, y_train) * 100, 2)
print (acc_gnb)
clf = Perceptron(max_iter=5, tol=None)
clf.fit(X_train, y_train)
y_pred_perceptron = clf.predict(X_test)
acc_perceptron = round(clf.score(X_train, y_train) * 100, 2)
print (acc_perceptron)
clf = SGDClassifier(max_iter=5, tol=None)
clf.fit(X_train, y_train)
y_pred_sgd = clf.predict(X_test)
acc_sgd = round(clf.score(X_train, y_train) * 100, 2)
print (acc_sgd)
from sklearn.metrics import confusion_matrix
import itertools
clf = RandomForestClassifier(n_estimators=100)
clf.fit(X_train, y_train)
y_pred_random_forest_training_set = clf.predict(X_train)
acc_random_forest = round(clf.score(X_train, y_train) * 100, 2)
print ("Accuracy: %i %% \n"%acc_random_forest)
class_names = ['Survived', 'Not Survived']
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_train, y_pred_random_forest_training_set)
np.set_printoptions(precision=2)
print ('Confusion Matrix in Numbers')
print (cnf_matrix)
print ('')
cnf_matrix_percent = cnf_matrix.astype('float') / cnf_matrix.sum(axis=1)[:, np.newaxis]
print ('Confusion Matrix in Percentage')
print (cnf_matrix_percent)
print ('')
true_class_names = ['True Survived', 'True Not Survived']
predicted_class_names = ['Predicted Survived', 'Predicted Not Survived']
df_cnf_matrix = pd.DataFrame(cnf_matrix,
index = true_class_names,
columns = predicted_class_names)
df_cnf_matrix_percent = pd.DataFrame(cnf_matrix_percent,
index = true_class_names,
columns = predicted_class_names)
plt.figure(figsize = (15,5))
plt.subplot(121)
sns.heatmap(df_cnf_matrix, annot=True, fmt='d')
plt.subplot(122)
sns.heatmap(df_cnf_matrix_percent, annot=True)
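# As an extra cross-check (not part of the original notebook): the overall accuracy
# can be read straight off the confusion matrix as trace / total.
print ('Accuracy from confusion matrix: %.2f%%' % (100.0 * np.trace(cnf_matrix) / cnf_matrix.sum()))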
models = pd.DataFrame({
'Model': ['Logistic Regression', 'Support Vector Machines', 'Linear SVC',
'KNN', 'Decision Tree', 'Random Forest', 'Naive Bayes',
'Perceptron', 'Stochastic Gradient Decent'],
'Score': [acc_log_reg, acc_svc, acc_linear_svc,
acc_knn, acc_decision_tree, acc_random_forest, acc_gnb,
acc_perceptron, acc_sgd]
})
models.sort_values(by='Score', ascending=False)
test.head()
submission = pd.DataFrame({
"PassengerId": test["PassengerId"],
"Survived": y_pred_random_forest
})
submission.to_csv('submission.csv', index=False)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading Datasets
Step2: Looking into the training dataset
Step3: Below is a brief description of each column of the dataset
Step4: We can see that there are 891 rows and 12 columns in our training dataset.
Step5: describe(include = ['O']) will show the descriptive statistics of object data types.
Step6: This shows that there are duplicate Ticket number and Cabins shared. The highest number of duplicate ticket number is "CA. 2343". It has been repeated 7 times. Similarly, the highest number of people using the same cabin is 4. They are using cabin number "C23 C25 C27".
Step7: We can see that Age value is missing for many rows.
Step8: There are 177 rows with missing Age, 687 rows with missing Cabin and 2 rows with missing Embarked information.
Step9: Survived column is not present in Test data.
Step10: There are missing entries for Age in Test dataset as well.
Step11: There are 86 rows with missing Age, 327 rows with missing Cabin and 1 row with missing Fare information.
Step12: Pclass vs. Survival
Step13: Sex vs. Survival
Step14: Pclass & Sex vs. Survival
Step15: From the above plot, it can be seen that
Step16: From the above plot, it can be seen that
Step17: Parch vs. Survival
Step18: SibSp vs. Survival
Step19: Age vs. Survival
Step20: From Pclass violinplot, we can see that
Step21: From the above figures, we can see that
Step22: Feature Extraction
Step23: As you can see above, we have added a new column named Title in the Train dataset with the Title present in the particular passenger name.
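A quick illustration of what the ' ([A-Za-z]+)\.' pattern captures, using a made-up name in the dataset's format:
import re
print(re.search(r' ([A-Za-z]+)\.', 'Braund, Mr. Owen Harris').group(1))   # 'Mr'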
Step24: The number of passengers with each Title is shown above.
Step25: After that, we convert the categorical Title values into numeric form.
Step26: Sex Feature
Step27: Embarked Feature
Step28: Let's check the number of passengers for each Embarked category.
Step29: We find that category "S" has maximum passengers. Hence, we replace "nan" values with "S".
Step30: We now convert the categorical value of Embarked into numeric. We represent 0 as S, 1 as C and 2 as Q.
Step31: Age Feature
Step32: Now, we map Age according to AgeBand.
Step33: Fare Feature
Step34: Create FareBand. We divide the Fare into 4 category range.
Step35: Map Fare according to FareBand
Step36: SibSp & Parch Feature
Step37: The above data shows that
Step38: This shows that travelling alone has only 30% survival chance.
Step39: Feature Selection
Step40: We are done with Feature Selection/Engineering. Now, we are ready to train a classifier with our feature set.
Step41: There are many classifying algorithms present. Among them, we choose the following Classification algorithms for our problem
Step42: Logistic Regression
Step43: Support Vector Machine (SVM)
Step44: Linear SVM
Step45: $k$-Nearest Neighbors
Step46: Decision Tree
Step47: Random Forest
Step48: Gaussian Naive Bayes
Step49: Perceptron
Step50: Stochastic Gradient Descent (SGD)
Step51: Confusion Matrix
Step52: Comparing Models
Step53: From the above table, we can see that the Decision Tree and Random Forest classifiers have the highest accuracy score.
|
14,841 | <ASSISTANT_TASK:>
Python Code:
__author__ = "kyubyong. kbpark.linguist@gmail.com"
import numpy as np
np.__version__
x = np.arange(4).reshape((2, 2))
print("x=\n", x)
print("ans=\n", np.amin(x, 1))
x = np.arange(4).reshape((2, 2))
print("x=\n", x)
print("ans=\n", np.amax(x, 1, keepdims=True))
x = np.arange(10).reshape((2, 5))
print("x=\n", x)
out1 = np.ptp(x, 1)
out2 = np.amax(x, 1) - np.amin(x, 1)
assert np.allclose(out1, out2)
print("ans=\n", out1)
x = np.arange(1, 11).reshape((2, 5))
print("x=\n", x)
print("ans=\n", np.percentile(x, 75, 1))
x = np.arange(1, 10).reshape((3, 3))
print("x=\n", x)
print("ans=\n", np.median(x))
x = np.arange(5)
weights = np.arange(1, 6)
out1 = np.average(x, weights=weights)
out2 = (x*(weights/weights.sum())).sum()
assert np.allclose(out1, out2)
print(out1)
x = np.arange(5)
print("x=\n",x)
out1 = np.mean(x)
out2 = np.average(x)
assert np.allclose(out1, out2)
print("mean=\n", out1)
out3 = np.std(x)
out4 = np.sqrt(np.mean((x - np.mean(x)) ** 2 ))
assert np.allclose(out3, out4)
print("std=\n", out3)
out5 = np.var(x)
out6 = np.mean((x - np.mean(x)) ** 2 )
assert np.allclose(out5, out6)
print("variance=\n", out5)
x = np.array([0, 1, 2])
y = np.array([2, 1, 0])
print("ans=\n", np.cov(x, y))
x = np.array([0, 1, 3])
y = np.array([2, 4, 5])
print("ans=\n", np.corrcoef(x, y))
x = np.array([0, 1, 3])
y = np.array([2, 4, 5])
print("ans=\n", np.correlate(x, y))
x = np.array([0.5, 0.7, 1.0, 1.2, 1.3, 2.1])
bins = np.array([0, 1, 2, 3])
print("ans=\n", np.histogram(x, bins))
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(x, bins=bins)
plt.show()
xedges = [0, 1, 2, 3]
yedges = [0, 1, 2, 3, 4]
x = np.array([0, 0.1, 0.2, 1., 1.1, 2., 2.1])
y = np.array([0, 0.1, 0.2, 1., 1.1, 2., 3.3])
H, xedges, yedges = np.histogram2d(x, y, bins=(xedges, yedges))
print("ans=\n", H)
plt.scatter(x, y)
plt.grid()
x = np.array([0, 1, 1, 3, 2, 1, 7])
print("ans=\n", np.bincount(x))
x = np.array([0.2, 6.4, 3.0, 1.6])
bins = np.array([0.0, 1.0, 2.5, 4.0, 10.0])
print("ans=\n", np.digitize(x, bins))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Order statistics
Step2: Q2. Return the maximum value of x along the second axis. Reduce the second axis to the dimension with size one.
Step3: Q3. Calcuate the difference between the maximum and the minimum of x along the second axis.
Step4: Q4. Compute the 75th percentile of x along the second axis.
Step5: Averages and variances
Step6: Q6. Compute the weighted average of x.
Step7: Q7. Compute the mean, standard deviation, and variance of x along the second axis.
Step8: Correlating
Step9: Q9. In the above covariance matrix, what does the -1 mean?
Step10: Q11. Compute cross-correlation of x and y.
Step11: Histograms
Step12: Q13. Compute the 2d histogram of x and y.
Step13: Q14. Count number of occurrences of 0 through 7 in x.
Step14: Q15. Return the indices of the bins to which each value in x belongs.
|
14,842 | <ASSISTANT_TASK:>
Python Code:
# Import required libraries
from tpot import TPOTClassifier
from sklearn.cross_validation import train_test_split
import pandas as pd
import numpy as np
#Load the data
Marketing=pd.read_csv('Data_FinalProject.csv')
Marketing.head(5)
Marketing.groupby('loan').y.value_counts()
Marketing.groupby(['loan','marital']).y.value_counts()
Marketing.rename(columns={'y': 'class'}, inplace=True)
Marketing.dtypes
for cat in ['job', 'marital', 'education', 'default', 'housing', 'loan', 'contact', 'month', 'day_of_week', 'poutcome' ,'class']:
print("Number of levels in category '{0}': \b {1:2.2f} ".format(cat, Marketing[cat].unique().size))
for cat in ['contact', 'poutcome','class', 'marital', 'default', 'housing', 'loan']:
print("Levels for catgeory '{0}': {1}".format(cat, Marketing[cat].unique()))
Marketing['marital'] = Marketing['marital'].map({'married':0,'single':1,'divorced':2,'unknown':3})
Marketing['default'] = Marketing['default'].map({'no':0,'yes':1,'unknown':2})
Marketing['housing'] = Marketing['housing'].map({'no':0,'yes':1,'unknown':2})
Marketing['loan'] = Marketing['loan'].map({'no':0,'yes':1,'unknown':2})
Marketing['contact'] = Marketing['contact'].map({'telephone':0,'cellular':1})
Marketing['poutcome'] = Marketing['poutcome'].map({'nonexistent':0,'failure':1,'success':2})
Marketing['class'] = Marketing['class'].map({'no':0,'yes':1})
Marketing = Marketing.fillna(-999)
pd.isnull(Marketing).any()
from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer()
job_Trans = mlb.fit_transform([{str(val)} for val in Marketing['job'].values])
education_Trans = mlb.fit_transform([{str(val)} for val in Marketing['education'].values])
month_Trans = mlb.fit_transform([{str(val)} for val in Marketing['month'].values])
day_of_week_Trans = mlb.fit_transform([{str(val)} for val in Marketing['day_of_week'].values])
day_of_week_Trans
marketing_new = Marketing.drop(['marital','default','housing','loan','contact','poutcome','class','job','education','month','day_of_week'], axis=1)
assert (len(Marketing['day_of_week'].unique()) == len(mlb.classes_)), "Not Equal" #check correct encoding done
Marketing['day_of_week'].unique(),mlb.classes_
marketing_new = np.hstack((marketing_new.values, job_Trans, education_Trans, month_Trans, day_of_week_Trans))
np.isnan(marketing_new).any()
marketing_new[0].size
marketing_class = Marketing['class'].values
training_indices, validation_indices = training_indices, testing_indices = train_test_split(Marketing.index, stratify = marketing_class, train_size=0.75, test_size=0.25)
training_indices.size, validation_indices.size
tpot = TPOTClassifier(verbosity=2, max_time_mins=2, max_eval_time_mins=0.04, population_size=15)
tpot.fit(marketing_new[training_indices], marketing_class[training_indices])
tpot.score(marketing_new[validation_indices], Marketing.loc[validation_indices, 'class'].values)
tpot.export('tpot_marketing_pipeline.py')
# %load tpot_marketing_pipeline.py
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
# NOTE: Make sure that the class is labeled 'target' in the data file
tpot_data = pd.read_csv('PATH/TO/DATA/FILE', sep='COLUMN_SEPARATOR', dtype=np.float64)
features = tpot_data.drop('target', axis=1).values
training_features, testing_features, training_target, testing_target = \
train_test_split(features, tpot_data['target'].values, random_state=None)
# Average CV score on the training set was:0.913728927925
exported_pipeline = DecisionTreeClassifier(criterion="gini", max_depth=5, min_samples_leaf=16, min_samples_split=8)
exported_pipeline.fit(training_features, training_target)
results = exported_pipeline.predict(testing_features)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Exploration
Step2: Data Munging
Step3: At present, TPOT requires all the data to be in numerical format. As we can see below, our data set has 11 categorical variables
Step4: We then check the number of levels that each of the five categorical variables have.
Step5: As we can see, contact and poutcome have few levels. Let's find out what they are.
Step6: We then code these levels manually into numerical values. For nan i.e. the missing values, we simply replace them with a placeholder value (-999). In fact, we perform this replacement for the entire data set.
Step7: For other categorical variables, we encode the levels as digits using Scikit-learn's MultiLabelBinarizer and treat them as new features.
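A tiny illustration of what MultiLabelBinarizer produces on toy labels (the values here are made up just to show the one-column-per-level output):
from sklearn.preprocessing import MultiLabelBinarizer
toy = MultiLabelBinarizer()
print(toy.fit_transform([{'admin.'}, {'blue-collar'}, {'admin.'}]))   # [[1 0] [0 1] [1 0]]
print(toy.classes_)                                                   # ['admin.' 'blue-collar']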
Step8: Drop the unused features from the dataset.
Step9: We then add the encoded features to form the final dataset to be used with TPOT.
Step10: Keeping in mind that the final dataset is in the form of a numpy array, we can check the number of features in the final dataset as follows.
Step11: Finally we store the class labels, which we need to predict, in a separate variable.
Step12: Data Analysis using TPOT
Step13: After that, we proceed to calling the fit(), score() and export() functions on our training dataset.
Step14: In the above, 4 generations were computed, each giving the training efficiency of fitting the model on the training set. As evident, the best pipeline is the one that has the CV score of 91.373%. The model that produces this result is one that fits a decision tree algorithm on the data set. Next, the test error is computed for validation purposes.
|
14,843 | <ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(-10, 10, 201)
def f(x):
return x**2
y = f(x)
fig, ax = plt.subplots(1, figsize=(8,4))
ax.plot(x,y, 'g', label='line')
ax.fill_between(x,y, color='blue', alpha=0.3, label='area under graph')
ax.grid(True)
ax.legend()
plt.show()
x = np.linspace(-5, 5, 201)
def f(x):
return 6*x**2 - 20
def F(x):
return 2*x**3 - 20*x
y = f(x)
start = 60
end = 160
section = x[start:end+1]
fig, ax = plt.subplots(1, figsize=(8,4))
ax.plot(x,y, 'g', label='y = 2x')
ax.fill_between(section,f(section), color='blue', alpha=0.3, label='area under graph')
ax.plot(x[start], 0, 'om', color='purple', label='a')
ax.plot(x[end], 0, 'om', color='r', label='b')
ax.grid(True)
ax.legend()
plt.show()
print 'shaded net area =', F(x[end]) - F(x[start])
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Contents
Step2: <a id='Fundamental_Theorem_of_Calculus'></a>
|
14,844 | <ASSISTANT_TASK:>
Python Code:
# The kernel for this notebook is running Python 3, but we'll see:
from __future__ import print_function
import mxnet as mx
from mxnet import nd, autograd, gluon
mx.random.seed(1)
data_ctx = mx.cpu()
model_ctx = mx.cpu()
num_inputs = 2
num_outputs = 1
num_examples = 10000
def real_fn(X):
return 2 * X[:, 0] - 3.4 * X[:, 1] + 4.2
X = nd.random_normal(shape=(num_examples, num_inputs), ctx=data_ctx)
noise = .1 * nd.random_normal(shape=(num_examples,), ctx=data_ctx)
y = real_fn(X) + noise
print(X[0])
print(y[0])
print(2 * X[0, 0] - 3.4 * X[0, 1] + 4.2)
import matplotlib.pyplot as plt
plt.scatter(X[:, 1].asnumpy(), y.asnumpy())
plt.show()
batch_size = 4
train_data = gluon.data.DataLoader(gluon.data.ArrayDataset(X, y),
batch_size=batch_size, shuffle=True)
for i, (data, label) in enumerate(train_data):
print(data, label)
break
for i, (data, label) in enumerate(train_data):
print(data, label)
break
counter = 0
for i, (data, label) in enumerate(train_data):
pass
print(i+1)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set the context
Step2: Linear regression
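Concretely, the synthetic data below follows the linear model implemented in real_fn, plus Gaussian noise:
$$y = 2\,x_1 - 3.4\,x_2 + 4.2 + \epsilon, \qquad \epsilon \sim 0.1\,\mathcal{N}(0, 1)$$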
Step3: Notice that each row in X consists of a 2-dimensional data point and that each row in y consists of a 1-dimensional target value.
Step4: Note that because our synthetic features X live on data_ctx and because our noise also lives on data_ctx, the labels y, produced by combining X and noise in real_fn also live on data_ctx.
Step5: We can visualize the correspondence between our second feature (X[
Step6: Data iterators
Step7: Once we’ve initialized our DataLoader (train_data), we can easily fetch batches by iterating over train_data just as if it were a Python list.
Step8: If we run that same code again, you'll notice that we get a different batch.
Step9: Finally, if we actually pass over the entire dataset and count the number of batches we'll find that there are 2500 batches.
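That count is just the dataset size divided by the batch size used above:
print(10000 // 4)   # 2500 batches per epoch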
|
14,845 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
from scipy.io import loadmat
from sklearn.preprocessing import OneHotEncoder
data = loadmat('../data/andrew_ml_ex33507/ex3data1.mat')
data
X = data['X']
y = data['y']
X.shape, y.shape  # check the dimensions
# The inputs are raw pixel values: a 20*20 pixel image gives 400 input-layer units, not counting the extra bias term.
# The material already provides trained network parameters, with 25 hidden units and 10 output units (10 classes).
weight = loadmat("../data/andrew_ml_ex33507/ex3weights.mat")
theta1, theta2 = weight['Theta1'], weight['Theta2']
theta1.shape, theta2.shape
sample_idx = np.random.choice(np.arange(data['X'].shape[0]), 100)
sample_images = data['X'][sample_idx, :]
# display the sampled digits as binary images
fig, ax_array = plt.subplots(nrows=5, ncols=5, sharey=True, sharex=True, figsize=(8, 8))
for r in range(5):
for c in range(5):
ax_array[r, c].matshow(np.array(sample_images[5 * r + c].reshape((20, 20))).T,cmap=matplotlib.cm.binary)
plt.xticks(np.array([]))
plt.yticks(np.array([]))
def sigmoid(z):
return 1 / (1 + np.exp(-z))
# 2nd: following the propagation rule above, define the first layer, compute the values of the second (hidden) layer, and add the bias unit
def forward_propagate(X,theta1,theta2):
m= X.shape[0]
a1 = np.insert(X,0, values=np.ones(m), axis=1)
Z2 = a1*theta1.T
a2= np.insert(sigmoid(Z2),0, values=np.ones(m), axis=1)
Z3= a2*theta2.T
h= sigmoid(Z3)
return a1,Z2,a2,Z3,h
# Cost function (without the regularization, a.k.a. weight-decay, term). Y is a 5000*10 matrix here,
# so working with 2-D matrices directly replaces looping over examples and accumulating.
def cost(X,Y,theta1,theta2):
    m = X.shape[0]
    X = np.matrix(X)
    Y = np.matrix(Y)
    _, _, _, _, h = forward_propagate(X, theta1, theta2)
    # np.multiply is element-wise multiplication of same-sized matrices
first = np.multiply(Y,np.log(h))
second = np.multiply((1-Y),np.log((1-h)))
J= np.sum(first+second)
J = (-1/m)*J
return J
# One-hot encode the y labels. Initially y is a 5000*1 vector, but we need to encode it as a matrix.
# For example, if the original y0=2, the corresponding row of the transformed Y is [0,1,0,...,0];
# if the original label is 10, the corresponding row is [0,0,...,0,1].
# Scikit-learn has a built-in encoding function we can use for this.
encoder = OneHotEncoder(sparse=False)
y_onehot = encoder.fit_transform(y)
y_onehot.shape
y[0], y_onehot[0,:] # y0是数字0
# initial settings
input_size = 400
num_labels = 10
cost(X, y_onehot,theta1, theta2)
# add the regularization term
def cost_reg(X,Y,theta1,theta2,learning_rate):
m = X.shape[0]
X = np.matrix(X)
Y = np.matrix(Y)
_,_,_,_,h=forward_propagate(X,theta1,theta2)
first = np.multiply(Y,np.log(h))
second = np.multiply((1-Y),np.log((1-h)))
J= np.sum(first+second)
    # the bias column (first column) of each theta is excluded from the regularization term
J = (-1/m)*J + (float(learning_rate) / (2 * m))*(np.sum(np.power(theta1[:,1:],2))+np.sum(np.power(theta2[:,1:],2)))
return J
# theta1.shape,theta2.shape
cost_reg(X, y_onehot,theta1, theta2,1)
# compute the derivative of the sigmoid function
def sigmoid_gradient(z):
return np.multiply(sigmoid(z) ,(1-sigmoid(z)))
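# g'(z) = g(z) * (1 - g(z)); at z = 0 this evaluates to 0.25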
# sanity check
sigmoid_gradient(0)
# initial settings
input_size = 400 # number of input units
hidden_size = 25 # number of hidden units
num_labels = 10 # number of output units
epsilon = 0.001
theta01=np.random.rand(hidden_size,input_size+1) * 2*epsilon - epsilon # the +1 adds the bias unit
theta02 =np.random.rand(num_labels,hidden_size+1)* 2*epsilon - epsilon
theta01.shape,theta02.shape
# compute z and a for every layer in turn
def forward_propagateNEW(X,thetalist):
m= X.shape[0]
a = np.insert(X,0, values=np.ones(m), axis=1)
alist=[a]
zlist=[]
for i in range(len(thetalist)):
theta= thetalist[i]
z = a * theta
# a= np.insert(sigmoid(z),0, values=np.ones(m), axis=1)
a=sigmoid(z)
if(i<len(thetalist)-1):
a= np.insert(a,0, values=np.ones(m), axis=1)
zlist.append(z)
alist.append(a)
return zlist,alist
# The accumulated gradients Δ are stored in the list Delta (in place of separate delta1 and delta2)
def backpropRegSelf(input_size, hidden_size, num_labels, X, y, learning_rate,L=3): # weights are randomly initialized; the network here has 3 layers
m = X.shape[0]
X = np.matrix(X)
y = np.matrix(y)
    # initialize the parameters
theta1 = (np.random.random((input_size+1,hidden_size))- 0.5)* 0.24
theta2 = (np.random.random((hidden_size+1,num_labels))- 0.5)* 0.24
encoder = OneHotEncoder(sparse=False)
    y_onehot = encoder.fit_transform(y) # one-hot encode y
    # forward pass: compute the values of each layer
theta = [theta1, theta2]
    zlist,alist = forward_propagateNEW(X, theta)# returns a1, z2, a2, ...
    # initialize the Delta accumulators
Delta=[]
for th in theta:
Delta.append(np.zeros(th.shape))
for i in range(m):
        # a and z have already been computed above
        for l in range(L,1,-1): # l = 3, 2 index the layers; the last layer's delta is computed separately in the branch below
            # last layer
if l==L:
                delta=alist[-1][i,:]-y_onehot[i,:] # δ of the last layer
Delta[l-2] = Delta[l-2] + alist[l-2][i,:].T * delta
else:
zl = zlist[l-2][i,:]
                zl = np.insert(zl, 0, values=np.ones(1)) # (1, 26) add the bias term
# d2t = np.multiply((theta2.T * d3t.T).T, sigmoid_gradient(z2t)) # (1, 26)
# delta1 = delta1 + (d2t[:,1:]).T * a1t
delta = np.multiply(delta*theta[l-1].T, sigmoid_gradient(zl)) #
                # arrays are zero-based: Delta is indexed from layer 1 while delta starts from layer 2  # (25, 401) # (10, 26)
Delta[l-2] = Delta[l-2] + alist[l-2][i,:].T * delta[:,1:]
# add the gradient regularization term
gradAll = None
for j in range(len(Delta)):
Delta[j][:,1:] = Delta[j][:,1:]/m + (theta[j][:,1:] * learning_rate) / m
if gradAll is None:
gradAll = np.ravel(Delta[j])
else:
tmp=np.ravel(Delta[j])
gradAll = np.concatenate([gradAll,tmp])
# Delta[:,:,1:] = Delta[:,:,1:] + (theta[:,:,1:] * learning_rate) / m
return gradAll
grad2= backpropRegSelf(input_size, hidden_size, num_labels, X, y, 1)
print(grad2.shape)
def backpropReg(params, input_size, hidden_size, num_labels, X, y, learning_rate):
m = X.shape[0]
X = np.matrix(X)
y = np.matrix(y)
# reshape the parameter array into parameter matrices for each layer
theta1 = np.matrix(np.reshape(params[:hidden_size * (input_size + 1)], (hidden_size, (input_size + 1))))
theta2 = np.matrix(np.reshape(params[hidden_size * (input_size + 1):], (num_labels, (hidden_size + 1))))
# run the feed-forward pass
a1, z2, a2, z3, h = forward_propagate(X, theta1, theta2)
# initializations
J = 0
delta1 = np.zeros(theta1.shape) # (25, 401)
delta2 = np.zeros(theta2.shape) # (10, 26)
# compute the cost
for i in range(m):
first_term = np.multiply(-y[i,:], np.log(h[i,:]))
second_term = np.multiply((1 - y[i,:]), np.log(1 - h[i,:]))
J += np.sum(first_term - second_term)
J = J / m
# add the cost regularization term
J += (float(learning_rate) / (2 * m)) * (np.sum(np.power(theta1[:,1:], 2)) + np.sum(np.power(theta2[:,1:], 2)))
# perform backpropagation
for t in range(m):
a1t = a1[t,:] # (1, 401)
z2t = z2[t,:] # (1, 25)
a2t = a2[t,:] # (1, 26)
ht = h[t,:] # (1, 10)
yt = y[t,:] # (1, 10)
d3t = ht - yt # (1, 10)
z2t = np.insert(z2t, 0, values=np.ones(1)) # (1, 26)
d2t = np.multiply((theta2.T * d3t.T).T, sigmoid_gradient(z2t)) # (1, 26)
delta1 = delta1 + (d2t[:,1:]).T * a1t
delta2 = delta2 + d3t.T * a2t
delta1 = delta1 / m
delta2 = delta2 / m
# add the gradient regularization term
delta1[:,1:] = delta1[:,1:] + (theta1[:,1:] * learning_rate) / m
delta2[:,1:] = delta2[:,1:] + (theta2[:,1:] * learning_rate) / m
# unravel the gradient matrices into a single array
grad = np.concatenate((np.ravel(delta1), np.ravel(delta2)))
return J, grad
# np.random.random(size) returns an array of `size` random floats in [0, 1)
params = (np.random.random(size=hidden_size * (input_size + 1) + num_labels * (hidden_size + 1)) - 0.5) * 0.24
j,grad = backpropReg(params, input_size, hidden_size, num_labels, X, y, 1)
print(j,grad.shape)
# j2,grad2= backpropRegSelf(input_size, hidden_size, num_labels, X, y, 1)
# print(j2,grad2[0:10])
# #J θ
# input_size = 400 # number of input units
# hidden_size = 25 # number of hidden units
# num_labels = 10 # number of output units
def jcost(X, y,input_size, hidden_size, output_size,theta):
m = X.shape[0]
X = np.matrix(X)
y = np.matrix(y)
theta1 = np.reshape(theta[0:hidden_size*(input_size+1)],(hidden_size,input_size+1))#(25,401)
theta2 = np.reshape(theta[hidden_size*(input_size+1):],(output_size,hidden_size+1))#(10.26)
_,_,_,_,h=forward_propagate(X,theta1,theta2)
    # multiply: element-wise product of matrices of the same shape
first = np.multiply(y,np.log(h))
second = np.multiply((1-y),np.log((1-h)))
J= np.sum(first+second)
J = (-1/m)*J
return J
def check(X,y,theta1,theta2,eps):
theta = np.concatenate((np.ravel(theta1), np.ravel(theta2)))
gradapprox=np.zeros(len(theta))
for i in range(len(theta)):
        thetaplus = theta.copy()   # copy so we do not modify theta itself
        thetaplus[i] = thetaplus[i] + eps
        thetaminus = theta.copy()  # copy so the two perturbations stay independent
        thetaminus[i] = thetaminus[i] - eps
        gradapprox[i] = (jcost(X,y,input_size,hidden_size,num_labels,thetaplus) - jcost(X,y,input_size,hidden_size,num_labels,thetaminus)) / (2 * eps)
return gradapprox
# theta01.shape , theta02.shape
# this computation is very slow
gradapprox = check(X,y_onehot,theta1,theta2,0.001)
numerator = np.linalg.norm(grad2-gradapprox, ord=2) # Step 1'
denominator = np.linalg.norm(grad2, ord=2) + np.linalg.norm(gradapprox, ord=2) # Step 2'
difference = numerator / denominator
print(difference)
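# As a rule of thumb, a relative difference around 1e-7 or smaller suggests the analytic backprop gradients agree with the numerical estimates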
# use scipy's optimization library to find the optimal parameters
from scipy.optimize import minimize
# opt.fmin_tnc(func=cost, x0=theta, fprime=gradient, args=(X, y))
learning_rate = 1  # regularization strength; matches the value passed to backpropReg above
fmin = minimize(fun=backpropReg, x0=(params), args=(input_size, hidden_size, num_labels, X, y_onehot, learning_rate),
method='TNC', jac=True, options={'maxiter': 250})
fmin
X = np.matrix(X)
thetafinal1 = np.matrix(np.reshape(fmin.x[:hidden_size * (input_size + 1)], (hidden_size, (input_size + 1))))
thetafinal2 = np.matrix(np.reshape(fmin.x[hidden_size * (input_size + 1):], (num_labels, (hidden_size + 1))))
print(thetafinal1[0,1],grad2[1])
# compute predictions using the optimized θ
a1, z2, a2, z3, h = forward_propagate(X, thetafinal1, thetafinal2 )
y_pred = np.array(np.argmax(h, axis=1) + 1)
y_pred
# Finally, we can compute the accuracy to see how well the trained neural network performs.
# compare the predicted values with the actual values
from sklearn.metrics import classification_report # this module produces an evaluation report
print(classification_report(y, y_pred))
hidden_layer = thetafinal1[:, 1:]
hidden_layer.shape
fig, ax_array = plt.subplots(nrows=5, ncols=5, sharey=True, sharex=True, figsize=(12, 12))
for r in range(5):
for c in range(5):
ax_array[r, c].matshow(np.array(hidden_layer[5 * r + c].reshape((20, 20))),cmap=matplotlib.cm.binary)
plt.xticks(np.array([]))
plt.yticks(np.array([]))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model representation
Step2: Backpropagation
Step3: Initialize the parameters
Step4: Backpropagation
Step5: Gradient checking
|
14,846 | <ASSISTANT_TASK:>
Python Code:
# -*- coding: utf-8 -*-
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from pyfme.aircrafts import Cessna310
from pyfme.environment.environment import Environment
from pyfme.environment.atmosphere import ISA1976
from pyfme.environment.gravity import VerticalConstant
from pyfme.environment.wind import NoWind
from pyfme.models.systems import EulerFlatEarth
from pyfme.simulator import BatchSimulation
from pyfme.utils.trimmer import steady_state_flight_trimmer
aircraft = Cessna310()
atmosphere = ISA1976()
gravity = VerticalConstant()
wind = NoWind()
environment = Environment(atmosphere, gravity, wind)
TAS = 312.5 * 0.3048 # m/s
h0 = 8000 * 0.3048 # m
psi0 = 1 # rad
x0, y0 = 0, 0 # m
turn_rate = 0.0 # rad/s
gamma0 = +0.05 # rad
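# gamma0 is the flight-path (climb) angle; turn_rate = 0 rad/s requests a straight, non-turning trimmed flight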
system = EulerFlatEarth(lat=0, lon=0, h=h0, psi=psi0, x_earth=x0, y_earth=y0)
not_trimmed_controls = {'delta_elevator': 0.05,
'hor_tail_incidence': 0.00,
'delta_aileron': 0.01 * np.sign(turn_rate),
'delta_rudder': 0.01 * np.sign(turn_rate),
'delta_t': 0.5}
controls2trim = ['delta_elevator', 'delta_aileron', 'delta_rudder', 'delta_t']
trimmed_ac, trimmed_sys, trimmed_env, results = steady_state_flight_trimmer(
aircraft, system, environment, TAS=TAS, controls_0=not_trimmed_controls,
controls2trim=controls2trim, gamma=gamma0, turn_rate=turn_rate, verbose=2)
print(results)
my_simulation = BatchSimulation(trimmed_ac, trimmed_sys, trimmed_env)
tfin = 150 # seconds
N = tfin * 100 + 1
time = np.linspace(0, tfin, N)
initial_controls = trimmed_ac.controls
controls = {}
for control_name, control_value in initial_controls.items():
controls[control_name] = np.ones_like(time) * control_value
my_simulation.set_controls(time, controls)
par_list = ['x_earth', 'y_earth', 'height',
'psi', 'theta', 'phi',
'u', 'v', 'w',
'v_north', 'v_east', 'v_down',
'p', 'q', 'r',
'alpha', 'beta', 'TAS',
'F_xb', 'F_yb', 'F_zb',
'M_xb', 'M_yb', 'M_zb']
my_simulation.set_par_dict(par_list)
my_simulation.run_simulation()
plt.style.use('ggplot')
for ii in range(len(par_list) // 3):
three_params = par_list[3*ii:3*ii+3]
fig, ax = plt.subplots(3, 1, sharex=True)
for jj, par in enumerate(three_params):
ax[jj].plot(time, my_simulation.par_dict[par])
ax[jj].set_ylabel(par)
ax[jj].set_xlabel('time (s)')
fig.tight_layout()
fig = plt.figure()
ax = Axes3D(fig)
ax.plot(my_simulation.par_dict['x_earth'],
my_simulation.par_dict['y_earth'],
my_simulation.par_dict['height'])
ax.plot(my_simulation.par_dict['x_earth'],
my_simulation.par_dict['y_earth'],
my_simulation.par_dict['height'] * 0)
ax.set_xlabel('x_earth')
ax.set_ylabel('y_earth')
ax.set_zlabel('z_earth')
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initialize variables
Step2: Initial conditions
|
14,847 | <ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (15.0, 8.0)
# First, we need to know what's in the data file.
!head -15 R11ceph.dat
class Cepheids(object):
def __init__(self,filename):
# Read in the data and store it in this master array:
self.data = np.loadtxt(filename)
self.hosts = self.data[:,1].astype('int').astype('str')
# We'll need the plotting setup to be the same each time we make a plot:
colornames = ['red','orange','yellow','green','cyan','blue','violet','magenta','gray']
self.colors = dict(zip(self.list_hosts(), colornames))
self.xlimits = np.array([0.3,2.3])
self.ylimits = np.array([30.0,17.0])
return
def list_hosts(self):
# The list of (9) unique galaxy host names:
return np.unique(self.hosts)
def select(self,ID):
# Pull out one galaxy's data from the master array:
index = (self.hosts == str(ID))
self.mobs = self.data[index,2]
self.merr = self.data[index,3]
self.logP = np.log10(self.data[index,4])
return
def plot(self,X):
# Plot all the points in the dataset for host galaxy X.
ID = str(X)
self.select(ID)
plt.rc('xtick', labelsize=16)
plt.rc('ytick', labelsize=16)
plt.errorbar(self.logP, self.mobs, yerr=self.merr, fmt='.', ms=7, lw=1, color=self.colors[ID], label='NGC'+ID)
plt.xlabel('$\\log_{10} P / {\\rm days}$',fontsize=20)
plt.ylabel('${\\rm magnitude (AB)}$',fontsize=20)
plt.xlim(self.xlimits)
plt.ylim(self.ylimits)
plt.title('Cepheid Period-Luminosity (Riess et al 2011)',fontsize=20)
return
def overlay_straight_line_with(self,a=0.0,b=24.0):
# Overlay a straight line with gradient a and intercept b.
x = self.xlimits
y = a*x + b
plt.plot(x, y, 'k-', alpha=0.5, lw=2)
plt.xlim(self.xlimits)
plt.ylim(self.ylimits)
return
def add_legend(self):
plt.legend(loc='upper left')
return
data = Cepheids('R11ceph.dat')
print(data.colors)
data.plot(4258)
# for ID in data.list_hosts():
# data.plot(ID)
data.overlay_straight_line_with(a=-2.0,b=24.0)
data.add_legend()
# import cepheids_pgm
# cepheids_pgm.simple()
from IPython.display import Image
Image(filename="cepheids_pgm.png")
def log_likelihood(logP,mobs,merr,a,b):
return -0.5*np.sum((mobs - a*logP -b)**2/(merr**2))
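# Gaussian log-likelihood (up to an additive constant) for the straight-line model m = a*logP + b with independent errors merr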
def log_prior(a,b):
amin,amax = -10.0,10.0
bmin,bmax = 10.0,30.0
if (a > amin)*(a < amax)*(b > bmin)*(b < bmax):
logp = np.log(1.0/(amax-amin)) + np.log(1.0/(bmax-bmin))
else:
logp = -np.inf
return logp
def log_posterior(logP,mobs,merr,a,b):
return log_likelihood(logP,mobs,merr,a,b) + log_prior(a,b)
# Select a Cepheid dataset:
data.select(4258)
# Set up parameter grids:
npix = 100
amin,amax = -4.0,-2.0
bmin,bmax = 25.0,27.0
agrid = np.linspace(amin,amax,npix)
bgrid = np.linspace(bmin,bmax,npix)
logprob = np.zeros([npix,npix])
# Loop over parameters, computing unnormlized log posterior PDF:
for i,a in enumerate(agrid):
for j,b in enumerate(bgrid):
logprob[j,i] = log_posterior(data.logP,data.mobs,data.merr,a,b)
# Normalize and exponentiate to get posterior density:
Z = np.max(logprob)
prob = np.exp(logprob - Z)
norm = np.sum(prob)
prob /= norm
sorted = np.sort(prob.flatten())
C = sorted.cumsum()
# Find the pixel values that lie at the levels that contain
# 68% and 95% of the probability:
lvl68 = np.min(sorted[C > (1.0 - 0.68)])
lvl95 = np.min(sorted[C > (1.0 - 0.95)])
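# lvl68 and lvl95 are the probability-density thresholds whose contours enclose 68% and 95% of the posterior mass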
plt.imshow(prob, origin='lower', cmap='Blues', interpolation='none', extent=[amin,amax,bmin,bmax])
plt.contour(prob,[lvl68,lvl95],colors='black',extent=[amin,amax,bmin,bmax])
plt.grid()
plt.xlabel('slope a')
plt.ylabel('intercept b / AB magnitudes')
data.plot(4258)
data.overlay_straight_line_with(a=-3.0,b=26.3)
data.add_legend()
prob_a_given_data = np.sum(prob,axis=0) # Approximate the integral as a sum
prob_b_given_data = np.sum(prob,axis=1) # Approximate the integral as a sum
print(prob_a_given_data.shape, np.sum(prob_a_given_data))
# Plot 1D distributions:
fig,ax = plt.subplots(nrows=1, ncols=2)
fig.set_size_inches(15, 6)
plt.subplots_adjust(wspace=0.2)
left = ax[0].plot(agrid, prob_a_given_data)
ax[0].set_title('${\\rm Pr}(a|d)$')
ax[0].set_xlabel('slope $a$')
ax[0].set_ylabel('Posterior probability density')
right = ax[1].plot(bgrid, prob_b_given_data)
ax[1].set_title('${\\rm Pr}(b|d)$')
ax[1].set_xlabel('intercept $b$ / AB magnitudes')
ax[1].set_ylabel('Posterior probability density')
# Compress each PDF into a median and 68% credible interval, and report:
def compress_1D_pdf(x,pr,ci=68,dp=1):
# Interpret credible interval request:
low = (1.0 - ci/100.0)/2.0 # 0.16 for ci=68
high = 1.0 - low # 0.84 for ci=68
# Find cumulative distribution and compute percentiles:
cumulant = pr.cumsum()
pctlow = x[cumulant>low].min()
median = x[cumulant>0.50].min()
pcthigh = x[cumulant>high].min()
# Convert to error bars, and format a string:
errplus = np.abs(pcthigh - median)
errminus = np.abs(median - pctlow)
report = "$ "+str(round(median,dp))+"^{+"+str(round(errplus,dp))+"}_{-"+str(round(errminus,dp))+"} $"
return report
print("a = ",compress_1D_pdf(agrid,prob_a_given_data,ci=68,dp=2))
print("b = ",compress_1D_pdf(bgrid,prob_b_given_data,ci=68,dp=2))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A Look at Each Host Galaxy's Cepheids
Step2: OK, now we are all set up! Let's plot one of the datasets.
Step3: Q
Step4: Q
Step5: Now, let's set up a suitable parameter grid and compute the posterior PDF!
Step6: Now, plot, with confidence contours
Step7: Are these inferred parameters sensible?
Step8: OK, this looks good! Later in the course we will do some more extensive model checking.
|
14,848 | <ASSISTANT_TASK:>
Python Code:
import graphlab
sales = graphlab.SFrame('kc_house_data.gl/')
import numpy as np # note this allows us to refer to numpy as np instead
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
# add the column 'constant' to the front of the features list so that we can extract it along with the others:
features = ['constant'] + features # this is how you combine two lists
# select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):
features_sframe = data_sframe[features]
# the following line will convert the features_SFrame into a numpy matrix:
feature_matrix = features_sframe.to_numpy()
# assign the column of data_sframe associated with the output to the SArray output_sarray
output_sarray = data_sframe[output]
# the following will convert the SArray into a numpy array by first converting it to a list
output_array = output_sarray.to_numpy()
return(feature_matrix, output_array)
(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price') # the [] around 'sqft_living' makes it a list
print example_features[0,:] # this accesses the first row of the data the ':' indicates 'all columns'
print example_output[0] # and the corresponding output
my_weights = np.array([1., 1.]) # the example weights
my_features = example_features[0,] # we'll use the first data point
predicted_value = np.dot(my_features, my_weights)
print predicted_value
def predict_output(feature_matrix, weights):
# assume feature_matrix is a numpy matrix containing the features as columns and weights is a corresponding numpy array
# create the predictions vector by using np.dot()
predictions = np.dot(feature_matrix, weights)
return(predictions)
test_predictions = predict_output(example_features, my_weights)
print test_predictions[0] # should be 1181.0
print test_predictions[1] # should be 2571.0
def feature_derivative(errors, feature):
# Assume that errors and feature are both numpy arrays of the same length (number of data points)
# compute twice the dot product of these vectors as 'derivative' and return the value
derivative = 2 * np.dot(errors, feature)
return(derivative)
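# The derivative of the RSS with respect to a single weight is 2 * sum(errors * feature); the dot product performs that sum over all data points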
(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price')
my_weights = np.array([0., 0.]) # this makes all the predictions 0
test_predictions = predict_output(example_features, my_weights)
# just like SFrames 2 numpy arrays can be elementwise subtracted with '-':
errors = test_predictions - example_output # prediction errors in this case is just the -example_output
feature = example_features[:,0] # let's compute the derivative with respect to 'constant', the ":" indicates "all rows"
derivative = feature_derivative(errors, feature)
print derivative
print -np.sum(example_output)*2 # should be the same as derivative
from math import sqrt # recall that the magnitude/length of a vector [g[0], g[1], g[2]] is sqrt(g[0]^2 + g[1]^2 + g[2]^2)
def regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance):
converged = False
weights = np.array(initial_weights) # make sure it's a numpy array
while not converged:
# compute the predictions based on feature_matrix and weights using your predict_output() function
predictions = predict_output(feature_matrix, weights)
# compute the errors as predictions - output
errors = predictions - output
gradient_sum_squares = 0 # initialize the gradient sum of squares
# while we haven't reached the tolerance yet, update each feature's weight
for i in range(len(weights)): # loop over each weight
# Recall that feature_matrix[:, i] is the feature column associated with weights[i]
# compute the derivative for weight[i]:
derivative = feature_derivative(errors, feature_matrix[:, i])
# add the squared value of the derivative to the gradient magnitude (for assessing convergence)
gradient_sum_squares += derivative * derivative
# subtract the step size times the derivative from the current weight
weights[i] = weights[i] - step_size * derivative
        # compute the square-root of the gradient sum of squares to get the gradient magnitude:
gradient_magnitude = sqrt(gradient_sum_squares)
if gradient_magnitude < tolerance:
converged = True
return(weights)
train_data,test_data = sales.random_split(.8,seed=0)
# let's test out the gradient descent
simple_features = ['sqft_living']
my_output = 'price'
(simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output)
initial_weights = np.array([-47000., 1.])
step_size = 7e-12
tolerance = 2.5e7
simple_weights = regression_gradient_descent(simple_feature_matrix, output,initial_weights, step_size, tolerance)
# weights = regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance)
simple_weights
(test_simple_feature_matrix, test_simple_output) = get_numpy_data(test_data, simple_features, my_output)
test_simple_predictions = predict_output(test_simple_feature_matrix, simple_weights)
test_data[['price', 'sqft_living']]
test_simple_predictions[0]
def rss(predictions, output):
df = predictions - output
return np.dot(df, df)
r1 = rss(test_simple_predictions ,test_simple_output)
model_features = ['sqft_living', 'sqft_living15'] # sqft_living15 is the average squarefeet for the nearest 15 neighbors.
my_output = 'price'
(feature_matrix, output) = get_numpy_data(train_data, model_features, my_output)
initial_weights = np.array([-100000., 1., 1.])
#initial_weights = np.array([-47000., 1., 1.])
step_size = 4e-12
tolerance = 1e9
feature_matrix.size
model2_weights = regression_gradient_descent(feature_matrix, output,initial_weights, step_size, tolerance)
(test_feature_matrix, test_output) = get_numpy_data(test_data, model_features, my_output)
test_pred = predict_output(test_feature_matrix, model2_weights)
model2_weights
test_feature_matrix[0],test_pred[0]
test_output[0]
r2 = rss(test_pred, test_output)
r1 < r2
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in house sales data
Step2: If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the other Week 2 notebook. For this notebook, however, we will work with the existing features.
Step3: Now we will write a function that will accept an SFrame, a list of feature names (e.g. ['sqft_living', 'bedrooms']) and a target feature e.g. ('price') and will return two things
Step4: For testing let's use the 'sqft_living' feature and a constant as our features and price as our output
Step5: Predicting output given regression weights
Step6: np.dot() also works when dealing with a matrix and a vector. Recall that the predictions from all the observations is just the RIGHT (as in weights on the right) dot product between the features matrix and the weights vector. With this in mind finish the following predict_output function to compute the predictions for an entire matrix of features given the matrix and the weights
Step7: If you want to test your code run the following cell
Step8: Computing the Derivative
Step9: To test your feature derivative run the following
Step10: Gradient Descent
Step11: A few things to note before we run the gradient descent. Since the gradient is a sum over all the data points and involves a product of an error and a feature the gradient itself will be very large since the features are large (squarefeet) and the output is large (prices). So while you might expect "tolerance" to be small, small is only relative to the size of the features.
Step12: Although the gradient descent is designed for multiple regression, since the constant is now a feature we can use the gradient descent function to estimate the parameters in the simple regression on squarefeet. The following cell sets up the feature_matrix, output, initial weights and step size for the first model
Step13: Next run your gradient descent with the above parameters.
Step14: How do your weights compare to those achieved in week 1 (don't expect them to be exactly the same)?
Step15: Now compute your predictions using test_simple_feature_matrix and your weights from above.
Step16: Quiz Question
Step17: Now that you have the predictions on test data, compute the RSS on the test data set. Save this value for comparison later. Recall that RSS is the sum of the squared errors (difference between prediction and output).
Step18: Running a multiple regression
Step19: Use the above parameters to estimate the model weights. Record these values for your quiz.
Step20: Use your newly estimated weights and the predict_output function to compute the predictions on the TEST data. Don't forget to create a numpy array for these features from the test set first!
Step21: Quiz Question
Step22: What is the actual price for the 1st house in the test data set?
Step23: Quiz Question
Step24: Quiz Question
|
14,849 | <ASSISTANT_TASK:>
Python Code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow as tf
import collections
import os
from google.colab import auth
auth.authenticate_user()
#@title Choices about the dataset you want to load.
# Make choices about the dataset here.
chain_length = 3 #@param {type:"slider", min:3, max:4, step:1}
mode = 'valid' #@param ['train', 'test', 'valid']
# Train has 500 shards, valid 150, test 100.
if mode == 'train':
num_shards = 500
elif mode == 'test':
num_shards = 100
elif mode == 'valid':
num_shards = 150
DatasetInfo = collections.namedtuple(
'DatasetInfo',
['basepath', 'size', 'chain_length']
)
_DATASETS = dict(
memo=DatasetInfo(
basepath=mode,
size=num_shards,
chain_length=chain_length)
)
def _get_dataset_files(dataset_info, root):
    """Generates lists of files for a given dataset version."""
basepath = dataset_info.basepath
base = os.path.join(root, basepath)
num_files = dataset_info.size
length = len(str(num_files))
template = 'trials-{:0%d}-of-{:0%d}' % (5, 5)
return [os.path.join(base, template.format(i, num_files))
for i in range(num_files)]
def parser_tf_examples(raw_data, chain_length=chain_length):
if chain_length == 3:
feature_map = {
'trials' : tf.io.FixedLenFeature(
shape=[48, 3, 1000],
dtype=tf.float32),
'correct_answer': tf.io.FixedLenFeature(
shape=[48],
dtype=tf.int64),
'difficulty': tf.io.FixedLenFeature(
shape=[48],
dtype=tf.int64),
'trial_type': tf.io.FixedLenFeature(
shape=[48],
dtype=tf.int64),
'memory': tf.io.FixedLenFeature(
shape=[32, 2, 1000],
dtype=tf.float32),
}
elif chain_length == 4:
feature_map = {
'trials' : tf.io.FixedLenFeature(
shape=[96, 3, 1000],
dtype=tf.float32),
'correct_answer': tf.io.FixedLenFeature(
shape=[96],
dtype=tf.int64),
'difficulty': tf.io.FixedLenFeature(
shape=[96],
dtype=tf.int64),
'trial_type': tf.io.FixedLenFeature(
shape=[96],
dtype=tf.int64),
'memory': tf.io.FixedLenFeature(
shape=[48, 2, 1000],
dtype=tf.float32),
}
example = tf.io.parse_example(raw_data, feature_map)
batch = [example["trials"],
example["correct_answer"],
example["difficulty"],
example["trial_type"],
example["memory"]]
return batch
dataset_info = 'memo'
root = 'gs://deepmind-memo/length' + str(chain_length) + '/'
num_epochs = 100
shuffle_buffer_size = 150
num_readers = 4
dataset_info = _DATASETS['memo']
filenames = _get_dataset_files(dataset_info, root)
num_map_threads = 4
batch_size = 10
data = tf.data.Dataset.from_tensor_slices(filenames)
data = data.repeat(num_epochs)
data = data.shuffle(shuffle_buffer_size)
data = data.interleave(tf.data.TFRecordDataset,
cycle_length=num_readers, block_length=1)
data = data.shuffle(shuffle_buffer_size)
data = data.map(parser_tf_examples, num_parallel_calls=num_map_threads)
data = data.batch(batch_size)
iterator = data.__iter__()
element = iterator.get_next()
print(element[0].shape) # trials
print(element[1].shape) # correct answer
print(element[2].shape) # difficulty
print(element[3].shape) # trialtype
print(element[4].shape) # memory
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: If you choose chain_length 3 the data will look like this
Step3: Load the data.
Step4: Looking at what we loaded.
|
14,850 | <ASSISTANT_TASK:>
Python Code:
# When not running on Kaggle, comment out this import
from kaggle_datasets import KaggleDatasets
# When not running on Kaggle, set a fixed GCS path here
GCS_PATH = KaggleDatasets().get_gcs_path('jigsaw-multilingual-toxic-comment-classification')
print(GCS_PATH)
import os, time, logging
import tensorflow as tf
import tensorflow_hub as hub
from matplotlib import pyplot as plt
print(tf.version.VERSION)
tf.get_logger().setLevel(logging.ERROR)
try: # detect TPU
tpu = None
tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.experimental.TPUStrategy(tpu)
except ValueError: # detect GPU(s) and enable mixed precision
strategy = tf.distribute.MirroredStrategy() # works on GPU and multi-GPU
policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16')
tf.config.optimizer.set_jit(True) # XLA compilation
tf.keras.mixed_precision.experimental.set_policy(policy)
print('Mixed precision enabled')
print("REPLICAS: ", strategy.num_replicas_in_sync)
# mixed precision
# On TPU, bfloat16/float32 mixed precision is automatically used in TPU computations.
# Enabling it in Keras also stores relevant variables in bfloat16 format (memory optimization).
# This additional optimization was not used for TPUs in this sample.
# On GPU, specifically V100, mixed precision must be enabled for hardware TensorCores to be used.
# XLA compilation must be enabled for this to work. (On TPU, XLA compilation is the default and cannot be turned off)
SEQUENCE_LENGTH = 128
# Copy of the TF Hub model at https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/2
BERT_GCS_PATH = 'gs://bert_multilingual_public/bert_multi_cased_L-12_H-768_A-12_2/'
EPOCHS = 6
if tpu:
BATCH_SIZE = 128 * strategy.num_replicas_in_sync
else:
BATCH_SIZE = 64 * strategy.num_replicas_in_sync
TRAIN_DATA = GCS_PATH + "/jigsaw-toxic-comment-train-processed-seqlen{}.csv".format(SEQUENCE_LENGTH)
TRAIN_DATA_LENGTH = 223549 # rows
VALID_DATA = GCS_PATH + "/validation-processed-seqlen{}.csv".format(SEQUENCE_LENGTH)
STEPS_PER_EPOCH = TRAIN_DATA_LENGTH // BATCH_SIZE
LR_MAX = 0.001 * strategy.num_replicas_in_sync
LR_EXP_DECAY = .9
LR_MIN = 0.0001
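# The schedule below decays the learning rate exponentially from roughly LR_MAX toward LR_MIN as the epoch index grows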
@tf.function
def lr_fn(epoch):
lr = (LR_MAX - LR_MIN) * LR_EXP_DECAY**(epoch) + LR_MIN
return lr
print("Learning rate schedule:")
rng = [i for i in range(EPOCHS)]
y = [lr_fn(x) for x in rng]
plt.plot(rng, [lr_fn(x) for x in rng])
plt.show()
def multilingual_bert_model(max_seq_length=SEQUENCE_LENGTH):
    """Build and return a multilingual BERT model and tokenizer."""
input_word_ids = tf.keras.layers.Input(
shape=(max_seq_length,), dtype=tf.int32, name="input_word_ids")
input_mask = tf.keras.layers.Input(
shape=(max_seq_length,), dtype=tf.int32, name="input_mask")
segment_ids = tf.keras.layers.Input(
shape=(max_seq_length,), dtype=tf.int32, name="all_segment_id")
bert_layer = tf.saved_model.load(BERT_GCS_PATH) # copy of TF Hub model 'https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/2'
bert_layer = hub.KerasLayer(bert_layer, trainable=True)
pooled_output, _ = bert_layer([input_word_ids, input_mask, segment_ids])
output = tf.keras.layers.Dense(32, activation='relu')(pooled_output)
output = tf.keras.layers.Dense(1, activation='sigmoid', name='labels', dtype=tf.float32)(output)
return tf.keras.Model(inputs={'input_word_ids': input_word_ids,
'input_mask': input_mask,
'all_segment_id': segment_ids},
outputs=output)
def parse_string_list_into_ints(strlist):
s = tf.strings.strip(strlist)
s = tf.strings.substr(
strlist, 1, tf.strings.length(s) - 2) # Remove parentheses around list
s = tf.strings.split(s, ',', maxsplit=SEQUENCE_LENGTH)
s = tf.strings.to_number(s, tf.int32)
s = tf.reshape(s, [SEQUENCE_LENGTH]) # Force shape here needed for XLA compilation (TPU)
return s
def format_sentences(data, label='toxic', remove_language=False):
labels = {'labels': data.pop(label)}
if remove_language:
languages = {'language': data.pop('lang')}
# The remaining three items in the dict parsed from the CSV are lists of integers
for k,v in data.items(): # "input_word_ids", "input_mask", "all_segment_id"
data[k] = parse_string_list_into_ints(v)
return data, labels
def make_sentence_dataset_from_csv(filename, label='toxic', language_to_filter=None):
# This assumes the column order label, input_word_ids, input_mask, segment_ids
SELECTED_COLUMNS = [label, "input_word_ids", "input_mask", "all_segment_id"]
label_default = tf.int32 if label == 'id' else tf.float32
COLUMN_DEFAULTS = [label_default, tf.string, tf.string, tf.string]
if language_to_filter:
insert_pos = 0 if label != 'id' else 1
SELECTED_COLUMNS.insert(insert_pos, 'lang')
COLUMN_DEFAULTS.insert(insert_pos, tf.string)
preprocessed_sentences_dataset = tf.data.experimental.make_csv_dataset(
filename, column_defaults=COLUMN_DEFAULTS, select_columns=SELECTED_COLUMNS,
batch_size=1, num_epochs=1, shuffle=False) # We'll do repeating and shuffling ourselves
# make_csv_dataset required a batch size, but we want to batch later
preprocessed_sentences_dataset = preprocessed_sentences_dataset.unbatch()
if language_to_filter:
preprocessed_sentences_dataset = preprocessed_sentences_dataset.filter(
lambda data: tf.math.equal(data['lang'], tf.constant(language_to_filter)))
#preprocessed_sentences.pop('lang')
preprocessed_sentences_dataset = preprocessed_sentences_dataset.map(
lambda data: format_sentences(data, label=label,
remove_language=language_to_filter))
return preprocessed_sentences_dataset
def make_dataset_pipeline(dataset, repeat_and_shuffle=True):
    """Set up the pipeline for the given dataset.
    Caches, repeats, shuffles, and sets the pipeline up to prefetch batches."""
cached_dataset = dataset.cache()
if repeat_and_shuffle:
cached_dataset = cached_dataset.repeat().shuffle(2048)
cached_dataset = cached_dataset.batch(BATCH_SIZE, drop_remainder=True) # no remainder on repeated dataset
else:
cached_dataset = cached_dataset.batch(BATCH_SIZE)
cached_dataset = cached_dataset.prefetch(tf.data.experimental.AUTOTUNE)
return cached_dataset
# Load the preprocessed English dataframe.
preprocessed_en_filename = TRAIN_DATA
# Set up the dataset and pipeline.
english_train_dataset = make_dataset_pipeline(
make_sentence_dataset_from_csv(preprocessed_en_filename))
# Process the new datasets by language.
preprocessed_val_filename = VALID_DATA
nonenglish_val_datasets = {}
for language_name, language_label in [('Spanish', 'es'), ('Italian', 'it'),
('Turkish', 'tr')]:
nonenglish_val_datasets[language_name] = make_sentence_dataset_from_csv(
preprocessed_val_filename, language_to_filter=language_label)
nonenglish_val_datasets[language_name] = make_dataset_pipeline(
nonenglish_val_datasets[language_name], repeat_and_shuffle=False)
nonenglish_val_datasets['Combined'] = make_sentence_dataset_from_csv(preprocessed_val_filename)
nonenglish_val_datasets['Combined'] = make_dataset_pipeline(nonenglish_val_datasets['Combined'], repeat_and_shuffle=False)
with strategy.scope():
multilingual_bert = multilingual_bert_model()
# Compile the model. Optimize using stochastic gradient descent.
multilingual_bert.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001*strategy.num_replicas_in_sync),
metrics=[tf.keras.metrics.AUC()])
multilingual_bert.summary()
%%time
# Train on English Wikipedia comment data.
lr_callback = tf.keras.callbacks.LearningRateScheduler(lr_fn)
history = multilingual_bert.fit(
english_train_dataset, steps_per_epoch=STEPS_PER_EPOCH, epochs=EPOCHS,
#validation_data=nonenglish_val_datasets['Combined'],
callbacks=[lr_callback])
# Performance on non-English comments after training.
for language in nonenglish_val_datasets:
results = multilingual_bert.evaluate(nonenglish_val_datasets[language], verbose=0)
print('{} loss, AUC after training:'.format(language), results)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Overview
Step2: TPU or GPU detection
Step3: Configuration
Step5: Model
Step6: Dataset
Step8: Set up our data pipelines for training and evaluation.
Step9: Instantiate the model
|
14,851 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
%matplotlib inline
# If you don't know the exact function name, type the first few characters and press Tab to get a dropdown of suggestions
titanic = pd.read_csv('train.csv')
titanic.head()
titanic.info()
# simple summary statistics for all numeric columns
titanic.describe()
# count the missing (None/NaN) values
titanic.isnull().sum()
# fill the null values of the whole dataframe; uncomment to try it out
# titanic.fillna(0)
# fill a single selected column; uncomment to try it out
# titanic.Age.fillna(0)
# median age
titanic.Age.median()
# fill with the median age; this returns a new Series
# titanic.Age.fillna(titanic.Age.median())
# fill in place, without returning a new Series
titanic.Age.fillna(titanic.Age.median(), inplace=True)
# check the null count of Age again
titanic.isnull().sum()
# simple value counts; used very often
titanic.Sex.value_counts()
# number of men and women among the survivors
survived = titanic[titanic.Survived==1].Sex.value_counts()
# number of men and women among the non-survivors
dead = titanic[titanic.Survived==0].Sex.value_counts()
df = pd.DataFrame([survived, dead], index=['survived', 'dead'])
df.plot.bar()
# the plot works, but it is not the effect we want
# transpose the dataframe, i.e. swap rows and columns
df = df.T
df.plot.bar() # equivalent to df.plot(kind='bar')
# still not the result we want
df.plot(kind='bar', stacked=True)
# proportion of survivors among men and women
df['p_survived'] = df.survived / (df.survived + df.dead)
df['p_dead'] = df.dead / (df.survived + df.dead)
df[['p_survived', 'p_dead']].plot.bar(stacked=True)
# simple counts
# titanic.Age.value_counts()
survived = titanic[titanic.Survived==1].Age
dead = titanic[titanic.Survived==0].Age
df = pd.DataFrame([survived, dead], index=['survived', 'dead'])
df = df.T
df.plot.hist(stacked=True)
# show more histogram bins
df.plot.hist(stacked=True, bins=30)
# the tall bar in the middle is because we replaced all the missing ages with the median
# density plot, a bit more intuitive
df.plot.kde()
# check the age distribution to decide the x-axis range of the plot
titanic.Age.describe()
# restrict the x range
df.plot.kde(xlim=(0,80))
age = 16
young = titanic[titanic.Age<=age]['Survived'].value_counts()
old = titanic[titanic.Age>age]['Survived'].value_counts()
df = pd.DataFrame([young, old], index=['young', 'old'])
df.columns = ['dead', 'survived']
df.plot.bar(stacked=True)
# proportion of survivors in each age group
df['p_survived'] = df.survived / (df.survived + df.dead)
df['p_dead'] = df.dead / (df.survived + df.dead)
df[['p_survived', 'p_dead']].plot.bar(stacked=True)
# fare behaves much like the age feature
survived = titanic[titanic.Survived==1].Fare
dead = titanic[titanic.Survived==0].Fare
df = pd.DataFrame([survived, dead], index=['survived', 'dead'])
df = df.T
df.plot.kde()
# before setting xlim, check the range of the fares
titanic.Fare.describe()
df.plot(kind='kde', xlim=(0,513))
# for example, look at how age and fare together affect survival
import matplotlib.pyplot as plt
plt.scatter(titanic[titanic.Survived==0].Age, titanic[titanic.Survived==0].Fare)
# not very readable
ax = plt.subplot()
# non-survivors
age = titanic[titanic.Survived==0].Age
fare = titanic[titanic.Survived==0].Fare
plt.scatter(age, fare, s=20, marker='o', alpha=0.3, linewidths=1, edgecolors='gray')
# survivors
age = titanic[titanic.Survived==1].Age
fare = titanic[titanic.Survived==1].Fare
plt.scatter(age, fare, s=20, marker='o', alpha=0.3, linewidths=1, edgecolors='gray', c='red')
ax.set_xlabel('age')
ax.set_ylabel('fare')
titanic.Name.describe()
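# Names are formatted like "Braund, Mr. Owen Harris", so the title (Mr, Mrs, Miss, ...) sits between the comma and the period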
titanic['title'] = titanic.Name.apply(lambda name: name.split(',')[1].split('.')[0].strip())
s = 'Williams, Mr. Howard Hugh "Harry"'
s.split(',')[-1].split('.')[0].strip()
titanic.title.value_counts()
# For example, if someone is titled Mr but their age is unknown, we can fill it with the mean age of all Mr passengers rather than the plain overall median we used earlier
# Analogy: night-light maps use the brightness of lights as a simple proxy for GDP
titanic.head()
titanic['family_size'] = titanic.SibSp + titanic.Parch + 1
titanic.family_size.value_counts()
def func(family_size):
if family_size == 1:
return 'Singleton'
if family_size<=4 and family_size>=2:
return 'SmallFamily'
if family_size > 4:
return 'LargeFamily'
titanic['family_type'] = titanic.family_size.apply(func)
titanic.family_type.value_counts()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import the data
Step2: Quick preview
Step3: | Word | Translation
Step4: Handle missing values
Step5: Try analyzing by sex
Step6: From the plots above we can see that sex has a fairly large effect on survival
Step7: Analyze the fares
Step8: We can see that passengers who paid low fares had a relatively low survival rate
Step9: Derived (implicit) features
Step10: gdp
|
14,852 | <ASSISTANT_TASK:>
Python Code:
import rebound
sim = rebound.Simulation()
sim.add(m=1)
sim.add(m=0.1, e=0.041, a=0.4, inc=0.2, f=0.43, Omega=0.82, omega=2.98)
sim.add(m=1e-3, e=0.24, a=1.0, pomega=2.14)
sim.add(m=1e-3, e=0.24, a=1.5, omega=1.14, l=2.1)
sim.add(a=-2.7, e=1.4, f=-1.5,omega=-0.7) # hyperbolic orbit
%matplotlib inline
fig = rebound.OrbitPlot(sim)
fig = rebound.OrbitPlot(sim, unitlabel="[AU]", color=True, periastron=True)
fig = rebound.OrbitPlot(sim, unitlabel="[AU]", periastron=True, lw=2)
from IPython.display import display, clear_output
import matplotlib.pyplot as plt
sim.move_to_com()
for i in range(3):
sim.integrate(sim.t+0.31)
fig = rebound.OrbitPlot(sim,color=True,unitlabel="[AU]",lim=2.)
display(fig)
plt.close(fig)
clear_output(wait=True)
fig = rebound.OrbitPlot(sim,slices=True,color=True,unitlabel="[AU]",lim=2.,limz=0.36)
sim = rebound.Simulation()
sim.add(m=1.) #Star A
sim.add(m=1., a=1.) #Star B
sim.add(a=2.) #Planet ABb
sim.add(a=0.2, primary=sim.particles[1]) #Bb,
sim.move_to_com()
fig = rebound.OrbitPlot(sim)
import matplotlib.pyplot as plt
import numpy as np
fig, ax = plt.subplots(figsize=(8,8))
ps = sim.particles
# manually set plot boundaries
lim = 2.3
ax.set_xlim([-lim, lim])
ax.set_ylim([-lim, lim])
# plot the stars and planets with separate symbols
linewidth=1.
for star in ps[:2]:
ax.scatter(star.x, star.y, s=35*linewidth, marker='*', facecolor='black', zorder=3)
for planet in ps[2:]:
ax.scatter(planet.x, planet.y, s=10*linewidth, facecolor='black', zorder=3)
# Now individually plot orbit trails with appropriate orbit
from rebound.plotting import fading_line
ABb = ps[2] # circumbinary planet, use default jacobi coordinates
o = np.array(ABb.sample_orbit())
lc = fading_line(o[:,0], o[:,1], linewidth=linewidth)
ax.add_collection(lc)
Bb = ps[3] # planet in orbit around B, assign it as primary
o = np.array(Bb.sample_orbit(primary=ps[1]))
lc = fading_line(o[:,0], o[:,1], linewidth=linewidth)
ax.add_collection(lc)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To plot these initial orbits in the $xy$-plane, we can simply call the OrbitPlot function and give it the simulation as an argument.
Step2: Note that the OrbitPlot function chooses reasonable limits for the axes for you. There are various ways to customize the plot. Have a look at the arguments used in the following examples, which are pretty much self-explanatory (if in doubt, check the documentation!).
Step3: Note that all orbits are draw with respect to the center of mass of all interior particles. This coordinate system is known as Jacobi coordinates. It requires that the particles are sorted by ascending semi-major axis within the REBOUND simulation's particle array.
Step4: To get an idea of the three dimensional distribution of orbits, use the slices=True option. This will plot the orbits three times, from different perspectives. You can adjust the dimensions in the z direction using the limz keyword.
Step5: Advanced Plotting
Step6: Circumbinary Planet ABb is plotted correctly in orbit around the center of mass of A and B, but Bb's Jacobi orbit is also around the center of mass of the interior particles, which corresponds to a hyperbolic orbit. It's important to note that while the plot looks incorrect, IAS15 would correctly integrate their motions.
|
14,853 | <ASSISTANT_TASK:>
Python Code:
import graphlab
sales = graphlab.SFrame('kc_house_data.gl/')
import numpy as np # note this allows us to refer to numpy as np instead
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
# add the column 'constant' to the front of the features list so that we can extract it along with the others:
features = ['constant'] + features # this is how you combine two lists
# select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):
features_sframe = data_sframe[features]
# the following line will convert the features_SFrame into a numpy matrix:
feature_matrix = features_sframe.to_numpy()
# assign the column of data_sframe associated with the output to the SArray output_sarray
output_sarray = data_sframe[output]
# the following will convert the SArray into a numpy array by first converting it to a list
output_array = output_sarray.to_numpy()
return(feature_matrix, output_array)
def predict_output(feature_matrix, weights):
# assume feature_matrix is a numpy matrix containing the features as columns and weights is a corresponding numpy array
# create the predictions vector by using np.dot()
predictions = []
for col in range(feature_matrix.shape[0]):
predictions.append(np.dot(feature_matrix[col,], weights))
return(predictions)
def feature_derivative_ridge(errors, feature, weight, l2_penalty, feature_is_constant):
# If feature_is_constant is True, derivative is twice the dot product of errors and feature
if feature_is_constant:
derivative = 2 * np.dot(errors, feature)
# Otherwise, derivative is twice the dot product plus 2*l2_penalty*weight
else:
derivative = 2 * np.dot(errors, feature) + 2 * l2_penalty * weight
return derivative
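# Note that the constant (intercept) feature is deliberately not regularized, which is why its derivative omits the 2*l2_penalty*weight term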
(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price')
my_weights = np.array([1., 10.])
test_predictions = predict_output(example_features, my_weights)
errors = test_predictions - example_output # prediction errors
# next two lines should print the same values
print feature_derivative_ridge(errors, example_features[:,1], my_weights[1], 1, False)
print np.sum(errors*example_features[:,1])*2+20.
print ''
# next two lines should print the same values
print feature_derivative_ridge(errors, example_features[:,0], my_weights[0], 1, True)
print np.sum(errors)*2.
def ridge_regression_gradient_descent(feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations=100):
weights = np.array(initial_weights) # make sure it's a numpy array
#while not reached maximum number of iterations:
for _iter in range(max_iterations):
# compute the predictions based on feature_matrix and weights using your predict_output() function
predictions = predict_output(feature_matrix, weights)
# compute the errors as predictions - output
errors = predictions - output
for i in xrange(len(weights)): # loop over each weight
# Recall that feature_matrix[:,i] is the feature column associated with weights[i]
# compute the derivative for weight[i].
#(Remember: when i=0, you are computing the derivative of the constant!)
derivative = feature_derivative_ridge(errors, feature_matrix[:,i], weights[i], l2_penalty, bool(i == 0))
# subtract the step size times the derivative from the current weight
weights[i] -= step_size * derivative
return weights
simple_features = ['sqft_living']
my_output = 'price'
train_data,test_data = sales.random_split(.8,seed=0)
(simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output)
(simple_test_feature_matrix, test_output) = get_numpy_data(test_data, simple_features, my_output)
initial_weights = np.array([0., 0.])
step_size = 1e-12
max_iterations=1000
simple_weights_0_penalty = ridge_regression_gradient_descent(simple_feature_matrix, output, initial_weights, step_size, 0, max_iterations)
simple_weights_high_penalty = ridge_regression_gradient_descent(simple_feature_matrix, output, initial_weights, step_size, 1e11, max_iterations)
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(simple_feature_matrix,output,'k.',
simple_feature_matrix,predict_output(simple_feature_matrix, simple_weights_0_penalty),'b-',
simple_feature_matrix,predict_output(simple_feature_matrix, simple_weights_high_penalty),'r-')
predictions_1 = predict_output(simple_test_feature_matrix, initial_weights)
residuals_1 = [(predictions_1[i] - test_output[i]) ** 2 for i in range(len(predictions_1))]
print sum(residuals_1)
predictions_2 = predict_output(simple_test_feature_matrix, simple_weights_0_penalty)
residuals_2 = [(predictions_2[i] - test_output[i]) ** 2 for i in range(len(predictions_2))]
print sum(residuals_2)
predictions_3 = predict_output(simple_test_feature_matrix, simple_weights_high_penalty)
residuals_3 = [(predictions_3[i] - test_output[i]) ** 2 for i in range(len(predictions_3))]
print sum(residuals_3)
simple_weights_0_penalty
simple_weights_high_penalty
model_features = ['sqft_living', 'sqft_living15'] # sqft_living15 is the average squarefeet for the nearest 15 neighbors.
my_output = 'price'
(feature_matrix, train_output) = get_numpy_data(train_data, model_features, my_output)
(test_feature_matrix, test_output) = get_numpy_data(test_data, model_features, my_output)
initial_weights = np.array([0.0,0.0,0.0])
step_size = 1e-12
max_iterations = 1000
multiple_weights_0_penalty = ridge_regression_gradient_descent(feature_matrix, train_output, initial_weights, step_size, 0, max_iterations)
multiple_weights_high_penalty = ridge_regression_gradient_descent(feature_matrix, train_output, initial_weights, step_size, 1e11, max_iterations)
predictions_4 = predict_output(test_feature_matrix, initial_weights)
residuals_4 = [(predictions_4[i] - test_output[i]) ** 2 for i in range(len(predictions_4))]
print sum(residuals_4)
predictions_5 = predict_output(test_feature_matrix, multiple_weights_0_penalty)
residuals_5 = [(predictions_5[i] - test_output[i]) ** 2 for i in range(len(predictions_5))]
print sum(residuals_5)
predictions_6 = predict_output(test_feature_matrix, multiple_weights_high_penalty)
residuals_6 = [(predictions_6[i] - test_output[i]) ** 2 for i in range(len(predictions_6))]
print sum(residuals_6)
first = test_data[0]
a, b, c= multiple_weights_0_penalty
p_0 = a + b * first['sqft_living'] + c * first['sqft_living15']
print p_0
d, e, f = multiple_weights_high_penalty
p_high = d + e * first['sqft_living'] + f * first['sqft_living15']
print p_high
first['price']
multiple_weights_0_penalty
multiple_weights_high_penalty
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in house sales data
Step2: If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features.
Step3: Also, copy and paste the predict_output() function to compute the predictions for an entire matrix of features given the matrix and the weights
Step4: Computing the Derivative
Step5: To test your feature derivative run the following
Step6: Gradient Descent
Step7: Visualizing effect of L2 penalty
Step8: Let us split the dataset into training set and test set. Make sure to use seed=0
Step9: In this part, we will only use 'sqft_living' to predict 'price'. Use the get_numpy_data function to get a Numpy versions of your data with only this feature, for both the train_data and the test_data.
Step10: Let's set the parameters for our optimization
Step11: First, let's consider no regularization. Set the l2_penalty to 0.0 and run your ridge regression algorithm to learn the weights of your model. Call your weights
Step12: Next, let's consider high regularization. Set the l2_penalty to 1e11 and run your ridge regression algorithm to learn the weights of your model. Call your weights
Step13: This code will plot the two learned models. (The blue line is for the model with no regularization and the red line is for the one with high regularization.)
Step14: Compute the RSS on the TEST data for the following three sets of weights
Step15: QUIZ QUESTIONS
Step16: Running a multiple regression with L2 penalty
Step17: We need to re-initialize the weights, since we have one extra parameter. Let us also set the step size and maximum number of iterations.
Step18: First, let's consider no regularization. Set the l2_penalty to 0.0 and run your ridge regression algorithm to learn the weights of your model. Call your weights
Step19: Next, let's consider high regularization. Set the l2_penalty to 1e11 and run your ridge regression algorithm to learn the weights of your model. Call your weights
Step20: Compute the RSS on the TEST data for the following three sets of weights
Step21: Predict the house price for the 1st house in the test set using the no regularization and high regularization models. (Remember that python starts indexing from 0.) How far is the prediction from the actual price? Which weights perform best for the 1st house?
Step22: QUIZ QUESTIONS
|
14,854 | <ASSISTANT_TASK:>
Python Code:
import numpy as np  # np.random is used below; ChristmasPlot itself is assumed to be imported earlier in the notebook
plot = ChristmasPlot('Fake', n_dataset=3, methods=['yass', 'kilosort', 'spyking circus'], logit_y=True, eval_type="Accuracy")
for method in plot.methods:
for i in range(plot.n_dataset):
x = (np.random.rand(30) - 0.5) * 10
y = 1 / (1 + np.exp(-x + np.random.rand()))
plot.add_metric(x, y, dataset_number=i, method_name=method)
plot.generate_snr_metric_plot(save_to=None)
plot.generate_curve_plots(save_to=None)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate SNR vs Metric
Step2: Generate the curve plots
|
14,855 | <ASSISTANT_TASK:>
Python Code:
'\\'.join(['folder1','folder2','folder3','file.png']) # join all elements using the escaped (literal) '\' string
import os # contains many file path related functions
print(os.path.join('folder1','folder2','folder3','file.png')) # takes string arguments and returns OS-appropriate path
print(os.sep) # show the seperator currently being used.
# Start at current directory
defaultpath = os.path.expanduser('~/Dropbox/learn/books/Python/AutomateTheBoringStuffWithPython/')
os.chdir(defaultpath)
print(os.getcwd())
# Change path to /files folder
os.chdir('files') # changes current working directory, if not currently in it (to /files)
print(os.getcwd()) # prints the current working directory (should be /files)
# Reset back to notebook directory
os.chdir(defaultpath)
print(os.path.abspath('files')) # print absolute path of the files subdirectory
print(os.path.isabs(os.path.abspath('files'))) # Is the absolute path to files an absolute path (True)
print(os.path.relpath('../..', 'files')) # print the relative file path of a folder two folders up relative to subfolder (3 folders; ../../..)
print(os.path.dirname(os.path.abspath('files'))) # outputs absolute path above 'files'
print(os.path.basename('files/26645.pdf')) # outputs just '26645.pdf'
# Reset back to notebook directory
os.chdir(defaultpath)
print(os.path.exists(os.path.abspath('files'))) # checks if 'files' exists (True)
print(os.path.isfile('files')) # checks if 'files' is a file (False)
print(os.path.isdir('files')) # checks if 'files' is a folder (False)
# A simple program to loop through a folder and find the size of all files in bytes, and the total size of the folder.
import os
# starting size
totalSize = 0
# for the fileName in the 'files' directory
for fileName in os.listdir('files'):
# generate filePaths
filePath = os.path.join('files',fileName)
# check if filePath is a file
if os.path.isfile(filePath) == True:
# if True, increase totalSize by the size of fileName
totalSize += os.path.getsize(filePath)
# also print what the file was and the size
print('%s is %d bytes.'%(filePath, os.path.getsize(filePath)))
# otherwise keep looping
else:
continue
# print the size of the folder at the end
print('\n\nThe \'%s\' folder contains %s bytes in total.'%('files',str(totalSize)))
# clear the folders if the exist already
if os.path.exists(os.path.abspath('files/newfolder/anotherone')) == True:
os.removedirs(os.path.abspath('files/newfolder/anotherone')) # clear folders if they exist
# create new folders at an absolute path
os.makedirs(os.path.abspath('files/newfolder/anotherone')) # create new folders
# check if they exist
if os.path.exists(os.path.abspath('files/newfolder/anotherone')) == True:
print('\'files/newfolder/anotherone\' exists.')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: But this string only works on Windows; to create an OS-independent path, use the os module
Step2: If no explicit path is specified, Python will look for files in the current working directory.
Step3: There are two kinds of file paths, relative and absolute.
Step4: os.path.dirname() pulls out just the directory path component above a filepath.
Step5: os.path.exists() can check if a path exists.
Step7: os.path.getsize() returns the size of a file (bytes)
Step8: os.makedirs() creates new directories at the location.
|
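As a hedged aside that is not part of the original exercise: the os.listdir() loop above only totals files directly inside one folder. A minimal sketch using os.walk from the standard library can total an entire directory tree; 'files' is reused here purely as an example path.
import os
def tree_size(root):
    # Walk every subdirectory under root and add up the size of each regular file.
    total = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            full_path = os.path.join(dirpath, name)
            if os.path.isfile(full_path):
                total += os.path.getsize(full_path)
    return total
print('Total bytes under files:', tree_size('files'))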
14,856 | <ASSISTANT_TASK:>
Python Code:
# Example Dataset Review Entry
__ = {
'beer/ABV': 7.2,
'beer/beerId': '59261',
'beer/brewerId': '67',
'beer/name': 'Sierra Nevada Torpedo Extra IPA',
'beer/style': 'India Pale Ale (IPA)',
'review/appearance': 1.0,
'review/aroma': 0.8,
'review/overall': 0.9,
'review/palate': 0.8,
'review/taste': 1.0,
'review/text': 'Aroma is lemon and orange citrus, reddish orange hue, flavor is very well balanced between the malt and hop. this beer is very drinkable. It is not "over the top" like some of the double ipas. The alcohol content is well contained. The finish is hoppy as one would expect for this style. An excellent beer from the folks at Sierra Nevada!',
'review/timeStruct': {
'hour': 0,
'isdst': 0,
'mday': 17,
'min': 0,
'mon': 2,
'sec': 0,
'wday': 1,
'yday': 48,
'year': 2009
},
'review/timeUnix': 1234828800,
'user/profileName': 'Blazhock'
}
# import statements
import unicodedata
import random
import calendar
import math
import operator
# imports needed by the cells below (datetime for the split date, numpy/matplotlib for the arrays and plots)
from datetime import datetime
import numpy as np
import matplotlib.pyplot as plt
# A variety of helper methods to group data structures according to a given key
def groupby_key(data, key_str):
key_map = {}
for datum in data:
key = datum.get(key_str)
key_map[key] = key_map.setdefault(key, [])
key_map[key].append(datum)
return key_map
def groupby_key_count(data, key_str, func=None):
key_map = {}
for datum in data:
key = datum.get(key_str)
try:
key = func(key)
except:
key = key
key_map[key] = key_map.setdefault(key, 0) + 1
return key_map
def group_time_split(user_array, key, func=None):
return [[groupby_key_count(time_per, key, func) for time_per in user] for user in user_array]
# Reservoir sampling given an iterable input and k for the number of items to be sampled
def reservoir_sample(iterator, k):
iterator = iter(iterator)
# fill the reservoir to start
result = [next(iterator) for _ in range(k)]
n = k
for item in iterator:
n += 1
s = random.randint(0, n - 1)  # randint is inclusive; sample an index in [0, n-1] so item n is kept with probability k/n
if s < k:
result[s] = item
return result
# Calculates the average using dictionary keys as "values" and dictionary values as "counts"
def avg_from_map(keymap):
count, total, a = 0, 0, None
for key in keymap:
if (key):
count += keymap[key]
total += key * keymap[key]
if (count):
a = total / count
return a
# Average from user tuple array
def avg_from_user_array_tuple(array):
out = []
for user in array:
user_avg = []
for time in user:
count, total, a = 0, 0, None
for item in time:
count += item[1]
total += item[0] * item[1]
if count > 0:
a = total/count
user_avg.append(a)
out.append(user_avg)
return out
# Parse and read in dataset from reduced_data.txt (as produced by our parsing code)
parsed_red = parse_json('data/reduced_data.txt', normalize=True)
dset = [i for i in parsed_red]
# parse dataset, grouping reviews by username
users = groupby_key(dset,'user/profileName')
user_vals = users.values()
print 'number of users', len(user_vals)
# split each users' group of reviews into two buckets: before and after site changes
user_vals_split = []
split_timestamp = calendar.timegm(datetime(2009,5,1).timetuple())
for i , posts in enumerate(user_vals):
pre = [post for post in posts if post.get('review/timeUnix') < split_timestamp]
post = [post for post in posts if post.get('review/timeUnix') >= split_timestamp]
# Only look at users who already contributed in the prior period
if len(pre) > 0:
user_vals_split.append([pre, post])
# sort reviews according to the number of reviews users contributed before
user_vals_split = sorted(user_vals_split, key=lambda user: len(user[0]))
print 'number of users post split', len(user_vals_split)
# sample "top" 5% and "low" 95% of users
users_top = user_vals_split[int(math.floor(len(user_vals_split)*0.95)):]
users_low = user_vals_split[:int(math.floor(len(user_vals_split)*0.95))]
# sample "random" user group to be equal to the "top" group
sample_size = len(users_top)
users_sampled = reservoir_sample(user_vals_split, sample_size)
# Plot utility to plot the users' plus-minus.
# Prints out the mean, median, and standard deviation for the plus-minus.
def plot_diff(user_list, func=lambda x: x, plot=True):
user_pre = [func(user[0]) for user in user_list]
user_post = [func(user[1]) for user in user_list]
np_pre = np.array(user_pre)
np_post = np.array(user_post)
np_pm = np_post - np_pre
print "Mean p/m: ", np.mean(np_pm)
print "Median p/m: ", np.median(np_pm)
print "Std Dev p/m: ", np.std(np_pm)
ind = np.arange(len(np_pm))
if (plot):
fig, ax = plt.subplots()
scatt = ax.scatter(ind, np_pm, c="grey",s=10,edgecolor='none')
mean = ax.axhline(y=np.mean(np_pm),xmin=0,xmax=ind[len(ind) - 1],c="blue",linewidth=0.9, linestyle = '-', zorder=1, label='mean')
median = ax.axhline(y=np.median(np_pm),xmin=0,xmax=ind[len(ind) - 1],c="red",linewidth=0.9,linestyle = '--', zorder=2, label='median')
plt.legend(loc=3, ncol=2)
# review counts
# all users stats
print 'all user review counts'
plot_diff(user_vals_split, lambda x: len(x))
plt.xlabel('User ID (sorted by pre-split contributions)')
plt.ylabel('review after - reviews before')
plt.title('Review Count Plus-Minus')
print
# top users stats
print 'top user review counts', len(users_top)
plot_diff(users_top, lambda x: len(x), False)
print
# low users stats
print 'low user review counts', len(users_low)
plot_diff(users_low, lambda x: len(x), False)
print
# sampled users stats
print 'sampled user review counts', len(users_sampled)
plot_diff(users_sampled, lambda x: len(x), False)
print
# remove users who did not post reviews after the split (for rest of metrics)
users_split_active = [user for user in user_vals_split if len(user[1]) > 0]
users_top_active = users_split_active[int(math.floor(len(users_split_active)*0.95)):]
users_low_active = users_split_active[:int(math.floor(len(users_split_active)*0.95))]
users_sampled_active = reservoir_sample(users_split_active, len(users_top_active))
# average number of styles reviewed before/after change
print 'all users style counts'
styles_all = group_time_split(users_split_active, 'beer/style')
# want number of styles
styles_all = map(lambda x: [len(x[0].keys()), len(x[1].keys())], styles_all)
plot_diff(styles_all)
plt.xlabel('User ID (sorted by pre-split contributions)')
plt.ylabel('avg beer styles after - avg beer styles before')
plt.title('Average Beer Style Count Plus-Minus')
print
print 'top users style counts'
styles_top = group_time_split(users_top_active, 'beer/style')
# want number of styles
styles_top = map(lambda x: [len(x[0].keys()), len(x[1].keys())], styles_top)
plot_diff(styles_top, lambda x: x, False)
print
print 'low users style counts'
styles_low = group_time_split(users_low_active, 'beer/style')
# want number of styles
styles_low = map(lambda x: [len(x[0].keys()), len(x[1].keys())], styles_low)
plot_diff(styles_low, lambda x: x, False)
print
print 'sample users style counts'
styles_samp = group_time_split(users_sampled_active, 'beer/style')
# want number of styles
styles_samp = map(lambda x: [len(x[0].keys()), len(x[1].keys())], styles_samp)
plot_diff(styles_samp, lambda x: x, False)
print
# average overall ratings before/after change
print 'all users rating avg'
rates_all = group_time_split(users_split_active, 'review/overall', lambda x: float(x))
rates_all = map(lambda x: [[(rating, x[0][rating]) for rating in x[0].keys()], [(rating, x[1][rating]) for rating in x[1].keys()]], rates_all)
avg_rates = avg_from_user_array_tuple(rates_all)
plot_diff(avg_rates)
plt.xlabel('User ID (sorted by pre-split contributions)')
plt.ylabel('avg overall rating after - avg overall rating before')
plt.title('Average Overall Rating Plus-Minus (Max Rating = 1)')
print
print 'top users rating avg'
rates_top = group_time_split(users_top_active, 'review/overall', lambda x: float(x))
rates_top = map(lambda x: [[(rating, x[0][rating]) for rating in x[0].keys()], [(rating, x[1][rating]) for rating in x[1].keys()]], rates_top)
avg_rates = avg_from_user_array_tuple(rates_top)
plot_diff(avg_rates, lambda x: x, False)
print
print 'low users rating avg'
rates_low = group_time_split(users_low_active, 'review/overall', lambda x: float(x))
rates_low = map(lambda x: [[(rating, x[0][rating]) for rating in x[0].keys()], [(rating, x[1][rating]) for rating in x[1].keys()]], rates_low)
avg_rates = avg_from_user_array_tuple(rates_low)
plot_diff(avg_rates, lambda x: x, False)
print
print 'sampled users rating avg'
rates_samp = group_time_split(users_sampled_active, 'review/overall', lambda x: float(x))
rates_samp = map(lambda x: [[(rating, x[0][rating]) for rating in x[0].keys()], [(rating, x[1][rating]) for rating in x[1].keys()]], rates_samp)
avg_rates = avg_from_user_array_tuple(rates_samp)
styles_plot = plot_diff(avg_rates, lambda x: x, False)
print
# average review lengths before/after change
print 'all users review length avg'
reviews_all = group_time_split(users_split_active, 'review/text', lambda x: float(x))
reviews_all = map(lambda x: [[(len(review.split()), x[0][review]) for review in x[0].keys()], [(len(review.split()), x[1][review]) for review in x[1].keys()]], reviews_all)
avg_review_len = avg_from_user_array_tuple(reviews_all)
styles_plot = plot_diff(avg_review_len)
plt.xlabel('User ID (sorted by pre-split contributions)')
plt.ylabel('avg review length after - avg review length before')
plt.title('Average Review Length Plus-Minus')
print
print 'top users review length avg'
reviews_top = group_time_split(users_top_active, 'review/text', lambda x: float(x))
reviews_top = map(lambda x: [[(len(review.split()), x[0][review]) for review in x[0].keys()], [(len(review.split()), x[1][review]) for review in x[1].keys()]], reviews_top)
avg_review_len = avg_from_user_array_tuple(reviews_top)
styles_plot = plot_diff(avg_review_len, lambda x: x, False)
print
print 'low users review length avg'
reviews_low = group_time_split(users_low_active, 'review/text', lambda x: float(x))
reviews_low = map(lambda x: [[(len(review.split()), x[0][review]) for review in x[0].keys()], [(len(review.split()), x[1][review]) for review in x[1].keys()]], reviews_low)
avg_review_len = avg_from_user_array_tuple(reviews_low)
styles_plot = plot_diff(avg_review_len, lambda x: x, False)
print
print 'sampled users review length avg'
reviews_samp = group_time_split(users_sampled_active, 'review/text', lambda x: float(x))
reviews_samp = map(lambda x: [[(len(review.split()), x[0][review]) for review in x[0].keys()], [(len(review.split()), x[1][review]) for review in x[1].keys()]], reviews_samp)
avg_review_len = avg_from_user_array_tuple(reviews_samp)
styles_plot = plot_diff(avg_review_len, lambda x: x, False)
print
# build feature vectors
x, y_after, y_diff = [], [], []
for user in user_vals_split:
pre_char = user[0]
style_map = groupby_key_count(pre_char, 'beer/style')
rating_map = groupby_key_count(pre_char, 'review/overall')
review_map = groupby_key_count(pre_char, 'review/text', lambda x : len(x.split()))
total_count = len(pre_char)
style_count = len(style_map.keys())
avg_rating = avg_from_map(rating_map)
rating_count = len(rating_map.keys())
avg_review = avg_from_map(review_map)
review_count = len(review_map.keys())
# throw away points that don't have rating / review
if (avg_rating is not None and avg_review is not None):
x.append([total_count, style_count, avg_rating, rating_count, avg_review, review_count])
y_after.append(len(user[1]))
y_diff.append(len(user[1])-len(user[0]))
data_pool = zip(x,y_diff)
def balance_pool(pool):
# Given a pool of year-key formatted unbalanced outcome data, return a balanced set where positive outcomes have an equal number of occurrences to negative outcomes.
# The larger set is randomly sampled to reduce its size by using reservoir_sample.
newpool = []
neg = []
pos = []
for user in pool:
if user[-1] < 0:
neg.append(user)
else:
pos.append(user)
minlen = min(len(pos), len(neg))
for elem in reservoir_sample(neg, minlen):
newpool.append(elem)
for elem in reservoir_sample(pos, minlen):
newpool.append(elem)
return newpool
def create_train_test(pool, test, train):
# Split the data pool created in create_data_pool randomly into an 80/20 split between training data and testing data.
# Shuffles all the entries and randomly splits 80/20 between training and test.
# Should only be run once to randomly split train/test data, as it will return different results between runs.
random.shuffle(pool)
ind = int(len(pool) * 0.8)
train += pool[:ind]
test += pool[ind:]
# calculate pearson correlation coefficient for each variable
# use to predict plus-minus and actual number of reviews after
import scipy.stats
np_x = np.array(x)
np_y_after = np.array(y_after)
np_y_diff = np.array(y_diff)
# Index to Pre-Split Feature Mapping
# 0 - total number of reviews
# 1 - number of styles reviewed
# 2 - average overall rating
# 3 - number of distinct ratings
# 4 - average review length
# 5 - number of distinct review lengths
print "Pearson Correlation Coefficients Against Post-Split Number of Reviews"
for i in xrange(len(np_x[0])):
print "pearson for char", i, ":", scipy.stats.pearsonr(np_x[:,i], np_y_after)
print
print "Pearson Correlation Coefficients Against Number of Reviews Plus-Minus"
for i in xrange(len(np_x[0])):
print "Difference -- pearson for char", i, ":", scipy.stats.pearsonr(np_x[:,i], np_y_diff)
slope, intercept, r_value, p_value, std_err= scipy.stats.linregress(np_x[:,0], np_y_after)
print "r-squared:", r_value**2
fit = np.polyfit(np_x[:,0],np_y_after,1)
fit_fn = np.poly1d(fit)
_ = plt.plot(np_x[:,0], np_y_after, 'b.', x, fit_fn(x), '--k')
plt.ylim(0)
plt.xlabel('# of Pre-Split Reviews')
plt.ylabel('# of Post-Split Reviews')
plt.title('Linear Regression of Pre/Post Split Review Count')
_ = plt.plot(np_x[:,1], np_y_after, 'b.')
plt.ylim(0)
plt.xlabel('# of Pre-Split Styles Reviewed')
plt.ylabel('# of Post-Split Reviews')
plt.title('Correlation btwn Pre Styles Types and Post Review Count')
_= plt.plot(np_x[:,2], np_y_after, 'b.')
plt.ylim(0)
plt.xlabel('Average Pre-Split Overall Rating of Reviews')
plt.ylabel('# of Post-Split Reviews')
plt.title('Correlation btwn Pre Average Rating and Post Review Count')
_= plt.plot(np_x[:,4], np_y_after, 'b.')
plt.ylim(0)
plt.xlabel('Average Pre-Split Review Length (Word Count)')
plt.ylabel('# of Post-Split Reviews')
plt.title('Correlation btwn Average Review Length and Post Review Count')
bal_data_pool = balance_pool(data_pool)
test, train = [], []
create_train_test(bal_data_pool, test, train)
train = [[user[0] for user in train], [1 if user[1] > 0 else -1 for user in train]]
test = [[user[0] for user in test], [1 if user[1] > 0 else -1 for user in test]]
from sklearn.preprocessing import StandardScaler
from sklearn import svm, neighbors, qda, metrics, cross_validation
from sklearn.pipeline import Pipeline
clf = Pipeline([('Scaler', StandardScaler()),
# ('Log-Reg', linear_model.LogisticRegression(penalty='l2', dual=True))])
# ('Log-Reg', linear_model.LogisticRegression(penalty='l2', dual=False))])
# ('SVC-linear', svm.SVC(kernel='linear'))])
('SVC-rbf', svm.SVC(kernel='rbf'))])
cv = cross_validation.KFold(len(train[0]), n_folds=4, shuffle=True)
scores = cross_validation.cross_val_score(clf, train[0], train[1], cv=cv)
print "Cross Validation Scores:", scores
print "Average Cross Validation Score:", np.average(scores)
print
# peforms test on selected model
clf = clf.fit(train[0], train[1])
predicted = clf.predict(test[0])
print "Model Accuracy:", metrics.accuracy_score(test[1], predicted)
print "Confusion Matrix"
print metrics.confusion_matrix(test[1], predicted)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step5: User Level Results
Step7: High-level Feature Trends
Step8: Review Counts
Step9: Average Number of Beer Styles Reviewed
Step10: Average Overall Rating
Step11: Average Review Length
Step14: Explore Potential Correlation
Step16: Pearson Correlation Coefficient
Step17: In order to visualize this relationship, we plotted these two features and fit a linear line against them. The result of this operation is shown below.
Step18: While performing the linear correlation analysis, we noticed several interesting visually-identifiable, non-linear trends when plotting the following variables against the # of post-split review count
Step19: Explore Binary Classifier
|
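As an illustrative aside (not from the original notebook): the Pearson coefficient used above via scipy.stats.pearsonr can also be computed directly with NumPy, which makes explicit what is being measured — covariance normalized by the two standard deviations.
import numpy as np
def pearson_r(x, y):
    # Pearson correlation: covariance of x and y divided by the product of their standard deviations.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return (xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum())
print(pearson_r([1, 2, 3, 4], [2, 4, 5, 9]))  # close to 1 for a roughly linear, increasing relationship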
14,857 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 12, 10
import random
x = np.array([i*np.pi/180 for i in range(60,300,4)])
np.random.seed(10) #Setting seed for reproducability
y = np.sin(x) + np.random.normal(0,0.15,len(x))
data = pd.DataFrame(np.column_stack([x,y]),columns=['x','y'])
plt.plot(data['x'],data['y'],'.')
print data.shape
#Import Linear Regression model from scikit-learn.
from sklearn.linear_model import LinearRegression
def linear_regression(data, power, models_to_plot):
#initialize predictors:
predictors=['x']
if power>=2:
predictors.extend(['x_%d'%i for i in range(2,power+1)])
#Fit the model
linreg = LinearRegression(normalize=True)
linreg.fit(data[predictors],data['y'])
y_pred = linreg.predict(data[predictors])
#Check if a plot is to be made for the entered power
if power in models_to_plot:
plt.subplot(models_to_plot[power])
plt.tight_layout()
plt.plot(data['x'],y_pred)
plt.plot(data['x'],data['y'],'.')
plt.title('Plot for power: %d'%power)
#Return the result in pre-defined format
rss = sum((y_pred-data['y'])**2)
ret = [rss]
ret.extend([linreg.intercept_])
ret.extend(linreg.coef_)
return ret
#Create powers upto 15:
for i in range(2,16):
colname = 'x_%d'%i
data[colname] = data['x']**i
print data.head()
#Initialize a dataframe to store the results:
col = ['rss','intercept'] + ['coef_x_%d'%i for i in range(1,16)]
ind = ['model_pow_%d'%i for i in range(1,16)]
coef_matrix_simple = pd.DataFrame(index=ind, columns=col)
#Define the powers for which a plot is required:
models_to_plot = {1:231,3:232,6:233,9:234,12:235,15:236}
#Iterate through all powers and assimilate results
for i in range(1,16):
coef_matrix_simple.iloc[i-1,0:i+2] = linear_regression(data, power=i, models_to_plot=models_to_plot)
#Set the display format to be scientific for ease of analysis
pd.options.display.float_format = '{:,.2g}'.format
coef_matrix_simple
from sklearn.linear_model import Ridge
def ridge_regression(data, predictors, alpha, models_to_plot={}):
#Fit the model
ridgereg = Ridge(alpha=alpha,normalize=True)
ridgereg.fit(data[predictors],data['y'])
y_pred = ridgereg.predict(data[predictors])
#Check if a plot is to be made for the entered alpha
if alpha in models_to_plot:
plt.subplot(models_to_plot[alpha])
plt.tight_layout()
plt.plot(data['x'],y_pred)
plt.plot(data['x'],data['y'],'.')
plt.title('Plot for alpha: %.3g'%alpha)
#Return the result in pre-defined format
rss = sum((y_pred-data['y'])**2)
ret = [rss]
ret.extend([ridgereg.intercept_])
ret.extend(ridgereg.coef_)
return ret
# predictors=['x']
# predictors.extend(['x_%d'%i for i in range(2,16)])
# alp = 1e5
# print ridge_regression(data, predictors, alpha=alp, models_to_plot={alp:111})
predictors=['x']
predictors.extend(['x_%d'%i for i in range(2,16)])
alpha_ridge = [1e-15, 1e-10, 1e-8, 1e-4, 1e-3,1e-2, 1, 5, 10, 20]
col = ['rss','intercept'] + ['coef_x_%d'%i for i in range(1,16)]
ind = ['alpha_%.2g'%alpha_ridge[i] for i in range(0,10)]
coef_matrix_ridge = pd.DataFrame(index=ind, columns=col)
models_to_plot = {1e-15:231, 1e-10:232, 1e-4:233, 1e-3:234, 1e-2:235, 5:236}
for i in range(10):
coef_matrix_ridge.iloc[i,] = ridge_regression(data, predictors, alpha_ridge[i], models_to_plot)
#Set the display format to be scientific for ease of analysis
pd.options.display.float_format = '{:,.2g}'.format
coef_matrix_ridge
coef_matrix_ridge.apply(lambda x: sum(x.values==0),axis=1)
from sklearn.linear_model import Lasso
def lasso_regression(data, predictors, alpha, models_to_plot={}):
#Fit the model
lassoreg = Lasso(alpha=alpha,normalize=True, max_iter=1e6)
lassoreg.fit(data[predictors],data['y'])
y_pred = lassoreg.predict(data[predictors])
#Check if a plot is to be made for the entered alpha
if alpha in models_to_plot:
plt.subplot(models_to_plot[alpha])
plt.tight_layout()
plt.plot(data['x'],y_pred)
plt.plot(data['x'],data['y'],'.')
plt.title('Plot for alpha: %.3g'%alpha)
#Return the result in pre-defined format
rss = sum((y_pred-data['y'])**2)
ret = [rss]
ret.extend([lassoreg.intercept_])
ret.extend(lassoreg.coef_)
return ret
predictors=['x']
predictors.extend(['x_%d'%i for i in range(2,16)])
alpha_lasso = [1e-15, 1e-10, 1e-8, 1e-5,1e-4, 1e-3,1e-2, 1, 5, 10]
col = ['rss','intercept'] + ['coef_x_%d'%i for i in range(1,16)]
ind = ['alpha_%.2g'%alpha_lasso[i] for i in range(0,10)]
coef_matrix_lasso = pd.DataFrame(index=ind, columns=col)
models_to_plot = {1e-10:231, 1e-5:232,1e-4:233, 1e-3:234, 1e-2:235, 1:236}
for i in range(10):
coef_matrix_lasso.iloc[i,] = lasso_regression(data, predictors, alpha_lasso[i], models_to_plot)
#Set the display format to be scientific for ease of analysis
pd.options.display.float_format = '{:,.2g}'.format
coef_matrix_lasso
coef_matrix_lasso.apply(lambda x: sum(x.values==0),axis=1)
data.corr()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating the data
Step2: Fit simple linear regression
Step3: Determining overfitting
Step4: Fit a Linear regression model on the 15 powers
Step5: Though RSS is going down, the coefficients are increasing in magnitude.
Step6: Lasso Modeling
Step7: Check correlation
|
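For reference, and as an editorial addition rather than part of the original notebook: the objectives minimized by Ridge and Lasso differ only in the penalty term (up to scikit-learn's internal scaling of the terms), which is why Lasso can drive coefficients exactly to zero while Ridge only shrinks them:
$$\text{RSS}(w) = \sum_{i=1}^{n}\Big(y_i - w_0 - \sum_{j=1}^{p} w_j x_{ij}\Big)^2$$
$$\hat{w}_{\text{ridge}} = \arg\min_w \; \text{RSS}(w) + \alpha \sum_{j=1}^{p} w_j^2$$
$$\hat{w}_{\text{lasso}} = \arg\min_w \; \text{RSS}(w) + \alpha \sum_{j=1}^{p} \lvert w_j \rvert$$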
14,858 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.keras import layers
(train_ds, val_ds, test_ds), metadata = tfds.load(
'tf_flowers',
split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
num_classes = metadata.features['label'].num_classes
print(num_classes)
get_label_name = metadata.features['label'].int2str
image, label = next(iter(train_ds))
_ = plt.imshow(image)
_ = plt.title(get_label_name(label))
IMG_SIZE = 180
resize_and_rescale = tf.keras.Sequential([
layers.Resizing(IMG_SIZE, IMG_SIZE),
layers.Rescaling(1./255)
])
result = resize_and_rescale(image)
_ = plt.imshow(result)
print("Min and max pixel values:", result.numpy().min(), result.numpy().max())
data_augmentation = tf.keras.Sequential([
layers.RandomFlip("horizontal_and_vertical"),
layers.RandomRotation(0.2),
])
# Add the image to a batch.
image = tf.cast(tf.expand_dims(image, 0), tf.float32)
plt.figure(figsize=(10, 10))
for i in range(9):
augmented_image = data_augmentation(image)
ax = plt.subplot(3, 3, i + 1)
plt.imshow(augmented_image[0])
plt.axis("off")
model = tf.keras.Sequential([
# Add the preprocessing layers you created earlier.
resize_and_rescale,
data_augmentation,
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
# Rest of your model.
])
aug_ds = train_ds.map(
lambda x, y: (resize_and_rescale(x, training=True), y))
batch_size = 32
AUTOTUNE = tf.data.AUTOTUNE
def prepare(ds, shuffle=False, augment=False):
# Resize and rescale all datasets.
ds = ds.map(lambda x, y: (resize_and_rescale(x), y),
num_parallel_calls=AUTOTUNE)
if shuffle:
ds = ds.shuffle(1000)
# Batch all datasets.
ds = ds.batch(batch_size)
# Use data augmentation only on the training set.
if augment:
ds = ds.map(lambda x, y: (data_augmentation(x, training=True), y),
num_parallel_calls=AUTOTUNE)
# Use buffered prefetching on all datasets.
return ds.prefetch(buffer_size=AUTOTUNE)
train_ds = prepare(train_ds, shuffle=True, augment=True)
val_ds = prepare(val_ds)
test_ds = prepare(test_ds)
model = tf.keras.Sequential([
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
epochs=5
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
loss, acc = model.evaluate(test_ds)
print("Accuracy", acc)
def random_invert_img(x, p=0.5):
if tf.random.uniform([]) < p:
x = (255-x)
else:
x
return x
def random_invert(factor=0.5):
return layers.Lambda(lambda x: random_invert_img(x, factor))
random_invert = random_invert()
plt.figure(figsize=(10, 10))
for i in range(9):
augmented_image = random_invert(image)
ax = plt.subplot(3, 3, i + 1)
plt.imshow(augmented_image[0].numpy().astype("uint8"))
plt.axis("off")
class RandomInvert(layers.Layer):
def __init__(self, factor=0.5, **kwargs):
super().__init__(**kwargs)
self.factor = factor
def call(self, x):
return random_invert_img(x)
_ = plt.imshow(RandomInvert()(image)[0])
(train_ds, val_ds, test_ds), metadata = tfds.load(
'tf_flowers',
split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
image, label = next(iter(train_ds))
_ = plt.imshow(image)
_ = plt.title(get_label_name(label))
def visualize(original, augmented):
fig = plt.figure()
plt.subplot(1,2,1)
plt.title('Original image')
plt.imshow(original)
plt.subplot(1,2,2)
plt.title('Augmented image')
plt.imshow(augmented)
flipped = tf.image.flip_left_right(image)
visualize(image, flipped)
grayscaled = tf.image.rgb_to_grayscale(image)
visualize(image, tf.squeeze(grayscaled))
_ = plt.colorbar()
saturated = tf.image.adjust_saturation(image, 3)
visualize(image, saturated)
bright = tf.image.adjust_brightness(image, 0.4)
visualize(image, bright)
cropped = tf.image.central_crop(image, central_fraction=0.5)
visualize(image, cropped)
rotated = tf.image.rot90(image)
visualize(image, rotated)
for i in range(3):
seed = (i, 0) # tuple of size (2,)
stateless_random_brightness = tf.image.stateless_random_brightness(
image, max_delta=0.95, seed=seed)
visualize(image, stateless_random_brightness)
for i in range(3):
seed = (i, 0) # tuple of size (2,)
stateless_random_contrast = tf.image.stateless_random_contrast(
image, lower=0.1, upper=0.9, seed=seed)
visualize(image, stateless_random_contrast)
for i in range(3):
seed = (i, 0) # tuple of size (2,)
stateless_random_crop = tf.image.stateless_random_crop(
image, size=[210, 300, 3], seed=seed)
visualize(image, stateless_random_crop)
(train_datasets, val_ds, test_ds), metadata = tfds.load(
'tf_flowers',
split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
def resize_and_rescale(image, label):
image = tf.cast(image, tf.float32)
image = tf.image.resize(image, [IMG_SIZE, IMG_SIZE])
image = (image / 255.0)
return image, label
def augment(image_label, seed):
image, label = image_label
image, label = resize_and_rescale(image, label)
image = tf.image.resize_with_crop_or_pad(image, IMG_SIZE + 6, IMG_SIZE + 6)
# Make a new seed.
new_seed = tf.random.experimental.stateless_split(seed, num=1)[0, :]
# Random crop back to the original size.
image = tf.image.stateless_random_crop(
image, size=[IMG_SIZE, IMG_SIZE, 3], seed=seed)
# Random brightness.
image = tf.image.stateless_random_brightness(
image, max_delta=0.5, seed=new_seed)
image = tf.clip_by_value(image, 0, 1)
return image, label
# Create a `Counter` object and `Dataset.zip` it together with the training set.
counter = tf.data.experimental.Counter()
train_ds = tf.data.Dataset.zip((train_datasets, (counter, counter)))
train_ds = (
train_ds
.shuffle(1000)
.map(augment, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
val_ds = (
val_ds
.map(resize_and_rescale, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
test_ds = (
test_ds
.map(resize_and_rescale, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
# Create a generator.
rng = tf.random.Generator.from_seed(123, alg='philox')
# Create a wrapper function for updating seeds.
def f(x, y):
seed = rng.make_seeds(2)[0]
image, label = augment((x, y), seed)
return image, label
train_ds = (
train_datasets
.shuffle(1000)
.map(f, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
val_ds = (
val_ds
.map(resize_and_rescale, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
test_ds = (
test_ds
.map(resize_and_rescale, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data augmentation
Step2: Download the dataset
Step3: The flowers dataset has 5 classes.
Step4: Retrieve an image from the dataset and use it to demonstrate data augmentation.
Step5: Use the Keras preprocessing layers
Step6: Note
Step7: Verify that the pixels are in the [0, 1] range.
Step8: Data augmentation
Step9: A variety of preprocessing layers can be used for data augmentation, including tf.keras.layers.RandomContrast, tf.keras.layers.RandomCrop, and tf.keras.layers.RandomZoom.
Step10: In this case, there are two important points.
Step11: In this approach, Dataset.map is used to create a dataset that yields batches of augmented images. In this case,
Step12: Train the model
Step13: Choose the tf.keras.optimizers.Adam optimizer and the tf.keras.losses.SparseCategoricalCrossentropy loss function. To display training and validation accuracy for each training epoch, pass the metrics argument to Model.compile.
Step14: Train for a few epochs.
Step15: Custom data augmentation
Step16: Next, implement a custom layer by subclassing.
Step17: Both layers can be used as described in options 1 and 2 above.
Step18: Retrieve an image to work with.
Step19: Use the following function to visualize and compare the original and augmented images side by side.
Step20: Data augmentation
Step21: Convert the image to grayscale
Step22: Adjust the image saturation
Step23: Change the image brightness
Step24: Center-crop the image
Step25: Rotate the image
Step26: Random transformations
Step27: Randomly change the image contrast
Step28: Randomly crop the image
Step29: Apply data augmentation to a dataset
Step30: Next, define a utility function for resizing and rescaling the images. This function is used to unify the size and scale of the images in the dataset.
Step31: Also define an augment function that can apply random transformations to the images. This function is used on the dataset in the next step.
Step32: Option 1
Step33: Map the augment function over the training dataset.
Step34: Option 2
Step35: Map the wrapper function f over the training dataset, and map the resize_and_rescale function over the validation and test sets.
|
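A small aside, not from the original tutorial: the stateless image ops used above are deterministic for a given seed pair, which is exactly why the augment function derives a new_seed per element — otherwise every example would receive the identical perturbation. A minimal sketch:
import tensorflow as tf
image = tf.zeros([4, 4, 3])
seed = (1, 2)  # stateless ops take a seed of shape [2]
a = tf.image.stateless_random_brightness(image, max_delta=0.5, seed=seed)
b = tf.image.stateless_random_brightness(image, max_delta=0.5, seed=seed)
print(bool(tf.reduce_all(a == b)))  # True: the same seed pair produces the same augmentation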
14,859 | <ASSISTANT_TASK:>
Python Code:
# Imports
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
%matplotlib inline
## use `%matplotlib notebook` for interactive figures
# plt.style.use('ggplot')
import sklearn
import tigramite
from tigramite import data_processing as pp
from tigramite.toymodels import structural_causal_processes as toys
from tigramite import plotting as tp
from tigramite.pcmci import PCMCI
from tigramite.independence_tests import ParCorr, GPDC, CMIknn, CMIsymb
seed = 7
auto_coeff = 0.95
coeff = 0.4
T = 500
def lin(x): return x
links ={0: [((0, -1), auto_coeff, lin),
((1, -1), coeff, lin)
],
1: [((1, -1), auto_coeff, lin),
],
2: [((2, -1), auto_coeff, lin),
((3, 0), -coeff, lin),
],
3: [((3, -1), auto_coeff, lin),
((1, -2), coeff, lin),
],
4: [((4, -1), auto_coeff, lin),
((3, 0), coeff, lin),
],
5: [((5, -1), 0.5*auto_coeff, lin),
((6, 0), coeff, lin),
],
6: [((6, -1), 0.5*auto_coeff, lin),
((5, -1), -coeff, lin),
],
7: [((7, -1), auto_coeff, lin),
((8, 0), -coeff, lin),
],
8: [],
}
# Specify dynamical noise term distributions, here unit variance Gaussians
random_state = np.random.RandomState(seed)
noises = noises = [random_state.randn for j in links.keys()]
data, nonstationarity_indicator = toys.structural_causal_process(
links=links, T=T, noises=noises, seed=seed)
T, N = data.shape
# Initialize dataframe object, specify variable names
var_names = [r'$X^{%d}$' % j for j in range(N) ]
dataframe = pp.DataFrame(data, var_names=var_names)
true_graph = toys.links_to_graph(links=links)
tp.plot_timeseries(dataframe, figsize=(15, 5)); plt.show()
parcorr = ParCorr(significance='analytic')
pcmci = PCMCI(
dataframe=dataframe,
cond_ind_test=parcorr,
verbosity=1)
correlations = pcmci.run_bivci(tau_max=10, val_only=True)['val_matrix']
lag_func_matrix = tp.plot_lagfuncs(val_matrix=correlations,
setup_args={'var_names':var_names, 'figsize':(15, 10),
'x_base':5, 'y_base':.5})
scatter_lags = np.argmax(np.abs(correlations), axis=2)
tp.plot_scatterplots(dataframe=dataframe, setup_args={'figsize':(15, 10)}, add_scatterplot_args={'scatter_lags':scatter_lags}); plt.show()
tau_max = 3
pc_alpha = 0.01
pcmci.verbosity = 2
results = pcmci.run_pcmciplus(tau_min=0, tau_max=tau_max, pc_alpha=pc_alpha)
print("Graph")
print (results['graph'])
print("Adjacency MCI partial correlations")
print (results['val_matrix'].round(2))
print("Adjacency p-values")
print (results['p_matrix'].round(3))
q_matrix = pcmci.get_corrected_pvalues(p_matrix=results['p_matrix'], fdr_method='fdr_bh',
exclude_contemporaneous=False)
tp.plot_graph(
val_matrix=results['val_matrix'],
graph=results['graph'],
var_names=var_names,
link_colorbar_label='cross-MCI (edges)',
node_colorbar_label='auto-MCI (nodes)',
); plt.show()
# Plot time series graph
tp.plot_time_series_graph(
figsize=(8, 8),
node_size=0.05,
val_matrix=results['val_matrix'],
graph=results['graph'],
var_names=var_names,
link_colorbar_label='MCI',
); plt.show()
tau_max = 3
alpha_level = 0.001
pcmci.verbosity = 0
results = pcmci.run_pcmci(tau_min=0, tau_max=tau_max, pc_alpha=None, alpha_level=alpha_level)
# Mark false links as grey
true_graph = toys.links_to_graph(links=links, tau_max=tau_max)
link_attribute = np.ones(results['val_matrix'].shape, dtype = 'object')
link_attribute[true_graph==""] = 'spurious'
link_attribute[true_graph!=""] = ''
# Symmetrize contemp. link attribute
for (i,j) in zip(*np.where(true_graph[:,:,0]!="")):
link_attribute[i,j,0] = link_attribute[j,i,0] = ''
tp.plot_time_series_graph(
figsize=(8, 8),
node_size=0.05,
val_matrix=results['val_matrix'],
graph=results['graph'],
link_attribute=link_attribute,
var_names=var_names,
)
plt.show()
tau_max = 3
pcmci.verbosity = 0
results = pcmci.run_pcalg(tau_min=0, tau_max=tau_max, pc_alpha=0.01)
tp.plot_time_series_graph(
figsize=(8, 8),
node_size=0.05,
val_matrix=results['val_matrix'],
graph=results['graph'],
link_attribute=link_attribute,
var_names=var_names,
link_colorbar_label='PC')
plt.show()
data_long, nonstationarity_indicator = toys.structural_causal_process(
links=links, T=100000, noises=noises, seed=seed)
parcorr = ParCorr(significance='analytic')
pcmci_long = PCMCI(
dataframe=pp.DataFrame(data_long, var_names=var_names),
cond_ind_test=parcorr,
verbosity=1)
tau_max = 3
pcmci_long.verbosity = 0
results = pcmci_long.run_pcalg(tau_min=0, tau_max=tau_max, pc_alpha=0.001)
tp.plot_time_series_graph(
figsize=(8, 8),
node_size=0.05,
val_matrix=results['val_matrix'],
graph=results['graph'],
link_attribute=link_attribute,
var_names=var_names,
link_colorbar_label='PC')
plt.show()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Structural causal processes with contemporaneous and lagged dependencies
Step2: The true graph $\mathcal{G}$ here has shape (N, N, 2+1) since the maximum true time lag is tau_max=2. An entry true_graph[i,j,tau]="-->" for $\tau\geq 0$ indicates a causal link $X^i_{t-\tau} \to X^j_t$, true_graph[i,j,0]="<--" indicates a causal link $X^i_{t} \leftarrow X^j_t$ (only for $\tau=0$), and true_graph[i,j,tau]="" the absence of a causal link.
Step3: 2. PCMCI+
Step4: Despite the high autocorrelation, it's stationary and doesn't contain missing values. Check the other tutorials for discussions of non-stationarity and missing values.
Step5: Before running PCMCIplus, it's a good idea to run and plot the function run_bivci, which implements a bivariate, lagged conditional independence test (similar to bivariate Granger causality, but lag-specific). This can help to identify which maximal time lag tau_max to choose. Another option would be to plot get_lagged_dependencies, but large autocorrelation will inflate lag peaks (see https
Step6: Let's do another check and use the new plot_scatterplots function to see whether the dependencies are really linear and ParCorr is the right conditional independence test. With the argument scatter_lags set to a (N, N) integer numpy array you can choose which lag to use for every pair of variables. Here we choose the lag at which the correlations above have their maximal absolute value. Of course, you might want to use a nonlinear conditional independence test to assess the lags with maximum dependency. I.e., run pcmci.get_lagged_dependencies with PCMCI initialized with a nonlinear measure (e.g., CMIknn or GPDC as introduced below).
Step7: Since the dependencies peak maximally at a lag of around 3 in the lag function plot, we choose tau_max=3 for PCMCIplus. This choice may, however, stronly depend on expert knowledge of the system. Obviously, for contemporaneous causal discovery, we leave the default tau_min=0. The other main parameter is pc_alpha which sets the significance level for all tests in PCMCIplus. This is in contrast to PCMCI where pc_alpha only controls the significance tests in the condition-selection phase, not in the MCI tests. Also for PCMCIplus there is an automatic procedure (like for PCMCI) to choose the optimal value. If a list or None is passed for pc_alpha, the significance level is optimized for every graph across the given pc_alpha values using the score computed in cond_ind_test.get_model_selection_criterion(). Since PCMCIplus outputs not a DAG, but an equivalence class of DAGs, first one member is of this class is computed and then the score is computed as the average over all models fits for each variable. The score is the same for all members of the class.
Step8: The output of the function print_results(results) (shown at the end of the output above) gives information on the parents for each variable and also indicates unoriented links and conflicts, which are important to report in studies.
Step9: As mentioned above, p_matrix and val_matrix for PCMCIplus quantify the uncertainty and strength, respectively, only for the adjacencies in phase 2, but not for the directionality of contemporaneous links determined in phases 3 and 4. Since adjacency is a symmetric property for contemporaneous links, p_matrix and val_matrix are symmetric for tau=0, unlike the graph. We can correct the p-values by False Discovery Rate (FDR) control yielding the q_matrix.
Step10: However, the important output of PCMCIplus is the graph.
Step11: While the process graph is nicer to look at, the time series graph better represents the spatio-temporal dependency structure from which causal pathways can be read off.
Step12: Here PCMCIplus correctly estimates the underlying causal graph regarding both the lagged and contemporaneous links and the autodependencies.
Step13: False positives (grey) here appear due to indirect paths through contemporaneous parents. For example, $X^3_{t-1} \to X^2_t$ appears due to the path $X^3_{t-1} \to X^3_t \to X^2_t$ because PCMCI only removes indirect links and common drivers that come through lagged parents, not those that come through contemporaneous parents. The same holds for the false contemporaneous link $X^2_{t} - X^4_t$ due to $X^2_{t} \leftarrow X^3_t \to X^4_t$. Further, all contemporaneous links remain undirected as indicated by the lines.
Step14: Still the PC algorithm fails here in multiple ways
|
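To make the links dictionary concrete (an added illustration, read directly off the coefficients in the code above, with coeff = 0.4, auto_coeff = 0.95 and unit-variance Gaussian noise), the first two variables of the structural causal process are:
$X^0_t = 0.95\, X^0_{t-1} + 0.4\, X^1_{t-1} + \eta^0_t$
$X^1_t = 0.95\, X^1_{t-1} + \eta^1_t$, with $\eta^j_t \sim \mathcal{N}(0, 1)$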
14,860 | <ASSISTANT_TASK:>
Python Code:
class MyClass:
def __init__(self, val):
self.set_val(val)
def get_val(self):
return self._val
def set_val(self, val):
if val > 0:
self._val = val
else:
raise ValueError('val must be greater 0')
myclass = MyClass(27)
myclass._val
class MyClass:
def __init__(self, val):
self.__val = val
myclass = MyClass(42)
myclass.__val
class MySpecialClass(MyClass):
def get_val(self):
return self.__val
msc = MySpecialClass(42)
msc.get_val()
class GradingError(Exception): pass
class Student:
def __init__(self, matrikelnr):
self.matrikelnr = matrikelnr
self._grade = 0
def set_grade(self, grade):
if grade > 0 and grade < 6:
self._grade = grade
else:
raise ValueError('Grade must be between 1 and 5!')
def get_grade(self):
if self._grade > 0:
return self._grade
raise GradingError('Noch nicht benotet!')
anna = Student('01754645')
anna.set_grade(6)
anna.set_grade(2)
anna.get_grade()
anna._grade
anna._grade = 6
class Student:
def __init__(self, matrikelnr):
self.matrikelnr = matrikelnr
self.__grade = 0
def set_grade(self, grade):
if grade > 0 and grade < 6:
self.__grade = grade
else:
raise ValueError('Grade must be between 1 and 5!')
def get_grade(self):
if self.__grade > 0:
return self.__grade
raise GradingError('Noch nicht benotet!')
grade = property(get_grade, set_grade)
otto = Student('01745646465')
otto.grade = 6
class Student:
def __init__(self, matrikelnr, grade):
self.matrikelnr = matrikelnr
self.__grade = grade
def get_grade(self):
if self.__grade > 0:
return self.__grade
raise GradingError('Noch nicht benotet!')
grade = property(get_grade)
albert = Student('0157897846546', 5)
albert.grade
albert.grade = 1
class Student:
def __init__(self, matrikelnr):
self.matrikelnr = matrikelnr
self.__grade = 0
@property
def grade(self):
if self.__grade > 0:
return self.__grade
raise GradingError('Noch nicht benotet!')
@grade.setter
def grade(self, grade):
if grade > 0 and grade < 6:
self.__grade = grade
else:
raise ValueError('Grade must be between 1 and 5!')
hugo = Student('0176464645454')
hugo.grade = 6
hugo.grade = 2
hugo.grade
class MyClass:
the_answer = 42
def __init__(self, val):
self.the_answer = val
MyClass.the_answer
mc = MyClass(17)
print('Objekteigenschaft:', mc.the_answer)
print('Klasseneigenschaft:', MyClass.the_answer)
class MyClass:
instance_counter = 0
def __init__(self):
MyClass.instance_counter += 1
print('Ich bin das {}. Objekt'.format(MyClass.instance_counter))
a = MyClass()
b = MyClass()
class MyOtherClass(MyClass):
instance_counter = 0
a = MyOtherClass()
b = MyOtherClass()
class MyClass:
instance_counter = 0
def __init__(self):
self.__class__.instance_counter += 1
print('Ich bin das {}. Objekt'.format(self.__class__.instance_counter))
a = MyClass()
b = MyClass()
class MyOtherClass(MyClass):
instance_counter = 0
a = MyOtherClass()
b = MyOtherClass()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As we can see, the attribute _val is perfectly accessible from outside. However, the underscore signals that the author of the class does not intend this value to be used directly (but only, for example, via the methods get_val() and set_val()). If another programmer decides they need direct access to the attribute _val, that is their own responsibility (Python does not prevent it). This is called protection by convention. Python programmers generally stick to this convention, which is why this kind of "protection" is widely used.
Step2: Here we see that the attribute __val is not visible from outside the class at all, and therefore cannot be changed from outside either. Inside the class, however, it is available as usual. This can lead to problems
Step3: Since __val was only created inside the base class, the derived class has no access to it.
Step4: We can now set and read the grade
Step5: However, direct access to grade is still possible
Step6: As we have already seen, we can prevent this by renaming the attribute grade to __grade.
Step7: As we can see, we can set and read the object's attribute directly, but Python routes each access through the setter and getter.
Step8: So we can access the attributes defined via property(). But we cannot use grade,
Step9: The @property decorator
Step10: Class variables (static members)
Step11: So one attribute is attached to the class object, the other to the object created from the class. Such class attributes can be useful because they are available in all objects created from the class (even via self, as long as the object does not itself have an attribute of the same name
Step12: This can also be written as follows, which makes the counter work for subclasses as well
|
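A brief added illustration (not in the original text): the double-underscore "privacy" shown above is only name mangling — Python stores the attribute under _ClassName__name, so it can still be reached deliberately:
class Demo:
    def __init__(self, val):
        self.__val = val  # stored as _Demo__val

d = Demo(42)
print(d._Demo__val)                                 # 42 -- the mangled name is still reachable
print([n for n in dir(d) if n.endswith('__val')])   # ['_Demo__val']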
14,861 | <ASSISTANT_TASK:>
Python Code:
# Python language version
from platform import python_version
print('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())
# Imports
import time
import numpy as np
import pandas as pd
import matplotlib as mat
from matplotlib import pyplot as plt
from sklearn.datasets import load_iris
%matplotlib inline
fontsize = 14
ticklabelsize = 14
np.__version__
pd.__version__
mat.__version__
# Load the dataset
iris = load_iris()
df = pd.DataFrame(iris.data, columns = iris.feature_names)
print(len(df))
df.head()
# Print the numeric values of the target variable (what we want to predict),
# one of 3 possible plant categories: setosa, versicolor or virginica
# Print the numeric values of the target variable (what we want to predict),
# one of 3 possible plant categories: 0, 1 or 2
# Add a new column to the dataset with the species names, since that is what we will try to predict (target variable)
# Add a column to the dataset with the numeric values of the target variable
# Extract the features (attributes) from the dataset and print them
# Compute the mean of each feature for the 3 classes
# Print a transpose of the dataset (turn rows into columns and columns into rows)
# Use the dataset's info function to get a summary of the dataset
# Produce a statistical summary of the dataset
# Check whether there are null values in the dataset
# Count the values of sepal length
# Create a histogram of sepal length
# Create a scatter plot of the sepal length variable versus row number,
# colored by markers of the target variable
# Create a scatter plot of 2 features (attributes)
# Create a scatter matrix of the features (attributes)
# Create a histogram of all features
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise
Step2: Data Extraction and Transformation
Step3: Data Exploration
Step4: Plot
|
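The cells above are intentionally left as exercises; as one possible sketch (not an official answer key, and assuming the iris and df objects created earlier in the notebook), the first few steps could look like this:
print(iris.target)                      # numeric labels: 0, 1 or 2
print(iris.target_names)                # ['setosa' 'versicolor' 'virginica']
df['species'] = [iris.target_names[t] for t in iris.target]  # species-name column
df['target'] = iris.target                                   # numeric target column
print(df.groupby('species').mean())                          # mean of each feature per class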
14,862 | <ASSISTANT_TASK:>
Python Code:
# AR parameters
p = 4
a = 1.0 * np.random.rand(p) - 0.5
print "Original AR parameters:\n", a
# Time series data
N = 1000
n = np.arange(0, N)
# Input white noise
eparam = (0, 1.0)
e = np.sqrt(eparam[1]) * np.random.randn(N) + eparam[0]
# Generate AR time series.
y = genARProcess(p, a, eparam, e)
# Estimate autogressive model parameters
ahat = estARParam(p, y)
# Generate time series using estimated parameters
yhat = genARProcess(p, ahat, eparam, e)
plot_arfit(n, e, y, yhat)
print "Original AR parameters:\n", a
print "Estimated AR parameters:\n", ahat
w = whiten(y, ahat)
figure(figsize=(3, 3))
plot(e, w, '.')
xlabel("$e_n$", fontsize=16)
ylabel("$w_n$", fontsize=16)
title("$e_n$ vs. $w_n$", fontsize=16)
xticks(fontsize=16)
yticks(fontsize=16);
# Input white noise
N = 1000
n = np.arange(0, N)
# v acts as a window that changes the variance of e
v = 1 + (n >= 300) * (n < 700)
e = v * np.sqrt(eparam[1]) * np.random.randn(N) + eparam[0]
# Generate AR time series.
y = genARProcess(p, a, eparam, e)
ahat = estARParam(p, y)
yhat = genARProcess(p, ahat, eparam, e)
plot_arfit(n, e, y, yhat)
print "Original AR parameters:\n", a
print "Estimated AR parameters:\n", ahat
w = whiten(y, ahat)
figure(figsize=(3, 3))
plot(e, w, '.')
xlabel("$e_n$", fontsize=16)
ylabel("$w_n$", fontsize=16)
title("$e_n$ vs. $w_n$", fontsize=16)
xticks(fontsize=16)
yticks(fontsize=16);
%pylab inline
import seaborn as sns
# Stationary AR process implementation
def genARProcess(p, a, eparam, e):
N = len(e)
me, ve = eparam
# Generate the filtered signal
y = np.zeros(N)
for i in xrange(p, N):
y[i] = e[i]
for k in xrange(p):
y[i] += a[k] * y[i - k - 1]
return y
# Estimation of AR parameters for a stationary process,
# using the entire data set.
def estARParam(p, y):
N = len(y)
ymat = np.zeros((N - p, p))
yb = np.zeros((N - p - 1, 1))
for c in xrange(p, 0, -1):
ymat[:, p-c] = y[p-c:-c]
yb = y[p:]
return np.matmul(np.linalg.pinv(ymat), yb)[::-1]
# Implementation of stationary whitening filter.
def whiten(y, ahat):
N = len(y)
p = len(ahat)
w = np.zeros(N)
for i in xrange(N):
w[i] = y[i]
for k in xrange(p):
if i - k - 1 >= 0:
w[i] += - ahat[k] * y[i - k - 1]
return w
def plot_arfit(n, e, y, yhat):
figure(figsize=(12, 6))
subplot(221)
plot(n, e, label="$e_n$")
title("$e_n$", fontsize=25)
xticks(fontsize=18)
yticks(fontsize=18)
subplot(223)
plot(0,0)
plot(n, y, label="$y_n$")
xlabel("$n$", fontsize=25)
title("$y_n$", fontsize=25)
xticks(fontsize=18)
yticks(fontsize=18)
subplot(122)
plot(y, e, '.', label="$e_n$")
plot(y, yhat, '.', label="$\hat{y}_n$")
legend(loc=2, fontsize=25)
xlabel("$y_n$", fontsize=25)
ylabel("$e_n, \hat{y}_n$", fontsize=25)
title("$y_n$ vs. $e_n, \hat{y}_n$", fontsize=25)
xticks(fontsize=18)
yticks(fontsize=18)
tight_layout();
from IPython.core.display import HTML
def css_styling():
styles = open("styles/custom.css", "r").read()
return HTML(styles)
css_styling()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Estimate AR parameters using the entire dataset
Step2: Whiten the AR process output using the $\hat{a}$
Step3: Non-stationary AR process with an input of varying variance
Step4: Non-stationary Autoregressive Model with Varying Model Parameters
|
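For reference (an editorial addition that simply restates what genARProcess and whiten implement), the stationary AR(p) model and its whitening filter are:
$y_n = \sum_{k=1}^{p} a_k\, y_{n-k} + e_n$, with $e_n \sim \mathcal{N}(0, \sigma_e^2)$
$w_n = y_n - \sum_{k=1}^{p} \hat{a}_k\, y_{n-k} \approx e_n$ when $\hat{a} \approx a$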
14,863 | <ASSISTANT_TASK:>
Python Code:
from openhunt.mordorutils import *
spark = get_spark()
sd_file = "https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/defense_evasion/host/empire_psinject_PEinjection.zip"
registerMordorSQLTable(spark, sd_file, "sdTable")
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Image, ImageLoaded
FROM sdTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 7
AND (
lower(ImageLoaded) LIKE "%wmiclnt.dll"
OR lower(ImageLoaded) LIKE "%WmiApRpl.dll"
OR lower(ImageLoaded) LIKE "%wmiprov.dll"
OR lower(ImageLoaded) LIKE "%wmiutils.dll"
OR lower(ImageLoaded) LIKE "%wbemcomn.dll"
OR lower(ImageLoaded) LIKE "%WMINet_Utils.dll"
OR lower(ImageLoaded) LIKE "%wbemsvc.dll"
OR lower(ImageLoaded) LIKE "%fastprox.dll"
OR lower(Description) LIKE "%wmi%"
)
AND NOT (
lower(Image) LIKE "%wmiprvse.exe"
OR lower(Image) LIKE "%wmiapsrv.exe"
OR lower(Image) LIKE "%svchost.exe"
)
'''
)
df.show(10,False)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download & Process Security Dataset
Step2: Analytic I
|
14,864 | <ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
import time
from matplotlib import rcParams
import matplotlib.pyplot as plt
%matplotlib inline
rcParams['figure.figsize'] = (13, 6)
plt.style.use('ggplot')
from nilmtk import DataSet, TimeFrame, MeterGroup, HDFDataStore
from nilmtk.disaggregate import CombinatorialOptimisation
train = DataSet('/data/REDD/redd.h5')
test = DataSet('/data/REDD/redd.h5')
building = 1
train.buildings[building].elec.mains().plot()
train.set_window(end="30-4-2011")
test.set_window(start="30-4-2011")
train_elec = train.buildings[1].elec
test_elec = test.buildings[1].elec
train_elec.mains().plot()
test_elec.mains().plot()
fridge_meter = train_elec['fridge']
fridge_df = fridge_meter.load().next()
fridge_df.head()
mains = train_elec.mains()
mains_df = mains.load().next()
mains_df.head()
top_5_train_elec = train_elec.submeters().select_top_k(k=5)
top_5_train_elec
start = time.time()
from nilmtk.disaggregate import fhmm_exact
fhmm = fhmm_exact.FHMM()
# Note that we have given the sample period to downsample the data to 1 minute
fhmm.train(top_5_train_elec, sample_period=60)
end = time.time()
print("Runtime =", end-start, "seconds.")
disag_filename = '/data/REDD/redd-disag-fhmm.h5'
output = HDFDataStore(disag_filename, 'w')
# Note that we have mentioned to disaggregate after converting to a sample period of 60 seconds
fhmm.disaggregate(test_elec.mains(), output, sample_period=60)
output.close()
disag_fhmm = DataSet(disag_filename)
disag_fhmm_elec = disag_fhmm.buildings[building].elec
from nilmtk.metrics import f1_score
f1_fhmm = f1_score(disag_fhmm_elec, test_elec)
f1_fhmm.index = disag_fhmm_elec.get_labels(f1_fhmm.index)
f1_fhmm.plot(kind='barh')
plt.ylabel('appliance');
plt.xlabel('f-score');
plt.title("FHMM");
start = time.time()
from nilmtk.disaggregate import CombinatorialOptimisation
co = CombinatorialOptimisation()
# Note that we have given the sample period to downsample the data to 1 minute
co.train(top_5_train_elec, sample_period=60)
end = time.time()
print("Runtime =", end-start, "seconds.")
disag_filename = '/data/REDD/redd-disag-co.h5'
output = HDFDataStore(disag_filename, 'w')
# Note that we have mentioned to disaggregate after converting to a sample period of 60 seconds
co.disaggregate(test_elec.mains(), output, sample_period=60)
output.close()
disag_co = DataSet(disag_filename)
disag_co_elec = disag_co.buildings[building].elec
from nilmtk.metrics import f1_score
f1_co= f1_score(disag_co_elec, test_elec)
f1_co.index = disag_co_elec.get_labels(f1_co.index)
f1_co.plot(kind='barh')
plt.ylabel('appliance');
plt.xlabel('f-score');
plt.title("CO");
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dividing data into train and test set
Step2: Let us use building 1 for demo purposes
Step3: Let's split data at April 30th
Step4: The REDD data set has appliance-level data sampled every 3 or 4 seconds and mains data sampled every 1 second. Let us verify this.
Step5: Since both of these are sampled at different frequencies, we will downsample both to 1 minute resolution. We will also select the top-5 appliances in terms of energy consumption and use them for training our FHMM and CO models.
Step6: Training and disaggregation
|
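One simple way to verify the sampling rates mentioned above (a hedged suggestion, not from the original notebook) is to look at the median spacing of the timestamps in the dataframes already loaded in the code (fridge_df and mains_df):
print(fridge_df.index.to_series().diff().median())  # expected: roughly 3-4 seconds for the appliance channel
print(mains_df.index.to_series().diff().median())   # expected: roughly 1 second for the mains channel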
14,865 | <ASSISTANT_TASK:>
Python Code:
import os
from copy import deepcopy
import numpy as np
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
print(raw.info['bads'])
picks = mne.pick_channels_regexp(raw.ch_names, regexp='EEG 05.')
raw.plot(order=picks, n_channels=len(picks))
picks = mne.pick_channels_regexp(raw.ch_names, regexp='MEG 2..3')
raw.plot(order=picks, n_channels=len(picks))
original_bads = deepcopy(raw.info['bads'])
raw.info['bads'].append('EEG 050') # add a single channel
raw.info['bads'].extend(['EEG 051', 'EEG 052']) # add a list of channels
bad_chan = raw.info['bads'].pop(-1) # remove the last entry in the list
raw.info['bads'] = original_bads # change the whole list at once
# default is exclude='bads':
good_eeg = mne.pick_types(raw.info, meg=False, eeg=True)
all_eeg = mne.pick_types(raw.info, meg=False, eeg=True, exclude=[])
print(np.setdiff1d(all_eeg, good_eeg))
print(np.array(raw.ch_names)[np.setdiff1d(all_eeg, good_eeg)])
raw2 = raw.copy()
raw2.info['bads'] = []
events = mne.find_events(raw2, stim_channel='STI 014')
epochs = mne.Epochs(raw2, events=events)['2'].average().plot()
raw.crop(tmin=0, tmax=3).load_data()
eeg_data = raw.copy().pick_types(meg=False, eeg=True, exclude=[])
eeg_data_interp = eeg_data.copy().interpolate_bads(reset_bads=False)
for title, data in zip(['orig.', 'interp.'], [eeg_data, eeg_data_interp]):
fig = data.plot(butterfly=True, color='#00000022', bad_color='r')
fig.subplots_adjust(top=0.9)
fig.suptitle(title, size='xx-large', weight='bold')
grad_data = raw.copy().pick_types(meg='grad', exclude=[])
grad_data_interp = grad_data.copy().interpolate_bads(reset_bads=False)
for data in (grad_data, grad_data_interp):
data.plot(butterfly=True, color='#00000009', bad_color='r')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Marking bad channels
Step2: Here you can see that the
Step3: We can do the same thing for the bad MEG channel (MEG 2443). Since we
Step4: Notice first of all that the channels marked as "bad" are plotted in a light
Step5: .. sidebar
Step6: When to look for bad channels
Step7: The bad EEG channel is not so obvious, but the bad gradiometer is easy to
Step8: By default,
Step9: Note that we used the exclude=[] trick in the call to
|
14,866 | <ASSISTANT_TASK:>
Python Code:
from pandas import DataFrame
import sqlite3
query = """
CREATE TABLE test
(a VARCHAR(20), b VARCHAR(20),
 c REAL, d INTEGER
);
"""
con = sqlite3.connect(':memory:')
con.execute(query)
con.commit()
data = [('Atlanta', 'Georgia', 1.25, 6),
('Tallahassee', 'Florida', 2.6, 3),
('Sacramento', 'California', 1.7, 5)]
stmt = "INSERT INTO test VALUES(?, ?, ?, ?)"
con.executemany(stmt, data)
con.commit()
cursor = con.execute("select * from test where b LIKE 'Georgia'")
rows = cursor.fetchall()
rows
cursor.description
df = DataFrame(rows, columns=[x[0] for x in cursor.description])
df
import pandas.io.sql as sql
sql.read_sql('select * from test where c>2.0', con)
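# A small follow-up sketch: read_sql also accepts bound parameters, which
# avoids formatting values into the query string by hand (sqlite3 uses "?"
# placeholders). The threshold value here is just an example.
sql.read_sql('select * from test where c > ?', con, params=(2.0,))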
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Loading data from SQL into a DataFrame is fairly straightforward, and pandas has some functions to simplify the process. As an example, I’ll use an in-memory SQLite database using Python’s built-in sqlite3 driver
Step3: Then, insert a few rows of data
Step4: Most Python SQL drivers (PyODBC, psycopg2, MySQLdb, pymssql, etc.) return a list of tuples when selecting data from a table
Step5: You can pass the list of tuples to the DataFrame constructor, but you also need the column names, contained in the cursor’s description attribute
Step6: This is quite a bit of munging that you’d rather not repeat each time you query the database. pandas has a read_sql function that simplifies the process. Just pass the select statement and the connection object
|
14,867 | <ASSISTANT_TASK:>
Python Code:
#general imports
import pygslib
#get the data in gslib format into a pandas Dataframe
cluster= pygslib.gslib.read_gslib_file('../datasets/cluster.dat')
true= pygslib.gslib.read_gslib_file('../datasets/true.dat')
true['Declustering Weight'] = 1
npoints = len(cluster['Primary'])
true['Declustering Weight'] = 1
#using declustering wight
parameters_qpplt = {
# gslib parameters for qq-pp calculation
'qqorpp': 0, # integer (Optional, default 0, Q-Q plot). Q-Q plot (qqorpp=0); P-P plot (qqorpp=1)
#'npts' : None, # integer (Optional, default min length of va1 and va2). Number of points to use on the Q-Q or P-P plot (should not exceed the smallest number of data in data1 / data2
'va1' : cluster['Primary'], # rank-1 array('d') with bounds (nd). Variable 1
'wt1' : cluster['Declustering Weight'], # rank-1 array('d') with bounds (nd) (Optional, set to array of ones). Declustering weight for variable 1.
'va2' : true['Primary'], # rank-1 array('d') with bounds (nd). Variable 2
'wt2' : true['Declustering Weight'], # rank-1 array('d') with bounds (nd) (Optional, set to array of ones). Declustering weight for variable 2.
# visual parameters for figure (if a new figure is created)
#'figure' : None, # a bokeh figure object (Optional: new figure created if None). Set none or undefined if creating a new figure.
#'title' : None, # string (Optional, "QQ plot" or "PP plot"). Figure title
#'xlabel' : 'Z1', # string (Optional, default "Z1" or "P1"). X axis label
#'ylabel' : 'Z2', # string (Optional, default "Z2" or "P2"). Y axis label
#'xlog' : True, # boolean (Optional, default True). If true plot X axis in log sale.
#'ylog' : True, # boolean (Optional, default True). If true plot Y axis in log sale.
# visual parameter for the probplt
#'style' : None, # string with valid bokeh chart type
'color' : 'black', # string with valid CSS colour (https://www.w3schools.com/colors/colors_names.asp), or an RGB(A) hex value, or tuple of integers (r,g,b), or tuple of (r,g,b,a) (Optional, default "navy")
'legend': 'Declustered', # string (Optional, default "NA").
#'alpha' : None, # float [0-1] (Optional, default 0.5). Transparency of the fill colour
#'lwidth': None, # float (Optional, default 1). Line width
# leyend
'legendloc': None} # float (Optional, default 'bottom_right'). Any of top_left, top_center, top_right, center_right, bottom_right, bottom_center, bottom_left, center_left
# Calculate the non declustered qq plot
results, fig = pygslib.plothtml.qpplt(parameters_qpplt)
# Calculate declustered qqplot
# a) get array of ones as weights
cluster['naive']= cluster['Declustering Weight'].values*0 +1
# update parameter dic
parameters_qpplt['wt1'] = cluster['naive']
parameters_qpplt['color'] = 'blue'
parameters_qpplt['legend']='Clustered'
results, fig = pygslib.plothtml.qpplt(parameters_qpplt)
# show the plot
pygslib.plothtml.show(fig)
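# For reference, a library-free sketch of the same idea with plain numpy:
# compare matching percentiles of the clustered sample against the exhaustive
# ("true") values. This ignores the declustering weights, so it only
# approximates the weighted Q-Q computation that pygslib performs above.
import numpy as np
q = np.linspace(1, 99, 50)
q_clustered = np.percentile(cluster['Primary'], q)
q_true = np.percentile(true['Primary'], q)
print(list(zip(q, q_clustered, q_true))[:3])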
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting the data ready for work
Step2: QQ-Plot
|
14,868 | <ASSISTANT_TASK:>
Python Code:
import os.path as op
import numpy as np
import mne
data_path = mne.datasets.opm.data_path()
subject = 'OPM_sample'
subjects_dir = op.join(data_path, 'subjects')
raw_fname = op.join(data_path, 'MEG', 'OPM', 'OPM_SEF_raw.fif')
bem_fname = op.join(subjects_dir, subject, 'bem',
subject + '-5120-5120-5120-bem-sol.fif')
fwd_fname = op.join(data_path, 'MEG', 'OPM', 'OPM_sample-fwd.fif')
coil_def_fname = op.join(data_path, 'MEG', 'OPM', 'coil_def.dat')
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(None, 90, h_trans_bandwidth=10.)
raw.notch_filter(50., notch_widths=1)
# Set epoch rejection threshold a bit larger than for SQUIDs
reject = dict(mag=2e-10)
tmin, tmax = -0.5, 1
# Find median nerve stimulator trigger
event_id = dict(Median=257)
events = mne.find_events(raw, stim_channel='STI101', mask=257, mask_type='and')
picks = mne.pick_types(raw.info, meg=True, eeg=False)
# We use verbose='error' to suppress warning about decimation causing aliasing,
# ideally we would low-pass and then decimate instead
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, verbose='error',
reject=reject, picks=picks, proj=False, decim=10,
preload=True)
evoked = epochs.average()
evoked.plot()
cov = mne.compute_covariance(epochs, tmax=0.)
del epochs, raw
bem = mne.read_bem_solution(bem_fname)
trans = mne.transforms.Transform('head', 'mri') # identity transformation
# To compute the forward solution, we must
# provide our temporary/custom coil definitions, which can be done as::
#
# with mne.use_coil_def(coil_def_fname):
# fwd = mne.make_forward_solution(
# raw.info, trans, src, bem, eeg=False, mindist=5.0,
# n_jobs=1, verbose=True)
fwd = mne.read_forward_solution(fwd_fname)
# use fixed orientation here just to save memory later
mne.convert_forward_solution(fwd, force_fixed=True, copy=False)
with mne.use_coil_def(coil_def_fname):
fig = mne.viz.plot_alignment(evoked.info, trans=trans, subject=subject,
subjects_dir=subjects_dir,
surfaces=('head', 'pial'), bem=bem)
mne.viz.set_3d_view(figure=fig, azimuth=45, elevation=60, distance=0.4,
focalpoint=(0.02, 0, 0.04))
# Fit dipoles on a subset of time points
with mne.use_coil_def(coil_def_fname):
dip_opm, _ = mne.fit_dipole(evoked.copy().crop(0.040, 0.080),
cov, bem, trans, verbose=True)
idx = np.argmax(dip_opm.gof)
print('Best dipole at t=%0.1f ms with %0.1f%% GOF'
% (1000 * dip_opm.times[idx], dip_opm.gof[idx]))
# Plot N20m dipole as an example
dip_opm.plot_locations(trans, subject, subjects_dir,
mode='orthoview', idx=idx)
inverse_operator = mne.minimum_norm.make_inverse_operator(
evoked.info, fwd, cov, loose=0., depth=None)
del fwd, cov
method = "MNE"
snr = 3.
lambda2 = 1. / snr ** 2
stc = mne.minimum_norm.apply_inverse(
evoked, inverse_operator, lambda2, method=method,
pick_ori=None, verbose=True)
# Plot source estimate at time of best dipole fit
brain = stc.plot(hemi='rh', views='lat', subjects_dir=subjects_dir,
initial_time=dip_opm.times[idx],
clim=dict(kind='percent', lims=[99, 99.9, 99.99]),
size=(400, 300), background='w')
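# A short numeric summary of the best dipole fit found above (a sketch;
# pos and amplitude are standard attributes of mne.Dipole objects).
print('dipole position (m): %s' % dip_opm.pos[idx])
print('dipole amplitude (nAm): %0.1f' % (dip_opm.amplitude[idx] * 1e9))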
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Prepare data for localization
Step2: Examine our coordinate alignment for source localization and compute a
Step3: Perform dipole fitting
Step4: Perform minimum-norm localization
|
14,869 | <ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.0,<2.1"
%matplotlib inline
import phoebe
from phoebe import u
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary(contact_binary=True)
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
b.add_dataset('rv', times=np.linspace(0,1,101), dataset='rv01')
b.add_compute('phoebe', compute='phoebe2', mesh_method='marching')
b.add_compute('legacy', compute='phoebe1')
b.set_value_all('atm', 'extern_planckint')
b.set_value_all('gridsize', 30)
b.set_value_all('ld_func', 'logarithmic')
b.set_value_all('ld_coeffs', [0.0, 0.0])
b.set_value_all('refl_num',0)
b.set_value_all('rv_grav', False)
b.set_value_all('ltte', False)
b.run_compute(compute='phoebe2', model='phoebe2model', irrad_method='none')
b.run_compute(compute='phoebe1', model='phoebe1model')
axs, artists = b['lc01@phoebe2model'].plot(color='g')
axs, artists = b['lc01@phoebe1model'].plot(color='r')
leg = plt.legend(loc=4)
artist, = plt.plot(b.get_value('fluxes@lc01@phoebe2model') - b.get_value('fluxes@lc01@phoebe1model'), 'g-')
artist = plt.axhline(0.0, linestyle='dashed', color='k')
axs, artists = b['rv01@phoebe2model'].plot(color='g')
axs, artists = b['rv01@phoebe1model'].plot(color='r')
artist, = plt.plot(b.get_value('rvs@primary@phoebe2model', ) - b.get_value('rvs@primary@phoebe1model'), color='g', ls=':')
artist, = plt.plot(b.get_value('rvs@secondary@phoebe2model') - b.get_value('rvs@secondary@phoebe1model'), color='g', ls='-.')
artist = plt.axhline(0.0, linestyle='dashed', color='k')
ylim = plt.ylim(-1.5, 1.5)
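# A single-number summary of the light-curve agreement between the two
# backends (a sketch reusing the same twigs accessed above):
flux_diff = b.get_value('fluxes@lc01@phoebe2model') - b.get_value('fluxes@lc01@phoebe1model')
print('max |flux difference|: %g' % np.max(np.abs(flux_diff)))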
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Adding Datasets and Compute Options
Step3: Let's add compute options for phoebe using the new (marching) method for creating meshes.
Step4: Now we add compute options for the 'legacy' backend.
Step5: Let's use the external atmospheres available for both phoebe1 and phoebe2
Step6: Set value of gridsize for the trapezoidal (WD) mesh.
Step7: Let's also disable other special effect such as heating, gravity, and light-time effects.
Step8: Finally, let's compute our models
Step9: Plotting
Step10: Now let's plot the residuals between these two models
Step11: RVs
|
14,870 | <ASSISTANT_TASK:>
Python Code:
%pylab inline
! grep "multipv 1" log4.txt | grep -v lowerbound | grep -v upperbound > log4_g.txt
def parse_info(l):
D = {}
k = l.split()
i = 0
assert k[i] == "info"
i += 1
while i < len(k):
if k[i] == "depth":
D[k[i]] = int(k[i+1])
i += 2
elif k[i] == "seldepth":
D[k[i]] = int(k[i+1])
i += 2
elif k[i] == "multipv":
D[k[i]] = int(k[i+1])
i += 2
elif k[i] == "score":
if k[i+1] == "cp":
D["score_p"] = int(k[i+2]) / 100. # score in pawns
i += 3
elif k[i] == "nodes":
D[k[i]] = int(k[i+1])
i += 2
elif k[i] == "nps":
D[k[i]] = int(k[i+1])
i += 2
elif k[i] == "hashfull":
D[k[i]] = int(k[i+1]) / 1000. # between 0 and 1
i += 2
elif k[i] == "tbhits":
D[k[i]] = int(k[i+1])
i += 2
elif k[i] == "time":
D[k[i]] = int(k[i+1]) / 1000. # elapsed time in [s]
i += 2
elif k[i] == "pv":
D[k[i]] = k[i+1:]
return D
else:
raise Exception("Unknown kw")
# Convert to an array of lists
D = []
for l in open("log4_g.txt").readlines():
D.append(parse_info(l))
# Convert to a list of arrays
data = {}
for key in D[-1].keys():
d = []
for x in D:
if key in x:
d.append(x[key])
else:
d.append(-1)
if key != "pv":
d = array(d)
data[key] = d
title("Number of nodes searched in time")
plot(data["time"] / 60., data["nodes"], "o")
xlabel("Time [min]")
ylabel("Nodes")
grid()
show()
title("Positions per second in time")
plot(data["time"] / 60., data["nps"], "o")
xlabel("Time [min]")
ylabel("Positions / s")
grid()
show()
title("Hashtable usage")
hashfull = data["hashfull"]
hashfull[hashfull == -1] = 0
plot(data["time"] / 60., hashfull * 100, "o")
xlabel("Time [min]")
ylabel("Hashtable filled [%]")
grid()
show()
title("Number of nodes vs. depth")
semilogy(data["depth"], data["nodes"], "o")
x = data["depth"]
y = exp(x/2.2)
y = y / y[-1] * data["nodes"][-1]
semilogy(x, y, "-")
xlabel("Depth [half moves]")
ylabel("Nodes")
grid()
show()
title("Number of time vs. depth")
semilogy(data["depth"], data["time"]/60., "o")
xlabel("Depth [half moves]")
ylabel("Time [min]")
grid()
show()
title("Score")
plot(data["depth"], data["score_p"], "o")
xlabel("Depth [half moves]")
ylabel("Score [pawns]")
grid()
show()
for i in range(len(data["depth"])):
    print("%2i %s" % (data["depth"][i], " ".join(data["pv"][i])[:100]))
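# A rough estimate of the effective branching factor (a sketch): the ratio of
# node counts between successive log entries. Forced-move plateaus distort
# individual ratios, so only the median is reported.
ratios = data["nodes"][1:] / data["nodes"][:-1].astype(float)
print("median nodes ratio per extra half move: %.2f" % np.median(ratios))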
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Speed of Search
Step2: So nodes per second is roughly constant
Step3: The hashtable usage is at full capacity
Step4: Number of nodes needed for the given depth grows exponentially, except for moves that are forced, which require very little nodes to search (those show as a horizontal plateau)
Step5: Convergence wrt. Depth
Step6: Convergence of the variations
|
14,871 | <ASSISTANT_TASK:>
Python Code:
import jax.numpy as jnp
from jax import grad, jit, vmap
from jax import random
# A helper function to randomly initialize weights and biases
# for a dense neural network layer
def random_layer_params(m, n, key, scale=1e-2):
w_key, b_key = random.split(key)
return scale * random.normal(w_key, (n, m)), scale * random.normal(b_key, (n,))
# Initialize all layers for a fully-connected neural network with sizes "sizes"
def init_network_params(sizes, key):
keys = random.split(key, len(sizes))
return [random_layer_params(m, n, k) for m, n, k in zip(sizes[:-1], sizes[1:], keys)]
layer_sizes = [784, 512, 512, 10]
step_size = 0.01
num_epochs = 10
batch_size = 128
n_targets = 10
params = init_network_params(layer_sizes, random.PRNGKey(0))
from jax.scipy.special import logsumexp
def relu(x):
return jnp.maximum(0, x)
def predict(params, image):
# per-example predictions
activations = image
for w, b in params[:-1]:
outputs = jnp.dot(w, activations) + b
activations = relu(outputs)
final_w, final_b = params[-1]
logits = jnp.dot(final_w, activations) + final_b
return logits - logsumexp(logits)
# This works on single examples
random_flattened_image = random.normal(random.PRNGKey(1), (28 * 28,))
preds = predict(params, random_flattened_image)
print(preds.shape)
# Doesn't work with a batch
random_flattened_images = random.normal(random.PRNGKey(1), (10, 28 * 28))
try:
preds = predict(params, random_flattened_images)
except TypeError:
print('Invalid shapes!')
# Let's upgrade it to handle batches using `vmap`
# Make a batched version of the `predict` function
batched_predict = vmap(predict, in_axes=(None, 0))
# `batched_predict` has the same call signature as `predict`
batched_preds = batched_predict(params, random_flattened_images)
print(batched_preds.shape)
def one_hot(x, k, dtype=jnp.float32):
  """Create a one-hot encoding of x of size k."""
  return jnp.array(x[:, None] == jnp.arange(k), dtype)
def accuracy(params, images, targets):
target_class = jnp.argmax(targets, axis=1)
predicted_class = jnp.argmax(batched_predict(params, images), axis=1)
return jnp.mean(predicted_class == target_class)
def loss(params, images, targets):
preds = batched_predict(params, images)
return -jnp.mean(preds * targets)
@jit
def update(params, x, y):
grads = grad(loss)(params, x, y)
return [(w - step_size * dw, b - step_size * db)
for (w, b), (dw, db) in zip(params, grads)]
import tensorflow as tf
# Ensure TF does not see GPU and grab all GPU memory.
tf.config.set_visible_devices([], device_type='GPU')
import tensorflow_datasets as tfds
data_dir = '/tmp/tfds'
# Fetch full datasets for evaluation
# tfds.load returns tf.Tensors (or tf.data.Datasets if batch_size != -1)
# You can convert them to NumPy arrays (or iterables of NumPy arrays) with tfds.dataset_as_numpy
mnist_data, info = tfds.load(name="mnist", batch_size=-1, data_dir=data_dir, with_info=True)
mnist_data = tfds.as_numpy(mnist_data)
train_data, test_data = mnist_data['train'], mnist_data['test']
num_labels = info.features['label'].num_classes
h, w, c = info.features['image'].shape
num_pixels = h * w * c
# Full train set
train_images, train_labels = train_data['image'], train_data['label']
train_images = jnp.reshape(train_images, (len(train_images), num_pixels))
train_labels = one_hot(train_labels, num_labels)
# Full test set
test_images, test_labels = test_data['image'], test_data['label']
test_images = jnp.reshape(test_images, (len(test_images), num_pixels))
test_labels = one_hot(test_labels, num_labels)
print('Train:', train_images.shape, train_labels.shape)
print('Test:', test_images.shape, test_labels.shape)
import time
def get_train_batches():
# as_supervised=True gives us the (image, label) as a tuple instead of a dict
ds = tfds.load(name='mnist', split='train', as_supervised=True, data_dir=data_dir)
# You can build up an arbitrary tf.data input pipeline
ds = ds.batch(batch_size).prefetch(1)
# tfds.dataset_as_numpy converts the tf.data.Dataset into an iterable of NumPy arrays
return tfds.as_numpy(ds)
for epoch in range(num_epochs):
start_time = time.time()
for x, y in get_train_batches():
x = jnp.reshape(x, (len(x), num_pixels))
y = one_hot(y, num_labels)
params = update(params, x, y)
epoch_time = time.time() - start_time
train_acc = accuracy(params, train_images, train_labels)
test_acc = accuracy(params, test_images, test_labels)
print("Epoch {} in {:0.2f} sec".format(epoch, epoch_time))
print("Training set accuracy {}".format(train_acc))
print("Test set accuracy {}".format(test_acc))
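# A quick sanity check (sketch): gradients returned by `grad` mirror the
# structure of `params`, i.e. a list of (w, b) tuples per layer, which is
# exactly what `update` iterates over.
example_grads = grad(loss)(params, train_images[:batch_size], train_labels[:batch_size])
print([(dw.shape, db.shape) for dw, db in example_grads])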
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hyperparameters
Step2: Auto-batching predictions
Step3: Let's check that our prediction function only works on single images.
Step5: At this point, we have all the ingredients we need to define our neural network and train it. We've built an auto-batched version of predict, which we should be able to use in a loss function. We should be able to use grad to take the derivative of the loss with respect to the neural network parameters. Last, we should be able to use jit to speed up everything.
Step6: Data Loading with tensorflow/datasets
Step7: Training Loop
|
14,872 | <ASSISTANT_TASK:>
Python Code:
!pip install tensorflow==1.15 dm-sonnet==1.36 tensor2tensor==1.14
import time
import numpy as np
import tensorflow.compat.v1 as tf
tf.logging.set_verbosity(tf.logging.ERROR) # Hide TF deprecation messages
import matplotlib.pyplot as plt
%cd /tmp
%rm -rf /tmp/deepmind_research
!git clone https://github.com/deepmind/deepmind-research.git \
/tmp/deepmind_research
%cd /tmp/deepmind_research/polygen
import modules
import data_utils
!mkdir /tmp/vertex_model
!mkdir /tmp/face_model
!gsutil cp gs://deepmind-research-polygen/vertex_model.tar.gz /tmp/vertex_model/
!gsutil cp gs://deepmind-research-polygen/face_model.tar.gz /tmp/face_model/
!tar xvfz /tmp/vertex_model/vertex_model.tar.gz -C /tmp/vertex_model/
!tar xvfz /tmp/face_model/face_model.tar.gz -C /tmp/face_model/
vertex_module_config=dict(
decoder_config=dict(
hidden_size=512,
fc_size=2048,
num_heads=8,
layer_norm=True,
num_layers=24,
dropout_rate=0.4,
re_zero=True,
memory_efficient=True
),
quantization_bits=8,
class_conditional=True,
max_num_input_verts=5000,
use_discrete_embeddings=True,
)
face_module_config=dict(
encoder_config=dict(
hidden_size=512,
fc_size=2048,
num_heads=8,
layer_norm=True,
num_layers=10,
dropout_rate=0.2,
re_zero=True,
memory_efficient=True,
),
decoder_config=dict(
hidden_size=512,
fc_size=2048,
num_heads=8,
layer_norm=True,
num_layers=14,
dropout_rate=0.2,
re_zero=True,
memory_efficient=True,
),
class_conditional=False,
decoder_cross_attention=True,
use_discrete_vertex_embeddings=True,
max_seq_length=8000,
)
class_id = '49) table' #@param ['0) airplane,aeroplane,plane','1) ashcan,trash can,garbage can,wastebin,ash bin,ash-bin,ashbin,dustbin,trash barrel,trash bin','2) bag,traveling bag,travelling bag,grip,suitcase','3) basket,handbasket','4) bathtub,bathing tub,bath,tub','5) bed','6) bench','7) birdhouse','8) bookshelf','9) bottle','10) bowl','11) bus,autobus,coach,charabanc,double-decker,jitney,motorbus,motorcoach,omnibus,passenger vehi','12) cabinet','13) camera,photographic camera','14) can,tin,tin can','15) cap','16) car,auto,automobile,machine,motorcar','17) cellular telephone,cellular phone,cellphone,cell,mobile phone','18) chair','19) clock','20) computer keyboard,keypad','21) dishwasher,dish washer,dishwashing machine','22) display,video display','23) earphone,earpiece,headphone,phone','24) faucet,spigot','25) file,file cabinet,filing cabinet','26) guitar','27) helmet','28) jar','29) knife','30) lamp','31) laptop,laptop computer','32) loudspeaker,speaker,speaker unit,loudspeaker system,speaker system','33) mailbox,letter box','34) microphone,mike','35) microwave,microwave oven','36) motorcycle,bike','37) mug','38) piano,pianoforte,forte-piano','39) pillow','40) pistol,handgun,side arm,shooting iron','41) pot,flowerpot','42) printer,printing machine','43) remote control,remote','44) rifle','45) rocket,projectile','46) skateboard','47) sofa,couch,lounge','48) stove','49) table','50) telephone,phone,telephone set','51) tower','52) train,railroad train','53) vessel,watercraft','54) washer,automatic washer,washing machine']
num_samples_min = 1 #@param
num_samples_batch = 8 #@param
max_num_vertices = 400 #@param
max_num_face_indices = 2000 #@param
top_p_vertex_model = 0.9 #@param
top_p_face_model = 0.9 #@param
tf.reset_default_graph()
# Build models
vertex_model = modules.VertexModel(**vertex_module_config)
face_model = modules.FaceModel(**face_module_config)
# Tile out class label to every element in batch
class_id = int(class_id.split(')')[0])
vertex_model_context = {'class_label': tf.fill([num_samples_batch,], class_id)}
vertex_samples = vertex_model.sample(
num_samples_batch, context=vertex_model_context,
max_sample_length=max_num_vertices, top_p=top_p_vertex_model,
recenter_verts=True, only_return_complete=True)
vertex_model_saver = tf.train.Saver(var_list=vertex_model.variables)
# The face model generates samples conditioned on a context, which here is
# the vertex model samples
face_samples = face_model.sample(
vertex_samples, max_sample_length=max_num_face_indices,
top_p=top_p_face_model, only_return_complete=True)
face_model_saver = tf.train.Saver(var_list=face_model.variables)
# Start sampling
start = time.time()
print('Generating samples...')
with tf.Session() as sess:
vertex_model_saver.restore(sess, '/tmp/vertex_model/model')
face_model_saver.restore(sess, '/tmp/face_model/model')
mesh_list = []
num_samples_complete = 0
while num_samples_complete < num_samples_min:
v_samples_np = sess.run(vertex_samples)
if v_samples_np['completed'].size == 0:
print('No vertex samples completed in this batch. Try increasing ' +
'max_num_vertices.')
continue
f_samples_np = sess.run(
face_samples,
{vertex_samples[k]: v_samples_np[k] for k in vertex_samples.keys()})
v_samples_np = f_samples_np['context']
num_samples_complete_batch = f_samples_np['completed'].sum()
num_samples_complete += num_samples_complete_batch
print('Num. samples complete: {}'.format(num_samples_complete))
for k in range(num_samples_complete_batch):
verts = v_samples_np['vertices'][k][:v_samples_np['num_vertices'][k]]
faces = data_utils.unflatten_faces(
f_samples_np['faces'][k][:f_samples_np['num_face_indices'][k]])
mesh_list.append({'vertices': verts, 'faces': faces})
end = time.time()
print('sampling time: {}'.format(end - start))
data_utils.plot_meshes(mesh_list, ax_lims=0.4)
mesh_id = 4 #@param
data_utils.write_obj(
mesh_list[mesh_id]['vertices'], mesh_list[mesh_id]['faces'],
'mesh-{}.obj'.format(mesh_id))
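# A quick look at what was sampled (sketch): vertex and face counts per mesh.
for i, mesh in enumerate(mesh_list):
  print('mesh {}: {} vertices, {} faces'.format(
      i, mesh['vertices'].shape[0], len(mesh['faces'])))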
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download pre-trained model weights from Google Cloud Storage
Step2: Pre-trained model config
Step3: Generate class-conditional samples
Step4: Export meshes as .obj files
|
14,873 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import pandas as pd
from IPython.display import clear_output
from matplotlib import pyplot as plt
# Load the dataset.
dftrain = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv')
dfeval = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/eval.csv')
y_train = dftrain.pop('survived')
y_eval = dfeval.pop('survived')
import tensorflow as tf
tf.random.set_seed(123)
dftrain.head()
dftrain.describe()
dftrain.shape[0], dfeval.shape[0]
dftrain.age.hist(bins=20)
plt.show()
dftrain.sex.value_counts().plot(kind='barh')
plt.show()
dftrain['class'].value_counts().plot(kind='barh')
plt.show()
dftrain['embark_town'].value_counts().plot(kind='barh')
plt.show()
pd.concat([dftrain, y_train], axis=1).groupby('sex').survived.mean().plot(kind='barh').set_xlabel('% survive')
plt.show()
fc = tf.feature_column
CATEGORICAL_COLUMNS = ['sex', 'n_siblings_spouses', 'parch', 'class', 'deck',
'embark_town', 'alone']
NUMERIC_COLUMNS = ['age', 'fare']
def one_hot_cat_column(feature_name, vocab):
return tf.feature_column.indicator_column(
tf.feature_column.categorical_column_with_vocabulary_list(feature_name,
vocab))
feature_columns = []
for feature_name in CATEGORICAL_COLUMNS:
# Need to one-hot encode categorical features.
vocabulary = dftrain[feature_name].unique()
feature_columns.append(one_hot_cat_column(feature_name, vocabulary))
for feature_name in NUMERIC_COLUMNS:
feature_columns.append(tf.feature_column.numeric_column(feature_name,
dtype=tf.float32))
example = dict(dftrain.head(1))
class_fc = tf.feature_column.indicator_column(tf.feature_column.categorical_column_with_vocabulary_list('class', ('First', 'Second', 'Third')))
print('Feature value: "{}"'.format(example['class'].iloc[0]))
print('One-hot encoded: ', tf.keras.layers.DenseFeatures([class_fc])(example).numpy())
tf.keras.layers.DenseFeatures(feature_columns)(example).numpy()
# Use a batch size equal to the entire dataset, since the data is very small.
NUM_EXAMPLES = len(y_train)
def make_input_fn(X, y, n_epochs=None, shuffle=True):
def input_fn():
dataset = tf.data.Dataset.from_tensor_slices((dict(X), y))
if shuffle:
dataset = dataset.shuffle(NUM_EXAMPLES)
    # For training, cycle through the dataset as many times as needed (n_epochs=None).
dataset = dataset.repeat(n_epochs)
    # In-memory training does not use batching.
dataset = dataset.batch(NUM_EXAMPLES)
return dataset
return input_fn
# Input functions for training and evaluation.
train_input_fn = make_input_fn(dftrain, y_train)
eval_input_fn = make_input_fn(dfeval, y_eval, shuffle=False, n_epochs=1)
linear_est = tf.estimator.LinearClassifier(feature_columns)
# Train the model.
linear_est.train(train_input_fn, max_steps=100)
# Evaluate.
result = linear_est.evaluate(eval_input_fn)
clear_output()
print(pd.Series(result))
# Since the data fits in memory, using the entire dataset per layer is faster.
# One batch was defined above as the entire dataset.
n_batches = 1
est = tf.estimator.BoostedTreesClassifier(feature_columns,
n_batches_per_layer=n_batches)
# The model will stop training once the specified number of trees is built,
# not based on the number of training steps.
est.train(train_input_fn, max_steps=100)
# Evaluate.
result = est.evaluate(eval_input_fn)
clear_output()
print(pd.Series(result))
pred_dicts = list(est.predict(eval_input_fn))
probs = pd.Series([pred['probabilities'][1] for pred in pred_dicts])
probs.plot(kind='hist', bins=20, title='predicted probabilities')
plt.show()
from sklearn.metrics import roc_curve
fpr, tpr, _ = roc_curve(y_eval, probs)
plt.plot(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('false positive rate')
plt.ylabel('true positive rate')
plt.xlim(0,)
plt.ylim(0,)
plt.show()
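# The area under the ROC curve as a single-number summary of the same
# tradeoff (a sketch; roc_auc_score comes from scikit-learn, which is
# already used above for roc_curve).
from sklearn.metrics import roc_auc_score
print('AUC: %0.4f' % roc_auc_score(y_eval, probs))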
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Training Boosted Trees models in TensorFlow
Step2: The dataset consists of a training set and an evaluation set:
Step3: The training and evaluation sets contain 627 and 264 examples, respectively.
Step4: The majority of passengers are in their 20s and 30s.
Step5: There are roughly twice as many male passengers as female passengers.
Step6: The majority of passengers were in the "third" class.
Step7: Most passengers embarked from Southampton.
Step8: Females have a much higher chance of surviving than males. This is clearly a predictive feature for the model.
Step9: Create feature columns and input functions
Step10: You can view the transformation produced by a feature column. For example, here is the output when using indicator_column on a single example:
Step11: You can also view the transformations of all of the feature columns together:
Step12: Next, you need to create the input functions. These specify how data will be read into our model for training and inference. You will use the from_tensor_slices method in the tf.data API to read data directly from Pandas. This is suitable for smaller, in-memory datasets. For larger datasets, the tf.data API supports a variety of file formats (including csv) so that you can process datasets that do not fit in memory.
Step13: Train and evaluate the model
Step14: Next, let's train a Boosted Trees model. Boosted Trees supports both regression (BoostedTreesRegressor) and classification (BoostedTreesClassifier). Since the goal is to predict a survived-or-not label, you will use BoostedTreesClassifier.
Step15: Now you can use the trained model to make predictions on passengers from the evaluation set. TensorFlow models are optimized to make predictions on a batch, or collection, of examples at once. Earlier, eval_input_fn was defined using the entire evaluation set.
Step16: Finally, you can also look at the receiver operating characteristic (ROC) curve of the results, which will give us a better idea of the tradeoff between the true positive rate and the false positive rate.
|
14,874 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import scipy as sp
from scipy import stats
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
exec(open('settings.py').read(), globals())
cell_numbers = pd.read_csv('../data/cell_number_data.csv')
outgrowth = pd.read_csv('../data/outgrowth.csv')
lcell = 13.2
lcell_sem = 0.1
L0 = 800.0
cell_numbers = cell_numbers.query('pos > -@L0')
cell_numbers['quiescent'] = cell_numbers['SOX2'] - cell_numbers['PCNA']
cell_numbers_space_mean = cell_numbers[['ID', 'time', 'quiescent']].groupby('ID').agg({'time': ['mean'], 'quiescent': ['mean', 'sem']})
cell_numbers_mean = cell_numbers_space_mean.groupby([('time', 'mean')]).agg({('quiescent', 'mean'): ['mean', 'sem'], ('quiescent', 'sem'): ['mean']})
cell_numbers_mean['quiescent_delta'] = cell_numbers_mean['quiescent', 'mean', 'sem'] + cell_numbers_mean['quiescent', 'sem', 'mean']
cell_numbers_mean.index.name = 'time'
cell_numbers_mean['outgrowth'] = outgrowth.groupby('time').mean()['length']
cell_numbers_mean['outgrowth_sem'] = outgrowth.groupby('time').sem()['length']
for i, row in cell_numbers_space_mean.iterrows():
cell_numbers_space_mean.loc[i, 'outgrowth'] = float(outgrowth.groupby('time').mean().loc[int(row['time'])])
cell_numbers_space_mean.loc[i, 'outgrowth_sem'] = float(outgrowth.groupby('time').sem()['length'].loc[int(row['time'])])
cell_numbers_space_mean
Nq = pd.DataFrame(index = cell_numbers_mean.index)
Nq['mean'] = (cell_numbers_mean['outgrowth'] + L0) / lcell * cell_numbers_mean['quiescent', 'mean', 'mean']
Nq['delta'] = Nq['mean'] * (lcell_sem / lcell + cell_numbers_mean['outgrowth_sem'] / cell_numbers_mean['outgrowth'] + cell_numbers_mean['quiescent', 'mean', 'sem'] / cell_numbers_mean['quiescent', 'mean', 'mean'])
Nq.loc[0, 'delta'] = Nq.loc[0, 'mean'] * ((lcell_sem / lcell + float(cell_numbers_mean.loc[0, ('quiescent', 'mean', 'sem')]) / cell_numbers_mean.loc[0, ('quiescent', 'mean', 'mean')]))
Nq
day = dict()
for time in Nq.index:
day[time] = sp.array(cell_numbers_space_mean[cell_numbers_space_mean['time', 'mean'] == time]['quiescent', 'mean'])
day[time] = day[time] * (float(outgrowth.groupby('time').mean().loc[time]) + L0)
for time in Nq.index:
    print('Day 0 vs Day {0}: p = {1:.2f}'.format(time, sp.stats.ttest_ind(day[0], day[time]).pvalue))
from matplotlib.markers import TICKDOWN
def significance_bar(start,end,height,displaystring,linewidth = 1,markersize = 3,boxpad =0.3,fontsize = 12,color = 'k'):
# draw a line with downticks at the ends
plt.plot([start,end],[height]*2,'-',color = color,lw=linewidth,marker = TICKDOWN,markeredgewidth=linewidth,markersize = markersize)
# draw the text with a bounding box covering up the line
plt.text(0.5*(start+end),height,displaystring,ha = 'center',va='center',size = fontsize)
fig, ax = plt.subplots(1, figsize = (1, 31.5/25.0))
ax.errorbar(Nq.index,\
Nq['mean'],
Nq['delta'],
fmt = 'ko', ls = '-')
significance_bar(0, 4, 305.0, '*')
significance_bar(0, 6, 340.0, '*')
ax.set_xlim(-0.7, 8.7)
ax.set_ylim(0, 370)
ax.set_yticks(sp.arange(0, 350, 100))
ax.set_xlabel('Time (days)')
ax.set_ylabel('# SOX2+/PCNA- cells', labelpad = 8)
plt.savefig('../figure_plots/Fig2_total_number_quiescent.svg')
plt.show()
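# Fold change of the estimated number of quiescent cells relative to day 0
# (a sketch, as a quick numeric companion to the plot above):
print(Nq['mean'] / Nq['mean'].loc[0])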
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Check for significant differences
Step2: ttest vs day0
Step3: Plot
|
14,875 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
import os
import pathlib
import time
import datetime
from matplotlib import pyplot as plt
from IPython import display
dataset_name = "facades" #@param ["cityscapes", "edges2handbags", "edges2shoes", "facades", "maps", "night2day"]
_URL = f'http://efrosgans.eecs.berkeley.edu/pix2pix/datasets/{dataset_name}.tar.gz'
path_to_zip = tf.keras.utils.get_file(
fname=f"{dataset_name}.tar.gz",
origin=_URL,
extract=True)
path_to_zip = pathlib.Path(path_to_zip)
PATH = path_to_zip.parent/dataset_name
list(PATH.parent.iterdir())
sample_image = tf.io.read_file(str(PATH / 'train/1.jpg'))
sample_image = tf.io.decode_jpeg(sample_image)
print(sample_image.shape)
plt.figure()
plt.imshow(sample_image)
def load(image_file):
# Read and decode an image file to a uint8 tensor
image = tf.io.read_file(image_file)
image = tf.io.decode_jpeg(image)
# Split each image tensor into two tensors:
# - one with a real building facade image
# - one with an architecture label image
w = tf.shape(image)[1]
w = w // 2
input_image = image[:, w:, :]
real_image = image[:, :w, :]
# Convert both images to float32 tensors
input_image = tf.cast(input_image, tf.float32)
real_image = tf.cast(real_image, tf.float32)
return input_image, real_image
inp, re = load(str(PATH / 'train/100.jpg'))
# Casting to int for matplotlib to display the images
plt.figure()
plt.imshow(inp / 255.0)
plt.figure()
plt.imshow(re / 255.0)
# The facade training set consist of 400 images
BUFFER_SIZE = 400
# The batch size of 1 produced better results for the U-Net in the original pix2pix experiment
BATCH_SIZE = 1
# Each image is 256x256 in size
IMG_WIDTH = 256
IMG_HEIGHT = 256
def resize(input_image, real_image, height, width):
input_image = tf.image.resize(input_image, [height, width],
method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
real_image = tf.image.resize(real_image, [height, width],
method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
return input_image, real_image
def random_crop(input_image, real_image):
stacked_image = tf.stack([input_image, real_image], axis=0)
cropped_image = tf.image.random_crop(
stacked_image, size=[2, IMG_HEIGHT, IMG_WIDTH, 3])
return cropped_image[0], cropped_image[1]
# Normalizing the images to [-1, 1]
def normalize(input_image, real_image):
input_image = (input_image / 127.5) - 1
real_image = (real_image / 127.5) - 1
return input_image, real_image
@tf.function()
def random_jitter(input_image, real_image):
# Resizing to 286x286
input_image, real_image = resize(input_image, real_image, 286, 286)
# Random cropping back to 256x256
input_image, real_image = random_crop(input_image, real_image)
if tf.random.uniform(()) > 0.5:
# Random mirroring
input_image = tf.image.flip_left_right(input_image)
real_image = tf.image.flip_left_right(real_image)
return input_image, real_image
plt.figure(figsize=(6, 6))
for i in range(4):
rj_inp, rj_re = random_jitter(inp, re)
plt.subplot(2, 2, i + 1)
plt.imshow(rj_inp / 255.0)
plt.axis('off')
plt.show()
def load_image_train(image_file):
input_image, real_image = load(image_file)
input_image, real_image = random_jitter(input_image, real_image)
input_image, real_image = normalize(input_image, real_image)
return input_image, real_image
def load_image_test(image_file):
input_image, real_image = load(image_file)
input_image, real_image = resize(input_image, real_image,
IMG_HEIGHT, IMG_WIDTH)
input_image, real_image = normalize(input_image, real_image)
return input_image, real_image
train_dataset = tf.data.Dataset.list_files(str(PATH / 'train/*.jpg'))
train_dataset = train_dataset.map(load_image_train,
num_parallel_calls=tf.data.AUTOTUNE)
train_dataset = train_dataset.shuffle(BUFFER_SIZE)
train_dataset = train_dataset.batch(BATCH_SIZE)
try:
test_dataset = tf.data.Dataset.list_files(str(PATH / 'test/*.jpg'))
except tf.errors.InvalidArgumentError:
test_dataset = tf.data.Dataset.list_files(str(PATH / 'val/*.jpg'))
test_dataset = test_dataset.map(load_image_test)
test_dataset = test_dataset.batch(BATCH_SIZE)
OUTPUT_CHANNELS = 3
def downsample(filters, size, apply_batchnorm=True):
initializer = tf.random_normal_initializer(0., 0.02)
result = tf.keras.Sequential()
result.add(
tf.keras.layers.Conv2D(filters, size, strides=2, padding='same',
kernel_initializer=initializer, use_bias=False))
if apply_batchnorm:
result.add(tf.keras.layers.BatchNormalization())
result.add(tf.keras.layers.LeakyReLU())
return result
down_model = downsample(3, 4)
down_result = down_model(tf.expand_dims(inp, 0))
print (down_result.shape)
def upsample(filters, size, apply_dropout=False):
initializer = tf.random_normal_initializer(0., 0.02)
result = tf.keras.Sequential()
result.add(
tf.keras.layers.Conv2DTranspose(filters, size, strides=2,
padding='same',
kernel_initializer=initializer,
use_bias=False))
result.add(tf.keras.layers.BatchNormalization())
if apply_dropout:
result.add(tf.keras.layers.Dropout(0.5))
result.add(tf.keras.layers.ReLU())
return result
up_model = upsample(3, 4)
up_result = up_model(down_result)
print (up_result.shape)
def Generator():
inputs = tf.keras.layers.Input(shape=[256, 256, 3])
down_stack = [
downsample(64, 4, apply_batchnorm=False), # (batch_size, 128, 128, 64)
downsample(128, 4), # (batch_size, 64, 64, 128)
downsample(256, 4), # (batch_size, 32, 32, 256)
downsample(512, 4), # (batch_size, 16, 16, 512)
downsample(512, 4), # (batch_size, 8, 8, 512)
downsample(512, 4), # (batch_size, 4, 4, 512)
downsample(512, 4), # (batch_size, 2, 2, 512)
downsample(512, 4), # (batch_size, 1, 1, 512)
]
up_stack = [
upsample(512, 4, apply_dropout=True), # (batch_size, 2, 2, 1024)
upsample(512, 4, apply_dropout=True), # (batch_size, 4, 4, 1024)
upsample(512, 4, apply_dropout=True), # (batch_size, 8, 8, 1024)
upsample(512, 4), # (batch_size, 16, 16, 1024)
upsample(256, 4), # (batch_size, 32, 32, 512)
upsample(128, 4), # (batch_size, 64, 64, 256)
upsample(64, 4), # (batch_size, 128, 128, 128)
]
initializer = tf.random_normal_initializer(0., 0.02)
last = tf.keras.layers.Conv2DTranspose(OUTPUT_CHANNELS, 4,
strides=2,
padding='same',
kernel_initializer=initializer,
activation='tanh') # (batch_size, 256, 256, 3)
x = inputs
# Downsampling through the model
skips = []
for down in down_stack:
x = down(x)
skips.append(x)
skips = reversed(skips[:-1])
# Upsampling and establishing the skip connections
for up, skip in zip(up_stack, skips):
x = up(x)
x = tf.keras.layers.Concatenate()([x, skip])
x = last(x)
return tf.keras.Model(inputs=inputs, outputs=x)
generator = Generator()
tf.keras.utils.plot_model(generator, show_shapes=True, dpi=64)
gen_output = generator(inp[tf.newaxis, ...], training=False)
plt.imshow(gen_output[0, ...])
LAMBDA = 100
loss_object = tf.keras.losses.BinaryCrossentropy(from_logits=True)
def generator_loss(disc_generated_output, gen_output, target):
gan_loss = loss_object(tf.ones_like(disc_generated_output), disc_generated_output)
# Mean absolute error
l1_loss = tf.reduce_mean(tf.abs(target - gen_output))
total_gen_loss = gan_loss + (LAMBDA * l1_loss)
return total_gen_loss, gan_loss, l1_loss
def Discriminator():
initializer = tf.random_normal_initializer(0., 0.02)
inp = tf.keras.layers.Input(shape=[256, 256, 3], name='input_image')
tar = tf.keras.layers.Input(shape=[256, 256, 3], name='target_image')
x = tf.keras.layers.concatenate([inp, tar]) # (batch_size, 256, 256, channels*2)
down1 = downsample(64, 4, False)(x) # (batch_size, 128, 128, 64)
down2 = downsample(128, 4)(down1) # (batch_size, 64, 64, 128)
down3 = downsample(256, 4)(down2) # (batch_size, 32, 32, 256)
zero_pad1 = tf.keras.layers.ZeroPadding2D()(down3) # (batch_size, 34, 34, 256)
conv = tf.keras.layers.Conv2D(512, 4, strides=1,
kernel_initializer=initializer,
use_bias=False)(zero_pad1) # (batch_size, 31, 31, 512)
batchnorm1 = tf.keras.layers.BatchNormalization()(conv)
leaky_relu = tf.keras.layers.LeakyReLU()(batchnorm1)
zero_pad2 = tf.keras.layers.ZeroPadding2D()(leaky_relu) # (batch_size, 33, 33, 512)
last = tf.keras.layers.Conv2D(1, 4, strides=1,
kernel_initializer=initializer)(zero_pad2) # (batch_size, 30, 30, 1)
return tf.keras.Model(inputs=[inp, tar], outputs=last)
discriminator = Discriminator()
tf.keras.utils.plot_model(discriminator, show_shapes=True, dpi=64)
disc_out = discriminator([inp[tf.newaxis, ...], gen_output], training=False)
plt.imshow(disc_out[0, ..., -1], vmin=-20, vmax=20, cmap='RdBu_r')
plt.colorbar()
def discriminator_loss(disc_real_output, disc_generated_output):
real_loss = loss_object(tf.ones_like(disc_real_output), disc_real_output)
generated_loss = loss_object(tf.zeros_like(disc_generated_output), disc_generated_output)
total_disc_loss = real_loss + generated_loss
return total_disc_loss
generator_optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
discriminator_optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
discriminator_optimizer=discriminator_optimizer,
generator=generator,
discriminator=discriminator)
def generate_images(model, test_input, tar):
prediction = model(test_input, training=True)
plt.figure(figsize=(15, 15))
display_list = [test_input[0], tar[0], prediction[0]]
title = ['Input Image', 'Ground Truth', 'Predicted Image']
for i in range(3):
plt.subplot(1, 3, i+1)
plt.title(title[i])
# Getting the pixel values in the [0, 1] range to plot.
plt.imshow(display_list[i] * 0.5 + 0.5)
plt.axis('off')
plt.show()
for example_input, example_target in test_dataset.take(1):
generate_images(generator, example_input, example_target)
log_dir="logs/"
summary_writer = tf.summary.create_file_writer(
log_dir + "fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
@tf.function
def train_step(input_image, target, step):
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
gen_output = generator(input_image, training=True)
disc_real_output = discriminator([input_image, target], training=True)
disc_generated_output = discriminator([input_image, gen_output], training=True)
gen_total_loss, gen_gan_loss, gen_l1_loss = generator_loss(disc_generated_output, gen_output, target)
disc_loss = discriminator_loss(disc_real_output, disc_generated_output)
generator_gradients = gen_tape.gradient(gen_total_loss,
generator.trainable_variables)
discriminator_gradients = disc_tape.gradient(disc_loss,
discriminator.trainable_variables)
generator_optimizer.apply_gradients(zip(generator_gradients,
generator.trainable_variables))
discriminator_optimizer.apply_gradients(zip(discriminator_gradients,
discriminator.trainable_variables))
with summary_writer.as_default():
tf.summary.scalar('gen_total_loss', gen_total_loss, step=step//1000)
tf.summary.scalar('gen_gan_loss', gen_gan_loss, step=step//1000)
tf.summary.scalar('gen_l1_loss', gen_l1_loss, step=step//1000)
tf.summary.scalar('disc_loss', disc_loss, step=step//1000)
def fit(train_ds, test_ds, steps):
example_input, example_target = next(iter(test_ds.take(1)))
start = time.time()
for step, (input_image, target) in train_ds.repeat().take(steps).enumerate():
if (step) % 1000 == 0:
display.clear_output(wait=True)
if step != 0:
print(f'Time taken for 1000 steps: {time.time()-start:.2f} sec\n')
start = time.time()
generate_images(generator, example_input, example_target)
print(f"Step: {step//1000}k")
train_step(input_image, target, step)
# Training step
if (step+1) % 10 == 0:
print('.', end='', flush=True)
# Save (checkpoint) the model every 5k steps
if (step + 1) % 5000 == 0:
checkpoint.save(file_prefix=checkpoint_prefix)
%load_ext tensorboard
%tensorboard --logdir {log_dir}
fit(train_dataset, test_dataset, steps=40000)
display.IFrame(
src="https://tensorboard.dev/experiment/lZ0C6FONROaUMfjYkVyJqw",
width="100%",
height="1000px")
!ls {checkpoint_dir}
# Restoring the latest checkpoint in checkpoint_dir
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
# Run the trained model on a few examples from the test set
for inp, tar in test_dataset.take(5):
generate_images(generator, inp, tar)
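# A scalar check of reconstruction quality on one test batch (a sketch): the
# mean absolute error between the target facade and the generated image.
for inp, tar in test_dataset.take(1):
  pred = generator(inp, training=True)
  print('mean |target - prediction|:', tf.reduce_mean(tf.abs(tar - pred)).numpy())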
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: pix2pix
Step2: Load the dataset
Step3: Each original image is of size 256 x 512 containing two 256 x 256 images
Step4: You need to separate real building facade images from the architecture label images—all of which will be of size 256 x 256.
Step5: Plot a sample of the input (architecture label image) and real (building facade photo) images
Step6: As described in the pix2pix paper, you need to apply random jittering and mirroring to preprocess the training set.
Step7: You can inspect some of the preprocessed output
Step8: Having checked that the loading and preprocessing works, let's define a couple of helper functions that load and preprocess the training and test sets
Step9: Build an input pipeline with tf.data
Step10: Build the generator
Step11: Define the upsampler (decoder)
Step12: Define the generator with the downsampler and the upsampler
Step13: Visualize the generator model architecture
Step14: Test the generator
Step15: Define the generator loss
Step16: The training procedure for the generator is as follows
Step17: Visualize the discriminator model architecture
Step18: Test the discriminator
Step19: Define the discriminator loss
Step20: The training procedure for the discriminator is shown below.
Step21: Generate images
Step22: Test the function
Step23: Training
Step24: The actual training loop. Since this tutorial can run of more than one dataset, and the datasets vary greatly in size the training loop is setup to work in steps instead of epochs.
Step25: This training loop saves logs that you can view in TensorBoard to monitor the training progress.
Step26: Finally, run the training loop
Step27: If you want to share the TensorBoard results publicly, you can upload the logs to TensorBoard.dev by copying the following into a code-cell.
Step28: Interpreting the logs is more subtle when training a GAN (or a cGAN like pix2pix) compared to a simple classification or regression model. Things to look for
Step29: Generate some images using the test set
|
14,876 | <ASSISTANT_TASK:>
Python Code:
def insertion_sort(unsorted_list):
x = ipytracer.List1DTracer(unsorted_list)
display(x)
for i in range(1, len(x)):
j = i - 1
key = x[i]
while x[j] > key and j >= 0:
x[j+1] = x[j]
j = j - 1
x[j+1] = key
return x.data
insertion_sort([6,4,7,9,3,5,1,8,2])
def insertion_sort(unsorted_list):
x = ipytracer.ChartTracer(unsorted_list)
display(x)
for i in range(1, len(x)):
j = i - 1
key = x[i]
while x[j] > key and j >= 0:
x[j+1] = x[j]
j = j - 1
x[j+1] = key
return x.data
insertion_sort([6,4,7,9,3,5,1,8,2])
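# For comparison, the same algorithm without any tracer attached (a plain
# Python sketch with no ipytracer dependency):
def insertion_sort_plain(unsorted_list):
    x = list(unsorted_list)
    for i in range(1, len(x)):
        key = x[i]
        j = i - 1
        while j >= 0 and x[j] > key:
            x[j + 1] = x[j]
            j -= 1
        x[j + 1] = key
    return x
print(insertion_sort_plain([6, 4, 7, 9, 3, 5, 1, 8, 2]))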
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: work
Step2: Code2 - ChartTracer
Step3: work
|
14,877 | <ASSISTANT_TASK:>
Python Code:
# As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
  """ returns relative error """
  return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in list(data.items()):
print(('%s: ' % k, v.shape))
# Test the affine_forward function
num_inputs = 2
input_shape = (4, 5, 6)
output_dim = 3
input_size = num_inputs * np.prod(input_shape)
weight_size = output_dim * np.prod(input_shape)
x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)
w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim)
b = np.linspace(-0.3, 0.1, num=output_dim)
out, _ = affine_forward(x, w, b)
correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],
[ 3.25553199, 3.5141327, 3.77273342]])
# Compare your output with ours. The error should be around 1e-9.
print('Testing affine_forward function:')
print('difference: ', rel_error(out, correct_out))
# Test the affine_backward function
np.random.seed(231)
x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)
dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)
_, cache = affine_forward(x, w, b)
dx, dw, db = affine_backward(dout, cache)
# The error should be around 1e-10
print('Testing affine_backward function:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
# Test the relu_forward function
x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)
out, _ = relu_forward(x)
correct_out = np.array([[ 0., 0., 0., 0., ],
[ 0., 0., 0.04545455, 0.13636364,],
[ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])
# Compare your output with ours. The error should be around 5e-8
print('Testing relu_forward function:')
print('difference: ', rel_error(out, correct_out))
np.random.seed(231)
x = np.random.randn(10, 10)
dout = np.random.randn(*x.shape)
dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)
_, cache = relu_forward(x)
dx = relu_backward(dout, cache)
# The error should be around 3e-12
print('Testing relu_backward function:')
print('dx error: ', rel_error(dx_num, dx))
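# The ReLU backward rule in one line (a sketch): upstream gradients pass
# through only where the forward input was positive.
x_demo = np.array([[-1.0, 2.0], [3.0, -4.0]])
dout_demo = np.ones_like(x_demo)
print(dout_demo * (x_demo > 0))  # what relu_backward should compute here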
from cs231n.layer_utils import affine_relu_forward, affine_relu_backward
np.random.seed(231)
x = np.random.randn(2, 3, 4)
w = np.random.randn(12, 10)
b = np.random.randn(10)
dout = np.random.randn(2, 10)
out, cache = affine_relu_forward(x, w, b)
dx, dw, db = affine_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout)
print('Testing affine_relu_forward:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
np.random.seed(231)
num_classes, num_inputs = 10, 50
x = 0.001 * np.random.randn(num_inputs, num_classes)
y = np.random.randint(num_classes, size=num_inputs)
dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False)
loss, dx = svm_loss(x, y)
# Test svm_loss function. Loss should be around 9 and dx error should be 1e-9
print('Testing svm_loss:')
print('loss: ', loss)
print('dx error: ', rel_error(dx_num, dx))
dx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False)
loss, dx = softmax_loss(x, y)
# Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8
print('\nTesting softmax_loss:')
print('loss: ', loss)
print('dx error: ', rel_error(dx_num, dx))
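# A numerically stable softmax for a single row of scores (a sketch):
# subtracting the max before exponentiating keeps np.exp from overflowing.
s = np.array([1.0, 2.0, 3.0])
p = np.exp(s - np.max(s))
p /= p.sum()
print(p, p.sum())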
np.random.seed(231)
N, D, H, C = 3, 5, 50, 7
X = np.random.randn(N, D)
y = np.random.randint(C, size=N)
std = 1e-3
model = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std)
print('Testing initialization ... ')
W1_std = abs(model.params['W1'].std() - std)
b1 = model.params['b1']
W2_std = abs(model.params['W2'].std() - std)
b2 = model.params['b2']
assert W1_std < std / 10, 'First layer weights do not seem right'
assert np.all(b1 == 0), 'First layer biases do not seem right'
assert W2_std < std / 10, 'Second layer weights do not seem right'
assert np.all(b2 == 0), 'Second layer biases do not seem right'
print('Testing test-time forward pass ... ')
model.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H)
model.params['b1'] = np.linspace(-0.1, 0.9, num=H)
model.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C)
model.params['b2'] = np.linspace(-0.9, 0.1, num=C)
X = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T
scores = model.loss(X)
correct_scores = np.asarray(
[[11.53165108, 12.2917344, 13.05181771, 13.81190102, 14.57198434, 15.33206765, 16.09215096],
[12.05769098, 12.74614105, 13.43459113, 14.1230412, 14.81149128, 15.49994135, 16.18839143],
[12.58373087, 13.20054771, 13.81736455, 14.43418138, 15.05099822, 15.66781506, 16.2846319 ]])
scores_diff = np.abs(scores - correct_scores).sum()
assert scores_diff < 1e-6, 'Problem with test-time forward pass'
print('Testing training loss (no regularization)')
y = np.asarray([0, 5, 1])
loss, grads = model.loss(X, y)
correct_loss = 3.4702243556
assert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss'
model.reg = 1.0
loss, grads = model.loss(X, y)
correct_loss = 26.5948426952
assert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss'
for reg in [0.0, 0.7]:
print('Running numeric gradient check with reg = ', reg)
model.reg = reg
loss, grads = model.loss(X, y)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
model = TwoLayerNet()
solver = None
##############################################################################
# TODO: Use a Solver instance to train a TwoLayerNet that achieves at least #
# 50% accuracy on the validation set. #
##############################################################################
solver = Solver(model, data,
update_rule='sgd',
optim_config={
'learning_rate': 1e-3,
},
lr_decay=0.95,
num_epochs=9, batch_size=100,
print_every=100)
solver.train()
##############################################################################
# END OF YOUR CODE #
##############################################################################
# Run this cell to visualize training loss and train / val accuracy
plt.subplot(2, 1, 1)
plt.title('Training loss')
plt.plot(solver.loss_history, 'o')
plt.xlabel('Iteration')
plt.subplot(2, 1, 2)
plt.title('Accuracy')
plt.plot(solver.train_acc_history, '-o', label='train')
plt.plot(solver.val_acc_history, '-o', label='val')
plt.plot([0.5] * len(solver.val_acc_history), 'k--')
plt.xlabel('Epoch')
plt.legend(loc='lower right')
plt.gcf().set_size_inches(15, 12)
plt.show()
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print('Running check with reg = ', reg)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
# TODO: Use a three-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 1e-2
learning_rate = 1e-2
model = FullyConnectedNet([100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
# TODO: Use a five-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
learning_rate = 1e-3
weight_scale = 1e-1
model = FullyConnectedNet([100, 100, 100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
from cs231n.optim import sgd_momentum
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-3, 'velocity': v}
next_w, _ = sgd_momentum(w, dw, config=config)
expected_next_w = np.asarray([
[ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789],
[ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526],
[ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263],
[ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]])
expected_velocity = np.asarray([
[ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158],
[ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105],
[ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053],
[ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]])
print('next_w error: ', rel_error(next_w, expected_next_w))
print('velocity error: ', rel_error(expected_velocity, config['velocity']))
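# Hedged reference: the standard SGD+momentum update these expected values are
# consistent with (decay the velocity, fold in the scaled gradient, then move
# by the velocity). Illustrative sketch only; see cs231n/optim.py for the
# graded implementation.
def sgd_momentum_sketch(w, dw, config):
    momentum = config.get('momentum', 0.9)
    lr = config['learning_rate']
    v = config.get('velocity', np.zeros_like(w))
    v = momentum * v - lr * dw   # decay old velocity, add scaled gradient step
    next_w = w + v               # move the parameter by the velocity
    config['velocity'] = v
    return next_w, config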
num_train = 4000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
for update_rule in ['sgd', 'sgd_momentum']:
print('running with ', update_rule)
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': 1e-2,
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in list(solvers.items()):
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
# Test RMSProp implementation; you should see errors less than 1e-7
from cs231n.optim import rmsprop
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
cache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'cache': cache}
next_w, _ = rmsprop(w, dw, config=config)
expected_next_w = np.asarray([
[-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247],
[-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774],
[ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447],
[ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]])
expected_cache = np.asarray([
[ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321],
[ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377],
[ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936],
[ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]])
print('next_w error: ', rel_error(expected_next_w, next_w))
print('cache error: ', rel_error(expected_cache, config['cache']))
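# Hedged reference: the RMSProp update these expected values are consistent
# with -- an exponentially decayed average of squared gradients rescales the
# per-parameter step. Illustrative sketch only.
def rmsprop_sketch(w, dw, config):
    lr = config['learning_rate']
    decay_rate = config.get('decay_rate', 0.99)
    eps = config.get('epsilon', 1e-8)
    cache = config.get('cache', np.zeros_like(w))
    cache = decay_rate * cache + (1 - decay_rate) * dw**2
    next_w = w - lr * dw / (np.sqrt(cache) + eps)
    config['cache'] = cache
    return next_w, config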
# Test Adam implementation; you should see errors around 1e-7 or less
from cs231n.optim import adam
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
m = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
v = np.linspace(0.7, 0.5, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5}
next_w, _ = adam(w, dw, config=config)
expected_next_w = np.asarray([
[-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],
[-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929],
[ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969],
[ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]])
expected_v = np.asarray([
[ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,],
[ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,],
[ 0.59414753, 0.58362676, 0.57311152, 0.56260183, 0.55209767,],
[ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]])
expected_m = np.asarray([
[ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474],
[ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316],
[ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158],
[ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]])
print('next_w error: ', rel_error(expected_next_w, next_w))
print('v error: ', rel_error(expected_v, config['v']))
print('m error: ', rel_error(expected_m, config['m']))
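# Hedged reference: a standard bias-corrected Adam update (in the convention
# used by this assignment, t is incremented before the bias correction).
# Illustrative sketch only.
def adam_sketch(w, dw, config):
    lr = config['learning_rate']
    beta1 = config.get('beta1', 0.9)
    beta2 = config.get('beta2', 0.999)
    eps = config.get('epsilon', 1e-8)
    m = config.get('m', np.zeros_like(w))
    v = config.get('v', np.zeros_like(w))
    t = config.get('t', 0) + 1
    m = beta1 * m + (1 - beta1) * dw        # first moment estimate
    v = beta2 * v + (1 - beta2) * dw**2     # second moment estimate
    mt = m / (1 - beta1**t)                 # bias-corrected first moment
    vt = v / (1 - beta2**t)                 # bias-corrected second moment
    next_w = w - lr * mt / (np.sqrt(vt) + eps)
    config['m'], config['v'], config['t'] = m, v, t
    return next_w, config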
learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3}
for update_rule in ['adam', 'rmsprop']:
print('running with ', update_rule)
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': learning_rates[update_rule]
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in list(solvers.items()):
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
best_model = None
################################################################################
# TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might #
# find batch normalization and dropout useful. Store your best model in the  #
# best_model variable. #
################################################################################
learning_rates['sgd_momentum']=1e-2
best_model_score=0.0
for learning_rate in [1e-2,5e-3,1e-3]:
for weight_scale in [5e-2,5e-1]:
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=weight_scale)
solver = Solver(model, data,
num_epochs=8, batch_size=500,
update_rule='adam',
optim_config={
'learning_rate': learning_rate
},
verbose=True)
solver.train()
print(".")
if best_model_score < solver.val_acc_history[-1]:
best_model = model
best_model_score = solver.val_acc_history[-1]
print ("score is "+str(best_model_score))
################################################################################
# END OF YOUR CODE #
################################################################################
y_test_pred = np.argmax(best_model.loss(data['X_test']), axis=1)
y_val_pred = np.argmax(best_model.loss(data['X_val']), axis=1)
print('Validation set accuracy: ', (y_val_pred == data['y_val']).mean())
print('Test set accuracy: ', (y_test_pred == data['y_test']).mean())
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fully-Connected Neural Nets
Step4: Affine layer
Step5: Affine layer
Step6: ReLU layer
Step7: ReLU layer
Step8: "Sandwich" layers
Step9: Loss layers
Step10: Two-layer network
Step11: Solver
Step12: Multilayer network
Step13: As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.
Step14: Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.
Step15: Inline question
Step16: Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.
Step17: RMSProp and Adam
Step18: Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules
Step19: Train a good model!
Step20: Test your model
|
14,878 | <ASSISTANT_TASK:>
Python Code:
import graphlab
sales = graphlab.SFrame('kc_house_data.gl/kc_house_data.gl')
train_data,test_data = sales.random_split(.8,seed=0)
# Let's compute the mean of the House Prices in King County in 2 different ways.
prices = sales['price'] # extract the price column of the sales SFrame -- this is now an SArray
# recall that the arithmetic average (the mean) is the sum of the prices divided by the total number of houses:
sum_prices = prices.sum()
num_houses = prices.size() # when prices is an SArray .size() returns its length
avg_price_1 = sum_prices/num_houses
avg_price_2 = prices.mean() # if you just want the average, the .mean() function
print "average price via method 1: " + str(avg_price_1)
print "average price via method 2: " + str(avg_price_2)
# if we want to multiply every price by 0.5 it's a simple as:
half_prices = 0.5*prices
# Let's compute the sum of squares of price. We can multiply two SArrays of the same length elementwise also with *
prices_squared = prices*prices
sum_prices_squared = prices_squared.sum() # price_squared is an SArray of the squares and we want to add them up.
print "the sum of price squared is: " + str(sum_prices_squared)
def simple_linear_regression(input_feature, output):
input_sum = input_feature.sum()
output_sum = output.sum()
N = input_feature.size()
input_mean = input_sum/N
output_mean = output_sum/N
in_out_prod = input_feature * output
in_out_prod_sum = in_out_prod.sum()
prod_sum = output_sum * input_sum
prod_mean = prod_sum/N
sqr_test = input_feature * input_feature
sqr_test_sum = sqr_test.sum()
sqr_sum = input_sum * input_sum
sqr_mean = sqr_sum/N
slope = (in_out_prod_sum - prod_mean)/(sqr_test_sum - sqr_mean)
intercept = output_mean - (input_mean * slope)
return(intercept, slope)
test_feature = graphlab.SArray(range(5))
test_output = graphlab.SArray(3 + 5*test_feature)
(test_intercept, test_slope) = simple_linear_regression(test_feature, test_output)
print "Intercept: " + str(test_intercept)
print "Slope: " + str(test_slope)
sqft_intercept, sqft_slope = simple_linear_regression(train_data['sqft_living'], train_data['price'])
print "Intercept: " + str(sqft_intercept)
print "Slope: " + str(sqft_slope)
def get_regression_predictions(input_feature, intercept, slope):
predicted_values = intercept + slope * input_feature
return predicted_values
my_house_sqft = 2650
estimated_price = get_regression_predictions(my_house_sqft, sqft_intercept, sqft_slope)
print "The estimated price for a house with %d squarefeet is $%.2f" % (my_house_sqft, estimated_price)
import numpy as np
def get_residual_sum_of_squares(input_feature, output, intercept, slope):
# First get the predictions
predictions = input_feature.apply(lambda x:get_regression_predictions(x, intercept, slope))
# then compute the residuals (since we are squaring it doesn't matter which order you subtract)
diff = np.subtract(predictions,output)
# square the residuals and add them up
RSS = np.vdot(diff,diff)
return(RSS)
print get_residual_sum_of_squares(test_feature, test_output, test_intercept, test_slope) # should be 0.0
rss_prices_on_sqft = get_residual_sum_of_squares(train_data['sqft_living'], train_data['price'], sqft_intercept, sqft_slope)
print 'The RSS of predicting Prices based on Square Feet is : ' + str(rss_prices_on_sqft)
def inverse_regression_predictions(output, intercept, slope):
# solve output = intercept + slope*input_feature for input_feature. Use this equation to compute the inverse predictions:
estimated_feature = (output - intercept)/slope
return estimated_feature
my_house_price = 800000
estimated_squarefeet = inverse_regression_predictions(my_house_price, sqft_intercept, sqft_slope)
print "The estimated squarefeet for a house worth $%.2f is %d" % (my_house_price, estimated_squarefeet)
# Estimate the slope and intercept for predicting 'price' based on 'bedrooms'
sqft_bed_intercept, sqft_bed_slope = simple_linear_regression(train_data['bedrooms'], train_data['price'])
print "Intercept: " + str(sqft_bed_intercept)
print "Slope: " + str(sqft_bed_slope)
# Compute RSS when using bedrooms on TEST data:
rss_prices_on_bedrooms = get_residual_sum_of_squares(test_data['bedrooms'], test_data['price'], sqft_bed_intercept, sqft_bed_slope)
print 'The RSS of predicting Prices based on Bedrooms is : ' + str(rss_prices_on_bedrooms)
# Compute RSS when using squarfeet on TEST data:
rss_prices_on_sqft = get_residual_sum_of_squares(test_data['sqft_living'], test_data['price'], sqft_intercept, sqft_slope)
print 'The RSS of predicting Prices based on Square Feet is : ' + str(rss_prices_on_sqft)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load house sales data
Step2: Split data into training and testing
Step3: Useful SFrame summary functions
Step4: As we see we get the same answer both ways
Step5: Aside
Step6: We can test that our function works by passing it something where we know the answer. In particular we can generate a feature and then put the output exactly on a line
Step7: Now that we know it works let's build a regression model for predicting price based on sqft_living. Remember that we train on train_data!
Step8: Predicting Values
Step9: Now that we can calculate a prediction given the slope and intercept let's make a prediction. Use (or alter) the following to find out the estimated price for a house with 2650 squarefeet according to the squarefeet model we estimated above.
Step10: Residual Sum of Squares
Step11: Let's test our get_residual_sum_of_squares function by applying it to the test model where the data lie exactly on a line. Since they lie exactly on a line the residual sum of squares should be zero!
Step12: Now use your function to calculate the RSS on training data from the squarefeet model calculated above.
Step13: Predict the squarefeet given price
Step14: Now that we have a function to compute the squarefeet given the price from our simple regression model let's see how big we might expect a house that costs $800,000 to be.
Step15: New Model
Step16: Test your Linear Regression Algorithm
|
14,879 | <ASSISTANT_TASK:>
Python Code:
import os
import re
import operator
import matplotlib.pyplot as plt
import warnings
import gensim
import numpy as np
warnings.filterwarnings('ignore') # Let's not pay heed to them right now
import nltk
nltk.download('stopwords') # Let's make sure the 'stopword' package is downloaded & updated
nltk.download('wordnet') # Let's also download wordnet, which will be used for lemmatization
from gensim.models import CoherenceModel, LdaModel, LsiModel, HdpModel
from gensim.models.wrappers import LdaMallet
from gensim.corpora import Dictionary
from pprint import pprint
%matplotlib inline
test_data_dir = '{}'.format(os.sep).join([gensim.__path__[0], 'test', 'test_data'])
lee_train_file = test_data_dir + os.sep + 'lee_background.cor'
with open(lee_train_file) as f:
for n, l in enumerate(f):
if n < 5:
print([l])
def build_texts(fname):
    """Function to build tokenized texts from file.

    Parameters:
    ----------
    fname: File to be read

    Returns:
    -------
    yields preprocessed line
    """
with open(fname) as f:
for line in f:
yield gensim.utils.simple_preprocess(line, deacc=True, min_len=3)
train_texts = list(build_texts(lee_train_file))
len(train_texts)
bigram = gensim.models.Phrases(train_texts) # for bigram collocation detection
bigram[['new', 'york', 'example']]
from gensim.utils import lemmatize
from nltk.corpus import stopwords
stops = set(stopwords.words('english')) # nltk stopwords list
def process_texts(texts):
    """Function to process texts. Following are the steps we take:

    1. Stopword Removal.
    2. Collocation detection.
    3. Lemmatization (not stemming, since stemming can reduce the interpretability).

    Parameters:
    ----------
    texts: Tokenized texts.

    Returns:
    -------
    texts: Pre-processed tokenized texts.
    """
texts = [[word for word in line if word not in stops] for line in texts]
texts = [bigram[line] for line in texts]
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
texts = [[word for word in lemmatizer.lemmatize(' '.join(line), pos='v').split()] for line in texts]
return texts
train_texts = process_texts(train_texts)
train_texts[5:6]
dictionary = Dictionary(train_texts)
corpus = [dictionary.doc2bow(text) for text in train_texts]
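# Optional, illustrative sketch (not used downstream): gensim's Dictionary can
# also drop very rare / very common tokens before building the corpus. The
# filtered_dictionary name is hypothetical and shown only for comparison.
import copy
filtered_dictionary = copy.deepcopy(dictionary)
filtered_dictionary.filter_extremes(no_below=5, no_above=0.5)
print(len(dictionary), len(filtered_dictionary))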
lsimodel = LsiModel(corpus=corpus, num_topics=10, id2word=dictionary)
lsimodel.show_topics(num_topics=5) # Showing only the top 5 topics
lsitopics = lsimodel.show_topics(formatted=False)
hdpmodel = HdpModel(corpus=corpus, id2word=dictionary)
hdpmodel.show_topics()
hdptopics = hdpmodel.show_topics(formatted=False)
ldamodel = LdaModel(corpus=corpus, num_topics=10, id2word=dictionary)
import pyLDAvis.gensim
pyLDAvis.enable_notebook()
pyLDAvis.gensim.prepare(ldamodel, corpus, dictionary)
ldatopics = ldamodel.show_topics(formatted=False)
def evaluate_graph(dictionary, corpus, texts, limit):
    """Function to display the num_topics vs. LDA coherence graph using c_v coherence.

    Parameters:
    ----------
    dictionary : Gensim dictionary
    corpus : Gensim corpus
    texts : Tokenized texts
    limit : topic limit

    Returns:
    -------
    lm_list : List of LDA topic models
    c_v : Coherence values corresponding to the LDA model with respective number of topics
    """
c_v = []
lm_list = []
for num_topics in range(1, limit):
lm = LdaModel(corpus=corpus, num_topics=num_topics, id2word=dictionary)
lm_list.append(lm)
cm = CoherenceModel(model=lm, texts=texts, dictionary=dictionary, coherence='c_v')
c_v.append(cm.get_coherence())
# Show graph
x = range(1, limit)
plt.plot(x, c_v)
plt.xlabel("num_topics")
plt.ylabel("Coherence score")
plt.legend(("c_v"), loc='best')
plt.show()
return lm_list, c_v
%%time
lmlist, c_v = evaluate_graph(dictionary=dictionary, corpus=corpus, texts=train_texts, limit=10)
pyLDAvis.gensim.prepare(lmlist[2], corpus, dictionary)
lmtopics = lmlist[5].show_topics(formatted=False)
def ret_top_model():
    """Since LDAmodel is a probabilistic model, it comes up with different topics each time we run it. To control the
    quality of the topic model we produce, we can see what the interpretability of the best topic is and keep
    evaluating the topic model until this threshold is crossed.

    Returns:
    -------
    lm: Final evaluated topic model
    top_topics: ranked topics in decreasing order. List of tuples
    """
top_topics = [(0, 0)]
while top_topics[0][1] < 0.97:
lm = LdaModel(corpus=corpus, id2word=dictionary)
coherence_values = {}
for n, topic in lm.show_topics(num_topics=-1, formatted=False):
topic = [word for word, _ in topic]
cm = CoherenceModel(topics=[topic], texts=train_texts, dictionary=dictionary, window_size=10)
coherence_values[n] = cm.get_coherence()
top_topics = sorted(coherence_values.items(), key=operator.itemgetter(1), reverse=True)
return lm, top_topics
lm, top_topics = ret_top_model()
print(top_topics[:5])
pprint([lm.show_topic(topicid) for topicid, c_v in top_topics[:10]])
lda_lsi_topics = [[word for word, prob in lm.show_topic(topicid)] for topicid, c_v in top_topics]
lsitopics = [[word for word, prob in topic] for topicid, topic in lsitopics]
hdptopics = [[word for word, prob in topic] for topicid, topic in hdptopics]
ldatopics = [[word for word, prob in topic] for topicid, topic in ldatopics]
lmtopics = [[word for word, prob in topic] for topicid, topic in lmtopics]
lsi_coherence = CoherenceModel(topics=lsitopics[:10], texts=train_texts, dictionary=dictionary, window_size=10).get_coherence()
hdp_coherence = CoherenceModel(topics=hdptopics[:10], texts=train_texts, dictionary=dictionary, window_size=10).get_coherence()
lda_coherence = CoherenceModel(topics=ldatopics, texts=train_texts, dictionary=dictionary, window_size=10).get_coherence()
lm_coherence = CoherenceModel(topics=lmtopics, texts=train_texts, dictionary=dictionary, window_size=10).get_coherence()
lda_lsi_coherence = CoherenceModel(topics=lda_lsi_topics[:10], texts=train_texts, dictionary=dictionary, window_size=10).get_coherence()
def evaluate_bar_graph(coherences, indices):
    """Function to plot a bar graph of coherence values.

    coherences: list of coherence values
    indices: Indices to be used to mark bars. Length of this and coherences should be equal.
    """
assert len(coherences) == len(indices)
n = len(coherences)
x = np.arange(n)
plt.bar(x, coherences, width=0.2, tick_label=indices, align='center')
plt.xlabel('Models')
plt.ylabel('Coherence Value')
evaluate_bar_graph([lsi_coherence, hdp_coherence, lda_coherence, lm_coherence, lda_lsi_coherence],
['LSI', 'HDP', 'LDA', 'LDA_Mod', 'LDA_LSI'])
from gensim.topic_coherence import (segmentation, probability_estimation,
direct_confirmation_measure, indirect_confirmation_measure,
aggregation)
from gensim.matutils import argsort
from collections import namedtuple
make_pipeline = namedtuple('Coherence_Measure', 'seg, prob, conf, aggr')
measure = make_pipeline(segmentation.s_one_one,
probability_estimation.p_boolean_sliding_window,
direct_confirmation_measure.log_ratio_measure,
aggregation.arithmetic_mean)
topics = []
for topic in lm.state.get_lambda():
bestn = argsort(topic, topn=10, reverse=True)
topics.append(bestn)
# Perform segmentation
segmented_topics = measure.seg(topics)
# Since this is a window-based coherence measure we will perform window based prob estimation
per_topic_postings, num_windows = measure.prob(texts=train_texts, segmented_topics=segmented_topics,
dictionary=dictionary, window_size=2)
confirmed_measures = measure.conf(segmented_topics, per_topic_postings, num_windows, normalize=False)
print(measure.aggr(confirmed_measures))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Analysing our corpus.
Step4: Preprocessing our data. Remember
Step5: Finalising our dictionary and corpus
Step6: Topic modeling with LSI
Step7: Topic modeling with HDP
Step8: Topic modeling using LDA
Step9: pyLDAvis is a great way to visualize an LDA model. To summarize briefly, the area of each circle represents the prevalence of that topic, and the length of the bars on the right represents the membership of a term in a particular topic. pyLDAvis is based on this paper.
Step11: Finding out the optimal number of topics
Step13: LDA as LSI
Step14: Inference
Step16: Evaluating all the topic models
Step17: Customizing the topic coherence measure
Step18: To get topics out of the topic model
Step19: Step 1
Step20: Step 2
Step21: Step 3
Step22: Step 4
|
14,880 | <ASSISTANT_TASK:>
Python Code:
def countGreater(arr, n, k):
    l = 0
    r = n - 1
    # leftGreater tracks the leftmost index whose value is strictly greater than k
    leftGreater = n
    while l <= r:
        m = l + (r - l) // 2
        if arr[m] > k:
            leftGreater = m
            r = m - 1
        else:
            l = m + 1
    return n - leftGreater


if __name__ == '__main__':
    arr = [3, 3, 4, 7, 7, 7, 11, 13, 13]
    n = len(arr)
    k = 7
    print(countGreater(arr, n, k))
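# Equivalent one-liner using the standard library (illustrative alternative):
# everything to the right of bisect_right(arr, k) is strictly greater than k.
from bisect import bisect_right

def countGreaterBisect(arr, n, k):
    return n - bisect_right(arr, k)

print(countGreaterBisect([3, 3, 4, 7, 7, 7, 11, 13, 13], 9, 7))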
<END_TASK>
| <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
14,881 | <ASSISTANT_TASK:>
Python Code:
import copy
import cPickle
import glob
import gzip
import os
import random
import shutil
import subprocess
import cdpybio as cpb
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import peer
import pybedtools as pbt
import scipy.stats as stats
import seaborn as sns
import statsmodels.api as sm
import vcf as pyvcf
import ciepy
import cardipspy as cpy
%matplotlib inline
%load_ext rpy2.ipython
dy_name = 'eqtl_input'
import socket
if socket.gethostname() == 'fl-hn1' or socket.gethostname() == 'fl-hn2':
dy = os.path.join(ciepy.root, 'sandbox', dy_name)
cpy.makedir(dy)
pbt.set_tempdir(dy)
outdir = os.path.join(ciepy.root, 'output', dy_name)
cpy.makedir(outdir)
private_outdir = os.path.join(ciepy.root, 'private_output', dy_name)
cpy.makedir(private_outdir)
random.seed(20150605)
sns.set_context('notebook')
fn = os.path.join(ciepy.root, 'output', 'input_data', 'wgs_metadata.tsv')
wgs_meta = pd.read_table(fn, index_col=0, squeeze=True)
fn = os.path.join(ciepy.root, 'output', 'input_data', 'rnaseq_metadata.tsv')
rna_meta = pd.read_table(fn, index_col=0)
rna_meta = rna_meta[rna_meta.in_eqtl]
fn = os.path.join(ciepy.root, 'output', 'input_data', 'subject_metadata.tsv')
subject_meta = pd.read_table(fn, index_col=0)
gene_info = pd.read_table(cpy.gencode_gene_info, index_col=0)
fn = os.path.join(ciepy.root, 'output', 'input_data', 'rsem_tpm.tsv')
tpm = pd.read_table(fn, index_col=0)
tpm = tpm[rna_meta.index]
#log_tpm = np.log10(tpm + 1)
def transform_standard_normal(df):
gc_ranks = df.rank(axis=1)
gc_ranks = gc_ranks / (gc_ranks.shape[1] + 1)
std_norm = stats.norm.ppf(gc_ranks)
std_norm = pd.DataFrame(std_norm, index=gc_ranks.index,
columns=gc_ranks.columns)
return std_norm
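# Quick illustrative sanity check on a hypothetical toy matrix (not used
# below): each row of the transformed output should be roughly standard normal.
toy = pd.DataFrame(np.random.lognormal(size=(3, 200)))
toy_sn = transform_standard_normal(toy)
print(toy_sn.mean(axis=1).round(2).tolist(), toy_sn.std(axis=1).round(2).tolist())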
erna = rna_meta[rna_meta.in_eqtl]
esub = subject_meta.ix[erna.subject_id]
ewgs = wgs_meta.ix[erna.wgs_id]
print('We have {} distinct subjects for eQTL analysis.'.format(erna.shape[0]))
erna.passage.value_counts()
esub.sex.value_counts()
esub.age.hist()
plt.ylabel('Number of samples')
plt.xlabel('Age');
vc = esub.family_id.value_counts()
vc = vc[vc > 1]
vc.plot(kind='bar')
plt.ylabel('Number of subjects from family')
plt.xlabel('Family ID');
print('{} unique family IDs.'.format(len(set(esub.family_id))))
print('{} families have only one subject.'.format(sum(esub.family_id.value_counts() == 1)))
esub.ethnicity_group.value_counts()
ewgs.cell.value_counts()
dy = os.path.join(private_outdir, 'kinship')
cpy.makedir(dy)
# Intersect our variants with 1KGP variants so we get variants in both cardips and
# 1KGP to use for LD pruning. This takes several hours. This could probably be sped
# up by running separate intersections for each chromosome separately.
fn = os.path.join(dy, '1kgp_isec', '0000.vcf.gz')
autosomal_variants = os.path.join(ciepy.root, 'private_output', 'input_data',
'autosomal_variants.vcf.gz')
if not os.path.exists(fn):
vcfs = [('/publicdata/1KGP_20151103/ALL.chr{}.'
'phase3_shapeit2_mvncall_integrated_v5a.20130502.genotypes.vcf.gz'.format(x))
for x in range(1, 23)]
c = ('bcftools isec -i\'TYPE="snp" & N_ALT == 1\' -c none -O z -p {} -n=2 {} {}'.format(
os.path.join(dy, '1kgp_isec'), autosomal_variants, ' '.join(vcfs)))
subprocess.check_call(c, shell=True)
tdy = os.path.join(dy, '1kgp_pruned')
if not os.path.exists(tdy):
from ipyparallel import Client
parallel_client = Client()
dview = parallel_client[:]
with dview.sync_imports():
import os
import subprocess
cpy.makedir(tdy)
dview.push(dict(tdy=tdy))
%px os.chdir(tdy)
# The plink --keep option needs a column for sample ID and a column for
# family ID. We can use the sample name as the family ID here.
se = pd.read_table('/publicdata/1KGP_20151103/EUR_individuals.txt', header=None,
squeeze=True)
se = se + '\t' + se
with open(os.path.join(tdy, 'keep_samples.tsv'), 'w') as f:
f.write('\n'.join(se.values) + '\n')
vcfs = [os.path.join(dy, '1kgp_isec', '00{:02}.vcf.gz'.format(x)) for
x in range(1, 23)]
commands = [('/frazer01/software/plink-1.90b3x/plink '
'--keep {} '
'--biallelic-only --indep-pairwise 50 5 0.2 --vcf {} '
'--out {}_pruned'.format(os.path.join(tdy, 'keep_samples.tsv'),
x, os.path.split(x)[1].split('.')[0]))
for x in vcfs]
dview.scatter('commands', commands)
%px [subprocess.check_call(c, shell=True) for c in commands];
# Now I need to take the plink text files and filter the 1KGP VCFs from the
# intersection step. I'll do this in parallel on the IPython cluster again.
tdy = os.path.join(dy, '1kgp_pruned')
vcfs = [os.path.join(dy, '1kgp_isec', '00{:02}.vcf.gz'.format(x)) for
x in range(1, 23)]
out_vcfs = [x.replace('.vcf.gz', '_pruned.vcf.gz').replace('1kgp_isec', '1kgp_pruned')
for x in vcfs]
snps = [os.path.join(tdy, '00{:02}_pruned.prune.in'.format(i)) for i in range(1, 23)]
if not os.path.exists(out_vcfs[0]):
from ipyparallel import Client
parallel_client = Client()
dview = parallel_client[:]
with dview.sync_imports():
import os
import subprocess
commands = ['/frazer01/software/bcftools-1.2/bcftools view -i\'ID=@{}\' '
'{} -O z -o {}'.format(snps[i], vcfs[i], out_vcfs[i])
for i in range(len(snps))]
dview.scatter('commands', commands)
%px [subprocess.check_call(c, shell=True) for c in commands];
# We'll index the pruned VCF files as well.
commands = ['/frazer01/software/bcftools-1.2/bcftools index -t {}'.format(x)
for x in out_vcfs]
dview.scatter('commands', commands)
%px [subprocess.check_call(c, shell=True) for c in commands];
# Intersect our variants with 1KGP variants so we get variants in both cardips and
# 1KGP to use for LD pruning. This takes a few hours. This could probably be sped
# up by running separate intersections for each chromosome separately.
tdy = os.path.join(dy, 'cardips_pruned_isec')
if not os.path.exists(tdy):
fn = os.path.join(dy, '1kgp_isec', '0000.vcf.gz')
out_vcfs = [x.replace('.vcf.gz', '_pruned.vcf.gz').replace('1kgp_isec', '1kgp_pruned')
for x in vcfs]
c = ('bcftools isec -i\'TYPE="snp" & N_ALT == 1\' -c none -O z -p {} -n=2 {} {}'.format(
tdy, fn, ' '.join(out_vcfs)))
subprocess.check_call(c, shell=True)
vcf = os.path.join(dy, 'cardips_pruned_isec', '0000.vcf.gz')
kmatrix = os.path.join(outdir, 'wgs.kin')
if not os.path.exists(kmatrix):
c = ('epacts make-kin --vcf {} --min-maf 0.01 --min-callrate 0.95 '
'--out {} --run 10'.format(vcf, kmatrix))
subprocess.check_call(c, shell=True)
kdump = os.path.join(outdir, 'wgs.kindump')
if not os.path.exists(kdump):
c = ('pEmmax kin-util --kinf {} --outf {} --dump'.format(
kmatrix, os.path.splitext(kdump)[0]))
subprocess.check_call(c, shell=True)
kmatrix = pd.read_table(kdump)
sns.clustermap(kmatrix, xticklabels=False, yticklabels=False, linewidths=0);
fn = os.path.join(outdir, 'unrelateds.tsv')
if not os.path.exists(fn):
import networkx as nx
bkmatrix = kmatrix < 0.05
t = bkmatrix.ix[set(rna_meta.wgs_id), set(rna_meta.wgs_id)]
ts = subject_meta.ix[set(rna_meta.subject_id)]
ts = ts.merge(rna_meta[['subject_id', 'wgs_id']], left_index=True, right_on='subject_id')
vc = ts.family_id.value_counts()
vc = vc[vc > 1]
keep = []
for i in vc.index:
ind = ts[ts.family_id == i].wgs_id
tbk = bkmatrix.ix[ind, ind]
g = nx.Graph(tbk.values)
c = list(nx.find_cliques(g))
m = max([len(x) for x in c])
c = [x for x in c if len(x) == m]
c = random.choice(c)
for j in c:
keep.append(ind[j])
vc = ts.family_id.value_counts()
vc = vc[vc == 1]
keep += list(ts[ts.family_id.apply(lambda x: x in vc.index)].wgs_id)
# keep has the wgs_id's for unrelated people
unrelateds = rna_meta[rna_meta.wgs_id.apply(lambda x: x in keep)]
unrelateds.to_csv(fn, sep='\t')
else:
unrelateds = pd.read_table(fn, index_col=0)
meta = wgs_meta.merge(subject_meta, left_on='subject_id', right_index=True)
kmatrix = kmatrix.ix[meta.index, meta.index]
vals = []
f = meta.father_id.dropna()
m = meta.mother_id.dropna()
for i in kmatrix.index:
if i in f.index:
fid = meta.ix[i, 'father_id']
if fid in wgs_meta.subject_id.values:
fid = wgs_meta[wgs_meta.subject_id == fid].index[0]
vals.append(kmatrix.ix[i, fid])
if i in m.index:
mid = meta.ix[i, 'mother_id']
if mid in wgs_meta.subject_id.values:
mid = wgs_meta[wgs_meta.subject_id == mid].index[0]
vals.append(kmatrix.ix[i, mid])
pd.Series(vals).plot(kind='hist')
plt.ylabel('Number of parent/child pairs')
plt.xlabel('Kinship coefficient')
plt.title('Kinship coefficient for parent/child pairs');
fn = os.path.join(outdir, 'emmax_samples.tsv')
if not os.path.exists(fn):
se = rna_meta[rna_meta.in_eqtl].wgs_id.copy(deep=True)
se.sort_values(inplace=True)
samples = pd.DataFrame(se)
samples.column = ['wgs_id']
samples['wgs_id'].to_csv(fn, sep='\t', header=False, index=False)
fn = os.path.join(outdir, 'emmax_full.tsv')
cov = pd.get_dummies(data=rna_meta.ix[samples.index],
columns=['sequence_id'])
cov = cov.merge(subject_meta, left_on='subject_id', right_index=True)
cov.index = cov.wgs_id
cov = cov[['sex', 'age', 'sequence_id_6', 'sequence_id_7', 'sequence_id_8']]
cov['sex'] = cov.sex.apply(lambda x: {'M':1, 'F':2}[x])
cov = cov.astype(int)
cov.to_csv(fn, sep='\t', header=False)
cov['sex'].to_csv(os.path.join(outdir, 'emmax_sex_only.tsv'), sep='\t', header=False)
else:
cov = pd.read_table(os.path.join(outdir, 'emmax_full.tsv'), index_col=0, header=None,
names=['sex', 'age', 'sequence_id_6', 'sequence_id_7', 'sequence_id_8'])
fn = os.path.join(outdir, 'emmax_samples_unrelated.tsv')
if not os.path.exists(fn):
se = unrelateds.wgs_id.copy(deep=True)
se.sort_values(inplace=True)
samples = pd.DataFrame(se)
samples.column = ['wgs_id']
samples['wgs_id'].to_csv(fn, sep='\t', header=False, index=False)
fn = os.path.join(outdir, 'emmax_full_unrelated.tsv')
cov = pd.get_dummies(data=rna_meta.ix[samples.index],
columns=['sequence_id'])
cov = cov.merge(subject_meta, left_on='subject_id', right_index=True)
cov.index = cov.wgs_id
cov = cov[['sex', 'age', 'sequence_id_6', 'sequence_id_7', 'sequence_id_8']]
cov['sex'] = cov.sex.apply(lambda x: {'M':1, 'F':2}[x])
cov = cov.astype(int)
cov.to_csv(fn, sep='\t', header=False)
cov['sex'].to_csv(os.path.join(outdir, 'emmax_sex_only_unrelated.tsv'), sep='\t', header=False)
random.seed(20160223)
fn = os.path.join(outdir, 'unrelated_subsets.tsv')
if not os.path.exists(fn):
samples = []
for n in np.arange(40, unrelateds.shape[0], 10):
samples.append(','.join(random.sample(unrelateds.wgs_id, n)))
samples = pd.Series(samples, index=np.arange(40, unrelateds.shape[0], 10))
samples.to_csv(fn, sep='\t')
%%R
library(edgeR)
%%R -i tpm -o tpm_size_factors
tpm_size_factors <- calcNormFactors(tpm)
tpm_size_factors = pd.Series(tpm_size_factors, index=tpm.columns)
tpm_size_factors.describe()
tpm_norm = tpm / tpm_size_factors
fn = os.path.join(outdir, 'tpm_log_filtered_phe_std_norm.tsv')
if not os.path.exists(fn):
log_tpm = np.log10(tpm_norm + 1)
tpm_f = log_tpm[rna_meta[rna_meta.in_eqtl].index]
tpm_f = tpm_f[(tpm_f > np.log10(2)).sum(axis=1) >= 10]
tpm_f = (tpm_f.T - tpm_f.mean(axis=1)).T
tpm_f.to_csv(os.path.join(outdir, 'tpm_log_filtered.tsv'), sep='\t')
# I'll also write a file with the WGS ID for each sample
# for use with EMMAX.
tpm_f.columns = rna_meta.ix[tpm_f.columns, 'wgs_id']
tpm_f.to_csv(os.path.join(outdir, 'tpm_log_filtered_phe.tsv'), sep='\t')
# Standard normal transformed.
tpm_f_std_norm = transform_standard_normal(tpm_f)
tpm_f_std_norm.to_csv(fn, sep='\t')
else:
tpm_f = pd.read_table(os.path.join(outdir, 'tpm_log_filtered.tsv'), index_col=0)
tpm_f_std_norm = pd.read_table(fn, index_col=0)
fig, axs = plt.subplots(1, 2)
ax = axs[0]
tpm_f.ix[tpm_f.index[0]].hist(ax=ax)
ax.set_title('Before std norm quantile norm')
ax.set_ylabel('Number of samples')
ax.set_xlabel('TPM')
ax = axs[1]
tpm_f_std_norm.ix[tpm_f.index[0]].hist(ax=ax)
ax.set_title('After std norm quantile norm')
ax.set_ylabel('Number of samples')
ax.set_xlabel('TPM');
fns = [os.path.join(outdir, 'peer_20_factors.tsv'),
os.path.join(outdir, 'peer_20_weights.tsv'),
os.path.join(outdir, 'peer_20_precision.tsv'),
os.path.join(outdir, 'peer_20_residuals.tsv')]
if sum([os.path.exists(x) for x in fns]) != len(fns):
tpm_f_std_norm_t = tpm_f_std_norm.T
model = peer.PEER()
model.setPhenoMean(tpm_f_std_norm_t)
model.setNk(20)
model.update()
factors_20 = pd.DataFrame(model.getX(), index=tpm_f_std_norm_t.index)
factors_20.to_csv(os.path.join(outdir, 'peer_20_factors.tsv'), sep='\t', header=False)
weights_20 = pd.DataFrame(model.getW(), index=tpm_f_std_norm_t.columns)
weights_20.to_csv(os.path.join(outdir, 'peer_20_weights.tsv'), sep='\t', header=False)
precision_20 = pd.Series(model.getAlpha()[:, 0])
precision_20.to_csv(os.path.join(outdir, 'peer_20_precision.tsv'),
sep='\t', index=False, header=False)
residuals_20 = pd.DataFrame(model.getResiduals(), index=tpm_f_std_norm_t.index,
columns=tpm_f_std_norm_t.columns)
residuals_20.to_csv(os.path.join(outdir, 'peer_20_residuals.tsv'),
sep='\t')
else:
factors_20 = pd.read_table(os.path.join(outdir, 'peer_20_factors.tsv'),
index_col=0, header=None)
weights_20 = pd.read_table(os.path.join(outdir, 'peer_20_weights.tsv'),
index_col=0, header=None)
precision_20 = pd.read_table(os.path.join(outdir, 'peer_20_precision.tsv'),
header=None, squeeze=True)
residuals_20 = pd.read_table(os.path.join(outdir, 'peer_20_residuals.tsv'),
index_col=0)
variance_20 = 1 / precision_20
variance_20.plot(kind='bar')
plt.ylabel('Variance')
plt.xlabel('Factor');
# Find correlation between PEER factors and known covariates.
vals = []
pvals = []
for c in cov.columns:
tvals = []
tpvals = []
for f in factors_20.columns:
res = stats.spearmanr(cov.ix[factors_20.index, c], factors_20.ix[:, f])
tvals.append(res[0])
tpvals.append(res[1])
vals.append(tvals)
pvals.append(tpvals)
c_corrs = pd.DataFrame(vals, index=cov.columns).T
c_pvals = pd.DataFrame(pvals, index=cov.columns).T
c_corrs.max()
c_corrs.min()
t = sns.heatmap(c_corrs)
plt.xlabel('Known covariates')
plt.ylabel('PEER factors')
plt.title('Correlation between PEER factors and covariates');
sns.heatmap(-np.log10(c_pvals))
plt.xlabel('Known covariates')
plt.ylabel('PEER factors')
plt.title('Correlation $p$-values');
sns.heatmap(-np.log10(c_pvals.drop('sex', axis=1)))
plt.xlabel('Known covariates')
plt.ylabel('PEER factors')
plt.title('Correlation $p$-values');
fns = [os.path.join(outdir, 'peer_10_factors.tsv'),
os.path.join(outdir, 'peer_10_weights.tsv'),
os.path.join(outdir, 'peer_10_precision.tsv'),
os.path.join(outdir, 'peer_10_residuals.tsv')]
if sum([os.path.exists(x) for x in fns]) != len(fns):
tpm_f_std_norm_t = tpm_f_std_norm.T
model = peer.PEER()
model.setPhenoMean(tpm_f_std_norm_t)
model.setNk(10)
model.update()
factors_10 = pd.DataFrame(model.getX(), index=tpm_f_std_norm_t.index)
factors_10.to_csv(os.path.join(outdir, 'peer_10_factors.tsv'), sep='\t', header=False)
weights_10 = pd.DataFrame(model.getW(), index=tpm_f_std_norm_t.columns)
weights_10.to_csv(os.path.join(outdir, 'peer_10_weights.tsv'), sep='\t', header=False)
precision_10 = pd.Series(model.getAlpha()[:, 0])
precision_10.to_csv(os.path.join(outdir, 'peer_10_precision.tsv'),
sep='\t', index=False, header=False)
residuals_10 = pd.DataFrame(model.getResiduals(), index=tpm_f_std_norm_t.index,
columns=tpm_f_std_norm_t.columns)
residuals_10.to_csv(os.path.join(outdir, 'peer_10_residuals.tsv'),
sep='\t')
residuals_std_norm = transform_standard_normal(residuals_10.T)
residuals_std_norm.to_csv(
os.path.join(outdir, 'tpm_log_filtered_phe_std_norm_peer_resid.tsv'), sep='\t')
else:
factors_10 = pd.read_table(os.path.join(outdir, 'peer_10_factors.tsv'),
index_col=0, header=None)
weights_10 = pd.read_table(os.path.join(outdir, 'peer_10_weights.tsv'),
index_col=0, header=None)
precision_10 = pd.read_table(os.path.join(outdir, 'peer_10_precision.tsv'),
header=None, squeeze=True)
residuals_10 = pd.read_table(os.path.join(outdir, 'peer_10_residuals.tsv'),
index_col=0)
residuals_std_norm = pd.read_table(
os.path.join(outdir, 'tpm_log_filtered_phe_std_norm_peer_resid.tsv'), index_col=0)
# Find correlation between PEER factors and known covariates.
vals = []
pvals = []
for c in cov.columns:
tvals = []
tpvals = []
for f in factors_10.columns:
res = stats.spearmanr(cov.ix[factors_10.index, c], factors_10.ix[:, f])
tvals.append(res[0])
tpvals.append(res[1])
vals.append(tvals)
pvals.append(tpvals)
c_corrs_10 = pd.DataFrame(vals, index=cov.columns).T
c_pvals_10 = pd.DataFrame(pvals, index=cov.columns).T
c_corrs_10.max()
c_corrs.min()
t = sns.heatmap(c_corrs_10)
plt.xlabel('Known covariates')
plt.ylabel('PEER factors')
plt.title('Correlation between PEER factors and covariates')
plt.savefig(os.path.join(outdir, 'peer_cov_corr.pdf'))
sns.heatmap(-np.log10(c_pvals_10))
plt.xlabel('Known covariates')
plt.ylabel('PEER factors')
plt.title('Correlation $-\log_{10}$ $p$-values');
sns.heatmap(c_pvals_10 < 0.05)
plt.xlabel('Known covariates')
plt.ylabel('PEER factors')
plt.title('Significant correlations');
gi = residuals_std_norm.index[gene_info.ix[residuals_std_norm.index, 'chrom'].apply(
lambda x: x not in ['chrY', 'chrX', 'chrM'])]
with open(os.path.join(outdir, 'eqtl_genes.tsv'), 'w') as f:
f.write('\n'.join(gi) + '\n')
fn = os.path.join(outdir, 'variant_regions.bed')
if not os.path.exists(fn):
variant_regions = cpb.gencode.make_promoter_bed(cpy.gencode_gtf, merge_by_gene=True,
up=1000000, down=1000000, out=fn)
else:
variant_regions = pbt.BedTool(fn)
a = len(variant_regions)
b = gene_info.shape[0]
print('There are {} genes and {} regions'.format(b, a))
fn = os.path.join(outdir, 'gene_to_regions.p')
if not os.path.exists(fn):
gene_to_regions = dict()
for r in variant_regions:
gene = r.name.split('_')[0]
gene_to_regions[gene] = gene_to_regions.get(gene, []) + ['{}:{}-{}'.format(
r.chrom, r.start, r.end)]
fn = os.path.join(outdir, 'gene_to_regions.p')
cPickle.dump(gene_to_regions, open(fn, 'wb') )
subject_meta.ethnicity_group.value_counts()
tdf = subject_meta[subject_meta.ethnicity_group == 'European']
a = tdf[tdf.family_id.isnull()]
b = tdf.dropna(subset=['family_id'])
b = b.drop_duplicates(subset=['family_id'])
tdf = pd.concat([a, b])
print('Number to use for HWE: {}'.format(tdf.shape[0]))
fn = os.path.join(private_outdir, 'filtered_european.vcf.gz')
if not os.path.exists(fn):
v = os.path.join(ciepy.root, 'private_output', 'input_data',
'autosomal_variants.vcf.gz')
samples = wgs_meta[wgs_meta.subject_id.apply(lambda x: x in tdf.index)].index
c = ('bcftools view -m2 -M2 --min-ac 1:minor {} -Ov '
'| vcftools --vcf - --indv {} --hwe 0.000001 --recode --stdout '
'| bgzip > {}'.format(
v, ' --indv '.join(samples), fn))
subprocess.check_call(c, shell=True)
shutil.move('out.log', os.path.join(private_outdir, 'hwe_filtering.log'))
c = ('bcftools index {}'.format(fn))
subprocess.check_call(c, shell=True)
c = ('bcftools index -t {}'.format(fn))
subprocess.check_call(c, shell=True)
fn = os.path.join(private_outdir, 'filtered_all', '0000.vcf.gz')
if not os.path.exists(fn):
filtered_vcf = os.path.join(private_outdir, 'filtered_european.vcf.gz')
all_vcf = os.path.join(ciepy.root, 'private_output', 'input_data',
'autosomal_variants.vcf.gz')
out = os.path.join(private_outdir, 'filtered_all')
c = ('bcftools isec -O z -p {} -n=2 -w1 {} {}'.format(
out, all_vcf, filtered_vcf))
subprocess.check_call(c, shell=True)
c = ('bcftools index {}'.format(fn))
subprocess.check_call(c, shell=True)
c = ('bcftools index -t {}'.format(fn))
subprocess.check_call(c, shell=True)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Cohort Information
Step2: Most were collected at passage 12-16 although a few are at later passages.
Step3: Kinship Matrix
Step4: LD prune 1KGP variants
Step5: Filter cardips variants
Step6: Make kinship matrix
Step7: I'm going to get unrelated individuals. I'll take a cutoff of 0.05 for the
Step8: We can see that the kinship coefficient is near 0.5 for all parent/child pairs
Step9: I'm going to make random groups of the unrelated individuals for the purpose of
Step10: Gene Expression
Step11: Here's an example of what the standard normal quantile normalization does for a
Step12: There are a few elbows/dropoffs, but the variance starts to get really small
Step13: I want to define the genes I'll use for the eQTL. I'm not using sex chromosome or mitochondrial genes.
Step14: Variant Regions
Step15: In principle, there could be multiple regions per gene if a gene has TSSs located more than 2 Mb apart.
Step16: SNV Filtering
|
14,882 | <ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv('../benchMarkingResult.txt',
header=None,
sep='\t',
names=('iteration',
'basic_result',
'efficient_result'))
x = data.iteration
y1 = data.basic_result
y2 = data.efficient_result
fig, ax = plt.subplots(figsize=(10,10))
ax.scatter(x, y1, s=15)
ax.scatter(x, y2, s=15, c='g')
d1 = {'x':x, 'y_basic':y1}
d2 = {'x':x, 'y_efficient':y2}
mod1 = smf.ols(formula='y_basic ~ x - 1',
data = d1).fit()
print mod1.summary()
mod2 = smf.ols(formula='y_efficient ~ x - 1', data=d2).fit()
mod2.summary()
y_est_basic = mod1.predict(x)
y_est_effi = mod2.predict(x)
fig, ax = plt.subplots(figsize=(10,10))
ax.scatter(x, y1, s=15)
ax.scatter(x, y2, s=15, c='g')
ax.plot(x, y_est_basic)
ax.plot(x, y_est_effi)
ax.legend(scatterpoints=1)
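# Illustrative cross-check (assumption: simple closed form, names are
# hypothetical): for a no-intercept fit (the "- 1" in the formula), the OLS
# slope is sum(x*y) / sum(x**2), so these should match the fitted coefficients.
import numpy as np
slope_basic = np.sum(x * y1) / np.sum(x**2)
slope_efficient = np.sum(x * y2) / np.sum(x**2)
print(slope_basic, slope_efficient)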
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plot the Basic and Efficient data first
Step2: Do a linear regression for the two lines and evaluate the r-squared values
Step3: Both r-squared values look quite nice
|
14,883 | <ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (6.0, 6.0)
plt.rcParams['savefig.dpi'] = 100
from straightline_utils import *
(x,y,sigmay) = generate_data()
plot_yerr(x, y, sigmay)
def straight_line_log_likelihood(theta, x, y, sigmay):
'''
Returns the log-likelihood of drawing data values *y* at
known values *x* given Gaussian measurement noise with standard
deviation with known *sigmay*, where the "true" y values are
*y_t = m * x + b*
x: list of x coordinates
y: list of y coordinates
sigmay: list of y uncertainties
m: scalar slope
b: scalar line intercept
Returns: scalar log likelihood
'''
m,b = theta
return (np.sum(np.log(1./(np.sqrt(2.*np.pi) * sigmay))) +
np.sum(-0.5 * (y - (m*x + b))**2 / sigmay**2))
def straight_line_log_prior(theta, theta_limits):
m, b = theta
mlimits, blimits = theta_limits
# Uniform in m:
if (m < mlimits[0]) | (m > mlimits[1]):
log_m_prior = -np.inf
else:
log_m_prior = np.log(1.0/(mlimits[1] - mlimits[0]))
# Uniform in b:
if (b < blimits[0]) | (b > blimits[1]):
log_b_prior = -np.inf
else:
log_b_prior = np.log(1.0/(blimits[1] - blimits[0]))
return log_m_prior + log_b_prior
def straight_line_log_posterior(theta, x, y, sigmay, theta_limits):
return (straight_line_log_likelihood(theta, x, y, sigmay) +
straight_line_log_prior(theta, theta_limits))
# Evaluate log P(m,b | x,y,sigmay) on a grid.
# Define uniform prior limits, enforcing positivity in both parameters:
mlimits = [0.0, 2.0]
blimits = [0.0, 200.0]
theta_limits = (mlimits, blimits)
# Set up grid:
mgrid = np.linspace(mlimits[0], mlimits[1], 101)
bgrid = np.linspace(blimits[0], blimits[1], 101)
log_posterior = np.zeros((len(mgrid),len(bgrid)))
# Evaluate log posterior PDF:
for im,m in enumerate(mgrid):
for ib,b in enumerate(bgrid):
theta = (m,b)
log_posterior[im,ib] = straight_line_log_posterior(theta, x, y, sigmay, theta_limits)
# Convert to probability density and plot, taking care with very small values:
posterior = np.exp(log_posterior - log_posterior.max())
plt.imshow(posterior, extent=[blimits[0],blimits[1],mlimits[0],mlimits[1]],cmap='Blues',
interpolation='none', origin='lower', aspect=(blimits[1]-blimits[0])/(mlimits[1]-mlimits[0]),
vmin=0, vmax=1)
plt.contour(bgrid, mgrid, posterior, pdf_contour_levels(posterior), colors='k')
i = np.argmax(posterior)
i,j = np.unravel_index(i, posterior.shape)
print('Grid maximum posterior values (m,b) =', mgrid[i], bgrid[j])
plt.title('Straight line: posterior PDF for parameters');
plot_mb_setup(*theta_limits);
def metropolis(log_posterior, theta, theta_limits, stepsize, nsteps=10000):
'''
log_posterior: function of theta
theta_limits: uniform prior ranges
stepsize: scalar or vector proposal distribution width
nsteps: desired number of samples
'''
log_prob = log_posterior(theta, x, y, sigmay, theta_limits)
# Store Markov chain as an array of samples:
chain = np.empty((nsteps, len(theta)))
log_probs = np.empty(nsteps)
# Count our accepted proposals:
naccept = 0
for i in range(nsteps):
theta_new = theta + stepsize * np.random.randn(len(theta))
log_prob_new = log_posterior(theta_new, x, y, sigmay, theta_limits)
if np.log(np.random.rand()) < (log_prob_new - log_prob):
# accept, and move to the proposed position:
theta = theta_new
log_prob = log_prob_new
naccept += 1
else:
# reject, and store the same sample as before:
pass
chain[i] = theta
log_probs[i] = log_prob
acceptance_rate = naccept/float(nsteps)
return chain,log_probs,acceptance_rate
# Initialize m, b at the center of prior:
m = 0.5*(mlimits[0]+mlimits[1])
b = 0.5*(blimits[0]+blimits[1])
theta = np.array([m,b])
# Step sizes, 2% or 5% of the prior
mstep = 0.02*(mlimits[1]-mlimits[0])
bstep = 0.05*(blimits[1]-blimits[0])
stepsize = np.array([mstep,bstep])
# How many steps?
nsteps = 10000
print('Running Metropolis Sampler for', nsteps, 'steps...')
chain, log_probs, acceptance_rate = metropolis(
straight_line_log_posterior, theta, theta_limits, stepsize, nsteps=nsteps
)
print('Acceptance fraction:', acceptance_rate)
# Pull m and b arrays out of the Markov chain and plot them:
mm = [m for m,b in chain]
bb = [b for m,b in chain]
# Traces, for convergence inspection:
plt.figure(figsize=(8,5))
plt.subplot(2,1,1)
plt.plot(mm, 'k-')
plt.ylim(mlimits)
plt.ylabel('m')
plt.subplot(2,1,2)
plt.plot(bb, 'k-')
plt.ylabel('Intercept b')
plt.ylim(blimits)
!pip install --upgrade --no-deps corner
import corner
corner.corner(chain[5000:], labels=['m','b'], range=[mlimits,blimits],quantiles=[0.16,0.5,0.84],
show_titles=True, title_args={"fontsize": 12},
plot_datapoints=True, fill_contours=True, levels=[0.68, 0.95],
color='b', bins=80, smooth=1.0);
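# A compact numerical summary to go with the corner plot (same 5000-step
# burn-in as above; purely descriptive, reporting the 16/50/84 percentiles).
burned = chain[5000:]
for name, samples in zip(['m', 'b'], burned.T):
    lo, med, hi = np.percentile(samples, [16, 50, 84])
    print(name, '=', med, '+', hi - med, '-', med - lo)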
# Generate a straight line model for each parameter combination, and plot:
X = np.linspace(xlimits[0],xlimits[1],50)
for i in (np.random.rand(100)*len(chain)).astype(int):
m,b = chain[i]
plt.plot(X, b+X*m, 'b-', alpha=0.1)
# Overlay the data, for comparison:
plot_yerr(x, y, sigmay);
# Test statistics: functions of the data, not the parameters.
# 1) Weighted mean y:
# def test_statistic(x,y,sigmay):
# return np.sum(y/sigmay**2.0)/np.sum(1.0/sigmay**2.0)
# 2) Variance of y:
# def test_statistic(x,y,sigmay):
# return np.var(y)
# 3) Pearson r correlation coefficient:
import scipy.stats
def test_statistic(x,y,dummy):
'''
Pearson r correlation coefficient:
r12 = \sum [(xi - xbar)*(yi - ybar)] / [\sum (xi - xbar)^2 * \sum (yi - ybar)^2]^1/2
'''
r12 = np.sum((x - np.mean(x))*(y - np.mean(y))) / \
np.sqrt(np.sum((x - np.mean(x))**2) * np.sum((y - np.mean(y))**2))
return r12
# Approximate the posterior predictive distribution for T,
# by drawing a replica dataset for each sample (m,b) and computing its T:
T = np.zeros(len(chain))
for k,(m,b) in enumerate(chain):
yrep = b + m*x + np.random.randn(len(x)) * sigmay
T[k] = test_statistic(x,yrep,sigmay)
# Compare with the test statistic of the data, on a plot:
Td = test_statistic(x, y, sigmay)
plt.hist(T, 100, histtype='step', color='blue', lw=2, range=(0.0,np.percentile(T,99.0)))
plt.axvline(Td, color='black', linestyle='--', lw=2)
plt.xlabel('Test statistic')
plt.ylabel('Posterior predictive distribution')
# What is Pr(T>T(d)|d)?
greater = (T > Td)
P = 100*len(T[greater])/(1.0*len(T))
print("Pr(T>T(d)|d) = ",P,"%")
# Discrepancy: functions of the data AND parameters.
# 1) Reduced chisq for the model:
def discrepancy(x,y,sigmay,b,m):
return np.sum((y - m*x - b)**2.0/sigmay**2.0)/(len(y)-2)
# Approximate the posterior predictive distribution for T,
# by drawing a replica dataset for each sample (m,b) and computing its T,
# AND ALSO its Td (which now depends on the parameters, too):
T = np.zeros(len(chain))
Td = np.zeros(len(chain))
for k,(m,b) in enumerate(chain):
yrep = b + m*x + np.random.randn(len(x)) * sigmay
T[k] = discrepancy(x,yrep,sigmay,b,m)
Td[k] = discrepancy(x,y,sigmay,b,m)
# Compare T with Td, on a scatter plot - how often is T>Td?
plt.scatter(Td, T, color='blue',alpha=0.1)
plt.plot([0.0, 100.0], [0.0, 100.], color='k', linestyle='--', linewidth=2)
plt.xlabel('Observed discrepancy $T(d,\\theta)$')
plt.ylabel('Replicated discrepancy $T(d^{\\rm rep},\\theta)$')
plt.ylim([0.0,np.percentile(Td,99.0)])
plt.xlim([0.0,np.percentile(Td,99.0)])
# Histogram of differences:
diff = T-Td
plt.hist(diff, 100, histtype='step', color='blue', lw=2, range=(np.percentile(diff,1.0),np.percentile(diff,99.0)))
plt.axvline(0.0, color='black', linestyle='--', lw=2)
plt.xlabel('Difference $T(d^{\\rm rep},\\theta) - T(d,\\theta)$')
plt.ylabel('Posterior predictive distribution')
# What is Pr(T>T(d)|d)?
greater = (T > Td)
Pline = 100*len(T[greater])/(1.0*len(T))
print("Pr(T>T(d)|d) = ",Pline,"%")
def quadratic_log_likelihood(theta, x, y, sigmay):
'''
Returns the log-likelihood of drawing data values y at
known values x given Gaussian measurement noise with standard
deviation with known sigmay, where the "true" y values are
y_t = m*x + b + q**2
x: list of x coordinates
y: list of y coordinates
sigmay: list of y uncertainties
m: scalar slope
b: scalar line intercept
q: quadratic term coefficient
where theta = (m, b, q)
Returns: scalar log likelihood
'''
m, b, q = theta
return (np.sum(np.log(1./(np.sqrt(2.*np.pi) * sigmay))) +
np.sum(-0.5 * (y - (m*x + b + q*x**2))**2 / sigmay**2))
def quadratic_log_prior(theta, theta_limits):
m, b, q = theta
mlimits, blimits, qpars = theta_limits
# m and b:
log_mb_prior = straight_line_log_prior(np.array([m,b]), np.array([mlimits, blimits]))
# q:
log_q_prior = np.log(1./(np.sqrt(2.*np.pi) * qpars[1])) - \
0.5 * (q - qpars[0])**2 / qpars[1]**2
return log_mb_prior + log_q_prior
def quadratic_log_posterior(theta, x, y, sigmay, theta_limits):
return (quadratic_log_likelihood(theta, x, y, sigmay) +
quadratic_log_prior(theta, theta_limits))
# Define uniform prior limits, enforcing positivity in m and b:
mlimits = [0.0, 2.0]
blimits = [0.0, 200.0]
# Define Gaussian prior centered on zero for q:
qpars = [0.0,0.003]
# Initialize m, b at the center of prior:
m = 0.5*(mlimits[0]+mlimits[1])
b = 0.5*(blimits[0]+blimits[1])
q = qpars[0]
# Arrays to pass to the sampler:
qtheta = np.array([m,b,q])
qtheta_limits = (mlimits, blimits, qpars)
# Step sizes, small fractions of the prior width:
mstep = 0.01*(mlimits[1]-mlimits[0])
bstep = 0.04*(blimits[1]-blimits[0])
qstep = 0.02*qpars[1]
stepsize = np.array([mstep,bstep,qstep])
# How many steps?
nsteps = 10000
print('Running Metropolis Sampler for', nsteps, 'steps...')
qchain, log_probs, acceptance_rate = metropolis(
quadratic_log_posterior, qtheta, qtheta_limits, stepsize, nsteps=nsteps
)
print('Acceptance fraction:', acceptance_rate)
# Pull m, b and q arrays out of the Markov chain and plot them:
mm = [m for m,b,q in qchain]
bb = [b for m,b,q in qchain]
qq = [q for m,b,q in qchain]
# Traces, for convergence inspection:
plt.figure(figsize=(8,5))
plt.subplot(3,1,1)
plt.plot(mm, 'k-')
plt.ylim(mlimits)
plt.ylabel('Slope m')
plt.subplot(3,1,2)
plt.plot(bb, 'k-')
plt.ylim(blimits)
plt.ylabel('Intercept b')
plt.subplot(3,1,3)
plt.plot(qq, 'k-')
plt.ylim([qpars[0]-3*qpars[1],qpars[0]+3*qpars[1]])
plt.ylabel('Quadratic coefficient q')
corner.corner(qchain, labels=['m','b','q'], range=[mlimits,blimits,(qpars[0]-3*qpars[1],qpars[0]+3*qpars[1])],quantiles=[0.16,0.5,0.84],
show_titles=True, title_args={"fontsize": 12},
plot_datapoints=True, fill_contours=True, levels=[0.68, 0.95],
color='green', bins=80, smooth=1.0);
# Posterior visual check, in data space:
X = np.linspace(xlimits[0],xlimits[1],100)
for i in (np.random.rand(100)*len(qchain)).astype(int):
m,b,q = qchain[i]
plt.plot(X, b + X*m + q*X**2, 'g-', alpha=0.1)
plot_yerr(x, y, sigmay)
# Discrepancy: functions of the data AND parameters.
# 1) Reduced chisq for the model:
def discrepancy(x,y,sigmay,m,b,q):
return np.sum((y - m*x - b - q*x**2)**2.0/sigmay**2.0)/(len(y)-3)
# Approximate the posterior predictive distribution for T,
# by drawing a replica dataset for each sample (m,b) and computing its T,
# AND ALSO its Td:
T = np.zeros(len(qchain))
Td = np.zeros(len(qchain))
for k,(m,b,q) in enumerate(qchain):
yp = b + m*x + q*x**2 + sigmay*np.random.randn(len(x))
T[k] = discrepancy(x,yp,sigmay,m,b,q)
Td[k] = discrepancy(x,y,sigmay,m,b,q)
# Histogram of differences:
diff = T - Td
plt.hist(diff, 100, histtype='step', color='green', lw=2, range=(np.percentile(diff,1.0),np.percentile(diff,99.0)))
plt.axvline(0.0, color='black', linestyle='--', lw=2)
plt.xlabel('Difference $T(d^{\\rm rep},\\theta) - T(d,\\theta)$')
plt.ylabel('Posterior predictive distribution')
# What is Pr(T>T(d)|d)?
greater = (T > Td)
Pquad = 100*len(T[greater])/(1.0*len(T))
print("Pr(T>T(d)|d,quadratic) = ",Pquad,"%, cf. Pr(T>T(d)|d,straightline) = ",Pline,"%")
# Draw a large number of prior samples and calculate the log likelihood for each one:
N = 50000
# Set the priors:
mlimits = [0.0, 2.0]
blimits = [0.0, 200.0]
qpars = [0.0,0.003]
# Sample from the prior:
mm = np.random.uniform(mlimits[0],mlimits[1], size=N)
bb = np.random.uniform(blimits[0],blimits[1], size=N)
qq = qpars[0] + qpars[1]*np.random.randn(N)
# We'll store the posterior samples as a "chain" again
schain = []
log_likelihood_straight_line = np.zeros(N)
log_likelihood_quadratic = np.zeros(N)
for i in range(N):
theta = np.array([mm[i], bb[i]])
log_likelihood_straight_line[i] = straight_line_log_likelihood(theta, x, y, sigmay)
qtheta = np.array([mm[i], bb[i], qq[i]])
log_likelihood_quadratic[i] = quadratic_log_likelihood(qtheta, x, y, sigmay)
schain.append((mm[i],bb[i],qq[i]))
# Unnormalized likelihoods for plotting:
unnormalized_likelihood_straight_line = np.exp(log_likelihood_straight_line - log_likelihood_straight_line.max())
unnormalized_likelihood_quadratic = np.exp(log_likelihood_quadratic - log_likelihood_quadratic.max())
corner.corner(schain, labels=['m','b','q'], range=[mlimits,blimits,(qpars[0]-3*qpars[1],qpars[0]+3*qpars[1])],quantiles=[0.16,0.5,0.84],
weights=unnormalized_likelihood_straight_line,
show_titles=True, title_args={"fontsize": 12},
plot_datapoints=False, fill_contours=True, levels=[0.68, 0.95],
color='blue', bins=80, smooth=1.0);
corner.corner(schain, labels=['m','b','q'], range=[mlimits,blimits,(qpars[0]-3*qpars[1],qpars[0]+3*qpars[1])],quantiles=[0.16,0.5,0.84],
weights=unnormalized_likelihood_quadratic,
show_titles=True, title_args={"fontsize": 12},
plot_datapoints=False, fill_contours=True, levels=[0.68, 0.95],
color='green', bins=80, smooth=1.0);
def logaverage(x):
mx = x.max()
return np.log(np.sum(np.exp(x - mx))) + mx - np.log(len(x))
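# Illustrative sanity check of the log-sum-exp trick used above:
# log( mean(exp(x)) ) = max(x) + log( mean(exp(x - max(x))) ),
# so subtracting the maximum first avoids overflow when the log-likelihoods are large.
check_x = np.array([1., 2., 3.])
print('logaverage check:', logaverage(check_x), 'direct:', np.log(np.mean(np.exp(check_x))))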
log_evidence_straight_line = logaverage(log_likelihood_straight_line)
log_evidence_quadratic = logaverage(log_likelihood_quadratic)
print('log Evidence for Straight Line Model:', log_evidence_straight_line)
print('log Evidence for Quadratic Model:', log_evidence_quadratic)
print('Evidence ratio in favour of the Quadratic Model:', np.int(np.exp(log_evidence_quadratic - log_evidence_straight_line)),"to 1")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Data
Step2: Characterizing the posterior PDF
Step3: And now to draw some samples
Step4: Looks reasonable
Step5: It looks like we made a nice, precise measurement!
Step6: Posterior Predictive Model Checking
Step7: If our model is true (and we're just uncertain about its parameters, given the data), we can compute the probability of getting a $T$ less than that observed.
Step8: The conclusion drawn from the discrepancy is more interesting, in this case. All our $\theta = (m,b)$ samples are plausible, so replica datasets generated by them should also be plausible. The straight line defined by each $(m,b)$ should go through the real data points as readily (on average) as it does its replica dataset.
Step9: All parameters are again precisely constrained.
Step10: How do the two models compare? Which one matches the data better?
Step11: Now that we have the log likelihood for each sample, let's check that we did actually sample the posterior well. Here are the corner plots (note that for plotting, the weights don't need to be correctly normalized - and also that we do not want to plot the points as well as the contours, since the points are prior samples not posterior ones!)
Step12: Question
Step13: Now let's compute the evidence - we'll need a special function that stably calculates the average $x$ given an array of $\log x$...
|
14,884 | <ASSISTANT_TASK:>
Python Code:
import plotly.graph_objs as go
from plotly.offline import init_notebook_mode,iplot
init_notebook_mode(connected=True)
import pandas as pd
df = pd.read_csv('./2014_World_Power_Consumption')
df.head()
data = {'type':'choropleth', 'locations':df['Country'],'locationmode':'country names',
'z':df['Power Consumption KWH'], 'text':df['Text']}
layout={'title':'World power consumption',
'geo':{'showframe':True, 'projection':{'type':'Mercator'}}}
choromap = go.Figure(data = [data],layout = layout)
iplot(choromap,validate=False)
df2 = pd.read_csv('./2012_Election_Data')
df2.head()
data = {'type':'choropleth', 'locations':df2['State Abv'],'locationmode':'USA-states',
'z':df2['% Non-citizen'], 'text':df2['% Non-citizen']}
layout={'geo':{'scope':'usa'}}
choromap = go.Figure(data = [data],layout = layout)
iplot(choromap,validate=False)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import pandas and read the csv file
Step2: Referencing the lecture notes, create a Choropleth Plot of the Power Consumption for Countries using the data and layout dictionary.
Step3: USA Choropleth
Step4: Check the head of the DataFrame.
Step5: Now create a plot that displays the Voting-Age Population (VAP) per state. If you later want to play around with other columns, make sure you consider their data type. VAP has already been transformed to a float for you.
|
14,885 | <ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.2,<2.3"
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_constraint('semidetached', 'primary')
b['requiv@constraint@primary']
b['requiv_max@constraint@primary']
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: Semi-Detached Systems
Step3: We can view the constraint on requiv by accessing the constraint
Step4: Now whenever any of the relevant parameters (q, ecc, syncpar, sma) are changed, the value of requiv will change to match the critical value as defined by requiv_max.
|
14,886 | <ASSISTANT_TASK:>
Python Code:
% matplotlib inline
import pandas as pd
import glob
import matplotlib.pyplot as plt
GRLM = "345_GRLM10.txt"; print GRLM
df_grlm = pd.read_csv(GRLM, skiprows=43, delim_whitespace=True, names="mission,cycle,date,hour,minute,lake_height,error,mean(decibels),IonoCorrection,TropCorrection".split(","), engine='python', index_col=False)
df_grlm.head(5)
df_grlm = pd.read_csv(GRLM, skiprows=43, delim_whitespace=True, names="mission,cycle,date,hour,minute,lake_height,error,mean(decibels),IonoCorrection,TropCorrection".split(","), engine='python', index_col=False)
def get_year(date): return int(str(date)[0:4])
def get_month(date): return int(str(date)[4:6])
def get_day(date): return int(str(date)[6:])
df_grlm['year'] = df_grlm['date'].apply(get_year)
df_grlm['month'] = df_grlm['date'].apply(get_month)
df_grlm['day'] = df_grlm['date'].apply(get_day)
df_grlm = df_grlm.where(df_grlm.minute < 61 ) # remove lines that do not have time
df_grlm = df_grlm.where(df_grlm.lake_height < 900 ) # remove entries that do not have lake-height
df_grlm.lake_height.plot(); plt.title("Actual data without resampling"); plt.ylabel("Variation (m)")
df_grlm.lake_height.interpolate().plot(); plt.title("Interpolated Actual data without resampling"); plt.ylabel("Variation (m)")
df = df_grlm
df[["year", "month", "day", "hour", "minute"]] = df[["year", "month", "day", "hour", "minute"]].fillna(0).astype(int)
df['Time'] = df.year.astype(str).str.cat(df.month.astype(str).astype(str), sep='-').str.cat(df.day.astype(str), sep='-')\
.str.cat(df.hour.astype(str).astype(str), sep='-').str.cat(df.minute.astype(str).astype(str), sep='-')
df = df.where(df.year>10) # to ger rid of all the nan values
df.index = pd.to_datetime(pd.Series(df["Time"]), format="%Y-%m-%d-%H-%M");
print df.index[0:3], df.index[-3:]
df["lake_height"].resample("M").mean().plot(); plt.title("Mean Monthly Altimetry"); plt.ylabel("Variation (m)")
df["lake_height"].resample("A").mean().plot(); plt.title("Mean Annual Altimetry"); plt.ylabel("Variation (m)")
df_modis = pd.read_csv('MODIS_t.txt', names=["Area"], engine='python', index_col=False)
df_time = pd.read_csv('DV.txt', sep = "\t", names=["Year", "Month", "Day", "", "", ""], engine='python', index_col=False)
df_time['Time'] = df_time.Year.astype(str).str.cat(df_time.Month.astype(str).astype(str), sep='-').str.cat(df_time.Day.astype(str), sep='-')
df_time = df_time.where(df_time.Year>10) # to ger rid of all the nan values
df_modis.index = pd.to_datetime(pd.Series(df_time["Time"]), format="%Y-%m-%d")#df.index[0:3]
df_modis.plot(); plt.title("MODIS data - Surface Area"); plt.ylabel("Surface Area (sq.m.?)")
df_glrm_subset = df["lake_height"].resample("D").mean().interpolate()
df_glrm_subset = df_glrm_subset[(df_glrm_subset.index > '2008-07-22') & (df_glrm_subset.index <= '2015-08-13')]
df_glrm_subset.plot(); plt.legend(); plt.title("Subset of Altimetry"); plt.ylabel("Variation (m)")
df_glrm_subset.index
df_modis_daily = df_modis["Area"].resample("D").mean().interpolate()
df_modis_subset = df_modis_daily[(df_modis_daily.index > '2008-07-22') & (df_modis_daily.index <= '2015-08-13')]
df_modis_subset.plot()
df_modis_subset.index
# QA: Create a time series of time alone, to check the number of data points that we should have for days.
#Note the variable called length
print pd.date_range('22/07/2008', periods=len(df_modis_subset), freq='D')
# Check if the two vectors are of the same length
print len(df_glrm_subset.tolist()), len(df_modis_subset.tolist())
import numpy
cor = numpy.corrcoef(df_glrm_subset.resample("W").mean().interpolate().tolist(),
df_modis_subset.resample("W").mean().interpolate().tolist())
print "correlation coefficient is: " , cor[0][1]
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: GRLM Altimetry data from July 22 2008 to September 3, 2016
Step2: Interpolate the missing data points
Step3: Add time information to the dataframe
Step4: Resample the data to get monthly and annual variation in lake height
Step5: MODIS data Lake Surface Area (Feb 18, 2000 to Aug 13, 2015)
Step6: Create subsets of both vectors (altimetry and surface area) for the overlapping period
Step7: Compute correlation coefficient
|
14,887 | <ASSISTANT_TASK:>
Python Code:
a = 10
b = 20
c = "Hello"
print a, b, c
list_items = ["milk", "cereal", "banana", 22.5, [1,2,3]] ## A list can contain another list and items of different types
print list_items
print "3rd item in the list: ", list_items[2] # Zero based index starts from 0 so 3rd item will have index 2
set_items = set([1,2,3, 1])
print set_items
print "Is 1 in set_items: ", 1 in set_items
print "Is 10 in set_items: ", 10 in set_items
item_details = {
"milk": {
"brand": "Amul",
"quantity": 2.5,
"cost": 10
},
"chocolate": {
"brand": "Cadbury",
"quantity": 1,
"cost": 5
},
}
print item_details
print "What are is the brand of milk: ", item_details["milk"]["brand"]
print "What are is the cost of chocolate: ", item_details["chocolate"]["cost"]
def get_items_from_file(filename):
data = []
with open(filename) as fp:
for line in fp:
line = line.strip().split(" ")
data.append(line)
return data
print "Data in file data/temp1.txt"
print get_items_from_file("../data/temp1.txt")
print "Data in file data/temp2.txt"
print get_items_from_file("../data/temp2.txt")
from scipy.io import arff
data, meta = arff.loadarff("../data/iris.arff")
data.shape, meta
data[0]
import pandas as pd
df_iris = pd.DataFrame(data, columns=meta.names())
df_iris.head()
print "The shape of iris data is: ", df_iris.shape
print "Show how many instances are of each class: "
df_iris["class"].value_counts()
df_iris["sepallength"].hist(bins=10)
print "Show data containing with petalwidth > 2.0"
df_iris[df_iris["petalwidth"] > 2.0]
df = pd.read_csv("../data/titanic.csv")
df.shape
df.head()
# We need the line below to show plots directly in the notebook.
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("ticks")
sns.set_context("paper")
colors = {
"Iris-setosa": "red",
"Iris-versicolor": "green",
"Iris-virginica": "blue",
}
plt.scatter(df_iris.petallength, df_iris.petalwidth, c=map(lambda x: colors[x], df_iris["class"]))
plt.xlabel("petallength")
plt.ylabel("petalwidth")
sns.lmplot(x="petallength", y="petalwidth", hue="class", data=df_iris, fit_reg=False)
sns.pairplot(df_iris, hue="class")
sns.countplot(x="sex", data=df)
sns.countplot(x="class", data=df)
sns.countplot(x="embark_town", data=df)
sns.countplot(x="alive", data=df)
sns.countplot(x="alone", data=df)
sns.lmplot(x="age", y="survived", hue="sex", data=df, fit_reg=True, logistic=True)
sns.barplot(x="sex", y="survived", hue="embark_town", data=df)
sns.barplot(x="sex", y="survived", hue="class", data=df)
sns.barplot(x="sex", y="survived", hue=pd.cut(df.age, bins=[0,18,30,100]), data=df)
sns.barplot(x="sex", y="survived", hue="alone", data=df)
sns.barplot(x="sex", y="survived", hue=pd.cut(df.sibsp, bins=[0,1,2,3,10]), data=df)
sns.barplot(x="sex", y="survived", hue=pd.cut(df.parch, bins=[0,1,2,3,10]), data=df)
sns.barplot(x="sex", y="age", hue=pd.cut(df.parch, bins=[0,1,2,3,10]), data=df)
sns.barplot(x="sex", y="age", hue=pd.cut(df.sibsp, bins=[0,1,2,3,10]), data=df)
sns.barplot(x="sex", y="age", hue="embark_town", data=df)
sns.barplot(x="sex", y="age", hue="class", data=df)
sns.barplot(x="class", y="petalwidth", hue=pd.cut(df_iris.petallength, bins=[0, 2.5, 4.5, 6.5, 10]), data=df_iris)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Lists
Step2: Sets
Step3: Dictionaries
Step4: Functions
Step5: Loading Data
Step6: Pandas
Step7: Filtering data
Step8: Titanic data
Step9: Plotting data
Step10: Question
|
14,888 | <ASSISTANT_TASK:>
Python Code:
# initialize your CORDEX submission form template
from dkrz_forms import form_handler
from dkrz_forms import checks
my_email = "..." # example: sf.email = "Mr.Mitty@yahoo.com"
my_first_name = "..." # example: sf.first_name = "Harold"
my_last_name = "..." # example: sf.last_name = "Mitty"
my_keyword = "..." # example: sf.keyword = "mymodel_myrunid"
sf = form_handler.init_form("CORDEX",my_first_name,my_last_name,my_email,my_keyword)
sf.submission_type = "..." # example: sf.submission_type = "initial_version"
sf.institution = "..." # example: sf.institution = "Alfred Wegener Institute"
sf.institute_id = "..." # example: sf.institute_id = "AWI"
sf.model_id = "..." # example: sf.model_id = "AWI-HIRHAM5"
sf.experiment_id = "..." # example: sf.experiment_id = "evaluation"
# ["value_a","value_b"] in case of multiple experiments
sf.time_period = "..." # example: sf.time_period = "197901-201412"
# ["time_period_a","time_period_b"] in case of multiple values
sf.example_file_name = "..." # example: sf.example_file_name = "tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc"
# Please run this cell as it is to check your example file name structure
# to_do: implement submission_form_check_file function - output result (attributes + check_result)
form_handler.cordex_file_info(sf,sf.example_file_name)
sf.grid_mapping_name = "..." # example: sf.grid_mapping_name = "rotated_latitude_longitude"
sf.grid_as_specified_if_rotated_pole = "..." # example: sf.grid_as_specified_if_rotated_pole = "yes"
sf.data_qc_status = "..." # example: sf.data_qc_status = "QC2-CORDEX"
sf.data_qc_comment = "..." # any comment of quality status of the files
sf.terms_of_use = "..." # example: sf.terms_of_use = "unrestricted"
sf.directory_structure = "..." # example: sf.directory_structure = "compliant"
sf.data_path = "..." # example: sf.data_path = "mistral.dkrz.de:/mnt/lustre01/work/bm0021/k204016/CORDEX/archive/"
sf.data_information = "..." # ...any info where data can be accessed and transfered to the data center ... "
sf.exclude_variables_list = "..." # example: sf.exclude_variables_list=["bnds", "vertices"]
sf.uniqueness_of_tracking_id = "..." # example: sf.uniqueness_of_tracking_id = "yes"
sf.variable_list_day = [
"clh","clivi","cll","clm","clt","clwvi",
"evspsbl","evspsblpot",
"hfls","hfss","hurs","huss","hus850",
"mrfso","mrro","mrros","mrso",
"pr","prc","prhmax","prsn","prw","ps","psl",
"rlds","rlus","rlut","rsds","rsdt","rsus","rsut",
"sfcWind","sfcWindmax","sic","snc","snd","snm","snw","sund",
"tas","tasmax","tasmin","tauu","tauv","ta200","ta500","ta850","ts",
"uas","ua200","ua500","ua850",
"vas","va200","va500","va850","wsgsmax",
"zg200","zg500","zmla"
]
sf.variable_list_mon = [
"clt",
"evspsbl",
"hfls","hfss","hurs","huss","hus850",
"mrfso","mrro","mrros","mrso",
"pr","psl",
"rlds","rlus","rlut","rsds","rsdt","rsus","rsut",
"sfcWind","sfcWindmax","sic","snc","snd","snm","snw","sund",
"tas","tasmax","tasmin","ta200",
"ta500","ta850",
"uas","ua200","ua500","ua850",
"vas","va200","va500","va850",
"zg200","zg500"
]
sf.variable_list_sem = [
"clt",
"evspsbl",
"hfls","hfss","hurs","huss","hus850",
"mrfso","mrro","mrros","mrso",
"pr","psl",
"rlds","rlus","rlut","rsds","rsdt","rsus","rsut",
"sfcWind","sfcWindmax","sic","snc","snd","snm","snw","sund",
"tas","tasmax","tasmin","ta200","ta500","ta850",
"uas","ua200","ua500","ua850",
"vas","va200","va500","va850",
"zg200","zg500"
]
sf.variable_list_fx = [
"areacella",
"mrsofc",
"orog",
"rootd",
"sftgif","sftlf"
]
# simple consistency check report for your submission form
res = form_handler.check_submission(sf)
sf.sub['status_flag_validity'] = res['valid_submission']
form_handler.DictTable(res)
form_handler.form_save(sf)
#evaluate this cell if you want a reference to the saved form emailed to you
# (only available if you access this form via the DKRZ form hosting service)
form_handler.email_form_info()
# evaluate this cell if you want a reference (provided by email)
# (only available if you access this form via the DKRZ hosting service)
form_handler.email_form_info(sf)
form_handler.email_form_info(sf)
form_handler.form_submission(sf)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: please provide information on the contact person for this CORDEX data submission request
Step2: Type of submission
Step3: Requested general information
Step4: institute_id
Step5: model_id
Step6: experiment_id and time_period
Step7: Example file name
Step8: information on the grid_mapping
Step9: Does the grid configuration exactly follow the specifications in ADD2 (Table 1)
Step10: Please provide information on quality check performed on the data you plan to submit
Step11: Terms of use
Step12: Information on directory structure and data access path
Step13: Give the path where the data reside, for example
Step14: Exclude variable list
Step15: Uniqueness of tracking_id and creation_date
Step16: Variable list
Step17: Check your submission before submission
Step18: Save your form
Step19: officially submit your form
|
14,889 | <ASSISTANT_TASK:>
Python Code:
import numpy as np # importing this way allows us to refer to numpy as np
mylist = [1., 2., 3., 4.]
mynparray = np.array(mylist)
mynparray
one_vector = np.ones(4)
print one_vector # using print removes the array() portion
one2Darray = np.ones((2, 4)) # an 2D array with 2 "rows" and 4 "columns"
print one2Darray
zero_vector = np.zeros(4)
print zero_vector
empty_vector = np.empty(5)
print empty_vector
mynparray[2]
my_matrix = np.array([[1, 2, 3], [4, 5, 6]])
print my_matrix
print my_matrix[1,2]
print my_matrix[0:2, 2] # recall 0:2 = [0, 1]
print my_matrix[0, 0:3]
fib_indices = np.array([1, 1, 2, 3])
random_vector = np.random.random(10) # 10 random numbers between 0 and 1
print random_vector
print random_vector[fib_indices]
my_vector = np.array([1, 2, 3, 4])
select_index = np.array([True, False, True, False])
print my_vector[select_index]
select_cols = np.array([True, False, True]) # 1st and 3rd column
select_rows = np.array([False, True]) # 2nd row
print my_matrix[select_rows, :] # just 2nd row but all columns
print my_matrix[:, select_cols] # all rows and just the 1st and 3rd column
print my_matrix[select_rows, select_cols]
my_array = np.array([1., 2., 3., 4.])
print my_array*my_array
print my_array**2
print my_array - np.ones(4)
print my_array + np.ones(4)
print my_array / 3
print my_array / np.array([2., 3., 4., 5.]) # = [1.0/2.0, 2.0/3.0, 3.0/4.0, 4.0/5.0]
print np.sum(my_array)
print np.average(my_array)
print np.sum(my_array)/len(my_array)
array1 = np.array([1., 2., 3., 4.])
array2 = np.array([2., 3., 4., 5.])
print np.dot(array1, array2)
print np.sum(array1*array2)
array1_mag = np.sqrt(np.dot(array1, array1))
print array1_mag
print np.sqrt(np.sum(array1*array1))
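# For reference, numpy also has a built-in that computes the same Euclidean length:
print np.linalg.norm(array1)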
my_features = np.array([[1., 2.], [3., 4.], [5., 6.], [7., 8.]])
print my_features
my_weights = np.array([0.4, 0.5])
print my_weights
my_predictions = np.dot(my_features, my_weights) # note that the weights are on the right
print my_predictions # which has 4 elements since my_features has 4 rows
my_matrix = my_features
my_array = np.array([0.3, 0.4, 0.5, 0.6])
print np.dot(my_array, my_matrix) # which has 2 elements because my_matrix has 2 columns
matrix_1 = np.array([[1., 2., 3.],[4., 5., 6.]])
print matrix_1
matrix_2 = np.array([[1., 2.], [3., 4.], [5., 6.]])
print matrix_2
print np.dot(matrix_1, matrix_2)
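# Illustrative shape check: a (2, 3) matrix times a (3, 2) matrix gives a (2, 2) result.
print np.dot(matrix_1, matrix_2).shape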
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating Numpy Arrays
Step2: You can initialize an array (of any dimension) of all ones or all zeroes with the ones() and zeros() functions
Step3: You can also initialize an empty array which will be filled with values. This is the fastest way to initialize a fixed-size numpy array however you must ensure that you replace all of the values.
Step4: Accessing array elements
Step5: 2D arrays are accessed similarly by referring to the row and column index separated by a comma
Step6: Sequences of indices can be accessed using ':' (slice notation)
Step7: You can also pass a list of indices.
Step8: You can also use true/false values to select values
Step9: For 2D arrays you can select specific columns and specific rows. Passing ':' selects all rows (or all columns)
Step10: Operations on Arrays
Step11: You can compute the sum with np.sum() and the average with np.average()
Step12: The dot product
Step13: Recall that the Euclidean length (or magnitude) of a vector is the squareroot of the sum of the squares of the components. This is just the squareroot of the dot product of the vector with itself
Step14: We can also use the dot product when we have a 2D array (or matrix). When you have an vector with the same number of elements as the matrix (2D array) has columns you can right-multiply the matrix by the vector to get another vector with the same number of elements as the matrix has rows. For example this is how you compute the predicted values given a matrix of features and an array of weights.
Step15: Similarly if you have a vector with the same number of elements as the matrix has rows you can left multiply them.
Step16: Multiplying Matrices
|
14,890 | <ASSISTANT_TASK:>
Python Code:
cadena_caracteres = "Hola mundo"
print dir(cadena_caracteres)
print 'Hola mundo'
print 'Pero el print también imprime un Enter al terminar la línea'
print 'Pero al imprimir con la coma al final',
print 'cambia el enter por un espacio'
print 'También puedo escribir lo mismo' ' en dos partes'
print 'Lo que puedo usar ' \
'cuando un string es muy largo' \
'si le agrego una contrabarra'
print 'Entonces tengo que ponerlo después de la coma:', 5
print 'Al que también le agrega la coma para separarlo'
print 'También puedo ponerlo en el medio:\nHoy es', 29, 'de Octubre'
print str.format.__doc__
print 'El nombre del jugador número {0} es {1}'.format(10, 'Lionel Messi')
print 'El nombre del jugador número {} es {}'.format(10, 'Lionel Messi')
print '{0}{1}{0}'.format('abra', 'cad')
print 'La nota del alumno {padron} - {nombre} es un {nota}.'. \
format(padron=123, nombre='Carlos Sanchez', nota=8)
alumno = {
'padron': 123,
'nombre': 'Carlos Sanchez',
'nota': 8
}
print 'La nota del alumno {padron} - {nombre} es un {nota}.'.\
format(**alumno)
alumno = {
'padron': 123,
'nombre': 'Carlos Sanchez',
'tps': [8, 9]
}
print 'La nota de los tps de {nombre} son {tps[0]} y {tps[1]}.'.\
format(**alumno)
print 'Imprimo un texto alineado a la |{:<20}| de 20 posiciones'.format(
'izquierda')
print 'Imprimo un texto alineado a la |{:>20}| de 20 posiciones'.format(
'derecha')
print 'Imprimo un texto |{:^20}| de 20 posiciones'.format('centrado')
print 'Relleno |{:#<20}| con #'.format('izquierda')
print 'Relleno |{:#>20}| con #'.format('derecha')
print 'Relleno |{:#^20}| con #'.format('centrado')
cadena_caracteres = 'Hola mundo'
print '"{0}" cambia a "{1}" con title'.format(cadena_caracteres, cadena_caracteres.title())
print '"{0}" cambia a "{1}" con lower'.format(cadena_caracteres, cadena_caracteres.lower())
print '"{0}" cambia a "{1} con upper"'.format(cadena_caracteres, cadena_caracteres.upper())
print '"{0}" cambia a "{1}" con capitalize'.format(cadena_caracteres, cadena_caracteres.capitalize())
print '"{0}" cambia a "{1}" cuando reemplazamos las o por 0'.format(cadena_caracteres, cadena_caracteres.replace('o', '0'))
x = 'mi string'
y = x.replace('i', 'AA')
print x, y
print id(x)
x += 'Hola mundo'
print id(x)
print "Hola mundo".split()
print "Hola mundo".split('o')
print "Hola mundo".split('mu')
print ''.join(['Hola', 'mundo'])
print ' '.join(['Hola', 'mundo'])
var = '#separador#'.join(['Hola', 'mundo'])
print var
padron, nombre, nota = '12321,nom bekr,4'.split(',')
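# Illustrative example: split a string on whitespace and join it back with another separator.
print '-'.join('Hola mundo'.split())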
arch = open("ejemplo.txt")
cadena = arch.read(15)
print "# Imprimo los primeros 15 caracteres del archivo. Tiene que ser 'Python was crea'"
print cadena
print "# Leo otros 7 caracteres y dejo el cursor del archivo en la siguiente posición. Tiene que ser 'ted in '"
cadena = arch.read(7)
print cadena
print "# Ahora leo el resto del archivo."
cadena = arch.read()
print cadena
print '# Cierro el archivo'
arch.close()
arch2 = open("ejemplo2.txt", "w")
arch2.read()
# And what if we try with append?
arch3 = open("ejemplo1.txt", "a")
arch3.read()
arch = open("ejemplo.txt")
linea = arch.readline() # Note that it also prints the newline (\n)
print linea
linea = arch.readline(7) # The second line is 'Mathematisch Centrum (CWI, see http://www.cwi.nl) in the Netherlands'
print linea
arch.close()
arch = open("ejemplo.txt")
lineas = arch.readlines()
print lineas
arch.close()
arch = open("ejemplo.txt")
for linea in arch:
print len(linea)
arch.close()
arch2 = open("ejemplo2.txt", "w")
arch2.write("Es la primer cadena")
arch2.write("Seguida de la segunda con un fin de linea\n")
arch2.writelines(["1. Primero de la lista sin fin de línea. ", "2. Segundo string con fin de línea.\n", "3. Tercero con\\n.\n", "4. y último."])
arch2.flush()
arch2.close()
arch2 = open("ejemplo2.txt", "r+a")
strfile = arch2.read()
print strfile
arch2.write("Esto lo estoy agregando.\n.")
arch2.writelines("Y estas dos líneas también con un \\n al final\n de cada una.\n")
arch2.flush()
arch2 = open("ejemplo2.txt", "r") # El open hace que me mueva a la primer posición del archivo.
print arch2.read()
arch2.close()
arch = open("ejemplo.txt")
arch.seek(30) # Go to position 30 in the file
print arch.read(7) # Should print 'y 1990s'
arch.seek(-5,1) # Move 5 positions backwards from the current position.
print arch.read(7) # Should print '1990s b'
arch.seek(-12,2) # Move to position 12, counting from the end of the file.
print arch.read(10) # Should print 'compatible'
arch.close()
arch = open("ejemplo.txt")
arch.seek(30)
print arch.tell() # Should print 30
arch.seek(-5,1) # Go back 5 positions
print arch.tell() # Should print 25
arch.seek(-12,2) # Go to 12 positions before the end of the file
print arch.tell() # Should print 1132
print arch.read(10) # Read 10 characters
print arch.tell() # Should print 1142
arch = open("ejemplo.txt")
# The file ejemplo.txt has 22 lines, so
# if I want to print it in full, prefixing each line
# with its line number and its character count,
# I can do:
for x in range(1, 25):
linea = arch.readline()
print '{:2}[{:02}] - {}'.format(x, len(linea), linea)
arch.close()
arch = open("ejemplo.txt")
# If we don't know how many lines the file
# we want to iterate over has, we can do:
linea = arch.readline()
x = 0
while linea: # Es decir, mientras me devuelva algo
# distinto al sting vacío
x += 1
print '{:2}[{:02}] - {}'.format(x, len(linea), linea)
linea = arch.readline()
arch.close()
arch = open("ejemplo.txt")
# If we don't know how many lines the file
# we want to iterate over has, we can do:
x = 0
for linea in arch:
x += 1
print '{:2}[{:02}] - {}'.format(x, len(linea), linea)
arch.close()
arch = open("ejemplo.txt")
# If we don't know how many lines the file
# we want to iterate over has, we can do:
# Using enumerate and starting at 1
for x, linea in enumerate(arch, 1):
print '{:2}[{:02}] - {}'.format(x, len(linea), linea)
arch.close()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: String formating
Step2: Pero si no queremos imprimir ese último Enter lo que tenemos que hacer es poner una coma al final de la línea
Step3: ¿Y si lo que quiero es imprimir un número pegado al string?
Step4: String formating
Step5: Format lo que hace es reemplazar las llaves con los parámetros que le pasen
Step6: Aunque en realidad los números no son obligatorios
Step7: Pero la ventaja de usar los números es que podemos imprimir ese parámetro varias veces, y no necesariamente en el órden que figura
Step8: Incluso, se pueden usar parámetros nombrados
Step9: Incluso, si en lugar de pasarle cada uno de los parámetros le pasamos un diccionario usando el operador **
Step10: Incluso, si lo que le pasamos es una lista, podemos acceder a una posición en particular
Step11: Incluso puedo alinear el texto que pongo usando los dos puntos (
Step12: Pueden ver más ejemplos en la documentación oficial de Python<br>
Step13: Y también podemos separar y combinar strings
Step14: Unicodes
Step15: La única condición que tenemos para usar este método es que el archivo lo hayamos abierto en modo lectura.
Step16: Otra primitiva que podemos usar es readline, que al igual que read, también puede recibir un parámetro que indique la cantidad máxima de bytes a leer. Si no se le pasa ningún parámetro, lee toda la línea.
Step17: Pero no es necesario que leamos de a una sola línea, sino que también podemos leer todas las líneas del archivo y guardarlas en una lista haciendo uso de la primitiva readlines.
Step18: Sin embargo, la forma más Pythonic de leer el archivo por líneas es usando la estructura for y quedaría casi como lo diríamos en castellano
Step19: Escritura de archivos
Step20: ¿Y qué pasa si le quiero agregar algunas líneas a este archivo?
Step21: Otra forma de asegurarse que se escriba lo que hay en el disco es cerrándolo.
Step22: Y así como podemos movernos en un archivo, también podemos averiguar nuestra posición usando la primitiva tell().
Step23: ¿Cómo recorrer todo un archivo?
Step24: Como pueden ver, todas las líneas hasta la 22 (que es la última linea del arhcivo) tienen una longitud mayor a 0; incluso las 5, 10 y 19 que aparentemente no tienen ningún caracter. Eso es así ya que siempre tienen por lo menos uno, que es el Enter o \n. <br>
Step25: Aunque Python también nos ofrece otra forma de recorer un archivo, y es usando una de las estructuras que ya conocemos
Step26: O, incluso, usar enumerate para saber qué línea estoy leyendo
|
14,891 | <ASSISTANT_TASK:>
Python Code:
data_original = np.loadtxt('stanford_dl_ex/ex1/housing.data')
data = np.insert(data_original, 0, 1, axis=1)
np.random.shuffle(data)
train_X = data[:400, :-1]
train_y = data[:400, -1]
test_X = data[400:, :-1]
test_y = data[400:, -1]
m, n = train_X.shape
def cost_function(theta, X, y):
squared_errors = (X.dot(theta) - y) ** 2
J = 0.5 * squared_errors.sum()
return J
def gradient(theta, X, y):
errors = X.dot(theta) - y
return errors.dot(X)
J_history = []
t0 = time.time()
res = scipy.optimize.minimize(
fun=cost_function,
x0=np.random.rand(n),
args=(train_X, train_y),
method='bfgs',
jac=gradient,
options={'maxiter': 200, 'disp': True},
callback=lambda x: J_history.append(cost_function(x, train_X, train_y)),
)
t1 = time.time()
print('Optimization took {s} seconds'.format(s=t1 - t0))
optimal_theta = res.x
plt.plot(J_history, marker='o')
plt.xlabel('Iterations')
plt.ylabel('J(theta)')
for dataset, (X, y) in (
('train', (train_X, train_y)),
('test', (test_X, test_y)),
):
actual_prices = y
predicted_prices = X.dot(optimal_theta)
print(
'RMS {dataset} error: {error}'.format(
dataset=dataset,
error=np.sqrt(np.mean((predicted_prices - actual_prices) ** 2))
)
)
plt.figure(figsize=(10, 8))
plt.scatter(np.arange(test_y.size), sorted(test_y), c='b', edgecolor='None', alpha=0.5, label='actual')
plt.scatter(np.arange(test_y.size), sorted(test_X.dot(optimal_theta)), c='g', edgecolor='None', alpha=0.5, label='predicted')
plt.legend(loc='upper left')
plt.ylabel('House price ($1000s)')
plt.xlabel('House #')
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create train & test sets.
Step2: Define the cost function and how to compute the gradient.<br>
Step3: Run a timed optimization and store the iteration values of the cost function (for latter investigation).
Step4: It's always interesting to take a more detailed look at the optimization results.
Step5: Now compute the Root Mean Square Error on both the train and the test set and hopefully they are similar to each other.
Step6: Finally, let's have a more intuitive look at the predictions.
|
14,892 | <ASSISTANT_TASK:>
Python Code:
from pynq import Overlay
from pynq.drivers import Audio
Overlay('base.bit').download()
pAudio = Audio()
pAudio.record(3)
pAudio.save("Recording_1.pdm")
pAudio.load("/home/xilinx/pynq/drivers/tests/pynq_welcome.pdm")
pAudio.play()
import time
import numpy as np
start = time.time()
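# byteswap + view(np.uint8) reorders the bytes of each 16-bit PDM word so that
# np.unpackbits (which works byte by byte, most-significant bit first) returns
# the 1-bit PDM samples in the order they were recorded.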
af_uint8 = np.unpackbits(pAudio.buffer.astype(np.int16)
.byteswap(True).view(np.uint8))
end = time.time()
print("Time to convert {:,d} PDM samples: {:0.2f} seconds"
.format(np.size(pAudio.buffer)*16, end-start))
print("Size of audio data: {:,d} Bytes"
.format(af_uint8.nbytes))
import time
from scipy import signal
start = time.time()
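# Three decimation stages (factors 8, 6 and 2, i.e. 96x overall) low-pass filter the
# 1-bit PDM stream down to the 32 kHz PCM signal used for playback and plotting below.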
af_dec = signal.decimate(af_uint8,8,zero_phase=True)
af_dec = signal.decimate(af_dec,6,zero_phase=True)
af_dec = signal.decimate(af_dec,2,zero_phase=True)
af_dec = (af_dec[10:-10]-af_dec[10:-10].mean())
end = time.time()
print("Time to convert {:,d} Bytes: {:0.2f} seconds"
.format(af_uint8.nbytes, end-start))
print("Size of audio data: {:,d} Bytes"
.format(af_dec.nbytes))
del af_uint8
from IPython.display import Audio as IPAudio
IPAudio(af_dec, rate=32000)
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.figure(num=None, figsize=(15, 5))
time_axis = np.arange(0,((len(af_dec))/32000),1/32000)
plt.title('Audio Signal in Time Domain')
plt.xlabel('Time in s')
plt.ylabel('Amplitude')
plt.plot(time_axis, af_dec)
plt.show()
from scipy.fftpack import fft
yf = fft(af_dec)
yf_2 = yf[1:len(yf)//2]
xf = np.linspace(0.0, 32000//2, len(yf_2))
plt.figure(num=None, figsize=(15, 5))
plt.plot(xf, abs(yf_2))
plt.title('Magnitudes of Audio Signal Frequency Components')
plt.xlabel('Frequency in Hz')
plt.ylabel('Magnitude')
plt.show()
import matplotlib
np.seterr(divide='ignore',invalid='ignore')
matplotlib.style.use("classic")
plt.figure(num=None, figsize=(15, 4))
plt.title('Audio Signal Spectogram')
plt.xlabel('Time in s')
plt.ylabel('Frequency in Hz')
_ = plt.specgram(af_dec, Fs=32000)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Record and play
Step2: Load and play
Step3: Play in notebook
Step4: Step 2
Step5: Step 3
Step6: Plotting PCM data
Step7: Frequency spectrum
Step8: Frequency spectrum over time
|
14,893 | <ASSISTANT_TASK:>
Python Code:
import requests
response = requests.get("https://api.forecast.io/forecast/e554f37a8164ce189acd210d00a452e0/47.4079,9.4647")
weather_data = response.json()
weather_data.keys()
print(weather_data['timezone'])
print("Longitude:", weather_data['longitude'], "Latitude", weather_data['latitude'])
type(weather_data['currently'])
weather_data['currently'].keys()
print("The wind in Trogen, Switzerland, is currently blowing at", weather_data['currently']['windSpeed'], "mph.")
weather = weather_data['currently']
Temperature = int(weather['apparentTemperature']) - int(weather['temperature'])
Celsius = round((int(weather['temperature']) - 32) * 5 / 9)
if Temperature == 0:
print("The temperature feels exactly as you would expect it to feel, namely", weather['temperature'], "degrees Fahrenheit, or", Celsius, "degrees Celsius.")
elif Temperature > 0:
print("It feels", Temperature, "degrees Fahrenheit warmer than the actual temperature, which is", weather['temperature'], "degrees Fahrenheit, or", Celsius, "degrees Celsius.")
else:
print("It feels", Temperature, "degrees Fahrenheit colder than the actual temperature, which is", weather['temperature'], "degrees Fahrenheit, or", Celsius, "degrees Celsius.")
type(weather_data['daily'])
weather_data['daily'].keys()
type(weather_data['daily']['data'])
for phase in weather_data['daily']['data']:
moon_forecast_tomorrow = phase['moonPhase']
break
#print(phase['moonPhase'])
if moon_forecast_tomorrow == 0:
print("Tomorrow is New Moon.")
elif moon_forecast_tomorrow > .75:
print("Tomorrow the Moon is in a Waning Crescent phase.")
elif moon_forecast_tomorrow == .75:
print("Tomorrow is last quarter Moon.")
elif moon_forecast_tomorrow > .5:
print("Tommorrow the Moon is in a Waning Gibbous phase.")
elif moon_forecast_tomorrow == .5:
print("Tommorrow is Full Moon.")
elif moon_forecast_tomorrow > .25:
print("Tommorrow the Moon is a Waxing Gibbous phase.")
elif moon_forecast_tomorrow == .25:
print("Tommorrow is first Quarter Moon.")
elif moon_forecast_tomorrow > 0:
print("Tommorrow the Moon is in a Waxing Crescent phase. This is the first phase after New Moon.")
print(weather_data['currently'])
response = requests.get("https://api.forecast.io/forecast/e554f37a8164ce189acd210d00a452e0/47.4079,9.4647,1465420000")
hist_weather_data = response.json()
hist_weather_data.keys()
hist_weather_data['daily'].keys()
print(hist_weather_data['daily']['data'])
for n in hist_weather_data['daily']['data']:
Min = n['temperatureMin']
Max = n['temperatureMax']
Min_Max = Max - Min
Min_Max_Celsius = ((Max - 32) * 5 / 9) - ((Min - 32) * 5 / 9)
print("The diffrence between the high and low temperatures on Wedesday, June 8, in Trogen, Switzerland, was", round(Min_Max), "Fahrenheit", "or", round(Min_Max_Celsius), "Celsius.")
response = requests.get("https://api.forecast.io/forecast/e554f37a8164ce189acd210d00a452e0/40.7128,-74.0059")
weather_data = response.json()
weather_data.keys()
weather_data['daily'].keys()
print(weather_data['daily']['data'][0])
Forecast = weather_data['daily']['data']
for max_temp in Forecast:
Convert_Celsius = (max_temp['temperatureMax'] - 32) * 5 / 9
if Convert_Celsius > 30:
Temperature = "hot"
elif Convert_Celsius > 20:
Temperature = "warm"
else:
Temperature = "cold"
print("The maximum temperature in New York for next week are", max_temp['temperatureMax'], "Fahrenheit or", round(Convert_Celsius), "Celsius. That is", Temperature)
import datetime
print(datetime.datetime.fromtimestamp(int("1284101485")).strftime('%Y-%m-%d'))
import time
import datetime
Forecast = weather_data['daily']['data']
for max_temp in Forecast:
Convert_Celsius = (max_temp['temperatureMax'] - 32) * 5 / 9
time1 = max_temp['time']
if Convert_Celsius > 30:
Temperature = "hot"
elif Convert_Celsius > 20:
Temperature = "comfortably warm."
else:
Temperature = "cold"
#print("On", time.strftime('%A, %b %d', time.localtime(time1)), "the maximum temperature will be", max_temp['temperatureMax'], "Fahrenheit or", round(Convert_Celsius), "Celsius. That is", Temperature)
print("On", time.strftime('%A', time.localtime(time1)), "the maximum temperature will be", max_temp['temperatureMax'], "Fahrenheit or", round(Convert_Celsius), "Celsius. That is", Temperature)
import datetime
print(datetime.datetime.fromtimestamp(int(time1)).strftime('%Y-%m-%d'))
response = requests.get("https://api.forecast.io/forecast/e554f37a8164ce189acd210d00a452e0/25.7617,-80.1918")
Florida = response.json()
Florida.keys()
Florida['hourly'].keys()
#print(Florida['hourly']['data'])
import time
import datetime
Hourly = Florida['hourly']['data']
Hourly = Hourly
for weather in Hourly:
time = weather['time']
stop_time = datetime.datetime.fromtimestamp(int(time)).strftime('%H')
if stop_time == '01':
break
if weather['cloudCover'] > 0.5:
cloudy = "and cloudy"
else:
cloudy = "not cloudy"
print(datetime.datetime.fromtimestamp(int(time)).strftime('%H:%M'), "{", weather['temperature'], "°F}",cloudy)
response = requests.get("https://api.forecast.io/forecast/e554f37a8164ce189acd210d00a452e0/40.781750,-73.966641,346593600")
weather_data = response.json()
weather_data.keys()
#print(weather_data['daily']['data'][0])
for Christmas in weather_data['daily']['data']:
Convert_Celsius = (Christmas['temperatureMax'] - 32) * 5 / 9
print("The maximum temperature on Christmas Day 1980 in Centralpark was",
Christmas['temperatureMax'], "Fahrenheit, or", round(Convert_Celsius), "degrees Celsius.")
response = requests.get("https://api.forecast.io/forecast/e554f37a8164ce189acd210d00a452e0/40.781750,-73.966641,662126400")
weather_data_1990 = response.json()
weather_data_1990.keys()
response = requests.get("https://api.forecast.io/forecast/e554f37a8164ce189acd210d00a452e0/40.781750,-73.966641,977745600")
weather_data_2000 = response.json()
weather_data_2000.keys()
for Christmas in weather_data_1990['daily']['data']:
Convert_Celsius = (Christmas['temperatureMax'] - 32) * 5 / 9
print("The maximum temperature on Christmas Day 1990 in Centralpark was",
Christmas['temperatureMax'], "Fahrenheit, or", round(Convert_Celsius), "degrees Celsius.")
for Christmas in weather_data_2000['daily']['data']:
Convert_Celsius = (Christmas['temperatureMax'] - 32) * 5 / 9
print("The maximum temperature on Christmas Day 1980 in Centralpark was",
Christmas['temperatureMax'], "Fahrenheit, or", round(Convert_Celsius), "degrees Celsius.")
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The time zone of Trogen is correct! This is where I live.
Step2: The longitude is mentioned first, and then the latitude. Usually, it is the other way round.
Step3: 3) The first daily forecast is the forecast for today. For the place you decided on up above, how much of the moon is currently visible?
Step4: 4) What's the difference between the high and low temperatures for today?
Step5: 5) Loop through the daily forecast, printing out the next week's worth of predictions. I'd like to know the high temperature for each day, and whether it's hot, warm, or cold, based on what temperatures you think are hot, warm or cold.
Step6: 6) What's the weather looking like for the rest of today in Miami, Florida? I'd like to know the temperature for every hour, and if it's going to have cloud cover of more than 0.5 say "{temperature} and cloudy" instead of just the temperature.
Step7: 7) What was the temperature in Central Park on Christmas Day, 1980? How about 1990? 2000?
|
14,894 | <ASSISTANT_TASK:>
Python Code:
import coiled
cluster = coiled.Cluster(n_workers=10)
from dask.distributed import Client
client = Client(cluster)
print('Dashboard:', client.dashboard_link)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's point the distributed client to the Dask cluster on Coiled and output the link to the dashboard
|
14,895 | <ASSISTANT_TASK:>
Python Code:
#Create references to important directories we will use over and over
import os, sys
DATA_HOME_DIR = '/home/nathan/olin/spring2017/line-follower/line-follower/data'
#import modules
import numpy as np
from glob import glob
from PIL import Image
from tqdm import tqdm
from scipy.ndimage import zoom
from keras.models import Sequential
from keras.metrics import categorical_crossentropy, categorical_accuracy
from keras.layers.convolutional import *
from keras.preprocessing import image
from keras.layers.core import Flatten, Dense
from keras.optimizers import Adam
from keras.layers.normalization import BatchNormalization
from matplotlib import pyplot as plt
import seaborn as sns
%matplotlib inline
%cd $DATA_HOME_DIR
path = DATA_HOME_DIR
train_path=path + '/sun_apr_16_office_full_line_1'
valid_path=path + '/sun_apr_16_office_full_line_2'
def resize_vectorized4D(data, new_size=(64, 64)):
"""A vectorized implementation of 4d image resizing
Args:
data (4D array): The images you want to resize
new_size (tuple): The desired image size
Returns: (4D array): The resized images
"""
fy, fx = np.asarray(new_size, np.float32) / data.shape[1:3]
return zoom(data, (1, fy, fx, 1), order=1) # order is the order of spline interpolation
def lowerHalfImage(array):
"""Returns the lower half rows of an image
Args: array (array): the array you want to extract the lower half from
Returns: The lower half of the array
"""
return array[round(array.shape[0]/2):,:,:]
def folder_to_numpy(image_directory_full):
"""Read sorted pictures (by filename) in a folder to a numpy array.
We have hardcoded the extraction of the lower half of the images as
that is the relevant data
USAGE:
data_folder = '/train/test1'
X_train = folder_to_numpy(data_folder)
Args:
data_folder (str): The relative folder from DATA_HOME_DIR
Returns:
picture_array (np array): The numpy array in tensorflow format
"""
# change directory
print ("Moving to directory: " + image_directory_full)
os.chdir(image_directory_full)
# read in filenames from directory
g = glob('*.png')
if len(g) == 0:
g = glob('*.jpg')
print ("Found {} pictures".format(len(g)))
# sort filenames
g.sort()
# open and convert images to numpy array - then extract the lower half of each image
print("Starting pictures to numpy conversion")
picture_arrays = np.array([lowerHalfImage(np.array(Image.open(image_path))) for image_path in g])
# reshape to tensorflow format
# picture_arrays = picture_arrays.reshape(*picture_arrays.shape, 1)
print ("Shape of output: {}".format(picture_arrays.shape))
# return array
return picture_arrays
def flip4DArray(array):
"""Produces the mirror images of a 4D image array"""
return array[..., ::-1,:] #[:,:,::-1] also works but is 50% slower
def concatCmdVelFlip(array):
"""Concatenates and returns the Cmd Vel array"""
return np.concatenate((array, array*-1)) # multiply by negative 1 for opposite turn
%cd $train_path
Y_train = np.genfromtxt('cmd_vel.csv', delimiter=',')[:,1] # only use turning angle
Y_train = np.concatenate((Y_train, Y_train*-1))
X_train = folder_to_numpy(train_path + '/raw')
X_train = np.concatenate((X_train, flip4DArray(X_train)))
X_train.shape, Y_train.shape
%cd /tmp
img = Image.fromarray(X_train[0], 'RGB')
img.save("temp.jpg")
image.load_img("temp.jpg")
%cd $valid_path
Y_valid = np.genfromtxt('cmd_vel.csv', delimiter=',')[:,1]
Y_valid = np.concatenate((Y_valid, Y_valid*-1))
X_valid = folder_to_numpy(valid_path + '/raw')
X_valid = np.concatenate((X_valid, flip4DArray(X_valid)))
X_valid.shape, Y_valid.shape
img_rows, img_cols = (64, 64)
print(img_rows)
print(img_cols)
X_train = resize_vectorized4D(X_train, (img_rows, img_cols))
X_valid = resize_vectorized4D(X_valid, (img_rows, img_cols))
print(X_train.shape)
print(X_valid.shape)
%cd /tmp
img = Image.fromarray(X_train[np.random.randint(0, X_train.shape[0])], 'RGB')
img.save("temp.jpg")
image.load_img("temp.jpg")
gen = image.ImageDataGenerator(
# rescale=1. / 255 # normalize data between 0 and 1
)
train_generator = gen.flow(X_train, Y_train)#, batch_size=batch_size, shuffle=True)
valid_generator = gen.flow(X_valid, Y_valid)#, batch_size=batch_size, shuffle=True)
# get_batches(train_path, batch_size=batch_size,
# target_size=in_shape,
# gen=gen)
# val_batches = get_batches(valid_path, batch_size=batch_size,
# target_size=in_shape,
# gen=gen)
data, category = next(train_generator)
print ("Shape of data: {}".format(data[0].shape))
%cd /tmp
img = Image.fromarray(data[np.random.randint(0, data.shape[0])].astype('uint8'), 'RGB')
img.save("temp.jpg")
image.load_img("temp.jpg")
in_shape = (img_rows, img_cols, 3)
def get_model():
model = Sequential([
Convolution2D(32,3,3, border_mode='same', activation='relu', input_shape=in_shape),
MaxPooling2D(),
Convolution2D(64,3,3, border_mode='same', activation='relu'),
MaxPooling2D(),
Convolution2D(128,3,3, border_mode='same', activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(2048, activation='relu'),
Dense(1024, activation='relu'),
Dense(512, activation='relu'),
Dense(1)
])
model.compile(loss='mean_absolute_error', optimizer='adam')
return model
model = get_model()
model.summary()
history = model.fit_generator(train_generator,
samples_per_epoch=train_generator.n,
nb_epoch=2500,
validation_data=valid_generator,
nb_val_samples=valid_generator.n,
verbose=True)
# %cd $DATA_HOME_DIR
# model.save_weights('epoche_2500.h5')
%cd $DATA_HOME_DIR
model.load_weights('epoche_2500.h5')
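# Smooth the recorded loss curves with a 10-point moving average before plotting.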
val_plot = np.convolve(history.history['val_loss'], np.repeat(1/10, 10), mode='valid')
train_plot = np.convolve(history.history['loss'], np.repeat(1/10, 10), mode='valid')
sns.tsplot(val_plot)
X_preds = model.predict(X_valid).reshape(X_valid.shape[0],)
for i in range(len(X_valid)):
print("{:07f} | {:07f}".format(Y_valid[i], X_preds[i]))
X_train_preds = model.predict(X_train).reshape(X_train.shape[0],)
for i in range(len(X_train_preds)):
print("{:07f} | {:07f}".format(Y_train[i], X_train_preds[i]))
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create paths to data directories
Step7: Helper Functions
Step8: Data
Step9: Test the shape of the arrays
Step10: Visualize the training data, currently using a hacky method to display the numpy matrix as this is being run over a remote server and I can't view new windows
Step11: Validation Data
Step12: Test the shape of the arrays
Step13: Resize Data
Step14: Visualize newly resized image.
Step15: Batches
Step16: Next, create the train and valid generators, these are shuffle and have a batch size of 32 by default
Step17: Convnet
Step18: Model
Step19: Train
Step20: Visualize Training
|
14,896 | <ASSISTANT_TASK:>
Python Code:
import deltascope as ds
import deltascope.alignment as ut
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import normalize
from scipy.optimize import minimize
import os
import tqdm
import json
import datetime
# --------------------------------
# -------- User input ------------
# --------------------------------
data = {
# Specify sample type key
'30hpf': {
# Specify path to data directory
'path': '.\\Data\\30hpf\\Output-02-14-2019',
# Specify which channels are in the directory and are of interest
'channels': ['AT','ZRF']
},
'28hpf': {
'path': '.\Data\\28hpf\\Output-02-14-2019-yot-ilastik',
'channels': ['AT','ZRF']
},
'26hpf': {
'path': '.\Data\\26hpf\\Output-02-14-2019',
'channels': ['AT','ZRF']
},
'24hpf': {
'path': '.\Data\\24hpf\\Output-02-15-2019',
'channels': ['AT','ZRF']
},
'22hpf': {
'path': '.\Data\\22hpf\\Output-02-14-2019',
'channels': ['AT','ZRF']
}
}
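# Build every (sample type, channel) pair, e.g. ('30hpf', 'AT'), so the landmark
# calculation below can loop over all datasets.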
data_pairs = []
for s in data.keys():
for c in data[s]['channels']:
data_pairs.append((s,c))
D = {}
for s in data.keys():
D[s] = {}
for c in data[s]['channels']:
D[s][c] = ds.read_psi_to_dict(data[s]['path'],c)
# --------------------------------
# -------- User input ------------
# --------------------------------
# Pick an integer value for bin number
anum = 30
# Specify the percentiles which will be used to calculate landmarks
percbins = [50]
theta_step = np.pi/4
lm = ds.landmarks(percbins=percbins, rnull=np.nan)
lm.calc_bins(D['28hpf']['AT'], anum, theta_step)
print('Alpha bins')
print(lm.acbins)
print('Theta bins')
print(lm.tbins)
lmdf = pd.DataFrame()
# Loop through each pair of stype and channels
for s,c in tqdm.tqdm(data_pairs):
print(s,c)
# Calculate landmarks for each sample with this data pair
for k,df in tqdm.tqdm(D[s][c].items()):
lmdf = lm.calc_perc(df, k, '-'.join([s,c]), lmdf)
# Set timestamp for saving data
tstamp = datetime.datetime.now().strftime('%Y-%m-%d')
# Save completed landmarks to a csv file
lmdf.to_csv(os.path.join('.\Data',tstamp+'_landmarks.csv'))
# Save landmark bins to json file
bins = {
'acbins':list(lm.acbins),
'tbins':list(lm.tbins)
}
with open(os.path.join('.\Data', tstamp+'_landmarks_bins.json'), 'w') as outfile:
json.dump(bins, outfile)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import raw data
Step2: We'll generate a list of pairs of stypes and channels for ease of use.
Step3: We can now read in all datafiles specified by the data dictionary above.
Step4: Calculate landmark bins
Step5: Calculate landmark bins based on user input parameters and the previously specified control sample.
Step6: Calculate landmarks
|
14,897 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
from load import mnist
X_train, X_test, y_train2, y_test2 = mnist(onehot=True)
y_train = np.argmax(y_train2, axis=1)
y_test = np.argmax(y_test2, axis=1)
X_train[1].reshape((28, 28)).round(2)[:, 4:9].tolist()
from pylab import imshow, show, cm
import matplotlib.pylab as plt
%matplotlib inline
def view_image(image, label="", predicted='', size=4):
"""View a single image."""
plt.figure(figsize = (size, size))
plt.imshow(image.reshape((28, 28)), cmap=cm.gray, )
plt.tick_params(axis='x',which='both', bottom='off',top='off', labelbottom='off')
plt.tick_params(axis='y',which='both', left='off',top='off', labelleft='off')
show()
if predicted == '':
print("Label: %s" % label)
else:
print('Label: ', str(label), 'Predicted: ', str(predicted))
view_image(X_train[1], y_train[1])
view_image(X_train[40000], y_train[40000])
def similarity(image, images):
similarities = []
image = image.reshape((28, 28))
images = images.reshape((-1, 28, 28))
for i in range(images.shape[0]):
distance = np.sqrt(np.sum((image - images[i]) ** 2))
sim = 1 / distance
similarities.append(sim)
return similarities
np.random.seed(52)
small_train = np.random.choice(X_train.shape[0], 100)
view_image(X_test[0])
similarities = similarity(X_test[0], X_train[small_train])
view_image(X_train[small_train[np.argmax(similarities)]])
view_image(X_test[200])
similarities = similarity(X_test[200], X_train[small_train])
view_image(X_train[small_train[np.argmax(similarities)]])
import theano
from theano import tensor as T
import numpy as np
import datetime as dt
theano.config.floatX = 'float32'
def floatX(X):
# return np.asarray(X, dtype='float32')
return np.asarray(X, dtype=theano.config.floatX)
def init_weights(shape):
return theano.shared(floatX(np.random.randn(*shape) * 0.01))
def model(X, w):
return T.nnet.softmax(T.dot(X, w))
X = T.fmatrix()
Y = T.fmatrix()
w = init_weights((784, 10))
w.get_value()
py_x = model(X, w)
y_pred = T.argmax(py_x, axis=1)
cost = T.mean(T.nnet.categorical_crossentropy(py_x, Y))
gradient = T.grad(cost=cost, wrt=w)
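# Plain gradient-descent update: move the weights against the gradient with a fixed learning rate of 0.05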
update = [[w, w - gradient * 0.05]]
train = theano.function(inputs=[X, Y], outputs=cost, updates=update, allow_input_downcast=True)
predict = theano.function(inputs=[X], outputs=y_pred, allow_input_downcast=True)
for start, end in zip(range(0, X_train.shape[0], 128), range(128, X_train.shape[0], 128)):
cost = train(X_train[start:end], y_train2[start:end])
errors = [(np.mean(y_train != predict(X_train)),
np.mean(y_test != predict(X_test)))]
errors
t0 = dt.datetime.now()
for i in range(100):
for start, end in zip(range(0, X_train.shape[0], 128),
range(128, X_train.shape[0], 128)):
cost = train(X_train[start:end], y_train2[start:end])
errors.append((np.mean(y_train != predict(X_train)),
np.mean(y_test != predict(X_test))))
print(i, errors[-1])
print('Total time: ', (dt.datetime.now()-t0).seconds / 60.)
res = np.array(errors)
plt.plot(np.arange(res.shape[0]), res[:, 0], label='train error')
plt.plot(np.arange(res.shape[0]), res[:, 1], label='test error')
plt.legend()
y_pred = predict(X_test)
np.random.seed(2)
small_test = np.random.choice(X_test.shape[0], 10)
for i in small_test:
view_image(X_test[i], label=y_test[i], predicted=y_pred[i], size=1)
def sgd(cost, params, lr=0.05):
grads = T.grad(cost=cost, wrt=params)
updates = []
for p, g in zip(params, grads):
updates.append([p, p - g * lr])
return updates
def model(X, w_h, w_o):
h = T.nnet.sigmoid(T.dot(X, w_h))
pyx = T.nnet.softmax(T.dot(h, w_o))
return pyx
w_h = init_weights((784, 625))
w_o = init_weights((625, 10))
py_x = model(X, w_h, w_o)
y_x = T.argmax(py_x, axis=1)
cost = T.mean(T.nnet.categorical_crossentropy(py_x, Y))
params = [w_h, w_o]
updates = sgd(cost, params)
train = theano.function(inputs=[X, Y], outputs=cost, updates=updates, allow_input_downcast=True)
predict = theano.function(inputs=[X], outputs=y_x, allow_input_downcast=True)
t0 = dt.datetime.now()
errors = []
for i in range(100):
for start, end in zip(range(0, X_train.shape[0], 128),
range(128, X_train.shape[0], 128)):
cost = train(X_train[start:end], y_train2[start:end])
errors.append((np.mean(y_train != predict(X_train)),
np.mean(y_test != predict(X_test))))
print(i, errors[-1])
print('Total time: ', (dt.datetime.now()-t0).seconds / 60.)
res = np.array(errors)
plt.plot(np.arange(res.shape[0]), res[:, 0], label='train error')
plt.plot(np.arange(res.shape[0]), res[:, 1], label='test error')
plt.legend()
from theano.sandbox.rng_mrg import MRG_RandomStreams as RandomStreams
srng = RandomStreams()
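# rectify implements the ReLU non-linearity: negative activations are clipped to zero,
# positive ones pass through unchanged (e.g. [-2., 0., 3.] maps to [0., 0., 3.])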
def rectify(X):
return T.maximum(X, 0.)
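# RMSprop scales each parameter's gradient by a running root-mean-square of its recent
# magnitudes, so parameters with consistently large gradients take correspondingly smaller steps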
def RMSprop(cost, params, lr=0.001, rho=0.9, epsilon=1e-6):
grads = T.grad(cost=cost, wrt=params)
updates = []
for p, g in zip(params, grads):
acc = theano.shared(p.get_value() * 0.)
acc_new = rho * acc + (1 - rho) * g ** 2
gradient_scaling = T.sqrt(acc_new + epsilon)
g = g / gradient_scaling
updates.append((acc, acc_new))
updates.append((p, p - lr * g))
return updates
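# Inverted dropout: randomly zero activations with probability p and rescale the survivors
# by 1/(1 - p) so the expected activation is unchanged at prediction time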
def dropout(X, p=0.):
if p > 0:
retain_prob = 1 - p
X *= srng.binomial(X.shape, p=retain_prob, dtype=theano.config.floatX)
X /= retain_prob
return X
def model(X, w_h, w_h2, w_o, p_drop_input, p_drop_hidden):
X = dropout(X, p_drop_input)
h = rectify(T.dot(X, w_h))
h = dropout(h, p_drop_hidden)
h2 = rectify(T.dot(h, w_h2))
h2 = dropout(h2, p_drop_hidden)
py_x = softmax(T.dot(h2, w_o))
return h, h2, py_x
def softmax(X):
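# Subtracting the per-row maximum before exponentiating avoids overflow and leaves the softmax values unchanged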
e_x = T.exp(X - X.max(axis=1).dimshuffle(0, 'x'))
return e_x / e_x.sum(axis=1).dimshuffle(0, 'x')
w_h = init_weights((784, 625))
w_h2 = init_weights((625, 625))
w_o = init_weights((625, 10))
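# Two graphs share the same weights: a noisy one (with dropout) used for training
# and a clean one (dropout probabilities set to 0) used for prediction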
noise_h, noise_h2, noise_py_x = model(X, w_h, w_h2, w_o, 0.2, 0.5)
h, h2, py_x = model(X, w_h, w_h2, w_o, 0., 0.)
y_x = T.argmax(py_x, axis=1)
cost = T.mean(T.nnet.categorical_crossentropy(noise_py_x, Y))
params = [w_h, w_h2, w_o]
updates = RMSprop(cost, params, lr=0.001)
train = theano.function(inputs=[X, Y], outputs=cost, updates=updates, allow_input_downcast=True)
predict = theano.function(inputs=[X], outputs=y_x, allow_input_downcast=True)
t0 = dt.datetime.now()
errors = []
for i in range(100):
for start, end in zip(range(0, X_train.shape[0], 128),
range(128, X_train.shape[0], 128)):
cost = train(X_train[start:end], y_train2[start:end])
errors.append((np.mean(y_train != predict(X_train)),
np.mean(y_test != predict(X_test))))
print(i, errors[-1])
print('Total time: ', (dt.datetime.now()-t0).seconds / 60.)
res = np.array(errors)
plt.plot(np.arange(res.shape[0]), res[:, 0], label='train error')
plt.plot(np.arange(res.shape[0]), res[:, 1], label='test error')
plt.legend()
# from theano.tensor.nnet.conv import conv2d
from theano.tensor.nnet import conv2d
from theano.tensor.signal.downsample import max_pool_2d
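# Note: newer Theano releases moved max pooling to theano.tensor.signal.pool.pool_2d;
# the import above matches the older API used throughout this notebook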
def model(X, w, w2, w3, w4, w_o, p_drop_conv, p_drop_hidden):
l1a = rectify(conv2d(X, w, border_mode='full'))
l1 = max_pool_2d(l1a, (2, 2))
l1 = dropout(l1, p_drop_conv)
l2a = rectify(conv2d(l1, w2))
l2 = max_pool_2d(l2a, (2, 2))
l2 = dropout(l2, p_drop_conv)
l3a = rectify(conv2d(l2, w3))
l3b = max_pool_2d(l3a, (2, 2))
# convert from 4tensor to normal matrix
l3 = T.flatten(l3b, outdim=2)
l3 = dropout(l3, p_drop_conv)
l4 = rectify(T.dot(l3, w4))
l4 = dropout(l4, p_drop_hidden)
pyx = softmax(T.dot(l4, w_o))
return l1, l2, l3, l4, pyx
X_train2 = X_train.reshape(-1, 1, 28, 28)
X_test2 = X_test.reshape(-1, 1, 28, 28)
# now 4tensor for conv instead of matrix
X = T.ftensor4()
Y = T.fmatrix()
w = init_weights((32, 1, 3, 3))
w2 = init_weights((64, 32, 3, 3))
w3 = init_weights((128, 64, 3, 3))
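# After the three conv + pool stages each 28x28 image is reduced to 128 feature maps of size 3x3
# (as implied by the 128 * 3 * 3 weight shape below), which is the flattened input of the dense layer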
w4 = init_weights((128 * 3 * 3, 625))
w_o = init_weights((625, 10))
noise_l1, noise_l2, noise_l3, noise_l4, noise_py_x = model(X, w, w2, w3, w4, w_o, 0.2, 0.5)
l1, l2, l3, l4, py_x = model(X, w, w2, w3, w4, w_o, 0., 0.)
y_x = T.argmax(py_x, axis=1)
cost = T.mean(T.nnet.categorical_crossentropy(noise_py_x, Y))
params = [w, w2, w3, w4, w_o]
updates = RMSprop(cost, params, lr=0.001)
train = theano.function(inputs=[X, Y], outputs=cost, updates=updates, allow_input_downcast=True)
predict = theano.function(inputs=[X], outputs=y_x, allow_input_downcast=True)
t0 = dt.datetime.now()
errors = []
for i in range(100):
t1 = dt.datetime.now()
for start, end in zip(range(0, X_train.shape[0], 128),
range(128, X_train.shape[0], 128)):
cost = train(X_train2[start:end], y_train2[start:end])
errors.append((np.mean(y_train != predict(X_train2)),
np.mean(y_test != predict(X_test2))))
print(i, errors[-1])
print('Current iter time: ', (dt.datetime.now()-t1).seconds / 60.)
print('Total time: ', (dt.datetime.now()-t0).seconds / 60.)
print('Total time: ', (dt.datetime.now()-t0).seconds / 60.)
res = np.array(errors)
plt.plot(np.arange(res.shape[0]), res[:, 0], label='train error')
plt.plot(np.arange(res.shape[0]), res[:, 1], label='test error')
plt.legend()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 12 - Introduction to Deep Learning
Step2: Naive model
Step3: Lets try an other example
Step4: Logistic Regression
Step5: ```
Step6: initialize model
Step7: One iteration
Step8: Now for 100 epochs
Step9: Checking the results
Step10: Simple Neural Net
Step11: Complex Neural Net
Step12: Understanding rectifier units
Step13: RMSprop
Step14: Convolutional Neural Network
Step15: Modify dropout function
Step16: reshape into conv 4tensor (b, c, 0, 1) format
|
14,898 | <ASSISTANT_TASK:>
Python Code:
import astropy.coordinates as coord
import astropy.table as at
from astropy.time import Time
import astropy.units as u
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
import corner
import pymc3 as pm
import pymc3_ext as pmx
import exoplanet as xo
import arviz as az
import thejoker as tj
# set up a random number generator to ensure reproducibility
rnd = np.random.default_rng(seed=42)
data_tbl = at.QTable.read('data.ecsv')
sub_tbl = data_tbl[rnd.choice(len(data_tbl), size=18, replace=False)] # downsample data
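# guess_from_table tries to infer the time, RV, and RV-uncertainty columns from the table's column names and units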
data = tj.RVData.guess_from_table(sub_tbl, t_ref=data_tbl.meta['t_ref'])
_ = data.plot()
prior = tj.JokerPrior.default(
P_min=2*u.day, P_max=1e3*u.day,
sigma_K0=30*u.km/u.s,
sigma_v=100*u.km/u.s)
prior_samples = prior.sample(size=250_000,
random_state=rnd)
joker = tj.TheJoker(prior, random_state=rnd)
joker_samples = joker.rejection_sample(data, prior_samples,
max_posterior_samples=256)
joker_samples
joker_samples.tbl
_ = tj.plot_rv_curves(joker_samples, data=data)
with prior.model:
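# setup_mcmc turns the Joker posterior sample into starting parameter values so NUTS begins near a good solution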
mcmc_init = joker.setup_mcmc(data, joker_samples)
trace = pmx.sample(tune=500, draws=500,
start=mcmc_init,
cores=1, chains=2)
az.summary(trace, var_names=prior.par_names)
mcmc_samples = joker.trace_to_samples(trace, data)
mcmc_samples.wrap_K()
mcmc_samples
import pickle
with open('true-orbit.pkl', 'rb') as f:
truth = pickle.load(f)
# make sure the angles are wrapped the same way
if np.median(mcmc_samples['omega']) < 0:
truth['omega'] = coord.Angle(truth['omega']).wrap_at(np.pi*u.radian)
if np.median(mcmc_samples['M0']) < 0:
truth['M0'] = coord.Angle(truth['M0']).wrap_at(np.pi*u.radian)
df = mcmc_samples.tbl.to_pandas()
truths = []
colnames = []
for name in df.columns:
if name in truth:
colnames.append(name)
truths.append(truth[name].value)
_ = corner.corner(df[colnames], truths=truths)
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here we will again load some pre-generated data meant to represent well-sampled, precise radial velocity observations of a single luminous source with a single companion (we again downsample the data set here just for demonstration)
Step2: We will use the default prior, but feel free to play around with these values
Step3: The data above look fairly constraining
Step4: The sample that was returned by The Joker does look like it is a reasonable fit to the RV data, but to fully explore the posterior pdf we will use standard MCMC through pymc3. Here we will use the NUTS sampler, but you could also experiment with other backends (e.g., Metropolis-Hastings, or even emcee by following this blog post)
Step5: If you get warnings from running the sampler above, they usually indicate that we should run it for many more tuning and sampling steps, but let's ignore that for now. With the MCMC traces in hand, we can summarize the properties of the chains using pymc3.summary
Step6: To convert the trace into a JokerSamples instance, we can use the TheJoker.trace_to_samples() method. Note here that the sign of K is arbitrary, so to compare to the true value, we also call wrap_K() to store only the absolute value of K (which also increases omega by π, to stay consistent)
Step7: We can now compare the samples we got from MCMC to the true orbital parameters used to generate this data
|
14,899 | <ASSISTANT_TASK:>
Python Code:
# !pip install ray[tune]
!pip install scikit-optimize==0.8.1
!pip install sklearn==0.18.2
import time
from typing import Dict, Optional, Any
import ray
import skopt
from ray import tune
from ray.tune.suggest import ConcurrencyLimiter
from ray.tune.suggest.skopt import SkOptSearch
ray.init(configure_logging=False)
def evaluate(step, width, height, activation):
time.sleep(0.1)
activation_boost = 10 if activation=="relu" else 0
return (0.1 + width * step / 100) ** (-1) + height * 0.1 + activation_boost
def objective(config):
for step in range(config["steps"]):
score = evaluate(step, config["width"], config["height"], config["activation"])
tune.report(iterations=step, mean_loss=score)
search_space = {
"steps": 100,
"width": tune.uniform(0, 20),
"height": tune.uniform(-100, 100),
"activation": tune.choice(["relu", "tanh"]),
}
algo = SkOptSearch()
algo = ConcurrencyLimiter(algo, max_concurrent=4)
num_samples = 1000
# We override here for our smoke tests.
num_samples = 10
analysis = tune.run(
objective,
search_alg=algo,
metric="mean_loss",
mode="min",
name="skopt_exp",
num_samples=num_samples,
config=search_space
)
print("Best hyperparameters found were: ", analysis.best_config)
initial_params = [
{"width": 10, "height": 0, "activation": "relu"},
{"width": 15, "height": -20, "activation": "tanh"}
]
known_rewards = [-189, -1144]
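# known_rewards is not passed below; if desired, SkOptSearch can also accept these scores
# via evaluated_rewards=known_rewards alongside points_to_evaluate (assuming the installed Ray version supports it)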
algo = SkOptSearch(points_to_evaluate=initial_params)
algo = ConcurrencyLimiter(algo, max_concurrent=4)
analysis = tune.run(
objective,
search_alg=algo,
metric="mean_loss",
mode="min",
name="skopt_exp_with_warmstart",
num_samples=num_samples,
config=search_space
)
print("Best hyperparameters found were: ", analysis.best_config)
ray.shutdown()
<END_TASK> | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Click below to see all the imports we need for this example.
Step2: Let's start by defining a simple evaluation function. Again, an explicit math formula is queried here for demonstration, yet in practice this is typically a black-box function-- e.g. the performance results after training an ML model. We artificially sleep for a bit (0.1 seconds) to simulate a long-running ML experiment. This setup assumes that we're running multiple steps of an experiment while tuning three hyperparameters, namely width, height, and activation.
Step3: Next, our objective function to be optimized takes a Tune config, evaluates the score of your experiment in a training loop, and reports this score back to Tune with tune.report.
Step4: Next we define a search space. The critical assumption is that the optimal hyperparameters live within this space. Yet, if the space is very large, then those hyperparameters may be difficult to find in a short amount of time.
Step5: The search algorithm is instantiated from the SkOptSearch class. We also constrain the number of concurrent trials to 4 with a ConcurrencyLimiter.
Step6: The number of samples is the number of hyperparameter combinations that will be tried out. This Tune run is set to 1000 samples.
Step7: Finally, we run the experiment to "min"imize the "mean_loss" of the objective by searching search_space via algo, num_samples times. The previous sentence fully characterizes the search problem we aim to solve. With this in mind, notice how efficient it is to execute tune.run().
Step8: We now have hyperparameters found to minimize the mean loss.
Step9: Providing an initial set of hyperparameters
Step10: Now the search_alg built with SkOptSearch takes these initial points via its points_to_evaluate argument.
Step11: And again run the experiment, this time with initial hyperparameter evaluations
Step12: And we again show the ideal hyperparameters.
|