title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---|
Vectorizing calculations in pandas | 40,123,132 | <p>I'm trying to calculate group averages inside the cross-validation scheme, but this iterating method is extremely slow as my dataframe contains more than 1 million rows. Is it possible to vectorize this calculation? Thanks.</p>
<pre><code>import pandas as pd
import numpy as np
data = np.column_stack([np.arange(1,101), np.random.randint(1,11, 100),np.random.randint(1,101, 100)])
df = pd.DataFrame(data, columns=['id', 'group','total'])
from sklearn.cross_validation import KFold
kf = KFold(df.shape[0], n_folds=3, shuffle = True)
f = {'total': ['mean']}
df['fold'] = 0
df['group_average'] = 0
for train_index, test_index in kf:
    df.ix[train_index, 'fold'] = 0
    df.ix[test_index, 'fold'] = 1
    aux = df.loc[df.fold == 0, :].groupby(['group'])
    aux2 = aux.agg(f)
    aux2.reset_index(inplace = True)
    aux2.columns = ['group', 'group_average']
    for i, row in df.loc[df.fold == 1, :].iterrows():
        new = aux2.ix[(aux2.group == row.group),'group_average']
        if new.empty == True:
            new = 0
        else:
            new = new.values[0]
        df.ix[i, 'group_average'] = new
</code></pre>
| 3 | 2016-10-19T05:45:43Z | 40,125,786 | <p>Replace the <code>for i, row in df.loc[df.fold == 1, :].iterrows():</code>-loop with this:</p>
<pre><code>df0 = pd.merge(df[df.fold == 1],aux2,on='group').set_index('id')
df = df.set_index('id')
df.loc[(df.fold == 1),'group_average'] = df0.loc[:,'group_average_y']
df = df.reset_index()
</code></pre>
<p>This gives me the same result as your code and is almost 7 times faster.</p>
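<p>On current pandas/scikit-learn versions (where <code>sklearn.cross_validation</code> and <code>.ix</code> no longer exist) the same fold-wise group average can also be computed without a merge, via <code>groupby</code> plus <code>map</code>. This is a sketch, not the answerer's code; it assumes unseen groups should fall back to 0, as in the original loop:</p>

```python
import numpy as np
import pandas as pd

# small stand-in for the 1-million-row frame; 'fold' marks train (0) / test (1)
rng = np.random.default_rng(0)
df = pd.DataFrame({'group': rng.integers(1, 4, 12),
                   'total': rng.integers(1, 101, 12),
                   'fold': [0, 1] * 6})

# per-group mean of 'total', computed on the training rows only
train_means = df.loc[df.fold == 0].groupby('group')['total'].mean()

# broadcast onto the test rows in one vectorized step; groups that never
# appeared in training become 0, mirroring the `new = 0` branch of the loop
df.loc[df.fold == 1, 'group_average'] = (
    df.loc[df.fold == 1, 'group'].map(train_means).fillna(0)
)
print(df)
```

<p>Because <code>map</code> aligns on the group labels, no row-wise <code>iterrows</code> pass is needed at all.</p>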
| 3 | 2016-10-19T08:17:38Z | [
"python",
"pandas",
"vectorization",
"cross-validation"
] |
two foreign keys in one model sqlalchemy duplicate | 40,123,291 | <p>There are other answers on Stack Overflow and I followed them. It's been 3 days with this problem; I have searched all the previous answers, and I feel that even if this is a repeat question I shouldn't be afraid to ask when I can't get it to work after so much researching.</p>
<p>the desired result is foreign keys in one model using sqlalchemy</p>
<pre><code>class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)

class OrderHistory(db.Model):
    id = db.Column(db.Integer(), primary_key=True)
    user_id = db.Column(db.Integer, db.ForeignKey('user.id'))
    seller_id = db.Column(db.Integer, db.ForeignKey('user.id'))
    user = db.relationship(User, foreign_keys=[user_id], backref='user')
    seller = db.relationship(User, foreign_keys=[seller_id], backref='seller')
</code></pre>
<p>But i keep getting this error</p>
<pre><code>AmbiguousForeignKeysError: Could not determine join condition between parent/child tables on relationship User.order_history - there are multiple foreign key paths linking the tables. Specify the 'foreign_keys' argument, providing a list of those columns which should be counted as containing a foreign key reference to the parent table.
</code></pre>
<p>What am I doing wrong?</p>
| 1 | 2016-10-19T05:55:31Z | 40,126,250 | <p>Check out <a href="http://docs.sqlalchemy.org/en/latest/orm/join_conditions.html#self-referential-many-to-many-relationship" rel="nofollow">this example</a> in the documentation. You are probably using an SQLAlchemy version <= 0.8, and that's why your code won't work.</p>
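<p>A minimal self-contained sketch of the fix (plain SQLAlchemy with an in-memory SQLite database rather than Flask-SQLAlchemy; the <code>buyer</code>/<code>purchases</code>/<code>sales</code> names are illustrative). The key point is that <em>every</em> relationship between the two tables, including any declared on <code>User</code> itself (which the traceback's <code>User.order_history</code> hints at), must name its own <code>foreign_keys</code>:</p>

```python
from sqlalchemy import Column, Integer, ForeignKey, create_engine
from sqlalchemy.orm import relationship, sessionmaker
try:  # SQLAlchemy >= 1.4
    from sqlalchemy.orm import declarative_base
except ImportError:  # older releases
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)

class OrderHistory(Base):
    __tablename__ = 'order_history'
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey('user.id'))
    seller_id = Column(Integer, ForeignKey('user.id'))
    # each relationship names its own FK column, and each backref is unique
    buyer = relationship(User, foreign_keys=[user_id], backref='purchases')
    seller = relationship(User, foreign_keys=[seller_id], backref='sales')

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

a, b = User(), User()
session.add(OrderHistory(buyer=a, seller=b))  # users are cascaded in
session.commit()
print(a.purchases[0].seller is b)
```

<p>Note that the original snippet's <code>backref='user'</code> would also create a confusingly named <code>User.user</code> collection; giving every backref a distinct name keeps the model readable.</p>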
| 0 | 2016-10-19T08:39:47Z | [
"python",
"sqlalchemy"
] |
Sublime text 3 plugin development for custom autocomplete like emmet? | 40,123,520 | <p>I would like to create my custom plugin like emmet for auto completion and tag expansion for html tags like <code>h2>span</code> .myclass should result in <code><div class="myclass"></div></code>.</p>
<p>I started, but didn't find any documentation on tracking the user's typing events or on defining a scope so the plugin is only applied to HTML files.</p>
<p>When I tried to use a print statement inside my class it threw a syntax error:</p>
<pre><code>def run(self, edit):
    print "i am in run"
    self.view.insert(edit, 0, "Hello, World!")
</code></pre>
<p>How can I debug my plugin code without a print statement, or is there an alternative for Sublime plugins?</p>
| 0 | 2016-10-19T06:12:30Z | 40,123,648 | <p>Typically, one doesn't program a plugin to track what the user types in Sublime Text, but instead binds a command to a keybinding. Then, when the user presses that certain key, under certain conditions, defined in the context of the keybinding, the command executes and looks at the text near the selection caret.</p>
<p>Sublime Text plugins are developed in Python 3, where <code>print</code> is not a statement, but a function. Therefore, you need to use <code>print('I am in "run"')</code> to output debug messages to the ST console.</p>
<p>For example, if this was your plugin code:</p>
<pre><code>import sublime
import sublime_plugin

class ThisIsAnExampleCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        print('I am in the "run" method of "ThisIsAnExampleCommand"')
        self.view.insert(edit, 0, "Hello, World!")
</code></pre>
<p>then you might define a keybinding like:</p>
<pre><code>{ "keys": ["tab"], "command": "this_is_an_example",
  "context":
  [
    { "key": "selector", "operator": "equal", "operand": "text.html", "match_all": true },
    { "key": "selection_empty", "operator": "equal", "operand": true, "match_all": true },
  ]
},
</code></pre>
<p>which would operate when the user presses <kbd>Tab</kbd>, but only if all selections are empty, and the syntax of the current file being edited is HTML.</p>
<p>Your plugin could look at <code>self.view.sel()</code> to get the selection/caret position(s).</p>
| 1 | 2016-10-19T06:20:47Z | [
"python",
"sublimetext3"
] |
Numba jitclass and inheritance | 40,123,521 | <p>I have a hierarchy of classes and I would like to speed up my code by using Numba jitclass. I have tested @jitclass on some examples without class inheritance and it works properly and speeds up the code. However, if I have class inheritance, an error occurs during compilation. Below is sample code demonstrating the problem. I would be very grateful for any comments and suggestions. For now it looks like class inheritance is not supported by Numba, but I did not find any information on this in the documentation.
Code example:</p>
<pre><code>import numpy as np
from numba import jitclass
from numba import int32, float32

spec = [
    ('n', int32),
    ('val', float32[:]),
]

@jitclass(spec)
class Parent(object):
    def __init__(self, n):
        self.n = n
        self.val = np.zeros(n, dtype=np.float32)

spec = [
    ('incr', float32),
]

@jitclass(spec)
class Child(Parent):
    def __init__(self, n):
        Parent.__init__(self, n)
        self.incr = 2.

    def func(self):
        for i in xrange(0, self.n):
            self.val[i] += self.incr
        return self.val

par = Parent(10)
chl = Child(10)
print chl.func()
</code></pre>
<p>The error I got is:</p>
<pre><code>TypeError: cannot subclass from a jitclass
</code></pre>
| 1 | 2016-10-19T06:12:33Z | 40,129,735 | <p>Currently (as of 0.28.1), Numba does not support subclassing/inheriting from a <code>jitclass</code>. It's not stated in the documentation but the error message is pretty explicit. I'm guessing this capability will be added sometime in the future, but right now it's a limitation.</p>
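<p>A common workaround in the meantime (not part of the answer; sketched here in plain Python so it runs anywhere) is composition instead of inheritance: the child holds the parent as a field. Under Numba, both classes would carry their own <code>@jitclass</code> spec, and the child's spec would type the field as <code>Parent.class_type.instance_type</code>, Numba's way of nesting one jitclass inside another:</p>

```python
import numpy as np

class Parent(object):
    def __init__(self, n):
        self.n = n
        self.val = np.zeros(n, dtype=np.float32)

class Child(object):
    def __init__(self, n):
        self.parent = Parent(n)   # has-a instead of is-a
        self.incr = 2.0

    def func(self):
        for i in range(self.parent.n):
            self.parent.val[i] += self.incr
        return self.parent.val

print(Child(10).func())  # ten entries, each 2.0
```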
| 1 | 2016-10-19T11:08:26Z | [
"python",
"inheritance",
"numba"
] |
Ubuntu: No module named tensorflow in IPython but works in Python (Anaconda environment) | 40,123,621 | <p>When I try to import tensorflow in IPython in my Anaconda environment, I get a <code>No module named tensorflow</code> error. However, when I import it after running the <code>python</code> command in the terminal, there are no errors.</p>
<p>I've googled for solutions and so far I have tried the following:</p>
<ul>
<li><p>copied the site-packages inside <code>/anaconda2/lib/python2.7/envs/tensorflow/lib/python2.7/site-packages/</code> to <code>/anaconda2/lib/python2.7/site-packages/</code></p></li>
<li><p>installed ipython in the conda environment with <code>conda install ipython</code></p></li>
</ul>
<p>Anyone know what else I could try?</p>
| 0 | 2016-10-19T06:18:51Z | 40,126,550 | <p>Refer: <a href="http://stackoverflow.com/questions/33646541/tensorflow-and-anaconda-on-ubuntu?rq=1">Tensorflow and Anaconda on Ubuntu?</a></p>
<p>I found a link where the tensorflow.whl files were converted to conda packages, so I went ahead and installed it using the command:</p>
<pre><code>conda install -c https://conda.anaconda.org/jjhelmus tensorflow
</code></pre>
<p>and it worked, since the $PATH points to anaconda packages, I can import it now!</p>
<p><a href="https://anaconda.org/jjhelmus/tensorflow" rel="nofollow">Source is here</a></p>
| 0 | 2016-10-19T08:54:21Z | [
"python",
"ubuntu",
"ipython",
"tensorflow",
"anaconda"
] |
get attribute from xml-node with specific value | 40,123,627 | <p>I have an XSD-file where I need to get a namespace as defined in the root-tag:</p>
<pre><code><schema xmlns="http://www.w3.org/2001/XMLSchema" xmlns:abw="http://www.liegenschaftsbestandsmodell.de/ns/abw/1.0.1.0" xmlns:adv="http://www.adv-online.de/namespaces/adv/gid/6.0" xmlns:bfm="http://www.liegenschaftsbestandsmodell.de/ns/bfm/1.0" xmlns:gml="http://www.opengis.net/gml/3.2" xmlns:sc="http://www.interactive-instruments.de/ShapeChange/AppInfo" elementFormDefault="qualified" targetNamespace="http://www.liegenschaftsbestandsmodell.de/ns/abw/1.0.1.0" version="1.0.1.0">
<!-- elements -->
</schema>
</code></pre>
<p>Now as the <code>targetNamespace</code> of this schema definition is <code>"http://www.liegenschaftsbestandsmodell.de/ns/abw/1.0.1.0"</code>, I need to get the short identifier for this namespace - which is <code>abw</code>. To get this identifier I have to find the attribute in the root tag that has the exact same value as my <code>targetNamespace</code> (I can't rely on the identifier being part of the <code>targetNamespace</code> string already; this may change in the future). </p>
<p>On the question <a href="http://stackoverflow.com/questions/4573237/how-to-extract-xml-attribute-using-python-elementtree">How to extract xml attribute using Python ElementTree</a> I found how to get the value of an attribute given its name. However, I don't know the attribute's name, only its value; so what can I do when I have a value and want to select the attribute having this value?</p>
<p>I think of something like this:</p>
<pre><code>for key in root.attrib.keys():
    if(root.attrib[key] == targetNamespace):
        return root.attrib[key]
</code></pre>
<p>but <code>root.attrib</code> only contains <code>elementFormDefault</code>, <code>targetNamespace</code> and <code>version</code>, but not <code>xmlns:abw</code>. </p>
| 1 | 2016-10-19T06:19:06Z | 40,124,264 | <p>The string must be Unicode, otherwise this error will appear:</p>
<pre><code>Traceback (most recent call last):
File "<pyshell#62>", line 1, in <module>
it = etree.iterparse(StringIO(xml))
TypeError: initial_value must be unicode or None, not str
</code></pre>
<p>code:</p>
<pre><code>>>> from io import StringIO
>>> from xml.etree import ElementTree
>>> xml=u"""<schema xmlns="http://www.w3.org/2001/XMLSchema" xmlns:abw="http://www.liegenschaftsbestandsmodell.de/ns/abw/1.0.1.0" xmlns:adv="http://www.adv-online.de/namespaces/adv/gid/6.0" xmlns:bfm="http://www.liegenschaftsbestandsmodell.de/ns/bfm/1.0" xmlns:gml="http://www.opengis.net/gml/3.2" xmlns:sc="http://www.interactive-instruments.de/ShapeChange/AppInfo" elementFormDefault="qualified" targetNamespace="http://www.liegenschaftsbestandsmodell.de/ns/abw/1.0.1.0" version="1.0.1.0">
<!-- elements -->
</schema>"""
>>> ns = dict([
...     node for _, node in ElementTree.iterparse(
...         StringIO(xml), events=['start-ns']
...     )
... ])
>>> for k, v in ns.iteritems():
...     if v == 'http://www.liegenschaftsbestandsmodell.de/ns/abw/1.0.1.0':
...         print k
</code></pre>
<p>output:</p>
<pre><code>abw
</code></pre>
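<p>On Python 3 the same idea needs no Unicode juggling, and <code>events=['start-ns']</code> still yields <code>(prefix, uri)</code> pairs. A self-contained sketch with shortened namespace URIs:</p>

```python
import io
import xml.etree.ElementTree as ET

xml = ('<schema xmlns="http://www.w3.org/2001/XMLSchema" '
       'xmlns:abw="http://example.org/ns/abw" '
       'targetNamespace="http://example.org/ns/abw" version="1.0"/>')

prefix_by_uri = {}
target = None
for event, payload in ET.iterparse(io.StringIO(xml), events=['start-ns', 'start']):
    if event == 'start-ns':          # payload is a (prefix, uri) pair
        prefix, uri = payload
        prefix_by_uri[uri] = prefix
    elif target is None:             # the first 'start' event is the root tag
        target = payload.get('targetNamespace')

print(prefix_by_uri[target])
```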
| 1 | 2016-10-19T06:57:25Z | [
"python",
"xml"
] |
get attribute from xml-node with specific value | 40,123,627 | <p>I have an XSD-file where I need to get a namespace as defined in the root-tag:</p>
<pre><code><schema xmlns="http://www.w3.org/2001/XMLSchema" xmlns:abw="http://www.liegenschaftsbestandsmodell.de/ns/abw/1.0.1.0" xmlns:adv="http://www.adv-online.de/namespaces/adv/gid/6.0" xmlns:bfm="http://www.liegenschaftsbestandsmodell.de/ns/bfm/1.0" xmlns:gml="http://www.opengis.net/gml/3.2" xmlns:sc="http://www.interactive-instruments.de/ShapeChange/AppInfo" elementFormDefault="qualified" targetNamespace="http://www.liegenschaftsbestandsmodell.de/ns/abw/1.0.1.0" version="1.0.1.0">
<!-- elements -->
</schema>
</code></pre>
<p>Now as the <code>targetNamespace</code> of this schema definition is <code>"http://www.liegenschaftsbestandsmodell.de/ns/abw/1.0.1.0"</code>, I need to get the short identifier for this namespace - which is <code>abw</code>. To get this identifier I have to find the attribute in the root tag that has the exact same value as my <code>targetNamespace</code> (I can't rely on the identifier being part of the <code>targetNamespace</code> string already; this may change in the future). </p>
<p>On the question <a href="http://stackoverflow.com/questions/4573237/how-to-extract-xml-attribute-using-python-elementtree">How to extract xml attribute using Python ElementTree</a> I found how to get the value of an attribute given its name. However, I don't know the attribute's name, only its value; so what can I do when I have a value and want to select the attribute having this value?</p>
<p>I think of something like this:</p>
<pre><code>for key in root.attrib.keys():
    if(root.attrib[key] == targetNamespace):
        return root.attrib[key]
</code></pre>
<p>but <code>root.attrib</code> only contains <code>elementFormDefault</code>, <code>targetNamespace</code> and <code>version</code>, but not <code>xmlns:abw</code>. </p>
| 1 | 2016-10-19T06:19:06Z | 40,125,451 | <p>Using minidom instead of ETree did it:</p>
<pre><code>import xml.dom.minidom as DOM
tree = DOM.parse(myFile)
root = tree.documentElement
targetNamespace = root.getAttribute("targetNamespace")
d = dict(root.attributes.items())
for key in d:
    if d[key] == targetNamespace:
        return key
</code></pre>
<p>This will return either <code>targetNamespace</code> or <code>xmlns:abw</code> depending on what comes first in the xsd. Of course we should ignore the first case, but this goes out of scope of this question.</p>
| 0 | 2016-10-19T07:59:56Z | [
"python",
"xml"
] |
Pandas Compare rows in Dataframe | 40,123,689 | <p>I have following data frame (represented by dictionary below):</p>
<pre><code>{'Name': {0: '204',
1: '110838',
2: '110999',
3: '110998',
4: '111155',
5: '111710',
6: '111157',
7: '111156',
8: '111144',
9: '118972',
10: '111289',
11: '111288',
12: '111145',
13: '121131',
14: '118990',
15: '110653',
16: '110693',
17: '110694',
18: '111577',
19: '111702',
20: '115424',
21: '115127',
22: '115178',
23: '111578',
24: '115409',
25: '115468',
26: '111711',
27: '115163',
28: '115149',
29: '115251'},
'Sequence_new': {0: 1.0,
1: 2.0,
2: 3.0,
3: 4.0,
4: 5.0,
5: 6.0,
6: 7.0,
7: 8.0,
8: 9.0,
9: 10.0,
10: 11.0,
11: 12.0,
12: nan,
13: 13.0,
14: 14.0,
15: 15.0,
16: 16.0,
17: 17.0,
18: 18.0,
19: 19.0,
20: 20.0,
21: 21.0,
22: 22.0,
23: 23.0,
24: 24.0,
25: 25.0,
26: 26.0,
27: 27.0,
28: 28.0,
29: 29.0},
'Sequence_old': {0: 1,
1: 2,
2: 3,
3: 4,
4: 5,
5: 6,
6: 7,
7: 8,
8: 9,
9: 10,
10: 11,
11: 12,
12: 13,
13: 14,
14: 15,
15: 16,
16: 17,
17: 18,
18: 19,
19: 20,
20: 21,
21: 22,
22: 23,
23: 24,
24: 25,
25: 26,
26: 27,
27: 28,
28: 29,
29: 30}}
</code></pre>
<p>I am trying to understand what changed between the old and new sequences. If, for a given <code>Name</code>, <code>Sequence_old == Sequence_new</code>, nothing changed. If <code>Sequence_new</code> is <code>'nan'</code>, the Name was removed. Can you please help implement this in pandas?
What I have tried so far without success:</p>
<pre><code>for i in range(0, len(Merge)):
    if Merge.iloc[i]['Sequence_x'] == Merge.iloc[i]['Sequence_y']:
        Merge.iloc[i]['New'] = 'N'
    else:
        Merge.iloc[i]['New'] = 'Y'
</code></pre>
<p>Thank you</p>
| 1 | 2016-10-19T06:23:23Z | 40,123,766 | <p>You can use double <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.where.html" rel="nofollow"><code>numpy.where</code></a> with condition with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isnull.html" rel="nofollow"><code>isnull</code></a>:</p>
<pre><code>mask = df.Sequence_old == df.Sequence_new
df['New'] = np.where(df.Sequence_new.isnull(), 'Removed',
                     np.where(mask, 'N', 'Y'))
</code></pre>
<pre><code>print (df)
Name Sequence_new Sequence_old New
0 204 1.0 1 N
1 110838 2.0 2 N
2 110999 3.0 3 N
3 110998 4.0 4 N
4 111155 5.0 5 N
5 111710 6.0 6 N
6 111157 7.0 7 N
7 111156 8.0 8 N
8 111144 9.0 9 N
9 118972 10.0 10 N
10 111289 11.0 11 N
11 111288 12.0 12 N
12 111145 NaN 13 Removed
13 121131 13.0 14 Y
14 118990 14.0 15 Y
15 110653 15.0 16 Y
16 110693 16.0 17 Y
17 110694 17.0 18 Y
18 111577 18.0 19 Y
19 111702 19.0 20 Y
20 115424 20.0 21 Y
21 115127 21.0 22 Y
22 115178 22.0 23 Y
23 111578 23.0 24 Y
24 115409 24.0 25 Y
25 115468 25.0 26 Y
26 111711 26.0 27 Y
27 115163 27.0 28 Y
28 115149 28.0 29 Y
29 115251 29.0 30 Y
</code></pre>
| 1 | 2016-10-19T06:27:51Z | [
"python",
"pandas",
null,
"missing-data"
] |
Pandas Compare rows in Dataframe | 40,123,689 | <p>I have following data frame (represented by dictionary below):</p>
<pre><code>{'Name': {0: '204',
1: '110838',
2: '110999',
3: '110998',
4: '111155',
5: '111710',
6: '111157',
7: '111156',
8: '111144',
9: '118972',
10: '111289',
11: '111288',
12: '111145',
13: '121131',
14: '118990',
15: '110653',
16: '110693',
17: '110694',
18: '111577',
19: '111702',
20: '115424',
21: '115127',
22: '115178',
23: '111578',
24: '115409',
25: '115468',
26: '111711',
27: '115163',
28: '115149',
29: '115251'},
'Sequence_new': {0: 1.0,
1: 2.0,
2: 3.0,
3: 4.0,
4: 5.0,
5: 6.0,
6: 7.0,
7: 8.0,
8: 9.0,
9: 10.0,
10: 11.0,
11: 12.0,
12: nan,
13: 13.0,
14: 14.0,
15: 15.0,
16: 16.0,
17: 17.0,
18: 18.0,
19: 19.0,
20: 20.0,
21: 21.0,
22: 22.0,
23: 23.0,
24: 24.0,
25: 25.0,
26: 26.0,
27: 27.0,
28: 28.0,
29: 29.0},
'Sequence_old': {0: 1,
1: 2,
2: 3,
3: 4,
4: 5,
5: 6,
6: 7,
7: 8,
8: 9,
9: 10,
10: 11,
11: 12,
12: 13,
13: 14,
14: 15,
15: 16,
16: 17,
17: 18,
18: 19,
19: 20,
20: 21,
21: 22,
22: 23,
23: 24,
24: 25,
25: 26,
26: 27,
27: 28,
28: 29,
29: 30}}
</code></pre>
<p>I am trying to understand what changed between the old and new sequences. If, for a given <code>Name</code>, <code>Sequence_old == Sequence_new</code>, nothing changed. If <code>Sequence_new</code> is <code>'nan'</code>, the Name was removed. Can you please help implement this in pandas?
What I have tried so far without success:</p>
<pre><code>for i in range(0, len(Merge)):
    if Merge.iloc[i]['Sequence_x'] == Merge.iloc[i]['Sequence_y']:
        Merge.iloc[i]['New'] = 'N'
    else:
        Merge.iloc[i]['New'] = 'Y'
</code></pre>
<p>Thank you</p>
| 1 | 2016-10-19T06:23:23Z | 40,123,957 | <pre><code>dic_new = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0, 4: 5.0, 5: 6.0, 6: 7.0, 7: 8.0, 8: 9.0, 9: 10.0, 10: 11.0, 11: 12.0,
12: 'Nan', 13: 13.0, 14: 14.0, 15: 15.0, 16: 16.0, 17: 17.0, 18: 18.0, 19: 19.0, 20: 20.0, 21: 21.0,
22: 22.0, 23: 23.0, 24: 24.0, 25: 25.0, 26: 26.0, 27: 27.0, 28: 28.0, 29: 29.0}
dic_old = {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8, 8: 9, 9: 10, 10: 11, 11: 12, 12: 13, 13: 14, 14: 15, 15: 16,
16: 17, 17: 18, 18: 19, 19: 20, 20: 21, 21: 22, 22: 23, 23: 24, 24: 25, 25: 26, 26: 27, 27: 28, 28: 29,
29: 30}
# Does the same thing as the list comprehension below
for a, b in zip(dic_new.items(), dic_old.items()):
    # note: the 'Nan' placeholder lives in dic_new, so test the *new* value
    if str(a[1]).lower() != 'nan':
        # You can add whatever print statement you want here
        print(a[1] == b[1])
# Does the same thing as the loop above
[print(a[1] == b[1]) for a, b in zip(dic_new.items(), dic_old.items()) if str(a[1]).lower() != 'nan']
</code></pre>
| 0 | 2016-10-19T06:39:04Z | [
"python",
"pandas",
null,
"missing-data"
] |
mongodb regex embedded search | 40,123,779 | <p>My data looks like this:</p>
<pre><code>blade = {
'model': 'FW254',
'items':{
'a': {'time':5, 'count':7},
'b': {'time':4, 'count':8},
'c': {'time':2, 'count':9}
}
}
</code></pre>
<p>I want to do something like this:</p>
<pre><code>collection.find({"items./.*/.time": { "$gte":4}})
</code></pre>
<p>Here the element <code>/.*/</code> is the regex intended to match 'a', 'b', 'c'.</p>
<p>Of course this won't work. My goal is to find the blades with embedded time >= 4. Is that possible? thanks!</p>
| 0 | 2016-10-19T06:28:37Z | 40,124,750 | <p>Where did you read about using a regex in a spot like that?</p>
<p>You should reconsider the structure of your data and turn <code>items</code> into an array, allowing you to use <a href="https://docs.mongodb.com/v3.2/reference/operator/query/elemMatch/#op._S_elemMatch" rel="nofollow"><code>$elemMatch</code></a>:</p>
<pre><code>{ "_id" : ObjectId("58071dd31f4fbf7309ae639a"), "model" : "FW254",
  "items" : [ { "name" : "a", "time" : 5, "count" : 7 },
              { "name" : "b", "time" : 4, "count" : 8 },
              { "name" : "c", "time" : 2, "count" : 9 }
            ] }

collection.find({'items': {'$elemMatch': {'time': {'$gte': 4}}}})
</code></pre>
| 0 | 2016-10-19T07:21:48Z | [
"python",
"regex",
"mongodb",
"nested"
] |
Getting HTTP POST Error : {"reason":null,"error":"Request JSON object for insert cannot be null."} | 40,123,820 | <p>I am getting HTTP POST error when I am trying to connect to a Service Now Instance for Change Request Automation using Python. Here is the script I am using with Python 3.4.4</p>
<pre><code># SNOW CR AUTOMATION SCRIPT
import requests
import json
# put the ip address or dns of your SNOW API in this url
url = 'http://<>/change_request.do?JSONv2&sysparm_action=insert'
data= {
'short_description': '<value>',
'priority': '<value>',
'reason': '<value>',
'u_reason_for_change': '<value>',
'u_business_driver': '<value>',
'u_plan_of_record_id': '<value>'
}
print ("Data Inserted :")
print (data)
#Content type must be included in the header
header = {"Authorization":"Basic V1NfRVRPX1ROOkBiY2RlNTQzMjE=","Content-Type":"application/json"}
#Performs a POST on the specified url.
response = requests.request('POST', url, auth=("<value>","<value>"), json=data, headers=header)
print ( " Header is : ")
print (response.headers)
print (" ")
print ( "HTTP Response is :" )
print (response)
print (" ")
print ("***********************")
print (" Output : ")
print ( response.text)
</code></pre>
<p>I am getting an error as below while running the above script.</p>
<pre><code>Output :
{"reason":null,"error":"Request JSON object for insert cannot be null."}
</code></pre>
<p>I am not sure why this error is thrown. Can anybody please help on this ?</p>
| 0 | 2016-10-19T06:31:08Z | 40,128,799 | <p>First of all, in ServiceNow you should always use SSL, so no http!
The second error I see in your script is how you pass your payload: you need to transform your dictionary into a JSON string. Also, you don't need to authenticate twice; basic HTTP authentication is handled by requests.post, so there is no need for it in the header.</p>
<p>With this script it should work:</p>
<pre><code>import json
import requests
url = 'https://instancename.service-now.com/change_request.do?JSONv2'
user = 'admin'
pwd = 'admin'
# Set proper headers
headers = {"Content-Type":"application/json","Accept":"application/json"}
payload = {
'sysparm_action': 'insert',
'short_description': 'test_jsonv2',
'priority': '1'
}
# Do the HTTP request
response = requests.post(url, auth=(user, pwd), headers=headers, data=json.dumps(payload))
# Check for HTTP codes other than 200
if response.status_code != 200:
    print('Status:', response.status_code, 'Headers:', response.headers, 'Error Response:', response.json())
    exit()
# Decode the JSON response into a dictionary and use the data
data = response.json()
print(data)
</code></pre>
| 0 | 2016-10-19T10:30:16Z | [
"python",
"rest",
"post",
"servicenow"
] |
Getting HTTP POST Error : {"reason":null,"error":"Request JSON object for insert cannot be null."} | 40,123,820 | <p>I am getting HTTP POST error when I am trying to connect to a Service Now Instance for Change Request Automation using Python. Here is the script I am using with Python 3.4.4</p>
<pre><code># SNOW CR AUTOMATION SCRIPT
import requests
import json
# put the ip address or dns of your SNOW API in this url
url = 'http://<>/change_request.do?JSONv2&sysparm_action=insert'
data= {
'short_description': '<value>',
'priority': '<value>',
'reason': '<value>',
'u_reason_for_change': '<value>',
'u_business_driver': '<value>',
'u_plan_of_record_id': '<value>'
}
print ("Data Inserted :")
print (data)
#Content type must be included in the header
header = {"Authorization":"Basic V1NfRVRPX1ROOkBiY2RlNTQzMjE=","Content-Type":"application/json"}
#Performs a POST on the specified url.
response = requests.request('POST', url, auth=("<value>","<value>"), json=data, headers=header)
print ( " Header is : ")
print (response.headers)
print (" ")
print ( "HTTP Response is :" )
print (response)
print (" ")
print ("***********************")
print (" Output : ")
print ( response.text)
</code></pre>
<p>I am getting an error as below while running the above script.</p>
<pre><code>Output :
{"reason":null,"error":"Request JSON object for insert cannot be null."}
</code></pre>
<p>I am not sure why this error is thrown. Can anybody please help on this ?</p>
| 0 | 2016-10-19T06:31:08Z | 40,143,200 | <p>This is a working example I tested on my instance. I am using the REST Table API to insert a change request. It's not true that it cannot be http; it is whatever protocol your instance allows for connections, say from a browser.</p>
<pre><code>#Need to install requests package for python
#easy_install requests
import requests
# Set the request parameters
url = '<yourinstance base url>/api/now/table/change_request'
user = <username>
pwd = <password>
# Set proper headers
headers = {"Content-Type":"application/json","Accept":"application/json"}
# Do the HTTP request
response = requests.post(url, auth=(user, pwd), headers=headers ,data="{\"short_description\":\"test in python\"}")
# Check for HTTP codes other than 201
if response.status_code != 201:
    print('Status:', response.status_code, 'Headers:', response.headers, 'Error Response:', response.json())
    exit()
# Decode the JSON response into a dictionary and use the data
data = response.json()
print(data)
</code></pre>
| 1 | 2016-10-19T23:37:20Z | [
"python",
"rest",
"post",
"servicenow"
] |
Python integer to hex string | 40,123,901 | <p>Consider an integer 2. I want to convert it into hex string '0x02'. By using python's build-in function hex, I can get '0x2' which is not suitable for my code. Can any one show me how to get what I want in a convenient way. Thank you. </p>
| -4 | 2016-10-19T06:36:22Z | 40,123,984 | <pre><code>integer = 2
hex_string = '0x{:02x}'.format(integer)
</code></pre>
<p>See <a href="https://www.python.org/dev/peps/pep-3101/" rel="nofollow">pep 3101</a>, especially <em>Standard Format Specifiers</em> for more info.</p>
| 4 | 2016-10-19T06:40:21Z | [
"python",
"hex"
] |
Python integer to hex string | 40,123,901 | <p>Consider an integer 2. I want to convert it into hex string '0x02'. By using python's build-in function hex, I can get '0x2' which is not suitable for my code. Can any one show me how to get what I want in a convenient way. Thank you. </p>
| -4 | 2016-10-19T06:36:22Z | 40,124,000 | <p>Simply add a leading zero if needed:</p>
<pre><code>'0x' + hex(i)[2:].rjust(2, '0')
</code></pre>
| 0 | 2016-10-19T06:41:10Z | [
"python",
"hex"
] |
Python integer to hex string | 40,123,901 | <p>Consider an integer 2. I want to convert it into hex string '0x02'. By using python's build-in function hex, I can get '0x2' which is not suitable for my code. Can any one show me how to get what I want in a convenient way. Thank you. </p>
| -4 | 2016-10-19T06:36:22Z | 40,124,604 | <pre><code>>>> integer = 2
>>> hex_string = format(integer, '#04x') # add 2 to field width for 0x
>>> hex_string
'0x02'
</code></pre>
<p>See <a href="https://docs.python.org/3.4/library/string.html#format-specification-mini-language" rel="nofollow">Format Specification Mini-Language</a></p>
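<p>On Python 3.6+ the same format specifier also works inside an f-string:</p>

```python
integer = 2
print(f'{integer:#04x}')  # 0x02
```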
| 0 | 2016-10-19T07:14:49Z | [
"python",
"hex"
] |
Error in inserting python variable in mysql table | 40,123,910 | <p>I am working on a Raspberry Pi project, in which I'm fetching data from a PLC and storing it into a MySQL database.</p>
<p>Here is my code:</p>
<pre><code>import minimalmodbus
import serial
import mysql.connector
instrument = minimalmodbus.Instrument('/dev/ttyAMA0',3,mode='rtu')
instrument.serial.baudrate=115200
instrument.serial.parity = serial.PARITY_NONE
instrument.serial.bytesize = 8
instrument.serial.stopbits = 1
instrument.serial.timeout = 0.05
con = mysql.connector.connect(user='root',password='raspberry',host='localhost',
database='Fujiplc')
cursor = con.cursor()
try:
    reg_value=instrument.read_register(102)
    print reg_value
    cursor.execute("insert into Register_Values values(%s)",(reg_value))
    print ('One row inserted successfully.')
except IOError:
    print("Failed to read from PLC.")
print (cursor.rowcount)
con.commit()
cursor.close()
con.close()
</code></pre>
<p>After running this code, I get the following error:</p>
<pre><code>Traceback (most recent call last):
  File "/home/pi/rpi_to_plc_read.py", line 22, in <module>
    cursor.execute("insert into Register_Values values(%d)",(reg_value))
  File "/usr/local/lib/python2.7/dist-packages/mysql/connector/cursor.py", line 477, in execute
    stmt = operation % self._process_params(params)
  File "/usr/local/lib/python2.7/dist-packages/mysql/connector/cursor.py", line 355, in _process_params
    "Failed processing format-parameters; %s" % err)
ProgrammingError: Failed processing format-parameters; argument 2 to map() must support iteration
<p>I have gone through so many solutions but couldn't solve the problem.
Please help me. </p>
| 2 | 2016-10-19T06:37:00Z | 40,124,043 | <p>Pretty common error in python.</p>
<p><code>(reg_value)</code> is not a tuple
<code>(reg_value,)</code> is a tuple</p>
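<p>A quick interpreter check makes the difference visible: parentheses alone are just grouping, while the trailing comma is what actually builds the tuple that <code>cursor.execute</code> expects as its parameter sequence.</p>

```python
reg_value = 7
print(type((reg_value)))    # <class 'int'>   - parentheses alone do nothing
print(type((reg_value,)))   # <class 'tuple'> - the comma makes the tuple
```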
| 0 | 2016-10-19T06:43:42Z | [
"python",
"mysql",
"raspberry-pi"
] |
Error in inserting python variable in mysql table | 40,123,910 | <p>I am working on a Raspberry Pi project, in which I'm fetching data from a PLC and storing it into a MySQL database.</p>
<p>Here is my code:</p>
<pre><code>import minimalmodbus
import serial
import mysql.connector
instrument = minimalmodbus.Instrument('/dev/ttyAMA0',3,mode='rtu')
instrument.serial.baudrate=115200
instrument.serial.parity = serial.PARITY_NONE
instrument.serial.bytesize = 8
instrument.serial.stopbits = 1
instrument.serial.timeout = 0.05
con = mysql.connector.connect(user='root',password='raspberry',host='localhost',
database='Fujiplc')
cursor = con.cursor()
try:
    reg_value=instrument.read_register(102)
    print reg_value
    cursor.execute("insert into Register_Values values(%s)",(reg_value))
    print ('One row inserted successfully.')
except IOError:
    print("Failed to read from PLC.")
print (cursor.rowcount)
con.commit()
cursor.close()
con.close()
</code></pre>
<p>After running this code, I get the following error:</p>
<pre><code>Traceback (most recent call last):
  File "/home/pi/rpi_to_plc_read.py", line 22, in <module>
    cursor.execute("insert into Register_Values values(%d)",(reg_value))
  File "/usr/local/lib/python2.7/dist-packages/mysql/connector/cursor.py", line 477, in execute
    stmt = operation % self._process_params(params)
  File "/usr/local/lib/python2.7/dist-packages/mysql/connector/cursor.py", line 355, in _process_params
    "Failed processing format-parameters; %s" % err)
ProgrammingError: Failed processing format-parameters; argument 2 to map() must support iteration
<p>I have gone through so many solutions but couldn't solve the problem.
Please help me. </p>
| 2 | 2016-10-19T06:37:00Z | 40,125,006 | <p>I think it should be:</p>
<pre><code>cursor.execute("insert into Register_Values values(%s)",(reg_value))
con.commit()
</code></pre>
| 1 | 2016-10-19T07:37:03Z | [
"python",
"mysql",
"raspberry-pi"
] |
understanding marshmallow nested schema with list data | 40,123,990 | <p>I am new to Python and am using <a href="https://marshmallow.readthedocs.io/en/latest/" rel="nofollow">marshmallow</a> serialization, and I am unable to use the nested schema.
My code: </p>
<pre><code>from sqlalchemy import Column, Float, Integer, String, Text, text,ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship
Base = declarative_base()
metadata = Base.metadata
class CompanyDemo(Base):
__tablename__ = 'company_demo'
company_id = Column(Integer, primary_key=True,
server_default=text("nextval('company_demo_company_id_seq'::regclass)"))
name = Column(Text, nullable=False)
address = Column(String(50))
location = Column(String(50))
class UsersDemo(Base):
__tablename__ = 'users_demo'
id = Column(Integer, primary_key=True,
server_default=text("nextval('users_demo_id_seq'::regclass)"))
company_id = Column(Integer,ForeignKey('company_demo.company_id'), nullable=False)
email = Column(String)
company = relationship('CompanyDemo')
</code></pre>
<p>schema </p>
<pre><code> from marshmallow import Schema, fields, pprint
class CompanySchema(Schema):
company_id = fields.Int(dump_only=True)
name = fields.Str()
address = fields.Str()
location = fields.Str()
class UserSchema(Schema):
email = fields.Str()
company = fields.Nested(CompanySchema)
user = UserSchema()
user = UserSchema(many=True)
company = CompanySchema()
company = CompanySchema(many=True)
</code></pre>
<p>and my flask app </p>
<pre><code> from flask import Flask, jsonify, url_for, render_template
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from flask_sqlalchemy import SQLAlchemy
from model import CompanyDemo, UsersDemo
from schemas.userschema import user, company
app = Flask(__name__)
app.secret_key = "shiva"
def db_connect():
engine = create_engine('postgresql://ss@127.0.0.1:5432/test')
Session = sessionmaker(autocommit=False, autoflush=False, bind=engine)
# create a Session
session = Session()
session._model_changes = {}
return session
@app.route('/company', methods=["GET", "POST"])
def get_all_company():
db = db_connect()
allcompany = db.query(CompanyDemo).join(UsersDemo).all()
return jsonify(company.dump(allcompany, many=True).data) # company is marshmallow schema
if __name__ == '__main__':
app.run(host='0.0.0.0', port=15418, debug=True)
</code></pre>
<p>Is there anything wrong in my code? I am facing a problem with the nested schema and am unable to get the nested data in the output.</p>
<p>The output is below: </p>
<blockquote>
<p>[ {
"address": "qqq ",
"company_id": 1,
"location": "www ",
"name": "eee" }, {
"address": "www ",
"company_id": 2,
"location": "qqq ",
"name": "aaa" } ]</p>
</blockquote>
| 0 | 2016-10-19T06:40:36Z | 40,131,490 | <p>Self contained example using in-memory SQLite:</p>
<pre><code>from flask import Flask, jsonify
from flask.ext.sqlalchemy import SQLAlchemy
from marshmallow import Schema, fields, pprint
app = Flask(__name__)
app.config['DEBUG'] = True
app.config['SECRET_KEY'] = 'super-secret'
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///:memory:'
app.config['SQLALCHEMY_ECHO'] = True
db = SQLAlchemy(app)
class CompanyDemo(db.Model):
__tablename__ = 'company_demo'
company_id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.Text, nullable=False)
address = db.Column(db.String(50))
location = db.Column(db.String(50))
def __unicode__(self):
return u"{name} ({address})".format(name=self.name, address=self.address)
class UsersDemo(db.Model):
__tablename__ = 'users_demo'
id = db.Column(db.Integer, primary_key=True,)
company_id = db.Column(db.Integer, db.ForeignKey('company_demo.company_id'), nullable=False)
company = db.relationship('CompanyDemo')
email = db.Column(db.String)
def __unicode__(self):
return u"{email}".format(email=self.email)
class CompanySchema(Schema):
company_id = fields.Int(dump_only=True)
name = fields.Str()
address = fields.Str()
location = fields.Str()
class UserSchema(Schema):
email = fields.Str()
company = fields.Nested(CompanySchema)
user_schema = UserSchema()
company_schema = CompanySchema()
@app.route('/')
def index():
return "<a href='/dump_company'>Dump Company</a><br><a href='/dump_user'>Dump User</a>"
@app.route('/dump_user')
def dump_user():
user = UsersDemo.query.first()
return jsonify(user_schema.dump(user).data)
@app.route('/dump_company')
def dump_company():
company = CompanyDemo.query.first()
return jsonify(company_schema.dump(company).data)
def build_db():
db.drop_all()
db.create_all()
company = CompanyDemo(name='Test 1', address='10 Downing Street', location='wherever')
db.session.add(company)
user = UsersDemo(email='fred@example.com', company=company)
db.session.add(user)
db.session.commit()
@app.before_first_request
def first_request():
build_db()
if __name__ == '__main__':
app.run(debug=True, port=7777)
</code></pre>
| 1 | 2016-10-19T12:31:50Z | [
"python",
"flask",
"marshmallow"
] |
the use of regular expression | 40,124,242 | <p>I'm new to regular expressions, but I want to match a pattern in about 2 million strings.
There are three forms of the original strings, shown as follows:</p>
<pre><code>EC-2A-07<EC-1D-10>
EC-2-07
T1-ZJF-4
</code></pre>
<p>I want to get the three substrings separated by <code>-</code>, which is to say I want to get <code>EC</code>, <code>2A</code>, <code>07</code> respectively. Especially, for the first string, I just want the part before <code><</code>.</p>
<p>I have tried <code>.+[\d]\W</code>, but it cannot recognize <code>EC-2-07</code>. So instead I use <code>.split('-')</code> to split the string and then index the returned list to get what I want. But that is inefficient.</p>
<p>Can you figure out a highly efficient regular expression to meet my requirements? Thanks a lot!</p>
| -1 | 2016-10-19T06:55:43Z | 40,124,727 | <p>You can try this:</p>
<pre><code>^(\w+)-(\w+)-(\w+)(?=\W).*$
</code></pre>
<p><a href="https://regex101.com/r/5D9BeE/1" rel="nofollow">Explanation</a></p>
<p><a href="https://repl.it/EAM9" rel="nofollow">Python Demo</a></p>
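<p>For reference, here is a quick sketch applying a close variant of that pattern in Python; the <code>(?=\W)</code> lookahead is dropped so that a bare string like <code>EC-2-07</code>, with no trailing character, also matches:</p>

```python
import re

pattern = re.compile(r'^(\w+)-(\w+)-(\w+)')

for s in ['EC-2A-07<EC-1D-10>', 'EC-2-07', 'T1-ZJF-4']:
    print(pattern.match(s).groups())
# ('EC', '2A', '07')
# ('EC', '2', '07')
# ('T1', 'ZJF', '4')
```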
| 0 | 2016-10-19T07:20:42Z | [
"python",
"regex",
"string"
] |
Python C Api: transfer dynamically allocated array to Python | 40,124,249 | <p>I am using Python and I want to do numerical calculations (Runge-Kutta for a scalar ODE, <em>dy/dt = f(t,y)</em> ) on huge arrays (size can be up to 8000 x 500 ). To speed it up, I implemented the routine in C. I am using the Python C API to pass the data to C, do the calculation and then send it back.
Since such a huge array is involved, I use dynamic memory allocation to store the calculated result (labelled R, see code).</p>
<p>Now I have problems sending this array back to Python. At the moment I am using <strong>Py_BuildValue</strong> (see code). Building the <em>.pyd</em> extension works, but when I execute the Python code, I get the message "python.exe stops working". </p>
<p>I would like to ask if you could help me implement it the right way. :)
[I omitted some variables in the code, to make it a little shorter]</p>
<pre><code># include <stdlib.h>
# include <stdio.h>
# include <math.h>
# include <Python.h>
# include <numpy\arrayobject.h>
static PyObject* integrateODE(PyObject* self, PyObject* args);
double func ( double t, double u , double a );
int Nt, Nr, m;
double dt, rho0, eta1, eta2, rho2decay;
static PyObject* integrateODE(PyObject* self, PyObject* args)
{
int i,k, len;
double *mag;
double u, u0, u1,u2,u3 ;
double a0, a1;
double f0, f1, f2, f3;
PyObject *U;
double *R;
PyArg_ParseTuple(args, "Offifffii", &U, &dt, &rho0, &m, &eta1, &eta2,&rho2decay, &Nt, &Nr);
R = malloc(sizeof(double) * Nt*Nr);
len = PySequence_Length(U);
mag= malloc(sizeof(double) * Nt*Nr);
while (len--) {
*(mag + len) = (double) PyFloat_AsDouble(PySequence_GetItem(U, len));
}
for (k=0; k<Nr; k++ )
{
u0 = rho0;
for (i=0; i<Nt; i++ )
{
a0=mag[k*Nt + i];
a1=mag[k*Nt + i+1];
f0 = func( t0, u0, a0 );
u1 = u0 + dt * f0 / 2.0;
f1 = func( t1, u1, a1 );
u2 = u0 + dt * f1 / 2.0;
f2 = func( t2, u2, a1 );
u3 = u0 + dt * f2;
f3 = func( t3, u3, a0 );
u = u0 + dt * ( f0 + 2.0 * f1 + 2.0 * f2 + f3 ) / 6.0;
R[k*Nt + i] = u;
u0 = u;
}
}
return Py_BuildValue("O", R);
}
double func ( double t, double u, double a ){
dydt = -u*u + a * u + a;
return dydt;
}
static PyMethodDef ODE_Methods[] =
{
{"integrateODE", integrateODE, METH_VARARGS, NULL},
{NULL, NULL, 0, NULL}
};
PyMODINIT_FUNC initodeNRP(void) {
(void) Py_InitModule("odeNRP", ODE_Methods);
}
</code></pre>
<p>Thank you very much for any help you can give me:)</p>
| 1 | 2016-10-19T06:56:00Z | 40,124,509 | <p>In your allocation:</p>
<pre><code>mag size = Nt*Nr,
</code></pre>
<p>In your loop, when i is max and k is max:</p>
<pre><code>k=Nr-1, i=Nt-1,
</code></pre>
<p>Hence:</p>
<p><code>a0=mag[k*Nt + i];</code> -> <code>(Nt-1)+Nt*(Nr-1)=Nt-1+NtNr-Nt=NtNr-1</code>, and you call</p>
<p><code>a1=mag[k*Nt + i+1];</code>, <code>(Nt-1)+Nt*(Nr-1)+1=Nt-1+NtNr-Nt+1=NtNr</code> which is <code>NtNr</code> -> out of bounds</p>
<p>Also you may try to use <code>PyList_New(length);</code> or <code>PyArray_SimpleNewFromData(nd, dims, NPY_DOUBLE, a);</code> to return your array.</p>
| 1 | 2016-10-19T07:09:41Z | [
"python",
"c",
"python-c-api"
] |
Error in NLTK file | 40,124,399 | <p>I have installed Anaconda3-4.2.0 for Windows (64 bit) and nltk-3.2.1. While I am running the following code in Jupyter Notebook:</p>
<pre><code>para = "Hello World. It's good to see you. Thanks for buying this book."
import nltk.data
tokenizer = nltk.data.load('tokenizers/punkt/PY3/english.pickle')
tokenizer.tokenize(para)
</code></pre>
<p>I am getting the following error:</p>
<pre><code>OSError Traceback (most recent call last)
<ipython-input-1-a87e01558cc4> in <module>()
1 para = "Hello World. It's good to see you. Thanks for buying this book."
2 import nltk.data
----> 3 tokenizer = nltk.data.load('tokenizers/punkt/PY3/english.pickle')
4 tokenizer.tokenize(para)
C:\Anaconda3\lib\site-packages\nltk\data.py in load(resource_url, format, cache, verbose, logic_parser, fstruct_reader, encoding)
799
800 # Load the resource.
--> 801 opened_resource = _open(resource_url)
802
803 if format == 'raw':
C:\Anaconda3\lib\site-packages\nltk\data.py in _open(resource_url)
917
918 if protocol is None or protocol.lower() == 'nltk':
--> 919 return find(path_, path + ['']).open()
920 elif protocol.lower() == 'file':
921 # urllib might not use mode='rb', so handle this one ourselves:
C:\Anaconda3\lib\site-packages\nltk\data.py in find(resource_name, paths)
607 return GzipFileSystemPathPointer(p)
608 else:
--> 609 return FileSystemPathPointer(p)
610 else:
611 p = os.path.join(path_, url2pathname(zipfile))
C:\Anaconda3\lib\site-packages\nltk\compat.py in _decorator(*args, **kwargs)
559 def _decorator(*args, **kwargs):
560 args = (args[0], add_py3_data(args[1])) + args[2:]
--> 561 return init_func(*args, **kwargs)
562 return wraps(init_func)(_decorator)
563
C:\Anaconda3\lib\site-packages\nltk\data.py in __init__(self, _path)
298 _path = os.path.abspath(_path)
299 if not os.path.exists(_path):
--> 300 raise IOError('No such file or directory: %r' % _path)
301 self._path = _path
302
OSError: No such file or directory: 'C:\\nltk_data\\tokenizers\\punkt\\PY3\\PY3\\english.pickle'
</code></pre>
<p>I have downloaded the punkt tokenizer in nltk. Why am I seeing this error? Please give me an answer.</p>
| 0 | 2016-10-19T07:03:40Z | 40,125,718 | <p>It seems the tokenizers/punkt/PY3/english.pickle file does not exist. Note that the traceback shows the path with PY3 doubled ('...\\PY3\\PY3\\english.pickle'): nltk adds the PY3 subdirectory automatically on Python 3 (see add_py3_data in the traceback), so loading 'tokenizers/punkt/english.pickle' should also work. You need to check that the data is actually installed.</p>
<p>NLTK can download the pickle file using the download function:</p>
<pre><code>import nltk
nltk.download()
</code></pre>
| 0 | 2016-10-19T08:13:36Z | [
"python"
] |
Weighted clustering of tags | 40,124,423 | <p>I have a list of products; each product is tagged, with a weight associated with each tag. Now I want to cluster them into groups of similar products. How do I go about it? I have tried k-means from scikit-learn, but that is not helping much.</p>
<pre><code>Product 1: a=2.5 b=3.5 c=1 d=1
Product 2: a=0.25 c=2
Product 3: e=2 k=5
.
.
.
.
.
.
.
.
Product n: a=3 b=0.75
</code></pre>
<p>Now I want these to be clustered. I also want a product to be able to be in many clusters if necessary, because 1, 2, 3 can form a cluster and 2, 4, 5 can form another.</p>
| 1 | 2016-10-19T07:04:42Z | 40,124,530 | <p>You could use a <a href="https://en.wikipedia.org/wiki/Mixture_model#Gaussian_mixture_model" rel="nofollow">Gaussian Mixture Model</a>, which can be seen as a generalisation of k-means that allows soft clusters. You can have K clusters, and each entry belongs to every cluster to a certain degree. This degree is the probability of the entry under that cluster.
Luckily there is <a href="http://scikit-learn.org/stable/modules/mixture.html" rel="nofollow">scikit-learn code</a> for this.</p>
<p>You can treat the set of tags across all products as defining a feature space for the entries. The presence of a tag on a product means that product will have a non-zero entry, equal to the weight, in the position corresponding to that tag. From there, you have a fixed vector to describe entries and GMMs can be applied.</p>
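<p>A minimal sketch of that idea with scikit-learn (the tag weights below are made up; in practice each row would come from a product's tag vector, with 0 for absent tags):</p>

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Rows are products, columns are tags (a, b, c); absent tags get weight 0.
X = np.array([
    [2.5, 3.5, 1.0],
    [0.25, 0.0, 2.0],
    [3.0, 0.75, 0.0],
    [2.4, 3.4, 0.9],
])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
probs = gmm.predict_proba(X)  # soft memberships: each row sums to 1
print(probs.shape)  # (4, 2)
```

<p>A product can then be assigned to every cluster whose membership probability exceeds some threshold, which gives the overlapping clusters asked for.</p>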
<hr>
<p>However, it is really hard to evaluate unsupervised learning approaches like this. Rather, you should evaluate methods in light of the downstream task they are used for, like suggesting products to people, detecting fraud, detecting duplicates, etc. </p>
| 0 | 2016-10-19T07:10:44Z | [
"python",
"python-3.x",
"machine-learning",
"cluster-analysis",
"data-mining"
] |
Weighted clustering of tags | 40,124,423 | <p>I have a list of products; each product is tagged, with a weight associated with each tag. Now I want to cluster them into groups of similar products. How do I go about it? I have tried k-means from scikit-learn, but that is not helping much.</p>
<pre><code>Product 1: a=2.5 b=3.5 c=1 d=1
Product 2: a=0.25 c=2
Product 3: e=2 k=5
.
.
.
.
.
.
.
.
Product n: a=3 b=0.75
</code></pre>
<p>Now I want these to be clustered. I also want a product to be able to be in many clusters if necessary, because 1, 2, 3 can form a cluster and 2, 4, 5 can form another.</p>
| 1 | 2016-10-19T07:04:42Z | 40,124,944 | <p>If the direct and naïve application of k-means is <em>not helping much</em>, you may need to dig a bit deeper.</p>
<p>Assuming you have <code>N</code> distinct tags of which <code>0..N</code> can be applied to each product <code>p</code>. Each assignment describes a weighted relationship with a positive weight <code>w</code>. Absence of a tag for a product equals <code>w = 0</code>.</p>
<p>This is your setup that yields an <code>N</code>-dimensional feature space for your products. You should be able to use arbitrary clustering methods; you <em>just</em> have to select the correct measures.</p>
<p><strong>Your distance (or similarity) measure should depend on your data.</strong></p>
<p>Consequently, the first thing to ask yourself is: When are two measures considered <em>similar</em>?</p>
<ul>
<li>If they have as many overlapping tags as possible?</li>
<li>If the sum of differences between non-overlapping tag weights is max?</li>
<li>If the sum of differences between overlapping tags is min?</li>
<li>...</li>
</ul>
<p>Depending on your defined <em>similarity</em>, you should be able to choose or implement a measure that yields the grade of similarity (not just the euclidean distance in <code>N</code> dimensions) when comparing two elements.</p>
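<p>For example, one common similarity for such weighted tag vectors is cosine similarity, sketched here with plain NumPy on made-up weights (0 where a tag is absent):</p>

```python
import numpy as np

# Rows are products, columns are tags (a, b, c, d).
P = np.array([
    [2.5, 3.5, 1.0, 1.0],
    [0.25, 0.0, 2.0, 0.0],
    [3.0, 0.75, 0.0, 0.0],
])

norms = np.linalg.norm(P, axis=1)
cos = (P @ P.T) / np.outer(norms, norms)  # pairwise cosine similarity matrix
print(np.round(cos, 2))
```

<p>Whether cosine is the right choice depends, as above, on what you consider <em>similar</em> for your data.</p>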
<p>Also, you may want to check <a href="http://stats.stackexchange.com/questions/3713/choosing-clustering-method">this post at CrossValidated</a> or (if you want to learn more about clustering) <a href="http://infolab.stanford.edu/~ullman/mmds/ch7.pdf" rel="nofollow">Section 7.3</a> of <em>"Mining of Massive Datasets"</em> (2014, Anand Rajaraman, Jure Leskovec, and Jeffrey D. Ullman) [<a href="http://infolab.stanford.edu/~ullman/mmds/book.pdf" rel="nofollow">Entire book</a>]</p>
| 0 | 2016-10-19T07:33:56Z | [
"python",
"python-3.x",
"machine-learning",
"cluster-analysis",
"data-mining"
] |
How to set custom stop words for sklearn CountVectorizer? | 40,124,476 | <p>I'm trying to run LDA (Latent Dirichlet Allocation) on a non-English text dataset.</p>
<p>From sklearn's tutorial, there's this part where you count term frequency of the words to feed into the LDA:</p>
<pre><code>tf_vectorizer = CountVectorizer(max_df=0.95, min_df=2,
max_features=n_features,
stop_words='english')
</code></pre>
<p>This has a built-in stop words feature which, I think, is only available for English. How could I use my own stop word list for this?</p>
| 1 | 2016-10-19T07:07:30Z | 40,124,718 | <p>You may just assign a <code>frozenset</code> of your own words to the <a href="https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/feature_extraction/stop_words.py" rel="nofollow"><code>stop_words</code> argument</a>, e.g.:</p>
<pre><code>stop_words = frozenset(["word1", "word2","word3"])
</code></pre>
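<p>Passing it to the vectorizer then looks like this (the documents and stop words here are made up for illustration):</p>

```python
from sklearn.feature_extraction.text import CountVectorizer

my_stop_words = frozenset(["foo", "bar"])

tf_vectorizer = CountVectorizer(stop_words=my_stop_words)
X = tf_vectorizer.fit_transform(["foo apple banana", "bar banana cherry"])
print(sorted(tf_vectorizer.vocabulary_))  # ['apple', 'banana', 'cherry']
```

<p>A plain list works as well as a <code>frozenset</code>.</p>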
| 0 | 2016-10-19T07:20:28Z | [
"python",
"machine-learning",
"scikit-learn",
"nlp"
] |
Serve uploaded files from NGINX server instead of gunicorn/Django | 40,124,568 | <p>I have two separate servers, one running NGINX and the other running gunicorn/Django. I managed to serve static files from NGINX directly, as recommended by the Django documentation, but I have an issue with files uploaded by users: they are uploaded to the server running gunicorn, not the server running NGINX, so users can't find and browse their files.</p>
<p>How can I upload files from Django to another server? Or how can I transfer the files to the NGINX server after they are uploaded?</p>
<p><strong>Note: I don't have the CDN option, I'll serve my statics from my servers.</strong></p>
| 0 | 2016-10-19T07:12:43Z | 40,125,586 | <p>You need to implement a solution for sharing files from one server to another. NFS is the standard in Unixes like Linux. An alternative is to use live mirroring, i.e. create a copy of the media files directory in the nginx server and keep it synchronized. There are probably many options for setting this up; I've successfully used <code>lsyncd</code>.</p>
| 1 | 2016-10-19T08:06:48Z | [
"python",
"django",
"nginx",
"gunicorn",
"django-media"
] |
How to add space after label in ModelForms of django | 40,124,807 | <p>I want to add space after "Mobile No:", meaning in between the label and the text area. But using django ModelForms I am not able to do that; it shows "Mobile No:" without any space between the label and the textarea.</p>
<p>This is how I have described it in models.py file.</p>
<pre><code>phone = models.CharField(max_length=10, verbose_name="Mobile No", validators=[mobileno])
</code></pre>
<p>forms.py file</p>
<pre><code>class UserInfoForm(forms.ModelForm):
class Meta:
model = UserInfo
fields = ('phone',)
</code></pre>
<p>This is the content of my html file.</p>
<pre><code><form method="post">
{% csrf_token %}
{{ form }}
</form>
</code></pre>
<p>This is how it is showing by default.</p>
<p><a href="https://i.stack.imgur.com/mUhhS.png" rel="nofollow"><img src="https://i.stack.imgur.com/mUhhS.png" alt="enter image description here"></a></p>
<p>How can I add space between label and textarea. Thanks for you help.</p>
| 0 | 2016-10-19T07:25:22Z | 40,125,208 | <p>You should iterate through your form fields.</p>
<p>Like this:</p>
<pre><code><form method="post" > {% csrf_token %}
{% for field in form %}
<div class="form-group">
<label class="col-sm-4 control-label" for="{{ field.name }}">{{ field.label }} : </label>
<div class="col-sm-8">
{{ field }}
</div>
</div>
{% endfor %}
</form>
</code></pre>
| 1 | 2016-10-19T07:47:37Z | [
"python",
"django",
"django-models",
"django-forms"
] |
How to add space after label in ModelForms of django | 40,124,807 | <p>I want to add space after "Mobile No:", meaning in between the label and the text area. But using django ModelForms I am not able to do that; it shows "Mobile No:" without any space between the label and the textarea.</p>
<p>This is how I have described it in models.py file.</p>
<pre><code>phone = models.CharField(max_length=10, verbose_name="Mobile No", validators=[mobileno])
</code></pre>
<p>forms.py file</p>
<pre><code>class UserInfoForm(forms.ModelForm):
class Meta:
model = UserInfo
fields = ('phone',)
</code></pre>
<p>This is the content of my html file.</p>
<pre><code><form method="post">
{% csrf_token %}
{{ form }}
</form>
</code></pre>
<p>This is how it is showing by default.</p>
<p><a href="https://i.stack.imgur.com/mUhhS.png" rel="nofollow"><img src="https://i.stack.imgur.com/mUhhS.png" alt="enter image description here"></a></p>
<p>How can I add space between label and textarea. Thanks for you help.</p>
| 0 | 2016-10-19T07:25:22Z | 40,125,212 | <p>The layout and spacing of form labels and fields is a presentation matter and best handled through CSS.</p>
<p>See also: <a href="http://stackoverflow.com/questions/5827590/css-styling-in-django-forms">CSS styling in Django forms</a></p>
| 2 | 2016-10-19T07:47:44Z | [
"python",
"django",
"django-models",
"django-forms"
] |
Replace characters in a string from a list and create a copy for each | 40,124,811 | <p>I'm not sure if it's even possible to do what I need with sed.</p>
<p>I want to be able to replace a string in a file with another string from a list and create a separate copy for each version.</p>
<p>For example, the original file has:</p>
<pre><code>apple dog orange
</code></pre>
<p>I want to replace <code>dog</code> with a list of different words. i.e cat, rabbit, duck.
This would create copies of the original file, each with a different change.</p>
<p>File1.txt (the original file) - <code>apple dog orange</code>
File2.txt - <code>apple cat orange</code>
File3.txt - <code>apple rabbit orange</code>
File4.txt - <code>apple duck orange</code></p>
<p>I've been able to use Sed to find and replace a single file:</p>
<pre><code>sed -i '' 's/dog/cat/g' *.html
</code></pre>
<p>I need a way to replace from a list though, and create a unique copy of each.</p>
<p>I'm on OSX. As I said before, sed might not be the tool to use here, maybe Python? I'm open to any suggestions to do this.</p>
<p>Thanks</p>
| 1 | 2016-10-19T07:25:38Z | 40,124,913 | <p>You could do it with python using regex.</p>
<pre><code>import re
import os
filename = '/path/to/filename.txt'
with open(filename, 'r') as f:
text = f.read()
names = ['cat', 'rabbit', 'duck']
for i, name in enumerate(names):
new_text = re.sub(r'dog', name, text)
base, ext = os.path.splitext(filename)
with open('{}_{}{}'.format(base, i, ext), 'w') as f:
f.write(new_text)
</code></pre>
| 2 | 2016-10-19T07:32:16Z | [
"python",
"string",
"replace",
"sed"
] |
Replace characters in a string from a list and create a copy for each | 40,124,811 | <p>I'm not sure if it's even possible to do what I need with sed.</p>
<p>I want to be able to replace a string in a file with another string from a list and create a separate copy for each version.</p>
<p>For example, the original file has:</p>
<pre><code>apple dog orange
</code></pre>
<p>I want to replace <code>dog</code> with a list of different words. i.e cat, rabbit, duck.
This would create copies of the original file, each with a different change.</p>
<p>File1.txt (the original file) - <code>apple dog orange</code>
File2.txt - <code>apple cat orange</code>
File3.txt - <code>apple rabbit orange</code>
File4.txt - <code>apple duck orange</code></p>
<p>I've been able to use Sed to find and replace a single file:</p>
<pre><code>sed -i '' 's/dog/cat/g' *.html
</code></pre>
<p>I need a way to replace from a list though, and create a unique copy of each.</p>
<p>I'm on OSX. As I said before, sed might not be the tool to use here, maybe Python? I'm open to any suggestions to do this.</p>
<p>Thanks</p>
| 1 | 2016-10-19T07:25:38Z | 40,125,022 | <p>You can use regex</p>
<pre><code>import re
in_file = 'File1.txt'
out_file = 'File2.txt'
with open(in_file, 'r') as in_f:
    s = in_f.read() # Reads whole file. Be careful for large files
s = re.sub('dog', 'cat', s)
with open(out_file, 'w') as out_f:
    out_f.write(s)
</code></pre>
<p>You can enclose the whole thing in a <code>for</code> loop to make as many substitutions as you want</p>
| 1 | 2016-10-19T07:37:57Z | [
"python",
"string",
"replace",
"sed"
] |
Replace characters in a string from a list and create a copy for each | 40,124,811 | <p>I'm not sure if it's even possible to do what I need with sed.</p>
<p>I want to be able to replace a string in a file with another string from a list and create a separate copy for each version.</p>
<p>For example, the original file has:</p>
<pre><code>apple dog orange
</code></pre>
<p>I want to replace <code>dog</code> with a list of different words. i.e cat, rabbit, duck.
This would create copies of the original file, each with a different change.</p>
<p>File1.txt (the original file) - <code>apple dog orange</code>
File2.txt - <code>apple cat orange</code>
File3.txt - <code>apple rabbit orange</code>
File4.txt - <code>apple duck orange</code></p>
<p>I've been able to use Sed to find and replace a single file:</p>
<pre><code>sed -i '' 's/dog/cat/g' *.html
</code></pre>
<p>I need a way to replace from a list though, and create a unique copy of each.</p>
<p>I'm on OSX. As I said before, sed might not be the tool to use here, maybe Python? I'm open to any suggestions to do this.</p>
<p>Thanks</p>
| 1 | 2016-10-19T07:25:38Z | 40,125,092 | <p>You can do it simply using string replace method</p>
<pre><code>f = open("file1")
text = f.read()
f.close()
list_of_words = ['cat','rabbit','duck']
num = 2
for word in list_of_words:
    new_text = text.replace("dog", word, 1)
    f_new = open("file"+str(num),"w")
    f_new.write(new_text)
    f_new.close()
    num += 1
</code></pre>
<p>input:
file1</p>
<pre><code>apple dog orange
</code></pre>
<p>output: file2:</p>
<pre><code>apple cat orange
</code></pre>
<p>output: file3:</p>
<pre><code>apple rabbit orange
</code></pre>
<p>output: file4:</p>
<pre><code>apple duck orange
</code></pre>
| 1 | 2016-10-19T07:41:52Z | [
"python",
"string",
"replace",
"sed"
] |
How do I set a specific action to happen when the "enter" key on my keyboard is pressed in python | 40,124,901 | <p>I am making a python calculator with GUI for school. </p>
<p>I have got some basic code from the internet and I have to customize it by changing things around. So far I have added a <code>DEL</code> button, a <code>^2</code> button and a <code>sqrt()</code> button. </p>
<p>I now want that if I type in an equation on my keyboard, e.g. "2*4", and press <kbd>Enter</kbd> it will simulate as pressing the equals button. I am having trouble finding out how to get python to register me clicking the <kbd>Enter</kbd> and then give me an answer. </p>
<p>This is the code:</p>
<pre><code>from __future__ import division
from math import *
from functools import partial
try:
# Python2
import Tkinter as tk
except ImportError:
# Python3
import tkinter as tk
class MyApp(tk.Tk):
def __init__(self):
# the root will be self
tk.Tk.__init__(self)
self.title("Magic")
# use width x height + x_offset + y_offset (no spaces!)
#self.geometry("300x150+150+50")
# or set x, y position only
self.geometry("+150+50")
self.memory = 0
self.create_widgets()
def create_widgets(self):
# this also shows the calculator's button layout
btn_list = [
'7', '8', '9', '*', 'AC',
'4', '5', '6', '/', 'x²',
            '1', '2', '3', '-', '√x',
'0', '.', '=', '+', 'DEL' ]
rel = 'ridge'
# create all buttons with a loop
r = 1
c = 0
for b in btn_list:
# partial takes care of function and argument
cmd = partial(self.calculate, b)
tk.Button(self, text=b, width=5, relief=rel,
command=cmd).grid(row=r, column=c)
c += 1
if c > 4:
c = 0
r += 1
# use an Entry widget for an editable display
self.entry = tk.Entry(self, width=37, bg="white")
self.entry.grid(row=0, column=0, columnspan=5)
def undo():
new_string = whole_string[:-1]
print(new_string)
clear_all()
display.insert(0, new_string)
def calculate(self, key):
if key == '=':
# here comes the calculation part
try:
result = eval(self.entry.get())
self.entry.insert(tk.END, " = " + str(result))
except:
self.entry.insert(tk.END, "")
elif key == 'AC':
self.entry.delete(0, tk.END)
elif key == 'x²':
self.entry.insert(tk.END, "**")
# extract the result
        elif key == '√x':
self.memory = self.entry.get()
self.entry.delete(0, tk.END)
self.entry.insert(tk.END, "sqrt(")
self.entry.insert(tk.END, self.memory)
self.entry.insert(tk.END, ")")
elif key == 'DEL':
self.memory = self.entry.get()
self.entry.delete(0, tk.END)
self.entry.insert(tk.END, self.memory[:-1])
else:# previous calculation has been done, clear entry
if '=' in self.entry.get():
self.entry.delete(0, tk.END)
self.entry.insert(tk.END, key)
app = MyApp()
app.mainloop()
</code></pre>
| 0 | 2016-10-19T07:31:19Z | 40,126,661 | <p>You can use <code>bind()</code> to assign a function to the <code>Entry</code> which will be executed when you press <code>Enter</code>.</p>
<p>Example:</p>
<pre><code>import tkinter as tk
def on_return(event):
print('keycode:', event.keycode)
print('text in entry:', event.widget.get())
root = tk.Tk()
e = tk.Entry(root)
e.pack()
e.bind('<Return>', on_return) # standard Enter
e.bind('<KP_Enter>', on_return) # KeyPad Enter
root.mainloop()
</code></pre>
<p>In your code it can be, for testing:</p>
<pre><code>self.entry = tk.Entry(self, width=37, bg="white")
self.entry.grid(row=0, column=0, columnspan=5)
self.entry.bind('<Return>', lambda event:print("ENTER:", event.widget.get()))
self.entry.bind('<KP_Enter>', lambda event:print("ENTER:", event.widget.get()))
</code></pre>
<p>If you have class method <code>def on_return(self, event):</code> then </p>
<pre><code>self.entry.bind('<Return>', self.on_return)
self.entry.bind('<KP_Enter>', self.on_return)
</code></pre>
<hr>
<ol>
<li><p><a href="http://effbot.org/tkinterbook/tkinter-events-and-bindings.htm" rel="nofollow">Events and Bindings</a></p></li>
<li><p><a href="http://effbot.org/tkinterbook/tkinter-events-and-bindings.htm" rel="nofollow">Key names</a></p></li>
</ol>
| 1 | 2016-10-19T08:59:00Z | [
"python",
"user-interface",
"tkinter"
] |
Pydot won't create a graph | 40,125,051 | <p>Yesterday I added the pydot package by installing it via pip on the command line.
I can import the package and even create an object, but when I want to create a graph by:</p>
<pre><code>graph.write_jpg('example1_graph.jpg')
</code></pre>
<p>I get the following error:</p>
<pre><code>Exception: "dot.exe" not found in path.
</code></pre>
| 0 | 2016-10-19T07:39:30Z | 40,125,598 | <p>Try manually adding the Graphviz\bin folder to your system's PATH.</p>
<pre><code>>>> import pydot
>>> pydot.find_graphviz()
{'dot': 'C:\\Program Files (x86)\\Graphviz 2.28\\bin\\dot.exe'} #...
>>> print pydot.find_graphviz.__doc__
"""
Locate Graphviz's executables in the system.
Tries three methods:
First: Windows Registry (Windows only)
This requires Mark Hammond's pywin32 is installed.
Secondly: Search the path
It will look for 'dot', 'twopi' and 'neato' in all the directories
specified in the PATH environment variable.
Thirdly: Default install location (Windows only)
It will look for 'dot', 'twopi' and 'neato' in the default install
location under the "Program Files" directory.
It will return a dictionary containing the program names as keys
and their paths as values.
If this fails, it returns None.
"""
</code></pre>
| 0 | 2016-10-19T08:07:37Z | [
"python",
"pydot"
] |
Trying to make an autocompleter for words in Python (Schoolwork) | 40,125,081 | <p>I have a University assignment (so I'd like to get informative hints rather than an all-out code bunch to copy) where we're supposed to make a program that automatically completes words. (Gives out suggestions based on letters written.)
The suggestions for these words are taken from a list called "alphabetical.csv", which is a separate data file, containing around 90000 different words.</p>
<p>I am a complete beginner in the Python-language and programming in general and only have vague ideas of where to start with this. I've been thinking of making lists to be printed out to the user, suggesting all words beginning with a certain letter, and possibly the next, and the next, and so on, but I've no idea of how to implement that effectively. I've only ever done very simple while, else and if-stuff before, as well as very simple functions that concatenate inputted strings and such.
There is a skeleton which must be used with the assignment, which looks like this:</p>
<pre><code>def main():
"""Initialize main loop."""
word = ""
while word != "q":
word = input("Type word: ").lower()
print("Autocompletion finished: ", autocomplete())
def autocomplete():
"""Return autocomplete suggestions."""
pass
main()
</code></pre>
<p>We are not supposed to import anything, and the program itself is supposed to run in a terminal.</p>
<p>I'd be very grateful if anyone could get me started on this.
(Again, I'm not looking for a straight-out solution to the assignment; I'm supposed to learn something from this.)</p>
| 0 | 2016-10-19T07:41:00Z | 40,125,958 | <p>You will first need to <a href="https://docs.python.org/3.6/library/functions.html?highlight=open#open" rel="nofollow">open</a> a file and to <a href="https://docs.python.org/3.6/library/io.html#io.TextIOBase.read" rel="nofollow">read</a> it.
Then you will have to search for words which start with a substring; <a href="https://docs.python.org/3.6/library/stdtypes.html?highlight=str.startswith#str.startswith" rel="nofollow">str.startswith</a> could help you.
Since you apparently already know loops and the <a href="https://docs.python.org/3.6/library/functions.html?highlight=print#print" rel="nofollow">print</a> function, you should be able to do something functional.</p>
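<p>Not a full solution (it's homework), but here is a minimal sketch of the <code>str.startswith</code> idea on a tiny hypothetical in-memory list; reading the real alphabetical.csv file is left to the asker:</p>

```python
# Hypothetical stand-in for the words loaded from alphabetical.csv
words = ["apple", "apply", "banana", "band", "bandage"]

def suggestions(prefix, words):
    """Return every word that begins with the typed prefix."""
    return [w for w in words if w.startswith(prefix)]

print(suggestions("app", words))  # ['apple', 'apply']
print(suggestions("ban", words))  # ['banana', 'band', 'bandage']
```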
| 0 | 2016-10-19T08:26:03Z | [
"python",
"autocomplete",
"words"
] |
Using multiple columns as ForeignKey to return in another table | 40,125,110 | <p>I'm new to Django, so I made 3 simple tables to return a WishList. The thing is that whenever a user asks for a WishList, I want his/her <code>user_id</code> to be used in a <code>SELECT</code> query to return his/her own WishList. And I want to get the product title and product url from my WishList table. I'm using <code>to_field</code>, but that way I can only get the product title back. I don't know much about Django, so please help me!</p>
<h1>Product</h1>
<pre><code>class Product(models.Model):
    class Meta:
        unique_together = (('id', 'title'),)

    title = models.CharField(max_length=200, unique=True,
                             help_text='Name of the product')
    url = models.CharField(max_length=300, default='',
                           help_text='Url of the product')

    def __str__(self):
        return 'Product: {}'.format(self.title)
</code></pre>
<h1>WishList</h1>
<pre><code>class WishList(models.Model):
    class Meta:
        unique_together = (('user', 'product'),)

    user = models.ForeignKey(fbuser,
                             on_delete=models.CASCADE,
                             help_text='Facebook user',
                             to_field='user_id')
    product = models.ForeignKey(Product, to_field='title', db_column='title',
                                on_delete=models.CASCADE)

    def __str__(self):
        return 'WishList: {}'.format(self.user)
</code></pre>
| 0 | 2016-10-19T07:42:45Z | 40,125,643 | <p>Django documentation is your friend, <a href="https://docs.djangoproject.com/en/1.10/topics/db/queries/" rel="nofollow">read it</a>.</p>
<p>I'm serious: read the entire <a href="https://docs.djangoproject.com/en/1.10/" rel="nofollow">documentation</a>.</p>
<p>Moreover, it's not good practice to override to_field to a field other than your model's pk unless you have a really good reason and you know what you are doing (definitely not the case right now).</p>
<p>So after you read the docs, you will know that in order to get wishlisht related to a user, you can use the <code>ForeignKey</code> reverse relation to get all related wishlists for a user.</p>
<pre><code>user_wishlists = my_user.wishlist_set.all()
#Because we know that you want to access the wishlist.products
#in order to optimize things (in terms of db queries)
#you can add and .select_related('product')
#e.g, user_wishlists = my_user.wishlist_set.all().select_related('product')
#now follow the wishlist.product foreign key to access the related product for every wishlist
for wishlist in user_wishlists:
    product = wishlist.product
    print (product.id, product.title, product.url)
</code></pre>
<p>Now, after you read a little bit more of the documentation,
you will notice that your <code>WishList</code> model is in fact an <a href="https://docs.djangoproject.com/en/1.10/ref/models/fields/#django.db.models.ManyToManyField.through" rel="nofollow">intermediate model</a> for a <code>ManyToMany</code> relation between <code>User</code> and his wished products, so you can define an M2M field between user and products via <code>WishList</code> like so:</p>
<pre><code>class FbUser(models.Model):
    #...
    wished_products = models.ManyToManyField(
        Product,
        through='WishList',
        through_fields=('user', 'product')
    )

#and now accessing user wished products would be easy as:
user_wished_products = my_user.wished_products.all()
for product in user_wished_products:
    print (product.id, product.title, product.url)
</code></pre>
| 1 | 2016-10-19T08:10:14Z | [
"python",
"django",
"postgresql",
"object",
"django-models"
] |
Merge multiple lists horizontally in python | 40,125,126 | <p>I have two different lists like below</p>
<pre><code>A = [(Apple, Mango)]
B = [Grapes]
</code></pre>
<p>Now I would like to get a merged list as follows</p>
<pre><code>C = [(Apple,Mango,Grapes)]
</code></pre>
<p>Are there any predefined functions available in Python to get the above merged list?</p>
<p>Note: I already used the zip method, which returns a different result:</p>
<pre><code>C = [(Apple,Mango),(Grapes)]
</code></pre>
<p>The snippet of code which returned the above result is this:</p>
<pre><code>A = [('Apple','Mango')]
B = ['Grapes']
C = zip(A,B)
print C
</code></pre>
 | -1 | 2016-10-19T07:43:57Z | 40,125,281 | <p>If A is a list of tuples and B is a normal list, this works:</p>
<pre><code>A = [('Apple', 'Mango')]
B = ['Grapes']
print [ tuple( sum ( map(list,A), [] ) + B) ]
</code></pre>
| 0 | 2016-10-19T07:51:13Z | [
"python"
] |
Merge multiple lists horizontally in python | 40,125,126 | <p>I have two different lists like below</p>
<pre><code>A = [(Apple, Mango)]
B = [Grapes]
</code></pre>
<p>Now I would like to get a merged list as follows</p>
<pre><code>C = [(Apple,Mango,Grapes)]
</code></pre>
<p>Are there any predefined functions available in Python to get the above merged list?</p>
<p>Note: I already used the zip method, which returns a different result:</p>
<pre><code>C = [(Apple,Mango),(Grapes)]
</code></pre>
<p>The snippet of code which returned the above result is this:</p>
<pre><code>A = [('Apple','Mango')]
B = ['Grapes']
C = zip(A,B)
print C
</code></pre>
 | -1 | 2016-10-19T07:43:57Z | 40,125,309 | <p>For your example, you could do:</p>
<pre><code>C = list(A[0] + (B[0],))
</code></pre>
| 0 | 2016-10-19T07:53:08Z | [
"python"
] |
Merge multiple lists horizontally in python | 40,125,126 | <p>I have two different lists like below</p>
<pre><code>A = [(Apple, Mango)]
B = [Grapes]
</code></pre>
<p>Now I would like to get a merged list as follows</p>
<pre><code>C = [(Apple,Mango,Grapes)]
</code></pre>
<p>Are there any predefined functions available in Python to get the above merged list?</p>
<p>Note: I already used the zip method, which returns a different result:</p>
<pre><code>C = [(Apple,Mango),(Grapes)]
</code></pre>
<p>The snippet of code which returned the above result is this:</p>
<pre><code>A = [('Apple','Mango')]
B = ['Grapes']
C = zip(A,B)
print C
</code></pre>
 | -1 | 2016-10-19T07:43:57Z | 40,125,885 | <p>If A and B are lists of tuples, then you can use the following recursive function.</p>
<p><strong>Input something like this</strong></p>
<pre><code>A = [('Apple', 'Mango'), ('Mango1')]
B = [('Grapes', 'Banana'), ('Apple1')]
</code></pre>
<p><strong>Recursive function</strong> </p>
<pre><code>def recursive_fun(c, temp_list):
    for i in c:
        if type(i) is tuple:
            recursive_fun(list(i), temp_list)
        else:
            temp_list.append(i)
    return temp_list
</code></pre>
<p><strong>Output</strong></p>
<pre><code>[tuple(recursive_fun(A, [])+ recursive_fun(B, []))]
</code></pre>
<p><strong>Final output</strong></p>
<pre><code>[('Apple', 'Mango', 'Mango1', 'Grapes', 'Banana', 'Apple1')]
</code></pre>
| 0 | 2016-10-19T08:22:42Z | [
"python"
] |
Merge multiple lists horizontally in python | 40,125,126 | <p>I have two different lists like below</p>
<pre><code>A = [(Apple, Mango)]
B = [Grapes]
</code></pre>
<p>Now I would like to get a merged list as follows</p>
<pre><code>C = [(Apple,Mango,Grapes)]
</code></pre>
<p>Are there any predefined functions available in Python to get the above merged list?</p>
<p>Note: I already used the zip method, which returns a different result:</p>
<pre><code>C = [(Apple,Mango),(Grapes)]
</code></pre>
<p>The snippet of code which returned the above result is this:</p>
<pre><code>A = [('Apple','Mango')]
B = ['Grapes']
C = zip(A,B)
print C
</code></pre>
 | -1 | 2016-10-19T07:43:57Z | 40,126,209 | <p>Here's an approach that flattens the tuples after the <code>zip</code>. It might perform worse in terms of time complexity, but it will get what you want even for mixed-type lists, like <code>[str, tuple]</code> and <code>[tuple, str]</code>:</p>
<pre><code>a = [("apple", "mango"), "orange"] # This is [tuple, str]
b = ["grapes", ("pineapple", "lemon")] # This one's [str, tuple]
flatten = ( lambda itr: sum(map(flatten, itr), tuple())
if isinstance(itr, tuple) else (itr,)
)
c = [flatten(x) for x in zip(a, b)]
print (c)
# [('apple', 'mango', 'grapes'), ('orange', 'pineapple', 'lemon')]
</code></pre>
| 1 | 2016-10-19T08:37:48Z | [
"python"
] |
Merge multiple lists horizontally in python | 40,125,126 | <p>I have two different lists like below</p>
<pre><code>A = [(Apple, Mango)]
B = [Grapes]
</code></pre>
<p>Now I would like to get a merged list as follows</p>
<pre><code>C = [(Apple,Mango,Grapes)]
</code></pre>
<p>Are there any predefined functions available in Python to get the above merged list?</p>
<p>Note: I already used the zip method, which returns a different result:</p>
<pre><code>C = [(Apple,Mango),(Grapes)]
</code></pre>
<p>The snippet of code which returned the above result is this:</p>
<pre><code>A = [('Apple','Mango')]
B = ['Grapes']
C = zip(A,B)
print C
</code></pre>
| -1 | 2016-10-19T07:43:57Z | 40,126,535 | <p>Since tuples are not mutable, we can't append/add to <code>A[0]</code>. Let's just unpack and create a new tuple within a list:</p>
<p><code>[(*A[0], *B)]</code></p>
| 0 | 2016-10-19T08:53:33Z | [
"python"
] |
Python vectorization with a constant | 40,125,236 | <p>I have a series X of length n(=300,000). Using a window length of w (=40), I need to implement:</p>
<p>mu(i)= X(i)-X(i-w)</p>
<p>s(i) = sum{k=i-w to i} [X(k)-X(k-1) - mu(i)]^2</p>
<p>I was wondering if there's a way to prevent loops here. The fact that mu(i) is constant in second equation is causing complications in vectorization. I did the following so far:</p>
<pre><code>x1=x.shift(1)
xw=x.shift(w)
mu= x-xw
dx=(x-x1-mu)**2 # wrong because mu wouldn't be constant for each i
s=pd.rolling_sum(dx,w)
</code></pre>
<p>The above code would work (and was working) in a loop setting but takes too long, so any help regarding vectorization or other speed improvement methods would be helpful. I posted this on crossvalidated with mathjax formatting but that doesn't seem to work here.</p>
<p><a href="http://stats.stackexchange.com/questions/241050/python-vectorization-with-a-constant">http://stats.stackexchange.com/questions/241050/python-vectorization-with-a-constant</a></p>
<p>Also just to clarify, I wasn't using a double loop, just a single one originally:</p>
<pre><code> for i in np.arange(w, len(X)):
x=X.ix[i-w:i,0] # clip a series of size w
x1=x.shift(1)
mu.ix[i]= x.ix[-1]-x.ix[0]
temp= (x-x1-mu.ix[i])**2 # returns a series of size w but now mu is constant
s.ix[i]= temp.sum()
</code></pre>
| 2 | 2016-10-19T07:49:07Z | 40,126,109 | <p><strong>Approach #1 :</strong> One vectorized approach would be using <a href="https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>broadcasting</code></a> -</p>
<pre><code>N = X.shape[0]
a = np.arange(N)
k2D = a[:,None] - np.arange(w+1)[::-1]
mu1D = X - X[a-w]
out = ((X[k2D] - X[k2D-1] - mu1D[:,None])**2).sum(-1)
</code></pre>
<p>We can further optimize the last step to get squared summations with <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html" rel="nofollow"><code>np.einsum</code></a> -</p>
<pre><code>subs = X[k2D] - X[k2D-1] - mu1D[:,None]
out = np.einsum('ij,ij->i',subs,subs)
</code></pre>
<p>Further improvement is possible with the use of <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.strides.html" rel="nofollow"><code>NumPy strides</code></a> to get <code>X[k2D]</code> and <code>X[k2D-1]</code>.</p>
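<p>Not part of the original answer, but a hedged sketch of what that strides idea could look like: <code>np.lib.stride_tricks.as_strided</code> can build the <code>w+1</code> overlapping windows as views, without copying. The array and <code>w</code> here are toy values:</p>

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

X = np.arange(10.0)  # toy data
w = 3
N = X.shape[0]

# Row i is a view of X[i : i+w+1]; rows share memory with X, so no copies are made
windows = as_strided(X, shape=(N - w, w + 1),
                     strides=(X.strides[0], X.strides[0]))

print(windows[0])  # same values as X[0:4]
print(windows[2])  # same values as X[2:6]
```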
<hr>
<p><strong>Approach #2 :</strong> To save on memory when working with very large arrays, we can use one loop instead of the two loops used in the original code, like so -</p>
<pre><code>N = X.shape[0]
s = np.zeros((N))
k_idx = np.arange(-w,1)
for i in range(N):
    mu = X[i]-X[i-w]
    s[i] = ((X[k_idx]-X[k_idx-1] - mu)**2).sum()
    k_idx += 1
</code></pre>
<p>Again, <code>np.einsum</code> could be used here to compute <code>s[i]</code>, like so -</p>
<pre><code>subs = X[k_idx]-X[k_idx-1] - mu
s[i] = np.einsum('i,i->',subs,subs)
</code></pre>
| 1 | 2016-10-19T08:33:32Z | [
"python",
"performance",
"pandas",
"numpy",
"vectorization"
] |
webpage access while using scrapy | 40,125,256 | <p>I am new to python and scrapy. I followed the tutorial and tried to crawl few webpages. I used the code in the <a href="https://doc.scrapy.org/en/latest/intro/tutorial.html" rel="nofollow">tutorial</a> and replaced the URLs - <code>http://www.city-data.com/advanced/search.php#body?fips=0&csize=a&sc=2&sd=0&states=ALL&near=&nam_crit1=6914&b6914=MIN&e6914=MAX&i6914=1&nam_crit2=6819&b6819=15500&e6819=MAX&i6819=1&ps=20&p=0</code> and <code>http://www.city-data.com/advanced/search.php#body?fips=0&csize=a&sc=2&sd=0&states=ALL&near=&nam_crit1=6914&b6914=MIN&e6914=MAX&i6914=1&nam_crit2=6819&b6819=15500&e6819=MAX&i6819=1&ps=20&p=1</code> respectively.</p>
<p>When the HTML file is generated, not all of the data is displayed. Only the data up to this URL - <code>http://www.city-data.com/advanced/search.php#body?fips=0&csize=a&sc=0&sd=0&states=ALL&near=&ps=20&p=0</code> - is shown.</p>
<p>Also, while the command runs, the second URL is removed as a duplicate, and only one HTML file is created.</p>
<p>I want to know if the webpage denies access to that specific data or if I should change my code to get the precise data.</p>
<p>When I then run the shell command, I get an error.
The result when I used the crawl command and the shell command was:</p>
<pre><code> C:\Users\MinorMiracles\Desktop\tutorial>python -m scrapy.cmdline crawl citydata
2016-10-19 12:00:27 [scrapy] INFO: Scrapy 1.2.0 started (bot: tutorial)
2016-10-19 12:00:27 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tu
torial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'ROBOTSTXT_OBEY': True,
'BOT_NAME': 'tutorial'}
2016-10-19 12:00:27 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats']
2016-10-19 12:00:27 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-10-19 12:00:27 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-10-19 12:00:27 [scrapy] INFO: Enabled item pipelines:
[]
2016-10-19 12:00:27 [scrapy] INFO: Spider opened
2016-10-19 12:00:27 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 i
tems (at 0 items/min)
2016-10-19 12:00:27 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-10-19 12:00:27 [scrapy] DEBUG: Filtered duplicate request: <GET http://www.
city-data.com/advanced/search.php#body?fips=0&csize=a&sc=2&sd=0&states=ALL&near=
&nam_crit1=6914&b6914=MIN&e6914=MAX&i6914=1&nam_crit2=6819&b6819=15500&e6819=MAX
&i6819=1&ps=20&p=1> - no more duplicates will be shown (see DUPEFILTER_DEBUG to
show all duplicates)
2016-10-19 12:00:28 [scrapy] DEBUG: Crawled (200) <GET http://www.city-data.com/
robots.txt> (referer: None)
2016-10-19 12:00:29 [scrapy] DEBUG: Crawled (200) <GET http://www.city-data.com/
advanced/search.php#body?fips=0&csize=a&sc=2&sd=0&states=ALL&near=&nam_crit1=691
4&b6914=MIN&e6914=MAX&i6914=1&nam_crit2=6819&b6819=15500&e6819=MAX&i6819=1&ps=20
&p=0> (referer: None)
2016-10-19 12:00:29 [citydata] DEBUG: Saved file citydata-advanced.html
2016-10-19 12:00:29 [scrapy] INFO: Closing spider (finished)
2016-10-19 12:00:29 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 459,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 44649,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'dupefilter/filtered': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2016, 10, 19, 6, 30, 29, 751000),
'log_count/DEBUG': 5,
'log_count/INFO': 7,
'response_received_count': 2,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2016, 10, 19, 6, 30, 27, 910000)}
2016-10-19 12:00:29 [scrapy] INFO: Spider closed (finished)
C:\Users\MinorMiracles\Desktop\tutorial>python -m scrapy.cmdline shell 'http://w
ww.city-data.com/advanced/search.php#body?fips=0&csize=a&sc=2&sd=0&states=ALL&ne
ar=&nam_crit1=6914&b6914=MIN&e6914=MAX&i6914=1&nam_crit2=6819&b6819=15500&e6819=
MAX&i6819=1&ps=20&p=0'
2016-10-19 12:21:51 [scrapy] INFO: Scrapy 1.2.0 started (bot: tutorial)
2016-10-19 12:21:51 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tu
torial.spiders', 'ROBOTSTXT_OBEY': True, 'DUPEFILTER_CLASS': 'scrapy.dupefilters
.BaseDupeFilter', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial'
, 'LOGSTATS_INTERVAL': 0}
2016-10-19 12:21:51 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats']
2016-10-19 12:21:51 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-10-19 12:21:51 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-10-19 12:21:51 [scrapy] INFO: Enabled item pipelines:
[]
2016-10-19 12:21:51 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-10-19 12:21:51 [scrapy] INFO: Spider opened
2016-10-19 12:21:53 [scrapy] DEBUG: Retrying <GET http://'http:/robots.txt> (fai
led 1 times): DNS lookup failed: address "'http:" not found: [Errno 11004] getad
drinfo failed.
2016-10-19 12:21:56 [scrapy] DEBUG: Retrying <GET http://'http:/robots.txt> (fai
led 2 times): DNS lookup failed: address "'http:" not found: [Errno 11004] getad
drinfo failed.
2016-10-19 12:21:58 [scrapy] DEBUG: Gave up retrying <GET http://'http:/robots.t
xt> (failed 3 times): DNS lookup failed: address "'http:" not found: [Errno 1100
4] getaddrinfo failed.
2016-10-19 12:21:58 [scrapy] ERROR: Error downloading <GET http://'http:/robots.
txt>: DNS lookup failed: address "'http:" not found: [Errno 11004] getaddrinfo f
ailed.
DNSLookupError: DNS lookup failed: address "'http:" not found: [Errno 11004] get
addrinfo failed.
2016-10-19 12:22:00 [scrapy] DEBUG: Retrying <GET http://'http://www.city-data.c
om/advanced/search.php#body?fips=0> (failed 1 times): DNS lookup failed: address
"'http:" not found: [Errno 11004] getaddrinfo failed.
2016-10-19 12:22:03 [scrapy] DEBUG: Retrying <GET http://'http://www.city-data.c
om/advanced/search.php#body?fips=0> (failed 2 times): DNS lookup failed: address
"'http:" not found: [Errno 11004] getaddrinfo failed.
2016-10-19 12:22:05 [scrapy] DEBUG: Gave up retrying <GET http://'http://www.cit
y-data.com/advanced/search.php#body?fips=0> (failed 3 times): DNS lookup failed:
address "'http:" not found: [Errno 11004] getaddrinfo failed.
Traceback (most recent call last):
File "C:\Python27\lib\runpy.py", line 174, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "C:\Python27\lib\runpy.py", line 72, in _run_code
exec code in run_globals
File "C:\Python27\lib\site-packages\scrapy\cmdline.py", line 161, in <module>
execute()
File "C:\Python27\lib\site-packages\scrapy\cmdline.py", line 142, in execute
_run_print_help(parser, _run_command, cmd, args, opts)
File "C:\Python27\lib\site-packages\scrapy\cmdline.py", line 88, in _run_print
_help
func(*a, **kw)
File "C:\Python27\lib\site-packages\scrapy\cmdline.py", line 149, in _run_comm
and
cmd.run(args, opts)
File "C:\Python27\lib\site-packages\scrapy\commands\shell.py", line 71, in run
shell.start(url=url)
File "C:\Python27\lib\site-packages\scrapy\shell.py", line 47, in start
self.fetch(url, spider)
File "C:\Python27\lib\site-packages\scrapy\shell.py", line 112, in fetch
reactor, self._schedule, request, spider)
File "C:\Python27\lib\site-packages\twisted\internet\threads.py", line 122, in
blockingCallFromThread
result.raiseException()
File "<string>", line 2, in raiseException
twisted.internet.error.DNSLookupError: DNS lookup failed: address "'http:" not f
ound: [Errno 11004] getaddrinfo failed.
'csize' is not recognized as an internal or external command,
operable program or batch file.
'sc' is not recognized as an internal or external command,
operable program or batch file.
'sd' is not recognized as an internal or external command,
operable program or batch file.
'states' is not recognized as an internal or external command,
operable program or batch file.
'near' is not recognized as an internal or external command,
operable program or batch file.
'nam_crit1' is not recognized as an internal or external command,
operable program or batch file.
'b6914' is not recognized as an internal or external command,
operable program or batch file.
'e6914' is not recognized as an internal or external command,
operable program or batch file.
'i6914' is not recognized as an internal or external command,
operable program or batch file.
'nam_crit2' is not recognized as an internal or external command,
operable program or batch file.
'b6819' is not recognized as an internal or external command,
operable program or batch file.
'e6819' is not recognized as an internal or external command,
operable program or batch file.
'i6819' is not recognized as an internal or external command,
operable program or batch file.
'ps' is not recognized as an internal or external command,
operable program or batch file.
'p' is not recognized as an internal or external command,
operable program or batch file.
</code></pre>
<p>My code is </p>
<pre><code>import scrapy

class QuotesSpider(scrapy.Spider):
    name = "citydata"

    def start_requests(self):
        urls = [
            'http://www.city-data.com/advanced/search.php#body?fips=0&csize=a&sc=2&sd=0&states=ALL&near=&nam_crit1=6914&b6914=MIN&e6914=MAX&i6914=1&nam_crit2=6819&b6819=15500&e6819=MAX&i6819=1&ps=20&p=0',
            'http://www.city-data.com/advanced/search.php#body?fips=0&csize=a&sc=2&sd=0&states=ALL&near=&nam_crit1=6914&b6914=MIN&e6914=MAX&i6914=1&nam_crit2=6819&b6819=15500&e6819=MAX&i6819=1&ps=20&p=1',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'citydata-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)
</code></pre>
<p>Someone please guide me through this.</p>
| 0 | 2016-10-19T07:49:48Z | 40,127,973 | <p>First of all, this website looks like a JavaScript-heavy one. Scrapy itself only downloads HTML from servers but does not interpret JavaScript statements.</p>
<p>Second, the URL fragment (i.e. everything including and after <code>#body</code>) is not sent to the server, so only <code>http://www.city-data.com/advanced/search.php</code> is fetched. (Scrapy does the same as your browser;
you can confirm that with your browser's dev tools network tab.)</p>
<p>So for Scrapy, the requests to </p>
<pre><code>http://www.city-data.com/advanced/search.php#body?fips=0&csize=a&sc=2&sd=0&states=ALL&near=&nam_crit1=6914&b6914=MIN&e6914=MAX&i6914=1&nam_crit2=6819&b6819=15500&e6819=MAX&i6819=1&ps=20&p=0
</code></pre>
<p>and</p>
<pre><code>http://www.city-data.com/advanced/search.php#body?fips=0&csize=a&sc=2&sd=0&states=ALL&near=&nam_crit1=6914&b6914=MIN&e6914=MAX&i6914=1&nam_crit2=6819&b6819=15500&e6819=MAX&i6819=1&ps=20&p=1
</code></pre>
<p>are the same resource, so it is only fetched once. They differ only in their URL fragments.</p>
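<p>You can sanity-check this with the standard library alone (URLs shortened here for readability): stripping the fragment from both URLs leaves the exact same resource, which is why Scrapy's duplicate filter drops the second request.</p>

```python
from urllib.parse import urldefrag

# Shortened versions of the two URLs from the question
u0 = "http://www.city-data.com/advanced/search.php#body?fips=0&ps=20&p=0"
u1 = "http://www.city-data.com/advanced/search.php#body?fips=0&ps=20&p=1"

base0, frag0 = urldefrag(u0)  # everything after '#' goes into the fragment
base1, frag1 = urldefrag(u1)

print(base0)           # http://www.city-data.com/advanced/search.php
print(base0 == base1)  # True: the server sees one and the same request
print(frag0 == frag1)  # False: only the (never-sent) fragments differ
```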
<p>What you need is a JavaScript renderer. You could use Selenium or something like <a href="http://splash.readthedocs.io/" rel="nofollow">Splash</a>. I recommend using the <a href="https://github.com/scrapy-plugins/scrapy-splash" rel="nofollow">scrapy-splash plugin</a> which includes a duplicate filter that takes into account URL fragments.</p>
| 0 | 2016-10-19T09:55:24Z | [
"python",
"python-2.7",
"web-scraping",
"scrapy",
"scrapy-spider"
] |
Efficiently write SPSS data into Excel using Python | 40,125,275 | <p>I am trying to use Python to write data from an open SPSS dataset into an excel file. The below program works fine, but it takes about 35 seconds for a file with 1.4 million data-points (2500 cases, 700 variables). </p>
<p>For now, I am looping through each case (as a tuple), then assigning each element of the tuple into a cell. <code>openpyxl</code> is the Excel module of choice (as I did not use any other in the past).</p>
<p>I am going to use the Python program for much larger data-sets, so I was wondering if there is a more efficient logic of doing this. </p>
<pre><code>BEGIN PROGRAM.
import spssdata
import spss, spssaux, sys
import time  # needed for time.time() below
import openpyxl
from openpyxl import Workbook
import gc

#initialise timer
time_start = time.time()

#Create the workbook to save the codebook
wb=openpyxl.Workbook()
ws1=wb.create_sheet()
ws2=wb.create_sheet()  # second sheet, used for the header row below

spss.StartDataStep()
MyFile = spss.Dataset()
varDict = spssaux.VariableDict()
MyCases=MyFile.cases
MyVars=MyFile.varlist

for varnum, varname in enumerate(MyFile.varlist):
    ws1.cell(row=1,column=varnum+1).value=varname.name
    ws2.cell(row=1,column=varnum+1).value=varname.name

for eachcase in range (len(MyCases)):
    for eachvar in range (len(MyCases[eachcase])):
        ValueToWrite=MyCases[eachcase][eachvar]
        ws1.cell(row=eachcase+2,column=eachvar+1).value=ValueToWrite

spss.EndDataStep()
wb.save("some filename")
del wb
gc.collect()

time_end = time.time()
time_taken = int(time_end-time_start)
print ("Saving took " + str(time_taken) + " seconds.")
END PROGRAM.
</code></pre>
| 2 | 2016-10-19T07:50:46Z | 40,126,359 | <p>You could experiment with using the <code>win32com</code> approach. This is normally quite slow, but it does have the advantage of being able to do most of the data transfer in a single call. You would just need to prepare your data into a suitably sized list:</p>
<pre><code>import win32com.client as win32
data = [["A", "B"] for _ in range(10000)]
excel = win32.gencache.EnsureDispatch('Excel.Application')
excel.DisplayAlerts = False
wb = excel.Workbooks.Add()
ws = wb.Worksheets.Add()
ws.Range(ws.Cells(1, 1), ws.Cells(len(data), 2)).Value = data
wb.SaveAs(r'c:\python\test.xlsx')
excel.Application.Quit()
</code></pre>
<p>Timing this with <code>range(1000000)</code> took about 7.5 seconds.</p>
<p>AFAIK there is no way in <code>openpyxl</code> to write a whole 2-D range in one call (though <code>ws.append()</code> can write one row at a time).</p>
<hr>
<p>Based on your existing code, I would suggest something along the lines of:</p>
<pre><code>import win32com.client as win32
import time
import spss,spssaux, sys
#initialise timer
time_start = time.time()
spss.StartDataStep()
MyFile = spss.Dataset()
MyCases = MyFile.cases
spss.EndDataStep()
excel = win32.gencache.EnsureDispatch('Excel.Application')
excel.DisplayAlerts = False
wb = excel.Workbooks.Add()
ws1 = wb.Worksheets("Sheet1")
ws2 = wb.Worksheets("Sheet2")
# Add header to both sheets
ws1.Range(ws1.Cells(1, 1), ws1.Cells(1, len(MyFile.varlist))).Value = MyFile.varlist
ws2.Range(ws2.Cells(1, 1), ws2.Cells(1, len(MyFile.varlist))).Value = MyFile.varlist
# Copy data
ws1.Range(ws1.Cells(2, 1), ws1.Cells(1 + len(MyCases), len(MyCases[0]))).Value = MyCases
wb.SaveAs(r'e:\python temp\test.xlsx')
excel.Application.Quit()
print("Saving took {:.1f} seconds.".format(time.time() - time_start))
</code></pre>
| 1 | 2016-10-19T08:45:18Z | [
"python",
"excel",
"spss",
"openpyxl"
] |
Efficiently write SPSS data into Excel using Python | 40,125,275 | <p>I am trying to use Python to write data from an open SPSS dataset into an excel file. The below program works fine, but it takes about 35 seconds for a file with 1.4 million data-points (2500 cases, 700 variables). </p>
<p>For now, I am looping through each case (as a tuple), then assigning each element of the tuple into a cell. <code>openpyxl</code> is the Excel module of choice (as I did not use any other in the past).</p>
<p>I am going to use the Python program for much larger data-sets, so I was wondering if there is a more efficient logic of doing this. </p>
<pre><code>BEGIN PROGRAM.
import spssdata
import spss, spssaux, sys
import time  # needed for time.time() below
import openpyxl
from openpyxl import Workbook
import gc

#initialise timer
time_start = time.time()

#Create the workbook to save the codebook
wb=openpyxl.Workbook()
ws1=wb.create_sheet()
ws2=wb.create_sheet()  # second sheet, used for the header row below

spss.StartDataStep()
MyFile = spss.Dataset()
varDict = spssaux.VariableDict()
MyCases=MyFile.cases
MyVars=MyFile.varlist

for varnum, varname in enumerate(MyFile.varlist):
    ws1.cell(row=1,column=varnum+1).value=varname.name
    ws2.cell(row=1,column=varnum+1).value=varname.name

for eachcase in range (len(MyCases)):
    for eachvar in range (len(MyCases[eachcase])):
        ValueToWrite=MyCases[eachcase][eachvar]
        ws1.cell(row=eachcase+2,column=eachvar+1).value=ValueToWrite

spss.EndDataStep()
wb.save("some filename")
del wb
gc.collect()

time_end = time.time()
time_taken = int(time_end-time_start)
print ("Saving took " + str(time_taken) + " seconds.")
END PROGRAM.
</code></pre>
| 2 | 2016-10-19T07:50:46Z | 40,127,235 | <p>Make sure lxml is installed and use openpyxl's write-only mode, assuming you can work on a row-by-row basis. If this is not directly possible then you'll need some kind of intermediate structure that can give you rows.</p>
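<p>For illustration, a minimal sketch of that write-only pattern (assuming <code>openpyxl</code> is installed; the header names and rows are placeholders, not the asker's SPSS data):</p>

```python
from openpyxl import Workbook

wb = Workbook(write_only=True)  # rows are serialized as appended, not kept as cells in memory
ws = wb.create_sheet()

ws.append(["var1", "var2", "var3"])   # header row (placeholder names)
for case in [[1, 2, 3], [4, 5, 6]]:   # stand-in for MyCases
    ws.append(case)                   # one call per row, no per-cell loop

wb.save("some_filename.xlsx")
```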
| 0 | 2016-10-19T09:22:40Z | [
"python",
"excel",
"spss",
"openpyxl"
] |
How to convert this Python code to c#? | 40,125,289 | <p>I'm fairly decent at python and just starting to learn C#.</p>
<p>How would I write this piece of Python code in C#?</p>
<pre><code> d =" ".join(c.split())
</code></pre>
<p>I'm a c# beginner so nothing too technical please.</p>
 | 0 | 2016-10-19T07:51:46Z | 40,125,399 | <p><code>d = string.Join(" ", c.Split(null));</code> Passing <code>null</code> to <code>Split</code> makes it split on whitespace; the first argument to <code>Join</code> is the separator, which should be <code>" "</code> to match the Python code.</p>
| 0 | 2016-10-19T07:57:04Z | [
"c#",
"python"
] |
How to convert this Python code to c#? | 40,125,289 | <p>I'm fairly decent at python and just starting to learn C#.</p>
<p>How would I write this piece of Python code in C#?</p>
<pre><code> d =" ".join(c.split())
</code></pre>
<p>I'm a c# beginner so nothing too technical please.</p>
| 0 | 2016-10-19T07:51:46Z | 40,125,405 | <p>As far as I can see</p>
<pre><code> c.split()
</code></pre>
<p>is splitting string <code>c</code> by default - <code></code> (space) - delimiter; C# equivalent is</p>
<pre><code> c.Split(' ');
</code></pre>
<p>Pythonic</p>
<pre><code> " ".join
</code></pre>
<p>is collection joining with <code>" "</code> being the delimiter; C# equivalent is</p>
<pre><code> string.Join(" ", collection);
</code></pre>
<p>Binding all together:</p>
<pre><code> d = string.Join(" ", c.Split(' '));
</code></pre>
<p>One caveat: Python's argument-less <code>split()</code> also collapses runs of whitespace, so for an exact match use <code>c.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)</code>.</p>
| 2 | 2016-10-19T07:57:15Z | [
"c#",
"python"
] |
How to convert this Python code to c#? | 40,125,289 | <p>I'm fairly decent at python and just starting to learn C#.</p>
<p>How would I write this piece of Python code in C#?</p>
<pre><code> d =" ".join(c.split())
</code></pre>
<p>I'm a c# beginner so nothing too technical please.</p>
| 0 | 2016-10-19T07:51:46Z | 40,125,424 | <p>Its almost the same:</p>
<pre><code>// splitting by space
var d = string.Join (" ", c.Split (' '));
</code></pre>
| 0 | 2016-10-19T07:58:50Z | [
"c#",
"python"
] |
Jython : SyntaxError: invalid syntax | 40,125,406 | <p>I am getting a syntax error in my code. Can anyone say what's wrong with the syntax? I am new to this language and don't have much of an idea.</p>
<p>Error Message:</p>
<blockquote>
<p>WASX7017E: Exception received while running file "jdbcconnection.jy";
exception information: com.ibm.bsf.BSFException: exception from
Jython: Traceback (innermost last): (no code object) at line 0 File
"", line 13 AdminTask.createJDBCProvider('[-scope
Node='+nodeName+',Server='+serverName' -databaseType Oracle
-providerType "Oracle JDBC Driver" -implementationType "Connection pool data source" - name "Oracle JDBC Driver" -description "Oracle
JDBC Driver" -classpath [${ORACLE_JDBC_DRIVER_PATH}/ojdbc6.jar]
-nativePath "" ]') ^ SyntaxError: invalid syntax</p>
</blockquote>
<hr>
<p>My code:</p>
<pre><code>import sys
def jdbcoracle(nodeName,serverName):
print 'Create JDBC provider'
AdminTask.createJDBCProvider('[-scope Node='+nodeName+',Server='+serverName' -databaseType Oracle -providerType "Oracle JDBC Driver" -implementationType "Connection pool data source" -name "Oracle JDBC Driver" -description "Oracle JDBC Driver" -classpath [${ORACLE_JDBC_DRIVER_PATH}/ojdbc6.jar] -nativePath "" ]')
AdminTask.createJDBCProvider('[-scope Node='+nodeName+',Server='+serverName' -databaseType Oracle -providerType "Oracle JDBC Driver" -implementationType "XA data source" -name "Oracle JDBC Driver (XA)" -description "Oracle JDBC Driver (XA)" -classpath [${ORACLE_JDBC_DRIVER_PATH}/ojdbc6.jar] -nativePath "" ]')
AdminConfig.save()
print 'JDBC provider created'
#-------------------------------------
# Main Application starts from here
#-------------------------------------
global nodeName, cellName
nodeName = sys.argv[0]
serverName = sys.argv[1]
jdbcoracle(nodeName,serverName)
</code></pre>
| 0 | 2016-10-19T07:57:28Z | 40,125,581 | <p>Your syntax would be invalid in any language. You have <code>'...Server='+serverName' ...'</code> - you are missing a <code>+</code> before the reopening of the quote.</p>
<p>Of course, you should not be building up strings like that; you should be using one of the many string formatting features available in Python, for example:</p>
<pre><code>'[-scope Node={},Server={} -databaseType...'.format(nodeName, serverName)
</code></pre>
<p>I suspect you also mean <code>ORACLE_JDBC_DRIVER_PATH</code> to be an interpolated variable, but only you know where that is supposed to be coming from.</p>
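<p>A quick runnable sketch of the fix (the sample <code>nodeName</code>/<code>serverName</code> values are made up):</p>

```python
nodeName, serverName = "node01", "server1"

# The question's line fails because of: '...Server=' + serverName ' -databaseType...'
# (missing + before the reopened quote). With the + in place it is valid:
cmd = '[-scope Node=' + nodeName + ',Server=' + serverName + ' -databaseType Oracle]'

# The same string built with str.format instead of concatenation:
cmd2 = '[-scope Node={},Server={} -databaseType Oracle]'.format(nodeName, serverName)

print(cmd == cmd2)   # True
```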
| 2 | 2016-10-19T08:06:32Z | [
"python",
"jython"
] |
modify pandas boxplot output | 40,125,528 | <p>I made this plot in pandas, according to the documentation:</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.DataFrame(np.random.rand(140, 4), columns=['A', 'B', 'C', 'D'])
df['models'] = pd.Series(np.repeat(['model1','model2', 'model3', 'model4', 'model5', 'model6', 'model7'], 20))
plt.figure()
bp = df.boxplot(by="models")
</code></pre>
<p><a href="https://i.stack.imgur.com/2So7W.png" rel="nofollow"><img src="https://i.stack.imgur.com/2So7W.png" alt="enter image description here"></a></p>
<p>How can I modify this plot?</p>
<p>I want:</p>
<ul>
<li>modify arrangement from (2,2) to (1,4)</li>
<li>change the labels and titles, text and font size</li>
<li>remove the '[models]' text</li>
</ul>
<p>And how do I save this plot as a PDF?</p>
| 0 | 2016-10-19T08:04:19Z | 40,126,090 | <p>A number of things you can do already using the boxplot function in pandas, see the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.boxplot.html" rel="nofollow">documentation</a>. </p>
<ul>
<li><p>You can already modify the arrangement, and change the fontsize: </p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.DataFrame(np.random.rand(140, 4), columns=['A', 'B', 'C', 'D'])
df['models'] = pd.Series(np.repeat(['model1','model2', 'model3', 'model4', 'model5', 'model6', 'model7'], 20))
bp = df.boxplot(by="models", layout = (4,1), fontsize = 14)
</code></pre></li>
<li><p>Changing the column labels can be done by changing the column labels of the dataframe itself:</p>
<pre><code>df.columns = ['E', 'F', 'G', 'H', 'models']
</code></pre></li>
<li><p>For further customization I would use the functionality from matplotlib itself; you can take a look at the examples <a href="http://matplotlib.org/examples/statistics/boxplot_demo.html" rel="nofollow">here</a>. </p></li>
</ul>
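<p>For the PDF part of the question, matplotlib can save the figure directly; a sketch (the <code>Agg</code> backend line is an assumption, only needed when running without a display):</p>

```python
import matplotlib
matplotlib.use("Agg")            # headless backend; safe default for scripts
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(140, 4), columns=["A", "B", "C", "D"])
df["models"] = pd.Series(np.repeat(["model%d" % i for i in range(1, 8)], 20))
df.boxplot(by="models", layout=(4, 1), fontsize=14)
plt.savefig("boxplots.pdf")      # the .pdf extension selects the PDF writer
```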
| 2 | 2016-10-19T08:32:33Z | [
"python",
"pandas",
"matplotlib"
] |
modify pandas boxplot output | 40,125,528 | <p>I made this plot in pandas, according to the documentation:</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.DataFrame(np.random.rand(140, 4), columns=['A', 'B', 'C', 'D'])
df['models'] = pd.Series(np.repeat(['model1','model2', 'model3', 'model4', 'model5', 'model6', 'model7'], 20))
plt.figure()
bp = df.boxplot(by="models")
</code></pre>
<p><a href="https://i.stack.imgur.com/2So7W.png" rel="nofollow"><img src="https://i.stack.imgur.com/2So7W.png" alt="enter image description here"></a></p>
<p>How can I modify this plot?</p>
<p>I want:</p>
<ul>
<li>modify arrangement from (2,2) to (1,4)</li>
<li>change the labels and titles, text and font size</li>
<li>remove the '[models]' text</li>
</ul>
<p>And how do I save this plot as a PDF?</p>
| 0 | 2016-10-19T08:04:19Z | 40,126,110 | <ul>
<li>For the arrangement use <code>layout</code></li>
<li>For setting x label use <code>set_xlabel('')</code></li>
<li>For the figure title use <code>fig.suptitle()</code></li>
<li>For changing the figure size use <code>figsize=(w,h)</code> (inches)</li>
</ul>
<p>note: the line <code>np.asarray(bp).reshape(-1)</code> flattens the grid of subplot axes (2x2, for instance) into a flat array so that each axis can be handled in turn. </p>
<p>code : </p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.DataFrame(np.random.rand(140, 4), columns=['A', 'B', 'C', 'D'])
df['models'] = pd.Series(np.repeat(['model1','model2', 'model3', 'model4', 'model5', 'model6', 'model7'], 20))
bp = df.boxplot(by="models",layout=(4,1),figsize=(6,8))
[ax_tmp.set_xlabel('') for ax_tmp in np.asarray(bp).reshape(-1)]
fig = np.asarray(bp).reshape(-1)[0].get_figure()
fig.suptitle('New title here')
plt.show()
</code></pre>
<p>result: </p>
<p><a href="https://i.stack.imgur.com/jRV5z.png" rel="nofollow"><img src="https://i.stack.imgur.com/jRV5z.png" alt="enter image description here"></a></p>
| 2 | 2016-10-19T08:33:33Z | [
"python",
"pandas",
"matplotlib"
] |
Is it possible to host endpoint and WSGIApplication application in the same app engine project | 40,125,691 | <p>I implemented an endpoints project:</p>
<pre><code>@endpoints.api(name='froom', version='v1', description='froom API')
class FRoomApi(remote.Service):
@endpoints.method(FbPutRoomRequest, RoomMessage, path='putroom/{id}', http_method='PUT', name='putroom')
def put_room(self, request):
entity = FRoom().put_room(request, request.id)
return entity.to_request_message()
application = endpoints.api_server([FRoomApi],restricted=False)
</code></pre>
<p>app.yaml</p>
<pre><code>- url: /_ah/spi/.*
script: froomMain.application
- url: .*
static_files: index.html
upload: index.html
</code></pre>
<p>and I have separate wsgi-jinja project:</p>
<pre><code>routes = [
Route(r'/', handler='handlers.PageHandler:root', name='pages-root'),
# Wipe DS
Route(r'/tasks/wipe-ds', handler='handlers.WipeDSHandler', name='wipe-ds'),
]
config = {
'webapp2_extras.sessions': {
'secret_key': 'someKey'
},
'webapp2_extras.jinja2': {
'filters': {
'do_pprint': do_pprint,
},
},
}
application = webapp2.WSGIApplication(routes, debug=DEBUG, config=config)
</code></pre>
<p>app.yaml</p>
<pre><code>- url: /.*
script: froomMain.application
</code></pre>
<p>Is it possible to host those two projects in the same application?</p>
| 0 | 2016-10-19T08:12:20Z | 40,131,868 | <p>The fundamental problem that needs to be addressed is defining the appropriate overall app request namespace so that routing to the appropriate sub-app can be made reliably, keeping in mind that:</p>
<ul>
<li><strong>only one sub-app</strong> can be designated as the default one (which will handle requests not matching any of the other sub-app namespaces).</li>
<li>the namespaces for all non-default sub-apps <strong>must be checked before</strong> the namespace for the default sub-app</li>
<li>the decision to route to one sub-app is final, if it fails to handle the request it'll return a 404, there is no fallback to another sub-app which might be able to handle the request</li>
</ul>
<p>In your case the complication arises from the conflicting namespaces of the sub-apps. For example, both the <code>/</code> and the <code>/tasks/wipe-ds</code> paths from the wsgi-jinja project collide with the <code>.*</code> namespace in the endpoints project. To make it work, one of the sub-apps' namespaces must be modified. </p>
<p>Since the endpoints project contains a lot of auto-generated code it's more difficult to change, so I'd leave that as the default one and modify the wsgi-jinja one, for example by prefixing it with <code>/www</code>. For this to work the wsgi-jinja's internal routes need to be modified accordingly:</p>
<ul>
<li><code>/</code> -> <code>/www</code></li>
<li><code>/tasks/wipe-ds</code> -> <code>/www/tasks/wipe-ds</code></li>
</ul>
<p>Both your existing projects seem to have a <code>froomMain.py</code> file with an <code>application</code> global inside, conflicting. I'd rename wsgi-jinja's one, let's say to <code>www.py</code>:</p>
<pre><code>routes = [
Route(r'/www/', handler='handlers.PageHandler:root', name='pages-root'),
# Wipe DS
Route(r'/www/tasks/wipe-ds', handler='handlers.WipeDSHandler', name='wipe-ds'),
]
config = {
'webapp2_extras.sessions': {
'secret_key': 'someKey'
},
'webapp2_extras.jinja2': {
'filters': {
'do_pprint': do_pprint,
},
},
}
application = webapp2.WSGIApplication(routes, debug=DEBUG, config=config)
</code></pre>
<p>Your <code>app.yaml</code> file would then be:</p>
<pre><code>- url: /www/.*
script: www.application
- url: /_ah/spi/.*
script: froomMain.application
- url: .*
static_files: index.html
upload: index.html
</code></pre>
| 1 | 2016-10-19T12:49:59Z | [
"python",
"google-app-engine",
"wsgi",
"endpoint"
] |
Starting virtualenv script rc local | 40,125,708 | <p>I'm very new to this field and hope someone can help me.</p>
<p>So I have a backend project which I need to launch automatically when the computer switches on (I really don't care how, using systemd or rc.local, my boss told me rc.local, but I guess either will do). I just need to start a docker container, then start my virtualenv and then run the project.</p>
<p>So far I've tried this at <code>/etc/rc.local</code></p>
<pre><code>docker start cassandratt #my docker container
sleep 20 #an ugly hack to give time for the container to start
cd /home/backend/
. venv/bin/activate
. /run.py</code></pre>
<p>It doesn't work, but the docker container starts, so I guess the problem is around virtualenv or Python; I really don't know, as I don't have any experience in this field. </p>
<p>Any idea on how I could accomplish it? </p>
<p>Thanks in advance</p>
<p><strong>Edit:</strong></p>
<p>Following Samer's guidance I tried creating a folder after activating the virtualenv and it was created fine, so I suppose the problem is trying to execute run.py, perhaps loading the virtualenv's python? </p>
<pre><code>docker start cassandratt #my docker container
cd /home/backend/
. venv/bin/activate
mkdir test #folder created fine
. /run.py
mkdir test2 #folder not created</code></pre>
 | 2 | 2016-10-19T08:13:10Z | 40,128,669 | <p>So, partially, the solution seems to be to set some variables instead of accessing them directly. At least this worked for me. Thanks Samer for giving us a big tip :)</p>
<pre><code>HOME=/home/backend # the project path
docker start container
. $HOME/venv/bin/activate # activates the project's virtualenv
/usr/bin/env python $HOME/run.py & # runs run.py through the virtualenv's python, in the background
exit 0</code></pre>
| 0 | 2016-10-19T10:24:47Z | [
"python",
"linux",
"virtualenv",
"boot"
] |
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str' +HOST | 40,125,860 | <p>I am VERY new to programming. I found this program and applied it, but I ran into the error below (the error output is at the end of the program):</p>
<pre><code>import requests
import telnetlib
import time
import sys
import numpy as np
import matplotlib
import matplotlib.patches as mpatches
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import matplotlib.ticker as plticker
USERNAME = 'ubnt'
PASSWORD = 'ubnt1'
HOST = "192.168.1.220"
PORT = 18888
TIMEOUT = 10
FRAME_SPEED = 1
LOGIN_URI = 'http://' + HOST + ':80/login.cgi'
#LOGIN_URI = 'https://' + HOST + ':443/login.cgi'
def usage():
print ("Usage:+ sys.argv[0] + <live|replay FILENAME>")
print ("")
print ("Options:")
print ("\tlive \t=\tProcess live data from device ") + HOST
print ("\treplay FILENAME \t=\tReplay FILENAME")
print ("\trecord FILENAME \t=\tMake movie of FILENAME")
exit(128)
if len(sys.argv) == 2 and sys.argv[1] == 'live':
ACTION='live'
FILENAME = None
elif len(sys.argv) == 3 and sys.argv[1] == 'replay':
ACTION='replay'
    FILENAME = sys.argv[2] # Stored data processing
FRAME_SPEED = 50
elif len(sys.argv) == 3 and sys.argv[1] == 'record':
ACTION='record'
FILENAME = sys.argv[2] # Stored data processing
FRAME_SPEED = 50
else:
usage()
def parse_get_frame_resp(line):
_,vals_raw = line.split(':')
vals = map(int, vals_raw.split(','))
frame_nr = vals.pop(0)
return(frame_nr, vals)
#TODO: Make me dynamic parse from 'SCAN RANGE' response
scan_range_begin = 2402000000
scan_range_end = 2497000000
if not FILENAME:
print ("Enabling Ubiquiti airView at %s:%s@%s...") %(USERNAME, PASSWORD, HOST)
s = requests.session()
s.get(LOGIN_URI, verify=False)
r = s.post(LOGIN_URI,
{"username": USERNAME, "password": PASSWORD, "uri": "airview.cgi? start=1"},
verify=False)
if 'Invalid credentials.' in r.text:
print ("# CRIT: Username/password invalid!")
sys.exit(1)
print ("Waiting for device to enter airView modus...")
# Allow device a few moments to settle
time.sleep(TIMEOUT)
print ("Start scanning...")
tn = telnetlib.Telnet(HOST, PORT, timeout=TIMEOUT)
#tn.set_debuglevel(99)
# Storage on unique files
outfile = 'output-%s.dat' % int(time.time())
print ("Storing output at '%s'") % outfile
fh = open(outfile, 'a')
def writeline(cmd):
""" Write line to device"""
ts = time.time()
tn.write(cmd)
print (cmd)
fh.write("%s\001%s" % (ts, cmd))
return ts
def getline():
"""Read line from device"""
line = tn.read_until("\n")
print (line)
fh.write("%s\001%s" % (time.time(), line))
return line
# Commands needs to have a trailing space if no arguments specified
writeline("CONNECT: \n")
getline()
#writeline("REQUEST RANGE: 2402000000,2407000000\n") # 5 MHz
#writeline("REQUEST RANGE: 2402000000,2412000000\n") # 10 MHz
#writeline("REQUEST RANGE: 2402000000,2417000000\n") # 15 MHz
#writeline("REQUEST RANGE: 2402000000,2422000000\n") # 20 Mhz
#writeline("REQUEST RANGE: 2402000000,2477000000\n") # (ch 1-11 - US allocation)
#writeline("REQUEST RANGE: 2402000000,2487000000\n") # (ch 1-13 - UK allocation)
writeline("REQUEST RANGE: 2402000000,2497000000\n") # (ch 1-14)
getline()
writeline("START SCAN: \n")
getline()
print ("Waiting for scan to start...")
time.sleep(2)
def get_frame(frame):
""" Get frame from device airView """
# TODO: Receiving frames in order, sometimes yield of empty responses. Already flush out maybe?
#writeline("GET FRAME: %s\n" % frame)
ts = writeline("GET FRAME: \n")
line = getline()
return((ts,) + parse_get_frame_resp(line))
else:
# No need for logic since we are processing stored data
sh = open(FILENAME, 'r')
def get_frame(frame):
""" Perform replay data processing """
while True:
line = sh.readline()
if not line:
return(None, None, None)
ts_raw, a = line.split('\001', 1)
ts = float(ts_raw)
cmd, ret = a.split(':', 1)
if cmd == 'FRAME':
return((ts,) + parse_get_frame_resp(a))
# Get innitial frame number and bins sizes
_, frame_nr, vals = get_frame(None)
bin_size = len(vals)
bin_sample_khz = float(scan_range_end - scan_range_begin) / 1000 / bin_size
print ("Bin size: %s") % bin_size
# Start making picture
fig, ax = plt.subplots(figsize=(20,11))
fig.canvas.set_window_title('UBNT airView Client')
ax.set_ylabel('100ms units elapsed')
ax.set_xlabel('Frequency (sampled with bins of %s kHz)' % bin_sample_khz)
# Channel center frequencies
a = [2402,2412,2417,2422,2427,2432,2437,2442,2447,2452,2457,2462,2467,2472,2484,2497]
channels = (np.array(a,dtype='float32') - 2402) / (bin_sample_khz / 1000)
ax.get_xaxis().set_ticks(channels)
plt.xticks(rotation=90)
# Plot channel description
for i in range(1,15):
width_20mhz = 20000.0 / bin_sample_khz
if i in [1,6,11,14]:
pac = mpatches.Arc([channels[i], 0], width_20mhz, 300,
theta2=180, linestyle='solid', linewidth=2, color='black')
else:
pac = mpatches.Arc([channels[i], 0], width_20mhz, 300,
theta2=180, linestyle='dashed', linewidth=2, color='black')
ax.add_patch(pac)
ax.get_xaxis().set_major_formatter(
plticker.FuncFormatter(lambda x, p: format(int((x * bin_sample_khz / 1000) + 2402), ',')))
plt.grid(linewidth=2,linestyle='solid',color='black')
plt.tight_layout()
bbox = fig.get_window_extent().transformed(fig.dpi_scale_trans.inverted())
width, height = bbox.width*fig.dpi, bbox.height*fig.dpi
print (width), (height)
# Initial data and history of amount of pixels of the screen, since it is
# important that all lines are draw on the screen.
bbox = fig.get_window_extent().transformed(fig.dpi_scale_trans.inverted())
width, height = bbox.width*fig.dpi, bbox.height*fig.dpi
matrix = np.empty([int(height),bin_size]) * np.nan
pcm = ax.pcolorfast(matrix, vmin=-122, vmax=-30)
if ACTION == 'record':
# Set up formatting for the movie files
Writer = animation.writers['ffmpeg']
writer = Writer(fps=15, metadata=dict(artist='AnyWi UBNT airViewer'), bitrate=1800)
#
# Matplotlib Animation
#
def update(data):
global frame_nr, matrix
# Fast forwarding in time
for i in range(FRAME_SPEED):
frame_nr_next = -1
# The same frame (duplicated), we are too fast
while frame_nr_next <= frame_nr:
ts, frame_nr_next, row = get_frame(frame_nr + 1)
frame_nr = frame_nr_next
# We are on the end of the file
if not ts and not frame_nr and not row:
return
#matrix = np.vstack([row, pcm.get_array()[:-1]])
matrix = np.vstack([row, matrix[:-1]])
pcm.set_array(matrix)
ax.set_title('Frame %s at %s' % (frame_nr,time.asctime(time.localtime(ts))))
#fig.canvas.draw()
ani = animation.FuncAnimation(fig, update, interval=100)
# Dual display and recording data does not seems to work, use a screencast
# program like gtk-recordmydesktop for that matter
if ACTION == 'record':
ani.save('live.mp4' if not FILENAME else FILENAME.rsplit('.',1)[0] + '.mp4', writer=writer)
else:
plt.show()
#
# Takes some time (10 seconds) for device to return to an active state
#
error output
Usage:+ sys.argv[0] + <live|replay FILENAME>
Options:
live = Process live data from device
Traceback (most recent call last):
File "C:\Users\ABDULHUSSEIN\Desktop\py-ubnt-airviewer-master\airviewer.py", line 76, in <module>
usage()
File "C:\Users\ABDULHUSSEIN\Desktop\py-ubnt-airviewer-master\airviewer.py", line 58, in usage
print ("\tlive \t=\tProcess live data from device ") + HOST
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
</code></pre>
<p>Can anyone please help me?</p>
| 1 | 2016-10-19T08:21:52Z | 40,125,982 | <p>it should be <code>print ("\tlive \t=\tProcess live data from device ",HOST)</code> not <code>print ("\tlive \t=\tProcess live data from device ") + HOST</code></p>
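<p>To see why the error occurs, a small sketch: <code>print(...)</code> returns <code>None</code>, so adding a string to its result raises exactly this <code>TypeError</code>:</p>

```python
HOST = "192.168.1.220"

try:
    print("live = Process live data from device ") + HOST   # print() returns None
except TypeError as e:
    print(e)   # unsupported operand type(s) for +: 'NoneType' and 'str'

# Concatenate (or pass extra arguments) inside the parentheses instead:
print("live = Process live data from device " + HOST)
```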
| 0 | 2016-10-19T08:27:03Z | [
"python"
] |
Optimal way to iterate through 3 files and generate a third file in python | 40,125,895 | <p>I have three txt files with a list of lists.</p>
<p>File 1 (9.7 thousand lines):</p>
<pre><code>ID1, data 1
</code></pre>
<p>File 2 (2.1 million lines):</p>
<pre><code>ID1, ID2
</code></pre>
<p>File 3 (1.1 thousand lines):</p>
<pre><code>ID2, data 3
</code></pre>
<p>I want to make a file 4 that </p>
<ul>
<li>takes all lines in file 1 (ID1 and data 1)</li>
<li>Get the ID2 for that lines ID1.</li>
<li>Get the data 3 for that ID2.</li>
<li>Save a file with ID1, data 1, ID2, data 3 for all lines in file 1 in file 4</li>
</ul>
<p>I have made a script for it in python, but ATM it takes 1 hour.</p>
<p>Here is what it does:</p>
<pre><code>file1 = []
file4 = []
file3 = []
file4.append("ID1, ID2, DATA1, DATA2")
#Import file1
with open('file1.txt') as inputfile: #file 1: around 9.7k
for line in inputfile:
temp = line.strip().split(' ')
file1.append(temp)
#Import file3
with open('file3.txt') as inputfile: #file 3: around 1.1k
for line in inputfile:
temp = line.strip().split(' ')
file3.append(temp)
print len(file1)
#Iterate through file2 (so I only iterate once through this)
with open('file2.txt') as inputfile: #File 2: 2.1 million
for line in inputfile:
temp = line.strip().split(' ')
for sublist in file1: #Only if first element is also in list 1
if sublist[0] == temp[0]:
for sublist2 in file3:
if sublist2[0] == temp[1]:
file4.append([temp, sublist[1], sublist2[1]])
print len(file4)
print file4[:10]
thefile = open('final.txt', 'w')
for item in file4:
thefile.write("%s\n" % item)
thefile.close()
</code></pre>
<p>As mentioned, it takes an hour ATM. How can I improve performance? I have a lot of looping and was considering if this could be done quicker in some way...</p>
<p>Note: IDs only appear once, data can be repeated values</p>
| 1 | 2016-10-19T08:23:09Z | 40,126,726 | <p>Since your IDs are unique, as you write, you could use dictionaries instead of lists for file1 and file3. So your loop check to see if the ID is present reduces to a single lookup in the set of keys to those dictionaries. I don't know your original lists, but I presume that dictionaries are faster for your purpose. Thus you save two loop iterations over your long file. Some time will be spent on assembling the lists of keys, though. Please try the following approach:</p>
<pre><code>file1 = {} # empty new dictionary
file4 = []
file3 = {}
file4.append("ID1, ID2, DATA1, DATA2")
#Import file1
with open('file1.txt') as inputfile: #file 1: around 9.7k
for line in inputfile:
temp = line.strip().split(' ')
file1[temp[0]] = temp[1] # store ID1 and associated data in dict
#Import file3
with open('file3.txt') as inputfile: #file 3: around 1.1k
for line in inputfile:
temp = line.strip().split(' ')
file3[temp[0]] = temp[1] # store ID2 and associated data in dict
print len(file1)
#Iterate through file2 (so I only iterate once through this)
keys1 = file1.keys() # for fast lookup, precalculate the list of ID1 entries
keys3 = file3.keys() # for fast lookup, precalculate the list of ID2 entries
with open('file2.txt') as inputfile: #File 2: 2.1 million
for line in inputfile:
temp = line.strip().split(' ')
if temp[0] in keys1:
if temp[1] in keys3:
file4.append([temp, file1[temp[0]], file3[temp[0]]])
print len(file4)
print file4[:10]
thefile = open('final.txt', 'w')
for item in file4:
thefile.write("%s\n" % item)
thefile.close()
</code></pre>
<p>Regards,</p>
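<p>To see the shape of the dict-based join in isolation, a toy-data sketch (the IDs and data values are made up); note the second lookup keys on the ID2 field, <code>temp[1]</code> in the code above:</p>

```python
# Hypothetical in-memory version of the dict-join approach
file1 = {"a": "d1", "b": "d2"}                 # ID1 -> data1
file3 = {"x": "t1", "y": "t2"}                 # ID2 -> data3
file2 = [("a", "x"), ("a", "z"), ("c", "y")]   # (ID1, ID2) pairs

file4 = [(id1, file1[id1], id2, file3[id2])
         for id1, id2 in file2
         if id1 in file1 and id2 in file3]     # dict membership is O(1)
print(file4)   # [('a', 'd1', 'x', 't1')]
```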
| 1 | 2016-10-19T09:01:10Z | [
"python",
"performance",
"file",
"optimization"
] |
Unable to import modules from relative path | 40,125,933 | <p>I have a problem importing modules from a relative path in Python. I tried everything I found on the web. Here is my directory structure:</p>
<pre><code>starcipher/
__init__.py
caesar.py
tests/
__init__.py
test_caesar.py
</code></pre>
<p>As you can tell, the <code>tests/</code> directory contains all my unit tests. The <code>test_caesar.py</code> uses the class defined in <code>caesar.py</code>. Here is my files:</p>
<p><code>caesar.py</code>:</p>
<pre><code>class Caesar:
# Blabla
</code></pre>
<p><code>tests/test_caesar.py</code>:</p>
<pre><code>import unittest
from ..caesar import Caesar
# I also tried:
from caesar import Caesar
from starcipher.caesar import Caesar
from . import Caesar
from .. import Caesar
# Nothing works.
class TestCaesar(unittest.TestCase):
# Blabla
</code></pre>
<p>I have this error each time:</p>
<pre><code>Traceback (most recent call last):
File "test_caesar.py", line 2, in <module>
from ..caesar import Caesar
SystemError: Parent module '' not loaded, cannot perform relative import
</code></pre>
<p><strong>EDIT</strong></p>
<p>Here is how I run my unit test:</p>
<ul>
<li>In the root directory: <code>python -m unittest discover tests/</code></li>
<li>Or in the <code>tests/</code> directory: <code>python test_caesar.py</code></li>
<li>Or even: <code>python -m unittest</code></li>
</ul>
<p><strong>SOLUTION</strong></p>
<p>Thanks to Pocin, removing the <code>__init__.py</code> file from the <code>tests/</code> directory solved the problem!</p>
<p>Thank you.</p>
 | 0 | 2016-10-19T08:24:51Z | 40,126,400 | <p>Just so the solution is nicely visible: the fix is to delete the <code>tests/__init__.py</code> file.</p>
<p>However, I am not really sure why that works, and it would be great if someone could provide an explanation.</p>
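<p>One way to see the mechanics: absolute imports such as <code>from starcipher.caesar import Caesar</code> work whenever the project root is on <code>sys.path</code>. A runnable sketch that recreates the question's layout in a temporary directory (an assumption, for illustration only):</p>

```python
import os
import sys
import tempfile

root = tempfile.mkdtemp()                      # stand-in for the project root
pkg = os.path.join(root, "starcipher")
os.makedirs(os.path.join(pkg, "tests"))        # tests/ gets no __init__.py here
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "caesar.py"), "w") as f:
    f.write("class Caesar:\n    pass\n")

sys.path.insert(0, root)                       # what running from the root gives you
from starcipher.caesar import Caesar
print(Caesar.__name__)   # Caesar
```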
| 0 | 2016-10-19T08:47:06Z | [
"python",
"import",
"module"
] |
Assign class' staticmethods in static variable in python | 40,126,208 | <p>I have the following class which produces an error:</p>
<pre><code>class MyClass(object):
QUERIES_AGGS = {
'query3': {
"query": MyClass._make_basic_query,
'aggregations': MyClass._make_query_three_aggregations,
'aggregation_transformations': MyClass._make_query_three_transformations
}
}
@staticmethod
def _make_basic_query():
#some code here
@staticmethod
def _make_query_three_aggregations():
#some code here
@staticmethod
def _make_query_three_transformations(aggs):
#some code here
</code></pre>
<p>As written, it won't recognize MyClass; if I remove "MyClass", Python won't recognize the functions. I know I could move the static methods out of the class and make them module-level functions. Is it possible to keep them inside the class and use them the way I am trying to?</p>
| 1 | 2016-10-19T08:37:46Z | 40,128,760 | <p>Change the order so that the dictionary is specified after the methods have been defined. Also don't use <code>MyClass</code> when doing so.</p>
<pre><code>class MyClass(object):
@staticmethod
def _make_basic_query():
#some code here
pass
@staticmethod
def _make_query_three_aggregations():
#some code here
pass
@staticmethod
def _make_query_three_transformations(aggs):
#some code here
pass
QUERIES_AGGS = {
'query3': {
"query": _make_basic_query,
'aggregations': _make_query_three_aggregations,
'aggregation_transformations': _make_query_three_transformations
}
}
</code></pre>
<p>This works because, within the body of the class declaration, you can reference the methods by name without qualifying them with the class. What you are referencing has to have already been defined, though.</p>
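<p>A minimal runnable version of the reordered pattern. The <code>.__func__</code> access is an addition of mine: it stores the plain function rather than the <code>staticmethod</code> wrapper, which keeps the dict value callable on Pythons older than 3.10 (where <code>staticmethod</code> objects themselves are not callable):</p>

```python
class MyClass(object):
    @staticmethod
    def _make_basic_query():
        return "basic"

    # The name is visible here because the method is already defined above;
    # .__func__ extracts the underlying function from the staticmethod object.
    QUERIES_AGGS = {"query3": {"query": _make_basic_query.__func__}}

print(MyClass.QUERIES_AGGS["query3"]["query"]())   # basic
```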
| 1 | 2016-10-19T10:29:04Z | [
"python"
] |
get dictionary contains in list if key and value exists | 40,126,260 | <p>How do I get the complete dictionaries inside a list? First I need to check whether a given key and value exist as a pair.</p>
<pre><code>test = [{'a': 'hello' , 'b': 'world', 'c': 1},
{'a': 'crawler', 'b': 'space', 'c': 5},
{'a': 'jhon' , 'b': 'doe' , 'c': 8}]
</code></pre>
<p>when I try to make it conditional like this</p>
<pre><code>if any((d['c'] is 8) for d in test):
</code></pre>
<p>the value is True or False, but I want the result to be a dictionary like</p>
<pre><code>{'a': 'jhon', 'b': 'doe', 'c': 8}
</code></pre>
<p>same as if I do</p>
<pre><code>if any((d['a'] is 'crawler') for d in test):
</code></pre>
<p>the result is:</p>
<pre><code>{'a': 'crawler', 'b': 'space', 'c': 5}
</code></pre>
<p>Thanks before.</p>
| 2 | 2016-10-19T08:40:08Z | 40,126,362 | <p>Use comprehension:</p>
<pre><code>data = [{'a': 'hello' , 'b': 'world', 'c': 1},
{'a': 'crawler', 'b': 'space', 'c': 5},
{'a': 'jhon' , 'b': 'doe' , 'c': 8}]
print([d for d in data if d["c"] == 8])
# [{'c': 8, 'a': 'jhon', 'b': 'doe'}]
</code></pre>
| 2 | 2016-10-19T08:45:28Z | [
"python",
"list",
"if-statement",
"dictionary",
"condition"
] |
get dictionary contains in list if key and value exists | 40,126,260 | <p>How do I get the complete dictionaries inside a list? First I need to check whether a given key and value exist as a pair.</p>
<pre><code>test = [{'a': 'hello' , 'b': 'world', 'c': 1},
{'a': 'crawler', 'b': 'space', 'c': 5},
{'a': 'jhon' , 'b': 'doe' , 'c': 8}]
</code></pre>
<p>when I try to make it conditional like this</p>
<pre><code>if any((d['c'] is 8) for d in test):
</code></pre>
<p>the value is True or False, but I want the result to be a dictionary like</p>
<pre><code>{'a': 'jhon', 'b': 'doe', 'c': 8}
</code></pre>
<p>same as if I do</p>
<pre><code>if any((d['a'] is 'crawler') for d in test):
</code></pre>
<p>the result is:</p>
<pre><code>{'a': 'crawler', 'b': 'space', 'c': 5}
</code></pre>
<p>Thanks before.</p>
 | 2 | 2016-10-19T08:40:08Z | 40,126,369 | <p><code>is</code> tests for identity, not equality: it compares object identity (effectively the memory address), not the values the variables point to, so it can return <code>False</code> even for equal values. You should use <code>==</code> instead for checking equality.</p>
<p>As for your question, you can use <code>filter</code> or list comprehensions over <code>any</code>:</p>
<pre><code>>>> [dct for dct in data if dct["a"] == "crawler"]
>>> filter(lambda dct: dct["a"] == "crawler", data)
</code></pre>
<p>The result is a list containing the matched dictionaries. You can get the <code>[0]</code>th element if you think it contains only one item.</p>
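<p>If only the first match is needed, <code>next</code> with a generator stops early and takes a default for the no-match case (using the question's data):</p>

```python
data = [{'a': 'hello', 'b': 'world', 'c': 1},
        {'a': 'crawler', 'b': 'space', 'c': 5},
        {'a': 'jhon', 'b': 'doe', 'c': 8}]

# .get() avoids a KeyError if some dict lacks the key
match = next((d for d in data if d.get('c') == 8), None)
print(match)   # {'a': 'jhon', 'b': 'doe', 'c': 8}
```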
| 2 | 2016-10-19T08:45:51Z | [
"python",
"list",
"if-statement",
"dictionary",
"condition"
] |
Any way to replace pandas pd.merge? | 40,126,274 | <p>I have two dataframes: </p>
<pre><code>>>df1.info()
>><class 'pandas.core.frame.DataFrame'>
Int64Index: 2598374 entries, 3975 to 3054366
Data columns (total 14 columns): ......
>>df2.info()
>><class 'pandas.core.frame.DataFrame'>
Int64Index: 2520405 entries, 2066 to 2519507
Data columns (total 5 columns): ......
</code></pre>
<p>I want to inner-join them. I tried <code>pd.merge</code> and got a memory error, so I tried to do the same thing without <code>pd.merge</code>.</p>
<p>Example dataframe for original method (failed: memory error)</p>
<pre><code>df1 = pd.DataFrame({'A': ['1', '2', '3', '4','5'],
'B': ['1', '1', '1', '1','1'],
'C': ['c', 'A1', 'a', 'c3','a'],
'D': ['B1', 'B1', 'B2', 'B3','B4'],
'E': ['3', '3', '3', '3','3'],
'F': ['3', '4', '5', '6','7'],
'G': ['2', '2', '2', '2','2']})
df2 = pd.DataFrame({'A': ['1', '2', '8','4'],
'B': ['1', '2', '5','1'],
'x': ['3', '3', '2','2'],
'y': ['3', '4', '6','7'],
'z': ['2', '2', '2','2']})
>> df1
A B C D E F G
0 1 1 c B1 3 3 2
1 2 1 A1 B1 3 4 2
2 3 1 a B2 3 5 2
3 4 1 c3 B3 3 6 2
4 5 1 a B4 3 7 2
df2
A B x y z
0 1 1 3 3 2
1 2 2 3 4 2
2 8 5 2 6 2
3 4 1 2 7 2
df1 = pd.merge(df1,df2,how='inner',on=['A','B'])
>> df1
A B C D E F G x y z
0 1 1 c B1 3 3 2 3 3 2
1 4 1 c3 B3 3 6 2 2 7 2
</code></pre>
<p>Example for new method<br>
(1) I tried to delete rows in df1 which are not in df2 by column['A']['B'].<br>
(2) concat x,y,z columns to df1</p>
<pre><code>df1 = pd.DataFrame({'A': ['1', '2', '3', '4','5'],
'B': ['1', '1', '1', '1','1'],
'C': ['c', 'A1', 'a', 'c3','a'],
'D': ['B1', 'B1', 'B2', 'B3','B4'],
'E': ['3', '3', '3', '3','3'],
'F': ['3', '4', '5', '6','7'],
'G': ['2', '2', '2', '2','2']})
df2 = pd.DataFrame({'A': ['1', '2', '8','4'],
'B': ['1', '2', '5','1'],
'x': ['3', '3', '2','2'],
'y': ['3', '4', '6','7'],
'z': ['2', '2', '2','2']})
>> df1
A B C D E F G
0 1 1 c B1 3 3 2
1 2 1 A1 B1 3 4 2
2 3 1 a B2 3 5 2
3 4 1 c3 B3 3 6 2
4 5 1 a B4 3 7 2
df2
A B x y z
0 1 1 3 3 2
1 2 2 3 4 2
2 8 5 2 6 2
3 4 1 2 7 2
df1 = df1.loc[((df1['A'].isin(df2.A)) & (df1['B'].isin(df2.B)) ) ]
>> df1
A B C D E F G
0 1 1 c B1 3 3 2
1 2 1 A1 B1 3 4 2
3 4 1 c3 B3 3 6 2
</code></pre>
<p>However, I got a logical error and I have no idea how to solve this problem.
Can anyone help?</p>
| 2 | 2016-10-19T08:40:43Z | 40,126,424 | <p>You can try <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow"><code>concat</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow"><code>set_index</code></a>:</p>
<pre><code>df1 = pd.concat([df1.set_index(['A','B']),
df2.set_index(['A','B'])], axis=1, join='inner')
print (df1)
C D E F G x y z
A B
1 1 c B1 3 3 2 3 3 2
4 1 c3 B3 3 6 2 2 7 2
</code></pre>
<p>Or combination with <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a>:</p>
<pre><code>df1 = df1[((df1['A'].isin(df2.A)) & (df1['B'].isin(df2.B)) ) ]
print (df1)
A B C D E F G
0 1 1 c B1 3 3 2
1 2 1 A1 B1 3 4 2
3 4 1 c3 B3 3 6 2
df2 = df2[((df2['A'].isin(df1.A)) & (df2['B'].isin(df1.B)) ) ]
print (df2)
A B x y z
0 1 1 3 3 2
3 4 1 2 7 2
df3 = pd.concat([df1.set_index(['A','B']),
df2.set_index(['A','B'])], axis=1, join='inner')
print (df3)
C D E F G x y z
A B
1 1 c B1 3 3 2 3 3 2
4 1 c3 B3 3 6 2 2 7 2
</code></pre>
<p>If <code>df1</code> after filtering is not large, use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>merge</code></a>:</p>
<pre><code>df1 = df1[((df1['A'].isin(df2.A)) & (df1['B'].isin(df2.B)) ) ]
print (df1)
A B C D E F G
0 1 1 c B1 3 3 2
1 2 1 A1 B1 3 4 2
3 4 1 c3 B3 3 6 2
df2 = df2[((df2['A'].isin(df1.A)) & (df2['B'].isin(df1.B)) ) ]
print (df2)
A B x y z
0 1 1 3 3 2
3 4 1 2 7 2
df3 = pd.merge(df1,df2, on=['A','B'])
print (df3)
A B C D E F G x y z
0 1 1 c B1 3 3 2 3 3 2
1 4 1 c3 B3 3 6 2 2 7 2
</code></pre>
| 1 | 2016-10-19T08:48:19Z | [
"python",
"pandas"
] |
Python: search string in zipped files | 40,126,279 | <p>Is there any way to search for a string in a file inside a zip file without unzipping it?</p>
<p>I have the following directory structure:</p>
<pre><code>.
└── some_zip_file.zip
    └── some_directory.SAFE
        ├── some_dir
        ├── some_another_dir
        └── manifest.safe   \\ search in this file
</code></pre>
| -1 | 2016-10-19T08:40:59Z | 40,126,624 | <p>The <a href="https://docs.python.org/2/library/zipfile.html?highlight=zipfile#module-zipfile" rel="nofollow">zipfile</a> module could help you:</p>
<ul>
<li>It allows you to <a href="https://docs.python.org/2/library/zipfile.html?highlight=zipfile#zipfile.ZipFile.open" rel="nofollow">open</a> a file from the zip to get a file-like object</li>
<li>Or you can also directly <a href="https://docs.python.org/2/library/zipfile.html?highlight=zipfile#zipfile.ZipFile.read" rel="nofollow">read</a> a file from the archive</li>
</ul>
<p>Concretely, you can read and store the content of a file from the zip this way:</p>
<pre><code>import zipfile
with zipfile.ZipFile("some_zip_file.zip", "r") as zip:
with zip.open("some_directory.SAFE/manifest.safe") as manifest:
content = manifest.read()
</code></pre>
<p>or:</p>
<pre><code>import zipfile
with zipfile.ZipFile("some_zip_file.zip", "r") as zip:
content = zip.read("some_directory.SAFE/manifest.safe")
</code></pre>
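<p>Searching for the string is then just a membership test on what you read back. A self-contained sketch (the zip is built in memory here so the example runs anywhere; note that <code>read()</code> returns bytes in Python 3):</p>

```python
import io
import zipfile

# Build a small zip in memory so the example is self-contained.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("some_directory.SAFE/manifest.safe", "hello safe manifest")

with zipfile.ZipFile(buf, "r") as zf:
    content = zf.read("some_directory.SAFE/manifest.safe")

found = b"manifest" in content
print(found)  # True
```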
| 0 | 2016-10-19T08:57:39Z | [
"python",
"zip"
] |
Get the speed of downloading a file using python | 40,126,334 | <p>I'm using the requests lib to download a file. I got a lot of info about the response, like size, type and date, but I need to get the download speed and set a maximum and minimum for it. How could I get the download speed?</p>
<p>Here's the code:</p>
<pre><code>import requests
import sys
link = "https://upload.wikimedia.org/wikipedia/commons/4/47/PNG_transparency_demonstration_1.png"
file_name = "downloaded.png"
response = requests.get(link, stream=True)
with open(file_name, "wb") as f:
print "Downloading %s" % file_name
response = requests.get(link, stream=True)
total_length = int(response.headers.get('content-length'))
print response.headers["content-type"]
print total_length / 1024, "Kb"
print int(response.headers["Age"]) * (10 ** -6), "Sec"
print response.headers["date"]
if total_length is None: # no content length header
f.write(response.content)
else:
dl = 0
for data in response.iter_content(chunk_size=4096):
dl += len(data)
f.write(data)
done = int(50 * dl / total_length)
sys.stdout.write("\r[%s%s]" % ('=' * done, ' ' * (50-done)) )
sys.stdout.flush()
</code></pre>
<p>and here's the output:</p>
<pre><code>Downloading downloaded.png
image/png
213 Kb
0.054918 Sec
Wed, 19 Oct 2016 08:43:47 GMT
[==================================================]
</code></pre>
| 1 | 2016-10-19T08:44:12Z | 40,126,576 | <p>I just added <code>import time</code>, the <code>start</code> variable and replaced the <code>sys.stdout.write</code> line with the one from: <a href="http://stackoverflow.com/questions/20801034/how-to-measure-download-speed-and-progress-using-requests">How to measure download speed and progress using requests?</a></p>
<pre><code>import requests
import sys
import time
link = "https://upload.wikimedia.org/wikipedia/commons/4/47/PNG_transparency_demonstration_1.png"
file_name = "downloaded.png"
start = time.clock()
response = requests.get(link, stream=True)
with open(file_name, "wb") as f:
print "Downloading %s" % file_name
response = requests.get(link, stream=True)
total_length = int(response.headers.get('content-length'))
print response.headers["content-type"]
print total_length / 1024, "Kb"
print int(response.headers["Age"]) * (10 ** -6), "Sec"
print response.headers["date"]
if total_length is None: # no content length header
f.write(response.content)
else:
dl = 0
for data in response.iter_content(chunk_size=4096):
dl += len(data)
f.write(data)
done = int(50 * dl / total_length)
sys.stdout.write("\r[%s%s] %s bps" % ('=' * done, ' ' * (50-done), dl//(time.clock() - start)))
sys.stdout.flush()
</code></pre>
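<p>The core of the speed calculation is just bytes transferred divided by elapsed wall-clock time. A small standalone sketch (note that on some platforms <code>time.clock()</code> measures CPU time, so <code>time.time()</code> is the safer choice for wall-clock intervals):</p>

```python
import time

start = time.time()
downloaded = 0
for chunk in (4096, 4096, 4096):   # stand-ins for real chunk sizes
    downloaded += chunk
    time.sleep(0.01)               # stand-in for network latency
elapsed = time.time() - start
speed = downloaded / elapsed       # bytes per second
print(downloaded)                  # 12288
```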
| 1 | 2016-10-19T08:55:27Z | [
"python",
"network-programming",
"urllib"
] |
Get SQLAlchemy to encode correctly strings with cx_Oracle | 40,126,358 | <p>My problem is that SQLAlchemy seems to be writing the text not properly encoded in my Oracle database.</p>
<p>I include fragments of the code below:</p>
<pre><code>engine = create_engine("oracle://%s:%s@%s:%s/%s?charset=utf8"%(db_username, db_password, db_hostname,db_port, db_database), encoding='utf8')
connection = engine.connect()
session = Session(bind = connection)
class MyClass(DeclarativeBase):
"""
Model of the be persisted
"""
__tablename__ = "enconding_test"
id = Column(Integer, Sequence('encoding_test_id_seq'),primary_key = True)
blabla = Column(String(255, collation='utf-8'), default = '')
autoload = True
content = unicode("äüößqwerty","utf_8")
t = MyClass(blabla=content.encode("utf_8"))
session.add(t)
session.commit()
</code></pre>
<p>If now I read the contents of the database, I get printed something like:</p>
<blockquote>
<p>????????qwerty</p>
</blockquote>
<p>instead of the original:</p>
<blockquote>
<p>äüößqwerty</p>
</blockquote>
<p>So basically my question is what do I have to do, to properly store these German characters in the database?</p>
<p>Thanks in advance!</p>
| -1 | 2016-10-19T08:45:16Z | 40,134,794 | <p>I found a related topic, that actually answers my question:</p>
<p><a href="http://stackoverflow.com/questions/39780090/python-2-7-connection-to-oracle-loosing-polish-characters">Python 2.7 connection to oracle loosing polish characters</a></p>
<p>You simply add the following line, before creating the database connection:</p>
<pre><code>os.environ["NLS_LANG"] = "GERMAN_GERMANY.UTF8"
</code></pre>
<p>Additional documentation about which strings you need for different languages is found on the Oracle website:</p>
<p><a href="https://docs.oracle.com/cd/E23943_01/bi.1111/b32121/pbr_nls005.htm#RSPUB23733" rel="nofollow">Oracle documentation on Unicode Support</a></p>
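<p>Putting it together with the connection setup from the question (a sketch; the important part is that the variable is set <em>before</em> the engine/connection is created):</p>

```python
import os

# Must be in the environment before the Oracle client initializes,
# i.e. before create_engine()/connect() is called.
os.environ["NLS_LANG"] = "GERMAN_GERMANY.UTF8"

# from sqlalchemy import create_engine
# engine = create_engine("oracle://user:password@host:port/db", encoding="utf8")
```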
| 0 | 2016-10-19T14:48:49Z | [
"python"
] |
Inplace functions in Python | 40,126,403 | <p>In Python there is a concept of inplace functions. For example, shuffle is inplace in that it returns None.</p>
<p>How do I determine if a function will be inplace or not?</p>
<pre><code>from random import shuffle
print(type(shuffle))
<class 'method'>
</code></pre>
<p>So I know it's a <code>method</code> from class <code>random</code> but is there a special variable that defines some functions as inplace?</p>
| 3 | 2016-10-19T08:47:17Z | 40,126,452 | <p>You can't have a priori knowledge about whether a given function operates in place. You need to either look at the source and deduce this information, or examine the docstring and hope the developer documented this behavior.</p>
<p>For example, in <code>list.sort</code>:</p>
<pre><code>help(list.sort)
Help on method_descriptor:
sort(...)
L.sort(key=None, reverse=False) -> None -- stable sort *IN PLACE*
</code></pre>
<p>For functions operating on certain types, their mutability generally lets you extract some knowledge about the operation. You can be certain, for example, that all functions operating on strings will eventually return a new one, meaning, they can't perform in-place operations.</p>
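<p>A related convention you can lean on: in-place operations in the standard library normally return <code>None</code>, while their copying counterparts return the new object. For example:</p>

```python
lst = [3, 1, 2]

result = lst.sort()      # sorts the list in place...
print(result)            # None  (the in-place convention)
print(lst)               # [1, 2, 3]

new = sorted([3, 1, 2])  # ...while sorted() returns a new list
print(new)               # [1, 2, 3]
```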
| 3 | 2016-10-19T08:49:35Z | [
"python",
"function",
"python-3.x"
] |
Inplace functions in Python | 40,126,403 | <p>In Python there is a concept of inplace functions. For example, shuffle is inplace in that it returns None.</p>
<p>How do I determine if a function will be inplace or not?</p>
<pre><code>from random import shuffle
print(type(shuffle))
<class 'method'>
</code></pre>
<p>So I know it's a <code>method</code> from class <code>random</code> but is there a special variable that defines some functions as inplace?</p>
| 3 | 2016-10-19T08:47:17Z | 40,126,548 | <p>I don't think there is a special variable that defines a function as in-place, but a standard function should have a docstring that says it is in-place and does not return any value. For example:</p>
<pre><code>>>> print(shuffle.__doc__)
Shuffle list x in place, and return None.

Optional argument random is a 0-argument function returning a
random float in [0.0, 1.0); if it is the default None, the
standard random.random will be used.
</code></pre>
| 2 | 2016-10-19T08:54:13Z | [
"python",
"function",
"python-3.x"
] |
Split and shift RGB channels in Python | 40,126,407 | <p>What I'm trying to do is recreate what is commonly called an "RGB shift" effect, which is very easy to achieve with image manipulation programs. </p>
<p>I imagine I can "split" the channels of the image by either opening the image as a matrix of triples or opening the image three times and every time operate just on one channel, but I wouldn't know how to "offset" the channels when merging them back together (possibly by creating a new image and position each channel's [0,0] pixel in an offsetted position?) and reduce each channel's opacity as to not show just the last channel inserted into the image. </p>
<p>Has anyone tried to do this? Do you know if it is possible? If so, how did you do it?</p>
<p>Thanks everyone in advance! </p>
| -1 | 2016-10-19T08:47:26Z | 40,127,791 | <p>Per color plane, replace the pixel at <code>(X, Y)</code> by the pixel at <code>(X-1, Y+3)</code>, for example. (Of course your shifts will be different.)</p>
<p>You can do that in-place, taking care to loop by increasing or decreasing coordinate to avoid overwriting.</p>
<p>There is no need to worry about transparency.</p>
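<p>A minimal sketch of that per-plane shift using NumPy (assuming the image is already loaded as an H x W x 3 array, e.g. via PIL or OpenCV):</p>

```python
import numpy as np

# Toy 4x4 RGB image; the red plane is a gradient so the shift is visible.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[..., 0] = np.arange(16).reshape(4, 4)

shifted = img.copy()
# Offset the red channel by one pixel along x; np.roll wraps around the
# border, so no pixel is overwritten before it is read.
shifted[..., 0] = np.roll(img[..., 0], shift=1, axis=1)

print(img[0, :, 0])      # [0 1 2 3]
print(shifted[0, :, 0])  # [3 0 1 2]
```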
| 2 | 2016-10-19T09:46:49Z | [
"python",
"image",
"image-processing",
"rgb"
] |
Tkinter how to update second combobox automatically according to this combobox | 40,126,449 | <p>I have encountered an issue with combobox updates in Tkinter Python.</p>
<p>I have two comboboxes:</p>
<ul>
<li>combobox <code>A</code> with <code>values =['A','B','C']</code> and</li>
<li>combobox <code>B</code></li>
</ul>
<p>What I want is that:</p>
<ul>
<li><p>when value <code>A</code> is selected in combobox <code>A</code> then in combobox <code>B</code> show the values <code>['1','2','3']</code></p></li>
<li><p>when value <code>B</code> is selected in combobox <code>A</code> then in combobox <code>B</code> show the values <code>['11','12','13']</code></p></li>
<li><p>when value <code>C</code> is selected in combobox <code>A</code> then in combobox <code>B</code> show the value s <code>['111','112','113']</code></p></li>
</ul>
<p>Currently part of my code as follows:</p>
<pre><code>def CallHotel(*args):
    global ListB
    if hotel.get()==ListA[0]:
        ListB=ListB1
    if hotel.get()==ListA[1]:
        ListB=ListB2
    if hotel.get()==ListA[2]:
        ListB=ListB3
ListA=['A','B','C']
ListB1=['1','2','3']
ListB2=['11','12','13']
ListB3=['111','112','113']
ListB=ListB1
hotel = StringVar()
hotel.set('SBT')
comboboxA=ttk.Combobox(win0,textvariable=hotel,values=ListA,width=8)
comboboxA.bind("<<ComboboxSelected>>",CallHotel)
comboboxA.pack(side='left')
stp = StringVar()
stp.set('STP')
comboboxB=ttk.Combobox(win0,textvariable=stp,values=ListB,width=15)
comboboxB.pack(side='left')
</code></pre>
| 0 | 2016-10-19T08:49:31Z | 40,128,032 | <p>Actually you don't need the global variable <code>ListB</code>. And you need to add <code>comboboxB.config(values=...)</code> at the end of <code>CallHotel()</code> to set the options of <code>comboboxB</code>:</p>
<pre><code>def CallHotel(*args):
sel = hotel.get()
if sel == ListA[0]:
ListB = ListB1
elif sel == ListA[1]:
ListB = ListB2
elif sel == ListA[2]:
ListB = ListB3
comboboxB.config(values=ListB)
</code></pre>
<p>And change the initial values of <code>comboboxB</code> to <code>ListB1</code> directly:</p>
<pre><code>comboboxB=ttk.Combobox(win0,textvariable=stp,values=ListB1,width=15)
</code></pre>
| 2 | 2016-10-19T09:58:06Z | [
"python",
"tkinter",
"combobox"
] |
How to generate effectively a random number that only contains unique digits in Python? | 40,126,683 | <pre><code>import random
def get_number(size):
result = [random.randint(1,9)]
digits = list(range(0,10))
digits.remove(result[0])
if(size > 1):
result += random.sample(digits,size-1)
return ''.join(map(str,result))
print(get_number(4))
</code></pre>
<p>I solved the problem, but I feel that it's clumsy.
How can I do this more effectively and more elegantly?</p>
| 3 | 2016-10-19T08:59:45Z | 40,127,616 | <p>Shuffle is the way to go as suggested by @jonrsharpe:</p>
<pre><code>import random
def get_number(size):
    l = [str(i) for i in range(10)]
    while l[0] == '0':
        random.shuffle(l)
    return int("".join(l[:size]))
</code></pre>
<p>Limits:</p>
<ul>
<li>if you ask for a number of more than 10 digits, you will only get 10 digits</li>
<li>it can take some steps if the first digit is initially a 0</li>
</ul>
| 1 | 2016-10-19T09:38:31Z | [
"python",
"list",
"random"
] |
How to generate effectively a random number that only contains unique digits in Python? | 40,126,683 | <pre><code>import random
def get_number(size):
result = [random.randint(1,9)]
digits = list(range(0,10))
digits.remove(result[0])
if(size > 1):
result += random.sample(digits,size-1)
return ''.join(map(str,result))
print(get_number(4))
</code></pre>
<p>I solved the problem, but I feel that it's clumsy.
How can I do this more effectively and more elegantly?</p>
| 3 | 2016-10-19T08:59:45Z | 40,128,849 | <p>Just use shuffle:</p>
<pre><code>import random
import string

first = str(random.choice(range(1, 10)))         # non-zero leading digit
rest = [d for d in string.digits if d != first]  # keep the remaining digits unique
random.shuffle(rest)
print int(first + "".join(rest[:3]))
</code></pre>
| 0 | 2016-10-19T10:32:09Z | [
"python",
"list",
"random"
] |
Convert full month and day name to short ones | 40,126,811 | <p>I have a series of dates of this format: <code>'2015 08 01'</code> or <code>'2015 12 11'</code> and I want to convert them to this format: <code>'2015 8 1'</code> or <code>'2015 12 11'</code>. Basically, if the month and day sub-strings have a 0 at the start, that should be eliminated.</p>
<p>For example: The <code>'2015 08 01'</code> date string of the format (<code>"%Y %m %d"</code>), has the month number as <strong>08</strong> and day number as <strong>01</strong>. I want the resulting date to be: <code>'2015 8 1'</code>, where the month is converted from <strong>08</strong> to <strong>8</strong> and the day from <strong>01</strong> to <strong>1</strong>.</p>
<p>I tried:</p>
<pre><code>from datetime import datetime
str_date = '2015 08 01'
data_conv = datetime.strptime(str_date, "%Y%m%e")
</code></pre>
<p>but I get this error:</p>
<pre><code>ValueError: 'e' is a bad directive in format '%Y%m%e'
</code></pre>
| 0 | 2016-10-19T09:04:32Z | 40,126,845 | <p>It should be <code>d</code>, and the format string needs the spaces:</p>
<pre><code>>>> data_conv = datetime.strptime(str_date, "%Y %m %d")
>>> data_conv
datetime.datetime(2015, 8, 1, 0, 0)
</code></pre>
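<p>From the parsed <code>datetime</code> you can then build the zero-free string yourself, since the integer attributes carry no padding (a small sketch):</p>

```python
from datetime import datetime

data_conv = datetime.strptime('2015 08 01', '%Y %m %d')
out = '{0.year} {0.month} {0.day}'.format(data_conv)
print(out)  # 2015 8 1
```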
| 1 | 2016-10-19T09:06:00Z | [
"python",
"datetime"
] |
Convert full month and day name to short ones | 40,126,811 | <p>I have a series of dates of this format: <code>'2015 08 01'</code> or <code>'2015 12 11'</code> and I want to convert them to this format: <code>'2015 8 1'</code> or <code>'2015 12 11'</code>. Basically, if the month and day sub-strings have a 0 at the start, that should be eliminated.</p>
<p>For example: The <code>'2015 08 01'</code> date string of the format (<code>"%Y %m %d"</code>), has the month number as <strong>08</strong> and day number as <strong>01</strong>. I want the resulting date to be: <code>'2015 8 1'</code>, where the month is converted from <strong>08</strong> to <strong>8</strong> and the day from <strong>01</strong> to <strong>1</strong>.</p>
<p>I tried:</p>
<pre><code>from datetime import datetime
str_date = '2015 08 01'
data_conv = datetime.strptime(str_date, "%Y%m%e")
</code></pre>
<p>but I get this error:</p>
<pre><code>ValueError: 'e' is a bad directive in format '%Y%m%e'
</code></pre>
| 0 | 2016-10-19T09:04:32Z | 40,127,119 | <p>Just remove the leading zeroes:</p>
<pre><code>' '.join([x.lstrip('0') for x in str_date.split()])
</code></pre>
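<p>A quick check of the idea on both sample dates from the question:</p>

```python
str_date = '2015 08 01'
result = ' '.join(x.lstrip('0') for x in str_date.split())
print(result)  # 2015 8 1

# Parts without a leading zero are left untouched:
print(' '.join(x.lstrip('0') for x in '2015 12 11'.split()))  # 2015 12 11
```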
| 2 | 2016-10-19T09:17:42Z | [
"python",
"datetime"
] |
Fastest way to build a Matrix with a custom architecture | 40,126,853 | <p>What's the fastest way in numpy or pandas to build a matrix that has this form:</p>
<pre><code>1 1 1 1 1
1 2 2 2 1
1 2 3 2 1
1 2 2 2 1
1 1 1 1 1
</code></pre>
<p>That preserves both odd and even architectures?</p>
| 2 | 2016-10-19T09:06:18Z | 40,127,020 | <p>Using <a href="https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>NumPy broadcasting</code></a>!</p>
<pre><code>In [289]: a = np.array([1,2,3,2,1])
In [290]: np.minimum(a[:,None],a)
Out[290]:
array([[1, 1, 1, 1, 1],
[1, 2, 2, 2, 1],
[1, 2, 3, 2, 1],
[1, 2, 2, 2, 1],
[1, 1, 1, 1, 1]])
</code></pre>
<p>To build the range array, we can do something like this -</p>
<pre><code>In [303]: N = 3
In [304]: np.concatenate((np.arange(1,N+1),np.arange(N-1,0,-1)))
Out[304]: array([1, 2, 3, 2, 1])
</code></pre>
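<p>Both steps can be wrapped into one small helper (the function name <code>concentric</code> is just illustrative):</p>

```python
import numpy as np

def concentric(n):
    # 1..n..1 ramp, then broadcast the pairwise minimum into a 2-D grid.
    a = np.concatenate((np.arange(1, n + 1), np.arange(n - 1, 0, -1)))
    return np.minimum(a[:, None], a)

print(concentric(3))
```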
<p><strong>Adding some bias</strong></p>
<p>Let's say we want to move the highest number/peak up or down. We need to create another <em>biasing</em> array and use the same strategy of <code>broadcasting</code>, like so -</p>
<pre><code>In [394]: a = np.array([1,2,3,2,1])
In [395]: b = np.array([2,3,2,1,0]) # Biasing array
In [396]: np.minimum(b[:,None],a)
Out[396]:
array([[1, 2, 2, 2, 1],
[1, 2, 3, 2, 1],
[1, 2, 2, 2, 1],
[1, 1, 1, 1, 1],
[0, 0, 0, 0, 0]])
</code></pre>
<p>Similarly, to have the bias shifted left or right, modify <code>a</code>, like so -</p>
<pre><code>In [397]: a = np.array([2,3,2,1,0]) # Biasing array
In [398]: b = np.array([1,2,3,2,1])
In [399]: np.minimum(b[:,None],a)
Out[399]:
array([[1, 1, 1, 1, 0],
[2, 2, 2, 1, 0],
[2, 3, 2, 1, 0],
[2, 2, 2, 1, 0],
[1, 1, 1, 1, 0]])
</code></pre>
| 5 | 2016-10-19T09:12:56Z | [
"python",
"pandas",
"numpy"
] |
Dump database table or work remotely for analysis? | 40,126,979 | <p>I have a table of <strong>80 million</strong> rows and I was given the task to do some light analysis like finding patterns for fields, which fields are mutually exclusive etc.</p>
<p>My initial instinct was to dump the whole table into a CSV so I can work with Pandas or similar since I assumed it would be faster and easier to work with. While figuring out ways on how to get the whole table into a CSV, a colleague insisted that it is overkill and the conventional approach is to work directly with the Oracle database.</p>
<p>From my software background, my understanding has been that databases are more for keeping the state of big applications and less for a human to fiddle with. What is the common approach for analysis when having such big tables? What is faster? Personally I don't mind the time it takes to dump the database but more about the time it takes to get back feedback when doing the actual analysis.</p>
| 0 | 2016-10-19T09:11:30Z | 40,128,138 | <p>Directly on the database with SQL is perfectly fine for any analysis <em>when you already know what you're looking for</em>.</p>
<p>When you don't know what you're looking for, and you want to do e.g. pattern recognition, the effort to dump and process in another tool is probably worth it.</p>
<p>Also consider the possibility to connect Pandas directly to your Oracle database (which allows you to skip dumping data), <a href="http://dominicgiles.com/blog/files/bbffdb638932620b3182980fbd0e3d5b-146.html" rel="nofollow">see here for an example</a>. </p>
| 0 | 2016-10-19T10:02:21Z | [
"python",
"oracle",
"pandas",
"data-analysis"
] |
Remove illegal edges from the permuation set (python) | 40,126,993 | <p>I am trying to apply brute-force method to find the shortest path between an origin and a destination node (OD pair). I create the network using networkX and call the permutations followed by the application of brute force. <br/>If all the nodes in the network are connected with all others, this is ok. But if some or many edges are not there, this method is not gonna work out.
To make it right, I should delete all the permutations that contain the illegal edges. </p>
<p>For example if two permutation tuples are </p>
<blockquote>
<p>[(1,2,3,4,5), (1,2,4,3,5)]</p>
</blockquote>
<p>and in my network no edge exist between node 2 and 3, the first tuple in the mentioned list should be deleted.</p>
<p><strong>First question</strong>: Is it an efficient way to first create permutations and then go in there and delete those containing illegal edges? if not what should I do?</p>
<p><strong>Second question</strong>: If yes, my strategy is that I am first creating a list of tuples containing all illegal edges from networkx "G.has_edge(u,v)" command and then going into the permutations and looking if such an edge exist, delete that permutation and so on. Is it a good strategy? if no, what else do you suggest.</p>
<p>Thank you :)</p>
| 0 | 2016-10-19T09:11:53Z | 40,132,174 | <p>Exact solution for general TSP is recognized as non-polynomial. Enumerating all permutations, being the most straightforward approach, is valid despite its <code>O(n!)</code> complexity. Refer to the <a href="https://en.wikipedia.org/wiki/Travelling_salesman_problem" rel="nofollow">Wikipedia page on TSP</a> for more information.</p>
<p>As for your specific problem, generating valid permutation is possible using <a href="https://en.wikipedia.org/wiki/Depth-first_search" rel="nofollow">depth-first search</a> over the graph.</p>
<p>Python-like pseudo code showing this algorithm is as follows:</p>
<pre><code>def dfs(vertex, visited):
    if vertex == target_vertex:
        visited.append(target_vertex)
        if visited != graph.vertices:
            return
        do_it(visited)  # visited is a valid permutation
    for i in graph.vertices:
        if i not in visited and graph.has_edge(vertex, i):
            dfs(i, visited + [i])
dfs(start_vertex, [start_vertex])
</code></pre>
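<p>A concrete, runnable variant of the same idea on a small adjacency mapping (all names here are illustrative):</p>

```python
def hamiltonian_paths(adj, start, target):
    """All start->target paths that visit every node exactly once."""
    n = len(adj)
    paths = []

    def dfs(vertex, visited):
        if vertex == target:
            if len(visited) == n:
                paths.append(list(visited))
            return
        for nxt in adj[vertex]:
            if nxt not in visited:
                visited.append(nxt)
                dfs(nxt, visited)
                visited.pop()

    dfs(start, [start])
    return paths

# 4-cycle 0-1-3-2-0: the missing diagonal edges rule out most permutations.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(hamiltonian_paths(adj, 0, 2))  # [[0, 1, 3, 2]]
```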
| 0 | 2016-10-19T13:02:06Z | [
"python",
"shortest-path",
"brute-force"
] |
NetworkX: how to properly create a dictionary of edge lengths? | 40,127,350 | <p>Say I have a regular grid network made of <code>10x10</code> nodes which I create like this:</p>
<pre><code>import networkx as nx
from pylab import *
import matplotlib.pyplot as plt
%pylab inline
ncols=10
N=10 #Nodes per side
G=nx.grid_2d_graph(N,N)
labels = dict( ((i,j), i + (N-1-j) * N ) for i, j in G.nodes() )
nx.relabel_nodes(G,labels,False)
inds=labels.keys()
vals=labels.values()
inds=[(N-j-1,N-i-1) for i,j in inds]
posk=dict(zip(vals,inds))
nx.draw_networkx(G, pos=posk, with_labels=True, node_size = 150, node_color='blue',font_size=10)
plt.axis('off')
plt.title('Grid')
plt.show()
</code></pre>
<p>Now say I want to create a dictionary which stores, for each edge, its length. This is the intended outcome:</p>
<pre><code>d={(0,1): 3.4, (0,2): 1.7, ...}
</code></pre>
<p>And this is how I try to get to that point:</p>
<pre><code>from math import sqrt
lengths={G.edges(): math.sqrt((x-a)**2 + (y-b)**2) for (x,y),(a,b) in G.edges()}
</code></pre>
<p>But there clearly is something wrong as I get the following error message:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-7-c73c212f0d7f> in <module>()
2 from math import sqrt
3
----> 4 lengths={G.edges(): math.sqrt((x-a)**2 + (y-b)**2) for (x,y),(a,b) in G.edges()}
5
6
<ipython-input-7-c73c212f0d7f> in <dictcomp>(***failed resolving arguments***)
2 from math import sqrt
3
----> 4 lengths={G.edges(): math.sqrt((x-a)**2 + (y-b)**2) for (x,y),(a,b) in G.edges()}
5
6
TypeError: 'int' object is not iterable
</code></pre>
<p><strong>What am I missing?</strong></p>
| 0 | 2016-10-19T09:26:42Z | 40,127,920 | <p>There is a lot going wrong in the last line, first and foremost that G.edges() is an iterator and not a valid dictionary key, and secondly, that G.edges() really just yields the edges, not the positions of the nodes.</p>
<p>This is what you want instead: </p>
<pre><code>lengths = dict()
for source, target in G.edges():
x1, y1 = posk[source]
x2, y2 = posk[target]
lengths[(source, target)] = math.sqrt((x2-x1)**2 + (y2-y1)**2)
</code></pre>
| 1 | 2016-10-19T09:53:04Z | [
"python",
"dictionary",
"networkx"
] |
Machine Learning Prediction - why and when beginning with PCA? | 40,127,381 | <p>From a very general point of view, when you have a dataset X and want to predict a label Y, what is the purpose of beginning with a PCA (principal component analysis) first, and then doing the prediction itself (with logistic regression , or random forest or whatever) from both intuitive and theoretical reason ? In which case can this improve the quality of prediction ?
Thanks !</p>
| -2 | 2016-10-19T09:28:07Z | 40,128,239 | <p>I assume you mean PCA-based dimensionality reduction. Low-variance data often, but not always, has little predictive power, so removing low-variance dimensions of your dataset can be an effective way of improving predictor running time. In cases where it raises the signal to noise ratio, it can even improve prediction quality. But this is just a heuristic and is not universally applicable.</p>
| 0 | 2016-10-19T10:06:31Z | [
"python",
"machine-learning",
"pca",
"prediction"
] |
Deleting blank rows and rows with text in them at the same time | 40,127,532 | <p>New to programming here. After going through Google and the forums (as well as much trial and error), I couldn't find a neat solution to my problem.</p>
<p>I'd like to delete a few initial rows (first 10 rows) in my spreadsheet/CSV file and while I did find the solution to that using:</p>
<pre><code> all(next(read) for i in range(10))
</code></pre>
<p>It does not appear to be able to delete the blank rows in the CSV file that I have. The first 10 rows I'd like to delete include those blank rows. I think the above line only deletes the rows if there are strings in them.</p>
<p>My full code is this so far [EDIT1]: Maybe this could work?</p>
<pre><code> import csv
with open('filename.csv') as csvfile:
non_blank_row=[]
filecsv = csv.reader(csvfile, delimiter=",")
for row in filecsv:
non_blank_row.append(row)
better_blank_row = non_blank_row[[i for i in range(len(non_blank_row)) if non_blank_row!=[]
</code></pre>
<p>[EDIT2]: When I tried to print(better_blank_row), I did:</p>
<pre><code> for i in better_blank_row:
print(better_blank_row)
</code></pre>
<p>However, I'm not sure why no output comes out; the shell just freezes.
Any help would be much appreciated!</p>
| 0 | 2016-10-19T09:34:26Z | 40,127,855 | <p>Your for loop is just iterating over <em>filecsv</em>. It's not doing anything.
You want something like</p>
<pre><code>non_blank_rows = []
for row in filecsv:
    if row:  # empty strings are false-y
        non_blank_rows.append(row)
</code></pre>
<p>Now at this point you could just set filecsv to be non_blank_rows.
Note that the more pythonic way to do all of this would probably be to use a list comprehension; if you haven't come across list comprehensions before, they're worth looking up:</p>
<p><code>filecsv = [row for row in filecsv if row]</code> </p>
| 0 | 2016-10-19T09:50:04Z | [
"python",
"csv",
"rows"
] |
Deleting blank rows and rows with text in them at the same time | 40,127,532 | <p>New to programming here. After going through Google and the forums (as well as much trial and error), I couldn't find a neat solution to my problem.</p>
<p>I'd like to delete a few initial rows (first 10 rows) in my spreadsheet/CSV file and while I did find the solution to that using:</p>
<pre><code> all(next(read) for i in range(10))
</code></pre>
<p>It does not appear to be able to delete the blank rows in the CSV file that I have. The first 10 rows I'd like to delete include those blank rows. I think the above line only deletes the rows if there are strings in them.</p>
<p>My full code is this so far [EDIT1]: Maybe this could work?</p>
<pre><code> import csv
with open('filename.csv') as csvfile:
non_blank_row=[]
filecsv = csv.reader(csvfile, delimiter=",")
for row in filecsv:
non_blank_row.append(row)
better_blank_row = non_blank_row[[i for i in range(len(non_blank_row)) if non_blank_row!=[]
</code></pre>
<p>[EDIT2]: When I tried to print(better_blank_row), I did:</p>
<pre><code> for i in better_blank_row:
print(better_blank_row)
</code></pre>
<p>However, I'm not sure why no output comes out; the shell just freezes.
Any help would be much appreciated!</p>
| 0 | 2016-10-19T09:34:26Z | 40,128,698 | <p>You can do the following to write a new CSV file after deleting particular rows:</p>
<pre><code>import csv
#read csv file
csvfile = open('/home/17082016_ExpiryReport.csv', 'rb')
filecsv = csv.reader(csvfile, delimiter=",")
#the following line skips the first 10 rows, blank lines included
[next(filecsv) for i in range(10)]
#write csv file
with open('/home/17082016_ExpiryReport_output.csv', 'wb') as csvfile:
csvwriter = csv.writer(csvfile, delimiter=',')
[csvwriter.writerow(i) for i in filecsv ]
</code></pre>
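<p>A hedged alternative sketch (the data below is made up, and an in-memory file stands in for <code>filename.csv</code>): read everything, slice off the first 10 physical rows whether blank or not, then drop any blank rows that remain.</p>

```python
import csv
import io

# Hypothetical stand-in for open('filename.csv'): ten junk rows
# (five text lines interleaved with five blank lines), then records.
raw = "h1\n\nh2\n\nh3\n\nh4\n\nh5\n\n1,2\n\n3,4\n"

reader = csv.reader(io.StringIO(raw))
rows = list(reader)[10:]             # drop the first 10 physical rows, blank or not
rows = [row for row in rows if row]  # drop any blank rows that remain
# rows -> [['1', '2'], ['3', '4']]
```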
| 0 | 2016-10-19T10:26:13Z | [
"python",
"csv",
"rows"
] |
How to get latest unique entries from sqlite db with the counter of entries via Django ORM | 40,127,698 | <p>I have a SQLite db which looks like this:</p>
<pre><code>|ID|DateTime|Lang|Details|
|1 |16 Oct | GB | GB1 |
|2 |15 Oct | GB | GB2 |
|3 |17 Oct | ES | ES1 |
|4 |13 Oct | ES | ES2 |
|5 |15 Oct | ES | ES3 |
|6 |10 Oct | CH | CH1 |
</code></pre>
<p>I need a Django query to select this:</p>
<pre><code>|1 |16 Oct | GB | GB1 | 2 |
|3 |17 Oct | ES | ES1 | 3 |
|6 |10 Oct | CH | CH1 | 1 |
</code></pre>
<p>So these are the latest (by DateTime) entries, unique by Lang, with the number of occurrences per Lang. Is it possible to do this with a single SQL or Django-ORM query?</p>
| -1 | 2016-10-19T09:42:32Z | 40,128,426 | <p>You can use Django <code>annotate()</code> and <code>values()</code> together: <a href="https://docs.djangoproject.com/el/1.10/topics/db/aggregation/#values" rel="nofollow">link</a>.</p>
<blockquote>
<p>when a values() clause is used to constrain the columns that are returned in the result set, the method for evaluating annotations is slightly different. Instead of returning an annotated result for each result in the original QuerySet, the original results are grouped according to the unique combinations of the fields specified in the values() clause. An annotation is then provided for each unique group; the annotation is computed over all members of the group.</p>
</blockquote>
<p>Your ORM query should looks like this:</p>
<pre><code>from django.db.models import Count, Max

queryset = Model.objects.values("Lang").annotate(
max_datetime=Max("DateTime"),
count=Count("ID")
).values(
"ID", "max_datetime", "Lang", "Details", "count"
)
</code></pre>
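<p>As a cross-check, the raw SQL this kind of query is aiming for can be run directly against SQLite (a sketch of mine, with made-up ISO-formatted dates so that <code>MAX()</code> sorts them correctly):</p>

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE t (ID INTEGER, DateTime TEXT, Lang TEXT, Details TEXT);
INSERT INTO t VALUES
 (1,'2016-10-16','GB','GB1'), (2,'2016-10-15','GB','GB2'),
 (3,'2016-10-17','ES','ES1'), (4,'2016-10-13','ES','ES2'),
 (5,'2016-10-15','ES','ES3'), (6,'2016-10-10','CH','CH1');
""")
rows = conn.execute("""
SELECT t.ID, t.DateTime, t.Lang, t.Details, g.cnt
FROM t
JOIN (SELECT Lang, MAX(DateTime) AS latest, COUNT(*) AS cnt
      FROM t GROUP BY Lang) g
  ON t.Lang = g.Lang AND t.DateTime = g.latest
ORDER BY t.ID
""").fetchall()
# rows -> [(1, '2016-10-16', 'GB', 'GB1', 2),
#          (3, '2016-10-17', 'ES', 'ES1', 3),
#          (6, '2016-10-10', 'CH', 'CH1', 1)]
```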
| 0 | 2016-10-19T10:14:17Z | [
"python",
"django",
"sqlite",
"django-orm"
] |
How to get latest unique entries from sqlite db with the counter of entries via Django ORM | 40,127,698 | <p>I have a SQLite db which looks like this:</p>
<pre><code>|ID|DateTime|Lang|Details|
|1 |16 Oct | GB | GB1 |
|2 |15 Oct | GB | GB2 |
|3 |17 Oct | ES | ES1 |
|4 |13 Oct | ES | ES2 |
|5 |15 Oct | ES | ES3 |
|6 |10 Oct | CH | CH1 |
</code></pre>
<p>I need a Django query to select this:</p>
<pre><code>|1 |16 Oct | GB | GB1 | 2 |
|3 |17 Oct | ES | ES1 | 3 |
|6 |10 Oct | CH | CH1 | 1 |
</code></pre>
<p>So these are the latest (by DateTime) entries, unique by Lang, with the number of occurrences per Lang. Is it possible to do this with a single SQL or Django-ORM query?</p>
| -1 | 2016-10-19T09:42:32Z | 40,129,564 | <p>As you want distinct entries by "Lang" and the latest entries by "DateTime", the query below will help you:</p>
<pre><code>queryset = Model.objects.distinct("Lang").order_by("-DateTime")
</code></pre>
| 0 | 2016-10-19T11:01:15Z | [
"python",
"django",
"sqlite",
"django-orm"
] |
python , launch an external program with ulimit | 40,127,732 | <p>I need to launch an external program from my python script.
This program crashes, so I need to get a core dump from it.</p>
<p>What can I do?</p>
| 0 | 2016-10-19T09:44:27Z | 40,127,861 | <p>Check out the python <a href="https://docs.python.org/2.7/library/resource.html?highlight=resource#module-resource" rel="nofollow">resource</a> module. It will let you set the size of core files, etc., just like the ulimit command. Specifically, you want to do something like</p>
<pre><code>resource.setrlimit(resource.RLIMIT_CORE, <size>)
</code></pre>
<p>before launching your target program.</p>
<p>My guess at usage (I haven't done this myself) is:</p>
<pre><code>import resource
import subprocess
resource.setrlimit(resource.RLIMIT_CORE,
(resource.RLIM_INFINITY,
resource.RLIM_INFINITY))
command = 'command line to be launched'
subprocess.call(command)
# os.system(command) would work, but os.system has been deprecated
# in favor of the subprocess module
</code></pre>
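<p>One refinement that is my own suggestion rather than part of the answer above: doing the <code>setrlimit</code> call in a <code>preexec_fn</code> confines the change to the child process, and raising the soft limit only up to the current hard limit avoids a <code>ValueError</code> where the hard limit is capped. Here <code>true</code> stands in for the real crashing program (Unix only):</p>

```python
import resource
import subprocess

def allow_core_dumps():
    # Runs in the child between fork() and exec(); the parent's own
    # limits stay untouched. Raise the soft limit to the hard limit.
    soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
    resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))

# 'true' is a placeholder for the real crashing program
ret = subprocess.call(['true'], preexec_fn=allow_core_dumps)
```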
<hr>
| 0 | 2016-10-19T09:50:09Z | [
"python",
"ulimit"
] |
Dictionary keys cannot be encoded as utf-8 | 40,127,739 | <p>I am using the twitter streaming api (tweepy) to capture several tweets. I do this in python2.7. </p>
<p>After I have collected a corpus of tweets I break each tweet into words and add each word to a dictionary as keys, where the values are the participation of each word in <code>positive</code> or <code>negative</code> sentences.</p>
<p>When I retrieve the words as keys of the dictionary and try to process them for a next iteration I get </p>
<blockquote>
<p>UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 2: ordinal not in range(128)</p>
</blockquote>
<p>errors</p>
<p>The weird thing is that before I place them as dictionary keys I encode them without errors. Here is a sample code</p>
<pre><code>pos = {}
neg = {}
for status in corpus:
p = s.analyze(status).polarity
words = []
# gather real words
for w in status.split(' '):
try:
words.append(w.encode('utf-8'))
except UnicodeDecodeError as e:
print(e)
# assign sentiment of the sentence to the words
for w in words:
if w not in pos:
pos[w] = 0
neg[w] = 0
if p >= 0:
pos[w] += 1
else:
neg[w] += 1
k = pos.keys()
k = [i.encode('utf-8') for i in k] # <-- for this line a get an error
p = [v for i, v in pos.items()]
n = [v for i, v in neg.items()]
</code></pre>
<p>So this piece of code will catch no errors during the splitting of the words but it will throw an error when trying to encode the keys again. I should note that normally I wouldn't try to encode the keys again, as I would think they are already properly encoded. But I added this extra encoding to narrow down the source of the error.</p>
<p>Am I missing something? Do you see anything wrong with my code?</p>
<p>to avoid confusion here is a sample code more close to the original that is not trying to encode the keys again</p>
<pre><code>k = ['happy']
for i in range(3):
print('sampling twitter --> {}'.format(i))
myStream.filter(track=k) # <-- this is where I will receive the error in the second iteration
for status in corpus:
p = s.analyze(status).polarity
words = []
# gather real words
for w in status.split(' '):
try:
words.append(w.encode('utf-8'))
except UnicodeDecodeError as e:
print(e)
# assign sentiment of the sentence to the words
for w in words:
if w not in pos:
pos[w] = 0
neg[w] = 0
if p >= 0:
pos[w] += 1
else:
neg[w] += 1
k = pos.keys()
</code></pre>
<p>(<strong>please suggest a better title for the question</strong>)</p>
| 0 | 2016-10-19T09:44:42Z | 40,127,892 | <p>Note that the error message says "'ascii' codec can't <strong>decode</strong> ...". That's because when you call <code>encode</code> on something that is already a bytestring in Python 2, it tries to decode it to Unicode first using the default codec.</p>
<p>I'm not sure why you thought that encoding again would be a good idea. Don't do it; the strings are already bytestrings, leave them as they are.</p>
| 1 | 2016-10-19T09:51:34Z | [
"python",
"python-2.7",
"encoding",
"utf-8",
"tweepy"
] |
Dictionary keys cannot be encoded as utf-8 | 40,127,739 | <p>I am using the twitter streaming api (tweepy) to capture several tweets. I do this in python2.7. </p>
<p>After I have collected a corpus of tweets I break each tweet into words and add each word to a dictionary as keys, where the values are the participation of each word in <code>positive</code> or <code>negative</code> sentences.</p>
<p>When I retrieve the words as keys of the dictionary and try to process them for a next iteration I get </p>
<blockquote>
<p>UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 2: ordinal not in range(128)</p>
</blockquote>
<p>errors</p>
<p>The weird thing is that before I place them as dictionary keys I encode them without errors. Here is a sample code</p>
<pre><code>pos = {}
neg = {}
for status in corpus:
p = s.analyze(status).polarity
words = []
# gather real words
for w in status.split(' '):
try:
words.append(w.encode('utf-8'))
except UnicodeDecodeError as e:
print(e)
# assign sentiment of the sentence to the words
for w in words:
if w not in pos:
pos[w] = 0
neg[w] = 0
if p >= 0:
pos[w] += 1
else:
neg[w] += 1
k = pos.keys()
k = [i.encode('utf-8') for i in k] # <-- for this line a get an error
p = [v for i, v in pos.items()]
n = [v for i, v in neg.items()]
</code></pre>
<p>So this piece of code will catch no errors during the splitting of the words but it will throw an error when trying to encode the keys again. I should note that normally I wouldn't try to encode the keys again, as I would think they are already properly encoded. But I added this extra encoding to narrow down the source of the error.</p>
<p>Am I missing something? Do you see anything wrong with my code?</p>
<p>to avoid confusion here is a sample code more close to the original that is not trying to encode the keys again</p>
<pre><code>k = ['happy']
for i in range(3):
print('sampling twitter --> {}'.format(i))
myStream.filter(track=k) # <-- this is where I will receive the error in the second iteration
for status in corpus:
p = s.analyze(status).polarity
words = []
# gather real words
for w in status.split(' '):
try:
words.append(w.encode('utf-8'))
except UnicodeDecodeError as e:
print(e)
# assign sentiment of the sentence to the words
for w in words:
if w not in pos:
pos[w] = 0
neg[w] = 0
if p >= 0:
pos[w] += 1
else:
neg[w] += 1
k = pos.keys()
</code></pre>
<p>(<strong>please suggest a better title for the question</strong>)</p>
| 0 | 2016-10-19T09:44:42Z | 40,127,946 | <p>You get a <strong>decode</strong> error while you are trying to <strong>encode</strong> a string. This seems weird, but it is due to Python's implicit decode/encode mechanism.</p>
<p>Python allows you to encode strings to obtain bytes and decode bytes to obtain strings. This means that Python can encode only strings and decode only bytes.</p>
<p>So when you try to encode bytes, Python (which does not know how to encode bytes) tries to implicitly decode the bytes to obtain a string to encode, and it uses its default encoding to do that.
This is why you get a decode error while trying to encode something: the implicit decoding.</p>
<p>That means that you are probably trying to encode something which is already encoded.</p>
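<p>A minimal guard against that double encoding (my own sketch; it behaves the same on Python 2 and 3 because <code>bytes</code> is <code>str</code> on Python 2): only encode real text and pass bytestrings through untouched.</p>

```python
def ensure_bytes(value, encoding='utf-8'):
    # bytes is str on Python 2, so this passes bytestrings through
    # on both major versions instead of encoding them a second time
    if isinstance(value, bytes):
        return value
    return value.encode(encoding)

# already-encoded input comes out unchanged
assert ensure_bytes(u'caf\xe9') == ensure_bytes(u'caf\xe9'.encode('utf-8'))
```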
| 1 | 2016-10-19T09:54:26Z | [
"python",
"python-2.7",
"encoding",
"utf-8",
"tweepy"
] |
merge two plot to one graph | 40,127,746 | <p>I have two dataframes, and I plot both of them.
One is for females and the other for males.</p>
<p><a href="https://i.stack.imgur.com/z7VwO.png" rel="nofollow"><img src="https://i.stack.imgur.com/z7VwO.png" alt="enter image description here"></a></p>
<p>I want to merge them into one graph with different colors
(since they have the same features).</p>
<p>here are codes</p>
<pre><code>female[feature].plot(kind='bar')
male[feature].plot(kind = "bar")
</code></pre>
<p>feature is the column name of the data frame.
The data frame looks like this:</p>
<pre><code> X1 X2 X3 ..... X46
male 100 65 75 ..... 150
female 500 75 30 ..... 350
</code></pre>
| 1 | 2016-10-19T09:44:57Z | 40,128,333 | <p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.bar.html" rel="nofollow"><code>DataFrame.plot.bar</code></a> with transposing <code>DataFrame</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.T.html" rel="nofollow"><code>T</code></a>:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
df = pd.DataFrame({
'X2': {'female': 75, 'male': 65},
'X46': {'female': 350, 'male': 150},
'X1': {'female': 500, 'male': 100},
'X3': {'female': 30, 'male': 75}})
print (df)
X1 X2 X3 X46
female 500 75 30 350
male 100 65 75 150
df.T.plot.bar()
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/0XCLw.png" rel="nofollow"><img src="https://i.stack.imgur.com/0XCLw.png" alt="graph"></a></p>
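<p>To sanity-check the transpose without opening a window (this assumes the non-interactive Agg backend is available):</p>

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen, no display needed
import pandas as pd

df = pd.DataFrame({'X1': [500, 100], 'X2': [75, 65],
                   'X3': [30, 75], 'X46': [350, 150]},
                  index=['female', 'male'])

ax = df.T.plot.bar(rot=0)
# 4 feature columns x 2 series -> 8 bars in total
bar_count = len(ax.patches)
```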
| 1 | 2016-10-19T10:10:27Z | [
"python",
"pandas",
"matplotlib",
"plot",
"dataframe"
] |
Why do I get garbage in the file when using "w+" instead of "a+" when the filehandles are stored in a Dict? | 40,127,987 | <p>I wrote a function that takes a list of items with several fields and writes each item to one or several files depending on the content of some of the fields.</p>
<p>The name of the files is based on the content of those fields, so for example an item with value <code>AAA</code> in the field <code>rating</code> and <code>Spain</code> in the field <code>country</code> will end up in the files <code>AAA_firms.txt</code>, <code>Spain_firms.txt</code> and <code>Spain_AAA_firms.txt</code> (just an example, not the real case).</p>
<p>When I first coded it I used <code>'w+'</code> as the mode to open the files. What I got was that most of the content of the files seemed to be corrupt: <code>^@</code> were the characters I had in the file, with only a few correct entries at the end. For example, we are talking about a file of more than 3500 entries with fewer than 100 legible entries at the end; the rest of the file was those <code>^@</code> characters.</p>
<p>I could not find the cause, so I did it in a different way: I stored all the entries in lists in the dict and then wrote each list to a file in one pass, again opening the file with <code>w+</code>. This worked fine, but I was left wondering what had happened.</p>
<p>Among other things I tried to change the <code>'w+'</code> to <code>'a+'</code>, and that works!</p>
<p>I would like to know the exact difference that makes <code>'w+'</code> work erratically and <code>'a+'</code> work fine.</p>
<p>I left the code below with the mode set to <code>'w+'</code> (this way it writes what seems to be garbage to the file).</p>
<p>The code is not 100% real: I had to modify names, and it is part of a class (the source list itself - actually a dict wrapper, as you can guess from the code here).</p>
<pre><code>def extractLists(self, outputDir, filenameprefix):
totalEntries = 0
aKey = "rating"
bKey = "country"
nameKey = "name"
representativeChars = 2
fileBase = outputDir + "/" + filenameprefix
filenameAll = fileBase + "_ALL.txt"
    mailLists = dict()
    for item in self.content.values():
        if (item[aKey] != aKey):
            totalEntries = totalEntries + 1
            filenameA = fileBase + "_" + item[aKey]+ "_ANY.txt"
            filenameB = fileBase + "_ANY_" + item[bKey][0:representativeChars]+ ".txt"
            filenameAB = fileBase + "_" + item[aKey]+ "_" + item[bKey][0:representativeChars] + ".txt"
            mailLists.setdefault(filenameAll,open(filenameAll,"w+")).write(item[nameKey]+"\n")
            mailLists.setdefault(filenameA,open(filenameA,"w+")).write(item[nameKey]+"\n")
            mailLists.setdefault(filenameB,open(filenameB,"w+")).write(item[nameKey]+"\n")
            mailLists.setdefault(filenameAB,open(filenameAB,"w+")).write(item[nameKey]+"\n")
for fileHandle in mailLists.values():
fileHandle.close()
print(totalEntries)
return totalEntries
</code></pre>
| 0 | 2016-10-19T09:55:53Z | 40,128,156 | <p>You are <em>reopening</em> the file objects each time in the loop, even if already present in the dictionary. The expression:</p>
<pre><code>mailLists.setdefault(filenameA,open(filenameA,"w+"))
</code></pre>
<p>opens the file <em>first</em>, as both arguments to <code>setdefault()</code> need to be available. Using <code>open(..., 'w+')</code> <em>truncates the file</em>.</p>
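<p>That eager evaluation of <code>setdefault()</code>'s second argument is easy to see in isolation (a minimal demo of my own, no files involved):</p>

```python
calls = []

def make(name):
    calls.append(name)       # record every call
    return name.upper()

d = {}
d.setdefault('a', make('a'))
d.setdefault('a', make('a'))  # make() runs again although 'a' already exists
assert calls == ['a', 'a']    # the default is built on every call
assert d == {'a': 'A'}        # ...but only the first one is stored
```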
<p>This is fine the first time, when the filename is not yet present; but on all subsequent times you just truncated a file for which there is still an open file handle. That already-existing open file handle in the dictionary has a file writing position, and continues to write from that position. Since the file has just been truncated, this leads to the behaviour you observed: corrupted file contents. You'll see multiple entries written because data could still be buffered; only data already flushed to disk is lost.</p>
<p>See this short demo (executed on OSX, different operating systems and filesystems can behave differently):</p>
<pre><code>>>> with open('/tmp/testfile.txt', 'w') as f:
... f.write('The quick brown fox')
... f.flush() # flush the buffer to disk
... open('/tmp/testfile.txt', 'w') # second open call, truncates
... f.write(' jumps over the lazy fox')
...
<open file '/tmp/testfile.txt', mode 'w' at 0x10079b150>
>>> with open('/tmp/testfile.txt', 'r') as f:
... f.read()
...
'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 jumps over the lazy fox'
</code></pre>
<p>Opening the files in <code>a</code> append mode doesn't truncate, which is why that change made things work.</p>
<p>Don't keep opening files, only do so when the file is <em>actually missing</em>. You'll have to use an <code>if</code> statement for that:</p>
<pre><code>if filenameA not in mailLists:
mailLists[filenameA] = open(filenameA, 'w+')
</code></pre>
<p>I'm not sure why you are using <code>+</code> in the filemode however, since you don't appear to be reading from any of the files.</p>
<p>For <code>filenameAll</code>, that variable name never changes and you don't need to open that file in the loop at all. Move that <em>outside</em> of the loop and open just once.</p>
| 1 | 2016-10-19T10:03:04Z | [
"python",
"io"
] |
Tkinter Class structure (class per frame) issue with duplicating widgets | 40,128,061 | <p>I've been trying out OOP for use with Tkinter - I'm getting there (I think) slowly...</p>
<p>I wanted to build a structure where each frame is handled by its own class, including all of its widgets and functions. Perhaps I am coming from the wrong angle but that is what makes most logical sense to me. - Feel free to tell me if you agree / disagree!</p>
<p>I know why the problem is happening - when I'm calling each class my <code>__init__</code> runs every time and builds the relevant widgets regardless of whether they are already present in the frame. However, the only way I can think of getting round this would be to build each frame in the <code>__init__</code> of my primary class <code>GUI_Start</code> - although this seems like a messy and unorganised solution to the problem.</p>
<p>Is there a way I can achieve a structure where each class takes care of its own functions and widgets but doesn't build the frame each time?</p>
<p>See below for minimal example of the issue:</p>
<pre><code>from Tkinter import *
class GUI_Start:
def __init__(self, master):
self.master = master
self.master.geometry('300x300')
self.master.grid_rowconfigure(0, weight=1)
self.master.grid_columnconfigure(0, weight=1)
self.win_colour = '#D2B48C'
self.frames = {}
for window in ['win1', 'win2']:
frame = Frame(self.master, bg=self.win_colour, bd=10, relief=GROOVE)
frame.grid(row=0, column=0, sticky='news')
setattr(self, window, frame)
self.frames[window] = frame
Page_1(self.frames)
def Next_Page(self, frames, controller):
controller(frames)
class Page_1(GUI_Start):
def __init__(self, master):
self.master = master
self.master['win1'].tkraise()
page1_label = Label(self.master['win1'], text='PAGE 1')
page1_label.pack(fill=X)
page1_button = Button(self.master['win1'], text='Visit Page 2...', command=lambda: self.Next_Page(self.master, Page_2))
page1_button.pack(fill=X, side=BOTTOM)
class Page_2(GUI_Start):
def __init__(self, master):
self.master = master
self.master['win2'].tkraise()
page2_label = Label(self.master['win2'], text='PAGE 2')
page2_label.pack(fill=X)
page2_button = Button(self.master['win2'], text='Back to Page 1...', command=lambda: self.Next_Page(self.master, Page_1))
page2_button.pack(fill=X, side=BOTTOM)
root = Tk()
gui = GUI_Start(root)
root.mainloop()
</code></pre>
<p>Feel free to critique the structure as I may be trying to approach this from the wrong angle!</p>
<p>Any feedback would be much appreciated!
Luke</p>
| 0 | 2016-10-19T09:59:27Z | 40,130,619 | <p>Your use of OOP is not very logical here. Your main program is in the class GUI_Start. If your pages inherit from GUI_Start, basically you create a whole new program with every page instance you create. You should instead inherit from Frame as Bryan Oakley has pointed out in the comments. Here is a somewhat repaired version of what you have posted. The original one by Bryan is still much better.</p>
<pre><code>from Tkinter import *
class GUI_Start:
def __init__(self, master):
self.master = master
self.master.geometry('300x300')
self.master.grid_rowconfigure(0, weight=1)
self.master.grid_columnconfigure(0, weight=1)
self.win_colour = '#D2B48C'
self.current_page=0
self.pages = []
for i in range(5):
page = Page(self.master,i+1)
page.grid(row=0,column=0,sticky='nsew')
self.pages.append(page)
for i in range(2):
page = Page_diff(self.master,i+1)
page.grid(row=0,column=0,sticky='nsew')
self.pages.append(page)
self.pages[0].tkraise()
def Next_Page():
next_page_index = self.current_page+1
if next_page_index >= len(self.pages):
next_page_index = 0
print(next_page_index)
self.pages[next_page_index].tkraise()
self.current_page = next_page_index
page1_button = Button(self.master, text='Visit next Page',command = Next_Page)
page1_button.grid(row=1,column=0)
class Page(Frame):
def __init__(self,master,number):
        Frame.__init__(self, master, bg='#D2B48C')  # super() fails with Py2's old-style Tkinter classes
self.master = master
self.master.tkraise()
page1_label = Label(self, text='PAGE '+str(number))
page1_label.pack(fill=X,expand=True)
class Page_diff(Frame):
def __init__(self,master,number):
        Frame.__init__(self, master)  # same reason: avoid super() on Python 2
self.master = master
self.master.tkraise()
page1_label = Label(self, text='I am different PAGE '+str(number))
page1_label.pack(fill=X)
root = Tk()
gui = GUI_Start(root)
root.mainloop()
</code></pre>
| 1 | 2016-10-19T11:49:08Z | [
"python",
"python-2.7",
"class",
"oop",
"tkinter"
] |
Tkinter Class structure (class per frame) issue with duplicating widgets | 40,128,061 | <p>I've been trying out OOP for use with Tkinter - I'm getting there (I think) slowly...</p>
<p>I wanted to build a structure where each frame is handled by its own class, including all of its widgets and functions. Perhaps I am coming from the wrong angle but that is what makes most logical sense to me. - Feel free to tell me if you agree / disagree!</p>
<p>I know why the problem is happening - when I'm calling each class my <code>__init__</code> runs every time and builds the relevant widgets regardless of whether they are already present in the frame. However, the only way I can think of getting round this would be to build each frame in the <code>__init__</code> of my primary class <code>GUI_Start</code> - although this seems like a messy and unorganised solution to the problem.</p>
<p>Is there a way I can achieve a structure where each class takes care of its own functions and widgets but doesn't build the frame each time?</p>
<p>See below for minimal example of the issue:</p>
<pre><code>from Tkinter import *
class GUI_Start:
def __init__(self, master):
self.master = master
self.master.geometry('300x300')
self.master.grid_rowconfigure(0, weight=1)
self.master.grid_columnconfigure(0, weight=1)
self.win_colour = '#D2B48C'
self.frames = {}
for window in ['win1', 'win2']:
frame = Frame(self.master, bg=self.win_colour, bd=10, relief=GROOVE)
frame.grid(row=0, column=0, sticky='news')
setattr(self, window, frame)
self.frames[window] = frame
Page_1(self.frames)
def Next_Page(self, frames, controller):
controller(frames)
class Page_1(GUI_Start):
def __init__(self, master):
self.master = master
self.master['win1'].tkraise()
page1_label = Label(self.master['win1'], text='PAGE 1')
page1_label.pack(fill=X)
page1_button = Button(self.master['win1'], text='Visit Page 2...', command=lambda: self.Next_Page(self.master, Page_2))
page1_button.pack(fill=X, side=BOTTOM)
class Page_2(GUI_Start):
def __init__(self, master):
self.master = master
self.master['win2'].tkraise()
page2_label = Label(self.master['win2'], text='PAGE 2')
page2_label.pack(fill=X)
page2_button = Button(self.master['win2'], text='Back to Page 1...', command=lambda: self.Next_Page(self.master, Page_1))
page2_button.pack(fill=X, side=BOTTOM)
root = Tk()
gui = GUI_Start(root)
root.mainloop()
</code></pre>
<p>Feel free to critique the structure as I may be trying to approach this from the wrong angle!</p>
<p>Any feedback would be much appreciated!
Luke</p>
| 0 | 2016-10-19T09:59:27Z | 40,132,323 | <p>The point of using classes is to encapsulate a bunch of behavior as a single unit. An object shouldn't modify anything outside of itself. At least, not by simply creating the object -- you can have methods that can have side effects.</p>
<p>In my opinion, the proper way to create "pages" is to inherit from <code>Frame</code>. All of the widgets that belong to the "page" must have the object itself as its parent. For example:</p>
<pre><code>class PageOne(tk.Frame):
def __init__(self, parent):
# use the __init__ of the superclass to create the actual frame
tk.Frame.__init__(self, parent)
# all other widgets use self (or some descendant of self)
# as their parent
self.label = tk.Label(self, ...)
self.button = tk.Button(self, ...)
...
</code></pre>
<p>Once done, you can treat instances of this class as if they were a single widget:</p>
<pre><code>root = tk.Tk()
page1 = PageOne(root)
page1.pack(fill="both", expand=True)
</code></pre>
<p>You can also create a base <code>Page</code> class, and have your actual pages inherit from it, if all of your pages have something in common (for example, a header or footer)</p>
<pre><code>class Page(tk.Frame):
def __init__(self, parent):
tk.Frame.__init__(self, parent)
<code common to all pages goes here>
class PageOne(Page):
def __init__(self, parent):
# initialize the parent class
Page.__init__(self, parent)
<code unique to page one goes here>
</code></pre>
| 1 | 2016-10-19T13:08:59Z | [
"python",
"python-2.7",
"class",
"oop",
"tkinter"
] |
Jython : SyntaxError: Lexical error at line 29, column 32. Encountered: "$" (36), after : "" | 40,128,264 | <p>I am getting a syntax error in my code. Can anyone say what's wrong in the syntax? I am new to this language, don't have much of an idea.</p>
<p>Error Message:</p>
<h2>WASX7017E: Exception received while running file "jdbcConn.jy"; exception information: com.ibm.bsf.BSFException: exception from Jython: Traceback (innermost last): (no code object) at line 0 File "", line 29 classpath = ["classpath" , ${ORACLE_JDBC_DRIVER_PATH}/ojdbc6.jar ] ^ SyntaxError: Lexical error at line 29, column 32. Encountered: "$" (36), after : ""</h2>
<p>My Code :</p>
<pre><code>import sys
## **JDBCProvider** ##
def OracleJDBC(cellName,serverName):
name ="Oracle JDBC Driver"
print " Name of JDBC Provider which will be created ---> " + name
print " ----------------------------------------------------------------------------------------- "
# Gets the name of cell
cell = AdminControl.getCell()
print cell
cellid = AdminConfig.getid('/Cell:'+ cell +'/')
print cellid
print " ----------------------------------------------------------------------------------------- "
## Creating New JDBC Provider ##
print " Creating New JDBC Provider :"+ name
n1 = ["name" , "Oracle JDBC Driver" ]
desc = ["description" , "Oracle JDBC Driver"]
impn = ["implementationClassName" ,
"oracle.jdbc.pool.OracleConnectionPoolDataSource"]
classpath = ["classpath" , ${ORACLE_JDBC_DRIVER_PATH}/ojdbc6.jar ]
attrs1 = [n1 , impn , desc , classpath]
n1 = ["name" , "Oracle JDBC Driver" ]
desc = ["description" , "Oracle JDBC Driver"]
impn = ["implementationClassName" , "oracle.jdbc.pool.OracleConnectionPoolDataSource"]
classpath = ["classpath" , "${ORACLE_JDBC_DRIVER_PATH}/ojdbc6.jar"]
attrs1 = [n1 , impn , desc , classpath]
Serverid = AdminConfig.getid("/Cell:" + cellName + "/ServerName:" + serverName +"/")
jdbc = AdminConfig.create('JDBCProvider', Serverid, attrs1)
print " New JDBC Provider created :" + name
AdminConfig.save()
print " Saving Configuraion "
print " ----------------------------------------------------------------------------------------- "
####################################################################################################################
####################################################################################################################
#main program starts here
if __name__ == '__main__':
cellName = sys.argv[0]
serverName = sys.argv[1]
OracleJDBC(cellName,serverName)
</code></pre>
| 0 | 2016-10-19T10:07:29Z | 40,128,315 | <p>Change this line</p>
<pre><code>classpath = ["classpath" , ${ORACLE_JDBC_DRIVER_PATH}/ojdbc6.jar ]
</code></pre>
<p>to this:</p>
<pre><code>classpath = ["classpath" , "${ORACLE_JDBC_DRIVER_PATH}/ojdbc6.jar" ]
</code></pre>
<p>or better yet, just delete that line. Anyway, <code>classpath</code> is declared again later with the same value.</p>
| 0 | 2016-10-19T10:09:50Z | [
"python",
"jython"
] |
Jython : SyntaxError: Lexical error at line 29, column 32. Encountered: "$" (36), after : "" | 40,128,264 | <p>I am getting a syntax error in my code. Can anyone say what's wrong in the syntax? I am new to this language, don't have much of an idea.</p>
<p>Error Message:</p>
<h2>WASX7017E: Exception received while running file "jdbcConn.jy"; exception information: com.ibm.bsf.BSFException: exception from Jython: Traceback (innermost last): (no code object) at line 0 File "", line 29 classpath = ["classpath" , ${ORACLE_JDBC_DRIVER_PATH}/ojdbc6.jar ] ^ SyntaxError: Lexical error at line 29, column 32. Encountered: "$" (36), after : ""</h2>
<p>My Code :</p>
<pre><code>import sys
## **JDBCProvider** ##
def OracleJDBC(cellName,serverName):
name ="Oracle JDBC Driver"
print " Name of JDBC Provider which will be created ---> " + name
print " ----------------------------------------------------------------------------------------- "
# Gets the name of cell
cell = AdminControl.getCell()
print cell
cellid = AdminConfig.getid('/Cell:'+ cell +'/')
print cellid
print " ----------------------------------------------------------------------------------------- "
## Creating New JDBC Provider ##
print " Creating New JDBC Provider :"+ name
n1 = ["name" , "Oracle JDBC Driver" ]
desc = ["description" , "Oracle JDBC Driver"]
impn = ["implementationClassName" ,
"oracle.jdbc.pool.OracleConnectionPoolDataSource"]
classpath = ["classpath" , ${ORACLE_JDBC_DRIVER_PATH}/ojdbc6.jar ]
attrs1 = [n1 , impn , desc , classpath]
n1 = ["name" , "Oracle JDBC Driver" ]
desc = ["description" , "Oracle JDBC Driver"]
impn = ["implementationClassName" , "oracle.jdbc.pool.OracleConnectionPoolDataSource"]
classpath = ["classpath" , "${ORACLE_JDBC_DRIVER_PATH}/ojdbc6.jar"]
attrs1 = [n1 , impn , desc , classpath]
Serverid = AdminConfig.getid("/Cell:" + cellName + "/ServerName:" + serverName +"/")
jdbc = AdminConfig.create('JDBCProvider', Serverid, attrs1)
print " New JDBC Provider created :" + name
AdminConfig.save()
print " Saving Configuraion "
print " ----------------------------------------------------------------------------------------- "
####################################################################################################################
####################################################################################################################
#main program starts here
if __name__ == '__main__':
cellName = sys.argv[0]
serverName = sys.argv[1]
OracleJDBC(cellName,serverName)
</code></pre>
| 0 | 2016-10-19T10:07:29Z | 40,128,362 | <p>Your problem is in this line:</p>
<pre><code>classpath = ["classpath" , ${ORACLE_JDBC_DRIVER_PATH}/ojdbc6.jar ]
</code></pre>
<p>Instead, do something like</p>
<pre><code>opath = os.getenv("ORACLE_JDBC_DRIVER_PATH")
classpath = ["classpath", "{}/ojdbc6.jar".format(opath)]
</code></pre>
<p>"${ORACLE_JDBC_DRIVER_PATH}" is shell syntax, not Python.</p>
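<p>A sketch of what that looks like end to end (the path is hypothetical; <code>%</code>-formatting is used because it also works on the old Jython versions shipped with wsadmin):</p>

```python
import os

os.environ['ORACLE_JDBC_DRIVER_PATH'] = '/opt/oracle'  # hypothetical value
opath = os.getenv('ORACLE_JDBC_DRIVER_PATH')
classpath = ["classpath", "%s/ojdbc6.jar" % opath]
# classpath -> ['classpath', '/opt/oracle/ojdbc6.jar']
```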
| 2 | 2016-10-19T10:11:24Z | [
"python",
"jython"
] |
Request returns partial page | 40,128,383 | <p>I'm trying to parse data of a website that loads when user scroll. There is a finite number of element that can appears while scrolling, but using this only gives the first part (25 out of 112):</p>
<pre><code>url = "http://url/to/website"
response = requests.get(url)
soup = BeautifulSoup(response.text)
</code></pre>
<p>How can I tell <code>request</code> to "scroll" before returning the html?</p>
<p>EDIT : apparently request don't do that, what solution can I use in Python? </p>
| 1 | 2016-10-19T10:12:24Z | 40,128,528 | <p>You can't. The question is based on a misunderstanding of what requests does; it loads the content of the page only. Endless scrolling is powered by Javascript, which requests won't do anything with.</p>
<p>You'd need some browser automation tools like Selenium to do this; or find out what Ajax endpoint the scrolling JS is using and load that directly.</p>
| 4 | 2016-10-19T10:18:52Z | [
"python",
"beautifulsoup",
"python-requests"
] |
Request returns partial page | 40,128,383 | <p>I'm trying to parse data from a website that loads more content as the user scrolls. There is a finite number of elements that can appear while scrolling, but using this only gives the first part (25 out of 112):</p>
<pre><code>url = "http://url/to/website"
response = requests.get(url)
soup = BeautifulSoup(response.text)
</code></pre>
<p>How can I tell <code>request</code> to "scroll" before returning the html?</p>
<p>EDIT: apparently <code>requests</code> doesn't do that; what solution can I use in Python?</p>
| 1 | 2016-10-19T10:12:24Z | 40,128,869 | <blockquote>
<p>The only thing you need to know is how the server side works.</p>
</blockquote>
<p>Usually, an <code>onScroll</code>, <code>onClick</code> or other event will trigger an <code>AJAX request</code> to the server, and the client-side JavaScript will render the returned data (JSON/XML...). So the only thing you need to do is repeat those AJAX requests against the same server to get that data.</p>
<p>For example, the actions in the browser will look like below:</p>
<pre><code>1. Enter url on browser
> [HTTP GET REQUEST] http://url/to/website
2. Scroll on the page
> [AJAX GET] http://url/to/website/1
> [javascript on front-end will process those data]
3. Then, keeping scrolling on the page
> [AJAX GET] http://url/to/website/2
> [javascript on front-end will process those data]
4. ... (and so on)
</code></pre>
<hr>
<p><strong>Q. How to use python to get those data?</strong></p>
<p>A. One simple way is to use <code>browser > inspect > network_tab</code> to find which AJAX requests are sent when you scroll the page, and then repeat those AJAX requests with the corresponding headers in Python.</p>
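<p>To make the idea concrete, here is a hedged sketch. The URL pattern <code>http://url/to/website/&lt;n&gt;</code> is only an assumption taken from the example above; the real endpoint and headers must be discovered in the browser's Network tab.</p>

```python
# Hypothetical URL pattern copied from the example above; the real endpoint
# must be found in the browser's Network tab.
def ajax_url(base, page):
    return "{0}/{1}".format(base, page)

def scrape_all(base, pages):
    # Imported lazily so the pure helper above stays dependency-free.
    import requests
    items = []
    for page in range(1, pages + 1):
        resp = requests.get(ajax_url(base, page),
                            headers={"X-Requested-With": "XMLHttpRequest"})
        items.append(resp.json())  # many AJAX endpoints return JSON
    return items

if __name__ == "__main__":
    print(ajax_url("http://url/to/website", 2))  # -> http://url/to/website/2
```

<p>The <code>X-Requested-With</code> header mimics what many front-ends send with AJAX calls; some servers require more of the original headers (cookies, referer) before they answer.</p>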
| 2 | 2016-10-19T10:32:57Z | [
"python",
"beautifulsoup",
"python-requests"
] |
Replace 0 with blank in dataframe Python pandas | 40,128,388 | <p>I made the following code that takes out all of the zeros from my df. However, when a number contains a zero, those digits are removed as well.</p>
<pre><code>e.g.
3016.2 316.2
0.235 .235
data_usage_df['Data Volume (MB)'] = data_usage_df['Data Volume (MB)'].str.replace('0', '')
</code></pre>
<p>Could you help me figure out how to do an exact match on cells that equal 0 and replace them with a blank value?</p>
| 1 | 2016-10-19T10:12:36Z | 40,128,403 | <p>I think you need to add <code>^</code> to match the start of the string and <code>$</code> to match the end (and escape the dot, so only the exact value <code>0.0</code> matches):</p>
<pre><code>data_usage_df['Data Volume (MB)']=data_usage_df['Data Volume (MB)'].str.replace('^0\.0$', '')
</code></pre>
<p>Sample:</p>
<pre><code>data_usage_df = pd.DataFrame({'Data Volume (MB)':[3016.2, 0.235, 1.4001, 0, 4.00]})
print (data_usage_df)
Data Volume (MB)
0 3016.2000
1 0.2350
2 1.4001
3 0.0000
4 4.0000
data_usage_df['Data Volume (MB)'] = data_usage_df['Data Volume (MB)'].astype(str)
data_usage_df['Data Volume (MB)']=data_usage_df['Data Volume (MB)'].str.replace('^0\.0$', '')
print (data_usage_df)
Data Volume (MB)
0 3016.2
1 0.235
2 1.4001
3
4 4.0
</code></pre>
<p>Another solution is converting the column with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_numeric.html" rel="nofollow"><code>to_numeric</code></a> and assigning an empty string where the value is <code>0</code>:</p>
<pre><code>data_usage_df['Data Volume (MB)'] = data_usage_df['Data Volume (MB)'].astype(str)
data_usage_df.ix[pd.to_numeric(data_usage_df['Data Volume (MB)'], errors='coerce') == 0,
['Data Volume (MB)']] = ''
print (data_usage_df)
Data Volume (MB)
0 3016.2
1 0.235
2 1.4001
3
4 4.0
</code></pre>
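<p>For completeness, the same idea can be written with <code>Series.mask</code> instead of a regex — a small self-contained sketch (the column name is shortened for the example):</p>

```python
import pandas as pd

# Build a small frame like the sample above.
df = pd.DataFrame({'volume': [3016.2, 0.235, 1.4001, 0, 4.0]})

# Convert to strings, then blank out the cells whose numeric value is exactly 0.
col = df['volume'].astype(str)
df['volume'] = col.mask(pd.to_numeric(col, errors='coerce') == 0, '')

print(df['volume'].tolist())  # -> ['3016.2', '0.235', '1.4001', '', '4.0']
```

<p>This avoids regex escaping issues entirely, because the comparison is done on the numeric values rather than on their string representation.</p>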
| 0 | 2016-10-19T10:13:26Z | [
"python",
"pandas",
"replace"
] |
Replace 0 with blank in dataframe Python pandas | 40,128,388 | <p>I made the following code that takes out all of the zeros from my df. However, when a number contains a zero, those digits are removed as well.</p>
<pre><code>e.g.
3016.2 316.2
0.235 .235
data_usage_df['Data Volume (MB)'] = data_usage_df['Data Volume (MB)'].str.replace('0', '')
</code></pre>
<p>Could you help me figure out how to do an exact match on cells that equal 0 and replace them with a blank value?</p>
| 1 | 2016-10-19T10:12:36Z | 40,130,624 | <pre><code>data_usage_df = data_usage_df.astype(str)
data_usage_df['Data Volume (MB)'].replace(['0', '0.0'], '', inplace=True)
</code></pre>
| 0 | 2016-10-19T11:49:17Z | [
"python",
"pandas",
"replace"
] |
pairwise comparisons within a dataset | 40,128,515 | <p>My data is 18 vectors, each with up to 200 numbers (some with as few as 5), organised as:</p>
<pre><code>[2, 3, 35, 63, 64, 298, 523, 624, 625, 626, 823, 824]
[2, 752, 753, 808, 843]
[2, 752, 753, 843]
[2, 752, 753, 808, 843]
[3, 36, 37, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, ...]
</code></pre>
<p>I would like to find the pair that is the most similar in this group of lists. The numbers themselves are not important, they may as well be strings - a 2 in one list and a 3 in another list are not comparable. </p>
<p>I am checking whether the variables are the same. For example, the second list is exactly the same as the 4th list, but differs from list 3 by only 1 variable.</p>
<p>Additionally it would be nice to also find the most similar triplet or n that are the most similar, but pairwise is the first and most important task.</p>
<p>I hope I have laid out this problem clearly enough, but I am very happy to supply any more information that anyone might need!</p>
<p>I have a feeling it involves numpy or scipy norm/cosine calculations, but I can't quite work out how to do it, or whether this is the best method.</p>
<p>Any help would be greatly appreciated!</p>
| 1 | 2016-10-19T10:18:05Z | 40,129,621 | <p>You can use <code>itertools</code> to generate your pairwise comparisons. If you just want the items which are shared between two lists you can use a <code>set</code> intersection. Using your example:</p>
<pre class="lang-python prettyprint-override"><code>import itertools
a = [2, 3, 35, 63, 64, 298, 523, 624, 625, 626, 823, 824]
b = [2, 752, 753, 808, 843]
c = [2, 752, 753, 843]
d = [2, 752, 753, 808, 843]
e = [3, 36, 37, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112]
data = [a, b, c, d, e]
def number_same(a, b):
# Find the items which are the same
return set(a).intersection(set(b))
for i in itertools.combinations(range(len(data)), 2):
    print "Indexes: ", i, len(number_same(data[i[0]], data[i[1]]))

>>>Indexes:  (0, 1) 1
Indexes:  (0, 2) 1
Indexes:  (0, 3) 1
Indexes:  (0, 4) 1
Indexes:  (1, 2) 4
Indexes:  (1, 3) 5 ... etc
</code></pre>
<p>This will give the number of items which are shared between two lists, you could maybe use this information to define which two lists are the best pair...</p>
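<p>Building on that, picking the single most similar pair can be done with <code>max</code> over the same combinations — a small illustrative sketch (the helper name is made up, and the sample data is shortened):</p>

```python
from itertools import combinations

def most_similar_pair(lists):
    # Compare every unordered pair of lists and keep the one with the
    # largest set intersection.
    return max(combinations(range(len(lists)), 2),
               key=lambda ij: len(set(lists[ij[0]]) & set(lists[ij[1]])))

data = [
    [2, 3, 35, 63],
    [2, 752, 753, 808, 843],
    [2, 752, 753, 843],
    [2, 752, 753, 808, 843],
]
print(most_similar_pair(data))  # -> (1, 3): those two lists are identical
```

<p>The same pattern extends to the "most similar triplet" part of the question by using <code>combinations(range(len(lists)), 3)</code> and intersecting three sets in the key function.</p>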
| 1 | 2016-10-19T11:03:19Z | [
"python",
"python-2.7",
"numpy",
"scipy",
"cosine-similarity"
] |
Index data from not related models in Django-Haystack Solr | 40,128,623 | <p>I have a model to which multiple other models point through foreign keys, like:</p>
<pre><code>class MainModel(models.Model):
name=models.CharField(max_length=40)
class PointingModel1(models.Model):
color=models.CharField(max_length=40)
main_model=models.ForeignKey(MainModel)
class PointingModel2(models.Model):
othername=models.CharField(max_length=40)
main_model=models.ForeignKey(MainModel)
</code></pre>
<p>So I want to return the name of the MainModel by searching for color and othername fields in the PointingModels. Is there any way to do this?</p>
| 0 | 2016-10-19T10:22:33Z | 40,128,761 | <p>It's easy.</p>
<pre><code>colors = PointingModel1.objects.filter(color='blue')
for color in colors:
name = color.main_model.name
    # now you can append `name` to a list or use it however you need
</code></pre>
| 0 | 2016-10-19T10:29:06Z | [
"python",
"django",
"solr"
] |
Pyspark adding few seconds to time | 40,128,673 | <p>I am trying to add a few seconds to a time but I haven't been successful. Here is my example:</p>
<pre><code>import datetime
str1 = sc.parallelize(["170745","140840"])
aa = str1.map(lambda l: datetime.datetime.strptime(l, '%H%M%S').strftime('%H:%M:%S'))
</code></pre>
<p>yields</p>
<pre><code>['17:07:45', '14:08:40']
</code></pre>
<p>but what I want is</p>
<pre><code>['17:07:52', '14:08:47']
</code></pre>
<p>How could I add 7 seconds to each converted time? I know <code>timedelta</code> exists, but I am not sure how to use it here.</p>
| 1 | 2016-10-19T10:24:56Z | 40,131,911 | <p>You can add <code>datetime.timedelta(0,7)</code> after you have converted your strings to dates:</p>
<pre><code>import datetime
str1 = sc.parallelize(["170745","140840"])
aa = str1.map(lambda l: (datetime.datetime.strptime(l, '%H%M%S') + datetime.timedelta(0,7)).strftime('%H:%M:%S'))
</code></pre>
<p><code>aa.collect()</code> returns:</p>
<pre><code>['17:07:52', '14:08:47']
</code></pre>
<p>Replacing the lambda with a regular function arguably makes it easier to understand:</p>
<pre><code>import datetime
def processdate(timeString):
date = datetime.datetime.strptime(timeString, '%H%M%S')
date += datetime.timedelta(0,7)
return date.strftime('%H:%M:%S')
str1 = sc.parallelize(["170745","140840"])
aa = str1.map(processdate)
</code></pre>
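<p>The shift itself can be checked without Spark at all — a minimal standalone sketch of the conversion the <code>map()</code> applies (the function name is illustrative):</p>

```python
from datetime import datetime, timedelta

def shift_time(hhmmss, seconds=7):
    # Parse HHMMSS, add the offset, and format back with colons.
    t = datetime.strptime(hhmmss, '%H%M%S') + timedelta(seconds=seconds)
    return t.strftime('%H:%M:%S')

print(shift_time("170745"))  # -> 17:07:52
print(shift_time("140840"))  # -> 14:08:47
```

<p>Times near midnight simply wrap around (e.g. <code>"235958"</code> becomes <code>00:00:05</code>), because only the time-of-day part is formatted back out.</p>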
| 1 | 2016-10-19T12:51:29Z | [
"python",
"datetime",
"pyspark"
] |
Pyspark adding few seconds to time | 40,128,673 | <p>I am trying to add a few seconds to a time but I haven't been successful. Here is my example:</p>
<pre><code>import datetime
str1 = sc.parallelize(["170745","140840"])
aa = str1.map(lambda l: datetime.datetime.strptime(l, '%H%M%S').strftime('%H:%M:%S'))
</code></pre>
<p>yields</p>
<pre><code>['17:07:45', '14:08:40']
</code></pre>
<p>but what I want is</p>
<pre><code>['17:07:52', '14:08:47']
</code></pre>
<p>How could I add 7 seconds to each converted time? I know <code>timedelta</code> exists, but I am not sure how to use it here.</p>
| 1 | 2016-10-19T10:24:56Z | 40,132,741 | <p>You can use "+datetime.timedelta(seconds=7)" to solve this problem.</p>
<pre><code>(datetime.datetime.strptime(l, '%H%M%S') + datetime.timedelta(seconds=7)).strftime('%H:%M:%S')
</code></pre>
<p>If you want to calculate the number of seconds between two times, subtract the two datetime.datetime objects directly.</p>
<pre><code>import datetime
starttime = datetime.datetime.now()
endtime = datetime.datetime.now()
print (endtime - starttime).seconds
</code></pre>
| 0 | 2016-10-19T13:25:41Z | [
"python",
"datetime",
"pyspark"
] |
Pythonista user- initiated programs do not run | 40,128,684 | <p>Hi, I am using Pythonista 3.0 on the iPad. As a beginner I downloaded examples to try out. They worked for a while, but now when I try to run them there is no response. All the sample programs in the original Pythonista install work perfectly. </p>
<p>This for example does not work. Nothing happens when I press the triangle.
Thanks</p>
<pre><code># -*- coding: utf-8 -*-
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
from itertools import product, combinations
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.set_aspect("equal")
#draw cube
r = [-1, 1]
for s, e in combinations(np.array(list(product(r,r,r))), 2):
if np.sum(np.abs(s-e)) == r[1]-r[0]:
ax.plot3D(*zip(s,e), color="b")
# draw sphere
u, v = np.mgrid[0:2*np.pi:20j, 0:np.pi:10j]
x=np.cos(u)*np.sin(v)
y=np.sin(u)*np.sin(v)
z=np.cos(v)
ax.plot_wireframe(x, y, z, color="r")
#draw a point
ax.scatter([0],[0],[0],color="g",s=100)
#draw a vector
from matplotlib.patches import FancyArrowPatch
from mpl_toolkits.mplot3d import proj3d
class Arrow3D(FancyArrowPatch):
def __init__(self, xs, ys, zs, *args, **kwargs):
FancyArrowPatch.__init__(self, (0,0), (0,0), *args, **kwargs)
self._verts3d = xs, ys, zs
def draw(self, renderer):
xs3d, ys3d, zs3d = self._verts3d
xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, renderer.M)
self.set_positions((xs[0],ys[0]),(xs[1],ys[1]))
FancyArrowPatch.draw(self, renderer)
a = Arrow3D([0,1],[0,1],[0,1], mutation_scale=20, lw=1, arrowstyle="-|>", color="k")
ax.add_artist(a)
plt.show()
</code></pre>
| 0 | 2016-10-19T10:25:40Z | 40,132,180 | <p>In my opinion, the matplotlib bundled with Pythonista may have been upgraded from 0.9x to 1.x, so you should use the newer syntax, as follows.</p>
<pre><code># -*- coding: utf-8 -*-
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
from itertools import product, combinations
fig = plt.figure()
ax = Axes3D(fig) ## it's different now.
ax.set_aspect("equal")
</code></pre>
| 0 | 2016-10-19T13:02:17Z | [
"python",
"ipad",
"matplotlib",
"pythonista"
] |
NetworkX: how to add weights to an existing G.edges()? | 40,128,692 | <p>Given any graph G created in NetworkX, I want to be able to assign some weights to G.edges() <strong>after</strong> the graph is created. The graphs involved are grids, erdos-reyni, barabasi-albert, and so forth. </p>
<p>Given my <code>G.edges()</code>:</p>
<pre><code>[(0, 1), (0, 10), (1, 11), (1, 2), (2, 3), (2, 12), ...]
</code></pre>
<p>And my <code>weights</code>:</p>
<pre><code>{(0,1):1.0, (0,10):1.0, (1,2):1.0, (1,11):1.0, (2,3):1.0, (2,12):1.0, ...}
</code></pre>
<p><strong>How can I assign each edge the relevant weight?</strong> In this trivial case all weights are 1.</p>
<p>I've tried to add the weights to G.edges() directly like this</p>
<pre><code>for i, edge in enumerate(G.edges()):
G.edges[i]['weight']=weights[edge]
</code></pre>
<p>But I get this error:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-48-6119dc6b7af0> in <module>()
10
11 for i, edge in enumerate(G.edges()):
---> 12 G.edges[i]['weight']=weights[edge]
TypeError: 'instancemethod' object has no attribute '__getitem__'
</code></pre>
<p><strong>What's wrong?</strong> Since <code>G.edges()</code> is a list, why can't I access its elements as with any other list?</p>
| 1 | 2016-10-19T10:25:59Z | 40,129,408 | <p>It fails because <code>edges</code> is a method.</p>
<p>The <a href="https://networkx.github.io/documentation/development/reference/generated/networkx.Graph.get_edge_data.html" rel="nofollow">documentation</a> says to do this like:</p>
<pre><code>G[source][target]['weight'] = weight
</code></pre>
<p>For example, the following works for me:</p>
<pre><code>import networkx as nx
G = nx.Graph()
G.add_path([0, 1, 2, 3])
G[0][1]['weight'] = 3
>>> G.get_edge_data(0, 1)
{'weight': 3}
</code></pre>
<p>However, your type of code indeed fails:</p>
<pre><code>G.edges[0][1]['weight'] = 3
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-14-97b10ad2279a> in <module>()
----> 1 G.edges[0][1]['weight'] = 3
TypeError: 'instancemethod' object has no attribute '__getitem__'
</code></pre>
<hr>
<p>In your case, I'd suggest</p>
<pre><code>for e in G.edges():
    G[e[0]][e[1]]['weight'] = weights[e]
</code></pre>
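<p>A bulk alternative is <code>nx.set_edge_attributes</code>, which writes all weights in one call. (The argument order shown assumes NetworkX 2.x, <code>(G, values, name)</code>; 1.x used <code>(G, name, values)</code>.)</p>

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([(0, 1), (0, 10), (1, 2)])
weights = {(0, 1): 1.0, (0, 10): 1.0, (1, 2): 1.0}

# Write every weight at once instead of looping over the edges.
nx.set_edge_attributes(G, weights, 'weight')
print(G[0][1]['weight'])  # -> 1.0
```

<p>The dict keys must be edge tuples in either orientation for an undirected graph; NetworkX resolves them to the same underlying edge attribute dict.</p>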
| 1 | 2016-10-19T10:55:15Z | [
"python",
"algorithm",
"graph",
"networkx",
"edges"
] |
Semi-supervised learning for regression by scikit-learn | 40,128,742 | <p>Can Label Propagation be used for semi-supervised regression tasks in scikit-learn?
According to its API, the answer is YES.
<a href="http://scikit-learn.org/stable/modules/label_propagation.html" rel="nofollow">http://scikit-learn.org/stable/modules/label_propagation.html</a></p>
<p>However, I got the error message when I tried to run the following code.</p>
<pre><code>from sklearn import datasets
from sklearn.semi_supervised import label_propagation
import numpy as np
rng=np.random.RandomState(0)
boston = datasets.load_boston()
X=boston.data
y=boston.target
y_30=np.copy(y)
y_30[rng.rand(len(y))<0.3]=-999
label_propagation.LabelSpreading().fit(X,y_30)
</code></pre>
<hr>
<p>It shows that "ValueError: Unknown label type: 'continuous'" in the label_propagation.LabelSpreading().fit(X,y_30) line.</p>
<p>How should I solve the problem? Thanks a lot.</p>
| 1 | 2016-10-19T10:28:19Z | 40,140,691 | <p>It looks like an error in the documentation; the code itself is clearly classification-only (beginning of the <code>.fit</code> call of the <a href="https://github.com/scikit-learn/scikit-learn/blob/412996f/sklearn/semi_supervised/label_propagation.py#L201" rel="nofollow">BasePropagation class</a>):</p>
<pre><code> check_classification_targets(y)
# actual graph construction (implementations should override this)
graph_matrix = self._build_graph()
# label construction
# construct a categorical distribution for classification only
classes = np.unique(y)
classes = (classes[classes != -1])
</code></pre>
<p>In theory you could remove the "check_classification_targets" call and use a "regression-like" method, but it will not be true regression, since you will never "propagate" any value that is not encountered in the training set; you will simply treat each regression value as a class identifier. And you will be unable to use the value "-1", since it is the codename for "unlabeled"...</p>
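<p>One way to act on that observation (a workaround I am sketching, not something the library supports directly) is to discretize the continuous targets into bins, feed the bin indices to <code>LabelSpreading</code> as classes, and map predictions back to bin centers afterwards. The helper below is hypothetical:</p>

```python
import numpy as np

def discretize_targets(y, n_bins=10, unlabeled=-999):
    # Turn continuous targets into integer bin labels; keep -1 for the
    # unlabeled points, as scikit-learn's label propagation expects.
    y = np.asarray(y, dtype=float)
    labeled = y != unlabeled
    edges = np.linspace(y[labeled].min(), y[labeled].max(), n_bins + 1)
    classes = np.digitize(y, edges[1:-1])   # bin index in 0..n_bins-1
    classes[~labeled] = -1
    centers = (edges[:-1] + edges[1:]) / 2.0
    return classes, centers

classes, centers = discretize_targets([0.0, 5.0, 10.0, -999], n_bins=2)
print(classes.tolist(), centers.tolist())  # -> [0, 1, 1, -1] [2.5, 7.5]
```

<p>After <code>LabelSpreading().fit(X, classes)</code>, an approximate regression prediction would be <code>centers[model.transduction_]</code> — a coarse approximation, exactly as the caveat above warns.</p>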
| 0 | 2016-10-19T20:13:14Z | [
"python",
"machine-learning",
"scikit-learn",
"regression"
] |
How to create xml in Python dynamically? | 40,128,776 | <p>import xml.etree.ElementTree as ET</p>
<pre><code> var3 = raw_input("Enter the root Element: \n")
root = ET.Element(var3)
var4 = raw_input("Enter the sub root Element: \n")
doc = ET.SubElement(root, var4)
no_of_rows=input("Enter the number of Element for XML files: - \n")
def printme():
var = raw_input("Enter Element: - \n")
var1 = raw_input("Enter Data: - \n")
ET.SubElement(doc, var).text =var1
return;
for num in range(0, no_of_rows):
printme()
tree = ET.ElementTree(root)
file = raw_input("Enter File Name: - \n")
tree.write(file)
ET.ElementTree(root).write(file, encoding="utf-8", xml_declaration=True)
print "Xml file Created..!!"
</code></pre>
<p>Above is the script I use to create XML dynamically in Python. Currently it creates only one sub-root. How can I create multiple sub-roots with elements in them? What did I do wrong in the above code?</p>
| 0 | 2016-10-19T10:29:34Z | 40,129,499 | <p>You are taking the number of elements from the user but you are not using it.
Use a loop and get the element details from the user within the loop, as shown below: </p>
<pre><code>import xml.etree.ElementTree as ET
try:
no_of_rows=int(input("Enter the number of Element for XML files: - \n"))
root = input("Enter the root Element: \n")
root_element = ET.Element(root)
    for _ in range(no_of_rows):
tag = input("Enter Element: - \n")
value = input("Enter Data: - \n")
ET.SubElement(root_element, tag).text = value
tree = ET.ElementTree(root_element)
tree.write("filename.xml")
print("Xml file Created..!!")
except ValueError:
print("Value Error")
except:
    print("Exception Occurred")
</code></pre>
<p>I hope this is what you want to achieve.</p>
| 0 | 2016-10-19T10:58:37Z | [
"python",
"xml"
] |
How to create xml in Python dynamically? | 40,128,776 | <p>import xml.etree.ElementTree as ET</p>
<pre><code> var3 = raw_input("Enter the root Element: \n")
root = ET.Element(var3)
var4 = raw_input("Enter the sub root Element: \n")
doc = ET.SubElement(root, var4)
no_of_rows=input("Enter the number of Element for XML files: - \n")
def printme():
var = raw_input("Enter Element: - \n")
var1 = raw_input("Enter Data: - \n")
ET.SubElement(doc, var).text =var1
return;
for num in range(0, no_of_rows):
printme()
tree = ET.ElementTree(root)
file = raw_input("Enter File Name: - \n")
tree.write(file)
ET.ElementTree(root).write(file, encoding="utf-8", xml_declaration=True)
print "Xml file Created..!!"
</code></pre>
<p>Above is the script I use to create XML dynamically in Python. Currently it creates only one sub-root. How can I create multiple sub-roots with elements in them? What did I do wrong in the above code?</p>
| 0 | 2016-10-19T10:29:34Z | 40,129,571 | <p>If you want to create XML you can just do this:</p>
<pre><code>from lxml import etree
try:
root_text = raw_input("Enter the root Element: \n")
root = etree.Element(root_text)
child_tag = raw_input("Enter the child tag Element: \n")
child_text = raw_input("Enter the child text Element: \n")
    child = etree.Element(child_tag)
    child.text = child_text
root.append(child)
with open('file.xml', 'w') as f:
f.write(etree.tostring(root))
except ValueError:
    print("Error occurred")
</code></pre>
<p>Or if you want a dynamic number of elements, just use a for loop.</p>
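<p>Tying this back to the original question (multiple sub-roots, dynamic length) — a non-interactive sketch using only the standard library; the input data structure is my own invention for the example:</p>

```python
import xml.etree.ElementTree as ET

def build_tree(root_tag, groups):
    # groups maps each sub-root tag to a list of (tag, text) children.
    root = ET.Element(root_tag)
    for sub_tag, children in groups.items():
        sub = ET.SubElement(root, sub_tag)
        for tag, text in children:
            ET.SubElement(sub, tag).text = text
    return root

root = build_tree('config', {'db': [('host', 'localhost'), ('port', '5432')]})
print(ET.tostring(root).decode())
# -> <config><db><host>localhost</host><port>5432</port></db></config>
```

<p>Calling <code>ET.SubElement</code> on <code>root</code> in a loop is what creates multiple sub-roots; each returned element can then receive its own children the same way.</p>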
| -1 | 2016-10-19T11:01:38Z | [
"python",
"xml"
] |