title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
---|---|---|---|---|---|---|---|---|---|
Adding paths to arguments in popen | 40,128,783 | <p>I want to execute a Linux command through Python. This works in the terminal:</p>
<p><code>/usr/bin/myprogram --path "/home/myuser"</code></p>
<p>I've tried this:</p>
<pre><code>path = "/home/myuser"
args = ['/usr/bin/myprogram', '--path ' + path]
proc = subprocess.Popen(args)
</code></pre>
<p>And this:</p>
<pre><code>path = "/home/myuser"
args = ['/usr/bin/myprogram', '--path "' + path + '"']
proc = subprocess.Popen(args)
</code></pre>
<p>But <code>myprogram</code> does not accept the path formatting. I know that paths behave differently when not executing as shell but I can't get it working. I've also tried single quoting the path instead of double quoting it. Bonus points for a solution that also works on Windows (with a different program path, obviously).</p>
<p>EDIT: Sorry, was writing this out from memory and used backslashes instead of forward slashes. The actual code did use the (correct) forward slashes.</p>
| 0 | 2016-10-19T10:29:53Z | 40,128,949 | <p>Here's something to try:</p>
<pre><code>import subprocess
import shlex
p = subprocess.Popen(shlex.split("/usr/bin/myprogram --path /home/myuser"))
</code></pre>
<p>Mind the forward slashes ("/"). Backslashes ("\") in Python string literals are escape characters, so they have to be doubled or put in raw strings, even when running on Windows (I've never used it on Windows myself).</p>
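<p>For completeness, a minimal sketch of the equivalent list form (each argument is its own list element, so no shell quoting is involved; this is also the form that should port to Windows with an adjusted program path):</p>
<pre><code>import subprocess

path = "/home/myuser"
# no quotes around the path: Popen passes each list element as one argument
proc = subprocess.Popen(['/usr/bin/myprogram', '--path', path])
</code></pre>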
| 0 | 2016-10-19T10:35:41Z | [
"python",
"linux",
"python-2.7"
] |
Adding paths to arguments in popen | 40,128,783 | <p>I want to execute a Linux command through Python. This works in the terminal:</p>
<p><code>/usr/bin/myprogram --path "/home/myuser"</code></p>
<p>I've tried this:</p>
<pre><code>path = "/home/myuser"
args = ['/usr/bin/myprogram', '--path ' + path]
proc = subprocess.Popen(args)
</code></pre>
<p>And this:</p>
<pre><code>path = "/home/myuser"
args = ['/usr/bin/myprogram', '--path "' + path + '"']
proc = subprocess.Popen(args)
</code></pre>
<p>But <code>myprogram</code> does not accept the path formatting. I know that paths behave differently when not executing as shell but I can't get it working. I've also tried single quoting the path instead of double quoting it. Bonus points for a solution that also works on Windows (with a different program path, obviously).</p>
<p>EDIT: Sorry, was writing this out from memory and used backslashes instead of forward slashes. The actual code did use the (correct) forward slashes.</p>
| 0 | 2016-10-19T10:29:53Z | 40,129,140 | <p>The problem comes from your string literal, <code>'\usr\bin\myprogram'</code>. According to <a href="https://docs.python.org/2.0/ref/strings.html" rel="nofollow">escaping rules</a>, <code>\b</code> is replaced by <code>\x08</code>, so your executable is not found.</p>
<p>Put an <code>r</code> in front of your string literals (i.e. <code>r'\usr\bin\myprogram'</code>), or use <code>\\</code> to represent a backslash (i.e. <code>'\\usr\\bin\\myprogram'</code>).</p>
| -1 | 2016-10-19T10:43:30Z | [
"python",
"linux",
"python-2.7"
] |
Cannot anchor to an item that isn't a parent or sibling QML QtQuick | 40,128,852 | <p>I'm working on a python desktop app.</p>
<p>And currently have this in my QML file.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code> SplitView {
anchors.fill: parent
orientation: Qt.Horizontal
Rectangle {
color: "#272822"
id: cameraRectangle
width: window.width / 2
Item {
//more stuff
}
Item {
Rectangle {
anchors.top: cameraRectangle.bottom
}
}
}
Rectangle {
//Rectangle info.
}
}</code></pre>
</div>
</div>
</p>
<p>I get the error that "QML Rectangle: Cannot anchor to an item that isn't a parent or sibling." On the line where I am doing anchors.top: cameraRectangle.bottom. I would have assumed that the outer rectangle IS a parent of the inner one?</p>
<p>I have searched online like here: <a href="http://doc.qt.io/qt-5/qtquick-visualcanvas-visualparent.html" rel="nofollow">http://doc.qt.io/qt-5/qtquick-visualcanvas-visualparent.html</a> and they don't seem to be doing anything differently?</p>
<p>Could it be the version of QtQuick I am using?</p>
<p>The imports are as follows:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>import QtQuick 2.6
import QtQuick.Controls 2.0
import QtQuick.Controls 1.4
import QtQuick.Controls.Material 2.0
import QtQuick.Window 2.0</code></pre>
</div>
</div>
</p>
<p>I appreciate your help.</p>
| 1 | 2016-10-19T10:32:18Z | 40,129,883 | <pre><code>SplitView {
anchors.fill: parent
orientation: Qt.Horizontal
Rectangle {
color: "#272822"
id: cameraRectangle
width: window.width / 2
Item {
//more stuff
}
Item {
// The parent of this Item is 'cameraRectangle'
// This Item will be the parent of the Rectangle
// therefore the Rectangle can't anchor to the 'cameraRectangle'
// anymore. As you are not doing anything with this Item
// (so far?) anway, you can just delete it, and everything
// will be fine.
Rectangle {
// The parent of this Rectangle is the Item that wraps it
// and not the 'cameraRectangle'.
anchors.top: cameraRectangle.bottom
}
}
}
Rectangle {
//Rectangle info.
}
}
</code></pre>
<p>As the error message stated: you can't anchor to 'ancestors' other than your parent. You can also anchor to siblings. But neither to their children, nor to yours, and not to any of your 'grand-parents', uncles or aunts ;-)</p>
| 1 | 2016-10-19T11:14:19Z | [
"python",
"qt",
"pyqt",
"qml",
"qtquick2"
] |
Where to find the source code for pandas DataFrame __add__ | 40,128,884 | <p>I am trying to understand what (how) happens when two <code>pandas.DataFrame</code>s are added/subtracted.</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame([[1,2], [3,4]])
df2 = pd.DataFrame([[11,12], [13,14]])
df1 + df2 # Which function is called?
</code></pre>
<p>My understanding is <code>__add__</code> function should be implemented in a class to overload <code>+</code> operator, but in the <a href="https://github.com/pandas-dev/pandas/blob/master/pandas/core/frame.py#L204" rel="nofollow">source code</a> for <code>pandas.core.frame.DataFrame</code> and all its parent classes no such function is found. </p>
<p>Where should I look for the function which is doing this job?</p>
 | 0 | 2016-10-19T10:33:39Z | 40,129,565 | <p>I think you need to check <a href="https://github.com/pandas-dev/pandas/blob/master/pandas/core/ops.py#L166" rel="nofollow">this</a>:</p>
<pre><code>def add_special_arithmetic_methods(cls, arith_method=None,
comp_method=None, bool_method=None,
use_numexpr=True, force=False, select=None,
exclude=None, have_divmod=False):
...
...
</code></pre>
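<p>(If I read the source right, these methods are attached to the class at import time; <code>frame.py</code> calls this function on <code>DataFrame</code> near the bottom of the module, which is why no plain <code>def __add__</code> appears in the class body.)</p>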
| 0 | 2016-10-19T11:01:15Z | [
"python",
"pandas"
] |
Numpy conversion of column values in to row values | 40,128,895 | <p>I take 3 values from a column (the third) and put these values into a row across 3 new columns, then merge the new and old columns into a new matrix A.</p>
<p>Input: a time series in column nr 3, values in columns nr 1 and 2</p>
<pre><code>[x x 1]
[x x 2]
[x x 3]
</code></pre>
<p>output : matrix A</p>
<pre><code>[x x 1 0 0 0]
[x x 2 0 0 0]
[x x 3 1 2 3]
[x x 4 2 3 4]
</code></pre>
<p>So, for brevity: first the code generates a matrix of 6 rows / 3 columns. I want to use the last column to fill 3 extra columns and merge them into a new matrix A. This matrix A was prefilled with 2 rows to offset the starting position.</p>
<p>I have implemented this idea in the code below, but it takes a really long time to process large data sets.
How can I improve the speed of this conversion?</p>
<pre><code>import numpy as np
matrix = np.arange(18).reshape((6, 3))
nr=3
A = np.zeros((nr-1,nr))
for x in range( matrix.shape[0]-nr+1):
newrow = (np.transpose( matrix[x:x+nr,2:3] ))
A = np.vstack([A , newrow])
total= np.column_stack((matrix,A))
print (total)
</code></pre>
| 3 | 2016-10-19T10:34:04Z | 40,129,109 | <p>Here's an approach using <a href="https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>broadcasting</code></a> to get those sliding windowed elements and then just some stacking to get <code>A</code> -</p>
<pre><code>col2 = matrix[:,2]
nrows = col2.size-nr+1
out = np.zeros((nr-1+nrows,nr))
col2_2D = np.take(col2,np.arange(nrows)[:,None] + np.arange(nr))
out[nr-1:] = col2_2D
</code></pre>
<p>Here's an efficient alternative using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.strides.html" rel="nofollow"><code>NumPy strides</code></a> to get <code>col2_2D</code> -</p>
<pre><code>n = col2.strides[0]
col2_2D = np.lib.stride_tricks.as_strided(col2, shape=(nrows,nr), strides=(n,n))
</code></pre>
<p>It would be even better to initialize an output array of zeros of the same size as <code>total</code> and then assign values into it with <code>col2_2D</code> and finally with the input array <code>matrix</code>. </p>
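<p>A minimal sketch of that idea, reusing the <code>matrix</code>, <code>nr</code>, <code>nrows</code> and <code>col2_2D</code> from above:</p>
<pre><code>total = np.zeros((matrix.shape[0], matrix.shape[1] + nr))
total[:, :matrix.shape[1]] = matrix        # copy the input columns
total[nr-1:, matrix.shape[1]:] = col2_2D   # windowed column-2 values, offset by nr-1 rows
</code></pre>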
<p><strong>Runtime test</strong></p>
<p>Approaches as functions -</p>
<pre><code>def org_app1(matrix,nr):
A = np.zeros((nr-1,nr))
for x in range( matrix.shape[0]-nr+1):
newrow = (np.transpose( matrix[x:x+nr,2:3] ))
A = np.vstack([A , newrow])
return A
def vect_app1(matrix,nr):
col2 = matrix[:,2]
nrows = col2.size-nr+1
out = np.zeros((nr-1+nrows,nr))
col2_2D = np.take(col2,np.arange(nrows)[:,None] + np.arange(nr))
out[nr-1:] = col2_2D
return out
def vect_app2(matrix,nr):
col2 = matrix[:,2]
nrows = col2.size-nr+1
out = np.zeros((nr-1+nrows,nr))
n = col2.strides[0]
col2_2D = np.lib.stride_tricks.as_strided(col2, \
shape=(nrows,nr), strides=(n,n))
out[nr-1:] = col2_2D
return out
</code></pre>
<p>Timings and verification -</p>
<pre><code>In [18]: # Setup input array and params
...: matrix = np.arange(1800).reshape((60, 30))
...: nr=3
...:
In [19]: np.allclose(org_app1(matrix,nr),vect_app1(matrix,nr))
Out[19]: True
In [20]: np.allclose(org_app1(matrix,nr),vect_app2(matrix,nr))
Out[20]: True
In [21]: %timeit org_app1(matrix,nr)
1000 loops, best of 3: 646 µs per loop
In [22]: %timeit vect_app1(matrix,nr)
10000 loops, best of 3: 20.6 µs per loop
In [23]: %timeit vect_app2(matrix,nr)
10000 loops, best of 3: 21.5 µs per loop
In [28]: # Setup input array and params
...: matrix = np.arange(7200).reshape((120, 60))
...: nr=30
...:
In [29]: %timeit org_app1(matrix,nr)
1000 loops, best of 3: 1.19 ms per loop
In [30]: %timeit vect_app1(matrix,nr)
10000 loops, best of 3: 45 µs per loop
In [31]: %timeit vect_app2(matrix,nr)
10000 loops, best of 3: 27.2 µs per loop
</code></pre>
| 2 | 2016-10-19T10:42:16Z | [
"python",
"performance",
"numpy",
"matrix"
] |
Finding holes in a binary image | 40,128,985 | <p>Assume we have the following binary image</p>
<pre><code>0010
0101
0101
0010
0100
1010
0100
0000
</code></pre>
<p>0 represents background pixels and 1 represents image pixels. As you can see, there are two holes in this image. Is there a way to obtain the number of holes in this image using algorithms? (Java or Python, but not Matlab)</p>
 | -3 | 2016-10-19T10:37:14Z | 40,130,348 | <p>Here is an idea presented as code (it might not be what you need).</p>
<p>The problem is that I don't understand your example. Depending on the neighborhood definition, different results are possible.</p>
<ul>
<li>If you have an 8-neighborhood, all zeros are connected somehow (what does that mean about the surrounding 1's?)</li>
<li>If you have a 4-neighborhood, each one surrounded by 4 1's represents a new hole
<ul>
<li>Of course you could postprocess this but the question is still unclear</li>
</ul></li>
</ul>
<h3>Code</h3>
<pre><code>import numpy as np
from skimage.measure import label
img = np.array([[0,0,1,0],
[0,1,0,1],
[0,1,0,1],
[0,0,1,0],
[0,1,0,0],
[1,0,1,0],
[0,1,0,0],
[0,0,0,0]])
labels = label(img, connectivity=1, background=-1) # conn=1 -> 4 neighbors
label_vals = np.unique(labels) # conn=2 -> 8 neighbors
counter = 0
for i in label_vals:
indices = np.where(labels == i)
if indices:
if img[indices][0] == 0:
print('hole: ', indices)
counter += 1
print(img)
print(labels)
print(counter)
</code></pre>
<h3>Output</h3>
<pre><code>('hole: ', (array([0, 0, 1, 2, 3, 3, 4]), array([0, 1, 0, 0, 0, 1, 0])))
('hole: ', (array([0]), array([3])))
('hole: ', (array([1, 2]), array([2, 2])))
('hole: ', (array([3, 4, 4, 5, 6, 6, 6, 7, 7, 7, 7]), array([3, 2, 3, 3, 0, 2, 3, 0, 1, 2, 3])))
('hole: ', (array([5]), array([1])))
[[0 0 1 0]
[0 1 0 1]
[0 1 0 1]
[0 0 1 0]
[0 1 0 0]
[1 0 1 0]
[0 1 0 0]
[0 0 0 0]]
[[ 1 1 2 3]
[ 1 4 5 6]
[ 1 4 5 6]
[ 1 1 7 8]
[ 1 9 8 8]
[10 11 12 8]
[ 8 13 8 8]
[ 8 8 8 8]]
5
</code></pre>
| 2 | 2016-10-19T11:36:36Z | [
"java",
"python",
"algorithm",
"binary-image"
] |
argparse: Emulating GCC's "-fno-<option>" semantics | 40,129,203 | <p>One nice feature of GCC's command line parsing is that most flags of the form "-fmy-option" have a negative version called "-fno-my-option". The rightmost occurrence takes precedence, so you can just append "-fno-my-option" to your CFLAGS or similar in a Makefile to disable an option without clobbering the other flags.</p>
<p>I'd like to support something similar in a tool whose wrapper script is Python and uses argparse. The obvious hack of just defining both versions of the argument with an action of <code>store_true</code> doesn't work, because that won't let me ask for the rightmost occurrence.</p>
<p>Obviously, it's easy to support a syntax like <code>--my-option=yes</code> / <code>--my-option=no</code>, but it would be nice for the user not to have to pass the parameter.</p>
<p>Is there a way to get argparse to have an on/off switch for a boolean flag like this?</p>
 | 0 | 2016-10-19T10:46:27Z | 40,136,954 | <p>Without any fancy footwork I can set up a pair of arguments that write to the same <code>dest</code>, and take advantage of the fact that the last write is the one that sticks:</p>
<pre><code>In [765]: parser=argparse.ArgumentParser()
In [766]: a1=parser.add_argument('-y',action='store_true')
In [767]: a2=parser.add_argument('-n',action='store_false')
</code></pre>
<p>Without a <code>dest</code> parameter these use a name derived from the option strings. But I can give a <code>dest</code>, or change that value after creation:</p>
<pre><code>In [768]: a1.dest
Out[768]: 'y'
In [769]: a2.dest
Out[769]: 'n'
In [770]: a1.dest='switch'
In [771]: a2.dest='switch'
</code></pre>
<p>Now use of either will set the <code>switch</code> attribute.</p>
<pre><code>In [772]: parser.parse_args([])
Out[772]: Namespace(switch=False)
</code></pre>
<p>The default comes from the first defined argument. That's a function of how defaults are set at the start of parsing. For all other inputs, it's the last argument that sets the value</p>
<pre><code>In [773]: parser.parse_args(['-y'])
Out[773]: Namespace(switch=True)
In [774]: parser.parse_args(['-n'])
Out[774]: Namespace(switch=False)
In [775]: parser.parse_args(['-n','-y','-n','-y'])
Out[775]: Namespace(switch=True)
In [776]: parser.parse_args(['-n','-y','-n'])
Out[776]: Namespace(switch=False)
</code></pre>
<p>The default could also be set with a separate command:</p>
<pre><code>parser.set_defaults(switch='foo')
</code></pre>
<p>If you wanted to use this sort of feature a lot you could write a little utility function that creates the pair of arguments with any flags and dest you want. There's even a bug/issue request for such an enhancement, but I doubt it will be implemented.</p>
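<p>For instance, a sketch of such a helper (the name and signature are illustrative, not part of <code>argparse</code>):</p>
<pre><code>def add_bool_pair(parser, name, default=False):
    # adds --<name> and --no-<name>, both writing to the same dest;
    # the rightmost occurrence on the command line wins
    dest = name.replace('-', '_')
    parser.add_argument('--' + name, dest=dest, action='store_true')
    parser.add_argument('--no-' + name, dest=dest, action='store_false')
    parser.set_defaults(**{dest: default})
</code></pre>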
| 2 | 2016-10-19T16:30:15Z | [
"python",
"command-line-arguments",
"argparse"
] |
Getting Data in Wrong Sequence Order from DynamoDB | 40,129,365 | <p>I am facing a problem when downloading data from DynamoDB. I tried the Python SDK as well as the AWS CLI (aws dynamodb scan --table-name Alarms), but I get the same problem every time. Does anyone have any idea what the cause is?</p>
<p>Output Get</p>
<pre><code> {
"FRE": {
"S": "1"
},
"MB": {
"S": "0"
},
"TW": {
"S": "1"
},
"FNB": {
"S": "0"
},
"Date": {
"S": "2016-10-19 09:04:47.083456"
},
"TD2": {
"S": "1"
},
"TD1": {
"S": "1"
},
"TB": {
"S": "1"
}
}
</code></pre>
<p>Output Required</p>
<pre><code> {
"Date": {
"S": "2016-10-19 09:04:47.083456"
},
"FRE": {
"S": "1"
},
"MB": {
"S": "0"
},
"TW": {
"S": "1"
},
"FNB": {
"S": "0"
},
"TD2": {
"S": "1"
},
"TD1": {
"S": "1"
},
"TB": {
"S": "1"
}
}
</code></pre>
<p>Thanks
Waqas Ali Khan</p>
 | 0 | 2016-10-19T10:53:42Z | 40,129,987 | <p>When you create a plain dictionary object, it does not maintain order: it will not iterate in the order in which elements were added to it.</p>
<pre><code># So we have to create an ordered dict; use the collections package as follows:
from collections import OrderedDict
data_dict = OrderedDict()
</code></pre>
<p>Now your dictionary will maintain the order in which data was added to it, and you can iterate over it in that order as well.</p>
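<p>A small sketch of how that looks with your data (key names taken from your item):</p>
<pre><code>from collections import OrderedDict

item = OrderedDict()
item["Date"] = {"S": "2016-10-19 09:04:47.083456"}
item["FRE"] = {"S": "1"}
item["MB"] = {"S": "0"}

for key, value in item.items():
    print(key, value)  # keys come back in insertion order: Date, FRE, MB
</code></pre>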
| 0 | 2016-10-19T11:19:11Z | [
"python",
"amazon-web-services",
"amazon-dynamodb"
] |
Pandas Create Column with Groupby and Sum with additional condition | 40,129,410 | <p>I'm trying to add a new column in pandas DataFrame after grouping and with additional conditions </p>
<pre><code>df = pd.DataFrame({
'A' :[4,5,7,8,2,3,5,2,1,1,4,4,2,4,5,1,3,9,7,9],
'B' :[9,5,7,8,3,3,5,2,1,1,4,4,2,4,5,1,3,5,7,9],
'C' :[9,5,7,8,3,3,5,2,1,1,4,4,2,4,5,1,3,5,7,9],
'D' :[1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,1,0,0,1,0]
})
df1 = df.groupby(['A', 'B'], as_index=False).transform('sum')
df1 = df.join(df.groupby(['A'])['C'].sum(), on='A', rsuffix='_inward')
df1
</code></pre>
<p>The above code is able to sum and give output, but how do I add the condition <code>df['D'] == 1</code>?</p>
<p>Expected output </p>
<pre><code> A B C D C_inward
0 4 9 9 1 13
2 7 7 7 1 14
4 2 3 3 1 3
5 3 3 3 1 3
8 1 1 1 1 3
9 1 1 1 1 3
13 4 4 4 1 13
14 5 5 5 1 5
15 1 1 1 1 3
18 7 7 7 1 14
</code></pre>
| 1 | 2016-10-19T10:55:20Z | 40,129,503 | <p>You can add <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a>:</p>
<pre><code>mask = df['D'] == 1
df1 = df[mask].join(df[mask].groupby(['A'])['C'].sum(), on='A', rsuffix='_inward')
print (df1)
A B C D C_inward
0 4 9 9 1 13
2 7 7 7 1 14
4 2 3 3 1 3
5 3 3 3 1 3
8 1 1 1 1 3
9 1 1 1 1 3
13 4 4 4 1 13
14 5 5 5 1 5
15 1 1 1 1 3
18 7 7 7 1 14
</code></pre>
| 0 | 2016-10-19T10:58:45Z | [
"python",
"pandas",
"dataframe"
] |
How to check the percentile of each row based on a column in pandas? | 40,129,428 | <p>I have a dataset with an <code>id</code> column for each event and a <code>value</code> column (among other columns) in a dataframe. What I want to do is categorize each <code>id</code> based on whether it is on the 90th percentile, 50th percentile, 25th percentile etc. of the frequency distribution of the value column.</p>
<p>Example,</p>
<pre><code>id value
1 12.5
2 4.6
....
</code></pre>
<p>So, I'd add another column <code>category</code> to it, depending on what percentile of the value column it falls in. How do I do that?</p>
| 0 | 2016-10-19T10:56:18Z | 40,130,836 | <p>You're looking for the <code>quantile</code> method. For instance, assigning to <code>0.0, 0.25, 0.5, 0.75</code> quantiles could be done this way:</p>
<pre><code>df['quantile'] = 0.0
for q in [0.25, 0.5, 0.75]:
df.loc[df['value'] >= df['value'].quantile(q), 'quantile'] = q
</code></pre>
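<p>As an alternative sketch, <code>pd.qcut</code> can assign the bucket in one call (assuming quartile labels are what you want; the label values here are illustrative):</p>
<pre><code># label each row with the lower bound of its quartile bucket
df['quantile'] = pd.qcut(df['value'], q=[0, 0.25, 0.5, 0.75, 1.0],
                         labels=[0.0, 0.25, 0.5, 0.75])
</code></pre>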
| 0 | 2016-10-19T11:59:26Z | [
"python",
"pandas",
"statistics"
] |
Using alphabet as counter in a loop | 40,129,447 | <p>I am looking for the most efficient way to count the number of letters in a list. I need something like</p>
<pre><code>word = ['h', 'e', 'l', 'l', 'o']
for i in alphabet:
    for j in word:
        if j == i:
            ## do something
</code></pre>
<p>Where <em>alphabet</em> should be the <strong>Spanish</strong> alphabet, that is, the English alphabet including the special character <em>'ñ'</em>.</p>
<p>I have thought about creating a list of pairs in the form of [[a, 0], [b,1], ...] but I suppose there is a more efficient/clean way.</p>
| 1 | 2016-10-19T10:56:52Z | 40,129,596 | <p>This is pretty easy:</p>
<pre><code>import collections
print collections.Counter("señor")
</code></pre>
<p>This prints:</p>
<pre><code>Counter({'s': 1, 'r': 1, 'e': 1, '\xa4': 1, 'o': 1})
</code></pre>
| 0 | 2016-10-19T11:02:30Z | [
"python",
"list",
"character"
] |
Using alphabet as counter in a loop | 40,129,447 | <p>I am looking for the most efficient way to count the number of letters in a list. I need something like</p>
<pre><code>word = ['h', 'e', 'l', 'l', 'o']
for i in alphabet:
    for j in word:
        if j == i:
            ## do something
</code></pre>
<p>Where <em>alphabet</em> should be the <strong>Spanish</strong> alphabet, that is, the English alphabet including the special character <em>'ñ'</em>.</p>
<p>I have thought about creating a list of pairs in the form of [[a, 0], [b,1], ...] but I suppose there is a more efficient/clean way.</p>
| 1 | 2016-10-19T10:56:52Z | 40,129,771 | <p>It is not actually a dupe as you want to filter to only count characters from a certain set, you can use a <a href="https://docs.python.org/3/library/collections.html#collections.Counter" rel="nofollow">Counter</a> dict to do the counting and a set of allowed characters to filter by:</p>
<pre><code>word = ["h", "e", "l", "l", "o"]
from collections import Counter
from string import ascii_lowercase
# create a set of the characters you want to count.
allowed = set(ascii_lowercase + 'ñ')
# use a Counter dict to get the counts, only counting chars that are in the allowed set.
counts = Counter(s for s in word if s in allowed)
</code></pre>
<p>If you actually just want the total sum:</p>
<pre><code>total = sum(s in allowed for s in word)
</code></pre>
<p>Or using a functional approach:</p>
<pre><code>total = sum(1 for _ in filter(allowed.__contains__, word))
</code></pre>
<p>Using <em>filter</em> is going to be a bit faster for any approach:</p>
<pre><code>In [31]: from collections import Counter
...: from string import ascii_lowercase, digits
...: from random import choice
...:
In [32]: chars = [choice(digits+ascii_lowercase+'ñ') for _ in range(100000)]
In [33]: timeit Counter(s for s in chars if s in allowed)
100 loops, best of 3: 36.8 ms per loop
In [34]: timeit Counter(filter(allowed.__contains__, chars))
10 loops, best of 3: 31.7 ms per loop
In [35]: timeit sum(s in allowed for s in chars)
10 loops, best of 3: 35.4 ms per loop
In [36]: timeit sum(1 for _ in filter(allowed.__contains__, chars))
100 loops, best of 3: 32 ms per loop
</code></pre>
<p>If you want a case insensitive match, use <em>ascii_letters</em> and add <code>'ñÑ'</code>:</p>
<pre><code>from string import ascii_letters
allowed = set(ascii_letters + 'ñÑ')
</code></pre>
| 2 | 2016-10-19T11:09:48Z | [
"python",
"list",
"character"
] |
How to create HDF5 file (multi label classification) out of txt file to use in Caffe | 40,129,454 | <p>I have the following structure in a .txt file:</p>
<blockquote>
<pre><code>/path/to/image x y
/path/to/image x y
</code></pre>
</blockquote>
<p>where x and y are integers.</p>
<p>What I want to do now is: create an HDF5 file to use in Caffe (<code>'train.prototxt'</code>)</p>
<p>My Python code looks like this:</p>
<pre><code>import h5py
import numpy as np
import os
text = 'train'
text_dir = text + '.txt'
data = np.genfromtxt(text_dir, delimiter=" ", dtype=None)
h = h5py.File(text + '.hdf5', 'w')
h.create_dataset('data', data=data[:1])
h.create_dataset('label', data=data[1:])
with open(text + "_hdf5.txt", "w") as textfile:
textfile.write(os.getcwd() + '/' +text + '.hdf5')
</code></pre>
<p>But this does not work! Any ideas what could be wrong?</p>
| 0 | 2016-10-19T10:57:01Z | 40,132,734 | <p>It does not work because your <code>'data'</code> is <code>/path/to/image</code> instead of the image itself.</p>
<p>See <a href="http://stackoverflow.com/a/31808324/1714410">this answer</a>, and <a class='doc-link' href="http://stackoverflow.com/documentation/caffe/5344/prepare-data-for-training/19117/prepare-arbitrary-data-in-hdf5-format#t=201608100602208392787">this documentation section</a> for more information.</p>
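<p>A rough sketch of that approach (it assumes regression-style float labels, that all images share the same dimensions, and that <code>caffe.io.load_image</code> is available):</p>
<pre><code>import h5py
import numpy as np
import caffe

paths, labels = [], []
with open('train.txt') as f:
    for line in f:
        p, x, y = line.split()
        paths.append(p)
        labels.append((float(x), float(y)))

# N x C x H x W float32 array, the layout Caffe expects
images = np.array([caffe.io.load_image(p).transpose(2, 0, 1) for p in paths],
                  dtype=np.float32)
with h5py.File('train.hdf5', 'w') as h:
    h.create_dataset('data', data=images)
    h.create_dataset('label', data=np.array(labels, dtype=np.float32))
</code></pre>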
| 1 | 2016-10-19T13:25:19Z | [
"python",
"neural-network",
"deep-learning",
"caffe",
"multilabel-classification"
] |
AttributeError: 'module' object has no attribute 'io' in caffe | 40,129,633 | <p>I am trying to do a gender recognition program; below is the code.</p>
<pre><code>import caffe
import os
import numpy as np
import sys
import cv2
import time
#Models root folder
models_path = "./models"
#Loading the mean image
mean_filename=os.path.join(models_path,'./mean.binaryproto')
proto_data = open(mean_filename, "rb").read()
a = caffe.io.caffe_pb2.BlobProto.FromString(proto_data)
mean_image = caffe.io.blobproto_to_array(a)[0]
#Loading the gender network
gender_net_pretrained=os.path.join(models_path,
'./gender_net.caffemodel')
gender_net_model_file=os.path.join(models_path,
'./deploy_gender.prototxt')
gender_net = caffe.Classifier(gender_net_model_file, gender_net_pretrained)
#Reshaping mean input image
mean_image = np.transpose(mean_image,(2,1,0))
#Gender labels
gender_list=['Male','Female']
#cv2 Haar Face detector
face_cascade=cv2.CascadeClassifier(os.path.join
(models_path,'haarcascade_frontalface_default.xml'))
#Getting prediction from live camera
cap = cv2.VideoCapture(0)
while True:
ret,frame = cap.read()
if ret is True:
start_time = time.time()
frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
rects = face_cascade.detectMultiScale(frame_gray, 1.3, 5)
#Finding the largest face
if len(rects) >= 1:
rect_area = [rects[i][2]*rects[i][3] for i in xrange(len(rects))]
rect = rects[np.argmax(rect_area)]
x,y,w,h = rect
cv2.rectangle(frame,(x,y),(x+w,y+h),(255,0,0),2)
roi_color = frame[y:y+h, x:x+w]
#Resizing the face image
crop = cv2.resize(roi_color, (256,256))
#Subtraction from mean file
#input_image = crop -mean_image
input_image = rect
#Getting the prediction
start_prediction = time.time()
prediction = gender_net.predict([input_image])
gender = gender_list[prediction[0].argmax()]
print("Time taken by DeepNet model: {}").format(time.time()-start_prediction)
print prediction,gender
cv2.putText(frame,gender,(x,y), cv2.FONT_HERSHEY_SIMPLEX, 1,(0,255,0),2)
print("Total Time taken to process: {}").format(time.time()-start_time)
#Showing output
cv2.imshow("Gender Detection",frame)
cv2.waitKey(1)
#Delete objects
cap.release()
cv2.destroyAllWindows()
</code></pre>
<p>When I am running the I am getting an error: </p>
<pre><code>a = caffe.io.caffe_pb2.BlobProto.FromString(proto_data)
AttributeError: 'module' object has no attribute 'io'
</code></pre>
<p>How can I solve it? I am using the cnn_gender_age_prediction model. I want to make a real-time gender recognition program using Python and the cnn_gender_age model.</p>
 | 1 | 2016-10-19T11:03:54Z | 40,139,374 | <p><code>io</code> is a module in the <code>caffe</code> package. When you type <code>import caffe</code>, it does not automatically import all modules in the <code>caffe</code> package, including <code>io</code>. There are two solutions.</p>
<p>First one: import caffe.io manually</p>
<pre><code>import caffe
import caffe.io
</code></pre>
<p>Second one: update to the latest caffe version, in which you should find a line in <code>__init__.py</code> under <code>python/caffe</code> directory:</p>
<pre><code>from . import io
</code></pre>
| 0 | 2016-10-19T18:52:30Z | [
"python",
"machine-learning",
"computer-vision",
"caffe"
] |
Initializing Class instance within a class | 40,129,792 | <p>The entire list of methods inside the counter class does not work. I want setcap to set the value of cap, and checkcap to check whether each counter has reached its limit. Since hr, min and sec are what a clock should know, I would like to initialize them inside the clock.</p>
<pre><code>import time
class counter():
count = 0
cap = 0
def _init_(self):pass
def reset(self):
self.count = 0
def increment(self):
self.count += 1
def setcap(self,x):
print x
self.cap = x
def checkcap(self):
if self.cap > self.count:
return False
else:
return True
class clock():
_hr = counter()
_min = counter()
_sec = counter()
def _init_(self):
self._hr.setcap(23)
self._min.setcap(59)
self._sec.setcap(59)
def manualreset(self):
self._hr.reset()
self._min.reset()
        self._sec.reset()
def tick(self):
if self._sec.checkcap():
self._sec.reset()
self._min.increment()
if self._min.checkcap():
self._min.reset()
self._hr.increment()
if self._hr.checkcap():
self._hr.reset()
else:
self._sec.increment()
newClock = clock()
raw_input("Press enter to start clock")
while newClock._hr != 24:
newClock.tick()
print str(newClock._hr.count).zfill(2) + str(newClock._min.count).zfill(2) + str(newClock._sec.count).zfill(2)
</code></pre>
| -1 | 2016-10-19T11:10:48Z | 40,130,195 | <p>One of the problems in your code is that your init functions are <em>init</em>.
Try using</p>
<pre><code>def __init__(self):
pass
</code></pre>
<p>This should solve one of your problems</p>
| 0 | 2016-10-19T11:29:12Z | [
"python",
"clock"
] |
Text to Zip to base64 and vice versa, in Python | 40,129,895 | <p>I have a 'text' that is converted to zip, then converted to base64. How do I convert it back to plain text in Python if I have that base64 value?</p>
| 0 | 2016-10-19T11:14:56Z | 40,129,912 | <p>You convert the base 64 back, using <a href="https://docs.python.org/3.5/library/base64.html" rel="nofollow">base64 module</a>, and then the zip, using the <a href="https://docs.python.org/3.5/library/zipfile.html" rel="nofollow">zipfile module</a>.</p>
<p>Assuming <code>file.txt</code> was zipped into <code>file.zip</code>, and then the archive was converted to base 64 as <code>encoded.txt</code>:</p>
<pre><code>import zipfile
import base64
base64.decode(open('encoded.txt'), open('file.zip', 'wb'))  # write the zip in binary mode
zipfile.ZipFile('file.zip').extractall()
plaintext = open('file.txt').read()
</code></pre>
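<p>If the compressed payload is raw zlib/deflate data rather than a .zip archive (your question is also tagged zlib), a shorter sketch:</p>
<pre><code>import base64
import zlib

# encoded_string holds the base64 text
plaintext = zlib.decompress(base64.b64decode(encoded_string))
</code></pre>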
| 0 | 2016-10-19T11:15:49Z | [
"python",
"zip",
"base64",
"zlib"
] |
Dynamic class creation - Python | 40,129,989 | <p>Here is the scenario:</p>
<p>I have two classes:</p>
<pre><code>class A:
    pass
class B:
pass
</code></pre>
<p>Now I want to create a client, in that I need to have a small utility method, which should return my class template/object e.g: class A, class B, as I pass on the class name to that utility e.g <code>get_obj(classA)</code>.</p>
<p>Now, is this possible? If so, please suggest an approach, as I haven't found a correct answer on the web so far.</p>
<p>Hope I am making sense.</p>
| 0 | 2016-10-19T11:19:13Z | 40,130,129 | <p>Standard library function <a href="https://docs.python.org/3/library/collections.html#collections.namedtuple" rel="nofollow"><code>namedtuple</code></a> creates and returns a class. Internally it uses <code>exec</code>. It may be an inspiration for what you need.</p>
<p>Source code: <a href="https://github.com/python/cpython/blob/master/Lib/collections/__init__.py#L356" rel="nofollow">https://github.com/python/cpython/blob/master/Lib/collections/<strong>init</strong>.py#L356</a></p>
| 0 | 2016-10-19T11:26:58Z | [
"python",
"oop",
"inheritance"
] |
Dynamic class creation - Python | 40,129,989 | <p>Here is the scenario:</p>
<p>I have two classes:</p>
<pre><code>class A:
    pass
class B:
pass
</code></pre>
<p>Now I want to create a client, in that I need to have a small utility method, which should return my class template/object e.g: class A, class B, as I pass on the class name to that utility e.g <code>get_obj(classA)</code>.</p>
<p>Now, is this possible? If so, please suggest an approach, as I haven't found a correct answer on the web so far.</p>
<p>Hope I am making sense.</p>
| 0 | 2016-10-19T11:19:13Z | 40,130,275 | <p>Here is a possible implementation. All the code is contained in a single '.py' file</p>
<pre><code>class A:
pass
class B:
pass
# map class name to class
_classes = {
A.__name__: A,
B.__name__: B,
}
def get_obj(cname):
return _classes[cname]()
# test the function
if __name__ == '__main__':
print get_obj('A')
</code></pre>
<p>It will produce the following output</p>
<pre><code><__main__.A instance at 0x1026ea950>
</code></pre>
| 2 | 2016-10-19T11:33:06Z | [
"python",
"oop",
"inheritance"
] |
Dynamic class creation - Python | 40,129,989 | <p>Here is the scenario:</p>
<p>I have two classes:</p>
<pre><code>class A:
    pass
class B:
pass
</code></pre>
<p>Now I want to create a client, in that I need to have a small utility method, which should return my class template/object e.g: class A, class B, as I pass on the class name to that utility e.g <code>get_obj(classA)</code>.</p>
<p>Now, is this possible? If so, please suggest an approach, as I haven't found a correct answer on the web so far.</p>
<p>Hope I am making sense.</p>
| 0 | 2016-10-19T11:19:13Z | 40,131,753 | <p><code>globals()</code> returns a dictionary containing all symbols defined in the global scope of the module (including classes <code>A</code> and <code>B</code>):</p>
<p><em>a_and_b_module.py</em></p>
<pre class="lang-py prettyprint-override"><code>class A: pass
class B: pass
def get_cls(cls_name):
return globals()[cls_name]
</code></pre>
<p><strong>If you are looking for simplicity</strong></p>
<p>If the code that will call this function is inside the module, then you can eliminate the function altogether and use <code>globals()[cls_name]</code> directly.</p>
<p>If the code that will call this function is outside the module, then you could use <code>getattr</code> function:</p>
<p><em>a_and_b_module.py</em></p>
<pre class="lang-py prettyprint-override"><code>class A: pass
class B: pass
</code></pre>
<p><em>another_file.py</em></p>
<pre class="lang-py prettyprint-override"><code>import a_and_b_module
cls_name = 'A'
chosen_cls = getattr(a_and_b_module, cls_name)
</code></pre>
<p><strong>If you are looking for complete control</strong></p>
<p>The problem with the approach above is that it could return anything defined in <em>a_and_b_module.py</em>, not restricting itself to <code>A</code> and <code>B</code>. If you want to make sure only A and B can be returned:</p>
<pre class="lang-py prettyprint-override"><code>class A: pass
class B: pass
allowed_classes = ('A', 'B')
def get_cls(cls_name):
assert cls_name in allowed_classes
return globals()[cls_name]
</code></pre>
<p>Note: you might also be interested in the concept of <a href="https://en.wikipedia.org/wiki/Factory_(object-oriented_programming)" rel="nofollow">factory</a>.</p>
| 0 | 2016-10-19T12:44:03Z | [
"python",
"oop",
"inheritance"
] |
python pandas dataframe merge or join dataframe | 40,130,122 | <p>I hope you can help me.</p>
<p>I have two pandas dataframes to merge.</p>
<p>first dataframe is</p>
<pre><code>D = { Year, Age, Location, column1, column2... }
2013, 20 , america, ..., ...
2013, 35, usa, ..., ...
2011, 32, asia, ..., ...
2008, 45, japan, ..., ...
</code></pre>
<p>Its shape is 38654 rows x 14 columns.</p>
<p>second dataframe is </p>
<pre><code>D = { Year, Location, column1, column2... }
2008, usa, ..., ...
2008, usa, ..., ...
2009, asia, ..., ...
2009, asia, ..., ...
2010, japna, ..., ...
</code></pre>
<p>Its shape is 96 rows x 7 columns.</p>
<p>I want to merge or join these two different dataframes.
How can I do it?</p>
<p>thanks</p>
 | 1 | 2016-10-19T11:26:42Z | 40,130,150 | <p>IIUC you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>merge</code></a> with parameter <code>how='left'</code> if you need a left join on the columns <code>Year</code> and <code>Location</code>:</p>
<pre><code>print (df1)
Year Age Location column1 column2
0 2013 20 america 7 5
1 2008 35 usa 8 1
2 2011 32 asia 9 3
3 2008 45 japan 7 1
print (df2)
Year Location column1 column2
0 2008 usa 8 9
1 2008 usa 7 2
2 2009 asia 8 2
3 2009 asia 0 1
4 2010 japna 9 3
df = pd.merge(df1,df2, on=['Year','Location'], how='left')
print (df)
Year Age Location column1_x column2_x column1_y column2_y
0 2013 20 america 7 5 NaN NaN
1 2008 35 usa 8 1 8.0 9.0
2 2008 35 usa 8 1 7.0 2.0
3 2011 32 asia 9 3 NaN NaN
4 2008 45 japan 7 1 NaN NaN
</code></pre>
<p>You can also check <a href="http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging" rel="nofollow">documentation</a>.</p>
| 1 | 2016-10-19T11:27:46Z | [
"python",
"pandas",
"join",
"dataframe",
"merge"
] |
Labels show up interactively on click in python matplotlib | 40,130,126 | <p>I am plotting the following numpy array (plotDataFirst), which has 40 x 160 dimensions (and contains double values).</p>
<p>I would like to be able to hover over a plot (one of the 40 that are drawn) and see the label of that particular plot. </p>
<p>I have an array (1x40) that contains all of the labels. Is there any way to do this? I am not sure how to add this type of interactive labels. </p>
<pre><code>plt.interactive(False)
plt.plot(plotDataFirst)
plt.show()
</code></pre>
| 0 | 2016-10-19T11:26:49Z | 40,134,122 | <p>I'm not sure exactly how you want to show the label (tooltip, legend, title, label, ...), but something like this might be a first step:</p>
<pre><code>import numpy as np
import matplotlib.pylab as pl
pl.close('all')
def line_hover(event):
ax = pl.gca()
for line in ax.get_lines():
if line.contains(event)[0]:
print(line.get_label())
labels = ['line 1','line 2','line 3']
fig = pl.figure()
for i in range(len(labels)):
pl.plot(np.arange(10), np.random.random(10), label=labels[i])
pl.legend(frameon=False)
fig.canvas.mpl_connect('motion_notify_event', line_hover)
pl.show()
</code></pre>
<p>So basically, for every mouse motion (<code>motion_notify_event</code>), check if the cursor is over one of the lines, and if so, (as a quick hack / solution for now), print the label of that line to the command line.</p>
<p>Using a tooltip might be a nicer approach, but that seems to require backend-specific solutions (see e.g. <a href="http://stackoverflow.com/a/4620352/3581217">http://stackoverflow.com/a/4620352/3581217</a>) </p>
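<p>A small variation on the same hack, showing the label in the axes title instead of printing it (a sketch, untested across backends):</p>
<pre><code>def line_hover(event):
    ax = pl.gca()
    for line in ax.get_lines():
        if line.contains(event)[0]:
            ax.set_title(line.get_label())
            event.canvas.draw_idle()  # request a redraw so the title updates
</code></pre>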
| 1 | 2016-10-19T14:21:55Z | [
"python",
"numpy",
"matplotlib"
] |
Python matplotlib.pyplot: How to make a histogram with bins counts including right bin edge? | 40,130,128 | <p>Could you please help me modify the code to get a histogram whose bin counts include the right bin edge, i.e. <code>bins[i-1] < x <= bins[i]</code> (and not the left edge, as is the default)?</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
data = [0,1,2,3,4]
binwidth = 1
plt.hist(data, bins=np.arange(min(data), max(data) + binwidth, binwidth))
plt.xlabel('Time')
plt.ylabel('Counts')
plt.show()
</code></pre>
<p>Thank you in advance.</p>
| 0 | 2016-10-19T11:26:56Z | 40,130,412 | <p>I do not think there is an option to do it explicitly in either matplotlib or numpy.</p>
<p>However, you may use <code>np.histogram()</code> with negative value of your <code>data</code> (and bins), then negate the output and plot it with <code>plt.bar()</code> function.</p>
<pre><code>bins = np.arange(min(data), max(data) + binwidth, binwidth)
hist, binsHist = np.histogram(-data, bins=sorted(-bins))
plt.plot(-binsHist[1:], -hist, np.diff(binHist))
</code></pre>
| 0 | 2016-10-19T11:39:43Z | [
"python",
"matplotlib",
"histogram"
] |
Python Scrapy: Skip Xpath if it's not there | 40,130,194 | <p>I have this code which scrapes a few hundred pages for me. But sometimes the xpath for <code>a</code> doesn't exist at all, how can I edit this so the script doesn't stop and keeps running to get the <code>b</code> and just give me that for that specific page?</p>
<pre><code>a = response.xpath("//div[@class='headerDiv']/a/@title").extract()[0]
b = response.xpath("//div[@class='headerDiv']/text()").extract()[0].strip()
items['title'] = a + " " + b
yield items
</code></pre>
| 0 | 2016-10-19T11:29:08Z | 40,130,496 | <p>Just check the result of <code>extract()</code>.</p>
<pre><code>nodes = response.xpath("//div[@class='headerDiv']/a/@title").extract()
a = nodes[0] if nodes else ""
nodes = response.xpath("//div[@class='headerDiv']/text()").extract()
b = nodes[0].strip() if nodes else ""
items['title'] = a + " " + b
yield items
</code></pre>
<p>With the good advice of Padraic Cunningham:</p>
<pre><code>a = response.xpath("//div[@class='headerDiv']/a/@title").extract_first(default='')
b = response.xpath("//div[@class='headerDiv']/text()").extract_first(default ='').strip()
items['title'] = (a + " " + b).strip()
yield items
</code></pre>
| 1 | 2016-10-19T11:43:31Z | [
"python",
"xpath",
"scrapy"
] |
Python Scrapy: Skip Xpath if it's not there | 40,130,194 | <p>I have this code which scrapes a few hundred pages for me. But sometimes the xpath for <code>a</code> doesn't exist at all, how can I edit this so the script doesn't stop and keeps running to get the <code>b</code> and just give me that for that specific page?</p>
<pre><code>a = response.xpath("//div[@class='headerDiv']/a/@title").extract()[0]
b = response.xpath("//div[@class='headerDiv']/text()").extract()[0].strip()
items['title'] = a + " " + b
yield items
</code></pre>
| 0 | 2016-10-19T11:29:08Z | 40,130,963 | <p>You can use as follow:</p>
<pre><code>import lxml.etree as etree
parser = etree.XMLParser(strip_cdata=False, remove_comments=True)
root = etree.fromstring(data, parser)  # 'data' is the raw HTML/XML of the page
# Take the hyperlink title via XPath.
# XPath returns a list of elements, so take index 0 only if the list is non-empty.
a = root.xpath("//div[@class='headerDiv']/a/@title")
b = root.xpath("//div[@class='headerDiv']/text()")
if a:
items['title'] = a[0].strip() + " " + b[0].strip()
else:
items['title'] = b[0].strip()
yield items
</code></pre>
| 0 | 2016-10-19T12:06:00Z | [
"python",
"xpath",
"scrapy"
] |
Django: Complex Permission Model | 40,130,205 | <p>Suppose I have users, projects, memberships and in every membership a role is specified (for example: admin, read-only, user, etc.). The memberships define the relation between users and projects and the corresponding role.</p>
<p>Now I have a problem: how can I use the permission system of Django to assure that only admins can edit projects and the other roles are not allowed to edit projects?</p>
<p>The project list template should look like this:</p>
<pre><code>{% for project in object_list %}
{# user.has_perm('edit_project', project) #}
{% endfor %}
</code></pre>
<p>What is the best way of doing this? How can I implement the permission based on the membership role?</p>
| 0 | 2016-10-19T11:29:34Z | 40,130,921 | <p>You need to build your own permission system. </p>
<p>Django's built-in permission system is not suited for what you want to do.</p>
<p>Build models for the <code>Project</code>. Create a ManyToMany relationship between a <code>User</code> and a <code>Project</code> <code>through</code> a <code>Membership</code> model. This Membership model will have a <code>role</code> field.</p>
<p><a href="https://docs.djangoproject.com/en/1.10/topics/db/models/#extra-fields-on-many-to-many-relationships" rel="nofollow">https://docs.djangoproject.com/en/1.10/topics/db/models/#extra-fields-on-many-to-many-relationships</a> has an example that is almost ideally suited for your needs.</p>
<p>You can not do <code>user.has_perm('edit_project', project)</code> in a template. Django templates do not allow function calls directly with multiple params. I think in your case a custom template tag that takes a <code>User</code> instance, a <code>Project</code> instance, and a string describing the desired permission would be the way to go.</p>
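<p>A hypothetical sketch of such a tag (the <code>membership_set</code> lookup and the role name are assumptions about your models):</p>
<pre><code># yourapp/templatetags/project_perms.py
from django import template

register = template.Library()

@register.simple_tag
def has_project_perm(user, project, perm):
    membership = project.membership_set.filter(user=user).first()
    if membership is None:
        return False
    # only admins may edit projects in this example
    if perm == 'edit_project':
        return membership.role == 'admin'
    return False
</code></pre>
<p>After <code>{% load project_perms %}</code> it could be used as <code>{% has_project_perm user project 'edit_project' as can_edit %}</code>.</p>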
| 1 | 2016-10-19T12:03:53Z | [
"python",
"django",
"permissions",
"acl"
] |
Python How can i use buffer | 40,130,413 | <p>I have to solve this question:</p>
<p>We have a text file like this :</p>
<pre><code>"imei": "123456789",
"sim_no": "+90 xxx xxx xx xx",
"device_type": "standart",
"hw_version": "1.01",
"sw_version": "1.02"
</code></pre>
<p>And we should read this JSON data every 1 min and put it into a buffer (I don't know how; should the buffer be an array or a dict?), then delete the oldest entry every 5 min. Exceptions are important. The format should be like "imei", "hw_version", "sw_version", "device_type". We should also handle buffer overflow.</p>
<p>I wrote this code:</p>
<pre><code>import json
from time import sleep
def buffer(data):
pass
#imei = data.get("imei") # I want to read like this
# this function should put the variables to array
counter=0
while True:
with open("config.txt") as f:
mydict = json.loads('{{ {} }}'.format(f.read()))
buffer(mydict)
sleep(60)
counter+=1
    if counter % 5 == 0:
# delete the oldest data
</code></pre>
<p>How can i use buffer? And how can i continue this code?</p>
| -1 | 2016-10-19T11:39:44Z | 40,130,799 | <p>What you want are the last 5 values, so you should just append the new one to a list and if it is bigger than 5, delete the first one.</p>
<pre><code>my_list.append(my_dict)
if len(my_list) > 5:
my_list.pop(0)
</code></pre>
<p>So, after that, you just sleep for 60 seconds and nothing else.</p>
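<p>A related sketch: <code>collections.deque</code> with <code>maxlen</code> does the eviction for you.</p>
<pre><code>from collections import deque

buffer = deque(maxlen=5)
buffer.append(my_dict)  # once 5 items are stored, the oldest is dropped automatically
</code></pre>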
| 0 | 2016-10-19T11:57:27Z | [
"python",
"arrays",
"python-3.x"
] |
Creating nested dict from dict with nested tuples as keys using comprehension | 40,130,443 | <p>I got a very useful answer for this problem here earlier this year, but there I could use pandas. Now I have to do it with pure Python.</p>
<p>There is a dict like this:</p>
<pre><code>inp = {((0, 0), 0): -99.94360791266038,
((0, 0), 1): -1.1111111111107184,
((1, 0), 0): -1.111111111107987,
((1, 0), 1): -1.1111111111079839,
((1, 0), 3): -1.111111111108079}
</code></pre>
<p>Now I want to convert this into a nested dict like this:</p>
<pre><code>out = {(0,0): {0: -99.94360791266038, 1: -1.1111111111107184},
(1,0): {0: -1.111111111107987,
1: -1.1111111111079839,
3: -1.111111111108079}
</code></pre>
<p>How can I do this with an elegant dict comprehension? I just can't get my head around it.</p>
 | -2 | 2016-10-19T11:40:57Z | 40,130,494 | <p>I'd not do this with a dict comprehension. Just use a simple loop:</p>
<pre><code>out = {}
for key, value in inp.items():
k1, k2 = key
out.setdefault(k1, {})[k2] = value
</code></pre>
<p>Demo:</p>
<pre><code>>>> inp = {((0, 0), 0): -99.94360791266038,
... ((0, 0), 1): -1.1111111111107184,
... ((1, 0), 0): -1.111111111107987,
... ((1, 0), 1): -1.1111111111079839,
... ((1, 0), 3): -1.111111111108079}
>>> out = {}
>>> for key, value in inp.items():
... k1, k2 = key
... out.setdefault(k1, {})[k2] = value
...
>>> from pprint import pprint
>>> pprint(out)
{(0, 0): {0: -99.94360791266038, 1: -1.1111111111107184},
(1, 0): {0: -1.111111111107987,
1: -1.1111111111079839,
3: -1.111111111108079}}
</code></pre>
<p>To do the same with a dict comprehension is possible, but you need to then sort the keys and use <a href="https://docs.python.org/3/library/itertools.html#itertools.groupby" rel="nofollow"><code>itertools.groupby()</code></a> to group the keys on on the first tuple element. The sorting takes O(NlogN) time, and a simple loop like the above beats that easily.</p>
<p>Still, for completeness sake:</p>
<pre><code>from itertools import groupby
out = {g: {k[1]: v for k, v in items}
for g, items in groupby(sorted(inp.items()), key=lambda kv: kv[0][0])}
</code></pre>
| 3 | 2016-10-19T11:43:27Z | [
"python",
"dictionary",
"nested",
"tuples",
"list-comprehension"
] |
Creating nested dict from dict with nested tuples as keys using comprehension | 40,130,443 | <p>I got a very useful answer for this problem here earlier this year, but there I could use pandas. Now I have to do it with pure Python.</p>
<p>There is a dict like this:</p>
<pre><code>inp = {((0, 0), 0): -99.94360791266038,
((0, 0), 1): -1.1111111111107184,
((1, 0), 0): -1.111111111107987,
((1, 0), 1): -1.1111111111079839,
((1, 0), 3): -1.111111111108079}
</code></pre>
<p>Now I want to convert this into a nested dict like this:</p>
<pre><code>out = {(0,0): {0: -99.94360791266038, 1: -1.1111111111107184},
(1,0): {0: -1.111111111107987,
1: -1.1111111111079839,
3: -1.111111111108079}
</code></pre>
<p>How can I do this with an elegant dict comprehension? I just can't get my head around it.</p>
| -2 | 2016-10-19T11:40:57Z | 40,130,679 | <p>Naive solution:</p>
<pre><code>my_dict = {
((0, 0), 0): -99.94360791266038,
((0, 0), 1): -1.1111111111107184,
((1, 0), 0): -1.111111111107987,
((1, 0), 1): -1.1111111111079839,
((1, 0), 3): -1.111111111108079
}
def get_formatted_dict(my_dict):
formatted_dict = {}
for k, v in my_dict.items():
index_1, index_2 = k
if index_1 not in formatted_dict:
formatted_dict[index_1] = {}
formatted_dict[index_1][index_2] = v
return formatted_dict
print(get_formatted_dict(my_dict))
</code></pre>
<p>Output:</p>
<pre><code>{(1, 0): {0: -1.111111111107987, 1: -1.1111111111079839, 3: -1.111111111108079}, (0, 0): {0: -99.94360791266038, 1: -1.1111111111107184}}
</code></pre>
| 1 | 2016-10-19T11:52:10Z | [
"python",
"dictionary",
"nested",
"tuples",
"list-comprehension"
] |
How to use header key from the api to authenticate URL in iOS Swift? | 40,130,468 | <p>I am using Alamofire for the HTTP networking in my app. My API, which is written in Python, requires a header key on each GET request and only responds if the key is present. Now I want to send that header key from my iOS app with Alamofire, but I can't figure out how to implement it. Below is my current code, without any key implementation:</p>
<pre><code>Alamofire.request(.GET,"http://name/user_data/\(userName)@someURL.com").responseJSON { response in // 1
print(response.request) // original URL request
print(response.response) // URL response
print(response.data) // server data
print(response.result)
}
</code></pre>
<p>I have a key as "appkey" and value as a "test" in my api. If anyone can help. Thank you!</p>
| 0 | 2016-10-19T11:42:13Z | 40,130,704 | <p>This should work</p>
<pre><code>let headers = [
"appkey": "test"
]
Alamofire.request(.GET, "http://name/user_data/\(userName)@someURL.com", parameters: nil, encoding: .URL, headers: headers).responseJSON {
response in
//handle response
}
</code></pre>
| 1 | 2016-10-19T11:53:33Z | [
"python",
"ios",
"swift",
"alamofire"
] |
How to use header key from the api to authenticate URL in iOS Swift? | 40,130,468 | <p>I am using Alamofire for the HTTP networking in my app. But in my api which is written in python have an header key for getting request, if there is a key then only give response. Now I want to use that header key in my iOS app with Alamofire, I am not getting it how to implement. Below is my code of normal without any key implementation:</p>
<pre><code>Alamofire.request(.GET,"http://name/user_data/\(userName)@someURL.com").responseJSON { response in // 1
print(response.request) // original URL request
print(response.response) // URL response
print(response.data) // server data
print(response.result)
}
</code></pre>
<p>I have a key as "appkey" and value as a "test" in my api. If anyone can help. Thank you!</p>
| 0 | 2016-10-19T11:42:13Z | 40,130,729 | <pre><code>let headers: HTTPHeaders = [
"Accept": "application/json",
"appkey": "test"
]
Alamofire.request("http://name/user_data/\(userName)@someURL.com", headers: headers).responseJSON { response in
print(response.request) // original URL request
print(response.response) // URL response
print(response.data) // server data
print(response.result)
}
</code></pre>
| 0 | 2016-10-19T11:54:39Z | [
"python",
"ios",
"swift",
"alamofire"
] |
IndexError: " pop index out of range" with a for loop | 40,130,482 | <p>Goodmorning,
I just wrote this program in python and this <code>IndexError</code> keeps showing up. I don't know how to resolve it, I even tried by using a while loop but nothing changed...I hope someone can help me with this problem!</p>
<p>Here is my code. It should compare the lengths of the strings in two lists (la, lb) and remove a string from the la list if it is shorter than the corresponding lb string, and vice versa; plus it has to remove both strings if their lengths are the same.</p>
<pre><code>def change(l1,l2):
la1 = l1[:]
la2 = l2[:]
i = 0
for i in range(len(la1)):
if la1[i] == la2[i]:
l1.pop(i)
l2.pop(i)
elif la1[i] > la2[i]:
l2.pop(i)
elif la2[i] > la1[i]:
l1.pop(i)
</code></pre>
| 1 | 2016-10-19T11:42:50Z | 40,130,930 | <p><strong>Assuming your lists are of equal lengths</strong></p>
<p>As has been pointed out in the comments, the <code>IndexError</code> happens due to your lists' length changing when you <code>pop()</code> an item. </p>
<p>Since you're iterating over your list using a <code>range(len(l))</code> in a <code>for</code> loop, which isn't updated after every completed loop, you'll eventually hit an index that's out of range. </p>
<p>An example, which you can try easily enough yourself:</p>
<pre><code>l = [1,2,3,4,5,6,7,8,9,10]
for i in range(len(l)):
l.pop(i)
print("Length of list", len(l))
</code></pre>
<p>Do not confuse yourself by calling <code>print(range(len(l)))</code> in the for loop - this will give you an updated range, but is misleading. The <code>range</code> in the for loop is only called once, hence never updates while iterating.</p>
<p><strong>A different approach</strong></p>
<p>Instead of working with indices, try using <code>zip()</code> and building new lists, instead of changing existing ones.</p>
<pre><code>def change(l1, l2):
new_l1 = []
new_l2 = []
for a, b in zip(l1, l2):
if len(a) == len(b):
continue # do nothing
elif len(a)<len(b):
new_l2.append(b)
elif len(a)>len(b):
new_l1.append(a)
return new_l1, new_l2
</code></pre>
<p>This approach, essentially, generates the same list you create using <code>pop()</code>, while avoiding usage of indices. </p>
<p>Note that <code>zip()</code> will stop once it reaches the end of the smaller of both iterables. If your lists may not be of equal length, and you'd like to iterate until the longest of both iterables is iterated over entirely, use <code>zip_longest()</code>. But I do not think this is what you need in this case.</p>
<p><strong>Additional Notes</strong></p>
<p>You would also run into a problem if you were to iterate over your <code>list</code> using the following code:</p>
<pre><code>l = [i for i in range(10)]
for item in l:
    l.remove(item)
>>>[1, 3, 5, 7, 9]
</code></pre>
<p>Essentially, it's <strong><em>not advisable to iterate over any <code>iterable</code> while changing it</em></strong>. This can result in anything from an <code>Exception</code> being thrown, to silent unexpected behaviour.</p>
<p>I'm aware you were avoiding this by looping over the copies, I just wanted to add this for posterity. </p>
| 2 | 2016-10-19T12:04:28Z | [
"python"
] |
IndexError: " pop index out of range" with a for loop | 40,130,482 | <p>Goodmorning,
I just wrote this program in python and this <code>IndexError</code> keeps showing up. I don't know how to resolve it, I even tried by using a while loop but nothing changed...I hope someone can help me with this problem!</p>
<p>Here is my code. It should compare the lengths of the strings in two lists (la, lb) and remove a string from the la list if it is shorter than the corresponding lb string, and vice versa; plus it has to remove both strings if their lengths are the same.</p>
<pre><code>def change(l1,l2):
la1 = l1[:]
la2 = l2[:]
i = 0
for i in range(len(la1)):
if la1[i] == la2[i]:
l1.pop(i)
l2.pop(i)
elif la1[i] > la2[i]:
l2.pop(i)
elif la2[i] > la1[i]:
l1.pop(i)
</code></pre>
| 1 | 2016-10-19T11:42:50Z | 40,131,296 | <p>Try with that:</p>
<pre><code>def rmv(la,lb):
for i in range(len(la)):
if len(la[i])<len(lb[i]):
la[i]=None
elif len(la[i])>len(lb[i]):
lb[i]=None
else:
la[i]=lb[i]=None
la = [i for i in la if i is not None]
lb = [i for i in lb if i is not None]
return (la,lb)
</code></pre>
<p>Example:</p>
<pre><code>la = ['ant','string1','panda']
lb = ['elephant','string','panda']
lists = rmv(la,lb)
print lists[0]
print lists[1]
</code></pre>
<p>Result:</p>
<pre><code>['string1']
['elephant']
</code></pre>
| 0 | 2016-10-19T12:22:29Z | [
"python"
] |
IndexError: " pop index out of range" with a for loop | 40,130,482 | <p>Goodmorning,
I just wrote this program in python and this <code>IndexError</code> keeps showing up. I don't know how to resolve it, I even tried by using a while loop but nothing changed...I hope someone can help me with this problem!</p>
<p>Here is my code. It should compare the lengths of the strings in two lists (la, lb) and remove a string from the la list if it is shorter than the corresponding lb string, and vice versa; plus it has to remove both strings if their lengths are the same.</p>
<pre><code>def change(l1,l2):
la1 = l1[:]
la2 = l2[:]
i = 0
for i in range(len(la1)):
if la1[i] == la2[i]:
l1.pop(i)
l2.pop(i)
elif la1[i] > la2[i]:
l2.pop(i)
elif la2[i] > la1[i]:
l1.pop(i)
</code></pre>
| 1 | 2016-10-19T11:42:50Z | 40,132,492 | <p>You can traverse the lists backwards, so that when you remove an item from the list the indices of the elements that you have not examined yet won't be affected</p>
<pre><code>def f(a, b):
l = len(a) if len(a)<len(b) else len(b)
for i in range(l):
j = l-i-1
la, lb = len(a[j]), len(b[j])
if la<lb: a.pop(j)
elif lb<la: b.pop(j)
else: a.pop(j), b.pop(j)
return a, b
</code></pre>
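<p>A quick sanity check (the sample lists are assumed for illustration):</p>
<pre><code>a = ['ant', 'string1', 'panda']
b = ['elephant', 'string', 'panda']
print(f(a, b))  # (['string1'], ['elephant'])
</code></pre>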
<p>P.S. I stayed faithful to your problem statement rather than to your implementation regarding the comparison based on the strings' lengths.</p>
| 0 | 2016-10-19T13:16:10Z | [
"python"
] |
Python large array comparison | 40,130,491 | <p>I have a large array containing URLs (it can contain 100,000 URL strings), and I would like to know if my actual URL is one of the URLs in the array. For that, I have to compare the actual URL string with all the URL strings in the array. Is there any way to do this comparison in less time than I do now? For now it's:</p>
<pre><code>error = 0
for oldUrl in urlList:
error = 1 if oldUrl == actualUrl else error
</code></pre>
| 0 | 2016-10-19T11:43:19Z | 40,130,557 | <p>To check if a <code>list</code> contains an <code>item</code>, use: <code>item in list</code>.</p>
<p>So, you can write:</p>
<pre><code>error = actualUrl in urlList
</code></pre>
| 1 | 2016-10-19T11:46:16Z | [
"python",
"arrays",
"performance",
"python-2.7",
"compare"
] |
Python large array comparison | 40,130,491 | <p>I have a large array containing URLs (it can contain 100,000 URL strings), and I would like to know if my actual URL is one of the URLs in the array. For that, I have to compare the actual URL string with all the URL strings in the array. Is there any way to do this comparison in less time than I do now? For now it's:</p>
<pre><code>error = 0
for oldUrl in urlList:
error = 1 if oldUrl == actualUrl else error
</code></pre>
| 0 | 2016-10-19T11:43:19Z | 40,131,229 | <p>Don't use a list for this. Lookups in lists have a worst case complexity of O(n).</p>
<p>Use a set (or dictionary if you have other metadata) instead. This has a lookup of roughly O(1). See <a href="http://stackoverflow.com/q/3489071/1048539">here</a> for comparisons between a set, dictionary, and list.</p>
<p>Using a set, the lookup is simple:</p>
<pre><code>urls = set(['url1', 'url2', 'url3'])
print ('url2' in urls)
print ('foobar' in urls)
</code></pre>
<p>Or in your case, convert your list object as a set:</p>
<pre><code>urlListSet = set(urlList)
print(oldUrl in urlListSet)
</code></pre>
<p>You can also add new urls to your set:</p>
<pre><code>urlListSet.add(newurl)
urlListSet.update(listOfNewUrls)
</code></pre>
| 0 | 2016-10-19T12:18:55Z | [
"python",
"arrays",
"performance",
"python-2.7",
"compare"
] |
Python large array comparison | 40,130,491 | <p>I have a large array containing URLs (it can contain 100,000 URL strings), and I would like to know if my actual URL is one of the URLs in the array. For that, I have to compare the actual URL string with all the URL strings in the array. Is there any way to do this comparison in less time than I do now? For now it's:</p>
<pre><code>error = 0
for oldUrl in urlList:
error = 1 if oldUrl == actualUrl else error
</code></pre>
| 0 | 2016-10-19T11:43:19Z | 40,136,206 | <p>As already mentioned by @Laurent and @sisanared, you can use the <code>in</code> operator for either <code>lists</code> or <code>sets</code> to check for membership. For example:</p>
<pre><code>found = x in some_list
if found:
#do stuff
else:
#other stuff
</code></pre>
<p>However, you mentioned that speed is an issue. TL;DR -- <code>sets</code> are faster if the <code>set</code> already exists. From <a href="https://wiki.python.org/moin/TimeComplexity" rel="nofollow">https://wiki.python.org/moin/TimeComplexity</a>, checking membership using the <code>in</code> operator is O(n) for <code>list</code> and O(1) for <code>set</code> (like @enderland pointed out). </p>
<p>For 100,000 items, or for one-time-only checks it probably doesn't make much of a difference which you use, but for a larger number of items or situations where you'll be doing many checks, you should probably use a <code>set</code>. I did a couple of tests from the interpreter and this is what I found (Python 2.7, i3 Windows 10 64bit): </p>
<pre><code>import timeit
#Case 1: Timing includes building the list/set
def build_and_check_a_list(n):
a_list = [ '/'.join( ('http:stackoverflow.com',str(i)) ) for i in xrange(1,n+1) ]
check = '/'.join( ('http:stackoverflow.com',str(n)) )
found = check in a_list
return (a_list, found)
def build_and_check_a_set(n):
a_set = set( [ '/'.join( ('http:stackoverflow.com',str(i)) ) for i in xrange(1,n+1) ] )
check = '/'.join( ('http:stackoverflow.com',str(n)) )
found = check in a_set
return (a_set, found)
timeit.timeit('a_list, found = build_and_check_a_list(100000)', 'from __main__ import build_and_check_a_list', number=50)
3.211972302022332
timeit.timeit('a_set, found = build_and_check_a_set(100000)', 'from __main__ import build_and_check_a_set', number=50)
4.5497120006930345
#Case 2: The list/set already exists (timing excludes list/set creation)
check = '/'.join( ('http:stackoverflow.com',str(100000)) )
timeit.timeit('found = check in a_list', 'from __main__ import a_list, check', number=50)
0.12173540635194513
timeit.timeit('found = check in a_set', 'from __main__ import a_set, check', number=50)
1.01052391983103e-05
</code></pre>
<p>For 1 million entries, to build and/or check membership on my computer:</p>
<pre><code>#Case 1: list/set creation included
timeit.timeit('a_list, found = build_and_check_a_list(1000000)', 'from __main__ import build_and_check_a_list', number=50)
35.71641090788398
timeit.timeit('a_set, found = build_and_check_a_set(1000000)', 'from __main__ import build_and_check_a_set', number=50)
51.41244436103625
#Case 2: list/set already exists
check = '/'.join( ('http:stackoverflow.com',str(1000000)) )
timeit.timeit('found = check in a_list', 'from __main__ import a_list, check', number=50)
1.3113457772124093
timeit.timeit('found = check in a_set', 'from __main__ import a_set, check', number=50)
8.180430086213164e-06
</code></pre>
| 1 | 2016-10-19T15:49:46Z | [
"python",
"arrays",
"performance",
"python-2.7",
"compare"
] |
Instantiate nested Cmd Interpreter in Python | 40,130,559 | <p>Hi I'm looking to create a nested interpreter in Python using the Cmd module.</p>
<p>I set up a dynamic module loading because I want my project to be easily expandable (i.e. add a new python file into a folder and without changing the main code being able to load it).</p>
<p>My nested interpreter is currently setup like this:</p>
<pre><code>def instantiateConsole(base):
class SubConsole(cmd.Cmd, base):
def __init__(self):
cmd.Cmd.__init__(self)
def do_action(self,args):
print "Action"
return SubConsole
</code></pre>
<p>This is necessary because in order to create a nested interpreter I have to pass the MainConsole as a second variable to the SubConsole class. The problem with this is that this way I can only create classes inside this method and I won't be able to add a new console module file that I can load dynamically without having the definition inside this method.</p>
<p>Is there any workaround to this?</p>
| 0 | 2016-10-19T11:46:20Z | 40,131,481 | <p>When you say "pass the MainConsole as a second variable" you appear to mean "make the new SubConsole a subclass of the MainConsole". You are effectively defining a class factory that takes the base class as an argument.</p>
<p>You say "create classes inside this method", but <code>instantiateConsole</code> in a function, it appears. It's important to be careful about terminology.</p>
<p>None of this has anything to do with dynamic import of (the modules containing) other base classes that you may wish to use as arguments to <code>instantiateConsole</code>. In the simplest case you can just add a standard directory where these modules will live to your <code>sys.path</code>, import the module by name and then extract the base class (which I assume for the purpose of simplicity will always be defined as <code>BaseConsole</code>). You would then run code such as</p>
<pre><code>import importlib

extension_module = importlib.import_module("my_extension")
new_console = instantiateConsole(extension_module.BaseConsole)
</code></pre>
<p>If the name of the base class can vary (how would you determine its name?) you might have to use <code>getattr()</code> in preference to simple attribute access to the dynamically imported extension module.</p>
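<p>For illustration, a hypothetical extension module could look as follows (the file name <code>my_extension.py</code> and its contents are assumptions, not part of the question):</p>
<pre><code># my_extension.py -- lives in a directory that is on sys.path
class BaseConsole(object):
    def do_hello(self, args):
        print "hello from the dynamically loaded extension"
</code></pre>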
| 0 | 2016-10-19T12:31:09Z | [
"python",
"python-cmd"
] |
How to convert/rename categories in dask | 40,130,562 | <p>I'm trying to rename categories of a dtype 'category' column of a dask dataframe to a series of numbers from 1 to len(categories).</p>
<p>In pandas I was doing it like this:</p>
<pre><code>df['name'] = dd.Categorical(df.name).codes
</code></pre>
<p>but in dask this does not work:</p>
<pre><code>Traceback (most recent call last):
File "example.py", line 47, in <module>
sys.exit(main(sys.argv))
File "example.py", line 25, in main
df['name'] = dd.Categorical(df.name).codes
AttributeError: module 'dask.dataframe' has no attribute 'Categorical'
</code></pre>
<p>So I tried to get the categories and set them as explained in the <a href="http://pandas.pydata.org/pandas-docs/stable/categorical.html#renaming-categories" rel="nofollow">pandas documentation</a>.</p>
<pre><code>df['name'] = df['name'].astype('category')
cats = df.name.cat.categories
df.name.cat.categories = range(1, len(cats))
</code></pre>
<p>But this yields an exception as well:</p>
<pre><code>Traceback (most recent call last):
File "example.py", line 50, in <module>
sys.exit(main(sys.argv))
File "example.py", line 26, in main
cats = df.name.cat.categories
File "[...]/dask/dataframe/core.py", line 3207, in __getattr__
return self._property_map(key)
File "[...]/dask/dataframe/core.py", line 3186, in _property_map
out = self.getattr(self._series._meta_nonempty, key)
File "[...]/dask/dataframe/core.py", line 258, in _meta_nonempty
return meta_nonempty(self._meta)
File "[...]/dask/dataframe/utils.py", line 329, in meta_nonempty
return _nonempty_series(x, idx)
File "[...]/dask/dataframe/utils.py", line 308, in _nonempty_series
entry = s.cat.categories[0]
File "[...]/pandas-0.19.0-py3.5-linux-x86_64.egg/pandas/indexes/base.py", line 1393, in __getitem__
return getitem(key)
IndexError: index 0 is out of bounds for axis 0 with size 0
</code></pre>
<p>How can I rename the categories in a dask dataframe column?</p>
| 1 | 2016-10-19T11:46:30Z | 40,131,052 | <p>You probably want to look at <code>df.column.cat.codes</code>, which has the numbers you're looking for. Lets work through an example:</p>
<h3>Create Toy dataset in Pandas</h3>
<pre><code>In [1]: import pandas as pd
In [2]: df = pd.DataFrame({'x': ['a', 'b', 'a']})
In [3]: df['x'] = df.x.astype('category')
In [4]: df
Out[4]:
x
0 a
1 b
2 a
</code></pre>
<h3>Convert to a Dask.dataframe</h3>
<pre><code>In [5]: import dask.dataframe as dd
In [6]: ddf = dd.from_pandas(df, npartitions=2)
</code></pre>
<h3>Inspect <code>.cat.codes</code> attribute</h3>
<pre><code>In [7]: ddf.x.cat.codes
Out[7]:
dd.Series<getattr..., npartitions=1, divisions=(0, 2)>
Dask Series Structure:
divisions
0 int8
2 ...
dtype: int8
In [8]: ddf.x.cat.codes.compute()
Out[8]:
0 0
1 1
2 0
dtype: int8
</code></pre>
<h3>Overwrite category series with codes series</h3>
<pre><code>In [9]: ddf['x'] = ddf.x.cat.codes
In [10]: ddf.compute()
Out[10]:
x
0 0
1 1
2 0
</code></pre>
| 0 | 2016-10-19T12:10:31Z | [
"python",
"dask"
] |
python exception message formating | 40,130,632 | <p>Why don't I get an exception message when formating the message with <code>%s</code> but I do with <code>format</code>?</p>
<p>Fails:</p>
<pre><code>>>> Exception('foo %s', 'bar').message
''
</code></pre>
<p>Works:</p>
<pre><code>>>> Exception('foo {}'.format('bar')).message
'foo bar'
</code></pre>
<p>Any explanation why it fails on <code>%s</code>?</p>
| 0 | 2016-10-19T11:49:41Z | 40,130,725 | <p>Your syntax for the %-substitution in <code>Exception</code> is incorrect. You need to use <code>%</code> to specify the replacement string:</p>
<pre><code>>>> Exception('foo %s' % 'bar').message
'foo bar'
</code></pre>
<p>As for why the first form yields an empty string: <code>Exception('foo %s', 'bar')</code> never applies any formatting; it simply stores the tuple <code>('foo %s', 'bar')</code> in <code>args</code>, and in Python 2 the <code>message</code> attribute is only populated when the exception is constructed with exactly one argument.</p>
| 3 | 2016-10-19T11:54:19Z | [
"python",
"exception"
] |
python 2 [Error 32] The process cannot access the file because it is being used by another process | 40,130,958 | <p>I'm working with python 2 and have read several posts about this error i.e(<a href="http://stackoverflow.com/questions/28396759/os-remove-in-windows-gives-error-32-being-used-by-another-process">this post</a>).
However, I'm still getting the error.
What I do is:
I read the files in a directory, if any of the files contains a specific string, I delete the directory. </p>
<pre><code>def select_poo():
path = os.walk('/paila_candonga/')
texto = 'poo'
extension = '.tex'
for root, dirs, files in path:
for documento in files:
if extension in documento:
with open(os.path.join(root, documento), 'r') as fin:
for lines in fin:
if texto in lines:
shutil.rmtree(root)
else:
continue
</code></pre>
<p>Then I get the error:</p>
<pre><code>WindowsError: [Error 32] The process cannot access the file because it is being used by another process
</code></pre>
<p>I have also tried using the absolute path: </p>
<pre><code>def select_poo():
path = os.walk('/paila_candonga/')
texto = 'poo'
extension = '.tex'
for root, dirs, files in path:
for documento in files:
if extension in documento:
with open(os.path.join(root, documento), 'r') as fin:
for lines in fin:
if texto in lines:
route = (os.path.join(root, documento))
files = os.path.basename(route)
folder = os.path.dirname(route)
absolut= os.path.dirname(os.path.abspath(route))
todo = os.path.join(absolut, files)
print todo
else:
continue
</code></pre>
<p>Then I will get: </p>
<pre><code>C:\paila_candonga\la_Arepa.tex
C:\paila_candonga\sejodio\laOlla.tex
C:\paila_candonga\sejodio\laPaila.tex
</code></pre>
<p>If I remove one file at a time, using the same absolute path and os.remove(''), I won't have problems. If I try to delete all files at once using select_poo() and shutil.rmtree(folder) or os.remove(absolut), I will have the Error 32. </p>
<p>Is there a way I can do a loop through each of the paths in todo and remove them without having the error 32?</p>
<p>Thanks,</p>
| 0 | 2016-10-19T12:05:50Z | 40,131,114 | <p>It happens here:</p>
<pre><code>with open(os.path.join(root, documento), 'r') as fin:
</code></pre>
<p>So you have your file open and locked; that is why you are not able to delete this folder using:</p>
<pre><code>shutil.rmtree(root)
</code></pre>
<p>within that <code>with</code> statement. You have to perform the deletion outside of the <code>with</code> block, as sketched below.</p>
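<p>A minimal sketch of that idea: collect the directories first, then delete them only after every file handle has been closed (this assumes you are happy to finish scanning before deleting anything):</p>
<pre><code>import os
import shutil

def select_poo():
    texto = 'poo'
    extension = '.tex'
    to_delete = set()
    for root, dirs, files in os.walk('/paila_candonga/'):
        for documento in files:
            if documento.endswith(extension):
                with open(os.path.join(root, documento), 'r') as fin:
                    if any(texto in line for line in fin):
                        to_delete.add(root)
    # every file is closed by now, so Windows will allow the removal
    for folder in to_delete:
        shutil.rmtree(folder)
</code></pre>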
| 1 | 2016-10-19T12:13:50Z | [
"python",
"windows",
"python-2.7",
"loops",
"shutil"
] |
Elasticsearch how to query on ID field higher than x | 40,131,140 | <p>I am trying to apply pagination to results by querying multiple times to get past the 10k barrier of Elasticsearch. Since the results of Elasticsearch can differ during multiple queries I want to use the generated ID to get the next results. </p>
<p>So for example, I run a query that returns 1000 results. Then I want to get the ID value of the 1000th result, and perform a query like : match : ID {{1000thID}}</p>
<p>This way I want to get the 1001 until 2000 result. after that 2001 until 3000, so on.</p>
<p>I currently use the Elasticsearch DSL for python to query on domain name like:</p>
<pre><code>search.query('match', domainname=domainname)
</code></pre>
<p>How do I rebuild this code to match above requirements. ('match',_ID > ID_Variable) </p>
| 1 | 2016-10-19T12:14:50Z | 40,131,289 | <p>The best way to achieve what you want is to use the scroll/scan API. However, if you still want to proceed that way, you can do it like this:</p>
<pre><code>last_id = ...
search.filter('range', id={'gt': last_id, 'lte': last_id + 1000})  # the next 1000 ids after last_id
</code></pre>
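<p>For reference, a minimal sketch of the scroll/scan alternative with elasticsearch-dsl, which pages through all matching documents without the 10k window limit (<code>process</code> is a placeholder for your own handling):</p>
<pre><code>for hit in search.query('match', domainname=domainname).scan():
    process(hit)
</code></pre>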
| 1 | 2016-10-19T12:22:13Z | [
"python",
"elasticsearch",
"range",
"elasticsearch-dsl",
"elasticsearch-py"
] |
sympy.count_roots: type mismatch when working with real polynomial | 40,131,211 | <p>I am using sympy and trying to compute number of roots of a polynomial</p>
<pre><code>from sympy.abc import x
from sympy import Poly
p = Poly(x**4+0.1,x)
</code></pre>
<p>At this point, p is polynomial with domain 'RR': <code>Poly(1.0*x**4 + 0.1, x, domain='RR')</code></p>
<p>If I try to compute number of roots in the interval, I get:</p>
<pre><code>p.count_roots(0,2)
TypeError: unsupported operand type(s) for *=: 'RealElement' and 'PythonRational'
</code></pre>
<p>However, if I define</p>
<pre><code>q = Poly(x**3-1, x)
ans: Poly(x**3 - 1, x, domain='ZZ')
q.count_roots(0,2)
ans: 1
</code></pre>
<p>Similarly, if I ask for number of roots of <code>p</code> on the whole domain, that works as well</p>
<pre><code>p.count_roots()
ans: 1
</code></pre>
<p>What should I do to supply correct types to count_roots? </p>
| 0 | 2016-10-19T12:18:22Z | 40,131,700 | <p>When possible, use exact (instead of floating point) numbers in your symbolic expressions (this principle is true for all symbolic math software, not only sympy). </p>
<p>In this case, the constant term <code>0.1</code> in the definition of <code>p</code> can be replaced by the (exact) ratio representation <code>1/10</code>. Sympy uses <code>Rational</code> to describe ratios of numbers (since, an input <code>1/10</code> is interpreted by python as a floating point division and automatically transformed to <code>0.1</code>). </p>
<p>The following code works.</p>
<pre><code>from sympy.abc import x
from sympy import Poly, Rational
p = Poly( x**4 + Rational(1,10), x)
p.count_roots(0,2)
</code></pre>
<blockquote>
<p><code>0</code></p>
</blockquote>
<p>See also <code>sympy.nsimplify</code> for transforming arbitrary floating point numbers such as, e.g., <code>12.21525</code>, to (approximately equal) rationals.</p>
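<p>For instance (the exact ratio shown in the comment is derived from the decimal expansion):</p>
<pre><code>from sympy import nsimplify
nsimplify(12.21525, rational=True)  # -> 48861/4000
</code></pre>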
| 1 | 2016-10-19T12:41:49Z | [
"python",
"sympy",
"symbolic-math",
"polynomial-math"
] |
Issue when importing GDAL : ImportError, Library not loaded, Image not found | 40,131,266 | <p>Since yesterday I have been struggling to import some libraries such as GDAL (or iris), and I always get the same type of output.</p>
<pre><code>>>> import gdal
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "gdal.py", line 28, in <module>
_gdal = swig_import_helper()
File "gdal.py", line 24, in swig_import_helper
_mod = imp.load_module('_gdal', fp, pathname, description)
ImportError: dlopen(./_gdal.so, 2): Library not loaded: @rpath/libicui18n.56.dylib
Referenced from: /Users/zoran/anaconda/lib/libgdal.20.dylib
Reason: image not found
</code></pre>
<p>I searched in my files and found:</p>
<ul>
<li>1 file containing <code>libicui18n</code></li>
<li><p>2 files containing <code>_gdal.so</code></p>
<p>/Users/zoran/anaconda/pkgs/icu-54.1-0/lib/libicui18n.54.1.dylib</p>
<p>/Users/zoran/anaconda/lib/python2.7/site-packages/osgeo/_gdal.so</p>
<p>/Library/Frameworks/GDAL.framework/Versions/2.1/Python/2.7/site-packages/osgeo/_gdal.so</p></li>
</ul>
<p>This morning I could import gdal without problem and suddenly (I don't know what I did) it was totally impossible.</p>
<p>I tried to:
- uninstall/install gdal
- uninstall/install anaconda and install again gdal
- create different new environments (in python2 and python3) and install only gdal</p>
<p>I don't know what this <code>libicui18n.56.dylib</code> is, nor <code>libgdal.20.dylib</code>.</p>
<p>When I type otool -L with the name of the paths above I get:</p>
<pre><code>libicui18n.54.dylib (compatibility version 54.0.0, current version 54.1.0)
@loader_path/./libicuuc.54.dylib (compatibility version 54.0.0, current version 54.1.0)
@loader_path/./libicudata.54.dylib (compatibility version 54.0.0, current version 54.1.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 111.0.0)
/usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 7.4.0)
/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
@rpath/libgdal.1.dylib (compatibility version 20.0.0, current version 20.5.0)
/usr/lib/libc++.1.dylib (compatibility version 1.0.0, current version 120.0.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1197.1.1)
/Library/Frameworks/GDAL.framework/Versions/2.1/GDAL (compatibility version 22.0.0, current version 22.1.0)
/usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 56.0.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 169.3.0)
</code></pre>
<p>When I type conda info:</p>
<pre><code> platform : osx-64
conda version : 4.2.9
conda is private : False
conda-env version : 4.2.9
conda-build version : 2.0.2
python version : 2.7.12.final.0
requests version : 2.11.1
root environment : /Users/zoran/anaconda (writable)
default environment : /Users/zoran/anaconda
envs directories : /Users/zoran/anaconda/envs
package cache : /Users/zoran/anaconda/pkgs
channel URLs : https://conda.anaconda.org/anaconda/osx-64/
https://conda.anaconda.org/anaconda/noarch/
https://conda.anaconda.org/scitools/osx-64/
https://conda.anaconda.org/scitools/noarch/
https://conda.anaconda.org/conda-forge/osx-64/
https://conda.anaconda.org/conda-forge/noarch/
https://repo.continuum.io/pkgs/free/osx-64/
https://repo.continuum.io/pkgs/free/noarch/
https://repo.continuum.io/pkgs/pro/osx-64/
https://repo.continuum.io/pkgs/pro/noarch/
config file : /Users/zoran/.condarc
offline mode : False
</code></pre>
<p>I am wondering if somehow the libraries are saved in the wrong directory?</p>
<p>I've seen many similar issues but no trick to fix the problem.</p>
<p>Thanks for helping</p>
| 0 | 2016-10-19T12:20:23Z | 40,138,550 | <p>I found a solution to my problem <a href="https://github.com/conda-forge/gdal-feedstock/issues/111" rel="nofollow">here</a>.</p>
<p>Thank you for the clear explanation of "ocefpaf":</p>
<blockquote>
<p>Your problem seems like the usual mismatch between conda-forge and
defaults. Can you try the following instructions (if you do want to
use conda-forge's gdal of course):</p>
<ol>
<li><p>Make sure you have the latest conda to take advantage of the channel preference feature. You can do that by issuing conda update
conda in the root env of your conda installation.</p></li>
<li><p>Edit your .condarc file and place the conda-forge on top of defaults. The .condarc usually lives in your home directory. See mine
below. (Note that the more channels you have, the more likely you are to
face issues. I recommend having only defaults and conda-forge.)</p></li>
<li><p>Issue the following commands to check if you will get the correct installation:</p></li>
</ol>
</blockquote>
<pre><code>conda create --yes -n TEST_GDAL python=3.5 gdal
source activate TEST_GDAL
python -c "from osgeo import gdal; print(gdal.__version__)"
</code></pre>
<blockquote>
<p>If you get 2.1.1 you got a successful installation of the latest
version from conda-forge. We always recommend users work with envs
as in the example above. But you do not need to use Python 3.5
(conda-forge has 3.4 and 2.7 also) and you do not need to name the env
TEST_GDAL.</p>
<p>And here is my .condarc file.</p>
</blockquote>
<pre><code>> cat .condarc
channels:
- conda-forge
- defaults
show_channel_urls: true
</code></pre>
| 0 | 2016-10-19T18:02:20Z | [
"python",
"anaconda",
"importerror",
"gdal"
] |
Comparing pandas dataframes of different length | 40,131,281 | <p>I have two dataframes of different lengths, both indexed by date. I need both dataframes to have the same dates, i.e. delete the extra entries in the longer dataframe. </p>
<p>I have found that I can reset the index and make it another column, then call that column as a pandas Series and compare it to the other Series, giving me a Series with only the entries that are also in the shorter dataframe: </p>
<pre><code>df1 = ...
df2 = ...
dfadj = df1.reset_index(['Date'])
dfstock = dfadj['Date'][dfadj['Date'].isin(dfindex['Date'])]
</code></pre>
<p>But then I would need to find the index positions from these values and in another step delete them from the longer dataframe. Am I missing a completely different approach that would be more logical and/or simpler?</p>
| 1 | 2016-10-19T12:21:46Z | 40,131,308 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.intersection.html" rel="nofollow"><code>Index.intersection</code></a> and then select data in <code>df2</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.ix.html" rel="nofollow"><code>ix</code></a>:</p>
<pre><code>idx = df2.index.intersection(df1.index)
print (idx)
DatetimeIndex(['2015-02-24', '2015-02-25', '2015-02-26', '2015-02-27',
'2015-02-28', '2015-03-01', '2015-03-02', '2015-03-03',
'2015-03-04', '2015-03-05'],
dtype='datetime64[ns]', freq='D')
print (df2.ix[idx])
b
2015-02-24 10
2015-02-25 11
2015-02-26 12
2015-02-27 13
2015-02-28 14
2015-03-01 15
2015-03-02 16
2015-03-03 17
2015-03-04 18
2015-03-05 19
</code></pre>
<p>Another solution is use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>merge</code></a> with inner join, what is by deafult, so can be omited <code>how='inner'</code>:</p>
<pre><code>df = pd.merge(df1,df2, left_index=True, right_index=True)
</code></pre>
<p>Sample:</p>
<pre><code>rng1 = pd.date_range(pd.to_datetime('2015-02-24'), periods=10)
df1 = pd.DataFrame({'a': range(10)}, index=rng1)
print (df1)
a
2015-02-24 0
2015-02-25 1
2015-02-26 2
2015-02-27 3
2015-02-28 4
2015-03-01 5
2015-03-02 6
2015-03-03 7
2015-03-04 8
2015-03-05 9
rng2 = pd.date_range(pd.to_datetime('2015-02-24'), periods=20)
df2 = pd.DataFrame({'b': range(10,30)}, index=rng2)
print (df2)
b
2015-02-24 10
2015-02-25 11
2015-02-26 12
2015-02-27 13
2015-02-28 14
2015-03-01 15
2015-03-02 16
2015-03-03 17
2015-03-04 18
2015-03-05 19
2015-03-06 20
2015-03-07 21
2015-03-08 22
2015-03-09 23
2015-03-10 24
2015-03-11 25
2015-03-12 26
2015-03-13 27
2015-03-14 28
2015-03-15 29
</code></pre>
<pre><code>df = pd.merge(df1,df2, left_index=True, right_index=True)
print (df)
a b
2015-02-24 0 10
2015-02-25 1 11
2015-02-26 2 12
2015-02-27 3 13
2015-02-28 4 14
2015-03-01 5 15
2015-03-02 6 16
2015-03-03 7 17
2015-03-04 8 18
2015-03-05 9 19
</code></pre>
<p>Last if need delete some columns use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html" rel="nofollow"><code>drop</code></a>:</p>
<pre><code>print (df.drop(['a'], axis=1))
b
2015-02-24 10
2015-02-25 11
2015-02-26 12
2015-02-27 13
2015-02-28 14
2015-03-01 15
2015-03-02 16
2015-03-03 17
2015-03-04 18
2015-03-05 19
</code></pre>
| 1 | 2016-10-19T12:23:21Z | [
"python",
"pandas",
"dataframe"
] |
Why df[[2,3,4]][2:4] works and df[[2:4]][2:4] does not in Python | 40,131,360 | <p>suppose we have a dataframe</p>
<pre><code>import pandas as pd
df = pd.read_csv('...')
df
0 1 2 3 4
0 1 2 3 4 5
1 1 2 3 4 5
2 1 2 3 4 5
3 1 2 3 4 5
4 1 2 3 4 5
</code></pre>
<p>Why does one approach work while the other returns a syntax error?</p>
| 2 | 2016-10-19T12:25:53Z | 40,131,402 | <p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.ix.html" rel="nofollow"><code>ix</code></a>:</p>
<pre><code>print (df.ix[2:4,2:4])
2 3
2 3 4
3 3 4
4 3 4
</code></pre>
| 1 | 2016-10-19T12:27:40Z | [
"python",
"pandas",
"dataframe",
"subset"
] |
Why df[[2,3,4]][2:4] works and df[[2:4]][2:4] does not in Python | 40,131,360 | <p>suppose we have a dataframe</p>
<pre><code>import pandas as pd
df = pd.read_csv('...')
df
0 1 2 3 4
0 1 2 3 4 5
1 1 2 3 4 5
2 1 2 3 4 5
3 1 2 3 4 5
4 1 2 3 4 5
</code></pre>
<p>Why does one approach work while the other returns a syntax error?</p>
| 2 | 2016-10-19T12:25:53Z | 40,131,492 | <p>It fails because <code>2:4</code> is invalid syntax for accessing the keys/columns of a df:</p>
<pre><code>In [73]:
df[[2:4]]
File "<ipython-input-73-f0f09617b349>", line 1
df[[2:4]]
^
SyntaxError: invalid syntax
</code></pre>
<p>This is no different from defining a dict and trying the same syntax:</p>
<pre><code>In [74]:
d = {0:0,1:1,2:2,3:3,4:4,5:5}
d
Out[74]:
{0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5}
In [76]:
d[[2:4]]
File "<ipython-input-76-ea5d68adc389>", line 1
d[[2:4]]
^
SyntaxError: invalid syntax
</code></pre>
<p>The <code>[]</code> syntax is used to access column labels that match. You can't pass a slice inside a list to access a range of columns like this; it needs to be a list of values, as you've already found.</p>
<p>The newer methods such as <code>iloc</code>, <code>ix</code> and <code>loc</code> support slice ranges.</p>
<p>What worked for you, initially selected the columns using the labels in a list:</p>
<pre><code>In [77]:
df[[2,3,4]]
Out[77]:
2 3 4
0 3 4 5
1 3 4 5
2 3 4 5
3 3 4 5
4 3 4 5
</code></pre>
<p>And then selected the rows via a slice:</p>
<pre><code>In [79]:
df[[2,3,4]][2:4]
Out[79]:
2 3 4
2 3 4 5
3 3 4 5
</code></pre>
| 1 | 2016-10-19T12:31:52Z | [
"python",
"pandas",
"dataframe",
"subset"
] |
Making a for loop print with an index | 40,131,479 | <p>I made a program which displays a user-provided number of terms of the Fibonacci series. I wanted to format the output as an indexed list, but I don't know what I could use to do so. I found the enumerate() function, but it seems it only works for premade lists, whereas mine is generated according to the user's input. </p>
<p>Do I use a for loop to generate a variable along with the series, then put the variable in the for loop that prints the numbers, like so: </p>
<pre><code>print("{0}. {1}".format(index_variable, wee(n)))
</code></pre>
<p>or am I going down an entirely wrong road here?</p>
| 0 | 2016-10-19T12:31:01Z | 40,132,221 | <pre><code>def fib(n):
x = 0
y = 1
for i in range(n):
yield y
tmp = x
x = y
y += tmp
def main():
    n = int(input('How many do you want: '))  # input() returns a string in Python 3
    for i, f in enumerate(fib(n)):
        print("{0}. {1}".format(i, f))
</code></pre>
<p>Make a generator that yields the values you want and then pass that to <code>enumerate</code>. If you want the numbering to start at 1 instead of 0, use <code>enumerate(fib(n), 1)</code>.</p>
| 0 | 2016-10-19T13:03:44Z | [
"python",
"python-3.x",
"object",
"format",
"string-formatting"
] |
How to get a well-scaled table in python using matplotlib | 40,131,556 | <p>I have been trying to present a table of data in python. I've been generating the table using the matplotlib pyplot module. Unfortunately the data sets I want to present are quite large. Hence when the table displays I either get it showing the entire table, but the data is too tiny to read, or it shows the data at readable size, but cuts off the rest of the table.</p>
<p>My first thought was perhaps if I got the table formatted in a readable way I could then use standard pan/zoom button in the interactive navigation. However clicking and dragging around the screen doesn't seem to shift the table at all. I have tried this on pycharm and anaconda, just in case it made a difference for some reason.</p>
<p>Thus I am wondering, once I format the table in a readable way, how can I pan around the table? Otherwise, are there any other ways to present large amounts of data in tables using python?</p>
<p>Also please note that I want the table to be shown when the code is executed, not saved as an image.</p>
<p>Some test code I have been working with trying to solve this issue:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
data=np.random.rand(100, 1)
cols=("column")
nrows, ncols = len(data)+1, len(cols)
hcell = 0.2
wcell = 1.0
hpad, wpad = 0, 0
fig=plt.figure(figsize=(ncols*wcell+wpad, nrows*hcell+hpad))
ax =fig.add_subplot(111)
ax.axis('off')
cellText=data
table=ax.table(cellText=cellText, colLabels=cols, loc='cent')
plt.tight_layout()
plt.show()
</code></pre>
| 0 | 2016-10-19T12:34:41Z | 40,131,984 | <p>Try the tabulate module, which is very simple to use and supports numpy:</p>
<p><a href="https://pypi.python.org/pypi/tabulate" rel="nofollow">tabulate module</a></p>
<p>sample code to start with:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from tabulate import tabulate
data=np.random.rand(100, 1)
print tabulate(data)
</code></pre>
<p>Using <strong>matplotlib</strong>:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
data = np.random.rand(100, 1)
colLabels = ("First Column",)  # note the trailing comma: a 1-element tuple, not a plain string
nrows, ncols = len(data)+1, len(colLabels)
hcell, wcell = 0.1, 0.1 # tweak as per your requirements
hpad, wpad = 0.5, 0.5
fig=plt.figure(figsize=(ncols*wcell+wpad, nrows*hcell+hpad))
ax = fig.add_subplot(111)
ax.axis('off')
the_table = ax.table(cellText=data,
colLabels=colLabels,
loc='center')
plt.show()
</code></pre>
<p>References:</p>
<ol>
<li><a href="http://stackoverflow.com/a/26937531/2575259">http://stackoverflow.com/a/26937531/2575259</a></li>
<li><a href="http://stackoverflow.com/questions/17232683/creating-tables-in-matplotlib">Creating tables in matplotlib</a></li>
<li><a href="http://stackoverflow.com/questions/3584805/in-matplotlib-what-does-the-argument-mean-in-fig-add-subplot111">In Matplotlib, what does the argument mean in fig.add_subplot(111)?</a></li>
</ol>
| 1 | 2016-10-19T12:54:12Z | [
"python",
"matplotlib",
"formatting"
] |
Running the sample code in pytesseract | 40,131,630 | <p>I am running python 2.6.6 and want to install the <a href="https://pypi.python.org/pypi/pytesseract" rel="nofollow">pytesseract</a> package. After extraction and installation, I can call the pytesseract from the command line. However I want to run the tesseract within python. I have the following code (ocr.py):</p>
<pre><code>try:
import Image
except ImportError:
from PIL import Image
import pytesseract
print(pytesseract.image_to_string(Image.open('test.png')))
print(pytesseract.image_to_string(Image.open('test-european.jpg'),lang='fra'))
</code></pre>
<p>When I run the code by python ocr.py, I get the following output:</p>
<pre><code>Traceback (most recent call last):
File "ocr.py", line 6, in <module>
print(pytesseract.image_to_string(Image.open('test.png')))
File "/pytesseract-0.1.6/build/lib/pytesseract/pytesseract.py", line 164, in image_to_string
raise TesseractError(status, errors)
pytesseract.TesseractError: (2, 'Usage: python tesseract.py [-l language] input_file')
</code></pre>
<p>test.png and test-european.jpg are in the working directory. Can someone help me run this code?
I have tried the following:</p>
<ol>
<li>Adjusted the tesseract_cmd to 'pytesseract'</li>
<li>Installed tesseract-ocr</li>
</ol>
<p>Any help is appreciated as I am trying to solve this problem for hours now.</p>
| 0 | 2016-10-19T12:38:11Z | 40,132,819 | <p><code>tesseract_cmd</code> should point to the command line program <a href="https://github.com/tesseract-ocr/tesseract" rel="nofollow"><code>tesseract</code></a>, not <code>pytesseract</code>.</p>
<p>For instance on Ubuntu you can install the program using:</p>
<pre><code>sudo apt install tesseract-ocr
</code></pre>
<p>And then set the variable to just <code>tesseract</code> or <code>/usr/bin/tesseract</code>.</p>
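<p>In code that would look something like this (the path is an assumption; point it at wherever tesseract is installed on your system):</p>
<pre><code>import pytesseract
pytesseract.pytesseract.tesseract_cmd = '/usr/bin/tesseract'
</code></pre>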
| 0 | 2016-10-19T13:29:28Z | [
"python",
"tesseract",
"python-tesseract"
] |
Python - Recursive way to return a list of the divisors of an integer | 40,131,643 | <p>I was trying to write a function that would return the list of divisors of some positive integer </p>
<p>divisors(12) => [1,2,3,4,6,12]</p>
<p>I did it with a for loop, and then tried to do it with recursion, but I couldn't figure out how to do it and found no example of it online in any language.</p>
<pre><code>def divisors(n,l=[]):
b=1
if n < 1:
return l
if n == 1:
</code></pre>
<p>I thought using l=[] would work better than yield, but either way, I couldn't get anywhere with it.</p>
<p><strong>Edit:</strong>
using @vks code I wrote the following:</p>
<pre><code>def fun(n, l=[],divisor=1):
if n % divisor == 0:
l.append(divisor)
if divisor == n:
return None
fun(n, l, divisor+1)
return l
</code></pre>
| 1 | 2016-10-19T12:39:13Z | 40,131,882 | <p>You can try something like this.</p>
<pre><code>x=12
l=[]
def fun(n, l):
if x%n==0:
l.append(n)
if n==1:
return None
fun(n-1, l)
fun(x, l)
print l  # [12, 6, 4, 3, 2, 1] -- call l.reverse() first if you need ascending order
</code></pre>
| 3 | 2016-10-19T12:50:27Z | [
"python",
"recursion"
] |
Python - Recursive way to return a list of the divisors of an integer | 40,131,643 | <p>I was trying to write a function that would return the list of divisors of some positive integer </p>
<p>divisors(12) => [1,2,3,4,6,12]</p>
<p>I did it with a for loop, and then tried to do it with recursion, but I couldn't figure out how to do it and found no example of it online in any language.</p>
<pre><code>def divisors(n,l=[]):
b=1
if n < 1:
return l
if n == 1:
</code></pre>
<p>I thought using l=[] would work better than yield, but either way, I couldn't get anywhere with it.</p>
<p><strong>Edit:</strong>
using @vks code I wrote the following:</p>
<pre><code>def fun(n, l=[],divisor=1):
if n % divisor == 0:
l.append(divisor)
if divisor == n:
return None
fun(n, l, divisor+1)
return l
</code></pre>
| 1 | 2016-10-19T12:39:13Z | 40,131,926 | <p>How about this,</p>
<pre><code>>>> n = 12
>>> l = [i for i in range(1, n+1) if n%i==0]
>>> l
[1, 2, 3, 4, 6, 12]
</code></pre>
| 1 | 2016-10-19T12:52:01Z | [
"python",
"recursion"
] |
Random Forest with bootstrap = False in scikit-learn python | 40,131,893 | <p>What does RandomForestClassifier() do if we choose bootstrap = False?</p>
<p>According to the definition in this link </p>
<p><a href="http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html#sklearn.ensemble.RandomForestClassifier" rel="nofollow">http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html#sklearn.ensemble.RandomForestClassifier</a></p>
<blockquote>
<p>bootstrap : boolean, optional (default=True) Whether bootstrap samples
are used when building trees.</p>
</blockquote>
<p>Asking this because I want to use a Random Forest approach to a time series, so train with a rolling window of size (t-n) and predict date (t+k) and wanted to know if this is what would happen if we choose True or False:</p>
<p>1) If <code>Bootstrap = True</code>, so when training samples can be of any day and of any number of features. So for example can have samples from day (t-15), day (t-19) and day (t-35) each one with randomly chosen features and then predict the output of date (t+1). </p>
<p>2) If <code>Bootstrap = False</code>, its going to use all the samples and all the features from date (t-n) to t, to train, so its actually going to respect the dates order (meaning its going to use t-35, t-34, t-33... etc until t-1). And then will predict output of date (t+1). </p>
<p>If this is how Bootstrap works I would be inclined to use Boostrap = False, as if not it would be a bit strange (think of financial series) to just ignore the consecutive days returns and jump from day t-39 to t-19 and then to day t-15 to predict day t+1. We would be missing all the info between those days. </p>
<p>So... is this how Bootstrap works?</p>
| 2 | 2016-10-19T12:50:54Z | 40,133,130 | <p>It seems like you're conflating the bootstrap of your observations with the sampling of your features. <a href="http://www-bcf.usc.edu/~gareth/ISL/ISLR%20Sixth%20Printing.pdf" rel="nofollow">An Introduction to Statistical Learning</a> provides a really good introduction to Random Forests.</p>
<p>The benefit of random forests comes from its creating a large variety of trees by sampling both observations and features. <code>bootstrap</code> controls whether <strong>observations</strong> are resampled with replacement when each tree is built; with <code>bootstrap=False</code>, every tree is fit on the full training set, and the randomness comes only from the feature sampling.</p>
<p>You tell it what share of features you want to sample by setting <code>max_features</code>, either to a share of the features or just an integer number (and this is something that you would typically tune to find the best parameter for).</p>
<p>It will be fine that you're not going to have every day when you're building each tree - that's where the value of RF comes from. Each individual tree will be a pretty bad predictor, but when you average together the predictions from hundreds or thousands of trees you'll (probably) end up with a good model.</p>
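<p>A minimal sketch of the corresponding estimator setup (the parameter values are illustrative, not a recommendation):</p>
<pre><code>from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(
    n_estimators=500,
    bootstrap=False,      # every tree sees all observations
    max_features='sqrt',  # but only a random subset of features per split
)
</code></pre>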
| 0 | 2016-10-19T13:42:31Z | [
"python",
"machine-learning",
"scikit-learn"
] |
SyntaxError in setup.py with pip to install module | 40,131,953 | <p>My Python teacher showed us the turtle module, so I want to try it myself, but when I try to install the turtle module on my PC I get an error.
I'm using "pip" to install modules, so when I do "pip install turtle" on a console
(not python console) I have an error :</p>
<pre><code>Collecting turtle
using cached turtle-0.0.2.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\Daxxas\AppData\Local\Temp\pip-build-727hpv0w\turtle\setup.py", line40
except ValueError, ve:
^
SyntaxError: invalid syntax
</code></pre>
<p>and there is this in red :</p>
<pre><code>Command "python setup.py egg_info" failed with error code 1 C:\Users\Daxxas\AppData\Local\temp\pip-build-727hpv0w\turtle\
</code></pre>
<p>And I don't know what to do. There isn't a pip folder in "Temp".</p>
<p>So how can I fix this to be able to install the turtle module ?</p>
<p>P.S.: Is it possible to copy/paste something into a console?</p>
| 0 | 2016-10-19T12:52:58Z | 40,132,028 | <p>Turtle is already included in the Python standard library; you don't need to install anything.</p>
<p>The library you were installing is a completely different thing (an HTTP proxy, apparently) which looks like it's not compatible with any recent Python version.</p>
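<p>You can check it straight away with a tiny sketch (it should open a drawing window):</p>
<pre><code>import turtle  # ships with Python, no pip required

t = turtle.Turtle()
t.forward(100)  # draw a 100-pixel line
turtle.done()
</code></pre>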
| 2 | 2016-10-19T12:55:59Z | [
"python"
] |
Python user input inside infinite loop too slow, easily confused | 40,132,067 | <p>I have a Python script running on a Raspberry Pi that sits waiting for user input and records the input in a SQLite database:</p>
<pre><code>#!/usr/bin/env python
import logging
import db
while True:
barcode = raw_input("Scan ISBN: ")
if ( len(barcode) > 1 ):
logging.info("Recording scanned ISBN: " + barcode)
print "Recording scanned ISBN: " + barcode
db.recordScan(barcode, 1)
</code></pre>
<p>That <code>db.recordScan()</code> method looks like this:</p>
<pre><code># Adds an item to queue
def recordScan(isbn, shop_id):
insert = "INSERT INTO scans ( isbn, shop_id ) VALUES ( ?, ? )"
conn = connect()
conn.cursor().execute(insert, [isbn, shop_id])
conn.commit()
conn.close()
</code></pre>
<p><em>(Note: The whole code repo is available at <a href="https://github.com/martinjoiner/bookfetch-scanner-python/" rel="nofollow">https://github.com/martinjoiner/bookfetch-scanner-python/</a> if you wanna see how I'm connecting to the db and such)</em> </p>
<p>My problem is that using a USB barcode scanner (which is effectively just a keyboard input that sends a series of keystrokes followed by the <code>Enter</code> key) it is really easy to input at such a fast rate that the command line seems to get <em>"confused"</em>. </p>
<p><strong>For example compare the following results...</strong> </p>
<p>When you go slow the script works well and the command looks neat like this:</p>
<pre><code>Scan ISBN: 9780465031467
Recording scanned ISBN: 9780465031467
Scan ISBN: 9780141014593
Recording scanned ISBN: 9780141014593
Scan ISBN:
</code></pre>
<p>But when you hammer it hard and go really fast the input prompt kind of gets ahead of itself and the messages printed by the script get written on top of the input prompt:</p>
<pre><code>Recording scanned ISBN: 9780141014593
9780141014593
9780141014593
9780465031467
Recording scanned ISBN: 9780141014593
Scan ISBN: Recording scanned ISBN: 9780141014593
Scan ISBN: Recording scanned ISBN: 9780141014593
Scan ISBN: Recording scanned ISBN: 9780465031467
Scan ISBN: 9780571273188
9780141014593
</code></pre>
<p>It sometimes hangs in that position indefinitely. I don't know what it's doing, but you can wake it back up again with another input and it carries on as normal, although the input before the one it hung on doesn't get recorded, which is bad because it makes the whole system unreliable. </p>
<p>My question is: Is this an inevitability that I just have to live with? Will I always be able to out-pace the low-powered Raspberry Pi by hitting it with too many inputs in close succession or is there some faster way of doing this? Can I push the database write operation to another thread or something along those lines? Forgive my ignorance, I am learning. </p>
| 3 | 2016-10-19T12:57:50Z | 40,132,759 | <p>Don't build SQL strings from user input. Ever. </p>
<p><em>Always</em> use parameterized queries.</p>
<pre><code># Adds an item to queue
def recordScan(isbn, shop_id):
insert = "INSERT INTO scans ( isbn, shop_id ) VALUES ( ?, ? )"
conn = connect()
conn.cursor().execute(insert, [isbn, shop_id])
conn.commit()
conn.close()
</code></pre>
<p>Please read <a href="https://docs.python.org/2/library/sqlite3.html" rel="nofollow">https://docs.python.org/2/library/sqlite3.html</a>, at the very least the upper part of the page, where they explain this approach.</p>
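<p>For contrast, a sketch of the unsafe pattern being warned against versus the safe one (table and column names taken from the question):</p>
<pre><code># UNSAFE: string interpolation invites SQL injection
cursor.execute("INSERT INTO scans (isbn, shop_id) VALUES ('%s', %s)" % (isbn, shop_id))

# SAFE: let the driver bind the parameters
cursor.execute("INSERT INTO scans (isbn, shop_id) VALUES (?, ?)", (isbn, shop_id))
</code></pre>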
| 1 | 2016-10-19T13:26:23Z | [
"python",
"python-2.7",
"sqlite",
"raspberry-pi"
] |
Python user input inside infinite loop too slow, easily confused | 40,132,067 | <p>I have a Python script running on a Raspberry Pi that sits waiting for user input and records the input in a SQLite database:</p>
<pre><code>#!/usr/bin/env python
import logging
import db
while True:
barcode = raw_input("Scan ISBN: ")
if ( len(barcode) > 1 ):
logging.info("Recording scanned ISBN: " + barcode)
print "Recording scanned ISBN: " + barcode
db.recordScan(barcode, 1)
</code></pre>
<p>That <code>db.recordScan()</code> method looks like this:</p>
<pre><code># Adds an item to queue
def recordScan(isbn, shop_id):
insert = "INSERT INTO scans ( isbn, shop_id ) VALUES ( ?, ? )"
conn = connect()
conn.cursor().execute(insert, [isbn, shop_id])
conn.commit()
conn.close()
</code></pre>
<p><em>(Note: The whole code repo is available at <a href="https://github.com/martinjoiner/bookfetch-scanner-python/" rel="nofollow">https://github.com/martinjoiner/bookfetch-scanner-python/</a> if you wanna see how I'm connecting to the db and such)</em> </p>
<p>My problem is that using a USB barcode scanner (which is effectively just a keyboard input that sends a series of keystrokes followed by the <code>Enter</code> key) it is really easy to input at such a fast rate that the command line seems to get <em>"confused"</em>. </p>
<p><strong>For example compare the following results...</strong> </p>
<p>When you go slow the script works well and the command looks neat like this:</p>
<pre><code>Scan ISBN: 9780465031467
Recording scanned ISBN: 9780465031467
Scan ISBN: 9780141014593
Recording scanned ISBN: 9780141014593
Scan ISBN:
</code></pre>
<p>But when you hammer it hard and go really fast the input prompt kind of gets ahead of itself and the messages printed by the script get written on top of the input prompt:</p>
<pre><code>Recording scanned ISBN: 9780141014593
9780141014593
9780141014593
9780465031467
Recording scanned ISBN: 9780141014593
Scan ISBN: Recording scanned ISBN: 9780141014593
Scan ISBN: Recording scanned ISBN: 9780141014593
Scan ISBN: Recording scanned ISBN: 9780465031467
Scan ISBN: 9780571273188
9780141014593
</code></pre>
<p>It sometimes hangs in that position indefinitely. I don't know what it's doing, but you can wake it back up again with another input and it carries on as normal, although the input before the one it hung on doesn't get recorded, which is bad because it makes the whole system unreliable. </p>
<p>My question is: Is this an inevitability that I just have to live with? Will I always be able to out-pace the low-powered Raspberry Pi by hitting it with too many inputs in close succession or is there some faster way of doing this? Can I push the database write operation to another thread or something along those lines? Forgive my ignorance, I am learning. </p>
| 3 | 2016-10-19T12:57:50Z | 40,134,174 | <p>You appear to be opening and closing the database each and every time. That will clearly add a huge overhead, especially as you are "hammering" away at it.<br>
Connect to the database once at the beginning and close it upon exit.<br>
In between, simply perform your <code>insert</code>, <code>update</code> and <code>delete</code> statements. </p>
<p>Edit:<br>
For the purposes of this I renamed <code>db.py</code> to be called <code>barcode1.py</code> so edit appropriately.
Alter <code>listen.py</code> to be as follows: </p>
<pre><code>#!/usr/bin/env python
import logging
import barcode1
DB_FILE_NAME = "scan-queue.db"
my_db = barcode1.sqlite3.connect(DB_FILE_NAME)
my_cursor = my_db.cursor()
def InsertScan(isbn, shop_id):
insert = "INSERT INTO scans ( isbn, shop_id ) VALUES ( ?, ? )"
my_cursor.execute(insert, [isbn, shop_id])
my_db.commit()
while True:
barcode = raw_input("Scan ISBN: ")
if ( len(barcode) > 1 ):
logging.info("Recording scanned ISBN: " + barcode)
print "Recording scanned ISBN: " + barcode
InsertScan(barcode, 1)
my_db.close()  # note: unreachable while the loop runs forever; in real code, close on exit (e.g. in a finally block)
</code></pre>
<p>For your purposes replace references to "barcode1" with "db"<br>
As you can see all that happens here is that a separate function has been added to do the writing and only the writing.<br>
Clearly this is a quick mock up and could be improved immeasurably, in fact I'd rewrite it as a single script. This is one of those classic examples where in an attempt to write object oriented code, you end up shooting yourself in the foot.<br>
In fact you could do without the function and just include the <code>insert</code> code within the <code>while</code> statement.</p>
<p>Locking:
from the sqlite3 documentation:</p>
<pre><code> sqlite3.connect(database[, timeout, detect_types, isolation_level, check_same_thread, factory, cached_statements, uri])
</code></pre>
<p>Opens a connection to the SQLite database file database. You can use ":memory:" to open a database connection to a database that resides in RAM instead of on disk.</p>
<p>When a database is accessed by multiple connections, and one of the processes modifies the database, the SQLite database is locked until that transaction is committed. The timeout parameter specifies how long the connection should wait for the lock to go away until raising an exception. The default for the timeout parameter is 5.0 (five seconds).</p>
| 1 | 2016-10-19T14:23:57Z | [
"python",
"python-2.7",
"sqlite",
"raspberry-pi"
] |
Python user input inside infinite loop too slow, easily confused | 40,132,067 | <p>I have a Python script running on a Raspberry Pi that sits waiting for user input and records the input in a SQLite database:</p>
<pre><code>#!/usr/bin/env python
import logging
import db
while True:
barcode = raw_input("Scan ISBN: ")
if ( len(barcode) > 1 ):
logging.info("Recording scanned ISBN: " + barcode)
print "Recording scanned ISBN: " + barcode
db.recordScan(barcode, 1)
</code></pre>
<p>That <code>db.recordScan()</code> method looks like this:</p>
<pre><code># Adds an item to queue
def recordScan(isbn, shop_id):
insert = "INSERT INTO scans ( isbn, shop_id ) VALUES ( ?, ? )"
conn = connect()
conn.cursor().execute(insert, [isbn, shop_id])
conn.commit()
conn.close()
</code></pre>
<p><em>(Note: The whole code repo is available at <a href="https://github.com/martinjoiner/bookfetch-scanner-python/" rel="nofollow">https://github.com/martinjoiner/bookfetch-scanner-python/</a> if you wanna see how I'm connecting to the db and such)</em> </p>
<p>My problem is that using a USB barcode scanner (which is effectively just a keyboard input that sends a series of keystrokes followed by the <code>Enter</code> key) it is really easy to input at such a fast rate that the command line seems to get <em>"confused"</em>. </p>
<p><strong>For example compare the following results...</strong> </p>
<p>When you go slow the script works well and the command looks neat like this:</p>
<pre><code>Scan ISBN: 9780465031467
Recording scanned ISBN: 9780465031467
Scan ISBN: 9780141014593
Recording scanned ISBN: 9780141014593
Scan ISBN:
</code></pre>
<p>But when you hammer it hard and go really fast the input prompt kind of gets ahead of itself and the messages printed by the script get written on top of the input prompt:</p>
<pre><code>Recording scanned ISBN: 9780141014593
9780141014593
9780141014593
9780465031467
Recording scanned ISBN: 9780141014593
Scan ISBN: Recording scanned ISBN: 9780141014593
Scan ISBN: Recording scanned ISBN: 9780141014593
Scan ISBN: Recording scanned ISBN: 9780465031467
Scan ISBN: 9780571273188
9780141014593
</code></pre>
<p>It sometimes hangs in that position indefinitely. I don't know what it's doing, but you can wake it back up again with another input and it carries on as normal, although the input before the one it hung on doesn't get recorded, which is bad because it makes the whole system unreliable. </p>
<p>My question is: Is this an inevitability that I just have to live with? Will I always be able to out-pace the low-powered Raspberry Pi by hitting it with too many inputs in close succession or is there some faster way of doing this? Can I push the database write operation to another thread or something along those lines? Forgive my ignorance, I am learning. </p>
| 3 | 2016-10-19T12:57:50Z | 40,138,071 | <p>After much experimenting based on helpful advice from users @tomalak, @rolf-of-saxony and @hevlastka my conclusion is that <strong>yes, this <em>is</em> an inevitability that I just have to live with.</strong> </p>
<p>Even if you strip the example down to the basics by removing the database write process and making it a simple <em>parrot</em> script that just repeats back inputs (See <a href="http://stackoverflow.com/questions/40156905/python-on-raspberry-pi-user-input-inside-infinite-loop-misses-inputs-when-hit-wi">Python on Raspberry Pi user input inside infinite loop misses inputs when hit with many</a>), it is still possible to scan items so fast that inputs get missed/skipped/ignored. The Raspberry Pi simply cannot keep up. </p>
<p>So my approach will now be to add an audio feedback feature, such as a beep sound, to indicate to the user when the device is ready to receive the next input. It's a route I didn't want to go down, but it seems my code is as efficient as it can be and we're still able to hit the limits. Responsibility is with the user not to go at breakneck speed, and the best we can do as responsible product builders is give them good feedback. </p>
| 1 | 2016-10-19T17:37:19Z | [
"python",
"python-2.7",
"sqlite",
"raspberry-pi"
] |
Selenium get rendered php captcha image | 40,132,107 | <p>I am trying to parse a web page that has a captcha. The captcha is generated by PHP:</p>
<pre><code><img src="/captcha.php" border="0" align="absmiddle">
</code></pre>
<p>What i do:</p>
<pre><code> self.driver = webdriver.Chrome()
img = self.driver.find_element_by_xpath('//table/tbody/tr/td/img')
scr = img.get_attribute('src')
</code></pre>
<p>but it contains captcha.php, and I was expecting to see base64.</p>
<p>Is there any way to get the image returned by the PHP script?</p>
| 0 | 2016-10-19T12:59:21Z | 40,133,009 | <p>From <a href="http://stackoverflow.com/questions/17361742/download-image-with-selenium-python" title="this answer">this answer</a>, you can download the image:</p>
<pre><code>import urllib  # Python 2; in Python 3 use urllib.request.urlretrieve

# download the image the captcha src points at
urllib.urlretrieve(src, "captcha.png")
</code></pre>
<p>Note that this makes a fresh HTTP request, so if the server generates a new captcha on every request you may need to reuse the browser's cookies or take a screenshot of the element instead.</p>
| 0 | 2016-10-19T13:38:18Z | [
"python",
"selenium"
] |
python pandas: Reallocate index, columns and values within dataframe | 40,132,128 | <p>I have a dataframe that looks as follows. x is the index</p>
<pre><code> y value
x
1 0 0.016175
1 1 0.017832
1 2 0.021536
1 3 0.024777
2 0 0.027594
2 1 0.029950
2 2 0.031890
2 3 0.033570
3 0 0.035070
3 1 0.036329
3 2 0.037297
3 3 0.037983
</code></pre>
<p>I would like to reallocate the data in the frame so that the result looks like:</p>
<pre><code> y 1(x) 2(x) 3(x)
0 0.016175 0.027594 0.035070
1 0.017832 0.029950 0.036329
2 0.021536 0.031890 0.037297
3 0.024777 0.033570 0.037983
</code></pre>
<p>The original index should be placed as column headings and y should be the new index. Any ideas how to implement this in Python?</p>
| 1 | 2016-10-19T13:00:21Z | 40,132,166 | <p>You can use first <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow"><code>reset_index</code></a>, then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pivot.html" rel="nofollow"><code>pivot</code></a> and last <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.add_suffix.html" rel="nofollow"><code>add_suffix</code></a>:</p>
<pre><code>print (df.reset_index().pivot(index='y', columns='x', values='value').add_suffix('(x)'))
x 1(x) 2(x) 3(x)
y
0 0.016175 0.027594 0.035070
1 0.017832 0.029950 0.036329
2 0.021536 0.031890 0.037297
3 0.024777 0.033570 0.037983
</code></pre>
<p>Last if need remove column names add <a href="http://pandas.pydata.org/pandas-docs/stable/whatsnew.html#changes-to-rename" rel="nofollow"><code>rename_axis</code></a> (new in <code>pandas</code> <code>0.18.0</code>):</p>
<pre><code>print (df.reset_index()
.pivot(index='y', columns='x', values='value')
.add_suffix('(x)')
.rename_axis(None, axis=1))
1(x) 2(x) 3(x)
y
0 0.016175 0.027594 0.035070
1 0.017832 0.029950 0.036329
2 0.021536 0.031890 0.037297
3 0.024777 0.033570 0.037983
</code></pre>
| 3 | 2016-10-19T13:01:51Z | [
"python",
"pandas",
"dataframe",
"pivot"
] |
Python: how to make parallelize processing | 40,132,288 | <p>I need to divide the task among 8 processes.
I use <code>multiprocessing</code> to do that.
Let me describe my task:
I have a dataframe with a column of urls. Some urls show a captcha, so I try to use proxies from another file to get the page behind every url.
This takes a lot of time and I want to divide the work. I want to open the first url with one proxy, the second url with another proxy, etc. I can't use <code>map</code> or <code>zip</code>, because the list of proxies is shorter than the list of urls.
The urls look like</p>
<pre><code>['https://www.avito.ru/moskva/avtomobili/bmw_x5_2016_840834845', 'https://www.avito.ru/moskva/avtomobili/bmw_1_seriya_2016_855898883', 'https://www.avito.ru/moskva/avtomobili/bmw_3_seriya_2016_853351780', 'https://www.avito.ru/moskva/avtomobili/bmw_3_seriya_2016_856641142', 'https://www.avito.ru/moskva/avtomobili/bmw_3_seriya_2016_856641140', 'https://www.avito.ru/moskva/avtomobili/bmw_3_seriya_2016_853351780', 'https://www.avito.ru/moskva/avtomobili/bmw_3_seriya_2016_856641134', 'https://www.avito.ru/moskva/avtomobili/bmw_3_seriya_2016_856641141']
</code></pre>
<p>and the proxies look like </p>
<pre><code>['http://203.223.143.51:8080', 'http://77.123.18.56:81', 'http://203.146.189.61:80', 'http://113.185.19.130:80', 'http://212.235.226.133:3128', 'http://5.39.89.84:8080']
</code></pre>
<p>My code:</p>
<pre><code>import re
import time
import urllib
import pandas as pd
from bs4 import BeautifulSoup
from multiprocessing import Pool

def get_page(url):
    m = re.search(r'avito.ru\/[a-z]+\/avtomobili\/[a-z0-9_]+$', url)
    if m is not None:
        url = 'https://www.' + url
        print url
    proxy = pd.read_excel('proxies.xlsx')
    proxies = proxy.proxy.values.tolist()
    for i, proxy in enumerate(proxies):
        print "Trying HTTP proxy %s" % proxy
        try:
            result = urllib.urlopen(url, proxies={'http': proxy}).read()
            if 'Мы обнаружили, что запросы, поступающие с вашего IP-адреса, похожи на автоматические' in result:
                raise Exception
            else:
                soup = BeautifulSoup(result, 'html.parser')
                price = soup.find('span', itemprop="price")
                print price
        except:
            print "Trying next proxy %s in 10 seconds" % proxy
            time.sleep(10)
if __name__ == '__main__':
pool = Pool(processes=8)
pool.map(get_page, urls)
</code></pre>
<p>My code takes 8 urls and tries to open each of them with one proxy at a time. How can I change the algorithm to open 8 urls with 8 different proxies?</p>
| 0 | 2016-10-19T13:07:06Z | 40,132,432 | <p>Something like this might help:</p>
<pre><code>import thread  # Python 2 low-level threading module; threading.Thread also works

def get_page(url):
    m = re.search(r'avito.ru\/[a-z]+\/avtomobili\/[a-z0-9_]+$', url)
    if m is not None:
        url = 'https://www.' + url
        print url
    proxy = pd.read_excel('proxies.xlsx')
    proxies = proxy.proxy.values.tolist()
    for i, proxy in enumerate(proxies):
        thread.start_new_thread(run, (url, proxy, i))

def run(url, proxy, i):
    print "Trying HTTP proxy %s" % proxy
    try:
        result = urllib.urlopen(url, proxies={'http': proxy}).read()
        if 'Мы обнаружили, что запросы, поступающие с вашего IP-адреса, похожи на автоматические' in result:
            raise Exception
        else:
            soup = BeautifulSoup(result, 'html.parser')
            price = soup.find('span', itemprop="price")
            print price
    except:
        print "Trying next proxy %s in 10 seconds" % proxy
        time.sleep(10)
if __name__ == '__main__':
pool = Pool(processes=8)
pool.map(get_page, urls)
</code></pre>
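<p>Alternatively, to get exactly what the question asks for -- url 1 with proxy 1, url 2 with proxy 2, and so on -- the shorter proxy list can be cycled so that <code>zip</code> and <code>pool.map</code> still work. A sketch (<code>fetch_with_proxy</code> is a hypothetical wrapper around the fetch/parse logic above, and <code>urls</code>/<code>proxies</code> are the module-level lists from the question):</p>

<pre><code>from itertools import cycle
from multiprocessing import Pool

def fetch_with_proxy(url_proxy):
    url, proxy = url_proxy
    print "Opening %s via %s" % (url, proxy)
    # ... fetch and parse the page with this single proxy ...

if __name__ == '__main__':
    pool = Pool(processes=8)
    # cycle() repeats the proxies once they run out, so zip covers every url
    pool.map(fetch_with_proxy, zip(urls, cycle(proxies)))
</code></pre>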
| 0 | 2016-10-19T13:13:30Z | [
"python",
"multithreading",
"proxy",
"multiprocessing"
] |
Kivy Installation Error | 40,132,289 | <p>I was trying to install Kivy, but I ran into this problem and couldn't solve it. Does anyone know how to solve it?</p>
<p>kivy error
<a href="https://i.stack.imgur.com/dVXsd.jpg" rel="nofollow"><img src="https://i.stack.imgur.com/dVXsd.jpg" alt="enter image description here"></a></p>
<p>I was informed that I needed to install GLEW. Thank you for the help. I followed the instructions on GLEW's website to install it, but after the installation I ran into another problem:</p>
<p><a href="https://i.stack.imgur.com/mtlOZ.jpg" rel="nofollow"><img src="https://i.stack.imgur.com/mtlOZ.jpg" alt="enter image description here"></a></p>
| 0 | 2016-10-19T13:07:09Z | 40,132,458 | <p>You need to install libglew-dev first. The problem occurs because it is not installed on your system.</p>
<p>For installing it,see <a href="https://www.youtube.com/watch?v=u_NI7KOzyFM" rel="nofollow">This Video</a></p>
| 0 | 2016-10-19T13:14:25Z | [
"python",
"package",
"install",
"kivy"
] |
Kivy Installation Error | 40,132,289 | <p>I was trying to install Kivy, but I ran into this problem and couldn't solve it. Does anyone know how to solve it?</p>
<p>kivy error
<a href="https://i.stack.imgur.com/dVXsd.jpg" rel="nofollow"><img src="https://i.stack.imgur.com/dVXsd.jpg" alt="enter image description here"></a></p>
<p>I was informed that I needed to install GLEW. Thank you for the help. I followed the instructions on GLEW's website to install it, but after the installation I ran into another problem:</p>
<p><a href="https://i.stack.imgur.com/mtlOZ.jpg" rel="nofollow"><img src="https://i.stack.imgur.com/mtlOZ.jpg" alt="enter image description here"></a></p>
| 0 | 2016-10-19T13:07:09Z | 40,133,968 | <p>This video explains how to install pygame and kivy in detail.</p>
<p><a href="https://www.youtube.com/watch?v=CYNWK2GpwgA&list=PLQVvvaa0QuDe_l6XiJ40yGTEqIKugAdTy" rel="nofollow">Installing Kivy</a></p>
| 0 | 2016-10-19T14:16:43Z | [
"python",
"package",
"install",
"kivy"
] |
python filter 2d array by a chunk of data | 40,132,352 | <pre><code>import numpy as np
data = np.array([
[20, 0, 5, 1],
[20, 0, 5, 1],
[20, 0, 5, 0],
[20, 1, 5, 0],
[20, 1, 5, 0],
[20, 2, 5, 1],
[20, 3, 5, 0],
[20, 3, 5, 0],
[20, 3, 5, 1],
[20, 4, 5, 0],
[20, 4, 5, 0],
[20, 4, 5, 0]
])
</code></pre>
<p>I have the following 2d array. Let's call the fields <code>a, b, c, d</code> in the above order, where column <code>b</code> is like an <code>id</code>. I wish to delete all rows that don't have at least one appearance of the number "1" in column <code>d</code> among all rows with the same number in column <code>b</code> (same id), so after filtering I will have the following results:</p>
<pre><code>[[20 0 5 1]
[20 0 5 1]
[20 0 5 0]
[20 2 5 1]
[20 3 5 0]
[20 3 5 0]
[20 3 5 1]]
</code></pre>
<p>All rows with <code>b = 1</code> and <code>b = 4</code> have been deleted from the data.</p>
<p>To sum up, because I see answers that don't fit: we look at chunks of data by the <code>b</code> column. If a complete chunk of data doesn't have even one appearance of the number "1" in column <code>d</code>, we delete all the rows of that <code>b</code> item. In the example above we can see chunks of data with <code>b = 1</code> and <code>b = 4</code> ("id" = 1 and "id" = 4) that have 0 appearances of the number "1" in column <code>d</code>; that's why they get deleted from the data.</p>
| 4 | 2016-10-19T13:10:04Z | 40,132,462 | <p>code: </p>
<pre><code>import numpy as np
my_list = [[20,0,5,1],
[20,0,5,1],
[20,0,5,0],
[20,1,5,0],
[20,1,5,0],
[20,2,5,1],
[20,3,5,0],
[20,3,5,0],
[20,3,5,1],
[20,4,5,0],
[20,4,5,0],
[20,4,5,0]]
all_ids = np.array(my_list)[:,1]
unique_ids = np.unique(all_ids)
indices = [np.where(all_ids==ui)[0][0] for ui in unique_ids ]
final = []
for id in unique_ids:
try:
tmp_group = my_list[indices[id]:indices[id+1]]
except:
tmp_group = my_list[indices[id]:]
if 1 in np.array(tmp_group)[:,3]:
final.extend(tmp_group)
print np.array(final)
</code></pre>
<p>result: </p>
<pre><code>[[20 0 5 1]
[20 0 5 1]
[20 0 5 0]
[20 2 5 1]
[20 3 5 0]
[20 3 5 0]
[20 3 5 1]]
</code></pre>
| 0 | 2016-10-19T13:14:34Z | [
"python",
"arrays",
"numpy"
] |
python filter 2d array by a chunk of data | 40,132,352 | <pre><code>import numpy as np
data = np.array([
[20, 0, 5, 1],
[20, 0, 5, 1],
[20, 0, 5, 0],
[20, 1, 5, 0],
[20, 1, 5, 0],
[20, 2, 5, 1],
[20, 3, 5, 0],
[20, 3, 5, 0],
[20, 3, 5, 1],
[20, 4, 5, 0],
[20, 4, 5, 0],
[20, 4, 5, 0]
])
</code></pre>
<p>I have the following 2d array. Let's call the fields <code>a, b, c, d</code> in the above order, where column <code>b</code> is like an <code>id</code>. I wish to delete all rows that don't have at least one appearance of the number "1" in column <code>d</code> among all rows with the same number in column <code>b</code> (same id), so after filtering I will have the following results:</p>
<pre><code>[[20 0 5 1]
[20 0 5 1]
[20 0 5 0]
[20 2 5 1]
[20 3 5 0]
[20 3 5 0]
[20 3 5 1]]
</code></pre>
<p>All rows with <code>b = 1</code> and <code>b = 4</code> have been deleted from the data.</p>
<p>To sum up, because I see answers that don't fit: we look at chunks of data by the <code>b</code> column. If a complete chunk of data doesn't have even one appearance of the number "1" in column <code>d</code>, we delete all the rows of that <code>b</code> item. In the example above we can see chunks of data with <code>b = 1</code> and <code>b = 4</code> ("id" = 1 and "id" = 4) that have 0 appearances of the number "1" in column <code>d</code>; that's why they get deleted from the data.</p>
| 4 | 2016-10-19T13:10:04Z | 40,132,496 | <p>This gets rid of all rows with 1 in the second position:</p>
<pre><code>[sublist for sublist in list_ if sublist[1] != 1]
</code></pre>
<p>This get's rid of all rows with 1 in the second position unless the fourth position is also 1:</p>
<pre><code>[sublist for sublist in list_ if not (sublist[1] == 1 and sublist[3] != 1) ]
</code></pre>
| 0 | 2016-10-19T13:16:29Z | [
"python",
"arrays",
"numpy"
] |
python filter 2d array by a chunk of data | 40,132,352 | <pre><code>import numpy as np
data = np.array([
[20, 0, 5, 1],
[20, 0, 5, 1],
[20, 0, 5, 0],
[20, 1, 5, 0],
[20, 1, 5, 0],
[20, 2, 5, 1],
[20, 3, 5, 0],
[20, 3, 5, 0],
[20, 3, 5, 1],
[20, 4, 5, 0],
[20, 4, 5, 0],
[20, 4, 5, 0]
])
</code></pre>
<p>I have the following 2d array. Let's call the fields <code>a, b, c, d</code> in the above order, where column <code>b</code> is like an <code>id</code>. I wish to delete all rows that don't have at least one appearance of the number "1" in column <code>d</code> among all rows with the same number in column <code>b</code> (same id), so after filtering I will have the following results:</p>
<pre><code>[[20 0 5 1]
[20 0 5 1]
[20 0 5 0]
[20 2 5 1]
[20 3 5 0]
[20 3 5 0]
[20 3 5 1]]
</code></pre>
<p>All rows with <code>b = 1</code> and <code>b = 4</code> have been deleted from the data.</p>
<p>To sum up, because I see answers that don't fit: we look at chunks of data by the <code>b</code> column. If a complete chunk of data doesn't have even one appearance of the number "1" in column <code>d</code>, we delete all the rows of that <code>b</code> item. In the example above we can see chunks of data with <code>b = 1</code> and <code>b = 4</code> ("id" = 1 and "id" = 4) that have 0 appearances of the number "1" in column <code>d</code>; that's why they get deleted from the data.</p>
| 4 | 2016-10-19T13:10:04Z | 40,134,398 | <p><strong>Generic approach :</strong> Here's an approach using <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.unique.html" rel="nofollow"><code>np.unique</code></a> and <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.bincount.html" rel="nofollow"><code>np.bincount</code></a> to solve for a generic case -</p>
<pre><code>unq,tags = np.unique(data[:,1],return_inverse=1)
goodIDs = np.flatnonzero(np.bincount(tags,data[:,3]==1)>=1)
out = data[np.in1d(tags,goodIDs)]
</code></pre>
<p>Sample run -</p>
<pre><code>In [15]: data
Out[15]:
array([[20, 10, 5, 1],
[20, 73, 5, 0],
[20, 73, 5, 1],
[20, 31, 5, 0],
[20, 10, 5, 1],
[20, 10, 5, 0],
[20, 42, 5, 1],
[20, 54, 5, 0],
[20, 73, 5, 0],
[20, 54, 5, 0],
[20, 54, 5, 0],
[20, 31, 5, 0]])
In [16]: out
Out[16]:
array([[20, 10, 5, 1],
[20, 73, 5, 0],
[20, 73, 5, 1],
[20, 10, 5, 1],
[20, 10, 5, 0],
[20, 42, 5, 1],
[20, 73, 5, 0]])
</code></pre>
<p><strong>Specific case approach :</strong> If the second column data is always sorted and have sequential numbers starting from <code>0</code>, we can use a simplified version, like so -</p>
<pre><code>goodIDs = np.flatnonzero(np.bincount(data[:,1],data[:,3]==1)>=1)
out = data[np.in1d(data[:,1],goodIDs)]
</code></pre>
<p>Sample run -</p>
<pre><code>In [44]: data
Out[44]:
array([[20, 0, 5, 1],
[20, 0, 5, 1],
[20, 0, 5, 0],
[20, 1, 5, 0],
[20, 1, 5, 0],
[20, 2, 5, 1],
[20, 3, 5, 0],
[20, 3, 5, 0],
[20, 3, 5, 1],
[20, 4, 5, 0],
[20, 4, 5, 0],
[20, 4, 5, 0]])
In [45]: out
Out[45]:
array([[20, 0, 5, 1],
[20, 0, 5, 1],
[20, 0, 5, 0],
[20, 2, 5, 1],
[20, 3, 5, 0],
[20, 3, 5, 0],
[20, 3, 5, 1]])
</code></pre>
<p>Also, if <code>data[:,3]</code> always have ones and zeros, we can just use <code>data[:,3]</code> in place of <code>data[:,3]==1</code> in the above listed codes.</p>
<hr>
<p><strong>Benchmarking</strong> </p>
<p>Let's benchmark the vectorized approaches on the specific case for a larger array -</p>
<pre><code>In [69]: def logical_or_based(data): #@ Eric's soln
...: b_vals = data[:,1]
...: d_vals = data[:,3]
...: is_ok = np.zeros(np.max(b_vals) + 1, dtype=np.bool_)
...: np.logical_or.at(is_ok, b_vals, d_vals)
...: return is_ok[b_vals]
...:
...: def in1d_based(data):
...: goodIDs = np.flatnonzero(np.bincount(data[:,1],data[:,3])!=0)
...: out = np.in1d(data[:,1],goodIDs)
...: return out
...:
In [70]: # Setup input
...: data = np.random.randint(0,100,(10000,4))
...: data[:,1] = np.sort(np.random.randint(0,100,(10000)))
...: data[:,3] = np.random.randint(0,2,(10000))
...:
In [71]: %timeit logical_or_based(data) #@ Eric's soln
1000 loops, best of 3: 1.44 ms per loop
In [72]: %timeit in1d_based(data)
1000 loops, best of 3: 528 µs per loop
</code></pre>
| 3 | 2016-10-19T14:32:19Z | [
"python",
"arrays",
"numpy"
] |
python filter 2d array by a chunk of data | 40,132,352 | <pre><code>import numpy as np
data = np.array([
[20, 0, 5, 1],
[20, 0, 5, 1],
[20, 0, 5, 0],
[20, 1, 5, 0],
[20, 1, 5, 0],
[20, 2, 5, 1],
[20, 3, 5, 0],
[20, 3, 5, 0],
[20, 3, 5, 1],
[20, 4, 5, 0],
[20, 4, 5, 0],
[20, 4, 5, 0]
])
</code></pre>
<p>I have the following 2d array. Let's call the fields <code>a, b, c, d</code> in the above order, where column <code>b</code> is like an <code>id</code>. I wish to delete all rows that don't have at least one appearance of the number "1" in column <code>d</code> among all rows with the same number in column <code>b</code> (same id), so after filtering I will have the following results:</p>
<pre><code>[[20 0 5 1]
[20 0 5 1]
[20 0 5 0]
[20 2 5 1]
[20 3 5 0]
[20 3 5 0]
[20 3 5 1]]
</code></pre>
<p>All rows with <code>b = 1</code> and <code>b = 4</code> have been deleted from the data.</p>
<p>To sum up, because I see answers that don't fit: we look at chunks of data by the <code>b</code> column. If a complete chunk of data doesn't have even one appearance of the number "1" in column <code>d</code>, we delete all the rows of that <code>b</code> item. In the example above we can see chunks of data with <code>b = 1</code> and <code>b = 4</code> ("id" = 1 and "id" = 4) that have 0 appearances of the number "1" in column <code>d</code>; that's why they get deleted from the data.</p>
| 4 | 2016-10-19T13:10:04Z | 40,134,524 | <p>Let's assume the following:</p>
<ul>
<li><code>b >= 0</code></li>
<li><code>b</code> is an integer</li>
<li><code>b</code> is fairly dense, ie <code>max(b) ~= len(unique(b))</code></li>
</ul>
<p>Here's a solution using <a href="https://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.ufunc.at.html" rel="nofollow"><code>np.ufunc.at</code></a>:</p>
<pre><code># unpack for clarity - this costs nothing in numpy
b_vals = data[:,1]
d_vals = data[:,3]
# build an array indexed by b values
is_ok = np.zeros(np.max(b_vals) + 1, dtype=np.bool_)
np.logical_or.at(is_ok, b_vals, d_vals)
# is_ok == array([ True, False, True, True, False], dtype=bool)
# take the rows which have a b value that was deemed OK
result = data[is_ok[b_vals]]
</code></pre>
<hr>
<p><code>np.logical_or.at(is_ok, b_vals, d_vals)</code> is a more efficient version of:</p>
<pre><code>for idx, val in zip(b_vals, d_vals):
is_ok[idx] = np.logical_or(is_ok[idx], val)
</code></pre>
| 0 | 2016-10-19T14:37:34Z | [
"python",
"arrays",
"numpy"
] |
python filter 2d array by a chunk of data | 40,132,352 | <pre><code>import numpy as np
data = np.array([
[20, 0, 5, 1],
[20, 0, 5, 1],
[20, 0, 5, 0],
[20, 1, 5, 0],
[20, 1, 5, 0],
[20, 2, 5, 1],
[20, 3, 5, 0],
[20, 3, 5, 0],
[20, 3, 5, 1],
[20, 4, 5, 0],
[20, 4, 5, 0],
[20, 4, 5, 0]
])
</code></pre>
<p>I have the following 2d array. Let's call the fields <code>a, b, c, d</code> in the above order, where column <code>b</code> is like an <code>id</code>. I wish to delete all rows that don't have at least one appearance of the number "1" in column <code>d</code> among all rows with the same number in column <code>b</code> (same id), so after filtering I will have the following results:</p>
<pre><code>[[20 0 5 1]
[20 0 5 1]
[20 0 5 0]
[20 2 5 1]
[20 3 5 0]
[20 3 5 0]
[20 3 5 1]]
</code></pre>
<p>All rows with <code>b = 1</code> and <code>b = 4</code> have been deleted from the data.</p>
<p>To sum up, because I see answers that don't fit: we look at chunks of data by the <code>b</code> column. If a complete chunk of data doesn't have even one appearance of the number "1" in column <code>d</code>, we delete all the rows of that <code>b</code> item. In the example above we can see chunks of data with <code>b = 1</code> and <code>b = 4</code> ("id" = 1 and "id" = 4) that have 0 appearances of the number "1" in column <code>d</code>; that's why they get deleted from the data.</p>
| 4 | 2016-10-19T13:10:04Z | 40,136,593 | <p>Untested since in a hurry, but this should work:</p>
<pre><code>import numpy_indexed as npi
g = npi.group_by(data[:, 1])
ids, valid = g.any(data[:, 3])
result = data[valid[g.inverse]]
</code></pre>
| 0 | 2016-10-19T16:09:35Z | [
"python",
"arrays",
"numpy"
] |
Mark as unseen on Gmail (imaplib) | 40,132,420 | <p>I'm trying to mark an email as unseen on the Gmail server.</p>
<p>I'm using this command:</p>
<pre><code>res, data = mailbox.uid('STORE', uid, '-FLAGS', '(\Seen)')
</code></pre>
<p>Everything goes OK, but when I check it using a web browser it's still marked as seen.
When I check the flags, here's what I got:</p>
<pre><code> b'46 (FLAGS (-FLAGS \\Seen))'
</code></pre>
<p>I've seen multiple questions on this issue but none of the proposed solutions work. </p>
<p>Just to mention that I'm appending this email using:</p>
<pre><code>mailbox.append(db_email.folder, "-FLAGS \Seen", time.mktime(db_email.date.timetuple()), mail.as_bytes())
</code></pre>
<p>But the flag parameter <code>-FLAGS \Seen</code> does not have any effect, since the result is the same when I don't pass a flag argument.</p>
<p>Also, I've double-checked the <code>uid</code> for the given mail folder and it matches the appropriate email.</p>
| 0 | 2016-10-19T13:12:59Z | 40,138,902 | <p>It appears you've misunderstood flags on APPEND a bit.</p>
<p>By doing <code>APPEND folder (-FLAGS \Seen) ...</code> you've actually created a message with two flags: The standard <code>\Seen</code> flag, and a nonstandard <code>-FLAGS</code> flag.</p>
<p>To create a message without the \Seen flag, just use <code>()</code> as your flag list for <code>APPEND</code>.</p>
<p><code>-FLAGS</code> is a subcommand to STORE, saying to remove these flags from the current list. Conversely, <code>+FLAGS</code> is add these flags to the current list. The plain <code>FLAGS</code> overwrites the current list.</p>
<p>Also, if you do remove the <code>\Seen</code> flag over an IMAP connection, it can take sometime to show up in the GMail WebUI. You may need to refresh or switch folders to get the changes to render.</p>
<p>NB: You are not protecting your backslashes. <code>\S</code> is not a legal escape sequence, so will be passed through, but you should either use a double backslash (<code>'\\Seen'</code>) or a raw string (<code>r'\Seen'</code>)</p>
| 2 | 2016-10-19T18:23:14Z | [
"python",
"email",
"gmail",
"imap",
"imaplib"
] |
How do I schedule a job in Django? | 40,132,576 | <p>I have to schedule a job using <a href="https://pypi.python.org/pypi/schedule" rel="nofollow">Schedule</a> on my <a href="https://www.djangoproject.com/" rel="nofollow">django</a> web application.</p>
<pre><code>def new_job(request):
    print("I'm working...")
    file = schedulesdb.objects.filter(user=request.user, f_name__icontains="mp4").last()
    file_initiated = str(file.f_name)
    os.startfile(file_initiated)
</code></pre>
<p>I need to do it with the filtered time from the db:</p>
<pre><code>GIVEN_DATETIME = schedulesdb.objects.datetimes('request_time', 'second').last()
schedule.GIVEN_DATETIME.do(job)  # pseudocode: run the job at that datetime
</code></pre>
| -1 | 2016-10-19T13:19:16Z | 40,139,508 | <p>Django is a web framework. It receives a request, does whatever processing is necessary and sends out a response. It doesn't have any persistent process that could keep track of time and run scheduled tasks, so there is no good way to do it using just Django.</p>
<p>That said, Celery (<a href="http://www.celeryproject.org/" rel="nofollow">http://www.celeryproject.org/</a>) is a python framework specifically built to run tasks, both scheduled and on-demand. It also integrates with Django ORM with minimal configuration. I suggest you look into it.</p>
<p>You could, of course, write your own external script that would use the schedule module you mentioned. You would need to implement a way to write schedule objects into the database, and then you could have your script read and execute them. Is your "schedulesdb" model already implemented?</p>
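<p>For reference, a minimal sketch of a Celery task (all names here are illustrative; the beat schedule setting was <code>CELERYBEAT_SCHEDULE</code> in Celery 3 and <code>beat_schedule</code> in Celery 4):</p>

<pre><code># tasks.py inside your Django app
from celery import shared_task

@shared_task
def start_scheduled_file():
    # query schedulesdb and launch the file here
    pass
</code></pre>

<p>The task can then be run periodically by <code>celery beat</code>, or triggered on demand from a view with <code>start_scheduled_file.delay()</code>.</p>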
| 0 | 2016-10-19T19:00:07Z | [
"python",
"django"
] |
Python cprofiler a function | 40,132,630 | <p>How do I profile one function with cProfile?</p>
<pre><code>label = process_one(signature)
</code></pre>
<p>becomes</p>
<pre><code>import cProfile
label = cProfile.run(process_one(signature))
</code></pre>
<p>but it didn't work :/</p>
| 0 | 2016-10-19T13:21:22Z | 40,133,433 | <p>According to the documentation (<a href="https://docs.python.org/2/library/profile.html" rel="nofollow">https://docs.python.org/2/library/profile.html</a>) it should be <code>cProfile.run('process_one(signature)')</code> -- the statement to profile is passed as a string.</p>
<p>Also, look at the answer <a href="http://stackoverflow.com/a/17259420/1966790">http://stackoverflow.com/a/17259420/1966790</a>.</p>
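<p>Note that <code>cProfile.run()</code> always returns <code>None</code>, so it can't be assigned to <code>label</code> as in the question. A sketch that keeps the function's return value while profiling:</p>

<pre><code>import cProfile

pr = cProfile.Profile()
pr.enable()
label = process_one(signature)   # the return value is preserved
pr.disable()
pr.print_stats(sort='cumulative')
</code></pre>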
| 1 | 2016-10-19T13:55:36Z | [
"python",
"profiling"
] |
Python cprofiler a function | 40,132,630 | <p>How do I profile one function with cProfile?</p>
<pre><code>label = process_one(signature)
</code></pre>
<p>becomes</p>
<pre><code>import cProfile
label = cProfile.run(process_one(signature))
</code></pre>
<p>but it didn't work :/</p>
| 0 | 2016-10-19T13:21:22Z | 40,134,116 | <p>You can write a decorator which is helpful for profiling any function in general with cProfile. This helps me quickly get stats when I need them.</p>
<pre><code>import cProfile
import pstats
import StringIO
import commands
def qprofile(func):
def profiled_func(*args, **kwargs):
if 'profile' in kwargs and kwargs['profile']:
kwargs.pop('profile')
profile = cProfile.Profile()
try:
profile.enable()
result = func(*args, **kwargs)
profile.disable()
return result
finally:
s = StringIO.StringIO()
ps = pstats.Stats(
profile, stream=s).strip_dirs(
).sort_stats('cumulative')
ps.print_stats(30)
print s.getvalue()
else:
result = func(*args, **kwargs)
return result
return profiled_func
@qprofile
def process_one(cmd):
output = commands.getoutput(cmd)
return output
# Function is profiled if profile=True in kwargs
print(process_one('uname -a', profile=True))
</code></pre>
<p>Sample Output:</p>
<pre><code> 7 function calls in 0.013 seconds
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.013 0.013 qprofiler.py:29(process_one)
1 0.000 0.000 0.013 0.013 commands.py:48(getoutput)
1 0.000 0.000 0.013 0.013 commands.py:56(getstatusoutput)
1 0.013 0.013 0.013 0.013 {method 'read' of 'file' objects}
1 0.000 0.000 0.000 0.000 {posix.popen}
1 0.000 0.000 0.000 0.000 {method 'close' of 'file' objects}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
Linux chronin 4.4.0-42-generic #62-Ubuntu SMP Fri Oct 7 23:11:45 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
</code></pre>
<p>Please refer official documentation for any call specific references,
<a href="https://docs.python.org/2/library/profile.html" rel="nofollow">https://docs.python.org/2/library/profile.html</a></p>
| 1 | 2016-10-19T14:21:48Z | [
"python",
"profiling"
] |
Rename python click argument | 40,132,771 | <p>I have this chunk of code:</p>
<pre><code>import click
@click.option('--delete_thing', help="Delete some things columns.", default=False)
def cmd_do_this(delete_thing=False):
print "I deleted the thing."
</code></pre>
<p>I would like to rename the option variable to <code>--delete-thing</code>. But Python does not allow dashes in variable names. Is there a way to write this kind of code?</p>
<pre><code>import click
@click.option('--delete-thing', help="Delete some things columns.", default=False, store_variable=delete_thing)
def cmd_do_this(delete_thing=False):
print "I deleted the thing."
</code></pre>
<p>So <code>delete_thing</code> will be set to the value of <code>delete-thing</code></p>
| 3 | 2016-10-19T13:26:49Z | 40,135,726 | <p>By default, click will intelligently map intra-option commandline hyphens to underscores, so your code should work as-is. This is used in the click documentation, e.g., in the <a href="http://click.pocoo.org/5/options/#choice-options" rel="nofollow">Choice example</a>. If --delete-thing is intended to be a boolean option, you may also want to make it a <a href="http://click.pocoo.org/5/options/#boolean-flags" rel="nofollow">boolean flag</a>.</p>
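<p>A quick sketch of that (with <code>is_flag</code> making it a boolean switch; the hyphen in the option name maps to an underscore in the function parameter):</p>

<pre><code>import click

@click.command()
@click.option('--delete-thing', is_flag=True, default=False,
              help="Delete some things columns.")
def cmd_do_this(delete_thing):
    if delete_thing:
        print "I deleted the thing."
</code></pre>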
| 2 | 2016-10-19T15:27:30Z | [
"python",
"command-line-arguments",
"python-click"
] |
msgpack unpacks the number '10' between each item | 40,132,832 | <p>I'm trying to use <a href="https://pypi.python.org/pypi/msgpack-python" rel="nofollow">msgpack</a> to write a list of dictionaries to a file. However, when I iterate over an instance of <code>Unpacker</code>, it seems like the number <code>10</code> is unpacked between each 'real' document.</p>
<p>The test script I'm running is</p>
<pre><code>import msgpack
from faker import Faker
import logging
from logging.handlers import RotatingFileHandler
fake = Faker()
fake.seed(0)
data_file = "my_log.log"
logger = logging.getLogger('my_logger')
logger.setLevel(logging.DEBUG)
handler = RotatingFileHandler(data_file, maxBytes=2000, backupCount=10)
logger.addHandler(handler)
fake_dicts = [{'name': fake.name()} for _ in range(100)]
for item in fake_dicts:
dump_string = msgpack.packb(item)
logger.debug(dump_string)
unpacker = msgpack.Unpacker(open(data_file))
for unpacked in unpacker:
print unpacked
</code></pre>
<p>where I've used <a href="https://pypi.python.org/pypi/fake-factory" rel="nofollow">fake-factory</a> to generate fake data. The resulting printed output is as follows:</p>
<pre><code>{'name': 'Joshua Carter'}
10
{'name': 'David Williams'}
10
{'name': 'Joseph Jones'}
10
{'name': 'Gary Perry'}
10
{'name': 'Terry Wells'}
10
{'name': 'Vanessa Cooper'}
10
{'name': 'Michael Simmons'}
10
{'name': 'Nicholas Kline'}
10
{'name': 'Lori Bennett'}
10
</code></pre>
<p>I don't understand why the number <code>10</code> is printed between each dictionary? Is this somehow introduced by the <code>logger</code>?</p>
| 1 | 2016-10-19T13:30:19Z | 40,133,035 | <p>This is coming from the contents of the unpacker itself: the logging handler appends a newline after each record, and that <code>\n</code> byte (<code>0x0a</code>) decodes as the msgpack positive fixint <code>10</code>. You can replicate it yourself like this:</p>
<pre><code>In [23]: unpacker = msgpack.Unpacker(open(data_file))
In [24]: unpacker.next()
Out[24]: {'name': 'Edward Ruiz'}
In [25]: unpacker.next()
Out[25]: 10
</code></pre>
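<p>A sketch that avoids the stray bytes entirely: write the packed messages to the file yourself instead of routing them through the logging framework (the handler's newline terminator is what introduces the extra <code>0x0a</code> byte):</p>

<pre><code>with open(data_file, 'ab') as f:
    for item in fake_dicts:
        f.write(msgpack.packb(item))
</code></pre>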
| 2 | 2016-10-19T13:39:34Z | [
"python"
] |
Sort xml with python by tag | 40,132,918 | <p>I have an xml</p>
<pre><code><root>
<node1>
<B>text</B>
<A>another_text</A>
<C>one_more_text</C>
</node1>
<node2>
<C>one_more_text</C>
<B>text</B>
<A>another_text</A>
</node2>
</root>
</code></pre>
<p>I want to get output like:</p>
<pre><code><root>
<node1>
<A>another_text</A>
<B>text</B>
<C>one_more_text</C>
</node1>
<node2>
<A>another_text</A>
<B>text</B>
<C>one_more_text</C>
</node2>
</root>
</code></pre>
<p>I tried with some code like:</p>
<pre><code>from xml.etree import ElementTree as et
tr = et.parse(path_in)
root = tr.getroot()
for children in root.getchildren():
for child in children.getchildren():
# sort it
tr.write(path_out)
</code></pre>
<p>I cannot use the standard functions <code>sort</code> and <code>sorted</code> directly because they sort the wrong way (not by tag).
Thanks in advance.</p>
| 1 | 2016-10-19T13:34:23Z | 40,133,028 | <p>From a similar question : </p>
<pre><code>from lxml import etree
data = """<X>
<X03>3</X03>
<X02>2</X02>
<A>
<A02>Y</A02>
<A01>X</A01>
<A03>Z</A03>
</A>
<X01>1</X01>
<B>
<B01>Z</B01>
<B02>X</B02>
<B03>C</B03>
</B>
</X>"""
doc = etree.XML(data,etree.XMLParser(remove_blank_text=True))
for parent in doc.xpath('//*[./*]'): # Search for parent elements
parent[:] = sorted(parent,key=lambda x: x.tag)
print etree.tostring(doc,pretty_print=True)
</code></pre>
<p>result : </p>
<pre><code><X>
<A>
<A01>X</A01>
<A02>Y</A02>
<A03>Z</A03>
</A>
<B>
<B01>Z</B01>
<B02>X</B02>
<B03>C</B03>
</B>
<X01>1</X01>
<X02>2</X02>
<X03>3</X03>
</X>
</code></pre>
<p>You can find more information here : <a href="http://effbot.org/zone/element-sort.htm" rel="nofollow">http://effbot.org/zone/element-sort.htm</a></p>
| 1 | 2016-10-19T13:39:21Z | [
"python",
"xml",
"sorting"
] |
Sort xml with python by tag | 40,132,918 | <p>I have an xml</p>
<pre><code><root>
<node1>
<B>text</B>
<A>another_text</A>
<C>one_more_text</C>
</node1>
<node2>
<C>one_more_text</C>
<B>text</B>
<A>another_text</A>
</node2>
</root>
</code></pre>
<p>I want to get output like:</p>
<pre><code><root>
<node1>
<A>another_text</A>
<B>text</B>
<C>one_more_text</C>
</node1>
<node2>
<A>another_text</A>
<B>text</B>
<C>one_more_text</C>
</node2>
</root>
</code></pre>
<p>I tried with some code like:</p>
<pre><code>from xml.etree import ElementTree as et
tr = et.parse(path_in)
root = tr.getroot()
for children in root.getchildren():
for child in children.getchildren():
# sort it
tr.write(path_out)
</code></pre>
<p>I cannot use the standard functions <code>sort</code> and <code>sorted</code> directly because they sort the wrong way (not by tag).
Thanks in advance.</p>
| 1 | 2016-10-19T13:34:23Z | 40,133,057 | <p>You need to:</p>
<ul>
<li>get the children elements for every top-level "node"</li>
<li>sort them by the <a href="https://docs.python.org/2/library/xml.etree.elementtree.html#xml.etree.ElementTree.Element.tag" rel="nofollow"><code>tag</code> attribute</a> (node's name)</li>
<li>reset the child nodes of each top-level node</li>
</ul>
<p>Sample working code:</p>
<pre><code>from operator import attrgetter
from xml.etree import ElementTree as et
data = """ <root>
<node1>
<B>text</B>
<A>another_text</A>
<C>one_more_text</C>
</node1>
<node2>
<C>one_more_text</C>
<B>text</B>
<A>another_text</A>
</node2>
</root>"""
root = et.fromstring(data)
for node in root.findall("*"): # searching top-level nodes only: node1, node2 ...
node[:] = sorted(node, key=attrgetter("tag"))
print(et.tostring(root))
</code></pre>
<p>Prints:</p>
<pre><code><root>
<node1>
<A>another_text</A>
<B>text</B>
<C>one_more_text</C>
</node1>
<node2>
<A>another_text</A>
<B>text</B>
<C>one_more_text</C>
</node2>
</root>
</code></pre>
<p>Note that we are not using <a href="https://docs.python.org/2/library/xml.etree.elementtree.html#xml.etree.ElementTree.Element.getchildren" rel="nofollow"><code>getchildren()</code> method</a> here (it is actually <em>deprecated</em> since Python 2.7) - using the fact that each <code>Element</code> instance is an iterable over the child nodes.</p>
| 1 | 2016-10-19T13:40:16Z | [
"python",
"xml",
"sorting"
] |
Set specific values in a mixed valued DataFrame to fixed value? | 40,132,927 | <p>I have a data frame with response and predictor variables in the columns and observations in the rows. Some of the values in the responses are below a given limit of detection (LOD). As I am planning to apply a rank transformation on the responses, I would like to set all those values equal to LOD. Say, the data frame is</p>
<pre><code>data.head()
age response1 response2 response3 risk sex smoking
0 33 0.272206 0.358059 0.585652 no female yes
1 38 0.425486 0.675391 0.721062 yes female no
2 20 0.910602 0.200606 0.664955 yes female no
3 38 0.966014 0.584317 0.923788 yes female no
4 27 0.756356 0.550512 0.106534 no female yes
</code></pre>
<p>I would like to do</p>
<pre><code>responses = ['response1', 'response2', 'response3']
LOD = 0.2
data[responses][data[responses] <= LOD] = LOD
</code></pre>
<p>which for multiple reasons does not work (among others, because pandas doesn't know whether it should produce a view on the data or not, and it won't, I guess).</p>
<p>How do I set all values in</p>
<pre><code>data[responses] <= LOD
</code></pre>
<p>equal to LOD?</p>
<hr>
<p>Minimal example:</p>
<pre><code>import numpy as np
import pandas as pd
from pandas import Series, DataFrame
x = Series(np.random.randint(0,2,50), dtype='category')
x.cat.categories = ['no', 'yes']
y = Series(np.random.randint(0,2,50), dtype='category')
y.cat.categories = ['no', 'yes']
z = Series(np.random.randint(0,2,50), dtype='category')
z.cat.categories = ['male', 'female']
a = Series(np.random.randint(20,60,50), dtype='category')
data = DataFrame({'risk':x, 'smoking':y, 'sex':z,
                  'response1': np.random.rand(50),
                  'response2': np.random.rand(50),
                  'response3': np.random.rand(50),
                  'age':a})
</code></pre>
| 1 | 2016-10-19T13:34:44Z | 40,133,189 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.mask.html" rel="nofollow"><code>DataFrame.mask</code></a>:</p>
<pre><code>import numpy as np
import pandas as pd
np.random.seed(123)
x = pd.Series(np.random.randint(0,2,10), dtype='category')
x.cat.categories = ['no', 'yes']
y = pd.Series(np.random.randint(0,2,10), dtype='category')
y.cat.categories = ['no', 'yes']
z = pd.Series(np.random.randint(0,2,10), dtype='category')
z.cat.categories = ['male', 'female']
a = pd.Series(np.random.randint(20,60,10), dtype='category')
data = pd.DataFrame({
'risk':x,
'smoking':y,
'sex':z,
'response1': np.random.rand(10),
'response2': np.random.rand(10),
'response3': np.random.rand(10),
'age':a})
print (data)
age response1 response2 response3 risk sex smoking
0 24 0.722443 0.425830 0.866309 no male yes
1 23 0.322959 0.312261 0.250455 yes male yes
2 22 0.361789 0.426351 0.483034 no female no
3 40 0.228263 0.893389 0.985560 no female yes
4 59 0.293714 0.944160 0.519485 no female no
5 22 0.630976 0.501837 0.612895 no male yes
6 40 0.092105 0.623953 0.120629 no female no
7 27 0.433701 0.115618 0.826341 yes male yes
8 55 0.430863 0.317285 0.603060 yes male yes
9 48 0.493685 0.414826 0.545068 no male no
</code></pre>
<pre><code>responses = ['response1', 'response2', 'response3']
LOD = 0.2
print (data[responses] <= LOD)
response1 response2 response3
0 False False False
1 False False False
2 False False False
3 False False False
4 False False False
5 False False False
6 True False True
7 False True False
8 False False False
9 False False False
data[responses] = data[responses].mask(data[responses] <= LOD, LOD)
print (data)
age response1 response2 response3 risk sex smoking
0 24 0.722443 0.425830 0.866309 no male yes
1 23 0.322959 0.312261 0.250455 yes male yes
2 22 0.361789 0.426351 0.483034 no female no
3 40 0.228263 0.893389 0.985560 no female yes
4 59 0.293714 0.944160 0.519485 no female no
5 22 0.630976 0.501837 0.612895 no male yes
6 40 0.200000 0.623953 0.200000 no female no
7 27 0.433701 0.200000 0.826341 yes male yes
8 55 0.430863 0.317285 0.603060 yes male yes
9 48 0.493685 0.414826 0.545068 no male no
</code></pre>
| 0 | 2016-10-19T13:45:02Z | [
"python",
"pandas",
"dataframe"
] |
How to group pandas DataFrame by varying dates? | 40,133,016 | <p>I am trying to roll up daily data into fiscal quarter data. For example, I have a table with fiscal quarter end dates:</p>
<pre><code>Company Period Quarter_End
M 2016Q1 05/02/2015
M 2016Q2 08/01/2015
M 2016Q3 10/31/2015
M 2016Q4 01/30/2016
WFM 2015Q2 04/12/2015
WFM 2015Q3 07/05/2015
WFM 2015Q4 09/27/2015
WFM 2016Q1 01/17/2016
</code></pre>
<p>and a table of daily data:</p>
<pre><code>Company Date Price
M 06/20/2015 1.05
M 06/22/2015 4.05
M 07/10/2015 3.45
M 07/29/2015 1.86
M 08/24/2015 1.58
M 09/02/2015 8.64
M 09/22/2015 2.56
M 10/20/2015 5.42
M 11/02/2015 1.58
M 11/24/2015 4.58
M 12/03/2015 6.48
M 12/05/2015 4.56
M 01/03/2016 7.14
M 01/30/2016 6.34
WFM 06/20/2015 1.05
WFM 06/22/2015 4.05
WFM 07/10/2015 3.45
WFM 07/29/2015 1.86
WFM 08/24/2015 1.58
WFM 09/02/2015 8.64
WFM 09/22/2015 2.56
WFM 10/20/2015 5.42
WFM 11/02/2015 1.58
WFM 11/24/2015 4.58
WFM 12/03/2015 6.48
WFM 12/05/2015 4.56
WFM 01/03/2016 7.14
WFM 01/17/2016 6.34
</code></pre>
<p>And I would like to create the table below.</p>
<pre><code>Company Period Quarter_end Sum(Price)
M 2016Q2 8/1/2015 10.41
M 2016Q3 10/31/2015 18.2
M 2016Q4 1/30/2016 30.68
WFM 2015Q3 7/5/2015 5.1
WFM 2015Q4 9/27/2015 18.09
WFM 2016Q1 1/17/2016 36.1
</code></pre>
<p>However, I don't know how to group by varying dates without looping through each record. Any help is greatly appreciated.</p>
<p>Thanks!</p>
| 4 | 2016-10-19T13:38:33Z | 40,133,488 | <p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge_ordered.html" rel="nofollow"><code>merge_ordered</code></a>:</p>
<pre><code>#first convert columns to datetime
df1.Quarter_End = pd.to_datetime(df1.Quarter_End)
df2.Date = pd.to_datetime(df2.Date)
df = pd.merge_ordered(df1,
df2,
left_on=['Company','Quarter_End'],
right_on=['Company','Date'],
how='outer')
print (df)
Company Period Quarter_End Date Price
0 M 2016Q1 2015-05-02 NaT NaN
1 M NaN NaT 2015-06-20 1.05
2 M NaN NaT 2015-06-22 4.05
3 M NaN NaT 2015-07-10 3.45
4 M NaN NaT 2015-07-29 1.86
5 M 2016Q2 2015-08-01 NaT NaN
6 M NaN NaT 2015-08-24 1.58
7 M NaN NaT 2015-09-02 8.64
8 M NaN NaT 2015-09-22 2.56
9 M NaN NaT 2015-10-20 5.42
10 M 2016Q3 2015-10-31 NaT NaN
11 M NaN NaT 2015-11-02 1.58
12 M NaN NaT 2015-11-24 4.58
13 M NaN NaT 2015-12-03 6.48
14 M NaN NaT 2015-12-05 4.56
15 M NaN NaT 2016-01-03 7.14
16 M 2016Q4 2016-01-30 2016-01-30 6.34
17 WFM 2015Q2 2015-04-12 NaT NaN
18 WFM NaN NaT 2015-06-20 1.05
19 WFM NaN NaT 2015-06-22 4.05
20 WFM 2015Q3 2015-07-05 NaT NaN
21 WFM NaN NaT 2015-07-10 3.45
22 WFM NaN NaT 2015-07-29 1.86
23 WFM NaN NaT 2015-08-24 1.58
24 WFM NaN NaT 2015-09-02 8.64
25 WFM NaN NaT 2015-09-22 2.56
26 WFM 2015Q4 2015-09-27 NaT NaN
27 WFM NaN NaT 2015-10-20 5.42
28 WFM NaN NaT 2015-11-02 1.58
29 WFM NaN NaT 2015-11-24 4.58
30 WFM NaN NaT 2015-12-03 6.48
31 WFM NaN NaT 2015-12-05 4.56
32 WFM NaN NaT 2016-01-03 7.14
33 WFM 2016Q1 2016-01-17 2016-01-17 6.34
</code></pre>
<p>Then backfill <code>NaN</code> in columns <code>Period</code> and <code>Quarter_End</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.bfill.html" rel="nofollow"><code>bfill</code></a> and aggregate with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.sum.html" rel="nofollow"><code>sum</code></a>. If you need to remove all NaN values, add <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dropna.html" rel="nofollow"><code>Series.dropna</code></a>, and finally <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.reset_index.html" rel="nofollow"><code>reset_index</code></a>:</p>
<pre><code>df.Period = df.Period.bfill()
df.Quarter_End = df.Quarter_End.bfill()
print (df.groupby(['Company','Period','Quarter_End'])['Price'].sum().dropna().reset_index())
Company Period Quarter_End Price
0 M 2016Q2 2015-08-01 10.41
1 M 2016Q3 2015-10-31 18.20
2 M 2016Q4 2016-01-30 30.68
3 WFM 2015Q3 2015-07-05 5.10
4 WFM 2015Q4 2015-09-27 18.09
5 WFM 2016Q1 2016-01-17 36.10
</code></pre>
| 5 | 2016-10-19T13:57:53Z | [
"python",
"pandas",
"numpy"
] |
How to group pandas DataFrame by varying dates? | 40,133,016 | <p>I am trying to roll up daily data into fiscal quarter data. For example, I have a table with fiscal quarter end dates:</p>
<pre><code>Company Period Quarter_End
M 2016Q1 05/02/2015
M 2016Q2 08/01/2015
M 2016Q3 10/31/2015
M 2016Q4 01/30/2016
WFM 2015Q2 04/12/2015
WFM 2015Q3 07/05/2015
WFM 2015Q4 09/27/2015
WFM 2016Q1 01/17/2016
</code></pre>
<p>and a table of daily data:</p>
<pre><code>Company Date Price
M 06/20/2015 1.05
M 06/22/2015 4.05
M 07/10/2015 3.45
M 07/29/2015 1.86
M 08/24/2015 1.58
M 09/02/2015 8.64
M 09/22/2015 2.56
M 10/20/2015 5.42
M 11/02/2015 1.58
M 11/24/2015 4.58
M 12/03/2015 6.48
M 12/05/2015 4.56
M 01/03/2016 7.14
M 01/30/2016 6.34
WFM 06/20/2015 1.05
WFM 06/22/2015 4.05
WFM 07/10/2015 3.45
WFM 07/29/2015 1.86
WFM 08/24/2015 1.58
WFM 09/02/2015 8.64
WFM 09/22/2015 2.56
WFM 10/20/2015 5.42
WFM 11/02/2015 1.58
WFM 11/24/2015 4.58
WFM 12/03/2015 6.48
WFM 12/05/2015 4.56
WFM 01/03/2016 7.14
WFM 01/17/2016 6.34
</code></pre>
<p>And I would like to create the table below.</p>
<pre><code>Company Period Quarter_end Sum(Price)
M 2016Q2 8/1/2015 10.41
M 2016Q3 10/31/2015 18.2
M 2016Q4 1/30/2016 30.68
WFM 2015Q3 7/5/2015 5.1
WFM 2015Q4 9/27/2015 18.09
WFM 2016Q1 1/17/2016 36.1
</code></pre>
<p>However, I don't know how to group by varying dates without looping through each record. Any help is greatly appreciated.</p>
<p>Thanks!</p>
| 4 | 2016-10-19T13:38:33Z | 40,133,589 | <ul>
<li><code>set_index</code></li>
<li><code>pd.concat</code> to align indices</li>
<li><code>groupby</code> with <code>agg</code></li>
</ul>
<hr>
<pre><code>prd_df = period_df.set_index(['Company', 'Quarter_End'])
prc_df = price_df.set_index(['Company', 'Date'], drop=False)
df = pd.concat([prd_df, prc_df], axis=1)
df.groupby([df.index.get_level_values(0), df.Period.bfill()]) \
.agg(dict(Date='last', Price='sum')).dropna()
</code></pre>
<p><a href="https://i.stack.imgur.com/EuJ86.png" rel="nofollow"><img src="https://i.stack.imgur.com/EuJ86.png" alt="enter image description here"></a></p>
| 3 | 2016-10-19T14:02:16Z | [
"python",
"pandas",
"numpy"
] |
Python: logging and TCP handler | 40,133,059 | <p>I wrote my TCP handler as follows (adapted from: <a href="https://docs.python.org/2/library/socketserver.html#socketserver-tcpserver-example" rel="nofollow">https://docs.python.org/2/library/socketserver.html#socketserver-tcpserver-example</a>):</p>
<pre><code>#!/usr/bin/env python
# -*- coding: UTF-8 -*-
import SocketServer
from MyModule import myFunction
class MyHandler(SocketServer.StreamRequestHandler):
def handle(self):
self.data = self.rfile.readline().strip()
result = myFunction(self.data)
self.wfile.write(result)
if __name__ == "__main__":
HOST, PORT = myhost, myport
server = SocketServer.TCPServer((HOST, PORT), MyHandler)
server.serve_forever()
</code></pre>
<p>It works perfectly and now I'm trying to add a logger:</p>
<pre><code>#!/usr/bin/env python
# -*- coding: UTF-8 -*-
import SocketServer
import logging
from logging.handlers import TimedRotatingFileHandler
from MyModule import myFunction
class MyHandler(SocketServer.StreamRequestHandler):
def __init__(self):
self.logger = logging.getLogger()
self.logger.setLevel(logging.DEBUG)
self.formatter = logging.Formatter('%(asctime)s :: %(levelname)s :: %(message)s')
self.file_handler = TimedRotatingFileHandler('my_log_file.log', when='D', interval=1, utc=True)
self.file_handler.setLevel(logging.DEBUG)
self.file_handler.setFormatter(self.formatter)
self.logger.addHandler(self.file_handler)
def handle(self):
self.data = self.rfile.readline().strip()
result = myFunction(self.data)
self.wfile.write(result)
self.logger.info(result)
if __name__ == "__main__":
HOST, PORT = myhost, myport
server = SocketServer.TCPServer((HOST, PORT), MyHandler)
server.serve_forever()
</code></pre>
<p>When I run it I get the following error:</p>
<p><code>TypeError: __init__() takes exactly 1 argument (4 given)</code></p>
<p>I don't understand what the 4 arguments given are.
Is there anything wrong with the code other than that?</p>
<p>EDIT: Full TraceBack:</p>
<pre><code>Exception happened during processing of request from ('MyIP', 54028)
Traceback (most recent call last):
File "/usr/lib/python2.7/SocketServer.py", line 290, in _handle_request_noblock
self.process_request(request, client_address)
File "/usr/lib/python2.7/SocketServer.py", line 318, in process_request
self.finish_request(request, client_address)
File "/usr/lib/python2.7/SocketServer.py", line 331, in finish_request
self.RequestHandlerClass(request, client_address, self)
TypeError: __init__() takes exactly 1 argument (4 given)
</code></pre>
| 0 | 2016-10-19T13:40:22Z | 40,133,612 | <p><code>MyHandler</code> is a subclass of <code>SocketServer.StreamRequestHandler</code> which is a subclass of <code>BaseRequestHandler</code>. The <a href="https://github.com/python/cpython/blob/2.7/Lib/SocketServer.py#L201" rel="nofollow">call signature of <code>BaseRequestHandler.__init__</code></a> is </p>
<pre><code>def __init__(self, request, client_address, server):
</code></pre>
<p>The traceback error message shows that inside the <a href="https://github.com/python/cpython/blob/2.7/Lib/SocketServer.py#L329" rel="nofollow"><code>BaseServer.finish_request</code> method</a></p>
<pre><code>self.RequestHandlerClass(request, client_address, self)
</code></pre>
<p>is called. <code>self.RequestHandlerClass</code> is <code>MyHandler</code>. Therefore,
<code>MyHandler.__init__</code> should have call signature</p>
<pre><code>class MyHandler(SocketServer.StreamRequestHandler):
def __init__(self, request, client_address, server):
</code></pre>
<p>instead of </p>
<pre><code>class MyHandler(SocketServer.StreamRequestHandler):
def __init__(self):
</code></pre>
<hr>
<p>When <code>self.RequestHandlerClass(request, client_address, self)</code> is called, Python instantiates
<code>MyHandler</code> and invokes its <code>__init__</code> with the new instance as the implicit first argument. In other words,
<code>MyHandler.__init__(instance, request, client_address, server)</code> gets called; those are the four arguments that are getting passed to <code>MyHandler</code>.
The error message</p>
<pre><code>TypeError: __init__() takes exactly 1 argument (4 given)
</code></pre>
<p>is complaining that <code>MyHandler.__init__</code> was defined to expect only 1 argument and yet it was being passed 4 arguments.</p>
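<p>A sketch of the corrected handler -- note the logger has to be set up <em>before</em> delegating to the base class, because <code>BaseRequestHandler.__init__</code> itself calls <code>self.handle()</code>:</p>

<pre><code>class MyHandler(SocketServer.StreamRequestHandler):
    def __init__(self, request, client_address, server):
        self.logger = logging.getLogger('my_tcp_handler')
        # ... file handler / formatter setup as in the question ...
        SocketServer.StreamRequestHandler.__init__(self, request,
                                                   client_address, server)
</code></pre>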
| 2 | 2016-10-19T14:03:08Z | [
"python",
"python-2.7",
"logging",
"tcp"
] |
How to return JSON from Python REST API | 40,133,216 | <p>I have a Python API that receives data from a MySQL select query. The data looks like this:</p>
<pre><code>| val | type | status |
|-----|------|--------|
| 90 | 1 | a |
</code></pre>
<p>That data was received well in Python. Now I want to present that data as JSON to my REST client - how?</p>
<p>Here is my python code:</p>
<pre><code>def somefunction(self, by, identifier):
# validate args
procedure = 'mysproc' + str(by)
try:
with self.connection.cursor() as cursor:
cursor.callproc(procedure,[str(identifier)])
self.connection.commit()
result = cursor.fetchone()
print("+++ Result: " + str(result) + " +++")
except:
result = "Request Failed"
raise
finally:
self.DestroyConnection()
return json.dumps(result)
</code></pre>
<p>with that, my client is receiving:</p>
<pre><code>"[90, 1, "a"]"
</code></pre>
<p>Question:</p>
<p>is there a way for me to receive it as a proper JSON? like:</p>
<pre><code>{'val': 90, 'type': 1, 'status': "a"}
</code></pre>
| 2 | 2016-10-19T13:46:03Z | 40,138,988 | <p>You will first need to get the mysql query to return a dict object instead of a list. If your library is MySQLdb then this answer: <a href="http://stackoverflow.com/questions/4147707/python-mysqldb-sqlite-result-as-dictionary">Python - mysqlDB, sqlite result as dictionary</a> is what you need.</p>
<p>Here is a link to the docs for MySQLdb: <a href="http://www.mikusa.com/python-mysql-docs/docs/MySQLdb.connections.html" rel="nofollow">http://www.mikusa.com/python-mysql-docs/docs/MySQLdb.connections.html</a></p>
<p>I think if you pass in the cursor class you want to use when you create your cursor the result of fetchone will be a dictionary. </p>
<pre><code>with self.connection.cursor(MySQLdb.cursors.DictCursor) as cursor:
</code></pre>
<p>Running json.dumps(result) on a dictionary will give the output you are looking for.</p>
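<p>A sketch of the question's try-block with a dict cursor (PyMySQL shown, since the question's <code>with ... as cursor</code> pattern matches it; with MySQLdb the class is <code>MySQLdb.cursors.DictCursor</code>):</p>

<pre><code>with self.connection.cursor(pymysql.cursors.DictCursor) as cursor:
    cursor.callproc(procedure, [str(identifier)])
    self.connection.commit()
    result = cursor.fetchone()  # {'val': 90, 'type': 1, 'status': 'a'}
return json.dumps(result)       # '{"val": 90, "type": 1, "status": "a"}'
</code></pre>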
| 1 | 2016-10-19T18:27:58Z | [
"python",
"mysql",
"json",
"rest",
"python-3.x"
] |
python: multiple .dat's in multiple arrays | 40,133,288 | <p>I'm trying to sort some data into (np.)arrays and got stuck on a problem.</p>
<p>I have 1000 .dat files and I need to put the data from them into 1000 different arrays. Further, every array should contain data depending on the coordinates [i] [j] [k] (this part I've done already, and the code looks like this (this is a kind of "short" version):</p>
<pre><code>with open('177500.dat', newline='') as csvfile:
f = csv.reader(csvfile, delimiter=' ')
for row in f:
<some code which works pretty good>
cV = [[[[] for k in range(kMax)] for j in range(jMax)] for i in range(iMax)]
with open('177500.dat', newline='') as csvfile:
f = csv.reader(csvfile, delimiter=' ')
<some code which works also good>
values = np.array([np.float64(row[i]) for i in range(3, rowLen)])
cV[int(row[0])][int(row[1])][int(row[2])] = values
</code></pre>
<p>After this, I can print cV [i] [j] [k] and get all the data contained in one .dat file at the coordinates [i] [j] [k].</p>
<p>And now I need to create cV [i] [j] [k] <strong>[n]</strong> to get the data from the specific file number <strong>n</strong> at the coordinates [i] [j] [k]. And I absolutely don't know how I can tell Python to put the data into the "right" place.</p>
<p>I tried some things like this:</p>
<pre><code>for m in range(160000,182501,2500):
with open ('output/%d.dat' % m, newline='') as csvfile:
<bla bla code>
cV = [[[[[] for k in range(kMax)] for j in range(jMax)] for i in range(iMax)] for n in range(tMax)]
if len(row) == rowLen:
values = [np.array([np.float64(row[i]) for i in range (3, rowLen)]) for n in range(tMax)]
for n in range(tMax):
cV[int(row[0])][int(row[1])][int(row[2])][int(n)] = values[n]
</code></pre>
<p>But this surely didn't work, because Python doesn't know what this <strong>[n]</strong> after the values is supposed to be.</p>
<p>So, how can I tell Python to put the [i] [j] [k] data from file nr. <strong>n</strong> in the array cV [i] [j] [k] <strong>[n]</strong>?</p>
<p>Thanks in advance</p>
<p>C.</p>
<p>P.S. I didn't post the whole code because I don't think it is necessary. All arrays are created properly, but the thing which isn't working is the data in them.</p>
| 0 | 2016-10-19T13:48:50Z | 40,135,989 | <p>I think building arrays like this is going to make things more complicated for you. It would be easier to build a dictionary using tuples as keys. In the example file you sent me, each <code>(x, y, z)</code> pair was repeated twice, making me think that each file contains data on <em>two</em> iterations of a total solution of 2000 iterations. Dictionaries must have unique keys, so for each file I have implemented another counter, <code>timestep</code>, that can increment when collating data from a single file.</p>
<p>Now, if I wanted coords (1, 2, 3) on the 3rd timestep, I could do <code>simulation[(1, 2, 3, 3)]</code>.</p>
<pre><code>import csv
import numpy as np
'''
Made the assumptions that:
-Each file contains two iterations from a simulation of 2000 iterations
-Each file is numbered sequentially. Each time the same (x, y, z) coords are
discovered, it represents the next timestep in simulation
Accessing data is via a tuple key (x, y, z, n) with n being timestep
'''
simulation = {}
file_count = 1
timestep = 1
num_files = 2
for _ in range(num_files):
    with open('sim_file_{}.dat'.format(file_count), 'r') as infile:
        second_read = False
        reader = csv.reader(infile, delimiter=' ')
        for row in reader:
            if row:  # skip blank lines before converting
                item = [float(val) for val in row]
                if (not second_read and not
                        any(simulation.get((item[0], item[1], item[2], timestep), []))):
                    timestep += 1
                    second_read = True
                simulation[(item[0], item[1], item[2], timestep)] = np.array(item[3:])
    file_count += 1
    timestep += 1
    second_read = False
</code></pre>
| 0 | 2016-10-19T15:40:00Z | [
"python",
"arrays"
] |
Reading files with hdfs3 fails | 40,133,440 | <p>I am trying to read a file on HDFS with Python using the hdfs3 module. </p>
<pre><code>import hdfs3
hdfs = hdfs3.HDFileSystem(host='xxx.xxx.com', port=12345)
hdfs.ls('/projects/samplecsv/part-r-00000')
</code></pre>
<p>This produces</p>
<pre><code>[{'block_size': 134345348,
'group': 'supergroup',
'kind': 'file',
'last_access': 1473453452,
'last_mod': 1473454723,
'name': '/projects/samplecsv/part-r-00000/',
'owner': 'dr',
'permissions': 420,
'replication': 3,
'size': 98765631}]
</code></pre>
<p>So it seems to be able to access the HDFS and read the directory structure. However, reading the file fails.</p>
<pre><code>with hdfs.open('/projects/samplecsv/part-r-00000', 'rb') as f:
print(f.read(100))
</code></pre>
<p>gives</p>
<pre><code>---------------------------------------------------------------------------
OSError Traceback (most recent call last)
.
.<snipped>
.
OSError: [Errno Read file /projects/samplecsv/part-r-00000 Failed:] 1
</code></pre>
<p>What could be the issue? I am using Python3.5.</p>
| 0 | 2016-10-19T13:55:52Z | 40,134,304 | <p>If you want to perform any operation on a file, you have to pass the full file path.</p>
<pre><code>import hdfs3
hdfs = hdfs3.HDFileSystem(host='xxx.xxx.com', port=12345)
hdfs.ls('/projects/samplecsv/part-r-00000')
# you have to add the file to the location first
hdfs.put('local-file.txt', '/projects/samplecsv/part-r-00000')
with hdfs.open('/projects/samplecsv/part-r-00000/local-file.txt', 'rb') as f:
print(f.read(100))
</code></pre>
| 0 | 2016-10-19T14:28:50Z | [
"python",
"hadoop",
"hdfs"
] |
Bokeh Python: Laying out multiple plots | 40,133,688 | <p>I want to arrange plots horizontally, using the hplot() function.
My problem is that I generate my plot names dynamically.
Dfdict is a dictionary of dataframes.</p>
<pre><code>for key in dfdict.keys():
plot[key] = BoxPlot(dfdict[key], values='oex', ...)
filename = '{}.html'.format(str(key))
output_file(filename)
show(plot[key])
p = hplot(plot.values())
show(p)
</code></pre>
<p>But I have an error:</p>
<p>ValueError: expected an element of List(Instance(Component)), got seq with invalid items [[, , , , , ]]</p>
<p>Thanks</p>
| 0 | 2016-10-19T14:06:18Z | 40,135,411 | <p>I solved it. Instead of this:</p>
<pre><code>p = hplot(plot.values())
</code></pre>
<p>I am using this, with <code>*</code> unpacking the dict values so each plot is passed as a separate argument:</p>
<pre><code>p = hplot(*plot.values())
</code></pre>
| 0 | 2016-10-19T15:13:13Z | [
"python",
"dictionary",
"bokeh"
] |
Bokeh Python: Laying out multiple plots | 40,133,688 | <p>I want to arrange plots horizontally, using the hplot() function.
My problem is that I generate my plot names dynamically.
Dfdict is a dictionary of dataframes.</p>
<pre><code>for key in dfdict.keys():
plot[key] = BoxPlot(dfdict[key], values='oex', ...)
filename = '{}.html'.format(str(key))
output_file(filename)
show(plot[key])
p = hplot(plot.values())
show(p)
</code></pre>
<p>But I have an error:</p>
<p>ValueError: expected an element of List(Instance(Component)), got seq with invalid items [[, , , , , ]]</p>
<p>Thanks</p>
| 0 | 2016-10-19T14:06:18Z | 40,136,450 | <p>Please note that <code>hplot</code> is deprecated in recent releases. You should use <code>bokeh.layouts.row</code>:</p>
<pre><code>from bokeh.layouts import row
# define some plots p1, p2, p3
layout = row(p1, p2, p3)
show(layout)
</code></pre>
<p>Functions like <code>row</code> (and previously <code>hplot</code>) take all the things to put in the row as individual arguments. </p>
<p>There is an entire section on layouts in the user's guide: </p>
<p><a href="http://bokeh.pydata.org/en/latest/docs/user_guide/layout.html" rel="nofollow">http://bokeh.pydata.org/en/latest/docs/user_guide/layout.html</a></p>
| 1 | 2016-10-19T16:02:02Z | [
"python",
"dictionary",
"bokeh"
] |
find repeated element in list of list python | 40,133,720 | <p>I have been struggling with this problem for two days and I need help with it. I need to find repeated elements in a list of lists
<code>list_of_list = [(a1, b1, c1), (a2, b2, c2), ..., (an, bn, cn)]</code> where "a" and "b" elements are integers and "c" elements are floats.</p>
<p>So if, for example, <code>a1 == a2</code> or <code>a1 == bn</code>, I need to create a new list with the entire list's elements, and I need to iterate this for all the lists (a, b, c) in the list of lists. To put it another way, I need all lists that have elements present in more than one list. I only need to compare the "a" and "b" elements, but I must keep the associated value "c" in the final list.</p>
<p>For example:</p>
<pre><code>list_of_list = [(1, 2, 4.99), (3, 6, 5.99), (1, 4, 3.00), (5, 1, 1.12), (7, 8, 1.99) ]
desired_result=[(1, 2, 4.99), (1, 4, 3.00), (5, 1, 1.12)]
</code></pre>
<p>I tried many ideas, but nothing nice came up:</p>
<pre><code>MI_network = [] #repeated elements list
genesis = list(complete_net) #clone to work on
genesis_next = list(genesis) #clone to remove elements in iterations
genesis_next.remove(genesis_next[0])
while genesis_next != []:
for x in genesis:
if x[0] in genesis_next and x[1] not in genesis_next:
MI_network.append(x)
if x[0] not in genesis_next and x[1] in genesis_next:
MI_network.append(x)
genesis_next.remove(genesis_next[0])
</code></pre>
| -1 | 2016-10-19T14:07:13Z | 40,134,148 | <p>You can count occurrences of specific list elements and take lists with counts > 1. Something like this, using <code>collections.defaultdict()</code>:</p>
<pre><code>>>> from collections import defaultdict
>>> count = defaultdict(int)
>>> for lst in list_of_list:
... count[lst[0]] += 1
... count[lst[1]] += 1
...
>>> [lst for lst in list_of_list if count[lst[0]] > 1 or count[lst[1]] > 1]
[(1, 2, 4.99), (1, 4, 3.0), (5, 1, 1.12)]
</code></pre>
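<p>The same tally can also be built in one pass with <code>collections.Counter</code>, counting only the first two fields of each tuple; a sketch equivalent to the above:</p>
<pre><code>from collections import Counter

count = Counter(x for tup in list_of_list for x in tup[:2])
result = [tup for tup in list_of_list if count[tup[0]] > 1 or count[tup[1]] > 1]
print(result)  # [(1, 2, 4.99), (1, 4, 3.0), (5, 1, 1.12)]
</code></pre>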
| 0 | 2016-10-19T14:22:43Z | [
"python",
"list",
"element"
] |
find repeated element in list of list python | 40,133,720 | <p>I have been struggling with this problem for two days and I need help with it. I need to find repeated elements in a list of lists
<code>list_of_list = [(a1, b1, c1), (a2, b2, c2), ..., (an, bn, cn)]</code> where "a" and "b" elements are integers and "c" elements are floats.</p>
<p>So if for example <code>a1 == a2</code> or <code>a1 == bn</code>, I need to create a new list with the entire list elements and I need to iterate this for all the lists (a, b, c) in the list of lists. To put it another way, I need all lists that have elements that are present in more than one list. I need to compare only "a" and "b" elements but obtain the associated value "c" in the final list.</p>
<p>For example:</p>
<pre><code>list_of_list = [(1, 2, 4.99), (3, 6, 5.99), (1, 4, 3.00), (5, 1, 1.12), (7, 8, 1.99) ]
desired_result=[(1, 2, 4.99), (1, 4, 3.00), (5, 1, 1.12)]
</code></pre>
<p>I tried many ideas, but nothing nice came up:</p>
<pre><code>MI_network = [] #repeated elements list
genesis = list(complete_net) #clone to work on
genesis_next = list(genesis) #clone to remove elements in iterations
genesis_next.remove(genesis_next[0])
while genesis_next != []:
for x in genesis:
if x[0] in genesis_next and x[1] not in genesis_next:
MI_network.append(x)
if x[0] not in genesis_next and x[1] in genesis_next:
MI_network.append(x)
genesis_next.remove(genesis_next[0])
</code></pre>
| -1 | 2016-10-19T14:07:13Z | 40,134,478 | <p>And this is how i would do it since i was not aware of the <code>collections.defaultdict()</code>.</p>
<pre><code>list_of_list = [(1, 2, 4.99), (3, 6, 5.99), (1, 4, 3.00), (5, 1, 1.12), (7, 8, 1.99) ]
results = []
for i_sub, subset in enumerate(list_of_list):
# test if ai == aj
rest = list_of_list[:i_sub] + list_of_list[i_sub + 1:]
if any(subset[0] == subrest[0] for subrest in rest):
results.append(subset)
# test if ai == bj
elif any(subset[0] == subrest[1] for subrest in rest):
results.append(subset)
# test if bi == aj
elif any(subset[1] == subrest[0] for subrest in rest):
results.append(subset)
print(results) # -> [(1, 2, 4.99), (1, 4, 3.0), (5, 1, 1.12)]
</code></pre>
| 0 | 2016-10-19T14:35:40Z | [
"python",
"list",
"element"
] |
find repeated element in list of list python | 40,133,720 | <p>I have been struggling with this problem for two days and I need help with it. I need to find repeated elements in a list of lists
<code>list_of_list = [(a1, b1, c1), (a2, b2, c2), ..., (an, bn, cn)]</code> where "a" and "b" elements are integers and "c" elements are floats.</p>
<p>So if for example <code>a1 == a2</code> or <code>a1 == bn</code>, I need to create a new list with the entire list elements and I need to iterate this for all the lists (a, b, c) in the list of lists. To put it another way, I need all lists that have elements that are present in more than one list. I need to compare only "a" and "b" elements but obtain the associated value "c" in the final list.</p>
<p>For example:</p>
<pre><code>list_of_list = [(1, 2, 4.99), (3, 6, 5.99), (1, 4, 3.00), (5, 1, 1.12), (7, 8, 1.99) ]
desired_result=[(1, 2, 4.99), (1, 4, 3.00), (5, 1, 1.12)]
</code></pre>
<p>I tried many ideas, but nothing nice came up:</p>
<pre><code>MI_network = [] #repeated elements list
genesis = list(complete_net) #clone to work on
genesis_next = list(genesis) #clone to remove elements in iterations
genesis_next.remove(genesis_next[0])
while genesis_next != []:
for x in genesis:
if x[0] in genesis_next and x[1] not in genesis_next:
MI_network.append(x)
if x[0] not in genesis_next and x[1] in genesis_next:
MI_network.append(x)
genesis_next.remove(genesis_next[0])
</code></pre>
| -1 | 2016-10-19T14:07:13Z | 40,136,216 | <p>Using your idea, you can try this:</p>
<pre><code>MI_network = []
complete_net = [(1, 2, 4.99), (3, 6, 5.99), (1, 4, 3.00), (5, 1, 1.12), (7, 8, 1.99)]
genesis = list(complete_net)
while genesis != []:
for x in genesis:
for gen in genesis:
if x[0] in gen and x[1] not in gen:
if x[0] != gen[2] and x[1] != gen[2]:
if x not in MI_network:
MI_network.append(x)
elif x[0] not in gen and x[1] in gen:
if x[0] != gen[2] and x[1] != gen[2]:
if x not in MI_network:
MI_network.append(x)
elif x[0] not in gen and x[1] not in gen:
pass
genesis.remove(genesis[0])
print(MI_network)
[(1, 2, 4.99), (1, 4, 3.0), (5, 1, 1.12)]
</code></pre>
| 0 | 2016-10-19T15:50:19Z | [
"python",
"list",
"element"
] |
can you help me to optimize this code | 40,133,742 | <p>Can you help me optimize this code?</p>
<pre><code>def calc_potential(time, firstsale, lastsale, sold, supplied):
retval = []
for t, f, l, c, s in zip(time, firstsale, lastsale, sold, supplied):
try:
if s > c:
retval.append(c)
else:
s = (l - t).total_seconds() / 3600.
d = ((t - f).total_seconds() / 3600.) / c
retval.append(s / d + c)
except:
retval.append(None)
return retval
</code></pre>
| -5 | 2016-10-19T14:08:04Z | 40,133,990 | <p>Keeping in mind some of the comments (i.e. this is less about optimizing and more about fixing broken code), I can point you in the right direction:</p>
<p>To replace this section of code: </p>
<pre><code>if s > c:
retval.append(c)
</code></pre>
<p>For something more efficient, try list comprehension:</p>
<pre><code>retval = [c for c, s in zip(sold, supplied) if s > c]
</code></pre>
<p>If you do something similar for the code in the <code>else</code> branch as well and then combine both lists, you will have one possible way of getting what you want; a fuller sketch follows below.</p>
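<p>Since the original function returns one value per input row, order matters, so the two pieces are easiest to merge back inside a single comprehension. A hedged sketch of the whole function, with the per-row logic factored into a helper and the bare <code>except</code> narrowed to the errors it plausibly guards against:</p>
<pre><code>def calc_potential(time, firstsale, lastsale, sold, supplied):
    def potential(t, f, l, c, s):
        try:
            if s > c:
                return c
            hours_left = (l - t).total_seconds() / 3600.0
            hours_per_sale = ((t - f).total_seconds() / 3600.0) / c
            return hours_left / hours_per_sale + c
        except (TypeError, AttributeError, ZeroDivisionError):
            return None  # bad or missing inputs

    return [potential(*row) for row in zip(time, firstsale, lastsale, sold, supplied)]
</code></pre>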
| 1 | 2016-10-19T14:17:51Z | [
"python"
] |
is there any way to split Spark Dataset in given logic | 40,133,761 | <p>I am looking for a Spark Dataset split application similar to the logic mentioned below. </p>
<pre><code>>>> import pandas as pd
>>> import numpy as np
>>> df1 = pd.DataFrame(np.random.randn(10, 4), columns=['a', 'b', 'c', 'd'])
>>> df1
a b c d
0 -0.398502 -1.083682 0.850632 -1.443868
1 -2.124333 1.093590 -0.338162 -1.414035
2 0.753560 0.600687 -0.998277 -2.094359
3 -0.635962 -0.291226 0.428961 1.158153
4 -0.900506 -0.545226 -0.448178 -0.567717
5 0.112911 0.351649 0.788940 2.071541
6 -0.358625 0.500367 1.009819 -1.139761
7 1.003608 0.246925 0.225138 -0.586727
8 0.355274 -0.540685 1.482472 0.364989
9 3.089610 -1.415088 -0.072107 -0.203137
>>>
>>> mask = df1.applymap(lambda x: x <-0.7)
>>>
>>> mask
a b c d
0 False True False True
1 True False False True
2 False False True True
3 False False False False
4 True False False False
5 False False False False
6 False False False True
7 False False False False
8 False False False False
9 False True False False
>>> mask.any(axis=1)
0 True
1 True
2 True
3 False
4 True
5 False
6 True
7 False
8 False
9 True
dtype: bool
>>> df1 = df1[-mask.any(axis=1)]
>>> df1
a b c d
3 -0.635962 -0.291226 0.428961 1.158153
5 0.112911 0.351649 0.788940 2.071541
7 1.003608 0.246925 0.225138 -0.586727
8 0.355274 -0.540685 1.482472 0.364989
>>>
</code></pre>
<p>In Spark I went through <code>df.filter</code>, but it only picks the matched rows, whereas in my case I need to filter (remove) data at 3-4 levels; only one level is shown above, and it is just one kind of filtering. </p>
| 1 | 2016-10-19T14:08:41Z | 40,135,957 | <p>Preserving order is very difficult in Spark applications due to the assumptions of the RDD abstraction. The best approach you can take is to translate the pandas logic using the Spark api, like I've done here. Unfortunately, I do not think you can apply the same filter criteria to every column, so I had to manually translate the mask into operations on multiple columns. This <a href="https://databricks.com/blog/2015/08/12/from-pandas-to-apache-sparks-dataframe.html" rel="nofollow">Databricks blog post</a> is helpful for anyone transitioning from Pandas to Spark. </p>
<pre><code>import pandas as pd
import numpy as np
np.random.seed(1000)
df1 = pd.DataFrame(np.random.randn(10, 4), columns=['a', 'b', 'c', 'd'])
mask = df1.applymap(lambda x: x <-0.7)
df2 = df1[-mask.any(axis=1)]
</code></pre>
<p>The result we want is: </p>
<pre><code> a b c d
1 -0.300797 0.389475 -0.107437 -0.479983
5 -0.334835 -0.099482 0.407192 0.919388
6 0.312118 1.533161 -0.550174 -0.383147
8 -0.326925 -0.045797 -0.304460 1.923010
</code></pre>
<p>So in Spark, we create the dataframe using the Pandas data frame and use <code>filter</code> to get the correct result set: </p>
<pre><code>df1_spark = sqlContext.createDataFrame(df1).repartition(10)
df2_spark = df1_spark.filter(\
(df1_spark.a > -0.7)\
& (df1_spark.b > -0.7)\
& (df1_spark.c > -0.7)\
& (df1_spark.d > -0.7)\
)
</code></pre>
<p>Which gives us the proper result (notice the order is not preserved): </p>
<pre><code>df2_spark.show()
+-------------------+--------------------+--------------------+-------------------+
| a| b| c| d|
+-------------------+--------------------+--------------------+-------------------+
|-0.3348354532115408| -0.0994816980097769| 0.40719210034152314| 0.919387539204449|
| 0.3121180100663634| 1.5331610653579348| -0.5501738650283003|-0.3831474108842978|
|-0.3007966727870205| 0.3894745542873072|-0.10743730169089667|-0.4799830753607686|
| -0.326924675176391|-0.04579718800728687| -0.3044600616968845| 1.923010130400007|
+-------------------+--------------------+--------------------+-------------------+
</code></pre>
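<p>As an aside, if the same threshold really does apply to every column, the conjunction can be built programmatically instead of being spelled out by hand. A sketch, assuming all columns are numeric:</p>
<pre><code>from functools import reduce
from operator import and_

# one (col > -0.7) condition per column, AND-ed together
conditions = [df1_spark[c] > -0.7 for c in df1_spark.columns]
df2_spark = df1_spark.filter(reduce(and_, conditions))
</code></pre>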
<p>If you <strong><em>absolutely needed</em></strong> to create the mask using Pandas, you would have to preserve the index of the original Pandas dataframe and remove individual records from the Spark dataframe by creating a broadcast variable and filtering based on the index column. Here's an example, YMMV. </p>
<p>Add an index: </p>
<pre><code>df1['index_col'] = df1.index
df1
a b c d index_col
0 -0.804458 0.320932 -0.025483 0.644324 0
1 -0.300797 0.389475 -0.107437 -0.479983 1
2 0.595036 -0.464668 0.667281 -0.806116 2
3 -1.196070 -0.405960 -0.182377 0.103193 3
4 -0.138422 0.705692 1.271795 -0.986747 4
5 -0.334835 -0.099482 0.407192 0.919388 5
6 0.312118 1.533161 -0.550174 -0.383147 6
7 -0.822941 1.600083 -0.069281 0.083209 7
8 -0.326925 -0.045797 -0.304460 1.923010 8
9 -0.078659 -0.582066 -1.617982 0.867261 9
</code></pre>
<p>Convert the mask into a Spark broadcast variable: </p>
<pre><code>myIdx = sc.broadcast(df2.index.tolist())
</code></pre>
<p>Create and modify the dataframes using the Spark api: </p>
<pre><code># rows dropped by the mask (collected here just to inspect them)
df1_spark.rdd.filter(lambda row: row and row['index_col'] not in myIdx.value).collect()
df2_spark = df1_spark.rdd.filter(lambda row: row and row['index_col'] in myIdx.value).toDF()
df2_spark.show()
+-------------------+--------------------+--------------------+-------------------+---------+
| a| b| c| d|index_col|
+-------------------+--------------------+--------------------+-------------------+---------+
|-0.3007966727870205| 0.3894745542873072|-0.10743730169089667|-0.4799830753607686| 1|
|-0.3348354532115408| -0.0994816980097769| 0.40719210034152314| 0.919387539204449| 5|
| 0.3121180100663634| 1.5331610653579348| -0.5501738650283003|-0.3831474108842978| 6|
| -0.326924675176391|-0.04579718800728687| -0.3044600616968845| 1.923010130400007| 8|
+-------------------+--------------------+--------------------+-------------------+---------+
</code></pre>
| 0 | 2016-10-19T15:38:36Z | [
"python",
"pandas",
"apache-spark"
] |
JSON combined with python loop to only print integers | 40,133,806 | <p>I'm trying to loop through integers that are in the json list based on a variable provided before. Here is my JSON list:</p>
<pre><code> tracks =[
{
'album_name':'Nevermind',
1:'Smells like teen spirit',
2:'In Bloom',
3:'Come as you are',
4:'Breed',
5:'Lithium',
6:'Polly',
7:'Territorial Pissings',
8:'Drain You',
9:'Lounge act',
10:'Stay away',
11:'On a plain',
12:'Something in the way'
},
{
'album_name':'Relapse',
1:'Hello',
2:'3AM',
},
]
</code></pre>
<p>And this is my html loop:</p>
<pre><code><div class="single_album">
<h2>Track list</h2>
{% for tracks in tracks if tracks.album_name == album_name %}
<ol>
<li>{{ tracks[x] }}</li>
</ol>
{% endfor %}
</div>
</code></pre>
<p>If I put 1 instead of 'x' it works, as it prints record number one which is "1:'Smells like teen spirit'" However I don't know how to make a loop in which the x will increment each time it loops as I Im not sure whether it should be placed in python or html file.</p>
| 0 | 2016-10-19T14:10:17Z | 40,134,624 | <pre><code>In [15]: from django.template import Template,Context
In [16]: tracks =[
...: {
...: 'album_name':'Nevermind',
...: 1:'Smells like teen spirit',
...: 2:'In Bloom',
...: 3:'Come as you are',
...: 4:'Breed',
...: 5:'Lithium',
...: 6:'Polly',
...: 7:'Territorial Pissings',
...: 8:'Drain You',
...: 9:'Lounge act',
...: 10:'Stay away',
...: 11:'On a plain',
...: 12:'Something in the way'
...: },
...: {
...: 'album_name':'Relapse',
...: 1:'Hello',
...: 2:'3AM',
...:
...: },
...:
...: ]
In [17]: t = Template("""<div class="single_album"> <h2>Track list</h2> {% for track in tracks %} {%if track.album_name == album_name %}<ol> {% for key, value in track
...: .items %} {%if key != 'album_name' %}<li>{{value}}</li>{%endif%} {% endfor%} </ol>{%endif%} {% endfor %} </div>""")
In [18]: c = Context({"tracks": tracks,'album_name':'Nevermind'})
In [19]: t.render(c)
Out[19]: u'<div class="single_album"> <h2>Track list</h2> <ol> <li>Smells like teen spirit</li> <li>In Bloom</li> <li>Come as you are</li> <li>Breed</li> <li>Lithium</li> <li>Polly</li> <li>Territorial Pissings</li> <li>Drain You</li> <li>Lounge act</li> <li>Stay away</li> <li>On a plain</li> <li>Something in the way</li> </ol> </div>'
In [20]:
</code></pre>
| 0 | 2016-10-19T14:41:35Z | [
"python",
"loops",
"count",
"increment"
] |
Python save list and read data from file | 40,133,826 | <p>Basically I would like to save a list to a file and then, when the program starts, retrieve the data from the file and put it back into the list.
<br>
So far this is the code I am using</p>
<pre><code>mylist = pickle.load("save.txt")
...
saveToList = (name, data)
mylist.append(saveToList)
import pickle
pickle.dump(mylist, "save.txt")
</code></pre>
<p>But it just returns the following error: TypeError: file must have 'read' and 'readline' attributes</p>
| 3 | 2016-10-19T14:10:53Z | 40,133,930 | <p>You need a file object, not just a file name. Try this for saving:</p>
<pre><code>pickle.dump(mylist, open("save.txt", "wb"))
</code></pre>
<p>or better, to guarantee the file is closed properly:</p>
<pre><code>with open("save.txt", "wb") as f:
pickle.dump(mylist, f)
</code></pre>
<p>and then this for loading:</p>
<pre><code>with open("save.txt", "rb") as f:
mylist = pickle.load(f)
</code></pre>
<p>Also, I suggest a different extension from <code>.txt</code>, like maybe <code>.dat</code>, because the contents is not plain text.</p>
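<p>If the list only contains simple types (strings, numbers, lists, dicts), the <code>json</code> module is a human-readable alternative; note that tuples come back as lists after a round trip:</p>
<pre><code>import json

with open("save.json", "w") as f:
    json.dump(mylist, f)

with open("save.json") as f:
    mylist = json.load(f)  # tuples will have been converted to lists
</code></pre>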
| 3 | 2016-10-19T14:14:53Z | [
"python"
] |
Python save list and read data from file | 40,133,826 | <p>Basically I would like to save a list to a file and then, when the program starts, retrieve the data from the file and put it back into the list.
<br>
So far this is the code I am using</p>
<pre><code>mylist = pickle.load("save.txt")
...
saveToList = (name, data)
mylist.append(saveToList)
import pickle
pickle.dump(mylist, "save.txt")
</code></pre>
<p>But it just returns the following error: TypeError: file must have 'read' and 'readline' attributes</p>
| 3 | 2016-10-19T14:10:53Z | 40,133,939 | <pre><code>with open("save.txt", "wb") as f:
    pickle.dump(mylist, f)
</code></pre>
<p>Refer to the Python pickle documentation for usage.</p>
| 1 | 2016-10-19T14:15:16Z | [
"python"
] |
Python save list and read data from file | 40,133,826 | <p>Basically I would like to save a list to a file and then, when the program starts, retrieve the data from the file and put it back into the list.
<br>
So far this is the code I am using</p>
<pre><code>mylist = pickle.load("save.txt")
...
saveToList = (name, data)
mylist.append(saveToList)
import pickle
pickle.dump(mylist, "save.txt")
</code></pre>
<p>But it just returns the following error: TypeError: file must have 'read' and 'readline' attributes</p>
| 3 | 2016-10-19T14:10:53Z | 40,133,995 | <p><code>pickle.dump</code> accepts a file object as its second argument, not a filename string:</p>
<pre><code>pickle.dump(mylist, open("save.txt", "wb"))
</code></pre>
| 1 | 2016-10-19T14:18:00Z | [
"python"
] |
Reading Database Queries into a Specific format in Python | 40,133,863 | <p>Hi, I'm connecting to an sqlite database using Python and fetching some results. However, to feed these results into another file I need them to be in the following format.</p>
<pre><code> x={
(1,1):1, (1,2):0,
(2,1):1, (2,2):0,
(3,1):0, (3,2):1,
(4,1):0, (4,2):1,}
</code></pre>
<p>My database table has only two columns (id (integer) and task (integer)). So I run the query "select * from allocation" and the result needs to be formatted as above.</p>
<p>For instance allocation table is as follows:</p>
<pre><code> id | task
1 | 1
2 | 1
3 | 2
4 | 2
</code></pre>
<p>Please Help.</p>
| -1 | 2016-10-19T14:12:33Z | 40,138,945 | <p>In this code the commented lines at the top indicate what's needed to access the sqlite database. Since I didn't want to build and populate such a database I created the object <strong>C</strong> to emulate its approximate behaviour. I used <strong>defaultdict</strong> because I don't know how many possible combinations of id's and tasks are involved. However, this means that only non-zero occurrences are represented in the final dictionary.</p>
<pre><code>#~ import sqlite3
#~ conn = sqlite3.connect( some database )
#~ c = conn.cursor()
#~ c.execute('SELECT id, task FROM aTable')
class C:
def __init__(self,iterated):
self.iterated=iterated
def fetchone (self):
for _ in iter(list(self.iterated)):
yield _
c=C( [ ['1','1'], ['2','1'], ['3','2'], ['4','2'] ] )
from collections import defaultdict
counts = defaultdict(int)
for row in c.fetchone():
print (row)
id, task = row
counts [(id,task)]+=1
print (counts)
</code></pre>
<p>Here's the output.</p>
<pre><code>['1', '1']
['2', '1']
['3', '2']
['4', '2']
defaultdict(<class 'int'>, {('4', '2'): 1, ('2', '1'): 1, ('1', '1'): 1, ('3', '2'): 1})
</code></pre>
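<p>Against a real sqlite connection, the exact dictionary from the question, including the zero entries, could be built like this. It is a sketch that assumes the table really is named <code>allocation</code> and that every distinct task value should appear for every id:</p>
<pre><code>import sqlite3

conn = sqlite3.connect('some.db')  # hypothetical database file
c = conn.cursor()
c.execute('SELECT id, task FROM allocation')
rows = c.fetchall()

tasks = sorted({task for _, task in rows})
# (id, t) -> 1 if that id was allocated task t, else 0
x = {(id_, t): int(t == task) for id_, task in rows for t in tasks}
print(x)  # {(1, 1): 1, (1, 2): 0, (2, 1): 1, (2, 2): 0, ...}
</code></pre>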
| 0 | 2016-10-19T18:25:55Z | [
"python"
] |
Obey the Testing Goat - Traceback | 40,133,865 | <p>So I'm going through this book called "Obey the Testing Goat" and I'm running into an issue in the sixth chapter while learning Python. It says that I should be able to run the functional_tests we've set up throughout the chapter and previous one with no errors; however, I keep getting a Traceback that I don't know how to fix.</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\YaYa\superlists\functional_tests\tests.py", line 54, in test_can_start_a_list_and_retrieve_it_later
self.check_for_row_in_list_table('1: Buy peacock feathers')
File "C:\Users\YaYa\superlists\functional_tests\tests.py", line 15, in check_for_row_in_list_table
table = self.browser.find_element_by_id('id_list_table')
File "C:\Users\YaYa\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 269, in find_element_by_id
return self.find_element(by=By.ID, value=id_)
File "C:\Users\YaYa\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 752, in find_element
'value': value})['value']
File "C:\Users\YaYa\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 236, in execute
self.error_handler.check_response(response)
File "C:\Users\YaYa\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 192, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: {"method":"id","selector":"id_list_table"}
Stacktrace:
at FirefoxDriver.prototype.findElementInternal_ (file:///C:/Users/YaYa/AppData/Local/Temp/tmp869pyxau/extensions/fxdriver@googlecode.com/components/driver-component.js:10770)
at fxdriver.Timer.prototype.setTimeout/<.notify (file:///C:/Users/YaYa/AppData/Local/Temp/tmp869pyxau/extensions/fxdriver@googlecode.com/components/driver-component.js:625)
</code></pre>
<p><a href="https://gist.github.com/yuyu23/8e5cce6c55e9ec3a771396048058a489" rel="nofollow">I've created a GIST in case anyone's interested in looking at the files that I've worked on throughout the chapters</a>. </p>
<p>You can also access the chapter for this book right <a href="http://www.obeythetestinggoat.com/book/chapter_06.html#_one_more_view_to_handle_adding_items_to_an_existing_list" rel="nofollow">here</a>.</p>
<p>I really don't know what the problem is (I'm not good at Python AT ALL and tried running pdb but I don't even know what half of it means) and no one that I know and that I've asked has any information on what I can do to fix it. </p>
<p>Thanks in advance!</p>
<p>EDIT: Here's the test_can_start_a_list_and_retrieve_it_later - just a note in case it's needed, but the def test_can... line number is 19.</p>
<pre><code>def test_can_start_a_list_and_retrieve_it_later(self):
# Edith has heard about a cool new online to-do app. She goes
# to check out its homepage
self.browser.get(self.live_server_url)
# She notices the page title and header mention to-do lists
self.assertIn('To-Do', self.browser.title)
header_text = self.browser.find_element_by_tag_name('h1').text
self.assertIn('To-Do', header_text)
# She is invited to enter a to-do item straight away
inputbox = self.browser.find_element_by_id('id_new_item')
self.assertEqual(
inputbox.get_attribute('placeholder'),
'Enter a to-do item'
)
# She types "Buy peacock feathers" into a text box (Edith's hobby
# is tying fly-fishing lures)
inputbox.send_keys('Buy peacock feathers')
# When she hits enter, the page updates, and now the page lists
# "1: Buy peacock feathers" as an item in a to-do list
inputbox.send_keys(Keys.ENTER)
edith_list_url = self.browser.current_url
self.assertRegex(edith_list_url, '/lists/.+')
self.check_for_row_in_list_table('1: Buy peacock feathers')
# There is still a text box inviting her to add another item. She
# enters "Use peacock feathers to make a fly" (Edith is very methodical)
inputbox = self.browser.find_element_by_id('id_new_item')
inputbox.send_keys('Use peacock feathers to make a fly')
inputbox.send_keys(Keys.ENTER)
# The page updates again, and now shows both items on her list
self.check_for_row_in_list_table('1: Buy peacock feathers')
self.check_for_row_in_list_table('2: Use peacock feathers to make a fly')
# Now a new user, Francis, comes along to the site.
##We use a new browser session to make sure that no information
##of Edith's is coming through from cookies etc
self.browser.quit()
self.browser = webdriver.Firefox()
#Francis visits the home page. There is no sign of Edith's
#list
self.browser.get(self.live_server_url)
page_text = self.browser.find_element_by_tag_name('body').text
self.assertNotIn('Buy peacock feathers', page_text)
self.assertNotIn('make a fly', page_text)
#Francis starts a new list by entering a new item. He
#is less interesting than Edith...
inputbox = self.browser.find_element_by_id('id_new_item')
inputbox.send_keys('Buy milk')
inputbox.send_keys(Keys.ENTER)
#Francis gets his own unique URL
francis_list_url = self.browser.current_url
self.assertRegex(francis_list_url, '/lists/.+')
self.assertNotEqual(francis_list_url, edith_list_url)
#Again, there is no trace of Edith's list
page_text = self.browser.find_element_by_tag_name('body').text
self.assertNotIn('Buy peacock feathers', page_text)
self.assertIn('Buy milk', page_text)
self.fail('Finish the test!')
# Satisfied, they both go back to sleep
</code></pre>
<p>EDIT 2: Here's the check_for_row_in_list_table. Note that this starts on line 14 of the document.</p>
<pre><code>def check_for_row_in_list_table(self, row_text):
table = self.browser.find_element_by_id('id_list_table')
rows = table.find_elements_by_tag_name('tr')
self.assertIn(row_text, [row.text for row in rows])
</code></pre>
| 1 | 2016-10-19T14:12:39Z | 40,139,646 | <p>Found the error in my work. I was apparently missing an s in list.html</p>
<pre><code><form method="POST" action="/lists/{{ list.id }}/add_item">
</code></pre>
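<p>For context, the form's <code>action</code> has to match the URL pattern that routes to the <code>add_item</code> view. The pattern from that chapter looks roughly like the following (paraphrased from memory, so treat it as an assumption rather than the book's exact code):</p>
<pre><code># superlists/urls.py -- the route the form action must hit
url(r'^lists/(.+)/add_item$', 'lists.views.add_item', name='add_item'),
</code></pre>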
| 1 | 2016-10-19T19:09:34Z | [
"python",
"traceback"
] |
Adding data to a Python list | 40,134,026 | <p>I've just started playing around with python lists. I've written the simple code below expecting the printed output to display the numbers [12,14,16,18,20,22], but only 22 is displayed. Any help would be great.</p>
<pre><code>a=10
b=14
while a <= 20:
a=a+2
b=b-1
datapoints=[]
datapoints.insert(0,a)
print datapoints
</code></pre>
| 0 | 2016-10-19T14:19:00Z | 40,134,242 | <pre><code>a=10
b=14
datapoints=[] # this needs to be established outside of your loop
while a <= 20:
a=a+2
b=b-1
datapoints.append(a)
print datapoints
</code></pre>
<p>You need to set up datapoints outside your loop, and then inside your loop, append each additional datum to datapoints</p>
| 1 | 2016-10-19T14:26:38Z | [
"python",
"python-2.7"
] |
Adding data to a Python list | 40,134,026 | <p>I've just started playing around with python lists. I've written the simple code below expecting the printed output to display the numbers [12,14,16,18,20,22], but only 22 is displayed. Any help would be great.</p>
<pre><code>a=10
b=14
while a <= 20:
a=a+2
b=b-1
datapoints=[]
datapoints.insert(0,a)
print datapoints
</code></pre>
| 0 | 2016-10-19T14:19:00Z | 40,134,542 | <p>Joel already answered, but if you want more compact code you can use <code>range</code>:</p>
<pre><code>numbers = []
for number in range(12,24,2):
    # do whatever you want with b
numbers.append(number)
print numbers
</code></pre>
<p>or if you only want to print the numbers you can do</p>
<pre><code>print [number for number in range(12,24,2)]
</code></pre>
| 1 | 2016-10-19T14:38:20Z | [
"python",
"python-2.7"
] |
Adding data to a Python list | 40,134,026 | <p>I've just started playing around with python lists. I've written the simple code below expecting the printed output to display the numbers [12,14,16,18,20,22], but only 22 is displayed. Any help would be great.</p>
<pre><code>a=10
b=14
while a <= 20:
a=a+2
b=b-1
datapoints=[]
datapoints.insert(0,a)
print datapoints
</code></pre>
| 0 | 2016-10-19T14:19:00Z | 40,134,717 | <p>You can achieve the expected list as output by using the <a href="https://docs.python.org/2/library/functions.html#range" rel="nofollow">range()</a> function. It takes three parameters: start, stop and step. </p>
<pre><code>data_points = range(12, 23, 2) # range returns list in python 2
print data_points
</code></pre>
<p>Note that in <code>python 3</code>, <a href="https://docs.python.org/3/library/functions.html#func-range" rel="nofollow">range()</a> is a <a href="https://docs.python.org/3/library/stdtypes.html#range" rel="nofollow">sequence type</a>, so you will have to cast it to <code>list</code>:</p>
<pre><code>data_points = list(range(12, 23, 2)) # python 3
print(data_points)
</code></pre>
| 0 | 2016-10-19T14:45:07Z | [
"python",
"python-2.7"
] |
Python read dedicated rows from csv file | 40,134,149 | <p>I need some help to read dedicated rows in Python. The txt file content is defined as follows:</p>
<pre><code>A;Maria;1.5;20.0;FFM;
B;2016;20;1;2017;20;1;
</code></pre>
<p>I read the file in Python as defined below:</p>
<pre><code>import csv
with open('C:/0001.txt', newline='') as csvfile:
filereader = csv.reader(csvfile, delimiter=' ')
for row in filereader :
print('; '.join(row))
</code></pre>
<p>What I am not sure about is how I can read the first row based on the first character </p>
<blockquote>
<p>A</p>
</blockquote>
<p>and fill every value into its own function.</p>
<p>Then the second row is identified based on </p>
<blockquote>
<p>B</p>
</blockquote>
<p>fill every value into its own function, etc.</p>
<p>Thanks</p>
| -2 | 2016-10-19T14:22:45Z | 40,139,790 | <p>Thanks for the answers. I defined a class and want to fill every value into the corresponding attribute:</p>
<pre><code>class Class(object):
def __init__(self, name, years, age, town):
self.name = name
self.years = years
self.age = age
self.town = town
def GetName(self):
return self.name
def GetYears(self):
return self.years
def GetAge(self):
return self.age
def GetTown(self):
return self.town
def __str__(self):
return "%s is a %s" % (self.name, self.years, self.age, self.town)
</code></pre>
<p>So my file reader should load the file, read a line, and fill the dedicated values into the class as shown below. I am just not sure how to dispatch on the first column being A and then fill the object:</p>
<pre><code>import csv
with open('C:/0001.txt', newline='') as csvfile:
spamreader = csv.reader(csvfile, delimiter=';')
for row in spamreader:
        if row[0] == 'A':
            # row looks like ['A', 'Maria', '1.5', '20.0', 'FFM', '']
            person = Class(row[1], float(row[2]), float(row[3]), row[4])
            print(person.GetName(), person.GetYears(), person.GetAge(), person.GetTown())
</code></pre>
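<p>A row starting with B can be dispatched the same way. The sketch below wraps both branches in one loader; since the question defines no second class, the B fields are kept as a plain list of ints (a hypothetical <code>ClassB</code> could wrap them instead):</p>
<pre><code>import csv

def load_records(path):
    people, b_records = [], []
    with open(path, newline='') as csvfile:
        for row in csv.reader(csvfile, delimiter=';'):
            if row and row[0] == 'A':
                people.append(Class(row[1], float(row[2]), float(row[3]), row[4]))
            elif row and row[0] == 'B':
                # keep the numeric fields, skipping the trailing empty column
                b_records.append([int(v) for v in row[1:] if v])
    return people, b_records
</code></pre>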
| 0 | 2016-10-19T19:17:13Z | [
"python"
] |
Conditionally calculated column for a Pandas DataFrame | 40,134,313 | <p>I have a calculated column in a Pandas DataFrame which needs to be assigned based upon a condition. For example:</p>
<pre><code>if(data['column_a'] == 0):
data['column_c'] = 0
else:
data['column_c'] = data['column_b']
</code></pre>
<p>However, that returns an error:</p>
<pre>
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</pre>
<p>I have a feeling this has something to do with the fact that it must be done in a matrix style. Changing the code to a ternary statement doesn't work either:</p>
<pre><code>data['column_c'] = 0 if data['column_a'] == 0 else data['column_b']
</code></pre>
<p>Anyone know the proper way to achieve this? Using apply with a lambda? I could iterate via a loop, but I'd rather keep this the preferred Pandas way.</p>
| 0 | 2016-10-19T14:29:09Z | 40,134,459 | <p>You can do:</p>
<pre><code>data['column_c'] = data['column_a'].where(data['column_a'] == 0, data['column_b'])
</code></pre>
<p>This is vectorised. Your attempts failed because <code>if</code> doesn't know how to treat an array of boolean values, hence the error.</p>
<p>Example:</p>
<pre><code>In [81]:
df = pd.DataFrame(np.random.randn(5,3), columns=list('abc'))
df
Out[81]:
a b c
0 -1.065074 -1.294718 0.165750
1 -0.041167 0.962203 0.741852
2 0.714889 0.056171 1.197534
3 0.741988 0.836636 -0.660314
4 0.074554 -1.246847 0.183654
In [82]:
df['d'] = df['b'].where(df['b'] < 0, df['c'])
df
Out[82]:
a b c d
0 -1.065074 -1.294718 0.165750 -1.294718
1 -0.041167 0.962203 0.741852 0.741852
2 0.714889 0.056171 1.197534 1.197534
3 0.741988 0.836636 -0.660314 -0.660314
4 0.074554 -1.246847 0.183654 -1.246847
</code></pre>
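<p><code>numpy.where</code> is another common vectorised idiom for exactly this if/else pattern, written here against the question's columns:</p>
<pre><code>import numpy as np

# 0 where column_a == 0, otherwise take the value from column_b
data['column_c'] = np.where(data['column_a'] == 0, 0, data['column_b'])
</code></pre>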
| 0 | 2016-10-19T14:34:47Z | [
"python",
"pandas",
"dataframe"
] |
Conditionally calculated column for a Pandas DataFrame | 40,134,313 | <p>I have a calculated column in a Pandas DataFrame which needs to be assigned based upon a condition. For example:</p>
<pre><code>if(data['column_a'] == 0):
data['column_c'] = 0
else:
data['column_c'] = data['column_b']
</code></pre>
<p>However, that returns an error:</p>
<pre>
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</pre>
<p>I have a feeling this has something to do with the fact that it must be done in a matrix style. Changing the code to a ternary statement doesn't work either:</p>
<pre><code>data['column_c'] = 0 if data['column_a'] == 0 else data['column_b']
</code></pre>
<p>Anyone know the proper way to achieve this? Using apply with a lambda? I could iterate via a loop, but I'd rather keep this the preferred Pandas way.</p>
| 0 | 2016-10-19T14:29:09Z | 40,134,518 | <p>use where(), keeping <code>column_b</code> where <code>column_a</code> is non-zero and filling in 0 otherwise:</p>
<pre><code> data['column_c'] = data['column_b'].where(data['column_a'] != 0, 0)
</code></pre>
| 0 | 2016-10-19T14:37:14Z | [
"python",
"pandas",
"dataframe"
] |