Dataset columns:
title: string, length 10 to 172
question_id: int64, 469 to 40.1M
question_body: string, length 22 to 48.2k
question_score: int64, -44 to 5.52k
question_date: string, length 20
answer_id: int64, 497 to 40.1M
answer_body: string, length 18 to 33.9k
answer_score: int64, -38 to 8.38k
answer_date: string, length 20
tags: sequence, length 1 to 5
Why is the maximum recursion depth in python 1000?
40,115,683
<p>I was curious about what the MRD (maximum recursion depth) is in Python, so I wrote this:</p> <pre><code>def call(n): print (n) return call(n+1) call(1) </code></pre> <p>The end result was 979, which is a peculiar number to me. I could not find anywhere why this number is the standard. As I am a self-taught programmer, I would appreciate it being explained in simple terms.</p> <p>EDIT: apparently it's supposed to be 1000, but why this number?</p>
1
2016-10-18T18:39:55Z
40,115,931
<p>Here is a better test:</p> <pre><code>n = 0 def test_recursion_limit(): def call(): global n n += 1 call() try: call() except RuntimeError: print(n) test_recursion_limit() </code></pre> <p>If you put it in <code>spam.py</code> and execute that, it should return 998 for both python2 and python3. It's one stack frame short because of the initial <code>test_recursion_limit</code> frame. </p> <p>If you're running in a REPL such as ipython, you are already inside a few frames, so you will see a lower count - it's not that the recursion limit is undershot, it's that the implementation of the REPL itself uses some stack frames. </p> <pre><code>&gt;&gt;&gt; # freshly opened ipython session &gt;&gt;&gt; import inspect &gt;&gt;&gt; len(inspect.stack()) 10 </code></pre> <p>You can check the current recursion limit by calling the <code>sys.getrecursionlimit()</code> function. The default value of 1000 is chosen as a sensible safeguard against exhausting system resources when you accidentally execute an infinitely recursive call. That's very easy to do when mucking around with custom <code>__getattr__</code> implementations, for example. </p> <p>If you're blowing the stack legitimately and you need to increase the limit, it can be modified with <code>sys.setrecursionlimit</code>.</p>
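<p>A minimal sketch of inspecting and raising the limit (the new value here is only an illustration):</p> <pre><code>import sys

print(sys.getrecursionlimit())   # 1000 by default in CPython
sys.setrecursionlimit(5000)      # only raise it if you know the recursion is bounded
</code></pre>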
4
2016-10-18T18:53:22Z
[ "python" ]
Cassandra auto-complete does not work
40,115,699
<p>I have a problem with auto-complete within the Cassandra 3.9 client "cqlsh", and I don't know why. I ran an update with brew on macOS Sierra. I suspect this problem comes from a Python update, but how is it related?</p> <p>I tried to execute the tests:</p> <pre><code>$ cd apache-cassandra-3.9/pylib/cqlshlib/test $ python test_cqlsh_completion.py </code></pre> <p>but I got this error:</p> <pre><code>Traceback (most recent call last): File "test_cqlsh_completion.py", line 23, in &lt;module&gt; from .basecase import BaseTestCase, cqlsh ValueError: Attempted relative import in non-package </code></pre> <p>Does anyone know a solution?</p> <p>If I list the directory it shows:</p> <pre><code>$ ls apache-cassandra-3.9/pylib/cqlshlib/test __init__.py basecase.py run_cqlsh.py test_cqlsh_commands.py test_cqlsh_invocation.py test_cqlsh_parsing.py winpty.py ansi_colors.py cassconnect.py test_cql_parsing.py test_cqlsh_completion.py test_cqlsh_output.py test_keyspace_init.cql </code></pre>
0
2016-10-18T18:40:46Z
40,115,820
<p>you don't have <code>__init__.py</code> in your package folder</p>
0
2016-10-18T18:46:17Z
[ "java", "python", "cassandra", "cqlsh" ]
Maintaining accurate f.name through rename operation in Python w/o dropping lock
40,115,707
<p>I have a file which I'm atomically replacing in Python, while trying to persistently retain a lock.</p> <p>(Yes, I'm well aware that this will wreak havoc on any other programs waiting for a lock on the file unless they check for the directory entry pointing to a new inode after they actually receive their lock; that check is happening in practice).</p> <pre><code>import os, os.path, tempfile, fcntl def replace_file(f, new_text): f_dir = os.path.dirname(f.name) with tempfile.NamedTemporaryFile(dir=f_dir) as temp_file: temp_file.write(new_text) temp_file.flush() os.fsync(temp_file.fileno()) dest_file = os.fdopen(os.dup(temp_file.fileno()), 'r+b') fcntl.flock(dest_file.fileno(), fcntl.LOCK_EX) os.rename(temp_file.name, f.name) temp_file.delete = False # ...and after more paranoia, like fsync()ing the directory it's in... return dest_file f = open('/tmp/foo', 'w') f = replace_file(f, "new string") print f.name # name is &lt;fdup&gt;, not /tmp/foo </code></pre> <p>I'm hard-pressed to find a workaround for this that doesn't involve dropping the lock even temporarily after the rename has taken place.</p>
0
2016-10-18T18:41:02Z
40,116,142
<p>If your code is under Linux, here's a way to get filename from a file descriptor:</p> <pre><code>... f = replace_file(f, "new string") print os.readlink('/proc/self/fd/%d' % f.fileno()) </code></pre> <p>Reference: <a href="http://stackoverflow.com/a/1189582/2644759">http://stackoverflow.com/a/1189582/2644759</a></p>
0
2016-10-18T19:06:54Z
[ "python" ]
Maintaining accurate f.name through rename operation in Python w/o dropping lock
40,115,707
<p>I have a file which I'm atomically replacing in Python, while trying to persistently retain a lock.</p> <p>(Yes, I'm well aware that this will wreak havoc on any other programs waiting for a lock on the file unless they check for the directory entry pointing to a new inode after they actually receive their lock; that check is happening in practice).</p> <pre><code>import os, os.path, tempfile, fcntl def replace_file(f, new_text): f_dir = os.path.dirname(f.name) with tempfile.NamedTemporaryFile(dir=f_dir) as temp_file: temp_file.write(new_text) temp_file.flush() os.fsync(temp_file.fileno()) dest_file = os.fdopen(os.dup(temp_file.fileno()), 'r+b') fcntl.flock(dest_file.fileno(), fcntl.LOCK_EX) os.rename(temp_file.name, f.name) temp_file.delete = False # ...and after more paranoia, like fsync()ing the directory it's in... return dest_file f = open('/tmp/foo', 'w') f = replace_file(f, "new string") print f.name # name is &lt;fdup&gt;, not /tmp/foo </code></pre> <p>I'm hard-pressed to find a workaround for this that doesn't involve dropping the lock even temporarily after the rename has taken place.</p>
0
2016-10-18T18:41:02Z
40,117,055
<p>A simplicity-focused solution is to use an entirely separate lockfile with a different name (ie. <code>&lt;filename&gt;.lck</code>).</p> <ul> <li>Using a separate lockfile means that the code performing the write-and-rename operation doesn't need to be involved in locking at all, as the rename operation doesn't interact with the lock.</li> <li>Using a separate lockfile avoids the need to jump through hoops to avoid breaking clients which might find that they've successfully grabbed a lock, but hold it on a now-deleted file rather than the version currently bound to the directory.</li> </ul>
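<p>A minimal sketch of the lockfile idea, assuming POSIX <code>fcntl</code> locks are available (the paths and names are illustrative):</p> <pre><code>import fcntl

# the lock lives on a companion file that is never renamed or replaced
lock = open('/tmp/foo.lck', 'w')
fcntl.flock(lock.fileno(), fcntl.LOCK_EX)
try:
    pass  # atomically write and rename /tmp/foo here; the lock file itself is untouched
finally:
    fcntl.flock(lock.fileno(), fcntl.LOCK_UN)
    lock.close()
</code></pre>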
0
2016-10-18T20:01:32Z
[ "python" ]
MatPlotLib is very slow in python
40,115,725
<pre><code>import matplotlib matplotlib.use('TkAgg') import matplotlib.pyplot as plt def generate_graph(self,subject,target,filename): x_data = range(0, len(self.smooth_hydro)) mslen = len([i[1] for i in self.master_seq.items()][0]) diff=(mslen-len(self.smooth_hydro))/2 x1_data = range(0,len(self.smooth_groups.items()[0][-1])) x2_data = range(0,mslen) plt.figure() plt.axhline(y=0, color='black') plt.ylim(-3, 3) plt.xlim(right=mslen) plt.plot(x_data, self.smooth_hydro, linewidth=1.0, label="hydrophobicity", color='r') plt.plot(x_data, self.smooth_amphi, linewidth=1.0, label="amphipathicity", color='g') for pos in self.hmmtop: plt.axvline(x=pos-1-diff, ymin=-2, ymax = 0.1, linewidth=1, color='black',alpha=0.2) plt.axvspan(subject[0]-diff,subject[1]-diff, facecolor="orange", alpha=0.2) plt.axvspan(target[0]-diff,target[1]-diff, facecolor="orange", alpha=0.2) plt.legend(loc='upper center', bbox_to_anchor=(0.5, 1.05), ncol=3, fancybox=True, shadow=True) plt.xlabel("Residue Number") plt.ylabel("Value") width = (0.0265)*len(self.master_seq[0]) if mslen &gt; 600 else 15 plt.grid('on') plt.savefig(self.out+'/graphs/'+filename+'.png') plt.clf() plt.cla() plt.close() </code></pre> <p>I am calling this function repeatedly, but generating the images is extremely slow. Can someone please help me optimize this code so that it can run faster?</p> <p>Thank you!</p>
0
2016-10-18T18:41:50Z
40,116,097
<p>I found this answer in another <a href="http://stackoverflow.com/a/11093027/5104387">post</a>. All credit to <a href="http://stackoverflow.com/users/643629/luke">Luke</a>.</p> <blockquote> <p>Matplotlib makes great publication-quality graphics, but is not very well optimized for speed. There are a variety of python plotting packages that are designed with speed in mind:</p> <ul> <li><p><a href="http://pyqwt.sourceforge.net/" rel="nofollow">http://pyqwt.sourceforge.net/</a> [ edit: pyqwt is no longer maintained;the previous maintainer is recommending pyqtgraph ]</p></li> <li><p><a href="http://code.google.com/p/guiqwt/" rel="nofollow">http://code.google.com/p/guiqwt/</a></p></li> <li><p><a href="http://code.enthought.com/projects/chaco/" rel="nofollow">http://code.enthought.com/projects/chaco/</a> </p></li> <li><p><a href="http://www.pyqtgraph.org/" rel="nofollow">http://www.pyqtgraph.org/</a></p></li> </ul> </blockquote>
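<p>Staying within matplotlib, one common speed-up for repeated figure generation (separate from the packages listed above) is to use the non-interactive Agg backend and reuse a single figure instead of creating and closing one per call; a rough sketch under those assumptions, where <code>all_series</code> is a placeholder for your data:</p> <pre><code>import matplotlib
matplotlib.use('Agg')              # no GUI needed when only saving PNGs
import matplotlib.pyplot as plt

fig, ax = plt.subplots()           # created once, reused for every image
for i, series in enumerate(all_series):
    ax.cla()                       # clear the axes, keep the figure object
    ax.plot(series)
    fig.savefig('frame_%d.png' % i)
</code></pre>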
1
2016-10-18T19:04:05Z
[ "python", "matplotlib" ]
Deleting cookies and changing user agent in Python 3+ without Mechanize
40,115,727
<p>How do I delete cookies from a web browser and change the user agent in Python 3+ without using mechanize? I'm not going to be accessing the web through Python, I would just like my browser (Firefox or Chrome) to delete cookies and change my user agent for example at every startup (I can do the startup bit, just not the rest!)</p>
0
2016-10-18T18:41:51Z
40,135,540
<p>set the <code>Expires</code> attribute to a date in the past (like Epoch):</p> <pre><code>Set-Cookie: name=val; expires=Thu, 01 Jan 1970 00:00:00 GMT </code></pre> <p>Read more here: <a href="http://stackoverflow.com/questions/5285940/correct-way-to-delete-cookies-server-side">Correct way to delete cookies server-side</a></p>
0
2016-10-19T15:19:21Z
[ "python", "python-3.x", "cookies", "user-agent" ]
If a character in string is found before another character
40,115,732
<p>I'm trying to figure out a way to see if a character in a string is found before another one, to get an output. Say:</p> <pre><code>v="Hello There" x=v[0] if "Hello" in x: print("V consists of '"'Hello'"'") if "There" in x: print("Hello comes before There) if "There" in x: print("V consists of '"'There'"'") if "Hello" in x: print("There comes before Hello") </code></pre> <p>What I'm trying to get is "Hello comes before There", though it doesn't seem to work when I type it in. Help would be greatly appreciated.</p> <p>The reason the output would indicate that Hello comes before There is that the script is read from top to bottom, and this is just an exploit of that fact.</p> <p>If any of this does not make sense, please feel free to reach me in the answer section.</p>
0
2016-10-18T18:42:00Z
40,115,804
<p>For a string <code>s</code>, <code>s.find(substring)</code> returns the lowest index in <code>s</code> at which <code>substring</code> begins.</p> <pre><code>if s.find('There') &lt; s.find('Hello'): print('There comes before Hello') </code></pre>
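<p>One caveat worth noting: <code>find</code> returns <code>-1</code> when the substring is absent, so a missing word can look as if it "comes first"; a small guarded version:</p> <pre><code>i, j = s.find('There'), s.find('Hello')
if i != -1 and j != -1 and i &lt; j:
    print('There comes before Hello')
</code></pre>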
3
2016-10-18T18:45:24Z
[ "python", "string", "variables", "if-statement" ]
If a character in string is found before another character
40,115,732
<p>I'm trying to figure out a way to see if a character in a string is found before another one, to get an output. Say:</p> <pre><code>v="Hello There" x=v[0] if "Hello" in x: print("V consists of '"'Hello'"'") if "There" in x: print("Hello comes before There) if "There" in x: print("V consists of '"'There'"'") if "Hello" in x: print("There comes before Hello") </code></pre> <p>What I'm trying to get is "Hello comes before There", though it doesn't seem to work when I type it in. Help would be greatly appreciated.</p> <p>The reason the output would indicate that Hello comes before There is that the script is read from top to bottom, and this is just an exploit of that fact.</p> <p>If any of this does not make sense, please feel free to reach me in the answer section.</p>
0
2016-10-18T18:42:00Z
40,115,908
<pre><code>v="Hello There".split() #splitting the sentence into a list of words ['Hello', 'There'], notice the order stays the same which is important #got rid of your x = v[0] since it was pointless if "Hello" in v[0]: #v[0] == 'Hello' so this passes print("V consists of '"'Hello'"'") if "There" in v[1]: #v[1] == 'There' so this passes. This line had indentation errors print("Hello comes before There") # This line had indentation errors if "There" in v[0]: #v[0] == 'Hello' so this fails print("V consists of '"'There'"'") if "Hello" in v[1]: #v[1] == 'There' so this fails. This line had indentation errors print("There comes before Hello") # This line had indentation errors </code></pre> <p>Fixed your code with some comments to show you what's happening and what not. You had indentation errors too.</p> <p>If you want a better coding practice see Patrick's answer. I just wanted to show you what you were doing wrong</p>
0
2016-10-18T18:51:38Z
[ "python", "string", "variables", "if-statement" ]
If a character in string is found before another character
40,115,732
<p>I'm trying to figure out a way to see if a character in a string is found before another one, to get an output. Say:</p> <pre><code>v="Hello There" x=v[0] if "Hello" in x: print("V consists of '"'Hello'"'") if "There" in x: print("Hello comes before There) if "There" in x: print("V consists of '"'There'"'") if "Hello" in x: print("There comes before Hello") </code></pre> <p>What I'm trying to get is "Hello comes before There", though it doesn't seem to work when I type it in. Help would be greatly appreciated.</p> <p>The reason the output would indicate that Hello comes before There is that the script is read from top to bottom, and this is just an exploit of that fact.</p> <p>If any of this does not make sense, please feel free to reach me in the answer section.</p>
0
2016-10-18T18:42:00Z
40,116,325
<p>Assuming your needs are as simple as you have implied in the question details, then this should do -</p> <pre><code>v = "Hello There" # Change s1 and s2 as you please depending on your actual need. s1 = "Hello" s2 = "There" if s1 in v and s2 in v: # Refer - https://docs.python.org/2/library/string.html#string.find if v.find(s1) &lt; v.find(s2): print(s1 + " comes before " + s2) else: print(s2 + " comes before " + s1) </code></pre>
0
2016-10-18T19:19:05Z
[ "python", "string", "variables", "if-statement" ]
Python: Setting multiple continuous timeouts
40,115,841
<p>I want to have some kind of server that receives events (e.g. using sockets), and each event has a different ID (e.g. the dst port number). </p> <p>Is there a way that, from the moment I see the first packet of a specific ID, I start some kind of timeout (e.g. 1 ms), so that if in that time nothing else with the same ID is received an event is triggered, but if something is received the timeout is reset to 1 ms? </p> <p>I have seen that something like that can be done by using <code>signals</code> and the <code>SIGALRM</code> signal. However, I want to keep multiple "timers", one for every different ID.</p>
1
2016-10-18T18:47:48Z
40,116,586
<p>See the <a href="https://docs.python.org/3/library/sched.html" rel="nofollow"><code>sched</code></a> built-in module, which has a scheduler.</p> <p>You can construct a new scheduler instance, then use <code>scheduler.enter</code> to schedule a function to be called after a delay; and if you receive a message within the time limit, you can remove its event from the queue using <code>scheduler.cancel(event)</code>. You can call <code>scheduler.run()</code> to run the scheduler in another thread, or you can use <code>scheduler.run(blocking=False)</code> in a select-multiplexing thread with timeouts.</p>
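<p>A minimal sketch of that pattern (the 1 ms delay and the ID are illustrative):</p> <pre><code>import sched, time

s = sched.scheduler(time.time, time.sleep)
pending = {}

def expired(event_id):
    print('timeout for', event_id)

def saw_packet(event_id):
    if event_id in pending:
        s.cancel(pending[event_id])   # packet arrived in time, push the deadline back
    pending[event_id] = s.enter(0.001, 1, expired, argument=(event_id,))

saw_packet(8080)
s.run()   # or s.run(blocking=False) from inside a select loop
</code></pre>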
1
2016-10-18T19:33:15Z
[ "python", "sockets", "timeout", "signals" ]
Python: Setting multiple continuous timeouts
40,115,841
<p>I want to have some kind of server that receives events (e.g. using sockets), and each event has a different ID (e.g. the dst port number). </p> <p>Is there a way that, from the moment I see the first packet of a specific ID, I start some kind of timeout (e.g. 1 ms), so that if in that time nothing else with the same ID is received an event is triggered, but if something is received the timeout is reset to 1 ms? </p> <p>I have seen that something like that can be done by using <code>signals</code> and the <code>SIGALRM</code> signal. However, I want to keep multiple "timers", one for every different ID.</p>
1
2016-10-18T18:47:48Z
40,117,677
<p>Sounds like a job for <code>select</code>. As you are using sockets, you have a socket descriptor for a client (presumably one for each client but as long as you have one, it works). So you either want to wait until a packet arrives on one of your sockets or until a timeout occurs. This is exactly what <code>select</code> does.</p> <p>So calculate the expiration time for each client when you receive a message, then in your main loop, simply calculate the soonest-to-expire timeout and provide that as the <code>timeout</code> parameter to <code>select.select</code> (with all the socket descriptors as the <code>rlist</code> parameter). Then you get awakened when a new packet/message arrives or when the oldest timeout expires. If it's a new packet, you process the packet and reset that provider's timeout to 1ms; otherwise, you do whatever you do when the timeout expires. </p> <p>Then calculate the next-to-expire timeout. Rinse. Lather. Repeat.</p> <p>Something like this:</p> <pre><code>now = time.time() timeout = min([(client.expiration - now) for client in clients_list]) rrdy, wrdy, xrdy = select.select([client.sock for client in clients_list], [], [], timeout) if not rrdy: # Timeout now = time.time() for client in clients_list: if client.expiration &lt; now: process_timeout(client) else: # Process incoming messages for rsock in rrdy: process_message(rsock.recv(4096)) client.expiration = time.time() + .001 </code></pre>
2
2016-10-18T20:40:03Z
[ "python", "sockets", "timeout", "signals" ]
Pyspark update two columns using one when statement?
40,115,869
<p>So I am using <code>df.Withcolumn()</code> in PySpark to create a column and using <code>F.when()</code> to specify the criteria as to when the column should be updated.</p> <pre><code>df = df.withColumn('ab', F.when(df['text']=="0", 1).otherwise(0)) </code></pre> <p>Basically I am updating the column to be '1' if it matches the criteria. Now, I want to update another column in the same <code>df</code> if the same criteria matches (eg. <code>df['text']=="0"</code>). Is there any way in PySpark to update two columns using one when statement?</p>
0
2016-10-18T18:49:20Z
40,116,311
<p>It is not possible. You can only created struct:</p> <pre><code>&gt;&gt;&gt; from pyspark.sql.functions import * &gt;&gt;&gt; df.withColumn('ab', F.when(df['text']=="0" , struct(1, "foo")).otherwise(struct(0, "bar"))) </code></pre>
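<p>A hedged variation on the same idea with named struct fields, which can afterwards be unpacked back into two ordinary columns (the field names and values are illustrative):</p> <pre><code>from pyspark.sql.functions import when, struct, lit

df = df.withColumn('ab', when(df['text'] == "0",
                              struct(lit(1).alias('flag'), lit('foo').alias('label')))
                         .otherwise(struct(lit(0).alias('flag'), lit('bar').alias('label'))))
df = df.withColumn('flag', df['ab']['flag']) \
       .withColumn('label', df['ab']['label']) \
       .drop('ab')
</code></pre>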
0
2016-10-18T19:18:08Z
[ "python", "pyspark" ]
changing variable between flask server and a multiprocessing
40,115,875
<p>I have a Flask server running and a multiprocessing loop running alongside it. I need to be able to change a variable in the Flask server and have it be used in an <code>if</code> statement in the loop. Here is my code; I removed a lot of things that I thought were not important to show.</p> <p>The variable that needs to be changed is <code>toggle</code> (it only needs to be <code>1</code> or <code>0</code>). It gets changed in <code>sms()</code> and used with an <code>if</code> statement in the loop.</p> <pre><code># I removed a lot of stuff that I don't think was needed import time import math from pyicloud import PyiCloudService import requests.packages.urllib3 requests.packages.urllib3.disable_warnings() from math import sin, cos, sqrt, atan2, radians from twilio.rest import TwilioRestClient from multiprocessing import Process, Value from flask import Flask, request from twilio import twiml import os #os.system('clear') app = Flask(__name__) sent = 0 #only needed for loop toggle = 0 # this is the variable that needs to be changed and viewed def distance(x): #returns distance in miles or km @app.route('/sms', methods=['POST']) def sms(): global toggle #command replies error = 'You did not enter a valid command' on = 'Automatic tracker has been turned on.' already_on = 'The automatic tracker is already on.' off = 'Automatic tracker has been turned off.' already_off = 'The automatic tracker is already off.' toggle_error = 'There was a changing the status automatic tracker.' status0 = 'The automatic tracker is currently off.' status1 = 'The automatic tracker is currently on.' status_error = 'There was a error checking the status of the automatic tracker.' message_body = request.form['Body'] # message you get when sending a text to twilio number ex. I send "On" to the number and message_body will = "On" resp = twiml.Response() # what twilio will send back to your number if message_body == "on" or message_body == "On": #turn on automatic tracker if toggle == 0: #set toggle to 1 toggle = 1 resp.message(on) print on time.sleep(3) elif toggle == 1: #say toggle is on resp.message(already_on) print already_on time.sleep(3) else: #say toggle_error resp.message(toggle_error) print toggle_error time.sleep(3) elif message_body == "off" or message_body == "Off": #turn off automatic tracker if toggle == 1: #set toggle to 0 toggle = 0 resp.message(off) print off time.sleep(3) elif toggle == 0: #say toggle is off resp.message(already_off) print already_off time.sleep(3) else: #say toggle_error resp.message(toggle_error) print toggle_error time.sleep(3) elif message_body == "status" or message_body == "Status": #return status of automatic tracker if toggle == 1: #say toggle is on resp.message(status1) print status1 time.sleep(3) elif toggle == 0: #say toggle is off resp.message(status0) print status0 time.sleep(3) else: #say status_error resp.message(status_error) print status_error time.sleep(3) else: #say invalid command resp.message(error) print error print " " time.sleep(3) return str(resp) print " " def record_loop(loop_on): while True: global sent global toggle if toggle == 1: #toggle does not read as 1 when changed in sms() if distance(2) &lt; 2.5: if sent == 0: print "CLOSE" print "sending message" print distance(2) client.messages.create( to="phone number to send to", #I removed the 2 numbers from_="twillio number", #I removed the 2 numbers body= "CLOSE!", ) sent = 1 else: print "CLOSE" print "not sending" print distance(2) else: print "not close" print distance(2) sent = 0 else: print 'toggle is off' print toggle time.sleep(1) print " " time.sleep(20) if __name__ == "__main__": recording_on = Value('b', True) p = Process(target=record_loop, args=(recording_on,)) p.start() app.run(use_reloader=False) p.join() </code></pre>
-1
2016-10-18T18:49:45Z
40,117,893
<p>Multiprocessing efectivelly runs the target function in another process - which also means that is an entirely new Python program - this otherprogram won't share any variables with the parent program. That is why your use of a global variable to communicate with your secondary loop won't work in this way: the <code>toggle</code> variable available inside your <code>record_loop</code> is independent of the one used by the program's views.</p> <p>In a well formed application, all you'd need is to use an instance of <a href="https://docs.python.org/2/library/multiprocessing.html#exchanging-objects-between-processes" rel="nofollow"><code>multiprocessing.Queue</code></a> to communicate values to code running in a function in another process. </p> <p>However, "weel behaved" is not what you have there - the use of a multiprocessign.Queue assumes the originating process always has access to the one Queue object that is shared with the sub-process. However, you are using a Flask application, which in turn uses the WSGI Python model - which in turn makes mandatoy that each HTTP request processing (that is, each call to your <code>sms</code> view function) is independent of all other resources in the code - inclusing global variables. That is because in a WSGI server context, each HTTP request could be served by a different proces entirely (that will vary with teh WSGI server configuration).</p> <p>So, for "real world" cases of HTTP requests that trigger a longer process on the server side, one of the best approaches is to use <a href="http://www.celeryproject.org/" rel="nofollow">Celery</a>. With Celery, you explicitly start your workers, that exist independent of the processes used to answer HTTP requests (even if both the view and worker code lies on the same <code>.py</code> file). Your views will call functions in the worker in a transparent way, and they will simply execute in asynchronously in another process.</p> <p>The multiprocessing approach with <code>Queue</code> can be used, however, if you don't mind have several processes running your <code>record_loop</code> code in parallel, without one knowing anything about the other - since you are just triggering remote API's on this code, it looks like it would not be a problem.</p>
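<p>A stripped-down sketch of the <code>Queue</code> approach under the caveats above (single worker process; the names are illustrative):</p> <pre><code>import time
from multiprocessing import Process, Queue

q = Queue()

def record_loop(q):
    toggle = 0
    while True:
        while not q.empty():     # drain any toggle updates pushed by the view
            toggle = q.get()
        # ... use toggle in the tracking loop ...
        time.sleep(1)

# inside the Flask view, only push the new state:
#     q.put(1)   # or q.put(0)

if __name__ == '__main__':
    Process(target=record_loop, args=(q,)).start()
</code></pre>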
0
2016-10-18T20:53:48Z
[ "python", "flask", "python-multiprocessing" ]
Append each line in file
40,116,025
<p>I want to append to each line in a file in <code>python</code>. For example:</p> <p><strong>File.txt</strong></p> <pre><code>Is it funny? Is it dog? </code></pre> <p><strong>Expected Result</strong></p> <pre><code>Is it funny? Yes Is it dog? No </code></pre> <p>Assume that Yes/No is given. I am doing it this way:</p> <pre><code>with open('File.txt', 'a') as w: w.write("Yes") </code></pre> <p>But it appends at the end of the file, not on every line.</p> <p><strong>Edit 1</strong></p> <pre><code>with open('File.txt', 'r+') as w: for line in w: w.write(line + " Yes ") </code></pre> <p>This is giving the result</p> <pre><code>Is it funny? Is it dog?Is it funny? Yes Is it dog? Yes </code></pre> <p>I do not need this. It is adding new lines with the appended string. I need </p> <pre><code>Is it funny? Yes Is it dog? No </code></pre>
2
2016-10-18T19:00:14Z
40,116,309
<p>Here is a solution that copies existing file content to a temp file. Modifies it as per needs. Then writes back to original file. Inspiration from <a href="http://stackoverflow.com/questions/17646680/writing-back-into-the-same-file-after-reading-from-the-file">here</a></p> <pre><code>import tempfile filename = "c:\\temp\\File.txt" #Create temporary file t = tempfile.NamedTemporaryFile(mode="r+") #Open input file in read-only mode i = open(filename, 'r') #Copy input file to temporary file for line in i: #For "funny" add "Yes" if "funny" in line: t.write(line.rstrip() + "Yes" +"\n") #For "dog" add "No" elif "dog" in line: t.write(line.rstrip() + "No" +"\n") i.close() #Close input file t.seek(0) #Rewind temporary file o = open(filename, "w") #Reopen input file writable #Overwriting original file with temp file contents for line in t: o.write(line) t.close() #Close temporary file </code></pre>
1
2016-10-18T19:17:58Z
[ "python", "python-3.x" ]
Append each line in file
40,116,025
<p>I want to append to each line in a file in <code>python</code>. For example:</p> <p><strong>File.txt</strong></p> <pre><code>Is it funny? Is it dog? </code></pre> <p><strong>Expected Result</strong></p> <pre><code>Is it funny? Yes Is it dog? No </code></pre> <p>Assume that Yes/No is given. I am doing it this way:</p> <pre><code>with open('File.txt', 'a') as w: w.write("Yes") </code></pre> <p>But it appends at the end of the file, not on every line.</p> <p><strong>Edit 1</strong></p> <pre><code>with open('File.txt', 'r+') as w: for line in w: w.write(line + " Yes ") </code></pre> <p>This is giving the result</p> <pre><code>Is it funny? Is it dog?Is it funny? Yes Is it dog? Yes </code></pre> <p>I do not need this. It is adding new lines with the appended string. I need </p> <pre><code>Is it funny? Yes Is it dog? No </code></pre>
2
2016-10-18T19:00:14Z
40,116,338
<p>You can write to a <em>tempfile</em> then replace the original:</p> <pre><code>from tempfile import NamedTemporaryFile from shutil import move data = ["Yes", "No"] with open("in.txt") as f, NamedTemporaryFile("w",dir=".", delete=False) as temp: # pair up lines and each string for arg, line in zip(data, f): # remove the newline and concat new data temp.write(line.rstrip()+" {}\n".format(arg)) # replace original file move(temp.name,"in.txt") </code></pre> <p>You could also use <em>fileinput</em> with <em>inplace=True</em>:</p> <pre><code>import fileinput import sys for arg, line in zip(data, fileinput.input("in.txt",inplace=True)): sys.stdout.write(line.rstrip()+" {}\n".format(arg)) </code></pre> <p>Output:</p> <pre><code>Is it funny? Yes Is it dog? No </code></pre>
3
2016-10-18T19:19:52Z
[ "python", "python-3.x" ]
python pandas - how can I map values in 1 dataframe to indices in another without looping?
40,116,065
<p>I have 2 dataframes - "df_rollmax" is a derivative of "df_data" with the same shape. I am attempting to map the values of df_rollmax back to df_data and create a third df (df_maxdates) which contains the dates at which each value in df_rollmax originally showed up in df_data.</p> <pre><code>list1 = [[21,101],[22,110],[25,113],[24,112],[21,109],[28,108],[30,102],[26,106],[25,111],[24,110]] df_data = pd.DataFrame(list1,index=pd.date_range('2000-1-1',periods=10, freq='D'), columns=list('AB')) df_rollmax = pd.DataFrame(df_data.rolling(center=False,window=5).max()) mapA = pd.Series(df_data.index, index=df_data['A']) </code></pre> <p>From a previous question, I see that a single date can be found with:</p> <p><code>mapA[rollmax.ix['j','A']]</code> returns <code>Timestamp('2000-01-07 00:00:00')</code></p> <p>But my real dataset is much larger and I would like to fill the third dataframe with dates without looping over every row and column.</p> <p>Mapping back to the indices is a problem due to: <code>ValueError: cannot reindex from a duplicate axis</code> so this isn't working...</p> <pre><code>df_maxdates = pd.DataFrame(index=df_data.index, columns=df_data.columns) for s in df_data.columns: df_maxdates[s] = mapA.loc[df_rollmax[s]] </code></pre> <p>Using the last instance of the duplicate value would be fine, but <code>df.duplicated(keep='last')</code> isn't cooperating. </p> <p>Greatly appreciate any and all wisdom.</p> <p><a href="http://stackoverflow.com/questions/40078107/python-pandas-dataframe-cant-figure-out-how-to-lookup-an-index-given-a-value">Link to original question</a></p> <p>Update - this is what df_maxdates would look like:</p> <p><a href="https://i.stack.imgur.com/FVaB3.png" rel="nofollow"><img src="https://i.stack.imgur.com/FVaB3.png" alt="enter image description here"></a></p>
1
2016-10-18T19:02:27Z
40,117,126
<p>You can use <a href="http://stackoverflow.com/a/40101614/5741205">this BrenBarn's solution</a>:</p> <pre><code>W = 5 # window size df = pd.DataFrame(columns=df_data.columns, index=df_data.index[W-1:]) for col in df.columns.tolist(): df[col] = df_data.index[df_data[col].rolling(W) .apply(np.argmax)[(W-1):] .astype(int) + np.arange(len(df_data)-(W-1))] df = pd.DataFrame(columns=df_data.columns, index=df_data.index[:W-1]).append(df) In [226]: df Out[226]: A B 2000-01-01 NaT NaT 2000-01-02 NaT NaT 2000-01-03 NaT NaT 2000-01-04 NaT NaT 2000-01-05 2000-01-03 2000-01-03 2000-01-06 2000-01-06 2000-01-03 2000-01-07 2000-01-07 2000-01-03 2000-01-08 2000-01-07 2000-01-04 2000-01-09 2000-01-07 2000-01-09 2000-01-10 2000-01-07 2000-01-09 </code></pre> <p>or <a href="http://stackoverflow.com/a/40103020/5741205">this piRSquared's solution</a>:</p> <pre><code>def idxmax(s, w): i = 0 while i + w &lt;= len(s): yield(s.iloc[i:i+w].idxmax()) i += 1 x = pd.DataFrame({'A':[np.nan]*4 + list(idxmax(df_data.A, 5)), 'B':[np.nan]*4 + list(idxmax(df_data.B, 5))}, index=df_data.index) </code></pre> <p>Demo:</p> <pre><code>In [89]: x = pd.DataFrame({'A':pd.to_datetime([np.nan]*4 + list(idxmax(df_data.A, 5))), ...: 'B':pd.to_datetime([np.nan]*4 + list(idxmax(df_data.B, 5)))}, ...: index=df_data.index) ...: In [90]: x Out[90]: A B 2000-01-01 NaT NaT 2000-01-02 NaT NaT 2000-01-03 NaT NaT 2000-01-04 NaT NaT 2000-01-05 2000-01-03 2000-01-03 2000-01-06 2000-01-06 2000-01-03 2000-01-07 2000-01-07 2000-01-03 2000-01-08 2000-01-07 2000-01-04 2000-01-09 2000-01-07 2000-01-09 2000-01-10 2000-01-07 2000-01-09 </code></pre>
1
2016-10-18T20:05:20Z
[ "python", "pandas", "dataframe", "duplicates", "mapping" ]
Sum values in dictionary
40,116,099
<p>I'm working with an Excel file and openpyxl.</p> <p>Below is sample data:</p> <pre><code>Name Value Amy1 4 Bob1 5 Bob1 5 Bob2 8 Chris1 7 Chris2 3 Chris3 6 Chris3 6 Chris3 6 </code></pre> <p>Using the for loop below, I grab the value associated with each unique name.</p> <pre><code>for rowNum in range(2, 11): person = sheet.cell(row = rowNum, column = 13).value people.append(person) personValue.update({person: sheet.cell(row = rowNum, column = 26).value}) </code></pre> <p>That yields a dictionary with a single entry for each name (Amy1, Bob1, Bob2, etc.).</p> <p>I want to merge and sum the value for each matching name to return the following result:</p> <pre><code>Name Value Amy 4 Bob 13 Chris 16 </code></pre>
0
2016-10-18T19:04:12Z
40,116,699
<p>You can use the <code>get()</code> method to set a default value for a key and then add to it that way.</p> <p>Since all of your keys are in the format name#, you can get the end result you want like this:</p> <pre><code>mergedResult = dict() for name in startingDict: mergedResult[name[:-1]] = mergedResult.get(name[:-1], 0) + startingDict[name] </code></pre> <p>Here, the <code>get()</code> method checks for a key <code>name[:-1]</code> (the name from your starting dictionary without the final character) in your result dictionary. If that key isn't present, it adds it with a default value of 0 and returns that value. If that key is already present, it simply returns the corresponding value. Then, the value from your starting dictionary is added to the value in your result dictionary.</p>
0
2016-10-18T19:39:38Z
[ "python", "list", "dictionary", "sum" ]
Sum values in dictionary
40,116,099
<p>I'm working with an Excel file and openpyxl.</p> <p>Below is sample data:</p> <pre><code>Name Value Amy1 4 Bob1 5 Bob1 5 Bob2 8 Chris1 7 Chris2 3 Chris3 6 Chris3 6 Chris3 6 </code></pre> <p>Using the for loop below, I grab the value associated with each unique name.</p> <pre><code>for rowNum in range(2, 11): person = sheet.cell(row = rowNum, column = 13).value people.append(person) personValue.update({person: sheet.cell(row = rowNum, column = 26).value}) </code></pre> <p>That yields a dictionary with a single entry for each name (Amy1, Bob1, Bob2, etc.).</p> <p>I want to merge and sum the value for each matching name to return the following result:</p> <pre><code>Name Value Amy 4 Bob 13 Chris 16 </code></pre>
0
2016-10-18T19:04:12Z
40,116,710
<p>If <code>name = sheet.cell(row = rowNum, column = 13).value</code></p> <p>And <code>value = sheet.cell(row = rowNum, column = 26).value</code></p> <p>Edited according to your comments:</p> <pre><code>from collections import defaultdict people = defaultdict(int) category = defaultdict(int) for rowNum in range(2, 11): # Name with number person = sheet.cell(row = rowNum, column = 13).value # Name without number category_name = ''.join([c for c in person if not c.isdigit()]) people[person] += sheet.cell(row = rowNum, column = 26).value category[category_name] += sheet.cell(row = rowNum, column = 26).value </code></pre> <p>Result - people:</p> <pre><code>Name Value Amy1 4 Bob1 10 Bob2 8 Chris1 7 Chris2 3 Chris3 18 </code></pre> <p>Result - category:</p> <pre><code>Name Value Amy 4 Bob 18 Chris 28 </code></pre> <p>This will work.</p> <p>Thanks Padraic Cunningham. I just realized what i misunderstood.</p>
0
2016-10-18T19:40:15Z
[ "python", "list", "dictionary", "sum" ]
Get row with maximum value from groupby with several columns in PySpark
40,116,117
<p>I have a dataframe similar to </p> <pre><code>from pyspark.sql.functions import avg, first rdd = sc.parallelize( [ (0, "A", 223,"201603", "PORT"), (0, "A", 22,"201602", "PORT"), (0, "A", 22,"201603", "PORT"), (0, "C", 22,"201605", "PORT"), (0, "D", 422,"201601", "DOCK"), (0, "D", 422,"201602", "DOCK"), (0, "C", 422,"201602", "DOCK"), (1,"B", 3213,"201602", "DOCK"), (1,"A", 3213,"201602", "DOCK"), (1,"C", 3213,"201602", "PORT"), (1,"B", 3213,"201601", "PORT"), (1,"B", 3213,"201611", "PORT"), (1,"B", 3213,"201604", "PORT"), (3,"D", 3999,"201601", "PORT"), (3,"C", 323,"201602", "PORT"), (3,"C", 323,"201602", "PORT"), (3,"C", 323,"201605", "DOCK"), (3,"A", 323,"201602", "DOCK"), (2,"C", 2321,"201601", "DOCK"), (2,"A", 2321,"201602", "PORT") ] ) df_data = sqlContext.createDataFrame(rdd, ["id","type", "cost", "date", "ship"]) </code></pre> <p>and I need to aggregate by <code>id</code> and <code>type</code> and get the highest occurrence of <code>ship</code> per group. For example, </p> <pre><code>grouped = df_data.groupby('id','type', 'ship').count() </code></pre> <p>has a column with the number of times of each group:</p> <pre><code>+---+----+----+-----+ | id|type|ship|count| +---+----+----+-----+ | 3| A|DOCK| 1| | 0| D|DOCK| 2| | 3| C|PORT| 2| | 0| A|PORT| 3| | 1| A|DOCK| 1| | 1| B|PORT| 3| | 3| C|DOCK| 1| | 3| D|PORT| 1| | 1| B|DOCK| 1| | 1| C|PORT| 1| | 2| C|DOCK| 1| | 0| C|PORT| 1| | 0| C|DOCK| 1| | 2| A|PORT| 1| +---+----+----+-----+ </code></pre> <p>and I need to get</p> <pre><code>+---+----+----+-----+ | id|type|ship|count| +---+----+----+-----+ | 0| D|DOCK| 2| | 0| A|PORT| 3| | 1| A|DOCK| 1| | 1| B|PORT| 3| | 2| C|DOCK| 1| | 2| A|PORT| 1| | 3| C|PORT| 2| | 3| A|DOCK| 1| +---+----+----+-----+ </code></pre> <p>I tried to use a combination of </p> <pre><code>grouped.groupby('id', 'type', 'ship')\ .agg({'count':'max'}).orderBy('max(count)', ascending=False).\ groupby('id', 'type', 'ship').agg({'ship':'first'}) </code></pre> <p>But it fails. Is there a way to get the maximum row from a count of a group by?</p> <p>On pandas this oneliner does the job:</p> <pre><code>df_pd = df_data.toPandas() df_pd_t = df_pd[df_pd['count'] == df_pd.groupby(['id','type', ])['count'].transform(max)] </code></pre>
0
2016-10-18T19:05:16Z
40,117,220
<p>Based on your expected output, it seems you are only grouping by <code>id</code> and <code>ship</code> - since you already have distinct values in <code>grouped</code> - and consequently drop duplicate elements based on the columns <code>id</code>, <code>ship</code> and <code>count</code>, sorted by <code>type</code>.</p> <p>To accomplish this, we can use <code>Window</code> functions:</p> <pre><code>from pyspark.sql.window import Window from pyspark.sql.functions import rank, col window = (Window .partitionBy(grouped['id'], grouped['ship']) .orderBy(grouped['count'].desc(), grouped['type'])) (grouped .select('*', rank() .over(window) .alias('rank')) .filter(col('rank') == 1) .orderBy(col('id')) .dropDuplicates(['id', 'ship', 'count']) .drop('rank') .show()) +---+----+----+-----+ | id|type|ship|count| +---+----+----+-----+ | 0| D|DOCK| 2| | 0| A|PORT| 3| | 1| A|DOCK| 1| | 1| B|PORT| 3| | 2| C|DOCK| 1| | 2| A|PORT| 1| | 3| A|DOCK| 1| | 3| C|PORT| 2| +---+----+----+-----+ </code></pre>
1
2016-10-18T20:10:17Z
[ "python", "apache-spark", "pyspark" ]
List index out of range(line 5)
40,116,124
<p>Code:</p> <pre><code>selObj = mc.ls(sl=True) sizeSel = len(selObj) for a in range(sizeSel): if a &lt; sizeSel: mc.parent( selObj[a +1], selObj[a]) </code></pre>
-4
2016-10-18T19:05:34Z
40,116,204
<p>I don't know enough about your code to test, but its clear that since <code>a</code> counts to the end of your list, <code>a+1</code> overflows. Just reduce the index counter by one. And no need to check it twice.</p> <pre><code>selObj = mc.ls(sl=True) sizeSel = len(selObj) for a in range(sizeSel-1): mc.parent( selObj[a +1], selObj[a]) </code></pre>
0
2016-10-18T19:11:12Z
[ "python" ]
python: could not convert string to float
40,116,150
<p>I'm submitting this code:</p> <pre><code>a = float(input()) b = float(input()) c = float(input()) if abs(b - c) &lt; a &lt; (b + c) and abs(a - c) &lt; b &lt; (a + c) and abs(a - b) &lt; c &lt; (a + b): print("Perimetro = " + str(a + b + c)) else: print("Area = " + str(((a + b) * c) / 2)) </code></pre> <p>and to me it looked correct, but, as a response, I get: </p> <pre><code>Traceback (most recent call last): File "Main.py", line 1, in &lt;module&gt; a = float(input()) ValueError: could not convert string to float: '6.0 4.0 2.0' Command exited with non-zero status (1) </code></pre> <p>which I don't get, because I converted the strings at the beginning.</p> <p>What am I doing wrong here?</p> <p>Thanks!</p>
-2
2016-10-18T19:07:27Z
40,116,182
<p>The issue is that you are entering all the three values at once. Add one value and then press enter. For example:</p> <pre><code>&gt;&gt;&gt; a = float(input()) 6.0 &gt;&gt;&gt; b = float(input()) 4.0 &gt;&gt;&gt; c = float(input()) 2.0 &gt;&gt;&gt; a, b, c (6.0, 4.0, 2.0) </code></pre> <p>OR, get the single string and <code>split</code> the string to assign the value to <code>a</code>, <code>b</code> and <code>c</code>. For example:</p> <pre><code>&gt;&gt;&gt; a, b, c = [float(item) for item in input().split()] 6.0 4.0 2.0 &gt;&gt;&gt; a, b, c (6.0, 4.0, 2.0) </code></pre>
0
2016-10-18T19:09:44Z
[ "python", "string", "input", "floating-point" ]
xgboost sklearn wrapper value 0for Parameter num_class should be greater equal to 1
40,116,215
<p>I am trying to use the <code>XGBClassifier</code> wrapper provided by <code>sklearn</code> for a multiclass problem. My classes are [0, 1, 2] and the objective I use is <code>multi:softmax</code>. When I try to fit the classifier I get </p> <blockquote> <p>xgboost.core.XGBoostError: value 0for Parameter num_class should be greater equal to 1</p> </blockquote> <p>If I try to set the num_class parameter, I get the error</p> <blockquote> <p>got an unexpected keyword argument 'num_class'</p> </blockquote> <p>Sklearn is setting this parameter automatically, so I am not supposed to pass that argument. But why do I get the first error?</p>
0
2016-10-18T19:11:39Z
40,123,113
<p>You shouldn't have to set this manually, probably what's happening is that the dataset you're training only contains one label, e.g. maybe all 0's.</p>
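<p>A quick way to confirm that before fitting (assuming <code>y</code> holds your training labels):</p> <pre><code>import numpy as np
print(np.unique(y))   # expect array([0, 1, 2]); a single class here would explain the error
</code></pre>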
0
2016-10-19T05:45:01Z
[ "python", "scikit-learn", "xgboost" ]
Sum of several columns from a pandas dataframe
40,116,219
<p>So say I have the following table:</p> <pre><code>In [2]: df = pd.DataFrame({'a': [1,2,3], 'b':[2,4,6], 'c':[1,1,1]}) In [3]: df Out[3]: a b c 0 1 2 1 1 2 4 1 2 3 6 1 </code></pre> <p>I can sum a and b that way:</p> <pre><code>In [4]: sum(df['a']) + sum(df['b']) Out[4]: 18 </code></pre> <p>However this is not very convenient for larger dataframe, where you have to sum multiple columns together.</p> <p>Is there a neater way to sum columns (similar to the below)? What if I want to sum the entire DataFrame without specifying the columns?</p> <pre><code>In [4]: sum(df[['a', 'b']]) #that will not work! Out[4]: 18 In [4]: sum(df) #that will not work! Out[4]: 21 </code></pre>
3
2016-10-18T19:11:50Z
40,116,249
<p>I think you can use double <code>sum</code> - first <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sum.html" rel="nofollow"><code>DataFrame.sum</code></a> create <code>Series</code> of sums and second <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.sum.html" rel="nofollow"><code>Series.sum</code></a> get sum of <code>Series</code>:</p> <pre><code>print (df[['a','b']].sum()) a 6 b 12 dtype: int64 print (df[['a','b']].sum().sum()) 18 </code></pre> <p>You can also use:</p> <pre><code>print (df[['a','b']].sum(axis=1)) 0 3 1 6 2 9 dtype: int64 print (df[['a','b']].sum(axis=1).sum()) 18 </code></pre> <p>Thank you <a href="http://stackoverflow.com/questions/40116219/sum-of-several-columns-from-a-pandas-dataframe/40116249?noredirect=1#comment67506082_40116249">pirSquared</a> for another solution - convert <code>df</code> to <code>numpy array</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.values.html" rel="nofollow"><code>values</code></a> and then <code>sum</code>:</p> <pre><code>print (df[['a','b']].values.sum()) 18 </code></pre> <hr> <pre><code>print (df.sum().sum()) 21 </code></pre>
4
2016-10-18T19:13:55Z
[ "python", "pandas", "dataframe" ]
Django "module 'portal.views' has no attribute 'MyAccount'"
40,116,421
<p>I just made another class based view in Django and it apparently isn't being imported in urls.py, which is confusing. I even simplified it by making the view just a def.</p> <p>views.py:</p> <pre><code>from django.shortcuts import render from django.views import generic from django.contrib.auth.decorators import login_required from django.utils.decorators import method_decorator from django.contrib.auth.models import User from .models import RxgAsset # Create your views here. def index(request): return render(request, 'index.html') @method_decorator(login_required, name='dispatch') class GregAssets(generic.ListView): template_name = 'greg_assets.html' context_object_name = 'greg_assets' def get_queryset(self): if self.request.user.is_superuser: return GregAsset.objects.order_by('id') else: return GregAsset.objects \ .filter(organization=self.request.user.employee.organization) \ .order_by('id') class GregShowAsset(generic.DetailView): model = GregAsset template_name = 'greg_show_asset.html' context_object_name = 'greg_asset' def get_object(self, queryset=None): asset = GregAsset.objects.get(id=self.kwargs['pk']) if self.request.user.is_superuser: return asset else: if self.request.user.employee.organization == asset.organization: return asset else: return None def MyProfile(request): return render(request, 'index.html') </code></pre> <p>urls.py: from django.conf.urls import url from portal import views</p> <p>urlpatterns = [</p> <pre><code>url(r'^$', views.index, name='index'), url(r'^rxg_assets/$', views.RxgAssets.as_view(), name='rxg_assets'), url(r'^rxg_assets/(?P&lt;pk&gt;[0-9]+)/$', views.RxgShowAsset.as_view(), name='rxg_show_asset'), url(r'^my_account/$', views.MyAccount.as_view(), name='my_account'), ] </code></pre> <p>Output:</p> <pre><code>AttributeError at /portal/my_account/ module 'portal.views' has no attribute 'MyAccount' Request Method: GET Request URL: https://www.website.com/portal/my_account/ Django Version: 1.10.2 Exception Type: AttributeError Exception Value: module 'portal.views' has no attribute 'MyAccount' Exception Location: /server/apache/partner/portal/urls.py in &lt;module&gt;, line 8 Python Executable: Python Version: 3.5.2 Python Path: ['/server/apache/partner', '/server/apache/partner-env/lib/python3.5/site-packages', '/usr/local/lib/python35.zip', '/usr/local/lib/python3.5', '/usr/local/lib/python3.5/plat-freebsd11', '/usr/local/lib/python3.5/lib-dynload', '/usr/local/lib/python3.5/site-packages'] Server time: Tue, 18 Oct 2016 19:17:27 +0000 </code></pre>
0
2016-10-18T19:23:38Z
40,118,461
<p>MyProfile != MyAccount. Thanks Daniel, I don't know why I didn't see that.</p> <p>-gns</p>
0
2016-10-18T21:33:31Z
[ "python", "django" ]
Having trouble rewriting code to list comprehension for image rotation in python
40,116,437
<p>After not finding my problem here on Stack Overflow (how to rewrite a for-loop that inserts values as a list comprehension), I have to ask now if it is possible to rewrite this code:</p> <pre><code>rotate = np.zeros((w,h,c), np.uint8) # create an empty image filled with zeros turned 90° for y in xrange(h): for x in xrange(w): rotate[x][y] = img[y][x] </code></pre> <p>into a list comprehension? I thought something like this would work, but it didn't:</p> <pre><code>rotate[x][y] = img[y][x] for y in range(h) for x in range(w) </code></pre> <p>After that I just played around with various combinations of adding indexes and brackets, and I always got syntax errors. Just for the record, I know that there are functions for rotating images in OpenCV and in NumPy; I'm just interested in rewriting the for-loop as a list comprehension.</p>
0
2016-10-18T19:24:27Z
40,116,496
<pre><code>rotate = np.array([[img[y][x] for y in xrange(h)] for x in xrange(w)]) </code></pre>
3
2016-10-18T19:27:55Z
[ "python", "opencv", "numpy", "list-comprehension" ]
Django -- Form Field on change
40,116,461
<p>I'm using Django forms to display data.</p> <p>There is a HTML select field - which has 2 options a) teachers and b) Students.</p> <p>Django forms:- </p> <pre><code>self.fields['account_type'].choices = [('student','Student'),('teacher', 'Teacher')] self.helper.layout = Layout( HTML('''&lt;h5&gt;Sign Up Information&lt;/h5&gt;'''), Div( Field('account_type', placeholder="Account Type", css_class='form-control'), css_class = 'form-group' ), </code></pre> <p>Based on whether you select "Student" or "Teacher" you need auto populate another field - topics. How can I fire the 'onchange' event in Django forms.</p>
2
2016-10-18T19:25:44Z
40,116,778
<p>It looks like you are using Django Crispy forms and not plain Django forms.</p> <p>If you want to set the <code>onchange</code> attribute, you should be able to just pass that as a keyword argument, as <a href="https://django-crispy-forms.readthedocs.io/en/latest/layouts.html#layout-objects-attributes" rel="nofollow">described in the docs.</a></p> <pre><code>Field('account_type', placeholder="Account Type", css_class='form-control', onchange="myChangeHander()" ) </code></pre> <p>A better way would be to give that element an id and to attach an event in JavaScript.</p> <pre><code>Field('account_type', placeholder="Account Type", css_class='form-control', css_id="account_type_id" ) </code></pre> <p>Assuming you use jQuery, you would put something like this somewhere in a <code>&lt;script&gt;</code> tag or JavaScript file:</p> <pre><code>$("#account_type_id").on("change", function() {...}); </code></pre>
0
2016-10-18T19:44:13Z
[ "python", "django" ]
Python subprocess hangs
40,116,548
<p>I'm executing the following subprocess...</p> <p><code>p.call(["./hex2raw", "&lt;", "exploit4.txt", "|", "./rtarget"])</code></p> <p>...and it hangs.</p> <p>But if I execute <code>kmwe236@kmwe236:~/CS485/prog3/target26$ ./hex2raw &lt; exploit4.txt | ./rtarget</code> then it executes fine. Is there something wrong with using the input or piping operator?</p> <p>I also tried <code>sp.call(["./hex2raw", "&lt;", "exploit4.txt", "|", "./rtarget"], shell=True)</code></p> <p>The entire code looks like this <strong>UPDATED WITH SUGGESTIONS</strong></p> <pre><code>import subprocess as sp import pdb for i in range(4201265,4201323): pdb.set_trace() d = hex(i)[2:] output = " " for i in range(len(d),0,-2): output = output + d[i-2:i] + " " out_buffer = "00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00" + output + "00 00 00 00" text_file = open("exploit4.txt", "w") text_file.write("%s" % out_buffer) # sp.call(["./hex2raw", "&lt;", "exploit4.txt", "|", "./rtarget"], shell=True) with open("exploit4.txt") as inhandle: p = sp.Popen("./hex2raw",stdin=inhandle,stdout=sp.PIPE) p2 = sp.Popen("./rtarget",stdin=p.stdout,stdout=sp.PIPE) [output,error] = p2.communicate() </code></pre> <p>I'm getting an error is </p> <pre><code> File "/usr/lib/python2.7/subprocess.py", line 710, in __init__ errread, errwrite) File "/usr/lib/python2.7/subprocess.py", line 1327, in _execute_child raise child_exception OSError: [Errno 8] Exec format error </code></pre> <p>After debugging it occurs at the fire subprocess call <code>p = sp.Popen("./hex2raw",stdin=inhandle,stdout=sp.PIPE)</code></p>
0
2016-10-18T19:31:14Z
40,116,619
<p>Since you're using redirection and piping, you have to enable <code>shell=True</code></p> <pre><code>sp.call(["./hex2raw", "&lt;", "exploit4.txt", "|", "./rtarget"],shell=True) </code></pre> <p>but it would be much cleaner to use <code>Popen</code> on both executables and feeding the contents of <code>exploit4.txt</code> as input. Example below, adapted to your case:</p> <pre><code>import subprocess with open("exploit4.txt") as inhandle: p = subprocess.Popen("./hex2raw",stdin=inhandle,stdout=subprocess.PIPE) p2 = subprocess.Popen("./rtarget",stdin=p.stdout,stdout=subprocess.PIPE) [output,error] = p2.communicate() print(output) # checking return codes is also a good idea rc2 = p2.wait() rc = p.wait() </code></pre> <p>Explanation:</p> <ol> <li>open the input file, get its handle <code>inhandle</code></li> <li>open the first subprocess, redirecting <code>stdin</code> with <code>inhandle</code>, and <code>stdout</code> to an output stream. Get the pipe handle (p)</li> <li>open the second subprocess, redirecting <code>stdin</code> with previous process <code>stdout</code>, and <code>stdout</code> to an output stream</li> <li>let the second process <code>communicate</code>. It will "pull" the first one by consuming its output: both processes work in a pipe fashion</li> <li>get return codes and print the result</li> </ol> <p>Note: you get "format error" because one or both executables are actually shell or other non-native executables. In that case, just add the <code>shell=True</code> option to the relevant <code>Popen</code> calls.</p>
2
2016-10-18T19:34:55Z
[ "python", "subprocess" ]
PYSPARK : How to work with dataframes?
40,116,603
<p>I have the following dataframes</p> <pre><code>from pyspark import SparkContext from pyspark.sql import SQLContext from pyspark.sql.functions import * sc = SparkContext() sql = SQLContext(sc) df1 = sql.createDataFrame([("Mark", 68), ("John", 59), ("Mary", 49)], ['Name', \ 'Weight']) df2 = sql.createDataFrame([("White", 68), ("Smith", 59), ("Gary", 49)], ['Name', \ 'Weight']) </code></pre> <p>Now I want to randomly choose n = 2 (can be any number) pairs from the weight columns and create the following pairs, each pair consist of two unequal weights:</p> <pre><code>(68, 59) (49, 68) </code></pre> <p>then I want to choose from df1 only those with weight 68 and 49, and from df2 only those with weight 59 and 68 and create another dataframe:</p> <pre><code>df3 = sql.createDataFrame([("Mark", 68, "Smith", 59), ("Mary", 49, "White", 68)], ['Name1', \ 'Weight1', 'Name2', 'Weight2']) </code></pre> <p>I'm working with big data. Given n, I first need to generate n pairs and then create the final dataframe.</p>
-1
2016-10-18T19:33:57Z
40,117,117
<p>Try:</p> <pre><code>&gt;&gt;&gt; df1.where(df1['Weight'].between(68, 59)).union(df2.where(df2['Weight'].between(49, 68))) </code></pre>
0
2016-10-18T20:04:40Z
[ "python", "apache-spark", "pyspark" ]
Python Selenium Xpath from firebug not found
40,116,629
<p>I am trying to login to the ESPN website using selenium. Here is my code thus far</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC driver = webdriver.Firefox() driver.maximize_window() url = "http://www.espn.com/fantasy/" driver.get(url) login_button = driver.find_element_by_xpath("/html/body/div[6]/section/section/div/section[1]/div/div[1]/div[2]/a[2]") login_button.click() try: element = WebDriverWait(driver, 10).until( EC.presence_of_element_located((By.XPATH, "/html/body/div[2]/div/div/section/section/form/section/div[1]/div/label/span[2]/input"))) except: driver.quit() </code></pre> <p>Basically, there are 2 steps, first I have to click the login button and then I have to fill in the form. Currently, I am clicking the login button and the form is popping up but then I can't find the form. I have been using firebug to get the xpath as suggested in other SO questions. I don't really know much about selenium so I am not sure where to look</p>
0
2016-10-18T19:35:26Z
40,116,943
<p>This works for me, switching to the iframe first. Note that you will need to switch back out of the iframe after entering the credentials.</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC driver = webdriver.Firefox() driver.maximize_window() url = "http://www.espn.com/fantasy/" driver.get(url) login_button = driver.find_element_by_xpath("/html/body/div[6]/section/section/div/section[1]/div/div[1]/div[2]/a[2]") login_button.click() iframe = driver.find_element_by_id("disneyid-iframe") driver.switch_to.frame(iframe) try: element = WebDriverWait(driver, 10).until( EC.presence_of_element_located((By.XPATH, "/html/body/div[2]/div/div/section/section/form/section/div[1]/div/label/span[2]/input"))) element.send_keys("my username") import time time.sleep(100) finally: driver.quit() </code></pre>
1
2016-10-18T19:54:15Z
[ "python", "selenium", "xpath" ]
Python Selenium Xpath from firebug not found
40,116,629
<p>I am trying to login to the ESPN website using selenium. Here is my code thus far</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC driver = webdriver.Firefox() driver.maximize_window() url = "http://www.espn.com/fantasy/" driver.get(url) login_button = driver.find_element_by_xpath("/html/body/div[6]/section/section/div/section[1]/div/div[1]/div[2]/a[2]") login_button.click() try: element = WebDriverWait(driver, 10).until( EC.presence_of_element_located((By.XPATH, "/html/body/div[2]/div/div/section/section/form/section/div[1]/div/label/span[2]/input"))) except: driver.quit() </code></pre> <p>Basically, there are 2 steps, first I have to click the login button and then I have to fill in the form. Currently, I am clicking the login button and the form is popping up but then I can't find the form. I have been using firebug to get the xpath as suggested in other SO questions. I don't really know much about selenium so I am not sure where to look</p>
0
2016-10-18T19:35:26Z
40,116,944
<p>Try to use</p> <pre><code>driver.switch_to_frame('disneyid-iframe') # handle authorization pop-up driver.switch_to_default_content() # if required </code></pre>
1
2016-10-18T19:54:21Z
[ "python", "selenium", "xpath" ]
How to use a variable to find another variable in a list - python
40,116,648
<p>(Python 3.x)</p> <pre><code>z=[] x=0 while 1==1: x=x+1 y=1 z.append(x) while y==1: a = 0 b = 0 if z(a)==x: print(x) y = 2 elif x%z(a)!= 0: a = a+1 elif b == 2: y = 2 else: b = b+1 </code></pre> <p>So, I made a code to find all the prime numbers until python crashes. However, it relies on z(a) for it to work. The idea is that as "a" changes, it moves on in the list. </p> <p>"z(a)" is where the error lies, so does anyone know a way to fix this?</p>
-2
2016-10-18T19:36:39Z
40,116,670
<p><code>z</code> is a list. You can access values inside it by index using the <code>z[a]</code> operator (not <code>z(a)</code>, which is a function call with <code>a</code> as the argument).</p> <hr> <p>I've taken the liberty of using <code>boolean variables</code>, <code>+=</code> operators and unpacking values:</p> <pre><code>z = [] x = 0 while True: x += 1 y = True z.append(x) while y: a, b = 0, 0 if z[a] == x: print(x) y = False elif x % z[a]: a += 1 elif b == 2: y = False else: b += 1 </code></pre> <hr> <p>I believe this is what you want to achieve (an infinitely incrementing prime generator; the <code>n &gt; 1</code> check is needed because 0 and 1 are not prime):</p> <pre><code>def prime(n): return n &gt; 1 and not any(n % i == 0 for i in range(2, n)) x = 2 while True: if prime(x): print(x) x += 1 </code></pre>
0
2016-10-18T19:37:58Z
[ "python", "python-3.x" ]
Python: Elegant way to store items for checking item existence in a container
40,116,653
<p>In the situation I encounter, I would like to define "elegant" being having <strong>1) constant O(1) time complexity</strong> for checking if an item exists and <strong>2) store only items</strong>, nothing more.</p> <p>For example, if I use a list</p> <pre><code>num_list = [] for num in range(10): # Dummy operation to fill the container. num_list += num if 1 in num_list: print("Number exists!") </code></pre> <p>The operation "in" will take <strong>O(n) time</strong> according to [<a href="https://wiki.python.org/moin/TimeComplexity" rel="nofollow">Link</a>]</p> <p>In order to achieve constant checking time, I may employ a dictionary</p> <pre><code>num_dict = {} for num in range(10): # Dummy operation to fill the container. num_dict[num] = True if 1 in num_dict: print("Number exists!") </code></pre> <p>In the case of a dictionary, the operation "in" costs <strong>O(1) time</strong> according to [<a href="http://stackoverflow.com/questions/17539367/python-dictionary-keys-in-complexity">Link</a>], but additional <strong>O(n) storage</strong> is required to store dummy values. Therefore, both implementations/containers seem inelegant.</p> <p>What would be a better implementation/container to achieve constant O(1) time for checking if an item exists while only storing the items? How to keep resource requirement to the bare minimum?</p>
0
2016-10-18T19:37:02Z
40,116,882
<p>The solution here is to use a <code>set</code>, which doesn't require you to save a dummy value for each item: membership tests are O(1) on average and only the items themselves are stored.</p>
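<p>A minimal sketch of that approach, reusing the dummy fill loop from the question:</p>
<pre><code>num_set = set()
for num in range(10):
    # Dummy operation to fill the container.
    num_set.add(num)

if 1 in num_set:          # average O(1) membership test
    print("Number exists!")
</code></pre>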
1
2016-10-18T19:50:55Z
[ "python" ]
Python: Elegant way to store items for checking item existence in a container
40,116,653
<p>In the situation I encounter, I would like to define "elegant" being having <strong>1) constant O(1) time complexity</strong> for checking if an item exists and <strong>2) store only items</strong>, nothing more.</p> <p>For example, if I use a list</p> <pre><code>num_list = [] for num in range(10): # Dummy operation to fill the container. num_list += num if 1 in num_list: print("Number exists!") </code></pre> <p>The operation "in" will take <strong>O(n) time</strong> according to [<a href="https://wiki.python.org/moin/TimeComplexity" rel="nofollow">Link</a>]</p> <p>In order to achieve constant checking time, I may employ a dictionary</p> <pre><code>num_dict = {} for num in range(10): # Dummy operation to fill the container. num_dict[num] = True if 1 in num_dict: print("Number exists!") </code></pre> <p>In the case of a dictionary, the operation "in" costs <strong>O(1) time</strong> according to [<a href="http://stackoverflow.com/questions/17539367/python-dictionary-keys-in-complexity">Link</a>], but additional <strong>O(n) storage</strong> is required to store dummy values. Therefore, both implementations/containers seem inelegant.</p> <p>What would be a better implementation/container to achieve constant O(1) time for checking if an item exists while only storing the items? How to keep resource requirement to the bare minimum?</p>
0
2016-10-18T19:37:02Z
40,116,958
<p>Normally you can't optimise both space and time together. One thing you can do is use more details about the range of the data (here the min to max value of <code>num</code>) and the size of the data (here the number of times the loop runs, i.e. 10). Then you have two options:</p> <ol> <li>If the range is limited, go for the dictionary method (or even an array-index method).</li> <li>If the size is limited, go for the list method.</li> </ol> <p>If you choose the right method, you will probably achieve constant time and space even for a large sample.</p> <p>EDIT: A <strong>set</strong> is a hash table, implemented very similarly to the Python dict, with some optimizations that take advantage of the fact that the values are always null (in a set, we only care about the keys). Set operations do require iteration over at least one of the operand tables (both in the case of union). <strong>Iteration isn't any cheaper than for any other collection (O(n)), but membership testing is O(1) on average.</strong></p>
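<p>As a rough sketch of the array-index idea (assuming the values are small non-negative integers with a known upper bound), membership becomes a plain index lookup:</p>
<pre><code>MAX_VALUE = 10            # assumed known upper bound
present = [False] * (MAX_VALUE + 1)

for num in range(10):
    present[num] = True   # dummy fill, mirroring the question

if present[1]:            # O(1) lookup, one boolean per possible value
    print("Number exists!")
</code></pre>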
0
2016-10-18T19:55:20Z
[ "python" ]
Exclude Item from Web-Scraped Loop
40,116,665
<p>Suppose I have the following <code>html</code>:</p> <pre><code>&lt;h4&gt; &lt;a href="http://www.google.com"&gt;Google&lt;/a&gt; &lt;/h4&gt; &lt;h4&gt;Random Text&lt;/h4&gt; </code></pre> <p>I am able to identify all <code>h4</code> headings via a loop such as:</p> <pre><code>for url in soup.findAll("h4") print(url.get_text()) </code></pre> <p>And that works well except it includes the "random text" element of the <code>h4</code> heading. Is it possible to programmatically remove occurrences of <code>h4</code> headings that do not meet a certain criteria - for example, those that don't contain a link?</p>
1
2016-10-18T19:37:42Z
40,116,743
<p>Sure, you can go with a straightforward approach, simply filtering the headings:</p> <pre><code>for url in soup.find_all("h4"): if not url.a: # "url.a" is a shortcut to "url.find('a')" continue print(url.get_text()) </code></pre> <p>Or, a better way would be to filter them with a <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#a-function" rel="nofollow">function</a>:</p> <pre><code>for url in soup.find_all(lambda tag: tag.name == "h4" and tag.a): print(url.get_text()) </code></pre> <p>Or, even better, go straight to the <code>a</code> elements:</p> <pre><code>for url in soup.select("h4 &gt; a"): print(url.get_text()) </code></pre> <p><code>h4 &gt; a</code> here is a <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#css-selectors" rel="nofollow">CSS selector</a> that would match <code>a</code> elements that are direct children of <code>h4</code> tags.</p>
3
2016-10-18T19:41:57Z
[ "python", "web-scraping", "beautifulsoup" ]
Exclude Item from Web-Scraped Loop
40,116,665
<p>Suppose I have the following <code>html</code>:</p> <pre><code>&lt;h4&gt; &lt;a href="http://www.google.com"&gt;Google&lt;/a&gt; &lt;/h4&gt; &lt;h4&gt;Random Text&lt;/h4&gt; </code></pre> <p>I am able to identify all <code>h4</code> headings via a loop such as:</p> <pre><code>for url in soup.findAll("h4") print(url.get_text()) </code></pre> <p>And that works well except it includes the "random text" element of the <code>h4</code> heading. Is it possible to programmatically remove occurrences of <code>h4</code> headings that do not meet a certain criteria - for example, those that don't contain a link?</p>
1
2016-10-18T19:37:42Z
40,116,796
<p>Use list comprehension as the most pythonic approach:</p> <pre><code>[i.get_text() for i in soup.findAll("h4") if #Insert criteria here#] </code></pre>
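<p>For the link criterion in the question, the filled-in criteria might look like this (keeping only headings that contain an <code>&lt;a&gt;</code> tag):</p>
<pre><code>[i.get_text() for i in soup.findAll("h4") if i.find("a")]
</code></pre>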
0
2016-10-18T19:45:29Z
[ "python", "web-scraping", "beautifulsoup" ]
Minimize memory overhead in sparse matrix inverse
40,116,690
<p>As pretense, I am continuing development in Python 2.7 from a prior question: <a href="http://stackoverflow.com/questions/40050947/determining-a-sparse-matrix-quotient">Determining a sparse matrix quotient</a> </p> <h2>My existing code:</h2> <pre><code>import scipy.sparse as sp k = sp.csr_matrix(([], ([],[])),shape=[R,R]) denom = sp.csc_matrix(denominator) halfeq = sp.linalg.inv(denom) k = numerator.dot(halfeq) </code></pre> <p>I was successful in calculating for the base <code>k</code> and <code>denom</code>. Python continued attempting calculation on <code>halfeq</code>. The process sat in limbo for aproximately 2 hours before returning an error </p> <h2>Error Statement:</h2> <pre><code>Not enough memory to perform factorization. Traceback (most recent call last): File "&lt;myfilename.py&gt;", line 111, in &lt;module&gt; halfeq = sp.linalg.inv(denom) File "/opt/anaconda/lib/python2.7/site-packages/scipy/sparse/linalg/matfuncs.py", line 61, in inv Ainv = spsolve(A, I) File "/opt/anaconda/lib/python2.7/site-packages/scipy/sparse/linalg/dsolve/linsolve.py", line 151, in spsolve Afactsolve = factorized(A) File "/opt/anaconda/lib/python2.7/site-packages/scipy/sparse/linalg/dsolve/linsolve.py", line 366, in factorized return splu(A).solve File "/opt/anaconda/lib/python2.7/site-packages/scipy/sparse/linalg/dsolve/linsolve.py", line 242, in splu ilu=False, options=_options) MemoryError </code></pre> <p>From the <a href="https://github.com/scipy/scipy/blob/master/scipy/sparse/linalg/dsolve/SuperLU/SRC/smemory.c" rel="nofollow">scipy/smemory.c sourcecode</a>, the initial statement from the error is found on line 256. I am unable to further analyze the memory defs to determine how to best reallocate memory usage sufficient for execution. </p> <p>For reference, </p> <p><code>numerator</code> has <code>shape: (552297, 552297)</code> with <code>stored elements: 301067607</code> calculated as <code>sp.csr_matrix(A.T.dot(Ap))</code></p> <p><code>denominator</code> has <code>shape: (552297, 552297)</code> with <code>stored elements: 170837213</code> calculated as <code>sp.csr_matrix(A.T.dot(A))</code></p> <p><strong>EDIT</strong>: I've found <a href="https://www.reddit.com/r/Python/comments/3c0m7b/what_is_the_most_precise_way_to_invert_large/" rel="nofollow">a related question on Reddit</a>, but cannot determine how I would change my equation from <code>numerator * inv(denominator) = k</code></p>
0
2016-10-18T19:39:06Z
40,119,148
<p>No need to 'preallocate' <code>k</code>; this isn't a compiled language. Not that it costs anything, though.</p> <pre><code>k = sp.csr_matrix(([], ([],[])),shape=[R,R]) </code></pre> <p>I need to double check this, but I think the <code>dot/inv</code> can be replaced by one call to <code>spsolve</code>. Remember in the other question I noted that <code>inv</code> is <code>spsolve(A, I)</code>.</p> <pre><code>denom = sp.csc_matrix(denominator) #halfeq = sp.linalg.inv(denom) #k = numerator.dot(halfeq) k = sp.linalg.spsolve(denom, numerator) </code></pre> <p>That said, it looks like the problem is in the <code>inv</code> part, the <code>factorized(denom)</code>. While your arrays are sparse (the denom density is 0.00056), they still have a large number of values.</p> <p>Maybe it would help to step back and look at:</p> <pre><code>num = A.T.dot(Ap) den = A.T.dot(A) k = solve(den, num) </code></pre> <p>In other words, review the matrix algebra.</p> <pre><code>(A'*Ap)/(A'*A) </code></pre> <p>I'm a little rusty on this. Can we reduce this? Can we partition? </p> <p>Just throwing great big arrays together, even if they are sparse, isn't working.</p> <p>How about providing small <code>A</code> and <code>Ap</code> arrays that we can use for testing? I'm not interested in testing memory limits, but I'd like to experiment with different calculation methods.</p> <p>The sparse linalg module has a number of iterative solvers. I have no idea whether their memory use is greater or less.</p>
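<p>To make that last remark concrete, here is a rough sketch of what an iterative approach could look like. It assumes <code>den = A.T.dot(A)</code> is symmetric positive (semi-)definite, so conjugate gradients applies, and it solves for one (densified) column of the numerator at a time; for the full 552297-column problem you would still have to decide where to put the results, so treat this purely as an illustration of the iterative-solver idea rather than a drop-in replacement.</p>
<pre><code>import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

den = sp.csc_matrix(denominator)   # A.T.dot(A), symmetric
num = sp.csc_matrix(numerator)     # A.T.dot(Ap)

for j in range(num.shape[1]):
    b = np.asarray(num[:, j].todense()).ravel()
    xj, info = spla.cg(den, b)     # info == 0 means it converged
    # ... store or process column j of k here ...
</code></pre>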
1
2016-10-18T22:30:22Z
[ "python", "scipy", "sparse-matrix" ]
making a chat client and wont work
40,116,815
<p>what's wrong here I'm stuck :( I'm using <strong>3.4.4</strong> if that helps I've tried everything! I've even searched on this! It keeps saying:</p> <pre><code>Traceback (most recent call last): File "C:\Users\matthew\Desktop\chatclient.py", line 36, in &lt;module&gt; s.sendto(alias.encode() + ": " + message.encode(), server) TypeError: can't concat bytes to str </code></pre> <p>Here's the code:</p> <pre><code>import socket import _thread import threading import time tLock = threading.Lock() shutdown = False def recieving(name, sock): while not shutdown: try: tLock.acquire() while True: data.addr = sock.recvfrom(1024).decode() print (str(data)) except: pass finally: tLock.release() host = '127.0.0.1' port = 0 server = ('127.0.0.1', 5000) s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) s.bind((host, port)) s.setblocking(0) rT = threading.Thread(target=recieving, args=("recvthread", s)) rT.start() alias = input("Name: ") message = input(alias + "-&gt; ") while message != 'q': if message != '': s.sendto(alias.encode() + ": " + message.encode(), server) tLock.acquire() message = input(alias + "-&gt; ") tLock.release() time.sleep(0.2) shutdown = True rT.join() s.close() </code></pre> <p>could it be my server I will type it if needed! and thx again!</p>
0
2016-10-18T19:46:08Z
40,116,949
<p>The message is right! Once you encode <code>alias</code> and <code>message</code>, they are <code>bytes</code> not strings. But <code>": "</code> is a string, hence the error. In python 3.x, strings are unicode and need to be encoded to bytes to be saved to disk or sent on the wire.</p> <p>An additional but subtle error is that you are using the default encoding for your computer but really the sending and receiving computers should agree on an encoding and use that. UTF-8 is a good choice.</p> <p>So, change</p> <pre><code>s.sendto(alias.encode() + ": " + message.encode(), server) </code></pre> <p>to</p> <pre><code>s.sendto("{}: {}".format(alias, message).encode('utf-8'), server) </code></pre>
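<p>On the receiving side the same applies in reverse: <code>recvfrom</code> hands you <code>bytes</code>, which you would decode with the agreed-upon encoding before printing. A rough sketch, assuming the loop inside your <code>recieving()</code> thread:</p>
<pre><code>data, addr = sock.recvfrom(1024)
text = data.decode('utf-8')   # bytes off the wire -&gt; str
print(text)
</code></pre>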
0
2016-10-18T19:54:37Z
[ "python", "sockets" ]
How to search the entire HDD for all pdf files?
40,116,923
<p>As the title suggests, I would like to get python 3.5 to search my root ('C:\') for pdf files and then move those files to a specific folder. This task can easily split into 2: 1. Search my root for files with the pdf extension. 2. Move those to a specific folder.</p> <p>Now. I know how to search for a specific file name, but not plural files that has a specific extension. </p> <pre><code>import os print('Welcome to the Walker Module.') print('find(name, path) or find_all(name, path)') def find(name, path): for root, dirs, files in os.walk(path): print('Searching for files...') if name in files: return os.path.join(root, name) def find_all(name, path): result = [] for root, dirs, files in os.walk(path): print('Searching for files...') if name in files: result.append(os.path.join(root, name)) return result </code></pre> <p>This little program will find me either the 1st or all locations of a specific file. I, however, can not modify this to be able to search for pdf files due to the lack of knowledge with python and programming in general.</p> <p>Would love to have some kind of insight on where to go from here. </p> <p>To sum it up, </p> <ol> <li>Search the root for all pdf files. </li> <li>Move those files into a specific location. Lets say 'G:\Books'</li> </ol> <p>Thanks in advance. </p>
0
2016-10-18T19:53:24Z
40,122,637
<p>Your find_all function is very close to the final result. When you loop through the files, you can check their extension with os.path.splitext, and if they are .pdf files you can move them with shutil.move</p> <p>Here's an example that walks the tree of a source directory, checks the extension of every file and, in case of match, moves the files to a destination directory:</p> <pre><code>import os import shutil def move_all_ext(extension, source_root, dest_dir): # Recursively walk source_root for (dirpath, dirnames, filenames) in os.walk(source_root): # Loop through the files in current dirpath for filename in filenames: # Check file extension if os.path.splitext(filename)[-1] == extension: # Move file shutil.move(os.path.join(dirpath, filename), os.path.join(dest_dir, filename)) # Move all pdf files from C:\ to G:\Books move_all_ext(".pdf", "C:\\", "G:\\Books") </code></pre>
0
2016-10-19T05:11:34Z
[ "python", "windows", "python-3.x" ]
How to search the entire HDD for all pdf files?
40,116,923
<p>As the title suggests, I would like to get python 3.5 to search my root ('C:\') for pdf files and then move those files to a specific folder. This task can easily split into 2: 1. Search my root for files with the pdf extension. 2. Move those to a specific folder.</p> <p>Now. I know how to search for a specific file name, but not plural files that has a specific extension. </p> <pre><code>import os print('Welcome to the Walker Module.') print('find(name, path) or find_all(name, path)') def find(name, path): for root, dirs, files in os.walk(path): print('Searching for files...') if name in files: return os.path.join(root, name) def find_all(name, path): result = [] for root, dirs, files in os.walk(path): print('Searching for files...') if name in files: result.append(os.path.join(root, name)) return result </code></pre> <p>This little program will find me either the 1st or all locations of a specific file. I, however, can not modify this to be able to search for pdf files due to the lack of knowledge with python and programming in general.</p> <p>Would love to have some kind of insight on where to go from here. </p> <p>To sum it up, </p> <ol> <li>Search the root for all pdf files. </li> <li>Move those files into a specific location. Lets say 'G:\Books'</li> </ol> <p>Thanks in advance. </p>
0
2016-10-18T19:53:24Z
40,131,752
<p>You can use <a href="https://docs.python.org/3/library/glob.html" rel="nofollow"><code>glob</code></a> from python 3.5 onwards. It supports a recursive search.</p> <blockquote> <p>If recursive is true, the pattern “**” will match any files and zero or more directories and subdirectories. If the pattern is followed by an os.sep, only directories and subdirectories match.</p> </blockquote> <p>Therefore you can use it like</p> <pre><code>import glob from os import path import shutil def searchandmove(wild, srcpath, destpath): search = path.join(srcpath,'**', wild) for fpath in glob.iglob(search, recursive=True): print(fpath) dest = path.join(destpath, path.basename(fpath)) shutil.move(fpath, dest) searchandmove('*.pdf', 'C:\\', 'G:\\Books') </code></pre> <p>With a minimum of string wrangling. For large searches however such as from the root of a filesystem it can take a while, but I'm sure any approach would have this issue. </p> <p>Tested only on linux, but should work fine on windows. Whatever you pass as <code>destpath</code> must already exist.</p>
0
2016-10-19T12:44:02Z
[ "python", "windows", "python-3.x" ]
How can I extract the information I want using this RegEx or better?
40,116,937
<p>So here's the Regular Expression I have so far.</p> <p><code>r"(?s)(?&lt;=([A-G][1-3])).*?(?=[A-G][1-3]|$)"</code></p> <p>It looks behind for a letter followed by a number between A-G and 1-3 as well as doing the same when looking ahead. I've tested it using <a href="https://regex101.com/" rel="nofollow">Regex101</a>. <a href="https://I.stack.imgur.com/UPdgd.png" rel="nofollow">Here's what it returns for each match</a></p> <p>This is the string I'm testing it against,</p> <pre><code>"A1 **ACBFEKJRQ0Z+-** F2 **.,12STLMGHD** F1 **9)(** D2 **!?56WXP** C1 **IONVU43\"\'** E1 **Y87&gt;&lt;** A3 **-=.,\'\"!?&gt;&lt;()@**" </code></pre> <p>(the string shouldn't have any spaces but I needed to embolden the values between each Letter followed by a number so it is easier to see what I want)</p> <p>What I want it to do is store the values between each of the matches for the group (The "Full Matches") and the matches for the group they coincide with to use later.</p> <p>In the end I would like to end up with either a list of tuples or a dictionary for example:</p> <pre><code>dict = {"A1":"ACBFEKJRQ0Z+-", "F2":",12STLMGHD", "F1":"9)(", "next group match":"characters that follow"} </code></pre> <p>or</p> <pre><code>list_of_tuples = (["A1","ACBFEKJRQ0Z+-"], ["F2","12STLMGHD"], ["F1","9)("], ["next group match","characters that follow"]) </code></pre> <p>The string being compared to the RegEx won't ever have something like "C1F2" btw</p> <p>P.S. Excuse the terrible explanation, any help is greatly appreciated</p>
2
2016-10-18T19:53:56Z
40,117,108
<p>I suggest</p> <pre><code>(?s)([A-G][1-3])((?:(?![A-G][1-3]).)*) </code></pre> <p>See the <a href="https://regex101.com/r/xlC4tZ/2" rel="nofollow">regex demo</a></p> <p>The <code>(?s)</code> will enable <code>.</code> to match linebreaks, <code>([A-G][1-3])</code> will capture the uppercase letter+digit into Group 1 and <code>((?:(?![A-G][1-3]).)*)</code> will match all text that is not starting the uppercase letter+digit sequence.</p> <p>The same regex can be unrolled as <code>([A-G][1-3])([^A-G]*(?:[A-G](?![1-3])[^A-G]*)*)</code> for better performance (no <code>re.DOTALL</code> modifier or <code>(?s)</code> is necessary with it). See <a href="https://regex101.com/r/PeUAym/1" rel="nofollow">this demo</a>.</p> <p><a href="http://ideone.com/kvL59F" rel="nofollow">Python demo</a>:</p> <pre><code>import re regex = r"(?s)([A-G][1-3])((?:(?![A-G][1-3]).)*)" test_str = """A1 ACBFEKJRQ0Z+-F2.,12STLMGHDF19)(D2!?56WXPC1IONVU43"'E1Y87&gt;&lt;A3-=.,'"!?&gt;&lt;()@""" dct = dict(re.findall(regex, test_str)) print(dct) </code></pre>
1
2016-10-18T20:04:11Z
[ "python", "regex" ]
Colorbar/plotting issue? "posx and posy should be finite values"
40,116,968
<p><strong>The problem</strong></p> <p>So I have a lat-lon array with <code>6</code> layers (<code>array.size = (192,288,6)</code>) containing a bunch of data ranging in values from nearly <code>0</code> to about <code>0.65</code>. When I plot data from every one of the <code>6</code> layers (<code>[:,:,0]</code>, <code>[:,:,1]</code>, etc.), I have no problems and get a nice map, except for <code>[:,:,4]</code>. For some reason, when I try to plot this 2D array, I get an error message I don't understand, and it only comes up when I try to include a colorbar. If I nix the colorbar there's no error, but I need that colorbar...</p> <p><strong>The code</strong></p> <p>Here's the code I use for a different part of the array, along with the resulting plot. Let's go with <code>[:,:,5]</code>.</p> <pre><code>#Set labels lonlabels = ['0','45E','90E','135E','180','135W','90W','45W','0'] latlabels = ['90S','60S','30S','Eq.','30N','60N','90N'] #Set cmap properties bounds = np.array([0,0.001,0.01,0.05,0.1,0.2,0.3,0.4,0.5,0.6]) boundlabels = ['0','0.001','0.01','0.05','0.1','0.2','0.3','0.4','0.5','0.6'] cmap = plt.get_cmap('jet') norm = colors.PowerNorm(0.35,vmax=0.65) #creates logarithmic scale #Create basemap fig,ax = plt.subplots(figsize=(15.,10.)) m = Basemap(projection='cyl',llcrnrlat=-90,urcrnrlat=90,llcrnrlon=0,urcrnrlon=360.,lon_0=180.,resolution='c') m.drawcoastlines(linewidth=2,color='w') m.drawcountries(linewidth=2,color='w') m.drawparallels(np.arange(-90,90,30.),linewidth=0.3) m.drawmeridians(np.arange(-180.,180.,45.),linewidth=0.3) meshlon,meshlat = np.meshgrid(lon,lat) x,y = m(meshlon,meshlat) #Plot variables trend = m.pcolormesh(x,y,array[:,:,5],cmap='jet',norm=norm,shading='gouraud') #Set plot properties #Colorbar cbar=m.colorbar(trend, size='5%',ticks=bounds,location='bottom',pad=0.8) cbar.set_label(label='Here is a label',size=25) cbar.set_ticklabels(boundlabels) for t in cbar.ax.get_xticklabels(): t.set_fontsize(25) #Titles &amp; labels ax.set_title('Here is a title for [:,:,5]',fontsize=35) ax.set_xlabel('Longitude',fontsize=25) ax.set_xticks(np.arange(0,405,45)) ax.set_xticklabels(lonlabels,fontsize=20) ax.set_yticks(np.arange(-90,120,30)) ax.set_yticklabels(latlabels,fontsize=20) </code></pre> <p><a href="https://i.stack.imgur.com/DTTwg.png" rel="nofollow"><img src="https://i.stack.imgur.com/DTTwg.png" alt="enter image description here"></a></p> <p>Now when I use the EXACT same code but plot for <code>array[:,:,4]</code> instead of <code>array[:,:,5]</code>, I get this error.</p> <pre><code>ValueError Traceback (most recent call last) /linuxapps/anaconda/lib/python2.7/site-packages/IPython/core/formatters.pyc in __call__(self, obj) 305 pass 306 else: --&gt; 307 return printer(obj) 308 # Finally look for special method names 309 method = get_real_method(obj, self.print_method) [lots of further traceback] /linuxapps/anaconda/lib/python2.7/site-packages/matplotlib/text.pyc in draw(self, renderer) 755 posy = float(textobj.convert_yunits(textobj._y)) 756 if not np.isfinite(posx) or not np.isfinite(posy): --&gt; 757 raise ValueError("posx and posy should be finite values") 758 posx, posy = trans.transform_point((posx, posy)) 759 canvasw, canvash = renderer.get_canvas_width_height() ValueError: posx and posy should be finite values </code></pre> <p>I have no idea why it's doing this as my code for every other part of the array plots just fine, and they all use the same meshgrid. There are no <code>NaN</code>'s in the array. 
Also here's the result if I comment out all the code between <code>#Colorbar</code> and <code>#Titles &amp; labels</code></p> <p><a href="https://i.stack.imgur.com/d7mSg.png" rel="nofollow"><img src="https://i.stack.imgur.com/d7mSg.png" alt="enter image description here"></a></p> <p>UPDATE: The problem also disappears when I include the colorbar code but changed the <code>PowerNorm</code> to <code>1.0</code> (<code>norm = colors.PowerNorm(1.0,vmax=0.65)</code>). Anything other than <code>1.0</code> generates the error when the colorbar is included.</p> <p><strong>The question</strong></p> <p>What could be causing the <code>posx</code> &amp; <code>posy</code> error message, and how can I get rid of it so I can make this plot with the colorbar included?</p> <p><strong>UPDATE</strong></p> <p>When I run the kernel from scratch, again with the same code (except that I changed the <code>0.6</code> bound to <code>0.65</code>), I get the following warnings in the <code>array[:,:,4]</code> block. I'm not sure if they're related, but I'll include them just in case.</p> <pre><code>/linuxapps/anaconda/lib/python2.7/site-packages/matplotlib/colors.py:1202: RuntimeWarning: invalid value encountered in power np.power(resdat, gamma, resdat) [&lt;matplotlib.text.Text at 0x2af62c8e6710&gt;, &lt;matplotlib.text.Text at 0x2af62c8ffed0&gt;, &lt;matplotlib.text.Text at 0x2af62cad8e90&gt;, &lt;matplotlib.text.Text at 0x2af62cadd3d0&gt;, &lt;matplotlib.text.Text at 0x2af62caddad0&gt;, &lt;matplotlib.text.Text at 0x2af62cae7250&gt;, &lt;matplotlib.text.Text at 0x2af62cacd050&gt;] /linuxapps/anaconda/lib/python2.7/site-packages/matplotlib/axis.py:1015: UserWarning: Unable to find pixel distance along axis for interval padding of ticks; assuming no interval padding needed. warnings.warn("Unable to find pixel distance along axis " /linuxapps/anaconda/lib/python2.7/site-packages/matplotlib/axis.py:1025: UserWarning: Unable to find pixel distance along axis for interval padding of ticks; assuming no interval padding needed. warnings.warn("Unable to find pixel distance along axis " </code></pre>
1
2016-10-18T19:55:51Z
40,141,093
<p>So I found out that specifying <code>vmax</code> &amp; <code>vmin</code> solves the problem. I have no idea why, but once I did, my plot turned out correctly with the colorbar.</p> <pre><code>trend = m.pcolormesh(x,y,array[:,:,5],cmap='jet',norm=norm,shading='gouraud',vmin=0.,vmax=0.6) </code></pre> <p><a href="https://i.stack.imgur.com/4Wkft.png" rel="nofollow"><img src="https://i.stack.imgur.com/4Wkft.png" alt="enter image description here"></a></p>
0
2016-10-19T20:38:02Z
[ "python", "arrays", "matplotlib", "jupyter-notebook", "colorbar" ]
Average and RMSE of n x k array
40,117,208
<p>I have this target array:</p> <pre><code>[ 0.88 0.51 0.55 0.59 0.7 ] </code></pre> <p>and this sample array:</p> <pre><code>[[ 0.4 0.02 0.52 0.44 0.48] [ 0.53 0.73 0.13 0.15 0.78] [ 0.67 0.27 0.26 0.31 0.17] [ 0.37 0.51 0.98 0.2 0.57]] </code></pre> <p>and I would like to produce another array (say 'fns') that will calculate</p> <ul> <li>row0: the average of each column of the sample array</li> <li>row1: the average of each column +1 std deviation</li> <li>row2: the average of each column -1 std deviation</li> <li>row3: the RMSE of the average to the average for each column</li> </ul> <p>anybody can suggest anything better than nested for statements?</p>
0
2016-10-18T20:09:30Z
40,118,969
<p>You can avoid the nested for loops by using the <code>axis</code> argument available to many <code>numpy</code> methods. Note that the question asks for the mean plus one standard deviation in row 1 and the mean minus one standard deviation in row 2, so the signs go in that order:</p> <pre><code>import numpy as np fns = np.empty((4, sample.shape[1])) stdv = np.std(sample, axis=0) fns[0,:] = np.mean(sample, axis=0) fns[1,:] = fns[0,:] + stdv fns[2,:] = fns[0,:] - stdv fns[3,:] = np.sqrt(np.mean((sample - target)**2, axis=0)) </code></pre>
1
2016-10-18T22:13:20Z
[ "python", "statistics" ]
Pack hex string using struct module?
40,117,221
<p>I want to pack a hex string with python pack. Here is my code:</p> <pre><code>import struct query='430401005001' q= ('%x' % int(query, 16)).decode('hex').decode('utf-8') qpacked=struct.pack('6s',str(q)) </code></pre> <p>Query is a hex string. The code does not work if I change the string to '53040600d0010100' and change 6s to 8s. Is there any better way to pack such a hex string?</p>
0
2016-10-18T20:10:17Z
40,117,293
<p>The <code>6s</code> format pads or truncates the packed value to exactly 6 bytes, so the length you give <code>struct.pack</code> has to match the number of bytes in the decoded string (6 for the first query, 8 for the second); raise that number or compute it from the data.</p> <p>The part that actually fails, though, is the extra <code>.decode('utf-8')</code>: byte values such as <code>0xd0</code> in your second string are not valid UTF-8, so that call raises a <code>UnicodeDecodeError</code>. Stop juggling with the encoding of your string; just <code>query.decode('hex')</code> should suffice.</p>
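<p>Put together, a minimal Python 2 sketch (assuming <code>query</code> always has an even number of hex digits) might look like this:</p>
<pre><code>import struct

query = '53040600d0010100'
raw = query.decode('hex')                     # 8 raw bytes, no utf-8 step
qpacked = struct.pack('%ds' % len(raw), raw)  # format length matches the data
</code></pre>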
0
2016-10-18T20:15:39Z
[ "python", "string", "hex" ]
How to program a stencil with Dask
40,117,237
<p>In many occasions, scientists simulates a system's dynamics using a Stencil, this is convolving a mathematical operator over a grid. Commonly, this operation consumes a lot of computational resources. <a href="https://en.wikipedia.org/wiki/Stencil_code" rel="nofollow">Here</a> is a good explanation of the idea. </p> <p>In numpy, the canonical way of programming a 2D 5-points stencil is as follows:</p> <pre><code>for i in range(rows): for j in range(cols): grid[i, j] = ( grid[i,j] + grid[i-1,j] + grid[i+1,j] + grid[i,j-1] + grid[i,j+1]) / 5 </code></pre> <p>Or, more efficiently, using slicing: </p> <pre><code>grid[1:-1,1:-1] = ( grid[1:-1,1:-1] + grid[0:-2,1:-1] + grid[2:,1:-1] + grid[1:-1,0:-2] + grid[1:-1,2:] ) / 5 </code></pre> <p>However, if your grid is really big, it won't fix in your memory, or if the convolution operation is really complicated it will take a very long time, parallel programing techniques are use to overcome this problems or simply to get the result faster. Tools like <a href="http://dask.pydata.org/en/latest/" rel="nofollow">Dask</a> allow scientist to program this simulations by themselves, in a parallel-almost-transparent manner. Currently, Dask doesn't support item assignment, so, how can I program a stencil with Dask. </p>
2
2016-10-18T20:10:53Z
40,117,491
<p>Nice question. You're correct that <a href="http://dask.pydata.org/en/latest/array.html" rel="nofollow">dask.array</a> <em>does</em> provide parallel computing but <em>doesn't</em> support item assignment. We can solve stencil computations by making a function to operate on a block of numpy data at a time and then by mapping that function across our array with slightly overlapping boundaries.</p> <h3>Pure Functions</h3> <p>You should make a function that takes a numpy array and returns a new numpy array with the stencil applied. This should not modify the original array.</p> <pre><code>def apply_stencil(x): out = np.empty_like(x) ... # do arbitrary computations on out return out </code></pre> <h3>Map a function with overlapping regions</h3> <p>Dask arrays operate in parallel by breaking an array into disjoint chunks of smaller arrays. Operations like stencil computations will require a bit of overlap between neighboring blocks. Fortunately this can be handled with the <a href="http://dask.pydata.org/en/latest/array-ghost.html" rel="nofollow">dask.array.ghost</a> module, and the <a href="http://dask.pydata.org/en/latest/array-api.html#dask.array.Array.map_overlap" rel="nofollow">dask.array.map_overlap</a> method in particular.</p> <p>Actually, the example in the <code>map_overlap</code> docstring is a 1d forward finite difference computation:</p> <pre><code>&gt;&gt;&gt; x = np.array([1, 1, 2, 3, 3, 3, 2, 1, 1]) &gt;&gt;&gt; x = from_array(x, chunks=5) &gt;&gt;&gt; def derivative(x): ... return x - np.roll(x, 1) &gt;&gt;&gt; y = x.map_overlap(derivative, depth=1, boundary=0) &gt;&gt;&gt; y.compute() array([ 1, 0, 1, 1, 0, 0, -1, -1, 0]) </code></pre>
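<p>Applied to the 2D 5-point stencil from the question, a sketch along the same lines might look like this; the array size, chunk shape and <code>boundary</code> choice here are placeholders to adapt to your problem:</p>
<pre><code>import numpy as np
import dask.array as da

def five_point(block):
    # pure function: operates on one numpy block, returns a new array
    out = block.copy()
    out[1:-1, 1:-1] = (block[1:-1, 1:-1] + block[:-2, 1:-1] + block[2:, 1:-1]
                       + block[1:-1, :-2] + block[1:-1, 2:]) / 5
    return out

grid = da.random.random((10000, 10000), chunks=(1000, 1000))
result = grid.map_overlap(five_point, depth=1, boundary='reflect')
result.compute()
</code></pre>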
1
2016-10-18T20:28:02Z
[ "python", "dask" ]
How to program a stencil with Dask
40,117,237
<p>In many occasions, scientists simulates a system's dynamics using a Stencil, this is convolving a mathematical operator over a grid. Commonly, this operation consumes a lot of computational resources. <a href="https://en.wikipedia.org/wiki/Stencil_code" rel="nofollow">Here</a> is a good explanation of the idea. </p> <p>In numpy, the canonical way of programming a 2D 5-points stencil is as follows:</p> <pre><code>for i in range(rows): for j in range(cols): grid[i, j] = ( grid[i,j] + grid[i-1,j] + grid[i+1,j] + grid[i,j-1] + grid[i,j+1]) / 5 </code></pre> <p>Or, more efficiently, using slicing: </p> <pre><code>grid[1:-1,1:-1] = ( grid[1:-1,1:-1] + grid[0:-2,1:-1] + grid[2:,1:-1] + grid[1:-1,0:-2] + grid[1:-1,2:] ) / 5 </code></pre> <p>However, if your grid is really big, it won't fix in your memory, or if the convolution operation is really complicated it will take a very long time, parallel programing techniques are use to overcome this problems or simply to get the result faster. Tools like <a href="http://dask.pydata.org/en/latest/" rel="nofollow">Dask</a> allow scientist to program this simulations by themselves, in a parallel-almost-transparent manner. Currently, Dask doesn't support item assignment, so, how can I program a stencil with Dask. </p>
2
2016-10-18T20:10:53Z
40,118,151
<p>Dask internally divides arrays into smaller numpy arrays. When you create an array with dask.array, you must provide some information about how to divide it into <em>chunks</em>, like this:</p> <pre><code>grid = dask.array.zeros((100,100), chunks=(50,50)) </code></pre> <p>That requests a 100 x 100 array divided into 4 chunks. Now, to convolve an operation over the newly created array, information about chunk borders must be shared. <a href="http://dask.pydata.org/en/latest/array-ghost.html" rel="nofollow">Dask ghost cells</a> manage situations like this.</p> <p>A common workflow involves:</p> <ol> <li>Creating the array (if it didn't exist before)</li> <li>Commanding the creation of ghost cells</li> <li>Mapping a calculation</li> <li>Trimming the borders</li> </ol> <p>For example,</p> <pre><code>import dask.array as da grid = da.zeros((100,100), chunks=(50,50)) g = da.ghost.ghost(grid, depth={0:1,1:1}, boundary={0:0,1:1}) g2 = g.map_blocks( some_function ) s = da.ghost.trim_internals(g2, {0:1,1:1}) s.compute() </code></pre> <p>Remember that Dask creates a dictionary to represent the task graph; the real computation is triggered by <code>s.compute()</code>. As noted by MRocklin, the mapped function must return a numpy array.</p> <h2>A note about schedulers</h2> <p>By default, dask.array uses the dask.threaded scheduler to boost performance, but once the boundary information has been shared, problems similar to a stencil are embarrassingly parallel: no resources or information need to be shared, and computations can be mapped to different cores or even different computers. To do this, one can instruct dask to use a different scheduler, for example dask.multiprocessing:</p> <pre><code>import dask.multiprocessing import dask dask.set_options(get=dask.multiprocessing.get) </code></pre> <p>When <code>compute()</code> is triggered, Dask will create multiple instances of Python. If your application is big enough to pay the overhead of creating these new instances, dask.multiprocessing may deliver better performance. More information about Dask schedulers can be found <a href="http://dask.pydata.org/en/latest/scheduler-overview.html" rel="nofollow">here</a>.</p>
1
2016-10-18T21:12:36Z
[ "python", "dask" ]
NLTK separately extract leaves and non-leaf nodes
40,117,239
<p>I'm working with the <a href="http://nlp.stanford.edu/sentiment/" rel="nofollow">Standford Sentiment Treebank</a> dataset and I'm attempting to extract the leaves and the nodes. The data is given follows </p> <pre><code>(3 (2 (2 The) (2 Rock)) (4 (3 (2 is) (4 (2 destined) (2 (2 (2 (2 (2 to) (2 (2 be) (2 (2 the) (2 (2 21st) (2 (2 (2 Century) (2 's)) (2 (3 new) (2 (2 ``) (2 Conan)))))))) (2 '')) (2 and)) (3 (2 that) (3 (2 he) (3 (2 's) (3 (2 going) (3 (2 to) (4 (3 (2 make) (3 (3 (2 a) (3 splash)) (2 (2 even) (3 greater)))) (2 (2 than) (2 (2 (2 (2 (1 (2 Arnold) (2 Schwarzenegger)) (2 ,)) (2 (2 Jean-Claud) (2 (2 Van) (2 Damme)))) (2 or)) (2 (2 Steven) (2 Segal))))))))))))) (2 .))) </code></pre> <p>what I would like so for something as follows:</p> <p>i) The leaves with the label (uni-gram):</p> <pre><code>[(2 The), (2 Rock), (2 is), (2 destined),...] </code></pre> <p>ii) uper nodes with the labels (bi-gram):</p> <pre><code>[(2 (2 the) (2 Rock)), (2 (2 ``) (2 Conan)), (2 (2 Century) (2 's)),..] </code></pre> <p>until I get to the root of the tree. </p> <p>I've attempted to use regex to accomplish this but it fails to output correctly.</p> <p>The code I have (for the uni-gram):</p> <pre><code>import re import nltk location = '.../NLP/Standford_Sentiment_Tree_Data_Set/' +\ 'trainDevTestTrees_PTB/trees/train.txt' text = open(location, 'r') test = text.readlines()[0] text.close() uni_regex = re.compile(r'(\([0-4] \w+\))') temp01 = uni_regex.findall(test) # bi-gram bi_regex = re.compile(r'(\([0-4] \([0-4] \w+\) \([0-4] \w+\)\))') temp02 = bi_regex.findall(test) </code></pre> <p>The above code outputs: </p> <pre><code>['(2 The)', '(2 Rock)', '(2 is)', '(2 destined)', '(2 to)', '(2 be)', '(2 the)', '(2 21st)', '(2 Century)', '(3 new)',...] </code></pre> <p>and fails to capture <code>(2 ``)</code>, <code>(2 '')</code> and extracts <code>(2 Jean)</code> instead of <code>(2 Jean-Claude)</code></p> <p>The output fails to capture <code>(2 (2``) (2 Conan))</code></p> <p>Is there a way to get the result that I want using <code>nltk</code> or some configuration of <code>regex</code> that will not miss any tokens? </p> <p>I've had a look and attempted to modify the solution provided in <a href="http://stackoverflow.com/questions/25815002/nltk-tree-data-structure-finding-a-node-its-parent-or-children">NLTK tree data structure, finding a node, it&#39;s parent or children</a> but that question seems to deal with finding a specific word in a leave and the displaying the tree structure, whereas I require the indented solution to resemble the above n-grams. </p>
-1
2016-10-18T20:10:59Z
40,118,405
<p>Don't waste your time with regexps, this is what tree classes are for. Use the nltk's <code>Tree</code> class like this:</p> <pre><code>mytree = "(3 (2 (2 The) (2 Rock)) (4 (3 (2 is) (4 (2 destined) (2 (2 (2 (2 (2 to) (2 (2 be) (2 (2 the) (2 (2 21st) (2 (2 (2 Century) (2 's)) (2 (3 new) (2 (2 ``) (2 Conan)))))))) (2 '')) (2 and)) (3 (2 that) (3 (2 he) (3 (2 's) (3 (2 going) (3 (2 to) (4 (3 (2 make) (3 (3 (2 a) (3 splash)) (2 (2 even) (3 greater)))) (2 (2 than) (2 (2 (2 (2 (1 (2 Arnold) (2 Schwarzenegger)) (2 ,)) (2 (2 Jean-Claud) (2 (2 Van) (2 Damme)))) (2 or)) (2 (2 Steven) (2 Segal))))))))))))) (2 .)))" &gt;&gt;&gt; t = nltk.Tree.fromstring(mytree) &gt;&gt;&gt; print(t) (3 (2 (2 The) (2 Rock)) (4 (3 (2 is) (4 (2 destined) (2 ... </code></pre> <p>You can then extract and count the leaves, and request the corresponding "treepositions" (the path to each leaf, in the form of a list):</p> <pre><code>&gt;&gt;&gt; leafpos = [ t.leaf_treeposition(n) for n, x in enumerate(t.leaves()) ] &gt;&gt;&gt; print(leafpos[0:3]) [(0, 0, 0), (0, 1, 0), (1, 0, 0, 0)] </code></pre> <p>Finally, you can walk up the treepositions to get the units you want: the subtree dominated by the node immediately above each leaf, two steps above each leaf, etc:</p> <pre><code>&gt;&gt;&gt; level1_subtrees = [ t[path[:-1]] for path in leafpos ] &gt;&gt;&gt; for x in level1_subtrees: ... print(x, end = " ") (2 The) (2 Rock) (2 is) (2 destined) (2 to) (2 be) (2 the) ... &gt;&gt;&gt; level2_subtrees = [ t[path[:-2]] for path in leafpos ] </code></pre> <p>Note, however, that higher-level subtrees don't look like you imagine. If you go up two levels from leaf 3 (<code>destined</code>), for example, you won't get a "bigram". You'll be at the node labeled <code>4</code>, which dominates most of the rest of the sentence. Perhaps you're actually interested in enumerating all subtrees? In that case, just iterate over <code>t.subtrees()</code>. </p> <p>If that's not what you want, take a look at the <code>Tree</code> API and pick out another way to select the parts you need.</p>
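<p>For instance, if what you are after is the lowest-level groupings (the nodes whose children are all leaves, roughly the "bi-gram"-like units from the question), one sketch using the same <code>Tree</code> API would be to filter <code>subtrees()</code> by height:</p>
<pre><code># height 2: children are bare leaf strings (the uni-gram level in this treebank)
unigrams = list(t.subtrees(lambda s: s.height() == 2))

# height 3: children are leaves or leaf-wrapping subtrees (the lowest groupings)
low_groups = list(t.subtrees(lambda s: s.height() == 3))
</code></pre>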
2
2016-10-18T21:29:44Z
[ "python", "tree", "nlp", "nltk", "nodes" ]
Routing error with python flask search app
40,117,312
<p>I am trying to get a simple search function going with my flask app. I have the following code that kicks off the search</p> <pre><code>&lt;form action="/search" method=post&gt; &lt;input type=text name=search value="{{ request.form.search }}"&gt;&lt;/br&gt; &lt;div class="actions"&gt;&lt;input type=submit value="Search"&gt;&lt;/div&gt; &lt;/form&gt; </code></pre> <p>This hooks in with my search/controllers.py script that looks like this </p> <pre><code>@search.route('/search/') @search.route('/search/&lt;query&gt;', methods=['GET', 'POST']) def index(query=None): es = current_app.config.get("es") q = {"query":{ "multi_match":{"fields":["name","tags","short_desc","description"],"query":query,"fuzziness":"AUTO"}}} matches = es.search('products', 'offerings', body=q) return render_template('search/results.html', services=matches['_source']) </code></pre> <p>Unfortunately whenever I actually search I get a routing error:</p> <blockquote> <p>FormDataRoutingRedirect: A request was sent to this URL (<a href="http://localhost:8080/search" rel="nofollow">http://localhost:8080/search</a>) but a redirect was issued automatically by the routing system to "<a href="http://localhost:8080/search/" rel="nofollow">http://localhost:8080/search/</a>". The URL was defined with a trailing slash so Flask will automatically redirect to the URL with the trailing slash if it was accessed without one. Make sure to directly send your POST-request to this URL since we can't make browsers or HTTP clients redirect with form data reliably or without user interaction. Note: this exception is only raised in debug mode</p> </blockquote> <p>I tried changing the method to <code>methods=['POST']</code> but it made no difference. </p>
1
2016-10-18T20:16:40Z
40,117,376
<p>As the error states, your form is posting to /search but your handler is set up for /search/. Make them the same.</p>
0
2016-10-18T20:20:28Z
[ "python", "flask" ]
Routing error with python flask search app
40,117,312
<p>I am trying to get a simple search function going with my flask app. I have the following code that kicks off the search</p> <pre><code>&lt;form action="/search" method=post&gt; &lt;input type=text name=search value="{{ request.form.search }}"&gt;&lt;/br&gt; &lt;div class="actions"&gt;&lt;input type=submit value="Search"&gt;&lt;/div&gt; &lt;/form&gt; </code></pre> <p>This hooks in with my search/controllers.py script that looks like this </p> <pre><code>@search.route('/search/') @search.route('/search/&lt;query&gt;', methods=['GET', 'POST']) def index(query=None): es = current_app.config.get("es") q = {"query":{ "multi_match":{"fields":["name","tags","short_desc","description"],"query":query,"fuzziness":"AUTO"}}} matches = es.search('products', 'offerings', body=q) return render_template('search/results.html', services=matches['_source']) </code></pre> <p>Unfortunately whenever I actually search I get a routing error:</p> <blockquote> <p>FormDataRoutingRedirect: A request was sent to this URL (<a href="http://localhost:8080/search" rel="nofollow">http://localhost:8080/search</a>) but a redirect was issued automatically by the routing system to "<a href="http://localhost:8080/search/" rel="nofollow">http://localhost:8080/search/</a>". The URL was defined with a trailing slash so Flask will automatically redirect to the URL with the trailing slash if it was accessed without one. Make sure to directly send your POST-request to this URL since we can't make browsers or HTTP clients redirect with form data reliably or without user interaction. Note: this exception is only raised in debug mode</p> </blockquote> <p>I tried changing the method to <code>methods=['POST']</code> but it made no difference. </p>
1
2016-10-18T20:16:40Z
40,117,422
<p>Use <code>url_for('index')</code> to generate the correct url for the action.</p> <pre><code>&lt;form action="{{ url_for('index') }}"&gt; </code></pre> <hr> <p>Currently, you're submitting to the url without the trailing <code>/</code>. Flask redirects this to the route with the trailing <code>/</code>, but POST data doesn't survive redirects on many browsers, so Flask is warning you about the issue.</p>
2
2016-10-18T20:23:27Z
[ "python", "flask" ]
return len of list without changing the method that returns index of list
40,117,456
<p>How can write a function outside of this class that will return the len of the list without modifying the class at all. </p> <pre><code>class SingleMethodList(object): def __init__(self, l): self._list = l def get(self, index): try: return self._list[index] except IndexError: return None </code></pre> <p>I have tried to do the following, however, I am only allowed to use the get function. The reason for not modifying the class is because it is a challenge, </p> <pre><code> def list_length(single_method_list): return len(single_method_list._list) </code></pre> <p>The challenge is taken from <a href="https://github.com/shipperizer/crowdscores#problem-2-list-length" rel="nofollow">here</a>:</p> <p><em>You are given a list implementation, aptly named <code>SingleMethodList</code>. This list contains only one method:</em></p> <pre><code>class SingleMethodList(object): def get(self, index): """ Returns the list item at `index`, otherwise returns None. """ ... </code></pre> <p><em>You are required to calculate the length of this list, but due to immense beaurocratic restrictions at OmniCorp, you are not allowed to modify the interface of this class and you have to use it as is.</em></p> <p><em>For this task you need to write a function <code>list_length(single_method_list)</code> that takes a <code>SingleMethodList</code> instance and returns its length. You may assume that no list will contain <code>None</code> as an item.</em></p>
-3
2016-10-18T20:25:47Z
40,117,603
<p>You can simply use a function like this in order to iterate through the list until you reach the end of it:</p> <pre><code>def get_single_method_list_length(list_to_check): check_index = 0 while list_to_check.get(check_index) is not None: check_index += 1 return check_index # This will be the length of the list test_list = SingleMethodList(range(50)) # an example list of length 50 print(get_single_method_list_length(test_list)) </code></pre> <p>This prints out <code>50</code>, as it should. However, if your list has <code>None</code>s in it, the code will fail.</p>
3
2016-10-18T20:35:30Z
[ "python" ]
String Containment in Pandas
40,117,685
<p>I am trying to produce all the rows where company1 in df is contained in company2. I am doing it as follows:</p> <pre><code>df1=df[['company1','company2']][(df.apply(lambda x: x['company1'] in x['company2'], axis=1) == True)] </code></pre> <p>When I run the above line of code, it also shows "South" matched with "Southern". Also, "South" matched with "Route South". I want to get rid of all such cases. Company1 should only be contained in beginning of Company2. And, company1 should not be a part of some word in company2 like "South" (company1) matched with "Southern" (company2). How should I modify my code to accomplish above two requirements?</p>
3
2016-10-18T20:40:46Z
40,117,903
<p>I think you need:</p> <pre><code>df = pd.DataFrame({'company1': {0: 'South', 1: 'South', 2:'South'}, 'company2': {0: 'Southern', 1: 'Route South', 2: 'South Route'}}) print (df) company1 company2 0 South Southern 1 South Route South 2 South South Route df1=df[df['company2'].str.contains("|".join('^' + df['company1'] + ' '))] print (df1) company1 company2 2 South South Route </code></pre>
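<p>One caveat worth noting: since <code>str.contains</code> treats the pattern as a regex, company names containing regex metacharacters (e.g. <code>.</code>, <code>(</code>, <code>+</code>) would need escaping. A hedged variant of the same idea:</p>
<pre><code>import re

pattern = "|".join('^' + df['company1'].map(re.escape) + ' ')
df1 = df[df['company2'].str.contains(pattern)]
</code></pre>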
1
2016-10-18T20:54:15Z
[ "python", "string", "pandas" ]
TypeError: not enough arguments for format string - Python SQL connection while using %Y-%m
40,117,760
<pre><code>with engine.connect() as con: rs = con.execute(""" SELECT datediff(STR_TO_DATE(CONCAT(year,'-',month,'-',day), '%Y-%m-%d') , current_date()) from TABLE WHERE datediff(STR_TO_DATE(CONCAT(year,'-',month,'-',day), '%Y-%m-%d') , current_date()) &lt; 900 group by STR_TO_DATE(CONCAT(year,'-',month,'-',day), '%Y-%m-%d'); """) </code></pre> <p>I feel the compiler is getting confused with '%Y-%m-%d', I might be wrong. Could someone help me on how to avoid this error:</p> <blockquote> <p>Type Error:not enough arguments for format string</p> </blockquote>
0
2016-10-18T20:45:43Z
40,117,918
<p>It sees your <code>%</code> signs and thinks you want to format the string. I believe you should be able to replace them with <code>%%</code> to indicate that you want the character, not a format substitution.</p>
0
2016-10-18T20:55:23Z
[ "python", "mysql", "compiler-errors", "python-3.5" ]
TypeError: not enough arguments for format string - Python SQL connection while using %Y-%m
40,117,760
<pre><code>with engine.connect() as con: rs = con.execute(""" SELECT datediff(STR_TO_DATE(CONCAT(year,'-',month,'-',day), '%Y-%m-%d') , current_date()) from TABLE WHERE datediff(STR_TO_DATE(CONCAT(year,'-',month,'-',day), '%Y-%m-%d') , current_date()) &lt; 900 group by STR_TO_DATE(CONCAT(year,'-',month,'-',day), '%Y-%m-%d'); """) </code></pre> <p>I feel the compiler is getting confused with '%Y-%m-%d', I might be wrong. Could someone help me on how to avoid this error:</p> <blockquote> <p>Type Error:not enough arguments for format string</p> </blockquote>
0
2016-10-18T20:45:43Z
40,117,924
<p>You need to escape the <code>%</code>:</p> <pre><code>with engine.connect() as con: rs = con.execute(""" SELECT datediff(STR_TO_DATE(CONCAT(year,'-',month,'-',day), '%%Y-%%m-%%d') , current_date()) from TABLE WHERE datediff(STR_TO_DATE(CONCAT(year,'-',month,'-',day), '%%Y-%%m-%%d') , current_date()) &lt; 900 group by STR_TO_DATE(CONCAT(year,'-',month,'-',day), '%%Y-%%m-%%d'); """) </code></pre> <p><code>%</code> is the character the Python database driver uses for parameter substitution (which is where the "not enough arguments for format string" error comes from), so a literal <code>%</code> in the query text has to be written as <code>%%</code>.</p>
0
2016-10-18T20:56:01Z
[ "python", "mysql", "compiler-errors", "python-3.5" ]
How Can I Detect Gaps and Consecutive Periods In A Time Series In Pandas
40,118,037
<p>I have a pandas Dataframe that is indexed by Date. I would like to select all consecutive gaps by period and all consecutive days by Period. How can I do this?</p> <p><strong>Example of Dataframe with No Columns but a Date Index:</strong></p> <pre><code>In [29]: import pandas as pd In [30]: dates = pd.to_datetime(['2016-09-19 10:23:03', '2016-08-03 10:53:39','2016-09-05 11:11:30', '2016-09-05 11:10:46','2016-09-05 10:53:39']) In [31]: ts = pd.DataFrame(index=dates) </code></pre> <p>As you can see there is a <em>gap from 2016-08-03 and 2016-09-19</em>. How do I detect these so I can create descriptive statistics, i.e. 40 gaps, with median gap duration of "x", etc. Also, I can see that <em>2016-09-05 and 2016-09-06 is a two day range</em>. How I can detect these and also print descriptive stats?</p> <p>Ideally the result would be returned as another Dataframe in each case since I want use other columns in the Dataframe to groupby. </p>
0
2016-10-18T21:04:14Z
40,129,387
<p>here's something to get started:</p> <pre><code>import numpy as np import pandas as pd df = pd.DataFrame(np.ones(5),columns = ['ones']) df.index = pd.DatetimeIndex(['2016-09-19 10:23:03', '2016-08-03 10:53:39', '2016-09-05 11:11:30', '2016-09-05 11:10:46', '2016-09-06 10:53:39']) daily_rng = pd.date_range('2016-08-03 00:00:00', periods=48, freq='D') daily_rng = daily_rng.append(df.index) daily_rng = sorted(daily_rng) df = df.reindex(daily_rng).fillna(0) df = df.astype(int) df['ones'] = df.cumsum() </code></pre> <p>The cumsum() creates a grouping variable on 'ones', partitioning your data at the points you provided. If you print df to, say, a spreadsheet it will make sense:</p> <pre><code>print df.head() ones 2016-08-03 00:00:00 0 2016-08-03 10:53:39 1 2016-08-04 00:00:00 1 2016-08-05 00:00:00 1 2016-08-06 00:00:00 1 print df.tail() ones 2016-09-16 00:00:00 4 2016-09-17 00:00:00 4 2016-09-18 00:00:00 4 2016-09-19 00:00:00 4 2016-09-19 10:23:03 5 </code></pre> <p>now to complete:</p> <pre><code>df = df.reset_index() df = df.groupby(['ones']).aggregate({'ones':{'gaps':'count'},'index':{'first_time':'min'}}) df.columns = df.columns.droplevel() </code></pre> <p>which gives:</p> <pre><code> first_time gaps ones 0 2016-08-03 00:00:00 1 1 2016-08-03 10:53:39 34 2 2016-09-05 11:10:46 1 3 2016-09-05 11:11:30 2 4 2016-09-06 10:53:39 14 5 2016-09-19 10:23:03 1 </code></pre>
0
2016-10-19T10:54:31Z
[ "python", "pandas" ]
changed settings in python IDLE
40,118,041
<p>I tried to change the window size in IDLE but now IDLE won't launch. I have tried deleting it, removing it from trash and reinstalling both 3.5 and 2.7 again but still have the same problem. Command/option/escape indicates it hasn't launched and is not in the background.</p> <p>The screen size when this started was originally 80 x 80 pixels but I changed it to 88800 x 44480 by accident (couldn't type in the box for some reason). I think this has caused the issue but I don't know how to fix it.</p>
0
2016-10-18T21:04:25Z
40,118,249
<p>Assuming this is Windows, there is a hidden folder in your user folder</p> <pre><code>C:\Users\username\.idlerc\config-main.cfg </code></pre> <p>(On macOS or Linux the same hidden <code>.idlerc</code> folder lives in your home directory, i.e. <code>~/.idlerc/config-main.cfg</code>.) In this file, you can edit the size of the IDLE window, under the <code>[EditorWindow]</code> header. See <a href="https://svn.python.org/projects/python/trunk/Mac/IDLE/config-main.def" rel="nofollow">here</a> for a sample config file. Or you can just delete the <code>width</code> and <code>height</code> entries, and it should go back to the default.</p>
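<p>For reference, the relevant section would look roughly like this (80 x 40 being the usual defaults; the exact values in your file will differ since you changed them yourself):</p>
<pre><code>[EditorWindow]
width = 80
height = 40
</code></pre>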
0
2016-10-18T21:19:25Z
[ "python" ]
How to convert JSON data into a tree image?
40,118,113
<p>I'm using <a href="http://treelib.readthedocs.io/en/latest/examples.html" rel="nofollow">treelib</a> to generate trees, now I need easy-to-read version of trees, so I want to convert them into images. For example: <a href="https://i.stack.imgur.com/sr9eC.png" rel="nofollow"><img src="https://i.stack.imgur.com/sr9eC.png" alt="enter image description here"></a></p> <p>The sample JSON data, for the following tree:</p> <p><a href="https://i.stack.imgur.com/oaR1K.png" rel="nofollow"><img src="https://i.stack.imgur.com/oaR1K.png" alt="enter image description here"></a></p> <p>With data:</p> <pre><code>&gt;&gt;&gt; print(tree.to_json(with_data=True)) {"Harry": {"data": null, "children": [{"Bill": {"data": null}}, {"Jane": {"data": null, "children": [{"Diane": {"data": null}}, {"Mark": {"data": null}}]}}, {"Mary": {"data": null}}]}} </code></pre> <p>Without data:</p> <pre><code>&gt;&gt;&gt; print(tree.to_json(with_data=False)) {"Harry": {"children": ["Bill", {"Jane": {"children": [{"Diane": {"children": ["Mary"]}}, "Mark"]}}]}} </code></pre> <p>Is there anyway to use <a href="https://pypi.python.org/pypi/graphviz" rel="nofollow">graphviz</a> or <a href="https://d3js.org/" rel="nofollow">d3.js</a> or some other python library to generate tree using this JSON data?</p>
1
2016-10-18T21:10:11Z
40,130,160
<p>For a tree like this there's no need to use a library: you can generate the Graphviz DOT language statements directly. The only tricky part is extracting the tree edges from the JSON data. To do that, we first convert the JSON string back into a Python <code>dict</code>, and then parse that <code>dict</code> recursively.</p> <p>If a name in the tree dict has no children it's a simple string, otherwise, it's a dict and we need to scan the items in its <code>"children"</code> list. Each (parent, child) pair we find gets appended to a global list <code>edges</code>.</p> <p>This somewhat cryptic line:</p> <pre><code>name = next(iter(treedict.keys())) </code></pre> <p>gets a single key from <code>treedict</code>. This gives us the person's name, since that's the only key in <code>treedict</code>. In Python 2 we could do</p> <pre><code>name = treedict.keys()[0] </code></pre> <p>but the previous code works in both Python 2 and Python 3.</p> <pre><code>from __future__ import print_function import json import sys # Tree in JSON format s = '{"Harry": {"children": ["Bill", {"Jane": {"children": [{"Diane": {"children": ["Mary"]}}, "Mark"]}}]}}' # Convert JSON tree to a Python dict data = json.loads(s) # Convert back to JSON &amp; print to stderr so we can verfiy that the tree is correct. print(json.dumps(data, indent=4), file=sys.stderr) # Extract tree edges from the dict edges = [] def get_edges(treedict, parent=None): name = next(iter(treedict.keys())) if parent is not None: edges.append((parent, name)) for item in treedict[name]["children"]: if isinstance(item, dict): get_edges(item, parent=name) else: edges.append((name, item)) get_edges(data) # Dump edge list in Graphviz DOT format print('strict digraph tree {') for row in edges: print(' {0} -&gt; {1};'.format(*row)) print('}') </code></pre> <p><strong>stderr output</strong></p> <pre class="lang-none prettyprint-override"><code>{ "Harry": { "children": [ "Bill", { "Jane": { "children": [ { "Diane": { "children": [ "Mary" ] } }, "Mark" ] } } ] } } </code></pre> <p><strong>stdout output</strong></p> <pre class="lang-none prettyprint-override"><code>strict digraph tree { Harry -&gt; Bill; Harry -&gt; Jane; Jane -&gt; Diane; Diane -&gt; Mary; Jane -&gt; Mark; } </code></pre> <p>The code above runs on Python 2 &amp; Python 3. It prints the JSON data to stderr so we can verify that it's correct. It then prints the Graphviz data to stdout so we can capture it to a file or pipe it directly to a Graphviz program. Eg, if the script is name "tree_to_graph.py", then you can do this in the command line to save the graph as a PNG file named "tree.png":</p> <pre class="lang-bash prettyprint-override"><code>python tree_to_graph.py | dot -Tpng -otree.png </code></pre> <p>And here's the PNG output:</p> <p><img src="https://i.stack.imgur.com/zFvic.png" alt="Tree made by Graphviz" title="Tree made by Graphviz"></p>
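<p>If you would rather stay entirely in Python instead of piping to <code>dot</code>, the same edge list can be fed to the <code>graphviz</code> package mentioned in the question (assuming the package and the Graphviz binaries are installed). A minimal sketch, reusing the <code>edges</code> list built by <code>get_edges()</code> above:</p> <pre><code>from graphviz import Digraph

dot = Digraph(strict=True)        # strict drops duplicate edges, like "strict digraph"
for parent, child in edges:       # 'edges' is the list built by get_edges() above
    dot.edge(parent, child)
dot.format = 'png'
dot.render('tree')                # writes the DOT source as 'tree' and the image as 'tree.png'
</code></pre>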
1
2016-10-19T11:28:07Z
[ "python", "json", "tree" ]
Library `requests` getting different results unpredictably
40,118,133
<p>Why does this code:</p> <pre><code>import requests response = requests.post('http://evds.tcmb.gov.tr/cgi-bin/famecgi', data={ 'cgi': '$ozetweb', 'ARAVERIGRUP': 'bie_yymkpyuk.db', 'DIL': 'UK', 'ONDALIK': '5', 'wfmultiple_selection': 'ZAMANSERILERI', 'f_begdt': '07-01-2005', 'f_enddt': '07-10-2016', 'ZAMANSERILERI': ['TP.PYUK1', 'TP.PYUK2', 'TP.PYUK21', 'TP.PYUK22', 'TP.PYUK3', 'TP.PYUK4', 'TP.PYUK5', 'TP.PYUK6'], 'YON': '3', 'SUBMITDEG': 'Report', 'GRTYPE': '1', 'EPOSTA': 'xxx', 'RESIMPOSTA': '***', }) print(response.text) </code></pre> <p>produces different results in Python 2 (<code>2.7.12</code>) and Python 3 (<code>3.5.2</code>)? I'm using <code>requests==2.11.1</code>. Since the <code>requests</code> library supports both Python versions with the same API, I guess the result should be the same.</p> <p>The expected result is the one obtained from running the code with Python 2. It works every single time. When ran with Python 3, the server sometimes returns an error, and sometimes it works. (This is the intriguing part.)</p> <p>Since it works with Python 2, I figure the error must happen in the client side. Is there any caveat to how Python 3 handles encoding, or sending the data through the socket, that I should be aware of?</p> <p><strong>EDIT:</strong> In the comments below, a person was able to reproduce this and confirms this issue exists.</p>
3
2016-10-18T21:11:48Z
40,119,137
<p>It does seem to come down to different between dicts in python2 vs python3 in relation to <a href="https://docs.python.org/3/whatsnew/3.3.html#porting-python-code" rel="nofollow">Hash randomization is enabled by default</a> since python3.3 and the server needing at least the <em>cgi</em> field to come first, the following can reproduce:</p> <pre><code>good = requests.post('http://evds.tcmb.gov.tr/cgi-bin/famecgi', data=([ ('cgi', '$ozetweb'), ('ARAVERIGRUP', 'bie_yymkpyuk.db'), ('DIL', 'UK'), ('ONDALIK', '5'), ('wfmultiple_selection', 'ZAMANSERILERI'), ('f_begdt', '07-01-2005'), ('f_enddt', '07-10-2016'), ('ZAMANSERILERI', ['TP.PYUK1', 'TP.PYUK2', 'TP.PYUK21', 'TP.PYUK22', 'TP.PYUK3', 'TP.PYUK4', 'TP.PYUK5', 'TP.PYUK6']), ('YON', '3'), ('SUBMITDEG', 'Report'), ('GRTYPE', '1'), ('EPOSTA', 'xxx'), ('RESIMPOSTA', '***')])) bad = requests.post('http://evds.tcmb.gov.tr/cgi-bin/famecgi', data=([ ('ARAVERIGRUP', 'bie_yymkpyuk.db'), ('cgi', '$ozetweb'), ('DIL', 'UK'), ('wfmultiple_selection', 'ZAMANSERILERI'), ('ONDALIK', '5'), ('f_begdt', '07-01-2005'), ('f_enddt', '07-10-2016'), ('ZAMANSERILERI', ['TP.PYUK1', 'TP.PYUK2', 'TP.PYUK21', 'TP.PYUK22', 'TP.PYUK3', 'TP.PYUK4', 'TP.PYUK5', 'TP.PYUK6']), ('YON', '3'), ('SUBMITDEG', 'Report'), ('GRTYPE', '1'), ('EPOSTA', 'xxx'), ('RESIMPOSTA', '***')])) </code></pre> <p>Running the code above using python2:</p> <pre><code>In [6]: print(good.request.body) ...: print(bad.request.body) ...: ...: print(len(good.text), len(bad.text)) ...: cgi=%24ozetweb&amp;ARAVERIGRUP=bie_yymkpyuk.db&amp;DIL=UK&amp;ONDALIK=5&amp;wfmultiple_selection=ZAMANSERILERI&amp;f_begdt=07-01-2005&amp;f_enddt=07-10-2016&amp;ZAMANSERILERI=TP.PYUK1&amp;ZAMANSERILERI=TP.PYUK2&amp;ZAMANSERILERI=TP.PYUK21&amp;ZAMANSERILERI=TP.PYUK22&amp;ZAMANSERILERI=TP.PYUK3&amp;ZAMANSERILERI=TP.PYUK4&amp;ZAMANSERILERI=TP.PYUK5&amp;ZAMANSERILERI=TP.PYUK6&amp;YON=3&amp;SUBMITDEG=Report&amp;GRTYPE=1&amp;EPOSTA=xxx&amp;RESIMPOSTA=%2A%2A%2A ARAVERIGRUP=bie_yymkpyuk.db&amp;cgi=%24ozetweb&amp;DIL=UK&amp;wfmultiple_selection=ZAMANSERILERI&amp;ONDALIK=5&amp;f_begdt=07-01-2005&amp;f_enddt=07-10-2016&amp;ZAMANSERILERI=TP.PYUK1&amp;ZAMANSERILERI=TP.PYUK2&amp;ZAMANSERILERI=TP.PYUK21&amp;ZAMANSERILERI=TP.PYUK22&amp;ZAMANSERILERI=TP.PYUK3&amp;ZAMANSERILERI=TP.PYUK4&amp;ZAMANSERILERI=TP.PYUK5&amp;ZAMANSERILERI=TP.PYUK6&amp;YON=3&amp;SUBMITDEG=Report&amp;GRTYPE=1&amp;EPOSTA=xxx&amp;RESIMPOSTA=%2A%2A%2A (71299, 134) </code></pre> <p>And python3:</p> <pre><code>In [4]: print(good.request.body) ...: print(bad.request.body) ...: ...: print(len(good.text), len(bad.text)) ...: cgi=%24ozetweb&amp;ARAVERIGRUP=bie_yymkpyuk.db&amp;DIL=UK&amp;ONDALIK=5&amp;wfmultiple_selection=ZAMANSERILERI&amp;f_begdt=07-01-2005&amp;f_enddt=07-10-2016&amp;ZAMANSERILERI=TP.PYUK1&amp;ZAMANSERILERI=TP.PYUK2&amp;ZAMANSERILERI=TP.PYUK21&amp;ZAMANSERILERI=TP.PYUK22&amp;ZAMANSERILERI=TP.PYUK3&amp;ZAMANSERILERI=TP.PYUK4&amp;ZAMANSERILERI=TP.PYUK5&amp;ZAMANSERILERI=TP.PYUK6&amp;YON=3&amp;SUBMITDEG=Report&amp;GRTYPE=1&amp;EPOSTA=xxx&amp;RESIMPOSTA=%2A%2A%2A ARAVERIGRUP=bie_yymkpyuk.db&amp;cgi=%24ozetweb&amp;DIL=UK&amp;wfmultiple_selection=ZAMANSERILERI&amp;ONDALIK=5&amp;f_begdt=07-01-2005&amp;f_enddt=07-10-2016&amp;ZAMANSERILERI=TP.PYUK1&amp;ZAMANSERILERI=TP.PYUK2&amp;ZAMANSERILERI=TP.PYUK21&amp;ZAMANSERILERI=TP.PYUK22&amp;ZAMANSERILERI=TP.PYUK3&amp;ZAMANSERILERI=TP.PYUK4&amp;ZAMANSERILERI=TP.PYUK5&amp;ZAMANSERILERI=TP.PYUK6&amp;YON=3&amp;SUBMITDEG=Report&amp;GRTYPE=1&amp;EPOSTA=xxx&amp;RESIMPOSTA=%2A%2A%2A 71299 134 </code></pre> <p>Passing your dict as 
posted in python2:</p> <pre><code>In [4]: response.request.body Out[4]: 'cgi=%24ozetweb&amp;DIL=UK&amp;f_enddt=07-10-2016&amp;YON=3&amp;RESIMPOSTA=%2A%2A%2A&amp;wfmultiple_selection=ZAMANSERILERI&amp;ARAVERIGRUP=bie_yymkpyuk.db&amp;GRTYPE=1&amp;SUBMITDEG=Report&amp;f_begdt=07-01-2005&amp;ZAMANSERILERI=TP.PYUK1&amp;ZAMANSERILERI=TP.PYUK2&amp;ZAMANSERILERI=TP.PYUK21&amp;ZAMANSERILERI=TP.PYUK22&amp;ZAMANSERILERI=TP.PYUK3&amp;ZAMANSERILERI=TP.PYUK4&amp;ZAMANSERILERI=TP.PYUK5&amp;ZAMANSERILERI=TP.PYUK6&amp;ONDALIK=5&amp;EPOSTA=xxx' In [5]: len(response.text) Out[5]: 71299 </code></pre> <p>And the same dict in python3:</p> <pre><code>In [3]: response.request.body Out[3]: 'EPOSTA=xxx&amp;ARAVERIGRUP=bie_yymkpyuk.db&amp;DIL=UK&amp;SUBMITDEG=Report&amp;cgi=%24ozetweb&amp;GRTYPE=1&amp;f_enddt=07-10-2016&amp;wfmultiple_selection=ZAMANSERILERI&amp;ONDALIK=5&amp;f_begdt=07-01-2005&amp;RESIMPOSTA=%2A%2A%2A&amp;YON=3&amp;ZAMANSERILERI=TP.PYUK1&amp;ZAMANSERILERI=TP.PYUK2&amp;ZAMANSERILERI=TP.PYUK21&amp;ZAMANSERILERI=TP.PYUK22&amp;ZAMANSERILERI=TP.PYUK3&amp;ZAMANSERILERI=TP.PYUK4&amp;ZAMANSERILERI=TP.PYUK5&amp;ZAMANSERILERI=TP.PYUK6' In [4]: len(response.text) Out[4]: 134 </code></pre> <p>And running <code>~$ export PYTHONHASHSEED=1234</code> before starting another ipython2 shell:</p> <pre><code>In [4]: response.request.body Out[4]: 'DIL=UK&amp;GRTYPE=1&amp;ARAVERIGRUP=bie_yymkpyuk.db&amp;f_begdt=07-01-2005&amp;RESIMPOSTA=%2A%2A%2A&amp;ONDALIK=5&amp;EPOSTA=xxx&amp;YON=3&amp;SUBMITDEG=Report&amp;wfmultiple_selection=ZAMANSERILERI&amp;cgi=%24ozetweb&amp;ZAMANSERILERI=TP.PYUK1&amp;ZAMANSERILERI=TP.PYUK2&amp;ZAMANSERILERI=TP.PYUK21&amp;ZAMANSERILERI=TP.PYUK22&amp;ZAMANSERILERI=TP.PYUK3&amp;ZAMANSERILERI=TP.PYUK4&amp;ZAMANSERILERI=TP.PYUK5&amp;ZAMANSERILERI=TP.PYUK6&amp;f_enddt=07-10-2016' In [5]: os.environ["PYTHONHASHSEED"] Out[5]: '1234' In [6]: len(response.text) Out[6]: 134 </code></pre> <p>You can run the code numerous times to the same end but definitely <code>('cgi', '$ozetweb')</code> coming first is essential for the code to work, it happened to work using python3 intermittently as the order of the keys sometimes put <em>cgi</em> first. There is a bit more on the <a href="http://stackoverflow.com/a/27522708/2141635">hashing topic</a></p>
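<p>In other words, the fix is to send the fields in a container with a guaranteed order. Besides the list of tuples above, a <code>collections.OrderedDict</code> works too; a short sketch using the same field values as the question:</p> <pre><code>from collections import OrderedDict
import requests

data = OrderedDict()
data['cgi'] = '$ozetweb'                 # the server appears to need this field first
data['ARAVERIGRUP'] = 'bie_yymkpyuk.db'
# ... add the remaining fields in the order shown in the question ...

response = requests.post('http://evds.tcmb.gov.tr/cgi-bin/famecgi', data=data)
</code></pre>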
1
2016-10-18T22:29:46Z
[ "python", "python-requests" ]
How to find all pairs of neighboring values in matrix?
40,118,137
<p>For an image represented as a matrix, what is an efficient way to find all unique pairs of elements that touch within a 3x3 square?</p> <pre><code>Let A= 1 1 2 2 3 3 3 1 1 1 1 2 4 4 1 1 2 2 5 5 5 </code></pre> <p>Then we would return</p> <pre><code>(1,2),(1,3),(1,5),(2,3),(2,4),(2,5),(3,4),(4,5) </code></pre>
-2
2016-10-18T21:11:58Z
40,118,374
<p>You may use <a href="https://docs.python.org/2/library/itertools.html#itertools.combinations" rel="nofollow"><code>itertools.combinations()</code></a> to achieve this. Below is the sample code:</p> <pre><code>a = [[1, 1, 2, 2, 3, 3, 3,], [1, 1, 1, 1, 2, 4, 4,], [1, 1, 2, 2, 5, 5, 5,], ] # Extract boundry values to list boundary_vals = a[0] + a[-1] + [sub_list[0] for sub_list in a[1:-1]] + [sub_list[-1] for sub_list in a[1:-1]] # Unique set of values unique_vals = set(boundary_vals) # Calculate combinations from itertools import combinations my_values = list(combinations(unique_vals, 2)) </code></pre> <p>Here, <code>my_values</code> is a <code>list</code> of <code>tuple</code>s having value as:</p> <pre><code>[(1, 2), (1, 3), (1, 4), (1, 5), (2, 3), (2, 4), (2, 5), (3, 4), (3, 5), (4, 5)] </code></pre> <p><strong>Explanation</strong>:</p> <p>For calculating the boundary values:</p> <pre><code># 1st list &gt;&gt;&gt; a[0] [1, 1, 2, 2, 3, 3, 3] # Last list &gt;&gt;&gt; a[-1] [1, 1, 2, 2, 5, 5, 5] # all the 0th elements of sub-lists excluding 1st and last list &gt;&gt;&gt; [sub_list[0] for sub_list in a[1:-1]] [1] # all the last elements of sub-lists excluding 1st and last list &gt;&gt;&gt; [sub_list[-1] for sub_list in a[1:-1]] [4] </code></pre> <p>Adding all the above list, will give the boundary elements.</p>
0
2016-10-18T21:27:18Z
[ "python", "image-processing", "matrix" ]
How to find all pairs of neighboring values in matrix?
40,118,137
<p>For an image represented as a matrix, what is an efficient way to find all unique pairs of elements that touch within a 3x3 square?</p> <pre><code>Let A= 1 1 2 2 3 3 3 1 1 1 1 2 4 4 1 1 2 2 5 5 5 </code></pre> <p>Then we would return</p> <pre><code>(1,2),(1,3),(1,5),(2,3),(2,4),(2,5),(3,4),(4,5) </code></pre>
-2
2016-10-18T21:11:58Z
40,119,187
<p>Here is some partially-hardcoded easy-to-understand-approach.</p> <ul> <li><strong>Edit:</strong> faster version due to preprocessing</li> <li><strong>Edit 2:</strong> one more final speedup (symmetry-reduction in preprocessing)</li> <li><strong>Edit 3:</strong> okay; added one more symmetry-reduction step in preprocessing</li> </ul> <h3>Approach</h3> <ul> <li>get a block-view by skimage</li> <li>preprocess the neighborhood-logic once: <ul> <li>create a list of all indices to look up pairs (which are connected) in the given window</li> <li>some symmetry-reduction used</li> </ul></li> <li>iterate all blocks; grab pairs <ul> <li>add if some symmetry-constraint is true</li> </ul></li> </ul> <h3>Code</h3> <pre><code>import numpy as np from skimage.util.shape import view_as_blocks, view_as_windows img = np.array([[1,1,2,2,3,3,3], [1,1,1,1,2,4,4], [1,1,2,2,5,5,5]]) #img = np.random.random_integers(1, 10, size=(256,256)) WINDOW_SIZE = 3 img_windowed = view_as_windows(img, window_shape=(WINDOW_SIZE,WINDOW_SIZE)) # overlapping # Preprocessing: generate valid index_pairs index_pairs = [] for x in range(WINDOW_SIZE): for y in range(WINDOW_SIZE): if y&gt;=x: # remove symmetries if x&gt;0: index_pairs.append(((x,y), (x-1,y))) if x&lt;2: index_pairs.append(((x,y), (x+1,y))) if y&gt;0: index_pairs.append(((x,y), (x,y-1))) if y&lt;2: index_pairs.append(((x,y), (x,y+1))) if x&gt;0 and y&gt;0: index_pairs.append(((x,y), (x-1,y-1))) if x&lt;2 and y&lt;2: index_pairs.append(((x,y), (x+1,y+1))) if x&gt;0 and y&lt;2: index_pairs.append(((x,y), (x-1,y+1))) if x&lt;2 and y&gt;0: index_pairs.append(((x,y), (x+1,y-1))) index_pairs = list(filter(lambda x: x[0] &lt; x[1], index_pairs)) # remove symmetries pairs = [] def reason_pair(a,b): # remove symmetries if a&lt;b: pairs.append((a,b)) elif a&gt;b: pairs.append((b,a)) for a in range(img_windowed.shape[0]): for b in range(img_windowed.shape[1]): block = img_windowed[a,b] for i in index_pairs: reason_pair(block[i[0]], block[i[1]]) print(set(pairs)) </code></pre> <h3>Output</h3> <pre><code>set([(1, 2), (1, 3), (4, 5), (1, 5), (2, 3), (2, 5), (3, 4), (2, 4)]) </code></pre>
0
2016-10-18T22:34:05Z
[ "python", "image-processing", "matrix" ]
How do I convert number to binary just by using the math functions in python?
40,118,197
<p>My assignment is to write code that changes a number between 0 and 255 to binary. All we have learned is print, input, and the math functions. How would I code it so that when it asks for a number, I can type the number in and it will go through the process of converting it? Sorry if my explanation is weird. I'm desperate.</p>
-7
2016-10-18T21:15:50Z
40,119,531
<p>Given the number of down votes, I’ll probably be pilloried for helping you—especially when you haven’t posted any code (pro tip: the people who answer questions on stack have actual lives where they do things with people they care about). But enough of my hectoring, start with what you have and go from there is usually a good thing.</p> <p>First off, 2^8 = 256, but your input range is 0 => 255. So you don’t have to worry about 2^8. Now, for base conversion, we usually work left to right like this</p> <pre><code> 128 64 32 16 8 4 2 1 147 1 0 0 1 0 0 1 1 147 19 19 3 3 3 1 0 </code></pre> <p>Running totals along the bottom row and converting 147 from decimal yields <strong>10010011</strong> in binary. And now you can replicate the same using code. As a starting point, you probably want something along the following lines</p> <pre><code>user_input = . . . # get the input from the user running_total = user_input for exponent in range(7, -1, -1): # if my running total is greater than 2**exponent # subtract 2**exponent from the running total and # print 1 else print 0 pass </code></pre> <hr> <p>n.b. there are all sorts of fancy ways to do this, and base conversion is a lot of fun when you get into it. But please do be respectful of other posters’ time by giving the question a try before asking for help. </p>
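<p>If it helps, here is one way the skeleton above can be finished using only print, input and arithmetic. Note the comparison should be greater-than-or-equal so that exact powers of two (like 128 itself) still produce a 1:</p> <pre><code>user_input = int(input("Enter a number from 0 to 255: "))
running_total = user_input
bits = ""
for exponent in range(7, -1, -1):
    if running_total &gt;= 2 ** exponent:   # this power of two fits, so the bit is 1
        running_total -= 2 ** exponent
        bits = bits + "1"
    else:
        bits = bits + "0"
print(bits)                               # e.g. 147 -&gt; 10010011
</code></pre>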
0
2016-10-18T23:14:22Z
[ "python", "binary" ]
get, access and modify element values for an numpy array
40,118,240
<p>I once saw the following code segment</p> <pre><code>import numpy as np nx=3 ny=3 label = np.ones((nx, ny)) mask=np.zeros((nx,ny),dtype=np.bool) label[mask]=0 </code></pre> <p>The <code>mask</code> generated is a bool array</p> <pre><code>[[False False False] [False False False] [False False False]] </code></pre> <p>If I would like to assign some elements in mask to other values, for instance, I have been trying to use <code>mask[2,1]="True"</code>, but it did not work without changing the corrsponding entry as I expected. What's the correct way to get access and change the value for an numpy array. In addition, what does <code>label[mask]=0</code> do? It seems to me that it tries to use each mask entry value to assign the corrsponding label entry value.</p>
0
2016-10-18T21:18:40Z
40,120,099
<p>Here is a code snippet with some comments that might help you make sense of this. I would suggest you look into the link that @Divakar provided and look into <a href="https://docs.scipy.org/doc/numpy-1.10.1/user/basics.indexing.html#boolean-or-mask-index-arrays" rel="nofollow">boolean-indexing</a>. </p> <pre><code># a two dimensional array with random values arr = np.random.random((5, 5)) # assign mask to a two dimensional array (same shape as arr) # that has True for every element where the corresponding # element in arr is greater than 0.5 mask = arr &gt; 0.5 # assign all the elements in arr that are greater than 0.5 to 0 arr[mask] = 0 # the above can be more concisely written as: arr[arr&gt;0.5] = 0 # you can change the mask any way you want # here I invert the mask inv_mask = np.invert(mask) # assign all the values in arr less than 0.5 to 1 arr[inv_mask] = 1 </code></pre>
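<p>On the first part of the question: assign an actual boolean, not the string "True", when editing the mask by hand. A tiny sketch matching the shapes from the question:</p> <pre><code>mask = np.zeros((3, 3), dtype=np.bool)
mask[2, 1] = True      # a real bool, not the string "True"
label[mask] = 0        # now only label[2, 1] is set to 0
</code></pre>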
0
2016-10-19T00:22:04Z
[ "python", "numpy", "scipy" ]
Unable to create a second dataframe python pandas
40,118,259
<p>My second data frame is not loading values when i create it. Any help with why it is not working? When i make my cursor a list, it has a bunch of values in it, but for whatever reason when i try to do a normal data frame load with pandas a second time, it does not work.</p> <p>My code:</p> <pre><code> conn = pyodbc.connect(constr, autocommit=True) cursor = conn.cursor() secondCheckList = [] checkCount = 0 maxValue = 0 strsql = "SELECT * FROM CRMCSVFILE" cursor = cursor.execute(strsql) cols = [] SQLupdateNewIdField = "UPDATE CRMCSVFILE SET NEW_ID = ? WHERE Email_Address_Txt = ? OR TELEPHONE_NUM = ? OR DRIVER_LICENSE_NUM = ?" for row in cursor.description: cols.append(row[0]) df = pd.DataFrame.from_records(cursor) df.columns = cols newIdInt = 1 for row in range(len(df['Email_Address_Txt'])): #run initial search to figure out the max number of records. Look for email, phone, and drivers license, names have a chance not to be unique SQLrecordCheck = "SELECT * FROM CRMCSVFILE WHERE Email_Address_Txt = '" + str(df['Email_Address_Txt'][row]) + "' OR TELEPHONE_NUM = '" + str(df['Telephone_Num'][row]) + "' OR DRIVER_LICENSE_NUM = '" + str(df['Driver_License_Num'][row]) + "'" ## print(SQLrecordCheck) cursor = cursor.execute(SQLrecordCheck) ## maxValue is indeed a list filled with records maxValue =(list(cursor)) ## THIS IS WHERE PROBLEM OCCURS tempdf = pd.DataFrame.from_records(cursor) </code></pre>
2
2016-10-18T21:19:54Z
40,118,556
<p>Why not just use <code>pd.read_sql_query("your_query", conn)</code>? This will return the result of the query as a dataframe and requires less code. Also, you set <code>cursor</code> to <code>cursor.execute(strsql)</code> at the top and then try to call <code>execute</code> on <code>cursor</code> again in your for loop; by that point you can no longer call <code>execute</code> on <code>cursor</code>, so you would have to set <code>cursor = conn.cursor()</code> again. </p>
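<p>A rough sketch of what that would look like with the variable names from the question (untested; the per-row query string is still built the same way as in your code):</p> <pre><code>import pandas as pd
import pyodbc

conn = pyodbc.connect(constr, autocommit=True)

# whole table straight into a DataFrame, no manual cursor/column handling
df = pd.read_sql_query("SELECT * FROM CRMCSVFILE", conn)

for row in range(len(df['Email_Address_Txt'])):
    SQLrecordCheck = "..."  # same string building as in the question
    # each per-row result as its own DataFrame
    tempdf = pd.read_sql_query(SQLrecordCheck, conn)
</code></pre>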
3
2016-10-18T21:39:35Z
[ "python", "pandas", "pyodbc" ]
sqlalchemy can't read null dates from sqlite3 (0000-00-00): ValueError: year is out of range
40,118,266
<p>When I try to query a database containing dates such as <code>0000-00-00 00:00:00</code> with <code>sqlachemy</code>, I get <code>ValueError: year is out of range</code>.</p> <p>Here's the db dump:</p> <p><a href="https://i.stack.imgur.com/k6wiR.png" rel="nofollow"><img src="https://i.stack.imgur.com/k6wiR.png" alt="enter image description here"></a></p> <p>Here's the stacktrace:</p> <pre><code>File "/home/rob/.virtualenvs/calif/lib/python3.5/site-packages/sqlalchemy/engine/result.py" in items 163. return [(key, self[key]) for key in self.keys()] File "/home/rob/.virtualenvs/calif/lib/python3.5/site-packages/sqlalchemy/engine/result.py" in &lt;listcomp&gt; 163. return [(key, self[key]) for key in self.keys()] File "/home/rob/.virtualenvs/calif/lib/python3.5/site-packages/sqlalchemy/engine/result.py" in __getitem__ 90. return processor(self._row[index]) File "/home/rob/.virtualenvs/calif/lib/python3.5/site-packages/sqlalchemy/processors.py" in process 48. return type_(*list(map(int, m.groups(0)))) Exception Type: ValueError at / Exception Value: year is out of range </code></pre> <p>Is this normal ? Can sqlalchemy read dates like that ? Is this a python limitation ? Is there a workaround to keep the date as-is (not converting to None) ?</p>
0
2016-10-18T21:20:28Z
40,118,382
<p>Got the answer via <code>inklesspen</code> on IRC: <a href="https://docs.python.org/2/library/datetime.html#datetime.MINYEAR" rel="nofollow">Python datetime representation has <strong>minimum year</strong> and it's <strong>1</strong></a></p>
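<p>A quick way to see the limit for yourself (plain standard library, nothing SQLAlchemy-specific):</p> <pre><code>import datetime

print(datetime.MINYEAR)      # 1
datetime.datetime(0, 1, 1)   # raises ValueError, year 0 is below MINYEAR
</code></pre>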
0
2016-10-18T21:27:50Z
[ "python", "sqlalchemy" ]
How to set multiple function keywords at once, in Python 2?
40,118,384
<p>all.</p> <p>I was wondering if it was possible to set multiple keywords at once (via list?) in a function call.</p> <p>For example, if you do:</p> <pre><code>foo, bar = 1, 2 print(foo, bar) </code></pre> <p>The output is <code>(1,2)</code>.</p> <p>For the function</p> <pre><code>def printer(foo, bar) print(foo,bar) </code></pre> <p>Is it possible to do something like:</p> <pre><code>printer([foo, bar] = [1,2]) </code></pre> <p>where both keywords are being set with a list?</p> <p>In particular, the reason why I ask is because I have a function that returns two variables, <code>scale</code> and <code>offset</code>:</p> <pre><code>def scaleOffset(...): 'stuff happens here return [scale, offset] </code></pre> <p>I would like to pass both of these variables to a different function that accepts them as keywords, perhaps as a nested call.</p> <pre><code>def secondFunction(scale=None, offset=None): 'more stuff </code></pre> <p>So far I haven't found a way of doing a call like this:</p> <pre><code>secondFunction([scale,offset] = scaleOffset()) </code></pre>
0
2016-10-18T21:27:55Z
40,118,440
<p>To pass args as a list</p> <pre><code>arg_list = ["foo", "bar"] my_func(*arg_list) </code></pre> <p>To pass kwargs, use a dictionary</p> <pre><code>kwarg_dict = {"keyword": "value"} my_func(**kwarg_dict) </code></pre>
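<p>Applied to the functions in the question (same names), that could look like:</p> <pre><code># unpack the returned [scale, offset] pair explicitly...
scale, offset = scaleOffset()
secondFunction(scale=scale, offset=offset)

# ...or rely on the positional order of the returned list
secondFunction(*scaleOffset())
</code></pre>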
3
2016-10-18T21:32:05Z
[ "python", "arguments", "parameter-passing", "kwargs", "function-call" ]
Replace/remap server response body while preserving most of original header fields served to browser
40,118,464
<p>How can I replace particular file body returned to browser by remote server but leave the original respond header mostly unchanged/intact/unaffected/unaltered/untouched? (I don't know which English word is best in this context so please: fix my question!) This probably may be done using penetration testing proxy (Burp, OWASP ZAP, Charles, Fiddler, Paros etc.) but I don't find suitable option to mapping respond body to local file body without dropping important header fields (<code>Set-Cookie</code>, <code>Content-Type</code> etc.). There is not problem with rewriting only part of body using regular expression pattern. There is also not problem with remapping whole file (based on URL), however, it generates a new header instead of duplicating the original returned by server. I know that my local file may be differ in size from that on the server so <code>Content-Length</code> field should be altered by proxy. There are probably other fields in header that should be modified by penetration testing tool but fields such as <code>Set-Cookie</code>, <code>Content-Type</code> and some other selected and as well all customized fields (as the ones prefixed by <code>X-</code>) should be preserved.</p> <p>Should I write an extension or some kind of script to any of these tools? If so, then I can search for API reference of chosen tool but which penetration testing tool should I chose to write in my favorite language which is Python? Any help in pointing to particular API needed for this purpose will be appreciated. This script should:</p> <ul> <li>intercept HTTP response by setting break point on particular URL</li> <li>read and remember header returned by server</li> <li>load local file associated with requested URL</li> <li>check file size and modify <code>Content-Length</code> header field</li> <li>send modified header</li> <li>send loaded file</li> </ul> <p>The above list suggests which elements of API are needed to point me to. Ideally it would be if there is embedded option for described task in any tool but if such option does not exist then API of which tool should I learn to code in Python and on which API parts should I pay special attention? Because of portability, chosen tool should not be dependent on .NET (so using Fiddler will be a problem in this situation). Java-dependent tools are OK because there is no problem with using portable Java runtime environment.</p>
0
2016-10-18T21:33:32Z
40,125,565
<p>Yes, you can do this with OWASP ZAP.</p> <p>ZAP supports lots of scripting languages including python (actually jython;). You can change anything to do with requests and responses using proxy scripts. You have full access to all of the information about the requests and responses, all of the ZAP functionality and your local filestore.</p> <p>We have some examples here: <a href="https://github.com/zaproxy/community-scripts/tree/master/proxy" rel="nofollow">https://github.com/zaproxy/community-scripts/tree/master/proxy</a> None of those examples actually use python, but there is an example python script in <a href="https://github.com/zaproxy/community-scripts/tree/master/payloadgenerator" rel="nofollow">https://github.com/zaproxy/community-scripts/tree/master/payloadgenerator</a> You will need to install the python scripting add-on which includes the necessary templates: <a href="https://github.com/zaproxy/zap-extensions/wiki/HelpAddonsJythonJython" rel="nofollow">https://github.com/zaproxy/zap-extensions/wiki/HelpAddonsJythonJython</a></p> <p>If you have specific questions about ZAP scripting then we have a group just for that purpose: <a href="http://groups.google.com/group/zaproxy-scripts" rel="nofollow">http://groups.google.com/group/zaproxy-scripts</a></p>
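<p>As a very rough sketch of the proxy-script shape only: the callback and message-method names below are assumptions based on the script templates, so verify them against the templates installed with the scripting add-on before relying on them.</p> <pre><code># ZAP proxy script sketch (Jython). Method names are assumptions - check the templates.
TARGET_URL = "http://example.com/static/app.js"      # hypothetical URL you break on
LOCAL_FILE = "/path/to/replacement/app.js"           # hypothetical local file

def proxyRequest(msg):
    return True                                      # let requests pass unchanged

def proxyResponse(msg):
    if msg.getRequestHeader().getURI().toString() == TARGET_URL:
        body = open(LOCAL_FILE).read()
        msg.setResponseBody(body)                    # swap only the body
        msg.getResponseHeader().setContentLength(msg.getResponseBody().length())
        # other header fields (Set-Cookie, Content-Type, X-*) are left as served
    return True
</code></pre>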
0
2016-10-19T08:05:57Z
[ "python", "fiddler", "owasp", "charles", "penetration-testing" ]
Cannot connect to SQL server from python using Active Directory Authentication
40,118,470
<p>I am using pymssql library to connect python to Sql Server. I can connect using windows/sql server authentication. I want to connect using Active Directory Authentication. </p> <p>Tried the below connection string. But it fails with error : </p> <pre><code>unexpected keyword authentication conn = pymssql.connect(server='adventureworks.database.windows.net', authentication = 'Active Directory Password',user='username@server.com',password='Enterpasswordhere', database='dbo') </code></pre>
1
2016-10-18T21:33:59Z
40,120,437
<p><a href="http://pymssql.org/en/latest/ref/pymssql.html#functions" rel="nofollow">Note that pymssql.connect does not have an 'authentication' parameter</a>. You are passing it that as a named arg, which is invalid, and the reason you see your error.</p> <p>See <a href="http://pymssql.org/en/latest/pymssql_examples.html#connecting-using-windows-authentication" rel="nofollow">this example</a> for connecting using windows authentication:</p> <pre><code>conn = pymssql.connect( host=r'dbhostname\myinstance', user=r'companydomain\username', password=PASSWORD, database='DatabaseOfInterest' ) </code></pre>
0
2016-10-19T01:03:31Z
[ "python", "sql-server" ]
How to create a backward moving input class in Python
40,118,471
<p>How to create a backward moving input class in Python? I have a class called input which reads a file forward returning one character at a time now I would like to change it to read backwards.</p> <pre><code># Buffered input file. Returns one character at a time. class Input: def __init__( self, file ): self.file = file # must open( &lt;filename&gt;, 'rb' ) self.length = 0 self.used = 0 self.buffer = "" def read( self ): if self.used &lt; self.length: # if something in buffer c = self.buffer[self.used] self.used += 1 return c else: self.buffer = self.file.read( 2048 ) # or 2048 self.length = len( self.buffer ) if self.length == 0: return -1 else: c = self.buffer[0] self.used = 1 return c </code></pre>
-2
2016-10-18T21:34:03Z
40,118,787
<p>I think the only way this can work with text files in Python 3 is to read the whole text of the file in at once, and then yield characters from the end of the string you've loaded. You can't read the file in chunks starting from the end because there's no way to safely seek to an arbitrary position in the text. If you picked an arbitrary spot (e.g. 2048 bytes before the end of the file) you might land in the middle of a multi-byte character. For this reason Python doesn't support doing a <code>seek</code> to anywhere other than the start and end of the file, or to a place you've been before (and saved the position of with <code>tell</code>).</p> <p>If your file is small enough, I'd suggest something like this:</p> <pre><code>class ReverseInput(): def __init__(self, file): buffer = file.read() # read all text self.rev_iter = reversed(buffer) # save a reverse iterator into the text def read(self): try: return next(self.rev_iter) except StopIteration: return -1 # raising an exception or returning "" might be a better API </code></pre> <p>If the file is too large to store in memory at once, I suppose you could work around the limitation on seeking by reading and discarding blocks of a limited size going forwards through the file and using <code>self.file.tell()</code> to save locations you can seek back to later. It would probably be slow, awkward and easy to mess up.</p>
0
2016-10-18T21:58:13Z
[ "python", "python-3.x" ]
Controlling nonstandard classes with pywinauto
40,118,492
<p>I am using pywinauto in order to ease my work with certain program. I would like to select in this <a href="https://i.stack.imgur.com/OHbHV.png" rel="nofollow">combobox</a> item "vs. Reference". I used <code>app['Setup Potentiodynamic Experiment'].PrintControlIdentifiers()</code> to get name and class of the combobox. Python returned the following: </p> <pre><code>TComboDJ - 'b'vs. Open Circuit'' (L987, T424, R1094, B445) 'b'TComboDJ5'' 'b'vs. Open Circuit3'' 'b'vs. Open CircuitTComboDJ3'' </code></pre> <p>So, to do what I want, I used this:</p> <pre><code>app['Setup Potentiodynamic Experiment']["TComboDJ5"].Select("vs. Reference") </code></pre> <p>And the following error appeared:</p> <pre><code>Exception in Tkinter callback Traceback (most recent call last): File "E:\PY\lib\tkinter\__init__.py", line 1550, in __call__ return self.func(*args) File "E:/Python projects/test/test.py", line 40, in createxp app['Setup Potentiodynamic Experiment']["TComboDJ5"].Select("vs. Reference") File "E:\PY\lib\site-packages\pywinauto\application.py", line 245, in __getattr__ return getattr(ctrls[-1], attr) AttributeError: 'HwndWrapper' object has no attribute 'Select' </code></pre> <p>As far as I understand, pywinauto can't recognize the combobox as a combobox. Can something be done about it?</p>
1
2016-10-18T21:35:43Z
40,119,074
<p><code>ComboBoxWrapper</code> can be created explicitly:</p> <pre><code>from pywinauto.controls.win32_controls import ComboBoxWrapper hwnd_wr = app['Setup Potentiodynamic Experiment']["TComboDJ5"].WrapperObject() combo = ComboBoxWrapper(hwnd_wr) combo.Select("vs. Reference") </code></pre> <p>Of course it would work if the combo box could respond to standard window messages like <code>CB_GETCOUNT</code>. And the output tells you that combined <code>&lt;title&gt;&lt;item_text&gt;</code> access names are fortunately available.</p>
0
2016-10-18T22:24:32Z
[ "python", "combobox", "pywinauto" ]
output truncated when writing to a file in python
40,118,500
<p>I have an unusual problem. I'm using Anaconda and my Python code runs fine; the output on the screen is perfect. However, after each print I added a file.write to write the result to a file as well, but not all of the output is written there: every run it truncates the output at a different position, which doesn't make sense. The file was opened in 'w' mode. My code is really long (about 400 lines), so it's not possible to paste it all here. I tried closing the console after each run and restarting it, but that doesn't always work; I got a correct file output maybe twice in 10 runs. Can anyone tell me why this is happening? Your time is highly appreciated. </p> <pre><code>file.write("\n \n") current=EventQueue.head for i in range(0,EventQueue.size()): print "*" *70 file.write("\n In Time %s " %str(current.index)) file.write(str(current.data)) print "In Time ",current.index , " ",current.data current=current.next print "*" *70 file.close </code></pre> <p>When the size is equal to 25, only around 16 to 19 entries end up written in the file</p>
-4
2016-10-18T21:36:08Z
40,118,705
<p>I just added the parentheses to <code>file.close</code> and it fixed the problem ... thanks and sorry for the </p>
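<p>For anyone hitting the same thing, the difference is simply:</p> <pre><code>file.close    # only refers to the method, the file is never closed or flushed
file.close()  # actually closes the file, flushing the buffered output to disk
</code></pre>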
0
2016-10-18T21:51:12Z
[ "python", "python-2.7" ]
Plotting secondary Y-axis using Panda?
40,118,548
<p>My current code takes a list from a csv file and lists the header for the user to pick from so it can plot.</p> <pre><code>import pandas as pd df = pd.DataFrame.from_csv('log40a.csv',index_col=False) from collections import OrderedDict headings = OrderedDict(enumerate(df,1)) for num, heading in headings.items(): print("{}) {}".format(num, heading)) print ('Select X-Axis') xaxis = int(input()) print ('Select Y-Axis') yaxis = int(input()) df.plot(x= headings[xaxis], y= headings[yaxis]) </code></pre> <p>My first question. How do I add a secondary Y axis. I know with matplotlib I first create a figure and then plot the first yaxis with the xaxis and then do the same thing to the 2nd yaxis. However, I am not sure how it is done in pandas. Is it similar?</p> <p>I tried using matplotlib to do it but it gave me an error:</p> <pre><code>fig1 = plt.figure(figsize= (10,10)) ax = fig1.add_subplot(211) ax.plot(headings[xaxis], headings[yaxis], label='Alt(m)', color = 'r') ax.plot(headings[xaxis], headings[yaxis1], label='AS_Cmd', color = 'blue') </code></pre> <p>Error:</p> <pre><code>ValueError: Unrecognized character a in format string </code></pre>
0
2016-10-18T21:39:10Z
40,124,887
<p>You need to create a list with the column names that you want plotted on the y axis. </p> <p>An example, if you delimit the y columns with a ',':</p> <pre><code>df.plot(x= headings[xaxis], y=[headings[int(i)] for i in yaxis.split(",")], figsize=(15, 10)) </code></pre> <p>To run it you will need to change your input method, so that yaxis is kept as the comma-separated string (e.g. "2,3") rather than converted to a single int. </p>
0
2016-10-19T07:29:50Z
[ "python", "pandas" ]
Pandas merging 2 DataFrames into one graph
40,118,638
<p>I am trying to plot two pandas dataframes. One dataframe needs to be displayed as a line graph and another as a scatter plot on the same graph.</p> <p>This plots the first dataframe:</p> <pre><code>line = pd.read_csv('nugt_daily.csv',parse_dates=['Date']) line = line.sort_values(by='Date') line.set_index('Date',inplace=True) line['Close'].plot(figsize=(16, 12)) </code></pre> <p>I want to plot the following dataframe on top of the previous graph - but as a scatter plot (rather than a line graph):</p> <pre><code>points = pandas.read_csv('test_doc.csv') points = points.sort_values(by='Date') points.set_index('Date',inplace=True) points.plot(figsize=(16, 12)) </code></pre> <p>How can I achieve this? When I run the two codes one after the other, I see two separate graphs for each dataframe.</p>
0
2016-10-18T21:46:11Z
40,118,802
<p>The first call to <code>plot()</code> returns a matplotlib Axes object. Pass that axes to the second plot call using <code>ax=ax</code>. This will cause both plots to be drawn on the same axes.</p> <p>Try:</p> <pre><code>ax = df1.plot() df2.plot(ax=ax) </code></pre>
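<p>With the names from the question, a sketch of line-plus-scatter on one axes (assuming both frames index on comparable dates) would be:</p> <pre><code>ax = line['Close'].plot(figsize=(16, 12))   # line graph, returns the Axes
points.plot(ax=ax, style='o')               # markers only, drawn on the same Axes
</code></pre>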
0
2016-10-18T21:59:46Z
[ "python", "pandas", "numpy", "matplotlib" ]
django contact page send_email
40,118,704
<p>I am trying to create a contact page which will take 17 inputs from a visitor and then email that information to me. I found many basic tutorials but none specific to what I am trying to achieve. So far this is what I have:</p> <p>I created a new Django project "contactform" then a new app "send_email"</p> <p>This is my forms.py file located- send_email/forms.py</p> <pre><code>from django import forms class ContactForm(forms.Form): title = forms.CharField(max_length=3, required=True) first_name = forms.CharField(required=True) last_name = forms.CharField(required=True) identity_type = forms.CharField(required=True) identity_number = forms.IntegerField(required=True) current_job = forms.CharField(required=True) career_prospects = forms.CharField(required=True) age = forms.IntegerField(required=True) nationality = forms.CharField(required=True) address = forms.CharField(required=True) city = forms.CharField(required=True) province = forms.CharField(required=True) postal_code = forms.IntegerField(required=True) contact_number = forms.IntegerField(required=True) daytime_contact_number = forms.IntegerField(required=True) evening_contact_number = forms.IntegerField(required=True) email_address = forms.EmailField(required=True) </code></pre> <p>My views.py file located- send_email/views.py</p> <pre><code>from django.core.mail import send_mail, BadHeaderError from django.http import HttpResponse, HttpResponseRedirect from django.shortcuts import render, redirect from .forms import ContactForm def email(request): if request.method == 'GET': form = ContactForm() else: form = ContactForm(request.POST) if form.is_valid(): title = form.cleaned_data['title'] first_name = form.cleaned_data['first_name'] last_name = form.cleaned_data['last_name'] identity_type = form.cleaned_data['identity_type'] identity_number = form.cleaned_data['identity_number'] current_job = form.cleaned_data['current_job'] career_prospects = form.cleaned_data['career_prospects'] age = form.cleaned_data['age'] nationality = form.cleaned_data['nationality'] address = form.cleaned_data['address'] city = form.cleaned_data['city'] province = form.cleaned_data['province'] postal_code = form.cleaned_data['postal_code'] contact_number = form.cleaned_data['contact_number'] daytime_contact_number = form.cleaned_data['daytime_contact_number'] evening_contact_number = form.cleaned_data['evening_contact_number'] email_address = form.cleaned_data['email_address'] try: send_mail(first_name, message, email_address ['admin@example.com']) except BadHeaderError: return HttpResponse('Invalid header found.') return redirect('success') return render(request, "email.html", {'form': form}) def success(request): return HttpResponse('Success! Thank you for your message.') </code></pre> <p>email.html as follows:</p> <pre><code>&lt;h1&gt;Contact Us&lt;/h1&gt; &lt;form method="post"&gt; {% csrf_token %} {{ form.as_ul }} &lt;div class="form-actions"&gt; &lt;button type="submit"&gt;Send&lt;/button&gt; &lt;/div&gt; </code></pre> <p></p> <p>Now I am well aware that Django send_mail take these arguments: (subject, message, sender, recipients) I just want to know if there is a way to pass the data I am asking for into the "message" parameter and email it as a list?</p>
0
2016-10-18T21:51:02Z
40,119,216
<p>Here you go:</p> <hr> <p><strong>email.template.html</strong></p> <pre><code>Hello, someone filled out a contact form with the following information: First name: {{ first_name }} Last name : {{ last_name }} . . . and so on </code></pre> <p>-- or if you’re lazy like me --</p> <pre><code>Hello, someone filled out a contact form with the following information: {% for key, value in form.cleaned_data.items %} {{ key }}: {{ value }} {% endfor %} </code></pre> <hr> <pre><code>from django.template.loader import get_template from django.template import Context def email(request): . . . context = Context(locals()) template = get_template('email.template.html') message = template.render(context) </code></pre>
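<p>Then pass the rendered string to send_mail as usual (the subject and recipient below are placeholders):</p> <pre><code>send_mail(
    'New contact form submission',   # subject, pick whatever you like
    message,                         # the rendered template from above
    email_address,                   # "from" address, taken from the form
    ['admin@example.com'],           # recipient list
)
</code></pre>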
0
2016-10-18T22:36:20Z
[ "python", "django", "django-forms", "contact-form" ]
adding vector valued attribute to an edge in networkx
40,118,713
<p>I want to add a list, say [1,2,3] to every edge in a directed networkX graph. Any ideas? I found something like this can be done for nodes (using np.array) but it did not work on edges.</p>
0
2016-10-18T21:51:36Z
40,119,056
<p>You can add the attributes when you put the edges into the network.</p> <pre><code>import networkx as nx G=nx.Graph() G.add_edge(1,2, values = [1,2,3]) G.add_edge(2,3, values = ['a', 'b', 'c']) G.edge[1][2]['values'] &gt; [1, 2, 3] G.edges(data = True) &gt; [(1, 2, {'values': [1, 2, 3]}), (2, 3, {'values': ['a', 'b', 'c']})] G.get_edge_data(1,2) &gt; {'values': [1, 2, 3]} </code></pre> <p>or after the fact</p> <pre><code>G.add_edge(4,5) G.edge[4][5]['values'] = [3.14, 'parrot', 'shopkeeper'] G.edge[5][4] &gt; {'values': [3.14, 'parrot', 'shopkeeper']} </code></pre>
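<p>Since the question asks about every edge, one way to attach the same list across the whole graph (networkx 1.x style, matching the snippets above) is:</p> <pre><code>for u, v in G.edges():
    G.edge[u][v]['values'] = [1, 2, 3]

# or in one call (1.x signature); the attribute name 'values' is just an example
nx.set_edge_attributes(G, 'values', {(u, v): [1, 2, 3] for u, v in G.edges()})
</code></pre>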
2
2016-10-18T22:22:59Z
[ "python", "graph", "networkx" ]
How to use REGEX with multiline
40,118,721
<p>The following expression works well extracting the portion of <code>data</code> string that starts with the word <code>Block</code> followed by open bracket <code>{</code> and ending with the closing bracket '}':</p> <pre><code>data =""" Somewhere over the rainbow Way up high Block { line 1 line 2 line 3 } And the dreams that you dreamed of Once in a lullaby """ regex = re.compile("""(Block\ {\n\ [^\{\}]*\n}\n)""", re.MULTILINE) result = regex.findall(data) print result </code></pre> <p>which returns:</p> <pre><code>['Block {\n line 1\n line 2\n line 3\n}\n'] </code></pre> <p>But if there is another curly bracket inside of the Block portion of the string the expression breaks returning an empty list:</p> <pre><code>data =""" Somewhere over the rainbow Way up high Block { line 1 line 2 {{} line 3 } And the dreams that you dreamed of Once in a lullaby Block { line 4 line 5 {{ } line 6 } Somewhere over the rainbow Blue birds fly And the dreams that you dreamed of Dreams really do come true ooh oh """ </code></pre> <p>How to modify this regex expression to make it ignore the brackets that are inside of the Blocks and yet each block is returned as the separate entity in <code>result</code> list (so each Block could be accessed separately)?</p>
0
2016-10-18T21:52:19Z
40,118,754
<p>Wouldn't this work?</p> <p><code>regex = re.compile("""(Block\ {\n\ [^\}]*\n}\n)""", re.MULTILINE)</code></p> <p>In the version you've posted, it is exiting the match whenever it comes across a second opening brace, even though you want it to exit upon the first closing brace. If you want nested opening / closing braces that's another story.</p>
2
2016-10-18T21:54:57Z
[ "python", "regex" ]
How to use REGEX with multiline
40,118,721
<p>The following expression works well extracting the portion of <code>data</code> string that starts with the word <code>Block</code> followed by open bracket <code>{</code> and ending with the closing bracket '}':</p> <pre><code>data =""" Somewhere over the rainbow Way up high Block { line 1 line 2 line 3 } And the dreams that you dreamed of Once in a lullaby """ regex = re.compile("""(Block\ {\n\ [^\{\}]*\n}\n)""", re.MULTILINE) result = regex.findall(data) print result </code></pre> <p>which returns:</p> <pre><code>['Block {\n line 1\n line 2\n line 3\n}\n'] </code></pre> <p>But if there is another curly bracket inside of the Block portion of the string the expression breaks returning an empty list:</p> <pre><code>data =""" Somewhere over the rainbow Way up high Block { line 1 line 2 {{} line 3 } And the dreams that you dreamed of Once in a lullaby Block { line 4 line 5 {{ } line 6 } Somewhere over the rainbow Blue birds fly And the dreams that you dreamed of Dreams really do come true ooh oh """ </code></pre> <p>How to modify this regex expression to make it ignore the brackets that are inside of the Blocks and yet each block is returned as the separate entity in <code>result</code> list (so each Block could be accessed separately)?</p>
0
2016-10-18T21:52:19Z
40,118,885
<p>I would suggest you to use:</p> <pre><code>(Block ?{\n ?[^$]+?\n}\n) </code></pre> <p>Since python matches <em>greedy</em>, we use <em>?</em> to be non-greedy.</p> <p>Worked well for me. In addition I would recommend you the use of <a href="https://regex101.com/" rel="nofollow">https://regex101.com/</a></p> <p>Best Regards</p>
0
2016-10-18T22:05:48Z
[ "python", "regex" ]
How to randomly select an image using findall and clickall? Sikuli
40,118,859
<p>The problem that I'm having is that when using "if image exists, then click image" the script wants to select the top image every time even if there are 8 others. How do I have it randomly select any of the images each time with equal chance?</p> <p>Example DOG it will pick this one each time. DOG I want it to pick this one.. DOG and this one.. DOG and sometimes this one too.</p>
-1
2016-10-18T22:04:03Z
40,119,110
<p>Use <code>findAll</code> to get every match and pick one of them at random to click:</p> <pre><code>import random click(random.choice(list(findAll("dog-1")))) </code></pre>
0
2016-10-18T22:27:18Z
[ "python", "jython", "sikuli" ]
Python How to Check if time is midnight and not display time if true
40,118,869
<p>I'm modifying our pacific time zone filter to include a time option. I don't want the time component to be shown if midnight. The only import thus far we are using is dateutil.parser. Any pointers on best solution would be appreciated! Thanks.</p> <pre><code>def to_pacific_date_str(timestamp, format='%Y-%m-%d', time=False): pacific_timestamp = timestamp if time: format='%Y-%m-%d %H:%M' # 2016-10-03 00:00 if timestamp.tzname() is None: # setting timezone lost when pulled from DB utc_timestamp = timestamp.replace(tzinfo=pytz.utc) # always converting to pacific timezone pacific_timestamp = utc_timestamp.astimezone(pytz.timezone('US/Pacific')) return pacific_timestamp.strftime(format) </code></pre>
2
2016-10-18T22:04:37Z
40,118,929
<p>To check if the time is midnight:</p> <pre><code>from datetime import datetime def checkIfMidnight(): now = datetime.now() seconds_since_midnight = (now - now.replace(hour=0, minute=0, second=0, microsecond=0)).total_seconds() return seconds_since_midnight == 0 </code></pre>
0
2016-10-18T22:09:59Z
[ "python", "datetime", "timezone", "pytz" ]
Python How to Check if time is midnight and not display time if true
40,118,869
<p>I'm modifying our pacific time zone filter to include a time option. I don't want the time component to be shown if midnight. The only import thus far we are using is dateutil.parser. Any pointers on best solution would be appreciated! Thanks.</p> <pre><code>def to_pacific_date_str(timestamp, format='%Y-%m-%d', time=False): pacific_timestamp = timestamp if time: format='%Y-%m-%d %H:%M' # 2016-10-03 00:00 if timestamp.tzname() is None: # setting timezone lost when pulled from DB utc_timestamp = timestamp.replace(tzinfo=pytz.utc) # always converting to pacific timezone pacific_timestamp = utc_timestamp.astimezone(pytz.timezone('US/Pacific')) return pacific_timestamp.strftime(format) </code></pre>
2
2016-10-18T22:04:37Z
40,119,335
<p>I believe the best thing to do would be to just take the <code>time()</code> from the <code>datetime</code> before passing it, then compare that to <code>datetime.time(0, 0)</code>.</p> <pre><code>import pytz import datetime def to_pacific_date_str(timestamp, date_fmt='%Y-%m-%d', time=False): pacific_timestamp = timestamp if timestamp.tzinfo is None: # setting timezone lost when pulled from DB utc_timestamp = timestamp.replace(tzinfo=pytz.utc) # always converting to pacific timezone pacific_timestamp = utc_timestamp.astimezone(pytz.timezone('US/Pacific')) if time and pacific_timestamp.time() != datetime.time(0, 0): date_fmt = '%Y-%m-%d %H:%M' # 2016-10-03 00:00 return pacific_timestamp.strftime(date_fmt) </code></pre> <p>Note that I've changed <code>format</code> to <code>date_fmt</code>, because <a href="https://docs.python.org/3/library/functions.html#format" rel="nofollow"><code>format()</code> is already a builtin</a>. Also, from a design standpoint, it's probably not a great idea to have <code>time</code> override the specified format string, so maybe change the "add time" portion to be <code>date_fmt = date_fmt + ' %H:%M'</code>.</p> <p>Demonstration:</p> <pre><code>&gt;&gt;&gt; PST = pytz.timezone('US/Pacific') &gt;&gt;&gt; to_pacific_date_str(PST.localize(datetime.datetime(2015, 4, 1, 0, 0)), time=True) '2015-04-01' &gt;&gt;&gt; PST = pytz.timezone('US/Pacific') &gt;&gt;&gt; to_pacific_date_str(PST.localize(datetime.datetime(2015, 4, 1, 2, 0)), time=True) '2015-04-01 02:00' </code></pre>
2
2016-10-18T22:48:43Z
[ "python", "datetime", "timezone", "pytz" ]
Python: cannot imoport keras, ImportError: No module named tensorflow
40,118,874
<p>I just update keras package to 1.1.0 version. But it canot be properly imported. Error message:</p> <pre><code>import tensorflow as tf ImportError: No module named tensorflow </code></pre> <p>It seems that the new version requires TensorFlow. I use anaconda in windows 10. </p> <p>How to solve the problem?</p>
0
2016-10-18T22:05:01Z
40,118,982
<p>It has been fixed by changing backend setup to 'theano'</p>
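<p>For reference, the backend switch lives in <code>~/.keras/keras.json</code> (other keys may already be present in your file; the relevant one is <code>backend</code>):</p> <pre><code>{
    "floatx": "float32",
    "epsilon": 1e-07,
    "backend": "theano"
}
</code></pre> <p>or it can be overridden for a single run with an environment variable:</p> <pre><code>KERAS_BACKEND=theano python your_script.py
</code></pre>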
0
2016-10-18T22:15:08Z
[ "python", "keras" ]
How to run 'module load < > ' command from within python script
40,118,919
<p>I tried using <code>os.system()</code>, <code>subprocess.call()</code> and <code>subprocess.Popen()</code> {<em>with and without the option <code>shell=True</code></em>} to execute <code>module load ___</code> from within my python script. Even though the script runs successfully and it mentions that my module has been loaded in the terminal, I am unable to use it. I am working on a ssh client. The <code>module load _____</code> works fine when I run it directly as a command line.</p>
-1
2016-10-18T22:09:00Z
40,119,175
<p>I believe the problem is that both os.system and subprocess are running the command in a... well, subprocess. So the module is loaded successfully in the subprocess context and it exits immediately. No effect in python's process context though.</p> <p>I'm not near a computer now to try it out, but this should work:</p> <p>run_py.sh:</p> <pre><code>pyfile=$1 shift 1 python $pyfile $(tty) $@ &amp; </code></pre> <p>This will run your python file with the first argument being the path to the current tty device, followed by all the other arguments. Parse the arguments, save the tty device path to tty_dev. Now you can run:</p> <pre><code>os.system('echo "module load &lt;&gt;" &gt; ' + tty_dev) </code></pre>
0
2016-10-18T22:32:58Z
[ "python", "shell" ]
How argument with return statement calling function?
40,118,976
<p>I am trying to understand how this return statement is calling a function, and how we can pass arguments to a function in a return statement.</p> <p>The program is:</p> <pre><code>def hello(x,b): z=x+b print(z) return "hi" def hi(n): return hello(4,4) for i in range(3): print(hi(3)) </code></pre> <p>I am confused how return hello(4,4) calls the other function. Can we pass arguments to a function with return?</p> <p>If we can pass arguments to a function with return, then I tried something like this, but it gives an error:</p> <pre><code>def hello(x,b): z=x+b print(z) return hello(4, 4) </code></pre> <p>But it's not working. I know I am not calling the function elsewhere, but I did call it with return hello(4,4). Why is it not working here when the same thing works in the previous program? Please explain. </p>
-1
2016-10-18T22:14:03Z
40,119,082
<pre><code>def hello(x,b): z=x+b print(z) return hello(4, 4) </code></pre> <p>does not work because of infinite recursion. Essentially this function is making an infinite loop.</p> <p>To see how this works, look at how you would write it out in code:</p> <pre><code>x = 4 b = 4 z = x+b print(z) #And then we are running the function again. x = 4 b = 4 z=x+b print(z) #And again. x = 4 b = 4 z=x+b print(z) #And again. x = 4 b = 4 z=x+b print(z) </code></pre> <p>You get the idea. But when we call it with a different function, the code looks like this:</p> <pre><code>for i in range(3): x = 4 b = 4 z=x+b print(z) </code></pre> <p>And then the function does not run again because there are no more calls to it.</p> <p>See <a href="http://www.python-course.eu/course.php" rel="nofollow">http://www.python-course.eu/course.php</a> and <a href="http://openbookproject.net/thinkcs/python/english3e/recursion.html" rel="nofollow">http://openbookproject.net/thinkcs/python/english3e/recursion.html</a> for more information.</p>
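<p>For contrast, a version of the self-calling function that does terminate needs a base case, for example (purely illustrative):</p> <pre><code>def hello(x, b):
    z = x + b
    print(z)
    if z &lt; 100:            # base case: stop recursing once the sum reaches 100
        return hello(z, b)
    return "hi"

hello(4, 4)                # prints 8, 12, 16, ... then returns "hi"
</code></pre>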
0
2016-10-18T22:25:06Z
[ "python", "python-2.7", "function", "python-3.x", "return" ]
Invalid view definition - Odoo v9 community
40,118,981
<p>I manage to find a way to have the product price on <code>stock.picking</code>, but now I have a view error.</p> <p>This is my model:</p> <pre><code>from openerp import models, fields, api import openerp.addons.decimal_precision as dp class StockPicking(models.Model): _inherit = 'stock.picking' product_id = fields.Many2one("product.product", "Product") price_unity = fields.Float(string="Precio", store=True, readonly=True, related="product_id.lst_price") </code></pre> <p>Now, the offending code in my view:</p> <pre><code>&lt;record id="view_stock_picking_form" model="ir.ui.view"&gt; &lt;field name="name"&gt;Stock Picking Price Form&lt;/field&gt; &lt;field name="model"&gt;stock.picking&lt;/field&gt; &lt;field name="inherit_id" ref="stock.view_picking_form"/&gt; &lt;field name="arch" type="xml"&gt; &lt;xpath expr="//page/field[@name='pack_operation_product_ids']/tree/field[@name='qty_done']" position="after"&gt; &lt;field name="price_unity"/&gt; &lt;/xpath&gt; &lt;/field&gt; &lt;/record&gt; </code></pre> <p>It says <code>Error details: Field</code>price_unity<code>does not exist</code> how is this even possible?</p> <p>On tree view it doesn't throws this error:</p> <pre><code>&lt;record id="view_stock_picking_tree" model="ir.ui.view"&gt; &lt;field name="name"&gt;Stock Picking Price Tree&lt;/field&gt; &lt;field name="model"&gt;stock.picking&lt;/field&gt; &lt;field name="inherit_id" ref="stock.vpicktree"/&gt; &lt;field name="arch" type="xml"&gt; &lt;field name="state" position="before"&gt; &lt;field name="price_unity"/&gt; &lt;/field&gt; &lt;/field&gt; &lt;/record&gt; </code></pre> <p>So, how is it that in form view I can't declare it'</p> <p>Am I missing something?</p> <p>Thanks in advance!</p>
1
2016-10-18T22:14:38Z
40,122,543
<p>You are adding <em>price_unity</em> field in view inside <em>pack_operation_product_ids</em> field.</p> <p><em>pack_operation_product_ids</em> is a One2many relation type with <em>stock_pack_operation</em> object.</p> <p>So we need to add/register <em>price_unity</em> field in <em>stock_pack_operation</em> object.</p> <p>Try with following code:</p> <pre><code>class StockPackOperation(models.Model): _inherit = 'stock.pack.operation' price_unity = fields.Float(string="Precio", store=True, readonly=True, related="product_id.lst_price") #product_id is already in table so no need to add/register </code></pre> <p>Afterwards restart Odoo server and upgrade your custom module.</p> <p>NOTE:</p> <p>You are not getting error in tree of Stock Picking because you have added/registered <em>price_unity</em>.</p> <p>Your view code is good.</p>
2
2016-10-19T05:04:09Z
[ "python", "openerp", "odoo-9", "qweb" ]
Creating a chart in python 3.5 using XlsxWriter where series values are based on a loop
40,119,012
<p>I'm using Python 3.5 on Windows 10. Hopefully I can articulate what I'm trying to accomplish while permitting as little confusion as possible... </p> <p>I've written a Python script that performs several tasks, some of which include creating an Excel workbook via XlsxWriter based on data generated from code in the earlier portions of the script. I'm attempting to have my script also create a chart, once again using XlsxWriter, based on this data. I've seen several examples available online and, while useful, there is one specific difference between the examples and my own personal code that make me unsure as to how I could proceed. </p> <p>My issue arises when trying to add a series to a chart. When configuring a series, the few examples that I've come across include something like this (note: I'm not actually using this exact code):</p> <pre><code>chart.add_series({ 'name': '=Sheet1!$A$2:$A$7 ' }) </code></pre> <p>Because of the nature of my script which involves looping and variable lengths, the data generated could be populated throughout various columns and rows, so I can't assign something like 'name' a fixed reference like 'Sheet1!$A$2:$A$7' because although all the data generated from my script will exist on the same sheet, it will never consistently be only in column 'A' and only between cells 2 through 7. </p> <p>So, how can I get around this? Again, due to variable items and such, the way I've told XlsxWriter to populate cells is by creating variables <code>row_1 = 0</code> and <code>col_1 = 0</code> and incrementing them as necessary. I am able to write something like </p> <pre><code>chart.add_series({ 'name': '=Sheet1!row:col+7' }) </code></pre> <p>through XlsxWriter? </p> <p><strong>EDIT</strong>: So I just found out I can use alternative indexing which appears to be my workaround to adding a series to the chart. However, I get the following error message: </p> <pre><code>UserWarning: Must specify 'values' in add_series() warn("Must specify 'values' in add_series()") </code></pre> <p>Based on this bit of code:</p> <pre><code>chart.add_series({ 'Subscribers': ['Sheet1', row_1 + 2, col_1, 2 + len(sb_subCount_list_clean), col_1] }) </code></pre> <p>Is this because I'm using variables as my indexes? sb_subCount_list_clean is a list that contains the data I'm using to create the chart. The column will be of this length plus 2 because I have some headers occupying the first two cells. </p>
1
2016-10-18T22:18:01Z
40,127,334
<p>In almost all parts of the XlsxWriter API, anywhere there is a cell reference like <code>A1</code> or a range like <code>=Sheet1!$A$1</code> you can use a tuple or a list of values. For charts you can use a list of values like this:</p> <pre><code># String interface. This is good for static ranges. chart.add_series({ 'name': '=Sheet1!$A$1', 'categories': '=Sheet1!$B$1:$B$5', 'values': '=Sheet1!$C$1:$C$5', }) # Or using a list of values instead of name/category/value formulas: # [sheetname, first_row, first_col, last_row, last_col] # This is better for generating programmatically. chart.add_series({ 'name': ['Sheet1', 0, 0 ], 'categories': ['Sheet1', 0, 1, 4, 1], 'values': ['Sheet1', 0, 2, 4, 2], }) </code></pre> <p>Refer to the <a href="https://xlsxwriter.readthedocs.io/chart.html#chart-class" rel="nofollow">docs</a>.</p> <p>As for the error you are getting (after the edit): <code>Subscribers</code> isn't an XlsxWriter parameter, you probably mean <code>values</code>:</p> <pre><code> chart.add_series({ 'values': ['Sheet1', row_1 + 2, col_1, 2 + len(sb_subCount_list_clean), col_1] }) </code></pre>
0
2016-10-19T09:26:00Z
[ "python", "excel", "graph", "charts", "xlsxwriter" ]
Python: Pandas, dealing with spaced column names
40,119,050
<p>If I have multiple text files that I need to parse that look like so, but can vary in terms of column names, and the length of the hashtags above: <img src="https://s4.postimg.org/8p69ptj9p/feafdfdfdfdf.png" alt="txt.file"></p> <p>How would I go about turning this into a pandas dataframe? I've tried using <code>pd.read_table('file.txt', delim_whitespace = True, skiprows = 14)</code>, but it has all sorts of problems. My issues are... </p> <p>All the text, asterisks, and pounds at the top needs to be ignored, but I can't just use skip rows because the size of all the junk up top can vary in length in another file. </p> <p>The columns "stat (+/-)" and "syst (+/-)" are seen as 4 columns because of the whitespace.</p> <p>The one pound sign is included in the column names, and I don't want that. I can't just assign the column names manually because they vary from text file to text file.</p> <p>Any help is much obliged, I'm just not really sure where to go from after I read the file using pandas.</p>
3
2016-10-18T22:22:16Z
40,121,644
<p>Consider reading in raw file, cleaning it line by line while writing to a new file using <code>csv</code> module. Regex is used to identify column headers using the <em>i</em> as match criteria. Below assumes more than one space separates columns:</p> <pre><code>import os import csv, re import pandas as pd rawfile = "path/To/RawText.txt" tempfile = "path/To/TempText.txt" with open(tempfile, 'w', newline='') as output_file: writer = csv.writer(output_file) with open(rawfile, 'r') as data_file: for line in data_file: if re.match('^.*i', line): # KEEP COLUMN HEADER ROW line = line.replace('\n', '') row = line.split(" ") writer.writerow(row) elif line.startswith('#') == False: # REMOVE HASHTAG LINES line = line.replace('\n', '') row = line.split(" ") writer.writerow(row) df = pd.read_csv(tempfile) # IMPORT TEMP FILE df.columns = [c.replace('# ', '') for c in df.columns] # REMOVE '#' IN COL NAMES os.remove(tempfile) # DELETE TEMP FILE </code></pre>
2
2016-10-19T03:30:51Z
[ "python", "python-3.x", "pandas", "text" ]
Python: Pandas, dealing with spaced column names
40,119,050
<p>If I have multiple text files that I need to parse that look like so, but can vary in terms of column names, and the length of the hashtags above: <img src="https://s4.postimg.org/8p69ptj9p/feafdfdfdfdf.png" alt="txt.file"></p> <p>How would I go about turning this into a pandas dataframe? I've tried using <code>pd.read_table('file.txt', delim_whitespace = True, skiprows = 14)</code>, but it has all sorts of problems. My issues are... </p> <p>All the text, asterisks, and pounds at the top needs to be ignored, but I can't just use skip rows because the size of all the junk up top can vary in length in another file. </p> <p>The columns "stat (+/-)" and "syst (+/-)" are seen as 4 columns because of the whitespace.</p> <p>The one pound sign is included in the column names, and I don't want that. I can't just assign the column names manually because they vary from text file to text file.</p> <p>Any help is much obliged, I'm just not really sure where to go from after I read the file using pandas.</p>
3
2016-10-18T22:22:16Z
40,137,973
<p>This is the way I'm mentioning in the comment: it uses a file object to skip the custom dirty data you need to skip at the beginning. You land the file offset at the appropriate location in the file where <code>read_fwf</code> simply does the job:</p> <pre><code>with open(rawfile, 'r') as data_file: while(data_file.read(1)=='#'): last_pound_pos = data_file.tell() data_file.readline() data_file.seek(last_pound_pos) df = pd.read_fwf(data_file) df Out[88]: i mult stat (+/-) syst (+/-) Q2 x x.1 Php 0 0 0.322541 0.018731 0.026681 1.250269 0.037525 0.148981 0.104192 1 1 0.667686 0.023593 0.033163 1.250269 0.037525 0.150414 0.211203 2 2 0.766044 0.022712 0.037836 1.250269 0.037525 0.149641 0.316589 3 3 0.668402 0.024219 0.031938 1.250269 0.037525 0.148027 0.415451 4 4 0.423496 0.020548 0.018001 1.250269 0.037525 0.154227 0.557743 5 5 0.237175 0.023561 0.007481 1.250269 0.037525 0.159904 0.750544 </code></pre>
1
2016-10-19T17:31:53Z
[ "python", "python-3.x", "pandas", "text" ]
Python: how to stall execution of function until two threads complete
40,119,179
<p>I'm new enough to python programming and starting to dabble with concurrency for the first time, so please forgive any misused terms.</p> <p>My programme starts two threads which each call a function "lightshow()", but rather than the programme stalling its execution until both threads have completed, it moves on to the next line in the code.</p> <p>Is it possible to "pause" the programme until both threads have completed?</p> <p>Below is my code snippet:</p> <pre><code>import time import threading from threading import Thread # Handshake pulse on GPIO2 of 4 50ms highs def setup(): for pulse in range(5): hs.on() sleep(0.05) hs.off() sleep(0.05) print('handshake complete') def lightshow(sequence, relay): t_end = time.time() + 60*1 while time.time() &lt; t_end: print('starting timer') # iterate over the dictionary's keys and values for key, value in sequence.items(): relay.on() sleep(key) relay.off() sleep(value) setup() #send handshake to board to prime it. t1 = threading.Thread(target = lightshow, args=(flickerRGB, relay1)) t2 = threading.Thread(target = lightshow, args=(flickerWhite, relay2)) t1.start() t2.start() #send handshake to Relay board to reset it setup() </code></pre> <p>Basically the programme switches two relays simultaneously to switch on lights with different on/off patterns.</p> <p>If there's a better way other than threading please let me know.</p> <p>Many thanks,</p> <p>Paul </p>
0
2016-10-18T22:33:20Z
40,119,203
<pre><code>#send handshake to Relay board to reset it
t1.join() #block until each lightshow thread exits
t2.join()
setup()
</code></pre> <p>But since it's just a hardcoded timeout, why not just:</p> <pre><code>time.sleep(61)
</code></pre>
1
2016-10-18T22:35:19Z
[ "python", "multithreading", "python-multithreading" ]
Python - Quadratic Equation PLS respond
40,119,192
<p>We have to make a program that solves a quadratic equation. I did that part, but we also have to include a section where the user inputs what they think the correct answer is, and if they are right the program should output something like "You are correct". If they are wrong however, it should output something like "You are wrong" and then underneath it should output the correct answers. This is what I have so far. Can someone please incorporate this part for me? My code is below. </p> <pre><code>print "This program can be used to solve quadratic equations" print "Below, please input values for a, b, and c" print "\n----------------------------\n" import math for i in range(39479): a = float(raw_input("Enter a value for a: ")) b = float(raw_input("Enter a value for b: ")) c = float(raw_input("Enter a value for c: ")) if a==0: print "Please input a value greater than or less than 0 for a" else: break print "\n----------------------------\n" discriminant = (b**2)-(4*(a*c)) if discriminant &lt; 0: print ("This equation has no real solution") elif discriminant == 0: repeated_solution = (-b-math.sqrt(b**2-4*a*c))/2*a print ("This equation has one, repeated solution: "), repeated_solution else: root_1 = (-b+math.sqrt(discriminant))/(2*a) root_2 = (-b-math.sqrt(discriminant))/(2*a) print "This equation has two solutions:", root_1, " and/or", root_2 </code></pre>
0
2016-10-18T22:34:24Z
40,119,861
<p>I understood that you are having an issue with the comparison part. You can add a really basic validation, something like this:</p> <pre><code>d = float(raw_input("What is your answer: "))
if d * d == root_1 * root_1 or d * d == repeated_solution * repeated_solution:
    print('correct')
else:
    print('false')
</code></pre> <p>Mathematically you compare the square of the answer with the square of the computed result, so both cases use the same mechanic and you avoid issues with the sign. And delete the part where you print the results...</p>
0
2016-10-18T23:50:08Z
[ "python", "python-2.7", "python-3.x" ]
Python - Quadratic Equation PLS respond
40,119,192
<p>We have to make a program that solves a quadratic equation. I did that part, but we also have to include a section where the user inputs what they think the correct answer is, and if they are right the program should output something like "You are correct". If they are wrong however, it should output something like "You are wrong" and then underneath it should output the correct answers. This is what I have so far. Can someone please incorporate this part for me? My code is below. </p> <pre><code>print "This program can be used to solve quadratic equations" print "Below, please input values for a, b, and c" print "\n----------------------------\n" import math for i in range(39479): a = float(raw_input("Enter a value for a: ")) b = float(raw_input("Enter a value for b: ")) c = float(raw_input("Enter a value for c: ")) if a==0: print "Please input a value greater than or less than 0 for a" else: break print "\n----------------------------\n" discriminant = (b**2)-(4*(a*c)) if discriminant &lt; 0: print ("This equation has no real solution") elif discriminant == 0: repeated_solution = (-b-math.sqrt(b**2-4*a*c))/2*a print ("This equation has one, repeated solution: "), repeated_solution else: root_1 = (-b+math.sqrt(discriminant))/(2*a) root_2 = (-b-math.sqrt(discriminant))/(2*a) print "This equation has two solutions:", root_1, " and/or", root_2 </code></pre>
0
2016-10-18T22:34:24Z
40,120,037
<pre><code>print "This program can be used to solve quadratic equations" print "Below, please input values for a, b, and c" print "\n----------------------------\n" import math for i in range(39479): a = float(raw_input("Enter a value for a: ")) b = float(raw_input("Enter a value for b: ")) c = float(raw_input("Enter a value for c: ")) if a==0: print "Please input a value greater than or less than 0 for a" else: break print "\n----------------------------\n" discriminant = (b**2)-(4*(a*c)) if discriminant &lt; 0: print ("This equation has no real solution") elif discriminant == 0: repeated_solution = format((-b-math.sqrt(b**2-4*a*c))/2*a,'.3f') guess=format(float(raw_input("This equation has one, repeated solution, Make a guess:")),'.3f') if guess==repeated_solution: print "You are correct.WOW!" print ("This equation has one, repeated solution: "), repeated_solution else: print "Wrong Answer" else: root_1 = format((-b+math.sqrt(discriminant))/(2*a),'.3f') root_2 = format((-b-math.sqrt(discriminant))/(2*a),'.3f') guess1=format(float(raw_input("This equation has two solutions, Make a guess for solution1:")),'.3f') guess2=format(float(raw_input("This equation has two solutions, Make a guess for solution2:")),'.3f') if set([root_1,root_2]).issubset( [guess1,guess2]): print "You are correct.WOW!" print "This equation has two solutions:", root_1, " and/or", root_2 else: print "Wrong Answer" </code></pre>
0
2016-10-19T00:12:39Z
[ "python", "python-2.7", "python-3.x" ]
Python: edit child methods for objects contained in child object
40,119,219
<p>I apologize for the title...couldn't think of anything else. I have 2 classes as follows:</p> <pre><code>class Widget: def __init__(self): ... def remove(self): widgets_to_remove.add(self) def update(self, events, mouse_pos): pass def draw(self, screen): pass class Widget_bundle(Widget): def __init__(self, widget_group): Widget.__init__(self) self.widget_group = widget_group # list containing objects inheriting from Widget self.call_for_all("remove") def call_for_all(self, func, *args): for w in self.widget_group: getattr(w, func)(*args) </code></pre> <p>The code works, but I would like it if there was a way to call a method defined by a <code>Widget</code> object on a <code>Widget_bundle</code> object and have that method called by all objects in <code>widget_group</code>. The obvious solution is to make a method for EVERY SINGLE POSSIBLE METHOD and use a for loop to iterate over the objects, or use my <code>call_for_all</code> method, which requires the function as a string and complicates other parts of the code I didn't include. Is there a third solution?</p>
0
2016-10-18T22:36:28Z
40,119,993
<p>It sounds like this is your situation:</p> <p>You have a class <code>Widget_bundle</code> that has a property <code>Widget_bundle.widget_group</code>. That property is a list of classes <code>Widget</code>.</p> <p>You want to make a call, for example, <code>Widget_bundle.remove()</code> and translate that to a call <code>Widget.remove()</code> for each <code>Widget</code> in <code>Widget_bundle.widget_group</code>.</p> <p>One way to approach this would be to customise the <code>Widget_bundle.__getattr__</code> method.</p> <pre><code>class Widget_bundle(Widget): def __init__(self, widget_group): Widget.__init__(self) self.widget_group = widget_group def __getattr__(self, attr): def _bundle_helper(*args, **kwargs): for widget in self.widget_group: getattr(widget, attr)(*args, **kwargs) return _bundle_helper </code></pre> <p>Or using <code>functools</code>:</p> <pre><code>import functools class Widget_bundle(Widget): def __init__(self, widget_group): Widget.__init__(self) self.widget_group = widget_group def __getattr__(self, attr): return functools.partial(self._bundle_helper, attr) def _bundle_helper(self, attr, *args, **kwargs): for widget in self.widget_group: getattr(widget, attr)(*args, **kwargs) </code></pre> <p>Now you can call <code>Widget_bundle.remove()</code>.</p> <p>I referred to <a href="http://stackoverflow.com/questions/3434938/python-allowing-methods-not-specifically-defined-to-be-called-ala-getattr">this StackOverflow question</a>.</p>
0
2016-10-19T00:04:43Z
[ "python", "inheritance", "methods" ]
Python regex issue. Validation works but in two parts, I want to extract each valid 'part' separately
40,119,370
<p>My code is:</p> <pre><code>test1 = flight ###Referencelink: http://academe.co.uk/2014/01/validating-flight-codes/ #Do not mess up trailing strings p = re.compile(r'^([a-z][a-z]|[a-z][0-9]|[0-9][a-z])[a-z]?[0-9]{1,4}[a-z]?$') m = p.search(test1) # p.match() to find from start of string only if m: print '[200],[good date and time]' # group(1...n) for capture groups else: print('[error],[bad flight number]'),quit() </code></pre> <p>I need to get the carrier code (the first bit) and the flight number (the second bit) separately. </p> <p>Can I extract the two parts from the regex, as in: a = 'first valid part' of the regex, b = 'second valid part'?</p>
0
2016-10-18T22:52:57Z
40,119,739
<p>Try this maybe: put the rest of the pattern into a second capture group, so each match gives you the carrier code and the flight number separately.</p> <pre><code>p = re.compile(r'^([a-z][a-z]|[a-z][0-9]|[0-9][a-z])([a-z]?[0-9]{1,4}[a-z]?)$') m = p.findall(test1) </code></pre>
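<p>With two capture groups, <code>findall</code> returns a list of tuples that you can unpack. A quick sketch (the flight code here is just a made-up example value, assuming lowercase input as in your pattern):</p> <pre><code>import re

test1 = 'ba1234'  # hypothetical example flight code
p = re.compile(r'^([a-z][a-z]|[a-z][0-9]|[0-9][a-z])([a-z]?[0-9]{1,4}[a-z]?)$')
m = p.findall(test1)
if m:
    carrier, number = m[0]
    print carrier  # 'ba'
    print number   # '1234'
</code></pre>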
0
2016-10-18T23:34:26Z
[ "python", "regex", "api", "extract" ]
Why does additional memory allocation make a multithreaded Python application work a few times faster?
40,119,420
<p>I'm writing python module which one of the functions is to check multiple IP addresses if they're active and write this information to database. As those are I/O bound operations I decided to work on multiple threads:</p> <ul> <li>20 threads for pinging host and checking if it's active (function <code>check_if_active</code>)</li> <li>5 threads for updating data in database (function <code>check_if_active_callback</code>)</li> </ul> <p>Program works as followed:</p> <ol> <li>Main thread takes IPs from database and puts them to queue <code>pending_ip</code></li> <li>One of 20 threads takes record from <code>pending_ip</code> queue, pings host and puts answer to <code>done_ip</code> queue</li> <li>One of 5 threads takes record from <code>done_ip</code> queue and does update in database if needed</li> </ol> <p>What I've observed (during timing tests to get answer how many threads would suit the best in my situation) is that program works aprox. 7-8 times faster if in 5 loops I first declare and start 20+5 threads, delete those objects and then in 6th loop run the program, than if I run program without those additional 5 loops.</p> <p>I suppose this could be somehow related to memory management. Not really sure though if deleting objects makes any sense in python. My questions are:</p> <ul> <li>why is that happening?</li> <li>how I can achieve this time boost without additional code (and additional memory allocation)?</li> </ul> <p>Code:</p> <pre><code>import time, os from threading import Thread from Queue import Queue from readconfig import read_db_config from mysql.connector import Error, MySQLConnection pending_ip = Queue() done_ip = Queue() class Database: connection = MySQLConnection() def connect(self): db_config = read_db_config("mysql") try: self.connection = MySQLConnection(**db_config) except Error as e: print(e) def close_connection(self): if self.connection.is_connected() is True: self.connection.close() def query(self, sqlquery): if self.connection.is_connected() is False: self.connect() try: cursor = self.connection.cursor() cursor.execute(sqlquery) rows = cursor.fetchall() except Error as e: print(e) finally: cursor.close() return rows def update(self,sqlquery, var): if self.connection.is_connected() is False: self.connect() try: cursor = self.connection.cursor() cursor.execute(sqlquery, var) self.connection.commit() except Error as e: self.connection.rollback() print(e) finally: cursor.close() db=Database() def check_if_active(q): while True: host = q.get() response = os.system("ping -c 1 -W 2 %s &gt; /dev/null 2&gt;&amp;1" % (host)) if response == 0: ret = 1 else: ret = 0 done_ip.put((host, ret)) q.task_done() def check_if_active_callback(q, db2): while True: record = q.get() sql = "select active from table where address_ip='%s'" % record[0] rowIP = db2.query(sql) if(rowIP[0][0] != record[1]): sqlupdq = "update table set active=%s where address_ip=%s" updv = (record[1], record[0]) db2.update(sqlupdq, updv) q.task_done() def calculator(): #some irrelevant code rows = db.query("select ip_address from table limit 1000") for row in rows: pending_ip.put(row[0]) #some irrelevant code if __name__ == '__main__': num_threads_pinger = 20 num_threads_pinger_callback = 5 db = Database() for i in range(6): db_pinger_callback =[] worker_p = [] worker_cb = [] #additional memory allocation here in 5 loops for 20 threads for z in range(num_threads_pinger): worker = Thread(target=check_if_active, args=(pending_ip)) worker.setDaemon(True) worker.start() worker_p.append(worker) #additional 
memory allocation here in 5 loops for 5 threads for z in range(num_threads_pinger_callback): db_pinger_callback.append(Database()) worker = Thread(target=check_if_active_callback, args=(done_ip, db_pinger_callback[z])) worker.setDaemon(True) worker.start() worker_cb.append(worker) if i == 5: start_time = time.time() calculator() pending_ip.join() done_ip.join() print "%s sec" % (time.time() - start_time) #freeing (?) that additional memory for z in range(num_threads_pinger - 1, 0, -1): del worker_p[z] #freeing (?) that additional memory for z in range(num_threads_pinger_callback - 1, 0, -1): db_pinger_callback[z].close_connection() del db_pinger_callback[z] del worker_cb[z] db.close_connection() </code></pre>
0
2016-10-18T22:59:22Z
40,119,583
<p>In order to give you an exact explanation it would help to know which version of Python you're using. For instance, if you're using PyPy, then what you've observed is the JIT kicking in after you call your loop 5 times, so it effectively returns a pre-calculated answer. If you're using a standard version of Python, then this speed-up is due to the interpreter using the compiled byte code from the .pyc files. How it works is basically this: Python will first create an in-memory representation of your code and run it from there. During repeated calls the interpreter will convert some of the more often used code into byte code and store it on disk in .pyc files (this is Python byte code, similar to Java byte code, not to be confused with native machine code). Every time you call the same function again, the interpreter will go to your .pyc files and execute the corresponding byte code. This makes the execution much faster, as the code you're running is precompiled, compared to the first call where Python still has to parse and interpret your code.</p>
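<p>As a quick sanity check for the first point above, you can print which interpreter and version you are actually running; this only uses the standard library:</p> <pre><code>import platform
import sys

print platform.python_implementation()  # e.g. 'CPython' or 'PyPy'
print sys.version
</code></pre>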
-1
2016-10-18T23:19:42Z
[ "python", "multithreading" ]
Find cells with data and use as index in dataframe
40,119,486
<p>I'm reading an excel file, but for this question's purposes I will provide an example of what my dataframe looks like. I have a <code>dataframe</code> like so:</p> <pre><code>df = pd.DataFrame([ ['Texas 1', '111', '222', '333'], ['Texas 1', '444', '555', '666'], ['Texas 2', '777','888','999'] ]) df[2] = df[2].replace('222', '') 0 1 2 3 a Texas 1 111 333 b Texas 1 444 555 666 c Texas 2 777 888 999 </code></pre> <p>And I want to be able to define a multiindex based on the values of the first row that are not blank. So something like this:</p> <pre><code> 0 1 3 Texas 1 111 333 444 555 666 Texas 2 111 333 777 888 999 </code></pre> <p>The problem is that the values in row <code>a</code> will not always be in the same column, so I need a way to find which columns have a value in the first row and use that column number as an index. So far, I read my excel file like so:</p> <pre><code>df1 = pd.read_excel('excel.XLS', index_col=[1,11,24,37]) </code></pre> <p>And I've been looking for a way to read the cells that are not <code>NaN</code> and are in <code>row a</code>, find their column numbers, store those in a list, and use that for my <code>index_col=()</code>. But I can't figure out how. Any pointers in the right direction would be awesome! </p>
3
2016-10-18T23:08:51Z
40,122,055
<p>First of all, you say "where is not NaN", but you <code>replace</code> with <code>''</code>.<br> I'll replace <code>''</code> with <code>np.nan</code> and then <code>dropna</code>:</p> <pre><code>df.iloc[0].replace('', np.nan).dropna().index Int64Index([0, 1, 3], dtype='int64') </code></pre> <hr> <pre><code>df[df.iloc[0].replace('', np.nan).dropna().index] </code></pre> <p><a href="https://i.stack.imgur.com/1arU5.png" rel="nofollow"><img src="https://i.stack.imgur.com/1arU5.png" alt="enter image description here"></a></p>
2
2016-10-19T04:18:56Z
[ "python", "pandas" ]
php - What is quicker for search and replace in a csv file? In a string or in an array?
40,119,499
<p>I am dealing with csv files that usually have between 2 million to 5 million rows. I have (for example) 3000 specific values that need to be replaced by 3000 different values. I have two arrays of 3000 items called $search and $replace. Note: The search and replace phrases are complete values (e.g. ...,search,... -> ...,replace,...). Also, I'll eventually be importing this into a mysql database.</p> <p>Which would be the most efficient/quickest way to accomplish this?</p> <ol> <li><p>Load the entire contents of the csv file into a string and run str_replace using the arrays and the string</p></li> <li><p>Load the csv file into arrays and use array_search() to replace the values</p></li> <li><p>Load the csv file into a mysql database and then search and replace using queries</p></li> <li><p>Use python instead</p></li> <li><p>Other</p></li> </ol> <p>I know I could setup some tests and compare their runtimes, but I'm more looking to understand why one is better than the other, or the mechanism by which they search (ex: O(n), binary search, etc.?)</p>
1
2016-10-18T23:10:27Z
40,136,692
<p>If your csv file is that big (&gt; 1 million rows), it might not be the best idea to load it all at once unless memory usage is of no concern to you.</p> <p>Therefore, I'd recommend running the replace line by line. Here's a very basic example:</p> <pre><code>$input = fopen($inputFile, 'r');
$output = fopen($outputFile, 'w');

while (!feof($input)) {
    // read one line, replace in it, then write it out
    $line = fgets($input);
    $parsed = str_replace($search, $replace, $line);
    fputs($output, $parsed);
}

fclose($input);
fclose($output);
</code></pre> <p>This should be fast enough, and it allows you to easily track progress as well. If you would ever like to replace only a specific column, you can use <code>fgetcsv</code> and <code>fputcsv</code> instead of <code>fgets</code> and <code>fputs</code>.</p> <p>I definitely wouldn't try to do this using mysql, as simply inserting this much data into a database will take a while.</p> <p>As for python, I'm not sure whether it can actually benefit the algorithm in any way.</p>
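<p>For reference, a rough sketch of that <code>fgetcsv</code>/<code>fputcsv</code> variant (the column index <code>2</code> is just a made-up example; point it at whichever column actually holds the values you want to replace):</p> <pre><code>$input = fopen($inputFile, 'r');
$output = fopen($outputFile, 'w');

while (($row = fgetcsv($input)) !== false) {
    // only touch one column instead of the whole line
    $row[2] = str_replace($search, $replace, $row[2]);
    fputcsv($output, $row);
}

fclose($input);
fclose($output);
</code></pre>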
0
2016-10-19T16:15:05Z
[ "php", "python", "mysql", "arrays", "csv" ]
Swap list / string around a character python
40,119,616
<p>I want to swap two parts of a list or string around a specified index, example:</p> <pre><code>([1, 2, 3, 4, 5], 2) </code></pre> <p>should return</p> <pre><code>[4, 5, 3, 1, 2] </code></pre> <p>I'm only supposed to have one line of code, it works for strings but I get</p> <p>can only concatenate list (not "int") to list</p> <p>when I try to use lists.</p> <pre><code>def swap(listOrString, index): return (listOrString[index + 1:] + listOrString[index] + listOrString[:index]) </code></pre>
3
2016-10-18T23:23:00Z
40,119,652
<p>It's because you took two slices and one indexing operation and tried to concatenate. slices return sub-lists, indexing returns a single element.</p> <p>Make the middle component a slice too, e.g. <code>listOrString[index:index+1]</code>, (even though it's only a one element slice) so it keeps the type of whatever is being sliced (becoming a one element sequence of that type:</p> <pre><code>return listOrString[index + 1:] + listOrString[index:index+1] + listOrString[:index] </code></pre>
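<p>A quick sanity check, using the function from the question with just that one change (nothing else assumed):</p> <pre><code>def swap(listOrString, index):
    return (listOrString[index + 1:] + listOrString[index:index + 1] + listOrString[:index])

print(swap([1, 2, 3, 4, 5], 2))  # [4, 5, 3, 1, 2]
print(swap('12345', 2))          # '45312'
</code></pre>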
5
2016-10-18T23:26:06Z
[ "python" ]
Detect Unicode empty value in a dictionary
40,119,634
<p>I have something like this in a list.</p> <pre><code>my_arr = [{'Brand_Name': u''}, {'Brand_Name':u''}, {'Brand_Name':u'randomstr1'}] </code></pre> <p>I want to be able to get all the values that are empty for brand name. In this case I'd get two empty values. I tried this:</p> <pre><code>for dictionary in my_arr: if None in dictionary: print dictionary </code></pre> <p>This doesn't work. How would I get the unicode empty value?</p>
1
2016-10-18T23:24:30Z
40,119,654
<p>You can just check for falsey values with <code>if not ...</code> (this will also include <code>None</code>, <code>[]</code>, <code>()</code>, etc)</p> <pre><code>&gt;&gt;&gt; brands = [ {'Brand_Name': u''}, {'Brand_Name':u''}, {'Brand_Name':u'randomstr1'}, ] &gt;&gt;&gt; for brand in brands: if not brand['Brand_Name']: print brand {'Brand_Name': u''} {'Brand_Name': u''} </code></pre>
1
2016-10-18T23:26:25Z
[ "python", "unicode", "syntax" ]
How to loop through a list and then swap digits for other instances of the loop to see?
40,119,661
<p>I have an XML Document with a structure like this:</p> <pre><code>&lt;?xml version="1.0" encoding="UTF-8"?&gt; &lt;urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:image="http://www.google.com/schemas/sitemap-image/1.1"&gt; &lt;url&gt; &lt;loc&gt;https://www.website.com/&lt;/loc&gt; &lt;changefreq&gt;daily&lt;/changefreq&gt; &lt;/url&gt; &lt;url&gt; &lt;loc&gt;https://www.website.com/location/&lt;/loc&gt; &lt;lastmod&gt;2016-10-13T06:03:41Z&lt;/lastmod&gt; &lt;changefreq&gt;daily&lt;/changefreq&gt; &lt;image:image&gt; &lt;image:loc&gt;https://website.com/image/&lt;/image:loc&gt; &lt;image:title&gt;Title of Item&lt;/image:title&gt; &lt;/image:image&gt; &lt;/url&gt; &lt;url&gt; &lt;loc&gt;https://www.website.com/location/&lt;/loc&gt; &lt;lastmod&gt;2016-09-15T07:11:22Z&lt;/lastmod&gt; &lt;changefreq&gt;daily&lt;/changefreq&gt; &lt;image:image&gt; &lt;image:loc&gt;https://website.com/image/&lt;/image:loc&gt; &lt;image:title&gt;Title of Item&lt;/image:title&gt; &lt;/image:image&gt; &lt;/url&gt; &lt;/urlset&gt; </code></pre> <p>I want to see which tag is the youngest using the tab. I have used this to get the date broken down to see if one year is newer than the next year... etc. But, it doesn't work because every time I iterate to a different node the for loop "forgets" and doesn't save which date is the newest which makes it return the date from the last loop iterated, not the newest date.</p> <p>I have tried everything based on variables, even thinking that getter and setter methods would work, but the values aren't updated.</p> <pre><code>tree = get_xml_data(line) to_log(tree) for child in tree: if child.tag.endswith("url"): for c in child: if c.tag.endswith("lastmod"): xml_date = c.text year = "" month = "" day = "" hour = "" minute = "" second = "" for i in range(4): year += str(xml_date[i]) for i in range(5, 7): month += str(xml_date[i]) for i in range(8, 10): day += str(xml_date[i]) for i in range(11, 13): hour += str(xml_date[i]) for i in range(14, 16): minute += str(xml_date[i]) for i in range(17, 19): second += str(xml_date[i]) if year &gt; nt.get_year(): nt.set_year(int(year)) if month &gt; nt.get_month(): nt.set_month(int(month)) if day &gt; nt.get_day(): nt.set_day(int(day)) if hour &gt; nt.get_hour(): nt.set_hour(int(hour)) if minute &gt; nt.get_minute(): nt.set_minute(int(minute)) if second &gt; nt.get_second(): nt.set_second(int(second)) to_log("Addition:", year, month, day, hour, minute, second) to_log("Newest addition:", nt.get_year(), nt.get_month(), nt.get_day()) to_log("Newest addition (cont.):", nt.get_hour(), nt.get_minute(), nt.get_second()) </code></pre> <p>Outputs (for example, the first addition should be the newest date):</p> <pre><code>2016-10-18 19:25:20.332031 Addition: 2016 10 05 06 21 05 2016-10-18 19:25:20.332083 Addition: 2016 07 30 01 27 21 2016-10-18 19:25:20.332134 Addition: 2016 09 19 17 48 45 2016-10-18 19:25:20.332186 Addition: 2016 09 19 17 48 52 2016-10-18 19:25:20.332235 Newest addition: 2016 9 19 2016-10-18 19:25:20.332268 Newest addition (cont.): 17 48 52 </code></pre>
0
2016-10-18T23:27:12Z
40,120,515
<p>This version remembers the newest addition date (and time):</p> <pre><code>import jdcal def julian(y, m, d, h, mi, s): return sum(jdcal.gcal2jd(y, m, d)) + (h-12.0)/24 + mi/1440.0 + s/86400.0 tree = get_xml_data(line) to_log(tree) julNewest = 0.0 # establish a comparison value for the newest addition for child in tree: if child.tag.endswith("url"): for c in child: if c.tag.endswith("lastmod"): xml_date = c.text year = float(xml_date[0:4]) month = float(xml_date[5:7]) day = float(xml_date[8:10]) hour = float(xml_date[11:13]) minute = float(xml_date[14:16]) second = float(xml_date[17:19]) julDay = julian(year, month, day, hour, minute, second) # calculate Julian day number of recent addition if julDay &gt; julNewest: nt.set_year(int(year)) nt.set_month(int(month)) nt.set_day(int(day)) nt.set_hour(int(hour)) nt.set_minute(int(minute)) nt.set_second(int(second)) julNewest = julDay to_log("Addition:", year, month, day, hour, minute, second) to_log("Newest addition:", nt.get_year(), nt.get_month(), nt.get_day()) to_log("Newest addition (cont.):", nt.get_hour(), nt.get_minute(), nt.get_second()) </code></pre> <p>You first have to import the module jdcal (if not installed, install it with "pip install jdcal"). The function that is defined then allows you to represent any date as a unique (float) number. It is much easier to compare these single numbers to other date-converted numbers to see which one is higher (more recent, newer).</p> <p>Hope this helps.</p> <p>Regards,</p>
0
2016-10-19T01:14:21Z
[ "python", "xml", "list", "loops", "compare" ]
Django model formset factory and forms
40,119,792
<p>I'm trying to use Django model formset factory to render a template where a user can add images and change the images they have uploaded (very similar to what can be done in the admin). I can currently render the template and its correct fields. What I cannot do is have the user preselected (I want the currently logged-in user), and when I refresh the page the image is posted again (not sure if this is preventable). Below is my code. Thanks!</p> <p>Model:</p> <pre><code>class Image(models.Model): user = models.ForeignKey(User) image = models.ImageField(upload_to=content_file_name, null=True, blank=True) link = models.CharField(max_length=256, blank=True) </code></pre> <p>Form:</p> <pre><code>class ImageForm(forms.ModelForm): image = forms.ImageField(label='Image') class Meta: model = Image fields = ('image', 'link', ) </code></pre> <p>View:</p> <pre><code>@login_required def register(request): user_data = User.objects.get(id=request.user.id) ImageFormSet = modelformset_factory(Image, fields=('user', 'image', 'link'), extra=3) if request.method == 'POST': print '1' formset = ImageFormSet(request.POST, request.FILES, queryset=Image.objects.all()) if formset.is_valid(): formset.user = request.user formset.save() return render(request, 'portal/register.html', {'formset': formset, 'user_data': user_data}) else: print '2' formset = ImageFormSet(queryset=Image.objects.all()) return render(request, 'portal/register.html', {'formset': formset, 'user_data': user_data}) </code></pre> <p>Template:</p> <pre><code>&lt;form id="" method="post" action="" enctype="multipart/form-data"&gt; {% csrf_token %} {{ formset.management_form }} {% for form in formset %} {{ form }} {% endfor %} &lt;input type="submit" name="submit" value="Submit" /&gt; </code></pre> <p></p>
0
2016-10-18T23:41:03Z
40,125,395
<p>Let me explain the way you can do it.</p> <h1>MODELS</h1> <pre><code>from django.utils.text import slugify from django.db import models from custom_user.models import AbstractEmailUser # User model class UserModel(AbstractEmailUser): full_name = models.CharField(max_length=255) def __str__(self): return str(self.id) # Function for getting images from instance of user def get_image_filename(instance, filename): title = instance.id slug = slugify(title) return "user_images/%s-%s" % (slug, filename) # Save images with user instance class UserImages(models.Model): user = models.ForeignKey('UserModel', db_index=True, default=None) image = models.ImageField(upload_to=get_image_filename, verbose_name='Image', db_index=True, blank=True, null=True) </code></pre> <p>In forms it's just two forms, one for the User model, the other for the UserImages model.</p> <pre><code># Images forms class ImageForm(forms.ModelForm): image = forms.ImageField(label='Image', required=False) class Meta: model = UserImages fields = ('image',) # User form class UserForm(forms.ModelForm): full_name = forms.CharField(required=True) class Meta: model = UserModel fields = ('full_name','email','password',) </code></pre> <p>And in views, for the POST handling, you can do something like this:</p> <pre><code># View from models import * from forms import * @csrf_protect def post_view(request): template = 'some_template.html' ImageFormSet = modelformset_factory(UserImages, form=ImageForm, extra=15) if request.method == 'POST': user_form = UserForm(request.POST, prefix='form1') formset = ImageFormSet(request.POST, request.FILES, queryset=UserImages.objects.none(), prefix='form2') if user_form.is_valid() and formset.is_valid(): # Save User form, and get user ID a = user_form.save(commit=False) a.save() images = formset.save(commit=False) for image in images: image.user = a image.save() return HttpResponseRedirect('/success/') else: user_form = UserForm(prefix='form1') formset = ImageFormSet(queryset=UserImages.objects.none(), prefix='form2') return render(request, template, {'form_user':user_form,'formset':formset}) </code></pre> <p>In the template you are doing the right thing.</p>
0
2016-10-19T07:56:57Z
[ "python", "django", "django-forms" ]
Python 3.x : using a list, if a letter isn't in that list, add it. otherwise, increment the value by 1
40,119,855
<p>so this is what I'm doing: I'm pulling up a file and having the program read it. Every time it encounters a letter, it'll add the letter to list1 and add '1' to list2. Every time it encounters a letter in list1, it'll increment list2 by 1. </p> <pre><code>txt = open("Nameoffile.txt") wordcount = 0 Charcount = 0 letterlist = [] #list 1 lettercount = [] #list 2 for words in txt: print(words) for letters in words: if letters not in letterlist: letterlist.append(letters) lettercount[letters] = 1 else: lettercount[letters] += 1 Charcount += 1 print(letters) if letters == ' ': wordcount += 1 if letters == '.': wordcount += 1 if letters == '\n': Charcount -= 1 wordcount += 1 #down here it would print the results </code></pre> <p>the problem I'm running into is that when running this, I get the following error: line 14, lettercount[letters] = 1 TypeError: list indices must be integers or slices, not str</p> <p>I assumed I could get away with stating that at list[letter] set that value to a number, but it isn't liking it. any possible hints on what to do?</p>
-1
2016-10-18T23:49:43Z
40,119,936
<p>Lists work with integer indexes; you can use a dictionary instead:</p> <pre><code>lettercount = {} #list 2 </code></pre> <p>Dictionaries store key/value pairs, so you can use non-numeric keys to access values. Their use is similar to lists, so you can still use:</p> <pre><code>lettercount[letters] = 1 </code></pre> <p>to add or update a key in the dictionary. However, you don't index them by position like lists; you iterate over their keys (or use the <code>items</code> method). To print the results you can iterate over the keys and display the count:</p> <pre><code>for e in lettercount.keys(): print (e, str(lettercount[e])) </code></pre>
0
2016-10-18T23:58:37Z
[ "python", "list", "python-3.x", "append", "increment" ]
Python 3.x : using a list, if a letter isn't in that list, add it. otherwise, increment the value by 1
40,119,855
<p>so this is what I'm doing: I'm pulling up a file and having the program read it. Every time it encounters a letter, it'll add the letter to list1 and add '1' to list2. Every time it encounters a letter in list1, it'll increment list2 by 1. </p> <pre><code>txt = open("Nameoffile.txt") wordcount = 0 Charcount = 0 letterlist = [] #list 1 lettercount = [] #list 2 for words in txt: print(words) for letters in words: if letters not in letterlist: letterlist.append(letters) lettercount[letters] = 1 else: lettercount[letters] += 1 Charcount += 1 print(letters) if letters == ' ': wordcount += 1 if letters == '.': wordcount += 1 if letters == '\n': Charcount -= 1 wordcount += 1 #down here it would print the results </code></pre> <p>the problem I'm running into is that when running this, I get the following error: line 14, lettercount[letters] = 1 TypeError: list indices must be integers or slices, not str</p> <p>I assumed I could get away with stating that at list[letter] set that value to a number, but it isn't liking it. any possible hints on what to do?</p>
-1
2016-10-18T23:49:43Z
40,119,953
<p>lettercount should be type dict, not list. Type dict maps a unique key to a value, while list just contains values. The value within brackets for a list should be an integer referring to a position in the list, while a dictionary will reference the key in brackets. </p>
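<p>To make that concrete, a minimal sketch of the counting part of the original loop with <code>lettercount</code> as a dict (everything else from the question stays the same):</p> <pre><code>lettercount = {}  # dict keyed by character instead of a list

for words in txt:
    for letters in words:
        if letters not in lettercount:
            lettercount[letters] = 1   # first time we see this character
        else:
            lettercount[letters] += 1  # seen before, bump the count
</code></pre>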
-1
2016-10-19T00:00:05Z
[ "python", "list", "python-3.x", "append", "increment" ]
Python 3.x : using a list, if a letter isn't in that list, add it. otherwise, increment the value by 1
40,119,855
<p>so this is what I'm doing: I'm pulling up a file and having the program read it. Every time it encounters a letter, it'll add the letter to list1 and add '1' to list2. Every time it encounters a letter in list1, it'll increment list2 by 1. </p> <pre><code>txt = open("Nameoffile.txt") wordcount = 0 Charcount = 0 letterlist = [] #list 1 lettercount = [] #list 2 for words in txt: print(words) for letters in words: if letters not in letterlist: letterlist.append(letters) lettercount[letters] = 1 else: lettercount[letters] += 1 Charcount += 1 print(letters) if letters == ' ': wordcount += 1 if letters == '.': wordcount += 1 if letters == '\n': Charcount -= 1 wordcount += 1 #down here it would print the results </code></pre> <p>the problem I'm running into is that when running this, I get the following error: line 14, lettercount[letters] = 1 TypeError: list indices must be integers or slices, not str</p> <p>I assumed I could get away with stating that at list[letter] set that value to a number, but it isn't liking it. any possible hints on what to do?</p>
-1
2016-10-18T23:49:43Z
40,120,416
<p>That line of your function essentially tries to do something like this:</p> <pre><code>lettercount['a'] += 1 </code></pre> <p>which doesn't really make any sense. Lists are ordered collections and are only accessible via numerical index, which is why you get an error telling you that an integer is required (not a string). As the other answers mentioned, you really want to store the count for each letter in a dict. The Python standard library provides a <a href="https://docs.python.org/3/library/collections.html#collections.Counter" rel="nofollow"><code>Counter</code></a> dict subclass which is actually perfect for your needs - it'll count the characters for you and make it easy to remove duplicates:</p> <pre><code>import collections lettercount = collections.Counter(yourtext) letterlist = set(lettercount) charcount = len(list(c for c in lettercount.elements() if c != '\n')) wordcount = lettercount[' '] + lettercount['.'] + lettercount['\n'] </code></pre>
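<p>A short usage sketch tying this to the file from the question (the file name comes from the question; <code>yourtext</code> is simply the whole file read into one string):</p> <pre><code>import collections

with open("Nameoffile.txt") as f:
    yourtext = f.read()

lettercount = collections.Counter(yourtext)
print(lettercount.most_common(5))  # the five most frequent characters and their counts
</code></pre>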
0
2016-10-19T01:01:19Z
[ "python", "list", "python-3.x", "append", "increment" ]
Pythonic way to create list of address letter/numbers from an input address range like 1-12A
40,119,867
<p>Simple case: For a given string input like '1-12A', I'd like to output a list like</p> <pre><code>['1A', '2A', '3A', ... , '12A'] </code></pre> <p>That's easy enough, I could use something like the following code:</p> <pre><code>import re

input = '1-12A'
letter = re.findall(r"([A-Z])", input)[0]         # the letter
begin = int(input.split('-')[0].rstrip(letter))   # the first number
end = int(input.split('-')[-1].rstrip(letter))    # the last number
[str(x) + letter for x in range(begin, end + 1)]  # works only if letter is behind number
</code></pre> <p>But sometimes I'll have cases where the input is like 'B01-B12', and I'd like the output to be like this:</p> <pre><code>['B01', 'B02', 'B03', ... , 'B12'] </code></pre> <p>Now the challenge is, what's the most pythonic way to create a function that can build up such lists from either of the above two inputs? It might be a function that accepts the begin, end and letter inputs, but it has to account for <a href="http://stackoverflow.com/questions/134934/display-number-with-leading-zeros">leading zeros</a>, and the fact that the letter could be in front of or behind the number.</p>
1
2016-10-18T23:50:58Z
40,120,439
<p>I'm not sure if there's a more <em>pythonic</em> way of doing it, but using some regexes and python's <a href="https://docs.python.org/2/library/stdtypes.html#str.format" rel="nofollow"><code>format</code></a> syntax, we can fairly easily deal with your inputs. Here is a solution:</p> <pre><code>import re

def address_list(address_range):
    begin, end = address_range.split('-')
    Nb, Ne = re.findall(r"\d+", address_range)
    # we deduce the padding from the digits of begin
    padding = len(re.findall(r"\d+", begin)[0])
    # first we decide whether we should use begin or end as a template for the output
    # here we keep the first one matching something like ab01 or 01ab
    template_base = re.findall(r"[a-zA-Z]+\d+|\d+[a-zA-Z]+", address_range)[0]
    # we make a template by replacing its digits with some format syntax
    template = template_base.replace(re.findall(r"\d+", template_base)[0], "{{:0{:}}}".format(padding))
    #print("template : {} , example : {}".format(template, template.format(1)))
    return [template.format(x) for x in range(int(Nb), int(Ne) + 1)]

print(address_list('1-12A'))
print(address_list('B01-B12'))
print(address_list('C01-9'))
</code></pre> <p><strong>Output:</strong></p> <pre><code>['1A', '2A', '3A', '4A', '5A', '6A', '7A', '8A', '9A', '10A', '11A', '12A']
['B01', 'B02', 'B03', 'B04', 'B05', 'B06', 'B07', 'B08', 'B09', 'B10', 'B11', 'B12']
['C01', 'C02', 'C03', 'C04', 'C05', 'C06', 'C07', 'C08', 'C09']
</code></pre>
2
2016-10-19T01:03:38Z
[ "python", "list", "functional-programming", "list-comprehension", "leading-zero" ]