Dataset fields (string lengths): id (5-27 chars), question (19-69.9k chars), title (1-150 chars), tags (1-118 chars), accepted_answer (4-29.9k chars)
_softwareengineering.149691
I'm teaming up with a guy who has no programming experience at all. We're using a tool to make our game (RPG Maker) that has an event-based system which lets you do pretty much everything you want; it has a GUI and a simple text editor for events. I need him to understand basic concepts like control flow (if/else, do/while), variables/constants, etc. What can I use to teach him this, bearing in mind that I don't care about specific language syntax? Ideally, I'm looking for a programming book that covers these ideas (perhaps visually) and doesn't care much about code. Does something like this exist? My google-fu failed me.
Teaching Programming Concepts Without a Specific Language
language agnostic
One of the Learn X the Hard Way books might be a good way to go; Learn Ruby the Hard Way in particular is an interesting one. It gives tutorials your friend can follow to pick up some programming, using one of the interactive Ruby terminals to work through the exercises. He will also have learned a useful language when he is done.
_datascience.4955
My company provides managed services to a lot of its clients. Our customers typically use the following monitoring tools to monitor their servers/webapps:

- OpsView
- Nagios
- Pingdom
- Custom shell scripts

Whenever any issue is found, an alert mail goes to our Ops team so that they can act to rectify the issue. As we manage thousands of servers, the Ops team's inbox is flooded with email alerts all the time. Even a single issue with a cascading effect can trigger 20-30 emails.

Now, what I want to do is implement a system which will be able to extract important features out of an alert email - like server IP address, type of problem, severity of the problem, etc. - and also classify the emails into a proper category, like CPU-Load-Customer1-Server2, MySQL-Replication-Customer2-DBServer3, etc. We will then have a pre-defined set of debugging steps for each category, in order to help the Ops team rectify the problem faster. The feature extractor will also provide input data to the team for a problem.

So far I have been able to train a NaiveBayesClassifier with supervised learning techniques, i.e. labeled training data (cluster data), and I am able to classify new unseen emails into their proper cluster/category. As the emails are based on certain templates, the accuracy of the classifier is very high. But we also get alert emails from custom scripts, which may not follow the templates. So, instead of doing supervised learning, I want to try out unsupervised learning. I am looking into KMeans clustering, but the problem is that we won't know the number of clusters beforehand. So, which algorithm will be best for this use case? Right now I am using Python's TextBlob library for classification.

Also, for feature extraction out of an alert email, I am looking into the NLTK library (http://www.nltk.org/book/ch07.html). I tried it out, but it seems to work well on proper English paragraphs/texts; for alert emails, however, it extracted a lot of unnecessary features. Is there already an existing solution for this? If not, what would be the best way to implement it?
Which library, which algorithm?

PS: I am not a Data Scientist.

Sample emails:

PROBLEM: CRITICAL - Customer1_PROD - Customer1_PROD_SLAVE_DB_01 - CPU Load Avg Service: CPU Load Avg Host: Customer1_PROD_SLAVE_DB_01 Alias: Customer1_PROD_SLAVE_DB_01 Address: 10.10.0.100 Host Group Hierarchy: Opsview > Customer1 - BIG C > Customer1_PROD State: CRITICAL Date & Time: Sat Oct 4 07:02:06 UTC 2014 Additional Information: CRITICAL - load average: 41.46, 40.69, 37.91

RECOVERY: OK - Customer1_PROD - Customer1_PROD_SLAVE_DB_01 - CPU Load Avg Service: CPU Load Avg Host: Customer1_PROD_SLAVE_DB_01 Alias: Customer1_PROD_SLAVE_DB_01 Address: 10.1.1.100 Host Group Hierarchy: Opsview > Customer1 - BIG C > Customer1_PROD State: OK Date & Time: Sat Oct 4 07:52:05 UTC 2014 Additional Information: OK - load average: 0.36, 0.23, 4.83

PROBLEM: CRITICAL - Customer1_PROD - Customer1_PROD_SLAVE_DB_01 - CPU Load Avg Service: CPU Load Avg Host: Customer1_PROD_SLAVE_DB_01 Alias: Customer1_PROD_SLAVE_DB_01 Address: 10.100.10.10 Host Group Hierarchy: Opsview > Customer1 - BIG C > Customer1_PROD State: CRITICAL Date & Time: Sat Oct 4 09:29:05 UTC 2014 Additional Information: CRITICAL - load average: 29.59, 26.50, 18.49

Classifier code (format of the CSV: email,<disk/cpu/memory/mysql>):

from textblob import TextBlob
from textblob.classifiers import NaiveBayesClassifier
import csv

train = []

with open('cpu.txt', 'r') as csvfile:
    reader = csv.reader(csvfile, delimiter=',', quotechar='"')
    for row in reader:
        tup = unicode(row[0], "ISO-8859-1"), row[1]
        train.append(tup)

# this can be done in a loop, but for the time being let it be
with open('memory.txt', 'r') as csvfile:
    reader = csv.reader(csvfile, delimiter=',', quotechar='"')
    for row in reader:
        tup = unicode(row[0], "ISO-8859-1"), row[1]
        train.append(tup)

with open('disk.txt', 'r') as csvfile:
    reader = csv.reader(csvfile, delimiter=',', quotechar='"')
    for row in reader:
        tup = unicode(row[0], "ISO-8859-1"), row[1]
        train.append(tup)

with open('mysql.txt', 'r') as csvfile:
    reader = csv.reader(csvfile, delimiter=',', quotechar='"')
    for row in reader:
        tup = unicode(row[0], "ISO-8859-1"), row[1]
        train.append(tup)

cl = NaiveBayesClassifier(train)
cl.classify(email)

Feature extractor code taken from: https://gist.github.com/shlomibabluki/5539628

Please let me know if any more information is required here. Thanks in advance.
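For illustration, something like this regex-based extractor is roughly what I have in mind for the template-like fields (just a sketch based on the sample emails above, not an existing solution):

import re

# patterns are assumptions derived from the Opsview samples above
FIELDS = {
    'service': re.compile(r'Service:\s*(.+?)\s*Host:'),
    'host':    re.compile(r'Host:\s*(\S+)'),
    'address': re.compile(r'Address:\s*(\d{1,3}(?:\.\d{1,3}){3})'),
    'state':   re.compile(r'State:\s*(\w+)'),
}

def extract_features(alert_text):
    # returns a dict of field name -> extracted value (or None if not present)
    features = {}
    for name, pattern in FIELDS.items():
        match = pattern.search(alert_text)
        features[name] = match.group(1) if match else None
    return features

Something along these lines works for the templated alerts, but obviously not for the free-form custom-script emails, which is where the question above comes in.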
How to extract features and classify alert emails coming from monitoring tools into proper category?
machine learning;classification;clustering;feature extraction
null
_webmaster.27542
I'm having a weird issue with my domain. My domain is saoo.eu, hosted on HostZilla. The issue is that whenever I open an HTML/PHP file, the browser automatically downloads it instead of displaying it - for example the saoo.eu/test.html page. The same thing happens with the index.html file. What is going on? Also, if I want PHP code to run inside an HTML file, I have to add an .htaccess file, but that doesn't seem to work either. I tested it before.
Domain files download upon opening
php;html
The server is possibly missing the Apache PHP module. Does your browser ask if you want to download the PHP file instead of displaying it? If Apache is not actually parsing the PHP after you restarted it, install libapache2-mod-php5. It is installed when you install the php5 package, but may have been removed inadvertently by packages which need to run a different version of PHP. If sudo a2enmod php5 returns "This module does not exist!", you should purge (not just remove) the libapache2-mod-php5 package and reinstall it. https://help.ubuntu.com/community/ApacheMySQLPHP
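For example, on a Debian/Ubuntu-style server the whole repair could look like this (a sketch using the package and module names above; adjust for your PHP version):

sudo apt-get purge libapache2-mod-php5
sudo apt-get install libapache2-mod-php5
sudo a2enmod php5
sudo service apache2 restart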
_unix.324643
On Windows, x64 versions of the OS can run both x86 and x64 binaries. However, x86 Windows can only run x86 binaries. Even if the underlying CPU is 64-bit capable, it cannot run x64 binaries. Is the situation with x86 / x64 binary compatibility on Linux the same? Or is there more (or less) compatibility?
How does x86 / x64 binary compatibility work on Linux?
linux;binary;x86;compatibility
null
_codereview.97039
I don't like how transforming the data requires a bunch of nested functions. It's not very readable. Is there a more readable way to do the same transformations in Lodash?

var templates = {
  fullRenderV80: {
    stuff: "stuff",
    screenshots: [
      { device: "iPhone 6", position: 1 },
      { device: "Watch", position: 2 }
    ]
  },
  fullRenderV70: {
    stuff: "stuff",
    screenshots: [
      { device: "iPhone 6", position: 1 },
      { device: "iPad", position: 2 }
    ]
  }
}

var transformedData = _.groupBy(_.map(templates, 'screenshots'), function(screenshotsArray) {
  return _.reduce(screenshotsArray, function(templateDevicesName, screenshot) {
    return templateDevicesName.concat(screenshot.device)
  }, []).join(' + ')
})

// transformedData is this:
// {
//   "iPhone 6 + Watch": [
//     [ { device: "iPhone 6", position: 1 }, { device: "Watch", position: 2 } ]
//   ],
//   "iPhone 6 + iPad": [
//     [ { device: "iPhone 6", position: 1 }, { device: "iPad", position: 2 } ]
//   ]
// }
Data transformation using Lodash in a more readable way
javascript;lodash.js
null
_softwareengineering.295288
Form classes are intended (IMO) for validating submitted data against rules. For example: are the passwords equal, is the end date later than the start date.

submitted data ---> |Form|

Is it okay for Form classes to validate submitted data against data retrieved from a database as well? This seems like a responsibility issue: the Form class should be responsible for validating and cleaning the submission alone, and checking against the DB belongs outside the class.

submitted data ---> |Form| <--- data from DB

Here is my example in WTForms in Python: it checks if the user id is an integer, then queries the DB to verify that it exists.

class ClientForm(Form):
    name = StringField(validators=[DataRequired(message="Invalid value")])
    user = IntegerField(validators=[DataRequired(message="Invalid value")])
    zone = IntegerField(validators=[AnyOf([i[0] for i in ZONES], message="Invalid value")])

    # validate that the user is real
    def validate_user(form, field):
        user = db.session.query(User).filter(User.id == form.user.data).first()
        if not user:
            raise ValidationError('Invalid user')

The upside of adding this check to the form is convenience; WTForms handles all aspects of errors, even rendering them:

form = ClientForm(post_data)
if not form.validate():
    # render the form again with errors highlighted
    return render("client_registration_form.html", form=form)
else:
    return render("success.html")
Is it OK to use (WTF) forms to validate against stuff from DB?
database;python;class
null
_unix.65980
I've just upgraded my Debian Linux (Wheezy) to a 64-bit kernel, as well as user-mode binaries, in an attempt to make use of the 4GB of memory in the system without PAE. Exchanging the kernel and packages seems to have gone fine, but I'm not getting the expected result:

mymachine:~# dmesg | grep Memory
[    0.000000] Memory: 2007644k/2062784k available (3494k kernel code, 452k absent, 54688k reserved, 3042k data, 476k init)
mymachine:~# uname -m
x86_64

What could be causing this? I would like to expand the memory further, but if I can't even make use of the current 4GB, that is a bit useless :)

lshw output shows the memory is properly installed:

 *-memory
      description: System Memory
      physical id: 29
      slot: System board or motherboard
      size: 4GiB
    *-bank:0
         description: DIMM DDR Synchronous 1333 MHz (0.8 ns)
         product: PartNum0
         vendor: Manufacturer0
         physical id: 0
         serial: SerNum0
         slot: DIMM A1
         size: 2GiB
         width: 64 bits
         clock: 1333MHz (0.8ns)
    *-bank:1
         description: DIMM DDR Synchronous 1333 MHz (0.8 ns)
         product: PartNum1
         vendor: Manufacturer1
         physical id: 1
         serial: SerNum1
         slot: DIMM B1
         size: 2GiB
         width: 64 bits
         clock: 1333MHz (0.8ns)

The Memory Remap Feature is enabled in my BIOS.
Only 2GB of 4GB memory available on 64-bit Linux kernel
debian;64bit
Your motherboard apparently only supports 2GB or is buggy. See the bios e820 section of the kernel boot messages for exactly what memory your bios is telling the kernel it has.
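For example, the BIOS-provided memory map can usually be pulled out of the kernel boot messages like this (the exact wording of the log lines varies between kernel versions):

dmesg | grep -i e820

If the e820 map itself stops at roughly 2GB, the firmware is the culprit rather than the 64-bit kernel.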
_unix.374077
I have a large 16TB RAID file server that has a bad hard drive, and the array won't rebuild. I'm able to access the file system with a live CD, but I'm wondering what the best and quickest way is to transfer files to another Linux server. The server is running CentOS 7 and I'm connected to it via KVM. I have had the datacenter set up a new, identical server beside this one, and they are connected via their second Ethernet ports. I read about Samba, but I'm not sure if this is the best way. I've used rsync in the past for this type of process, but perhaps there's a better solution. Also, how can I direct the transfer process to use the second Ethernet port? This is my first time doing such a job. Thanks in advance.
Backing up files to new server over live CD
linux;centos
null
_scicomp.7390
In semiconductor simulation, it is common for the equations to be scaled so they have normalised values. For example, in extreme cases the electron density in semiconductors can vary over 18 orders of magnitude, and the electric field can change sharply, over 6 (or more) orders of magnitude. However, the papers never really give a reason for doing this. Personally I am happy dealing with equations in real units; is there any numerical advantage to doing this, or is it impossible otherwise? I thought that with double precision there would be enough digits to cope with these fluctuations. Both answers are very useful, thanks very much!
Is variable scaling essential when solving some PDE problems numerically?
pde;condition number;scaling
Solving a (linear) PDE consists of discretizing the equation to yield a linear system, which is then solved by a linear solver whose convergence (rate) depends on the condition number of the matrix. Scaling the variables often reduces this condition number, thus improving convergence. (This basically amounts to applying a diagonal preconditioner; see Nicholas Higham's Accuracy and Stability of Numerical Algorithms.)

Solving nonlinear PDEs in addition requires a method for solving nonlinear equations, such as Newton's method, where the scaling can also influence convergence.

Since normalizing everything usually takes very little effort, it is almost always a good idea.
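As a toy illustration of the diagonal-scaling point (a made-up 2x2 example, not tied to any particular PDE):

import numpy as np

# badly scaled system: the unknowns (and matrix entries) differ by many orders of magnitude
A = np.array([[1e12, 2.0],
              [3.0, 4e-6]])
print(np.linalg.cond(A))      # enormous condition number

# rescale the unknowns (columns) by their largest entries, i.e. solve for y with x = D y
D = np.diag(1.0 / np.abs(A).max(axis=0))
print(np.linalg.cond(A @ D))  # several orders of magnitude smaller

The rescaled system has the same solution once y is mapped back through D, but linear and Newton-type solvers behave much better on it.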
_webmaster.24769
Let's assume that someone types a long-tail keyword; how would you dynamically generate a page based on those keywords? Example: "summer light pink fluorescent lights" would generate a page with those keywords. Is there a way to know what keyword someone has searched for and which generated content was displayed? I can do this in AdWords, and I was wondering if this is possible with Google Search traffic.
Is it possible to dynamically display a page according to long tail Google search?
seo;google;traffic;content;dynamic
null
_softwareengineering.157656
I've only worked at one place since graduating with my CS degree. This is a pretty basic architecture question, but I don't know any better since I've only worked at one place. Where I work we maintain a large number of code tables. For example, say you have a Sales/Order system and you have an Order Status. We would maintain an order status table that would look like the following:

OrderStatus:
OrderStatusId (PK TinyInt)    OrderStatusDesc (VarChar(50))
1                             Created
2                             Submitted
3                             Processing
4                             Canceled
5                             Verified
6                             Complete

We then create an enumeration off each code table. For example:

Public Enum TblSalesOrderStatus
    eosCreated = 1
    eosSubmitted = 2
    eosProcess = 3
    eosCanceled = 4
    eosVerified = 5
    eosComplete = 6
End Enum

Then in our code we have code like the following:

If OrderStatus = eosCreated OrElse OrderStatus = eosSubmitted Then
    ...do some work
End If

On every screen we create, we have the ids stored in the controls (like comboboxes). I don't know - something about this has always made me think it's bad design. Maybe I'm wrong, though. Especially when I started getting into REST design: I wanted to pass the ids rather than descriptions. Of course this doesn't seem right, since I'd be the only one I've ever seen pass those types of ids in a REST service. So is this bad design?

Edit: Trying to make things a little clearer. Our code tables are only stored in two places: the enum and the database. When we need a new id we email our database staff, who create a new code table value for us and email us back the id. We then put the new value into our enumerations. We have never had problems with it getting out of sync, but when we have needed to delete or change values in the past (adding is easy), it has been a pain, because everything in the system had to be recompiled. We have gone down the route of trying to make things semi-dynamic (which doesn't work out in every case but helps in some). There are tons of examples, but a simple one is that we have an IsVisible flag in some of the code tables. If we ever want to obsolete a value or make it not selectable, then we set the IsVisible flag = 0. They prefer this compared to having to change code/compile/deploy. Thinking about it, I think it would be preferable to have it in business logic and have tests around it (which we don't :-( ), depending on your perspective.
General Architecture with Code Tables?
architecture
null
_unix.239833
I am trying to establish a remote (reverse) SSH connection. On the Remote machine I tried to connect with

ssh -fN -R 10110:localhost:22 GatewayUser@GatewayHost

and on the Gateway with

ssh -p10110 RemoteUser@localhost

I got this response on the Gateway console:

Connection closed by ::1

Running it with -v,

ssh -v -fN -R 10110:localhost:22 GatewayUser@GatewayHost

produces this output in the Remote console:

debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
debug1: client_input_channel_open: ctype forwarded-tcpip rchan 2 win 2097152 max 32768
debug1: client_request_forwarded_tcpip: listen localhost port 10110, originator ::1 port 48481
debug1: connect_next: host localhost ([127.0.0.1]:22) in progress, fd=4
debug1: channel 0: new [::1]
debug1: confirm forwarded-tcpip
debug1: channel 0: connected to localhost port 22
debug1: channel 0: free: ::1, nchannels 1
debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1
debug1: client_input_global_request: rtype keepalive@openssh.com want_reply 1

PS: an SSH connection from Remote to Gateway is working. Many thanks in advance!

Here is the console output when connecting from the gateway machine:

emanuel@UbuntuServer:~$ ssh -vvv -p10110 pi@localhost
OpenSSH_6.7p1 Ubuntu-5ubuntu1.3, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to localhost [::1] port 10110.
debug1: Connection established.
debug1: key_load_public: No such file or directory
debug1: identity file /home/emanuel/.ssh/id_rsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/emanuel/.ssh/id_rsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/emanuel/.ssh/id_dsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/emanuel/.ssh/id_dsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/emanuel/.ssh/id_ecdsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/emanuel/.ssh/id_ecdsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/emanuel/.ssh/id_ed25519 type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/emanuel/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.7p1 Ubuntu-5ubuntu1.3
debug1: Remote protocol version 2.0, remote software version OpenSSH_6.0p1 Debian-4+deb7u2
debug1: match: OpenSSH_6.0p1 Debian-4+deb7u2 pat OpenSSH* compat 0x04000000
debug2: fd 3 setting O_NONBLOCK
debug3: put_host_port: [localhost]:10110
debug3: load_hostkeys: loading entries for host [localhost]:10110 from file /home/emanuel/.ssh/known_hosts
debug3: load_hostkeys: loaded 0 keys
debug1: SSH2_MSG_KEXINIT sent
Connection closed by ::1
emanuel@UbuntuServer:~$
Reverse SSH Connection closed by ::1
ssh;ssh tunneling;openssh
What you do is: create an SSH connection from the Raspberry Pi to the gateway, and forward *:10110 on the gateway to 127.0.0.1:22 on the Pi. Then you connect to port 10110 on localhost, which in some configurations may use the IPv6 address (::1), which has no tunnel behind it. sshd then closes the connection.

Try

ssh -4 -p10110 pi@localhost

This should get you one step further. If you have problems finding the correct key (ssh stops after a certain number of checked keys), then disable pubkey authentication with

ssh -oPubkeyAuthentication=no -4 -p10110 pi@localhost
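Equivalently, you can bypass the name lookup and point the client at the IPv4 loopback address explicitly:

ssh -p10110 pi@127.0.0.1

Both forms avoid the case where localhost resolves to ::1 first, which is the address without a tunnel behind it.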
_unix.219534
I am not able to FTP to the server from a client. My base machine is Ubuntu 14.04. Below is the ifconfig output (each interface shown on one line):

raghav@raghav-HP-15-Notebook-PC:~$ ifconfig
eth0    Link encap:Ethernet HWaddr 38:63:bb:e5:ee:29  UP BROADCAST MULTICAST MTU:1500 Metric:1  RX packets:0 errors:0 dropped:0 overruns:0 frame:0  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0  collisions:0 txqueuelen:1000  RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
lo      Link encap:Local Loopback  inet addr:127.0.0.1 Mask:255.0.0.0  inet6 addr: ::1/128 Scope:Host  UP LOOPBACK RUNNING MTU:65536 Metric:1  RX packets:6302 errors:0 dropped:0 overruns:0 frame:0  TX packets:6302 errors:0 dropped:0 overruns:0 carrier:0  collisions:0 txqueuelen:0  RX bytes:603664 (603.6 KB) TX bytes:603664 (603.6 KB)
ra0     Link encap:Ethernet HWaddr c0:38:96:7f:5c:97  inet addr:192.168.1.5 Bcast:192.168.1.255 Mask:255.255.255.0  inet6 addr: fe80::c238:96ff:fe7f:5c97/64 Scope:Link  UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1  RX packets:354373 errors:0 dropped:0 overruns:0 frame:0  TX packets:69134 errors:0 dropped:0 overruns:0 carrier:0  collisions:0 txqueuelen:1000  RX bytes:121914289 (121.9 MB) TX bytes:12476694 (12.4 MB)  Interrupt:16
vmnet1  Link encap:Ethernet HWaddr 00:50:56:c0:00:01  inet addr:192.168.5.1 Bcast:192.168.5.255 Mask:255.255.255.0  inet6 addr: fe80::250:56ff:fec0:1/64 Scope:Link  UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1  RX packets:0 errors:0 dropped:0 overruns:0 frame:0  TX packets:178 errors:0 dropped:0 overruns:0 carrier:0  collisions:0 txqueuelen:1000  RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
vmnet8  Link encap:Ethernet HWaddr 00:50:56:c0:00:08  inet addr:172.16.172.1 Bcast:172.16.172.255 Mask:255.255.255.0  inet6 addr: fe80::250:56ff:fec0:8/64 Scope:Link  UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1  RX packets:85 errors:0 dropped:0 overruns:0 frame:0  TX packets:177 errors:0 dropped:0 overruns:0 carrier:0  collisions:0 txqueuelen:1000  RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

Now I have installed VM Player on the machine and configured a RHEL 6 server, with the configuration below (the ifconfig output is the same as above):

raghav@raghav-HP-15-Notebook-PC:~$ ifconfig
eth0    Link encap:Ethernet HWaddr 38:63:bb:e5:ee:29  UP BROADCAST MULTICAST MTU:1500 Metric:1  RX packets:0 errors:0 dropped:0 overruns:0 frame:0  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0  collisions:0 txqueuelen:1000  RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
lo      Link encap:Local Loopback  inet addr:127.0.0.1 Mask:255.0.0.0  inet6 addr: ::1/128 Scope:Host  UP LOOPBACK RUNNING MTU:65536 Metric:1  RX packets:6302 errors:0 dropped:0 overruns:0 frame:0  TX packets:6302 errors:0 dropped:0 overruns:0 carrier:0  collisions:0 txqueuelen:0  RX bytes:603664 (603.6 KB) TX bytes:603664 (603.6 KB)
ra0     Link encap:Ethernet HWaddr c0:38:96:7f:5c:97  inet addr:192.168.1.5 Bcast:192.168.1.255 Mask:255.255.255.0  inet6 addr: fe80::c238:96ff:fe7f:5c97/64 Scope:Link  UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1  RX packets:354373 errors:0 dropped:0 overruns:0 frame:0  TX packets:69134 errors:0 dropped:0 overruns:0 carrier:0  collisions:0 txqueuelen:1000  RX bytes:121914289 (121.9 MB) TX bytes:12476694 (12.4 MB)  Interrupt:16
vmnet1  Link encap:Ethernet HWaddr 00:50:56:c0:00:01  inet addr:192.168.5.1 Bcast:192.168.5.255 Mask:255.255.255.0  inet6 addr: fe80::250:56ff:fec0:1/64 Scope:Link  UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1  RX packets:0 errors:0 dropped:0 overruns:0 frame:0  TX packets:178 errors:0 dropped:0 overruns:0 carrier:0  collisions:0 txqueuelen:1000  RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
vmnet8  Link encap:Ethernet HWaddr 00:50:56:c0:00:08  inet addr:172.16.172.1 Bcast:172.16.172.255 Mask:255.255.255.0  inet6 addr: fe80::250:56ff:fec0:8/64 Scope:Link  UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1  RX packets:85 errors:0 dropped:0 overruns:0 frame:0  TX packets:177 errors:0 dropped:0 overruns:0 carrier:0  collisions:0 txqueuelen:1000  RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

On the server I have deployed a RHEL 6 VM with the following configuration: IP address 192.168.122.217. While configuring the network, NAT was used. Help would be appreciated.
FTP not working on RHEL 6 server configured using VM Player 11
linux;networking;rhel;ftp
null
_unix.119965
I have an external hard drive with elementary OS on it, and when I took it out of my preinstalled Windows 8 laptop, the laptop came up with the error: no such device: 'hex number' grub rescue>. I can't boot from the external HDD, no live USBs work, I can't get to the BIOS, and when I type in ls, it comes back with only (hd0). I have found articles that explain how to fix either just the ls problem or just the boot error problem, but none of them lets me fix all of it. Ask Ubuntu is telling me that this is off topic on their site. At this point, I don't think I really need the files on the computer, just an operating system on it - either elementary, or some way to get back to Windows 8. If anyone can help me install any kind of OS on it, I would be overjoyed. The computer is an Asus X502C. When I put in find /boot/vmlinuz, it just says that find is an unknown command.
stuck at grub rescue on boot, no bios, no live cd, ls returns hd0
boot;grub2;livecd;elementary os
null
_unix.332994
On CentOS release 5.11 (Final) I created a user and added them to the wheel group with usermod, but when I look in the sudoers file at /etc/sudoers, the relevant line is commented out. Looking at groups:

[root@arrakis ~]# grep wheel /etc/group
wheel:x:10:root,hawat
[root@arrakis ~]# su hawat
[hawat@arrakis root]$ cd
[hawat@arrakis ~]$ whoami
hawat
[hawat@arrakis ~]$ sudo echo hello sudo
[sudo] password for hawat:
hawat is not in the sudoers file. This incident will be reported.
[hawat@arrakis ~]$

and

[root@arrakis ~]# groups wheel ; getent passwd hawat
id: wheel: No such user
hawat:x:505:505::/home/hawat:/bin/bash
[root@arrakis ~]#

Taking a closer look at sudoers:

[root@arrakis ~]# grep wheel /etc/sudoers
## Allows people in group wheel to run all commands
# %wheel ALL=(ALL) ALL
# %wheel ALL=(ALL) NOPASSWD: ALL
[root@arrakis ~]#

I hesitate to uncomment those lines so that the wheel group can run all commands with sudo. This is an Elastix 2.5 system on CentOS; perhaps there's a reason not to have wheel in the sudo list? Should I just go ahead and manually edit sudoers with visudo?
What are the unforeseen consequences of using visudo to enable the wheel group to run all sudo commands?
centos;security;sudo;group;su
null
_softwareengineering.50024
I'm thinking about writing an application that will have a web version and an iPhone version (and perhaps later also an Android version). Since there are some algorithms that are the same in the iPhone and web versions, I was wondering if it is possible to write that part in C++ while keeping the rest of the application in Objective-C?
Is it possible to use C++ code in an Objective-C iPhone app?
iphone
Absolutely. You can write in C and C++, as well as Objective-C. Your algorithms can easily be in straight C++.
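A minimal sketch of how the pieces can fit together (file and function names here are made up, not part of any API): keep the shared algorithm in a plain C++ header, and give any Objective-C file that calls it the .mm extension so it is compiled as Objective-C++.

// algorithms.hpp - plain C++, the part shared between the versions
#pragma once
#include <vector>

inline double averageScore(const std::vector<double>& scores) {
    double sum = 0.0;
    for (double s : scores) sum += s;          // plain C++11, no Objective-C here
    return scores.empty() ? 0.0 : sum / scores.size();
}

// GameViewController.mm - Objective-C++ (note the .mm extension)
// #include "algorithms.hpp"
// double avg = averageScore({86.0, 91.5, 78.0});  // callable from any Objective-C method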
_codereview.86763
I am going through the CodingBat exercises for Java. Here is the task I have just completed:

Given a string, return the sum of the numbers appearing in the string, ignoring all other characters. A number is a series of 1 or more digit chars in a row.

And here is my code:

public int sumNumbers(String str) {
    int total = 0;
    for (int i = 0; i < str.length(); i++) {
        if (Character.isDigit(str.charAt(i))) {
            StringBuilder appendNums = new StringBuilder();
            appendNums.append(str.charAt(i));
            for (int j = i + 1; j < str.length(); j++) {
                if (Character.isDigit(str.charAt(j))) {
                    appendNums.append(str.charAt(j));
                } else {
                    break;
                }
            }
            String appendNums2String = appendNums.toString();
            total += Integer.parseInt(appendNums2String);
            i += appendNums2String.length() - 1;
        }
    }
    return total;
}

The questions I have are:

1. Are there any parts of this code you find to be poor practice or poor quality?
2. Is it acceptable to append one digit before the (conditional) digits that follow it, or should I be identifying the entire block first and then appending?
3. What is a more concise way of solving this?
4. For this question I relied heavily on Eclipse's debugger until I got it right. Is this good or bad practice?
5. Is it a better idea to use a while loop instead of the second for loop, because the characters tested are relative to i rather than the string's length? (I played around with this but couldn't figure it out.)
Identifying numeric substrings of an alphanumeric string, and summing them
java;beginner;programming challenge
Learning to use, and using, the debugger is a fantastic idea. It is a valuable skill that will serve you well. Understanding how your code works (and breaks) is very valuable, and the debugger helps you there.

The remainder of your specific questions are somewhat linked to your actual implementation, and I am going to suggest that your implementation would be simpler with a string split operation. The specification says:

Given a string, return the sum of the numbers appearing in the string, ignoring all other characters. A number is a series of 1 or more digit chars in a row.

Taking this literally, I would code this up as a regular-expression system matching digits. Something like:

private static final Pattern NUMBERS = Pattern.compile("[0-9]+");

private static int digitSum(String s) {
    Matcher matcher = NUMBERS.matcher(s);
    int sum = 0;
    while (matcher.find()) {
        sum += Integer.parseInt(matcher.group());
    }
    return sum;
}

The Pattern says "look in the string for a sequence of digits". The method loops through all digit sequences, converts them to integers, and totals them.
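A quick usage sketch (my own example strings; Pattern and Matcher come from java.util.regex):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// ...
System.out.println(digitSum("abc123xyz4"));  // 127, i.e. 123 + 4
System.out.println(digitSum("no digits"));   // 0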
_unix.42141
I have just compiled and installed BIND 9.9.1-P1 on Debian 6.0, as the version in the repositories is too old for Samba4, and am getting the error mentioned in the title. I have been looking for the source of named, but cannot find any.

# ls | grep named   (in /usr/sbin)
named-checkconf
named-checkzone
named-compilezone

Any suggestions? Regards
bind9 "named" binary missing - not starting
debian;dns;bind
null
_codereview.91693
I'm looking for a very simple (at first sight) CommandBus, which will handle some ICommand publication. The CommandBus implementation will find the appropriate IHandler to Execute the Command and then Notify some possible IObservers.

My first step is to make it work. I've chosen to use a synchronous dependency injection pattern which I may extend in the future, maybe using a real ServiceBus with some asynchronous capability.

Here is the abstract definition:

public interface ICommandBus
{
    TResult Publish<TCommand, TResult>(TCommand command)
        where TResult : ICommandResult
        where TCommand : ICommand<TResult>;
}

public interface ICommand<T> where T : ICommandResult
{
    int Id_User { get; }
}

public interface ICommandResult
{
    bool Success { get; }
    Exception Error { get; }
    ICollection<ValidationRule> BrokenRules { get; }
}

public interface ICommandHandler<TCommand, TResult>
    where TCommand : ICommand<TResult>
    where TResult : ICommandResult
{
    TResult Execute(TCommand command);
}

public interface ICommandObserver<TCommand, TResult>
    where TCommand : ICommand<TResult>
    where TResult : ICommandResult
{
    void Trigger(TCommand command, TResult result);
}

And here is the CommandBus implementation (using StructureMap as the DI container):

public class CommandBus : ICommandBus
{
    private readonly IContainer m_container;

    public CommandBus(IContainer container)
    {
        m_container = container;
    }

    public TResult Publish<TCommand, TResult>(TCommand command)
        where TResult : ICommandResult
        where TCommand : ICommand<TResult>
    {
        using (MiniProfiler.Current.Step("CommandBus.Publish"))
        {
            var handler = m_container.GetInstance<ICommandHandler<TCommand, TResult>>();
            var observers = m_container.GetAllInstances<ICommandObserver<TCommand, TResult>>();

            var result = handler.Execute(command);

            foreach (var observer in observers)
                observer.Trigger(command, result);

            return result;
        }
    }
}

How to use:

public class CreateItemCommand : ICommand<CreateItemResult>
{
    public string Name { get; private set; }
}

public class CreateItemResult : ICommandResult
{
    public bool Success { get; private set; }
}

var command = new CreateItemCommand();
var result = commandBus.Publish<CreateItemCommand, CreateItemResult>(command);

What do you think about this pattern? Do you think it will be easy to upgrade? I'm afraid it could become an obvious bottleneck in my application...

Edit: I just want to paste a link to my question about this pattern, in order to simplify all the generics...
CommandBus with Handlers & Observers
c#;dependency injection
null
_opensource.729
I have been working on a piece of open source software. The software is in its final stages of testing and I will soon publish it under a CC license. I would like to release it under the least strict license (to give users full access to do whatever they wish with it). My question: is the CC-BY license the best license to use if I want to release it under no conditions or rules?
Is CC-BY the least strict CC license for my open source project?
licensing;cc by
The use of Creative Commons licenses for software is not recommended. The CC licenses do not address concerns specific to software (such as the source code/object code relationship, or patent issues), and are incompatible with most open-source software licenses.

If you want to license your software with no conditions or rules, you probably want either:

- The Creative Commons Public Domain Dedication (CC0), which is suitable for use with software
- The Do What The Fuck You Want To Public License (WTFPL)

Releasing into the public domain has the problem that not all jurisdictions permit you to simply abandon your copyright to a work; to get around this, CC-0 and WTFPL both retain copyright while relinquishing all rights to the greatest extent permissible by law. Of the two, CC-0 is probably the better written.

If you want to retain the requirement of being attributed for your work, you want one of the highly-permissive licenses such as the three-clause BSD license or the MIT license. These are similar to CC-BY, but are designed for the needs of software.
_unix.303099
The current default behavior of Docker is COW (copy-on-write), aka allocate-on-write. This relies on free space on a drive in order to write to disk. In contrast, with memory, unreferenced files are left available, to be overwritten if something else is needed, or re-linked in constant time if they are needed once again.

We'd like to implement a similar mechanism for caching remote files on a local disk. That is, there would be a set location for files which would be allowed to be overwritten if the space was needed, or linked if the files themselves were needed.

Such a piece of software would ideally hook into the FS driver when it tries to do a write, or when it reports space available. It is my assumption that a polling method would be insufficient, as a piece of software may allocate arbitrarily large files at any time.

Does anything like this exist already in the open source world? If not, is it possible? Are there major impediments? What is a good way to get started?
Delete on write with devicemapper and Docker
docker;device mapper
null
_unix.317136
I need to encrypt a huge file but don't have sufficient storage on my hard drive to store both the file and its encrypted version at the same time. It appears possible to gradually delete the file alongside the encryption so that the used space remains more or less the same. If I encrypt my file with

openssl aes-256-cbc -in myfile -out myfile.aes-256-cbc

how would you suggest gradually deleting the original file myfile alongside the encryption?
How to gradually delete a file in parallel to encryption?
pipe;encryption;openssl;dm crypt;ecryptfs
null
_unix.355095
I am trying to make a plot with values on the x axis ranging from 0 to 2 ms. I want to show a tick every 0.1 ms, with the labels going from 0.0, 0.1, ... up to 1.9, 2.0. I don't want to show the power at each tick, because I label the axis in ms, not in seconds. My data is given in seconds.

I can make this work by using ($1*1000) to manually multiply the value by 1000:

plot 'data.txt' using ($1*1000.0):3 w l lw 2

This does work. However, that method means I have to edit all my plots, of which there are quite a few. Also, if I decide to change it later, I have to change all of them again. I would much rather make these settings in one config file.

I tried using the format specifier, e.g.

set format x "%1.1s"

but unfortunately I can't figure out how to specify a fixed power of 10 for it to use. The labels I get now are 0.0, 1.0, 2.0, ..., 9.0, 1.0, 1.1, ..., etc., instead of 0.0, 0.1, 0.2, etc. What is the best way to do this?
gnuplot, format axis with constant power of 10
gnuplot
null
_softwareengineering.337742
Instead of unintelligible names and acronyms for new products and services, is there any other way of naming software that would also say what it is and what it does? For example, instead of naming a piece of software Abricudabrah or similar, which typically means nothing and says nothing about what it does, could there be a naming scheme such as <what it does>.<what it is>.<id>, e.g.

A means database
B means web server
C means a browser plug-in

and then a piece of software could be named A.java.H2, and you would know that the H2 product is a database in Java, and likewise.
Naming software products and tools?
naming
Great idea, but it's not really in the interests of the people selling the software. I mean, would you buy:

no man's sky : elite.clone.twelve
star citizen : elite.clone.notfinished
macbook : pc.without.touchscreen
linux : os.without.gui
iphone : phone.with.touchscreen.5

The commoditisation of products helps consumers by standardising quality and encouraging competition on price. But as a producer of goods you want people to buy your brand, and you want your product to seem qualitatively different from your competitors'.
_unix.302434
I'm trying to install Java 8 on a GitLab Runner like this:

apt-get --quiet update --yes
apt-get --quiet upgrade --yes
apt-get --quiet install --yes software-properties-common python-software-properties
echo "oracle-java8-installer shared/accepted-oracle-license-v1-1 select true" | debconf-set-selections
add-apt-repository ppa:webupd8team/java
echo "deb http://ppa.launchpad.net/webupd8team/java/ubuntu xenial main" | tee /etc/apt/sources.list.d/webupd8team-java.list
echo "deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu xenial main" | tee -a /etc/apt/sources.list.d/webupd8team-java.list
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys EEA14886
apt-get --quiet update --yes
apt-get --quiet install --yes oracle-java8-installer oracle-java8-set-default

But I still get this error, which says the repository index can't be found:

W: Failed to fetch http://ppa.launchpad.net/webupd8team/java/ubuntu/dists/jessie/main/binary-amd64/Packages  404  Not Found

I have tried for several hours now and hope someone can help me.
Install Oracle Java 8
debian;java;ppa;oracle;gitlab
null
_webapps.56583
How do I block the hundreds of Facebook Lookback videos being shared on my timeline?
How to block Lookback Videos from Facebook newsfeed
facebook timeline;facebook lookback
null
_webmaster.63310
I have a site which was temporarily available at both example.com and www.example.com. All traffic to example.com is now redirected to www.example.com; however, during the brief period that the site was available at the naked domain, Google indexed it. So Google now has two versions of every page indexed:

www.example.com
www.example.com/about_us
www.example.com/products/something
...

and

example.com
example.com/about_us
example.com/products/something
...

For obvious reasons, this is a bad situation, so how can I best resolve it? Should I request removal of these pages from the index? There is still content at these URLs, but they now redirect to the www subdomain equivalent. The site has many hundreds of pages, but the only way I can see to request removal is via the "Remove outdated content" screen in Webmaster Tools, one URL at a time. How can I request removal of an entire domain (i.e. the naked domain) without it affecting the true site located at the www subdomain? Is this the correct strategy, given that all the naked domain URLs now redirect to their www equivalents?
Request Removal of naked domain from Google Index
google;google search console;google search;indexing
Google Webmaster Tools has a setting for it:

1. Sign in to Webmaster Tools
2. Add, verify, and select your website
3. Use the gear icon and select Site Settings
4. From Preferred Domain, select "Display URLs as www.example.com"

Your 301 redirect solution is correct, but it may take Googlebot a couple of weeks to index your site fully and change the search results. I'd expect the setting in Webmaster Tools to take effect in a couple of days at most.

I also wouldn't worry too much about within-site canonicalization these days. Google and Googlebot are much better about detecting duplicate content caused by:

- www vs naked domain
- / vs /index.html
- directory/ vs no trailing slash

The only time I would make it a top priority to fix on-site canonicalization issues is if they prevented Googlebot from crawling the site fully - for example, having session id parameters in the URL such that Googlebot got a different URL every time it visited the page.

10 years ago, Google didn't do a great job of dealing with the duplicate content caused by these issues. You might see:

- Duplicate results in the SERPs
- Penalties for duplicate content
- PageRank split between two versions of the same page

Today, when Google finds duplicate content like this, they generally just choose one of the two URLs and treat all links as if they were to that URL. It doesn't cause penalties, it doesn't cause a loss of rankings, and it doesn't cause any problems other than users seeing a form of the URL that you might not prefer. See: What is duplicate content and how can I avoid being penalized for it on my site?
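For reference, the naked-domain-to-www 301 redirect referred to above is typically a mod_rewrite rule along these lines in .htaccess (a sketch using the placeholder domain; other servers have their own equivalents):

RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]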
_unix.249016
I have a directory with 52 subdirectories, and I'd like to split them into 11 folders with 5 subdirectories in each. Can anyone suggest a way to achieve this?
How to split the subdirectories of a directory in n parts?
bash
In the first place, you're asking for a mathematical impossibility (11 × 5 is 55, not 52), but I'll overlook it. The basic thing you ask is very simply done:

[ ! -e split ] &&
set ./*/ &&
while mkdir split && [ 4 -lt $# ]
do    mv $1 $2 $3 $4 $5 split
      mv split ${1%/}
      shift 5
done &&
mv $@ split &&
mv split ${1%/}

Because you don't specify any kind of names or similar, that takes some care to avoid overwriting anything, and it winds up just moving every 5 directories, as sorted lexicographically, into a directory named for every 5th. That is, it does so if there is no file or directory in the current directory named split.
_softwareengineering.141624
I work in a small startup with two front-end developers and one designer. Currently the process starts with the designer sending a PNG file with the whole page design, and assets if needed. My task as a front-end developer is to convert it to an HTML/CSS page. My workflow currently looks like this:

1. Lay out the distinct parts using HTML elements.
2. Style each element very roughly (floats, minimal fonts and padding) so I can modify it using inspection.
3. Using Chrome Developer Tools (inspect), add/change CSS attributes while updating the CSS file.
4. Refresh the page after X amount of changes.
5. Use Pixel Perfect to refine the design more.
6. Sit with the designer to make last adjustments.

Inferring the paddings, margins, and font sizes using trial and error takes a lot of time, and I feel the process could become more efficient, but I'm not sure how to improve it. Using PSD files is not an option, since buying Photoshop for each developer is currently not being considered. A design guide is also not available, since the design is still evolving and new features are being introduced.

Ideas for improving the process above, and sharing how the process looks in your company, would be great.
How to improve designer and developer work flow?
design;workflows;front end
null
_webapps.66029
I have a Processing sketch that I'd like to embed on my Tumblr. I've followed the instructions in this post for doing so, but all I get is an empty canvas and my script showing up as text. I definitely have the required code they mention in my blog's header tags and have enabled plain text/HTML as my text editor. My code matches their format and I stripped all the returns out of the Processing sketch - is there some Tumblr magic I've neglected, or is there an easier way of doing this?
How to embed this Processing sketch on Tumblr?
tumblr
Same problem! What worked for me was to put a <div> at the beginning (before the <canvas>...) and a </div> at the end (after </script>) when you are in the HTML editor. Or maybe you could try the tumbleryfier found at http://p5lyon.tumblr.com/ProcessingJSTumblrEn
_unix.326520
I want to connect from ServerA to ServerB, check the Oracle database status and pending logs, and record the results; then use the result on ServerA, compare it with the result on ServerA, and generate logs on ServerA. I used

ssh -q root@192.168.11.131 sh -s < /root/script.sh > /root/output.txt

but I still have to enter the password manually. Is there any way to turn off interactive login? And how can I run a script file via spawn ssh?
run script remotely and use result locally with ssh auto login
ssh;scripting;remote;file copy;autologin
1. Is there any way to turn off interactive login?

Yes - use public key authentication, or use sshpass to supply the password.

2. How can I run a script file via spawn ssh?

Yes - use an expect script. If you want to run some other script inside (awk), you need to escape the special characters (\$).
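A minimal sketch of the public-key route, using the host and paths from the question (run on ServerA; ssh-keygen prompts for a file and passphrase - leave the passphrase empty for unattended use):

ssh-keygen -t rsa -b 4096
ssh-copy-id root@192.168.11.131
# afterwards the original command runs without a password prompt:
ssh -q root@192.168.11.131 sh -s < /root/script.sh > /root/output.txt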
_cstheory.8539
There is always a way to find applications for topics related to theoretical computer science, but textbooks and undergraduate courses usually don't explain why automata theory is an important topic and whether it still has applications in practice. Therefore undergraduate students might have trouble understanding the importance of automata theory and might think it is not of any practical use anymore. Is automata theory still useful in practice? Should it be part of the undergraduate CS curriculum?
How practical is Automata Theory?
soft question;fl.formal languages;automata theory;teaching
Ever used a tool like grep/awk/sed? Regular expressions form the heart of these tools. You'll be surprised how much coding you can avoid by principled use of regular expressions - in practical projects, like an email server.

If you're a CS major, you'll definitely be writing a compiler/interpreter for a (at least a small) language. If you've ever tried this task before and got stuck, you'll appreciate how much a little theory (aka context free grammars) can help you. This theory has made a once impossible task into something that can be completed over a weekend. (And it won the inventor a Turing award - google BNF.)

If you're a CS major, at some point, you need to sit back and think about the philosophical foundations of computing, and not just about how cool the next version of the Android API is. On a related note, it is the job of the university not to prepare you for the next 5 years of your life, but to prepare you for the next 50. The only thing they can do in this regard is to help you think - think of automata theory as one of those courses.
_unix.137735
I have around 500 strings, and I want to search the files inside a directory for them and get the names of the files which contain the strings. So far I've been using:

find -name 'LYFNRE.*' -exec grep -f file1.txt {} \; -print

but the problem is that a string can be found in many files, so it is difficult to tell which strings are present and which are missing amid the huge output. Can you help me print each string along with the corresponding file names where it was found?
searching multiple strings in multiple files inside a directory and printing the string and corresponding file name where it was found
grep;search
null
_vi.5120
I'm working with Python code. After some modifications, I want to update the indentation, but obviously selecting everything and pressing '=' doesn't work, Python being Python. So, is there any other way to add (and/or remove) some character (here, tabs) at the beginning of each line?
Add tab in front of each line
normal mode
As Sato mentioned in comments, :help v_> will show you help for the best tool you can use for this. >> in normal mode will indent the current line; >3> will indent the current line and the following two lines; << will decrease indentation.

Another feature that works well in combination with > can be found at :help text-objects (aB, a[, and so on). For example, in C-style code that uses curly braces, >aB or >iB will indent the current block including the curly braces, or only the lines between the curly braces, respectively. For Python code, >ap (indent a paragraph) may be more useful, but using visual mode to select the lines as described in :help v_> is even more adjustable.

For the general answer to "How do I add a character/some text at the beginning of each line?" - i.e., when you want to insert something other than tabs or spaces - there are a couple of ways:

:%normal Itext to insert

will prepend "text to insert" to every line in the file. % can be replaced with any range you like. (See :help range and also :help :normal.)

Or you can use Ctrl-V to enter blockwise-visual mode, use j and k to select a column of characters, then I (capital) to insert text at that point in all selected lines. (It will only be visible in all the lines after you press Esc and then make another motion of any sort.) This has the advantage that you can enter text at ANY point in the line, not only at the beginning. (See :help v_b_I.)
_codereview.44584
Part of my program is a variable-sized set of Star Systems randomly linked by Warp Points. I have an A* algorithm working rather well in the grid, but the random warp point links mean that even though the systems have X,Y coordinates for where they're located on a galactic map, a system at 2,3 isn't always linked directly to a system at 2,4, and so the shortest path may actually lead away from the target before it heads back towards it. I think this limitation eliminates A*, since there's almost no way to get a good heuristic figured out.

What I've done instead is a recursive node search (I believe this specific pattern is a Depth-First Search), and while it gets the job done, it also evaluates every possible path in the entire network of systems and warp points, so I'm worried it will run very slowly on larger sets of systems. My test data is 11 systems with 1-4 warp points each, and it averages over 700 node recursions for any non-adjacent path.

My knowledge of search/pathfinding algorithms is limited, but surely there's a way to not search every single node without needing to calculate a heuristic, or at least is there a heuristic here I'm not seeing?

Here's my code so far:

private int getNextSystem(StarSystem currentSystem, StarSystem targetSystem, List<StarSystem> pathVisited)
{
    // If we're in the target system, stop recursion and
    // start counting backwards for comparison to other paths
    if (currentSystem == targetSystem)
        return 0;

    // Arbitrary number higher than maximum count of StarSystems
    int countOfJumps = 99;
    StarSystem bestSystem = currentSystem;

    foreach (StarSystem system in currentSystem.GetConnectedStarSystems()
                                                .Where(f => !pathVisited.Contains(f)))
    {
        // I re-create the path list for each node-path so
        // that it doesn't modify the source list by reference
        // and mess up other node-paths
        List<StarSystem> newPath = new List<StarSystem>();
        foreach (StarSystem s in pathVisited)
            newPath.Add(s);
        newPath.Add(system);

        // recursive call until current == target
        int jumps = getNextSystem(system, targetSystem, newPath);

        // changes only if this is better than previously found
        if (jumps < countOfJumps)
        {
            countOfJumps = jumps;
            bestSystem = system;
        }
    }

    // returns 100 if current path is a dead-end
    return countOfJumps + 1;
}
Efficient pathfinding without heuristics?
c#;recursion;search;pathfinding
Complete and Incomplete Algorithms

Search algorithms can be classed into two categories: complete and incomplete. A complete algorithm will always succeed in finding what you're searching for, and not surprisingly an incomplete algorithm may not always find your target node.

For arbitrary connected graphs, without any a priori knowledge of the graph topology, a complete algorithm may be forced to visit all nodes - but it may find the node you're looking for before visiting all of them. A* is a complete best-first method with a heuristic that tries to avoid searching unlikely parts of the graph unless absolutely necessary.

So unfortunately you cannot guarantee that you will never visit all nodes, whatever algorithm you choose. But you can reduce the likelihood of that happening.

Without pre-processing

If you cannot consider pre-processing your graph, then you're stuck with a myriad of on-line algorithms such as depth-first, breadth-first, A* and greedy best-first. Out of the bunch I'd bet on A* in most cases, if the heuristic is even half good and the graphs are non-trivial.

If you expect all routes to be short, a breadth-first algorithm with cycle detection and duplicate node removal may outperform A* with a poor heuristic. I wouldn't bet on it though; you need to evaluate.

With pre-processing

In your case I'd see if I could pre-process the graph, even if you need to repeatedly re-do the pre-processing when the graph changes; as long as you do sufficiently many searches between pre-processing runs, it would be worth it.

You should look up Floyd-Warshall (or some derivative), calculate the pairwise cost/distance/jumps between all nodes, and use this table as a heuristic for your A*. This heuristic will not only be admissible, it will be exact, and your A* search will complete in no time.
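As a sketch of the breadth-first option (my own code, not part of the answer above; it assumes the StarSystem type and GetConnectedStarSystems() method shown in the question, plus the usual System and System.Collections.Generic usings):

private int CountJumps(StarSystem start, StarSystem target)
{
    if (start == target)
        return 0;

    var visited = new HashSet<StarSystem> { start };
    var queue = new Queue<Tuple<StarSystem, int>>();
    queue.Enqueue(Tuple.Create(start, 0));

    while (queue.Count > 0)
    {
        var current = queue.Dequeue();
        foreach (StarSystem next in current.Item1.GetConnectedStarSystems())
        {
            if (!visited.Add(next))
                continue;                      // already seen; skip to avoid cycles
            if (next == target)
                return current.Item2 + 1;      // first discovery is a shortest path
            queue.Enqueue(Tuple.Create(next, current.Item2 + 1));
        }
    }

    return -1; // target not reachable
}

Because every warp jump costs the same, breadth-first search finds the target along a shortest path the first time it sees it, so no heuristic is required.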
_unix.198835
After logging in, when I invoke Slingshot search for the first time, the highlight is there -- and it is very useful when scrolling up and down. But after that it is absent: in every case except the first use after login, it is no longer there, and scrolling up and down is not visible (although it still works in practice; I just cannot see what is selected, which makes the list almost useless).
No highlighted selection in Slingshot (Elementary OS Freya)
search;elementary os;highlighting;slingshot launcher
Solved after fully updating to Freya stable following this method. It involves: removing daily sources, adding stable sources, replacing kernel 3.13 with 3.16.
_cs.70768
I think they are, because SLR and LALR have the same number of states, and since there are no conflicts in the SLR table, all the SLR information is needed and correctly used for parsing, so all of that has to be in the LALR table as well.
If a grammar is SLR(1) then are the LALR and SLR tables the same?
compilers;parsers
null
_vi.6446
I have been trying to install YouCompleteMe for a long time. At first I failed because I needed to build Vim with Python support. That is now solved, but I always fail at building YouCompleteMe itself because of various kinds of problems (if you guys want to help me on this, that's okay too :-)). The build log of YouCompleteMe is this (using only ./install.py):

./install.py
-- The C compiler identification is GNU 5.3.1
...... Successful Detects ......
-- Detecting CXX compile features - done
Your C++ compiler supports C++11, compiling in that mode.
-- Found PythonLibs: /usr/local/lib/libpython2.7.a (found suitable version 2.7.10, minimum required is 2.6)
-- Found PythonInterp: /usr/local/bin/python2 (found suitable version 2.7.10, minimum required is 2.6)
NOT using libclang, no semantic completion for C/C++/ObjC will be available
-- Found PythonInterp: /usr/local/bin/python2 (found version 2.7.10)
-- Looking for include file pthread.h
-- Looking for include file pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Configuring done
-- Generating done
-- Build files have been written to: /tmp/ycm_build.jmI4vO
Scanning dependencies of target BoostParts
[  0%] [  2%] [  2%] Building CXX object BoostParts/CMakeFiles/BoostParts.dir/libs/atomic/src/lockpool.cpp.o
...... Successfully Builds ......
[ 92%] Building CXX object ycm/CMakeFiles/ycm_client_support.dir/PythonSupport.cpp.o
In file included from /home/bunny/.vim/bundle/YouCompleteMe/third_party/ycmd/cpp/BoostParts/boost/type_traits/ice.hpp:15:0, from /home/bunny/.vim/bundle/YouCompleteMe/third_party/ycmd/cpp/BoostParts/boost/python/detail/def_helper.hpp:9, from /home/bunny/.vim/bundle/YouCompleteMe/third_party/ycmd/cpp/BoostParts/boost/python/class.hpp:29, from /home/bunny/.vim/bundle/YouCompleteMe/third_party/ycmd/cpp/BoostParts/boost/python.hpp:18, from /home/bunny/.vim/bundle/YouCompleteMe/third_party/ycmd/cpp/ycm/PythonSupport.h:21, from /home/bunny/.vim/bundle/YouCompleteMe/third_party/ycmd/cpp/ycm/ycm_client_support.cpp:19:
/home/bunny/.vim/bundle/YouCompleteMe/third_party/ycmd/cpp/BoostParts/boost/type_traits/detail/ice_or.hpp:17:71: note: #pragma message: NOTE: Use of this header (ice_or.hpp) is deprecated
 # pragma message(NOTE: Use of this header (ice_or.hpp) is deprecated)
                                                                       ^
...... Some similar messages ......
In file included from /home/bunny/.vim/bundle/YouCompleteMe/third_party/ycmd/cpp/BoostParts/boost/type_traits/ice.hpp:18:0, from /home/bunny/.vim/bundle/YouCompleteMe/third_party/ycmd/cpp/BoostParts/boost/python/detail/def_helper.hpp:9, from /home/bunny/.vim/bundle/YouCompleteMe/third_party/ycmd/cpp/BoostParts/boost/python/class.hpp:29, from /home/bunny/.vim/bundle/YouCompleteMe/third_party/ycmd/cpp/BoostParts/boost/python.hpp:18, from /home/bunny/.vim/bundle/YouCompleteMe/third_party/ycmd/cpp/ycm/PythonSupport.h:21, from /home/bunny/.vim/bundle/YouCompleteMe/third_party/ycmd/cpp/ycm/ycm_core.cpp:19:
/home/bunny/.vim/bundle/YouCompleteMe/third_party/ycmd/cpp/BoostParts/boost/type_traits/detail/ice_eq.hpp:17:71: note: #pragma message: NOTE: Use of this header (ice_eq.hpp) is deprecated
 # pragma message(NOTE: Use of this header (ice_eq.hpp) is deprecated)
                                                                       ^
[ 96%] Building CXX object ycm/CMakeFiles/ycm_client_support.dir/CustomAssert.cpp.o
[ 97%] Building CXX object ycm/CMakeFiles/ycm_client_support.dir/Result.cpp.o
[ 98%] [100%] Building CXX object ycm/CMakeFiles/ycm_core.dir/CustomAssert.cpp.o
Building CXX object ycm/CMakeFiles/ycm_core.dir/Result.cpp.o
Linking CXX shared library /home/bunny/.vim/bundle/YouCompleteMe/third_party/ycmd/ycm_client_support.so
/usr/bin/ld: /usr/local/lib/libpython2.7.a(abstract.o): relocation R_X86_64_32S against `_Py_NotImplementedStruct' can not be used when making a shared object; recompile with -fPIC
/usr/local/lib/libpython2.7.a: error adding symbols: Bad value
collect2: error: ld returned 1 exit status
ycm/CMakeFiles/ycm_client_support.dir/build.make:387: recipe for target '/home/bunny/.vim/bundle/YouCompleteMe/third_party/ycmd/ycm_client_support.so' failed
make[3]: *** [/home/bunny/.vim/bundle/YouCompleteMe/third_party/ycmd/ycm_client_support.so] Error 1
CMakeFiles/Makefile2:130: recipe for target 'ycm/CMakeFiles/ycm_client_support.dir/all' failed
make[2]: *** [ycm/CMakeFiles/ycm_client_support.dir/all] Error 2
make[2]: *** Waiting for unfinished jobs....
Linking CXX shared library /home/bunny/.vim/bundle/YouCompleteMe/third_party/ycmd/ycm_core.so
/usr/bin/ld: /usr/local/lib/libpython2.7.a(abstract.o): relocation R_X86_64_32S against `_Py_NotImplementedStruct' can not be used when making a shared object; recompile with -fPIC
/usr/local/lib/libpython2.7.a: error adding symbols: Bad value
collect2: error: ld returned 1 exit status
ycm/CMakeFiles/ycm_core.dir/build.make:387: recipe for target '/home/bunny/.vim/bundle/YouCompleteMe/third_party/ycmd/ycm_core.so' failed
make[3]: *** [/home/bunny/.vim/bundle/YouCompleteMe/third_party/ycmd/ycm_core.so] Error 1
CMakeFiles/Makefile2:165: recipe for target 'ycm/CMakeFiles/ycm_core.dir/all' failed
make[2]: *** [ycm/CMakeFiles/ycm_core.dir/all] Error 2
CMakeFiles/Makefile2:209: recipe for target 'ycm/CMakeFiles/ycm_support_libs.dir/rule' failed
make[1]: *** [ycm/CMakeFiles/ycm_support_libs.dir/rule] Error 2
Makefile:148: recipe for target 'ycm_support_libs' failed
make: *** [ycm_support_libs] Error 2
Traceback (most recent call last):
  File "/home/bunny/.vim/bundle/YouCompleteMe/third_party/ycmd/build.py", line 372, in <module>
    Main()
  File "/home/bunny/.vim/bundle/YouCompleteMe/third_party/ycmd/build.py", line 361, in Main
    BuildYcmdLibs( args )
  File "/home/bunny/.vim/bundle/YouCompleteMe/third_party/ycmd/build.py", line 275, in BuildYcmdLibs
    subprocess.check_call( build_command )
  File "/usr/local/lib/python2.7/subprocess.py", line 540, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command
'['cmake', '--build', '.', '--target', 'ycm_support_libs', '--', '-j', '4']' returned non-zero exit status 2Traceback (most recent call last): File ./install.py, line 32, in <module> Main() File ./install.py, line 21, in Main subprocess.check_call( [ python_binary, build_file ] + sys.argv[1:] ) File /usr/local/lib/python2.7/subprocess.py, line 540, in check_call raise CalledProcessError(retcode, cmd)subprocess.CalledProcessError: Command '['/usr/local/bin/python', '/home/bunny/.vim/bundle/YouCompleteMe/third_party/ycmd/build.py']' returned non-zero exit status 1. So I install the package vim-youcompleteme by apt successfully. So how can I use YouCompleteMe now?EDIT: I used vam to install YouCompleteMe, no problem, but when I open vi, it gives me this error:Error detected while processing function youcompleteme#Enable.. <SNR>30_SetUpPython:line 29:Traceback (most recent call last):Press ENTER or type command to continueError detected while processing function youcompleteme#Enable..<SNR>30_SetUpPython:line 29: File <string>, line 25, in <module>Press ENTER or type command to continueError detected while processing function youcompleteme#Enable..<SNR>30_SetUpPython:line 29: File /usr/share/vim-youcompleteme/python/ycm/youcompleteme.py, line 34, in <module>Press ENTER or type command to continueError detected while processing function youcompleteme#Enable..<SNR>30_SetUpPython:line 29: from ycm.client.ycmd_keepalive import YcmdKeepalivePress ENTER or type command to continueError detected while processing function youcompleteme#Enable..<SNR>30_SetUpPython:line 29: File /usr/share/vim-youcompleteme/python/ycm/client/ycmd_keepalive.py, line 22, in <module>Press ENTER or type command to continueError detected while processing function youcompleteme#Enable..<SNR>30_SetUpPython:line 29:from ycm.client.base_request import BaseRequestPress ENTER or type command to continueError detected while processing function youcompleteme#Enable..<SNR>30_SetUpPython:line 29: File /usr/share/vim-youcompleteme/python/ycm/client/base_request.py, line 20, in <module>Press ENTER or type command to continueError detected while processing function youcompleteme#Enable..<SNR>30_SetUpPython:line 29: import requestsPress ENTER or type command to continueError detected while processing function youcompleteme#Enable..<SNR>30_SetUpPython:line 29:ImportError: No module named requestsPress ENTER or type command to continuevi --version output:VIM - Vi IMproved 7.4 (2013 Aug 10, compiled Feb 11 2016 19:19:30)Compiled by Vostro-3400Normal version with GTK2 GUI. 
Features included (+) or not (-):-arabic +file_in_path -mouse_sgr +tag_binary+autocmd +find_in_path -mouse_sysmouse +tag_old_static+balloon_eval +float -mouse_urxvt -tag_any_white+browse +folding +mouse_xterm -tcl+builtin_terms -footer +multi_byte +terminfo+byte_offset +fork() +multi_lang +termresponse+cindent +gettext -mzscheme +textobjects+clientserver -hangul_input +netbeans_intg +title+clipboard +iconv +path_extra +toolbar+cmdline_compl +insert_expand -perl +user_commands+cmdline_hist +jumplist +persistent_undo +vertsplit+cmdline_info -keymap +postscript +virtualedit+comments -langmap +printer +visual-conceal +libcall -profile +visualextra+cryptv +linebreak +python +viminfo-cscope +lispindent -python3 +vreplace+cursorbind +listcmds +quickfix +wildignore+cursorshape +localmap +reltime +wildmenu+dialog_con_gui -lua -rightleft +windows+diff +menu -ruby +writebackup+digraphs +mksession +scrollbind +X11+dnd +modify_fname +signs -xfontset-ebcdic +mouse +smartindent +xim-emacs_tags +mouseshape -sniff +xsmp_interact+eval -mouse_dec +startuptime +xterm_clipboard+ex_extra -mouse_gpm +statusline -xterm_save+extra_search -mouse_jsbterm -sun_workshop -farsi -mouse_netterm +syntax system vimrc file: $VIM/vimrc user vimrc file: $HOME/.vimrc 2nd user vimrc file: ~/.vim/vimrc user exrc file: $HOME/.exrc system gvimrc file: $VIM/gvimrc user gvimrc file: $HOME/.gvimrc2nd user gvimrc file: ~/.vim/gvimrc system menu file: $VIMRUNTIME/menu.vim fall-back for $VIM: /usr/local/share/vimCompilation: gcc -c -I. -Iproto -DHAVE_CONFIG_H -DFEAT_GUI_GTK -pthread -I/usr/include/gtk-2.0 -I/usr/lib/x86_64-linux-gnu/gtk-2.0/include -I/usr/include/gio-unix-2.0/ -I/usr/include/cairo -I/usr/include/pango-1.0 -I/usr/include/atk-1.0 -I/usr/include/cairo -I/usr/include/pixman-1 -I/usr/include/libpng12 -I/usr/include/gdk-pixbuf-2.0 -I/usr/include/libpng12 -I/usr/include/pango-1.0 -I/usr/include/harfbuzz -I/usr/include/pango-1.0 -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/freetype2 -I/usr/local/include -g -O2 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1 Linking: gcc -L/usr/local/lib -Wl,--as-needed -o vim -lgtk-x11-2.0 -lgdk-x11-2.0 -lpangocairo-1.0 -latk-1.0 -lcairo -lgdk_pixbuf-2.0 -lgio-2.0 -lpangoft2-1.0 -lpango-1.0 -lgobject-2.0 -lglib-2.0 -lfontconfig -lfreetype -lSM -lICE -lXpm -lXt -lX11 -lXdmcp -lSM -lICE -lm -ltinfo -lnsl -lselinux -ldl -L/usr/lib/python2.7/config-x86_64-linux-gnu -lpython2.7 -lpthread -ldl -lutil -lm -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions
Installation of YouCompleteMe
plugin you complete me
After some tries, I finally found out what was going on with my installation of vim-youcompleteme, and I decided to answer my own question. First, building from source is not a good idea unless you have to do it (as with my Vim that lacked Python support); installing pre-built packages is always a good idea and a good starting point. Second, the dependencies are important. With the package installed and the plugin enabled through vam (vim-addon-manager), you need to fix the problems shown when Vim starts. For me, the main problem was the missing requests and requests_futures modules; after figuring that out, I just installed them with pip and it worked. By the way, thank you to everyone who helped -- you gave some good advice and hints!
_cs.48505
When completing exercises on Codility.com, you submit your code to a server for analysis. You then receive a report containing the detected algorithmic complexity of the code. I was just wondering how it does this automatically - i.e. without any O(n) analysis?
How does the automatic complexity analysis of Codility work?
algorithm analysis;runtime analysis
null
_scicomp.24055
Suppose I have to solve the 2-D heat equation in a rectangular domain using the finite difference method, with the following boundary conditions: $T_1$ is the temperature of the right side of the rectangle, $T_2$ is the temperature of the top side, $T_3$ is the temperature of the left side, and $T_4$ is the temperature at the bottom.
        T2
  _______________
 |               |
T3|             |T1
 |               |
 |_______________|
        T4
At the corner points of the rectangle, how do I set the boundary conditions? Do I need to take the average value of the two temperatures? Regards
How to set the temperature at the corner points of a rectangular domain?
finite difference
I believe you cannot apply two boundary conditions at the same point. You need to choose between T2 and T3 for the top-left point, for instance. The average might be acceptable, but I'm not sure of its physical meaning. Anyway, with a fine discretization it should not be a problem, since the effect of this choice on the solution will be insignificant.
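To make that concrete, here is a minimal sketch (my own illustration, not part of the original answer) of setting Dirichlet boundary values on a uniform grid and averaging at the four corners; the grid size and temperature values are arbitrary placeholders.

import numpy as np

# Uniform nx-by-ny grid; T1 right, T2 top, T3 left, T4 bottom (placeholder values).
nx, ny = 50, 40
T1, T2, T3, T4 = 100.0, 50.0, 25.0, 75.0

T = np.zeros((nx, ny))
T[-1, :] = T1   # right edge
T[:, -1] = T2   # top edge
T[0, :]  = T3   # left edge
T[:, 0]  = T4   # bottom edge

# Corners: average the two sides that meet there.
T[0, -1]  = 0.5 * (T2 + T3)   # top-left
T[-1, -1] = 0.5 * (T1 + T2)   # top-right
T[0, 0]   = 0.5 * (T3 + T4)   # bottom-left
T[-1, 0]  = 0.5 * (T1 + T4)   # bottom-right

With a fine mesh, these four isolated corner nodes have a negligible influence on the interior solution, which is the point made above.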
_computerscience.3707
I am trying to make a glow shader using separable Gaussian blurring. I have recently been inspired by the short YouTube video computer color is broken, I have messed around with color interpolation, and boy, his suggestion is beautiful! A big thing the video talks about is that this principle should also be applied to blurring, however I am pretty confused. I don't really know what to square and what to square-root when values are being added. My current theory is that each texture sample for the Gaussian blur gets raised to the power of two, weighted with a bell curve, and added to a sum as usual; at the end the sum is square-rooted, but I'm not sure if that is correct. Could someone please confirm? Would this make an appreciable difference that makes it worth doing?
Applying correct light physics to gaussian blur formulas for glow
color science;blur;gamma
Yes, your theory is correct. A gamma-correct blur entails converting the input pixels to linear color space, performing the blur weighting and accumulation in that space, and then converting back to gamma space at the end. As noted in the comments, the actual transform is not literally squaring and square-rooting; that's just an approximation (and not a very good one). For the true sRGB gamma transform, see the equations in this Wikipedia article (look down the page for the equations involving $C_\text{srgb}$ and $C_\text{linear}$). By the way, some visual comparisons of gamma-correct and gamma-incorrect blurs can be found on this page by Elle Stone, which shows why this whole thing matters.
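For reference, a minimal sketch (my own addition, not from the original answer) of the piecewise sRGB transfer functions mentioned above, used around a 1-D weighted accumulation; the kernel weights and sample values are placeholders.

def srgb_to_linear(c):
    # Standard piecewise sRGB decoding for a channel value in [0, 1].
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # Inverse transform, back to gamma-encoded sRGB.
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def blur_tap(samples, weights):
    # Gamma-correct weighted sum: decode, accumulate in linear light, re-encode.
    total = sum(w * srgb_to_linear(s) for s, w in zip(samples, weights))
    return linear_to_srgb(total)

# Placeholder 5-tap kernel (weights sum to 1) and arbitrary sample values.
print(blur_tap([0.2, 0.5, 0.9, 0.5, 0.2], [0.06, 0.24, 0.40, 0.24, 0.06]))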
_webapps.3408
I'm considering installing Wordpress at example.com/blog. However, example.com functions mainly as a forum/bulletin board, and thus has a user database. I'd like users of example.com to be able to login to the Wordpress install with their existing credentials. Is that in any way possible? This is further complicated by the fact that example.com is written in .NET and uses a Microsoft SQL Server database.
Is it possible for Wordpress to get user information from an existing, non-MySQL database?
wordpress
I haven't tried it but this plugin seems to be what you need: External DB authentication.
_cs.231
In programming languages, closures are a popular and often desired feature. Wikipedia says (emphasis mine): In computer science, a closure (...) is a function together with a referencing environment for the non-local variables of that function. A closure allows a function to access variables outside its immediate lexical scope. So a closure is essentially an (anonymous?) function value which can use variables outside of its own scope. In my experience, this means it can access variables that are in scope at its definition point. In practice, the concept seems to be diverging, though, at least outside of functional programming. Different languages implement different semantics; there even seem to be wars of opinion about them. Many programmers do not seem to know what closures are, viewing them as little more than anonymous functions. Also, there seem to be major hurdles when implementing closures. Most notably, Java 7 was supposed to include them, but the feature was pushed back to a future release. Why are closures so hard (to understand and) to realise? This is too broad and vague a question, so let me focus it with these interconnected questions: Are there problems with expressing closures in common semantic formalisms (small-step, big-step, ...)? Are existing type systems not suited for closures, and can they not be extended easily? Is it problematic to bring closures in line with a traditional, stack-based procedure translation? Note that the question relates mostly to procedural, object-oriented and scripting languages in general. As far as I know, functional languages do not have any problems.
Problems Implementing Closures in Non-functional Settings
programming languages;semantics
May I direct you to the Funarg problem wikipedia page? At least this is how the compiler people used to refer to the problem of implementing closures. So a closure is essentially an (anonymous?) function value which can use variables outside of its own scope. In my experience, this means it can access variables that are in scope at its definition point. While this definition makes sense, it does not help describe the problem of implementing first-class functions in a traditional runtime-stack based language. When it comes to implementation issues, first-class functions can be roughly divided into two classes: (1) local variables in the function are never used after the function returns; (2) local variables can be used after the function returns. The first case (downwards funargs) is not that hard to implement and can be found in even the older procedural languages, like Algol, C and Pascal. C kind of skirts the issue, since it does not allow nested functions, but Algol and Pascal do the necessary bookkeeping to allow inner functions to reference the stack variables of the outer function. The second case (upwards funargs), on the other hand, requires activation records to be saved outside the stack, in the heap. This means that it is very easy to leak memory resources unless the language runtime includes a garbage collector. While almost everything is garbage collected today, requiring one is still a significant design decision and was even more so some time ago. As for the particular example of Java, if I remember correctly, the main issue was not actually being able to implement closures, but how to introduce them to the language in a way that was not redundant with existing features (like anonymous inner classes) and that did not clash with existing features (like checked exceptions - a problem that is not trivial to solve and that most people don't think of at first). I can also think of other things that make first-class functions less trivial to implement, such as deciding what to do with magical variables such as this, self or super, and how to interact with existing control flow operators, such as break and return (do we want to allow for non-local returns or not?). But in the end, the recent popularity of first-class functions seems to indicate that languages that don't have them mostly do so for historical reasons or due to some significant design decision early on.
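A tiny illustration of the upwards-funarg case (my own example, not from the original answer), in Python: the returned function keeps using a local variable of make_counter after make_counter has returned, so that variable cannot live in a stack frame.

def make_counter():
    count = 0                   # local variable of the enclosing function
    def increment():
        nonlocal count          # captured by the closure
        count += 1
        return count
    return increment            # the closure escapes 'upwards'

counter = make_counter()        # make_counter's activation record is gone...
print(counter(), counter())     # ...but 'count' lives on: prints 1 2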
_unix.284885
What yum repository provides gstreamer-plugins-ugly (and/or other gstreamer plugins) for Red Hat Linux 6? And are repositories for CentOS and Fedora suitable for Red Hat? The reason for this question is that explicitly searching Google for RHEL repositories leads to CentOS and Fedora repositories, but as far as I know, binary-compatibility problems between the same applications in different Linux distributions may occur. Is that correct?
rhel repositories with gstreamer-plugins-ugly
rhel;package management;yum;gstreamer;rhythmbox
For gstreamer-plugins-ugly: you can search http://rpm.pbone.net/index.php3 and see the available repos: rpmfusion.repo, el.repo, nux.repo, atrpms.repo, repoforge.repo (= rpmforge.repo). One of them is compatible with the Red Hat repos: that's RPM Fusion: http://rpmfusion.org/ >>> http://download1.rpmfusion.org/free/el/updates/6/i386/rpmfusion-free-release-6-1.noarch.rpm The others can cause trouble during an update (# yum update).
_codereview.169811
This script is meant to do the tedious work involved in a monthly event I run on a subreddit. It does a search for all posts relevant to the event since the last posting, and creates the bulk of the next month's post.I would most like criticism at an organizational level. My functions ramble together, and it's hard to keep track of what I have, so I'd like suggestions for a better to do that. In the problem domain, the name Piece is not as horribly vague as it seems. Of course, if you're aware of this and still think it's an awful name, I welcome your thoughts.import configparserimport datetimeimport loggingimport reimport picklefrom typing import Optionalimport prawimport praw.modelsDELIMITER = '---' # type: strREDDIT = NoneJAM_MAINTAINER = 'G01denW01f11'def init_reddit(config_pathname: str) -> praw.Reddit: Create global Reddit object from config file config = configparser.ConfigParser() config.read(config_pathname) return praw.Reddit(client_id=config['RedditParams']['client_id'], client_secret=config['RedditParams']['client_secret'], user_agent=config['RedditParams']['user_agent'])def get_reddit() -> praw.Reddit: Get the global Reddit object. Create it if it hasn't been created global REDDIT if not REDDIT: REDDIT = init_reddit('config.ini') return REDDITclass Piece(object): A piece to be listed in the piano jam def __init__(self, composer: str = None, title: str = None, video_url: str = None, score_url: str = None, category: str = None): self.composer = composer # type: str self.title = title # type: str self.video_url = video_url # type: str self.score_url = score_url # type: str self.category = category # type: str def __eq__(self, other: 'Piece') -> bool: return self.composer == other.composer and self.title == other.title def __ne__(self, other: 'Piece') -> bool: return not self == other def __str__(self) -> str: return '{}: [{}]({}) | [Sheet Music]({})'.format(self.composer, self.title, self.video_url.replace(')', '\)'), self.score_url.replace(')', '\)'))class Submission(object): A submission to the month's Jam def __init__(self, username: str = None, url: str = None, title: str = None, piece: Piece = None): self.username = username # type: str self.url = url # type: str self.title = title # type: str self.piece = piece # type: Piece def __eq__(self, other: 'Submission') -> bool: return self.username == other.username and self.piece == other.piece def __ne__(self, other: 'Submission') -> bool: return not self == other def __str__(self) -> str: return '{}\'s {} by [/u/{}]({})'.format(self.piece.composer, self.piece.title, self.username, self.url) def set_piece(self, pieces: [Piece]) -> None: From a list of valid pieces, set the one that matches :param pieces: A list of pieces to choose from self.piece = find_piece_matching_title(pieces, self.title) if not self.piece: logging.warning('Could not find piece for {} | {}'.format(self.title, self.url))def find_piece_matching_title(pieces: [Piece], title: str) -> Optional[Piece]: Use a simple heuristic to tell which piece a submission is from the title :param pieces: Pieces to choose from :param title: Submission title :return: Appropriate piece, if any for piece in pieces: if biggest_word_in_line(piece.title).lower() in title.lower(): return piece return Nonedef format_title(section_title: str) -> str: Apply proper formatting to the title of a section :param section_title: The title of a section to be formatted :return: Formatted title return '**{}**'.format(section_title)class Jam(object): A Piano Jam posting CATEGORIES = ['Jazz', 
'Classical', 'Ragtime', 'Video Game / Anime / Film'] # type: [str] def __init__(self, outline_pathname: str = 'jam_outline.txt'): Create a Piano Jam instance from a given outline file :param outline_pathname: pathname to file with default jam contents self.filename = '' # type: str self.submissions = [] # type: [Submission] self.pieces = [] # type: [Piece] with open(outline_pathname, 'r') as f: self.text = f.read() def __str__(self): submissions_str = '' for submission in self.submissions: submissions_str += str(submission) + '\n\n' pieces_str = '' for piece in self.pieces: pieces_str += str(piece) + '\n\n' return self.text.format(submissions_str, pieces_str) def add_submission(self, submission: Submission): Add a submission to the Jam. Multiple submissions do not get added :param submission: Submission to the Piano Jam :return: None for prior_submission in self.submissions: if submission.username == prior_submission.username and submission.piece == submission.piece: if submission.url != prior_submission.url: logging.warning('User {0} attempted to submit a piece multiple times'.format(submission.username)) return self.submissions.append(submission) def add_piece(self, piece: Piece): if piece not in self.pieces: self.pieces.append(piece) def save(self, filename: str='') -> None: if filename: self.filename = filename if not self.filename: raise ValueError('No filename to save to!') with open(self.filename, 'wb') as f: pickle.dump(self, f) @classmethod def load(cls, filename: str) -> 'Jam': with open(filename, 'rb') as f: jam = pickle.load(f) # type: Jam if type(jam) != Jam: raise TypeError('Tried to load a Jam. Got {}'.format(type(jam))) assert jam.filename == filename return jamdef parse_piece(piece_text: str) -> Piece: Construct a Piece from its string representation. 
Expected format: Composer: [Title](url) | [Sheet Music](sheetUrl) :param piece_text: Line from Piano Jam specifying a Piece to learn piece = Piece() piece.composer = piece_text[:piece_text.index(':')] piece.title = re.findall(re.compile('\[(.*?)\]'), piece_text)[0] # type: str urls = re.findall(re.compile('\((.*?)\)'), piece_text) piece.video_url = urls[0] # type: str piece.score_url = urls[1] # type: str return piecedef parse_pieces(section_text: str) -> [Piece]: Parse all the pieces in a given section pieces = section_text.split('\n')[1:] # First line is the category; discard return (parse_piece(piece_text) for piece_text in pieces if piece_text.strip() != '')def get_pieces_from_jam(jam_text: str) -> [Piece]: Parse all the pieces from a Jam, given the contents of a post :param jam_text: The contents of a Piano Jam posting :return: List of pieces to be used for the Jam sections = jam_text.split(DELIMITER) sections = (section.strip() for section in sections) filtered_sections = [] for section in sections: section = section.strip() for category in Jam.CATEGORIES: category = format_title(category) if section.startswith(category): filtered_sections.append(section) break pieces = [] for section in filtered_sections: pieces.extend(parse_pieces(section)) return piecesdef get_selections_from_url(url: str) -> [Piece]: Parse all the pieces from a jam, given its url :param url: URL to a Piano Jam post :return: List of pieces to be used for the Jam try: post = praw.models.Submission(get_reddit(), url=url) except KeyError: raise KeyError('Could not recognize url {0}'.format(url)) return get_pieces_from_jam(post.selftext)def search_for_submissions(): Search Reddit for posts with [Piano Jam] in title within past month :return: List of urls to posts subreddit = get_reddit().subreddit('piano') results = subreddit.search('[Piano Jam]', sort='new', time_filter='month') return (result for result in results)def filter_submissions(submissions: [praw.models.Submission], jam: praw.models.Submission): return [submission for submission in submissions if '[piano jam]' in submission.title.lower() and datetime.datetime.fromtimestamp(submission.created) > datetime.datetime.fromtimestamp(jam.created)]def find_last_jam() -> praw.models.Submission: candidates = search_for_submissions() for candidate in candidates: if candidate.author == JAM_MAINTAINER and '[' not in candidate.title: return candidate raise ValueError('Could not find last Piano Jam')def biggest_word_in_line(line: str) -> str: words = line.split() length = 0 biggest_word = None for word in words: if len(word) > length: length = len(word) biggest_word = word assert biggest_word return biggest_worddef create_jam() -> [Submission]: Find all Piano Jam submissions since the last posting Log a warning if there are submissions not in the previous Jam. Create Jam from submissions and pickle it for later use. previous_jam = find_last_jam() entries = filter_submissions(search_for_submissions(), previous_jam) submissions = [Submission(entry.author, entry.shortlink, entry.title) for entry in entries] pieces = get_pieces_from_jam(previous_jam.selftext) new_jam = Jam() for submission in submissions: submission.set_piece(pieces) if submission.piece: new_jam.add_submission(submission) new_jam.save('current_jam.txt')
Script to summarize Reddit posts over the past month
python;reddit
It doesn't make a whole lot of sense to have a function with no arguments that modifies a global object. Your init_reddit function is better than your get_reddit function because of this. IMHO you should rethink why you have a function that has more comments than code in it. There may be a more idiomatic way to express that (see find_piece_matching_title, format_title). Classes are good; consider making a Reddit class that either inherits from praw.Reddit or has your reddit instance as a member variable. You could put search_for_submissions and filter_submissions in there. Your parse_piece, parse_pieces, get_pieces_from_jam, etc. functions should be a part of your Piece or Jam objects. If you're using objects to contain your data, it makes sense to have the functions manipulating that data as methods. Overall, I see in your code a whole bunch of top-level functions and objects without a clear indication of how they're supposed to work together. The difficult part in coding is not necessarily writing the individual pieces, but figuring out the simplest (least complected) way for them to interact.
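As a rough sketch of that suggestion (my own illustration, not part of the original review), a wrapper class could own the praw instance and absorb the search helpers; it reuses only the praw calls and config handling already present in the posted script.

import configparser
import praw

class RedditClient:
    # Owns the praw.Reddit instance instead of relying on a module-level global.
    def __init__(self, config_pathname: str = 'config.ini'):
        config = configparser.ConfigParser()
        config.read(config_pathname)
        self._reddit = praw.Reddit(
            client_id=config['RedditParams']['client_id'],
            client_secret=config['RedditParams']['client_secret'],
            user_agent=config['RedditParams']['user_agent'])

    def search_for_submissions(self):
        # Same query as the original free function, now a method.
        subreddit = self._reddit.subreddit('piano')
        return subreddit.search('[Piano Jam]', sort='new', time_filter='month')

The parsing helpers (parse_piece, get_pieces_from_jam, etc.) would move onto Piece and Jam in the same spirit.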
_codereview.94520
Previous iteration.You know, I think this is the fastest I've ever pushed out an update to anything. This is Version 2 of my Brainf**k to Ruby converter, and the generated code looks... Well, like Brainf**k, converted directly to Ruby, with no attempt at making it more readable.I'm looking for any tips on making things more idiomatic, both in the generator and generated code. The nest if/cases really bug me, but I'm not quite sure how to get rid of them, especially since just two characters are blindly replaced. I'd also like advice on making it run faster.bf_to_ruby.rbinput_file, output_file = ARGVcode = IO.read(input_file).tr('^+-<>.,[]', '')open(output_file, File::CREAT | File::WRONLY) do |output| output.puts <<-END.gsub(/^[ \t]*\||\s*#@.*$/, '') |#!/usr/bin/env ruby |class Mem < Hash #@ `Hash` because it's more memory-efficient and allows negative values. | def initialize; super(0); end | def []=(i, val); super(i, val & 255); end |end |data = Mem.new |pointer = 0 END indent_level = 0 code.scan(/(\++)|(\-+)|(<+)|(>+)|([.,\[\]])/) .map do |string| if string[0] next #{' ' * indent_level}data[pointer] += #{string[0].length} elsif string[1] next #{' ' * indent_level}data[pointer] -= #{string[1].length} elsif string[2] next #{' ' * indent_level}pointer -= #{string[2].length} elsif string[3] next #{' ' * indent_level}pointer += #{string[3].length} elsif string[4] case string[4] when '[' ret = #{' ' * indent_level}until data[pointer] == 0 indent_level += 1 next ret #Split it so that it's clear that indent is increased *after* the line when ']' indent_level -= 1 next #{' ' * indent_level}end when ',' next #{' ' * indent_level}data[pointer] = $stdin.readbyte when '.' next #{' ' * indent_level}putc data[pointer] end end end.each { |line| output.puts(line) }endDemoInput:++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.Output:#!/usr/bin/env rubyclass Mem < Hash def initialize; super(0); end def []=(i, val); super(i, val & 255); endenddata = Mem.newpointer = 0data[pointer] += 8until data[pointer] == 0 pointer += 1 data[pointer] += 4 until data[pointer] == 0 pointer += 1 data[pointer] += 2 pointer += 1 data[pointer] += 3 pointer += 1 data[pointer] += 3 pointer += 1 data[pointer] += 1 pointer -= 4 data[pointer] -= 1 end pointer += 1 data[pointer] += 1 pointer += 1 data[pointer] += 1 pointer += 1 data[pointer] -= 1 pointer += 2 data[pointer] += 1 until data[pointer] == 0 pointer -= 1 end pointer -= 1 data[pointer] -= 1endpointer += 2putc data[pointer]pointer += 1data[pointer] -= 3putc data[pointer]data[pointer] += 7putc data[pointer]putc data[pointer]data[pointer] += 3putc data[pointer]pointer += 2putc data[pointer]pointer -= 1data[pointer] -= 1putc data[pointer]pointer -= 1putc data[pointer]data[pointer] += 3putc data[pointer]data[pointer] -= 6putc data[pointer]data[pointer] -= 8putc data[pointer]pointer += 2data[pointer] += 1putc data[pointer]pointer += 1data[pointer] += 2putc data[pointer]You'll notice that the generated code is nearly the same as the last version's generated code, but with a lot of duplicate lines merged and indenting.
Brainf**k to Ruby converter -- v2
ruby;converting;brainfuck
You're correct that nesting a case in an elsif ladder is a bit clunky here, but the two things that jump out at me as making the code difficult to comprehend are: 1) we have to know everything that's going on with the Regexp passed to scan() in order to figure out the intent of the if clauses, and 2) the variable string is not a String, but an Array. You can remove the capture groups from your Regexp to just get an Array of Strings back from scan() (instead of an Array of Arrays of Strings), and figure out what exactly was matched by looking at the strings themselves - which means you don't need a nested case anymore, and makes it a little more obvious what scan() is doing without having to actually parse its argument. Like this:
# example.rb
code = '--+++.,,'
code.scan(/\++|\-+|[.,]/).map { |str|
  case str[0]
  when '+'
    "Plus signs: #{str.length}"
  when '-'
    "Minus signs: #{str.length}"
  when '.'
    'Dot!'
  when ','
    'Comma!'
  end
}.each { |ll| puts ll }
Produces:
$ ruby example.rb
Minus signs: 2
Plus signs: 3
Dot!
Comma!
Comma!
_unix.98478
To help with setting up multi-monitor X, I'm using a simple script to get screen resolution information using xrandr. This was working fine until I attached an amplifier to attempt to use HDMI audio output. Now the xrandr output contains extraneous information about a monitor which is not present. Is there some way to avoid xrandr detecting this device as another monitor? The device is an Onkyo TX-NR509. An obvious workaround is to detach the amplifier every time I start X.
xrandr detects amplifier as monitor
xorg;xrandr;hdmi
null
_unix.373375
As far as I can tell, IPC through shared memory is fastest, but the drawback is that when one process introduces memory corruption (stack/heap corruption due to programming errors), then all bets are off and all processes accessing it might get affected. What is the situation with IPC using sockets or named pipes? If a corrupted process communicates with healthy processes through pipes/sockets, can the other processes also get affected, or is there some kind of data validation in place? For example, would passing an open file descriptor to a potentially corrupted process be safer than using shared files/memory?
Is using POSIX pipes/sockets for IPC memory-safe?
ipc
null
_unix.199377
I have set up a cron job on my server which is supposed to run every minute and store its output in the given file. I have tried a lot and looked at a lot of links, but nothing seems to work. The following is the line I wrote in crontab -e:
* * * * * /root/snmp_codes/snmp/.\/snmpstats.py -f file -g > logfile.log
Can anyone please tell me what mistake I have made?
Crontab giving no result
linux;cron;python
Fix the path so it's correct. Based on your comment it's likely to be /root/snmp_codes/snmp/snmpstats.py. You can also modify the command so that it captures stderr as well as stdout, like this (the 2>&1 attaches stderr to stdout so you get both written to logfile.log):
* * * * * /root/snmp_codes/snmp/snmpstats.py -f file -g > logfile.log 2>&1
_unix.330351
I'm trying to get NodeJS installed on my server. The instructions suggest doing:sudo apt-get updatesudo apt-get install nodejsTo which I get:root@steampunklinode:~# sudo apt-get updateHit http://nginx.org jessie InReleaseHit http://nginx.org jessie/nginx SourcesHit http://nginx.org jessie/nginx amd64 PackagesIgn http://nginx.org jessie/nginx Translation-en_USIgn http://nginx.org jessie/nginx Translation-enReading package lists... Doneroot@steampunklinode:~# sudo apt-get install nodejsReading package lists... DoneBuilding dependency treeReading state information... DoneE: Unable to locate package nodejsI found this post:How to install latest NodeJS on Debian Jessie?I tried the suggestion there, but get an error:root@steampunklinode:~# curl -sL https://deb.nodesource.com/setup | bash -================================================================================================================================================================ SCRIPT DEPRECATION WARNING This script, located at https://deb.nodesource.com/setup, used to install Node.js v0.10, is being deprecated and will eventually be made inactive. You should use the script that corresponds to the version of Node.js you wish to install. e.g. * https://deb.nodesource.com/setup_4.x Node.js v4 LTS Argon (recommended) * https://deb.nodesource.com/setup_6.x Node.js v6 Current Please see https://github.com/nodejs/LTS/ for details about which version may be appropriate for you. The NodeSource Node.js Linux distributions GitHub repository contains information about which versions of Node.js and which Linux distributions are supported and how to use the install scripts. https://github.com/nodesource/distributions================================================================================================================================================================Continuing in 10 seconds (press Ctrl-C to abort) ...## Installing the NodeSource Node.js v0.10 repo...## Populating apt-get cache...+ apt-get updateHit http://nginx.org jessie InReleaseHit http://nginx.org jessie/nginx SourcesHit http://nginx.org jessie/nginx amd64 PackagesIgn http://nginx.org jessie/nginx Translation-en_USIgn http://nginx.org jessie/nginx Translation-enReading package lists... Done## Installing packages required for setup: apt-transport-https...+ apt-get install -y apt-transport-https > /dev/null 2>&1Error executing command, exitingCan someone please point me in the right direction? I must be missing something dumb! Thanks
Setting up node on Debian Jessie
apt;node.js
Your repositories are mis-configured. Since you're running Jessie, your /etc/apt/sources.list needs to have
deb http://httpredir.debian.org/debian jessie main
deb-src http://httpredir.debian.org/debian jessie main
deb http://security.debian.org/ jessie/updates main
deb-src http://security.debian.org/ jessie/updates main
Given your apt-get update output it looks like you've only got Nginx repositories there. If you add the above lines, you'll be able to update again and install Node either using the Debian nodejs package (0.10) or a newer release following the instructions you linked to.
_codereview.12530
I have a scoring system where I want to change an icon's color by changing CSS classes. How can I optimize this jQuery? After testing, I found this to be the only alternative, but I trust there is a simpler method.
// Count scores
$('#bodyScore').text((90 / myarray.length) * bodyScore + '/90');
// Change icon color
var bodyPercent = bodyScore / myarray.length;
var successRatio = 0.9;
var warningRatio = 0.5;
if (bodyPercent >= successRatio) {
  $(".bkf").removeClass("label-success");
  $(".bkf").removeClass("label-warning");
  $(".bkf").removeClass("label-important");
  $(".bkf").addClass("label-success");
}
else if (bodyPercent >= warningRatio) {
  $(".bkf").removeClass("label-success");
  $(".bkf").removeClass("label-warning");
  $(".bkf").removeClass("label-important");
  $(".bkf").addClass("label-warning");
}
else {
  $(".bkf").removeClass("label-success");
  $(".bkf").removeClass("label-warning");
  $(".bkf").removeClass("label-important");
  $(".bkf").addClass("label-important");
}
removeClass and addClass optimization in a scoring system
javascript;jquery
Both the removeClass and addClass methods will accept a space-separated list of class names to add/remove, and both can be chained. You can cache the selector so you don't have to repeatedly search the DOM. And since you remove the same three classes in each branch of execution, you can move that outside of the if/else if/else:
var bkf = $(".bkf").removeClass("label-success label-warning label-important");
if (bodyPercent >= successRatio) {
  bkf.addClass("label-success");
}
else if (bodyPercent >= warningRatio) {
  bkf.addClass("label-warning");
}
else {
  bkf.addClass("label-important");
}
_softwareengineering.253695
I, for the life of me, cannot find any literature on this, simply because I have no clue what it is called. I want to learn how to implement a payment option that consists of paying with your credit/debit card directly, without the use of a third party like PayPal. This is what I am talking about. Can you please give me some information about what this payment method is called and possibly some articles I can read up on? Thank you!
Billing from card directly
php;billing
You'll need to have an account with a payment processor company. If not PayPal, there are others; Moneris is one. Usually, your payment processing partner will send you an API which contains the code necessary to submit a payment. You use the API to make calls to the payment processor, but you don't submit raw HTTP requests to them. You might also want to look into PCI Compliance.
_unix.152098
Instead of logging me in, PAM greets me with the message Cannot make/remove an entry for the specified session after I enter the password. What entry is it talking about (and what session)?The string with the error message is found in libpam.so.0(.83.1).My system is based on binaries from Fedora release 20 (Heisenburg).How can I troubleshoot PAM to figure out what is needed to successfully login?I have no syslog (and no persistent disk, only an initramfs).Updates:SELinux is Disabled.I am more than willing to replace the entire PAM config with something simple that allows login (normal user and root) on the virtual consoles only.Source code from Linux-PAM-1.1.8, libpam/pam_strerror.c reveals that the message comes from the error code PAM_SESSION_ERR, which can be caused by all sorts of internal problems, such as memory allocation error or failure to locate the users home directory. So much for trying to interpret the error message. :-(Below are my config files based on the comment indicating /etc/pam.d/login as a starting point:(I have also tried removing all lines containing pam_loginuid.so without any noticeable difference)/etc/pam.d/login:auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.soauth substack system-authauth include postloginaccount required pam_nologin.soaccount include system-authpassword include system-authsession required pam_selinux.so closesession required pam_loginuid.sosession optional pam_console.sosession required pam_selinux.so opensession required pam_namespace.sosession optional pam_keyinit.so force revokesession include system-authsession include postlogin-session optional pam_ck_connector.so/etc/pam.d/postlogin:session [success=1 default=ignore] pam_succeed_if.so service !~ gdm* service !~ su* quietsession [default=1] pam_lastlog.so nowtmp showfailedsession optional pam_lastlog.so silent noupdate showfailed/etc/pam.d/system-auth:auth required pam_env.soauth sufficient pam_fprintd.soauth sufficient pam_unix.so nullok try_first_passauth requisite pam_succeed_if.so uid >= 1000 quiet_successauth required pam_deny.soaccount required pam_unix.soaccount sufficient pam_localuser.soaccount sufficient pam_succeed_if.so uid < 1000 quietaccount required pam_permit.sopassword requisite pam_pwquality.so try_first_pass local_users_only retry=3 authtok_type=password sufficient pam_unix.so sha512 shadow nullok try_first_pass use_authtokpassword required pam_deny.sosession optional pam_keyinit.so revokesession required pam_limits.so-session optional pam_systemd.sosession [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uidsession required pam_unix.soI have these shared PAM-related libraries:libpam_misc.so.0libpam.so.0pam_access.sopam_console.sopam_deny.sopam_env.sopam_fprintd.sopam_gnome_keyring.sopam_keyinit.sopam_lastlog.sopam_limits.sopam_localuser.sopam_loginuid.sopam_namespace.sopam_nologin.sopam_permit.sopam_pkcs11.sopam_pwquality.sopam_rootok.sopam_securetty.sopam_selinux_permit.sopam_selinux.sopam_sepermit.sopam_succeed_if.sopam_systemd.sopam_timestamp.sopam_unix_acct.sopam_unix_auth.sopam_unix.sopam_xauth.soas well as these that are referenced by the above shared libraries (according to 
ldd):libattr.so.1libaudit.so.1libcap.so.2libcrack.so.2libcrypt.so.1libc.so.6libdbus-1.so.3libdbus-glib-1.so.2libdl.so.2libffi.so.6libfreebl3.solibgcc_s.so.1libgio-2.0.so.0libglib-2.0.so.0libgmodule-2.0.so.0libgobject-2.0.so.0liblzma.so.5libnsl.so.1libnspr4.solibnss3.solibnssutil3.solibpcre.so.1libpcre.so.3libplc4.solibplds4.solibpthread.so.0libpwquality.so.1libresolv.so.2librt.so.1libselinux.so.1libsmime3.solibssl3.solibutil.so.1libz.so.1
What does Cannot make/remove an entry for the specified session mean?
login;pam;debugging
null
_unix.342426
When we do a find in Linux, I am guessing the kernel will store the result in the buffer/cache. Let's say that after an hour some changes to the folders and files have happened; my question is, when we do the next find:
i) Will the kernel return the stale old result stored in the buffer/cache?
ii) How does the kernel know that there have already been changes to the folders and files, so that it can't reuse the result from the buffer/cache? Does it do a comparison between the new result and the old result? Wouldn't that take even more time? If not, how does the kernel make such an intelligent choice?
iii) Do we ever need to worry about dropping the cache (i.e. echo 3 > /proc/sys/vm/drop_caches) to get the latest result of an operation such as find? Or are there scenarios where we need to do such a thing? (Although I feel we shouldn't need to, I just want to make sure.)
iv) Let's say there is a scenario where some cron script runs a command (maybe a grep on a very huge file) that ends up taking most of the server's resources. We kill that process and truncate the huge file. The cron job then runs again after a few minutes. Do we need to drop the buffer/cache in order to prevent the next grep from finding the huge file's contents still cached and hanging the server again? (Sorry if this question sounds too silly to you.)
How does the Linux kernel know when to use the memory buffer/cache?
linux kernel;memory;cache;buffer
null
_unix.332862
I am attempting to install git on Debian 8.6 Jessie and have run into some dependency issues. What's odd is that I didn't have any issues the few times I recently installed Git in a VM while I was getting used to Linux.
apt-get install git
results in:
The following packages have unmet dependencies:
 git : Depends: liberror-perl but is not installable
       Recommends: rsync but it is not installable
E: Unable to correct problems, you have held broken packages.
UPDATE: my sources.list
Seems to be an issue with my system. I can no longer properly install anything. I'm getting dependency issues installing things like Pulseaudio, which I've previously installed successfully a few days ago.
Unmet dependencies while installing Git on Debian
debian;apt;package management;git;dependencies
You should edit your sources.list by adding the following line:
deb http://ftp.ca.debian.org/debian/ jessie main contrib
Then update your packages and install git:
apt-get update && apt-get upgrade && apt-get dist-upgrade
apt-get -f install
apt-get install git
Edit: the packages git, liberror-perl and rsync can be downloaded from the main repo; because you don't have the main repo in your sources.list, you cannot install git and its dependencies. Your sources.list should be (with non-free packages):
deb http://ftp.ca.debian.org/debian/ jessie main contrib non-free
deb-src http://ftp.ca.debian.org/debian/ jessie main contrib non-free
deb http://security.debian.org/ jessie/updates main contrib non-free
deb-src http://security.debian.org/ jessie/updates main contrib non-free
deb http://ftp.ca.debian.org/debian/ jessie-updates main contrib non-free
deb-src http://ftp.ca.debian.org/debian/ jessie-updates main contrib non-free
deb http://ftp.ca.debian.org/debian/ jessie-backports main contrib non-free
_webapps.98208
OK, so I have six main sheets in question...
Overall Team Assignments
Design Team Assignments
Mechanical Team Assignments
Programming Team Assignments
Communications Team Assignments
Media Team Assignments
I want to take data from sheets two through six, from cells B3:D, and insert it into Overall Team Assignments in C3:E. I also want to sort it by the dates from column D. This seems like something that should be fairly simple with QUERY, but everything I try either throws an error or doesn't physically change anything. Here's the example spreadsheet that's uneditable, just in case something happens to the editable one. Here's the editable one for anyone that wants a copy, or a playground to try your hand at.
EDIT: This question was referenced, raising the question of whether or not my question was a duplicate. The difference is that I need to sort the data that is being queried as well (that is, sorted by a certain column within each sheet that is being queried from).
EDIT2: From the reference in the previous edit, this... ={filter(A:A, len(A:A)); filter(B:B, len(B:B)); filter(C:C, len(C:C))} ...is how to filter out and stack the results of the cells that I want. The problem is that I don't want one of the columns that separates the first part of the data that I want from the second part. So if I grab the data in two separate cells (one for the data before the column that I don't want and one for the data after said column), then I have no way to ensure that there isn't some issue with only some of the cells in each row being filled (on the spreadsheet that is being grabbed from), which would mean that if the formula removed empty cells, it could misalign rows in the transfer from one sheet to another. So I need the formula to not match up things from separate rows when it cuts out empty space (i.e. it needs to make sure that the rows were not altered from one sheet to another).
EDIT3: OK, now that I've finally had some time to work on this again, I've gotten the chance to try FILTER, but it can only grab a single row or column at a time. So that means that the results would be stacked up in each column without any way to sort said columns retroactively (after the data has been updated if something has changed, that is) other than manually. In short, FILTER won't work for what's being done. But I did figure out what will work. See my answer below.
Google Sheets querying and sorting data from multiple sheets
google spreadsheets
To sort a query you simply have to wrap it in SORT.
=SORT( QUERY( {SheetOne!C3:F;SheetTwo!C3:F;SheetThree!F4:G;SheetFour!F4:G} ) , 1, TRUE)
For example, the above formula takes the data from the sheets inside the query and then sorts it in ascending order by column 1. This also works well to solve the problem of row splitting, where some rows get split up if you use different queries to pull from rows you intend to keep uniform.
_webapps.36278
I have a GitHub account and I follow some of my friends and other people whose projects interest me. When following someone, I was expecting to see their public activity in my news feed, but it is not the case. I am used to following someone on Twitter and I thought GitHub would work the same way. How exactly does following people work on GitHub?
How does following work on GitHub?
github;followers
After contacting GitHub support about this, here is what they responded with: Just to clarify things, I did a bit more digging and found that if you follow a user you will receive the following notifications:
when they follow users
when they star repositories
when they fork or create a public repository
So it seems like you don't get notifications for every action they make. For that, you have to either:
Check their public activity on their profile, or
Watch a specific repo to get notifications for commits, etc.
While I understand that you would get tons of notifications if every action were displayed in the news feed, I'd still like us to at least have control over which kinds of notifications appear in the feed.
_codereview.41503
Suggestions for improving coding style are greatly appreciated.import qualified Data.List as Limport qualified Data.Map.Strict as Mimport qualified Data.Vector as Vtype Queue a = ([a], [a])emptyQueue = ([], [])pushListToAnother fromLst toLst = L.foldl' (\ys x -> (x:ys)) toLst fromLstenqueue :: Queue a -> a -> Queue aenqueue (inList, outList) x = ((x:inList), outList)dequeue :: Queue a -> Maybe (a, Queue a)dequeue (inList, outList) = case outList of (y:ys) -> Just (y, (inList, ys)) [] -> if (null inList) then Nothing else dequeue ([], reverse inList)massEnqueue :: Queue a -> [a] -> Queue amassEnqueue (inList, outList) items = ((pushListToAnother items inList), outList)-- consider moving the above Queue code into a separate module.type Grid a = V.Vector (V.Vector a)type Indices = (Int, Int)access grid (x, y) = (grid V.! x) V.! ymassInsert :: Ord k => [(k, v)] -> M.Map k v -> M.Map k vmassInsert elems theMap = L.foldl' (\m (k, v) -> M.insert k v m) theMap elemsvalidAndTraversable :: (a -> Bool) -> Grid a -> Indices -> BoolvalidAndTraversable traversability grid xy@(x, y) = let xbound = V.length grid in let ybound = V.length (V.head grid) in let withinBounds = (x >= 0) && (x < xbound) && (y >= 0) && (y < ybound) in withinBounds && (traversability (access grid xy))getPath :: Ord a => M.Map a a -> a -> a -> [a]getPath visitedFromMap start current = pathHelper visitedFromMap start current [] where pathHelper prevIndicesMap start current path = let newPath = (current:path) in if current == start then newPath else case (M.lookup current prevIndicesMap) of Nothing -> [] Just e -> (pathHelper prevIndicesMap start e) $! newPathmazeSolverLoop :: Indices -> (Indices -> a -> Bool) -> (a -> Bool) -> Grid a -> Queue Indices -> M.Map Indices Indices -> [Indices]mazeSolverLoop start isFinish traversability mazeGrid queue visitedFromMap = let item = dequeue queue in case item of Nothing -> [] Just (currentXY, rest) -> if isFinish currentXY (access mazeGrid currentXY) then getPath visitedFromMap start currentXY else let (x, y) = currentXY in let potentialNeighbors = [(x+1, y), (x, y+1), (x-1, y), (x, y-1)] in let isVisitable = \xy -> (validAndTraversable traversability mazeGrid xy) && (M.notMember xy visitedFromMap) in let unvisitedNeighbors = filter isVisitable potentialNeighbors in let newVisitedFromMap = massInsert (map (\xy -> (xy, currentXY)) unvisitedNeighbors) visitedFromMap in let newQueue = massEnqueue rest unvisitedNeighbors in (mazeSolverLoop start isFinish traversability mazeGrid newQueue) $! 
newVisitedFromMap-- the solving functionsfindUnknownFinish :: Indices -> (Indices -> a -> Bool) -> (a -> Bool) -> Grid a -> [Indices]findUnknownFinish start isFinish traversability grid = let validityPredicate = validAndTraversable traversability grid in if validityPredicate start then let m = M.singleton start start in let q = enqueue emptyQueue start in mazeSolverLoop start isFinish traversability grid q m else []findKnownFinish :: Indices -> Indices -> (a -> Bool) -> Grid a -> [Indices]findKnownFinish start finish traversability grid = let isFinish = (\xy _ -> xy == finish) in findUnknownFinish start isFinish traversability gridescapeMaze :: Indices -> (a -> Bool) -> Grid a -> [Indices]escapeMaze start traversability grid = let isOnBounds = \b x -> (x == 0) || (x == (b-1)) in let xbound = V.length grid in let ybound = V.length (V.head grid) in let isFinish = \(x, y) _ -> (isOnBounds xbound x) || (isOnBounds ybound y) in findUnknownFinish start isFinish traversability gridescapeMazeV2 :: Indices -> (a -> Bool) -> Grid a -> [Indices]escapeMazeV2 start traversability grid = let isOnBounds = \b x -> (x == 0) || (x == (b-1)) in let xbound = V.length grid in let ybound = V.length (V.head grid) in let isFinish = \(x, y) _ -> (isOnBounds xbound x) || (isOnBounds ybound y) in let acceptableFinish = \xy a -> (isFinish xy a) && (xy /= start) in findUnknownFinish start acceptableFinish traversability gridmaze1 = V.fromList [(V.fromList [1,1,1,1,1,1,0]), (V.fromList [0,0,0,0,0,0,0]), (V.fromList [1,1,1,1,1,1,0]), (V.fromList [0,0,0,0,0,0,0]), (V.fromList [0,1,1,1,1,1,1]), (V.fromList [0,0,0,0,0,0,0]), (V.fromList [1,1,1,0,1,1,1]), (V.fromList [0,0,0,0,0,0,0]), (V.fromList [0,1,1,1,1,1,0])]show_solve_maze1 = let solve_maze1 = findKnownFinish (1,0) (8,6) (\a -> a == 0) maze1 in mapM_ (putStrLn.show) solve_maze1maze2 = V.fromList (map V.fromList [xxxxxxxxxxxxxxxxxxxxx, x x x, xx xxxx xxxxxx xxx x, x x x x xx x, x xxxxx xxxxxxxx x x, x x xx x, xxxxxx xxxxx xxxx x, x xxxx x x x, x xx x x x x x x xxx, x xx x x x x x x, xx x x x xxx xxx xxx, x xx x x, xxxx x xxxxxx xxxx x, x xx x x x x, xxxxxx x x xxxxx xxx, x xx x x x x, xxx x xx xxx xxx x x, x x x x x x, x x xxxxxx xxxx xxx x, x x ox, x xxxxxxxxxxxxxxxxxxx])show_solve_maze2 = let solve_maze2 = findUnknownFinish (1,1) (\_ a -> a == 'o') (\a -> a /= 'x') maze2 in mapM_ (putStrLn.show) solve_maze2show_solve_maze2v2 = let solve_maze2 = escapeMaze (1,1) (\a -> a /= 'x') maze2 in mapM_ (putStrLn.show) solve_maze2maze3 = V.fromList (map V.fromList [###########, # #, # ##### # #, # # #, ### # ### #, # # #, # # ### ###, # # # , # ### # # #, # # #, ###########])show_solve_maze3_v1 = let solve_maze3_v1 = escapeMazeV2 (3,0) (\a -> a /= '#') maze3 in mapM_ (putStrLn.show) solve_maze3_v1show_solve_maze3_v2 = let solve_maze3_v2 = escapeMazeV2 (7,10) (\a -> a /= '#') maze3 in mapM_ (putStrLn.show) solve_maze3_v2
Solve a maze in the form of a 2D array using BFS - Haskell
haskell;breadth first search
null
_webmaster.98792
I have the following line in my log file:
xxx.xxx.xxx.xxx;[05/Aug/2016:00:00:48 +0200];GET /extensions/css/example.css?rev=example HTTP/1.1;200;66931;http://www.example.com/page.html
Please explain what the part between the date and the server response means. I'm a bit perplexed about it - it isn't a redirect URL, but what is it then?
Edit: I've come to the conclusion that I completely misformulated my question - sorry for that. I don't mean to ask what the individual URL parts, like rev=example etc., mean. I'm rather interested in why there are two kinds of URL in the log entry: the first beginning with GET, and the second at the entry's end. I thought at first it was a kind of redirect - but no, the response code is 200. So what do the two URLs / paths in this log entry mean?
What does the part between the date and the server answer mean in Apache's log?
apache log files
The first URL is the file that was accessed, while the second one is the referrer, i.e. the file/page that made the browser request the first URL. You can configure what shows up in your log files, typically by modifying the 'LogFormat' lines in /etc/apache2/apache2.conf. More information about what you can log: http://httpd.apache.org/docs/current/mod/mod_log_config.html
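For illustration, this is the stock "combined" format from the Apache documentation (not necessarily the exact format this server uses, since it logs with semicolons): %r is the request line (the GET ... part) and %{Referer}i is the referrer URL that shows up at the end of the entry.

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
CustomLog /var/log/apache2/access.log combined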
_softwareengineering.323270
I am trying to solve this exercise. It is about reimplementing the map function in Haskell for learning purposes. I found a solution which doesn't traverse all the elements of the list at each iteration (it's a simple linked list, so accessing the last element traverses the whole list), but I haven't found one which is tail recursive.

accumulateRec :: (a -> b) -> [a] -> [b]
accumulateRec func [] = []
accumulateRec func (h:t) = (func h) : accumulateRec func t

Is there a way to implement map in a tail-recursive way without traversing the whole list at each iteration?

PS: exercism.io is an awesome way to learn a new language.
Implementing map with tail recursion
haskell;recursion;map
Tail recursion is not a good idea in Haskell with list functions, because tail recursion prevents lazy evaluation from returning a partial result.

But anyway, to answer your question, it is possible to write a reversed map function (like map except the order of elements is reversed) that is tail-recursive and does not go through the list each step. It maintains an accumulator which is the list of results so far (backwards), and for each new element in the input, it prepends the result to the accumulator (which is why it ends up backwards).

reverseMap :: (a -> b) -> [a] -> [b]
reverseMap func = helper []
  where helper acc []    = acc
        helper acc (h:t) = helper (func h : acc) t

Of course, since the results come out backwards, you need to reverse them again, and since reverse is also tail-recursive, the whole operation is tail-recursive.

myMap :: (a -> b) -> [a] -> [b]
myMap func = helper []
  where helper acc []    = reverse acc
        helper acc (h:t) = helper (func h : acc) t
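For what it's worth, an illustrative GHCi session (not part of the original answer) shows how the two versions behave:

ghci> reverseMap (*2) [1,2,3]
[6,4,2]
ghci> myMap (*2) [1,2,3]
[2,4,6]
ghci> map (*2) [1,2,3]
[2,4,6]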
_softwareengineering.7859
As a solo developer, I think I'm using an Agile-like process, but I'd like to compare what I'm doing to real Agile and see if I can improve my own process.Is there a book out there that's the de-facto standard for describing best practices, methodologies, and other helpful information on Agile? What about that book makes it special?
Is there a canonical book on Agile?
agile;books
Is there a canonical book?There is the agile manifesto, but for a canonical book?No. There are lots of books out there.Specific book recommendations:Agile Software Development, Principles, Patterns, and Practices by Robert C. MartinAgile Software Development, Principles, Patterns, and Practices. This is focused on developer practices and coding and is a must read for any developer serious about agile software development. There is also a C# version of the book that he and his son Micah wrote, so if you are a .NET developer, that version might be the one for you.The art of Agile Development by James ShoreFor an insight into overall agile project practices look at The Art of Agile by James Shore & Shane Warden. It's focussed on XP practices (but that's really because XP is where all the specific developer practices are defined), but has a big picture focus on how Agile projects work.A great thing about this book is that James Shore is publishing the whole text on his website for free, so you can try before you buy.Practices of an Agile Developer: Working in the Real World by Subramaniam and HuntPractices of an Agile Developer: Working in the Real WorldScrum and XP from the Trenches by Henrik KnibergIt's a great book for getting a feel for how an agile team works, and it it's a very quick read (couple of hours). I give it to new staff in my organisation - technical and non-technical - and I've had consistently positive feedback.AmazonExtreme Programming Explained by Kent BeckProbably the oldest book I can remember which helped make Agile principles popular. Agile is fast becoming a buzz word in the world of Tech. I feel Extreme Programming (XP) is a good place to start before the term Agile just seems to lose meaning.AmazonAgile Estimating and Planning by Mike CohnFor the Agile process - look to Mike Cohn's Agile Estimating and Planning - bearing in mind that it's Scrum-centric.Cohn covers a lot of the basics as well as some of the things new Scrum teams often struggle with - estimation using Story Points vs. Ideal days, what do do if you fail a story in a sprint, when to re-estimate/size and when not to, etc.He also goes into some really interesting stuff that's mainly the domain of a Product Owner - things like how to assess and prioritize features, etc.The Art of Unit Testing by Roy OsheroveOsherove presents a very pragmatic approach to unit testing. Presents a good approach on how to refactor code to become more testable, how to look for seams, etc. It is a .Net centric book, however.AmazonThe Agile Samurai by Jonathan RasmussonJust purchased this myself and found it to be a refreshing look on how to get started with agile. 
Amazon Alistair Cockburns book on his Crystal methodologies is worth while reading - partly because it gives you an alternative the the usual Scrum methods, and partly because he was one of the original guys who came up with Agile in the first place, so I hope he know what he's talking about.Crystal is an interesting methodology as it scales from small teams to very large ones, he describes the changes required to make agile work in these different environments.Unsorted books mentionedAgile Adoption Patterns: A Roadmap to Organizational Success by Amr ElssamadisyAgile and Iterative Development: A Managers Guide by Craig LarmanAgile Estimating and Planning by Mike CohnAgile Project Management: Creating Innovative Products by Jim HighsmithAgile Retrospectives: Making Good Teams Great by Esther Derby and Diana LarsenAgile Software Development by Alistair CockburnAgile Software Development with Scrum by Ken Schwaber and Mike BeedleBecoming Agile: ...in an imperfect world by Greg Smith and Dr. Ahmed SidkyThe Business Value of Agile Software Methods: Maximizing Roi with Just-In-Time Processes and Documentation by David F. Rico, Hasan H. Sayani, and Saya SoneCollaboration Explained by Jean TabakaContinuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation by Humble and FarleyCrystal Clear: A Human-Powered Methodology for Small Teams by Alistair CockburnEncyclopedia of Software Engineering edited by Phillip A. LaplanteFearless Change by Linda Rising and Mary Lynn MannsGrowing Object-Oriented Software, Guided by Tests Freeman and PryceInnovation Games: Creating Breakthrough Products Through Collaborative Play by Luke HohmannLean Software Development An Agile Toolkit for Software Development Managers by Mary and Tom PoppendieckLean Solutions by Jim Womack and Dan JonesLean Thinking by Jim Womack and Dan JonesManaging Agile Projects by Sanjiv AugustineManaging the Design Factory by Donald G. ReinertsenPlanning Extreme Programming by Kent Beck and Martin FowlerScaling Lean & Agile Development: Thinking and Organizational Tools for Large-Scale Scrum by Craig Larman and Bas VoddeScrum Pocket Guide: A Quick Start Guide to Agile Software Development by Peter SaddingtonThe Software Project Manager's Bridge to Agility by Michele Sliger and Stacia BroderickToday and Tomorrow by Henry Ford (From 1926)User Stories Applied by Mike CohnBook listsAgile Design Recommended Reading
_unix.295340
Oracle Linux Server ships with the more utility but not less, presumably because more is the smaller of the two. How can one install less on this RPM-based distro?
How to install less on Oracle Linux Server
less;oracle linux
null
_webmaster.106665
I'm trying to set up a (.tk) website for a school project using 000WebHost, but at the same time I want to integrate CloudFlare DDoS protection into my website as well. The two ways to hook up a domain name to 000WebHost are to either 1) add a CNAME record that points to the free subdomain 000WebHost provides (your-domain.000webhostapp.com) or 2) change your DNS nameservers to 000WebHost's own. But in order to use CloudFlare you need to either change your nameservers to Cloudflare's own or add a CNAME record to CloudFlare themselves. The thing is that the registrar for .tk domains (Freenom [sorry, can't post a third link because I have under 10 rep]) only allows you to use CNAME records or different nameservers, you can't have both different namservers and CNAME records at the same time. Ideally, I would have a CNAME pointed at 000webhostapp.com and my nameservers pointed at Cloudflare, but I can't do that due to said restrictions. So my question is: is there some way to circumvent Freenom's restrictions and use a CNAME and different nameservers at the same time, or should I jump ship to a Cloudflare Partner's web hosting service i.e. Free Virtual Servers (sorry, again I can't post a link) so I can just activate CloudFlare through cPanel without changing the nameservers or adding CNAME records?
How to use CloudFlare and web host at same time?
dns;nameserver;cname;cloudflare
You can't have both different nameservers and CNAME records at the same host - even that sentence doesn't really make sense. By changing the nameservers you are shifting the DNS to that host, and that is where you will have to set up the CNAME.
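In zone-file terms the two mutually exclusive setups look roughly like this (all names here are placeholders, not real records):

; option 1: delegate the whole domain to the DNS provider's nameservers
example.tk.        IN  NS     ns1.cdn-provider.example.
example.tk.        IN  NS     ns2.cdn-provider.example.

; option 2: keep DNS at the registrar and point the host at the web host
www.example.tk.    IN  CNAME  your-domain.000webhostapp.com.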
_unix.18352
I have 2 monitors which can rotate physically. They are working fine right now as non-mirrored, dual monitors.I went to System -> Preference -> Monitors and tried to rotate the screens. As soon as I click Apply, I appear at the log in screen as if I just logged out.Does anybody know what might be going on?Here are some lines that might be relevant in /var/log/Xorg.0.log.(--) PCI:*(0:1:0:0) 1002:9552:1458:21ac ATI Technologies Inc M92 LP [Mobility Radeon HD 4300 Series] rev 0, Mem @ 0xd0000000/268435456, 0xfe9f0000/65536, I/O @ 0x0000c000/256, BIOS @ 0x????????/131072...(--) RADEON(0): Chipset: ATI Mobility Radeon 4300 Series (ChipID = 0x9552)...(II) RADEON(0): RandR 1.2 enabled, ignore the following RandR disabled message.(--) RandR disabledThere's also the gdm3 service running if that helps.
Cannot rotate screen in Debian Squeeze
debian;x11
null
_unix.386804
So a tool just told me that two of my SSL certificates expired, namely: /etc/ssl/certs/ca-certificates.crt and /etc/ssl/certs/ssl-cert-snakeoil.pem.What should I do to fix this problem? Remove those certificates? If so how? I'm using Debian 9.1 with KDE.
SSL certificates ca-certificates.crt and ssl-cert-snakeoil.pem expired - what should I do?
debian;security;ssl;certificates
null
_unix.147242
If I want to kill a process as carefully and politely as possible, which signals should I use in a kill command, and in which order? I would like to give the program time to clean up if it wants to, so just sending a SIGTERM right away might be too harsh, I think? I'll use SIGKILL (-9) last, that's clear. But which one should I start with? SIGHUP? Which signals are just a waste of time?

The relevant signals for reference, from man 7 signal:

Signal    Value   Action   Comment
SIGHUP      1     Term     Hangup detected on controlling terminal or death of controlling process
SIGINT      2     Term     Interrupt from keyboard
SIGQUIT     3     Core     Quit from keyboard
SIGKILL     9     Term     Kill signal
SIGPIPE    13     Term     Broken pipe: write to pipe with no readers
SIGTERM    15     Term     Termination signal
How to kill - softly?
process;kill;zombie process
null
_cstheory.18368
Consider the deterministic (resp. non-deterministic) one-way finite automaton that is defined in the usual way, except that it has k heads and in each step can decide which head to move. (It is allowed to run until all heads reach the end-marker of the input.) These automata are denoted by k-DFA (resp. k-FA), and it was shown in several papers that k+1 heads are better than k, i.e., there is a language that can be recognized only with more heads. Probably the simplest of these arguments is by Yao and Rivest (http://people.csail.mit.edu/rivest/pubs/YR78.pdf).

However, notice that if we allow the k-headed automaton to read the input k+1 times, then it can also recognize the language given as a counterexample. (Here define reading t times as you like - when the first reading is finished, start the second one, etc., OR run the machine in parallel t times from t different starting states and then take some boolean function of the final states.)

So my question: Is there a language that can be recognized by a (k+1)-headed automaton but by no k-headed automaton that is allowed to read the input t times? (Here t can depend on the language but not on the input.)

Note: Please do not link me to papers asking if I have seen them! I have read many related things...
Are k+1 heads better than k for multiread finite automata?
automata theory;dfa;hierarchy theorems
null
_softwareengineering.134287
I'm looking for a programming judge system which supports pitting contestants' programs against each other in matches. The format could be, for example, tournament style or chess-style ranking, but this isn't that important. A good example would be Google's AI Challenge (http://aichallenge.org/).

The only systems I've found so far are regular programming judges, i.e. those that can check whether a program passes or fails a given problem, like DOMJudge.

Do you know of any systems like this?
Programming judge with versus system
algorithms
You could have a look at Caia. It is used by the Dutch informatics olympiad to organize programming competitions where the submitted programs compete against each other.
_codereview.32109
Ok, code reviewers, I want you to pick my code apart and give me some feedback on how I could make it better or more simple. public final class StringValueOf { private StringValueOf () {} // note that int max value is 10 digits final static int [] sizeTable = { 9, 99, 999, 9999, 99999, 999999, 9999999, 99999999, 999999999, Integer.MAX_VALUE }; private final static char[] DigitOne = { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', }; private final static char[] DigitTens = { '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '1', '1', '1', '1', '1', '1', '1', '1', '1', '1', '2', '2', '2', '2', '2', '2', '2', '2', '2', '2', '3', '3', '3', '3', '3', '3', '3', '3', '3', '3', '4', '4', '4', '4', '4', '4', '4', '4', '4', '4', '5', '5', '5', '5', '5', '5', '5', '5', '5', '5', '6', '6', '6', '6', '6', '6', '6', '6', '6', '6', '7', '7', '7', '7', '7', '7', '7', '7', '7', '7', '8', '8', '8', '8', '8', '8', '8', '8', '8', '8', '9', '9', '9', '9', '9', '9', '9', '9', '9', '9', }; private static int stringSize(int x) { for (int i = 0; ; i++) { if (x <= sizeTable[i]) { return i + 1; } } } private static void getChars (char[] buf, int size, int i) { int charPos = size - 1; if (i < 0) { i = -i; } while (i >= 10) { int r = i % 100; i = i / 100; buf[charPos--] = DigitOne[r]; buf[charPos--] = DigitTens[r]; } if (i > 0) { buf[charPos--] = DigitOne[i]; } if (charPos == 0) { buf[charPos] = '-'; } } public static String valueOf(int i) { if (i == Integer.MAX_VALUE) { return -2147483648; } int size = (i < 0) ? stringSize(-i ) + 1 : stringSize(i); char[] buf = new char[size]; buf.toString(); getChars(buf, size, i); /** * There are 2 ways to convert a char into string. * 1. buf.toString() * 2. String(buf) * * but we should use String(buf) because: * 1. Mostly buf.toString would internally call String(buf) * 2. Integer class uses new String. */ return new String(buf); } public static void main(String[] args) { System.out.println(valueOf(101)); System.out.println(valueOf(-2010)); } }
Implement String's valueOf function, code review request
java
I would replace (just to reduce size of the source-codeprivate final static char[] DigitOne = { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', };private final static char[] DigitTens = { '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '1', '1', '1', '1', '1', '1', '1', '1', '1', '1', '2', '2', '2', '2', '2', '2', '2', '2', '2', '2', '3', '3', '3', '3', '3', '3', '3', '3', '3', '3', '4', '4', '4', '4', '4', '4', '4', '4', '4', '4', '5', '5', '5', '5', '5', '5', '5', '5', '5', '5', '6', '6', '6', '6', '6', '6', '6', '6', '6', '6', '7', '7', '7', '7', '7', '7', '7', '7', '7', '7', '8', '8', '8', '8', '8', '8', '8', '8', '8', '8', '9', '9', '9', '9', '9', '9', '9', '9', '9', '9',};by private final static char[] DigitOne = (0123456789+0123456789+0123456789+0123456789+0123456789 // +0123456789+0123456789+0123456789+0123456789+0123456789) .toCharArray();};private final static char[] DigitTens = { (0000000000+ 1111111111+2222222222+3333333333+4444444444 // +5555555555+6666666666+7777777777+8888888888+9999999999) .toCharArray(); };Or even generate the constants with a static method using loops.If you want to avoid speed loss by the loop, unroll it completely.There are only 10 possible cases for the length. With a binary-decision you can determine the length with 4 if-statements and than convert the value without any loop at all. Code would get a bit long, but also very fast.
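If you wanted to go the "generate the constants with a loop" route mentioned above, a sketch could look like this (the class and field names are mine, not from the original code):

final class DigitTables {
    // Built once at class-load time instead of spelling out 100 literals per table.
    static final char[] DIGIT_ONES = new char[100];
    static final char[] DIGIT_TENS = new char[100];

    static {
        for (int i = 0; i < 100; i++) {
            DIGIT_ONES[i] = (char) ('0' + i % 10); // last digit of i
            DIGIT_TENS[i] = (char) ('0' + i / 10); // first digit of i
        }
    }
}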
_unix.263476
I have bought an Android TV box which has an HDMI capture card inside. From time to time the capture card stops recording and I need to restart the device for it to work again. I am trying to find a way to 'reset' the card with some commands, e.g. by restarting a service. The problem is that I can't find out which service or process to restart. I have taken a look at the kernel logs with dmesg. I can see lines like "initialiase pcm capture" when I start a recording. But how can I find out what process or service actually 'triggered' the kernel log message?
How to find out what process wrote a particular log in dmesg?
dmesg
null
_codereview.83510
I wrote this code that's supposed to correct quizzes on a webpage, but it feels bloated and I'm sure there's tons of ways to improve it. Any thoughts?$ -> #When submit_quiz div is clicked, do the following $(document).on click, #submit_quiz, (e) -> #Gets the quiz id for later use qid = getQuestions(this) #console.log qid #Create a jquery object for the specific quiz div, select the quiz quiz = $(#quiz_+qid) #Setup some variables to be used later window.qn = null window.checked = null wrong = [] #Select questions inside the quiz quizchild = quiz.children() #Gets all questions like so in an object, I THINK #Loop through the questions one at a time for z,y in quizchild #Iterate if id = submit_quiz if $(z).attr('id') == 'submit_quiz' break #Get children of a question question = $(z).children() #Loop through the options for u,i in question #Check if the id exists, so it ignores <br> if $(u).attr('id')? #Set the length of our option array window.qn = $(u).length #The correct answer to the question, modify later correct = null #Get children of options, used for getting each individual options params options = $(question).children() #Loop the options and check if they're correct, if it's wrong, push it to a variable 'wrong' for later use for d,i in options child = $(d).children()[1] if $(child).is(:checked) window.checked+=1 if $(child).attr('id') == 'correct' correct = $(child).attr('id') else $(child).css(color, red) wrong.push $(u).attr('id') #Check if 'wrong' has anything in it if wrong.present() for o,i in wrong $('#'+o).css('color', 'red') console.log o console.log This quiz is wrong.. This is bad! #If it hasn't, check if all boxes are filled, if they are, the quiz is correctly answered else if !wrong.present() && window.checked == window.qn + 1 console.log This quiz is correctly answered yes guys!!!! #Else, not all boxes are checked else console.log Not all boxes in this quiz checked!Array::present = -> @.length > 0getQuestions = (thiss) -> fvalue = $(thiss).attr(quiz) return fvalueBear with me on the obsolete comments and console logs please.
Correcting a quiz
coffeescript;quiz
null
_softwareengineering.204464
Where I work we recently switched to Agile development using Scrum. We went through the typical growing pains but have reached an approach that seems to work for now (whether it'll work in the long term is another question!).

Obviously, the department management is happy the transition to Scrum is working. But they have started doing something that, to me, feels wrong. Management will observe a team, see what works for them, and then prescribe it to the entire department. Things like:

- The definition of Done
- Which story point values can be used for story pointing (e.g., omitting 8 from the Fibonacci sequence because 1, 2, 3, 5, 13, etc. were the only ones used during a sprint they observed)
- Telling teams they must calibrate their story point value of 1 to updating a UI label, and limiting them to an upper bound of 20 (although not all our projects have clients and not all developers have UI experience)
- Telling teams to use story point estimates of 100 to mean "we'll split this story later"
- Telling teams to use story point estimates of infinity to mean "this is an epic" or "we need more info"

I understand they're trying to be helpful, but shouldn't all the things above be Scrum-team specific? That is to say, what works for one group of individuals on one project may not make sense to another group on another project. I'm concerned we're drifting into a very prescriptive and stiff Agile approach. Am I justified in thinking this, or am I overreacting?

Edit: Just to clarify... by Management and Manager I don't mean the Product Owner. I mean any manager outside the Scrum Team, but within the Software Department.
Enforcing a uniform Scrum approach to all teams within a department
agile;scrum;management
Of course you're justified in thinking that. The very fact that you're talking about enforcing Scrum is a blaring alarm siren. Scrum is first and foremost about self-organisation of the team; they get to choose how to do their work and how to organize themselves. Management only has a say in what work needs to be done. The reason why teams should organize themselves is that they are always unique, due to the different natures of the individual team members (and the people they work with) and due to the differences between the products they work on. A practice that works perfectly well for one team can have adverse effects on another team. That's why, within a certain scope (a sandbox metaphor is often used), they have to experiment, learn and find out what works best for them. What you need is a very competent Scrum master (one per team), who can guide a team in this discovery, but at the same time can also work with management to obtain the freedom for the team to go on that discovery.
_webmaster.8017
Does anybody know how these are made? I see them a lot; mostly web designers have them. Are they hand-made in Photoshop or Illustrator, or is there a web service that converts real photos?
web designers icon avatars?
graphics
http://www.photoshopsupport.com/tutorials/jennifer/favicon.html

Summary: Photoshop + Photoshop plugin + Save As... favicon.ico + file type 'Windows Icon' = your icon works. Put it in the same directory as your main page and put the following HTML inside the head:

<link rel="shortcut icon" href="favicon.ico" type="image/x-icon">

OR skip all of that and use a .gif or .png (won't work in IE), and put the following HTML inside the head:

<link rel="icon" href="favicon.png" type="image/png"> or <link rel="icon" href="favicon.gif" type="image/gif">
_unix.361987
I'm trying to abstract away some /dev/input files so that user-level systems can know when the touchscreen is being used vs. when the touchpad is being used (without having access to raw mouse data). To do this, I want to create a root system service which watches the /dev/input files for changes and publishes "currently using touchpad" or "currently using touchscreen" messages which non-root session services can pick up (e.g., so a service under /etc/systemd/user/ can leverage the information).

Potential methods I've thought of:

- Have the root service manage a file which non-root services can watch for updates
- Publish over some sort of bus, like D-Bus (I haven't worked with D-Bus before, but it seems like the system bus and the session bus are isolated from each other)

What are some recommended patterns here? I haven't worked much at all with process-to-process communication on Linux, but I figure there must be a clean way to do something like this.
system service to user service information flow
root;not root user;d bus;ipc
null
_codereview.88470
My code does what it's suppose to do but the code is failing Pylint checks.After pasting my code into Pylint checker, it says that Too many branches (15/12) and its failed me on the Pylint checks.How can this be cleaned up?Docstringimport wordsdef nicer_englishify_word(word): Docstring wordx = word.lower() wordy = if wordx.endswith('moo'): vowel = True else: vowel = False if vowel == True: wordy1 = wordx[-3] + wordx[:-3] wordy2 = wordx[:-3] if wordy1 in words.get_word_list(): wordy11 = True else: wordy11 = False if wordy2 in words.get_word_list(): wordy22 = True else: wordy22 = False if wordy11 == wordy22: if wordy11 == True: wordy = sametrue else: wordy = samefalse else: if wordy22 == False: wordy = [ + wordx[-3] + wordx[:-3] + ] else: wordy = [ + wordx[:-3] + ] if wordy == sametrue: wordy = wordx[-3] + wordx[:-3] + or + wordx[:-3] wordy = < + wordy + > if wordy == samefalse: wordy = wordx[-3] + wordx[:-3] + or + wordx[:-3] wordy = ( + wordy + ) else: wordy = wordx[-3] + wordx[:-3] return wordydef nicer_englishify_sentence(sentence): docstring stringlist = sentence.split() result = for item in stringlist: result = result + nicer_englishify_word(item) + result = result[:-1] return result
Word transformation function
python;pig latin
null
_unix.180018
Some of the context might not be relevant to the problem so feel free to skim through the rest of this post, but essentially what I'm doing is trying to install an operating system onto the KAOS partition in the picture.

Hardware
OK so I'm on a Lenovo Yoga Ultrabook 13 (specs) and I'd rather keep Windows, since I read online that I'd be having further problems once I finish this headache (mainly that the wifi is going to be screwed and stuff, but I'm prepared for that).

Constraints
In order to keep the original setup intact while creating a new partition I had to do a little bit of magic, because all the various partitions can somehow get out of sync or something if you're not careful. tl;dr: I was VERY careful and I believe that my problem is related to the flash drive and not the partition setup.

Options
I've tried to install DeepinOS, openSUSE and KaOS and have had various different error messages which returned no relevant search results until I found this article, which has a plausible explanation for my problems.

Diagnostics
I believe the USB drive is trying to use the wrong partition and that I need to specify in some config file exactly which partition I wish to use. This makes sense since the openSUSE error came after I got a dialog box asking me to specify some path (the default is / and according to guides you're supposed to hit enter and things happen; I got some red box asking me to verify the installation media).

The KaOS error said:
ERROR: Root device mounted successfully but /sbin/init does not exist.
Bailing out, you are on your own. Good luck.

Shortcomings
The one thing about that explanation that makes no sense is that these various installations have never tried to override anything (or indeed write anything). I'm sorry for not giving a more detailed description of the errors, but I'm sick of restarting my computer and...

Other
... I've created bootable USB sticks with three different flash drives and 3-4 different tools (unetbootin, UUI, some Windows-only tool and the SUSE tool). I've both formatted and prepared them on my Linux Mint workstation and the Windows laptop (as well as troubleshooted (troubleshot?) some other aspects of the process, such as making sure the ISOs are OK etc). I've had several problems with the flash drives, especially at first. However, although none of my installation attempts has gotten farther than the first step of the installation process, that seems to be a symptom of some other problem.

Summary
I will be completely content if anyone can help me get a working dual-boot setup with any of the following OSes (based on screenshots and random biases, so I'm pretty open to suggestions if there is some easy alternative, although I'm guessing the choice of OS is not related to the problem).

List of OSes:
- Arch (I might be able to do this one myself thanks to their superb wiki)
- Bodhi
- Deepin
- Elementary
- openSUSE
- KaOS
- Pinguy

I'll be stalking this post since I'd really prefer to have access to Linux at school, so please ask me if you need anything clarified - and please help me :(
Trouble installing onto correct partition for a dual boot
partition;system installation;dual boot;bootable
null
_unix.6992
When running vim under GNU screen, I'm finding that combinations of CTRL with the arrow and Pg* keys don't work as expected. I'm using the Ubuntu 10.10 vim-gnome package. On a different machine, also running Ubuntu, this did work without problems; unfortunately I don't have that configuration available to me now.

There is a related question here: How to fix Ctrl + arrows in Vim? However, the suggested solution there is to remap vim's keybindings to work with the terminal emulator, in that case PuTTY. I don't recall doing anything of the sort, and suspect that there is a screen configuration option which will resolve this issue.

There's also a thread on the gnu-screen mailing list which suggests that running vim via TERM=xterm vim is an appropriate fix or workaround. This does work, but I'm a bit concerned that there might be side effects. It also doesn't sound familiar enough to be the solution I set up on the other machine (if a solution was necessary).
Fixing CTRL-* in vim under GNU screen
ubuntu;vim;gnu screen
null
_webmaster.5571
An error with our URL rewrite rules caused some pages to be not found (404 errors) when Google crawled our site. I regularly watch for errors and fix database or coding issues when they arise. We usually avoid them altogether, but sometimes we're surprised to find a 404 and we fix it.

How bad is it? Supposing our goal is to establish trust and authority and keep the spider coming back for more pages, does anyone know how the spider acts programmatically on crawl errors? Is there a point where the crawl frequency is reduced, or the SERPs are affected?
What's the extent of the damage to having a broken URL found by the spiders (showing in Google Webmaster Tools)?
google search console;url;404;web crawlers
How bad is it?

Badness is directly proportional to the number of links to the 404 and the number of visitors who are disappointed when they arrive at your site; Google Webmaster Tools won't show you a 404 unless there's a link to the content somewhere on the internet.

Does anyone know how the spider acts programmatically with crawl errors[?]

If your domain expires you can expect it to be removed from search results post haste (i.e. about 48 hours in my experience). If your domain stops accepting all traffic (i.e. internal server error or a multitude of 404s) it's far less certain how long your rank will remain - my anecdotal experience is 1-2 weeks for extremely low-value search terms (I wouldn't let that happen for rankings on high-value terms).

Per this Webmaster World "how long to removal" thread, Google may keep dead content in its supplemental index for quite a while, but you're on the right track with fixing the problem as it appears*.

*You are applying a 301 redirect to the correct URL or most relevant content, right? :)
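As a side note on that last point, a 301 for a known broken URL can be as simple as a one-line mod_alias rule in the vhost config or .htaccess; for example (paths and hostname are placeholders, assuming Apache):

Redirect 301 /old-broken-page http://www.example.com/new-page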
_unix.335883
I'm running a Flask application on a server via nginx. I want to create a systemd file and here's what I have:

[Unit]
Description=my123 website
After=syslog.target
Requires=postgresql.service

[Service]
ExecStart=/home/user_123/my_web_app/run.py &
ExecStop=
Restart=on-abort
WorkingDirectory=/home/user_123/my_web_app/
SyslogIdentifier=my_web_app
User=user_123

[Install]
WantedBy=multi-user.target

Locally I run it as ./run.py. Now, what should I have in ExecStart and ExecStop? I think ExecStart is correct because I have the & in it. But how about ExecStop?
Creating a systemd service for Flask via nginx
ubuntu;systemd;python
No, you should never add & to ExecStart. That will make systemd think that your unit is the process doing the forking. Also, you do not need ExecStop for a service that understands SIGTERM (flask's built-in webserver does), systemd knows where to send the signal.i.e. systemd tracks the PID that ExecStart started at and then knows where to send SIGTERM when you ask to terminate the process.(A unit type= can make forks a little more complicated than that. But the default unit type= considers that there are no forks.)ExtraRunning the built-in flask server behind Nginx will render you very vulnerable to even a trivial DoS attack. The flask built-in server is not meant for production use. With Nginx you should be using something like uWSGI.Flask docs have a section on uwsgi, and you can map the command line easily to edit uWSGI configuration file options. And uWSGI docs have a section on systemd. And you really should be using systemd to start a real webserver akin of uWSGI, not the flask built-in one.
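To make that concrete, here is roughly what the unit from the question looks like once the trailing & and the empty ExecStop= line are removed (everything else is kept from the question; whether ExecStart points at run.py or at a uWSGI process is a separate decision):

[Unit]
Description=my123 website
After=syslog.target
Requires=postgresql.service

[Service]
ExecStart=/home/user_123/my_web_app/run.py
Restart=on-abort
WorkingDirectory=/home/user_123/my_web_app/
SyslogIdentifier=my_web_app
User=user_123

[Install]
WantedBy=multi-user.target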
_unix.243754
My understanding of how bind handles non-authoritative queries is:

- Forward mode: it just forwards the client queries to an upstream DNS server, which is defined in the forwarders directive (see the sketch below).
- Recursive mode: it actually starts asking from a root DNS server, then a second-level DNS server, etc., until it finally gets an authoritative answer for the host in question.

Neither of these modes seems to depend on or relate to the system DNS settings on the host which bind is running on, e.g. /etc/resolv.conf. Am I right?
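For reference, the forwarders directive mentioned under forward mode normally lives in the options block of named.conf; a minimal sketch (the addresses are placeholders):

options {
    forwarders { 192.0.2.1; 192.0.2.2; };
    forward only;
};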
Does bind depend on system DNS settings for lookups?
bind;bind9
null
_unix.236135
I am looking at which multicast groups the Linux kernel is subscribed to at the moment. What do the values in the Querier column mean?
What is an IGMP querier?
multicast
null
_unix.250962
I am new to Linux and trying to set up my server in VirtualBox. After creating a virtual machine and starting to install 64-bit CentOS 7 on my 32-bit Windows 7 host, I get the error shown here:

I have researched how this issue can be addressed, and the only suggestion I found was to change the virtualization technology setting in the BIOS. Unfortunately that did not solve it.

Any suggestions, please?
CentOS installation failed
linux
null
_codereview.96800
I am using a foreach loop to sort an associative array alphabetically. I would like to know if there is a more proper and/or efficient way of doing it.The array:Array( [gr_c] => Array ( [f] => 'value...' [a] => 'value...' [d] => 'value...' [m] => 'value...' [c] => 'value...' [t] => 'value...' ) [gr_a] => Array ( [h] => 'value...' [e] => 'value...' [m] => 'value...' [a] => 'value...' [o] => 'value...' [i] => 'value...' [c] => 'value...' [t] => 'value...' [b] => 'value...' ) [gr_b] => Array ( [h] => 'value...' [d] => 'value...' ))became: Array( [gr_c] => Array ( [a] => 'value...' [c] => 'value...' [d] => 'value...' [f] => 'value...' [m] => 'value...' [t] => 'value...' ) [gr_a] => Array ( [a] => 'value...' [b] => 'value...' [c] => 'value...' [e] => 'value...' [h] => 'value...' [i] => 'value...' [m] => 'value...' [o] => 'value...' [t] => 'value...' ) [gr_b] => Array ( [d] => 'value...' [h] => 'value...' ))used snippet:foreach ($array_name as $key => $value) { ksort($array_name[$key]);}
Sorting an associative array alphabetically
php;array;sorting
That snippet of 3 lines you used,is fine as it is, nothing really wrong with it.It's proper, efficient, natural, easy to understand.There is just one thing I'd pick on,is that the $value variable in the foreach expression is not used.Another way to achieve the same thing without unused variables is to use & to pass the loop variable by reference:foreach ($array_name as &$arr) { ksort($arr);}This has the advantage that the loop index variable $key is now gone too,we're working with the data that really matters, which is the $arr to sort.
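One small caveat worth adding (a general PHP point, not something from the snippet above): the reference held by the loop variable outlives the loop, so it is usually safest to unset it afterwards.

foreach ($array_name as &$arr) {
    ksort($arr);
}
unset($arr); // drop the lingering reference to the last sub-array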
_unix.53345
I created a shell script that logs me in to a server. The script was in a directory that was added to $PATH.

I think I might have deleted the script when not paying attention (not on purpose). I cannot find the script anymore. I tried several things:

- use Spotlight (I'm using a Mac - Spotlight does pretty much the same as locate, afaik)
- use which [scriptname]
- go to root and type find * | grep [scriptname]

None of these solutions located the script. HOWEVER: the script is still working, even after a reboot. What is going on here? Is the script still somewhere on my drive?
Can't locate shell script
search
Thanks guys, this is one of those days: I actually didn't write a script... I thought I did. Instead of writing a script, I just put an alias in my .bash_profile. I realized it when running type scriptname as jw013 suggested. Thanks guys.
_cogsci.16689
What is this conversational stratagem called when someone wants to fish out certain information from you but, because they don't want to ask you about it directly (as they expect you might get uncomfortable, distressed or angry), they ask you simple innocent-looking questions instead; answers to these questions will make up for them a picture of what they actually want to know, whilst you are supposed to NOT comprehend/realize their sly plan?That stratagem is often employed by parents on their small kids not yet witted enough to get that they're being puppeted, psychiatrists on their patients, or simply by someone mistakenly thinking they're smarter than their interlocutor and so their tricks won't be comprehended.
What is this stratagem called when someone talks to you like you're a slow-witted kid?
terminology;communication;behavior
null
_unix.15594
Is there a way to turn on line numbering for nano?
Is there line numbering for nano?
nano
The only thing coming close to what you want is the option to display your current cursor position. You activate it by using the --const option (manpage: Constantly show the cursor position) or by pressing Alt+C in an open text file.
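If you want it on permanently, the same option can also go in your nanorc; note that the option's spelling varies between nano versions (older ones use const, newer ones constantshow), so check the nanorc man page:

# in ~/.nanorc
set const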
_codereview.91959
I am going though some PHP arrays exercises on w3resource. I have just completed the following:Write a PHP script to calculate and display average temperature, five lowest and highest temperatures.Recorded temperatures : 78, 60, 62, 68, 71, 68, 73, 85, 66, 64, 76, 63, 75, 76, 73, 68, 62, 73, 72, 65, 74, 62, 62, 65, 64, 68, 73, 75, 79, 73Here is my code:<?php echo <pre>; $temperatures = array(78, 60, 62, 68, 71, 68, 73, 85, 66, 64, 76, 63, 75, 76, 73, 68, 62, 73, 72, 65, 74, 62, 62, 65, 64, 68, 73, 75, 79, 73); function listvalues($value) { echo $value, ; } function printAverage($array) { $total = 0; foreach($array as $element) { $total += $element; } echo number_format($total / count($array), 1); } echo Recorded temperatures : ; array_walk($temperatures, listvalues); echo <br>; echo Average Temperature is : ; printAverage($temperatures); echo <br>; //sort the temperatures in ascending order for both of the following lists. sort($temperatures); //print the first 5 values echo List of five lowest temperatures : ; for($i = 0; $i < 5; $i++) { echo $temperatures[$i], ; } echo <br>; //print the last 5 values echo List of five highest temperatures : ; for($i = count($temperatures) - 5; $i <= count($temperatures) - 1; $i++) { echo $temperatures[$i], ; } echo <br>; echo </pre>;?>And here is its output:Recorded temperatures : 78, 60, 62, 68, 71, 68, 73, 85, 66, 64, 76, 63, 75, 76, 73, 68, 62, 73, 72, 65, 74, 62, 62, 65, 64, 68, 73, 75, 79, 73, Average Temperature is : 69.8List of seven lowest temperatures : 60, 62, 62, 62, 62, List of seven highest temperatures : 76, 76, 78, 79, 85,(I paid no attention to fixing the commas at the end of each part of the output, for now.)My questions are:Is this an appropriate use of array_walk()? Is it more appropriate to use a foreach() loop?Is ($total / count($array) the easiest way to find the average of an array, and is that the best way to format it for readability?Can you spot parts of my code which you think could be coded better?
Finding the average, the five smallest, and the five largest numbers in an array
php;beginner;programming challenge;array
There are a number of built-in functions in PHP that would make your code shorter and more direct.The first is implode. implode takes and optional glue string and an array of values, and joins them together into a single string. So for implode(, , array(1, a, 3)), it would return 1, a, 3. This avoids the need to loop over the elements of an array to print them with a separator, and to catch the last case where there should be no separator.The second is array_sum. This, as its name suggests, takes an array and sums the values. This also avoids an explicit loop.The third is array_slice. This function will take an array, and return a portion of it based on the given parameters. It can even be used to index in reverse with a negative offset.Putting it all together, we get the following:<?php echo <pre>; $temperatures = array(78, 60, 62, 68, 71, 68, 73, 85, 66, 64, 76, 63, 75, 76, 73, 68, 62, 73, 72, 65, 74, 62, 62, 65, 64, 68, 73, 75, 79, 73); echo Recorded temperatures : ; echo implode(, , $temperatures); echo <br>; echo Average Temperature is : ; echo number_format(array_sum($temperatures) / count($temperatures), 1); echo <br>; //sort the temperatures in ascending order for both of the following lists. sort($temperatures); //print the first 5 values echo List of five lowest temperatures : ; echo implode(, , array_slice($temperatures, 0, 5)); echo <br>; //print the last 5 values echo List of five highest temperatures : ; echo implode(, , array_slice($temperatures, -5, 5)); echo <br>; echo </pre>;?>
_unix.157572
For a string in this format:

./my-site.sub.domain.com

I would like to remove the ., -, and /, and trim the remainder to 16 characters: mysitesubdomainc

I've done this:

mysite=./my-site.sub.domain.com
mysite=${mysite//[\.|\/|-]//}
mysite=${mysite:0:16}
echo $mysite
mysitesubdomainc

Is there a way to combine the replacement and sub-string extraction?
Bash Combine Replacement and Sub String Extraction in One Step
bash;shell script;regular expression;string
There's no way to chain the Bash built-in parameter expansion, but of course this can be done in a single line with external tools like sed:

$ sed 's/[\.\/-]//g;s/^\(.\{16\}\).*/\1/' <<< ./my-site.sub.domain.com
mysitesubdomainc

Unfortunately this very quickly turns into unmaintainable code, and is probably less efficient than using Bash internals, so I would advise against it.
_unix.347358
I created a service by adding a custom .service file in /etc/systemd/system and then running systemctl daemon-reload. However, I can start or stop the service only as root. I would actually like the service to always run as a different user. How can I do that?
How to change service user in CentOS 7?
centos;users;services;not root user
null
_webmaster.74294
have a small website. When I perform a netstat is shows a lot of traffic from .p.mail.I think this is some kind of mail bot, trying to harvest email addresses from my website. How can I prevent this?tcp 0 64 128.199.152.125:ssh 254.96.96.58.stat:49174 ESTABLISHEDtcp6 1 0 128.199.152.125:http fetcher9-7.p.mail:52455 CLOSE_WAITtcp6 1 0 128.199.152.125:http crawl-66-249-71-7:39927 CLOSE_WAITtcp6 0 0 128.199.152.125:http fetcher9-5.p.mail:48034 ESTABLISHEDtcp6 1 0 128.199.152.125:http fetcher9-6.p.mail:38781 CLOSE_WAITtcp6 1 0 128.199.152.125:http fetcher9-3.p.mail:49137 CLOSE_WAITtcp6 1 0 128.199.152.125:http fetcher9.mail.ru:46906 CLOSE_WAITtcp6 1 0 128.199.152.125:http fetcher9-3.p.mail:49102 CLOSE_WAITtcp6 0 0 128.199.152.125:http fetcher9-4.p.mail:60833 ESTABLISHEDtcp6 1 0 128.199.152.125:http fetcher9-1.p.mail:58404 CLOSE_WAITtcp6 1 0 128.199.152.125:http fetcher9-3.p.mail:38515 CLOSE_WAITtcp6 1 0 128.199.152.125:http crawl-66-249-71-9:65419 CLOSE_WAITtcp6 1 0 128.199.152.125:http fetcher9-4.p.mail:39761 CLOSE_WAITtcp6 0 0 128.199.152.125:http fetcher9-3.p.mail:46664 ESTABLISHEDtcp6 1 0 128.199.152.125:http fetcher9-5.p.mail:57961 CLOSE_WAITtcp6 1 0 128.199.152.125:http fetcher9-2.p.mail:58029 CLOSE_WAITtcp6 1 0 128.199.152.125:http fetcher9-6.p.mail:53075 CLOSE_WAITtcp6 1 0 128.199.152.125:http fetcher9.mail.ru:47363 CLOSE_WAITtcp6 1 0 128.199.152.125:http fetcher9-4.p.mail:52394 CLOSE_WAITtcp6 1 0 128.199.152.125:http fetcher9.mail.ru:54476 CLOSE_WAITtcp6 1 0 128.199.152.125:http fetcher9.mail.ru:36110 CLOSE_WAITtcp6 1 0 128.199.152.125:http fetcher9-2.p.mail:55155 CLOSE_WAITtcp6 1 0 128.199.152.125:http fetcher9-7.p.mail:59306 CLOSE_WAITtcp6 1 0 128.199.152.125:http fetcher9-2.p.mail:36667 CLOSE_WAITtcp6 0 0 128.199.152.125:http fetcher9-5.p.mail:51968 ESTABLISHEDtcp6 0 0 128.199.152.125:http fetcher9-4.p.mail:41478 ESTABLISHEDtcp6 1 0 128.199.152.125:http fetcher9-5.p.mail:60032 CLOSE_WAITtcp6 1 0 128.199.152.125:http fetcher9-2.p.mail:44335 CLOSE_WAITtcp6 1 0 128.199.152.125:http fetcher9-6.p.mail:57922 CLOSE_WAITtcp6 1 0 128.199.152.125:http fetcher9-1.p.mail:59718 CLOSE_WAITtcp6 1 0 128.199.152.125:http fetcher9-3.p.mail:47470 CLOSE_WAITtcp6 0 0 128.199.152.125:http fetcher9-6.p.mail:59941 ESTABLISHEDtcp6 1 0 128.199.152.125:http fetcher9-1.p.mail:54604 CLOSE_WAITtcp6 0 0 128.199.152.125:http fetcher9.mail.ru:48307 ESTABLISHEDtcp6 1 0 128.199.152.125:http fetcher9-6.p.mail:47410 CLOSE_WAITtcp6 1 0 128.199.152.125:http fetcher9-2.p.mail:52740 CLOSE_WAITtcp6 0 0 128.199.152.125:http fetcher9.mail.ru:48957 ESTABLISHEDtcp6 0 0 128.199.152.125:http fetcher9-6.p.mail:55988 ESTABLISHEDtcp6 0 0 128.199.152.125:http fetcher9-6.p.mail:45431 ESTABLISHEDtcp6 0 0 128.199.152.125:http crawl-66-249-71-1:54299 ESTABLISHEDtcp6 1 0 128.199.152.125:http fetcher9-1.p.mail:44075 CLOSE_WAITtcp6 0 0 128.199.152.125:http fetcher9-7.p.mail:51332 ESTABLISHEDtcp6 1 0 128.199.152.125:http fetcher9-6.p.mail:40081 CLOSE_WAITtcp6 1 0 128.199.152.125:http fetcher9-2.p.mail:47806 CLOSE_WAITtcp6 1 0 128.199.152.125:http fetcher9-5.p.mail:40396 CLOSE_WAITtcp6 1 0 128.199.152.125:http baiduspider-180-7:53078 CLOSE_WAITtcp6 1 0 128.199.152.125:http fetcher9-1.p.mail:46357 CLOSE_WAIT
Website is being targeted by mail bots
search engines;web development;security
null
_unix.335517
I've started preferring key-value storage over SQL. I'm hoping there's something built in that's lighter than pstore or redis. I've been doing a lot of string (de)serializing of lists and dictionaries stored in a single file, but I figured this wouldn't perform well for a large file. So I tried using the filesystem itself (storing each piece of data in its own file).

Is the filesystem a potentially functional alternative to a database? Is there a utility I can use instead?
Is there a simple key-val storage system built in?
files;performance;database
null
_unix.269248
Wikipedia says that "The protocol has been version 11 (hence X11) since September 1987." That's almost 30 years. Why did the X protocol freeze?
What happened with X12?
x11
null
_softwareengineering.141588
Is there a practical solution to organizing the initial tasks for a new project?To elaborate, imagine the features/stories/goals are laid out for a project. How might one go about organizing those into sane tasks for the first few versions?The scenario I typically have in mind has the features listed as a high-level reference for what the end user-experience should involve. The tasks for constructing such features are then broken down into chunks (such as create interface for X component). Such a task is not necessarily tied to only that feature and may be useful when building subsequent features. Is breaking features down into small, code-able solutions valid? Or should they be slightly removed from any specific implementation?I do not expect that there is one right answer to this question, but I am looking for a fairly pragmatic and unobtrusive approach.As a note, I'm looking for solutions that are independent of any tools or systems used for managing the tasks themselves.
What is an effective way to organize tasks for a new project?
project management;specifications;task organization
null
_hardwarecs.4224
I am in the market for an Android tablet, and I have mostly narrowed it down to an 8-inch model (though not 100% on this). I am looking at the Asus ZenPad S 8.0 or the Galaxy Tab S2 8.0.

My primary uses will be watching movies on the plane (I travel a lot) and using the Hema Explorer GPS tracking and navigation app for offroad driving. And some games, but that's not a key purpose. My movies/TV will be a mix of Google Play store rentals, Netflix, Stan and the Virgin entertainment system.

My current phone is an Xperia Z3 Compact; I like having a compact phone so I don't want to upgrade it. I have run the Hema app on my phone and it lagged a bit when recording tracks, so performance of the tablet is pretty important to me. My main concern with the above tablets is the aspect ratio for watching movies.

So:
- What would you recommend that is on the current market (US or AU)?
- In the real world, how much screen do I actually lose with the 4:3 aspect ratio when watching movies?

I really don't want to spend over $500; buying from the US should make this possible. I have considered an iPad as well, but I am in the Android ecosystem, so unless there is some great reason why it would suit me better I just don't see the point.
Asus Zenpad S 8.0 vs Galaxy Tab S2 8.0 - Travel
android;tablet
null
_unix.57525
How can I stop telnet access for a particular IP address on the command line?
How to stop telnet access for a particular IP address?
linux;networking;telnet;access control
null
_codereview.62846
I am writing a game where a car drives and makes jumps. When a jump is landed, the player is rewarded if they land all four wheels either at the same time, or near to the same time. If they don't, they are penalised.I am concerned my solution is not well abstracted (although I am open to all topics of feedback), and particularly that it will not be very extensible in the future should I come to add additional things to check for (such as bonuses for good air time, or tricks) on landing.using System.Collections.Generic;using System.Linq;using UnityEngine;/// <summary>/// Checks if the car has made a good landing or not./// </summary>[RequireComponent(typeof(CollisionEvent))] //Unity engine code to ensure a CollisionEvent is always attached to the same object as this script.public class GoodLandingChecker : MonoBehaviour{ /// <summary> /// Gets or sets the bad landing threshold. /// When a bad landing occurs, the player is penalised. /// </summary> /// <value> /// The bad landing threshold. /// </value> public float BadLandingThreshold { get { return badLandingThreshold; } set { badLandingThreshold = value; } } /// <summary> /// Gets or sets the good landing threshold. /// When a good landing occurs, the player is rewarded. /// </summary> /// <value> /// The good landing threshold. /// </value> public float GoodLandingThreshold { get { return goodLandingThreshold; } set { goodLandingThreshold = value; } } /// <summary> /// Gets or sets the great landing threshold. /// When a great landing occurs, the player is rewarded greatly. /// </summary> /// <value> /// The great landing threshold. /// </value> public float GreatLandingThreshold { get { return greatLandingThreshold; } set { greatLandingThreshold = value; } } /// <summary> /// Gets or sets the minimum flying time before a landing will be considered. /// </summary> /// <value> /// The minimum flying time. /// </value> public float MinimumFlyingTime { get { return minFlyingTime; } set { minFlyingTime = value; } } /// <summary> /// Gets or sets the wheels used to calculate when landings happen. /// </summary> /// <value> /// The wheels. /// </value> public List<Transform> Wheels { get { return wheels; } set { wheels = value; } } /// <summary> /// The bad landing threshold /// </summary> [SerializeField] //Unity code to make this private field show up in Unity's //Inspector where its value is set by the designer. This is used because the //Inspector cannot display properties. private float badLandingThreshold; /// <summary> /// Flying checker is a separate class that simply checks if every wheel is touching the ground. /// </summary> [SerializeField] private FlyingChecker flyingChecker; /// <summary> /// The time spent in the air since the last landing. /// </summary> private float flyingTime = 0f; /// <summary> /// The good landing threshold /// </summary> [SerializeField] private float goodLandingThreshold; /// <summary> /// The great landing threshold /// </summary> [SerializeField] private float greatLandingThreshold; /// <summary> /// Whether the car is flying right now or not. /// </summary> private bool isFlying = false; /// <summary> /// The minimum flying time before a landing will be considered. /// </summary> [SerializeField] private float minFlyingTime; /// <summary> /// The wheels /// </summary> [SerializeField] private List<Transform> wheels; /// <summary> /// Checks the angle of the car. /// If it's fairly flat, a good landing is awarded. 
/// </summary> /// <param name=wheel>The wheel that landed.</param> public void WheelLanded(Transform wheel) { if (isFlying && flyingTime >= minFlyingTime) { isFlying = false; //Get height of other wheel float wheelHeightDifference = Wheels.Where(x => x != wheel).Sum(x => Mathf.Abs(x.position.y - wheel.position.y)); if (wheelHeightDifference <= GreatLandingThreshold) { Debug.Log(Great!); //Give the car a *huge* forward push rigidbody.AddForce(transform.forward * 500000); } else if (wheelHeightDifference <= GoodLandingThreshold) { Debug.Log(Good); //Give the car a forward push rigidbody.AddForce(transform.forward * 200000); } else if (wheelHeightDifference >= BadLandingThreshold) { Debug.Log(Awful!); //Give the car a backward push rigidbody.AddForce(-transform.forward * 200000); } else { Debug.Log(Ok); //Normal landing, ignore } } flyingTime = 0f; } /// <summary> /// Handles the CollisionEntered event of the GoodLandingChecker control. /// This occurs when something has hit something. /// </summary> /// <param name=sender>The source of the event.</param> /// <param name=e>The <see cref=Game.View.CollisionEventArgs/> instance containing the event data.</param> private void GoodLandingChecker_CollisionEntered(object sender, Game.View.CollisionEventArgs e) { //Check whether any wheels hit the ground. var wheel = Wheels.FirstOrDefault(w => e.Collision.contacts.Select(x => x.thisCollider).Any(x => w.collider == x)); if (wheel != null) { WheelLanded(wheel); } } /// <summary> /// Called when this object is destroyed by Unity. /// </summary> private void OnDestroy() { GetComponent<CollisionEvent>().CollisionEntered -= GoodLandingChecker_CollisionEntered; } /// <summary> /// Called by Unity when the game starts. /// </summary> private void Start() { GetComponent<CollisionEvent>().CollisionEntered += GoodLandingChecker_CollisionEntered; } /// <summary> /// Called by Unity every frame. /// </summary> private void Update() { //Flying checker is a separate class that simply checks if every wheel is touching the ground isFlying = isFlying || flyingChecker.IsFlying(); if (isFlying) { flyingTime += Time.deltaTime; } }}
Determining vehicle jump landing quality
c#;game;unity3d;physics
First, I am not a C# programmer but a Java programmer, so maybe some of what I write here is wrong for C#.

- You write too many comments. Let the code speak for itself.
- GoodLandingChecker_CollisionEntered: are you supposed to use underscore notation in C#?
- You can probably find a better name for WheelLanded. Either OnWheelLanded if it's a callback, or something which is a verb otherwise, e.g. analyzeLanding.
- I would split WheelLanded into a series of methods (see the sketch below):
  - a method that returns the height difference
  - a method that transforms the numerical height difference into an enum
  - one (or many) delegates that register to get the enum result.
  For example, the debugging printout and the added forward motion would be two separate delegates.
- I am not sure if FlyingChecker should be separated from this class. Maybe you could have the method that determines what kind of landing (enum) it is in FlyingChecker, but have the delegates that act on that information defined elsewhere.
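To make the suggested split a little more concrete, a rough sketch (the enum and method names are mine, not from the original code) might look like:

// Illustrative decomposition only - the thresholds and Wheels come from the original class.
public enum LandingQuality { Great, Good, Ok, Awful }

private float WheelHeightDifference(Transform wheel)
{
    // Sum of vertical distances between the landed wheel and the other wheels.
    return Wheels.Where(x => x != wheel).Sum(x => Mathf.Abs(x.position.y - wheel.position.y));
}

private LandingQuality ClassifyLanding(float heightDifference)
{
    if (heightDifference <= GreatLandingThreshold) return LandingQuality.Great;
    if (heightDifference <= GoodLandingThreshold) return LandingQuality.Good;
    if (heightDifference >= BadLandingThreshold) return LandingQuality.Awful;
    return LandingQuality.Ok;
}

// Separate delegates (one for logging, one for applying the push) could then
// subscribe to an event carrying the LandingQuality value.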
_unix.163647
I'm trying to install Arch Linux on a VirtualBox guest machine in UEFI mode. I've followed the Beginners' Guide to install the base system, generate the fstab, and so on, and my system now boots into the GRUB command prompt.

I used a GPT partition table to create two partitions:

/dev/sda1 - 500 MB FAT32 UEFI system partition
/dev/sda2 - 7.5 GB ext4 mounted as /

/etc/fstab was generated with the command genfstab -U -p /mnt >> /mnt/etc/fstab and contains:

# /dev/sda2
UUID=ce8f33a9-4bb8-42b8-b082-c2ada96cc2bb / ext4 rw,relatime,data-ordered 0 1
# /dev/sda1
UUID=3D70-B6C5 /boot vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,error=remount-ro 0 2

GRUB was installed with the commands:

# grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=arch_grub --recheck
# mkdir /boot/EFI/boot
# cp /boot/EFI/arch_grub/grubx64.efi /boot/EFI/boot/bootx64.efi

(Without the mkdir and cp it won't boot at all.)

The GRUB config was generated with grub-mkconfig -o /boot/grub/grub.cfg; its contents are quite hard to get and post here, but if it's necessary, I'll try.

After reboot the system boots into the grub> command prompt and nothing helps. Unlike this question: UEFI install (14.04) boots to GRUB command prompt, no GUI, in my case the command configfile (hd1,1)/boot/grub/grub.cfg does nothing except clear the screen.

I can boot into the installed system via chroot from the install CD environment, but no other way than that. How can I fix it?
Arch Linux boots into GRUB command line
arch linux;boot;virtualbox;grub;uefi
I found out what the problem was when I tried to use gummiboot instead of GRUB: gummiboot reported an error that it couldn't find the kernel images. It turns out I mounted /boot and configured the fstab after I had installed the base system with pacstrap -i, so the kernel images that had been placed in the /boot directory were hidden after mounting, and the system could not boot. I wondered what happened to them: were they still on the hard drive, just shadowed by the mounted partition?

Anyway, I reinstalled everything, carefully following the instructions on the Arch wiki, and it works now. Thanks everyone; I hope this will be useful to someone who is just learning about Linux and might make the same mistake I did.
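For anyone who hits the same thing and wants to check rather than reinstall: files written to /boot before the ESP was mounted are not deleted, they are only hidden underneath the mountpoint. A rough illustration (device names taken from the question; the pacstrap package set is illustrative, follow the current installation guide for the exact list):

# From the install ISO, with the target root mounted at /mnt:
umount /mnt/boot    # take the ESP off the mountpoint
ls /mnt/boot        # vmlinuz-linux and initramfs-linux.img reappear here if they were
                    # installed before the ESP was mounted; they can be copied onto the
                    # ESP instead of reinstalling
mount /dev/sda1 /mnt/boot

# The ordering that avoids the problem in the first place: mount everything, then install.
mount /dev/sda2 /mnt
mkdir -p /mnt/boot
mount /dev/sda1 /mnt/boot
pacstrap /mnt base   # the kernel images now land on the mounted /boot, where GRUB expects them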
_codereview.135884
I have been trying to solve a modification of the Longest Common Prefix problem. It is defined below.

Defining substring: for a string P with characters P1, P2, ..., Pq, let us denote by P[i, j] the substring Pi, Pi+1, ..., Pj.

Defining longest common prefix: LCP(S1, S2, ..., SK) is defined as the largest possible integer j such that S1[1, j] = S2[1, j] = ... = SK[1, j].

You are given an array of N strings, A1, A2, ..., AN, and an integer K. Count how many indices (i, j) exist such that 1 <= i <= j <= N and LCP(Ai, Ai+1, ..., Aj) >= K. Print the required answer modulo 10^9 + 7.

Note that K does not exceed the length of any of the N strings: K <= min(len(Ai)) for all i.

For example, A = ["ab", "ac", "bc"] and K = 1.
LCP(A[1, 1]) = LCP(A[2, 2]) = LCP(A[3, 3]) = 2
LCP(A[1, 2]) = LCP("ab", "ac") = 1
LCP(A[1, 3]) = LCP("ab", "ac", "bc") = 0
LCP(A[2, 3]) = LCP("ac", "bc") = 0
So, the answer is 4. Return your answer % MOD = 1000000007.

Constraints: 1 <= sum of the lengths of all strings <= 5*10^5. Strings consist of lowercase letters only.

Here is my approach:

class Solution:
    # @param A : list of strings
    # @param B : integer
    # @return an integer
    def LCPrefix(self, A, B):
        res = 0
        for i in xrange(len(A)):
            prev = A[i]
            prevLCP = len(A[i])
            for j in xrange(i, len(A)):
                prevLCP = self.getLCP(prev, A[j], prevLCP)
                prev = A[j]
                if prevLCP >= B:
                    res += 1
        return res % 1000000007

    def getLCP(self, A, B, upto):
        i = 0
        lim = min(upto, len(B))
        while i < lim:
            if A[i] != B[i]:
                break
            i += 1
        return i

The time complexity of this algorithm is O(n^2*m), where n is the length of the list and m is the maximum length of a string. The online judge (InterviewBit) does not accept this solution because of its time complexity. Can anyone think of a way to improve it?
Finding longest common prefix
python;algorithm;strings;programming challenge;time limit exceeded
null
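No accepted answer is recorded for this entry, so the following is only a sketch of one possible faster approach, not a posted solution. Since the problem guarantees K <= min(len(Ai)), LCP(Ai, ..., Aj) >= K holds exactly when every string in the range shares the same first K characters. It therefore suffices to walk the list once, track runs of consecutive equal K-prefixes, and count the (i, j) pairs inside each run; a run of length L contributes L*(L+1)/2 pairs, which the code below accumulates incrementally. This is O(total input length) instead of O(n^2 * m). The function name is invented for illustration.

MOD = 10 ** 9 + 7

def count_prefix_ranges(strings, k):
    # LCP(A[i..j]) >= k  <=>  all strings in the range share the same first k characters,
    # which is valid because k never exceeds the length of any string.
    prefixes = [s[:k] for s in strings]
    total = 0
    run = 0  # length of the current run of equal prefixes ending at the current index
    for idx, prefix in enumerate(prefixes):
        if idx > 0 and prefix == prefixes[idx - 1]:
            run += 1
        else:
            run = 1
        total += run  # `run` new (i, j) pairs end at j = idx
    return total % MOD

# Sanity check against the example from the question:
print(count_prefix_ranges(["ab", "ac", "bc"], 1))  # expected 4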
_unix.268640
I'm trying to use sed to edit a config file. There are a few lines I'd like to change. I know that under Linux sed -i allows for in-place edits, but it requires you to save to a backup file. However, I would like to avoid having multiple backup files and make all my in-place changes at once. Is there a way to do so with sed -i, or is there a better alternative?
Make multiple edits with a single call to sed
sed
You can tell sed to carry out multiple operations by just repeating -e (or -f if your script is in a file).

sed -i -e 's/a/b/g' -e 's/b/d/g' file

makes both changes in the single file named file, in place, without a backup file.

sed -ibak -e 's/a/b/g' -e 's/b/d/g' file

makes both changes in the single file named file, in place, with a single backup file named filebak.
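To spell that out with placeholder patterns and file names (assuming GNU sed, which is what provides -i in the first place):

# Repeated -e expressions run in order, in one pass, with no backup:
sed -i -e 's/foo/bar/g' -e 's/old/new/g' config.txt

# The same thing as a single expression, separating commands with ';':
sed -i 's/foo/bar/g; s/old/new/g' config.txt

# Or keep the edits in a script file and point -f at it:
printf '%s\n' 's/foo/bar/g' 's/old/new/g' > edits.sed
sed -i -f edits.sed config.txt

# If one backup of the original is still wanted, give -i a suffix:
sed -i.bak -e 's/foo/bar/g' -e 's/old/new/g' config.txt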
_unix.30171
We know that Android is an open-source, Linux-based distro. And we know some features developed for Android have been wanted by the Linux community for many years but denied to it (like the Unity3D Player). What are the difficulties in importing Android features into other Linux distributions, like Ubuntu, Fedora and others?
What are the difficulties in importing features of Android, like Unity3D Player, to Linux in general?
linux;android;unity
The difficulty is that it's a completely different operating system. Android is not a Linux distribution. The only thing that's common between Android and GNU/X11/Apache/Linux/TeX/Perl/Python/FreeCiv (usually known as Linux or Linux distributions) is the Linux kernel. Linux is based on POSIX-based APIs, the X Window System for the graphical interface, and many libraries that build upon these foundations, using core concepts such as processes, files, pipes and windows. Android is based on its own Java APIs with specific concepts, using core concepts such as activities, services, binders and intents. Porting something like Unity3D to Linux would be as much work as other ports such as OSX (which has more POSIX bits than Android, but also has a GUI that's completely different from Unix/Linux's X11) and Android.