Columns: id (string, 5 to 27 chars); question (string, 19 to 69.9k chars); title (string, 1 to 150 chars); tags (string, 1 to 118 chars); accepted_answer (string, 4 to 29.9k chars)
_unix.247481
I have a file with semicolon-separated fields, which I want to sort according to the general numeric value of the 26th column. I tried this:

cat file.txt | grep -v setch | sort -t\; -k26 -g

The grep command is there to filter out some lines I don't want. The file after the grep command looks like this:

5;0;0;0;0;17;0.040000;3.00;17;0.030000;2.00;17;0.040000;7.00;11.5833330154419;11.5833330154419;11.5833330154419;0.522556364536285;312.500000000000;-1384.20000000000;39.0625000000000;6000.00000000000;;;;;;;;;;33.15;;X;;E;
5;0;0;0;0;17;0.040000;3.00;17;0.020000;3.00;17;0.040000;7.00;11.5833330154419;11.5833330154419;11.5833330154419;0.522556364536285;312.500000000000;-1384.20000000000;39.0625000000000;6000.00000000000;;;;-7.18901342e+02;-7.78309691e+01;-7.78225676e+01;-7.78079745e+01;-7.77838466e+01;;39.3333333333333;;X;;E;
5;0;0;0;0;17;0.040000;3.00;17;0.020000;20.00;17;0.040000;7.00;11.5833330154419;11.5833330154419;11.5833330154419;0.522556364536285;312.500000000000;-1384.20000000000;39.0625000000000;6000.00000000000;;;;-7.78309996e+01;-7.78285783e+01;-7.78259409e+01;-7.78212922e+01;-7.78200550e+01;;39.8166666666667;;X;;E;
5;0;0;0;0;17;0.040000;3.00;17;0.030000;3.00;17;0.040000;7.00;11.5833330154419;11.5833330154419;11.5833330154419;0.522556364536285;312.500000000000;-1384.20000000000;39.0625000000000;6000.00000000000;;;;-9.38492178e+02;-5.44898488e+02;-7.78311132e+01;-7.78228037e+01;-7.78082194e+01;;40.6166666666667;;X;;E;
5;0;0;0;0;17;0.040000;3.00;17;0.030000;8.00;17;0.040000;7.00;11.5833330154419;11.5833330154419;11.5833330154419;0.522556364536285;312.500000000000;-1384.20000000000;39.0625000000000;6000.00000000000;;;;-7.78321216e+01;-7.78265847e+01;-7.78213151e+01;-7.78175760e+01;-7.78102439e+01;;40.4833333333333;;X;;E;
5;0;0;0;0;17;0.040000;3.00;17;0.030000;15.00;17;0.040000;7.00;11.5833330154419;11.5833330154419;11.5833330154419;0.522556364536285;312.500000000000;-1384.20000000000;39.0625000000000;6000.00000000000;;;;-7.78326108e+01;-7.78282041e+01;-7.78246496e+01;-7.78216823e+01;-7.78198536e+01;;40.0333333333333;;X;;E;
5;0;0;0;0;17;0.040000;3.00;17;0.020000;15.00;17;0.040000;7.00;11.5833330154419;11.5833330154419;11.5833330154419;0.522556364536285;312.500000000000;-1384.20000000000;39.0625000000000;6000.00000000000;;;;-7.78317280e+01;-7.78275891e+01;-7.78237230e+01;-7.78209144e+01;-7.78197521e+01;;44.3;;X;;E;
5;0;0;0;0;17;0.040000;3.00;17;0.030000;10.00;17;0.040000;7.00;11.5833330154419;11.5833330154419;11.5833330154419;0.522556364536285;312.500000000000;-1384.20000000000;39.0625000000000;6000.00000000000;;;;-7.78322942e+01;-7.78274590e+01;-7.78225495e+01;-7.78192915e+01;-7.78148301e+01;;43.65;;X;;E;
5;0;0;0;0;17;0.040000;3.00;17;0.020000;8.00;17;0.040000;7.00;11.5833330154419;11.5833330154419;11.5833330154419;0.522556364536285;312.500000000000;-1384.20000000000;39.0625000000000;6000.00000000000;;;;-7.78322863e+01;-7.78266434e+01;-7.78211618e+01;-7.78173451e+01;-7.78097348e+01;;45.4833333333333;;X;;E;
5;0;0;0;0;17;0.040000;3.00;17;0.030000;4.00;17;0.040000;7.00;11.5833330154419;11.5833330154419;11.5833330154419;0.522556364536285;312.500000000000;-1384.20000000000;39.0625000000000;6000.00000000000;;;;-7.61265100e+02;-7.78321802e+01;-7.78247066e+01;-7.78104129e+01;-7.78053976e+01;;44.8833333333333;;X;;E;

The output is however not sorted according to general numeric value, but according to numeric value (without reference to the powers).
Is there anything I can do to get sort to do what I want?

Update: This is the output of the above pipe (only the relevant column shown), and it is also the output of sort -t\; -g -k26,26, which was suggested in the answer.

-9.38492178e+02
-7.78317280e+01
-7.78309996e+01
-7.18901342e+02
-7.78322863e+01
-7.78322942e+01
-7.78326108e+01
-7.61265100e+02
-7.78321216e+01
sort behaves weirdly with scientific notation
sort
note the difference in output between these two pipelines:

<yourexample \
sort -t\; -gk26 |
cut -d\; -f26

-7.18901342e+02
-7.78309996e+01
-9.38492178e+02
-7.78321216e+01
-7.78326108e+01
-7.78317280e+01
-7.78322942e+01
-7.78322863e+01
-7.61265100e+02

...and...

<yourexample \
sort -t\; -gk26,26 |
cut -d\; -f26

-9.38492178e+02
-7.61265100e+02
-7.18901342e+02
-7.78326108e+01
-7.78322942e+01
-7.78322863e+01
-7.78321216e+01
-7.78317280e+01
-7.78309996e+01

Sorting just on -k26 is the same as sorting from field 26 through to the end of the line, but sorting on -k26,26 sorts only on that key. If you want other fields considered in the sort order as tie breakers, add more -k keys, but be specific.

All that aside, you've commented that you're working with a 5-year-old GNU coreutils package. Curious, I skipped through a few changelogs after your release, and this stood out within two releases (Oct 2010 for v8.6):

sort -g now uses long doubles for greater range and precision.

sort -h no longer rejects numbers with leading or trailing '.', and no longer accepts numbers with multiple '.'. It now considers all zeros to be equal.

You might update.
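Applied to the question's file, the corrected pipeline is therefore along these lines (note the semicolon must be escaped or quoted so the shell does not treat it as a command separator):

grep -v setch file.txt | sort -t\; -k26,26g

-k26,26 restricts the key to field 26 alone, and the g modifier applies general-numeric comparison to just that key.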
_codereview.54732
I have a shell script to configure Synaptics touchpad settings at login.

Background

You configure the touchpad with the synclient command. Its usage is

synclient [-lV?] [var1=value1 [var2=value2] ...]

For example:

synclient PalmDetect=1 to set a single setting
synclient PalmDetect=1 PalmMinZ=10 to set multiple

I'm trying to use my script itself as a configuration file by playing around with heredocs and grep. The script greps out all comment lines (starting with #) and then passes everything in the heredoc as arguments to synclient. Is this code clean? How could I improve it?

Script

#!/bin/sh
# script to configure Synaptic touchpad settings
# This script treats itself as a configuration file.
<< END_OF_SETTINGS grep -v '^#' | xargs synclient

# Try to disable touchpad while typing
PalmDetect=1
PalmMinZ=100

# Allow three-finger middle button tap, click
TapButton3=2
ClickFinger3=2
END_OF_SETTINGS
Script to configure Synaptics touchpad
sh
null
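Incidentally, a quick way to see exactly what a heredoc like this ends up passing to synclient is to test with echo substituted for the real command, e.g.:

<< END_OF_SETTINGS grep -v '^#' | xargs echo synclient
# this comment is filtered out
PalmDetect=1
END_OF_SETTINGS

which prints the assembled command line instead of applying the settings.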
_softwareengineering.321456
I come from a C# background, where LINQ evolved into Rx.NET, but I have always had some interest in FP. After an introduction to monads and some side projects in F#, I was ready to try to step to the next level.

Now, after several talks on the free monad by people from Scala, and multiple writeups in Haskell or F#, I have found grammars with interpreters driven by for-comprehensions to be quite similar to IObservable chains.

In FRP you compose an operation definition from smaller domain-specific chunks, including side effects and failures that stay inside the chain, and model your application as a set of operations and side effects. With the free monad, if I understood correctly, you do the same by making your operations functors and lifting them using Coyoneda.

What differences between the two would tilt the needle towards one of the approaches? What is the fundamental difference when defining your service or program?
How do the Free monad and Reactive Extensions correlate?
functional programming;monad;reactive
null
_webapps.31338
I first referred to a link explaining how to set a page as your front page. What I want is to set a category of my blog as my front-page display. Hoping for an answer, I went to the following link:

http://en.forums.wordpress.com/topic/1-category-as-front-page?replies=4

It effectively redirects me to my first link. Though I can make a page on wordpress.com and set it as my front page, I still haven't understood how to set one of my blog categories as my front-page display.
On wordpress.com, how do I set a category as the front page?
wordpress
null
_vi.4187
I have multiple buffers open, some currently visible, some not. I know I can write all of them with :wa. However, I am curious why :bufdo w does not work. When I try it in a buffer with unsaved changes, vim tells me:

E37: No write since last change (add ! to override)

Why is this so?
Why can't I execute write command on all buffers with :bufdo?
buffers;save
It sounds like you don't have the option 'hidden' set. Basically, without it you cannot switch away from an unwritten buffer to another one. In your case no buffer can be saved, because :bufdo would have to switch away from an unwritten buffer to reach the next one. Adding the 'hidden' option will fix this. You can find out more with :h hidden.
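A minimal sketch of the fix, run inside Vim:

:set hidden
:bufdo w

or put set hidden in your vimrc to make it permanent.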
_webmaster.1305
We've got a page that embeds a .MOV file into a webpage. In the last 6 months it stopped working on some Macs. Then it stopped working on all Macs. Then it stopped working on Windows XP. But it works fine in Windows 7. Here's what is embedded in the HTML:

<embed src="/Magic94Scripts/mgrqispi94.dll?APPNAME=FileManager&PRGNAME=prjfilmview&ResID=2784&size=9" style="float: left;" height=600 width=1030>

This has worked perfectly for years. The QuickTime player pulls the file out of the requester, inspects the MIME type from the response headers and plays the file appropriately. A Wireshark dump from Windows 7 looks like this:

Quicktime Windows 7 dump: http://goodoil.enets.com.au/QuickTime-Win7.png

1. The initial request for the page that has the <embed> tag in it
2. The QuickTime plugin requesting the MOV file through the back-end requester

Performing the exact same actions on OSX or Windows XP yields:

Quicktime XP/OSX dump: http://goodoil.enets.com.au/QuickTime-XPOSX.png

The versions of QuickTime and Safari on all the different machines are the latest (5.0), and I assume this is something that was broken in an update; as our clients moved to the newer version of the browser, they were breaking one by one.

Any ideas what might cause this? Is this a bug in Safari? Are there better ways of embedding the MOV file?
Embedding Quicktime Player behaves differently on XP/OSX/Windows7
safari;embed
I have solved the situation by doing URL rewriting with the following rule:

RewriteRule (/res/)(.*)/(.*)/(.*)/(.*) /Magic94Scripts/mgrqispi94.dll?APPNAME=$2&PRGNAME=ViewResource&ResID=$3&size=$4 [I,O,U]

And using URLs such as:

/res/FileManager/2785/9/TheVideo.mp4

Crazy, but it now works. I can only assume QuickTime now only inspects the URL for the file type, rather than grabbing the content header.
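For reference, the example URL appears to map through the rule's capture groups like this (derived from the pattern above):

/res/FileManager/2785/9/TheVideo.mp4
  → /Magic94Scripts/mgrqispi94.dll?APPNAME=FileManager&PRGNAME=ViewResource&ResID=2785&size=9

The trailing filename ($5) never reaches the substitution; it is there only so that the URL ends in a recognisable .mp4 extension.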
_softwareengineering.345074
Is there a generally accepted convention for brace placement in F#? I found some examples in the documentation, but they didn't seem to be consistent with each other. In particular, and to take an actual example, is there a consensus as to which is better, this

seq { for polarity, a in Set.filter (fun (polarity, _) -> polarity) c do
          let c = Set.remove (polarity, a) c
          for a0, a1 in orientations (eqn a) do
              for polarity, b in Set.filter (fun (polarity, _) -> polarity) c do
                  let c = Set.remove (polarity, b) c
                  for b0, b1 in orientations (eqn b) do
                      match unify a0 b0 with
                      | None -> ()
                      | Some m ->
                          yield c
                                |> Set.add (true, equal (a0, a1))
                                |> Set.add (false, equal (a1, b1))
                                |> evalClause m }

or this?

seq {
    for polarity, a in Set.filter (fun (polarity, _) -> polarity) c do
        let c = Set.remove (polarity, a) c
        for a0, a1 in orientations (eqn a) do
            for polarity, b in Set.filter (fun (polarity, _) -> polarity) c do
                let c = Set.remove (polarity, b) c
                for b0, b1 in orientations (eqn b) do
                    match unify a0 b0 with
                    | None -> ()
                    | Some m ->
                        yield c
                              |> Set.add (true, equal (a0, a1))
                              |> Set.add (false, equal (a1, b1))
                              |> evalClause m
}

And similarly for square brackets in list and array literals that are too large to be written on one line: is it usual to follow the same convention?
F# convention for brace placement
coding standards;f#
null
_codereview.55375
After some free ASCII flowchart drawer started charging money, I decided to write my own. Salient features are that you can draw a box (mouse down, mouse move, mouse up, then Ctrl+B will draw a box). Copy/Paste, Undo, Redo, click anywhere and type are all working features. I pasted the main code here, and it relies on two other minor files to provide a cursor and box object, but this code should still be very reviewable.

I did run the code through JSHint and made a judgement call on the few remaining ickies. This is my first project with canvas that's not just a prototype, so any insights there are welcome. Finally, this will become a Chrome extension, so I only care about this working on Chrome. Except when I don't of course (which).

/* Unidraw, because we can */
//Documentation:
// http://en.wikipedia.org/wiki/Box-drawing_character
//Competition:
// http://www.asciidraw.com/#Draw
// http://asciiflow.com/

(function IIFE(){
"use strict";

var canvas, context, clipboard;

var model = (function(){
  //Privates
  var cells = [],
      tabSize = 4;
  //Exposed
  var cursor = new Cursor();

  function write( x, y, s ) {
    //Make sure that we have an array for y
    //Always assume overwrite mode
    var originalX = x;
    cells[y] = cells[y] || [];
    for( var i = 0; i < s.length ; i++) {
      var c = s[i];
      if( c.charCodeAt(0) > 31 ) {
        cells[y][x++] = s[i];
      } else if ( c == '\n' ) {
        y++;
        cells[y] = cells[y] || [];
        x = originalX;
      } else if ( c == '\t' ) {
        x += tabSize;
      }
    }
    return new Cursor( x, y );
  }

  function setCell( cursor , c ) {
    return write( cursor.x , cursor.y , c );
  }

  function getCell( cursor ) {
    return cells[cursor.y] ? cells[cursor.y][cursor.x] || '' : '';
  }

  function stringify() {
    var s = '', x, y;
    for( y = 0 ; y < cells.length ; y++ ) {
      if( cells[y] )
        for( x = 0 ; x < cells[y].length ; x++ )
          s = s + ( cells[y][x] || ' ' );
      s = s + '\n';
    }
    return s ? s : '';
  }

  function backspace() {
    //Move everything one character to the left of the cursor
    if( cells[ model.cursor.y ] )
      cells[ model.cursor.y ].splice( model.cursor.x-1 , 1 );
    model.cursor.recede();
  }

  function addVersion( key ) {
    //Called internally. add a version to a version array (found with `key`)
    var json = localStorage[key];
    var versions = json ? JSON.parse( json ) : [];
    versions.push( stringify() );
    localStorage[key] = JSON.stringify( versions );
  }

  function getVersion( key ) {
    //Called internally, get a version (and remove it) from a version array
    var json = localStorage[key];
    var versions = json ? JSON.parse( json ) : [];
    var version = versions.pop();
    localStorage[key] = JSON.stringify( versions );
    return version;
  }

  function storeVersion() {
    //Called from controller, removes all redo versions
    addVersion( 'undo' , stringify() );
    localStorage.removeItem( 'redo' );
  }

  function restoreVersion() {
    //Called from controller, adds a redo version
    var version = getVersion( 'undo' );
    if(version){
      addVersion( 'redo' );
      cells = [];
      write( 0 , 0 , version );
    }
  }

  function redo() {
    //Called from controller, puts version back on to undo
    var version = getVersion( 'redo' );
    if(version){
      addVersion( 'undo' );
      cells = [];
      write( 0 , 0 , version );
    }
  }

  function isLineCharacter( cursor, dx , dy , returnValue ) {
    cursor = { x: cursor.x + dx , y: cursor.y + dy };
    //Any box-drawing glyph (or arrowhead) counts as a line character
    return ~'─│┌┐└┘┬┴├┤┼><'.indexOf( getCell( cursor ) ) ? returnValue : 0;
  }

  //Modulify
  return {
    write: write,
    stringify: stringify,
    setCell: setCell,
    getCell: getCell,
    cursor: cursor,
    backspace: backspace,
    storeVersion: storeVersion,
    restoreVersion: restoreVersion,
    redo: redo,
    isLineCharacter: isLineCharacter
  };
}());

var ui = (function(){
  //Privates
  var fontSize = 15,
      breatheDuration = 5 * 1000, //5 seconds
      lightGrey = 211,
      black = 0,
      greyRange = lightGrey - black,
      p = 20, //Padding..
      magicalMultiplier = 0.8, //Dont ask
      w,  //Width
      h,  //Height
      fh, //fontHeight
      fw, //fontWidth
      vo, //Vertical offset for writing
      ho, //Horizontal offset for writing
      metrics,
      box;
  //Exposed

  function breathe() {
    //Set the `caret` in a grey shade that follows a breathing cycle
    var rightNow = new Date(),
        position = rightNow % breatheDuration,
        radians = position / breatheDuration * Math.PI,
        sine = Math.sin( radians ),
        shade = Math.floor( lightGrey - greyRange/2 + sine * greyRange / 2 ),
        cx = model.cursor.x,
        cy = model.cursor.y;
    context.strokeStyle = 'rgb(' + shade + ',' + shade + ',' + shade + ')';
    context.lineWidth = 0.5;
    context.beginPath();
    context.moveTo(cx*fw + p, cy*fh + p);
    context.lineTo(cx*fw + p + fw, cy*fh + p);
    context.lineTo(cx*fw + p + fw, cy*fh + p + fh);
    context.lineTo(cx*fw + p , cy*fh + p + fh);
    context.lineTo(cx*fw + p, cy*fh + p);
    context.stroke();
  }

  function drawBox() {
    context.strokeStyle = 'black';
    context.lineWidth = 0.5;
    context.beginPath();
    context.moveTo(box.from.x *fw + p, box.from.y *fh + p);     //Top left
    context.lineTo((box.to.x+1) *fw + p, box.from.y *fh + p);   //Top Right
    context.lineTo((box.to.x+1) *fw + p, (box.to.y+1) *fh + p); //Bottom Right
    context.lineTo(box.from.x *fw + p, (box.to.y+1) *fh + p);   //Bottom Left
    context.lineTo(box.from.x *fw + p, box.from.y *fh + p);     //Top Left
    context.stroke();
  }

  function setBox( cell1 , cell2 ) {
    box = new Box( cell1 , cell2 );
  }

  function clearBox() {
    box = undefined;
  }

  function getBox() {
    return box;
  }

  function adapt() {
    //Adapt the UI to the current size of the body
    //Clearly, the UI maintains its own model
    w = canvas.width = document.body.clientWidth;
    h = canvas.height = window.innerHeight;
    document.documentElement.style.overflow = 'hidden';
    context.font = fontSize + (~navigator.userAgent.indexOf('Mac') ? 'px Consolas' : 'px Monospace'); //EVIL Mac Fix
    metrics = context.measureText('A');
    fh = fontSize+1;
    fw = metrics.width;
    vo = p+fh*magicalMultiplier;
    ho = p;
    drawGrid();
  }

  function drawGrid() {
    context.clearRect(0, 0, canvas.width, canvas.height);
    for (var x = 0; x < w; x += fw) {
      context.moveTo(x + p, 0 + p);
      context.lineTo(x + p, h );
    }
    for (var y = 0; y < h; y += fh) {
      context.moveTo(0 + p, y + p);
      context.lineTo(w , y + p);
    }
    context.lineWidth = 0.1;
    context.strokeStyle = 'lightgrey';
    context.stroke();
    context.strokeStyle = 'black';
    context.fillStyle = 'black';
    var string = model.stringify();
    if( string ){
      var strings = string.split('\n');
      for( var row = 0 ; row < strings.length ; row++ )
        for( var col = 0 ; col < strings[row].length ; col++ )
          context.fillText( strings[row][col] , ho + fw * col , vo + fh * row );
    }
    if( box )
      drawBox( box.from , box.to );
  }

  function translate( cursor ) {
    //Translate screen coordinates to cell coordinates
    var x = Math.floor((cursor.x - p ) / fw ),
        y = Math.floor((cursor.y - p ) / fh );
    //Cheat on boundaries
    x = x < 0 ? 0 : x;
    y = y < 0 ? 0 : y;
    //Return a new cell cursor object
    return new Cursor(x,y);
  }

  //Modulify
  return {
    breathe : breathe,
    drawGrid : drawGrid,
    adapt: adapt,
    translate: translate,
    setBox: setBox,
    clearBox: clearBox,
    getBox: getBox
  };
}());

var controller = (function(){
  var BACKSPACE = 8,
      TAB = 9,
      ARROW_LEFT = 37,
      ARROW_UP = 38,
      ARROW_RIGHT = 39,
      ARROW_DOWN = 40,
      DELETE = 46,
      KEY_B = 66,
      KEY_C = 67,
      KEY_Y = 89,
      KEY_Z = 90;

  var startingCell, currentCell;

  function normalizeEvent(e) {
    //Normalize which for key events, inspiration:SO
    if ( e.which === null && (e.charCode !== null || e.keyCode !== null) ) {
      e.which = e.charCode !== null ? e.charCode : e.keyCode;
    }
  }

  function onContentLoaded() {
    //Could have been called onInit
    //Set the 3 globals
    canvas = document.getElementById('canvas');
    context = canvas.getContext('2d');
    clipboard = document.getElementById('clipboard');
    //Occupy full body & draw the initial UI
    ui.adapt();
    //Set up listeners
    window.addEventListener( 'resize', ui.adapt );
    canvas.addEventListener( 'mouseover', onMouseOver );
    canvas.addEventListener( 'mousemove', onMouseOver );
    canvas.addEventListener( 'mousedown', onMouseDown );
    canvas.addEventListener( 'mouseup', onMouseUp );
    canvas.addEventListener( 'click', onClick );
    document.addEventListener( 'keypress', onKeyPress );
    document.addEventListener( 'keydown', onKeyDown );
    document.addEventListener( 'paste', onPaste );
    //Make the cursor breathe
    setInterval( ui.breathe , 1000/12 ); // 12 frames per second
  }

  function onPaste(e) {
    //Determine where to paste, paste, determine & set new cursor location, redraw everything
    var cursor = model.cursor;
    model.cursor = model.write( cursor.x , cursor.y , e.clipboardData.getData('text/plain') );
    ui.drawGrid();
  }

  function onMouseDown(e) {
    //Remember where we start
    startingCell = ui.translate( e );
    //Clear any old boxes
    ui.clearBox();
    //Force the UI in onMouseOver to draw the new cursor without a mouse up
    currentCell = { x : -1 , y : -1 };
    onMouseOver(e);
  }

  function onMouseUp() {
    ui.setBox( startingCell , currentCell );
    currentCell = startingCell = undefined;
  }

  function onMouseOver(e) {
    //Are we dragging?, which cell are we on, update if we are in a different cell, and draw
    if(!startingCell)
      return;
    var cell = ui.translate( e );
    if( cell.x != currentCell.x || cell.y != currentCell.y ) {
      currentCell = cell;
      model.cursor = cell;
      ui.setBox( startingCell , currentCell );
      ui.drawGrid();
    }
  }

  function onClick(e) {
    //Move the cursor to where the user clicked
    model.cursor = ui.translate( e );
    ui.drawGrid();
  }

  function onKeyPress(e) {
    //console.log( e , String.fromCharCode( e.charCode || 32 ) );
    if( e.ctrlKey )
      return;
    model.storeVersion();
    model.setCell( model.cursor, String.fromCharCode( e.charCode || 32 ) );
    ui.clearBox();
    model.cursor.advance();
    ui.drawGrid();
  }

  function onKeyDown(e) {
    //console.log( e , String.fromCharCode( e.charCode || 32 ) );
    normalizeEvent(e);
    var box = ui.getBox();
    if( e.which == BACKSPACE ) {
      model.backspace();
      e.preventDefault();
    } else if( e.which == TAB ) {
      model.cursor = model.setCell( model.cursor , '\t' );
      e.preventDefault();
    } else if( e.which == ARROW_LEFT ){
      model.cursor.recede();
    } else if( e.which == ARROW_RIGHT ){
      model.cursor.advance();
    } else if( e.which == ARROW_UP ){
      model.cursor.up();
    } else if( e.which == ARROW_DOWN ){
      model.cursor.down();
    } else if( e.keyIdentifier == 'Home' && e.ctrlKey ){
      model.cursor = new Cursor( 0, 0 );
    } else if( e.keyIdentifier == 'Home' ) {
      //Move to complete left unless already there, in that case go top left
      model.cursor.x ? model.cursor.x = 0 : model.cursor.y = 0;
    } else if( e.which == KEY_C && e.ctrlKey ) {
      //Copy a box or a whole character
      if( box ) {
        var lines = [];
        box.eachRow( function(y){ lines[ y - box.from.y ] = ''; } );
        box.each( function(cursor){ lines[ cursor.y - box.from.y ] += model.getCell(cursor); } );
        var line = lines.join('\n');
        clipboard.value = line;
      } else {
        clipboard.value = model.getCell( ui.getCursor() ) || ' ';
      }
      clipboard.focus();
      clipboard.select();
    } else if( e.which == KEY_B && e.ctrlKey ) {
      /* Styles: */
      var onLeft = 1;   //Bitflag 1
      var onRight = 2;  //Bitflag 2
      var onTop = 4;    //Bitflag 3
      var onBottom = 8; //Bitflag 4
      var lineRules = {};
      //Map neighbour bitflags to box-drawing glyphs
      lineRules[onLeft+onRight] = '─';
      lineRules[onTop+onBottom] = '│';
      lineRules[onTop+onLeft] = '┘';
      lineRules[onTop+onRight] = '└';
      lineRules[onBottom+onLeft] = '┐';
      lineRules[onBottom+onRight] = '┌';
      lineRules[onLeft+onRight+onTop+onBottom] = '┼';
      lineRules[onLeft+onRight+onTop] = '┴';
      lineRules[onLeft+onRight+onBottom] = '┬';
      lineRules[onTop+onBottom+onLeft] = '┤';
      lineRules[onTop+onBottom+onRight] = '├';
      if( box ) {
        model.storeVersion();
        //Show intent
        box.eachRow( function(y){
          model.write( box.from.x, y, '│' );
          model.write( box.to.x, y , '│' );
        } );
        box.eachColumn( function(x){
          model.write( x, box.from.y, '─' );
          model.write( x, box.to.y , '─' );
        } );
        //Line up
        box.each( function lineUp( cursor ) {
          if( !model.isLineCharacter( cursor , 0 , 0 , true ) )
            return;
          var neighbourBitFlag =
            model.isLineCharacter( cursor , -1 , +0 , onLeft ) +
            model.isLineCharacter( cursor , +1 , +0 , onRight ) +
            model.isLineCharacter( cursor , +0 , +1 , onBottom ) +
            model.isLineCharacter( cursor , +0 , -1 , onTop );
          if( lineRules[neighbourBitFlag] )
            model.setCell( cursor , lineRules[neighbourBitFlag] );
        });
      }
    } else if ( e.which == DELETE ) {
      if( box ){
        box.each( function(cursor){ model.setCell( cursor, ' ' ); } );
      }
    } else if ( e.which == KEY_Z && e.ctrlKey ) {
      //Undo
      model.restoreVersion();
    } else if ( e.which == KEY_Y && e.ctrlKey ) {
      //Redo
      model.redo();
    }
    //Clear the selection box after a key press (Control does not count)
    if( e.keyIdentifier != 'Control' && box ) {
      ui.clearBox();
    }
    //Draw the grid in all cases
    ui.drawGrid();
  }

  return {
    onContentLoaded: onContentLoaded,
  };
}());

//Engage!
document.addEventListener( 'DOMContentLoaded', controller.onContentLoaded, false );
})();

A plunker can be found here.
ASCII flow chart drawer
javascript;google chrome;ascii art
I like it! I've got some quibbles about the style, but that's personal opinion, and the code works, so I won't go into that. I haven't gone through the code line-by-line (there's a lot!), but I've tried looking at the overall structure.

If I were to suggest something, it might be a more declarative way to handle events, keyboard events in particular. It's a minor thing, but the sort of thing I find more readable/direct. You've got a lot of functions that are simply named for the event they handle, but then you have to repeat that name when attaching them to events. I'd consider defining them as properties on an object, and then loop through them, e.g.

var canvasEvents = {
  mouseover: function () { ... },
  mousemove: function () { ... },
  mousedown: function () { ... },
  mouseup: function () { ... },
  click: function () { ... }
};

for(var event in canvasEvents) {
  canvas.addEventListener(event, canvasEvents[event]);
}

Again, it's a minor thing, but defining the event handlers in one place and having them automatically attached would keep things nicely contained, I think.

And you could do a similar sort of thing for keyboard events, to avoid the large else if... else if... structure. For instance,

// Not the complete list - just a sampling
// Order still matters of course, so ctrl+Home gets matched before Home
var keyCommands = [
  { mask: { which: BACKSPACE }, preventDefault: true, handler: model.backspace },
  { mask: { which: ARROW_UP }, handler: model.cursor.up },
  { mask: { keyIdentifier: 'Home', ctrlKey: true },
    handler: function () { model.cursor = new Cursor( 0, 0 ) } },
  { mask: { keyIdentifier: 'Home' },
    handler: function () { model.cursor.x ? model.cursor.x = 0 : model.cursor.y = 0 } },
  { mask: { which: KEY_C, ctrlKey: true }, handler: function () { /*... etc ...*/ } },
  ....
];

// ....

function handleKeyCombo(event) {
  normalizeEvent(event);

  function matchMask(mask) {
    for( var property in mask ) {
      if(event[property] != mask[property]) { // maybe use a strict comparison; your call
        return false;
      }
    }
    return true;
  }

  for( var i = 0, l = keyCommands.length ; i < l ; i++ ) {
    var command = keyCommands[i];
    if( matchMask(command.mask) ) {
      command.handler(event); // or use call/apply if necessary
      if(command.preventDefault) {
        event.preventDefault();
      }
      break;
    }
  }
}

(note that you'll have to figure out a way to pass the local var box to the handlers, now that they're defined in a different scope)

Aside: It might be a good use-case for the Map object, if available - masks as keys, handlers as values?

Alternatively, you could also make an addKeyboardShortcutListener function that works similar to addEventListener, but accepts the key-combo mask as well as a handler. Basically, you can go more or less in-depth with this, but it'd make it (hopefully) easier to set up keyboard handling, and (with some modification) use different key combos depending on platform. For instance, on the Mac, cmd is used instead of ctrl, but checking metaKey instead of ctrlKey isn't always enough. Something like undo is cmd+shift+Z by convention, and cmd + arrows is used for most navigation (though Home/End works too, and sometimes emacs-style combos too). Not that I'm demanding Mac support, but in general it might be nice to make the key-combo mapping more flexible.

You've got a nice separation of model, ui, controller and so forth, but perhaps a bit more encapsulation within each of those would be nice, such as abstracting/encapsulating the key-combo matching. Semi-related: A long time ago, I wrote something to handle keyboard shortcuts. Maybe you can use it for something. I'm linking it mostly because for some complex combinations, key events start to get weird. I don't think the code quite works right anymore, but the technique might still be viable.

Oh, and one thing I noticed is that making a 1-column vertical selection and drawing it gave me a result that seems a little off. I expected it to only draw vertical pipe glyphs, or, if the top and bottom should be flat, use the 3-way pipe glyphs for the ends: ┬ or ┴
_unix.64552
I have a Wacom CTF-221 graphics tablet, and I use it with the Linux Wacom drivers. However, when I draw, it's annoying that the mouse pointer moves with the pen and clicks outside of the drawing window.

When I draw in GIMP, I see another pointer that is locked inside the image, so I think that my PC sees two devices: one as a tablet and one as a virtual mouse.

Is it possible to disable this behavior, so that my tablet movement will be seen only by the program I'm drawing in?
Prevent Wacom tablet from moving mouse pointer
mouse
null
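A first diagnostic step, assuming an X11 session with the Wacom driver loaded, would be to confirm the suspicion that the tablet registers as several input devices:

xinput list

If the tablet shows up with separate stylus/eraser/pad entries, their behaviour can then be inspected and adjusted per device.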
_softwareengineering.340127
I've been brainstorming on a specific problem for a while, and today I've thought of a solution. But I am not too sure about it, hence this question for feedback and suggestions.

I'll use the simple example of a T-Shirt product. The T-Shirt has multiple options:

Color: White, Black
Size: Small, Medium, Large

Now in the case of a White T-shirt, there is no Large and Medium, so the Large and Medium options should not be available when selecting White. This means that if you first select Large or Medium, then White should not be available.

The previous implementation was done as a tree structure, so you always have to select Color, then Size. But it's not really a tree the way I see it. My idea was to create a list of rules.

Pseudo code:

rule1: if color is white, sizes not allowed are [large, medium]
// then generate the opposite rules based on rule1:
rule2: if size is medium, colors not allowed are [white]
rule3: if size is large, colors not allowed are [white]
store rules in database

When you are dealing with products that have many options this could get complicated; that's why I thought generating the other rules based on the first rule could reduce the complexity. Thoughts, anyone?

Update: Someone remarked below, and I realised I used the wrong example. It's not a product with a SKU and a stock level; it's a service. A better example would be a configurable computer: many different CPU, RAM, GPU, etc. combinations, which all produce different prices, and depending on the specific motherboard or some specific selection, not all CPUs and/or RAM etc. are selectable.

Update 2: The products/services each have around 7 options. Each option can have between 2 and 7 values. A matrix structure, as suggested, would become complex IMO. Also, we've moved away from having a price for each single variation (which was ridiculous to manage) to having formulas that generate prices dynamically.

There was always an issue with the DB load because of the tree structure. Each time an option is selected it has to fetch the values of the subsequent options. Each time you add a new value to an option you also duplicate a lot of the subsequent options. So it gets out of hand really quickly.

To go into more detail, my solution was to use a document-based database (NoSQL). You would have a Products or Services collection. Each product/service would look something like this:

{
  product: "T-Shirt",
  options: {
    size: [],
    color: [],
    pattern: [],
    ... about 4 more
  },
  rules: [....],
}

Initially you just load all the options in the interface. Then, as you make selections, you run the rules to disable the specified option values. Using such a structure seems to me to have less overhead than a large relational table with all the options (which is already massive), because the rules are embedded in each product/service. The client side also benefits, because it doesn't have to query the DB each time an option is changed.
Modeling complex product options
design
null
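A minimal sketch of the rule-generation idea, with a hypothetical rule shape (plain JavaScript, matching the NoSQL document above):

// forward rule, e.g. rule1 from the question:
// { option: 'color', value: 'white',
//   disallow: { option: 'size', values: ['large', 'medium'] } }
function invertRule(rule) {
  // one inverse rule per disallowed value, mirroring rule2/rule3 in the question
  return rule.disallow.values.map(function (value) {
    return {
      option: rule.disallow.option,
      value: value,
      disallow: { option: rule.option, values: [rule.value] }
    };
  });
}

Generating the inverses from the forward rule keeps a single source of truth, so only rule1 ever has to be edited by hand.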
_codereview.28611
Straight to the point: Can you give me pointers on how to make this code more maintainable? In the future, I want to add more things to this, but first it should be easy to maintain/read. Should making a library be considered?

What it does: Takes input from the serial line and converts it to servo commands, plus all the bells and whistles added.

#include <NewPing.h>
#include <Servo.h>

#define errorLED 13
#define ThrottlePin 2
#define RollPin 3
#define PitchPin 5
#define YawPin 4
#define Aux1Pin 6
#define Trigger 18
#define Echo 19
#define HEADING_PIN 15

NewPing sonar(Trigger, Echo);

Servo Throttle;
Servo Roll;
Servo Pitch;
Servo Yaw;
Servo Aux1;

unsigned int throttle = DEFAULT_THROTTLE;
unsigned int roll = DEFAULT_ROLL;
unsigned int pitch = DEFAULT_PITCH;
unsigned int yaw = DEFAULT_YAW;
unsigned int aux1 = DEFAULT_AUX1;
unsigned int temp = 0;
unsigned int range = 7;

unsigned long lastSerialData = 0;
unsigned long lastThrottleUpdate = 0;
unsigned long ledPreviousMillis = 0;

const int DEFAULT_THROTTLE = 45;
const int DEFAULT_ROLL = 70;
const int DEFAULT_PITCH = 70;
const int DEFAULT_YAW = 70;
const int DEFAULT_AUX1 = 45;

boolean isLedOn = false;

//Pins 7-12 power
//A0-A3 pin 13-16
boolean index = 7;
boolean circleIndex = 14;

void setup()
{
  Throttle.attach(ThrottlePin);
  Roll.attach(RollPin);
  Pitch.attach(PitchPin);
  Yaw.attach(YawPin);
  Aux1.attach(Aux1Pin);
  Serial.begin(115200);
  for (byte x = 7; x <= 12; x++)
    pinMode(x, OUTPUT);
  for (byte xy = 14; xy <= 17; xy++)
    pinMode(xy, OUTPUT);
  setPowerPinsOn(false);
  setGroundPinsOn(true);
  pinMode(errorLED, OUTPUT);
}

int GetFromSerial()
{
  // wait until we have some serial data
  while (Serial.available() == 0)
  {
    if (millis() - lastSerialData > 1000)
    {
      digitalWrite(errorLED, HIGH);
      readSonar();
      blinkingLed();
      autoLand();
    }
  }
  lastSerialData = millis();
  digitalWrite(errorLED, LOW);
  return Serial.read();
}

void loop()
{
  switch (GetFromSerial())
  {
    case 't':
      temp = 0;
      temp = GetFromSerial() + 45;
      if (temp >= 45 && temp <= 141)
        throttle = temp; //45 to 141
      Throttle.write(throttle);
      break;
    case 'r':
      temp = 0;
      temp = GetFromSerial() + 45;
      if (temp >= 45 && temp <= 141)
        roll = map(temp, 45, 141, 69, 117); //45 to 141
      if (roll < (93 + range) && roll > (93 - range))
        roll = 93;
      Roll.write(roll);
      break;
    case 'p':
      temp = 0;
      temp = GetFromSerial() + 45;
      if (temp >= 45 && temp <= 141)
        pitch = map(temp, 45, 141, 69, 117); //45 to 141
      if (pitch < (93 + range) && pitch > (93 - range))
        pitch = 93;
      Pitch.write(pitch);
      break;
    case 'y':
      temp = 0;
      temp = GetFromSerial() + 45;
      if (temp >= 45 && temp <= 141)
        yaw = map(temp, 45, 141, 68, 117); //45 to 141
      Yaw.write(yaw);
      break;
    case 'a':
      temp = 0;
      temp = GetFromSerial() + 45;
      if (temp >= 45 && temp <= 141)
        aux1 = temp; //45 to 141
      Aux1.write(aux1);
      break;
  } // end switch

  if (throttle <= 45) //Connected but not flying
    circleLed();
  else if (throttle >= 45 && aux1 > 45)
    headingLed();
}

void autoLand()
{
  if (throttle <= 60 && aux1 >= 50)
  {
    throttle = 45;
    aux1 = 45;
  }
  else if (throttle >= 45)
    if (millis() - lastThrottleUpdate > 400)
    {
      throttle = throttle * .95;
      aux1 = 45;
      lastThrottleUpdate = millis();
    }
  writeAllValues();
}

void writeAllValues()
{
  Throttle.write(throttle);
  Roll.write(roll);
  Pitch.write(pitch);
  Yaw.write(yaw);
  Aux1.write(aux1);
}

void setPowerPinsOn(boolean on)
{
  if (on)
  {
    for (byte x = 7; x <= 12; x++)
      digitalWrite(x, HIGH);
  }
  else
    for (byte x = 7; x <= 12; x++)
      digitalWrite(x, LOW);
}

void setGroundPinsOn(boolean on)
{
  if (on)
  {
    for (byte x = 14; x <= 17; x++)
      digitalWrite(x, LOW);
  }
  else
    for (byte x = 14; x <= 17; x++)
      digitalWrite(x, HIGH);
}

void circleLed()
{
  if (millis() - ledPreviousMillis > 400)
  {
    if (circleIndex == 18)
      circleIndex = 14;
    ledPreviousMillis = millis();
    setPowerPinsOn(true);
    setGroundPinsOn(false);
    digitalWrite(circleIndex, LOW);
    circleIndex++;
  }
}

void blinkingLed()
{
  if (millis() - ledPreviousMillis > 1000)
  {
    ledPreviousMillis = millis();
    isLedOn = !isLedOn;
    setPowerPinsOn(isLedOn);
    setGroundPinsOn(true);
  }
}

void headingLed()
{
  if (millis() - ledPreviousMillis > 300 - (throttle - 45) * 2)
  {
    ledPreviousMillis = millis();
    if (index == 13)
      index = 7;
    setPowerPinsOn(false);
    setGroundPinsOn(false);
    digitalWrite(HEADING_PIN, LOW);
    digitalWrite(index, HIGH);
    index++;
  }
}

void readSonar()
{
  int uS = sonar.ping_median();
  Serial.println(uS / US_ROUNDTRIP_CM);
}

Summary:
How can this code be more maintainable?
Should making a library be considered?
Servo commands based on serial port input
c;arduino;serial port;device driver
I'll start by breaking the code down into different files. There are many #define and const int here, and if I am correct that will definitely grow with your project. Having all the constants in a different file is always a good idea if you are considering making it even moderately big. So, to your question "Should making a library be considered?", I would say you should definitely make a library.

After that I'll go and indent this code properly. I couldn't understand where the loop() function started and where it ended on the first go. If you want readability, that is something you must seriously consider. All of your functions need proper indentation, but for the loop() function it is a must.

There is a lot of redundancy in your loop() function. The first 2 lines of each of your cases are the same. That is really bad programming. It made your code so big that you had to write comments to note down the end of your braces. You are writing the same comments over and over again with no actual use. If you needed to change these you'd be getting a headache.

I'd write your loop() function something like this:

#define TEMP_RANGE(x,y) (temp >= x && temp <= y)
#define CHECK_RANGE(x, y, z) ((y + z) > x && x > (y - z))

void loop()
{
  temp = 0;
  temp = GetFromSerial() + 45;
  switch (GetFromSerial())
  {
    case 't':
      if (TEMP_RANGE(45,141))
        throttle = temp;
      Throttle.write(throttle);
      break;
    case 'r':
      if (TEMP_RANGE(45,141))
        roll = map(temp, 45, 141, 69, 117);
      if (CHECK_RANGE(roll, 93, range))
        roll = 93;
      Roll.write(roll);
      break;
    case 'p':
      if (TEMP_RANGE(45,141))
        pitch = map(temp, 45, 141, 69, 117);
      if (CHECK_RANGE(pitch, 93, range))
        pitch = 93;
      Pitch.write(pitch);
      break;
    case 'y':
      if (TEMP_RANGE(45,141))
        yaw = map(temp, 45, 141, 68, 117);
      Yaw.write(yaw);
      break;
    case 'a':
      if (TEMP_RANGE(45,141))
        aux1 = temp;
      Aux1.write(aux1);
      break;
  }

  if (throttle <= 45) //Connected but not flying
    circleLed();
  else if (aux1 > 45)
    headingLed();
}

Note these:

I placed the range checks which were being done more than once into their own define. This made them much easier to read and change. You can make better names based on the context. I didn't use functions, but you can use them for even better abstraction and readability.

Due to proper indentation, people will be able to read this much faster. That is very important if you want maintainability.

I took out redundancy from your code. You were checking whether a number was <= 45, and in the else if you were checking whether it was >= 45. Obviously it is > 45, otherwise the first condition would have executed.

It can be improved more if you use functions or slightly more complex macros. Functions might be preferable.

I think I covered the biggest issues. Use proper indentation in your whole code; that is essential. Maybe add function prototypes at the top to make it easier to find what is where. If you are thinking of making it big, try to break it down into logical parts first. Use braces consistently: sometimes you are using braces for containing single statements and sometimes you are not. That is a source of many bugs.

Hope this helped. If you want more feedback, I suggest you fix these problems and ask another question with the updated code.
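As the answer notes, functions might be preferable to the macros; a minimal sketch of the same range check as a function (names assumed):

static bool inRange(unsigned int value, unsigned int low, unsigned int high)
{
  // true when value lies in the closed interval [low, high]
  return value >= low && value <= high;
}

The call sites then read if (inRange(temp, 45, 141)) ..., with real type checking and no macro-expansion surprises.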
_unix.216775
For the last 5 years, I have used Linux as my everyday OS for performing scientific computing. My work recently gave me a Mac of which I will be the primary user for the next few months. I keep running into conflicts between the Free-BSD environment on the Mac and the GNU environment I am used to, both with bash scripts I have set up and as I try to run commands (coreutils, findutils, etc.). I do not want to switch completely to the Free-BSD utilities, as all my other computers, as well as our HPCs, use Linux with the GNU utilities. I want to avoid having to maintain two sets of bash scripts, and also having to remember the nuances of the differing flags and functionality between the two systems. I also do not want to break any of the Mac's GUI utilities etc. that other users will use (either in the next few months or when it is given to someone else). Additionally, responses to this related question warn against completely replacing the Mac's Free-BSD utilities with GNU ones.

Is it possible to install/set up a separate bash environment that uses only the GNU utilities while leaving the system Free-BSD ones in place? I expect the most promising option is setting up my $PATH variable to point to a directory containing the GNU executables (with their standard names) while ignoring the Free-BSD ones. How could I apply this to my cross-platform bash scripts? Are there alternative options worth considering?
How can I setup a separate bash environment with only GNU utilities on OS X?
shell;osx
First, this is about a lot more than just coreutils. The BSD equivalent to GNU findutils is also quite different, pretty much every command related to dynamic linkage is different, etc.

And then on top of that, you have to deal with versioning differences: OS X still ships a lot of older software to remain on GPL2 instead of GPL3, such as Bash 3.x instead of Bash 4.x. The autotools are also often outdated with respect to bleeding-edge Linux distros.

The answer to the core part of your question is, Sure, why not? You can use Homebrew to install all of these alternative GNU tools, then put $(brew --prefix coreutils)/libexec/gnubin and /usr/local/bin first in your PATH to make sure they're found first:

export PATH=$(brew --prefix coreutils)/libexec/gnubin:/usr/local/bin:$PATH

If for some reason brew installs a package to another location, also include that in the PATH variable.

If you would rather only replace a few packages, the tricky bit is dealing with all the name changes. Whenever brew installs a program that already has an implementation in the core OS, such as when installing GNU coreutils, it names its version differently so that you can run either, depending on your need at the time. Instead of renaming all of these symlinks, I recommend that you fix all of this up with a layer of indirection:

$ mkdir ~/linux
$ cd ~/linux
$ ln -s /usr/local/bin/gmv mv
...etc for all the other tools you want to rename to cover OS versions
$ export PATH=$HOME/linux:$PATH
...try it out...

Once you're happy with your new environment, you can put export PATH=$HOME/linux:$PATH into your ~/.bash_profile.

That takes care of interactive use, either with bulk replacement or single application replacement. Unfortunately, it does not completely solve the shell script problem, because sometimes shell scripts get their own environment, such as when launched from cron. In that case, you could modify the PATH at the top of each cross-platform shell script:

#!/bin/bash
export PATH=$HOME/linux:$(brew --prefix coreutils)/libexec/gnubin:/usr/local/bin:$PATH

You do not need to make it conditional, since it is just offering the shell another place to look for programs.

Footnotes
e.g. /usr/local/bin/gmv → ../Cellar/coreutils/$version/bin/gmv

Related Posts
How to replace Mac OS X utilities with GNU core utilities?
Install and Use GNU Command Line Tools on Mac OS X
_unix.180306
I'd like to use fgrep to handle searching literal words with periods and other meta-characters in grep, but I need to ensure the word is at the beginning of the line.For example, fgrep 'miss.' will match miss. exactly which is what I want, but also admiss. or co. miss. which I don't want.I might be able to escape meta-characters, e.g. grep '^miss\.', but the source is so large, I'm bound to miss something, and then need to run it again (will take the whole night). And in some cases, e.g. \1, the escaped code is the one with meta-meaning.Any way around this?
fgrep beginning of line?
bash;grep;quoting
With GNU grep, if built with PCRE support, and assuming $string doesn't contain \E, you can do:

grep -P "^\Q$string"
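For example, with the word from the question:

string='miss.'
grep -P "^\Q$string" file.txt

matches lines beginning with the literal miss. but no longer matches admiss. or co. miss.; \Q quotes everything that follows, and without a closing \E the quoting simply runs to the end of the pattern.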
_webmaster.2854
For example, when I dump the response header for my server I get:Server: Apache/2.2.11 (Ubuntu) PHP/5.2.6-3ubuntu4.5 with Suhosin-Patch mod_ssl/2.2.11 OpenSSL/0.9.8gIs this used for anything? Is it a security risk (albeit small) broadcasting the server makeup?
Does the 'Server' header serve any purpose?
security;http headers
No, it is not used for anything important. (Netcraft's server market share surveys probably use it, as presumably do other third-party surveys.)

Yes, it is a (very) small security issue. Of course your server should be secured and up to date at all times, but having an extra layer of 'obscurity' on top of a well-secured server is only beneficial. If nothing else, if an attacker needs to undertake extensive 'fingerprinting' before attacking, then you might get some early warning of an attack if you monitor your logfiles closely.

You can safely turn down the level of detail being broadcast if you want to. On the other hand, it isn't a big deal, and if you're on a shared server where you cannot change this, then don't sweat it.
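For Apache specifically (the server in the question's header), the amount of detail is controlled by the ServerTokens directive; a minimal sketch for the server configuration:

ServerTokens Prod
ServerSignature Off

With ServerTokens Prod, the header shrinks to just Server: Apache.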
_unix.370880
I copy-paste a hash (#) that is part of a filepath from Gedit into an internet browser to read a PDF file, unsuccessfully: no file is found when the hash symbol comes from Gedit. Typing the hash symbol there directly from the keyboard is interpreted correctly. Copy-pasting the Gedit hash into Vim shows ASCII 035 correctly; it was also tested with the ASCII tool here.

Example filepath misinterpreted in the internet browser, where # wrongly expands to %23:

/home/masi/Documents/Edition.pdf#page=605

Do:
1. Copy the filepath to Gedit
2. Copy the filepath from Gedit
3. Paste the filepath into any internet browser

Output: the # symbol is expanded to %23

Methods of inserting the filepath which are correctly interpreted:
- type the hash directly into the internet browser field

OS: Debian 8.7
Internet browsers: Google Chrome 58.0.x, Firefox latest
Why Gedit's hash symbol is expanded and interpreted wrongly in internet browsers?
debian;character encoding;gedit;ascii
UNIX filenames are not URLs. You can see that '#' is not interpreted in unix filenames, but it is in URLs.

$ ls '/home/masi/Documents/Edition.pdf#page=605'
ls: cannot access '/home/masi/Documents/Edition.pdf#page=605': No such file or directory
$ curl '/home/masi/Documents/Edition.pdf#page=605'
curl: (3) <url> malformed
$ curl 'file:///home/masi/Documents/Edition.pdf#page=605'
curl: (37) Couldn't open file /home/masi/Documents/Edition.pdf

Firefox is applying the correct escaping, to protect the filename character # from being interpreted as delimiting a fragment in the URL.
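In other words, the two spellings mean different things in a browser (paths from the question):

file:///home/masi/Documents/Edition.pdf#page=605   (open Edition.pdf, jump to page 605)
file:///home/masi/Documents/Edition.pdf%23page=605 (open a file literally named Edition.pdf#page=605)

When the pasted # is escaped to %23, the intended fragment becomes part of the filename, and no such file exists; hence the error.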
_softwareengineering.225924
Fail-fast seems like the right approach, since it simplifies bug detection. But it can hurt performance, because the same thing gets checked at several levels of the system.

A simple example: there is a function whose input parameters must not be null, and a function wrapping it that also expects the same parameters to be non-null. After some work, the wrapper passes the input parameters to the first function, so the same items are checked twice: at the beginning of the wrapper and inside the wrapped function.

So I would like to know how widespread this style is. Should I write fail-fast code, or check everything just once?
Fail-fast paradigm overheads
programming practices;exceptions;error handling
You're missing a vital point: it's not an either-or scenario.

You only need to check untrusted parameters. In general, this means the borders of your public interface. If you have a chain of public functions that call other public functions and so on, yes, you'll need to check the input multiple times. You may also need to revisit your design to properly abstract and encapsulate things that maybe don't need to be public.

Another spot where this occurs is working with components that can fail (like databases or network connections). If a network connection fails, you don't just pass null back up the call stack; you identify it early and throw an exception or otherwise abort. There's no need to check the return value all the way up the stack.
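A minimal sketch of this idea (names are hypothetical; Java used for illustration): the public method validates its untrusted input exactly once, and the private helper trusts its caller:

public int wordCount(String text) {
    // public boundary: validate untrusted input exactly once
    if (text == null) throw new IllegalArgumentException("text must not be null");
    return countTokens(text);
}

private int countTokens(String text) {
    // internal helper: relies on the precondition established above, no re-check
    return text.trim().isEmpty() ? 0 : text.trim().split("\\s+").length;
}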
_unix.12570
Is there a way to shebang-ify ftp and write small ftp scripts? For example:

#!/usr/bin/ftp
open 192.168.1.1
put *.gz
quit

Any thoughts?
ftp and shebang
shell;shell script;ftp
Not with the ftp programs I've run into, as they expect a script on their standard input but a shebang would pass the script name on their command line. You can use a here document to pass a script to ftp through a shell wrapper.

#!/bin/sh
ftp <<EOF
open 192.168.1.1
put *.gz
EOF

Lftp accepts a script name passed as an argument.

#!/usr/bin/lftp -f
open 192.168.1.1
put *.gz

Ncftp comes with two tools ncftpget and ncftpput for simple batches of gets or puts.

Zsh includes an FTP module. Using a proper shell rather than a straight FTP script has the advantage that you can react to failures.

#!/bin/zsh
zmodload zsh/zftp
open 192.168.1.1
put *.gz

Of course there are plenty of other languages you could use: Perl, Python, Ruby, etc.

Another approach is to mount the FTP server as a directory, and then use cp (or rsync or other tools) to copy files. There are many FUSE filesystems for FTP access, principally CurlFtpFS and LftpFS.

Note that if you were planning to use authentication (likely if you're uploading), and you have control over the server, you'd be better off with SSH access. It's more secure and more flexible. To copy files over SSH, you can use scp or sftp, or rsync for efficient synchronization (if some of the files may already be there), or Unison (for bidirectional synchronization), or mount with SshFS.
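For completeness, lftp can also take its commands directly on the command line with -c, which avoids a script file altogether, e.g.:

lftp -c 'open 192.168.1.1; mput *.gz'

(mput rather than put here, so the wildcard is expanded by lftp itself).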
_unix.351199
This is my homework to write a calculator as a shell script, but there are two errors and I couldn't find the solutions. Can you help me?

echo "---------Welcome to Simple Calculator--------"
echo "p=PLUS"
echo "m=MINUS"
echo "x=MULTIPLICATION"
echo "d=DIVISION"
read -p "Enter your choice " ch
if $ch -eq p
then
    echo "Enter Two Number For PLUS"
    read x
    read y
    echo "Sonu: $((x+y))"
elif $ch -eq m
then
    echo "Enter Two Number For MINUS"
    read x
    read y
    echo "Sonu: $((x-y))"
elif $ch -eq x
then
    echo "Enter Two Number For MULTIPLICATION"
    read x
    read y
    echo "Sonu: $((x\*y))"
elif $ch -eq d
then
    echo "Enter Two Number For DIVISION"
    read x
    read y
    echo "scale=2;x/y" | bc
else
    echo "Stopping calculator"
fi
line 35: unexpected EOF while looking for matching '"' and line 40: unexpected end of file
linux;syntax
null
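For reference, a sketch of the test syntax the script seems to be reaching for: -eq is for integer comparison only, and any comparison needs the [ ] test command, so comparing against a letter like p wants string comparison (POSIX sh assumed):

if [ "$ch" = "p" ]
then
    ...
fi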
_unix.193290
I'm running OpenMediaVault which is a Debian based Linux distribution and the file command is missing.Is there any way to install it?
Missing file command?
linux;debian;package management
On a Debian system, file can be installed with:

sudo apt-get install file
_unix.212232
When I try to install a 32bit library with apt-get install liblua5.2:i386, apt warns that it is going to uninstall a number of essential 64bit packages. Some of them are being replaced with 32bit versions, but others will not be replaced. Aren't 32bit and 64bit packages supposed to be able to work side by side? The system is Ubuntu 14.04 64bit, and essential packages like kde-plasma-desktop, kde-workspace, build-essential, gcc-4.8 etc. are slated for removal. Is that a fault in the design of the package and its dependents?

The following packages will be REMOVED
  build-essential cpp cpp-4.8 g++ g++-4.8 gcc gcc-4.8 gcc-4.8-multilib
  gcc-multilib kde-plasma-desktop kde-workspace kde-workspace-bin
  libbonobo2-0 libbonoboui2-0 libgnome2-0 libgnome2-bin libgnome2-perl
  libgnomeui-0 libidl-common libidl0 liblua5.2-rrd-dev liblua5.2-rrd0
  liborbit2 librrd4 libtool php5-dev shutter x11-apps x11-session-utils
  x11-xserver-utils

The output in full:

The following extra packages will be installed:
  gcc-4.8-base gcc-4.8-base:i386 lib32asan0 lib32atomic1 lib32gcc-4.8-dev
  lib32gomp1 lib32itm1 lib32quadmath0 libasan0 libatomic1 libc6-dev:i386
  libdbi1:i386 libgcc-4.8-dev libgfortran3 libgomp1 libitm1 libquadmath0
  libreadline-dev:i386 libreadline6-dev:i386 librrd4:i386
  libstdc++-4.8-dev libstdc++6 libstdc++6:i386 libtinfo-dev:i386 libtsan0
  libx32asan0 libx32atomic1 libx32gcc-4.8-dev libx32gomp1 libx32itm1
  libx32quadmath0 linux-libc-dev linux-libc-dev:i386
Suggested packages:
  glibc-doc:i386 manpages-dev:i386 libstdc++-4.8-doc
Recommended packages:
  gcc:i386 c-compiler:i386 ttf-dejavu:i386 ttf-bitstream-vera:i386
The following packages will be REMOVED
  build-essential cpp cpp-4.8 g++ g++-4.8 gcc gcc-4.8 gcc-4.8-multilib
  gcc-multilib kde-plasma-desktop kde-workspace kde-workspace-bin
  libbonobo2-0 libbonoboui2-0 libgnome2-0 libgnome2-bin libgnome2-perl
  libgnomeui-0 libidl-common libidl0 liblua5.2-rrd-dev liblua5.2-rrd0
  liborbit2 librrd4 libtool php5-dev shutter x11-apps x11-session-utils
  x11-xserver-utils
The following NEW packages will be installed
  libc6-dev:i386 libdbi1:i386 liblua5.2-0:i386 liblua5.2-0-dbg:i386
  liblua5.2-dev:i386 liblua5.2-rrd-dev:i386 liblua5.2-rrd0:i386
  libreadline-dev:i386 libreadline6-dev:i386 librrd4:i386
  libtinfo-dev:i386 linux-libc-dev:i386
The following packages will be upgraded:
  gcc-4.8-base gcc-4.8-base:i386 lib32asan0 lib32atomic1 lib32gcc-4.8-dev
  lib32gomp1 lib32itm1 lib32quadmath0 libasan0 libatomic1 libgcc-4.8-dev
  libgfortran3 libgomp1 libitm1 libquadmath0 libstdc++-4.8-dev libstdc++6
  libstdc++6:i386 libtsan0 libx32asan0 libx32atomic1 libx32gcc-4.8-dev
  libx32gomp1 libx32itm1 libx32quadmath0 linux-libc-dev
26 to upgrade, 12 to newly install, 30 to remove and 316 not to upgrade.
Need to get 12.1 MB of archives.
After this operation, 73.3 MB disk space will be freed.
Do you want to continue? [Y/n]
Why is installing a 32bit package on a 64bit system warning about removing critical 64bit packages?
apt
Try apt-get install liblua5.2-0:i386 instead; there is no liblua5.2 package, so apt-get install liblua5.2:i386 is trying to install liblua5.2-dev:i386, liblua5.2-0-dbg:i386 and liblua5.2-0:i386. The -dev package is the one causing the removals.

The search extension happens because the package name given contains a .; from apt-get's manual:

If no package matches the given expression and the expression contains one of '.', '?' or '*' then it is assumed to be a POSIX regular expression, and it is applied to all package names in the database. Any matches are then installed (or removed). Note that matching is done by substring so 'lo.*' matches 'how-lo' and 'lowest'. If this is undesired, anchor the regular expression with a '^' or '$' character, or create a more specific regular expression.

So you could avoid this by running

apt-get install ^liblua5.2:i386$

(which correctly fails). The rule is generalisable apparently; from what I've seen, apt-get tries using the package name as a regex if it doesn't match a package name exactly, even if the expression doesn't contain ., ? or *.
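A safe way to preview what any of these invocations would do is a dry run with apt-get's -s/--simulate option, which prints the resolution without changing the system:

apt-get install -s liblua5.2-0:i386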
_unix.173838
I am curious to know if the following command could be executed on a system/server with the Shellshock bug:

curl -H "User-Agent: () { :; }; sudo /bin/eject" http://example.com/

(This code is an elaborated version of an example here: http://blog.cloudflare.com/inside-shellshock/)

From what I understand, the bug would allow one to inject code into the system, but not necessarily execute code with elevated privileges. If this is not possible, is there another way that code could be injected to give a hacker root privileges? No examples necessary; I am simply curious as to the extent of the damage this bug could potentially cause. I have no malicious intent, I ask out of curiosity.
Can the shellshock bug be exploited to run a command as a privileged user?
bash;shellshock
curl -H "User-Agent: () { :; }; sudo /bin/eject" http://example.com/

wouldn't work because it couldn't find sudo.

curl -H "User-Agent: () { :; }; /usr/bin/sudo /bin/eject" http://example.com/

would manage to invoke sudo, but unless the administrator has configured sudo to allow the user running the web server to run any command as root (which would be the most unwise thing to do), that can't do much.

Even if the remote server allowed the web server user to run a particular command as root: first, you'd need to guess which it is, and then even if it were a bash script, the HTTP_USER_AGENT environment variable would not be passed to it even if sudo were configured with env_reset disabled, because sudo always blacklists the variables whose content starts with ().

Though some scenarios can be imagined, there is no common way for shellshock to be used for local privilege escalation. See this answer on security.stackexchange for more details.
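For context, the widely published local test for Shellshock checks whether bash executes the trailing command while importing a function definition from the environment:

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

A patched bash prints only "this is a test"; a vulnerable one prints "vulnerable" first.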
_unix.223019
After rebooting my Ubuntu 14.04 machine, I could not log into Unity again and had to fall back on GNOME. I discovered that there is something wrong with GLX, since when I run:

/usr/lib/nux/unity_support_test -p

I get the following message:

Error: GLX is not available on the system

Also, the (relevant parts of the) output of less /var/log/Xorg.0.log look like this:

[   682.533] (II) LoadModule: "glx"
[   682.533] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/libglx.so
[   682.533] (EE) Failed to load /usr/lib/x86_64-linux-gnu/xorg/extra-modules/libglx.so: libnvidia-tls.so.349.16: cannot open shared object file: No such file or directory
[   682.533] (II) UnloadModule: "glx"
[   682.533] (II) Unloading glx
[   682.533] (EE) Failed to load module "glx" (loader failed, 7)
[   682.533] (==) Matched nvidia as autoconfigured driver 0
[   682.533] (==) Matched nouveau as autoconfigured driver 1
[   682.533] (==) Matched nvidia as autoconfigured driver 2
[   682.533] (==) Matched nouveau as autoconfigured driver 3
[   682.533] (==) Matched modesetting as autoconfigured driver 4
[   682.533] (==) Matched fbdev as autoconfigured driver 5
[   682.533] (==) Matched vesa as autoconfigured driver 6
[   682.533] (==) Assigned the driver to the xf86ConfigLayout
[   682.533] (II) LoadModule: "nvidia"
[   682.533] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so
[   682.534] (II) Module nvidia: vendor="NVIDIA Corporation"
[   682.534]    compiled for 4.0.2, module version = 1.0.0
[   682.534]    Module class: X.Org Video Driver
[   682.534] (II) LoadModule: "nouveau"
[   682.534] (WW) Warning, couldn't open module nouveau
[   682.534] (II) UnloadModule: "nouveau"
[   682.534] (II) Unloading nouveau
[   682.534] (EE) Failed to load module "nouveau" (module does not exist, 0)
[   682.534] (II) LoadModule: "modesetting"
[   682.534] (WW) Warning, couldn't open module modesetting
[   682.534] (II) UnloadModule: "modesetting"
[   682.534] (II) Unloading modesetting
[   682.534] (EE) Failed to load module "modesetting" (module does not exist, 0)
[   682.534] (II) LoadModule: "fbdev"
[   682.534] (WW) Warning, couldn't open module fbdev
[   682.534] (II) UnloadModule: "fbdev"
[   682.534] (II) Unloading fbdev
[   682.534] (EE) Failed to load module "fbdev" (module does not exist, 0)
[   682.534] (II) LoadModule: "vesa"
[   682.534] (WW) Warning, couldn't open module vesa
[   682.534] (II) UnloadModule: "vesa"
[   682.534] (II) Unloading vesa
[   682.534] (EE) Failed to load module "vesa" (module does not exist, 0)
[   682.534] (==) Matched nvidia as autoconfigured driver 0
[   682.534] (==) Matched nouveau as autoconfigured driver 1
[   682.534] (==) Matched nvidia as autoconfigured driver 2
[   682.534] (==) Matched nouveau as autoconfigured driver 3
[   682.534] (==) Matched modesetting as autoconfigured driver 4
[   682.534] (==) Matched fbdev as autoconfigured driver 5
[   682.534] (==) Matched vesa as autoconfigured driver 6
[   682.534] (==) Assigned the driver to the xf86ConfigLayout
[   682.534] (II) LoadModule: "nvidia"
[   682.534] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so
[   682.534] (II) Module nvidia: vendor="NVIDIA Corporation"
[   682.534]    compiled for 4.0.2, module version = 1.0.0
[   682.534]    Module class: X.Org Video Driver
[   682.534] (II) UnloadModule: "nvidia"
[   682.534] (II) Unloading nvidia
[   682.534] (II) Failed to load module "nvidia" (already loaded, 32523)
[   682.534] (II) LoadModule: "nouveau"
[   682.534] (WW) Warning, couldn't open module nouveau
[   682.534] (II) UnloadModule: "nouveau"
[   682.534] (II) Unloading nouveau
[   682.534] (EE) Failed to load module "nouveau" (module does not exist, 0)
[   682.534] (II) LoadModule: "modesetting"
[   682.535] (WW) Warning, couldn't open module modesetting
[   682.535] (II) UnloadModule: "modesetting"
[   682.535] (II) Unloading modesetting
[   682.535] (EE) Failed to load module "modesetting" (module does not exist, 0)
[   682.535] (II) LoadModule: "fbdev"
[   682.535] (WW) Warning, couldn't open module fbdev
[   682.535] (II) UnloadModule: "fbdev"
[   682.535] (II) Unloading fbdev
[   682.535] (EE) Failed to load module "fbdev" (module does not exist, 0)
[   682.535] (II) LoadModule: "vesa"
[   682.535] (WW) Warning, couldn't open module vesa
[   682.535] (II) UnloadModule: "vesa"
[   682.535] (II) Unloading vesa
[   682.535] (EE) Failed to load module "vesa" (module does not exist, 0)
[   682.535] (II) NVIDIA dlloader X Driver  349.16  Tue Apr 7 23:19:49 PDT 2015
[   682.535] (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs
[   682.535] (++) using VT number 7
[   682.539] (II) Loading sub module "fb"
[   682.539] (II) LoadModule: "fb"
[   682.539] (II) Loading /usr/lib/xorg/modules/libfb.so
[   682.540] (II) Module fb: vendor="X.Org Foundation"
[   682.540]    compiled for 1.15.1, module version = 1.0.0
[   682.540]    ABI class: X.Org ANSI C Emulation, version 0.4
[   682.540] (II) Loading sub module "wfb"
[   682.540] (II) LoadModule: "wfb"
[   682.540] (II) Loading /usr/lib/xorg/modules/libwfb.so
[   682.540] (II) Module wfb: vendor="X.Org Foundation"
[   682.540]    compiled for 1.15.1, module version = 1.0.0
[   682.540]    ABI class: X.Org ANSI C Emulation, version 0.4
[   682.540] (II) Loading sub module "ramdac"
[   682.540] (II) LoadModule: "ramdac"
[   682.540] (II) Module "ramdac" already built-in
[   682.540] (II) NVIDIA(0): Creating default Display subsection in Screen section "Default Screen Section" for depth/fbbpp 24/32
[   682.540] (==) NVIDIA(0): Depth 24, (==) framebuffer bpp 32
[   682.540] (==) NVIDIA(0): RGB weight 888
[   682.540] (==) NVIDIA(0): Default visual is TrueColor
[   682.540] (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0)
[   682.540] (**) NVIDIA(0): Enabling 2D acceleration
[   682.540] (EE) NVIDIA(0): Failed to initialize the GLX module; please check in your X
[   682.540] (EE) NVIDIA(0):     log file that the GLX module has been loaded in your X
[   682.540] (EE) NVIDIA(0):     server, and that the module is the NVIDIA GLX module.  If
[   682.540] (EE) NVIDIA(0):     you continue to encounter problems, Please try
[   682.540] (EE) NVIDIA(0):     reinstalling the NVIDIA driver.
[   682.549] (II) NVIDIA(GPU-0): Found DRM driver nvidia-drm (20150116)
[   682.550] (II) NVIDIA(0): NVIDIA GPU Quadro K2200 (GM107GL-A) at PCI:3:0:0 (GPU-0)
[   682.550] (--) NVIDIA(0): Memory: 4194304 kBytes
[   682.550] (--) NVIDIA(0): VideoBIOS: 82.07.5a.00.01
[   682.550] (II) NVIDIA(0): Detected PCI Express Link width: 16X
[   682.622] (--) NVIDIA(0): Valid display device(s) on Quadro K2200 at PCI:3:0:0
[   682.622] (--) NVIDIA(0):     CRT-0
[   682.622] (--) NVIDIA(0):     DFP-0
[   682.622] (--) NVIDIA(0):     DFP-1
[   682.622] (--) NVIDIA(0):     DFP-2
[   682.622] (--) NVIDIA(0):     DELL U2711 (DFP-3) (connected)
[   682.622] (--) NVIDIA(0):     DELL 2709W (DFP-4) (boot, connected)
[   682.622] (--) NVIDIA(GPU-0): CRT-0: 400.0 MHz maximum pixel clock
[   682.622] (--) NVIDIA(0): DFP-0: Internal TMDS
[   682.622] (--) NVIDIA(GPU-0): DFP-0: 330.0 MHz maximum pixel clock
[   682.622] (--) NVIDIA(0): DFP-1: Internal TMDS
[   682.622] (--) NVIDIA(GPU-0): DFP-1: 165.0 MHz maximum pixel clock
[   682.622] (--) NVIDIA(0): DFP-2: Internal TMDS
[   682.622] (--) NVIDIA(GPU-0): DFP-2: 165.0 MHz maximum pixel clock
[   682.622] (--) NVIDIA(0): DELL U2711 (DFP-3): Internal DisplayPort
[   682.622] (--) NVIDIA(GPU-0): DELL U2711 (DFP-3): 960.0 MHz maximum pixel clock
[   682.623] (--) NVIDIA(0): DELL 2709W (DFP-4): Internal DisplayPort
[   682.623] (--) NVIDIA(GPU-0): DELL 2709W (DFP-4): 960.0 MHz maximum pixel clock
[   682.623] (**) NVIDIA(0): Using HorizSync/VertRefresh ranges from the EDID for display
[   682.623] (**) NVIDIA(0):     device DELL U2711 (DFP-3) (Using EDID frequencies has been
[   682.623] (**) NVIDIA(0):     enabled on all display devices.)
[   682.625] (**) NVIDIA(0): Using HorizSync/VertRefresh ranges from the EDID for display
[   682.625] (**) NVIDIA(0):     device DELL 2709W (DFP-4) (Using EDID frequencies has been
[   682.625] (**) NVIDIA(0):     enabled on all display devices.)
[   682.642] (==) NVIDIA(0):
[   682.642] (==) NVIDIA(0): No modes were requested; the default mode "nvidia-auto-select"
[   682.642] (==) NVIDIA(0):     will be used as the requested mode.
[   682.642] (==) NVIDIA(0):
[   682.642] (II) NVIDIA(0): Validated MetaModes:
[   682.642] (II) NVIDIA(0):     "DFP-4:nvidia-auto-select,DFP-3:nvidia-auto-select"
[   682.642] (II) NVIDIA(0): Virtual screen size determined to be 4480 x 1440
[   682.643] (--) NVIDIA(0): DPI set to (84, 84); computed from "UseEdidDpi" X config
[   682.643] (--) NVIDIA(0):     option
[   682.643] (--) Depth 24 pixmap format is 32 bpp
[   682.644] (II) NVIDIA: Using 3072.00 MB of virtual memory for indirect memory
[   682.644] (II) NVIDIA:     access.
[   682.649] (II) NVIDIA(0): Setting mode "DFP-4:nvidia-auto-select,DFP-3:nvidia-auto-select"
[   682.723] Loading extension NV-GLX
[   682.769] (==) NVIDIA(0): Disabling shared memory pixmaps
[   682.769] (==) NVIDIA(0): Backing store enabled
[   682.769] (==) NVIDIA(0): Silken mouse enabled
[   682.769] (==) NVIDIA(0): DPMS enabled
[   682.769] Loading extension NV-CONTROL
[   682.769] Loading extension XINERAMA
[   682.769] (II) Loading sub module "dri2"
[   682.769] (II) LoadModule: "dri2"
[   682.769] (II) Module "dri2" already built-in
[   682.769] (II) NVIDIA(0): [DRI2] Setup complete
[   682.769] (II) NVIDIA(0): [DRI2]   VDPAU driver: nvidia

I already tried re-installing the NVIDIA driver (346.82) and also tried a newer version (355.06), but I couldn't solve the problem. Also, I can no longer play any of the videos that used to play without problems. How can I make GLX work correctly again?

EDIT: Here is the output of glxinfo in a terminal:

Error: couldn't find RGB GLX visual or fbconfig

I think this confirms that GLX doesn't work. Also, when I run nvidia-settings, under "OpenGL/GLX Information" I see:

Failed to query the GLX server vendor.
Error: GLX is not available on the system
nvidia;opengl;unity
null
_softwareengineering.177703
Being a PHP programmer for the last couple of years, I'm just starting to get into advanced programming styles and using polymorphic patterns. I was watching a video on polymorphism the other day, and the guy giving the lecture said that if at all possible, you should get rid of if statements in your code, and that a switch is almost always a sign that polymorphism is needed. At this point I was quite inspired and immediately went off to try out these new concepts, so I decided to make a small caching module using a factory method. Of course the very first thing I have to do is create a switch to decide what file encoding to choose. DANG!

class Main {
    public static function methodA($parameter='') {
        switch ($parameter) {
            case 'a':
                $object = new \name\space\object1();
                break;
            case 'b':
                $object = new \name\space\object2();
                break;
            case 'c':
                $object = new \name\space\object3();
                break;
            default:
                $object = new \name\space\object1();
        }
        return (sekretInterface $object);
    }
}

At this point I'm not really sure what to do. As far as I can tell, I either have to use a different pattern and have separate methods for each object instance, or accept that a switch is necessary to switch between them. What do you guys think?
Abstract Factory Method and Polymorphism
design patterns;abstraction;polymorphism
switch and if are generally required for primitives (which $parameter is), although PHP allows you to create class names and call methods from variables, so you can get away with a little magic. For example, you could do this:

public static function methodA($parameter = '') {
    $class = '\name\space\object' . $parameter;
    return new $class;
}

This of course requires that $parameter and the existing class names are all named appropriately, and it does not account for the default case.

The other alternative requires that $parameter be an object. You can remove the switch, but it doesn't cut down on verbosity (although it is polymorphic). One possible way:

public static function methodA(EncodingType $parameter = null) {
    $class = $parameter->getObject();  // getObject() returns the required class name
    return new $class;
}

...where getObject() would return the required object string.

I should note that the principle of avoiding conditionals in contemplation of using polymorphism is great, especially in theory -- but if you try to eliminate all conditionals in your code you can half kill yourself and end up writing unnecessarily complicated code. It may also be a signal to rethink your design.

I will say that one common spot where you find conditionals is in factory methods, which is what your example seems to be.
_cstheory.32091
I am currently an undergraduate heading into my senior year. I've taken some theory/math classes (algorithms, and set theory/topology) in the past year and am taking quite a few more this year (more algorithms, abstract algebra, graph theory). My theory classes were the first classes in college I truly enjoyed. I definitely have an interest/passion for the more theoretical and mathematical areas of computer science. I am pretty set on going to grad school/getting a PhD, and if I had the option, I would want to do research in an algorithms/theory group. However, I am wondering how I can assess whether I have what it takes to succeed. Success in this case equates to getting a PhD -- not necessarily at the most prestigious university -- but just making my contribution (as tiny as it may be) to the field. The reason I am asking this is that I am not some whiz kid who aces their classes with no sweat. I don't struggle by any means, but considering that algorithms/theory is a very intellectually challenging field, do you think hard work and passion for a subject can allow me to achieve my goal? Essentially, I don't want to put limitations on myself and regret not pursuing something I truly enjoy. However, at the same time, I want to be realistic with myself. I would love to hear your thoughts/experiences with this subject matter.
General question about pursuing TCS
career
null
_cs.10690
Define the problem $W$:

Input: A multi-set of numbers $S$, and a number $t$.
Question: What is the smallest subset $s \subseteq S$ so that $\sum_{x \in s} x = t$, if there is one? (If not, return none.)

I am trying to find some polytime equivalent decision problem $D$ and provide a polytime algorithm for the non-decision problem $W$, assuming the existence of a polytime algorithm for $D$. Here is my attempt at a related decision problem:

$\mathrm{MIN\text{-}W}$:

Input: A multi-set of numbers $S$, and two numbers $t$ and $k$.
Question: Is there a subset $s \subseteq S$ so that $\sum_{x \in s} x = t$ and $|s| \leq k$?

Proof of polytime equivalence: Assume $W \in \mathsf{P}$.

solveMIN-W(S, t, k):
    S = sort(S)
    Q = {}
    for i = 1 to k:
        Q.add(S_i)
    res = solveW(Q, t)
    if res != none and res = t: return Yes
    return No

I'm not sure about this algorithm, though. Can anyone help, please?
How to prove polynomial time equivalence?
complexity theory;reductions;p vs np
The problems you mention here are variations of a problem called SUBSET-SUM, FYI, if you want to read some literature on it.

What is $S_i$ in your algorithm? If it's the $i$-th element, then you assume that the minimal set will use the smallest elements, which is not necessarily true - there could be a large element that is equal to $t$, in which case there is a subset of size $1$. So the algorithm doesn't seem correct.

However, as with many optimization problems that correspond to NP-complete problems, you can solve the optimization problem given an oracle to the decision problem.

First, observe that by iterating over $k$ and calling the decision oracle, you can find the minimal size $k_0$ of a subset whose sum equals $t$ (but not the actual subset).

After finding the size, you can remove the first element of $S$ and check whether there is still a subset of size $k_0$ - if not, then $s_1$ is surely in a minimal subset, so compute $t - s_1$, and repeat the process with the rest of $S$. If $s_1$ does not affect the size $k_0$, then repeat the process with $S \setminus \{s_1\}$.

It is not hard to verify that this process takes polynomial time.
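For concreteness, here is a rough Python sketch of the whole procedure. Here decide stands for the assumed polynomial-time oracle for the decision problem; it is not something you get for free:

def smallest_subset(S, t, decide):
    # decide(S, t, k) answers: does S contain a subset of size <= k
    # whose elements sum to t?
    n = len(S)
    # Find the minimal size k0 by querying the oracle for k = 0..n.
    k0 = next((k for k in range(n + 1) if decide(S, t, k)), None)
    if k0 is None:
        return None  # no subset of S sums to t
    chosen, remaining = [], list(S)
    while k0 > 0:
        x = remaining.pop(0)
        if decide(remaining, t, k0):
            continue  # x is not needed for any solution of size k0
        # Dropping x killed all size-k0 solutions, so x must be in one:
        chosen.append(x)
        t -= x
        k0 -= 1
    return chosen

Each element triggers at most one oracle call after the initial scan over $k$, so the total number of calls is linear in $|S|$.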
_unix.55464
I'm administrating two Ubuntu desktops and one Debian server. There are about ~20 active users on the desktops. A few (5-10) user accounts are added each year, and about the same number become inactive. I would like to share the user accounts and their respective homes between the two PCs. So far, my plan was to set up some kind of NFS + Kerberos (+ LDAP/NIS?), but I think Kerberos is overly complicated for this simple purpose. In addition to that, the admin changes every ~2-3 years and I fear that complicated solutions will become unmaintainable for my successors (we are no professionals...). Is there a way to split up /etc/passwd etc. into different files, so I could store these on the server and copy them to the desktops? Or is there some PAM module that provides a similar type of modular authentication (well, except pam_krb5)? What would be the simplest way to achieve that?
Easiest way to manage users for two machines
users;authentication;multiuser
You can use a configuration management system to do this. Personally, I use Puppet for this: I have a single /etc/passwd and /etc/shadow file and I have Puppet sync them across all my systems. There is an interesting learning curve with it, but there are definitely tutorials for doing exactly what you want on their website.

I would, however, definitely recommend using LDAP and Kerberos. I know the learning curve is steep, but the security is really good. I know Kerberos can be a burden sometimes, but LDAP alone would probably be acceptable. I have been meaning to set one up.
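To give a flavour, the heart of such a setup is just a pair of Puppet file resources along these lines (the module name and paths here are illustrative, not my actual layout):

file { '/etc/passwd':
  ensure => file,
  owner  => 'root',
  group  => 'root',
  mode   => '0644',
  source => 'puppet:///modules/accounts/passwd',  # master copy kept in the module
}

file { '/etc/shadow':
  ensure => file,
  owner  => 'root',
  group  => 'root',
  mode   => '0640',
  source => 'puppet:///modules/accounts/shadow',
}

Every Puppet run then overwrites the local files with the master copies, so you edit accounts in one place and the change propagates to every machine.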
_webmaster.29407
I have a site that ranks 3rd for the term "e decorating". According to Google's keyword tool, that term gets 12,000 monthly impressions. However, according to Google's Webmaster Tools, I'm only getting 35 impressions for that term. Why is this?
Getting fewer impressions than my website's position in search results would suggest
seo;google;google search console
The most likely reason is that the site only shows up third for you. There are many factors that determine search placement, including your location, previous searches, etc.

I also believe that the 12,000 number includes searches with other terms, e.g. "e decorating website", "e decorating in [location]" and so on. So there may be 12,000 searches globally, but you only show up for a small number of them.
_cstheory.10465
A random permitting-context grammar is a context-free grammar $(N, \Sigma, P, S)$ equipped with a function $p : P \rightarrow 2^N$. The rule $A \rightarrow x$ can be applied to $uAw \Rightarrow uxw$ if every symbol in $p(A\rightarrow x)$ appears in $uw$. A forbidding-context grammar is similar, except equipped with a function $f : P \rightarrow 2^N$, and a rule application is allowed if no symbol from $f(A\rightarrow x)$ appears in $uw$.$\{a^n b^m : 1 \le m \le 2^n\}$ is a random permitting-context language (RPCL), though $\{a^n b^m : m = 2^n\}$ is not (as can be proved by Ewert and Van der Walt's pumping lemma for RPCLs). In a permitting grammar we can allow the nonterminals of some type to double, but we can't force it.It seems obvious to me that one can't get more than exponential growth, but I can't see how to prove it (the pumping lemma certainly doesn't help, since it will just increase the number of $a$'s). Is there a known result that helps here?I'm also interested in superexponential growth in forbidding languages (e.g. is $\{a^{2^{2^n}} : n \in \mathbb{N}\}$ an RFCL).
Is $\{a^n b^m : 1 \le m \le 2^{2^n}\}$ a permitting-context language?
fl.formal languages
null
_softwareengineering.105570
How should I pick a web application software engineer? The (permanent) position is to rewrite the client of an existing desktop client-server application. The pages will not be generated dynamically on a server; rather, the server will expose a full API in whatever way is needed, for example JSON RPC calls, and can make static files available. It will replace a client that people have to download and install, so requiring a decent browser with reasonable settings (e.g. JavaScript enabled) is fine. Almost all use cases are for a desktop PC. It will not be accessible to search engines (it is an enterprise application). We can go so far as to write the whole thing in a single page, but we don't have to. Pretty much all of it would be written by the web application software engineer in question. The exact open-source libraries to use are also up to the engineer, within reason.

I'm specifically looking for advice on what qualifications to look for/verify in an interview, since picking a web application software engineer in general is just too broad a topic.

Edit - The position would be purely in-browser JavaScript programming (and be responsible for HTML and CSS); other developers would develop the server, but this position could request facades over the API. There is no Ruby, ASP, JSP, etc. because the web server layer is very thin and just translates calls to the business layer API and returns them as JSON (or whatever, but JSON seems easiest).
How to pick a web application software engineer?
web development;web applications;hiring
As a web developer who mainly works on enterprise apps dealing with legacy systems, I can offer a few suggestions as to what makes me successful, and hopefully that will help. I'm not sure whether you're looking for a temporary/consultancy situation to provide you with one app, or an FTE to develop and maintain this long-term, but these should apply in either case, I think.

I primarily use Ruby. This makes rapid prototyping very fast and easy, and Rails makes it simple to bootstrap a new project. Now, I don't know what your environment is like, whether you have institutional mandates regarding languages, platforms, and tooling, but if you can hire a developer who is proficient in a modern, flexible language/framework that makes writing DSLs to deal with foreign APIs relatively quick and easy, that's a plus. Basically, any of the languages with the Lisp-nature will be superior in this regard. Also, hiring someone proficient in a language with a good ecosystem of open-source libraries to deal with common abstractions is a must. If you can find someone used to dealing with legacy systems and abstracting their idiosyncrasies, that's rare and you should explore it. Most devs are used to building projects from the ground up (or getting brought in to maintain existing projects), and building something new to interface with something old takes a certain... not skill set, per se, but approach.

When you say the server will expose an API "in whatever way is needed", that's rather vague, but I'm guessing you mean there are existing developers who can implement the API on the server? I would make it clear that the position will involve working with those developers to specify the API. If you can give an example of the type of interaction required, and ask the candidate their initial thoughts about how to implement it, that might be telling. I think the fact that your initial thought was JSON RPC shows you are leaning in the right direction, so watch out for anyone who has visions of complex XML and XSLT interactions.

If you can expect decent browser JavaScript support, look for someone experienced with JS frameworks like Backbone.js, JavaScriptMVC, etc. You could in that case do the entire app (basically) in-browser and maybe serve it with Node.js or something... so look toward good JS devs. If they say they prefer to write their JS in CoffeeScript, they're probably even better JS devs.

Just some thoughts from my experience in the trenches, but if my department were hiring a new developer, this is what I would tell them to look for. I wish my department had the budget for another developer...
_softwareengineering.152997
UML has a jungle of diagrams: profile diagrams, class diagrams, package diagrams... However, (IMH-and-not-too-experienced-O) I quite see that doing each and every diagram is overkill. Therefore, which UML diagrams are most suitable in a web context, more specifically a blog (we want to build it from scratch)? I understand that just because I used UML diagrams does not imply that our code would be great and brilliant... but it certainly would be better than just unplanned code...
Truly useful UML diagrams
web development;design;uml
As a general guideline:

One deployment diagram for an overview of the architecture (good for any system)
One use case diagram for an overview of what users will do with the system (ditto)
One class diagram for the data model
Activity diagrams for the flow of individual use cases if they are complex
Perhaps a state machine diagram if you have a create/review/publish workflow for blog entries

Some diagram types (e.g. timing diagram) have rather specialized uses, others tend towards a level of detail where actual code does a better job (e.g. sequence diagrams), and others yet seem to be intended for gigantic projects but have questionable utility even there (package diagrams? Any IDE can show you your packages).
_cstheory.3154
Parity and $AC^0$ are like inseparable twins. Or so it has seemed for the last 30 years. In the light of Ryan's result, there will be renewed interest in the small classes.

Furst-Saxe-Sipser to Yao to Håstad are all parity and random restrictions. Razborov/Smolensky is approximate polynomials with parity (OK, mod gates). Aspnes et al. use the weak degree of parity. Further, Allender-Hertrampf and Beigel-Tarui are about using Toda for small classes. And Razborov/Beame with decision trees. All of these fall into the parity basket.

1) What are other natural problems (apart from parity) that can be shown directly not to be in $AC^0$?

2) Does anyone know of a drastically different approach to lower bounds on $AC^0$ that has been tried?
Parity and $AC^0$
cc.complexity theory;complexity classes;lower bounds;circuit complexity
null
_unix.27333
I'm trying to compute the difference between the output of two awk commands, but my simple attempts at it seem to be failing. Here is what I'm trying:

diff $(awk '{print $3}' f1.txt | sort -u) $(awk '{print $2}' f2.txt | sort -u)

This doesn't work, for reasons unknown to me. I was under the assumption that the $() construct was used to capture the output of another command, but my diff invocation fails to recognize the two inputs given to it. Is there any way I can make this work?

By the way, I can't use the obvious solution of writing the output of those two commands to separate files, given that I'm logged on to a production box with no 'write' privileges.
Diff the output of two `awk` commands
shell;io redirection;awk;diff
diff expects the names of two files, so you should write the two outputs to two files, then compare them:

awk '{print $3}' f1.txt | sort -u > out1
awk '{print $2}' f2.txt | sort -u > out2
diff out1 out2

or, using ksh93, bash or zsh, you can fool diff with process substitution:

diff <(awk '{print $3}' f1.txt | sort -u) <(awk '{print $2}' f2.txt | sort -u)
_unix.314059
I'm looking for an editor to print (on paper) C++ code. I'm currently in engineering school and the instructor has asked us to submit the code on paper.

He wants name + surname and the class number in the header, the page number at the bottom, and the reserved words bolded, on every page! On Windows this can be done with Notepad++, but I'm on Linux and I haven't found an IDE or text editor that works. (I've already tried SciTE, gedit, and Syntaxic.)
Text editor for printing C++ code
editors;c++;ide
Well, if you want to go the extra mile, do it in LaTeX and provide a professional-level PDF file. You haven't mentioned your distribution, so I'll give instructions for Debian-based systems; the same basic idea can be done on any Linux, though.

Install a LaTeX system and the necessary packages:

sudo apt-get install texlive-latex-extra latex-xcolor texlive-latex-recommended

Create a new file (call it report.tex) with the following contents:

\documentclass{article}
\usepackage{fancyhdr}
\pagestyle{fancy}

%% Define your header here.
%% See http://texblog.org/2007/11/07/headerfooter-in-latex-with-fancyhdr/
\fancyhead[CO,CE]{John Doe, Class 123}

\usepackage[usenames,dvipsnames]{color} %% Allow color names

%% The listings package will format your source code
\usepackage{listings}
\lstdefinestyle{customasm}{
   belowcaptionskip=1\baselineskip,
   xleftmargin=\parindent,
   language=C++,
   breaklines=true, %% Wrap long lines
   basicstyle=\footnotesize\ttfamily,
   commentstyle=\itshape\color{Gray},
   stringstyle=\color{Black},
   keywordstyle=\bfseries\color{OliveGreen},
   identifierstyle=\color{blue},
   xleftmargin=-8em,
   showstringspaces=false
}

\begin{document}

\lstinputlisting[style=customasm]{/path/to/your/code.c}

\end{document}

Just make sure to change /path/to/your/code.c in the penultimate line so that it points to the actual path of your C file. If you have more than one file to include, add a \newpage and then a new \lstinputlisting for the other file.

Compile a PDF (this creates report.pdf):

pdflatex report.tex

I tested this on my system with an example file I found here, and it creates a PDF with the header, page numbers, and bold keywords requested.

For a more comprehensive example that will automatically find all .c files in the target folder and create an indexed PDF file with each in a separate section, see my answer here.
_webapps.13561
I'm looking to see or get a weekly summary of the number of hours per event in a Google Calendar. How can I get these hours totalled up?
Calculating total hours of an event in Google Calendar
google calendar
Finally, I used Google Apps Script on top of a Google Spreadsheet; the code is in a GitHub gist.

// add menu
function onOpen() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var menuEntries = [{name: "Calcular Horas", functionName: "calculateHours"}];
  ss.addMenu("Hours", menuEntries);
  // recalculate on open
  calculateHours();
}

function count_hours(cal_id, event_name) {
  var hours = 0;
  var cal = CalendarApp.getCalendarById(cal_id);
  var this_year = new Date(2013, 0, 1);
  var now = new Date();
  var events = cal.getEvents(this_year, now, {search: event_name});
  for (i = 0; i < events.length; i++) {
    var event = events[i];
    if (event_name.toLowerCase() == event.getTitle().toLowerCase()) {
      //Logger.log(event.getTitle());
      var start = event.getStartTime();
      var end = event.getEndTime();
      start = new Date(start);
      end = new Date(end);
      hours = hours + (end - start) / (1000 * 60 * 60);
    }
  }
  var cal_name = cal.getName();
  // return the calendar name and the number of hours for the event
  return [cal_name, hours];
}

function hours_in_events(events) {
  var hours = 0;
  for (i = 0; i < events.length; i++) {
    var event = events[i];
    Logger.log(event.getTitle());
    var start = event.getStartTime();
    var end = event.getEndTime();
    start = new Date(start);
    end = new Date(end);
    hours = hours + (end - start) / (1000 * 60 * 60);
  }
  return hours;
}

function authorize() {
  var oauthConfig = UrlFetchApp.addOAuthService("calendar");
  var scope = "https://www.googleapis.com/auth/calendar";
  oauthConfig.setConsumerKey("anonymous");
  oauthConfig.setConsumerSecret("anonymous");
  oauthConfig.setRequestTokenUrl("https://www.google.com/accounts/OAuthGetRequestToken?scope=" + scope);
  oauthConfig.setAuthorizationUrl("https://accounts.google.com/OAuthAuthorizeToken");
  oauthConfig.setAccessTokenUrl("https://www.google.com/accounts/OAuthGetAccessToken");
}

/*
 * Count hours of events with the same name
 */
function countHours(calId, eventName) {
  authorize();
  var cal = CalendarApp.getCalendarById(calId);
  var key = "...";  // API key elided
  var query = encodeURIComponent(eventName);
  calId = encodeURIComponent(calId);
  var params = {
    method: "get",
    oAuthServiceName: "calendar",
    oAuthUseToken: "always"
  };
  var url = "https://www.googleapis.com/calendar/v3/calendars/" + calId + "/events?q=" + query + "&key=" + key;
  var request = UrlFetchApp.fetch(url, params);
  var response = Utilities.jsonParse(request.getContentText());
  //Logger.log(response);
  var cal_name = response.summary;
  var items = response.items;
  var start, end;
  var hours = 0;
  for (i = 0; i < items.length; i++) {
    if (items[i].status != "cancelled") {
      if (items[i].summary == eventName) {
        start = items[i].start.dateTime;
        end = items[i].end.dateTime;
        start = new Date(start.replace(/-/g, '/').replace(/[A-Z]/, ' ').substr(0, 19));
        end = new Date(end.replace(/-/g, '/').replace(/[A-Z]/, ' ').substr(0, 19));
        hours = hours + (end - start) / (1000 * 60 * 60);
      }
    }
  }
  // return the calendar name and the number of hours for the event
  return [cal_name, hours];
}

function calculateHours() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var id_cal_pos = 1;
  var event_name_pos = 2;
  var cal_name_pos = 1;
  var total_hours_pos = 4;
  var s = ss.getSheets()[0];
  var rows = s.getDataRange();
  var nRows = rows.getNumRows();
  var values = rows.getValues();
  // start from the second row
  for (var i = 1; i < nRows; i++) {
    var row = values[i];
    var cal_hours = count_hours(row[id_cal_pos], row[event_name_pos]);
    var h = cal_hours[1];
    var cal_name = cal_hours[0];
    s.getRange(i + 1, cal_name_pos).setValue(cal_name);
    s.getRange(i + 1, total_hours_pos).setValue(h);
  }
}
_webapps.42902
Suppose I follow person B, who doesn't follow me. If I want to send person B a tweet of person C from an unrelated conversation, how do I do it? I tried replying with "@B" but it didn't show the link. I tried retweeting, but I can only retweet to myself.
Sending a tweet to somebody I follow who doesn't follow me
twitter;tweet
When you retweet (unless your account is protected), the tweet is publicly visible. If you mention someone in a tweet who does not follow you, they will not receive an instant notification, but it will appear in their Mentions and Interactions tab.

So, if you quote the tweet of person C with a mention of @personB, the tweet will appear in the Mentions and Interactions tab of personB.

Twitter Support describes this in their "Type of Tweets and where they appear":

Mentions:
Definition: A Tweet containing another user's Twitter username, preceded by the "@" symbol, like this: "Hello @NeonGolden! What's up?"
Where it appears for the sender: On the sender's profile page of public Tweets.
Where it appears for the recipient: In the recipient's Mentions and Interactions tabs, which is accessible only by them. Additionally, mentions will appear in the recipient's Home timeline view (not on their profile) if they are following the sender.
Note: Anyone on Twitter who is following the sender of a mention will see the Tweet in their Home timeline.
Places it will never appear: On anyone's profile page, unless they wrote the message.
_unix.145029
I want to write a shell script which will add a list of users, defined in users.txt, to multiple existing groups. For example, I have users a, b, c, d, e, f, g, which will be added to groups according to the script, and I have groups p, q, r, s, t. Below is the expected content of the /etc/group file:

p:x:10029:a,c,d
q:x:10030:b,c,f,g
r:x:10031:a,b,c,e
s:x:10032:c,g
t:x:10033:a,b,c,d,e

So how do I achieve this?
adding list of users to multiple groups
shell script;users;group
The best and simplest approach would be to parse a file with the required information, as suggested by @DannyG. While that's the way I would do it myself, another would be to hardcode the user/group combinations in your script. For example:

#!/usr/bin/env bash

## Set up an associative array where the user is the key
## and the groups are the values.
declare -A groups=(
    [alice]=groupA,groupB
    [bob]=groupA,groupC
    [cathy]=groupB,groupD
)

## Now, go through each user (key) of the array,
## create the user and add them to the right groups.
for user in "${!groups[@]}"; do
    useradd -U -G "${groups[$user]}" "$user"
done

NOTE: The above assumes a bash version >= 4, since associative arrays were not available in earlier versions.
_unix.274388
I have a folder in which there are very old files. It contains files going back to 2009; they are dump files with error logs. What I want to know is whether it is possible to delete the files between, let's say, 2009 and 2011. Something like:

delete 'file_pattern' between 2009-2011 and 2012-2014

I want to preserve, for example, 2010, 2015 and the current year. The machine is running Red Hat.
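For what it's worth, the closest I have come up with so far is GNU find's -newermt test, something like:

find . -name 'file_pattern' -newermt '2009-01-01' ! -newermt '2012-01-01' -print

(switching -print to -delete once the listing looks right), but I'm not sure whether that's the best approach or whether it covers ranges like these cleanly.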
Delete files between two dates
files;rhel
null
_codereview.143875
I have a WinForms project in which I am trying to implement the Passive View MVP pattern (meaning no business logic in my Views). Each form is a concrete View with an IView interface to which a Presenter is connected. I do think I should handle some UI-related logic in my View, because I am otherwise adding needless complexity by trying to handle this in the Presenter. Therefore, I have created a simple function in my View class which validates the result from the file dialog. I did not put this logic in the event handler of the button press because I wanted to avoid duplicate code, since I have three of these buttons.

Please let me know if you think this is an appropriate way to implement this. I have posted the function, and one of the events that uses it (I have three button_Click events that each use this function), below.

Button event that uses the GetFileName function:

// Gets the filename and fires the compliance standard adding event.
private void newComplianceStandardButton_Click(object sender, EventArgs e)
{
    string fileName = GetFileName();
    if (fileName != null)
    {
        AddingComplianceStandard?.Invoke(this, fileName);
    }
    Close();
}

Function that checks the dialog result and returns the filename:

// Opens the OpenFileDialog and checks the result. If the result is OK, the form is closed and the filename is returned.
public string GetFileName()
{
    DialogResult result = openFileDialog1.ShowDialog();
    string fileName = openFileDialog1.FileName;
    if (result == DialogResult.OK && (Path.GetExtension(openFileDialog1.FileName) == ".txt" || Path.GetExtension(openFileDialog1.FileName) == ".csv"))
    {
        Close();
        return fileName;
    }
    else
    {
        MessageBox.Show("You have selected a file with an illegal extension. Please try again and select a *.txt or a *.csv file");
        return null;
    }
}
Handling an openFileDialog in the View of my MVP WinForms project
c#;design patterns;winforms;mvp
null
_codereview.18574
My friend wrote a program which compares random arrangements of die faces to find the one with the most evenly distributed faces - especially when the faces are not a mere sequence. I translated his program into Haskell because I've been looking for a reason to talk someone's ear off about how cool Haskell is. However, I am not very proficient with Haskell (it took me forever to write this and it has undergone a couple of giant refactorings), and so I have two problems.

1. He has been big on optimizing his versions, and this is not very fast, and it does not scale linearly. It goes from 415 checks/s to 97 checks/s when I go from 1000 to 20000 checks. Did I mess up some tail recursion or is it some kind of larger problem?

2. The code that came out of this isn't actually as elegant as I had predicted. I want this to be a solid showcase of Haskell; if you have any ideas on how to simplify it, I am all ears.

This is the most relevant code:

-- _CENTERS :: [{ x :: Float, y :: Float, z :: Float }]
-- _VALUES :: [Num]

-- Basically just (repeat $ map rand [0.._SIDES]), but never using a seed twice
randstates from = (take _SIDES (infrand from)) : randstates newseed
  where
    infrand seed = seed : infrand (shuffle seed)
    newseed = (infrand from) !! (_SIDES + 1)

-- yates shuffle
yates _ (last:[]) = [last]
yates (rand:pass) (swap:order) = choice : yates pass rorder
  where
    choice = order !! index
    index = (randfrom rand) `mod` (length order)
    rorder = take (index) order ++ swap : drop (index + 1) order

arrangements seed = map arrange $ randstates seed
  where arrange rands = yates rands [0.._SIDES - 2]

-- fns comparing arrangements --
arcLength i j = 1 / (1 + _WEIGHT * acos (dot3D / _VEC_LEN_SQUARED))
  where
    dot3D = apply x + apply y + apply z
    apply fn = (fn i) * (fn j)

matrix arr = map crosscmp arr
  where
    crosscmp s1 = [ value s1 * (distance s1 s2) | s2 <- arr ]
    distance a b = arcLength (_CENTERS !! a) (_CENTERS !! b)
    value s = fromInteger $ _VALUES !! s

variance arr = sum $ map perside (matrix arr)
  where
    perside s = (sum s - mean) ^ 2
    mean = (sum (concat $ matrix arr)) / (sides + 1)
    sides = fromInteger $ toInteger _SIDES

maxDistr = maximumBy (\a b -> variance a `compare` variance b)

Main is basically just:

print $ maxDistr $ take _TRIALS $ arrangements seed
Haskell tips / why doesn't this scale linearly?
performance;haskell;recursion
null
_ai.2338
What are the current best estimates as to what year artificial intelligence will be able to score 100 points on the Stanford-Binet IQ test?
When will artificial intelligence equal human intelligence?
intelligence testing;prediction
null
_unix.24630
If I have a large file and need to split it into 100-megabyte chunks, I will do:

split -b 100m myImage.iso

That usually gives me something like:

xaa
xab
xac
xad

And to get them back together I have been using:

cat x* > myImage.iso

It seems like there should be a more efficient way than reading through each line of a group of files with cat and redirecting the output to a new file - like a way of just opening two files, removing the EOF marker from the first one, and connecting them, without having to go through all the contents.

Windows/DOS has a copy command for binary files. The help mentions that this command was designed to be able to combine multiple files. It works with this syntax (/b is for binary mode):

copy /b file1 + file2 + file3 outputfile

Is there something similar or a better way to join large files on Linux than cat?

Update: It seems that cat is in fact the right way and the best way to join files. Glad to know I was using the right command all along :) Thanks everyone for your feedback.
What's the best way to join files again after splitting them?
linux;command line;files;iso;split
That's just what cat was made for. Since it is one of the oldest GNU tools, I think it's very unlikely that any other tool does that faster/better. And it's not piping - it's only redirecting output.
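For instance, to convince yourself that the round trip is lossless:

split -b 100m myImage.iso
cat x* > rejoined.iso
cmp myImage.iso rejoined.iso && echo "files are identical"

cmp compares the two files byte by byte, so the message is only printed when the rejoined image matches the original exactly.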
_cs.74934
How can one prove that the shunting-yard algorithm always returns a correct expression in RPN? I cannot find any proof on the internet.
Shunting-yard algorithm - proof
algorithms;correctness proof
null
_cstheory.30923
Imagine we defined the natural numbers in a dependently typed lambda calculus as Church numerals. They might be defined in the following way:

SimpleNat : Set₁
SimpleNat = (R : Set) → R → (R → R) → R

zero : SimpleNat
zero = λ R z _ → z

suc : SimpleNat → SimpleNat
suc sn = λ R z s → s (sn R z s)

SimpleNatRec : (R : Set) → R → (R → R) → SimpleNat → R
SimpleNatRec R z s sn = sn R z s

However, it seems that we can't define Church numerals with the following type of induction principle:

NatInd : (C : Nat -> Set) -> (C zero) -> ((n : Nat) -> C n -> C (suc n)) -> (n : Nat) -> (C n)

Why is it so? How can I prove this? It seems that the problem is with defining a type for Nat which becomes recursive. Is it possible to amend the lambda calculus to allow this?
Why it's impossible to declare an induction principle for Church numerals
type theory;lambda calculus
The question you are asking is interesting and known. You are using the so-called impredicative encoding of the natural numbers. Let me explain a bit of the background.

Given a type constructor $T : \mathsf{Type} \to \mathsf{Type}$, we might be interested in the minimal type $A$ satisfying $A \cong T(A)$. In terms of category theory, $T$ is a functor and $A$ is the initial $T$-algebra. For example, if $T(X) = 1 + X$ then $A$ corresponds to the natural numbers. If $T(X) = 1 + X \times X$ then $A$ is the type of finite binary trees.

An idea with a long history is that the initial $T$-algebra is the type
$$A \mathrel{{:}{=}} \prod_{X : \mathsf{Type}} (T(X) \to X) \to X.$$
(You are using Agda notation for dependent products, but I am using a more traditional mathematical notation.) Why should this be? Well, $A$ essentially encodes the recursion principle for the initial $T$-algebra: given any $T$-algebra $Y$ with a structure morphism $f : T(Y) \to Y$, we get an algebra homomorphism $\phi : A \to Y$ by
$$\phi(a) = a \, Y \, f.$$
So we see that $A$ is weakly initial for sure. For it to be initial we would have to know that $\phi$ is unique as well. This is not true without further assumptions, but the details are technical and nasty and require reading some background material. For instance, if we can show a satisfactory parametricity theorem then we win, but there are also other methods (such as massaging the definition of $A$ and assuming the $K$-axiom and function extensionality).

Let us apply the above to $T(X) = 1 + X$:
$$\mathsf{Nat} = \prod_{X : \mathsf{Type}} ((1 + X) \to X) \to X = \prod_{X : \mathsf{Type}} (X \times (X \to X)) \to X = \prod_{X : \mathsf{Type}} X \to (X \to X) \to X.$$
We got Church numerals! And we also understand now that we will get a recursion principle for free, because the Church numerals are the recursion principle for numbers, but we will not get induction without parametricity or a similar device.

The technical answer to your question is this: there exist models of type theory in which the type SimpleNat contains exotic elements that do not correspond to numerals, and moreover, these elements break the induction principle. The type SimpleNat in these models is too big and is only a weakly initial algebra.
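To make the construction concrete, here is an Agda-style sketch of the weakly initial algebra and the homomorphism $\phi$ (universe levels are glossed over, and the names are mine, not standard library names):

-- the impredicative encoding of the initial T-algebra
A : (T : Set → Set) → Set₁
A T = (X : Set) → (T X → X) → X

-- given any algebra f : T Y → Y, the homomorphism is just application
φ : {T : Set → Set} → (Y : Set) → (T Y → Y) → A T → Y
φ Y f a = a Y f

Instantiating T with T X = ⊤ ⊎ X (i.e. $1 + X$) and currying the structure map turns A T into the SimpleNat of your question and φ into your SimpleNatRec.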
_cs.40008
I need to write a program that does the following:

Take an input list of objects whose properties include latitude and longitude to, say, 5 decimal places.
Store them in a data structure once.
Provide a "nearby" lookup function that can efficiently return the N closest objects for a given lat/long.

Currently, I'm doing the following, which is suboptimal:

Store all objects in a hash, with array keys like [integer_latitude, integer_longitude].
At search time, find all objects in an arbitrary-sized "circle" around the target. E.g., if the search is at [0,0], I can get all objects within 1 degree by pulling [-1,0], then [0,0], then [1,0], then [0,-1], etc.
Order the found objects by actual distance to the target and take the top N.

This is obviously inefficient, because often there are many more matches than N. One improvement could be to examine locations in concentric squares outward from the center - all points 0 degrees from the center, all points 1 degree from the center, 2 degrees, etc. - and stop after the first square where I have at least the number of objects needed. Then I could sort those by actual, fine-grained distance and take the top N.

Is there some well-established way of doing this search efficiently? A sketch of my current approach follows.
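For reference, this is roughly what my current lookup looks like (simplified Python; the real objects carry more properties):

from collections import defaultdict
from math import hypot

grid = defaultdict(list)  # objects bucketed by whole-degree cell

def add(obj):
    grid[(int(obj.lat), int(obj.lon))].append(obj)

def nearby(lat, lon, n):
    # gather everything within +/- 1 degree of the target cell
    candidates = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            candidates += grid[(int(lat) + dx, int(lon) + dy)]
    # planar approximation of distance, used only for ranking
    candidates.sort(key=lambda o: hypot(o.lat - lat, o.lon - lon))
    return candidates[:n]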
What data structure would help find nearby coordinates quickly?
data structures;computational geometry;search algorithms;search problem
null
_scicomp.2462
Are there any algorithms for community detection for bipartite graphs (2-mode networks) implemented in igraph, NetworkX, R, Python, etc.? In particular, is there such an implementation in which one would be able to restrict the detection of communities to just one of the two modes?
Algorithms for community detection for bipartite graphs?
python;graph theory
null
_cs.32520
There is at least one distributed version proposed [1] for the push-relabel maximum-flow algorithm. I wonder if and how this algorithm can cope with nodes leaving or entering the graph during runtime. Is there any work about this?

[1] Andrew V. Goldberg and Robert E. Tarjan. A new approach to the maximum-flow problem. Journal of the ACM (JACM), 35(4):921-940, 1988. (Google Scholar)
Distributed push relabel with changing graph topology
distributed systems;network flow
null
_softwareengineering.233858
There exists a general OOP principle that methods should return local variables rather than set object fields. For instance, say I have the following piece of code (example in Java):

public class Number {
    public int myNum;
}

public class Horrible {
    public static void areYouSerious(Number num) {
        num.myNum = 47;
    }
}

public class Test {
    public Number num = new Number();

    public void doHorribleThings() {
        Horrible.areYouSerious(num);
    }

    public static void main(String[] args) {
        Test test = new Test();
        test.doHorribleThings();
    }
}

Obviously this style of programming can go sideways very quickly, and becomes a massive headache for anyone attempting to maintain it. A more correct way of doing this would be the following:

public class Number {
    public int myNum;
}

public class NotAsBad {
    public static int getDefaultVal() {
        return 47;
    }
}

public class Test {
    public Number num = new Number();

    public void doHorribleThings() {
        num.myNum = NotAsBad.getDefaultVal();
    }

    public static void main(String[] args) {
        Test test = new Test();
        test.doHorribleThings();
    }
}

The first example is modifying a passed-in reference, whereas the second is generalizing the static setting function, making it return a value which can be used to set the object field.

Is there a name for the principle that the second method should be preferred over the first? A sort of modularity principle? Just as encapsulation is the term for restricting variable manipulation, I would like a term for limiting the scope of the effects of some piece of functionality. If there is no existing term, I am going to call it "consequence localization".
Name for use of methods that return rather than set
object oriented
null
_unix.292645
I am not entirely sure how to ask this question, so it may have been discussed before. Described below is my image server setup. Here is how it works:

A user visits my real estate website, which hosts properties from all over the country.
Each section of the website uses a specific image server to deliver images based on the state that the property is located in.
I have a mod_proxy server running that directs requests for images to the appropriate image server on my local network.

Everything is working now and the images are being served as expected, fairly efficiently. What I am after is possibly caching some of the most-viewed images so that content is loaded faster than it currently is. I know that Squid is a caching system, and I was thinking about integrating it into my network to deliver cached images if they exist and, if they don't exist in the cache, to call the appropriate image server for the content.

I am not sure exactly where the Squid installation should reside, however; should it be installed on the mod_proxy server or on the individual image servers?

Any recommendations on my setup would be greatly appreciated.
MOD_PROXY with Squid
webserver;cache;squid;http proxy
null
_codereview.54229
Recent JavaScript student here. I ran afoul of some weirdness with event listeners that led to my code being unresponsive. After some reading and tweaking, I arrived at a version that worked in JSFiddle, but not in the browser. I finally got it working in the browser, but now my code looks like this:

window.onload = function () {
    function bgnAddChllng() {
        alert("This is a test alert.");
    }

    var addChllng = document.getElementById("addChllng");
    addChllng.addEventListener('click', bgnAddChllng, false);
}

But the use of window.onload and nesting the actually useful code in an anonymous function seems bloated/hacky to me. What, if anything, can I do to clean it up? The code is available in a JSFiddle here.
Does this function have to be nested?
javascript;event handling
window.onload = function () {

Mixing DOM-0 event handlers and addEventListener is inconsistent, pick one and stick with it.

window.addEventListener("load", function () {

function bgnAddChllng() {

Don't disemvowel your variables. This isn't Wheel of Fortune, vowels don't cost anything.

function beginAdditionChallenge() {

var addChllng = document.getElementById("addChllng");

Use proper indentation. JSFiddle has a handy "TidyUp" button to do it for you. Consider a more descriptive ID for the button.

var button = document.getElementById("challengeButton");

addChllng.addEventListener('click', bgnAddChllng, false);

Switching between single and double quotes is inconsistent, pick one and stick with it. useCapture defaults to false, there's no need to explicitly pass it.

button.addEventListener("click", beginAdditionChallenge);

"nesting the actual useful code in an anonymous function seems bloated/hacky to me."

It's a pretty common practice, but if you don't like it, give the function a name and don't nest it, just as you did with the button click event handler.

function beginAdditionChallenge() {
    alert("This is a test alert.");
}

function initChallenge() {
    var button = document.getElementById("challengeButton");
    button.addEventListener("click", beginAdditionChallenge);
}

window.addEventListener("load", initChallenge);
_codereview.144659
I created a script to import several CSV files from various sources and one CSV file with a list of systems in it. the script searches each CSV file to see if the system exist in the file and if it does then write the property information to a variable. The challenge that I have ran into is there are 26,000 systems to search for and so far the script has been running for over 24 hours and just now over half way through. Any ideas on how to speed this up?$Final = @()#Import CSV files$Systems = Import-Csv C:\Projects\Master.csv$vCenter = Import-Csv C:\Projects\vcenter.csv$Storage = Import-Csv C:\Projects\Storage.csv$SCCM = Import-Csv C:\Projects\SCCM.csv$Database = Import-Csv C:\Projects\Database.csv$OldAD = Import-Csv C:\Projects\AD_Old.csv$ADprod = Import-Csv C:\Projects\AD.csvWrite-Host Import Complete!$N = 0foreach ($System in $Systems){ Write-Host Line $N $Sys = New-Object System.Object $Sys | Add-Member -type NoteProperty -name System Name -value $System.Name ############################# #Database information Compare ############################# If ($Database.Name -contains $System.Name) { #Get the system information from the CSV file being compared $Domain = $Database | Where-Object { $_.Name -eq $System.name } | Select-object DomainName -ExpandProperty DomainName $SQLin = $Database | Where-Object { $_.Name -eq $System.name } | Select-object SQLInstance -ExpandProperty SQLInstance $Instance = $Database | Where-Object { $_.Name -eq $System.name } | Select-object Instance -ExpandProperty Instance $OS = $Database | Where-Object { $_.Name -eq $System.name } | Select-object OS -ExpandProperty OS $SQLver = $Database | Where-Object { $_.Name -eq $System.name } | Select-object SQLVersion -ExpandProperty SQLVersion $SQLsp = $Database | Where-Object { $_.Name -eq $System.name } | Select-object SQLServicePack -ExpandProperty SQLServicePack $SQLed= $Database | Where-Object { $_.Name -eq $System.name } | Select-object SQLEdition -ExpandProperty SQLEdition $OSsp = $Database | Where-Object { $_.Name -eq $System.name } | Select-object ServicePack -ExpandProperty ServicePack $Arch = $Database | Where-Object { $_.Name -eq $System.name } | Select-object SystemArchitecture -ExpandProperty SystemArchitecture $IP = $Database | Where-Object { $_.Name -eq $System.name } | Select-object IP -ExpandProperty IP $Env = $Database | Where-Object { $_.Name -eq $System.name } | Select-object Environment -ExpandProperty Environment $DBname = $Database | Where-Object { $_.Name -eq $System.name } | Select-object DBName -ExpandProperty DBName #Create a new record in the object for the System with the information from the CSV file. 
$Sys | Add-Member -type NoteProperty -name Domain Name -value $Domain -force $Sys | Add-Member -type NoteProperty -name SQL Instance -value $SQLin -force $Sys | Add-Member -type NoteProperty -name Database Instance -value $Instance -force $Sys | Add-Member -type NoteProperty -name Database Name -value $DBname -Force $Sys | Add-Member -type NoteProperty -name Operating System -value $OS -force $Sys | Add-Member -type NoteProperty -name SQL Version -value $SQLver -force $Sys | Add-Member -type NoteProperty -name SQL Service Pack -value $SQLsp -Force $Sys | Add-Member -type NoteProperty -name SQL Edition -value $SQLed -Force $Sys | Add-Member -type NoteProperty -name Operating System Service Pack -value $OSsp-Force $Sys | Add-Member -type NoteProperty -name System Architecture -value $Arch -Force $Sys | Add-Member -type NoteProperty -name IP Address -value $IP -Force $Sys | Add-Member -type NoteProperty -name Environment -value $Env -Force $Sys | Add-Member -type NoteProperty -name In Database File -value Yes } Else { $Sys | Add-Member -type NoteProperty -name In Database File -value No } ############################# #SCCM information Compare ############################# If ($SCCM.Name -contains $System.Name) { #Get the system information from the CSV file being compared $IP = $SCCM | Where-Object { $_.Name -eq $System.name } | Select-object IP -ExpandProperty IP $OS = $SCCM | Where-Object { $_.Name -eq $System.name } | Select-object OS -ExpandProperty OS $Vendor = $SCCM | Where-Object { $_.Name -eq $System.name } | Select-object Vendor -ExpandProperty Vendor $Model = $SCCM | Where-Object { $_.Name -eq $System.name } | Select-object Model -ExpandProperty Model $Serial = $SCCM | Where-Object { $_.Name -eq $System.name } | Select-object Serial -ExpandProperty Serial $ServicePack = $SCCM | Where-Object { $_.Name -eq $System.name } | Select-object Service Pack -ExpandProperty Service Pack $OSBuild = $SCCM | Where-Object { $_.Name -eq $System.name } | Select-object OS deployed on -ExpandProperty OS deployed on $Architecture = $SCCM | Where-Object { $_.Name -eq $System.name } | Select-object Architecture -ExpandProperty Architecture $LBTime = $SCCM | Where-Object { $_.Name -eq $System.name } | Select-object Last Boot Time -ExpandProperty Last Boot Time $LHW = $SCCM | Where-Object { $_.Name -eq $System.name } | Select-object Last H W Scan -ExpandProperty Last H W Scan #Create a new record in the object for the System with the information from the CSV file. 
$Sys | Add-Member -type NoteProperty -name "IP Address" -value $IP -Force
$Sys | Add-Member -type NoteProperty -name "Vendor" -value $Vendor -Force
$Sys | Add-Member -type NoteProperty -name "Operating System" -value $OS -Force
$Sys | Add-Member -type NoteProperty -name "Model" -value $Model -Force
$Sys | Add-Member -type NoteProperty -name "Serial Number" -value $Serial -Force
$Sys | Add-Member -type NoteProperty -name "Operating System Service Pack" -value $ServicePack -Force
$Sys | Add-Member -type NoteProperty -name "Operating System Build" -value $OSBuild -Force
$Sys | Add-Member -type NoteProperty -name "System Architecture" -value $Architecture -Force
$Sys | Add-Member -type NoteProperty -name "SCCM - Last Boot Time" -value $LBTime -Force
$Sys | Add-Member -type NoteProperty -name "SCCM - Last Hardware Scan" -value $LHW -Force
$Sys | Add-Member -type NoteProperty -name "In SCCM" -value "Yes"
}
Else {
    $Sys | Add-Member -type NoteProperty -name "In SCCM" -value "No"
}

#############################
# Solarwinds information Compare
#############################
If ($Solarwinds.Name -contains $System.Name) {
    # Get the system information from the CSV file being compared
    $IP = $Solarwinds | Where-Object { $_.Name -eq $System.Name } | Select-Object IP -ExpandProperty IP
    $Vendor = $Solarwinds | Where-Object { $_.Name -eq $System.Name } | Select-Object Vendor -ExpandProperty Vendor
    $MachineType = $Solarwinds | Where-Object { $_.Name -eq $System.Name } | Select-Object MachineType -ExpandProperty MachineType
    $Device = $Solarwinds | Where-Object { $_.Name -eq $System.Name } | Select-Object DeviceType -ExpandProperty DeviceType
    $City = $Solarwinds | Where-Object { $_.Name -eq $System.Name } | Select-Object City -ExpandProperty City
    $Region = $Solarwinds | Where-Object { $_.Name -eq $System.Name } | Select-Object Region -ExpandProperty Region

    # Create a new record in the object for the System with the information from the CSV file.
    $Sys | Add-Member -type NoteProperty -name "IP Address" -value $IP -Force
    $Sys | Add-Member -type NoteProperty -name "Vendor" -value $Vendor -Force
    $Sys | Add-Member -type NoteProperty -name "Machine Type" -value $MachineType -Force
    $Sys | Add-Member -type NoteProperty -name "Device Type" -value $Device -Force
    $Sys | Add-Member -type NoteProperty -name "City" -value $City -Force
    $Sys | Add-Member -type NoteProperty -name "Region" -value $Region -Force
    $Sys | Add-Member -type NoteProperty -name "In Solarwinds" -value "Yes"
}
Else {
    $Sys | Add-Member -type NoteProperty -name "In Solarwinds" -value "No"
}

#############################
# Storage information Compare
#############################
If ($Storage.Name -contains $System.Name) {
    # Get the system information from the CSV file being compared
    $Role = $Storage | Where-Object { $_.Name -eq $System.Name } | Select-Object Role -ExpandProperty Role
    $IP = $Storage | Where-Object { $_.Name -eq $System.Name } | Select-Object IP -ExpandProperty IP
    $Serial = $Storage | Where-Object { $_.Name -eq $System.Name } | Select-Object SerialNumber -ExpandProperty SerialNumber
    $State = $Storage | Where-Object { $_.Name -eq $System.Name } | Select-Object "Physical/Virtual" -ExpandProperty "Physical/Virtual"
    $City = $Storage | Where-Object { $_.Name -eq $System.Name } | Select-Object City -ExpandProperty City
    $Model = $Storage | Where-Object { $_.Name -eq $System.Name } | Select-Object Model -ExpandProperty Model
    $Region = $Storage | Where-Object { $_.Name -eq $System.Name } | Select-Object Region -ExpandProperty Region

    # Create a new record in the object for the System with the information from the CSV file.
    $Sys | Add-Member -type NoteProperty -name "Function" -value $Role -Force
    $Sys | Add-Member -type NoteProperty -name "IP Address" -value $IP -Force
    $Sys | Add-Member -type NoteProperty -name "Serial Number" -value $Serial -Force
    $Sys | Add-Member -type NoteProperty -name "Physical or Virtual" -value $State -Force
    $Sys | Add-Member -type NoteProperty -name "City" -value $City -Force
    $Sys | Add-Member -type NoteProperty -name "Model" -value $Model -Force
    $Sys | Add-Member -type NoteProperty -name "Region" -value $Region -Force
    $Sys | Add-Member -type NoteProperty -name "In Storage File" -value "Yes"
}
Else {
    $Sys | Add-Member -type NoteProperty -name "In Storage File" -value "No"
}

#############################
# vCenter information Compare
#############################
If ($vCenter.Name -contains $System.Name) {
    $OS = $vCenter | Where-Object { $_.Name -eq $System.Name } | Select-Object OS -ExpandProperty OS
    $IP = $vCenter | Where-Object { $_.Name -eq $System.Name } | Select-Object IP -ExpandProperty IP
    $DNS = $vCenter | Where-Object { $_.Name -eq $System.Name } | Select-Object DNS -ExpandProperty DNS
    $Notes = $vCenter | Where-Object { $_.Name -eq $System.Name } | Select-Object Notes -ExpandProperty Notes
    $Contact = $vCenter | Where-Object { $_.Name -eq $System.Name } | Select-Object Contact -ExpandProperty Contact
    $Function = $vCenter | Where-Object { $_.Name -eq $System.Name } | Select-Object Function -ExpandProperty Function

    $Sys | Add-Member -type NoteProperty -name "Operating System" -value $OS -Force
    $Sys | Add-Member -type NoteProperty -name "IP Address" -value $IP -Force
    $Sys | Add-Member -type NoteProperty -name "DNS Name" -value $DNS
    $Sys | Add-Member -type NoteProperty -name "vCenter - Notes" -value $Notes -Force
    $Sys | Add-Member -type NoteProperty -name "vCenter - Contact" -value $Contact -Force
    $Sys | Add-Member -type NoteProperty -name "vCenter - Function" -value $Function -Force
    $Sys | Add-Member -type NoteProperty -name "Is Virtual" -value "Yes" -Force
    $Sys | Add-Member -type NoteProperty -name "In vCenter" -value "Yes"
}
Else {
    $Sys | Add-Member -type NoteProperty -name "In vCenter" -value "No"
    $Sys | Add-Member -type NoteProperty -name "Is Virtual" -value "No" -Force
}

#############################
# Old AD information Compare
#############################
If ($OldAD.Name -contains $System.Name) {
    # Get the system information from the CSV file being compared
    $Description = $OldAD | Where-Object { $_.Name -eq $System.Name } | Select-Object Description -ExpandProperty Description
    $DNS = $OldAD | Where-Object { $_.Name -eq $System.Name } | Select-Object DNSname -ExpandProperty DNSname
    $LastLoc = $OldAD | Where-Object { $_.Name -eq $System.Name } | Select-Object "Last Known Location" -ExpandProperty "Last Known Location"
    $LastLog = $OldAD | Where-Object { $_.Name -eq $System.Name } | Select-Object "Last Logon Date" -ExpandProperty "Last Logon Date"
    $OS = $OldAD | Where-Object { $_.Name -eq $System.Name } | Select-Object OS -ExpandProperty OS
    $Contain = $OldAD | Where-Object { $_.Name -eq $System.Name } | Select-Object Container -ExpandProperty Container

    # Create a new record in the object for the System with the information from the CSV file.
    $Sys | Add-Member -type NoteProperty -name "Description" -value $Description -Force
    $Sys | Add-Member -type NoteProperty -name "DNS Name" -value $DNS -Force
    $Sys | Add-Member -type NoteProperty -name "Old Active Directory - Last Known Location" -value $LastLoc -Force
    $Sys | Add-Member -type NoteProperty -name "Old Active Directory - Last Logon" -value $LastLog -Force
    $Sys | Add-Member -type NoteProperty -name "Operating System" -value $OS -Force
    $Sys | Add-Member -type NoteProperty -name "Old Active Directory - Container" -value $Contain -Force
    $Sys | Add-Member -type NoteProperty -name "In Old Active Directory File" -value "Yes"
}
Else {
    $Sys | Add-Member -type NoteProperty -name "In Old Active Directory File" -value "No"
}

#############################
# Production AD information Compare
#############################
If ($ADprod.Name -contains $System.Name) {
    # Get the system information from the CSV file being compared
    $Description = $ADprod | Where-Object { $_.Name -eq $System.Name } | Select-Object Description -ExpandProperty Description
    $LastLoc = $ADprod | Where-Object { $_.Name -eq $System.Name } | Select-Object "Last Known Location" -ExpandProperty "Last Known Location"
    $LastLog = $ADprod | Where-Object { $_.Name -eq $System.Name } | Select-Object "Last Logon Date" -ExpandProperty "Last Logon Date"
    $OS = $ADprod | Where-Object { $_.Name -eq $System.Name } | Select-Object OS -ExpandProperty OS
    $Contain = $ADprod | Where-Object { $_.Name -eq $System.Name } | Select-Object Container -ExpandProperty Container

    # Create a new record in the object for the System with the information from the CSV file.
    $Sys | Add-Member -type NoteProperty -name "Active Directory - Description" -value $Description -Force
    $Sys | Add-Member -type NoteProperty -name "Production Active Directory - Last Known Location" -value $LastLoc -Force
    $Sys | Add-Member -type NoteProperty -name "Production Active Directory - Last Logon" -value $LastLog -Force
    $Sys | Add-Member -type NoteProperty -name "Operating System" -value $OS -Force
    $Sys | Add-Member -type NoteProperty -name "Production Active Directory - Container" -value $Contain -Force
    $Sys | Add-Member -type NoteProperty -name "In Production Active Directory File" -value "Yes"
}
Else {
    $Sys | Add-Member -type NoteProperty -name "In Production Active Directory File" -value "No"
}

$Final += $Sys
$N++
}
Write-Host "Compare Complete"
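One pattern worth flagging in the script above, given the time limit exceeded tag: every property assignment re-filters the whole CSV with Where-Object. A hedged sketch of an indexed lookup that scans each file only once (variable names here are illustrative, and it assumes Name is unique within each file):

# build the index once, before the main loop
$SolarwindsByName = @{}
foreach ($Row in $Solarwinds) { $SolarwindsByName[$Row.Name] = $Row }

# inside the loop, one constant-time lookup replaces six Where-Object scans
$Match = $SolarwindsByName[$System.Name]
If ($Match) {
    $IP     = $Match.IP
    $Vendor = $Match.Vendor
    # ...and so on for the remaining properties
}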
Gather information about computers from multiple CSV files
time limit exceeded;csv;powershell;join
null
_codereview.45788
I wrote a package manager in clojure that does 5 things:

depend a b              // creates a and b (if they don't exist) and adds dependency on a
install a               // installs a and its dependencies
list                    // prints out the installed packages
sys                     // prints out all packages
remove or uninstall     // removes packages and dependencies that are no longer needed

I did this as an exercise to learn Clojure. I'm not sure if this is idiomatic or not. I tried to avoid typedef since that seems like attempting OOP in Clojure. I'm just wondering if there is any very obvious non-idiomatic code or any code smells.

(use '[clojure.string :only [split, lower-case]])

(def DEPEND "depend")
(def INSTALL "install")
(def REMOVE "remove")
(def UNINSTALL "uninstall")
(def SYS "sys")
(def LIST "list")
(def INFO "info")
(def EXIT "exit")
(def END "end")

(def all-packages (atom {}))
(def installed-packages (atom (sorted-set)))

(defn in? [seq elm] (some #(= elm %) seq))

(defn create
  "makes a package element with provided name, optional list of providers are packages required by package, optional list of clients are packages using package"
  ([name] (create name #{} #{}))
  ([name providers clients] {:name name, :providers providers, :clients clients}))

(defn add-client [package client]
  (create (:name package) (:providers package) (conj (:clients package) client)))

(defn remove-client [package client]
  (create (:name package) (:providers package) (disj (:clients package) client)))

(defn add-provider
  "add a provider to package"
  ([package] package)
  ([package provider] (create (:name package) (conj (:providers package) provider) (:clients package)))
  ([package provider & more-providers] (reduce add-provider package (cons provider more-providers))))

(defn get-providers [package] (get (get @all-packages package) :providers))
(defn get-clients [package] (get (get @all-packages package) :clients))
(defn get-package [package] (get @all-packages package))
(defn exists? [package] (contains? @all-packages package))

(defn dependent? [first second]
  (if (in? (cons first (get-providers first)) second)
    (do (println (str "\t" first) "depends on" second) true)
    (some #(dependent? % second) (get-providers first))))

(defn update-sys [package] (swap! all-packages assoc (:name package) package))

(defn add-sys-package
  "adds a package to all-packages"
  [package & deps]
  (doseq [dep deps] (if-not (exists? dep) (update-sys (create dep))))
  (if (not-any? #(dependent? % package) deps)
    (update-sys (apply add-provider (cons (create package) deps)))
    (println "Ignoring command")))

(defn print-sys [] (doseq [[k,v] @all-packages] (println "\t" v)))
(defn print-installed [] (doseq [v @installed-packages] (println "\t" v)))
(defn installed? [package] (contains? @installed-packages package))

(defn install-new [package]
  (do (println "\t installing" package)
      (swap! installed-packages conj package)))

(defn install [package self-install]
  (if-not (exists? package) (add-sys-package package))
  (if-not (installed? package)
    (do (doseq [provider (get-providers package)]
          (if-not (installed? provider) (install provider false)))
        (doseq [provider (get-providers package)]
          (update-sys (add-client (get-package provider) package)))
        (install-new package))
    (do (if self-install (update-sys (add-client (get-package package) package)))
        (println "\t" package "is already installed."))))

(defn not-needed? [package self-uninstall]
  (def clients (if self-uninstall (disj (get-clients package) package) (get-clients package)))
  (empty? clients))

(defn uninstall-package [package]
  (println "\t uninstalling" package)
  (swap! installed-packages disj package))

(defn uninstall [package self-uninstall]
  (if (installed? package)
    (if (not-needed? package self-uninstall)
      (do (doseq [provider (get-providers package)]
            (update-sys (remove-client (get-package provider) package)))
          (uninstall-package package)
          (doseq [provider (filter #(not-needed? % false) (get-providers package))]
            (uninstall provider false)))
      (println "\t" package "is still needed"))
    (println "\t" package "is not installed")))

(def run (atom true))
(defn stop-run [] (reset! run false))
(defn exit [] (println "goodbye") (stop-run))

(defn runprog []
  (println "starting")
  (reset! run true)
  (while (true? @run)
    (def line (read-line))
    (def command (first (split line #" +")))
    (def args (rest (split line #" +")))
    (condp = (lower-case command)
      DEPEND (apply add-sys-package args)
      LIST (print-installed)
      INSTALL (install (first args) true)
      INFO (println (get-package (first args)))
      [REMOVE UNINSTALL] (uninstall (first args) true)
      UNINSTALL (uninstall (first args) true)
      SYS (print-sys)
      EXIT (exit)
      END (exit)
      ())))

(runprog)

Edit: After incorporating almost all suggestions: github gist
Package manager in Clojure
functional programming;clojure
OK, bear with me. I got really into this, so I hope you don't mind that this is super long! :) Here are my thoughts.

Major things:

1. I would consider using refs instead of atoms to represent your package list. The difference is that refs are used for coordinated access to multiple entities, whereas atoms provide uncoordinated access to a single entity. Because you're working with two related lists, all-packages and installed-packages, it would be safer to use refs to represent them.

2. A simpler way to create a command-line utility like this is to make use of a CLI library. Clojure has a few good ones. See this question on Stack Overflow for a few good methods.

3. You used def inside of two function definitions, which is generally considered incorrect in Clojure. Usually you will only use def, defn, etc. at the top level of your program. When you're inside of a def, defn, etc., you should use let instead to do what you're trying to do. See below:

(defn not-needed? [package self-uninstall]
  (let [clients (if self-uninstall
                  (disj (get-clients package) package)
                  (get-clients package))]
    (empty? clients)))

(defn runprog []
  (println "starting")
  (reset! run true)
  (while (true? @run)
    (let [line (read-line)
          [command & args] (split line #"\s+")]
      ...

Notice, also, how I used destructuring to simplify command (first (split line #" +")), args (rest (split line #" +")) to just [command & args] (split line #"\s+"). Destructuring is a very powerful tool that can make your code more concise and easier to read.

Minor things:

- For your in? function, you can simplify (some #(= elm %) seq) to (some #{elm} seq). The #{} reader is a shorthand notation for a set, and a set in Clojure can be used as a function that looks up its argument in the set and returns either the argument if it is found, or nil if it's not. Because any value that isn't false or nil is considered truthy in Clojure, that means you can use a set as a function that tells you whether or not an element is contained in a collection. By the way, I would name the argument in this function coll rather than seq, as seq already refers to the clojure.core/seq function.

- You can simplify the (get (get ... form in your get-providers and get-clients functions by using get-in:

(defn get-providers [package] (get-in @all-packages [package :providers]))
(defn get-clients [package] (get-in @all-packages [package :clients]))

- You can omit the do in your install-new function definition. Any time you're doing something like defining a function using defn, there is already an implicit do:

(defn install-new [package]
  (println "\t installing" package)
  (swap! installed-packages conj package))

- You have a few functions (install, not-needed? and uninstall) that take a parameter called either self-install or self-uninstall, which is expected to be either true or false. It would be more idiomatic to make these keyword arguments, like this:

(defn install [package & {:keys [self-install]}]
  ; the rest of the function would still be the same

(defn not-needed? [package & {:keys [self-uninstall]}]
  ; etc.

Then you would call the functions like this, for example:

(install "clementine" :self-install true)

It is a little more verbose, but I think it's more elegant and it makes it clearer what your code is doing.

- Your uninstall function smells of OOP practices, in particular in the way that you have nested if structures. These aren't always bad form in Clojure, but generally it's better to find a more functional way of expressing the flow of your program. Consider using cond as an alternative:

(defn uninstall [package & {:keys [self-uninstall]}]
  (cond
    (not (installed? package))
    (println "\t" package "is not installed.")

    (and (installed? package)
         (not (not-needed? package :self-uninstall self-uninstall)))
    (println "\t" package "is still needed.")

    :else
    (do (doseq [provider (get-providers package)]
          (update-sys (remove-client (get-package provider) package)))
        (uninstall-package package)
        (doseq [provider (filter #(not-needed? % :self-uninstall false) (get-providers package))]
          (uninstall provider :self-uninstall false)))))

This is also longer than your code, but I find it clearer and easier to read.

Nitpicky things:

1. I would consider renaming your create function to package, simply because create sounds like a function that would change the state of something, like perhaps it would create a package within one of your package lists. I understand that that's not what your function does, but I feel like if you called it package instead, it would make it clearer that (package "foo" "bar" "baz") just represents a package named "foo" with providers "bar" and clients "baz". You do have a handful of functions that change the state of your package lists, so I think it pays to be careful about what you name your functions, so you can make it easier to know which functions will mutate the state of your lists, and which ones are pure functions.

2. While you're at it, you might consider making your package (a.k.a. create) function use keyword args too:

(defn package
  "makes a package element with provided name, optional list of providers are packages required by package, optional list of clients are packages using package"
  [name & {:keys [providers clients] :or {providers #{} clients #{}}}]
  {:name name, :providers providers, :clients clients})

You could then use it either like this: (package "foo"), which returns {:name "foo" :providers #{} :clients #{}}, or like this: (package "foo" :providers #{"bar"} :clients #{"baz"}), which returns {:name "foo" :providers #{"bar"} :clients #{"baz"}}. You can even leave out :providers or :clients at will without having to worry about the order of arguments in your function call. This is another thing that is more verbose, but more readable if you don't mind the extra typing every time you call the function.

3. I mentioned in #1 the idea of carefully naming your functions so that you can tell which ones are mutating state vs. which ones are just pure functions. I would consider naming your non-pure functions (i.e. the ones that are changing the value of refs/atoms) with a ! at the end. This is idiomatic in Clojure. I would do that with the following functions: update-sys!, add-sys-package!, install-new!, install!, uninstall-package! and uninstall!. I would do the same thing with your stop! and exit! functions, and would also consider renaming run to run? since it represents a boolean value.
_cs.60605
I have a large graph G and a pair of nodes s,t. I want to use the A* algorithm to find the shortest path from s to t, and I have a heuristic that is consistent.

Suppose I already know of a path whose length is guaranteed to be at most 1.1 times as long as the shortest path. Does this help A* find the shortest path faster? If so, how and why? Can knowledge of this path be used to improve the running time of A*, reduce the number of expanded nodes, or make the heuristic more accurate?

One improvement I can see is to remove all the nodes in the open list whose cost exceeds the length of the known solution. But if the heuristic is weak, this wouldn't help much in reducing the number of nodes to be explored in the open list. So, in what other ways can one use the known path to help A* terminate earlier than it would without knowledge of a complete path?
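As a concrete illustration of the open-list pruning idea above, here is a minimal Python sketch of A* that never enqueues a node whose f = g + h exceeds the known upper bound (ub would be the length of the known 1.1-approximate path; since the heuristic is admissible, such nodes cannot lie on any path shorter than the one already known):

import heapq

def astar_with_bound(start, goal, neighbors, h, ub):
    """A* that prunes any node whose f = g + h exceeds a known upper bound ub.

    neighbors(u) yields (v, w) edge pairs; h is a consistent heuristic.
    Returns the shortest-path length, or None if no path within ub exists.
    """
    open_heap = [(h(start), 0, start)]
    best_g = {start: 0}
    while open_heap:
        f, g, u = heapq.heappop(open_heap)
        if u == goal:
            return g
        if g > best_g.get(u, float('inf')):
            continue  # stale heap entry
        for v, w in neighbors(u):
            g2 = g + w
            f2 = g2 + h(v)
            if f2 > ub:
                continue  # the known path is already at least this good
            if g2 < best_g.get(v, float('inf')):
                best_g[v] = g2
                heapq.heappush(open_heap, (f2, g2, v))
    return None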
Early termination of A* with weak heuristic if solution is known
algorithms;graphs;optimization;shortest path;heuristics
null
_unix.333965
Say I have 2 heredocs. One runs in the foreground, and its counterpart runs right after it, in the background (&):

bash -s -- $$ <<'EOF0'
apt-get install phpmyadmin -y
phpenmod mcrypt mbstring
cat << EOF1 >> /etc/apache2/apache2.conf
Include /etc/phpmyadmin/apache.conf
EOF1
service apache2 restart
EOF0

And:

nohup bash -s -- $$ <<'EOF1' &
sleep 1m
apt-get purge phpmyadmin -y
phpdismod mcrypt mbstring
service apache2 restart
sed -i 's/Include \/etc\/phpmyadmin\/apache.conf/ /g' /etc/apache2/apache2.conf
EOF1

My question: how could I combine, or concatenate, them as I would concatenate cd ~ && curl [URL]?

Notes: I know I could combine them into one piece and then choose what runs in the background and so forth, but concatenating them seems to me the most elegant solution.
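For what it's worth, a minimal sketch of one way to put both on a single line (untested against the exact payloads above): bash attaches here-document bodies in the order the << operators appear on the line, so the first body follows the line immediately and the second begins after the first delimiter. The ; keeps the first script in the foreground, while the trailing & backgrounds only the nohup command:

bash -s -- $$ <<'EOF0'; nohup bash -s -- $$ <<'EOF1' &
# body of the first (foreground) script
apt-get install phpmyadmin -y
EOF0
# body of the second (backgrounded) script
sleep 1m
apt-get purge phpmyadmin -y
EOF1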
How can you concatenate (or just combine) heredocuments (heredocs)?
here document;syntax
null
_webapps.102364
I want to be able to sort columns in a sheet. I know how to do this, and I know how to freeze the top one or two rows so that the headings don't get sorted.

My question is this: how do I prevent a totals row from being sorted? E.g. in the image below, I want to be able to sort by number of bananas, but I don't want the totals to move when I start sorting.

With a bit more googling, it seems the best way is to select the cells I want to sort, right-click, and select Sort Range. Is there any other way?
How to prevent sorting on lower rows in Google Sheets
google spreadsheets
null
_unix.220128
I have the wifi adapter TP-Link TL-WN722N. I want to use it to access the internet and to share a wifi access point at the same time.

I did it on Windows 8:

netsh wlan set hostednetwork mode=allow ssid=APname key=mykey keyUsage=persistent
netsh wlan start hostednetwork

How can I do this from Ubuntu 14.04?
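For reference, a sketch of one common Linux approach using hostapd (the interface names and config values here are illustrative assumptions; simultaneous client + AP use also requires the adapter/driver to support it, which `iw list` reports under "valid interface combinations"):

# create a second, AP-mode virtual interface on the same adapter (the name ap0 is arbitrary)
iw dev wlan0 interface add ap0 type __ap

# /etc/hostapd/hostapd.conf -- minimal example
interface=ap0
driver=nl80211
ssid=APname
hw_mode=g
channel=6
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase=mykey

# then start the access point
hostapd /etc/hostapd/hostapd.conf

On top of this, clients still need a DHCP server (e.g. dnsmasq) and NAT rules (iptables) on the hotspot interface to actually reach the internet.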
Wifi and wifi hotspot at once
ubuntu;wifi;wifi hotspot;access point
null
_softwareengineering.110253
Should database files (scripts, etc.) be in source control? If so, what is the best method to keep them there and update them?

Is there even a need for database files to be in source control, since we can put the database on a development server where everyone can use it and make changes to it if needed? But then we can't get it back if someone messes it up.

What approach is best for keeping databases in source control?
Database source control
database;version control
Yes. You should be able to rebuild any part of your system from source control, including the database (and I'd also argue certain static data).

Assuming that you don't want to have a tool to do it, I'd suggest you want to have the following included:

- Creation scripts for the basic table structures including schemas, users, tables, keys, defaults and so on.
- Upgrade scripts (either altering the table structure or migrating data from a previous schema to the new schema)
- Creation scripts for stored procedures, indexes, views, triggers (you don't need to worry about upgrade for these as you just overwrite what was there with the correct creation script)
- Data creation scripts to get the system running (a single user, any static picklist data, that sort of thing)

All scripts should include the appropriate drop statements and be written so they can be run as any user (so including associated schema / owner prefixes if relevant).

The process for updating / tagging / branching should be exactly as the rest of the source code - there's little point in doing it if you can't associate a database version with an application version.

Incidentally, when you say people can just update the test server, I'm hoping you mean the development server. If developers are updating the test server on the fly then you're looking at a world of pain when it comes to working out what you need to release.
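A sketch of what a re-runnable creation script of this kind might look like (T-SQL flavour shown; the table, columns, and data are invented for illustration, so adjust for your RDBMS):

-- drop-then-create so the script can be run repeatedly against any environment
IF OBJECT_ID('dbo.Customer', 'U') IS NOT NULL
    DROP TABLE dbo.Customer;
GO

CREATE TABLE dbo.Customer (
    CustomerId INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Name       NVARCHAR(100)     NOT NULL,
    CreatedAt  DATETIME          NOT NULL DEFAULT GETDATE()
);
GO

-- static picklist-style data, inserted idempotently
IF NOT EXISTS (SELECT 1 FROM dbo.Customer WHERE Name = 'System')
    INSERT INTO dbo.Customer (Name) VALUES ('System');
GO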
_cstheory.21806
Consider a graph $G$ with max degree $\Delta_G$, min degree $\delta_G$ and average degree $d_G$. Is it possible to obtain a subgraph of $G$, say $G'$, such that $\Delta_{G'} = c_1 d_{G}$ and $\delta_{G'} = c_2 d_{G}$, where $c_1, c_2$ are constants?

EDIT 1: Also, the size of the resultant subgraph $G'$ must be a constant fraction of the size of the original graph $G$.

My attempt: Let $X$ be the random variable which denotes the degree of a randomly chosen vertex. Therefore, $d_G = E[X]$. Using Markov's inequality, we have
$$P(X > 3E[X]) < \frac{1}{3}$$ and
$$P\left(X < \frac{E[X]}{3}\right) = P\left(\frac{1}{X} > \frac{3}{E[X]}\right) < \frac{E[\frac{1}{X}] \cdot E[X]}{3} < \frac{1}{3}.$$
Using the union bound, we have
$$P\left(\frac{E[X]}{3} < X < 3E[X]\right) > \frac{1}{3}.$$
Using the above equation, we can show that there exist $|V|/3$ vertices in $G$ whose degree is between $d_G/3$ and $3d_G$.

Does the graph induced by the above $|V|/3$ vertices solve the problem? If not, can we modify the technique to obtain a solution?

Thanks in advance!
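Making the union-bound step explicit (taking the two tail bounds above as given, this is just a restatement of the argument):
$$P\left(X > 3E[X] \ \text{ or } \ X < \frac{E[X]}{3}\right) \le P(X > 3E[X]) + P\left(X < \frac{E[X]}{3}\right) < \frac{1}{3} + \frac{1}{3} = \frac{2}{3},$$
so the complementary event $\frac{E[X]}{3} < X < 3E[X]$ has probability greater than $\frac{1}{3}$. Since $X$ is the degree of a uniformly random vertex, that probability equals the fraction of vertices with degree in this range, giving more than $|V|/3$ such vertices.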
Subgraph of G whose maximum degree and minimum degree are of the same order
graph theory;pr.probability
null
_unix.320184
I have a device connected to the internet, but it is behind a NAT, meaning that I cannot ssh directly into it from outside of the network. I have already figured out that reverse SSH tunneling can circumvent this issue. Now I want to run a netcat server process on this device, accessible once again from outside of the network. How would I go about doing this?
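A sketch of how the two pieces could fit together, assuming a public host you can ssh to (host names and ports below are illustrative; note that sshd binds remote forwards to the public host's loopback unless GatewayPorts is enabled in its sshd_config):

# on the NATed device: run the netcat server locally
nc -l -p 12345 &            # traditional netcat syntax; some builds want `nc -l 12345`

# still on the NATed device: publish that port on the public host via a reverse tunnel
ssh -N -R 0.0.0.0:12345:localhost:12345 user@public-host &

# then, from anywhere on the internet:
nc public-host 12345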
netcat from behind NAT
networking
null
_cogsci.6144
I have recently read an interesting article about Morita therapy. I have a hard time finding articles, books, or research on this subject, and on how it works in the West, since this therapy was developed and mostly applied in Japan.

What is interesting about it: this therapy (or just Morita's philosophy) doesn't try to change a person's feelings of fear/anxiety. It teaches the person to understand his/her feelings and live with them, doing things that need to be done despite fear/anxiety.

Does Morita's idea have a western counterpart? Any free reading on it (something I don't have to buy)? And is there any research paper on it that is reliable and accessible? Any idea of something similar is welcome. :)
Morita therapy - does it have a western counterpart?
clinical psychology;cross cultural psychology;anxiety;fear
Reminds me a bit of existential therapy and humanistic psychology in general. Unconditional positive regard and motivational interviewing both insist on approaching undesirable feelings from the client's perspective without imposing fixes or changes on the person through pressure or blunt confrontationality. Existential theory even presumes major, inevitable negativity in life experience for which no individual could be blamed. There are plenty of terrible realities for every individual to face and accept in the process of adjustment and maturation. The individual must do this freely, idiosyncratically, and earnestly for oneself, or so the theory goes. Focus is not on eliminating negativity but on finding meaning amidst it. I hope these Wikipedia pages give you plenty to read for starters, but I can try to find more if you like.
_codereview.96703
I made this query to create a graph of a user's popular questions and the view count on each question. It allows for a minimum of 500 views and a score of 3.

DECLARE @allowed_min_views INT = 500;
DECLARE @allowed_min_score INT = 3;
DECLARE @user_id INT = ##UserId:int?-1##;
DECLARE @min_views INT = ##MinimumViews:int?500##;
DECLARE @min_score INT = ##MinimumScore:int?3##;
DECLARE @question INT = 1;

IF (@min_views < @allowed_min_views)
BEGIN
    PRINT '@MinimumViews must be larger than 499.'
END

IF (@min_score < @allowed_min_score)
BEGIN
    PRINT '@MinimumScore must be larger than 2.'
END

IF (@min_views >= @allowed_min_views AND @min_score >= @allowed_min_score)
BEGIN
    SELECT ViewCount
         , Score
    FROM Posts
    WHERE PostTypeId = @question
      AND OwnerUserId = @user_id
      AND ViewCount >= @min_views
      AND Score >= @min_score
    ORDER BY ViewCount ASC;
END

Finally, here's some sample input (I'm using @Mat'sMug's user ID):

@user_id: 23788
@min_views: 500
@min_score: 7
Popular questions by view count
sql;sql server;t sql;stackexchange
Good things

You use good local variables, and you are consistent with your naming. The typical naming for T-SQL is PascalCase; however, there are no standards, and snake_case or camelCase work just as well, as long as you are consistent (which you are). You validate your values, although I am not quite sure why you chose 500 and 3 as arbitrary minimums (might be worth documenting).

Results are not very useful...

As written, your query returns this:

ViewCount Score
--------- -----
571       10
629       5
685       6
721       10
728       11
761       12
840       25
849       7
870       17
888       9
1065      10
...

Which is all well and good, except it doesn't give much information. Let's say we rewrite it a bit like this:

IF (@min_views >= @allowed_min_views AND @min_score >= @allowed_min_score)
BEGIN
    SELECT ViewCount
         , Score
         , [User Link] = @user_id
         , [Post Link] = Id
         , CreationDate
         , [Tags] = Tags
    FROM Posts
    WHERE PostTypeId = @question
      AND OwnerUserId = @user_id
      AND ViewCount >= @min_views
      AND Score >= @min_score
    ORDER BY ViewCount DESC;
END

Notice I changed ORDER BY to DESC; I think it makes more sense to show the highest first.

Then we get a more sensible result set, e.g.:
_softwareengineering.214154
I'm trying to test a class which calls some Hadoop web services. The code is pretty much of the form:

method() {
    ...use Jersey client to create WebResource...
    ...make request...
    ...do something with response...
}

e.g. there is a create directory method, a create folder method, etc.

Given that the code is dealing with an external web service that I don't have control over, how can I unit test this? I could try to mock the web service client/responses, but that breaks a guideline I've seen a lot recently: "Don't mock objects you don't own." I could set up a dummy web service implementation - would that still constitute a unit test, or would it then be an integration test? Is it just not possible to unit test at this low a level - how would a TDD practitioner go about this?
How can I unit test a class which requires a web service call?
unit testing;tdd;web services;integration tests
In my opinion you should mock the webservice calls if this is a unit test, as opposed to an integration test.Your unit test should not test whether the external webservice is working, or whether your integration with it is correct. Without getting too dogmatic about TDD, note that a side effect of turning your unit test into an integration test is that it's likely to run slower, and you want fast unit tests. Also, if the webservice is temporarily down or working incorrectly, should this cause your unit test to fail? It doesn't seem right. Your unit test should fail for only one reason: if there is a bug in the code in that unit.The only portion of code that is relevant here is ...do something with response.... Mock the rest.
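One way to reconcile this with the "don't mock objects you don't own" guideline from the question is to wrap the Jersey calls behind a small interface you do own, and mock or stub that seam instead. A minimal Java sketch; every name and the response format here are invented for illustration:

// the seam you own; the real implementation would hold the Jersey client
interface HadoopGateway {
    String createDirectory(String path);
}

// the unit under test depends only on the seam, never on Jersey directly
class DirectoryService {
    private final HadoopGateway gateway;

    DirectoryService(HadoopGateway gateway) {
        this.gateway = gateway;
    }

    boolean ensureDirectory(String path) {
        // this is the "...do something with response..." part from the question
        return gateway.createDirectory(path).contains("true");
    }
}

// in the unit test, stub the seam by hand (or with a mocking library like Mockito):
class FakeGateway implements HadoopGateway {
    public String createDirectory(String path) {
        return "{\"boolean\": true}";
    }
}
// assert: new DirectoryService(new FakeGateway()).ensureDirectory("/tmp/x") == true

The real HadoopGateway implementation then gets exercised separately in a (slower) integration test against the actual service.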
_cs.60165
Are there any problems that are easy for bipartite graphs, but hard for general graphs?

I am asking because some classical problems are formulated in reference to people looking for a spouse, such as the marriage problem (for straight people) and the stable marriage problem (for straight people). Both are in FP.

If one removes the requirement that there are two genders and that every man has to marry exactly one woman, the general stable marriage problem (my term) is the same as the stable roommates problem, and a solution is no longer guaranteed to exist. I wonder if there are other problems which are explained using similar metaphors for which there is also an increase in complexity.
Problems that are easy on bipartite but hard on general graphs
complexity theory;graphs;bipartite matching
There are several well-known NP-complete problems that become solvable in polynomial time for bipartite graphs. For example, 3-coloring is easy, as bipartite graphs are precisely the 2-colorable graphs. Another example is independent set, which is made easy by König's theorem. Wikipedia also lists a problem that is NP-hard for general graphs but is trivially in P for bipartite graphs: the odd cycle transversal problem.
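To make the 2-colorable remark concrete, a minimal sketch of the standard linear-time BFS 2-coloring, which doubles as a bipartiteness test:

from collections import deque

def two_color(adj):
    """Return a {vertex: 0/1} coloring if the graph is bipartite, else None.

    adj maps each vertex to an iterable of its neighbors.
    """
    color = {}
    for source in adj:
        if source in color:
            continue
        color[source] = 0
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]   # put v on the opposite side
                    queue.append(v)
                elif color[v] == color[u]:    # an odd cycle: not bipartite
                    return None
    return color

# e.g. two_color({'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}) -> {'a': 0, 'b': 1, 'c': 0}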
_datascience.653
I am trying to find which classification methods that do not use a training phase are available. The scenario is gene-expression-based classification, in which you have a matrix of gene expression with m genes (features) and n samples (observations). A signature for each class is also provided (that is, a list of the features used to decide which class a sample belongs to).

An application (non-training) is the Nearest Template Prediction method. In this case, the cosine distance is computed between each sample and each signature (on the common set of features). Then each sample is assigned to the nearest class (the sample-class comparison resulting in the smallest distance). No already-classified samples are needed in this case.

A different application (training) is the kNN method, in which we have a set of already labeled samples. Then, each new sample is labeled according to the labels of its k nearest samples.

Are there any other non-training methods? Thanks
Which non-training classification methods are available?
classification
What you are asking about is Instance-Based Learning. k-Nearest Neighbors (kNN) appears to be the most popular of these methods and is applicable to a wide variety of problem domains. Another general type of instance-based learning is Analogical Modeling, which uses instances as exemplars for comparison with new data.You referred to kNN as an application that uses training but that is not correct (the Wikipedia entry you linked is somewhat misleading in that regard). Yes, there are training examples (labeled instances) but the classifier doesn't learn/train from these data. Rather, they are only used whenever you actually want to classify a new instance, which is why it is considered a lazy learner.Note that the Nearest Template Prediction method you mention effectively is a form of kNN with k=1 and cosine distance as the distance measure.
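To illustrate the last point, nearest template prediction can be written as a few lines of numpy, classifying each sample by smallest cosine distance to the class signatures (array shapes and names below are illustrative):

import numpy as np

def nearest_template(samples, templates):
    """Assign each sample (row) to the template (row) with smallest cosine distance.

    samples:   (n_samples, n_features) expression matrix
    templates: (n_classes, n_features) one signature vector per class
    Returns an array of class indices, one per sample.
    """
    s = samples / np.linalg.norm(samples, axis=1, keepdims=True)
    t = templates / np.linalg.norm(templates, axis=1, keepdims=True)
    cosine_sim = s @ t.T               # cosine distance = 1 - similarity
    return np.argmin(1 - cosine_sim, axis=1)

# e.g. nearest_template(np.array([[1., 0.], [0., 2.]]),
#                       np.array([[1., 0.1], [0.1, 1.]]))  ->  array([0, 1])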
_codereview.64490
I'm focusing on trying to write clean, modular Python 2.7.5 code.def main(): type = sys.argv[1] dataPatient = parseJSONFile(fileName) if(type == type 1): patient = [d['patient_family_member_id'] for d in dataPatient if d['id'] == sys.argv[2]] displayList(flatten(patient), Msg1) elif(type == type 2): doctor = [d['doctor_id'] for d in dataPatient if sys.argve[2] in set(d['patient_id'])] displayList(doctor, Msg2) elif(type == type 3): dataDoctor = parseData(fileName2) doctor = [d['doctor_id'] for d in dataDoctor if sys.argv[2] in set(d['year_graduated']) #notice this is doing nearly the same thing as the type 1 case for each doctor patient = [d['patient_family_member_ids'] for d in doctor for p in dataPatient if d == p['doctor']] displayList(flatten(patient), Msg3) elif(type == type 4): patient = [d['patient'] for p in dataPatient if sys.argv[2] in set(d['age']) displayList(flatten(patient), Msg4)Most of this code in the if else statements are very similar. I feel like I might be able to use a lambda to clean this code up but I'm not sure that will work. I'm trying to make this code easy for someone to read and add more cases. What is the best way of avoiding a lot of the repetition in this code?
Conditional statements with doctor and patient information
python;python 2.7;lambda
null
_webapps.11097
After doing some research, I know I need to create a new page and add a form to it. Is that the best way? Doesn't Blogger have an appropriate built-in feature?

What form service can you recommend: EmailMeForm, Kontactr, Wufoo, or another service?

I tried to use Google Forms, but the customization options are really poor (e.g. I couldn't remove the form header with its title and description).
How to add a Contact me form to a blog hosted on Blogger?
blogger
Blogger now has a built-in feature for this. To use it, go to Layout, choose "Add a Gadget", and then in the gadget-choice window, click "More Gadgets". "Contact form" is at the top of the list. Complete the details and save the form layout, and you will have a contact feature in the sidebar, footer, or header (wherever you put the gadget).

If you want it to look like it is on a page, then:

1. Position it just below the Blog Posts gadget.
2. Make a page called something like "Contact", and link to it from your menu etc.
3. Set it up so that you only display the gadget on that specific page. (Instructions for this here.)

Alternatively, Google Forms has developed a lot in the last couple of years, so you could easily make a contact-me form in it. However, this will put the messages left into a Google Sheets spreadsheet, not email them to you.