Q: Regex for a decimal value restricted to 16 digits

I want a regex expression to restrict an input decimal value to at most 16 digits, or 15 digits and one character (including the decimal point). I found the regex below; it works fine in C# code, but when I use it as a mask in a TextEdit XAML (DevExpress) it throws a syntax-error exception.

Mask: ^(?:(?=.{0,16}$)\d*\.\d+|\d{0,16})[kKmMbBtT]?$

TextEdit XAML: <dxe:TextEdit HorizontalAlignment="Left" MaskType="RegEx" Mask="(?:(?=.{0,16}$)[0-9]*([.]?[0-9]+)|[0-9]{0,16})[kKmMbBtT]?" VerticalAlignment="Top" Width="150" EditValue="{Binding DecValue, UpdateSourceTrigger=PropertyChanged, Mode=TwoWay}" Margin="10,33,0,0"/>

What I want to achieve: the user can enter at most a 16-digit decimal value (including the decimal point), or 15 digits and one character (including the decimal point); the decimal point can be entered only once; and the total length of the input string must not exceed 16 characters.

A: According to the documentation: "Extended Regular Expressions provide almost unlimited flexibility to create input masks. The syntax used by masks in this mode is similar to the syntax defined by the POSIX ERE specification. Back referencing is not supported." So you cannot use grouping constructs such as (?: subexpression) or (?= subexpression) etc.
You can use some weird mask like this: \d{0,16}|\d{14}\R.\d{1}|\d{13}\R.\d{1,2}|\d{12}\R.\d{1,3}|\d{11}\R.\d{1,4}|\d{10}\R.\d{1,5}|\d{9}\R.\d{1,6}|\d{8}\R.\d{1,7}|\d{7}\R.\d{1,8}|\d{6}\R.\d{1,9}|\d{5}\R.\d{1,10}|\d{4}\R.\d{1,11}|\d{3}\R.\d{1,12}|\d{2}\R.\d{1,13}|\d{1}\R.\d{1,14}|\R.\d{1,15} And in your XAML: <dxe:TextEdit HorizontalAlignment="Left" MaskType="RegEx" Mask="\d{0,16}|\d{14}\R.\d{1}|\d{13}\R.\d{1,2}|\d{12}\R.\d{1,3}|\d{11}\R.\d{1,4}|\d{10}\R.\d{1,5}|\d{9}\R.\d{1,6}|\d{8}\R.\d{1,7}|\d{7}\R.\d{1,8}|\d{6}\R.\d{1,9}|\d{5}\R.\d{1,10}|\d{4}\R.\d{1,11}|\d{3}\R.\d{1,12}|\d{2}\R.\d{1,13}|\d{1}\R.\d{1,14}|\R.\d{1,15}" VerticalAlignment="Top" Width="150" EditValue="{Binding DecValue, UpdateSourceTrigger=PropertyChanged, Mode=TwoWay}" Margin="10,33,0,0"/>
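Outside the DevExpress mask engine, the full C# pattern (lookahead included) and the 16-character rule it encodes can be sanity-checked with any regex engine that supports lookahead. A quick Python sketch (the sample values are my own, chosen to probe the length limit):

```python
import re

# The original pattern from the question; the ^...$ anchors are implied
# by fullmatch(). This only checks the validation logic, not the
# character-by-character masking DevExpress performs.
pattern = re.compile(r'(?:(?=.{0,16}$)\d*\.\d+|\d{0,16})[kKmMbBtT]?')

samples = [
    '1234567890123456',   # 16 digits            -> valid
    '12345678901234.5',   # 16 chars with a dot  -> valid
    '1.5k',               # decimal plus suffix  -> valid
    '12345678901234567',  # 17 digits            -> invalid
    '123456789012345.6',  # 17 chars with a dot  -> invalid
]
for s in samples:
    print(s, bool(pattern.fullmatch(s)))
```

Note that the `(?=.{0,16}$)` lookahead only guards the decimal branch; a 16-digit integer plus a suffix letter (17 characters total) still matches via the `\d{0,16}` branch.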
{ "pile_set_name": "StackExchange" }
Q: R function to inspect number of arguments in another function? Is there a built-in R function, or a way to write an R function, that can check how many inputs another function takes, and also list the names of its optional arguments? Let's call this desired function f; then the following command: f(dnorm) should output 4 and mean, sd, log, since there are 4 arguments associated with 'dnorm' and 3 optional arguments: mean, sd, log. Or maybe this is not possible? Any insight is appreciated! A: You can try: length(formals(dnorm)) # [1] 4 names(Filter(function(x) !is.symbol(x) || nchar(as.character(x)), formals(dnorm))) # [1] "mean" "sd" "log" Two functions technically, but that gets the job done. For the second one, you may need to play around a bit if the default arguments are complex.
Q: Python - How to pass a variable from one method to another in Python? I've looked around for a question like this. I've seen similar questions, but nothing that really helped me out. I'm trying to pass the choice variable from the method rollDice() to the method main(). This is what I've got so far: import random import os import sys def startGame(): answer = input('Do you want to play Super Dice Roll?\nEnter 1 for Yes\nEnter 2 for No\n' os.system('cls') if (answer == '1'): rollDice() elif(answer == '2'): print('Thank you for playing!') else: print('That isn/t a valid selection.') StartGame() def rollDice(): start = input('Press Enter to roll dice.') os.system('cls') dice = sum(random.randint(1,6) for x in range (2)) print('you rolled ',dice,'\n') choice = input('Do you want to play again?\nEnter 1 for Yes\nEnter 2 for No.\n) return choice def main(): startGame() while (choice == '1'): startGame() print('Thank you for playing') print('!~!~!~!~WELCOME TO SUPER DICE ROLL~!~!~!~!~\n') main() I know that I may have other things in here that are redundant or I may have to fix, but I'm just working on this one issue right now. I'm not sure how to pass the choice variable into the main() method. I've tried putting choice == rollDice() in the main() method but that didn't work. I do mostly SQL work, but wanted to start learning Python and I found a website that has 5 beginner tasks but virtually no instructions. This is task one. 
A: You need to put the return value of the function into a variable to be able to evaluate it (I also corrected a few bugs in your code, mainly typos): import random import os def startGame(): answer = input('Do you want to play Super Dice Roll?\nEnter 1 for Yes\nEnter 2 for No\n') os.system('cls') while answer == '1': answer = rollDice() if answer == '2': print('Thank you for playing!') else: print('That isn/t a valid selection.') startGame() def rollDice(): input('Press Enter to roll dice.') os.system('cls') dice = sum(random.randint(1,6) for x in range (2)) print('you rolled ', dice, '\n') choice = input('Do you want to play again?\nEnter 1 for Yes\nEnter 2 for No.\n') return choice def main(): print('!~!~!~!~WELCOME TO SUPER DICE ROLL~!~!~!~!~\n') startGame() print('Thank you for playing') main()
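A side note on testing: because the answer's rollDice mixes input() with the dice logic, it is hard to verify in isolation. A minimal sketch (my own refactor, not from the answer) separates the roll from the I/O so the return-value flow is easy to check:

```python
import random

def roll_dice():
    # Two six-sided dice, the same expression the answer uses.
    return sum(random.randint(1, 6) for _ in range(2))

# The key point of the answer: a return value does nothing unless
# you assign it to a variable at the call site.
dice = roll_dice()
print('you rolled', dice)
```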
Q: A general method for a document to extend as much as needed in order to fit the content of the document I have a Python script that automatically creates a LaTeX document for a given input. In this document, I have a lot of proof trees (created using the bussproofs package), though it doesn't really matter what these proof trees are. The point is that in some cases the resulting proof trees are either too long horizontally or too long vertically, which results in the proof trees not fitting on the page and producing errors. So, I would like to adjust the page width or height if needed in order for all the content to fit inside the page. The problem is that I can't just put some maximal value for the page width and height. So, I would like to put a few commands so that the page width and height extend sufficiently to fit any content whatsoever. I should note that an answer for how to extend the page width in order to fit one line of text of arbitrary length was given here, if that helps someone come up with a solution. A: Well, I managed to find a solution that works for me, although it may not help everyone. The solution is not in the tex file but in the Python script: after compiling the document once with the default \pdfpagewidth = 597pt and \pdfpageheight = 845pt, I scan the log file to see if there are Overfull \hbox or \vbox warnings, and if there are, I change the width and height values in the tex file and compile it again.
Q: 'sh run global' and 'sh run nat' yield no output but a ton of NAT in 'sh run' As the topic says, I'm investigating NAT on a client's ASA - it's running old 7.2 train code - I execute 'sh run global' and 'sh run nat' - the latter command only returns a single nat 0 line. When I do a 'sh run | b static' (which I thought would've shown up under one of the previous two commands!) I get a long list of policy-based NAT in the format of 'static (outside,inside) x.x.x.x access-list ', which is what I would expect to have seen from one of those previously attempted commands. What commands need to be run to see everything involving NAT on a 7.2 ASA? I am not seeing any kind of DST NAT (which I expect in this particular case) for the tunnel I'm looking at, yet the tunnel is up and passing traffic, so it's happening somewhere! Thank you in advance! A: 'show run static' is a usable show command as well. A: show global show nat show static show conduit (unless you've switched to ACLs) Of course, that's going to be 90% of the entire configuration anyway. (more if pdm isn't enabled, thus flooding the config with pdm location ...)
Q: display details of record in new VF page when save button is clicked this is my page1 <apex:page standardController="Case" extensions="newClass" showHeader="false" > <apex:form > <apex:pageBlock title="Case Status"> <apex:pageBlockSection columns="1"> <apex:inputField value="{!Case.Status}"/> <apex:inputField value="{!Case.Reason}"/> <apex:inputField value="{!Case.Priority}"/> </apex:pageBlockSection> <div> <apex:commandButton action="{!redirectToMyVF}" value="CloseCase"/> </div> </apex:pageBlock> </apex:form> </apex:page> this is my page2 <apex:page standardController="Case" extensions="newClass" showHeader="false" > <apex:form > <apex:pageBlock title="Case Status"> <apex:pageBlockSection columns="1"> <apex:inputField value="{!Case.Status}"/> <apex:inputField value="{!Case.Reason}"/> <apex:inputField value="{!Case.Priority}"/> </apex:pageBlockSection> <apex:pageBlockButtons> <apex:commandButton action="{!saveAndRedirect}" value="Save"/> </apex:pageBlockButtons> </apex:pageBlock> </apex:form> </apex:page> my apex class public class newClass { public newClass (ApexPages.StandardController controller) { } public PageReference redirectToMyVF() { PageReference pref = new PageReference('/apex/Page'); pref.setRedirect(true); return pref; } public PageReference saveAndRedirect() { if(controller.Save() != null) { PageReference redirectPage = Page.mypage2; return redirectPage; } return null; } } A: From your code, I assume you create the record on Page2 and then redirect the user to Page1 with the details of the case record that you just created. To do so, you need to pass the record id of the case record in the URL that you just created, so that when the Page1 loads, it retrieves the information about the record created. 
Controller:

public class newClass {
    public Case caseRecord { get; set; }
    private ApexPages.StandardController controller;

    public newClass (ApexPages.StandardController controller) {
        // keep a reference so the other methods can use the controller
        this.controller = controller;
    }

    public PageReference redirectToMyVF() {
        PageReference pref = new PageReference('/apex/Page');
        pref.setRedirect(true);
        return pref;
    }

    public PageReference saveAndRedirect() {
        //save your case record
        if (controller.Save() != null) {
            //retrieve the case record
            caseRecord = (Case) controller.getRecord();
            System.debug(caseRecord);
            //pass the case id as a URL parameter
            PageReference redirectPage = new PageReference('/apex/mypage2?id=' + caseRecord.Id);
            redirectPage.setRedirect(true);
            return redirectPage;
        }
        return null;
    }
}

Visualforce page:

<!-- this is the page where you want to show the newly created record details -->
<apex:page standardController="Case" extensions="newClass" showHeader="false" >
  <apex:form >
    <apex:pageBlock title="Case Status">
      <apex:pageBlockSection columns="1">
        <apex:inputField value="{!caseRecord.Status}"/>
        <apex:inputField value="{!caseRecord.Reason}"/>
        <apex:inputField value="{!caseRecord.Priority}"/>
      </apex:pageBlockSection>
      <div>
        <apex:commandButton action="{!redirectToMyVF}" value="CloseCase"/>
      </div>
    </apex:pageBlock>
  </apex:form>
</apex:page>
Q: Array works fine on localhost but not working on live server (gives error message Undefined offset: 0) - Laravel-5.8 Everything works perfectly okay on localhost but when migrated to godaddy live server(cpanel) I keep getting this error (Undefined offset: 0) on my blade view I have tested the application on my localhost using XAMPP running PHP 7.2.12 and it works very fine but now I moved it to godaddy cpanel running PHP 7.3 and it keeps giving me this error //This is my Route Route::get('/conversations', 'DoctorsController@Conversations'); //This is my Controller public function Conversations(Request $request){ //authenticate user if($request->us == 'guest'){ return redirect()->intended('login'); }else{ $unread=DB::table('messaging') ->where([ ['Reciever', Auth::user()->id], ['ReadStatus', '=', ''] ]) ->get(); $pending=$unread->count(); //retrieve previous chat; $conversations=DB::table('messaging') ->where('Sender', Auth::user()->id) ->orWhere('Reciever', Auth::user()->id) ->groupBy('Sender') ->orderBy('ReadStatus', 'asc') ->get(); //retrieve profile of users in the previous chat $profiles = array(); $read_status = array(); foreach($conversations as $conversation){ if($conversation->Sender == Auth::user()->id){ //check user role to know which database to query $userRole=DB::table('role_user') ->where('user_id', $conversation->Reciever) ->get(); if($userRole[0]->role_id === 2){ #retrieve the sender details from doctors table $profile=DB::table('doctors') ->where('doctor_id', $conversation->Reciever) ->get(); }else{ //retrieve the sender details from users table $profile=DB::table('profiles') ->where('user_id', $conversation->Reciever) ->get(); } if(in_array($profile, $profiles)){ }else{ array_push($profiles, $profile); } //retrieve the reciever details }else if($conversation->Reciever == Auth::user()->id){ //check user role to know which database to query $userRole=DB::table('role_user') ->where('user_id', $conversation->Sender) ->get(); 
if($userRole[0]->role_id === 2){ $profile=DB::table('doctors') ->where('doctor_id', $conversation->Sender) ->get(); }else{ $profile=DB::table('profiles') ->where('user_id', $conversation->Sender) ->get(); } //retrive unread chat; $unreadconvers=DB::table('messaging') ->select('ReadStatus') ->where([ ['Reciever', Auth::user()->id], ['Sender', $conversation->Sender], ['ReadStatus', '=', ''] ]) ->get(); if(in_array($profile, $profiles)){ }else{ $profile['unreads'] = $unreadconvers->count(); array_push($profiles, $profile); //array_push($read_status, $unreadconvers->count()); } } $i++; } return view('conversations')->with(['profile'=>$profiles, 'pending'=>$pending, 'unreads'=>$read_status]); //return to the conversation blade } } //This is my Blade template @foreach($profile as $profile) <div class="col-md-4 element-animate"> <div class="media d-block media-custom text-center"> <img src= "{{ URL::to(isset($profile[0]->image) ? $profile[0]->image : '../img/user.png') }}" alt="Image Placeholder" class="img-fluid img-fluid-doctors"> <div class="media-body"> <a href="{{ isset($profile[0]->doctor_id) ? url('/chat-doctor?db='.$profile[0]->doctor_id) : url('/chat-doctor?us='.$profile[0]->user_id) }}" class="envelop"><i class="far fa-envelope"></i><span class="unread">{{ isset($profile['unreads']) ? $profile['unreads'] : 0 }}</span> <h3 class="mt-0 text-black">{{ $profile[0]->name }}</h3> </a> </div> </div> </div> @endforeach At the Controller, this code is expected to retrieve all the messages from the database linking to the logged in user either send or received, store them using an array and display them at the blade template looping through each of the array. 
Currently that is what it does on localhost, but on the live server I get this error message: Undefined offset: 0 (View: /resources/views/conversations.blade.php) A: I have found the solution to this issue: I was using === instead of == where I had this code if($userRole[0]->role_id === 2) I changed this line of code to if($userRole[0]->role_id == 2) and now it works perfectly. (The strict comparison most likely fails on the live server because the database driver there returns role_id as a string rather than an integer, so === is false even when the value is 2.) Thank you for your response Chin Leung.
Q: Terraform version error when deploying to AWS through Jenkins? I was deploying through Jenkins using Terraform v0.10.7. After a successful deployment from my local machine using Terraform v0.11.1, I cannot deploy from Jenkins anymore; I get this error: Terraform doesn't allow running any operations against a state that was written by a future Terraform version. The state is reporting it is written by Terraform '0.11.1'. A: Using v0.11.1 run: $ terraform destroy Remove the .tfstate file Using v0.10.7 (or any version you want to use from now on), run: $ terraform apply
Q: Review system with login / signup required I'm writing a review system for items in Ruby on Rails. I want the process to be as follows: users start entering their review/ratings; when they hit submit, if they're not logged in, they are redirected to the signup or login page; they create an account or sign in; they're redirected to the post where they wrote the review, and the review is added. How do I do that? A: Some highlights: Use Authlogic. In your Review controller, say something like before_filter :require_user_before_reviews In require_user_before_reviews do something like /app/controllers/reviews_controller.rb def require_user_before_reviews return true if logged_in? session[:review_params] = params[:review] session[:return_url] = new_review_path redirect_to login_path end Then re-render the form on new_review_path with the session value after login. There are some broad strokes in there, but that should work. As a note: as a User, I'd want you to ask me to log in before I do the review.
Q: What is the best way to decorate methods of a Python class? I am following the conventions below to decorate certain methods in a Python class. I am wondering if there are better ways to do the same. My approach certainly doesn't look good; the call to the original member function doesn't look intuitive at all. from threading import Lock def decobj(fun): def fun2(*args, **kwards): with args[0].lock: print 'Got the lock' fun(*args, **kwards) return fun2 class A: def __init__(self, a): self.lock = Lock() self.x = a pass @decobj def fun(self, x, y): print self.x, x, y a = A(100) a.fun(1,2) A: If your decorator can only work on methods (because you need access to the instance-specific lock) then just include self in the wrapper signature (and return the wrapped call's result so it isn't lost): from functools import wraps def decobj(func): @wraps(func) def wrapper(self, *args, **kwargs): with self.lock: print 'Got the lock' return func(self, *args, **kwargs) return wrapper I included the @functools.wraps() utility decorator; it'll copy across various pieces of metadata from the original wrapped function to the wrapper. This is invariably a good idea.
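To see the lock decorator in action, here is a small Python 3 usage sketch (the Counter class and increment method are my own illustration, not from the question); every decorated call acquires the instance's lock before running:

```python
import threading
from functools import wraps

def locked(func):
    """Run the wrapped method while holding the instance's lock."""
    @wraps(func)
    def wrapper(self, *args, **kwargs):
        with self.lock:
            return func(self, *args, **kwargs)
    return wrapper

class Counter:
    def __init__(self):
        self.lock = threading.Lock()
        self.value = 0

    @locked
    def increment(self):
        # Read-modify-write; the lock makes this safe across threads.
        self.value += 1

c = Counter()
threads = [threading.Thread(target=c.increment) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(c.value)  # -> 10
```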
Q: How do I remove the delay between HTTP Requests when using Asynchronous actions in ASP.NET? I am using HttpClient to send a GET request to a server inside of a while loop while (cycle < maxcycle) { var searchParameters = new ASearchParameters { Page = cycle++, id = getid }; var searchResponse = await Client.SearchAsync(searchParameters); } and the SearchAsync contains public async Task<AuctionResponse> SearchAsync() { var uriString = "Contains a https url with parameters" var searchResponseMessage = await HttpClient.GetAsync(uriString); return await Deserialize<AuctionResponse>(searchResponseMessage); } The thing is after every request there is a delay before the next request is started. you can see this in fiddler timeline and also in fiddler there is "Tunnel To" example.com:443 before every request Question : Why is there a delay and how to remove it ? A: I see two things that are happening here. First, depending on the deserializer, it may take a while to translate your response back into an object. You might want to time that step and see if that's not the majority of your time spent. Second, the SSL handshake (the origin of your "tunnel to") does require a round trip to establish the SSL channel. I thought HttpClient sent a Keep-Alive header by default, but you may want to see if it is A) not being sent or B) being rejected. If you are re-establishing an SSL channel for each request, that could easily take on the order of a hundred ms all by itself (depending upon the server/network load). If you're using Fiddler, you can enable the ability to inspect SSL traffic to see what the actual request/response headers are.
Q: Sort array of structs by a member of each I am compiling data using the following structs: struct Nursing { var leftTime: Double var rightTime: Double var submissionTime: Date } struct Bottle { var bottleQuantity: Double var bottleUnits: String var submissionTime: Date } struct Puree { var pureeQuantity: Double var pureeType: String var pureeUnits: String var submissionTime: Date } Then I create arrays of each type using data from elsewhere in the app. var nursingArray = [Nursing]() var bottleArray = [Bottle]() var pureeArray = [Puree]() I then filter each array for only entries that occurred in the last day. let yesterday = Calendar.current.date(byAdding: .day, value: -1, to: Date()) var todayBottleArray = bottleArray.filter( { ( $0.submissionTime > yesterday! ) } ) var todayNursingArray = nursingArray.filter( { ( $0.submissionTime > yesterday! ) } ) var todayPureeArray = pureeArray.filter( { ( $0.submissionTime > yesterday! ) } ) Finally they all get combined into a single unsorted array. var unsortedTodayArray: [Any] = [] unsortedTodayArray.append(todayBottleArray) unsortedTodayArray.append(todayNursingArray) unsortedTodayArray.append(todayPureeArray) Here's the question...while I know they're unrelated, the submissionTime property appears in all three. How can I sort unsortedTodayArray by submissionTime? A: You can have your struct conform to the same protocol. Something like... protocol SubmissionTimeable { var submissionTime: Date { get set } } struct Nursing: SubmissionTimeable { var leftTime: Double var rightTime: Double var submissionTime: Date } struct Bottle: SubmissionTimeable { var bottleQuantity: Double var bottleUnits: String var submissionTime: Date } struct Puree: SubmissionTimeable { var pureeQuantity: Double var pureeType: String var pureeUnits: String var submissionTime: Date } Then let your unsorted array be an array of the protocol. let unsortedArray = [SubmissionTimeable]() Then you can sort that array with submissionTime.
Q: Problem with PHP Syntax I am trying to put a link together using 2 variables, but the output is the link and title with no HTML / clickable link appearing. I'm getting something like this: http://www.mydomain.com/post1/post_title_here Here is the code: echo '<a href="'.the_permalink().'">'.the_title().'</a>'; Can anyone help please? Thanks UPDATE: Here's the whole block of code: <div id="MyBlock1"> <?php $query = new WP_Query('posts_per_page=5'); while( $query ->have_posts() ) : $query ->the_post(); echo '<li>'; echo '<a href="'.the_permalink().'">'.the_title().'</a>'; echo '</li>'; endwhile; wp_reset_postdata(); ?> </div> A: That's because the WordPress functions the_permalink() and the_title() already echo their output themselves, so they need not be echoed. If you want functions that return the values, you have to use get_permalink() and get_the_title() instead. So either do: <div id="MyBlock1"> <?php $query = new WP_Query('posts_per_page=5'); while( $query ->have_posts() ) : $query ->the_post(); echo '<li>'; echo '<a href="'.get_permalink().'">'.get_the_title().'</a>'; echo '</li>'; endwhile; wp_reset_postdata(); ?> </div> or <div id="MyBlock1"> <?php $query = new WP_Query('posts_per_page=5'); while( $query ->have_posts() ) : $query ->the_post(); echo '<li><a href="'; the_permalink(); echo '">'; the_title(); echo '</a></li>'; endwhile; wp_reset_postdata(); ?> </div> Both will work.
Q: How can I change SharedSection in the registry using C#? Regarding this stackoverflow entry: in my registry key System\\CurrentControlSet\\Control\\Session Manager\\SubSystems I have to change, in the value Windows, the string parameter SharedSection=1024,20480,768 so that the third value changes from 768 to 2048. What is the best way to do that via C#? I tried the following: var myKey = Registry.LocalMachine.OpenSubKey("System\\CurrentControlSet\\Control\\Session Manager\\SubSystems").GetValue("Windows"); The local variable myKey contains the following string: "C:\\Windows\\system32\\csrss.exe ObjectDirectory=\\Windows SharedSection=1024,20480,768 Windows=On SubSystemType=Windows ServerDll=basesrv,1 ServerDll=winsrv:UserServerDllInitialization,3 ServerDll=sxssrv,4 ProfileControl=Off MaxRequestThreads=32" Do I need to change the value 768 to 2048 using regular expressions, or is there a better way? A: For example: try { updateSharedSection(-1, -1, 2048); } catch(Exception e) { //.. } The first parameter is the maximum size of the system-wide heap, the second is the size of each desktop heap, and the third is the size of the desktop heap that is associated with a non-interactive window station. ...
public void updateSharedSection(int z) { updateSharedSection(-1, -1, z); } public void updateSharedSection(int x, int y, int z) { RegistryKey key = Registry.LocalMachine.OpenSubKey("System\\CurrentControlSet\\Control\\Session Manager\\SubSystems", true); key.SetValue("Windows", _sharedSection(x, y, z, key.GetValue("Windows").ToString())); } /// <param name="x">the maximum size of the system-wide heap (in kilobytes) / -1 by default</param> /// <param name="y">the size of each desktop heap / -1 by default</param> /// <param name="z"> the size of the desktop heap that is associated with a non-interactive Windows station / -1 by default</param> /// <param name="raw">raw data line</param> /// <returns></returns> private string _sharedSection(int x, int y, int z, string raw) { Func<int, string, string> setVal = delegate(int xyz, string def) { return (xyz == -1) ? def : xyz.ToString(); }; return Regex.Replace(raw, @"SharedSection=(\d+),(\d+),(\d+)", delegate(Match m) { return string.Format( "SharedSection={0},{1},{2}", setVal(x, m.Groups[1].Value), setVal(y, m.Groups[2].Value), setVal(z, m.Groups[3].Value)); }, RegexOptions.IgnoreCase); }
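The registry access itself is C#-specific, but the substitution logic of the _sharedSection helper can be exercised in any language. A Python sketch of the same regex replacement (keeping the first two numbers, setting the third to 2048, i.e. the x = y = -1, z = 2048 case):

```python
import re

# Sample value taken from the question's registry string (truncated).
raw = (r"C:\Windows\system32\csrss.exe ObjectDirectory=\Windows "
       r"SharedSection=1024,20480,768 Windows=On")

# Replace only the third number; groups 1 and 2 are written back unchanged.
updated = re.sub(r"SharedSection=(\d+),(\d+),(\d+)",
                 lambda m: "SharedSection={},{},{}".format(
                     m.group(1), m.group(2), 2048),
                 raw, flags=re.IGNORECASE)
print(updated)
```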
Q: Promisifying Sheet API v4 causes undefined this If I use callbacks, the code below using Google's Sheets API v4 works fine. However, I am trying to apply util.promisify to the API call. This causes: Cannot read property 'getRoot' of undefined which is thrown from : node_modules\googleapis\build\src\apis\sheets\v4.js:592 This line 592 says: context: this.getRoot() I am probably not using promisify correctly and I hope that someone here can help me. I suspect it might have something to do with concurrency. Any tip would be appreciated. let { promisify } = require('util'); let { google } = require('googleapis'); let sheets = google.sheets('v4'); let credentials = require('./credentials.json') let client = new google.auth.JWT( credentials.client_email, null, credentials.private_key, ['https://www.googleapis.com/auth/spreadsheets']) client.authorize((err, tokens) => { if (err) { throw err; } }); let endpoint = promisify(sheets.spreadsheets.values.get); async function test() { let request = { auth: client, spreadsheetId: "xxxxxxxx", range: "'ExampleSheet'!A:B", valueRenderOption: "UNFORMATTED_VALUE", majorDimension: "ROWS", } let result = await endpoint(request) .then((res) => { return res }) .catch((err) => { console.log(err); }); } test(); A: Okay, after some more digging I got it to work. I modified my original code to use the following: let endpoint = promisify(api.spreadsheets.get.bind(api)); Not sure why api isn't bound to this/the context in the first place though.
Q: How to translate jsoup to Objective-C? How can I translate jsoup to Objective-C? I'm a newbie and not very familiar with Java. Recently I wanted to use jsoup in my iOS project via j2objc, but it seems hard for me. When I execute cd /path/to/jsoup-master j2objc -sourcepath ./src/main/java -classpath /Users/wildcat/Downloads/j2objc-0.9.5/lib/javax.inject-1.jar -d ./src/main/ojbc ./src/main/java/org/jsoup/*/*.java there are many packages not found, such as org.w3c.dom . I downloaded the files for org.w3c.dom, but there are so many packages not found that it's difficult to handle. They may belong to the standard libraries of Java, such as javax.net . How can I finish the translation of jsoup? Is that possible? Thanks! A: You can try RoboVM, which AFAIK supports the full JRE API. It doesn't generate Objective-C, though, but instead compiles Java files directly to .o files (like clang does for C/C++/Obj-C files).
Q: jsp need to write a text with different colors I need to write in jsf 2 texts on a line with different colors. Here is my code: <h:panelGrid id="accessinfo_grid" columns="3"> <h:outputText id="loginid" value="#{msgs.loginId}" styleClass="label"/> <h:outputText id="loginid_asterix" value="#{msgs.asterix}" styleClass="error_message"/> <h:inputText id="inputusername" disabled="true" value="#{userAccount.userName}"/> <h:outputText id="password" value="#{msgs.passwordID}" styleClass="label"/> <h:outputText id="passwordid_asterix" value="#{msgs.asterix}" styleClass="error_message"/> <h:secretText id="inputpassword" disabled="true" value="#{userAccount.password}"/> </h:panelGrid> I want the output to be to be something like this with * with red color: Login:* edit_box Password:* edit_box But now is something like this: Login: * edit_box Password:* edit_box I want that * red to be just after the first text. Probably i should try to use something else then a panelGrid but I don't know what/how. I am newbie at this. Thanks, A: In a <h:panelGrid>, which generates a HTML <table>, every direct child JSF component will end up in its own <td>. You need to group the JSF components which should end up in the same <td> in a <h:panelGroup>. 
<h:panelGrid id="accessinfo_grid" columns="2"> <h:panelGroup> <h:outputText id="loginid" value="#{msgs.loginId}" styleClass="label"/> <h:outputText id="loginid_asterix" value="#{msgs.asterix}" styleClass="error_message"/> </h:panelGroup> <h:inputText id="inputusername" disabled="true" value="#{userAccount.userName}"/> <h:panelGroup> <h:outputText id="password" value="#{msgs.passwordID}" styleClass="label"/> <h:outputText id="passwordid_asterix" value="#{msgs.asterix}" styleClass="error_message"/> </h:panelGroup> <h:inputSecret id="inputpassword" disabled="true" value="#{userAccount.password}"/> </h:panelGrid> (note that I replaced the non-existent h:secretText by h:inputSecret, not sure why you used it) Note that this has nothing to do with CSS. You'd still have exactly the same problem when disabling CSS. I'd suggest to take a JSF pause and concentrate on reading some decent HTML tutorial to understand better how it all works what JSF is generating (you can see it by opening the JSF page in browser and doing rightclick and View Source).
Q: Visual Studio 2012 and SQL Server Express Is it possible that the Visual Studio Ultimate that was installed on my machine didn't include SQL Server Express? They didn't turn any option off when installing; they simply installed it following the default options. A: It comes with SQL Server, but the instance name is (LocalDb)\v11.0 instead of .\sqlexpress
Q: Margin of html element defaulting to fill width of containing div, cannot override I'm a fairly novice web developer and I'm having a very fundamental problem that I would really appreciate some help with: No matter what width I set any elements within a certain containing div, safari and Chrome both add extra margins that fill the width of the div. If I specify them to have 0 margins the css is overridden. For example, I have the following <div class="container"> <div class="element1"> ... </div> </div> and I set this as the css: .container{ background-color:#ffffff; margin-left:7.5%; margin-right:7.5%; padding:30px; color:#505050; line-height:1.5; font-size:14px; } .element1{ max-width:50%; margin: 0px 0px 0px 0px; } element1 has a width of 50% of the containing element, but it then has an extra margin to the right that fills up the rest of the width of the containing element. Why is this happening and how do I set this right-margin to 0? Thanks! A: Try adding in a reset stylesheet before your stylesheet to normalise all the browsers. Browsers have their own ideas about default padding and margins etc. for different elements. By resetting the stylesheet, you are making every browser start from the same position. http://meyerweb.com/eric/tools/css/reset/
Q: Reducing homogeneous second order differential equation to first order (Operator factorisation) I need to reduce the homogeneous second-order differential equation $\ y'' + by' + cy = 0$ to a first-order one using operator factorisation, where$\ b, c$ and$\ y$ are functions of t. I began by rewriting it in operator form and completing the square, getting $\ [(D + \dfrac{b}{2})^2 + (c - \dfrac{b^2}{4})]y = 0$. I'm basically stumped from here. I could try applying$\ D^2$ to both sides to get zero on the right-hand side and then substitute, but then I run into more dead-ends that give me no hints to either proceed or point to a different approach (at least in my mind). Can someone help out with suggestions? Thank you in advance. A: Write $$(D+\alpha)(D+\beta)y=0$$ This gives $$D^{2}y+(\alpha+\beta)Dy+\alpha\beta{y}=0$$ Comparing with your equation, you get $$\alpha+\beta=b$$ and $$\alpha\beta=c$$ Then you let $g(x)=(D+\beta)y(x)$ and you are left with a first order system $$g'+\alpha{g}=0, \ y'+\beta{y}=g$$ The first equation is solved by $$g(x)=c_{1}e^{-\alpha{x}}$$ The second equation is solved by the integrating factor technique $$y(x)=c_{2}e^{-\beta{x}}+c_{1}\frac{e^{-\alpha{x}}}{\beta-\alpha}$$
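As a concrete check of the factorisation method (the numbers here are my own example, not from the answer): take $b = 3$, $c = 2$, so that

```latex
y'' + 3y' + 2y = 0, \qquad \alpha+\beta = 3,\quad \alpha\beta = 2
\;\Rightarrow\; \alpha = 1,\ \beta = 2,
\qquad
y(x) = c_{2}e^{-2x} + c_{1}\frac{e^{-x}}{2-1} = c_{2}e^{-2x} + c_{1}e^{-x}.
```

Substituting back confirms it: for $e^{-x}$ the characteristic value gives $1 - 3 + 2 = 0$, and for $e^{-2x}$ it gives $4 - 6 + 2 = 0$, so both terms satisfy the equation.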
Q: How to delete an archived artifact in Jenkins? 1) I am using Jenkins with Tomcat. I use the Jenkins CLI from a Java class to create jobs and to build. I want to delete an archived artifact. How do I accomplish this? 2) Another question: can we give a specific name to a build in Jenkins? E.g., I want the build name to be like (buildNumber + someName). How do I achieve this? Thanks. A: The artifacts for a build by default are located in: [JENKINS_HOME]/jobs/[job_name]/builds/[$BUILD_ID]/archive/, go there and delete it. It won't delete the link to the artifact from the build page, though. (If you are not sure where your JENKINS_HOME is, go to http://[jenkins_server]/configure, you'll see Home directory near the top of the page). To change the display name of a build automatically try Build Name Setter Plugin. To change the display name via CLI: java -jar jenkins-cli.jar -s http://[server]/ set-build-display-name [job_name] [build#] [new_name]
Q: How to cache NodeJS global modules AWS CodeBuild Is there a way to cache NodeJS global modules on AWS CodeBuild? I'm using LernaJS to handle my repository and every time a build starts I install it with the command npm install -g lerna (it takes 30 seconds). To handle this, first I figured out where npm installs Lerna with the command npm list -g, which returned

/usr/local/lib
├─┬ grunt@1.0.4
│ ├── coffeescript@1.10.0
...
├─┬ lerna@3.14.1
│ ├─┬ @lerna/add@3.14.0
│ │ ├── @lerna/bootstrap@3.14.0 deduped
...

Then I tried to cache the /usr/local/lib/node_modules/**/* folder and I received the following error:

[Container] 2019/05/30 20:09:00 Running command npm install -g lerna
/codebuild/output/tmp/script.sh: 4: /codebuild/output/tmp/script.sh: npm: not found
[Container] 2019/05/30 20:09:00 Command did not exit successfully npm install -g lerna exit status 127
[Container] 2019/05/30 20:09:00 Phase complete: INSTALL State: FAILED
[Container] 2019/05/30 20:09:00 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: npm install -g lerna. Reason: exit status 127

So I checked the content of /usr/local/lib/node_modules/; I had these packages:

[Container] 2019/05/30 20:19:11 Running command ls /usr/local/lib/node_modules
grunt
grunt-cli
lerna
npm
webpack

My last attempt was to cache /usr/local/lib/node_modules/lerna/**/*. This way no error is thrown, but the cache doesn't work either:

[Container] 2019/05/30 20:30:00 MkdirAll: /codebuild/local-cache/custom/656f09faf2819a785eae5e09f5d26a44ff4f20edf155297d6819c9600540cd26/usr/local/lib/node_modules/lerna
[Container] 2019/05/30 20:30:00 Symlinking: /usr/local/lib/node_modules/lerna => /codebuild/local-cache/custom/656f09faf2819a785eae5e09f5d26a44ff4f20edf155297d6819c9600540cd26/usr/local/lib/node_modules/lerna
...
[Container] 2019/05/30 20:30:01 Running command npm install -g lerna
/usr/local/bin/lerna -> /usr/local/lib/node_modules/lerna/cli.js
+ lerna@3.14.1
added 650 packages from 321 contributors and updated 1 package in 40.628s

Am I missing something? Is there a way to save Lerna like grunt, grunt-cli, npm and webpack (inside /usr/local/lib/node_modules/) before the build starts? Thank you! A: Thanks to @JD D's comment, I've created a Docker image, pushed it to AWS ECR and now use it as my own image. My Dockerfile:

FROM node:lts
RUN npm install -g yarn lerna
RUN apt-get update && \
    apt-get install -y groff less && \
    apt-get clean
RUN curl https://s3.amazonaws.com/aws-cli/awscli-bundle.zip -o awscli-bundle.zip
RUN unzip awscli-bundle.zip && \
    ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws && \
    rm awscli-bundle.zip
Q: QTableWidget style per QTableWidgetItem I'm using a simple QTableWidget to display some QTableWidgetItems, which look like this:

+-------------+-------------+
|             | some text 1 |
| some number +-------------+
|             | some text 2 |
+-------------+-------------+
|             | some text 1 |
| some number +-------------+
|             | some text 2 |
+-------------+-------------+

I know that I can draw a border around the QTableWidgetItems by setting a stylesheet for the QTableWidget like

QTableView::item { border-bottom: 1px solid black; }

but this is applied for all the QTableWidgetItems. I'd like to draw the border only for the "some number" and "some text 2" items. Is it possible to do so while sticking to the use of the QTableWidget and QTableWidgetItems? I can't use QObject::setProperty to set some property to identify the items in the style sheet, because QTableWidgetItems are no QObjects … A: Use a delegate, for example:

class MyDelegate : public QItemDelegate
{
public:
    MyDelegate( QObject *parent ) : QItemDelegate( parent ) { }
    void paint( QPainter *painter, const QStyleOptionViewItem &option, const QModelIndex &index ) const;
};

void MyDelegate::paint( QPainter *painter, const QStyleOptionViewItem &option, const QModelIndex &index ) const
{
    QItemDelegate::paint( painter, option, index );
    painter->setPen( Qt::red );
    painter->drawLine( option.rect.topLeft(), option.rect.bottomLeft() );
    // What line should you draw
    // painter->drawLine( option.rect.topLeft(), option.rect.topRight() );
    // painter->drawLine( option.rect.topLeft(), option.rect.bottomLeft() );
}

...
m_TableWidgetClass->setItemDelegateForRow(row, new MyDelegate( this));
//m_TableWidgetClass->setItemDelegateForColumn(column, new MyDelegate( this));
Q: Appropriate Scala Collection similar to Python Dictionary I have an algorithm that iteratively returns (key, value). What I want to do is store these results in a structure such that if the key does not exist, it will add it and the corresponding value. Now, if the key exists, it will append the value to an existing array of values. In python, I can do this using a python dictionary with this format:

dict = {'key1': [val1, val2, val3],
        'key2': [val4, val5],
        'key3': [val6],
        ...
       }

and simply do:

if key in dict.keys():
    dict[key].append(value)
else:
    dict[key] = [value]

How do I do this in Scala? A: Maybe something like this?

scala> def insert[K,V](k: K, v: V, m: Map[K, List[V]]): Map[K, List[V]] = {
     |   if (m contains k) m + (k -> (m(k) :+ v))
     |   else m + (k -> List(v)) }
insert: [K, V](k: K, v: V, m: Map[K,List[V]])Map[K,List[V]]

scala> insert('b', 23, Map('b' -> List(2)))
res30: Map[Char,List[Int]] = Map(b -> List(2, 23))

scala> insert('b', 23, Map('c' -> List(2)))
res31: Map[Char,List[Int]] = Map(c -> List(2), b -> List(23))

Or, incorporating Sergey's very fine suggestion:

def insert[K,V](k: K, v: V, m: Map[K, List[V]]): Map[K, List[V]] =
  m + (k -> (m.getOrElse(k, List()) :+ v))
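For comparison, the Python side of this pattern is usually written with collections.defaultdict rather than the explicit key check from the question — a small sketch:

```python
from collections import defaultdict

# defaultdict(list) creates the empty list on first access, so the
# "if key in dict" branch from the question disappears entirely.
results = defaultdict(list)
for key, value in [("key1", 1), ("key2", 4), ("key1", 2)]:
    results[key].append(value)

print(dict(results))  # {'key1': [1, 2], 'key2': [4]}
```

On the Scala side, the mutable analogue of this is `m.getOrElseUpdate(k, ...)` on `scala.collection.mutable.Map`, if mutation is acceptable.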
Q: Docker Machine error: Hyper-V PowerShell Module is not available I've checked my Hyper-V settings and the PowerShell Module is enabled. I've also found this documented issue: https://github.com/docker/machine/issues/4342 but it is not the same issue since I do not have VMware PowerCLI installed. The issue was closed with a push to the repo and is supposedly fixed in 0.14.0-rc1, build e918c74, so I tried it anyway. After replacing my docker-machine.exe, I'm still getting the error, even if I reinstall Docker for Windows. For some more background, this error started happening after a reinstall because my Docker install had an error: https://github.com/docker/for-win/issues/1691, however, I'm no longer getting that issue after reinstalling. A: For those who struggle with this issue on Windows, follow the instructions here A: When creating a Hyper-V VM using docker-machine on Windows 10, an error was returned: "Error with pre-create check: 'Hyper-V PowerShell Module is not available'". The solution is very simple. The reason is the version of the docker-machine program. Replace it with v0.13.0. The detailed operation is as follows: Download the 0.13.0 version of the docker-machine command. Click to download: 32-bit system or 64-bit system After the download is complete, rename and replace the "docker-machine.exe" file in the "C:\Program Files\Docker\Docker\resources\bin" directory. It is best to back up the original file. A: Here is the solution https://github.com/docker/machine/releases/download/v0.15.0/docker-machine-Windows-x86_64.exe Save the downloaded file to your existing directory containing docker-machine.exe. For my system this is the location for docker-machine.exe: /c/Program Files/Docker/Docker/Resources/bin/docker-machine.exe Back up the old file and replace it with the new one.
cp docker-machine.exe docker-machine.014.exe Rename the downloaded filename to docker-machine.exe mv docker-machine-Windows-x86_64.exe docker-machine.exe Build Instructions Create virtual switch in Hyper-V manager named myswitch Request Docker to create a VM named myvm1 docker-machine create -d hyperv --hyperv-virtual-switch "myswitch" myvm1 Results docker-machine create -d hyperv --hyperv-virtual-switch "myswitch" myvm1 Running pre-create checks... (myvm1) Image cache directory does not exist, creating it at C:\Users\Trey Brister\.docker\machine\cache... (myvm1) No default Boot2Docker ISO found locally, downloading the latest release... (myvm1) Latest release for github.com/boot2docker/boot2docker is v18.05.0-ce (myvm1) Downloading C:\Users\Trey Brister\.docker\machine\cache\boot2docker.iso from https://github.com/boot2docker/boot2docker/releases/download/v18.05.0-ce/boot2docker.iso... (myvm1) 0%....10%....20%....30%....40%....50%....60%....70%....80%....90%....100% Creating machine... (myvm1) Copying C:\Users\Trey Brister\.docker\machine\cache\boot2docker.iso to C:\Users\Trey Brister\.docker\machine\machines\myvm1\boot2docker.iso... (myvm1) Creating SSH key... (myvm1) Creating VM... (myvm1) Using switch "myswitch" (myvm1) Creating VHD (myvm1) Starting VM... (myvm1) Waiting for host to start... Waiting for machine to be running, this may take a few minutes... Detecting operating system of created instance... Waiting for SSH to be available... Detecting the provisioner... Provisioning with boot2docker... Copying certs to the local machine directory... Copying certs to the remote machine... Setting Docker configuration on the remote daemon... Checking connection to Docker... Docker is up and running! To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: C:\Program Files\Docker\Docker\Resources\bin\docker-machine.exe env myvm1
Q: AngularJS: ngRepeat doesn't have time to update before new change happens

$scope.add = function(){
  $scope.playerCards.push(Deck.drawCard());
  if (playerScore > 21){
    console.log("You Busted!");
    newHand();
  }
};

In this blackjack game, the DOM automatically updates to reflect the player's hand using ngRepeat. However, there isn't a window of time to allow the last card to show before the >21 logic executes. I'm guessing angular doesn't have time to run through ng-repeat. How can I force it to update as soon as the data updates? I tried $scope.$digest() but it throws an error about something else being in progress. A: I'd solve that by delaying the test for going over 21, akin to the human delay of adding the cards and seeing they're bust. Maybe even give them enough time to see it for themselves before you tell them. :)
Q: Is it practically useful to decline GUI for a newbie in Ubuntu? My Ubuntu is 12.04. I have just started learning Linux (Ubuntu in particular). To remember terminal commands quicker, I'd like to not use a GUI. However, I can't launch installed programs because I don't know where they are. For example, I have a PDF file. I know that there is a program to view such files. If I was using a GUI I would just click on the PDF file and it would open in Document Viewer 3.4.0. Then I would like to launch Firefox Web Browser. Even if I know it is installed, how to find the file to be launched using just CLI is a mystery to me. Could you suggest anything? A: Honestly, for most of these programs it is enough to simply know their name. When installed from a repo, they tend to add themselves into your path, or a symbolic link to the applications binary is added into a folder in your path already. Also, man (manuals!) is your best friend! man apt-get | less Then you can scroll up and down, line by line rather than having to go by entire pages, (depending on your version/distro, this may be the default functionality of man) it can be very useful when trying to get that next line of output. And last, but definitely not least, if you are new to linux, your package manager apt-get, is going to be your best friend. For experience you should install some programs from source, but knowing your package manager and being able to search it will be invaluable as a time saver. Hopefully this helps some.
Q: wicket authentication / login I am following this tutorial http://wicket.wordpress.com/2010/01/08/template-for-building-authenticated-webapplication/ in order to learn how to make login and authentication using wicket. My question/problem is that my login area is on the header and therefor one can login on every page. If my application class should inherit AuthenticatedWebApplication, then I must override getSignInPageClass method. What page class should I provide? Is there any other best tutorial to add authentication using wicket? A: The sign in page is displayed when the user attempts to access a Page or other component which requires authorization to create. If your application allows login on every page, then none of your pages require authorization, and the sign in page will never be displayed. I suggest you set it to the home page. As all your pages are visible, you can't use the @AuthorizeInstantiation annotation on your page classes. Instead, you must control visibility of components within the page using the RENDER action instead. For example, MetaDataRoleAuthorizationStrategy.authorize(mycomponent, RENDER, "SYSADMIN"); The only example I can find is at wicketstuff.org.
Q: How do I turn a string into the name of an array? I think I've created multiple arrays from strings, but if I try to inspect the array I receive an error.

File.open("livestock.txt", "r") do |file|
  file.readlines.each do |x|
    if x.match(/:*:/)
      # puts x.inspect
      # strip string
      x.gsub!(/[^A-Za-z]/, '')
      x.downcase!
      puts x.inspect
      x = Array.new(){Hash.new}
      # puts x.inspect
      pigs.inspect
    else
      # puts "no"
    end
  end
end

animals.rb:12:in `block (2 levels) in <main>': undefined local variable or method `pigs' for main:Object (NameError)
	from animals.rb:2:in `each'
	from animals.rb:2:in `block in <main>'
	from animals.rb:1:in `open'
	from animals.rb:1:in `<main>'

Ideally I want to create pigs = [] and then add hashes to this array, such as:

pigs = [{"name"=>"peggy", "id"=>1, "owner"=>"wolcott farms"},
        {"name"=>"sue", "id"=>2, "owner"=>"blue moon farms"},
        {"name"=>"eddie", "id"=>3, "owner"=>"sunrise farms"}]

and the same for cows, etc. My text file animals.txt is:

::pigs::
name, id, owner
peggy, 1, wolcott farms
sue, 2, blue moon farms
eddie, 3, sunrise farms

::cows::
name, id, owner
dee, 3, black hat farms
sunny, 2, blue moon farms
bess, 4, wolcott farms

A: Parse Text, Then Assign Using Instance Variables You can't use local variables, but you can use Object#instance_variable_get and Object#instance_variable_set to do this kind of metaprogramming. For example:

str = File.read '/tmp/livestock.txt'
records = str.split /\n\n+/
records.map! { |r| r.split /\n/ }
records.map do |r|
  var = ?@ << r.shift.strip.delete(?:)
  fields = r.shift.strip.scan /[^,]+/
  hashes = r.map { |e| e.split(?,).flat_map &:strip }.
    map { |e| fields.zip e }.
    map &:to_h
  instance_variable_set var, instance_variable_get(var).to_a.push(hashes).flatten!
end;

# The data is now stored correctly in the following instance variables.
@pigs
@cows

Caveat Note that if @pigs or @cows already exist because you're testing in the REPL, your results may not be what you expect.
Make sure you invoke Object#remove_instance_variable, set your variables to nil, or create a new instance of your class between tests.
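Dynamic variable names aside, the same data also fits naturally into a Hash keyed by section name — here is a sketch against the file format from the question (file contents are inlined for the example; a real program would use File.read("animals.txt")):

```ruby
# Parse the ::section:: / CSV layout into a Hash keyed by section name
# ("pigs", "cows"), instead of conjuring local variables from strings.
text = <<~TXT
  ::pigs::
  name, id, owner
  peggy, 1, wolcott farms
  sue, 2, blue moon farms
TXT

animals = {}
current = nil
text.each_line do |line|
  line = line.strip
  next if line.empty?
  if line =~ /\A::(\w+)::\z/
    current = Regexp.last_match(1)          # start a new section
    animals[current] = { fields: nil, rows: [] }
  elsif animals[current][:fields].nil?
    animals[current][:fields] = line.split(",").map(&:strip)  # header row
  else
    fields = animals[current][:fields]
    values = line.split(",").map(&:strip)
    animals[current][:rows] << fields.zip(values).to_h        # data row
  end
end

animals["pigs"][:rows].first
# => {"name"=>"peggy", "id"=>"1", "owner"=>"wolcott farms"}
```

Note the ids come out as strings here; converting them with Integer() is a one-line addition if needed.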
Q: Updating user role using ASP.NET Identity I have the following problem. While using the code below to change the user's current role, I am getting an exception with the message below:

[HttpPost]
[ValidateAntiForgeryToken]
public virtual ActionResult Edit(User user, string role)
{
    if (ModelState.IsValid)
    {
        var oldUser = DB.Users.SingleOrDefault(u => u.Id == user.Id);
        var oldRoleId = oldUser.Roles.SingleOrDefault().RoleId;
        var oldRoleName = DB.Roles.SingleOrDefault(r => r.Id == oldRoleId).Name;
        if (oldRoleName != role)
        {
            Manager.RemoveFromRole(user.Id, oldRoleName);
            Manager.AddToRole(user.Id, role);
        }
        DB.Entry(user).State = EntityState.Modified;
        return RedirectToAction(MVC.User.Index());
    }
    return View(user);
}

Attaching an entity of type 'Models.Entities.User' failed because another entity of the same type already has the same primary key value. This can happen when using the 'Attach' method or setting the state of an entity to 'Unchanged' or 'Modified' if any entities in the graph have conflicting key values. This may be because some entities are new and have not yet received database-generated key values. In this case use the 'Add' method or the 'Added' entity state to track the graph and then set the state of non-new entities to 'Unchanged' or 'Modified' as appropriate. Does anybody know a good solution to this problem? A: The problem is that your Manager and DB don't use the same DbContext. So when you send a user from the context of your DB to the Manager, it will handle it as a "new" one - and then you can't remove it from the role. You have two ways to go here. The easiest is to get the User from your Manager.

[HttpPost]
[ValidateAntiForgeryToken]
public virtual ActionResult Edit(User user, string role)
{
    if (ModelState.IsValid)
    {
        // THIS LINE IS IMPORTANT
        var oldUser = Manager.FindById(user.Id);
        var oldRoleId = oldUser.Roles.SingleOrDefault().RoleId;
        var oldRoleName = DB.Roles.SingleOrDefault(r => r.Id == oldRoleId).Name;
        if (oldRoleName != role)
        {
            Manager.RemoveFromRole(user.Id, oldRoleName);
            Manager.AddToRole(user.Id, role);
        }
        DB.Entry(user).State = EntityState.Modified;
        return RedirectToAction(MVC.User.Index());
    }
    return View(user);
}

The more elegant way is to start using a DI framework like AutoFac (https://code.google.com/p/autofac/wiki/MvcIntegration) and set your DbContext as InstancePerApiRequest.

builder.RegisterType<YourDbContext>().As<DbContext>().InstancePerApiRequest();
Q: How to generate an array of quarter numbers of a year along with year number based on current date using moment.js in node js? I want to create an array of quarter numbers along with the year number using the current timestamp in node js. For example, the current quarter is Q1 and the year is 2020. Now, I want to create an array like the following.

quarters = ['Q2-2019','Q3-2019','Q4-2019','Q1-2020']

In the above array, Q1 is of year 2020 and the remaining 3 are from year 2019. Basically my requirement is to create an array of quarters including the present quarter number and the past 3 quarter numbers, along with the year number. Right now, I am getting an array like ['Q2','Q3','Q4','Q1'] by using the following code given by @Santhosh S. The code is:

let quarters = [ 0, 1, 2, 3 ].map(i =>
  moment().subtract(i, 'Q').format('[Q]Q')
);
console.log(quarters);

Is there any way to generate this array? A: use moment().quarter(); to get the current quarter. Edit: use subtract and format to get the quarter. Sample code below:

let format = '[Q]Q';
let quarters = [
  moment().format(format),
  moment().subtract(1, 'Q').format(format),
  moment().subtract(2, 'Q').format(format),
  moment().subtract(3, 'Q').format(format)
];
console.log(quarters);
<script src="https://momentjs.com/downloads/moment.min.js"></script>

Or a more concise version:

let quarters = [ 0, 1, 2, 3 ].map(i =>
  moment().subtract(i, 'Q').format('[Q]Q')
);
console.log(quarters);
<script src="https://momentjs.com/downloads/moment.min.js"></script>
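With moment itself, including the year should only be a format change: '[Q]Q-YYYY' yields strings like "Q1-2020". And if pulling in moment just for this ever feels heavy, the same labels can be produced with a plain Date — a sketch (the function name is my own, not from any library):

```javascript
// Build labels for the current quarter and the three before it,
// newest first, using plain Date arithmetic.
function lastFourQuarters(date) {
  const labels = [];
  let year = date.getFullYear();
  let quarter = Math.floor(date.getMonth() / 3) + 1; // months 0-11 -> quarters 1-4
  for (let i = 0; i < 4; i++) {
    labels.push(`Q${quarter}-${year}`);
    quarter -= 1;
    if (quarter === 0) { quarter = 4; year -= 1; } // wrap into the previous year
  }
  return labels;
}

// For a date in Q1 2020:
console.log(lastFourQuarters(new Date(2020, 0, 15)));
// -> [ 'Q1-2020', 'Q4-2019', 'Q3-2019', 'Q2-2019' ]
```

Call .reverse() on the result for the oldest-first order shown in the question.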
Q: How can I make other people's account to do ether transactions with smart contract? I was making a smart contract which involves people to buy tokens in exchange of ether they send. It works fine in testrpc as all accounts are unlocked, but how do I do it for actual accounts in main network using web3 in nodejs. What fields would be required to invoke these payable functionalities of smartcontract except the wallet address ofcourse? Any code snippets or examples? Any help would be appreciated. Thanks. A: A few points to help you separate issues and clarify thinking. A contract cannot do anything that a normal user can't do. For example, there's nothing you can program to make a contract spend someone else's money. All action on the blockchain starts with an "Externally Owned Account" "signing" and transaction. Contracts can talk to each other, but they never do anything until someone sends a signed transaction, so those "messages" are in another category ("messages"). Signing is done by wallets using secret keys. Without the secret, signing is not possible. But anyone who acquires the secret (somehow) can sign on behalf of another address. When you're using TestRPC, "the" user has 10 different addresses. They are his addresses, not strangers. He's got the secret keys. TestRPC simply makes it convenient to unlock the accounts and spend as you go. You would not be able to make up an 11th address and spend from it without the corresponding secret. In the case of a website, there are two (general) solutions. The website can create the accounts and (safely!) keep the secret keys. That would be like opening the accounts "on behalf of" the users. Consider how Exchanges operate. The browser can rely on the users' local Ethereum or MetaMask so it is (in fact) the user and not the web server that signs transactions and sends them to the chain. Consider the Mist Wallet contract. Hope it helps.
Q: Memory leaks and dispose Maybe I don't understand the concept, or I'm doing something wrong. I have some questions about memory management in .NET. Imagine the situation: Form1 is the big main Form, acting as MDI parent, and a little FormChild is bound as its child:

public partial class Form1 : Form
{
    public Form1()
    {
        InitializeComponent();
    }

    private void simpleButton1_Click(object sender, EventArgs e)
    {
        FormChild formChild = new FormChild();
        formChild.MdiParent = this;
        formChild.Show();
    }
}

Now the child allocates a little bit of memory as a simulation:

public partial class FormChild : Form
{
    private readonly List<byte[]> _list = new List<byte[]>();

    public FormChild()
    {
        InitializeComponent();
    }

    private void FormChild_Load(object sender, EventArgs e)
    {
        int i = 0;
        while (i < 100)
        {
            _list.Add(new byte[1024 * 1024 * 10]);
            i += 1;
        }
    }
}

Now, I'm inspecting with a memory profiler what's going on in the memory heap. I see that if I click on the button, the memory is allocated. Then I close the FormChild and it calls Dispose(). But the memory is still allocated. If I click again, a System.OutOfMemoryException occurs. Why is the GC waiting to free the managed memory? Or is this a design mistake on my part? A: The GC only frees memory in response to memory pressure; the main purpose of Dispose is to clean up non-memory related resources. In other words, nulling out managed objects isn't necessarily going to make them get collected any faster, but it makes diagnosing memory issues much easier. A: It looks like some sort of timing problem, where the first instance of formChild is still reachable (ie not garbage) when the second one is created. You can't accommodate that _list twice. Note that "I close the FormChild and it calls Dispose()" is a statement about resources and Window handles, not about freeing the memory. It is not clear if you wrote your own Dispose() but in this (rather special) case you should. Cut the void Dispose(bool disposing) method from the FormChild.Designer.cs file and move it to FormChild.cs. Use it to release the huge memory block (note the readonly modifier on _list has to be dropped for this assignment to compile):

protected override void Dispose(bool disposing)
{
    _list = null; // add this
    if (disposing && (components != null))
    {
        components.Dispose();
    }
    base.Dispose(disposing);
}

Note that this is not a 'usual' form of memory management but it's needed because your _list is unusual too.
Q: Why "undefined: StackGuardMultiplierDefault" error? Describe: When I clone the GoAdminGroup/go-admin project from GitHub and run the project following the steps in the README.MD file, I get this error:

TEST-MBP:example TEST$ GO111MODULE=on go run main.go
go: downloading github.com/mattn/go-sqlite3 v1.11.0
go: extracting github.com/mattn/go-sqlite3 v1.11.0
go: finding github.com/mattn/go-sqlite3 v1.11.0
# runtime/internal/sys
/Users/TEST/go/src/runtime/internal/sys/stubs.go:16:30: undefined: StackGuardMultiplierDefault

Actually my /Users/TEST/go/src folder was cloned from https://github.com/golang/go/tree/release-branch.go1.13/src Why is StackGuardMultiplierDefault undefined in /src/runtime/internal/sys/stubs.go? A: As per my understanding, you cloned the Go source code from its GitHub repository and expect it to work. It will not work. You need to follow the Go guide Installing Go from source if you want to install it from the (GitHub) source. Only cloning the repository is not enough; there are some required steps to be done after that. Otherwise I suggest installing by using the available binary distribution installers. Detailed explanation: the const StackGuardMultiplierDefault is not found because the file where the const is declared does not exist (the zversion.go file). This particular file is only generated when all.bash is executed (part of the steps on installing Go from source).
Q: OS X 10.9 Redistributable? I work in an Apple-only office environment with ~40 Macs. Is it possible for me to download an OS X 10.9 redistributable so that we're not downloading 5+ GB continually over our ADSL link ? Thanks! A: Yes it is. You can download it once and then distribute it over an external Harddrive, an USB Flash Drive or your network. The package is called Install OS X Mavericks.app and you will find it in the directory /Applications.
Q: Construction of Peltier tiles I'm learning about the construction of Peltier tiles from Wikpedia. However, some of the statements in the article are not at all clear. Here's the extract: Two unique semiconductors, one n-type and one p-type, are used because they need to have different electron densities. The semiconductors are placed thermally in parallel to each other and electrically in series and then joined with a thermally conducting plate on each side. When a voltage is applied to the free ends of the two semiconductors there is a flow of DC current across the junction of the semiconductors causing a temperature difference. The side with the cooling plate absorbs heat which is then moved to the other side of the device where the heat sink is. Thermoelectric Coolers, also abbreviated to TECs are typically connected side by side and sandwiched between two ceramic plates. The cooling ability of the total unit is then proportional to the number of TECs in it. What does "thermally in parallel to each other and electrically in series" mean for semiconductors? Also, why should they be arranged in this fashion? Why does flow of DC current across the junction of semiconductors cause a temperature difference? Which "junction" are they talking about? A: If you look at your diagram, it shows the N and P semiconductors connected in pairs (look at the lower layer - the "interconnect"). So the electrical path is in series : the sum of the junctions to work on probably 12v. And the heat transfer is through all the N and P junctions ie in parallel.
Q: Use Tempfile twice? I'm having an issue with a simple program that I believe has to do with Tempfiles. I am using 'open-uri' and 'nokogiri' and am trying to do a regex search on a document as well as an xpath search with nokogiri. However, it seems I cannot do this without making two separate requests for the document and thus creating two separate Tempfiles. This works, but is making two requests:

require 'open-uri'
require 'nokogiri'

source_url = "http://foo.com/"
#grab html document and assign it a variable
doc = open(source_url)
#grab html document, convert to Nokogiri object and assign to variable.
noko_doc = Nokogiri::HTML(open(source_url))
#create array of stuff.
foo = noko_doc.xpath("//some element").collect { |e| e }
#create another array of stuff
bar = []
doc.each do |f|
  f.each do |line|
    abstract_matches = line.scan(/some regex string/)
    unless abstract_matches.empty?
      abstract_matches.collect! do |item|
        if item.to_s.match(/yet another regex string/)
          item
        end
      end.compact!
      unless abstract_matches.empty?
        abstract_matches.each { |match| bar << "#{ match } / " }
      end
    end
  end
end
#all for this
puts foo + bar

I would prefer if I could pass the 'doc' variable into Nokogiri::HTML, as well as iterate over it. Help? A: It's uncommon to iterate a Tempfile. More common is to access it like this:

html = open(source_url).read
noko_doc = Nokogiri::HTML(html)
html.split("\n").each do |line|
  # do stuff
end
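The underlying point — the IO returned by open can only be consumed once, while a String can be reused as often as you like — can be seen without any network access. A sketch with StringIO standing in for the HTTP response (the Nokogiri::HTML(html) call would take the same string):

```ruby
require 'stringio'

# StringIO stands in for the object open(source_url) would return.
response = StringIO.new("<html>\n<p>alpha</p>\n<p>beta</p>\n</html>")

html = response.read    # consume the IO exactly once
again = response.read   # => "" -- nothing left to read a second time

# The string, however, can be scanned repeatedly, and the same `html`
# could also be handed to Nokogiri::HTML(html) without a second request.
matches = html.scan(%r{<p>(\w+)</p>}).flatten
# matches => ["alpha", "beta"]
```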
Q: Appcelerator Studio Crashes on Safari Launch? I'm working on an iMac 27-inch, Late 2012 (macOS High Sierra). When I open Safari, Appcelerator Studio crashes immediately! I couldn't figure out why this is happening. Does anyone know anything about this? A: Can you check what error is reported in the crash log? Just look for the main thread crash and search for it in Google - this might give some clue. Not sure if it's related to this: https://bugs.eclipse.org/bugs/show_bug.cgi?id=465693 When the crash happens, you'll see a rectangular dialog with a title of JVM Terminate on your screen and Studio will disappear. This indicates Java itself has crashed, but it creates a log file in the process. By default, the crash file should be in either the Studio installation directory, or in the system temp directory, most likely with a file name of hs_err_pid*.log. If you still could not locate it, follow the directions here: Finding your Error Log On OS X, the location is ~/Library/Logs/Java/*.crash.log or ~/Library/Logs/CrashReporter, and the file will have the word java in its name. http://docs.appcelerator.com/platform/latest/#!/guide/Crashes_and_Freezes
Q: ICY metadata support with ffmpeg Is there any way to get ICY metadata from a SHOUTcast stream using FFmpeg? One way would be to deal with the connection/stream by myself and send a custom IOStream to FFmpeg. Is there any other simple way? Or a demuxer available? Thanks A: There was discussion of a patch for supporting it here: http://web.archiveorange.com/archive/v/yR2T400567mWEyyZHg3k But, it doesn't look like it made it in yet. I suggest you simply parse this out yourself. See my answer here for how to do this: https://stackoverflow.com/a/4914538/362536 Alternatively, you can just access /7.html on SHOUTcast servers, and you will get a line like this:

1,1,15,625,1,128,Oh Mercy - Stay, Please Stay

The fields are:

Number of listeners
Stream status (1 means you're on the air, 0 means the source isn't there)
Peak number of listeners for this server run
Max number of simultaneous listeners the server is configured to allow
The unique number of listeners, based on IP
Current bitrate in kilobits
The title. (Note, even if you have a comma in your title, it isn't escaped or anything.)

Beware though that /7.html isn't always available on non-SHOUTcast servers, and may not be available in the beta of the new version. While this is a quick and easy method, you would be better off parsing the metadata sent to clients.
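Because the title field is unescaped, the safe way to parse that line is to split on at most six commas so any commas after the sixth stay inside the title — a sketch (the helper name is mine, not part of any API):

```python
def parse_shoutcast_status(line):
    # The first six fields are numeric; everything after the sixth comma
    # is the title, which may itself contain commas (it is not escaped).
    listeners, status, peak, max_listeners, unique, bitrate, title = line.split(",", 6)
    return {
        "listeners": int(listeners),
        "status": int(status),
        "peak": int(peak),
        "max_listeners": int(max_listeners),
        "unique": int(unique),
        "bitrate_kbps": int(bitrate),
        "title": title,
    }

print(parse_shoutcast_status("1,1,15,625,1,128,Oh Mercy - Stay, Please Stay"))
```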
Q: Question about von Neumann algebra generated by a complete algebra of projections Hi all, sorry if this is a dumb question, I don't know much about von Neumann algebras except the definition and a few relevant facts I've managed to prove by myself so I expect the answer will turn out to be well known. Anyway, let $\mathcal{H}$ be a Hilbert space, and suppose that $P$ is a commuting set of self-adjoint projections on $\mathcal{H}$, with the additional two properties: 1) $P$ is closed under complements, i.e. if $p \in P$ then so is $1 - p$. 2) $P$ is closed under suprema of arbitrary subsets, i.e. if $S \subseteq P$ then $\sup S \in P$ (here the projections on $\mathcal{H}$ are ordered by defining $p \leq q$ whenever the range of $p$ is contained in the range of $q$). Now let $V$ denote the smallest von Neumann algebra containing $P$. Suppose that $p \in V$ is a self-adjoint projection. Is $p \in P$? I know that $p$ is necessarily in the closure (relative to the weak operator topology) of the set of finite sums $\sum_i \lambda_i p_i$, where $p_i \in P$ and $\lambda_i \in \mathbb{R}$. It seems like it may be possible to derive a contradiction from the assumption that $q$ has a strictly smaller range than $p$, where $q \equiv \sup \{ r \in P \mid r \leq p \}$. But I don't know how to proceed. A: The answer "yes" follows from Theorem 2.8 of Bade's "On Boolean algebras of projections and algebras of operators," 1955, which is in the more general context of algebras of operators on a Banach space. Bade had previously proven a less general result that still covers your case, dealing with algebras of operators on reflexive spaces, in Theorem 3.4 of "Weak and strong limits of spectral operators," 1954.
Q: What abbreviations, letters, or symbols do we use to denote highly degenerate stars? We have OBAFGKM to denote the strength of hydrogen lines. Apparently we added S, N, C, and the W classes when we learned of new kinds of stars. We use T, L, and Y to denote brown dwarfs. The D classes refer to electron-degenerate stars (white dwarfs). I've looked around quite a bit and I haven't seen anything regarding neutron stars and black holes. Does this mean there's no common symbol or abbreviation? What do they use in star catalogs? (I've never found a catalog with stars of all degeneracies in it.) A: Neutron stars and black holes do not have assigned spectral types since they do not have a measurable optical/IR spectrum - which is the basis for assigning a spectral type. There are many classifications for systems containing black holes and neutron stars. They are not related to the letter-based spectral types of "normal" stars or white dwarfs, but have more to do with their X-ray, radio, or inferred physical properties (e.g. low- or high-mass X-ray binary systems).
Q: Is it necessary to set scrollview.delegate as self? I am taking an online iOS course provided by Stanford. In the sample code,

@IBOutlet weak var scrollView: UIScrollView! {
    didSet {
        scrollView.contentSize = imageView.frame.size
        // all three of the next lines of code
        // are necessary to make zooming work
        scrollView.delegate = self
        scrollView.minimumZoomScale = 0.03
        scrollView.maximumZoomScale = 1.0
    }
}

However, if I remove scrollView.delegate = self, this scroll view still works on the simulator. My questions: Is it necessary to set scrollView.delegate as self? Why or why not? What does self refer to? (Command + left click locates "didSet".) A: You do not have to set the delegate for scrollView. You only do it if you want to react to the delegate methods that the scroll view calls. For example, if you want to do something when the user performs a scroll, you need to set the delegate and implement the func scrollViewDidScroll(scrollView: UIScrollView) method. Every time the user scrolls, you will be able to react in this method's body. self refers to the class that holds this variable; in this case it will probably be your UIViewController. A: Define "it still works"? I mean, if you can move it with touch, yeah, it will work. The reason for scrollView.delegate = self is that it allows you to add code which can execute upon scroll began, scroll ended, etc. That way you have customization points to alter the behavior or actions of the scroll view. Without that little line the delegate code will never get called. Makes sense?
Q: Does Dapper work on Mono? We're thinking about moving over to Mono and I see that Dapper works with MySql. However this is with a ADO.NET provider. Does Mono/Linux have a MySql ADO.NET provider and does that work with Dapper? Eventually we are planning on moving our current site from MySql to PostgreSql and I'm also wondering the same question, but also interms of PostrgreSql, Mono and Dapper on linux? A: I'm using Dapper with the official MySqlConnector on an OpenSuse machine (+ mono) and it works great. A: Why not pull down the source and build it? Based on this comment from the Dapper home page: Will dapper work with my db provider? Dapper has no DB specific implementation details, it works across all .net ado providers including sqlite, sqlce, firebird, oracle, MySQL and SQL Server ...and Mono's ADO.NET implementation, I would think your chances are pretty good that the code will work with little or no modification.
Q: Display more than one image in DataGridView Image Column? Is it possible? Is it possible to display more than one image in a column in a DataGridViewImageColumn? I have only 1 column and need to dynamically display images. This column could display 1 to 3 images, depending on other conditions. A: You could draw the two images you want to display onto a third, new image and then display that in the column. Something like this:

Bitmap Image1 = new Bitmap(10, 10); // replace with your first image
Bitmap Image2 = new Bitmap(10, 10); // replace with your second image
Bitmap ImageToDisplayInColumn = new Bitmap(Image1.Width + Image2.Width, Image1.Height);
using (Graphics graphicsObject = Graphics.FromImage(ImageToDisplayInColumn))
{
    graphicsObject.DrawImage(Image1, new Point(0, 0));
    graphicsObject.DrawImage(Image2, new Point(Image1.Width, 0));
}
Q: Mysql subquery much faster than join I have the following queries which both return the same result and row count: select * from ( select UNIX_TIMESTAMP(network_time) * 1000 as epoch_network_datetime, hbrl.business_rule_id, display_advertiser_id, hbrl.campaign_id, truncate(sum(coalesce(hbrl.ad_spend_network, 0))/100000.0, 2) as demand_ad_spend_network, sum(coalesce(hbrl.ad_view, 0)) as demand_ad_view, sum(coalesce(hbrl.ad_click, 0)) as demand_ad_click, truncate(coalesce(case when sum(hbrl.ad_view) = 0 then 0 else 100*sum(hbrl.ad_click)/sum(hbrl.ad_view) end, 0), 2) as ctr_percent, truncate(coalesce(case when sum(hbrl.ad_view) = 0 then 0 else sum(hbrl.ad_spend_network)/100.0/sum(hbrl.ad_view) end, 0), 2) as ecpm, truncate(coalesce(case when sum(hbrl.ad_click) = 0 then 0 else sum(hbrl.ad_spend_network)/100000.0/sum(hbrl.ad_click) end, 0), 2) as ecpc from hourly_business_rule_level hbrl where (publisher_network_id = 31534) and network_time between str_to_date('2017-08-13 17:00:00.000000', '%Y-%m-%d %H:%i:%S.%f') and str_to_date('2017-08-14 16:59:59.999000', '%Y-%m-%d %H:%i:%S.%f') and (network_time IS NOT NULL and display_advertiser_id > 0) group by network_time, hbrl.campaign_id, hbrl.business_rule_id having demand_ad_spend_network > 0 OR demand_ad_view > 0 OR demand_ad_click > 0 OR ctr_percent > 0 OR ecpm > 0 OR ecpc > 0 order by epoch_network_datetime) as atb left join dim_demand demand on atb.display_advertiser_id = demand.advertiser_dsp_id and atb.campaign_id = demand.campaign_id and atb.business_rule_id = demand.business_rule_id ran explain extended, and these are the results: +----+-------------+----------------------------+------+-------------------------------------------------------------------------------+---------+---------+-----------------+---------+----------+----------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra | 
+----+-------------+----------------------------+------+-------------------------------------------------------------------------------+---------+---------+-----------------+---------+----------+----------------------------------------------+ | 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 1451739 | 100.00 | NULL | | 1 | PRIMARY | demand | ref | PRIMARY,join_index | PRIMARY | 4 | atb.campaign_id | 1 | 100.00 | Using where | | 2 | DERIVED | hourly_business_rule_level | ALL | _hourly_business_rule_level_supply_idx,_hourly_business_rule_level_demand_idx | NULL | NULL | NULL | 1494447 | 97.14 | Using where; Using temporary; Using filesort | +----+-------------+----------------------------+------+-------------------------------------------------------------------------------+---------+---------+-----------------+---------+----------+----------------------------------------------+ and the other is: select UNIX_TIMESTAMP(network_time) * 1000 as epoch_network_datetime, hbrl.business_rule_id, display_advertiser_id, hbrl.campaign_id, truncate(sum(coalesce(hbrl.ad_spend_network, 0))/100000.0, 2) as demand_ad_spend_network, sum(coalesce(hbrl.ad_view, 0)) as demand_ad_view, sum(coalesce(hbrl.ad_click, 0)) as demand_ad_click, truncate(coalesce(case when sum(hbrl.ad_view) = 0 then 0 else 100*sum(hbrl.ad_click)/sum(hbrl.ad_view) end, 0), 2) as ctr_percent, truncate(coalesce(case when sum(hbrl.ad_view) = 0 then 0 else sum(hbrl.ad_spend_network)/100.0/sum(hbrl.ad_view) end, 0), 2) as ecpm, truncate(coalesce(case when sum(hbrl.ad_click) = 0 then 0 else sum(hbrl.ad_spend_network)/100000.0/sum(hbrl.ad_click) end, 0), 2) as ecpc from hourly_business_rule_level hbrl join dim_demand demand on hbrl.display_advertiser_id = demand.advertiser_dsp_id and hbrl.campaign_id = demand.campaign_id and hbrl.business_rule_id = demand.business_rule_id where (publisher_network_id = 31534) and network_time between str_to_date('2017-08-13 17:00:00.000000', '%Y-%m-%d %H:%i:%S.%f') and 
str_to_date('2017-08-14 16:59:59.999000', '%Y-%m-%d %H:%i:%S.%f') and (network_time IS NOT NULL and display_advertiser_id > 0) group by network_time, hbrl.campaign_id, hbrl.business_rule_id having demand_ad_spend_network > 0 OR demand_ad_view > 0 OR demand_ad_click > 0 OR ctr_percent > 0 OR ecpm > 0 OR ecpc > 0 order by epoch_network_datetime; and these are the results for the second query: +----+-------------+----------------------------+------+-------------------------------------------------------------------------------+---------+---------+---------------------------------------------------------------+---------+----------+----------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+-------------+----------------------------+------+-------------------------------------------------------------------------------+---------+---------+---------------------------------------------------------------+---------+----------+----------------------------------------------+ | 1 | SIMPLE | hourly_business_rule_level | ALL | _hourly_business_rule_level_supply_idx,_hourly_business_rule_level_demand_idx | NULL | NULL | NULL | 1494447 | 97.14 | Using where; Using temporary; Using filesort | | 1 | SIMPLE | demand | ref | PRIMARY,join_index | PRIMARY | 4 | my6sense_datawarehouse.hourly_business_rule_level.campaign_id | 1 | 100.00 | Using where; Using index | +----+-------------+----------------------------+------+-------------------------------------------------------------------------------+---------+---------+---------------------------------------------------------------+---------+----------+----------------------------------------------+ the first one takes about 2 seconds while the second one takes over 2 minutes! why is the second query taking so long? what am I missing here? thanks. 
A: Use a subquery whenever the subquery significantly shrinks the number of rows before any JOIN. Here the derived table aggregates the roughly 1.5M rows of hourly_business_rule_level down to a small result set first, so the outer query joins only that small set to dim_demand; in the join-only version, every qualifying row is joined to dim_demand before the GROUP BY runs, which is why it takes minutes instead of seconds. This reinforces Rick James's "Plan B" and Paul's answer, which deserve acceptance.
Q: Fire checkbox event with jQuery Mobile I am using jQuery 1.8.3 and jQuery Mobile 1.2.0. I am trying to implement the vertically grouped checkboxes. I need to detect changes on the checkboxes. An example is available here. However, it doesn't work on my page (which I don't include because it contains too many dependencies, but the code is the same). Basically, when I click a checkbox, the following message should be displayed in the console:

INPUT
module1
-----

Whereas in my page I've got:

FIELDSET
moduleListTitle
-----
DIV
moduleContainer
-----
DIV
(an empty string)
-----

I've spent hours trying to find why the event is not fired correctly. Has anyone already faced this problem? A: After testing the code live - not on JSFiddle - I have reached the below solution. Since the items are added dynamically, you should bind the event to $(document) and pass 'input:checkbox.module' as the delegated selector. Here is the code; I hope it solves your problem.

$(document).on('change', 'input:checkbox.module', function(){
    console.log(this.tagName);
    console.log(this.id);
    console.log("-----");
});

.trigger('refresh') is not required here.
Q: jQuery / JSON sort results I have a ColdFusion method getData() which returns a query object as follows: CustomerCode ServiceCode SubscriberCode Status UserName ------------------------------------------------------------- 811101 8 gertjan OPEN gertjan@blah.net 811101 8 gertjan CLOSING gertjan@blah.net 811101 2 99652444 CLOSED gertjan@blah.net 811101 2 99655000 OPEN gertjan@blah.net Note the first two rows - exactly the same except for Status OPEN and CLOSING respectively. The following function creates a new select option for each row where ServiceCode=8 and Status is either OPEN or CLOSING, which would be the case for both the first two rows. The data ultimately comes via a web service which is out of my control to change. I need to change the jQuery such that if BOTH an OPEN and CLOSING record exists for the same ServiceCode/SubscriberCode combination, which is the case for the first two rows, then only create an option for the OPEN record. function getInternetLines(){ var CustomerCode=global_customerCode; var SessionID=global_sessionID; var lines=[]; $.getJSON("/system.cfc?method=getData&returnformat=json&queryformat=column", {"SessionID":SessionID,"CustomerCode":CustomerCode}, function(res,code) { if(res.ROWCOUNT > 0){ for(var i=0; i<res.ROWCOUNT; i++) { var ServiceCode = res.DATA.ServiceCode[i]; var SubscriberCode = res.DATA.SubscriberCode[i]; var Status = res.DATA.Status[i]; if(ServiceCode == 8 && (Status == 'OPEN' || Status == 'CLOSING')){ lines.push(SubscriberCode); $('#selInternet').append( $('<option></option>').val(SubscriberCode).html(SubscriberCode) ); } } global_internet_lines = lines; if(lines.length == 0){ $('#divBroadbandUsage').html('No Active Broadband Connections.'); } }else{ $('#divBroadbandUsage').html('No Active Broadband Connections.'); } }); } HTML <select name="selInternet" id="selInternet" style="width:120px"> </select> Any assistance greatly appreciated in getting the cleanest approach to this, without multiple loops of the same 
dataset, for example. A: You would need to keep a hash as you read the data, ignoring data if an 'OPEN' was already found. Then loop through the hash items and output the data:

if(res.ROWCOUNT > 0){
    var hash = {}; // Hash to store data
    for(var i=0; i<res.ROWCOUNT; i++) {
        var ServiceCode = res.DATA.ServiceCode[i];
        var SubscriberCode = res.DATA.SubscriberCode[i];
        var Status = res.DATA.Status[i];
        if(ServiceCode == 8 && (Status == 'OPEN' || Status == 'CLOSING')){
            if( hash[SubscriberCode] != undefined && hash[SubscriberCode].status == 'OPEN' ) {
                // If we already have OPEN, don't load the data
                continue;
            } else {
                // Else override whatever data you have for this SubscriberCode
                hash[SubscriberCode] = { status: Status, subscriber: SubscriberCode, service: ServiceCode };
            }
        }
    }
    // loop through the hash and output the options
    for(var x in hash) {
        lines.push(hash[x].subscriber);
        $('#selInternet').append(
            $('<option></option>').val(hash[x].subscriber).html(hash[x].subscriber)
        );
    }
    global_internet_lines = lines;
    if(lines.length == 0){
        $('#divBroadbandUsage').html('No Active Broadband Connections.');
    }
}

I'm not sure what your cases are, but this covers your description I believe. I realize it is silly to have the hash key stored in the data, but for demonstration, this is how you would retrieve other data. This code stores Status, SubscriberCode, and ServiceCode, but your example only uses SubscriberCode.
If this is really the case, it is much simpler:

if(res.ROWCOUNT > 0){
    var hash = {}; // Hash to store data
    for(var i=0; i<res.ROWCOUNT; i++) {
        var ServiceCode = res.DATA.ServiceCode[i];
        var SubscriberCode = res.DATA.SubscriberCode[i];
        var Status = res.DATA.Status[i];
        if(ServiceCode == 8 && (Status == 'OPEN' || Status == 'CLOSING')){
            // If we see the subscriber code, add it to our hash
            hash[SubscriberCode] = 1;
        }
    }
    // loop through the hash and output the options
    for(var sub in hash) {
        lines.push(sub);
        $('#selInternet').append(
            $('<option></option>').val(sub).html(sub)
        );
    }
    global_internet_lines = lines;
    if(lines.length == 0){
        $('#divBroadbandUsage').html('No Active Broadband Connections.');
    }
}
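The keep-OPEN-over-CLOSING rule itself is independent of jQuery; as a cross-check, here is the same reduction sketched in Python (the dict keys simply mirror the JSON fields above; subscribers_to_list is my own name, not part of the ColdFusion service):

```python
def subscribers_to_list(rows):
    """One entry per SubscriberCode with ServiceCode 8 and Status
    OPEN or CLOSING, preferring the OPEN record when both exist."""
    best = {}
    for row in rows:
        if row["ServiceCode"] == 8 and row["Status"] in ("OPEN", "CLOSING"):
            # Never overwrite an OPEN record already in the hash.
            if best.get(row["SubscriberCode"]) != "OPEN":
                best[row["SubscriberCode"]] = row["Status"]
    return sorted(best)
```

Like the JavaScript version, it makes a single pass over the rows and one pass over the hash.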
Q: how can I self sign a tar ball so that later I can verify it was not altered intentionally I know of md5 or sha256 hashing; it won't work for my case -- see "My needs" section. I have a tar file which resides on a server, and the file is consumed by several clients over the internet. I want to ensure that the tar file is not tampered with. Clients are programmed using Python and I have control over their source code (which means I can reprogram the clients to verify a certificate). My needs: even if someone hacked into the server, he should not be able to attack the clients by altering the tar file on the server. So md5 or sha256 hashing won't work (the attacker can change the hash on the server too). My questions are: I have heard openssl makes x.509 certificates, but I believe openssl is not fit for this purpose because openssl is for providing security over the internet, not for code signing. Is my assumption correct? If the above assumption is correct, then what tool or technology should I use to sign a tar ball? Is there any built-in support for this in Python? (Note: the tar ball is the output of "python setup.py sdist") A: You can sign your tar.gz file using python-gnupg - this uses the GnuPG package, so you will need that as well. You may need to send the signature separately from the tar.gz file.
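A detached GnuPG signature, shipped next to the tarball and verified by the clients with a baked-in public key, is the robust route. If you can accept the weaker assumption that an attacker who compromises the server cannot also read a secret embedded in the client builds, the Python standard library's hmac module gives tamper evidence with no extra dependencies. A sketch under that assumption (SECRET is a placeholder, and this is not a substitute for real signatures if the client code is public):

```python
import hashlib
import hmac

SECRET = b"example-client-secret"  # placeholder; never store this on the server

def tag_file(path):
    """Return an HMAC-SHA256 tag over the file's contents."""
    mac = hmac.new(SECRET, digestmod=hashlib.sha256)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            mac.update(chunk)
    return mac.hexdigest()

def is_untampered(path, published_tag):
    """Compare the recomputed tag with the published one in constant time."""
    return hmac.compare_digest(tag_file(path), published_tag)
```

The tag is published alongside the tarball; without SECRET, an attacker who alters the file cannot produce a matching tag.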
Q: What does "no device" mean when running iostat -En We presume we have a faulty cable that connects the SAN to a direct I/O LDOM. This is a snippet of the error when running iostat -En:

c5t60060E8007C50E000030C50E00001067d0 Soft Errors: 0 Hard Errors: 696633 Transport Errors: 704386
Vendor: HITACHI Product: OPEN-V Revision: 8001 Serial No: 504463
Size: 214.75GB <214748364800 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 6 Recoverable: 0
Illegal Request: 1 Predictive Failure Analysis: 0

What does No Device: 6 mean here? A: A search through the Illumos fiber-channel device code for ENODEV shows 13 uses of ENODEV in the source code that originated as OpenSolaris. Of those instances, I suspect this is the one most likely to cause your "No device" errors:

pd = fctl_hold_remote_port_by_pwwn(port, &pwwn);
if (pd == NULL) {
    fcio->fcio_errno = FC_BADDEV;
    return (ENODEV);
}

That code is in the fp_fcio_login() function, where the code appears to be trying to log in to a remote WWN. It seems appropriate to assume a bad cable could prevent that from happening. Note that the fiber-channel error code is FC_BADDEV, which also seems appropriate for a bad cable. In short, a review of the source code indicates that ENODEV errors are consistent with a bad cable. You can use dTrace to more closely identify the association if necessary. Given that both hard and transport errors occur about 5 or 6 orders of magnitude more frequently, IMO that effort isn't necessary unless the ENODEV errors continue after the other errors are addressed and no longer occur.
Q: Motion verbs and ところ As I understand it, something like 帰るところだ usually means you are in the process of going home, say on the train. But ところ often has the meaning of "just about to do something." Does something like 帰るところだ also have this meaning? e.g. Can you say it if you are still in the office and about to leave in a couple minutes? If so, my main question is I'm wondering if ところ always has this ambiguity (at least from the English perspective) between meaning both "in the process of doing" and "just about to do"? Or is it something special to motion verbs and other verbs that are sometimes called stative verbs? What I mean by stative is 帰っている does not mean "is in the process of going" it means "went home and is now there." I'm wondering if ところ just seems to have two meanings in English because when you sitting on the train going home you are still "just about to go home" from the perspective of Japanese because 帰る is stative and you haven't completely arrived yet. Basically I'm wondering if this is correct:  雨が降るところだ = "It is just about to rain." (NEVER "It is in the process of raining.") 雨が降っているところだ = "It is in the process of raining." 帰るところだ = "I'm just about to go home (at the office)." OR "I'm in the process of going home (on the train)." And I would guess 帰っているところだ sounds strange and doesn't make much sense(?) A: When it's するところ, it means "about to do (something)." したところ means someone has just done something. しているところ means someone is doing something. していたところ means someone has been doing something. Literally, ところ means 'place', but it's also used for a figurative place as this dictionary page defines it as 2 抽象的な場所。 オ ちょうどその所「さっき着いたところだ」 Also this page puts it as 形式名詞 「こと・の・ところ・ほう・わけ・はず・つもり」など、 今テレビを見ているところです。  Now Does something like 帰るところだ also have this meaning? e.g. Can you say it if you are still in the office and about to leave in a couple minutes? Yes, you can say that when you are still at the office but about to go home. 
And is the reason that 帰るところだ can be interpreted as "in the process of going home" because motion verbs don't complete until you reach the destination? Theoretically, yes. But if we are just in front of our home, then we probably choose a different expression such as 今もう家の前にいるんだ or もう今家に着くところなんだ. [replying to additional request from the OP] 雨が降るところだ = "It is just about to rain." (NEVER "It is in the process of raining.") 雨が降るところだ is an unusual thing to say. To mean "It is just about to rain," we likely say 雨が降りそうだ. 雨が降っているところだ = "It is in the process of raining." This is not bad, however we more likely say 雨が降っている. 帰るところだ = "I'm just about to go home (at the office)." OR "I'm in the process of going home (on the train)." Yes, these are correct. And I would guess 帰っているところだ sounds strange and doesn't make much sense(?) Doesn't sound so bad if you are on the train, but if you are still at the office, it does sound strange. What I mean by stative is 帰っている does not mean "is in the process of going" it means "went home and is now there." Right. I am aware that Japanese continuation form ている is actually more of state than progression. I find that is why we need ところ to make it progressive. I'm wondering if ところ just seems to have two meanings in English because when you sitting on the train going home you are still "just about to go home" from the perspective of Japanese because 帰る is stative and you haven't completely arrived yet. I think 帰る itself is not necessarily stative. The form ている makes it stative.
Q: bash shell array output range to csv Is there an easier way to do the below? I am reading in a large csv file and want to output only certain ranges of indexes from each line, back in csv format.

while IFS=$',' read -r -a myArray
do
  echo ${myArray[44]},${myArray[45]},${myArray[46]},${myArray[47]},${myArray[48]},${myArray[65]},${myArray[66]},${myArray[67]}
done < $SHEET

A: You can use the substring operator with array parameter expansion:

while IFS=, read -r -a myArray
do
  ( IFS=,; echo "${myArray[*]:44:5},${myArray[*]:65:3}" )
done < "$SHEET"
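To make the ${myArray[*]:offset:length} expansion concrete, here is a tiny standalone demonstration with made-up array contents:

```shell
# "${arr[*]:offset:length}" expands to `length` elements starting at index
# `offset`, joined by the first character of IFS -- a comma here.
arr=(a b c d e f g h)
slice=$(IFS=','; echo "${arr[*]:2:3}")
echo "$slice"   # c,d,e
```

The subshell around the expansion keeps the IFS change from leaking into the rest of the script, which is also why the answer wraps its echo in parentheses.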
Q: Explanation about a statement in LISP about the format function I have to convert a decimal number to binary in Lisp. I came across this code while searching the web:

(defun bits (value &optional (size 64))
  (format t "~v,'~B" size value))

Please explain what each part of the code does. A: (format nil "~B" 23) will output the number in its binary form:

> (format nil "~B" 23)
"10111"

But we want to specify the size of the output string; we can do this by adding the size as a prefix in the format string.

> (format nil "~8B" 23)
"   10111"

But we don't want to pad it with spaces. We want to pad it with ~.

> (format nil "~8,'~B" 23)
"~~~10111"

Now we don't want to hard-code the size of the output in the format string; we want it passed in as a parameter. This is where ~v comes in:

> (format nil "~v,'~B" 8 23)
"~~~10111"

Note I have been passing nil as the second parameter rather than t. Passing nil means format returns the formatted string rather than printing it. You would probably rather do this. A: Mongus Pong's answer describes the actual behavior of the code you're looking at pretty well, but I think it's always worth mentioning where to find the answer, too. The Common Lisp HyperSpec is the best source of Common Lisp documentation, but there are parts of it that are a little bit hard to read. Sometimes the documentation of directives for format can be a bit dense. In this case, you'll need to see a few sections, because some of the things in your example can apply to more than just the binary directive, ~B. You'd want to start with 22.3 Formatted Output, which describes the syntax of format strings: A directive consists of a tilde, optional prefix parameters separated by commas, optional colon and at-sign modifiers, and a single character indicating what kind of directive this is. There is no required ordering between the at-sign and colon modifier. The case of the directive character is ignored.
Prefix parameters are notated as signed (sign is optional) decimal numbers, or as a single-quote followed by a character. For example, ~5,'0d can be used to print an integer in decimal radix in five columns with leading zeros, or ~5,'*d to get leading asterisks. So we're expecting to see a tilde, then (optionally) prefix parameters separated by commas, an (optional) at sign (@), an (optional) colon (:), and the actual directive character (which is not case sensitive). That means that ~v,'~B is broken down as

~  ; initial tilde
v  ; prefix parameter (could also be V)
,  ; separator between prefix parameters
'~ ; prefix parameter (character following single quote)
B  ; directive (could also be b)

So we have two prefix parameters: v and ~, and the directive is B. The next paragraph in the documentation describes what v does when it's a prefix parameter: In place of a prefix parameter to a directive, V (or v) can be used. In this case, format takes an argument from args as a parameter to the directive. The argument should be an integer or character. If the arg used by a V parameter is nil, the effect is as if the parameter had been omitted. Now, to find out what ~B does in general, you'll need to see 22.3.2.3 Tilde B: Binary, although it will pretty much redirect you elsewhere: This is just like ~D but prints in binary radix (radix 2) instead of decimal. The full form is therefore ~mincol,padchar,commachar,comma-intervalB. That documentation describes the prefix parameters that are accepted (mincol, padchar, commachar, and comma-interval). These are filled in from left to right. The example ~v,'~B has two of those, so v is mincol and '~ gives ~ as padchar. But we still need to see 22.3.2.2 Tilde D: Decimal for what each of those means: ~mincolD uses a column width of mincol; spaces are inserted on the left if the number requires fewer than mincol columns for its digits and sign. If the number doesn't fit in mincol columns, additional columns are used as needed.
~mincol,padcharD uses padchar as the pad character instead of space. … The : modifier causes commas to be printed between groups of digits; commachar may be used to change the character used as the comma. comma-interval must be an integer and defaults to 3. When the : modifier is given to any of these directives, the commachar is printed between groups of comma-interval digits. So mincol, the width of the result, is v, which means that it will be read from the list of arguments, and padchar, the padding character, is ~. Thus:

CL-USER> (bits 13 10)
~~~~~~1101   ; 10 characters wide, padded with ~
CL-USER> (bits 1022 10)
{ "pile_set_name": "StackExchange" }
Q: show the sum of specific columns based on rhandsontable values I am trying to create a shiny app that would show the sum of a column (say mtcars$mpg) when rows are selected by the user, e.g. if the first two boxes are clicked in the rhandsontable, then below it I should see the sum of 21 and 21. I am unable to wrap my head around it, and have made this code so far:

library(shiny)
library(rhandsontable)

ui = fluidPage(
  rHandsontableOutput('table'),
  textOutput('selected')
)

server = function(input, output, session)({
  df <- data.frame(head(transform(mtcars, Selected = as.logical(NA))))
  output$table = renderRHandsontable(
    rhandsontable(df, selectCallback = TRUE, readOnly = FALSE)
  )
  output$selected <- renderText({
  })
}) # end server

shinyApp(ui = ui, server = server)

Is there any way to achieve this? A: I found a way! Save the rhandsontable back to an R data frame with hot_to_r(), subset the selected rows, apply aggregate(), and render the result. It can go in a reactive like this:

tab1 <- reactive({
  if(!is.null(input$table)) {
    nt <- hot_to_r(input$table)
    nt.1 <- subset(nt, Selected == TRUE, select = c(mpg, gear))
    nt.2 <- aggregate(nt.1$mpg ~ nt.1$gear, data = nt.1, FUN = 'sum')
  }
})

:-)
Q: Why does Ramanuja refer to the PAshupata sect as the sect of "Black faces"? Ramanuja attributed this philosophy to the tradition of the Kalamukha(s), the sect of "Black Faces" to which Lakulisha belonged. What was the reason for referring to it as KAlamukha? What were RAmanuja's views on the KApAlika and Pasupata sects and their doctrines? A: Ramanujacharya does not call Pashupatas as Kalamukhas. In fact, he explicitly states that they are different Shaivite sects, in this section of his Sri Bhashya: So far it has been shown that the doctrines of Kapila, Kanâda, Sugata, and the Arhat must be disregarded by men desirous of final beatitude; for those doctrines are all alike untenable and foreign to the Veda. The Sûtras now declare that, for the same reasons, the doctrine of Pasupati also has to be disregarded. The adherents of this view belong to four different classes--Kâpâlas, Kâlâmukhas, Pâsupatas, and Saivas. All of them hold fanciful theories of Reality which are in conflict with the Veda, and invent various means for attaining happiness in this life and the next. They maintain the general material cause and the operative cause to be distinct, and the latter cause to be constituted by Pasupati. They further hold the wearing of the six so-called 'mudrâ' badges and the like to be means to accomplish the highest end of man. Thus the Kâpâlas say, 'He who knows the true nature of the six mudrâs, who understands the highest mudrâ, meditating on himself as in the position called bhagâsana, reaches Nirvâna. The necklace, the golden ornament, the earring, the head-jewel, ashes, and the sacred thread are called the six mudrâs. 
He whose body is marked with these is not born here again.'--Similarly the Kâlâmukhas teach that the means for obtaining all desired results in this world as well as the next are constituted by certain practices--such as using a skull as a drinking vessel, smearing oneself with the ashes of a dead body, eating the flesh of such a body, carrying a heavy stick, setting up a liquor-jar and using it as a platform for making offerings to the gods, and the like. 'A bracelet made of Rudrâksha-seeds on the arm, matted hair on the head, a skull, smearing oneself with ashes, &c.'--all this is well known from the sacred writings of the Saivas. They also hold that by some special ceremonial performance men of different castes may become Brâhmanas and reach the highest âsrama: 'by merely entering on the initiatory ceremony (dîkshâ) a man becomes a Brâhmana at once; by undertaking the kâpâla rite a man becomes at once an ascetic.' As far as how Ramanujacharya feels about Pashupatas, Kapalikas, and others, in this section of the Sri Bhashya he discusses how their beliefs and practices are criticized in the Brahma Sutras: With regard to these views the Sûtra says 'of pati, on account of inappropriateness.' A 'not' has here to be supplied from Sûtra 32. The system of Pasupati has to be disregarded because it is inappropriate, i.e. because the different views and practices referred to are opposed to one another and in conflict with the Veda. The different practices enumerated above, the wearing of the six mudrâs and so on, are opposed to each other; and moreover the theoretical assumptions of those people, their forms of devotion and their practices, are in conflict with the Veda. For the Veda declares that Nârâyana who is the highest Brahman is alone the operative and the substantial cause of the world, 'Nârâyana is the highest Brahman, Nârâyana is the highest Reality, Nârâyana is the highest light, Nârâyana is the highest Self'; 'That thought, may I be many, may I grow forth' (Kh. 
Up. VI, 2, 3); 'He desired, may I be many, may I grow forth' (Taitt. Up. II, 6, 1), and so on. In the same way the texts declare meditation on the Supreme Person, who is the highest Brahman, to be the only meditation which effects final release; cp. 'I know that great Person of sunlike lustre beyond the darkness. A man who knows him passes over death; there is no other path to go' (Svet. Up. III, 8). And in the same way all texts agree in declaring that the works subserving the knowledge of Brahman are only those sacrificial and other works which the Veda enjoins on men in the different castes and stages of life: 'Him Brâhmanas seek to know by the study of the Veda, by sacrifice, by gifts, by penance, by fasting. Wishing for that world only, mendicants wander forth from their homes' (Bri. Up. XI, 4, 22). In some texts enjoining devout meditation, and so on, we indeed meet with terms such as Pragâpati, Siva, Indra, Âkâsa, Prâna, &c., but that these all refer to the supreme Reality established by the texts concerning Nârâyana--the aim of which texts it is to set forth the highest Reality in its purity--, we have already proved under I, 1, 30. In the same way we have proved under Sû. I, 1, 2 that in texts treating of the creation of the world, such as 'Being only this was in the beginning,' and the like, the words Being, Brahman, and so on, denote nobody else but Nârâyana, who is set forth as the universal creator in the account of creation given in the text, 'Alone indeed there was Nârâyana, not Brahmâ, not Isâna--he being alone did not rejoice' (Mahopanishad I).--As the Pasupati theory thus teaches principles, meditations and acts conflicting with the Veda, it must be disregarded. In subsequent sections of the Sri Bhashya he discusses how the Brahma Sutras refute the belief of these Shaivite sects that Ishwara is the efficient cause but not the material cause of the Universe.
Q: HTML canvas change text according to input text I want to change the text that is on a canvas. The problem is that it only adds letters and does not remove them when I delete them from the input. http://jsfiddle.net/pgo8yzrc/ var c = document.getElementById("myCanvas"); var ctx = c.getContext("2d"); window.change = function(val){ ctx.restore(); ctx.font = "20px Georgia"; ctx.fillText(val, 10, 50); ctx.save(); } <canvas id="myCanvas"></canvas> <input type="text" onkeyup="change(this.value)" /> Why does adding text work while removing does not? Can you please correct that? Thanks A: Try this: var c = document.getElementById("myCanvas"); var ctx = c.getContext("2d"); window.change = function(val){ ctx.clearRect(0, 0, c.width, c.height); ctx.restore(); ctx.font = "20px Georgia"; ctx.fillText(val, 10, 50); ctx.save(); } The call to ctx.clearRect wipes the canvas before the new text is drawn, so deleted letters no longer linger. See working example here Upd.: If you have a background, add this function: function fillBackground() { ctx.fillStyle = "blue"; ctx.fillRect(0, 0, c.width, c.height); } Then define it before window.change and call it after ctx.clearRect
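The underlying issue — every fillText call paints on top of what is already there, so nothing is ever erased — can be modeled with a tiny stand-in for the canvas. This is plain Python and purely illustrative; the FakeCanvas class is invented here just to show the accumulate-vs-clear behavior:

```python
# A minimal stand-in for a canvas: draw calls accumulate until cleared.
class FakeCanvas:
    def __init__(self):
        self.drawn = []          # everything currently painted

    def fill_text(self, text):
        self.drawn.append(text)  # fillText never erases old pixels

    def clear_rect(self):
        self.drawn.clear()       # clearRect wipes the whole surface

c = FakeCanvas()

# Without clearing, every keystroke leaves the previous text behind:
for value in ["h", "he", "hel"]:
    c.fill_text(value)
print(c.drawn)  # ['h', 'he', 'hel'] -- the old text is still there

# Clearing first (as in the answer) leaves only the latest value:
for value in ["h", "he", "hel"]:
    c.clear_rect()
    c.fill_text(value)
print(c.drawn)  # ['hel']
```

This is why deleting characters in the input appeared to do nothing: the shorter string was drawn exactly on top of the longer one.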
Q: Retrieve a specific word from the entire string in android? I have some data in the html format. I am using Html.fromHtml(String) when setting the data to the textview in android. Also the html data contains 2 to 3 images. I want to get the name of the images from the html data and store them in an array. Based on the number of the images in the array I will set them to the imageview. How can I get the image name from src in image tag? Which one will be the better option .i.e using regular expression or substring? Please suggest some solutions and help me with some examples. My Code: public View getView(int position, View convertView, ViewGroup parent) { --- --- desc = (TextView) view.findViewById(R.id.description); URLImageParser p = new URLImageParser(desc, this); Spanned htmlSpan = Html.fromHtml(listItem.getdesc(), p, null); desc.setText(htmlSpan); ---- ---- ---- ---- } public class URLDrawable extends BitmapDrawable { // the drawable that you need to set, you could set the initial drawing // with the loading image if you need to protected Drawable drawable; @Override public void draw(Canvas canvas) { // override the draw to facilitate refresh function later if(drawable != null) { drawable.draw(canvas); } } } public class URLImageParser implements ImageGetter { ListAdapter c; View container; /*** * Construct the URLImageParser which will execute AsyncTask and refresh the container * @param t * @param listAdapter */ public URLImageParser(View t, ListAdapter listAdapter) { this.c = listAdapter; this.container = t; } public Drawable getDrawable(String source) { URLDrawable urlDrawable = new URLDrawable(); // get the actual source ImageGetterAsyncTask asyncTask = new ImageGetterAsyncTask( urlDrawable); asyncTask.execute(source); // return reference to URLDrawable where I will change with actual image from // the src tag return urlDrawable; } public class ImageGetterAsyncTask extends AsyncTask<String, Void, Drawable> { URLDrawable urlDrawable; public 
ImageGetterAsyncTask(URLDrawable d) { this.urlDrawable = d; } @Override protected Drawable doInBackground(String... params) { String source = params[0]; return fetchDrawable(source); } @Override protected void onPostExecute(Drawable result) { // set the correct bound according to the result from HTTP call urlDrawable.setBounds(0, 0, 0 + result.getIntrinsicWidth(), 0 + result.getIntrinsicHeight()); // change the reference of the current drawable to the result // from the HTTP call urlDrawable.drawable = result; // redraw the image by invalidating the container URLImageParser.this.container.invalidate(); } /*** * Get the Drawable from URL * @param urlString * @return */ public Drawable fetchDrawable(String urlString) { try { InputStream is = fetch(urlString); Drawable drawable = Drawable.createFromStream(is, "src"); drawable.setBounds(0, 0, 0 + drawable.getIntrinsicWidth(), 0 + drawable.getIntrinsicHeight()); return drawable; } catch (Exception e) { return null; } } private InputStream fetch(String urlString) throws MalformedURLException, IOException { DefaultHttpClient httpClient = new DefaultHttpClient(); HttpGet request = new HttpGet(urlString); HttpResponse response = httpClient.execute(request); return response.getEntity().getContent(); } } } A: Html.ImageGetter() will be more helpful, it will find the tag <img> http://developer.android.com/reference/android/text/Html.ImageGetter.html
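For the "regular expression or substring" part of the question: a simple attribute regex is usually enough for well-formed markup, and is much less fragile than manual substring indexing. A hedged sketch — shown in Python for brevity, but java.util.regex accepts essentially the same pattern; the sample HTML string is invented:

```python
import re

html = '<p>intro</p><img src="pic_one.png"/><div><img src="pic_two.jpg" alt="x"></div>'

# Capture the value of each src="..." attribute inside an <img> tag.
# Good enough for simple, well-formed HTML; use a real parser for messy input.
names = re.findall(r'<img[^>]*\bsrc="([^"]+)"', html)
print(names)  # ['pic_one.png', 'pic_two.jpg']
```

Once you have the list of names, its length tells you how many ImageViews to populate — or, as the answer suggests, you can skip manual extraction entirely and let Html.ImageGetter hand you each src value.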
Q: ArrayFormula retrieve list by multiple criteria

Input Data sheet:

TaskId  ClientId  Canceled
1       1         0
2       1         0
3       1         0
4       2         0
5       2         1
6       2         0
7       3         0

Report sheet:

ClientId
1
1
2
3

Desired output — an ArrayFormula that gets all TaskIds from Data, by client, where Canceled = 0:

TaskIds
1
2
3
1
2
3
4
6
7

I have a join + filter formula to drag down, which gives me all TaskIds per client:

ClientId  TaskIds
1         1,2,3
1         1,2,3
2         4,6
3         7

Then I get my result from this helper_column: =transpose(split(join(",", helper_colum))) And I want to make this work without the need to drag down. A: Try this: =ARRAYFORMULA(TRANSPOSE(SPLIT(CONCATENATE(""&TRANSPOSE(IF(TRANSPOSE(A11:A14)=B2:B8,IF(C2:C8=0,A2:A8,""),""))),""))) A11:A14=Report sheet Client ID. A2:C8=Data sheet values. Cheers
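The logic the formula implements — for each row of the Report sheet, list every non-canceled TaskId belonging to that client, duplicates included — is easier to see as a plain loop. A quick Python rendering of the same data (illustrative only; the sheet formula does all of this in one pass):

```python
data = [  # (TaskId, ClientId, Canceled) rows from the Data sheet
    (1, 1, 0), (2, 1, 0), (3, 1, 0),
    (4, 2, 0), (5, 2, 1), (6, 2, 0),
    (7, 3, 0),
]
report_clients = [1, 1, 2, 3]  # Report sheet, duplicates included

# For each report row, emit that client's tasks where Canceled = 0.
task_ids = [
    task
    for client in report_clients
    for (task, task_client, canceled) in data
    if task_client == client and canceled == 0
]
print(task_ids)  # [1, 2, 3, 1, 2, 3, 4, 6, 7]
```

Note that task 5 (client 2, Canceled = 1) is the only row filtered out, matching the desired output above.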
Q: Batch processing in array using PHP I have thousands of records in an array that was parsed from XML. My concern is the processing time of my script: will it suffer, given that I have a hundred thousand records to insert into the database? Is there a way to insert the data into the database in batches? A: This is for SQL files, but you can follow its model (or just use it). It splits the file up into parts of a size you specify, say 3000 lines, and then inserts them on a timed interval (anywhere from under 1 second to 1 minute or more). This way a large file is broken into smaller inserts. It helps you avoid editing the PHP server configuration and worrying about memory limits, script execution time, and the like. New users can't insert links, so Google "sql big dump", or go to www.ozerov.de/bigdump.php (BigDump). You could even, in theory, modify that script to accept your array as the data source instead of the SQL file. It would obviously take some modification. Hope it helps. -R
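The batching idea itself is simple regardless of language: split the records into fixed-size chunks and issue one insert per chunk. A language-neutral sketch in Python, where run_insert() is a hypothetical stand-in for the real database call (e.g. a multi-row INSERT built from the batch):

```python
def chunks(records, size):
    """Yield successive fixed-size slices of a list."""
    for start in range(0, len(records), size):
        yield records[start:start + size]

def run_insert(batch):
    # Hypothetical stand-in for the database call; in practice this
    # would build and execute one multi-row INSERT for the batch.
    print(f"inserting {len(batch)} rows")

records = list(range(10))       # pretend these came from the parsed XML
for batch in chunks(records, 3):
    run_insert(batch)           # 4 calls: batches of 3, 3, 3, 1
```

Fewer, larger INSERT statements cut round-trip and per-statement overhead, which is where most of the time goes when inserting a hundred thousand rows one at a time.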
Q: How to compare .net FW version number with number stored in variable Super new here and to #Powershell. I'm making a script to check whether the .Net Framework version that is installed is greater than or equal to a version number stored in a variable. The issue I have is with setting up the variable that filters down to the version number. $installed = (Get-ChildItem 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full\' | Get-ItemPropertyValue -Name Version | Where { $_.Version -ge $software }) -ne $null I want to compare the .Net Framework Version found in HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full to whichever version is installed on a Windows 10 computer, to see if it is greater than or equal to it. I've tried comparing the release number in the registry, but the Version is more relevant for what I'm doing. I want to write a message to the console and to a text file. $software = '4.7.02053' $installed = (Get-ChildItem 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full\' | Get-ItemPropertyValue -Name Version | Where { $_.Version -ge $software }) -ne $null If(-not $installed) { "$software is NOT installed."| Out-File -FilePath C:\Pre-Req_Log.txt -append Write-Host "'$software' is NOT installed."; pause } else { "$software is installed."| Out-File -FilePath C:\Pre-Req_Log.txt -append Write-Host ".Net FW '$software' is installed." } My expected result is to see "'4.7.02053' is (or is not) installed" in the text file, and have it be correct. It doesn't matter if it's equal; as long as it's that version or greater, I will be happy. A: To compare version numbers, don't compare them as strings; cast them to [version] (System.Version) and then compare them: $refVersion = [version] '4.7.02053' $installedVersion = [version] (Get-ItemPropertyValue -LiteralPath 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' -Name Version) if ($installedVersion -ge $refVersion) { # installed # ... } else { # not installed # ...
} If you use these [version] instances inside an expandable string ("..."), you'll get the expected string representation, but note that outputting them as-is to the console or via Out-File / > will show a tabular display with the version-number components shown individually. To force the usual string representation, use enclosing "..." - e.g., "$refVersion", or call .ToString(), e.g., $refVersion.ToString()
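The key point of the answer — compare versions as structured values, never as strings — holds in any language. As a string, "4.10.0" sorts before "4.7.02053" because '1' < '7' character-by-character, while a component-wise comparison (what [version] does in PowerShell) gets it right. A quick Python demonstration of the same pitfall:

```python
def parse_version(s):
    """Turn '4.7.02053' into a tuple of ints so comparison is numeric."""
    return tuple(int(part) for part in s.split("."))

ref = parse_version("4.7.02053")

# String comparison is lexicographic and therefore wrong for versions:
print("4.10.0" >= "4.7.02053")             # False -- misleading
print(parse_version("4.10.0") >= ref)      # True  -- correct, 10 > 7
print(parse_version("4.7.02053") >= ref)   # True  -- equal also passes
print(parse_version("4.6.1") >= ref)       # False
```

This is also why the question's original `$_.Version -ge $software` pipeline behaved unpredictably: it was comparing strings.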
Q: Can a person legally search for work, or other resources to facilitate future immigration, while visiting under the Visa Waiver Program? I have a friend who wishes to immigrate to the US in the future. However, they do not currently meet any eligibility requirements for a green card nor do they have a sponsor for a non-immigrant visa at this time. They will be coming to visit the US soon, under the Visa Waiver Program. While they are mainly coming for tourism and social purposes, my friend also hopes that the visit can be used to find proper sponsorship for a green card or work visa. However, I understand that the VWP does not allow for dual-intent like a work visa generally does. This raises some questions. If employment is found prior to their currently-planned trip, is it possible to change/cancel the VWP permit to have a green card or work visa granted for the trip instead? While in the US under the VWP, may they apply for US-based jobs and attend interviews (under the condition that work does not begin while still under the VWP)? If employment is found while on this trip, will they need to return to their home country before a green card or work visa can be granted? If so, for how long? A: There's nothing to change or cancel. If the person becomes eligible to apply for an immigrant visa or a non-immigrant work visa, he can simply apply. Yes. Yes. There's no work-sponsored immigrant visa as far as I'm aware, so it will be a nonimmigrant visa application. The waiting time depends on the circumstances and the visa type, but it would most likely be in the neighborhood of several months to a couple of years. Questions about the practical aspects of immigration are better suited to Expatriates.
Q: Collision of Galaxies According to the big bang theory, the universe started from a small initial point and is essentially expanding. However, my question is: if the universe is expanding, how is it possible for galactic collisions to occur? Are the galaxies moving away from a relative position but not moving away from each other? If so, how do we know this? A: The expansion of space is something that happens on the largest scales. At small scales, such as distances between nearby galaxies, other forces, such as gravity, dominate. Galaxy clusters are held together by the attractive force of gravity between these galaxies. Space in these regions is still expanding, but gravity pulls on these galaxies much more strongly than space's expansion moves them apart. Galaxies that are close enough together, such as the Milky Way and Andromeda, are pulled together by the force of gravity between them. A: Even more concise: the universal expansion of space does not affect the space within gravitationally bound structures. A pair of colliding galaxies clearly falls into this category. The general expansion of the universe is only apparent at the largest scales, where the universe can be treated as an isotropic, homogeneous fluid, in which there is no net gravitational force acting on any "particle". At smaller scales, this isn't true. The universe at small scales is messy and inhomogeneous. The anisotropic gravitational forces between bound objects completely dominate the expansion effect.
Q: StackExchange clone: where should I add my indexes? I'm creating an open source stack exchange clone and the following is my schema. What should I add indexes on for it to be optimal? Here is the schema in Rails format (SQL format below as well): create_table "comments", force: true do |t| t.integer "id" t.integer "post_id", null: false t.integer "user_id", null: false t.text "body", null: false t.integer "score", default: 0, null: false t.datetime "created_at" t.datetime "updated_at" end create_table "post_types", force: true do |t| t.integer "id" t.string "name", null: false end create_table "posts", force: true do |t| t.integer "id" t.integer "post_type_id", limit: 2, null: false t.integer "accepted_answer_id" t.integer "parent_id" t.integer "user_id", null: false t.text "title", limit: 255, null: false t.text "body", null: false t.integer "score", default: 0, null: false t.integer "views", default: 1, null: false t.datetime "created_at" t.datetime "updated_at" end create_table "posts_tags", force: true do |t| t.integer "id" t.integer "post_id", null: false t.integer "tag_id", null: false end create_table "tag_synonyms", force: true do |t| t.integer "id" t.string "source_tag", null: false t.string "synonym", null: false end create_table "tags", force: true do |t| t.integer "id" t.string "name", null: false end create_table "users", force: true do |t| t.integer "id" t.string "first_name", limit: 50 t.string "last_name", limit: 50 t.string "display_name", limit: 100, null: false t.string "email", limit: 100, null: false t.string "password", null: false t.string "salt", null: false t.string "about_me" t.string "website_url" t.string "location", limit: 100 t.integer "karma", default: 0, null: false t.datetime "created_at" t.datetime "updated_at" end create_table "vote_types", force: true do |t| t.integer "id" t.string "name", null: false end create_table "votes", force: true do |t| t.integer "id" t.integer "post_id", null: false t.integer "vote_type_id", null: 
false t.integer "user_id", null: false t.datetime "created_at" t.datetime "updated_at" end Here is the raw structure in SQL as well: CREATE TABLE `comments` ( `id` int(11) NOT NULL AUTO_INCREMENT, `post_id` int(11) NOT NULL, `user_id` int(11) NOT NULL, `body` text NOT NULL, `score` int(11) NOT NULL DEFAULT '0', `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, PRIMARY KEY (`id`) ); CREATE TABLE `post_types` ( `id` int(11) NOT NULL AUTO_INCREMENT, `name` varchar(255) NOT NULL, PRIMARY KEY (`id`) ); CREATE TABLE `posts` ( `id` int(11) NOT NULL AUTO_INCREMENT, `post_type_id` smallint(6) NOT NULL, `accepted_answer_id` int(11) DEFAULT NULL, `parent_id` int(11) DEFAULT NULL, `user_id` int(11) NOT NULL, `title` tinytext NOT NULL, `body` text NOT NULL, `score` int(11) NOT NULL DEFAULT '0', `views` int(11) NOT NULL DEFAULT '1', `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, PRIMARY KEY (`id`) ); CREATE TABLE `posts_tags` ( `id` int(11) NOT NULL AUTO_INCREMENT, `post_id` int(11) NOT NULL, `tag_id` int(11) NOT NULL, PRIMARY KEY (`id`) ); CREATE TABLE `tag_synonyms` ( `id` int(11) NOT NULL AUTO_INCREMENT, `source_tag` varchar(255) NOT NULL, `synonym` varchar(255) NOT NULL, PRIMARY KEY (`id`) ); CREATE TABLE `tags` ( `id` int(11) NOT NULL AUTO_INCREMENT, `name` varchar(255) NOT NULL, PRIMARY KEY (`id`) ); CREATE TABLE `users` ( `id` int(11) NOT NULL AUTO_INCREMENT, `first_name` varchar(50) DEFAULT NULL, `last_name` varchar(50) DEFAULT NULL, `display_name` varchar(100) NOT NULL, `email` varchar(100) NOT NULL, `password` varchar(255) NOT NULL, `salt` varchar(255) NOT NULL, `about_me` varchar(255) DEFAULT NULL, `website_url` varchar(255) DEFAULT NULL, `location` varchar(100) DEFAULT NULL, `karma` int(11) NOT NULL DEFAULT '0', `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, PRIMARY KEY (`id`) ); CREATE TABLE `vote_types` ( `id` int(11) NOT NULL AUTO_INCREMENT, `name` varchar(255) NOT NULL, PRIMARY KEY (`id`) ); 
CREATE TABLE `votes` ( `id` int(11) NOT NULL AUTO_INCREMENT, `post_id` int(11) NOT NULL, `vote_type_id` int(11) NOT NULL, `user_id` int(11) NOT NULL, `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, PRIMARY KEY (`id`) ); A: Let's go though a few things here... (now that you actually show the database structure instead of only the Rails 'view', we can see what's happening)..... "Relational Databases" are about "Relationships". Relationships are expressed by having queries 'join' two or more tables. The Joins require matching columns on both tables. For example, the post_id on the comment table matches the id on the posts table. If you have some comments and want to find the details of the posts they are on, then you will want to select from the posts table where the comment_id is a certain (set of) values. When you select on a column, you often (normally) want that column to be indexed. So, for each of your 'primary key' columns you will automatically also have an index. You need to index the 'other' side of the relationship as well. Comments table created_at should not be nullable. Nullable columns typically have a small impact on performance. All comments are created, and thus should all have a date, and there is no need for it to be null. If you do queries that select the comments on a particular post, then you need an index on the post_id. I suspect you may also have occasional queries for all the posts for a given user, which means you will probably want another index on the user_id post_types No problems here. Posts You will want indexes on the following: if you want to select the post for a given parent, parent_id if you want to select posts for a given user, then user_id if you want to select posts for a given type, then post_type_id you will also want to index the title, since this may make searches easier. look in to full-text indexing for the body. Should created_date be nullable? 
Post-Tags you will want two indexes here, and for performance reasons, you will probably want them duplicated. Explaining why is beyond this answer, but look for 'index coverage': index on both tag_id and post_id index on both post_id and tag_id Tag Synonyms source_tag should be source_id and should be an integer. Also with an index. synonym should be synonym_id and should be an integer. It should also have an index. Tags Fine Users Recommend an index on: display_name - so people can find themselves easily (and hopefully you have enough users for it to be needed). (Are you sure you don't mind the users having no name) Should created_date be nullable? Vote_Types fine Votes vote_type_id, post_id and user_id should each have their own index. Should created_date be nullable? Conclusion Now you have some suggestions on what indexes you should start with, the next step is monitoring where your actual performance is poor, and targeting those areas for additional optimization. To do that, you need to actually be running your application, and finding out what your actual queries look like, and running those queries to see what the actual execution plans are, and where those plans look like they need help by adding an index. - you do not have any primary keys on your database. Primary keys are part of the database's referential integrity, and ensure that you and your programs do the 'right thing'. Additionally, primary keys are implemented as an index, so they will ensure that primary-key-related access to your table is fast. - you do not have a post_id column on your post table????? Really? This makes no sense.... unless parent_id is supposed to be the unique identifier..... - similarly, you do not have a user_id on the users table. What gives? So, you have no keys, and as a result, you are missing what are normally the most critical indices. Set up each table to have a key and you will be most of the way there. 
Most databases now contain tools that will recommend indexes for you based on queries that you often run.
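The effect of the recommended foreign-key indexes can be checked directly with a query planner. A small SQLite sketch (SQLite's DDL differs slightly from the MySQL shown above, but the CREATE INDEX statements mirror the suggestions for the comments and posts tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE posts   (id INTEGER PRIMARY KEY, user_id INTEGER NOT NULL);
    CREATE TABLE comments(id INTEGER PRIMARY KEY,
                          post_id INTEGER NOT NULL,
                          user_id INTEGER NOT NULL);

    -- Index the 'other side' of each relationship, as recommended:
    CREATE INDEX idx_comments_post_id ON comments(post_id);
    CREATE INDEX idx_comments_user_id ON comments(user_id);
    CREATE INDEX idx_posts_user_id    ON posts(user_id);
""")

# The planner now uses the index for "comments on a given post"
# instead of scanning the whole comments table:
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM comments WHERE post_id = ?", (1,)
).fetchall()
print(plan)  # the plan text mentions idx_comments_post_id
```

The same EXPLAIN-style check (MySQL's EXPLAIN, SQL Server's execution plans) is how you'd confirm, on real traffic, which of the suggested indexes your actual queries use.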
Q: Grids and pointers in C I made this program in C where an object R is placed on a grid and it's supposed to move, taking input from the keyboard. For example, this is what happens if you press N:

    0 1 2                         0 1 2                        0 1 2
0   - - -                     0   R - -                    0   - - -
1   R - -   PRESS N -> GO UP  1   - - -   PRESS N AGAIN -> 1   - - -
2   - - -                     2   - - -                    2   R - -

So pressing N makes R go up. The object has to wrap around, so when it is at [A0][B0], for example, it needs to go all the way down to [A2][B0]. See above. It will move up, down, left and right. Right now I'm creating the function to make it move up, but I'm having a lot of trouble: sometimes it randomly freezes at 2:0 or 0:0 without going up, and when it's at A = 2, instead of going up by 1, it goes to 0, although I set it to do 2 - 1 (to go up it has to subtract 1). I don't understand what's causing these problems; any advice? #include <stdio.h> #include <time.h> #include <stdlib.h> #define X 3 #define Y 3 struct coords{ int a; int b; }; typedef struct coords cord; // Print the array char printArray(char row[][Y], size_t one, size_t two, struct coords cord) { row[cord.a][cord.b] = 'X'; // output column heads printf("%s", " [0] [1] [2]"); // output the row in tabular format for (size_t i = 0; i < one; ++i) { printf("\nrow[%lu] ", i); for (size_t j = 0; j < two; ++j) { printf("%-5c", row[i][j]); } } } int moveUp(struct coords * cord); int main(void) { struct coords cord; char row[X][Y] = { { '-', '-', '-'}, { '-', '-', '-'}, { '-', '-', '-'} }; srand(time(NULL)); cord.a = (rand() % 3); cord.b = (rand() % 3); printf("\nValori rand: A %d, B %d\n", cord.a, cord.b); // output the row //printf("\nrobot:%c\n", robot); puts("The array is:"); printf("\n"); printArray(row, X, Y, cord); row[cord.a][cord.b] = '-'; //printArray(row, X, Y, &m, &n); char h; while(h != '3'){ switch (h) { case 'N': moveUp(&cord); printArray(row, X, Y, cord); row[cord.a][cord.b] = '-'; break; } scanf("%s", &h); } printf("\n"); } int moveUp(struct coords * cord) { cord->a - 1; if (cord->a == 2){ cord->a - 1; }
else if (cord->a == 1){ cord->a - 1; } else if (cord->a == 0){ cord->a + 2; } /* if (cord->a == 0) { cord-> a = 2; } else { cord->a - 1; } */ printf("\n A = %d, B = %d\n", cord->a, cord->b); } A: In the below code, you are checking the value of h before having read anything into it. If the uninitialized value of h happens to be '3', then execution will not enter the while loop. char h; while(h != '3') So read a value into h first, and then do the check in the while loop. Also note that expressions like cord->a - 1; compute a value and discard it; you must assign the result, e.g. cord->a = cord->a - 1;. In the moveUp function, you can use the ternary conditional operator to assign the next position of the object R: cord->a = (cord->a)? (cord->a - 1): 2;
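The ternary in the answer implements a wrap-around decrement. The same thing can be written with modular arithmetic, which generalizes to any grid size; a quick check of the wrap behavior (illustrated in Python — the C one-liner would be `cord->a = (cord->a + X - 1) % X;`):

```python
SIZE = 3  # 3x3 grid, matching #define X 3

def move_up(a):
    """Row index after pressing N: decrement, wrapping row 0 to SIZE-1."""
    return (a - 1) % SIZE

print(move_up(2))  # 1
print(move_up(1))  # 0
print(move_up(0))  # 2  -- wraps to the bottom row, as the question wants
```

In C, adding SIZE before taking the modulus (`(a + SIZE - 1) % SIZE`) avoids relying on the sign of `%` for negative operands.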
Q: Arrow in tikzpicture between two nodes with midway label I'm trying to draw a series of labelled nodes in a tikzpicture; so far I have: \begin{tikzpicture}[box/.style = {draw, semithick, minimum size=1cm}] \node at (0, 3*3) [box] (0) {Node}; \node at (0, 3*2) [box] (1) {Node}; \node at (0, 3*1) [box] (2) {Node}; \node at (0, 3*0) [box] (3) {Node}; \draw (0) -> (1) node [midway, fill=white] {Label 1}; \draw (1) -> (2) node [midway, fill=white] {Label 2}; \draw (2) -> (3) node [midway, fill=white] {Label 3}; \end{tikzpicture} But unfortunately I cannot figure out, despite looking on S.O. and on search engines, how to get an arrow rather than the straight line. What format should I use to achieve this? A: Here I put three different arrow tips in the code. You tell TikZ how it should draw arrows, and other such details, in square brackets after \draw; note that the arrow tip is given as an option (e.g. [->]) and the path between the nodes is drawn with --, not ->. \documentclass{article} \usepackage{tikz} \begin{document} \begin{tikzpicture}[box/.style = {draw, semithick, minimum size=1cm}] \node at (0, 3*3) [box] (0) {Node}; \node at (0, 3*2) [box] (1) {Node}; \node at (0, 3*1) [box] (2) {Node}; \node at (0, 3*0) [box] (3) {Node}; \draw[->] (0) -- (1) node [midway, fill=white] {Label 1}; \draw[-latex] (1) -- (2) node [midway, fill=white] {Label 2}; \draw[-stealth] (2) -- (3) node [midway, fill=white] {Label 3}; \end{tikzpicture} \end{document}
Q: How do I print an ArrayList to a JTextArea? I can't seem to figure out how to print an ArrayList<String> to a JTextArea, and have tried using both append() and setText(). I have also tried to create a method which prints out the ArrayList through a loop, but it can't be added to the JTextArea because it is not of type String. An applicant is supposed to take a student profile (name, grade, university selections) and add it to the ArrayList<String> Applicants. This is done through a JButton if it holds true for the following if statement: if (studentsAverage > 74 && validInput && studentsAverage < 100) { studentChoices.addAll(uniOptions.getSelectedValuesList()); person = new Student (namePromptTF.getText(), averagePromptTF.getText(),Applicants, studentChoices); arrayCount++; numberOfApplicants.setText(arrayCount +"/" +100+"students"); person.printProfile(); //dont need person.studentProfileSort(); // dont need displayAllApplicants.append(person.returnProfile()); Applicants.add(person); The array is passed to a Student object that holds: private ArrayList<Student> ApplicantArray; ApplicantArray is then sorted through this method: void studentProfileSort() { Student profileLine = null; int numberOfStudents = ApplicantArray.size(); ArrayList<Student> displayAllSorted = new ArrayList<Student>(); for(int i = 1; i<numberOfStudents - 1; i++){ for(int j = 0; j<(numberOfStudents - i); j++) { if(ApplicantArray.get(i).getFamilyName().compareTo(ApplicantArray.get(i).getFamilyName())>0){ ApplicantArray.set(j, ApplicantArray.get(i)); } } ApplicantArray.get(i).returnProfile(); } } Is there a way to have a return statement inside of a loop so that I can change my method to a String type? A: First of all, your sorting algorithm does not work: ApplicantArray.get(i).getFamilyName().compareTo(ApplicantArray.get(i).getFamilyName())
Even if this would work, in the next line you override the array by setting a value rather than swapping the two values or setting to a new ArrayList. But if everything works, this is how you could print those students: StringBuilder b = new StringBuilder(); for (Student student : applicantArray) { b.append(student + "\n"); // this if you implemented toString() in Student b.append(student.getFamilyName() + ' ' + student.getFirstName() + "\n"); // or something like this } textArea.setText(b.toString()); P.S.: you should never use UpperCamelCase for variables or parameters, use lowerCamelCase instead (e.g. ApplicantArray -> applicantArray)
Q: Adding frame to picture in JavaFX I would like to add a frame to a picture using Java and JavaFX and then save the framed picture. What would be the best way to do that? For example, say I have a photo of a landscape and want to add a frame to it. The framed photo should look like this: A: You could draw two images, first the frame, then the picture, onto the same canvas like this: GraphicsContext gc1 = canvas.getGraphicsContext2D(); gc1.drawImage(frameimage,0,0,image.getFitWidth()+20,image.getFitHeight()+20); GraphicsContext gc = canvas.getGraphicsContext2D(); gc.drawImage(i,10,10,image.getFitWidth(),image.getFitHeight()); and then save them as PNG (or whatever format you like) using the canvas.snapshot function: FileChooser fileChooser = new FileChooser(); FileChooser.ExtensionFilter extFilter =new FileChooser.ExtensionFilter("png files (*.png)", "*.png"); fileChooser.getExtensionFilters().add(extFilter); Stage primaryStage = (Stage) canvas.getScene().getWindow(); File file = fileChooser.showSaveDialog(primaryStage); if(file != null){ try { WritableImage writableImage = new WritableImage((int)canvas.getWidth(), (int)canvas.getHeight()); canvas.snapshot(null, writableImage); RenderedImage renderedImage = SwingFXUtils.fromFXImage(writableImage, null); File file1 = new File(file.getAbsolutePath()+".png"); file.renameTo(file1); ImageIO.write(renderedImage, "png", file1); } catch (IOException ex) { ex.printStackTrace(); } }
Q: How to secure a web API from being accessed from unauthorized SPAs I am building a B2B service whose API can be accessed by third-parties on a subscription basis. Basically, we provide a customizable widget that our customers can embed on their website to make it available to their customers (e.g. a button that opens a modal). While it is clear how to make this work in a traditional web app, I am not sure how to guarantee this in a single-page app. Is it at all possible to make this work without a redirect URI as used in OAuth? That is, the modal triggers AJAX requests to our API and we want to make sure it comes from a script from an authorized origin without redirects. We could of course simply check the Origin header, but what is there to prevent someone from constructing a request with such a header on their backend manually, even though they couldn't do it in the browser. A: The Problem While it is clear how to make this work in a traditional web app, I am not sure how to guarantee this in a single-page app. From a web app you only need to see the html source code to be able to find the API keys or other secrets. Even if you use a traditional web server, cookies can also be obtained to automate attacks against it. While this series of articles about Mobile API Security Techniques is in the context of mobile devices, some of the techniques used are also valid in other types of APIs, like APIs for Web/SPA apps, and you can see how API keys, OAuth tokens and HMAC secrets can be used to protect an API, and how they can be bypassed. Possible Solution You can try to make it hard to find the API key with a Javascript Obfuscator, but bear in mind that this only delays an attacker in succeeding. So, how can I block an attacker? Well the cruel truth is... You can't!!! But you can try, by using reCAPTCHA V3 from Google, which works in the background and therefore doesn't require user interaction. 
The drawback here is that all your B2B clients would need to implement it across all pages of their websites, so it may not be the way to go for your use case... reCAPTCHA V3: reCAPTCHA is a free service that protects your website from spam and abuse. reCAPTCHA uses an advanced risk analysis engine and adaptive challenges to keep automated software from engaging in abusive activities on your site. It does this while letting your valid users pass through with ease. If your B2B solution really needs to protect it at all costs then you need to employ Web Application Firewalls (WAF) and User Behavior Analytics solutions, also known as UBA, that use Artificial Intelligence and Machine Learning to prevent abuse, but once more they cannot guarantee 100% blocking and both have false positives. WAF: A web application firewall (or WAF) filters, monitors, and blocks HTTP traffic to and from a web application. A WAF is differentiated from a regular firewall in that a WAF is able to filter the content of specific web applications while regular firewalls serve as a safety gate between servers. By inspecting HTTP traffic, it can prevent attacks stemming from web application security flaws, such as SQL injection, cross-site scripting (XSS), file inclusion, and security misconfigurations. UBA: User behavior analytics (UBA) as defined by Gartner is a cybersecurity process about detection of insider threats, targeted attacks, and financial fraud. UBA solutions look at patterns of human behavior, and then apply algorithms and statistical analysis to detect meaningful anomalies from those patterns—anomalies that indicate potential threats. Instead of tracking devices or security events, UBA tracks a system's users. Big data platforms like Apache Hadoop are increasing UBA functionality by allowing them to analyze petabytes worth of data to detect insider threats and advanced persistent threats. 
Conclusion At the end of the day you can only protect your B2B back-end on a best-effort basis, which must be proportional to the value it holds for the business. A 100% solution doesn't exist for the web, due to the way it was designed to work!!!
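As a concrete illustration of the Origin-header point raised in the question, here is a sketch of the naive allow-list check (the origins listed are hypothetical). It stops scripts embedded on other sites, because browsers set Origin and page scripts cannot forge it, but any backend client can send whatever Origin it likes, which is exactly the limitation discussed above:

```javascript
// Hypothetical allow-list of subscribed customers' origins.
const ALLOWED_ORIGINS = new Set([
  'https://customer-one.example',
  'https://customer-two.example',
]);

// Returns true only for requests whose Origin header is on the list.
// Note: this is a browser-level guard, not real authentication, since
// a non-browser client can set the header to anything.
function isAllowedOrigin(originHeader) {
  return typeof originHeader === 'string' && ALLOWED_ORIGINS.has(originHeader);
}

console.log(isAllowedOrigin('https://customer-one.example')); // true
console.log(isAllowedOrigin('https://attacker.example'));     // false
```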
Q: AJAX xmlhttp.send parameters I have created an AJAX function that, when called, changes the color of a specific button. However, I have only managed to do it in a static way, meaning that I put the values sent to the corresponding php script manually. What I want is to call the function through my html body with some parameters and then these parameters should be passed through the xmlhttp.send method. I tried but it doesn't work. For example a call to the below function ajaxFunction() will work OK (it will pass two parameters x=0 and t=1): function ajaxFunction() { ... xmlhttp.open("POST","example.php",true); xmlhttp.onreadystatechange = handleServerResponse; xmlhttp.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded'); xmlhttp.send("x=0&t=1");} But when I try to call the function with some parameters (ajaxFunction(0,1)), then how can I put these values in the xmlhttp.send method? Any ideas? Thanks anyway. A: did you mean: function ajaxFunction(arg0, arg1) { // ... new + open + setRequestHeader xmlhttp.send('x=' + encodeURIComponent(arg0) + '&t=' + encodeURIComponent(arg1)); }
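One way to see what the answer is doing is to separate the body-building from the XHR plumbing (a sketch; buildBody is an illustrative name, not part of the original code):

```javascript
// Build the POST body from the function's arguments instead of a
// hard-coded "x=0&t=1" string. encodeURIComponent() protects values
// that contain '&', '=', spaces, etc.
function buildBody(x, t) {
  return 'x=' + encodeURIComponent(x) + '&t=' + encodeURIComponent(t);
}

console.log(buildBody(0, 1));          // x=0&t=1
console.log(buildBody('a&b', 'c d'));  // x=a%26b&t=c%20d
```

Inside ajaxFunction(arg0, arg1) you would then call xmlhttp.send(buildBody(arg0, arg1)).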
Q: Is there a function in Emacs to search the filesystem for a file by its name? I want to open a file that is somewhere deep in my project tree. I know the name of the file, however I don't want to go searching the tree for it. I would like a way to enter just the file name, and have emacs search for me. I should also be able to enter the base directory I want to start the search from, and emacs should remember that for future searches. A: Sounds like you are looking for the equivalent of Textmate's Command-T functionality. The closest I have found for emacs is find-file-in-project. It can be bound to C-t or a similar key for convenience: (global-set-key "\C-t" 'ido-find-file-in-tag-files) A: M-x find-name-dired is the built-in solution for this. The default directory changes according to the current buffer, but the minibuffer history contains the previous selections.
Q: How to print a large canvas across multiple page widths in browser? My application needs to print out an arbitrarily large canvas that can span multiple page widths and heights. There was a similar question some time back where it was claimed the browser won't print to multiple page widths. Since this was a while back I am wondering if it is still true. Also, what strategies are available to print out a large canvas without splitting it up? var canvas = document.getElementById("canvas1"); function draw_a() { var context = canvas.getContext("2d"); // // LEVER //plane context.fillStyle = '#aaa'; context.fillRect(25, 90, 2500, 400); } $(document).ready(function() { draw_a(); }); canvas { border: 1px dotted; } .printOnly { display: none; } @media print { html, body { height: 100%; background-color: yellow; } .myDivToPrint { background-color: yellow; /* height: 100%; width: 100%; position: fixed;*/ top: 0; left: 0; margin: 0; } .no-print, .no-print * { display: none !important; } .printOnly { display: block; } } @media print and (-ms-high-contrast: active), (-ms-high-contrast: none) { html, body { height: 100%; background-color: yellow; } .myDivToPrint { background-color: yellow; /* height: 100%; width: 100%; position: fixed;*/ top: 0; left: 0; margin: 0; padding: 15px; font-size: 14px; line-height: 18px; position: absolute; display: flex; align-items: center; justify-content: center; -webkit-transform: rotate(90deg); -moz-transform: rotate(90deg); -o-transform: rotate(90deg); -ms-transform: rotate(90deg); transform: rotate(90deg); } .no-print, .no-print * { display: none !important; } .printOnly { display: block; } } <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <button onclick="window.print();" class="no-print">Print Canvas</button> <div class="myDivToPrint"> <div class="Aligner-item"> <canvas height="2500px" width="4000px" id="canvas1"></canvas> <div class="printOnly Aligner-item--bottom"> Print Only</div> </div> 
</div> A: @media print { @page { size: 297mm 210mm; /* landscape */ /* you can also specify margins here: */ margin: 25mm; margin-right: 45mm; /* for compatibility with both A4 and Letter */ } } var canvas = document.getElementById("canvas1"); function draw_a() { var context = canvas.getContext("2d"); // // LEVER //plane context.fillStyle = '#aaa'; context.fillRect(25, 90, 2500, 400); } $(document).ready(function() { draw_a(); }); canvas { border: 1px dotted; } .printOnly { display: none; } @media print { @page { size: 297mm 210mm; /* landscape */ /* you can also specify margins here: */ margin: 25mm; margin-right: 45mm; /* for compatibility with both A4 and Letter */ } html, body { height: 100%; background-color: yellow; } .myDivToPrint { background-color: yellow; /* height: 100%; width: 100%; position: fixed;*/ top: 0; left: 0; margin: 0; } .no-print, .no-print * { display: none !important; } .printOnly { display: block; } } @media print and (-ms-high-contrast: active), (-ms-high-contrast: none) { html, body { height: 100%; background-color: yellow; } .myDivToPrint { background-color: yellow; /* height: 100%; width: 100%; position: fixed;*/ top: 0; left: 0; margin: 0; padding: 15px; font-size: 14px; line-height: 18px; position: absolute; display: flex; align-items: center; justify-content: center; -webkit-transform: rotate(90deg); -moz-transform: rotate(90deg); -o-transform: rotate(90deg); -ms-transform: rotate(90deg); transform: rotate(90deg); } .no-print, .no-print * { display: none !important; } .printOnly { display: block; } } <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <button onclick="window.print();" class="no-print">Print Canvas</button> <div class="myDivToPrint"> <div class="Aligner-item"> <canvas height="2500px" width="4000px" id="canvas1"></canvas> <div class="printOnly Aligner-item--bottom"> Print Only</div> </div> </div> A: It does seem that browsers will split up a large canvas into multiple pages. 
I tested on macOS Sierra using the latest Chrome and Safari browsers. A possible approach for printing a canvas is to first transform it to a data URI containing a representation of the image using canvas.toDataURL(). You can then manipulate the image dimensions prior to printing. "<img src='" + canvas.toDataURL() + "' height='500px' width='500px' />" In the following example, the large 4500px by 4500px canvas is translated into an img and placed inside an iframe, used for printing. You can probably append the image to the original document and then print that specific element, but the iframe may be more flexible for handling print output. You can manipulate the img dimensions according to your requirements and print a scaled representation of the canvas. Note that I hardcoded the width and height of the image but this can be calculated and changed as needed for printing. Due to iframe cross-origin restrictions, the code snippet below will not work here, but it does work on this jsfiddle. The scaled 500px by 500px image representing the canvas fits on one page when printed. 
var canvas = document.getElementById("canvas1"); function draw_a() { var context = canvas.getContext("2d"); // // LEVER //plane context.fillStyle = '#aaa'; context.fillRect(25, 90, 4500, 4500); } print = function() { window.frames["myFrame"].focus(); window.frames["myFrame"].print(); } function setupPrintFrame() { $('<iframe id="myFrame" name="myFrame">').appendTo("body").ready(function(){ setTimeout(function(){ $('#myFrame').contents().find('body').append("<img src='" + canvas.toDataURL() + "' height='500px' width='500px' />'"); },50); }); } $(document).ready(function() { draw_a(); setupPrintFrame(); }); canvas { border: 1px dotted; } .printOnly, #myFrame { display: none; } @media print { html, body { height: 100%; background-color: yellow; } .myDivToPrint { background-color: yellow; /* height: 100%; width: 100%; position: fixed;*/ top: 0; left: 0; margin: 0; } .no-print, .no-print * { display: none !important; } .printOnly { display: block; } } @media print and (-ms-high-contrast: active), (-ms-high-contrast: none) { html, body { height: 100%; background-color: yellow; } .myDivToPrint { background-color: yellow; /* height: 100%; width: 100%; position: fixed;*/ top: 0; left: 0; margin: 0; padding: 15px; font-size: 14px; line-height: 18px; position: absolute; display: flex; align-items: center; justify-content: center; -webkit-transform: rotate(90deg); -moz-transform: rotate(90deg); -o-transform: rotate(90deg); -ms-transform: rotate(90deg); transform: rotate(90deg); } .no-print, .no-print * { display: none !important; } .printOnly { display: block; } } <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <button onclick="print()" class="no-print">Print Canvas</button> <div class="myDivToPrint"> <div class="Aligner-item"> <canvas height="4500px" width="4500px" id="canvas1"></canvas> <div class="printOnly Aligner-item--bottom"> Print Only</div> </div> </div>
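If you would rather calculate the img dimensions than hard-code 500px, a small aspect-ratio helper is enough (a sketch; the 700x900 "printable area" below is an assumed value, not a measured page size):

```javascript
// Scale (srcW x srcH) to fit inside (maxW x maxH), preserving the
// aspect ratio and never scaling up.
function fitWithin(srcW, srcH, maxW, maxH) {
  const scale = Math.min(maxW / srcW, maxH / srcH, 1);
  return { width: Math.round(srcW * scale), height: Math.round(srcH * scale) };
}

console.log(fitWithin(4500, 4500, 700, 900)); // { width: 700, height: 700 }
console.log(fitWithin(100, 50, 700, 900));    // small images keep their size
```

The returned width and height can then be used in place of the hard-coded attributes in the img string above.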
Q: Complex Numbers.... Suppose a is a complex number such that: $$a^2+a+\frac{1}{a}+\frac{1}{a^2}+1=0$$ If m is a positive integer, find the value of: $$(a^2)^m+a^m+\frac{1}{a^m}+\frac{1}{(a^2)^m}$$ My Approach: After I could not solve it using the usual methods I tried a somewhat crazier approach. I thought that, as the question suggests, the value of the expression does not depend upon the value of m (provided it is positive), hence the graph with the expression on the Y-axis and m on the X-axis would be parallel to the X-axis and thus the slope would be zero. So I differentiated it with respect to m and equated it to zero and after factorizing and solving I got $a^m=-1$ or $a^m=1$ or $a^m=\left(\frac{-1}{4}+i\frac{\sqrt{15}}{4}\right)$ or $a^m=\left(\frac{-1}{4}-i\frac{\sqrt{15}}{4}\right)$. But if $a^m=1$ then $a=1$ which does not satisfy the first equation. Thus the value of $$(a^2)^m+a^m+\frac{1}{a^m}+\frac{1}{(a^2)^m}=\frac{-9}{4}$$ OR $$(a^2)^m+a^m+\frac{1}{a^m}+\frac{1}{(a^2)^m}=0$$ What other approach would you suggest? What are the flaws in my approach (if any)? A: If we multiply $$a^2 + a + \frac1a + \frac{1}{a^2} + 1 = 0$$ by $a^2$, we obtain (since evidently $a \neq 1$) $$0 = a^4 + a^3 + a^2 + a + 1 = \frac{a^5-1}{a-1},$$ so $a^5 = 1$, $$a = e^{(2\pi ik)/5},\quad k \in \{1,2,3,4\}.$$ Then, if $m$ is a multiple of $5$, we have $$(a^2)^m + a^m + \frac1{a^m} + \frac{1}{(a^2)^m} = 1+1+1+1 = 4,$$ and if $m$ is not a multiple of $5$, the four numbers $$a^{2m},\, a^m,\, a^{-m},\, a^{-2m}$$ are the numbers $a^2,\, a,\, a^{-1},\,a^{-2}$, possibly in a different order, so the sum is $-1$.
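The closed-form result above is easy to sanity-check numerically (a quick sketch using Python's cmath; this is a verification aid, not part of the proof):

```python
import cmath

# a is a non-trivial 5th root of unity (k = 1 in the answer's notation).
a = cmath.exp(2j * cmath.pi / 5)

# It satisfies the original constraint a^2 + a + 1/a + 1/a^2 + 1 = 0 ...
assert abs(a**2 + a + 1/a + 1/a**2 + 1) < 1e-9

# ... and the target sum is 4 when 5 | m, and -1 otherwise.
def target_sum(m):
    return (a**(2*m) + a**m + a**(-m) + a**(-2*m)).real

for m in range(1, 26):
    expected = 4 if m % 5 == 0 else -1
    assert abs(target_sum(m) - expected) < 1e-9

print("verified for m = 1..25")
```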
Q: DocuSign Composite Template -- uploaded document is not displaying I am using DocuSign RestAPI, trying to create an envelope using Composite Template. My intent is to append a PDF document to the end of an existing template. Using the below JSON to POST /v2/accounts/{accountId}/envelopes, I am able to get the template to show, but not the appended PDF document. What am I missing? { "status":"sent", "emailBlurb":"envelope_body", "emailSubject":"envelope_subject", "compositeTemplates":[ { "serverTemplates":[ { "sequence":"1", "templateId":"{TEMPLATE_ID}" } ], "inlineTemplates":[ { "sequence":"1", "recipients":{ "signers":[ { "clientUserId":"1234", "email":"applicant@example.com", "name":"applicant", "recipientId":1, "roleName":"Applicant", }, { "clientUserId":"2345", "email":"underwriter@example.com", "name":"underwriter", "recipientId":2, "roleName":"Underwriter", } ] } }, { "sequence":"2", "documents":[ { "documentBase64": "JVBERi0xLjMKJf////8KMSAwIG9iago8PCAvQ3JlYXRvciA8ZmVmZjAwNTAwMDcyMDA2MTAwNzcwMDZlPgovUHJvZHVjZXIgPGZlZmYwMDUwMDA3MjAwNjEwMDc3MDA2ZT4KPj4KZW5kb2JqCjIgMCBvYmoKPDwgL1R5cGUgL0NhdGFsb2cKL1BhZ2VzIDMgMCBSCj4+CmVuZG9iagozIDAgb2JqCjw8IC9UeXBlIC9QYWdlcwovQ291bnQgMQovS2lkcyBbNSAwIFJdCj4+CmVuZG9iago0IDAgb2JqCjw8IC9MZW5ndGggODEKPj4Kc3RyZWFtCnEKCkJUCjM2IDc0Ny4zODQgVGQKL0YxLjAgMTIgVGYKWzw0ODY1NmM2YzZmMjA1Nz4gMzAgPDZmNzI+IC0xNSA8NmM2ND5dIFRKCkVUCgpRCgplbmRzdHJlYW0KZW5kb2JqCjUgMCBvYmoKPDwgL1R5cGUgL1BhZ2UKL1BhcmVudCAzIDAgUgovTWVkaWFCb3ggWzAgMCA2MTIuMCA3OTIuMF0KL0NvbnRlbnRzIDQgMCBSCi9SZXNvdXJjZXMgPDwgL1Byb2NTZXQgWy9QREYgL1RleHQgL0ltYWdlQiAvSW1hZ2VDIC9JbWFnZUldCi9Gb250IDw8IC9GMS4wIDYgMCBSCj4+Cj4+Cj4+CmVuZG9iago2IDAgb2JqCjw8IC9UeXBlIC9Gb250Ci9TdWJ0eXBlIC9UeXBlMQovQmFzZUZvbnQgL0hlbHZldGljYQovRW5jb2RpbmcgL1dpbkFuc2lFbmNvZGluZwo+PgplbmRvYmoKeHJlZgowIDcKMDAwMDAwMDAwMCA2NTUzNSBmIAowMDAwMDAwMDE1IDAwMDAwIG4gCjAwMDAwMDAxMDkgMDAwMDAgbiAKMDAwMDAwMDE1OCAwMDAwMCBuIAowMDAwMDAwMjE1IDAwMDAwIG4gCjAwMDAwMDAzNDYgMDAwMDAgbiAKMDAwMDAwMDUyNCAwMDAwMCBuIAp0cmFpbGVyCjw8IC9TaXplIDcKL1Jvb3Q
gMiAwIFIKL0luZm8gMSAwIFIKPj4Kc3RhcnR4cmVmCjYyMQolJUVPRgo=", "documentId":"10", "fileExtension":"PDF", "name":"addendum" } ], "recipients":{ "signers":[ { "clientUserId":"1234", "email":"applicant@example.com", "name":"applicant", "recipientId":1, "roleName":"Applicant" }, { "clientUserId":"2345", "email":"underwriter@example.com", "name":"underwriter", "recipientId":2, "roleName":"Underwriter" } ] } } ] } ] } A: If you want to just append the document, then the below JSON structure will help you. You need to have two composite templates: the first composite template adds the document from the serverTemplate and provides the recipient details; the second composite template just adds a PDF document to the envelope. { "status":"sent", "emailBlurb":"envelope_body", "emailSubject":"envelope_subject", "compositeTemplates":[ { "compositeTemplateId":"1", "serverTemplates":[ { "sequence":"1", "templateId":"{TEMPLATE_ID}" } ], "inlineTemplates":[ { "sequence":"2", "recipients":{ "signers":[ { "clientUserId":"1234", "email":"applicant@example.com", "name":"applicant", "recipientId":"1", "roleName":"Applicant" }, { "clientUserId":"1234", "email":"underwriter@example.com", "name":"underwriter", "recipientId":"2", "roleName":"Underwriter" } ] } } ] }, { "compositeTemplateId":"2", "inlineTemplates":[ { "sequence":"3", "documents":[ { "documentBase64": 
"JVBERi0xLjMKJf////8KMSAwIG9iago8PCAvQ3JlYXRvciA8ZmVmZjAwNTAwMDcyMDA2MTAwNzcwMDZlPgovUHJvZHVjZXIgPGZlZmYwMDUwMDA3MjAwNjEwMDc3MDA2ZT4KPj4KZW5kb2JqCjIgMCBvYmoKPDwgL1R5cGUgL0NhdGFsb2cKL1BhZ2VzIDMgMCBSCj4+CmVuZG9iagozIDAgb2JqCjw8IC9UeXBlIC9QYWdlcwovQ291bnQgMQovS2lkcyBbNSAwIFJdCj4+CmVuZG9iago0IDAgb2JqCjw8IC9MZW5ndGggODEKPj4Kc3RyZWFtCnEKCkJUCjM2IDc0Ny4zODQgVGQKL0YxLjAgMTIgVGYKWzw0ODY1NmM2YzZmMjA1Nz4gMzAgPDZmNzI+IC0xNSA8NmM2ND5dIFRKCkVUCgpRCgplbmRzdHJlYW0KZW5kb2JqCjUgMCBvYmoKPDwgL1R5cGUgL1BhZ2UKL1BhcmVudCAzIDAgUgovTWVkaWFCb3ggWzAgMCA2MTIuMCA3OTIuMF0KL0NvbnRlbnRzIDQgMCBSCi9SZXNvdXJjZXMgPDwgL1Byb2NTZXQgWy9QREYgL1RleHQgL0ltYWdlQiAvSW1hZ2VDIC9JbWFnZUldCi9Gb250IDw8IC9GMS4wIDYgMCBSCj4+Cj4+Cj4+CmVuZG9iago2IDAgb2JqCjw8IC9UeXBlIC9Gb250Ci9TdWJ0eXBlIC9UeXBlMQovQmFzZUZvbnQgL0hlbHZldGljYQovRW5jb2RpbmcgL1dpbkFuc2lFbmNvZGluZwo+PgplbmRvYmoKeHJlZgowIDcKMDAwMDAwMDAwMCA2NTUzNSBmIAowMDAwMDAwMDE1IDAwMDAwIG4gCjAwMDAwMDAxMDkgMDAwMDAgbiAKMDAwMDAwMDE1OCAwMDAwMCBuIAowMDAwMDAwMjE1IDAwMDAwIG4gCjAwMDAwMDAzNDYgMDAwMDAgbiAKMDAwMDAwMDUyNCAwMDAwMCBuIAp0cmFpbGVyCjw8IC9TaXplIDcKL1Jvb3QgMiAwIFIKL0luZm8gMSAwIFIKPj4Kc3RhcnR4cmVmCjYyMQolJUVPRgo=", "documentId":"10", "fileExtension":"PDF", "name":"addendum", } ] } ] } ] }
Q: How to store the data within java application? I am using the H2 database, which is embedded in my Java application. I'm creating the connection to the server as: jdbc:h2:file:/mydata Where mydata is the database name. This seemed to tell the database connection caller to find the database within the same directory as the one the application is running from. But it can't find it on client computers. Why? What to do? Where should I save the database so that I don't lose data when I distribute my application? A: According to the documentation you do not need the / before mydata; you need to look for the file in the same directory. The database URL for connecting to a local database is jdbc:h2:[file:][path]. The prefix file: is optional. If no or only a relative path is used, then the current working directory is used as a starting point. The case sensitivity of the path and database name depend on the operating system, however it is recommended to use lowercase letters only. The database name must be at least three characters long (a limitation of File.createTempFile). http://www.h2database.com/html/features.html#embedded_databases So in your example you are trying to connect to a file named mydata in the root folder. Looks like you forgot a dot (.) before /mydata. Try with the following: jdbc:h2:file:./mydata
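To see where a relative H2 URL actually points on a given machine, you can print the resolved path (a sketch; note that recent H2 versions append .mv.db to this base name on disk, older ones used .h2.db):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class H2PathDemo {
    public static void main(String[] args) {
        // "jdbc:h2:file:./mydata" is resolved against the JVM's current
        // working directory, which is what this prints:
        Path resolved = Paths.get("./mydata").toAbsolutePath().normalize();
        System.out.println("H2 would store the database at: " + resolved);
    }
}
```

Running this from different directories (or on a client machine) shows why the database appears to "move around" when the working directory changes.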
Q: Why are changes to my WebJob not being picked up when publishing the Web App? I have an ASP.NET MVC Web App which is deployed to Azure. The solution within VS 2013 Pro has 3 projects: the Web App project a Webjob project a Common project which stores code which is common to both the App and the Webjob. The Webjob project was added to the main App project via the Add --> New Azure Webjob Project context menu, which actually adds a new project within the same solution, which is fine. When I initially published the app to Azure, the Webjob was deployed too and all worked as expected. The Webjob runs on a schedule once per day. Now I've made some local changes to the Webjob and need those changes to be published. I follow the same process to deploy the App (rtClick main App --> Publish) which should also pick up changes to the Webjob, but the Preview pane is not picking up the changes and the changes are then subsequently not published to the Webjob. Incidentally, any changes I make to the Common project are picked up successfully, so it looks like there is something weird about making changes and publishing Webjobs. Has anyone come across this before? A: I've found the cause of the problem. It's actually very simple but also pretty frustrating. When publishing the web app, you have the option to Remove additional files at destination. I have always left this checked because I don't like old files hanging around for no reason. You also have the option to Exclude files from the App_Data folder which I also always leave checked so that files from App_Data are not deleted based on the remove configuration above. I then usually configure things like NLog log files, ELMAH xml files etc to go into App_Data safe in the knowledge that anything in there won't be deleted. So the issue with Webjobs is that they're deployed into App_Data. 
So if the Exclude files from App_Data folder option is checked then when the app is published, it's doing what it's told and ignoring App_Data and hence ignoring the changes to the Webjob. So the simple solution is to uncheck this option and the Webjob is deployed successfully. However the issue now is that all other files in App_Data will be deleted (log files etc). So you could uncheck the remove files config but that then potentially leaves other unwanted files lying around. Not ideal. The other option is to leave the remove config checked, click the Preview button within the Publish dialog prior to publishing, then manually uncheck every file you don't want deleted. However the publish process fails if any of the files you want to keep are within sub-folders within App_Data e.g. App_Data/logs. So the other option is to move all of the files within App_Data that you want to keep into the root of App_Data, then uncheck each of them within the Preview window prior to publishing. Not a huge deal when done once but it becomes tedious when publishing lots of times. I realise I could move log files etc to Azure storage, SQL DBs etc but what if it's the case that other files are in App_Data which need to be kept? App_Data isn't solely intended for Webjobs but using Webjobs creates a bit of an awkward situation if you also use App_Data for other things. I'd be keen to know if I'm missing anything obvious here.
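In publish-profile terms, the two checkboxes discussed above correspond to MSBuild properties in the .pubxml file. The sketch below is illustrative (the property names are the standard Web Deploy ones; which values you pick depends on which trade-off from the answer you choose):

```xml
<!-- Fragment of Properties/PublishProfiles/<profile>.pubxml -->
<PropertyGroup>
  <!-- True = "Remove additional files at destination" UNCHECKED,
       so existing files such as logs under App_Data survive publishing -->
  <SkipExtraFilesOnServer>True</SkipExtraFilesOnServer>
  <!-- False = "Exclude files from the App_Data folder" UNCHECKED,
       so the WebJob (deployed under App_Data) is actually updated -->
  <ExcludeApp_Data>False</ExcludeApp_Data>
</PropertyGroup>
```

Setting SkipExtraFilesOnServer to False instead re-enables deletion of extra files, which is exactly the tension the answer describes.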
Q: Range-based intersection of n-number of arrays Let's say I have the following three arrays: [ [100, 110, 200, 300, 400], [40, 90, 99, 150, 200], [101, 202, 404, 505] ] how would I go about writing a function getIntersectingRanges(arrays, range) which would return all ranges of max 'width' of range containing 1 or more elements from all arrays. So, if range=15 then it would return [[100, 90, 99, 101], [100, 110, 99, 101], [200, 200, 202]]. Each of those arrays contains at least one element from each input array and the total 'width' of the range is less than or equal to 15 (first one is 11, second one 11 as well and the third one just 3). It's conceptually incredibly simple but I have been having a really hard time figuring out how to write such a function. Like I am not looking for a fully fleshed out solution, all I need is the basis of the algorithm allowing me to do so (though I will obviously also gladly accept a fully written function). As some people seem to have a problem understanding this, let me give a few more simple examples (though writing these by hand is a bit hard, so excuse me if I make a mistake somewhere): input:   [[10, 20, 30, 40, 50, 60, 70, 80, 90], [55, 84]] range:  5 output: [[50, 55], [55, 60], [80, 84]] input:   [[10, 20, 30], [40, 50]] range:  10 output: [[30, 40]] input:   [[15, 30, 699], [16, 800], [10, 801], [11, 803]] range:  10 output: [[15, 16, 10, 11]] So my approach has been to first only take the first two arrays, next search for all elements from the first array in the second array ± range. So far it seems to make sense, but given this start it seems impossible to match both the first and the second result from the example above... so any help would be greatly appreciated. A: This solution builds an object with the values as keys and the indices of the source arrays as values. In addition, the lookup is sped up with a short circuit when the index is outside the area of a possible match. 
Example Given array: [ [100, 110, 200, 300, 400], [40, 90, 99, 150, 200], [101, 202, 404, 505] ] Given range: 15 First sort the given values ascending. Then iterate from the smallest value to the highest and look if values in range are in all arrays. Array Values Comment ----- ---------------------------------------------------- -------------------------------------- 0 100 110 200 300 400 1 40 90 99 150 200 2 101 202 404 505 1 here is no other value in this range 1 1 0 2 <- first group, values are in all arrays 1 0 2 0 <- second group, values are in all arrays 0 2 0 only values of two arrays 2 0 only values of two arrays 1 here is no other value in this range 01 2 <- third group, values are in all arrays 2 here is no other value in this range 0 here is no other value in this range 0 2 only values of two arrays 2 here is no other value in this range Result: [ [[100], [90, 99], [101]], [[100, 110], [99], [101]], [[200], [200], [202]] ] function intersection(arrays, range) { var result = [], // result array items = [], // all distinct items from arrays indices = {}, // object for the array indices temp, // temp array, for pushing a result i, // counter for pushing values to temp left = 0, pos = 0, lastPos, // look up range allArrays, // temporary array for indicating if index is included arraysLength = arrays.length, // arrays length itemLength, // length of all items leftValue, // literally the value from the left range emptyArrays; // template for the test if all arrays are used emptyArrays = Array.apply(Array, { length: arraysLength }); arrays.forEach(function (a, i) { a.forEach(function (item) { indices[item] = indices[item] || []; indices[item].push(i); }); }); items = Object.keys(indices).map(Number).sort(function (a, b) { return a - b; }); itemLength = items.length; do { temp = []; allArrays = emptyArrays.slice(0); leftValue = items[left]; pos = left; while (pos < itemLength && items[pos] <= range + leftValue) { temp.push(items[pos]); 
indices[items[pos]].forEach(function (i) { allArrays[i] = true; }); pos++; } pos !== lastPos && allArrays.every(function (a) { return a; }) && result.push(temp); left++; lastPos = pos; } while (pos < itemLength); return result; } function test(arrays, range) { var result = intersection(arrays, range); document.write("<br>arrays:", JSON.stringify(arrays)); document.write("<br>range:", range); document.write("<br>result:", JSON.stringify(result)); document.write("<br>---"); } test([[100, 110, 200, 300, 400], [40, 90, 99, 150, 200], [101, 202, 404, 505]], 15); test([[10, 20, 30, 40, 50, 60, 70, 80, 90], [55, 84]], 5); test([[10, 20, 30], [40, 50]], 10); test([[15, 30, 699], [16, 800], [10, 801], [11, 803]], 10); // taken from the answer of http://stackoverflow.com/a/32868439/1447675 from DzinX var LARGE_TEST_SIZE = 1000, largeTest = function () { var array = []; for (var i = 0; i < LARGE_TEST_SIZE; ++i) { var innerArray = []; for (var j = 0; j < LARGE_TEST_SIZE; ++j) { innerArray.push((i + j) * 10); } array.push(innerArray); } return array; }(), startTime; startTime = Date.now(); document.write('<br>' + intersection(largeTest, 20).length + '<br>'); document.write('Duration [ms]: ' + (Date.now() - startTime) + '<br>'); Comparision with the solution from DzinX I just changed the console.log to document.write('<br>' .... Please watch Duration in the result windows. 
function findRanges(arrays, range) { // Gather all items into one array: var items = []; arrays.forEach(function (array, arrayNumber) { array.forEach(function (item) { items.push({ value: item, arrayNumber: arrayNumber }); }); }); items.sort(function (left, right) { return left.value - right.value; }); var countByArray = []; arrays.forEach(function () { countByArray.push(0); }); var arraysIncluded = 0; var i = 0, j = 0, // inclusive spread = 0, arrayCount = arrays.length, itemCount = items.length, result = []; function includeItem(pos) { var arrayNumber = items[pos].arrayNumber; ++countByArray[arrayNumber]; if (countByArray[arrayNumber] === 1) { ++arraysIncluded; } } function excludeItem(pos) { var arrayNumber = items[pos].arrayNumber; --countByArray[arrayNumber]; if (countByArray[arrayNumber] === 0) { --arraysIncluded; } } function allArraysIncluded() { return arraysIncluded === arrayCount; } function extractValue(item) { return item.value; } function saveSpread(start, end) { result.push(items.slice(start, end).map(extractValue)); } // First item is already included. 
includeItem(0); while (j < (itemCount - 1)) { // grow j while you can while ((spread <= range) && (j < (itemCount - 1))) { ++j; spread += items[j].value - items[j - 1].value; includeItem(j); } if (spread <= range) { // We ran out of items and the range is still OK, break out early: break; } // Don't include the last item for checking: excludeItem(j); if (allArraysIncluded()) { saveSpread(i, j); } // Include the violating item back and try to reduce the spread: includeItem(j); while ((spread > range) && (i < j)) { spread -= items[i + 1].value - items[i].value; excludeItem(i); ++i; } } // last check after exiting the loop (j === (itemCount - 1)) if (allArraysIncluded()) { saveSpread(i, j + 1); } return result; } function test(arrays, range) { var result = findRanges(arrays, range); document.write("<br>arrays:", JSON.stringify(arrays)); document.write("<br>range:", range); document.write("<br>result:", JSON.stringify(result)); document.write("<br>---"); } test([[100, 110, 200, 300, 400], [40, 90, 99, 150, 200], [101, 202, 404, 505]], 15); test([[10, 20, 30, 40, 50, 60, 70, 80, 90], [55, 84]], 5); test([[10, 20, 30], [40, 50]], 10); test([[15, 30, 699], [16, 800], [10, 801], [11, 803]], 10); // A large test (1 million items): var LARGE_TEST_SIZE = 1000; var largeTest = (function () { var array = []; for (var i = 0; i < LARGE_TEST_SIZE; ++i) { var innerArray = []; for (var j = 0; j < LARGE_TEST_SIZE; ++j) { innerArray.push((i + j) * 10); } array.push(innerArray); } return array; })(); var startTime startTime = Date.now(); document.write('<br>' + findRanges(largeTest, 20).length); // 3 document.write('<br>Duration [ms]: ' + (Date.now() - startTime)); Speed comparison, with different browsers Machine: Win 7/64, Core i7-2600 3.40 GHz Version IE 11 Chrome 45.0 Firefox 40.0.3 ------- -------------- -------------- -------------- DzinX 375 ms 688 ms 1323 ms Nina 335 ms 122 ms 393 ms
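Setting language-specific details aside, both implementations above boil down to the same idea: flatten all (value, source-array) pairs, sort by value, then slide a window of width at most range and keep windows that cover every input array. A compact sketch of that idea (Python used here for brevity; this is an illustration, not a drop-in replacement for either answer):

```python
def intersecting_ranges(arrays, rng):
    # Flatten to (value, source-array-index) pairs and sort by value.
    items = sorted((v, idx) for idx, arr in enumerate(arrays) for v in arr)
    n, k = len(items), len(arrays)
    result, j = [], 0
    for i in range(n):
        j = max(j, i)
        # Grow the window while its total width stays <= rng.
        while j + 1 < n and items[j + 1][0] - items[i][0] <= rng:
            j += 1
        window = items[i:j + 1]
        if len({idx for _, idx in window}) == k:  # every array represented
            result.append([v for v, _ in window])
    return result

print(intersecting_ranges([[10, 20, 30], [40, 50]], 10))  # [[30, 40]]
```

Because j never moves backwards, the pass is O(n log n) overall for the sort plus a linear scan, which matches the spirit of the two optimized answers.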
{ "pile_set_name": "StackExchange" }
Q: Using codeigniter routes to leave out a part of the uri

I have this uri:

http://localhost/ur/index.php/reports/annual/gm/8312/44724286729

but the annual part serves the purpose of showing the user what report he/she is viewing. The function that is therefore being mapped is gm, with the parameters:

public function gm($id, $telephone_number){
    /** General Meeting */
}

in http://localhost/ur/index.php/reports/annual/gm/8312/44724286729.

My controller file is called reports. How would I use routes to ignore annual, use gm as my function, and the other sections of the uri as my parameters, namely 8312/44724286729? I have tried this in my routes:

$route['annual/(:any)'] = "gm";

A: You could try to use the dollar-sign syntax, as such:

$route['reports/annual/(:any)/(:any)'] = "reports/gm/$1/$2";

And in your reports controller the gm function will use $1 and $2 as arguments.

P.S.: see the URI Routing section in the CI manual.
{ "pile_set_name": "StackExchange" }
Q: org.hibernate.exception.JDBCConnectionException: could not execute query using hibernate

I've developed an application and it worked just fine locally, but when I uploaded it to a remote server it gave me com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException. I've tried the solution mentioned in the topic at this link: https://stackoverflow.com/questions/7565143/com-mysql-jdbc-exceptions-jdbc4-mysqlnontransientconnectionexception-no-operati#=

Here is the code that has access to the database:

protected Number getCount(Class clazz){
    Session currentSession = HibernateUtil.getSessionFactory().getCurrentSession();
    Transaction transaction = currentSession.beginTransaction();
    return (Number) currentSession.createCriteria(clazz).setProjection(Projections.rowCount()).uniqueResult();
}

and here is my hibernate configuration:

<hibernate-configuration>
    <session-factory>
        <property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
        <property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
        <property name="hibernate.current_session_context_class">thread</property>
        <property name="hibernate.connection.url">jdbc:mysql://localhost:3306/lyrics_db</property>
        <property name="hibernate.connection.username">root</property>
        <property name="hibernate.connection.password">123456</property>
        <property name="hibernate.hbm2ddl.auto">update</property>
        <property name="hibernate.show_sql">true</property>
        <property name="connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
        <property name="c3p0.acquire_increment">1</property>
        <property name="c3p0.idle_test_period">100</property> <!-- seconds -->
        <property name="c3p0.max_size">100</property>
        <property name="c3p0.max_statements">0</property>
        <property name="c3p0.min_size">10</property>
        <property name="c3p0.timeout">1800</property> <!-- seconds -->
    </session-factory>
</hibernate-configuration>

and it's not working and I'm getting the same exception, and here
is my full stack trace: Dec 3, 2013 8:02:44 PM org.apache.catalina.core.StandardWrapperValve invoke SEVERE: Servlet.service() for servlet ServletAdaptor threw exception org.hibernate.exception.JDBCConnectionException: could not execute query at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:74) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:43) at org.hibernate.loader.Loader.doList(Loader.java:2223) at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2104) at org.hibernate.loader.Loader.list(Loader.java:2099) at org.hibernate.loader.criteria.CriteriaLoader.list(CriteriaLoader.java:94) at org.hibernate.impl.SessionImpl.list(SessionImpl.java:1569) at org.hibernate.impl.CriteriaImpl.list(CriteriaImpl.java:283) at daos.UltimateDao.listWithLimitWithOrder(UltimateDao.java:47) at daos.LyricDao.getTopHundred(LyricDao.java:73) at com.xeeapps.service.Service.getTopHundred(Service.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185) at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1480) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1411) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1360) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1350) at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416) at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:538) at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:716) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:602) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) at java.lang.Thread.run(Thread.java:662) Caused by: 
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: No operations allowed after connection closed. at sun.reflect.GeneratedConstructorAccessor127.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at com.mysql.jdbc.Util.handleNewInstance(Util.java:411) at com.mysql.jdbc.Util.getInstance(Util.java:386) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1014) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:988) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:974) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:919) at com.mysql.jdbc.ConnectionImpl.throwConnectionClosedException(ConnectionImpl.java:1290) at com.mysql.jdbc.ConnectionImpl.checkClosed(ConnectionImpl.java:1282) at com.mysql.jdbc.ConnectionImpl.prepareStatement(ConnectionImpl.java:4468) at com.mysql.jdbc.ConnectionImpl.prepareStatement(ConnectionImpl.java:4434) at com.mchange.v2.c3p0.impl.NewProxyConnection.prepareStatement(NewProxyConnection.java:1076) at org.hibernate.jdbc.AbstractBatcher.getPreparedStatement(AbstractBatcher.java:505) at org.hibernate.jdbc.AbstractBatcher.getPreparedStatement(AbstractBatcher.java:423) at org.hibernate.jdbc.AbstractBatcher.prepareQueryStatement(AbstractBatcher.java:139) at org.hibernate.loader.Loader.prepareQueryStatement(Loader.java:1547) at org.hibernate.loader.Loader.doQuery(Loader.java:673) at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:236) at org.hibernate.loader.Loader.doList(Loader.java:2220) ... 40 more Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 38,056,253 milliseconds ago. The last packet sent successfully to the server was 38,056,857 milliseconds ago. is longer than the server configured value of 'wait_timeout'. 
You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem. at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at com.mysql.jdbc.Util.handleNewInstance(Util.java:411) at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1117) at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3871) at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2484) at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2664) at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2794) at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2155) at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:2322) at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeQuery(NewProxyPreparedStatement.java:144) at org.hibernate.jdbc.AbstractBatcher.getResultSet(AbstractBatcher.java:186) at org.hibernate.loader.Loader.getResultSet(Loader.java:1787) at org.hibernate.loader.Loader.doQuery(Loader.java:674) at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:236) at org.hibernate.loader.Loader.doList(Loader.java:2220) at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2104) at org.hibernate.loader.Loader.list(Loader.java:2099) at org.hibernate.loader.criteria.CriteriaLoader.list(CriteriaLoader.java:94) at org.hibernate.impl.SessionImpl.list(SessionImpl.java:1569) at org.hibernate.impl.CriteriaImpl.list(CriteriaImpl.java:283) at org.hibernate.impl.CriteriaImpl.uniqueResult(CriteriaImpl.java:305) at 
daos.UltimateDao.get(UltimateDao.java:24) at daos.SongDao.getSong(SongDao.java:31) at daos.LyricDao.getLyricForSong(LyricDao.java:24) at com.xeeapps.service.Service.getLyricForASong(Service.java:82) ... 32 more Caused by: java.net.SocketException: Broken pipe at java.net.SocketOutputStream.socketWrite0(Native Method) at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92) at java.net.SocketOutputStream.write(SocketOutputStream.java:136) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123) at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3852) ... 53 more Dec 3, 2013 8:02:44 PM org.apache.catalina.core.StandardWrapperValve invoke SEVERE: Servlet.service() for servlet Faces Servlet threw exception com.sun.jersey.api.client.UniformInterfaceException: GET http://localhost:8080/LyricsService/webresources/service/getTopHundred returned a response status of 500 Internal Server Error at com.sun.jersey.api.client.WebResource.handle(WebResource.java:686) at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74) at com.sun.jersey.api.client.WebResource$Builder.get(WebResource.java:507) at client.LyricsClient.getTopHundred(LyricsClient.java:71) at controllers.TopHundredController.init(TopHundredController.java:32) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.el.parser.AstValue.invoke(AstValue.java:191) at org.apache.el.MethodExpressionImpl.invoke(MethodExpressionImpl.java:276) at com.sun.faces.facelets.el.TagMethodExpression.invoke(TagMethodExpression.java:105) at com.sun.faces.facelets.tag.jsf.core.DeclarativeSystemEventListener.processEvent(EventHandler.java:128) at 
javax.faces.component.UIComponent$ComponentSystemEventListenerAdapter.processEvent(UIComponent.java:2563) at javax.faces.event.SystemEvent.processListener(SystemEvent.java:108) at javax.faces.event.ComponentSystemEvent.processListener(ComponentSystemEvent.java:118) at com.sun.faces.application.ApplicationImpl.processListeners(ApplicationImpl.java:2187) at com.sun.faces.application.ApplicationImpl.invokeComponentListenersFor(ApplicationImpl.java:2135) at com.sun.faces.application.ApplicationImpl.publishEvent(ApplicationImpl.java:289) at com.sun.faces.application.ApplicationImpl.publishEvent(ApplicationImpl.java:247) at com.sun.faces.lifecycle.RenderResponsePhase.execute(RenderResponsePhase.java:107) at com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101) at com.sun.faces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:219) at javax.faces.webapp.FacesServlet.service(FacesServlet.java:647) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:602) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) at java.lang.Thread.run(Thread.java:662) Does anyone know why this exception keeps happening even I changed my configuration? 
A: I guess the problem was that I didn't commit the transaction, so it made some sort of lock. I figured this out from this topic: What happens if you don't commit transaction in a database (say SQL Server)
{ "pile_set_name": "StackExchange" }
Q: Using PHP with LDAP returns all results into one connected line

I'm trying to get user information from my Active Directory through LDAP. I'm using for loops to retrieve each username for a specific AD OU. All results are showing in one line without any separation. If I put $LDAP_CN into an array it just creates a lot of different arrays. Here is my PHP code:

$entries = ldap_get_entries($ldap_connection, $result);
for ($x = 0; $x < $entries['count']; $x++){
    $LDAP_CN = "";
    if (!empty($entries[$x]['cn'][0])) {
        $LDAP_CN = $entries[$x]['cn'][0];
        if ($LDAP_CN == "NULL"){
            $LDAP_CN = "";
        }
    }
    echo($LDAP_CN);
}

output:

Name LastnameName1 Lastname1Name2 Lastname2Name3 Lastname3Name4 Lastname4 and so on.

When I try to var_dump $LDAP_CN it gives output like this:

string(13) "Name Lastname"
string(15) "Name1 Lastname1"
string(15) "Name2 Lastname2"
string(15) "Name3 Lastname3"
string(15) "Name4 Lastname4"
etc.

So I'm guessing that it knows how to separate them. But how? I tried explode; it just creates a lot of arrays. Also, if I put echo outside of the loop it just returns the last result.

A: The names run together because echo prints them with no separator between loop iterations. Collect all results in one array instead; you can then inspect it with print_r() or join the names with a delimiter, e.g. implode("\n", $LDAP_CN):

$LDAP_CN = [];
for ($x = 0; $x < $entries['count']; $x++){
    if (!empty($entries[$x]['cn'][0])) {
        $LDAP_CN[] = $entries[$x]['cn'][0] == "NULL" ? "" : $entries[$x]['cn'][0];
    }
}
print_r($LDAP_CN);
{ "pile_set_name": "StackExchange" }
Q: In Stargate is there an in-universe explanation of the cumulative effect of Zat'nik'tel (Zat guns)?

Several times we see characters who had previously been shot by a zat gun shot again. The guns were supposed to be painful on the first shot, fatal on the second, and matter-destroying on the third. It seems that multiple shots spaced over time did not have this effect. Is there an in-universe explanation as to why the cumulative effect of zat guns wears off?

A: Third Zap's the Charm

Let's consider the Goa'uld and their needs. They were not known for their patience or tolerance, but their technology was definitely first rate. Only the Ori or the Asgard seemed to have working technology as sophisticated. It is likely the Ancients also had technology as effective, but few working samples remained. The Goa'uld have mastered the ability to compress energy into small devices such as Zat'nik'tel and Staff weapons. Both weapons have amazing capacity for energy emission and matter disruption. If we consider the effects we have seen:

- the ability to stun almost any creature (there are exceptions, but it's a short list)
- the ability to kill any creature struck more than once in a short period of time, likely due to neural collapse
- the ultimate disintegration of a target with a third shot
- the energy of a Zat can be conducted through both metal and water
- the energy of a Zat is destructive to electronic devices (but does not work on the Replicators)

My initial assessment would call this a form of coherent electron-beam weapon, or a lightning-gun: a beam of coherent electrons is guided by an invisible laser which polarizes the air and allows the electrons to travel to the target. The guidance effect in this case must allow for a massive number of charged particles to reach the target.
This meets several of the criteria for our weapon:

- destroys electronics
- can be conducted by metal or water
- can cause a shock to human or humanoid-like creatures, likely by overloading their neural systems

But how do we get to the matter-destruction possibility? Perhaps the Goa'uld are advanced enough to use the anti-particle of the electron, the positron. This anti-matter particle would act pretty much like an electron would, but if it could be caused to stay with a target for more than a few minutes, perhaps by clinging to the matter in some undisclosed fashion, a target could:

- be affected just like they would if zapped by a heavy charge of electrons
- retain enough charge that a second exposure would exacerbate the first, killing a potential target
- with a third exposure, undergo enough of an anti-particle reaction to cause annihilation (non-explosively?), reducing matter to dust

This might also explain how the Zat'nik'tel could possibly have been used to boost hyperdrive engines when amplified by Ancient knowledge. Granted, this would make the Goa'uld weaponry very strange by our standards, but if they could control anti-matter streams in that fashion, it would make most armor obsolete, as all any excess matter would do once struck by a Zat is to hold the positron charge even better. After reviewing all of the sources I could find on Zat weaponry, I have to conclude (a conclusion supported even by the show's designers) that no one considered the Zat and its effects very thoroughly, and any speculation here is clearly my own.

A: To answer your question about multiple shots over long time spans: In one SG-1 episode, Zats are used against a species of swarming creatures that are electromagnetic in nature. The team realizes that their only chance to reach the stargate without being attacked by the creatures is to reach it under the protection of an electromagnetic field.
Colonel O'Neill is hit once with Zat fire and he makes his way toward the gate as soon as he is able to stand. However, halfway there, the field around his body begins to dissipate and the creatures begin to break through. This means:

- Zats impart some sort of charge on their victim that can generate an EM field (it has to be an unknown particle that is not an electron, proton, or positron). It also means that the charge dissipates slowly (within 5-20 minutes, judging from the above-mentioned episode).
- Body-wide pain is most likely caused by electrical effects on the nervous system.
- Charge imparted by successive shots is cumulative and, beyond a threshold, is lethal. This explains the lethal second shot and the ability to withstand multiple shots if enough time passes between them.

The disintegration effect is poor, both scientifically and as a storytelling device. It was wisely abandoned by the writers.

A: Building on HNL's answer, it might be possible that the bolt of energy shot from the Zat is made of positronium, stabilised in some fashion. It travels fast enough to reach the target and impose a charge on the target (different regions being charged differently), though it would be largely negative due to some/most of the positrons annihilating. At this point not enough charge has been induced in the victim to kill it. On the second blast about the same amount of negative charge is deposited. With the victim still charged from the first blast, this charge may be enough to kill them via the pure level of charge involved. Otherwise, when the charge does attempt to ground itself it now has to travel through the heart (not just down the legs), as the victim is on the ground. On the third shot things get interesting. The body now has a large negative charge.
This charge acts to repel the electrons and attract the positrons, which reach the body faster than before; fewer positrons are annihilated, and they start a chain reaction with the energy released from the anti-matter reaction, which is enough to disintegrate the body. It's dodgy science, but it's still science.
{ "pile_set_name": "StackExchange" }
Q: Is the Dirac delta function linear?

Show that $\delta_0$, the Dirac delta, defined by $\left<\delta_0,\phi\right> = \phi(0)$, is linear.

My attempt: let $\phi_1,\phi_2 \in W^{m,p}(\Omega)$; then $\delta_0(\phi_1+\phi_2)=(\phi_1+\phi_2)(0)$, but I need more steps.

A: Hint: What is the definition of a linear functional? Just plug it into the definition and see if it works. Continuing your attempt, $(\phi_1 + \phi_2)(0)=\phi_1(0) + \phi_2(0)$.
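Spelling the hint out: evaluation at $0$ is performed argument-wise, so for scalars $a, b$ and test functions $\phi_1, \phi_2$ the whole verification is one line:

$$\langle\delta_0,\, a\phi_1 + b\phi_2\rangle = (a\phi_1 + b\phi_2)(0) = a\,\phi_1(0) + b\,\phi_2(0) = a\,\langle\delta_0,\phi_1\rangle + b\,\langle\delta_0,\phi_2\rangle,$$

which is exactly the definition of linearity for a functional.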
{ "pile_set_name": "StackExchange" }
Q: how to set header font color in Latex

Is it possible to change the header font color in LaTeX?

A: You could have a look at the sectsty package.

The sectsty package provides a set of commands for changing the fount used for the various sectional headings in the standard LaTeX 2e document classes.

From the manual: Make sure you include the package in your document by saying in your document preamble:

\usepackage{sectsty}

You will then have some new commands available. For example:

\allsectionsfont{\sffamily}

will give you sanserif for all sectional headings.

Here is the full manual.
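To get an actual color change (the quoted example only switches the font family), the same sectsty commands accept a color switch; a minimal sketch, assuming the xcolor package is loaded:

\documentclass{article}
\usepackage{xcolor}
\usepackage{sectsty}

% Color every sectional heading level:
\allsectionsfont{\color{blue}}
% ...or target one level only (this overrides the line above for \section):
\sectionfont{\color{red}}

\begin{document}
\section{A red section}
\subsection{A blue subsection}
\end{document}

If by "header" you instead mean the running page header, that is normally handled by the fancyhdr package rather than sectsty.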
{ "pile_set_name": "StackExchange" }
Q: Bearing features (2RS versus 2RSH)

I have done sufficient googling to discover that a -2RS bearing is one with two rubber seals. I haven't ascertained much beyond that. For an application currently using a 2RSH bearing, can I replace it with a 2RS bearing? When is the answer yes, and when is the answer no? Thanks!

A: Per this dictionary, the RS and RSH parts mean the same thing. They mean "Contact seal made of acrylonitrile-butadiene rubber (NBR)..." That would make it seem as though different manufacturers may use different acronyms for the same thing. If that is really the case then you can totally interchange the two. However, I'm not super-familiar with the nomenclature of wheel bearings, and they could mean something different. If anyone here knows with certainty that the above is right or wrong, let me know. But from what I can scrounge up, it appears that they are completely interchangeable.
{ "pile_set_name": "StackExchange" }
Q: How to explain Real Big Numbers?

Mathematicians, and especially number theorists, are used to working with big numbers. I have noted on several occasions that lots of people don't have a clear understanding of big numbers as far as the real world is concerned. I recall a request for a list of all primes of less than 500 digits. Another example is homeopathic dilutions. I understand they use dilutions like 200C, which is 1 in $10^{400}$. An absurd number in view of the fact that the total number of particles in the universe is estimated (safe margin) to be less than a googol. How would you give people insight into big numbers? I'm not talking about Skewes' Number or Graham's Number; for most practical purposes $10^{20}$ is equal to infinity.

edit: To whoever voted me down: if you vote this down, please also tell me why. Thanks

A: Very few (if any) mathematicians have significant insight regarding huge natural numbers (cf. various ultrafinitism arguments). Perhaps the only exceptions are logicians who work with esoteric ordinal notations. This is one of the few ways one can gain any insight into arbitrarily large numbers - using various complicated inductions to show that some property holds for all naturals - thus lifting our intuition up from small naturals to arbitrarily large naturals. For example, see the Goodstein sequence (or, more graphically: the Hercules vs. Hydra game) which encodes the ordinals below $\epsilon_0 = \omega^{\omega^{\omega^{\cdot^{\cdot^\cdot}}}} \;$ into huge natural numbers.

A: Though I don't quite know what you actually want to hear - what kind of numbers do you want to give people insight into, whom, and why? - I'll give a few thoughts.

I) Real cases

That's just understanding of the natural sciences - the number of particles in the universe, the number of cells in a body ... Try to first of all break down the number by using smaller parts of the example - e.g. count bacteria in a drop of water and not in a whole lake.
II) Thought experiments (explaining probabilities, complexity etc.)

Extremely big numbers arise when you try to visualize probabilities or complexities, especially when exponential growth is involved. What about getting the jackpot ten times in a row, or trying to solve a TSP for 100 cities? When you know people aren't comfortable with such big numbers, decide:

- Is it really important to know the number? Maybe "extremely long" or "extremely improbable" is just the important fact.
- Can you find an easier-to-grasp example (special units)? "Longer than the universe is old" is better than "insert giant amount of milliseconds".
- Can you describe the growth differently? If your problem with 999 cities can be solved in a certain amount of time and you take one additional city, you'll need 1000 times longer.

III) Data

Especially in the context of CS / cryptography, numbers can often most accurately be explained as some data you can calculate with. E.g. RSA (as in your link) is of course a mathematical, number-based algorithm, but in fact you're encrypting data, so why not say "a 500-char key" instead of explaining the giant number involved there.
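A concrete way to back up comparisons like the 200C example is to let a language with arbitrary-precision integers do the bookkeeping. A small illustrative Python sketch (the googol used as the particle-count bound is the question's own "safe margin" figure; the universe-age figure is a rough 14-billion-year estimate):

```python
# Arbitrary-precision integers make "real big numbers" easy to compare directly.

dilution_200c = 10 ** 400    # a 200C homeopathic dilution: 1 part in 10^400
particles = 10 ** 100        # googol: generous upper bound on particles in the universe

# Even granting one particle per 10^100 available particles leaves a factor of 10^300:
shortfall = dilution_200c // particles
print(len(str(shortfall)) - 1)  # 300, i.e. the shortfall is 10^300

# Growth argument from the answer: brute-force TSP work grows by a factor
# of n when going from n-1 to n cities, since n! / (n-1)! == n.
import math
print(math.factorial(1000) // math.factorial(999))  # 1000

# "For most practical purposes 10^20 is equal to infinity":
seconds_per_year = 60 * 60 * 24 * 365
age_of_universe_seconds = 14_000_000_000 * seconds_per_year  # roughly 4.4e17
print(10 ** 20 > age_of_universe_seconds)  # True: 10^20 seconds dwarfs it
```

Printing the digit count rather than the number itself is the point: the raw number is unreadable, while "a 1 followed by 300 zeros" is the kind of restatement the answer recommends.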
{ "pile_set_name": "StackExchange" }
Q: Receiving error of "The type or namespace name 'LayoutsPageBase' could not be found"

To give you the entire perspective, I am trying to create a custom ribbon in SharePoint. For that I am following this tutorial. I created the required feature and was able to deploy and test it with a simple JavaScript alert. Now I am trying to call an ASPX page on click of the ribbon button. For that I created an Application Page in my project. But in the code-behind file of the ASP.NET page I get the following error:

The type or namespace name 'LayoutsPageBase' could not be found (are you missing a using directive or an assembly reference?) C:\Users\Administrator\Documents\Visual Studio 2012\Projects\CustomRibbonButton\CustomRibbonButton\Layouts\CustomRibbonButton\ApplicationPage1.aspx.cs

I have imported (I hope that's what you call it in C#) Microsoft.SharePoint.WebControls with the statement:

using Microsoft.SharePoint.WebControls;

From this question on StackOverflow I was able to figure out that the LayoutsPageBase class is not available in sandboxed solutions (with the path as \UserCode\assemblies). So in my project I went to References > Microsoft.SharePoint and right-clicked on it to view its Properties. Its Path in the Properties window is shown as C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.dll.

What can be the reason for this error and how can it be solved?

A: You can check whether or not a SharePoint project is sandboxed by right-clicking the project in Solution Explorer and viewing the properties. There is a true/false property called Sandboxed Solution.
{ "pile_set_name": "StackExchange" }
Q: C++11 - emplace_back between 2 vectors doesn't work

I was trying to adapt some code, moving the content from one vector to another using emplace_back():

#include <iostream>
#include <string>
#include <vector>

struct obj {
    std::string name;
    obj() : name("NO_NAME") {}
    obj(const std::string& _name) : name(_name) {}
    obj(obj&& tmp) : name(std::move(tmp.name)) {}
    obj& operator=(obj&& tmp) = default;
};

int main(int argc, char* argv[]) {
    std::vector<obj> v;
    for (int i = 0; i < 1000; ++i) {
        v.emplace_back(obj("Jon"));
    }
    std::vector<obj> p;
    for (int i = 0; i < 1000; ++i) {
        p.emplace_back(v[i]);
    }
    return 0;
}

This code doesn't compile with g++-4.7, g++-4.6, or clang++: what is wrong with it? I always get one main error about a call to the implicitly-deleted copy constructor of obj.

A: Although the existing answer provides a workaround using std::move that makes your program compile, it must be said that your use of emplace_back seems to be based on a misunderstanding. The way you describe it ("I was trying to [...] moving the content from a vector to another one using emplace_back()") and the way you use it suggest that you think of emplace_back as a method to move elements into the vector, and of push_back as a method to copy elements into a vector. The code you use to fill the first instance of the vector seems to suggest this as well:

std::vector<obj> v;
for (int i = 0; i < 1000; ++i) {
    v.emplace_back(obj("Jon"));
}

But this is not what the difference between emplace_back and push_back is about. Firstly, even push_back will move (not copy) the elements into the vector if only it is given an rvalue, and if the element type has a move assignment operator. Secondly, the real use case of emplace_back is to construct elements in place, i.e. you use it when you want to put objects into a vector that do not exist yet. The arguments of emplace_back are the arguments to the constructor of the object.
So your loop above should really look like this:

std::vector<obj> v;
for (int i = 0; i < 1000; ++i) {
    v.emplace_back("Jon"); // <-- just pass the string "Jon", not obj("Jon")
}

The reason why your existing code works is that obj("Jon") is also a valid argument to the constructor (specifically, to the move constructor). But the main idea of emplace_back is that you need not create the object and then move it in. You don't benefit from that idea when you pass obj("Jon") instead of "Jon" to it.

On the other hand, in your second loop you are dealing with objects that were created before. There is no point in using emplace_back to move objects that exist already. And again, emplace_back applied to an existing object does not mean that the object is moved. It only means that it is created in-place, using the ordinary copy constructor (if that exists). If you want to move it, simply use push_back, applied to the result of std::move:

std::vector<obj> p;
for (int i = 0; i < 1000; ++i) {
    p.push_back(std::move(v[i])); // <-- Use push_back to move existing elements
}

Further notes

1) You can simplify the loop above using C++11 range-based for:

std::vector<obj> p;
for (auto &&obj : v)
    p.push_back(std::move(obj));

2) Regardless of whether you use an ordinary for-loop or range-based for, you move the elements one by one, which means that the source vector v will remain as a vector of 1000 empty objects. If you actually want to clear the vector in the process (but still use move semantics to transport the elements to the new vector), you can use the move constructor of the vector itself:

std::vector<obj> p(std::move(v));

This reduces the second loop to just a single line, and it makes sure the source vector is cleared.

A: The problem is that p.emplace_back(v[i]); passes an lvalue to emplace_back, which means that your move constructor (which expects an rvalue reference) won't work.
If you actually want to move values from one container to another, you should explicitly call std::move: p.emplace_back(std::move(v[i])); (The idea behind a move constructor like obj(obj&& tmp) is that tmp should be an object that isn't going to be around for much longer. In your first loop, you pass a temporary object to emplace_back, which is fine -- an rvalue reference can bind to a temporary object and steal data from it because the temporary object is about to disappear. In your second loop, the object that you pass to emplace_back has a name: v[i]. That means it's not temporary, and could be referred to later in the program. That's why you have to use std::move to tell the compiler "yes, I really meant to steal data from this object, even though someone else might try to use it later.") Edit: I'm assuming that your rather unusual usage of emplace_back is a relic of having to craft a little example for us. If that isn't the case, see @jogojapan's answer for a good discussion about why using a std::vector move constructor or repeated calls to push_back would make more sense for your example.
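To make the contrast concrete, here is a hedged sketch (assuming C++11 and the obj struct from the question; the helper function names are illustrative) that fills one vector with emplace_back and then moves its elements into a second vector with push_back(std::move(...)):

```cpp
// Sketch of the two loops done idiomatically: emplace_back constructs each
// obj in place from its constructor argument, and push_back(std::move(...))
// moves the existing elements into the second vector.
#include <string>
#include <utility>
#include <vector>

struct obj {
    std::string name;
    obj() : name("NO_NAME") {}
    obj(const std::string& n) : name(n) {}
    obj(obj&& tmp) : name(std::move(tmp.name)) {}
    obj& operator=(obj&& tmp) = default;
};

std::vector<obj> make_source(int n) {
    std::vector<obj> v;
    for (int i = 0; i < n; ++i)
        v.emplace_back("Jon");      // pass the ctor argument, not obj("Jon")
    return v;
}

std::vector<obj> move_all(std::vector<obj>& v) {
    std::vector<obj> p;
    p.reserve(v.size());            // avoid reallocations during the moves
    for (auto& o : v)
        p.push_back(std::move(o));  // explicit move of existing elements
    return p;
}
```

After move_all runs, the elements of v are in a valid but unspecified (moved-from) state, which is exactly the behavior the answers above describe.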
Q: How to solve problems involving roots. $\sqrt{(x+3)-4\sqrt{x-1}} + \sqrt{(x+8)-6\sqrt{x-1}} =1$ How to solve problems involving roots? If we square them they may go to fourth degree. There must be some technique to solve this. $$\sqrt{(x+3)-4\sqrt{x-1}} + \sqrt{(x+8)-6\sqrt{x-1}} =1$$ A: By completing the squares under the outer roots, you get the following equation: $$ \sqrt{(2-\sqrt{x-1})^2} + \sqrt{(3-\sqrt{x-1})^2} =1 $$ which leads to: $$ |2-\sqrt{x-1}| + |3-\sqrt{x-1}| =1 $$ Then you have three cases to discuss: case $\sqrt{x-1} \leq 2$ (equivalent to $x\leq5$): the equation becomes $(2-\sqrt{x-1}) + (3-\sqrt{x-1}) = 5 - 2\sqrt{x-1} = 1$, so $\sqrt{x-1} = 2$ and $x = 5$. case $2 < \sqrt{x-1} < 3$ (equivalent to $5<x<10$): the equation becomes $$ \sqrt{x-1}-2 + 3-\sqrt{x-1} =1 $$ which reduces to $1=1$, so every $x$ in $]5,10[$ is a solution. case $\sqrt{x-1} \geq 3$ (equivalent to $x\geq10$): the equation becomes $(\sqrt{x-1}-2) + (\sqrt{x-1}-3) = 2\sqrt{x-1} - 5 = 1$, so $\sqrt{x-1} = 3$ and $x = 10$. Putting the cases together, the solution set is $[5,10]$.
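As a quick check that the completed squares really match the original radicands:

```latex
(2-\sqrt{x-1})^2 = 4 - 4\sqrt{x-1} + (x-1) = (x+3) - 4\sqrt{x-1}, \qquad
(3-\sqrt{x-1})^2 = 9 - 6\sqrt{x-1} + (x-1) = (x+8) - 6\sqrt{x-1}
```

so both substitutions are valid for every $x \geq 1$, where $\sqrt{x-1}$ is defined.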
Q: ggplot without facet The following code, from @ROLO in answer to my earlier question generates 3 plots: require(mice) require(reshape2) require(ggplot2) dt <- nhanes impute <- mice(dt, seed = 23109) # Obtain the imputed data, together with the original data imp <- complete(impute,"long", include=TRUE) # Melt into long format imp <- melt(imp, c(".imp",".id","age")) # Add a variable for the plot legend imp$Imputed<-ifelse(imp$".imp"==0,"Observed","Imputed") # Plot. Be sure to use stat_density instead of geom_density in order # to prevent what you call "unwanted horizontal and vertical lines" ggplot(imp, aes(x=value, group=.imp, colour=Imputed)) + stat_density(geom = "path",position = "identity") + facet_wrap(~variable, ncol=2, scales="free") My question is, how do I modify this to plot each one individually ? A: As Joran said, you can just use a subset of the data in each plot. ggplot(imp[imp$variable=="bmi",], aes(x=value, group=.imp, colour=Imputed)) + stat_density(geom = "path",position = "identity") ggplot(imp[imp$variable=="hyp",], aes(x=value, group=.imp, colour=Imputed)) + stat_density(geom = "path",position = "identity") ggplot(imp[imp$variable=="chl",], aes(x=value, group=.imp, colour=Imputed)) + stat_density(geom = "path",position = "identity") Alternatively, you could put these in a loop library("plyr") d_ply(imp, .(variable), function(DF) { print(ggplot(DF, aes(x=value, group=.imp, colour=Imputed)) + stat_density(geom = "path",position = "identity")) }) The downside of this approach is that it puts all the plots out one right after the other so there is no chance to see the previous ones on the screen. If you are outputting to a PDF (directly or via something like knitr), all will get written and can be seen that way.
Q: ElasticSearch Make Field non-searchable from java I am currently working on Elasticsearch through my Java application. I know how to index a Java POJO using RestHighLevelClient. How can I make only some fields searchable, rather than the complete POJO? public class Employee{ private long id; private String name; private String designation; private String address; //want to index but not searchable in elastic search } My code for indexing is below, and it is working fine: public String saveToEs(Employee employee) throws IOException { Map<String, Object> map = objectMapper.convertValue(employee, Map.class); IndexRequest indexRequest = new IndexRequest(INDEX, TYPE, employee.getId().toString()).source(map, XContentType.JSON); IndexResponse indexResponse = client.index(indexRequest, RequestOptions.DEFAULT); I need to do this in Java. Any help or a good link, please? A: Writing another answer for RestHighLevelClient, as the other answer is useful for people not using the REST client and adding this to the first answer would make it too long. Note: you are passing the type, which is deprecated in ES 7.X; I am using the ES 7.X version, so my code is written for 7.X. CreateIndexRequest request = new CreateIndexRequest("employee"); Map<String, Object> name = new HashMap<>(); name.put("type", "text"); Map<String, Object> address = new HashMap<>(); address.put("type", "text"); address.put("index", false); Map<String, Object> properties = new HashMap<>(); properties.put("name", name); properties.put("address", address); Map<String, Object> mapping = new HashMap<>(); mapping.put("properties", properties); request.mapping(mapping); CreateIndexResponse createIndexResponse = client.indices().create(request, RequestOptions.DEFAULT); Important points: I've used only 2 fields for illustration purposes. One of them is the address field, which is not searchable; to achieve that I used address.put("index", false);. The other, name, is a searchable field, so that option isn't present for it.
I've created index mapping using the Map method which is available in this official ES doc. you can check the mapping created by this code, using mapping REST API. Below is the mapping generated for this code in my system and you can see, index: false is added in the address field. { "employee": { "mappings": { "properties": { "address": { "type": "text", "index": false }, "name": { "type": "text" } } } } } You can just use the same search JSON mentioned in the previous answer, to test that it's not searchable.
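For completeness, here is a sketch of a search request you could run against this index (field names as above) to confirm the behavior:

```
POST /employee/_search
{
  "query": {
    "match": { "address": "some text" }
  }
}
```

Note that for a text field mapped with index: false, Elasticsearch does not silently return zero hits; the request fails with an error along the lines of "Cannot search on field [address] since it is not indexed", which is itself a confirmation that the field is stored but not searchable.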
Q: Retrofit 2 API Can I use local file path or json string instead of url? Hello, I am working on an Android app which uses the Retrofit API to get responses from a server. Retrofit automatically parses the JSON response and creates objects of a POJO class. I store that JSON in SQLite, and if the internet is not connected I read the JSON back from SQLite, but then I face the difficulty of having to parse the JSON manually. Is there any way to use the Retrofit library to parse JSON and make a POJO from a JSON string or file path? My code to fetch from a URL is here: @FormUrlEncoded @POST("getResponse") Observable<UserResponse> getResponse(@Field("token") String token); I want something like this if the internet is not connected: @FromStringEncoded Observable<UserResponse> getResponseOffline(@Field("token") String token); Thanks. A: You didn't mention your purpose. I use the solution below for mocking the server in an app at a very early stage of development, when the real server doesn't work yet. You can use interceptors in OkHttp, like this: OkHttpClient.Builder builder = new OkHttpClient.Builder(); builder.addInterceptor(new MockClient(context)); and MockClient looks like this: public class MockClient implements Interceptor { Context context; public MockClient(Context context) { this.context = context; } @Override public Response intercept(Chain chain) throws IOException { HttpUrl url = chain.request().url(); //here determine what to do based on the url, e.g.: switch(url.encodedPath()) { case "some/path" : String response = readJsonFileFromAssetsOrAnyOtherStorage(); return new Response.Builder() .code(200) .message("OK") .request(chain.request()) .protocol(Protocol.HTTP_1_1) .body(ResponseBody.create(MediaType.parse("application/json"), response.getBytes())) .addHeader("content-type", "application/json") .build(); } return chain.proceed(chain.request()); // no mock matched: fall through to the real network } } A: Simply use Google's Gson library, which allows you to convert JSON to POJOs and vice versa. Fetch the JSON from SQLite and parse it using Gson.
Gson gson=new Gson(); UserResponse userResponse= gson.fromJson(jsonInStringFromDb,UserResponse.class); You can also parse JSON from file using Gson. JSON to Java object, read it from a file. Gson gson = new Gson(); Staff staff = gson.fromJson(new FileReader("D:\\file.json"), Staff.class);
Q: Authors on "the Trouble with the Revolutions of the Mind" Accepting that planet Earth was not at the centre of the universe, and that the stars were just like the Sun only much further away, was a "revolution of the mind" that took centuries to accept. Accepting that species evolve and change through time has still not been fully accepted: in the USA, for instance, 38% still believe that humans were created by God (https://en.wikipedia.org/wiki/Creationism). The scientific knowledge and technology of people today exceed most Jules Verne novels, which were taken as mere entertainment in his time, only 130 years ago. Is it hard to accept that perhaps the science and technology of people of the 22nd century will exceed shows such as "the Matrix", "The X Files", or "Transcendence"? Where does this trouble accepting these revolutions of the mind come from? Is there a defect in the brain, which is not efficient at re-wiring itself for new ideas? Is it a need to believe in a much more pleasant existence? Is it perhaps that it takes effort to accept new ideas and we don't want to over-complicate our existence, or is it an ego-related thing, because the longer you have been wrong about things, the harder it gets to admit it? What if 9/11 were an inside job, or alien civilisations were real? Could those be potential "revolutions of the mind" that most people would have trouble with? I'm looking for authors that explore these ideas. "First they ignore you, then they laugh at you, then they fight you, then you win". Mahatma Gandhi "Everyone takes the limits of his own vision for the limits of the world" Arthur Schopenhauer "A man is his own easiest dupe, for what he wishes to be true he generally believes to be true."
Demosthenes 384-322 BC “Sometimes people don't want to hear the truth because they don't want their illusions destroyed.” Friedrich Nietzsche A: One such author is Georg Wilhelm Friedrich Hegel, with his image of the Owl of Minerva (Athena's owl). The 19th-century idealist philosopher Georg Wilhelm Friedrich Hegel famously noted that "the owl of Minerva spreads its wings only with the falling of the dusk", meaning that philosophy comes to understand a historical condition just as it passes away. Philosophy appears only in the "maturity of reality," because it understands in hindsight. “Philosophy, as the thought of the world, does not appear until reality has completed its formative process, and made itself ready. History thus corroborates the teaching of the conception that only in the maturity of reality does the ideal appear as counterpart to the real, apprehends the real world in its substance, and shapes it into an intellectual kingdom. When philosophy paints its grey in grey, one form of life has become old, and by means of grey it cannot be rejuvenated, but only known. The owl of Minerva takes its flight only when the shades of night are gathering.”
Q: HTML5 canvas object random path generation I have a canvas object, a circle, that currently animates along a particular path, rather like a bounce. The simple animation code is as follows: if (x + dx > canvasW || x + dx < 0) dx = -dx; if (y + dy > canvasH || y + dy < 0) dy = -dy; x += dx; y += dy; Where dx and dy are set offsets to increase the path by. I'd like to make it follow a random path, such as a fly might. How would I go about this? Are there any tutorials anyone could point me in the direction of? I've struggled to find any either here or via Google. A: You can find an implementation of the idea you proposed here. You might want to tweak it a bit but at least it's a start. :) In case you want to make the trajectory smoother, try evaluating a Bézier curve. Before that you'll have to generate a bunch of points in which to apply the algo.
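One simple way to get a fly-like wander, as a hedged sketch: keep the bounce logic but nudge dx and dy with a small random jitter each frame. The canvas size, clamp range, and starting state below are illustrative:

```javascript
// Fly-like wander: the usual bounce, plus a random nudge to the velocity.
const canvasW = 300, canvasH = 150;

function step(s) {
  // nudge velocity by up to ±0.5 per frame, clamped to keep the speed sane
  s.dx = Math.max(-3, Math.min(3, s.dx + (Math.random() - 0.5)));
  s.dy = Math.max(-3, Math.min(3, s.dy + (Math.random() - 0.5)));
  // same bounds check as the original bounce
  if (s.x + s.dx > canvasW || s.x + s.dx < 0) s.dx = -s.dx;
  if (s.y + s.dy > canvasH || s.y + s.dy < 0) s.dy = -s.dy;
  s.x += s.dx;
  s.y += s.dy;
  return s;
}

let state = { x: 150, y: 75, dx: 2, dy: 2 };
for (let i = 0; i < 1000; i++) state = step(state);
```

In an animation loop you would call step once per frame and draw the circle at (state.x, state.y); because the jitter is smaller than the clamp, the ball keeps its general direction for a while before drifting, which looks more insect-like than picking a fully random direction each frame.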
Q: Match when column does and does not equal value across multiple rows I have a table with a many-to-many relationship to two other tables: CREATE TABLE assoc ( id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, ref1 INT NOT NULL, ref2 INT NOT NULL, INDEX composite_key (ref1, ref2) ); I want to determine if there are associations with ref1 that match and do not match a given value for ref2. As an example, lets say I'd like to match if an association for ref1 is present with a value of 1000 and any other value for ref2: INSERT INTO assoc (ref1, ref2) VALUES (100, 10), (100, 1000); However, no match should be given if only the value 1000 is associated with ref1, or if it is solely any other value: INSERT INTO assoc (ref1,ref2) VALUES (101, 10), (102, 1000); I came up with two solutions. 1) Create a temp table with the results of rows that do match the value, then SELECT from that the rows that do not match the value, and 2) join the same table, and specify the not matching criteria from that table. CREATE TEMPORARY TABLE set SELECT ref1 FROM assoc WHERE ref2 = 1000; SELECT assoc.ref1 FROM `set` JOIN assoc ON `set`.ref1 = assoc.ref1 WHERE assoc.ref2 <> 1000; SELECT assoc.ref1 FROM assoc JOIN assoc AS `set` ON assoc.ref1 = `set`.ref1 WHERE assoc.ref2 = 1000 AND `set`.ref2 <> 1000; However, I'd like to know if there are other ways to accomplish this match? A: I think your second solution is the standard way to do what you want; I'd do it the same way. You have also added the INDEX composite_key correctly. However, you might add an additional GROUP BY to avoid that the same assoc.ref1 appears as many times as the join finds associated rows with ref2 <> 1000: SELECT assoc.ref1 FROM assoc JOIN assoc AS `set` ON assoc.ref1 = `set`.ref1 WHERE assoc.ref2 = 1000 AND `set`.ref2 <> 1000 GROUP BY assoc.ref1;
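If it helps to experiment with the logic outside MySQL, here is the same self-join idea run through Python's sqlite3 module (an illustration only; the table, data, and the alias other stand in for the question's example):

```python
# Self-join: keep ref1 values that have BOTH a ref2 = 1000 row and a row
# with some other ref2 value.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE assoc (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        ref1 INTEGER NOT NULL,
        ref2 INTEGER NOT NULL
    );
    INSERT INTO assoc (ref1, ref2) VALUES
        (100, 10), (100, 1000),   -- should match: has 1000 and another value
        (101, 10),                -- no 1000 at all
        (102, 1000);              -- only 1000
""")

rows = conn.execute("""
    SELECT assoc.ref1
    FROM assoc
    JOIN assoc AS other ON assoc.ref1 = other.ref1
    WHERE assoc.ref2 = 1000 AND other.ref2 <> 1000
    GROUP BY assoc.ref1
""").fetchall()
```

Only ref1 = 100 survives, matching the expected behavior described in the question; the GROUP BY collapses the duplicate rows the join would otherwise emit.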
Q: MySQL settings useful to speed up a mysqldump import Recently I had to import a 7 Gb MySQL dump file to a MySQL 5.6 server. The import took around 7 hours on a mono-core CPU with 1 Gb of RAM. Someone else tested the import on a MySQL server which has, amongst others, the following settings: innodb_buffer_pool_size = 8G query_cache_size = 300M I'm a bit skeptical about the relevancy of these settings (and yes, I even think that setting such a large query cache is bad). Would that make a difference? Aren't these settings used only when querying the database, and hence irrelevant for an import? If yes, which settings should be set to speed up the import of a large dump file? According to the official documentation these values should be set temporarily: unique_checks = 0 foreign_key_checks = 0 I've read here that it should be set also innodb_flush_log_at_trx_commit = 2 but I don't think it would help, because autocommit mode (flushing logs to disk for every insert) is already disabled by default in the mysqldump command (--opt option). A: SUGGESTION #1 No need to run unique_checks = 0 and foreign_key_checks = 0. See my 3-year-old post Speeding up mysqldump / reload (ASPECT #2 shows a standard header of a mysqldump. Lines 13 and 14 handle the disabling of those checks for you) SUGGESTION #2 Please note the InnoDB Architecture (Picture From Percona CTO Vadim Tkachenko) If you want to reload a MySQL Instance you should temporarily disable the Double Write Buffer. 
STEP #1 Login to the Target Server and run SET GLOBAL innodb_fast_shutdown = 0; STEP #2 Restart mysqld, setting innodb_doublewrite to OFF service mysql restart --innodb-doublewrite=OFF --innodb-fast-shutdown=0 STEP #3 Load the mysqldump into the Target Server STEP #4 Restart mysqld normally (the Double Write Buffer will be enabled again) service mysql restart Since the name "Double Write Buffer" implies two writes, InnoDB will now write data and indexes straight to the table files and skip the extra write to the Double Write Buffer within ibdata1, which may roughly double the import speed (pun intended). SUGGESTION #3 The default innodb_log_buffer_size is 8M. You need a bigger Log Buffer. Please add this line to my.cnf under the [mysqld] group header [mysqld] innodb_log_buffer_size = 128M Then, restart mysqld before the reload of the mysqldump. GIVE IT A TRY !!!
Q: How to pass text from gets.chomp to a file I want to push resources :#{get} to the bottom of resources :posts. get = gets.chomp @file = File.open('config/routes.rb','r+') myString = " resources :#{get}s " Rails.application.routes.draw do resources :users do resources :posts end # For details on the DSL available within this file, see http://guides.rubyonrails.org/routing.html end The result is: Rails.application.routes.draw do resources :users do resources :posts resources :categories end # For details on the DSL available within this file, see http://guides.rubyonrails.org/routing.html end How do I pass data from user input to a file? A: Making the assumption that there will only ever be one resources :posts in your routes file, a simple example could be done like: require 'active_support/core_ext/string/inflections' # for `pluralize` get = gets.chomp lines = File.read("config/routes.rb").split(/\n/) # find the line of the file we want to insert after. This assumes # there will only be a single `resources :posts` in your routes. index = lines.index { |line| line.strip == 'resources :posts' } # duplicate the existing line and replace 'posts' with the pluralized form # of whatever the user input to gets, we do it this way to keep indentation new_line = lines[index].gsub(/posts/, get.pluralize) # insert the new line on the line after the `resources :posts` and then write # the entire thing back out to 'config/routes.rb' lines.insert(index + 1, new_line) File.open("config/routes.rb", "w") { |f| f.write(lines.join("\n")) } Depending on what you're trying to do, though, you may find it useful to look into Rails Generators. before Rails.application.routes.draw do resources :users do resources :posts end end execute $ echo category | ruby example.rb after Rails.application.routes.draw do resources :users do resources :posts resources :categories end end
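To see the mechanics without a Rails project, here is a self-contained sketch of the same idea operating on an in-memory string; the naive pluralizer below stands in for ActiveSupport's String#pluralize and only handles the trailing-y and append-s cases:

```ruby
# Insert a new "resources" line after "resources :posts", keeping indentation.
routes = <<~ROUTES
  Rails.application.routes.draw do
    resources :users do
      resources :posts
    end
  end
ROUTES

resource = "category"  # stands in for gets.chomp
# naive pluralizer (ActiveSupport's String#pluralize is much smarter)
plural = resource.end_with?("y") ? resource.sub(/y\z/, "ies") : resource + "s"

lines = routes.split("\n")
index = lines.index { |line| line.strip == "resources :posts" }
new_line = lines[index].gsub("posts", plural)  # copy the line to keep indentation
lines.insert(index + 1, new_line)
result = lines.join("\n")
```

Swapping the string for File.read("config/routes.rb") and writing result back with File.write gives you the file-based version from the answer above.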
Q: Right to Left Languages in Java When I entered new String("<some arabic text>".getBytes(), "UTF-8");, although it was displayed exactly the way it was pasted (into the Eclipse editor), index 0 contained the rightmost character of the string. (Also, each Arabic letter was two bytes, the first byte being -40 for each. Does that indicate the sequence?) I would like to know if the Java compiler recognizes Arabic in the background, or if the Eclipse editor reorganizes Arabic literals. Or how did the debugger know this was Arabic, meaning the first letter to be read is the rightmost one and is therefore assigned index 0? A: All text is stored in writing order, so the first (rightmost) letter in Arabic should be stored at index 0. It's up to the software that displays strings to recognize that the text is Arabic and lay it out right-to-left. Also, the line of code you quote at best does nothing, and at worst corrupts the data. It encodes the given Unicode string as bytes using the system default encoding, which could be anything, and then pretends the resulting bytes represent some text in UTF-8 and decodes it.
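The -40 you observed is consistent with this. A quick illustration (in Python for brevity; the byte values are the same ones Java would report, since Java's byte is signed):

```python
# Arabic letters in the U+0600 block encode as two UTF-8 bytes whose first
# byte is 0xD8 or 0xD9; 0xD8 is -40 when read as a signed Java byte.
s = "\u0633\u0644\u0627\u0645"   # the word "salam", four Arabic letters

first_logical = s[0]             # index 0 = first letter in writing order
encoded = s.encode("utf-8")      # 2 bytes per letter here
lead_byte = encoded[0]           # 0xD8
as_java_byte = lead_byte - 256   # -40, the value seen in the debugger
```

So the repeated -40 is not a direction marker; it is just the UTF-8 lead byte shared by most letters in the Arabic Unicode block, interpreted as a signed byte.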
Q: Rx.js wait for callback to complete I am using Rx.js to process the contents of a file, make an http request for each line and then aggregate the results. However the source file contains thousands of lines and I am overloading the remote http api that I am performing the http request to. I need to make sure that I wait for the existing http request to callback before starting another one. I'd be open to batching and performing n requests at a time but for this script performing the requests in serial is sufficient. I have the following: const fs = require('fs'); const rx = require('rx'); const rxNode = require('rx-node'); const doHttpRequest = rx.Observable.fromCallback((params, callback) => { process.nextTick(() => { callback('http response'); }); }); rxNode.fromReadableStream(fs.createReadStream('./source-file.txt')) .flatMap(t => t.toString().split('\r\n')) .take(5) .concatMap(t => { console.log('Submitting request'); return doHttpRequest(t); }) .subscribe(results => { console.log(results); }, err => { console.error('Error', err); }, () => { console.log('Completed'); }); However this does not perform the http requests in serial. It outputs: Submitting request Submitting request Submitting request Submitting request Submitting request http response http response http response http response http response Completed If I remove the call to concatAll() then the requests are in serial but my subscribe function is seeing the observables before the http requests have returned. How can I perform the HTTP requests serially so that the output is as below? Submitting request http response Submitting request http response Submitting request http response Submitting request http response Submitting request http response Completed A: The problem here is probably that when you use rx.Observable.fromCallback, the function you passed in argument is executed immediately. The observable returned will hold the value passed to the callback at a later point in time. 
To have a better view of what is happening, you should use a slightly more complex simulation : number your requests, have them return an actual (different for each request) result that you can observe through the subscription. What I posit happens here : take(5) issues 5 values map issues 5 log messages, executes 5 functions and passes on 5 observables those 5 observables are handled by concatAll and the values issued by those observables will be in order as expected. What you are ordering here is the result of the call to the functions, not the calls to the functions themselves. To achieve your aim, you need to call your observable factory (rx.Observable.fromCallback) only when concatAll subscribes to it and not at creation time. For that you can use defer : https://github.com/Reactive-Extensions/RxJS/blob/master/doc/api/core/operators/defer.md So your code would turn into : rxNode.fromReadableStream(fs.createReadStream('./path-to-file')) .map(t => t.toString().split('\r\n')) .flatMap(t => t) .take(5) .map(t => { console.log('Submitting request'); return Observable.defer(function(){return doHttpRequest(t);}) }) .concatAll() .subscribe(results => { console.log(results); }, err => { console.error('Error', err); }, () => { console.log('Completed'); }); You can see a similar issue with an excellent explanation here : How to start second observable *only* after first is *completely* done in rxjs Your log is likely to still show 5 consecutive 'Submitting request' messages. But your request should be executed one after the other has completed as you wish.
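The timing difference between creating the work eagerly and deferring it can be shown without Rx at all. A hedged plain-JavaScript sketch, where doRequest stands in for the fromCallback-wrapped call:

```javascript
// Eager vs. deferred invocation: the heart of the fromCallback/defer issue.
const log = [];
function doRequest(id) { log.push("request " + id); return "response " + id; }

// Eager (like calling doHttpRequest(t) inside map/concatMap directly):
// the requests fire while the pipeline is still being assembled.
const eager = [1, 2].map(id => doRequest(id));
const logAfterEager = log.slice();          // already ["request 1", "request 2"]

// Deferred (like Observable.defer): wrap the call in a thunk, so nothing
// runs until something "subscribes" by invoking it.
const deferred = [3, 4].map(id => () => doRequest(id));
const logBeforeSubscribe = log.slice();     // unchanged: still only 1 and 2

// "Subscribe" one at a time, in order:
const results = deferred.map(run => run());
```

The eager form is what produces five "Submitting request" lines up front; the deferred form is what lets concatAll start each request only after the previous one completes.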
Q: Azure and File.CreateText: FileNotFoundException: Could not find file I have a simple .Net Core MVC web application that I deploy to Azure. In the application, I am creating a little text file called "test.txt" using File.CreateText(). This works fine on my local PC, but when I deploy it to Azure, I get a strange message: "Could not find file 'D:\home\site\wwwroot\wwwroot\test.txt'." Indeed, the file does not exist--that's why I'm creating it. Also, it appears someone else on SO is having a similar problem: FileNotFoundException when using System.IO.Directory.CreateDirectory() Do I not have write permissions? Why is Azure not letting me create the file? Code (in Startup.cs): public void Configure(IApplicationBuilder app, IHostingEnvironment env){ using (var sw = File.CreateText(env.WebRootPath + "/test.txt")) { } } Screenshot of error: A: I found the reason for this error shortly after posting this question, but forgot to post the answer: It turns out that when you deploy to Azure, your site runs from a temporary directory, which gets wiped and re-created every time you deploy. Because of this, Azure disables the creation and editing of files in the app's directory, since they're just going to get wiped upon your next deployment (it would be nice if the error message Azure displayed was a little more informative). Therefore, the way to create and edit files on your Azure app and have them persist across deployments is to use one of the following: Database BLOB storage (simply store your files as byte arrays in a database. doesn't cost any money if you are already using a database) Azure BLOB storage (store your files to a special cloud on Azure. costs money) I read this on some site shortly after posting this question but I can't remember where, so I apologize for not having the source.
Q: What is wrong with my regular expression in R? I am trying to extract the label, name, address, city, zip, and distance from the following text: A Carl's Jr. 308 WESTWOOD PLAZA LOS ANGELES, CA 90095-8355 0.0 mi. B Carl's Jr. 2727 SANTA MONICA SANTA MONICA, CA 90404-2407 4.8 mi. ... ... Here is my regular expression pattern and code, but I get a matrix of NA values. p <- "(^[AZ]\\n)^(\\w+.\\w+\\s\\w+.\\s*\\w*)\\n^(\\d+\\w+\\s*\\w*\\s*\\w*)\\n^(\\w+\\s*\\w*),\\s(CA)\\s(\\d+-*\\d*)\\n^(\\d+.\\d*)\\smi." matches <- str_match(cj, p) Do I have a syntax error in my pattern? A: Maybe try strsplit() instead. See regex101 for an explanation of the regex used below. Afterwards, we can figure out how many rows there will be by finding the number of single character elements. s <- strsplit(x, "\n+|, | (?=[0-9]+)", perl = TRUE)[[1]] as.data.frame(matrix(s, sum(nchar(s) == 1), byrow = TRUE)) # V1 V2 V3 V4 V5 V6 V7 # 1 A Carl's Jr. 308 WESTWOOD PLAZA LOS ANGELES CA 90095-8355 0.0 mi. # 2 B Carl's Jr. 2727 SANTA MONICA SANTA MONICA CA 90404-2407 4.8 mi. Data: x <- "A\n\nCarl's Jr.\n\n308 WESTWOOD PLAZA\n\nLOS ANGELES, CA 90095-8355\n\n0.0 mi.\n\nB\n\nCarl's Jr.\n\n2727 SANTA MONICA\n\nSANTA MONICA, CA 90404-2407\n\n4.8 mi."
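The split pattern from the answer can be sanity-checked outside R as well; here it is run through Python's re module for illustration (both engines support the lookahead used to split "CA 90095-8355" without touching the other spaces):

```python
# Split on blank lines, ", " before the state, or a space followed by digits.
import re

x = ("A\n\nCarl's Jr.\n\n308 WESTWOOD PLAZA\n\nLOS ANGELES, CA 90095-8355\n\n"
     "0.0 mi.\n\nB\n\nCarl's Jr.\n\n2727 SANTA MONICA\n\n"
     "SANTA MONICA, CA 90404-2407\n\n4.8 mi.")

tokens = re.split(r"\n+|, | (?=[0-9]+)", x)
# 7 fields per record: label, name, address, city, state, zip, distance
records = [tokens[i:i + 7] for i in range(0, len(tokens), 7)]
```

The lookahead " (?=[0-9]+)" is what keeps "308 WESTWOOD PLAZA" intact while still separating "CA" from the zip code, since only a space directly followed by digits counts as a delimiter.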
Q: How can I list all sub dir in my internal storage? I want a list of all the sub-directories in internal storage that I already created in another activity. I want to do something like getFilesDir().listFiles(); but this one lists the files in the root, and I want to list directories. To do that I can use subDir = getDir("nameOfDir", MODE_PRIVATE); and this last code works, but only with "static" names. I want to do this dynamically. A: This prints all the file names and directory names using recursion: public static void main(String[] args) throws IOException, URISyntaxException{ getFiles("your/path"); } public static void getFiles(String path){ File folder = new File(path); File[] listOfFiles = folder.listFiles(); if (listOfFiles == null) return; // listFiles() returns null if path is not a readable directory for (int i = 0; i < listOfFiles.length; i++) { if (listOfFiles[i].isFile()) { System.out.println("File " + listOfFiles[i].getName()); } else if (listOfFiles[i].isDirectory()) { System.out.println("Directory " + listOfFiles[i].getName()); getFiles(listOfFiles[i].getPath()); } } }
Q: input datetime set string.format to model item I have an input of type datetime <input type="datetime" id="DATA_END_@id" value="@String.Format("dd/MM/yyyy",item.DATA_END_PREZZATURA.ToString())" /> I need to set the model item value in the datetime format dd/MM/yyyy. What is the right syntax? Thank you! In the end this was the solution: @{ string value_d_s = ""; DateTime? dateOrNull = item.DATA_END_PREZZATURA; if (dateOrNull != null) { DateTime date_d_s = dateOrNull.Value; value_d_s = date_d_s.ToString("dd/MM/yyyy"); } } <input type="datetime" id="DATA_END_@id" value="@value_d_s" /> A: Use an overload of the DateTime.ToString() method to format your date. <input type="datetime" id="DATA_END_@id" value="@item.DATA_END_PREZZATURA.ToString("dd/MM/yyyy")" /> If item.DATA_END_PREZZATURA is a string instead of a DateTime, you will need to use DateTime.TryParse(): @{ DateTime test; DateTime.TryParse(item.DATA_END_PREZZATURA, out test); } <input type="datetime" id="DATA_END_@id" value="@test.ToString("dd/MM/yyyy")" /> Note you will need to handle the case when TryParse fails.