Q: jquery form field write issue I am updating a hidden field in a form with jQuery. It works just fine for the first three clicks; after that it shows only the first element's value. Here is a working jsfiddle link: when the user clicks on a tab, the value of fileorurl changes to 1, 2 or 3. It works for the first 3-5 clicks, but after that the value sticks at 1. Here is the HTML: <div class="container" id="upload"> <div class="row"> <form id="upload-form2" action="http://way2enjoy.com/modules/compress-png/converturl16.php" name="arjun" method="post" enctype="multipart/form-data"> <div id="tab" class="btn-group" data-toggle="buttons"> <a href="#fileuu" class="btn btn-default active" data-toggle="tab"> <input type="radio" class="changev" value="1">File Upload </a> <a href="#urluu" class="btn btn-default" data-toggle="tab"> <input type="radio" class="changev" value="2">URL upload </a> <a href="#linkuu" class="btn btn-default" data-toggle="tab"> <input type="radio" class="changev" value="3">Website Link </a> </div> <div class="tab-content"> <div class="tab-pane active" id="fileuu"> <label for="comment">Click below to choose files:</label> <input type="file" name="file[]" multiple id="input" class="file_input"> </div> <div class="tab-pane" id="urluu"> <label for="comment">Image Urls to Compress:</label> <textarea class="form-control" rows="2" name="urls" id="urls"></textarea> </div> <div class="tab-pane" id="linkuu"> <label for="comment">Website URL to Analyze:</label> <textarea class="form-control" rows="2" name="file[]" id="urls"></textarea> </div> </div> <div class="alert alert-warning" role="alert" id="loading_progress"></div> <br> <input type="submit" value="Compress »" class="btn btn-primary btn-lg pull-right" id="upload_btn" name="upload_btn"> <input type="hidden" name="fileorurl" id="myField" value=""> </form> </div> </div> Here is the JavaScript: <script> $('.changev').change(function () { var valueuu = $(this).val(); $("#myField").val(valueuu); }); </script> Any help will be
useful, thanks! A: Your radio inputs are not updating for some strange reason after a few clicks. You can use the click event on their parent anchors instead: $('#tab a').on('click', function(){ var valueuu = $(this).find('input').val(); $("#myField").val(valueuu); }); Fiddle
{ "pile_set_name": "StackExchange" }
Q: C++ ODBC SQL_ATTR_PARAMS_STATUS_PTR missing in header I am trying to bind a structure with Rowset binding ala: http://msdn.microsoft.com/en-us/library/aa215456(v=sql.80).aspx THIS IS AN MSDN TYPO! A: It's in sqlext.h, so: #include "sqlext.h" Unless you wish to have the difference between sql.h and sqlext.h as your specialist subject in a quiz program, you are better off always #including both of them, without thinking.
Q: Using TaskCompletionSource Within An await Task.Run Call I am getting unexpected behavior that I would like to shed some light on. I've created a simple example to demonstrate the problem. I call an async function using Task.Run, which will continuously generate results, and uses IProgress to deliver updates to the UI. But I want to wait until after the UI actually updates to continue, so I tried using TaskCompletionSource as suggested in some other posts (this seemed somewhat similar: Is it possible to await an event instead of another async method?.) I'm expecting the initial Task.Run to wait, but what is happening is the await happening inside seems to move it onward and "END" happens after the first iteration. Start() is the entry point: public TaskCompletionSource<bool> tcs; public async void Start() { var progressIndicator = new Progress<List<int>>(ReportProgress); Debug.Write("BEGIN\r"); await Task.Run(() => this.StartDataPush(progressIndicator)); Debug.Write("END\r"); } private void ReportProgress(List<int> obj) { foreach (int item in obj) { Debug.Write(item + " "); } Debug.Write("\r"); Thread.Sleep(500); tcs.TrySetResult(true); } private async void StartDataPush(IProgress<List<int>> progressIndicator) { List<int> myList = new List<int>(); for (int i = 0; i < 3; i++) { tcs = new TaskCompletionSource<bool>(); myList.Add(i); Debug.Write("Step " + i + "\r"); progressIndicator.Report(myList); await this.tcs.Task; } } With this I get: BEGIN Step 0 0 END Step 1 0 1 Step 2 0 1 2 instead of what I want to get which is: BEGIN Step 0 0 Step 1 0 1 Step 2 0 1 2 END I'm assuming I am misunderstanding something about Tasks and await and how they work. I do want StartDataPush to be a separate thread, and my understanding is that it is. My end use is somewhat more complex as it involves heavy calculation, updating to a WPF UI and events signaling back that it completed, but the mechanics are the same. How can I achieve what I'm trying to do? 
A: I'm not fully understanding the goal you are trying to achieve. But the issue is StartDataPush returning void. The only time an async method should return void is when it is an event handler; otherwise it needs to return Task. The following would achieve what you expected in terms of output: public partial class MainWindow : Window { public TaskCompletionSource<bool> tcs; public MainWindow() { InitializeComponent(); } private async void ButtonBase_OnClick(object sender, RoutedEventArgs e) { var progressIndicator = new Progress<List<int>>(ReportProgress); Debug.Write("BEGIN\r"); await StartDataPush(progressIndicator); Debug.Write("END\r"); } private void ReportProgress(List<int> obj) { foreach (int item in obj) { Debug.Write(item + " "); } Debug.Write("\r"); Thread.Sleep(500); tcs.TrySetResult(true); } private async Task StartDataPush(IProgress<List<int>> progressIndicator) { List<int> myList = new List<int>(); for (int i = 0; i < 3; i++) { tcs = new TaskCompletionSource<bool>(); myList.Add(i); Debug.Write("Step " + i + "\r"); progressIndicator.Report(myList); await this.tcs.Task; } } }
Q: Should answer votes be cast based on their accessibility to the asker? I occasionally see answers to questions which appear to require a greater level of knowledge than the asker has. Should votes be cast on answers depending on how understandable they are to the asker? People who view the question (and of course the asker themself) and have a vested interest in it probably share a similar level of experience in the subject as the user who is asking. So if a user posts an answer which is well beyond the understanding of the user who asked, then I don't see how it fulfills the goal of providing coherent Q&A style content to the site -- the target demographic of the answer is different to that of the question, so who is the target demographic of the post on the whole? I recognize that it's impossible to tell what exactly the level of understanding of the asking user is, but when an elementary question is asked, I think it's pretty safe to assume that the asker doesn't have a broad knowledge of the topic. An example is this question. Of the two most highly voted answers, one involves more advanced concepts than the other. It seems to me that the very fact that the asker is asking the question indicates that their knowledge of abstract algebra is not extensive enough to understand this more complex answer, even if it is more generalized and insightful. Should I upvote/downvote on answers like these according to how well I think they suit the (asking) audience of the question? A: My personal opinion is that no, votes should be cast based on the answer's validity/clarity, independent of who asked the question. This is consistent with the StackExchange mindset of being a repository of knowledge, and not a Q&A site. We build a repository of knowledge through questions and answers, but we do not exist to simply answer questions. Thus, it makes perfect sense to upvote a correct and clear answer, regardless of whether the asker has the mathematical maturity to understand it.
A: My criterion for upvoting an answer is, "how helpful is it to me?" So if I'm the questioner, that's the criterion I would use. In most cases, a reader will have a different level of understanding than the questioner. In this case, the reader ought to decide for himself/herself whether the answer is useful. If I am the questioner, and a reader decides that a certain answer is useful, I have no problem with the reader's upvoting it, even if I do not.
Q: laravel add scheduler dynamically I have a system where the user can create background tasks via the UI. The task interval is every few hours (the user's choice in the UI). When the user creates a task via the UI, I want to add it to the scheduler dynamically. As the example states, this is static and not dynamic. protected function schedule(Schedule $schedule) { $schedule->call(function () { DB::table('recent_users')->delete(); })->daily(); } Is it possible? If not, what are the alternatives? Thanks A: I don't see why it wouldn't be possible. The Kernel::schedule method will be run every time php artisan schedule:run is run. If you set it up like the documentation, it should be every minute via a cron. * * * * * php /path/to/artisan schedule:run >> /dev/null 2>&1 With that in mind, I don't see why you can't do something like this: protected function schedule(Schedule $schedule) { // Get all tasks from the database $tasks = Task::all(); // Go through each task to dynamically set them up. foreach ($tasks as $task) { // Use the scheduler to add the task at its desired frequency $schedule->call(function() use($task) { // Run your task here $task->execute(); })->cron($task->frequency); } } Depending on what you store, you can use whatever you like here instead of the CRON method. You might have a string stored in your database that represents one of Laravel's predefined frequencies, in which case you could do something like this: $frequency = $task->frequency; // everyHour, everyMinute, twiceDaily etc. $schedule->call(function() use($task) { $task->execute(); })->$frequency(); The main thing to note here is that the schedule isn't actually scheduling in tasks in the database or in a cron that it manages. Every time the scheduler runs (every minute), it determines what to run based on the frequencies you give each task. Example: You have a task set up using ->hourly(), that is, to run on the hour, every hour.
At 00:00, the schedule runs, the ->hourly() filter passes, because the time is on the hour, so your task runs. At 00:01, the schedule runs, but this time the ->hourly() filter fails, so your task does not run.
Q: Methods of evaluating $ \sum_{k=1}^\infty \frac{(m+k)!}{k!}\frac{1}{5^k}$? I am interested in ways of evaluating the following infinite series: $$ \sum_{k=1}^\infty \frac{(m+k)!}{k!}\frac{1}{5^k}. $$ I already know the answer from Wolfram Alpha but I would like to see some methods of evaluating it, as I haven't been able to find many (any?) examples involving an infinite series with the $(m+k)!$ in the numerator and the $k!$ in the denominator; it seems that it is more common to find $k!$ in the numerator and $(m+k)!$ in the denominator. So what are some methods that can be used to evaluate this series? A: Hint Consider the series $$x^m\sum_{k = 1}^\infty x^k = \sum_{k = 1}^\infty x^{m + k}$$ then do some differentiation. What do you get? A: This is the non-calculus way. Let the sum be $S_m$. Then, \begin{align} & 5S_m-S_m =\sum_{k=0}^{\infty}\frac{(m+k+1)\cdots(k+2)}{5^k}-\sum_{k=1}^{\infty}\frac{(m+k)\cdots(k+1)}{5^k}\\ =&(m+1)!+m\sum_{k=1}^{\infty}\frac{(m+k)\cdots(k+2)}{5^k}\\ =&(m+1)!+5m\sum_{k=2}^{\infty}\frac{(m-1+k)\cdots(k+1)}{5^k}\\ =& (m+1)!+5m\left(S_{m-1}-\frac{m!}{5}\right)=m!+5mS_{m-1}\\ \implies& S_m=\frac{m!}{4}+\frac{5m}{4}S_{m-1}\tag{1}\\ \implies&S_m=\frac{m!}{4}+\frac{5m}{4}\left(\frac{(m-1)!}{4}+\frac{5(m-1)}{4}S_{m-2}\right)\\ =&\frac{m!}{4}+\frac{5m!}{4^2}+\frac{5^2m(m-1)}{4^2}S_{m-2}\\ =&\frac{m!}{4}+\frac{5m!}{4^2}+\frac{5^2m!}{4^3}+\frac{5^3m(m-1)(m-2)}{4^3}S_{m-3}\\ =&\frac{m!}{4}\sum_{k=0}^n\left(\frac{5}{4}\right)^k+\frac{5^{n+1}m\cdots(m-n)}{4^{n+1}}S_{m-n-1} \end{align} Setting $n=m-1$ gives us, $$ S_m=\frac{m!}{4}\sum_{k=0}^{m-1}\left(\frac{5}{4}\right)^k+m!\left(\frac{5}{4}\right)^mS_0 $$ As $S_0$ is a geometric series with value $\frac{1}{4}$ our expression becomes, $$ S_m=\frac{m!}{4}\sum_{k=0}^{m}\left(\frac{5}{4}\right)^k=m!\left(\left(\frac{5}{4}\right)^{m+1}-1\right) $$
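The closed form can also be checked numerically. Here is a quick sketch (mine, not from the answers above) comparing a truncated version of the sum against $m!\left(\left(\frac{5}{4}\right)^{m+1}-1\right)$:

```python
from math import factorial

def partial_sum(m, terms=200):
    # Truncation of sum_{k>=1} (m+k)!/k! * 5^(-k); the terms decay like k^m / 5^k
    return sum(factorial(m + k) // factorial(k) / 5 ** k for k in range(1, terms + 1))

def closed_form(m):
    # S_m = m! * ((5/4)^(m+1) - 1)
    return factorial(m) * ((5 / 4) ** (m + 1) - 1)

for m in range(6):
    assert abs(partial_sum(m) - closed_form(m)) < 1e-6 * closed_form(m)
```

For $m=0$ both sides give $\frac{1}{4}$, matching the geometric series $S_0$ used in the derivation.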
Q: Determine if the end of a string overlaps with beginning of a separate string I want to find if the ending of a string overlaps with the beginning of a separate string. For example if I have these two strings: string_1 = 'People say nothing is impossible, but I' string_2 = 'but I do nothing every day.' How do I find that the "but I" part at the end of string_1 is the same as the beginning of string_2? I could write a method to loop over the two strings, but I'm hoping for an answer that has a Ruby string method that I missed or a Ruby idiom. A: Set MARKER to some string that never appears in your string_1 and string_2. There are ways to do that dynamically, but I assume you can come up with some fixed such string in your case. I assume: MARKER = "@@@" to be safe for your case. Change it depending on your use case. Then, string_1 = 'People say nothing is impossible, but I' string_2 = 'but I do nothing every day.' (string_1 + MARKER + string_2).match?(/(.+)#{MARKER}\1/) # => true string_1 = 'People say nothing is impossible, but I' string_2 = 'but you do nothing every day.' (string_1 + MARKER + string_2).match?(/(.+)#{MARKER}\1/) # => false
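For comparison outside Ruby, the underlying check ("the longest suffix of one string that is also a prefix of the other") can be written as a plain loop. A Python sketch of that idea, using the question's example strings:

```python
def overlap(a: str, b: str) -> str:
    # Return the longest suffix of `a` that is also a prefix of `b`
    for i in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:i]):
            return b[:i]
    return ""

string_1 = 'People say nothing is impossible, but I'
string_2 = 'but I do nothing every day.'
print(overlap(string_1, string_2))  # but I
```

With the second example pair ('but you do nothing every day.') it returns the empty string, matching the regex's false result.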
Q: reprint capture.output(glimpse(df)) to look the same as glimpse(df) tibble::glimpse() provides easy-to-read printed output: library(tidyverse) glimpse(mtcars) #> Observations: 32 #> Variables: 11 #> $ mpg <dbl> 21.0, 21.0, 22.8, 21.4, 18.7, 18.1, 14.3, 24.4, 22.8, 19.... #> $ cyl <dbl> 6, 6, 4, 6, 8, 6, 8, 4, 4, 6, 6, 8, 8, 8, 8, 8, 8, 4, 4, ... #> $ disp <dbl> 160.0, 160.0, 108.0, 258.0, 360.0, 225.0, 360.0, 146.7, 1... #> $ hp <dbl> 110, 110, 93, 110, 175, 105, 245, 62, 95, 123, 123, 180, ... #> $ drat <dbl> 3.90, 3.90, 3.85, 3.08, 3.15, 2.76, 3.21, 3.69, 3.92, 3.9... #> $ wt <dbl> 2.620, 2.875, 2.320, 3.215, 3.440, 3.460, 3.570, 3.190, 3... #> $ qsec <dbl> 16.46, 17.02, 18.61, 19.44, 17.02, 20.22, 15.84, 20.00, 2... #> $ vs <dbl> 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, ... #> $ am <dbl> 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, ... #> $ gear <dbl> 4, 4, 4, 3, 3, 3, 3, 4, 4, 4, 4, 3, 3, 3, 3, 3, 3, 4, 4, ... #> $ carb <dbl> 4, 4, 1, 1, 2, 1, 4, 2, 2, 4, 4, 3, 3, 3, 4, 4, 4, 1, 2, ... Since glimpse(df) just returns df, I'm using capture.output to save this text. But when I try to reprint this text later, such as with cat(), the line breaks don't replicate: cat(capture.output(glimpse(mtcars))) #> Observations: 32 Variables: 11 $ mpg <dbl> 21.0, 21.0, 22.8, 21.4, 18.7, 18.1, 14.3, 24.4, 22.8, 19.... $ cyl <dbl> 6, 6, 4, 6, 8, 6, 8, 4, 4, 6, 6, 8, 8, 8, 8, 8, 8, 4, 4, ... $ disp <dbl> 160.0, 160.0, 108.0, 258.0, 360.0, 225.0, 360.0, 146.7, 1... $ hp <dbl> 110, 110, 93, 110, 175, 105, 245, 62, 95, 123, 123, 180, ... $ drat <dbl> 3.90, 3.90, 3.85, 3.08, 3.15, 2.76, 3.21, 3.69, 3.92, 3.9... $ wt <dbl> 2.620, 2.875, 2.320, 3.215, 3.440, 3.460, 3.570, 3.190, 3... $ qsec <dbl> 16.46, 17.02, 18.61, 19.44, 17.02, 20.22, 15.84, 20.00, 2... $ vs <dbl> 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, ... $ am <dbl> 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, ... 
$ gear <dbl> 4, 4, 4, 3, 3, 3, 3, 4, 4, 4, 4, 3, 3, 3, 3, 3, 3, 4, 4, ... $ carb <dbl> 4, 4, 1, 1, 2, 1, 4, 2, 2, 4, 4, 3, 3, 3, 4, 4, 4, 1, 2, ... How can I reprint the saved glimpse text to look the same as the original? I'm open to saving the output a different way if that's best. A: Use sep = "\n" with cat(): cat(capture.output(glimpse(mtcars)), sep = "\n") #> Observations: 32 #> Variables: 11 #> $ mpg <dbl> 21.0, 21.0, 22.8, 21.4, 18.7, 18.1, 14.3, 24.4, 22.8, 19.... #> $ cyl <dbl> 6, 6, 4, 6, 8, 6, 8, 4, 4, 6, 6, 8, 8, 8, 8, 8, 8, 4, 4, ... #> $ disp <dbl> 160.0, 160.0, 108.0, 258.0, 360.0, 225.0, 360.0, 146.7, 1... #> $ hp <dbl> 110, 110, 93, 110, 175, 105, 245, 62, 95, 123, 123, 180, ... #> $ drat <dbl> 3.90, 3.90, 3.85, 3.08, 3.15, 2.76, 3.21, 3.69, 3.92, 3.9... #> $ wt <dbl> 2.620, 2.875, 2.320, 3.215, 3.440, 3.460, 3.570, 3.190, 3... #> $ qsec <dbl> 16.46, 17.02, 18.61, 19.44, 17.02, 20.22, 15.84, 20.00, 2... #> $ vs <dbl> 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, ... #> $ am <dbl> 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, ... #> $ gear <dbl> 4, 4, 4, 3, 3, 3, 3, 4, 4, 4, 4, 3, 3, 3, 3, 3, 3, 4, 4, ... #> $ carb <dbl> 4, 4, 1, 1, 2, 1, 4, 2, 2, 4, 4, 3, 3, 3, 4, 4, 4, 1, 2, ...
Q: Pandas map (reorder/rename) columns using JSON template I have a data frame like so:

|customer_key|order_id|subtotal|address        |
------------------------------------------------
|12345       |O12356  |123.45  |123 Road Street|
|10986       |945764  |70.00   |634 Road Street|
|32576       |678366  |29.95   |369 Road Street|
|67896       |198266  |837.69  |785 Road Street|

And I would like to reorder/rename the columns based on the following JSON that contains the current column name and the desired column name:

{
    "customer_key": "cust_id",
    "order_id": "transaction_id",
    "address": "shipping_address",
    "subtotal": "subtotal"
}

to have the resulting Dataframe:

|cust_id|transaction_id|shipping_address|subtotal|
--------------------------------------------------
|12345  |O12356        |123 Road Street |123.45  |
|10986  |945764        |634 Road Street |70.00   |
|32576  |678366        |369 Road Street |29.95   |
|67896  |198266        |785 Road Street |837.69  |

Is this something that's possible? If it makes it easier, the order of the columns isn't critical. A: For renaming and ordering you would need to reindex after renaming: df.rename(columns=d).reindex(columns=d.values()) or: df.reindex(columns=d.keys()).rename(columns=d)
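A runnable version of the first suggestion, with the question's mapping loaded into a dict named d and a couple of the sample rows (dicts preserve insertion order in Python 3.7+, which is what imposes the column order here):

```python
import pandas as pd

# The JSON template: current name -> desired name
d = {
    "customer_key": "cust_id",
    "order_id": "transaction_id",
    "address": "shipping_address",
    "subtotal": "subtotal",
}

df = pd.DataFrame({
    "customer_key": [12345, 10986],
    "order_id": ["O12356", "945764"],
    "subtotal": [123.45, 70.00],
    "address": ["123 Road Street", "634 Road Street"],
})

# Rename first, then reindex to impose the order of the template's values
out = df.rename(columns=d).reindex(columns=list(d.values()))
print(list(out.columns))  # ['cust_id', 'transaction_id', 'shipping_address', 'subtotal']
```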
Q: Binding value not passed to user control in WPF I've looked long and hard and am stuck. I'm trying to pass a parameter from Window to UserControl1 via a binding from Window. In the MainWindow, the UserControl1 is included twice, once passing the parameter MyCustom via a binding on MyValue, again with a literal. Passing with the binding has no effect on UserControl1. MyCustom dependency property is not changed. With the literal, it works as expected. I'm very perplexed. I've copied the example in https://stackoverflow.com/a/21718694/468523 but no joy. There must be something simple I'm missing. Sorry about all the code I copied but the devil is often in the details .. MainWindow.xaml <Window x:Class="MyParamaterizedTest3.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:local="clr-namespace:MyParamaterizedTest3" mc:Ignorable="d" Title="MainWindow" Height="350" Width="525" DataContext="{Binding RelativeSource={RelativeSource Self}}"> <Grid HorizontalAlignment="Center" VerticalAlignment="Center"> <StackPanel> <Rectangle Height="20"/> <local:UserControl1 MyCustom="{Binding MyValue, UpdateSourceTrigger=PropertyChanged}"/> <Rectangle Height="20"/> <local:UserControl1 MyCustom="Literal Stuff"/> <Rectangle Height="20"/> <StackPanel Orientation="Horizontal"> <TextBlock Text="MainWindow: "/> <TextBlock Text="{Binding MyValue, UpdateSourceTrigger=PropertyChanged}"/> </StackPanel> </StackPanel> </Grid> </Window> MainWindow.xaml.cs namespace MyParamaterizedTest3 { public partial class MainWindow : INotifyPropertyChanged { public MainWindow() { InitializeComponent(); } public string MyValue { get => _myValue; set => SetField(ref _myValue, value); } private string _myValue= "First things first"; public event PropertyChangedEventHandler PropertyChanged; 
protected bool SetField<T>(ref T field, T value, [CallerMemberName] string propertyName = null) { if (EqualityComparer<T>.Default.Equals(field, value)) { return false; } field = value; PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName)); return true; } } } UserControl1.xaml (corrected below) <UserControl x:Class="MyParamaterizedTest3.UserControl1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:local="clr-namespace:MyParamaterizedTest3" mc:Ignorable="d" d:DesignHeight="300" d:DesignWidth="300" DataContext="{Binding RelativeSource={RelativeSource Self}}" > <Grid HorizontalAlignment="Center" VerticalAlignment="Center"> <Border BorderThickness="3" BorderBrush="Black"> <StackPanel> <TextBlock Text="{Binding MyCustom, UpdateSourceTrigger=PropertyChanged, FallbackValue=mycustom}"></TextBlock> </StackPanel> </Border> </Grid> </UserControl> UserControl1.xaml.cs (corrected below) namespace MyParamaterizedTest3 { public partial class UserControl1 : INotifyPropertyChanged { public UserControl1() { InitializeComponent(); } public static readonly DependencyProperty MyCustomProperty = DependencyProperty.Register("MyCustom", typeof(string), typeof(UserControl1)); public string MyCustom { get { return this.GetValue(MyCustomProperty) as string; } set { this.SetValue(MyCustomProperty, value); } } public event PropertyChangedEventHandler PropertyChanged; protected bool SetField<T>(ref T field, T value, [CallerMemberName] string propertyName = null) { if (EqualityComparer<T>.Default.Equals(field, value)) { return false; } field = value; PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName)); return true; } } } Corrected UserControl1.xaml (per Ed Plunkett) <UserControl x:Class="MyParamaterizedTest3.UserControl1" 
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" mc:Ignorable="d" d:DesignHeight="300" d:DesignWidth="300" > <Grid HorizontalAlignment="Center" VerticalAlignment="Center"> <Border BorderThickness="3" BorderBrush="Black"> <StackPanel> <TextBlock Text="{Binding MyCustom, RelativeSource={RelativeSource AncestorType=UserControl}, FallbackValue=mycustom}"></TextBlock> </StackPanel> </Border> </Grid> </UserControl> A: In the window XAML, the bindings on the usercontrol instance use the usercontrol's DataContext as their source, by default. You're assuming that it's inheriting its datacontext from the window. But here's this in the UserControl: DataContext="{Binding RelativeSource={RelativeSource Self}}" That breaks all the bindings the parent gives it. So don't do that.
Use relativesource: <UserControl x:Class="MyParamaterizedTest3.UserControl1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:local="clr-namespace:MyParamaterizedTest3" mc:Ignorable="d" d:DesignHeight="300" d:DesignWidth="300" > <Grid HorizontalAlignment="Center" VerticalAlignment="Center"> <Border BorderThickness="3" BorderBrush="Black"> <StackPanel> <TextBlock Text="{Binding MyCustom, RelativeSource={RelativeSource AncestorType=UserControl}, FallbackValue=mycustom}"></TextBlock> </StackPanel> </Border> </Grid> </UserControl> Also: UpdateSourceTrigger=PropertyChanged doesn't serve any purpose on a binding to a property that never updates its source, so that can be omitted. As we discussed in comments, INotifyPropertyChanged isn't needed for dependency properties. It's immensely frustrating when bindings just don't work, because how do you debug them? You can't see anything. The critical thing is where is it looking for this property? You can get diagnostic information like this: <TextBlock Text="{Binding MyCustom, PresentationTraceSources.TraceLevel=High, FallbackValue=mycustom}"></TextBlock> That will emit a great deal of debugging information to the Output pane of Visual Studio at runtime. It will tell you exactly what the Binding is trying to do, step by step, what it finds, and where it fails. The window can get away with setting its own DataContext to Self because it has no parent, so it's not stepping on an inherited DataContext. However, the window can and should use RelativeSource itself -- or better yet, write a main viewmodel class (you know how to implement INPC already), move the window's properties to the main viewmodel, and assign an instance of the viewmodel to the window's DataContext.
Q: Tools for Version control for Xcode For my iPhone app I want to use version control with Xcode, so can anyone please suggest some user-friendly tools that would help me configure version control for Xcode? Thanks. A: Xcode 3.x includes built-in support for Subversion. You may find that the developer previews of Xcode 4 also include Git support. Xcode 3.x SVN support is configured via the "SCM" tab in the application's Preferences dialog. Xcode 4 is under NDA still, so you'll have to figure out how to do it yourself.
Q: WPF - Dynamically add button next to textbox I am creating Label, Textbox and a button dynamically. I need Button to appear in the same line as textbox to its right. This is the code i am using: Label lbl = new Label() { Content = "Some Label", HorizontalAlignment = HorizontalAlignment.Left, VerticalAlignment = VerticalAlignment.Top, HorizontalContentAlignment = HorizontalAlignment.Center, VerticalContentAlignment = VerticalAlignment.Center, Height = 28, }; TextBox tb = new TextBox() { Text = "Some Text", IsReadOnly = true, }; Button btn = new Button() { Content = "Click Me", HorizontalAlignment = HorizontalAlignment.Left Margin = new Thickness(tb.ActualWidth), }; I am assigning Button Margin to the Right of TextBox but it still appears in the next line under the textbox. What am i doing wrong here? A: You can use StackPanel to solve your problem: StackPanel spMain = new StackPanel() { Orientation = Orientation.Vertical }; Label lbl = new Label() { Content = "Some Label", HorizontalAlignment = HorizontalAlignment.Left, VerticalAlignment = VerticalAlignment.Top, HorizontalContentAlignment = HorizontalAlignment.Center, VerticalContentAlignment = VerticalAlignment.Center, Height = 28, }; StackPanel spInner = new StackPanel() { Orientation = Orientation.Horizontal }; TextBox tb = new TextBox() { Text = "Some Text", IsReadOnly = true, }; Button btn = new Button() { Content = "Click Me", HorizontalAlignment = HorizontalAlignment.Left, Margin = new Thickness(tb.ActualWidth), }; spInner.Children.Add(tb); spInner.Children.Add(btn); spMain.Children.Add(lbl); spMain.Children.Add(spInner); You can check following link for more information: http://msdn.microsoft.com/en-us/library/system.windows.controls.stackpanel.orientation.aspx
Q: How can a book get a Kirkus Star, yet have no sales? How can a 3-year-old book receive a positive rating from Kirkus, and even earn a Kirkus Star, yet still have almost no sales? I have a book coming out in a few months, and I thought if I had a positive Kirkus review, I would just promote it, and put everything on cruise control from there. The book to which I'm referring is called "Juggle and Hide." Its sales rank is over a million! Anything under 10,000 is considered good. This doesn't make sense to me. Even when you type the name of the book into Amazon's search field, you can type "Juggle and Hid" (without the "e" to complete the word), and it STILL doesn't autopopulate that single last letter. Even with the best reviews, it's as if the book doesn't exist! Can someone please explain this to me? A: A good review means the reviewer liked it. It does not mean anybody else did. Something can be intensely liked by a small group of people and ignored by everyone else. It can be a very fine example of a kind of literature that appeals to very few people. The my-childhood-was-wacky-because-my-parents-were-awful genre is probably one of those. And three years is a long time. Everyone who was ever going to buy a copy may already have one by now.
Q: Please explain how this wake-on-LAN script works I found this PowerShell code on a blog a couple months ago. It sends wake-on-LAN signals to the MAC address of your choice without using external programs. I commented on the blog post and asked the author to describe the logic behind the script because I was curious about it. I went back to the blog post at a later date to see if the author replied to my comment. I was surprised to see that I was redirected to a page where the author said he lost his blog due to a crash. I can't remember the details of it, but I don't think I have that blog bookmarked anymore. So now I would like to request to have the brilliant minds at Stack Overflow look at this code and explain its logic to me. A comment for each line would be fantastic. I'm quite curious to know how this works. It appears to be more robust than other scripts that I've found in that it works across subnets. I don't know much about networking, though. One of the things I'm most curious about is the for loop at the end. Why send the signal multiple times? And why on different ports? But I really would like to know the logic behind the entire script. 
The code: param ( $targetMac, $network = [net.ipaddress]::Broadcast, $subnet = [net.ipaddress]::Broadcast ) try { if($network.gettype().equals([string])) { $network = [net.ipaddress]::Parse($network); } if($subnet.gettype().equals([string])) { $subnet = [net.ipaddress]::Parse($subnet); } $broadcast = new-object net.ipaddress (([system.net.ipaddress]::parse("255.255.255.255").address -bxor $subnet.address -bor $network.address)) $mac = [Net.NetworkInformation.PhysicalAddress]::Parse($targetMac.toupper().replace(".","")) $u = New-Object net.sockets.udpclient $ep = New-Object net.ipendpoint $broadcast, 0 $ep2 = New-Object net.ipendpoint $broadcast, 7 $ep3 = New-Object net.ipendpoint $broadcast, 9 $payload = [byte[]]@(255,255,255, 255,255,255); $payload += ($mac.GetAddressBytes()*16) for($i = 0; $i -lt 10; $i++) { $u.Send($payload, $payload.Length, $ep) | Out-Null $u.Send($payload, $payload.Length, $ep2) | Out-Null $u.Send($payload, $payload.Length, $ep3) | Out-Null sleep 1; } } catch { $Error | Write-Error; } A: #These are the parameters to the script. The only mandatory param here is the mac address #[net.ipaddress]::Broadcast will resolve to something like 255.255.255.255 param ( $targetMac, $network = [net.ipaddress]::Broadcast, $subnet = [net.ipaddress]::Broadcast ) #We start the try, catch error handling here. #if something in try block fails, the catch block will write the error try { #This will evaluate to False. Hence, $network will have whatever was passed through params or the default value #in this case the default value is 255.255.255.255 if($network.gettype().equals([string])) { $network = [net.ipaddress]::Parse($network); } #This will evaluate to False. Hence, $subnet will have whatever was passed through params or the default value #in this case the default value is 255.255.255.255 if($subnet.gettype().equals([string])) { $subnet = [net.ipaddress]::Parse($subnet); } #Not sure if this is really required here.
But, assuming that the default value for both $network and $subnet is 255.255.255.255, #this will result in $broadcast set to 255.255.255.255 $broadcast = new-object net.ipaddress (([system.net.ipaddress]::parse("255.255.255.255").address -bxor $subnet.address -bor $network.address)) #This assumes that you used . as the delimiter in the MAC address and removes it from the MAC address $mac = [Net.NetworkInformation.PhysicalAddress]::Parse($targetMac.toupper().replace(".","")) #Create a new object of type net.sockets.udpclient $u = New-Object net.sockets.udpclient #WOL magic packet can be sent on port 0, 7, or 9 #Create an endpoint for the broadcast address at port 0 $ep = New-Object net.ipendpoint $broadcast, 0 #Create an endpoint for the broadcast address at port 7 $ep2 = New-Object net.ipendpoint $broadcast, 7 #Create an endpoint for the broadcast address at port 9 $ep3 = New-Object net.ipendpoint $broadcast, 9 #Create a payload packet #First, create a byte array $payload = [byte[]]@(255,255,255, 255,255,255); #add the mac address to the above byte array $payload += ($mac.GetAddressBytes()*16) #Send 10 magic packets for each port number or endpoint created above. #one is more than enough if everything is configured properly for($i = 0; $i -lt 10; $i++) { $u.Send($payload, $payload.Length, $ep) | Out-Null $u.Send($payload, $payload.Length, $ep2) | Out-Null $u.Send($payload, $payload.Length, $ep3) | Out-Null sleep 1; } } catch { #catch block catches any error from try block $Error | Write-Error; }
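The payload layout the comments describe — six 0xFF bytes followed by the MAC address repeated 16 times, 102 bytes in total — is the same in any language. Here is a rough Python equivalent of the script's core logic (a sketch, not the original author's code; the function names are mine):

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """A magic packet is 6 bytes of 0xFF followed by the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", "").replace(".", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255") -> None:
    """Broadcast the packet to ports 0, 7 and 9, like the PowerShell loop does."""
    payload = build_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        for port in (0, 7, 9):
            s.sendto(payload, (broadcast, port))
```

Sending the frame several times and to several ports is pure redundancy: wake-on-LAN runs over UDP with no acknowledgement, so extra copies simply raise the odds that at least one reaches the sleeping NIC.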
{ "pile_set_name": "StackExchange" }
Q: Get the delta of values in a DataFrame column There is one column; a second column needs to be added to the DF using a certain formula. In this case column B is the change (delta) of the values in column A. For example, given a source column A А 1 2 6 8 12 The output should be column B - 1 4 2 4 A: Use the Series.diff() method: In [89]: df Out[89]: А 0 1 1 2 2 6 3 8 4 12 In [90]: df['delta'] = df['А'].diff() In [91]: df Out[91]: А delta 0 1 nan 1 2 1.000 2 6 4.000 3 8 2.000 4 12 4.000 NOTE: if a column (pandas.Series) contains at least one floating-point (float) value or a NaN (Not a Number) value, then the column's dtype will be treated as float* (float16, float32, float64). Therefore the column cannot be converted to an integer type as long as at least one NaN value is present. Workaround: In [142]: df['delta'] = df['А'].diff().fillna(0).astype('int16') In [143]: df Out[143]: А delta 0 1 0 1 2 1 2 6 4 3 8 2 4 12 4 In [144]: df.dtypes Out[144]: А int64 delta int16 dtype: object
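The whole answer condenses into a short runnable sketch (assuming pandas is installed; the column is named "A" here in place of the Cyrillic "А" used above):

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 6, 8, 12]})

# diff() subtracts each value from the one before it; the first row has no
# predecessor, so it becomes NaN, which forces the column to a float dtype.
df["delta"] = df["A"].diff()

# fillna(0) removes the NaN, after which an integer cast is possible again.
df["delta"] = df["delta"].fillna(0).astype("int16")

print(df["delta"].tolist())  # [0, 1, 4, 2, 4]
```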
Q: Ignore the initial value of the parameter in calculation I have the following code. The formula is answer = (b*a)+c. I want to display a and the answer c = 9 qq = 4 b = 5 a = 0 for i in range(5): answer = (b*a)+c a += qq print a, answer Once this program runs, it displays its values starting from 4 9 up to 20 89. It runs fine, but I do not want 9 to be displayed beside 4. Instead I want 29 to be displayed beside 4 because that is the answer when 4 is plugged in for a. I've been trying for an hour now, but I do not know how to do that. A: It's because you're incrementing a before you display it. c = 9 qq = 4 b = 5 a = 0 for i in range(5): answer = (b*a)+c print a, answer a += qq will display: 0 9 4 29 ... 16 89
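The difference between the two orderings is easy to verify by collecting the pairs instead of printing them (Python 3 syntax here, while the question uses Python 2's print statement):

```python
c, qq, b, a = 9, 4, 5, 0

pairs = []
for _ in range(5):
    answer = b * a + c
    pairs.append((a, answer))  # record a *before* it is incremented
    a += qq

# a=4 is now paired with 29, as the asker wanted
print(pairs)  # [(0, 9), (4, 29), (8, 49), (12, 69), (16, 89)]
```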
Q: How to specify a git merge "ours" strategy with .gitattributes for deleted files? I have 2 branches in my project: A (master) & B. In branch B some of the files that are in master have been deleted. I want to avoid merge conflicts when there are changes in master to files that have been deleted in B. I have added the files to .gitattributes, e.g. README.adoc merge=ours For my merge driver I have [merge "ours"] name = Always keep mine during merge driver = true However I still get conflicts and I can't figure out what I'm doing wrong. git merge master CONFLICT (modify/delete): README.adoc deleted in HEAD and modified in master. Version master of README.adoc left in tree. What am I doing wrong? I have run git check-attr and it shows README.adoc: merge: ours I've also tried GIT_TRACE=2 but it provides no useful info; it only tells me where it is getting the binaries from. A: A modify/delete conflict is a high level conflict. Merge drivers, defined in .gitattributes, are used only for solving low level conflicts: the merge driver is used only when the file (a) exists in all three versions (base and both branch tips) and (b) differs in all three versions. Here, the file exists in two versions—base and one branch tip—and differs in those two versions, but the third version is simply deleted entirely, and the merge driver is never invoked. For the recursive, resolve, and subtree strategies, high level conflicts always result in a merge conflict and a suspended-in-mid-process merge. High level conflicts simply never occur in the ours strategy (-s ours, very different from the -X ours extended-option) as it looks only at the current tree. High level conflicts in octopus are (I think) fatal: the octopus merge is aborted entirely.2 I want to avoid merge conflicts when there are changes in A to files that have been deleted in B. To do this, you must write a merge strategy. This is hard.1 See my answer to git "trivial" merge strategy that directly commits merge conflicts. 
1The main evidence I have for "hard" is the fact that Git comes with those five strategies—resolve, recursive, ours, octopus, and subtree—and despite well over a decade of development, Git still has only those five strategies. 2I never actually do octopus merges, so my experience here is limited.
Q: Is my understanding of the relationship between virtual addresses and physical addresses correct? I've been researching (on SO and elsewhere) the relationship between virtual addresses and physical addresses. I would appreciate it if someone could confirm if my understanding of this concept is correct. The page table is classified as 'virtual space' and contains the virtual addresses of each page. It then maps to the 'physical space', which contains the physical addresses of each page. A wikipedia diagram to make my explanation clearer: https://upload.wikimedia.org/wikipedia/commons/3/32/Virtual_address_space_and_physical_address_space_relationship.svg Is my understanding of this concept correct? Thank you. A: Not entirely correct. Each program has its own virtual address space. Technically, there is only one address space, the physical random-access memory. Therefore it's called "virtual" because to the user program it seems as if it has its own address space. Now, take the instruction mov 0x1234, %eax (AT&T) or MOV EAX, [0x1234] (Intel) as an example: The CPU sends the virtual address 0x1234 to one of its parts, the MMU. The MMU obtains the corresponding physical address from the page table. This process of adjusting the address is also lovingly called "massaging." The CPU retrieves the data from the RAM location the physical address refers to. The concrete translation process depends heavily on the actual architecture and CPU.
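The MMU's lookup step can be mimicked with a toy model. This sketch assumes a 4 KiB page size and an invented page table; real hardware uses multi-level tables and a TLB cache, but the split/lookup/reassemble steps are the same:

```python
PAGE_SIZE = 4096  # 4 KiB, a common page size

# Hypothetical per-process page table: virtual page number -> physical frame
page_table = {0: 7, 1: 3, 2: 11}

def translate(virtual_addr: int) -> int:
    """Split the address into page number and offset, then swap in the frame."""
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    if vpn not in page_table:
        raise RuntimeError(f"page fault: no mapping for page {vpn}")
    return page_table[vpn] * PAGE_SIZE + offset

# Virtual 0x1234 sits at offset 0x234 of virtual page 1, which maps to frame 3
print(hex(translate(0x1234)))  # 0x3234
```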
Q: Git unable to push to remote repository: "Read-only file system" Trying to push to a remote repository gives me the error: error: unable to create temporary sha1 filename : Read-only file system Funnily enough, it worked perfectly fine for the push 30 minutes earlier. Another thing worth noting is that I'm the only one pushing/committing/accessing this repository. SSHing into my repository server trying to chown, chmod, copy, rename etc the repository, I keep getting the error Read-only file system. Listing the owner of the repository by using ls -ld my-repo.git yields: drwxrwsr-x 7 my_user users 248 Jul 20 14:56 my-repo.git/ Looks proper, owned by me, right? I don't understand why this is happening. Any suggestions on how to solve this extremely annoying problem would be highly appreciated! A: Your disk broke and the OS remounted it as read-only to save it. See /var/log/messages and the output of "mount" to confirm.
Q: knockout dropdowns that depend on each other with additional info I'm trying to make a simple knockout.js page: first the user needs to select a car brand, and after that a model. This works. Now I additionally want the number of doors shown (4 or 5). But the number of doors (selectedModel().AantalDeuren) is never shown. What is wrong with my code? P.S. I've started from http://knockoutjs.com/examples/cartEditor.html The goal is to make a shopping cart, with data from an MVC API, which is already working. Thanks, bram <html> <head> <title>KnockoutJS Options Binding</title> <script src="https://ajax.aspnetcdn.com/ajax/knockout/knockout-3.3.0.js" type="text/javascript"> </script> </head> <body> <table> <tr> <td> <select data-bind="options: availableMerken, optionsText: 'MerkNaam', value: selectedMerk, optionsCaption: 'Kies merk..'"></select> </td> <td> <select data-bind='options: selectedMerk().modellen, optionsText: "ModelNaam", value:selectedModel, optionsCaption: "Kies model.."'> </select> </td> <td> <span data-bind='text: selectedModel().AantalDeuren'> </span> </td> </tr> </table> <script type="text/javascript"> var Model = function (_modelnaam, _aantaldeuren) { var self = this; self.ModelNaam = _modelnaam; self.AantalDeuren = _aantaldeuren; }; var Merk = function (naam, extra1, extra2) { var self = this; self.MerkNaam = naam; self.modellen = ko.observableArray(); self.modellen.push(new Model(extra1, 4)); self.modellen.push(new Model(extra2, 5)); //this.modellen[0] = new Model(extra1, 4); //this.modellen[1] = new Model(extra2, 5); }; function ViewModel() { var self = this; self.selectedMerk = ko.observable(); self.selectedModel = ko.observable(); self.availableMerken = ko.observableArray([ new Merk('vw', 'golf', 'polo'), new Merk('bmw', '3', '5'), new Merk('audi', 'A4', 'A6'), new Merk('mercedes', 'C', 'GLE'), new Merk('ford', 'escort', 'scorpio'), new Merk('opel', 'astra', 'monza'), ]); }; var vm = new ViewModel(); ko.applyBindings(vm); </script> </body> </html> A: Did you 
check the console for errors? The issue is that initially, both selections are undefined. Your data-binds look for undefined.modellen and undefined.AantalDeuren, resulting in errors. Fix it by checking if there is a selection, before binding to one of its properties: var Model = function(_modelnaam, _aantaldeuren) { var self = this; self.ModelNaam = _modelnaam; self.AantalDeuren = _aantaldeuren; }; var Merk = function(naam, extra1, extra2) { var self = this; self.MerkNaam = naam; self.modellen = ko.observableArray(); self.modellen.push(new Model(extra1, 4)); self.modellen.push(new Model(extra2, 5)); //this.modellen[0] = new Model(extra1, 4); //this.modellen[1] = new Model(extra2, 5); }; function ViewModel() { var self = this; self.selectedMerk = ko.observable(); self.selectedModel = ko.observable(); self.availableMerken = ko.observableArray([ new Merk('vw', 'golf', 'polo'), new Merk('bmw', '3', '5'), new Merk('audi', 'A4', 'A6'), new Merk('mercedes', 'C', 'GLE'), new Merk('ford', 'escort', 'scorpio'), new Merk('opel', 'astra', 'monza'), ]); }; var vm = new ViewModel(); ko.applyBindings(vm); <script src="https://cdnjs.cloudflare.com/ajax/libs/knockout/3.2.0/knockout-min.js"></script> <table> <tr> <td> <select data-bind="options: availableMerken, optionsText: 'MerkNaam', value: selectedMerk, optionsCaption: 'Kies merk..'"></select> </td> <td data-bind="if: selectedMerk"> <select data-bind='options: selectedMerk().modellen, optionsText: "ModelNaam", value:selectedModel, optionsCaption: "Kies model.."'></select> </td> <td data-bind="if: selectedModel"> <span data-bind='text: selectedModel().AantalDeuren'> </span> </td> </tr> </table>
Q: JApplet scrollbar using JScrollPane shows but doesn't scroll Graphics2D content outside screen I am trying to add a scrollbar to a JApplet component. I know I shouldn't use it and should rather use JPanel, but for the sake of simplicity I'll leave it like this, as in a tutorial I am following. As you can see I tried adding a ScrollPane component, and added the applet to it. Then I add the scrollpane to the frame. The result is that I can see a vertical scrollbar, but it does not have the ability to scroll. Actually the scroll cursor is missing. And the up and down arrows don't scroll either. I'd like to scroll down to the part of the line I've drawn that went outside the visible area. What am I doing wrong? public class App { private App() { final int WINHSIZE = 800; final int WINVSIZE = 600; class MyJApplet extends JApplet { public void init() { setBackground(Color.black); setForeground(Color.white); } public void paint(Graphics g) { Graphics2D g2 = (Graphics2D) g; g2.drawLine(0, 0, 2000, 2000); } } JFrame f = new JFrame("Title"); f.addWindowListener(new WindowAdapter() { public void windowClosing(WindowEvent e) { System.exit(0); } }); JApplet applet = new MyJApplet(); JScrollPane myScrollPane = new JScrollPane(applet, JScrollPane.VERTICAL_SCROLLBAR_ALWAYS, JScrollPane.HORIZONTAL_SCROLLBAR_AS_NEEDED); f.getContentPane().add("Center", myScrollPane); applet.init(); f.pack(); f.setSize(new Dimension(WINHSIZE, WINVSIZE)); f.setVisible(true); } public static void main(String[] args) { new App(); } } A: I'll leave it like this, as in a tutorial I am following. Well your tutorial is old and you should NOT be following it. Instead you should be learning how to create a JFrame the normal way. That is, you do custom painting on a JPanel by overriding the paintComponent() method and you add the panel to the frame. You should NOT override paint(). Read the section from the Swing tutorial (which is a far better tutorial to follow) on Custom Painting for more information. 
You need to make sure to override the getPreferredSize() method so the scrollbars can work properly. f.getContentPane().add("Center", myScrollPane); That is not how you add a Component to a Container. You would never hardcode a constraint like that. Also you should be using: add(component, constraint) The BorderLayout will contain fields you can use to identify the constraint. People don't use f.getContentPane().add(...) anymore. Since JDK4 you can use f.add(...). As I said your tutorial is way out of date. Look at the table of contents of the Swing tutorial. The examples are more up to date and will provide a better design for your application. For example you should be creating GUI components on the Event Dispatch Thread, which your code is NOT doing. Read the tutorial on Concurrency to understand why this is important.
Q: How to sort a Map by Key and Value, whereas the Val is a Map/List itself I am having a hard time understanding the right syntax to sort Maps whose values aren't simply one type, but can be nested again. I'll try to come up with a fitting example here: Let's make a random class for that first: class NestedFoo{ int valA; int valB; String textA; public NestedFoo(int a, int b, String t){ this.valA = a; this.valB = b; this.textA = t; } } Alright, that is our class. Here comes the list: HashMap<Integer, ArrayList<NestedFoo>> sortmePlz = new HashMap<>(); Let's create 3 entries to start with, that should show sorting works already. ArrayList<NestedFoo> l1 = new ArrayList<>(); n1 = new NestedFoo(3,2,"a"); n2 = new NestedFoo(2,2,"a"); n3 = new NestedFoo(1,4,"c"); l1.add(n1); l1.add(n2); l1.add(n3); ArrayList<NestedFoo> l2 = new ArrayList<>(); n1 = new NestedFoo(3,2,"a"); n2 = new NestedFoo(2,2,"a"); n3 = new NestedFoo(2,2,"b"); n4 = new NestedFoo(1,4,"c"); l2.add(n1); l2.add(n2); l2.add(n3); l2.add(n4); ArrayList<NestedFoo> l3 = new ArrayList<>(); n1 = new NestedFoo(3,2,"a"); n2 = new NestedFoo(2,3,"b"); n3 = new NestedFoo(2,2,"b"); n4 = new NestedFoo(5,4,"c"); l3.add(n1); l3.add(n2); l3.add(n3); l3.add(n4); Sweet, now put them in our Map. sortmePlz.put(5,l1); sortmePlz.put(2,l2); sortmePlz.put(1,l3); What I want now, is to sort the Entire Map first by its Keys, so the order should be l3 l2 l1. Then, I want the lists inside each key to be sorted by the following Order: intA,intB,text (all ascending) I have no idea how to do this. Especially not since Java 8 with all those lambdas, I tried to read on the subject but feel overwhelmed by the code there. Thanks in advance! I hope the code has no syntactical errors, I made it up on the go A: You can use TreeMap instead of a regular HashMap and your entries will be automatically sorted by key: Map<Integer, ArrayList<NestedFoo>> sortmePlz = new TreeMap<>(); Second step I'm a little confused. 
to be sorted by the following Order: intA,intB,text (all ascending) I suppose you want to sort the list by comparing first the intA values, then if they are equal compare by intB and so on. If I understand you correctly you can use Comparator with comparing and thenComparing. sortmePlz.values().forEach(list -> list .sort(Comparator.comparing(NestedFoo::getValA) .thenComparing(NestedFoo::getValB) .thenComparing(NestedFoo::getTextA)));
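The chained comparison above reads "by valA, then valB, then textA"; the same layered ordering can be sketched outside Java as well — for instance in Python, where tuple sort keys compare element by element (the sample data here is made up):

```python
from operator import attrgetter
from types import SimpleNamespace

# Stand-ins for NestedFoo(valA, valB, textA)
foos = [
    SimpleNamespace(valA=3, valB=2, textA="a"),
    SimpleNamespace(valA=2, valB=3, textA="b"),
    SimpleNamespace(valA=2, valB=2, textA="b"),
]

# attrgetter builds (valA, valB, textA) tuples, giving the same effect as
# comparing(getValA).thenComparing(getValB).thenComparing(getTextA)
foos.sort(key=attrgetter("valA", "valB", "textA"))

print([(f.valA, f.valB, f.textA) for f in foos])
# [(2, 2, 'b'), (2, 3, 'b'), (3, 2, 'a')]
```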
Q: Getting error "ValueError: time data '' does not match format '%Y-%m-%d %H:%M:%S'" Here is a sample of the df: pId tPS tLL dZ 129 2019-12-02 15:04:09 2019-12-02 15:06:31 5f723 129 2019-12-02 15:04:15 2019-12-02 15:06:37 5f723 129 2019-12-02 15:05:15 2019-12-02 15:07:37 5f723 129 2019-12-02 15:05:18 2019-12-02 15:07:40 5f723 129 2019-12-02 15:05:24 2019-12-02 15:07:46 5f723 The pId is the ID of a person and I am trying to check the entry, exit and duration time for each ID. Here is the code: from datetime import datetime stats=df.sort_values(by=['pId', 'tPS', 'tLL'])[['pId', 'tPS', 'tLL', 'dZ']] pid = '' enter_t = '' exit_t = '' enter_exit_times=[] for ind, row in stats.iterrows(): if pid =='': enter_t = row['tPS'] print(enter_t) if row['pId']!= pid or ((datetime.strftime(row['tLL'], "%Y-%m-%d %H:%M:%S") - datetime.strftime(exit_t, "%Y-%m-%d %H:%M:%S")).total_seconds()>2*60*60): duration = (datetime.strptime(exit_t, "%Y-%m-%d %H:%M:%S") - datetime.strptime(enter_t, "%Y-%m-%d %H:%M:%S")) enter_exit_times.append([pid, enter_t, exit_t, duration.total_seconds()]) pid = row['pId'] enter_t = row['tPS'] enter_exit_times.append([pid, enter_t, exit_t]) enter_exit_times_df = pd.DataFrame(enter_exit_times) So here pid is the ID, enter_t is the entering time, exit_t is the exit time, tPS is the in time, and tLL is the out time. I am then creating a list for which I am writing a loop below. Initially, I run it through a for loop where I iterate through the rows of the data frame. So there are two if blocks: one checks pid, where an empty value means it needs to take row['tPS'], and if not it has to run through the second one. Then I am calculating the duration and then appending the values to the entry-exit times. 
I am getting this error: 2019-12-02 15:04:09 --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-411-fd8f6f998cc8> in <module> 12 if row['pId']!= pid or ((datetime.strftime(row['tLL'], "%Y-%m-%d %H:%M:%S") 13 - datetime.strftime(exit_t, "%Y-%m-%d %H:%M:%S")).total_seconds()>2*60*60): ---> 14 duration = (datetime.strptime(exit_t, "%Y-%m-%d %H:%M:%S") - 15 datetime.strptime(enter_t, "%Y-%m-%d %H:%M:%S")) 16 enter_exit_times.append([pid, enter_t, exit_t, duration.total_seconds()]) ~/opt/anaconda3/lib/python3.7/_strptime.py in _strptime_datetime(cls, data_string, format) 575 """Return a class cls instance based on the input string and the 576 format string.""" --> 577 tt, fraction, gmtoff_fraction = _strptime(data_string, format) 578 tzname, gmtoff = tt[-2:] 579 args = tt[:6] + (fraction,) ~/opt/anaconda3/lib/python3.7/_strptime.py in _strptime(data_string, format) 357 if not found: 358 raise ValueError("time data %r does not match format %r" % --> 359 (data_string, format)) 360 if len(data_string) != found.end(): 361 raise ValueError("unconverted data remains: %s" % **ValueError: time data '' does not match format '%Y-%m-%d %H:%M:%S'** A: The cause of the error is that exit_t is not set anywhere in the loop. It is an empty string. You set it before the loop to exit_t = '' but then it's never set again. That's why strptime throws the error here: >>> datetime.strptime(' ', "%Y-%m-%d %H:%M:%S") Traceback (most recent call last): ... File "/usr/local/Cellar/python/3.7.6/Frameworks/Python.framework/Versions/3.7/lib/python3.7/_strptime.py", line 359, in _strptime (data_string, format)) ValueError: time data ' ' does not match format '%Y-%m-%d %H:%M:%S' The solution is to simply set it properly to "tLL" (if I understand you correctly). But I would like to go further and say that I think you are making the code much more complicated than it should be. 
My understanding is that you just want to compute the time duration between "tPS" (the in time) and "tLL" (the out time). Since you are already iterating over each row, you just need to assign the values appropriately pid = row['pId'] enter_t_str = row['tPS'] # strings exit_t_str = row['tLL'] # strings then convert the datetime strings to datetime objects using strptime enter_t_dt = datetime.strptime(enter_t_str, "%Y-%m-%d %H:%M:%S") exit_t_dt = datetime.strptime(exit_t_str, "%Y-%m-%d %H:%M:%S") then calculate the duration duration = exit_t_dt - enter_t_dt then finally append it to your list enter_exit_times.append([pid, enter_t_str, exit_t_str, duration.total_seconds()]) There is no need to keep track of the "pId". Here's the full code: stats = df.sort_values(by=['pId', 'tPS', 'tLL'])[['pId', 'tPS', 'tLL', 'dZ']] pid = '' enter_t = '' exit_t = '' enter_exit_times = [] for ind, row in stats.iterrows(): pid = row['pId'] enter_t_str = row['tPS'] exit_t_str = row['tLL'] enter_t_dt = datetime.strptime(enter_t_str, "%Y-%m-%d %H:%M:%S") exit_t_dt = datetime.strptime(exit_t_str, "%Y-%m-%d %H:%M:%S") duration = exit_t_dt - enter_t_dt enter_exit_times.append([pid, enter_t_str, exit_t_str, duration.total_seconds()]) enter_exit_times_df = pd.DataFrame(enter_exit_times) print(enter_exit_times_df) And the output DataFrame: 0 1 2 3 0 129 2019-12-02 15:04:09 2019-12-02 15:06:31 142.0 1 129 2019-12-02 15:04:15 2019-12-02 15:06:37 142.0 2 129 2019-12-02 15:05:15 2019-12-02 15:07:37 142.0 3 129 2019-12-02 15:05:18 2019-12-02 15:07:40 142.0 4 129 2019-12-02 15:05:24 2019-12-02 15:07:46 142.0 If you want to only get the enter/exit times for a particular time period of a day, you could create the datetime objects for the start and end times, and do regular comparison: >>> dt_beg = datetime(2019,12,2,8,0,0) #8AM >>> dt_beg datetime.datetime(2019, 12, 2, 8, 0) >>> dt_end = datetime(2019,12,2,10,0,0) #10AM >>> dt_end datetime.datetime(2019, 12, 2, 10, 0) >>> dt = 
datetime(2019,12,2,9,34,0) #9:34AM >>> dt_beg < dt < dt_end True >>> dt = datetime(2019,12,2,14,34,0) #2:34PM >>> dt_beg < dt < dt_end False So you could add a filter for what to append to enter_exit_times: if (enter_t_dt > start_dt and exit_t_dt < end_dt): enter_exit_times.append(...)
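The core of the fix — parse both timestamps with strptime, subtract, and read total_seconds() — can be seen in isolation on the first row of the sample data:

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"

enter_t = datetime.strptime("2019-12-02 15:04:09", FMT)
exit_t = datetime.strptime("2019-12-02 15:06:31", FMT)

duration = exit_t - enter_t      # subtracting two datetimes yields a timedelta
print(duration.total_seconds())  # 142.0
```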
Q: Installer hangs when selecting full disk encryption I'm doing a fresh install of Ubuntu 15.04. On the "Installation type" screen, I select "Erase disk and install Ubuntu" and then check "Encrypt the new Ubuntu installation for security", which causes "Use LVM with the new Ubuntu installation" to be automatically checked. I click "Continue" and am asked to choose an encryption key, which I do. I leave "Overwrite empty disk space" unchecked and click "Install Now". And then the installer hangs with no error message. The rest of the system is still responsive, but all the buttons on the installer grey out and nothing happens. I've let it sit for hours with no movement. I tried running the installer from the command line to see if there was any useful output, but no debug messages appeared. Has anyone encountered this before? Is there a workaround? A: I submitted this question to the System76 support team and they very quickly came up with a solution that worked for me. This is the main reason I bought from a vendor that pre-installs Linux and I have to say it has paid off. When the installer boots up, select "Try Ubuntu". After the desktop loads, open a terminal and run: sudo dd if=/dev/zero of=/dev/sda status=progress Then start the installation normally. Note that I already had tried removing the old partitions that were on the drive before running the installer but that was not enough. I had to overwrite the drive with 0s first. Selecting the option to "Overwrite empty disk space" during the install might have also worked but I don't plan on testing that.
Q: Django only get object if one of its subobjects meets condition So I have two models: class Business(models.Model): def __str__(self): return self.name name = models.CharField(max_length=200) class Appointment(models.Model): author = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE) business = models.ForeignKey(Business, on_delete=models.CASCADE, related_name="appointments") And in my view I have the following context: def home(request): [...] context = { 'business':Business.objects.order_by('name'), } [...] Now I would get all the businesses there are with their "Appointment" submodels. But what I want is only businesses where one of the existing "Appointment" submodels fulfills author == request.author Also the "Appointment" submodels of a Business should only be the Appointments whose author equals request.author A: We can do something like this (note that because the ForeignKey sets related_name="appointments", the reverse lookup uses that name): business = Business.objects.filter(appointments__author=request.author) Or: business = Business.objects.filter(appointments__author__id=request.author.id) You might want to read: Lookups that span relationships
Q: How do I uninstall any Apple pkg Package file? Despite opinions to the contrary, not all packages are installed cleanly in only one directory. Is there a way to reverse the install process of a pkg file, preferably with the original package (or from a repository of information about installed packages)? Specifically I've installed the PowerPC MySQL 5.4.1 package on an intel MacBook, and would like to cleanly reverse that, recovering the 5.1 x86 install I can see is still there, but not working properly now. A: https://wincent.com/wiki/Uninstalling_packages_(.pkg_files)_on_Mac_OS_X describes how to uninstall .pkg using native pkgutil. Modified excerpt $ pkgutil --pkgs # list all installed packages $ pkgutil --files the-package-name.pkg # list installed files After visually inspecting the list of files you can do something like: $ pkgutil --pkg-info the-package-name.pkg # check the location $ cd / # assuming the package is rooted at /... $ pkgutil --only-files --files the-package-name.pkg | tr '\n' '\0' | xargs -n 1 -0 sudo rm -f $ pkgutil --only-dirs --files the-package-name.pkg | tail -r | tr '\n' '\0' | xargs -n 1 -0 sudo rmdir Needless to say, extreme care should always be taken when removing files with root privileges. Particularly, be aware that some packages may update shared system components, so uninstalling them can actually break your system by removing a necessary component. For smaller packages it is probably safer to just manually remove the files after visually inspecting the package file listing. Apparently, there was once an --unlink option available in pkgutil, but as of Lion it is not mentioned in the man page. Perhaps it was removed because it was deemed too dangerous. Once you've uninstalled the files, you can remove the receipt with: $ sudo pkgutil --forget the-package-name.pkg A: Built into the system there is no option to uninstall the files using an uninstaller so you can either make an uninstaller yourself or remove the files manually. 
The best method to determine what files have been installed is to get a hold of the original .pkg if possible. If this is not possible you can also use the receipts instead found at /Library/Receipts. Your biggest issue is when you are dealing with a .mpkg which contains multiple .pkg files as you will then have to find all the seperate .pkg files in that folder (thankfully not that difficult when sorted by date). Once you have the .pkg file (Receipt or the full install file) you can then use a utility to either create the uninstaller or find the files so you can remove them manually: Uninstaller Absolute Software InstallEase is a free program that can create uninstallers from existing .pkg files. Make the uninstaller .pkg file (note: You'll need Apple's Developer Tools installed to actually make the .pkg file) Manually Using a program such as Pacifist or a QuickLook plugin like Suspicious Package you can view what files are installed and at what location. Using that list you can then manually navigate to those folders and remove the files. I've used this method personally countless times before I discovered InstallEase, but this is still often faster if the install isn't spread out among many locations. A: you can also uninstall .pkg packages with UninstallPKG ( http://www.corecode.at/uninstallpkg/ ) [full disclosure: yes i am the author]
Q: Check if relations exist for a parent node before deleting it so that we don't have any orphan child node Here I am trying to delete a parent node but only if it doesn't have any child node. Please review this code. PostController.php /** * Delete the given post. * * @param int $post_id * @return void * * @throws \App\Exceptions\RelationExistsException */ public function deletePost($post_id) { $relations = ['tags', 'comments']; $hasRelations = $this->postRepo->hasRelations($post_id, $relations); if ($hasRelations) { throw new RelationExistsException($relations); } $this->postRepo->delete($post_id); } RepositoryTrait.php /** * Check if any relation exists. * * @param int $id * @param array $relations * @return bool */ public function hasRelations($id, array $relations) { if (count($relations) == 0) { throw new \Exception('No relation is provided.'); } $modelName = $this->model; $query = $modelName::where('id', $id); $query->where(function($q) use($relations) { foreach ($relations as $relation) { $q->orHas($relation); } }); return $query->exists(); } RelationExistsException.php namespace App\Exceptions; use Exception; class RelationExistsException extends Exception { /** * Create a new exception instance. * * @param array $relations * @return void */ function __construct($relations) { parent::__construct(sprintf( 'Cannot delete because there exists relations - %s.', implode(', ', $relations)) ); } } I have one more issue with the following code: $modelName = $this->model; // contains '\App\Models\Post' $query = $modelName::where('id', $id); I have to save the $this->model in $modelName to use it with the where function. Is there a better solution for it? I have tried {}, () but nothing worked. GitHub Gist for above code A: Some thoughts below: I would encourage you to think about validating parameters more thoroughly on your public methods. For example, in deletePost(), you do nothing to validate that you even have an integer value (or whatever) to work with. 
You do it in some cases such as type-hinting for array and validating non-empty array on hasRelations(), but it is not consistent. What if a non-array is passed to your exception constructor? What if a non-integer (or whatever) is passed to deletePost()? You may not think it matters now because you are currently working in this application area and understand where all the calls are made against these methods, but think about the future when you try to leverage these classes in new ways. The call patterns may be different. If, for example, you introduce a buggy use case that passes a non-array to your exception class, you want that class to complain loudly, rather than silently fail, so that you can focus your debugging efforts more quickly on the problem code. $relations = ['tags', 'comments']; Why is this hard-coded here? I would think this should, at a minimum, be a property on the class, if not derived from configuration. $hasRelations = $this->postRepo->hasRelations($post_id, $relations); Consider placing this code in a try-catch block since hasRelations() can throw. I know that since $relations is hard-coded here we would never expect to get into a state where that exception is thrown, however I think it best practice to always use a try-catch block in such a case so that, as someone working in this code, you have a quick understanding of how the methods you are calling might perform. Who knows, maybe the implementation of RepositoryTrait changes at some point and throws exceptions for other reasons besides the passed relations parameter. throw new \Exception('No relation is provided.'); Consider throwing InvalidArgumentException if you want to be more specific here. I actually find it a bit odd that you use custom exception types in the code calling this but not here. It might call into question how you are using custom exceptions throughout the application. Your custom exception seems to have a very limited use case. 
It really only exists to format the message string from a passed array, functionality that might rightfully live in the code where the exception is thrown, not within the logic of the exception. Is this message even meaningful to the caller as is? You can't tell what kind of relationship exists amongst the types provided, nor the specific IDs of the relationships that exist, so there is questionable value in preparing this specific message string vs. just a simple 'Cannot delete because this post has relations' message in the exception. You could easily provide this message in context of where the exception is thrown, totally eliminating the need to override the constructor (or maybe even this class altogether). Is this class going to be used elsewhere in your application? If not, should it even exist (vs. using other exception types)? If so, are you always going to want to pass this class an array of relation types as a parameter to format into a message string? I actually question whether you should even be throwing an exception here at all if a relationship exists. Since you are specifically building this functionality, my guess is that you are expecting the application to handle deletion requests against posts that have relationships as part of normal operation of the application. If so, should this really be an exception, or just an alternate code path that needs to be followed to handle this condition? Only if you truly never expect the application to be put into this state would it make sense to have this code throw an exception.

To your question about how to call the model dynamically while avoiding having to set $modelName, you could use something like:

    call_user_func($this->model . '::where', 'id', $id);

but I honestly find what you have in place easier to read, and I would not be concerned about the cost of having the additional variable in memory (that seems like a micro-optimization type of concern).
{ "pile_set_name": "StackExchange" }
Q: Finding the list entry with the highest count

I have an Entry data type

    data Entry = Entry { count :: Integer, name :: String }

Then I want to write a function that takes a name and a list of Entrys as arguments and gives me the Entry with the highest count. What I have so far is

    searchEntry :: String -> [Entry] -> Maybe Integer
    searchEntry _ [] = Nothing
    searchEntry name1 (x:xs) =
        if name x == name1
            then Just (count x)
            else searchEntry name1 xs

That gives me the FIRST Entry that the function finds, but I want the Entry with the highest count. How can I implement that?

A: My suggestion would be to break the problem into two parts:

Find all entries matching a given name
Find the entry with the highest count

You could set it up as

    entriesByName :: String -> [Entry] -> [Entry]
    entriesByName name entries = undefined

    -- Use Maybe since the list might be empty
    entryWithHighestCount :: [Entry] -> Maybe Entry
    entryWithHighestCount entries = undefined

    entryByNameWithHighestCount :: String -> [Entry] -> Maybe Entry
    entryByNameWithHighestCount name entries = entryWithHighestCount $ entriesByName name entries

All you have to do is implement the relatively simple functions that are used to implement entryByNameWithHighestCount.
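To make the two-step decomposition above concrete, here is the same algorithm sketched in Python rather than Haskell (entries modeled as dicts; this illustrates the approach, not Haskell syntax):

```python
def entries_by_name(name, entries):
    """Step 1: keep only the entries whose name matches."""
    return [e for e in entries if e["name"] == name]

def entry_with_highest_count(entries):
    """Step 2: the entry with the highest count, or None for an empty list."""
    return max(entries, key=lambda e: e["count"], default=None)

def entry_by_name_with_highest_count(name, entries):
    # Compose the two steps, mirroring entryByNameWithHighestCount.
    return entry_with_highest_count(entries_by_name(name, entries))
```

Note how `default=None` plays the role of the Maybe: an empty match list yields None instead of an error.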
Q: If 1 is the identity of the multiplicative (semi)group, what is the term for 0?

Broadly, given an operator $*$, the term identity is used for an element $e$ such that $x * e = x$ for all elements. However, is there a term for a value $O$ such that $x * O = O$ for all values $x$? This was brought to mind by the question What is the identity in the power set of $\Sigma^*$ as a monoid?, which shows that the empty language has this property under concatenation. False has this property under the and operator.

A: There is no such element in a nontrivial group. Every element of a group has an inverse. Let $G$ be a group and suppose (for the purpose of contradiction) $G$ contains your proposed element $O \neq e$. Then there is a $p = O^{-1} \in G$ such that $pO = e \neq O$. But this contradicts the definition of $O$. Therefore, there is no nontrivial group, $G$, containing an $O \neq e$ as described. Another way to get at this, using the required existence of inverses, is that from $$ x O = O \text{,} $$ we have $$ x = xO O^{-1} = O O^{-1} = e \text{.} $$ So the assumed multiplication properties of $O$ are incompatible with its membership in a group unless the only element of the group is $e$ (in which case $e = O$ does satisfy the properties of both the multiplicative identity in a multiplicative group and the properties of the $O$ element you describe). (This is why I wrote "$O \neq e$" in the second paragraph: to avoid the case that we were secretly only talking about the group with one element.)

A: Excepting the special case of a group with only one element, groups cannot have $0$ as an element with multiplication as the operation, as it is required that every element have an inverse, and what is the multiplicative inverse of $0$? It sounds like you are interested in rings. Rings take a set with two binary operations: one operation is analogous to addition and the other is analogous to multiplication.
A ring has an additive identity ($0$) and a multiplicative identity ($1$) and requires that multiplication distribute over addition. As a consequence, $a\cdot 0 = 0.$ (Yes, there is also the special case here, where the ring has one element.) Rings have a generalization of $0$, which is called an "ideal." An ideal is a subset of the ring such that every element in the ideal, multiplied by any member of the ring, gives an element in the ideal.
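The claim that $a \cdot 0 = 0$ follows from distributivity alone; the standard one-line argument runs:

```latex
a \cdot 0 \;=\; a \cdot (0 + 0) \;=\; a \cdot 0 + a \cdot 0
\quad\Longrightarrow\quad a \cdot 0 = 0
```

(the implication comes from adding $-(a \cdot 0)$, the additive inverse every ring element has, to both sides).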
Q: Perl writes LF as CRLF on Windows

I have gotten some stray Windows-style line endings in a couple of files and I am trying to resolve it on my Windows machine with the following command:

    perl -pi -e s/\R/\n/ example.txt

After running this, all line endings in the files this was run on have been changed to CRLF, which is the opposite of my intention. Why is Perl doing this, and is there anything I can change to make this work as I expect it to? I am using Strawberry Perl version 5.28.

A: Windows builds of Perl add a :crlf layer to handles by default. This converts CRLF to LF on read, and LF to CRLF on write. (Some other languages do something similar.) You need to tell Perl it's not a text file. Unfortunately, one can't do that with ARGV, the special handle you are using.

    perl -pe"BEGIN { binmode STDOUT }" example.txt >example.new.txt
    move example.new.txt example.txt
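The same pitfall exists in any language that opens files in text mode on Windows. A language-neutral way around it, sketched here in Python rather than Perl, is to read and write the file in binary mode so that no I/O layer gets a chance to rewrite the newlines:

```python
def crlf_to_lf(path):
    """Normalize CRLF line endings to LF, bypassing text-mode translation."""
    with open(path, "rb") as f:   # binary read: bytes pass through untouched
        data = f.read()
    with open(path, "wb") as f:   # binary write: no LF -> CRLF conversion
        f.write(data.replace(b"\r\n", b"\n"))
```

The key point is identical to Perl's binmode: as long as both handles are binary, the platform's newline convention never enters the picture.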
Q: Aggregate variables in a list of data frames into a single data frame

I am performing a per-policy life insurance valuation in R. Monthly cash flow projections are performed per policy and return a data frame in the following format (for example):

    Policy1 = data.frame(ProjM = 1:200, Cashflow1 = rep(5,200), Cashflow2 = rep(10,200))

My model returns a list (using lapply and a function which performs the per-policy cash flow projection, based on various per-policy details, escalation assumptions and life contingencies). I want to aggregate the cash flows across all policies by ProjM. The following code does what I want, but I am looking for a more memory-efficient way (ie not using the rbindlist function). Example data:

    Policy1 = data.frame(ProjM = 1:5, Cashflow1 = rep(5,5), Cashflow2 = rep(10,5))
    Policy2 = data.frame(ProjM = 1:3, Cashflow1 = rep(50,3), Cashflow2 = rep(-45,3))
    # this is the output containing 35000 data frames:
    ListOfDataFrames = list(Policy1 = Policy1, Policy2 = Policy2)

My code:

    library(data.table)
    OneBigDataFrame <- rbindlist(ListOfDataFrames)
    MyOutput <- aggregate(. ~ ProjM, data = OneBigDataFrame, FUN = sum)

Output required:

    ProjM Cashflow1 Cashflow2
    1     55        -35
    2     55        -35
    3     55        -35
    4     5         10
    5     5         10

I have looked for examples, and R aggregate list of dataframe performs aggregation for all data frames, but does not combine them into one data frame.

A: With data.table syntax the one-step approach would be to create the big data.table first and then do the aggregation:

    library(data.table)
    OneBigDataFrame <- rbindlist(ListOfDataFrames)
    OneBigDataFrame[, lapply(.SD, sum), by = ProjM]

or, more concise

    rbindlist(ListOfDataFrames)[, lapply(.SD, sum), by = ProjM]

       ProjM Cashflow1 Cashflow2
    1:     1        55       -35
    2:     2        55       -35
    3:     3        55       -35
    4:     4         5        10
    5:     5         5        10

Now, the OP has requested to avoid creating the big data.table first in order to save memory.
This requires a two-step approach where the aggregates are computed for each data.table and are then aggregated to a grand total in the final step:

    rbindlist(
      lapply(ListOfDataFrames,
             function(x) setDT(x)[, lapply(.SD, sum), by = ProjM])
    )[, lapply(.SD, sum), by = ProjM]

       ProjM Cashflow1 Cashflow2
    1:     1        55       -35
    2:     2        55       -35
    3:     3        55       -35
    4:     4         5        10
    5:     5         5        10

Note that setDT() is used here to coerce the data.frames to data.table by reference, i.e., without creating an additional copy, which saves time and memory.

Benchmark

Using the benchmark data of d.b (list of 10000 data.frames with 100 rows each, 28.5 Mb in total) with all answers provided so far:

    mb <- microbenchmark::microbenchmark(
      malan = {
        OneBigDataFrame <- rbindlist(test)
        malan <- aggregate(. ~ ProjM, data = OneBigDataFrame, FUN = sum)
      },
      d.b = d.b <- with(data = data.frame(do.call(dplyr::bind_rows, test)),
                        expr = aggregate(x = list(Cashflow1 = Cashflow1, Cashflow2 = Cashflow2),
                                         by = list(ProjM = ProjM), FUN = sum)),
      a.gore = {
        newagg <- function(dataset) {
          dataset <- data.table(dataset)
          dataset <- dataset[,lapply(.SD,sum),by=ProjM,.SDcols=c("Cashflow1","Cashflow2")]
          return(dataset)
        }
        a.gore <- newagg(rbindlist(lapply(test,newagg)))
      },
      dt1 = dt1 <- rbindlist(test)[, lapply(.SD, sum), by = ProjM],
      dt2 = dt2 <- rbindlist(
        lapply(test, function(x) setDT(x)[, lapply(.SD, sum), by = ProjM])
      )[, lapply(.SD, sum), by = ProjM],
      times = 5L
    )
    mb

    Unit: milliseconds
       expr         min          lq        mean      median          uq        max neval cld
      malan   565.43967   583.08300   631.15898   600.45790   605.60237   801.2120     5  b
        d.b   707.50261   710.31127   719.25591   713.54526   721.26691   743.6535     5  b
     a.gore 14706.40442 14747.76305 14861.61641 14778.88547 14805.29412 15269.7350     5    d
        dt1    40.10061    40.92474    42.27034    41.55434    42.07951    46.6925     5 a
        dt2  8806.85039  8846.47519  9144.00399  9295.29432  9319.17251  9452.2275     5   c

The fastest solution is the one-step approach using data.table, which is 15 times faster than the second fastest.
Surprisingly, the two-step data.table approaches are magnitudes slower than the one-step approach. To make sure that all solutions return the same result, this can be checked using

    all.equal(malan, d.b)
    all.equal(malan, as.data.frame(a.gore))
    all.equal(malan, as.data.frame(dt1))
    all.equal(malan, as.data.frame(dt2))

which return TRUE in all cases.
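For readers coming from outside R, the concatenate-then-aggregate idea is language-independent. Here is a rough plain-Python sketch of the one-step approach, with each "data frame" modeled as a list of row dicts (the function name and data layout are illustrative, not from the original post):

```python
from collections import defaultdict

def aggregate_cashflows(frames):
    """Sum Cashflow1/Cashflow2 across all frames, grouped by ProjM."""
    totals = defaultdict(lambda: [0, 0])
    for frame in frames:           # the "rbindlist" step: walk every row of every frame
        for row in frame:
            t = totals[row["ProjM"]]
            t[0] += row["Cashflow1"]   # group-wise sums keyed by ProjM
            t[1] += row["Cashflow2"]
    return {m: tuple(t) for m, t in sorted(totals.items())}

policy1 = [{"ProjM": m, "Cashflow1": 5, "Cashflow2": 10} for m in range(1, 6)]
policy2 = [{"ProjM": m, "Cashflow1": 50, "Cashflow2": -45} for m in range(1, 4)]
```

With the example data from the question, `aggregate_cashflows([policy1, policy2])` reproduces the required output: months 1-3 give (55, -35) and months 4-5 give (5, 10).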
Q: A loop with given parameters on a GET request

I am working with Django. On the first GET request, an object of the CurrentGame model is created with the given fields, among which there is a field small_blind_seat (the seat number). After all operations are completed, a new GET request is sent and a new CurrentGame model object is created. The small_blind_seat field of each new game object needs to increase by 1 until it becomes equal to 6, and then start from 1 again. The only thing that comes to mind is a variable that increases by 1 after each GET request, but I couldn't get anywhere with setting it up; more precisely, I don't know how to make it increase after a new request.

    class StartGame(View):
        def get(self, request):
            game_1_start = CurrentGame.objects.create(
                small_blind=1,
                big_blind=2,
                bank=3,
                small_blind_seat=i,
            )

A: I don't understand why you need this. But fine. Store your counter "i" somewhere (in the database, at least) with a default value of 0. Before creating the CurrentGame model object, check whether it equals 6 or not. If it doesn't, increment it by 1. If it does: i = 1.
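The wrap-around itself can be done without the explicit if/else: with seats numbered 1 to 6, `current % 6 + 1` yields the next seat and rolls 6 back over to 1. A minimal sketch (the function name is mine, not from the post; persisting the counter in the database is still assumed):

```python
def next_small_blind_seat(current):
    """Advance the seat number, cycling 1 -> 2 -> ... -> 6 -> 1."""
    return current % 6 + 1

# one full cycle starting from seat 1
seats = [1]
for _ in range(6):
    seats.append(next_small_blind_seat(seats[-1]))
print(seats)  # [1, 2, 3, 4, 5, 6, 1]
```

In the view, you would read the last stored seat, apply this function, and save the result on the new CurrentGame object.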
Q: Making a toroidal array in C# (console) for my Conway's game of life, need a little help

Hey there, the way my program works so far is... I have a class called Grid; so far this works. Grid contains a member, 'board', which is a 2D array of bools. I manage to load values from a file into the grid fine; in fact I manage to perform Conway's life iterations just fine. However, the program behaves as if the cells outside the grid are dead (not toroidal). Here's the code (C#) for the member of Grid which I use to find neighbours:

    public bool Peek(int Horz, int Vert)
    {
        int X = x + Horz, Y = y + Vert;
        if (X < 0)
            X = width - 1;
        else if (X > width - 1)
            X = 0;
        if (Y < 0)
            Y = height - 1;
        else if (Y > height - 1)
            Y = 0;
        return board[X, Y];
    }

this appears to be where the problem is. Horz and Vert define the relative position in the array 'board'; x and y are the 'current position' members of the class Grid. I just can't see what's wrong; it should be in here. In case you need it, here is the code (in Program.Main) that counts neighbours:

    int neighbours = 0;
    for (i = -1; i < 2; i++)
    {
        if (grid.Peek(i, -1)) neighbours++;
        if (grid.Peek(i, 1)) neighbours++;
    }
    if (grid.Peek(-1, 0)) neighbours++;
    if (grid.Peek(1, 0)) neighbours++;

    if (grid.Cell)
    {
        if (neighbours == 2 || neighbours == 3) next.Cell = true;
        else next.Cell = false;
    }
    else
    {
        if (neighbours == 3) next.Cell = true;
        else next.Cell = false;
    }

the value of grid.Cell (grid being an instance of Grid) is the same as grid.Peek(0, 0), and then the x and y positions in the grid object move to the next cell (as part of the get and set methods).

A: x and y are the 'current position' members of the class Grid.

I don't see a bug in the code snippet, but this statement raises a Big Red Flag. The grid doesn't have a current position; only a Cell does. You cannot keep track of the 'next.Cell' state for the grid; it must be computed for each individual cell.
The next grid is created from the new cells after evaluating all the grid positions. Or use two grids and swap them.
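The usual way to make the board toroidal without edge special-casing is modular arithmetic on the indices, combined with the double-buffer ("two grids") approach the answer suggests. A compact sketch in Python rather than C# (names are illustrative):

```python
def step(grid):
    """One Game of Life generation on a toroidal board (list of lists of bool)."""
    h, w = len(grid), len(grid[0])
    nxt = [[False] * w for _ in range(h)]  # second buffer: never write into the grid being read
    for y in range(h):
        for x in range(w):
            # % wraps the indices, so the edges join up into a torus
            n = sum(grid[(y + dy) % h][(x + dx) % w]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
            nxt[y][x] = n == 3 or (grid[y][x] and n == 2)
    return nxt
```

A quick sanity check is the blinker: a horizontal row of three live cells becomes a vertical column after one step and returns to the original after two.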
Q: Why don't researchers publish failed experiments?

The following might be a slight generalization for all fields, but it is something I've noticed especially in the field of Scientific Computing: Why don't people publish failures? I mean, if they tried some experiment and realized at the end that they had tried everything and nothing worked, why don't they publish this? Is it because such content won't get published, or is it because it is shameful to have a failed experiment in a journal alongside prize-winning papers? I spent the better part of a year working on what now looks like a dead problem. However, most papers that I read initially took you to the point of feeling optimistic. Now that I re-read the papers, I realize that I can say (with much confidence) that the author is hiding something. For instance, one of the authors who was comparing two systems gave an excellent theoretical foundation, but when he tried to validate the theory with experiments, there were horrible discrepancies in the experiments (which I now realize). If the theory wasn't satisfied by the experiments, why not publish that (clearly pointing out the parts of the theory which worked and which didn't) and save future researchers some time? If not in a journal, why not on arXiv or their own websites?

A: "Why don't people publish failures?" Actually, they do.

Journal of Negative Results (ecology and evolutionary biology)
Journal of Negative Results in Biomedicine
Journal of Pharmaceutical Negative Results
Journal of Interesting Negative Results (natural language processing and machine learning)
Journal of Negative Results in Environmental Science (no issues yet?)
Journal of Errology (no issues yet?)

and so on... (You might also want to see the Negative Results section of the Journal of Cerebral Blood Flow & Metabolism.)

A: Null results are hard to publish. They just are. Interestingly enough, however, in my field they are not the hardest thing to publish.
The general order goes:

1. Well powered (big) studies that find what people expect
2. Poorly powered (small) studies that find what people expect
3. Poorly powered studies that find the opposite of what people expect, or null findings
4. Well powered studies that find the opposite of what people expect

Those middle two categories are where you'll find most "failures", at least in terms of finding a statistically meaningful effect. That being said, there's an increasing push to see these types of studies published, because they're an important part of the literature, and several medical journals have made fairly remarkable steps in that direction - for example, if they accept a paper on the protocol for an upcoming clinical trial, they also commit to publishing the results of the trial (if they pass peer review) regardless of the finding. When it comes down to it, I think there are three reasons negative results aren't published more, beyond "it's hard":

Lack of pay-off. It takes time and thought to get a paper into the literature, and effort. And money, by way of time and effort. Most null findings/failures are dead ends - they're not going to be used for new grant proposals, they're not going to be where you make your name. The best you can hope for is that they get cited a few times in commentaries or meta-analysis papers. So, in a universe of finite time, why would you chase those results more?

Lack of polish. Just finding the result is a middle step in publishing results, not the "and thus it appears in a journal" step. Often, it's easy to tell when something isn't shaping up to be successful well before it's ready for publication - those projects tend to get abandoned. So while there are "failed" results, they're not publication-ready results, even if we cared about failures.

Many failures are methodological. This study design can't really get at the question you want to ask. Your data isn't good enough. This whole line of reasoning is flawed.
It's really hard to spin that into a paper. Successful papers can be published on their own success - that is interesting. Failed papers have the dual burden of being both hard to publish and having had to fail interestingly.

A: It is not completely true that failures are not published. Lack of signals, or lack of correlation, are published. The point is that everything that pushes knowledge forward is worthy of publication. That said, there are other factors you have to take into account:

some failures are methodological, that is, you are doing something wrong. That is not a scientific signal. It's something you have to solve.
knowing what doesn't work gives you a competitive advantage against other research groups.
negative signals almost never open new fields. If they do, it's because they steered attention to find a positive signal somewhere else. You don't open a new cancer drug development program if a substance is found not to have an effect. You close one.

For this reason, negative papers generally don't receive a lot of attention, and attention from peers is a lot in academia.
Q: Why does HP Update at remote system trigger RDP printing at local system? This is obscure. When connected with RDP to another system that has HP Update installed on it, either directly running the HP Update or having the notification pop up to ask if you want to run HP Update causes the local system to try to print something to peculiarly-chosen-local-printer. Case 1: Desktop Win 7 Ult system RDP connected to HP Laptop Win 7 Ult system. When HP Update runs on the laptop a dialog for XPS Writer Save As... appears on Desktop system. Even if you put in a name, nothing gets generated and the dialog repeats. And repeats. Until you (a) close the RDP connection and (b) clean out the queued entries. If the HP Update pops up the request to run the update and you are not at the desk when this happens, there can be dozens of queued requests for this bogus printing. NOTE: the XPS Writer is not selected as a default printer on either system. Case 2: (Different) HP Laptop Win 7 Ult system RDP connected to XP Pro "brand X" desktop system but with HP printer drivers installed. If the request to run HP Update notification pops on the XP system, dozens of attempts to print, in this case to a Versa Check Printer driver, are queued. Dismissing the HP request, closing RDP, and cleaning out the queue are required to stop this. NOTE: the Versa Check Writer is not selected as a default printer on either system. THE QUESTION: What the heck is going on here? Some kind of scripting or COM activity that is misdirected? A: RDP forwards printer shares by default; it sounds like HP Update is apparently mis-seeing those printer shares as file shares and doing a file test that gets turned onto a print job. (I'm aware of a similar bug in a different package.) A workaround would be to disable forwarding printer shares to the remote.
Q: Submit a new request to the controller upon changing form_dropdown

I'm not sure if there is a better way of doing this than with JavaScript, but I'm trying to call a controller when the select state of a form_dropdown is changed. I don't need to change some part of my page; I need to just call the controller again with new parameters. I'm having the hardest time trying to do this with javascript/jquery. Does anyone have a solution to this? Here is my attempt...

View

    <?php
    echo form_label('Select day', 'days[]', '');
    echo form_error('days[]');
    $day = array(
        '0' => 'Monday',
        '1' => 'Tuesday',
        '2' => 'Wednesday',
        '3' => 'Thursday',
        '4' => 'Friday',
        '5' => 'Saturday',
        '6' => 'Sunday'
    );
    echo form_dropdown('days[]', $day, '', 'id="select_day"');
    $controller_uri = $this->uri->slash_segment(1).$this->uri->slash_segment(2).$this->uri->slash_segment(3).$this->uri->slash_segment(4).$this->uri->slash_segment(5);
    ?>
    <script>
    var controller_uri = "<?php echo $controller_uri ?>";
    var select_day = document.getElementById('select_day');
    $( "select" ).change(function() {
        console.log( controller_uri ); // see what it looks like
        // add the day (final argument) to the controller
        var controller_uri = controller_uri + select_day.value;
        // How to submit this as a controller request?
    });
    </script>

Thanks.

A: Well, it seems I forgot to add the base_url() to the beginning of the controller_uri. So just changing the line...

    var controller_uri = "<?php echo base_url().$controller_uri ?>";

and then calling

    window.location = controller_uri; // or location.href = controller_uri;

Did the trick. I hope this helps someone. If you know of a better way to do this, leave your own answer. Thanks.
Q: How can I send unicode characters by using tmux command?

Here is my tmux command

    tmux send-key -t session:window.pane say 安安 enter

but the result is

    root@debian:~# say

(There is no "安安", the unicode characters.) How can I send the non-ASCII characters??? (my tmux version is 1.8)

A: This is a known bug in tmux 1.8 and earlier. It is fixed in the SVN version of tmux. See this thread for more information: https://www.mail-archive.com/tmux-users@lists.sourceforge.net/msg04478.html
Q: When was bremsstrahlung discovered?

I remember that not long ago Wikipedia had some info regarding the discovery of that phenomenon, but apparently all reference to it has been removed. Searching the web I found a Serbian site claiming that Tesla discovered it in 1892 and published his finding in 1897. That is confirmed by Miles Mathis, but apparently he is not a reliable source. Do you have any precise and reliable details on the issue?

A: Seems around 1896 by Tesla, from here: http://pubs.rsna.org/doi/full/10.1148/rg.284075206
Q: Question has been migrated to another website

I posted my question on Serverfault.com a day back. URL: Access Remote files through http path. But after a day, I noticed a notification alert telling me that "Your question "Access Remote files through http path" was migrated to Unix & Linux Stack Exchange." I don't know which user migrated it, or whether it was a decision by the Stack Overflow committee. Not confirmed. Kindly suggest what the reason could be. I also didn't find any suitable answer for the above question, so I need help deciding on which Stack Exchange site I should post this question. Any help would be appreciated.

A: If your question is better suited for another site on the Stack Exchange network, then it is migrated to that site so that it has a better chance of receiving good answers. So, in a way, question migration is for your own good. Who can migrate a question? A site moderator can do this. The other alternative is that if 5 users who have the relevant privileges vote to migrate a particular question, then it will be migrated.
Q: Is there any way to change the color of a UIWebView in an iPhone application?

I am using a web view to display data from an RSS feed. Can I change the color of the web view from white to black? I have declared an outlet in a xib file named RemainingRssViewController and I am displaying it from the root view controller. The code is

    NSString *htmlstring = [[blogEntries objectAtIndex: blogEntryIndex] objectForKey: @"description"];
    NSURL *baseUrl;
    baseUrl = [[NSURL alloc] initWithString:feedurl];
    [anotherViewController.maintext loadHTMLString:htmlstring baseURL:baseUrl];
    // maintext is the UIWebView outlet in anotherViewController.

A: Ok, looks good. With this code:

    NSString *htmlstring = [[blogEntries objectAtIndex: blogEntryIndex] objectForKey:@"description"];

You set up an HTML string to go in the UIWebView. However, it's not actually HTML, because right now it's just text. To solve this, we'll instead make the "htmlstring" be styled HTML with the text from the XML feed. For example:

    NSString *textString = [[blogEntries objectAtIndex: blogEntryIndex] objectForKey:@"description"];
    NSString *htmlstring = [NSString stringWithFormat:@"<html><head><style type='text/css'>body { color:#FFFFFF; background-color: #000000; }</style></head><body>%@</body></html>", textString];

Then, when the "htmlstring" is placed inside the UIWebView, it will be styled as you would like it to be.
Q: AS3 - How to get a reference to the container of an aggregated object?

Simple enough. If I have a container class that holds a Sprite object, and I attach a touch listener to said Sprite, is there a reliable and cheap method of getting the object that contains the Sprite when it is touched? I realize I could just inherit from Sprite, but that is not what I want to do. Failing that, if I add the event listener to said Sprite object within the class that contains it, is there a way to dispatch an event that would allow me to get the reference to the container that holds the Sprite object that was touched? Thanks for any help.

Reply to loxxxy: When I said "held", I meant in terms of aggregation. For example:

    public class Container
    {
        [Embed(source = "img1.jpg")]
        private var img:Class;
        private var sprite:Sprite;
        private var bitmap:Bitmap;

        public function Container()
        {
            bitmap = new img();
            sprite = new Sprite();
            sprite.addChild(bitmap);
        }

        public function GetSprite():Sprite
        {
            return sprite;
        }
    }

Which is perfectly legal code. What I wanted was that, when the Sprite object is touched outside of the Container class, I could access other properties within the Container class through said Sprite object. However, a solid workaround would be something like the following, I think:

    public class Container extends InteractiveDisplayObject
    {
        [Embed(source = "img1.jpg")]
        private var img:Class;
        private var bitmap:Bitmap;

        public function Container()
        {
            bitmap = new img();
            this.addChild(bitmap);
        }
    }

Then, I could access the aggregate objects of the Container class by listening to touch events on the Container class, while making it fully extendable to any other DisplayObject class (TextField, Sprite, etc.). There's a very specific reason I want to do this; I just don't feel it's relevant to the actual question. I'll try this approach when I get some time to test it out, and see how it goes. Thanks!

A: You don't really need to dispatch events just for this purpose.
Add the event listener to the container and you can get a reference to both the container and the sprite. For example:

    container.addEventListener(MouseEvent.CLICK, container_touched, false, 0, true);

    function container_touched(e)
    {
        trace(e.target.name);        // Output : sprite
        trace(e.currentTarget.name); // Output : container
    }

EDIT: Or you could rather have exposed the sprite event to others by adding a function like:

    public function registerCallback(callback:Function)
    {
        var thisRef = this;
        sprite.addEventListener(MouseEvent.CLICK, function(e) {
            callback(thisRef);
        }, false, 0, true);
    }
Q: How to locate cause of "A string is required here" error Let me start by saying I'm a web developer inheriting a VB6 / Crystal Reports application, and I don't know either very well. My client was using Access as their database, and I've migrated them to SQL Server. Going back is not an option. I've gotten nearly their entire application working after the migration, but the Crystal Reports are having issues. I was previously getting an error that said "The server has not yet been opened." In an attempt to fix this, I converted the driver from ODBC to OLE DB. Now I'm getting an error that says "A string is required here." That's it...no stack trace, no Debug button. So I don't know how to track the problem down. All the other similar questions I've found always have the specific formula that's causing the problem, but that's where I'm stuck. Without a stack trace or Debug button or anything, I have no idea where to look for an error. So mine is more of a question on debugging strategy than a specific code problem. Where do I look for an error? If you need code samples, I can provide them, but you'll have to be specific as to how to get any dumps you need to see. I'm using the Microsoft Visual Basic 6.0 editor. I see the error message whenever I right-click on Database Fields and click Verify Database (it first says "The database is up to date" and then "A string is required here"). I also see the error when attempting to run the actual report in the application. I've tried going through all of the Formula Fields and wrapping database fields in CStr(...), but I still get the error. Where else should I look? A: From my experience with Crystal, no, there is no magic button to debug a broken report. I would definitely recommend opening the report in Crystal Reports (as opposed to VB). You'll have to check the usual suspects - start with the database Expert; verify the tables and joins are setup correctly. 
Look at the Record Selection formula (Report > Selection Formulas > Record) - use the "Check" button at the top left to verify syntax. When looking at Formulas and database fields, you can tell if the field/formula is used in the report if the field has a green checkmark next to it. Crystal is 'smart enough' in most cases that it doesn't validate logic if the object is not used by the report - This includes tables. If a table is joined, and links are not enforced, and the table is not used ANYWHERE in the report, Crystal won't even include the table in the SQL query. Just a helpful tidbit. Lastly, you can export the report defintion to a text file - Click export and select "Report Definition" - this can be helpful for searching for fields. Hope that helps.
Q: Sign In with FB issue: FB popup window with the Login dialog is blocked by a web browser

FB documentation says:

As noted in the reference docs for this function, it results in a popup window showing the Login dialog, and therefore should only be invoked as a result of someone clicking an HTML button (so that the popup isn't blocked by browsers).

And I did as it says: I put the FB.login function into an onClick function. But the Login dialog is still blocked. Why? How do I reorder the code?

    // Facebook
    // Here is a click event, so FB.login is inside a click function.
    $("#login_btn_fb").on("click", function() {
        function getUserData(res, fCallback) {
            if(res.authResponse != null) {
                FB.api('/me', function(response) {
                    console.log('/me_: ', response);
                });
            }
            else {
                console.log("getUserData CANCEL: ", res);
                return;
            }
        };

        FB.getLoginStatus(function(res) {
            var uid = null;
            var accessToken = null;

            if($.isPlainObject(res)) {
                // Fb+ App+
                if(res.status == "connected") {
                    console.error("connected");
                    uid = res.authResponse.userID;
                    accessToken = res.authResponse.accessToken;
                    getUserData(res, null);
                }
                // Fb+ App-
                else if(res.status == "not_authorized") {
                    console.error("not_authorized");
                    FB.login(function(res) {
                        getUserData(res, null);
                    }, {scope: 'email,user_birthday,user_photos,public_profile,user_location'});
                }
                // Fb- App-
                else {
                    console.log("UNKNOWN");
                    FB.login(function(res) {
                        // console.log("===UNK FB.login res: ", res);
                        getUserData(res, null);
                    }, {scope: 'email,user_birthday,user_photos,public_profile,user_location', return_scopes: true});
                };
            }
            // ERROR with FB
            else {
                alert("Facebook sign in failure. Please try again later.");
                return;
            }
        });
    });
Use FB.getLoginStatus to refresh a User Session and to check if the User is logged in right when you load your Page (right after FB.init), and FB.login only on User interaction - but NOT in an asynchronous callback, of course.
Q: C#, Windows Forms: How to prevent firing of EventHandlers while data is retrieved from the backend I have over 40 controls (TextBox, RadioButton, CheckBoxes, etc.) on Windows Forms. Each control is registered for EventHandlers (TextChanged, CheckChanged, etc.). I want to prevent these events from firing during initialization of the form. Unsubscribing all events before initialization and subscribing later is laborious. What is the best way to achieve this? A: You could enumerate all controls like: private void DisableAllHandlers() { foreach (var control in this.Controls) { // Use reflection } } And use the sources from the article How to remove all event handlers from a control to disable the handlers for each selected control.
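A lighter-weight alternative to unhooking the handlers is a guard flag that the form sets during initialization: the events still fire, but the handlers return immediately. Here is a minimal sketch of that pattern in Python (the class and method names are invented for illustration; they are not a WinForms API):

```python
class Form:
    def __init__(self):
        self.loading = True          # guard flag: suppress handlers during init
        self.changes = []
        self.set_text("initial")     # fires the event, but the handler bails out
        self.loading = False         # initialization done, handlers active

    def set_text(self, value):
        self.text = value
        self.on_text_changed(value)  # the event always fires...

    def on_text_changed(self, value):
        if self.loading:             # ...but is ignored while loading
            return
        self.changes.append(value)

form = Form()
form.set_text("user edit")           # now the handler runs normally
print(form.changes)                  # ['user edit']
```

In C# the equivalent is a private bool field checked at the top of each handler, which avoids reflection entirely.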
Q: One machine blocked from a single server, and only when it's at this physical location? I host an e-commerce website for a client who suddenly is unable to access it from his computer. He can ping the server, he can SSH in, but cannot load the website in any browser we've tried: IE, Firefox, Chrome, Opera. Other machines at this location can connect ok. DNS resolves fine; accessing directly by IP address does not work. Using Wireshark on the server side I can see the requests come in, and the responses leave our switch. Wireshark on the client end sees no reply. Tried so far: I find nothing in the client's router settings that would block the site for this one computer. He's had his public IP reassigned, and he's tried a different private IP to no avail. This all seems to point to malware or an otherwise busted Windows 7 install. The kicker: carry the machine off-site to a different Internet connection, and it works fine. I have never heard of malware that would deny only return responses from one specific server, and only while working from a specific physical location (the public IP was changed, with no difference). In my experience when some sort of filter is the cause, either all the machines behind a single connection would be blocked, or a single machine would be blocked behind all connections. Here it seems neither applies. What could be the problem? What do we test next? Update I noticed the initial redirect to www. was working before the hang, and that the favicon was coming through. Sure enough, both these requests were under 1k, and it led us to check the MTU values on the client and server. But why were they changed?
A: If the client's router is Linux-based, please check whether: 1) ICMP is blocked by the site or ISP 2) the client uses a PPPoE connection man iptables: TCPMSS This target allows to alter the MSS value of TCP SYN packets, to control the maximum size for that connection (usually limiting it to your outgoing interface's MTU minus 40 for IPv4 or 60 for IPv6, respectively). Of course, it can only be used in conjunction with -p tcp. This target is used to overcome criminally braindead ISPs or servers which block "ICMP Fragmentation Needed" or "ICMPv6 Packet Too Big" packets. The symptoms of this problem are that everything works fine from your Linux firewall/router, but machines behind it can never exchange large packets: 1. Web browsers connect, then hang with no data received. 2. Small mail works fine, but large emails hang. 3. ssh works fine, but scp hangs after initial handshaking. Workaround: activate this option and add a rule to your firewall configuration like: iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
Q: Bitmap PNG save without modifications I am attempting to load a PNG image into a Bitmap and save it without modifications. I tried something along these lines: var png = Bitmap.FromFile("t_02.png"); png.Save("t_02_out.png", ImageFormat.Png); I also tried: var png = Bitmap.FromFile("t_02.png"); png.Save("t_02_out.png"); In either case, the original 233kb file produced a 356kb image. What am I doing wrong? A: The image in question is stored with the grey-scale color type. The specification describes this: http://www.libpng.org/pub/png/spec/1.2/PNG-Chunks.html A pixel is then stored as a single byte. .NET saves a PNG file as 32-bit regardless of pixel format. The closest I got is using the AForge grayscale filter and storing that, which turns it into palette storage. The result is then much closer to the original, but due to the palette it is often still larger. Conclusion: .NET image format support is terrible. I used ImageMagick to solve .NET's incompetence.
Q: What is the purpose of dividing an audio signal into segments and analysing each segment? I read a bunch of materials on extracting features from audio signals and they all tell me to break the signal into segments. Why don't we analyze the whole audio signal? I don't know what the advantages of doing that are, or how wide a segment should be. I only see 256 samples per frame or 512 samples per frame... what about 1024 per frame? A: Analyzing signals per segment, with proper windowing, is a way to cope with non-stationarity in audio samples. With full-size analysis, features can get mixed. Segment-splitting is thus at play in many algorithms (mp3, shazam). The length of the window is often a matter of trade-offs, between data information and computing advantages: signal sampling (window length is quite meaningless without a sampling rate), with respect to the following: analyzing or extracting informational content from the signal: various ranges of stationarity may exist in the data, or generally useful processing features, easiness in computing: the power-of-two length you mention can be beneficial (faster algorithms like in the FFT), parallel computing, dedicated hardware, closer to real-time analysis.
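As a concrete sketch of segment-splitting (stdlib-only Python; the 512-sample frame and 50% hop are illustrative choices, not prescribed by the answer), each frame is multiplied by a Hann window before per-frame analysis such as an FFT:

```python
import math

def frames(signal, size, hop):
    """Split a signal into overlapping, Hann-windowed segments."""
    window = [0.5 - 0.5 * math.cos(2 * math.pi * i / (size - 1)) for i in range(size)]
    out = []
    for start in range(0, len(signal) - size + 1, hop):
        segment = signal[start:start + size]
        out.append([s * w for s, w in zip(segment, window)])
    return out

# A 440 Hz sine at an 8 kHz sampling rate, cut into 512-sample frames
signal = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(2048)]
segments = frames(signal, 512, 256)   # power-of-two length, 50% overlap
print(len(segments), len(segments[0]))  # 7 512
```

Each of those short segments is close enough to stationary that a per-frame spectrum is meaningful, which is exactly what the full-length analysis loses.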
Q: Can this be modified to run faster? I'm creating a word list using python that hits every combination of characters, which is a monster of a calculation past 94^4. Before you ask where I'm getting 94: 94 covers ASCII characters 32 to 127. Understandably this function runs super slow, and I'm curious if there's a way to make it more efficient. This is the meat and potatoes of my code. def CreateTable(name,ASCIIList,size): f = open(name + '.txt','w') combo = itertools.product(ASCIIList, repeat = size) for x in combo: passwords = ''.join(x) f.write(str(passwords) + '\n') f.close() I'm using this so that I can make lists to use in a brute force where I don't know the length of the passwords or what characters the password contains. Using a list like this I hit every possible combination of words, so I'm sure to hit the right one eventually. As stated earlier, this is a slow program; the resulting list is also slow to read in and will not be my first choice for a brute force, more or less a last-ditch effort. To give you an idea of how long that piece of code runs: creating all the combinations of size 5 ran for 3 hours, ending at a little over 50GB. A: Warning: I have not tested this code. I would convert combo to a list: combo_list = list(combo) I would then break it into chunks: # https://stackoverflow.com/a/312464/596841 def get_chunks(l, n): """Yield successive n-sized chunks from l.""" for i in range(0, len(l), n): yield l[i:i + n] # Change 1000 to whatever works.
chunks = get_chunks(combo_list, 1000) Next, I would use multithreading to process each chunk: class myThread (threading.Thread): def __init__(self, chunk_id, chunk): threading.Thread.__init__(self) self.chunk_id = chunk_id self.chunk = chunk def run(self): print("Starting " + str(self.chunk_id)) self.process_data() print("Exiting " + str(self.chunk_id)) def process_data(self): f = open(str(self.chunk_id) + '.txt','w') for item in self.chunk: passwords = ''.join(item) f.write(passwords + '\n') f.close() I would then do something like this: threads = [] for i, chunk in enumerate(chunks): thread = myThread(i, chunk) thread.start() threads.append(thread) # Wait for all threads to complete for t in threads: t.join() You could then write another script to merge all the output files, if you need.
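One caveat with the approach above: list(combo) materializes every combination in memory at once, which for 94^5 (over 7 billion items) is not feasible. A hedged alternative is to chunk the iterator lazily with itertools.islice, so only the current chunk is ever held in memory (shown here on a tiny two-character alphabet):

```python
import itertools

def lazy_chunks(iterable, n):
    """Yield lists of up to n items without materializing the whole iterable."""
    it = iter(iterable)
    while True:
        chunk = list(itertools.islice(it, n))
        if not chunk:
            return
        yield chunk

combos = ("".join(p) for p in itertools.product("ab", repeat=3))
out = list(lazy_chunks(combos, 3))
print(out)  # [['aaa', 'aab', 'aba'], ['abb', 'baa', 'bab'], ['bba', 'bbb']]
```

The same generator can feed the worker threads above one chunk at a time, keeping memory use constant regardless of alphabet size.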
Q: Setting high frame rate recording in Swift I'm trying to create an app to record video at 120fps but I'm having trouble. First, when I print(device.activeFormat), I get this in the logs AVCaptureDeviceFormat: 0x13fe49890 'vide'/'420v' 1920x1080, { 2- 30 fps}, fov:58.080, supports vis, max zoom:104.38 (upscales @1.55), AF System:1, ISO:34.0-544.0, SS:0.000024-0.500000 but my device is an iPhone 5s, which supports 120fps; I don't know why the range here is 2-30fps. Second, when I do device.activeVideoMaxFrameDuration = CMTimeMake(1, 120) to change the max frame rate to 120 fps, I get this error in the logs: [AVCaptureVideoDevice setActiveVideoMaxFrameDuration:] - the passed activeVideoMaxFrameDuration 1:120 is not supported by the device. What am I doing wrong? A: As you can see from your print(device.activeFormat), the max supported fps is 30, given {2- 30 fps}. Therefore, setting 120 fps with device.activeVideoMaxFrameDuration = CMTimeMake(1, 120) is not supported. To record at 120 fps you first need to switch device.activeFormat (inside lockForConfiguration) to one of device.formats whose videoSupportedFrameRateRanges includes 120 fps, and only then set the frame duration.
Q: In-App purchasing: Listen for the "Cancel" button? I'm trying to figure out how I can listen for the "Cancel" button that appears in the "confirmation" alert shown when a user tries to purchase something. You know, the official one done by Apple, which looks something like: "Confirm Your In App Purchase. Do you want to buy one $product for $price? [Cancel] [Buy]" If I understand my code correctly, the alert is initiated by something like this: SKPayment *payment = [SKPayment paymentWithProductIdentifier:productIdentifier]; [[SKPaymentQueue defaultQueue] addPayment:payment]; So basically I'd like to do something if they hit Cancel. Thanks A: Implement the paymentQueue:updatedTransactions: method from the SKPaymentTransactionObserver protocol. There you can check the transactionState and the error of each transaction object. I used something like that: - (void)paymentQueue:(SKPaymentQueue *)queue updatedTransactions:(NSArray *)transactions { for (SKPaymentTransaction *transaction in transactions) { switch (transaction.transactionState) { case SKPaymentTransactionStatePurchased: [self completeTransaction:transaction]; break; case SKPaymentTransactionStateFailed: if (transaction.error.code == SKErrorPaymentCancelled) { /// user has cancelled [self finishTransaction:transaction wasSuccessful:NO]; } else if (transaction.error.code == SKErrorPaymentNotAllowed) { // payment not allowed [self finishTransaction:transaction wasSuccessful:NO]; } else { // real error [self finishTransaction:transaction wasSuccessful:NO]; // show error } break; case SKPaymentTransactionStateRestored: [self restoreTransaction:transaction]; break; default: break; } } }
Q: Why should I use AWS RDS? I installed a LAMP stack on my AWS EC2 instances so that I can use the MySQL server. Somebody recommended using RDS. But RDS is not free, and it is also just a MySQL server. My question is: what makes RDS so special compared with my MySQL server in EC2 instances? Thanks. By the way, I'm quite new to AWS. A: RDS is a managed solution, which means AWS will take care of: Patches Backups Maintenance Making sure it's alive Hosting your database in a second EC2 instance means that: You have to manage everything of the above Using a LAMP stack and co-hosting Apache and MySQL is the cheapest, but: You have to manage everything of the above You're probably hosting a database on an instance exposed to the internet That said, if you're planning to host a production website / service that's more than a personal website / blog / experiment, you'll probably need to host the web server and database in different instances. Picking RDS is less of a headache. For anything that's not that important, a LAMP stack makes more sense. Less scalability, potentially less security, but also less administrative overhead and cost.
Q: Clicking text to check checkboxes in one table column I have rows inside a container with a text header and checkboxes in every column. I want the checkboxes below, inside the same container, to become checked when the header is clicked, while the ones in other containers are not affected (they stay in their original state, checked or unchecked). Here's a screenshot of the layout: here's the HTML code for the first container: <div class="container-fluid" style=""> <div class="row"> <div class="col-sm-5" style="background-color:green;"><P class="member">Member</p></div> <div class="col-sm-2" style="background-color:green;"><P class="add">ToogleAdd</p></div> <div class="col-sm-2" style="background-color:green;"><P class="edit">ToogleEdit</p></div> <div class="col-sm-2" style="background-color:green;"><P class="delete">ToogleDel</p></div> </div> <div class="row"> <div class="col-sm-5"><input type="checkbox" name="auth100" value="auth100" id="auth100" onclick ="togAuth1()">New Member + Kit Purchase</input></div> <div class="col-sm-2"><input type="checkbox" name="addAuth100" value="addAuth100" id="addAuth100">Add</input></div> <div class="col-sm-2"><input type="checkbox" name="editAuth100" value="editAuth100" id="editAuth100">Edit</input></div> <div class="col-sm-2"><input type="checkbox" name="delAuth100" value="delAuth100" id="delAuth100">Delete</input></div> </div> <div class="row"> <div class="col-sm-5"><input type="checkbox" name="auth101" value="auth101" id="auth101">New Member Registration</input></div> <div class="col-sm-2"><input type="checkbox" name="addAuth101" value="addAuth101" id="addAuth101">Add</input></div> <div class="col-sm-2"><input type="checkbox" name="editAuth101" value="editAuth101" id="editAuth101">Edit</input></div> <div class="col-sm-2"><input type="checkbox" name="delAuth101" value="delAuth101" id="delAuth101">Delete</input></div> </div> <div class="row"> <div class="col-sm-5"><input type="checkbox" name="auth102" value="auth102" id="auth102">Member
Data Maintenance</input></div> <div class="col-sm-2"><input type="checkbox" name="addAuth102" value="addAuth102" id="addAuth102">Add</input></div> <div class="col-sm-2"><input type="checkbox" name="editAuth102" value="editAuth102" id="editAuth102">Edit</input></div> <div class="col-sm-2"><input type="checkbox" name="delAuth102" value="delAuth102" id="delAuth102">Delete</input></div> </div> <div class="row"> <div class="col-sm-5"><input type="checkbox" name="auth103" value="auth103" id="auth103">Member Registration Listing</input></div> <div class="col-sm-2"><input type="checkbox" name="addAuth103" value="addAuth103" id="addAuth103">Add</input></div> <div class="col-sm-2"><input type="checkbox" name="editAuth103" value="editAuth103" id="editAuth103">Edit</input></div> <div class="col-sm-2"><input type="checkbox" name="delAuth103" value="delAuth103" id="delAuth103">Delete</input></div> </div> <div class="row"> <div class="col-sm-5"><input type="checkbox" name="auth104" value="auth104" id="auth104">Geneology Listing</input></div> <div class="col-sm-2"><input type="checkbox" name="addAuth104" value="addAuth104" id="addAuth104">Add</input></div> <div class="col-sm-2"><input type="checkbox" name="editAuth104" value="editAuth104" id="editAuth104">Edit</input></div> <div class="col-sm-2"><input type="checkbox" name="delAuth104" value="delAuth104" id="delAuth104">Delete</input></div> </div> <div class="row"> <div class="col-sm-5"><input type="checkbox" name="auth105" value="auth105" id="auth105">Member Rank Report</input></div> <div class="col-sm-2"><input type="checkbox" name="addAuth105" value="addAuth105" id="addAuth105">Add</input></div> <div class="col-sm-2"><input type="checkbox" name="editAuth105" value="editAuth105" id="editAuth105">Edit</input></div> <div class="col-sm-2"><input type="checkbox" name="delAuth105" value="delAuth105" id="delAuth105">Delete</input></div> </div> html code for second container: <div class="container-fluid"> <div class="row"> <div 
class="col-sm-5" style="background-color:green;"><P class="member">Member</p></div> <div class="col-sm-2" style="background-color:green;"><P class="add">ToogleAdd</p></div> <div class="col-sm-2" style="background-color:green;"><P class="edit">ToogleEdit</p></div> <div class="col-sm-2" style="background-color:green;"><P class="delete">ToogleDel</p></div></div><div class="row"> <div class="col-sm-5"><input type="checkbox" name="auth100" value="auth100" id="auth200">New Member + Kit Purchase</input></div> <div class="col-sm-2"><input type="checkbox" name="addAuth100" value="addAuth100" id="addAuth200">Add</input></div> <div class="col-sm-2"><input type="checkbox" name="editAuth100" value="editAuth100" id="editAuth200">Edit</input></div> <div class="col-sm-2"><input type="checkbox" name="delAuth100" value="delAuth100" id="delAuth200">Delete</input></div></div></div> Here's the unfinished jQuery code: $('.container-fluid').each(function(){$('.row:first').each(function(){$('.member').click(function(){ }); $('.add').click(function(){ }); $('.edit').click(function(){ }); $('.delete').click(function(){ }); });}); A: Add classes to all the checkboxes (.edit_check, .add_check, etc.), then toggle them with jQuery. Scope the lookup with .closest() so only the checkboxes in the clicked header's own container are affected: $('.add').click(function(){ var boxes = $(this).closest('.container-fluid').find('.add_check'); boxes.prop("checked", !boxes.is(":checked")); }); The plain-JS equivalent is document.getElementsByClassName("add_check") - note the plural, the class name without a leading dot, and that it returns a collection you have to loop over - not document.getElementByClassName(".add_check").
Q: Quarter on Quarter/ Month on Month analysis in mysql I have loans data and I want to compare sales in different years by quarter or month. My data looks like this disbursementdate | amount | product | cluster 2017-01-01 | 1000 | HL | West 2018-02-01 | 1000 | PL | East So after querying, I'd ideally want the result to look like this Quarter | 2017 | 2018 Q1 | 1000 | 0 Q2 | 100 | 1000 Similarly, it can be done for a monthly analysis as well. I'm not averse to storing the data in a different format either ... I could split the date into separate fields like month, quarter and year. I'm struggling with the query. A: You can use conditional aggregation: select quarter(disbursementdate) as quarter, sum(case when year(disbursementdate) = 2017 then amount else 0 end) as amount_2017, sum(case when year(disbursementdate) = 2018 then amount else 0 end) as amount_2018 from yourtable group by quarter(disbursementdate) ; If you wanted year/quarter on separate rows, you would do: select year(disbursementdate) as year, quarter(disbursementdate) as quarter, sum(amount) from yourtable group by year(disbursementdate), quarter(disbursementdate) ;
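The conditional-aggregation pattern is easy to test locally with Python's sqlite3; here is a sketch with invented sample rows (SQLite has no QUARTER() or YEAR() functions, so both are derived via strftime):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE loans (disbursementdate TEXT, amount INTEGER)")
conn.executemany("INSERT INTO loans VALUES (?, ?)", [
    ("2017-01-01", 1000), ("2017-04-15", 100),
    ("2018-02-01", 1000), ("2018-05-20", 1000),
])
# quarter = (month + 2) / 3 with integer division; year comes from strftime('%Y')
rows = conn.execute("""
    SELECT (CAST(strftime('%m', disbursementdate) AS INTEGER) + 2) / 3 AS quarter,
           SUM(CASE WHEN strftime('%Y', disbursementdate) = '2017' THEN amount ELSE 0 END) AS amount_2017,
           SUM(CASE WHEN strftime('%Y', disbursementdate) = '2018' THEN amount ELSE 0 END) AS amount_2018
    FROM loans GROUP BY quarter ORDER BY quarter
""").fetchall()
print(rows)  # [(1, 1000, 1000), (2, 100, 1000)]
```

The CASE expressions pivot each year into its own column, which is exactly what the MySQL query above does with year() and quarter().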
Q: Seymour's second neighborhood conjecture Does anyone out there know if Seymour's second neighborhood conjecture is still open? If not, I would appreciate any references. A: As far as I know, it is still open. Here are some related results (probably you know all of them): Chen-Shen-Yuster proved that for any digraph $D$, there exists a vertex $v$ such that $|N^{++}(v)|\geq\gamma|N^+(v)|$, where $\gamma=0.67815$. See "Second neighborhood via first neighborhood in digraphs", Ann. Comb. 7 (2003), no. 1, 15–20. (Recall Seymour's second neighborhood conjecture asserts that $|N^{++}(v)|\geq|N^+(v)|$ for some $v$.) Fisher proved that Seymour's second neighborhood conjecture is true for a tournament $D$, that is, when the underlying graph of $D$ is a complete graph. See "Squaring a tournament: a proof of Dean's conjecture", J. Graph Theory 23 (1996), no. 1, 43–48. Ghazal proved that Seymour's second neighborhood conjecture is true for tournaments missing a generalized star. See "Seymour's second neighborhood conjecture for tournaments missing a generalized star", J. Graph Theory 71 (2012), no. 1, 89–94.
Q: Async/Await single thread/some threads I need a little rule about correct usage of await. Run this code in .net core c# 7.2: static class Program { static async Task<string> GetTaskAsync(int timeout) { Console.WriteLine("Task Thread: " + Thread.CurrentThread.ManagedThreadId); await Task.Delay(timeout); return timeout.ToString(); } static async Task Main() { Console.WriteLine("Main Thread: " + Thread.CurrentThread.ManagedThreadId); Console.WriteLine("Should be greater than 5000"); await Watch(NotParallel); Console.WriteLine("Should be less than 5000"); await Watch(Parallel); } public static async Task Parallel() { var res1 = GetTaskAsync(2000); var res2 = GetTaskAsync(3000); Console.WriteLine("result: " + await res1 + await res2); } public static async Task NotParallel() { var res1 = await GetTaskAsync(2000); var res2 = await GetTaskAsync(3000); Console.WriteLine("result: " + res1 + res2); } private static async Task Watch(Func<Task> func) { var sw = new Stopwatch(); sw.Start(); await func?.Invoke(); sw.Stop(); Console.WriteLine("Elapsed: " + sw.ElapsedMilliseconds); Console.WriteLine("---------------"); } } As you can see, the behavior of the two methods is different. It's easy to get this wrong in practice, so I need a rule of thumb. Update Please run the code, and explain why Parallel() runs faster than NotParallel(). A: When calling GetTaskAsync without await, you actually get a Task with the method to execute (that is, GetTaskAsync) wrapped inside. But when calling await GetTaskAsync, execution is suspended until the method is done executing, and then you get the result. Let me be clearer: var task = GetTaskAsync(2000); Here, task is of type Task<string>. var result = await GetTaskAsync(2000); Here result is of type string. So to address your first question: when to await your Tasks really depends on your execution flow.
Now, as to why Parallel() is faster, I suggest you read this article (everything is of interest, but for your specific example, you may jump to Tasks return "hot"). Now let's break it down: The await keyword serves to halt the code until the task is completed, but doesn't actually start it. In your example, NotParallel() will take longer because your Tasks execute sequentially, one after the other. As the article explains: This is due to the tasks being awaited inline. In Parallel() however... the tasks now run in parallel. This is due to the fact that all [tasks] are started before all [tasks] are subsequently awaited, again, because they return hot. About 'hot' tasks I suggest you read the following: Task-based Asynchronous Pattern (TAP) The Task Status section is of interest here to understand the concepts of cold and hot tasks: Tasks that are created by the public Task constructors are referred to as cold tasks, because they begin their life cycle in the non-scheduled Created state and are scheduled only when Start is called on these instances. All other tasks begin their life cycle in a hot state, which means that the asynchronous operations they represent have already been initiated I invite you to read extensively about async/await and Tasks. Here are a few resources in addition to the ones I provided above: Asynchronous Programming in C# 5.0 part two: Whence await? Async/Await - Best Practices in Asynchronous Programming Async and Await
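The same sequential-versus-concurrent distinction can be sketched in Python's asyncio, with one important caveat: unlike C# tasks, a Python coroutine is "cold" - calling work() does not start it - so it must be wrapped in a task to get the "hot", started-immediately behavior that Parallel() relies on:

```python
import asyncio
import time

async def work(delay):
    await asyncio.sleep(delay)
    return delay

async def not_parallel():
    a = await work(0.2)      # second call only starts after the first finishes
    b = await work(0.3)      # total: ~0.5 s
    return a + b

async def parallel():
    t1 = asyncio.ensure_future(work(0.2))  # scheduled immediately ("hot")
    t2 = asyncio.ensure_future(work(0.3))
    return await t1 + await t2             # both already running: ~0.3 s total

start = time.monotonic()
asyncio.run(not_parallel())
sequential_elapsed = time.monotonic() - start

start = time.monotonic()
asyncio.run(parallel())
parallel_elapsed = time.monotonic() - start

print(f"sequential: {sequential_elapsed:.2f}s, parallel: {parallel_elapsed:.2f}s")
```

The timings mirror the C# output: awaiting inline serializes the delays, while starting both tasks first lets the waits overlap.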
Q: API for powerpoint presentations to flash How do I go about automating conversions from PowerPoint to Flash? I want a user to be able to upload a PowerPoint file into my web page, and on the server I want to convert the PowerPoint to a Flash movie. Is there any preferred method for doing this? I've searched on Google and I just keep getting a lot of 3rd party software vendors selling addon software, but I can't seem to find any useful guides or tutorials for doing this. A: There are only two (decent) solutions that I know of - the first being a little more decent than the second. iSpring offers a free version of their PowerPoint to Flash converter called iSpringFree. It's decent. I use the Pro version because I also have a need to add e-Learning/SCORM functionality - but if you don't have that need, it should be fine. In general, it is also one of the better converters out there (amongst a host of many pay-for PPT->SWF converters). You could open your PPT/PPTX in OpenOffice.org's Impress and then Export to Flash format. Having server-side triggered conversion is a little more tricky - I don't know of a server-side component other than the pay-for SDK solution by iSpring that offers this. The two above are for manual conversion.
Q: Generate Unique hash from Long id I need to generate a unique hash from an ID value of type Long. My concern is that it should never generate the same hash from two different Long/long values. MD5 hashing looks like a nice solution but the hash String is very long. I only need characters 0-9 a-z and A-Z And just 6-characters like: j4qwO7 What could be the simplest solution? A: Your requirements cannot be met. You've got an alphabet of 62 possible characters, and 6 characters available - which means there are 62^6 possible IDs of that form. However, there are 256^8 possible long values. By the pigeon-hole principle, it's impossible to give each of those long values a different ID of the given form. A: You don't have to use the hex representation. Build your own hash representation by using the actual hash bytes from the function. You could truncate the hash output to simplify the hash representation, but that would make collisions more probable. Edit: The other answers stating that what you ask isn't possible, based on the number of possible long values, are theoretically true, if you actually need the whole range. If your IDs are auto-incremented from zero and up, just 62^6 = 56800235584 values might be more than enough for you, depending on your needs.
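The arithmetic is easy to check, and if the IDs are auto-incremented (and so stay below 62^6), a reversible base-62 encoding - rather than a hash - gives collision-free strings of at most six characters. A sketch in Python (the alphabet ordering is an arbitrary choice):

```python
import string

ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase  # 62 symbols

def base62(n):
    """Encode a non-negative integer in base 62; reversible, so collision-free."""
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n:
        n, r = divmod(n, 62)
        digits.append(ALPHABET[r])
    return "".join(reversed(digits))

print(62 ** 6 < 2 ** 64)       # True: six characters cannot cover every long
print(base62(61), base62(62))  # Z 10
```

Because the encoding is a bijection, no two IDs can collide - something no truncated hash can guarantee.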
Q: Executing Sp_SetAppRole from ExecuteSqlCommand Here is my Code: SqlParameter rolename = new SqlParameter("rolename", SqlDbType.VarChar); rolename.Direction = ParameterDirection.Input; rolename.Value = "role"; SqlParameter password = new SqlParameter("password", SqlDbType.VarChar); password.Direction = ParameterDirection.Input; password.Value = "123@123"; SqlParameter fCreateCookie = new SqlParameter("fCreateCookie", SqlDbType.Bit); fCreateCookie.Direction = ParameterDirection.Input; fCreateCookie.Size = 100; fCreateCookie.Value = true; SqlParameter cookie = new SqlParameter("cookie", SqlDbType.VarBinary); cookie.Direction = ParameterDirection.Output; cookie.Size = 8000; _context.Database.ExecuteSqlCommand(TransactionalBehavior.DoNotEnsureTransaction, "EXEC sp_setapprole @rolename,@password,@fCreateCookie, @cookie OUT;", rolename, password, fCreateCookie, cookie); And I am getting an Error: The formal parameter "@fCreateCookie" was not declared as an OUTPUT parameter, but the actual parameter passed in requested output. Can someone tell me what I am doing wrong? A: You are missing a parameter: ALTER procedure [sys].[sp_setapprole] @rolename sysname, -- name app role @password sysname, -- password for app role --> @encrypt varchar(10) = 'none', -- Encryption style ('none' | 'odbc') @fCreateCookie bit = 0, @cookie varbinary(8000) = 0xFFFFFFFF OUTPUT as You need to add a parameter @encrypt = 'none' before @fCreateCookie in your ExecuteSqlCommand call.
Q: ASP.Net Exception Shows File Path When my ASP.Net MVC application encounters an error, the full file path of the c# class is displayed in the exception even though I've only deployed binaries. E.g. at: C:\DevelopmentServer\MVC_Project\AccountManagement.cs line 45 Where is this path information being stored? Is it in the compiled dll and is there a way to remove it? A: I believe this is stored in the PDB files generated during the compile, and it reflects the paths to the code on the build machine.
Q: T-SQL Overlapping time range parameter in stored procedure I would like to search for records that occurred at a specific time of day and within a date/time range. Example: My table: ID | EmpID | AuthorizationTime ------------------------------- 1 | 21455 | '23/01/2012 12:44' 2 | 22311 | '23/01/2012 18:15' 3 | 21455 | '23/01/2012 23:04' 4 | 10222 | '24/01/2012 03:31' 5 | 21456 | '24/01/2012 09:00' 6 | 53271 | '25/01/2012 12:15' 7 | 10222 | '26/01/2012 18:30' 8 | 76221 | '27/01/2012 09:00' Sample SP input parameters: @from: 22/01/2012 08:00 @to: 24/01/2012 23:00 @fromtime: 18:30 @totime: 08:00 Expected Output: EntryID EmployeeID AuthorisationTime 3 21455 '23/01/2012 23:04' 4 10222 '24/01/2012 03:31' I've tried the following select statements in the SP: ... Select @wAuthorizationTime=' AuthorizationTime between ''' + CONVERT(nvarchar(30), @from )+ ''' and ''' + convert(nvarchar(50),@to )+ ''' ' Select @Where = @wAuthorizationTime; Declare @wHours nvarchar(1000)=''; if (ISNULL(@fromtime,'')<>'' and ISNULL(@ToTime,'')<> '') begin Select @wHours= ' (Cast(AuthorizationTime as time) between ''' + @fromTime + ''' and '''+ @ToTime +''')' end if (@wHours <> '') Select @Where=@Where + ' and ' + @wHours ... The problem with this statement is that I'm not getting any results if the end time is lower than the start time (e.g. 23:00 to 03:00). It does work if I use a time frame that doesn't overlap midnight (e.g. 18:00 to 23:59). What do I need to do to get the above results? A: This should give you what you want: select * from Times where AuthorizationTime >= @from and AuthorizationTime <= @to and ( (@fromtime > @totime and ((cast(AuthorizationTime as time) between '00:00:00' and @totime) or (cast(AuthorizationTime as time) between @fromtime and '23:59:59.999') ) ) or (@fromtime <= @totime and cast(AuthorizationTime as time) between @fromtime and @totime) ) SQL Fiddle
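The wrap-around logic is easier to see in isolation: when the time window crosses midnight (from-time later than to-time), the test becomes an OR of two ranges instead of a single BETWEEN. A small Python sketch of the same predicate the SQL implements, checked against the sample rows:

```python
from datetime import time

def in_window(t, start, end):
    """True if t lies in the window [start, end], even when it crosses midnight."""
    if start <= end:
        return start <= t <= end      # plain range, e.g. 18:00-23:59
    return t >= start or t <= end     # wraps midnight, e.g. 18:30-08:00

print(in_window(time(23, 4), time(18, 30), time(8, 0)))   # True  (row 3)
print(in_window(time(3, 31), time(18, 30), time(8, 0)))   # True  (row 4)
print(in_window(time(12, 44), time(18, 30), time(8, 0)))  # False (row 1)
```

The single BETWEEN in the original SP only covers the first branch, which is exactly why it returned nothing once the window wrapped past midnight.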
Q: Dynamic styling not working in Angular2 Following is my working code, which gives no errors in the console and prints the array items and the test heading as expected. BUT somehow the dynamic background styling is not working. Let me know what I am doing wrong here. import { Component } from '@angular/core'; @Component({ selector: 'my-app', template: ` <h1>{{name}}</h1> <div class="one" *ngFor="let item of colArr" style="background: {{item}};">{{item}}</div> `, styles: [` .one { width: 100%; text-align: center; padding: 10px; margin: 5px; } `] }) export class HomeComponent { public name = 'test'; public colArr = ['#111111', '#222222', '#333333', '#444444', '#555555', '#666666', '#777777', '#888888', '#999999']; } Following is the output I am getting - A: Direct binding to style is discouraged (it doesn't work well on all browsers). Use instead <div class="one" *ngFor="let item of colArr" [ngStyle]="{background: item}">{{item}}</div> or <div class="one" *ngFor="let item of colArr" [style.background]="item">{{item}}</div>
Q: Fetch FullCalendar.io Event Objects (JSON) from Database for a specific date-range I have a huge database with a lot of events. Every time I go to the calendar main page (monthly view), all the existing events in the database are loaded, even though I'm only viewing the current month. At the moment my HTML page is over 3MB and tends to slow down my browser tab. To solve this problem I started to change the code to fetch only the events for the current month as JSON. Unfortunately, the start and end dates for the date-range are not working - the page is still fetching all events from the database. I have already done a few hours of research and a lot of tweaks. I'm using FullCalendar V3.10 So far I managed to fetch my FullCalendar.io events - I have two event sources - so I used: eventSources: [ { url: 'include/load-calendar-event.php', // use the `url` property color: '#008000', }, { url: 'include/load-calendar-event-retour.php', // use the `url` property color: '#008000', }, // any other sources... 
],

The corresponding two files to fetch the events are almost identical - so one should be enough:

load-calendar-event.php

require_once('bdd.php');

$sql = "SELECT * FROM messages"; // this selects all rows
$req = $bdd->prepare($sql);
$req->execute();
$events = $req->fetchAll();

$data = array();
foreach($events as $event) {
    $start = explode(" ", $event['start']);
    $end = explode(" ", $event['end']);
    if($start[1] == '00:00:00'){
        $start = $start[0];
    }else{
        $start = $event['start'];
    }
    if($end[1] == '00:00:00'){
        $end = $end[0];
    }else{
        $end = $event['end'];
    }
    $data[] = array(
        'id'=> $event['id'],
        'title'=> $event['title'],
        'start'=> $start,
        'end'=> $end,
        'color'=> $event['color']
    );
}
echo json_encode($data);

When the calendar page loads, the browser makes the following calls - please note the date range (2020-01-01 - 2020-02-01) and the date format:

...include/load-calendar-event.php?start=2020-01-01&end=2020-02-01&_=1578601056565

and

...include/load-calendar-event-retour.php?start=2020-01-01&end=2020-02-01&_=1578601056566

screenshot of call in firebug

The events' dates are saved in the database in the following format (YYYY-MM-DD HH:mm:ss): 2020-01-11 10:00:00

Question: Any idea how to fetch the events ONLY for the current month?

A: Why are you selecting all the rows in the table and iterating over them? You need to update your query to pull in the start and end values you're passing as $_GET parameters.

require_once('bdd.php');

$sdate = $_GET['start']; //LIKE THIS
$edate = $_GET['end']; //AND THIS

$sql = "SELECT * FROM messages WHERE date >= '".$sdate."' AND date <= '".$edate."'"; // the quotes around the dates matter: unquoted, 2020-01-01 is evaluated as arithmetic. Change 'date' to whatever your column name is in the database.
$req = $bdd->prepare($sql);
$req->execute();
$events = $req->fetchAll();

$data = array();
foreach($events as $event) {
    $start = explode(" ", $event['start']);
    $end = explode(" ", $event['end']);
    if($start[1] == '00:00:00'){
        $start = $start[0];
    }else{
        $start = $event['start'];
    }
    if($end[1] == '00:00:00'){
        $end = $end[0];
    }else{
        $end = $event['end'];
    }
    $data[] = array(
        'id'=> $event['id'],
        'title'=> $event['title'],
        'start'=> $start,
        'end'=> $end,
        'color'=> $event['color']
    );
}
echo json_encode($data);

You could do this more easily by simply using PHP's date function to generate the start and end dates instead of reading them from the query string parameters, although passing them in gives you more flexibility. You can even include some if/else statements to generate different queries depending on whether these values are set. Note that concatenating request parameters straight into SQL is open to SQL injection; since the code already uses prepare(), binding the two dates as placeholders is safer. See: https://www.php.net/manual/en/function.date.php
Q: Jquery file download no callback working I am using the jquery.filedownload plugin with asp web api to download a file and display error message from the server. I have setup the plugin and added cookies to my response as indicated on github: https://github.com/johnculviner/jquery.fileDownload My file is being downloaded successfully, however callbacks are not working. Js var url = "/WebApi/PayrollBatches/GetBatchCsv?batchId=" + batchId; $.fileDownload(url, { successCallback: function (url) { alert('success'); }, failCallback: function (responseHtml, url) { alert('error'); } }); return false; //this is critical to stop the click event which will trigger a normal file download! Asp Web Api [HttpGet] public async Task<HttpResponseMessage> GetBatchCsv(int batchId) { string csv; try { using (var Dbcontext = new RossEntities()) { PayrollBatchExport dal = new PayrollBatchExport(Dbcontext); csv = await dal.GetBatchCsv(batchId); } } catch (Exception ex) { HttpError myCustomError = new HttpError(ex.Message) { { "CustomErrorCode", 42 } }; return Request.CreateErrorResponse(HttpStatusCode.InternalServerError, myCustomError); } var cookie = new CookieHeaderValue("fileDownload", "true"); var cookiePath = new CookieHeaderValue("path", "/"); HttpResponseMessage response = new HttpResponseMessage(HttpStatusCode.OK); response.Content = new StringContent(csv); response.Content.Headers.ContentType = new MediaTypeHeaderValue("text/csv"); response.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment"); response.Content.Headers.ContentDisposition.FileName = "Export.csv"; response.Headers.AddCookies(new CookieHeaderValue[] { cookie, cookiePath }); response.StatusCode = HttpStatusCode.OK; return response; } Edit Below is what my browser console looks like when the server responds with error 500: A: OK, after spending some time with Fiddler, the problem was with a cookie. 
Notice this code inside jquery.fileDownload.js:

function checkFileDownloadComplete() {
    //has the cookie been written due to a file download occuring?
    var cookieValue = settings.cookieValue;
    if (typeof cookieValue == 'string') {
        cookieValue = cookieValue.toLowerCase();
    }
    var lowerCaseCookie = settings.cookieName.toLowerCase() + "=" + cookieValue;
    if (document.cookie.toLowerCase().indexOf(lowerCaseCookie) > -1) {
        //execute specified callback
        internalCallbacks.onSuccess(fileUrl);
...

The success callback is called only if the server returns a cookie, as the project page says; however, the API controller wasn't returning this cookie correctly. This code works well for me and is also described in the official docs:

public class PayrollBatchesController : ApiController
{
    [HttpGet]
    public async Task<HttpResponseMessage> GetBatchCsv(int batchId)
    {
        string csv;
        try
        {
            string path = System.Web.HttpContext.Current.Request.MapPath(@"~\Files\testfile.csv");
            csv = File.ReadAllText(path); // await dal.GetBatchCsv(batchId);
        }
        catch (Exception ex)
        {
            HttpError myCustomError = new HttpError(ex.Message) { { "CustomErrorCode", 42 } };
            HttpResponseMessage errorResponse = Request.CreateErrorResponse(HttpStatusCode.InternalServerError, myCustomError);
            errorResponse.Content = new StringContent("error: " + ex.ToString());
            return errorResponse;
        }

        HttpResponseMessage response = new HttpResponseMessage(HttpStatusCode.OK);

        // Set Cookie
        var cookie = new CookieHeaderValue("fileDownload", "true");
        cookie.Expires = DateTimeOffset.Now.AddDays(1);
        cookie.Domain = Request.RequestUri.Host;
        cookie.Path = "/";
        response.Headers.AddCookies(new CookieHeaderValue[] { cookie });
        // -------------

        response.Content = new StringContent(csv);
        response.Content.Headers.ContentType = new MediaTypeHeaderValue("text/csv");
        response.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment");
        response.Content.Headers.ContentDisposition.FileName = "Export.csv";
        response.StatusCode = HttpStatusCode.OK;
return response; } } Please note, that for demo purposes I am not retrieving CSV data from DAL but from a testing file as the above code shows. Also, to be complete here, I am attaching the client side code: <body> <div> <button id="btn" type="button">get</button> </div> <script src="~/Scripts/fileDownload.js"></script> <script> $(document).ready(function () { $('#btn').click(function () { var url = "/api/PayrollBatches/GetBatchCsv?batchId=1"; $.fileDownload(url, { successCallback: function (url) { alert('success'); }, failCallback: function (responseHtml, url) { alert('error'); } }); return false; }); }); </script> </body> Edit: fail callback
Q: Django: Custom model field does not show correct value in the form I needed to store arbitrary data in a relational database of multiple data-types so I came up with a solution of in addition to storing the data itself also to store what is the datatype of the data (str, int, etc). This allows upon retreival to cast the string which is stored in the db into whatever proper data-type the data is. In order to store the data-type I made a custom model field: class DataType(object): SUPPORTED_TYPES = { u'unicode': unicode, u'str': str, u'bool': bool, u'int': int, u'float': float } INVERSE_SUPPORTED_TYPES = dict(zip(SUPPORTED_TYPES.values(), SUPPORTED_TYPES.keys())) TYPE_CHOICES = dict(zip(SUPPORTED_TYPES.keys(), SUPPORTED_TYPES.keys())) def __init__(self, datatype=None): if not datatype: datatype = unicode t_datatype = type(datatype) if t_datatype in [str, unicode]: self.datatype = self.SUPPORTED_TYPES[datatype] elif t_datatype is type and datatype in self.INVERSE_SUPPORTED_TYPES.keys(): self.datatype = datatype elif t_datatype is DataType: self.datatype = datatype.datatype else: raise TypeError('Unsupported %s' % str(t_datatype)) def __unicode__(self): return self.INVERSE_SUPPORTED_TYPES[self.datatype] def __str__(self): return str(self.__unicode__()) def __len__(self): return len(self.__unicode__()) def __call__(self, *args, **kwargs): return self.datatype(*args, **kwargs) class DataTypeField(models.CharField): __metaclass__ = models.SubfieldBase description = 'Field for storing python data-types in db with capability to get python the data-type back' def __init__(self, **kwargs): defaults = {} overwrites = { 'max_length': 8 } defaults.update(kwargs) defaults.update(overwrites) super(DataTypeField, self).__init__(**overwrites) def to_python(self, value): return DataType(value) def get_prep_value(self, value): return unicode(DataType(value)) def value_to_string(self, obj): val = self._get_val_from_obj(obj) return self.get_prep_value(val) So this allows me to do 
something like this:

class FooModel(models.Model):
    data = models.TextField()
    data_type = DataTypeField()

>>> foo = FooModel.objects.create(data='17.94', data_type=float)
>>> foo.data_type(foo.data)
17.94
>>> type(foo.data_type(foo.data))
float

So my problem is that in the Django admin (I am using ModelAdmin), the value for data_type in the text box does not show up properly. Whenever it is float (and in the db it is stored as float, I checked), the value displayed is 0.0. For int it displays 0. For bool it displays False. Instead of showing the string representation of the data_type, somewhere Django actually calls it, which means __call__ is invoked without any parameters, and that results in those values. For example:

>>> DataType(float)()
0.0
>>> DataType(int)()
0
>>> DataType(bool)()
False

I figured out how to monkey patch it by replacing the __call__ method with the following:

def __call__(self, *args, **kwargs):
    if not args and not kwargs:
        return self.__unicode__()
    return self.datatype(*args, **kwargs)

This displays the correct value in the form, however I feel that this is not very elegant. Is there any way to make it better? I could not figure out where Django called the field value in the first place. Thanks

A: Regarding why your DataType gets called, read this: https://docs.djangoproject.com/en/1.4/topics/templates/#accessing-method-calls The clean solution might be to simply rename __call__ to something more explicit.
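Following the answer's suggestion, here is a minimal sketch with the callable renamed to an explicit method. The name cast, the trimmed-down type table, and the Python 3 style are my choices for illustration, not part of the original code:

```python
class DataType:
    """Wraps a python type so it can be stored as a string in the db."""

    SUPPORTED_TYPES = {'str': str, 'bool': bool, 'int': int, 'float': float}
    INVERSE_SUPPORTED_TYPES = {v: k for k, v in SUPPORTED_TYPES.items()}

    def __init__(self, datatype=str):
        # Accept either a type name ('float') or the type itself (float).
        if isinstance(datatype, str):
            datatype = self.SUPPORTED_TYPES[datatype]
        elif isinstance(datatype, DataType):
            datatype = datatype.datatype
        self.datatype = datatype

    def __str__(self):
        # This string is what the admin form should render, e.g. 'float'.
        return self.INVERSE_SUPPORTED_TYPES[self.datatype]

    def cast(self, value):
        # Explicit method instead of __call__, so template rendering
        # no longer invokes the object and produces 0.0 / 0 / False.
        return self.datatype(value)
```

Django's template engine calls any callable it finds while resolving a variable (that is what the linked doc describes), so removing __call__ entirely is what makes the surprising 0.0/0/False values go away.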
Q: Number of simultaneous requests a PHP server supports

Suppose I have a server with an i7 processor with 4 cores/8 threads. In a multi-threaded architecture, assuming one thread is created per request, will only 8 simultaneous requests be allowed, since the processor has 8 threads? If PHP servers are multi-threaded, how do they manage to respond to thousands of simultaneous connections?

A: They don't. Only as many things run simultaneously as there are processors. There is an illusion of simultaneity, just as on your computer right now: there are hundreds or thousands of processes running and everything seems simultaneous, but it isn't. Execution keeps being switched. The operating system schedules one thread at a time on each available processor. Because the switching happens very quickly, it superficially looks like they are all executing at once, but a basic timing test will show that this isn't quite the case.

That covers the processor. It happens that a large share of tasks involve input and output, and while data is being read or written outside the processor, the processor sits idle. So having many threads, well beyond those 8, can be useful: while one thread waits for an external resource to respond, another thread that doesn't depend on an external resource can run, which helps sustain the illusion of simultaneity. In fact, these days it is more common for applications to work asynchronously rather than depend so heavily on explicit threads to compensate for external accesses.

In any case, I have my doubts whether a single server can handle thousands of "simultaneous" requests with PHP. Node limits itself to the number of virtual CPUs because it works asynchronously; there is no reason to use an excess of threads, because the processor is engaged on demand and there is no need for CPU contention. It queues the excess requests, since there is no way to serve more than the hardware's capacity. That saves management overhead, which is why it scales much better, besides the fact that utilization doesn't depend so much on the "luck" of the moment. Node became known for doing this, but every technology does it today, generally using libuv.

In general, people don't understand much of what they are doing; they read something and assume it is exactly as written, without questioning it or understanding what is going on. Read "Is it always guaranteed that a multi-threaded application runs faster than one using a single thread?" to understand better.
Q: Rotating in glutIdleFunc within a specified range Let's say I have this glutIdleFunc going in the background of an OpenGL scene containing a little creature with multiple, radially arranged legs that "pulsate": void PulsateLegs(void) { lowerLegsRot = (lowerLegsRot + 1)%360; glutPostRedisplay(); } ...where the lowerLegsRot value is used like this in the display function: glRotatef((GLfloat)lowerLegsRot, 1.0, 0.0, 0.0); It's hard to visualize without seeing what the little fellow actually looks like, but it's clear that this function is making the legs spin all the way around repeatedly. I want to limit this spin to a certain range (say, -15 to 50 degrees), and, furthermore, to make the legs go back and forth within the range, like a pendulum. Since I'm going for a 65 degree swath, I tried just changing "%360" to "%65" as a first step. But this made the legs go way too fast, and I cannot use a lower increment value if I want to use modulus, which only works on ints. Is there another way to achieve the desired first-step behavior? More importantly, how can I make the legs go back and forth? It's kind of hard to conceptualize with a function that is getting called multiple times (vs. just using a loop structure that takes care of everything, which I tried before I realized this!) A: and I cannot use a lower increment value if I want to use modulus, which only works on ints. The % operator is broken anyway. You should not use integers, but floats, and use the fmod (double) or fmodf (float) function.
Q: Randomly choose a number in a specific range with a specific multiple in python I have the following numbers: 100, 200, 300, 400 ... 20000 And I would like to pick a random number within that range. Again, that range is defined as 100:100:20000. Furthermore, by saying 'within that range', I don't mean randomly picking a number from 100->20000, such as 105. I mean randomly choosing a number from the list of numbers available, and that list is defined as 100:100:20000. How would I do that in Python? A: Use random.randrange : random.randrange(100, 20001, 100) A: For Python 3: import random random.choice(range(100, 20100, 100))
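Both answers draw from the same 100-spaced grid; a quick sanity check of that claim:

```python
import random

# Every sample should be a multiple of 100 within [100, 20000].
samples = [random.randrange(100, 20001, 100) for _ in range(1000)]
assert all(100 <= s <= 20000 and s % 100 == 0 for s in samples)
```

Note that range(100, 20100, 100) in the second answer ends at 20000 as well, since the stop value is exclusive, so the two calls cover exactly the same values.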
Q: How can I determine how widespread support of OpenGL ES 3 is?

I am developing a new app in OpenGL which should run on iOS and Android devices. I'd like to use OpenGL ES 3. For iOS that's not a problem, since any iPhone newer than the 5 has GLES 3. But I am not sure how widespread GLES 3 is in the Android world, or whether all new devices support it. How can I determine how much support exists for GLES 3, so I can decide whether to use it or fall back to an older version?

A: How many devices support a given API is usually easy to figure out by looking at the public hardware stats of popular engines, such as Unity and Unreal. I recommend using multiple sources of stats to get a better view of the actual share of different device capabilities.
Q: Allow timers to fire when applicationDidEnterBackground

I have read Apple's documentation and as many posts as I could here on the topic of how an app can run once it has been backgrounded. It seems that there are ways to get an application to complete some remaining tasks, but not to continue running indefinitely in the background. My app has timers set to go off so audio clips can be played to the user. What happens is that once the app is backgrounded, the clips are not played. I know this can be done somehow, as I have run a couple of apps that do what I am trying to do and handle it. One example is Nike+ GPS. Are these apps just never calling endBackgroundTask?

A: There are some scenarios where your app can run in the background. Check: http://developer.apple.com/library/ios/#documentation/iphone/conceptual/iphoneosprogrammingguide/BackgroundExecution/BackgroundExecution.html

audio. The application plays audible content to the user while in the background. (This includes streaming audio or video content using AirPlay.)
location. The application keeps users informed of their location, even while running in the background.
voip. The application provides the ability for the user to make phone calls using an Internet connection.
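For the audio case specifically, the app must also declare the background mode in its Info.plist. The key and value below are the standard ones from Apple's documentation; note that the declaration alone is not enough - an audio session must actually be playing for the app to stay alive:

```xml
<key>UIBackgroundModes</key>
<array>
    <string>audio</string>
</array>
```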
Q: C - Incompatible Pointer Type

Why does the following code give warnings?

int main(void)
{
    struct {int x; int y;} test = {42, 1337};
    struct {int x; int y;} *test_ptr = &test;
}

Results:

warning: initialization from incompatible pointer type [-Wincompatible-pointer-types]
  struct {int x; int y;} *test_ptr = &test;
  ^

A: They're two anonymous structure types (neither has a tag). All such structure types (in a single translation unit) are distinct - they're never the same type. Add a tag! The relevant sentence in the standard is in §6.7.2.1 Structure and union specifiers:

¶8 The presence of a struct-declaration-list in a struct-or-union-specifier declares a new type, within a translation unit.

The struct-declaration-list refers to the material between { and } in the type. That means that in your code, there are two separate types, one for each struct { … }. The two types are separate; you cannot officially assign a value of one type to the other, nor create pointers, etc. In fact, you can't reference those types again after the semicolon. That means you could have:

int main(void)
{
    struct {int x; int y;} test = {42, 1337}, *tp = &test;
    struct {int x; int y;} result, *result_ptr;
    result_ptr = &result;
    …
}

Now test and tp refer to the same type (one a structure, one a pointer to the structure), and similarly result and result_ptr refer to the same type, and the initializations and assignments are fine, but the two types are different. It's not clear that you can create a compound literal of either type - you'd have to write (struct {int x; int y;}){.y = 9, .x = 8}, but the presence of the struct-declaration-list means that is another new type.
As noted in the comments, there is also section §6.2.7 Compatible type and composite type, which says: ¶1 … Moreover, two structure, union, or enumerated types declared in separate translation units are compatible if their tags and members satisfy the following requirements: If one is declared with a tag, the other shall be declared with the same tag. If both are completed anywhere within their respective translation units, then the following additional requirements apply: there shall be a one-to-one correspondence between their members such that each pair of corresponding members are declared with compatible types; if one member of the pair is declared with an alignment specifier, the other is declared with an equivalent alignment specifier; and if one member of the pair is declared with a name, the other is declared with the same name. For two structures, corresponding members shall be declared in the same order. For two structures or unions, corresponding bit-fields shall have the same widths. Roughly speaking, that says that if the definitions of the types in the two translation units (think 'source files' plus included headers) are the same, then they refer to the same type. Thank goodness for that! Otherwise, you couldn't have the standard I/O library working, amongst other minor details. A: Variables &test and test_ptr, which are anonymous structs, have different types. Anonymous structs defined in the same translation unit are never compatible types1 as the Standard doesn't define compatibility for two structure type definitions in the same translation unit. To have your code compile, you could do: struct {int x; int y;} test = {42, 1337} , *test_ptr; test_ptr = &test; 1 (Quoted from: ISO:IEC 9899:201X 6.2.7 Compatible type and composite type 1) Two types have compatible type if their types are the same. 
Additional rules for determining whether two types are compatible are described in 6.7.2 for type specifiers, in 6.7.3 for type qualifiers, and in 6.7.6 for declarators. Moreover, two structure, union, or enumerated types declared in separate translation units are compatible if their tags and members satisfy the following requirements: If one is declared with a tag, the other shall be declared with the same tag. If both are completed anywhere within their respective translation units, then the following additional requirements apply: there shall be a one-to-one correspondence between their members such that each pair of corresponding members are declared with compatible types; if one member of the pair is declared with an alignment specifier, the other is declared with an equivalent alignment specifier; and if one member of the pair is declared with a name, the other is declared with the same name. For two structures, corresponding members shall be declared in the same order. For two structures or unions, corresponding bit-fields shall have the same widths. For two enumerations, corresponding members shall have the same values.
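A minimal sketch of the fix: give the structure a tag so both declarations name one type (the tag name point is my choice for illustration):

```c
/* One tagged type, reusable by every later declaration. */
struct point { int x; int y; };

/* Initializing the pointer from &test no longer warns,
   because both variables use the same tagged type. */
int sum_via_pointer(void)
{
    struct point test = {42, 1337};
    struct point *test_ptr = &test;
    return test_ptr->x + test_ptr->y;
}
```

A typedef of the tagged struct works just as well; the point is that every declaration must refer back to one definition rather than repeating the member list.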
Q: I estimate 10% of the links posted here are dead. How do we deal with them?

TL;DR: Approximately 10% of 1.5M randomly selected unique links in the March 2015 data dump are unavailable. To be more precise, that is approximately 150K dead links.

Motivation

I've been running into more and more links that are dead on Stack Overflow and it's bothering me. In some cases, I've spent the time hunting down a replacement, in others I've notified the owner of the post that a link is dead, and (more shamefully), in others I've simply ignored it and left just a down vote. Obviously that's not good. Before making sweeping generalizations that there are dead links everywhere, though, I wanted to make sure I wasn't just finding bad posts because I was wandering through the review queues.

Utilizing the March 2015 data dump, I randomly selected about 25% of the posts (both questions and answers) and then parsed out the links. This works out to 5.6M posts out of 21.7M total. Of these 5.6M posts, 2.3M contained links and 1.5M of these were unique links. I sent each unique URL a HEAD request, with a user agent mimicking Firefox [1]. I then retested everything that didn't return a successful response a week later. Finally, anything that failed in that batch was sent a final test a week after that. If a site was down in all three tests, I considered it down.

Results [2]

By status code

Good news/bad news: a majority of the links returned a valid response, but there are still roughly 10% that failed. (This image is showing the top status codes returned.)

The three largest slices of the pie are status 200 (site working!), status 404 (the server responded, but said the page isn't there), and connection errors. Connection errors are sites that gave no proper server response: the request to access the page timed out. I was generous with the timeout and allowed a request to live for 20 seconds before failing a link with this status.
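The per-link check described above can be sketched roughly as follows. This is an illustrative reconstruction using only the standard library (the actual run used the requests package and a full Firefox user-agent string), not the script that produced the numbers:

```python
import urllib.request
import urllib.error

# A browser-like user agent; many sites reject the default Python one.
FIREFOX_UA = "Mozilla/5.0 (X11; Linux x86_64; rv:36.0) Gecko/20100101 Firefox/36.0"

def classify_status(code):
    """Bucket an HTTP status code the way the tallies above do."""
    return "200" if code == 200 else "%dxx" % (code // 100)

def check_link(url, timeout=20):
    """One pass over a URL; the survey re-ran failures twice, a week apart."""
    req = urllib.request.Request(url, headers={"User-Agent": FIREFOX_UA})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return classify_status(resp.status)
    except urllib.error.HTTPError as err:       # server answered with 4xx/5xx
        return classify_status(err.code)
    except (urllib.error.URLError, OSError):    # DNS failure, refusal, timeout
        return "connection_error"
```

The three-pass schedule then amounts to re-running check_link on everything that did not come back "200".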
The 4xx and 5xx errors are status codes that fall in the 400 and 500 ranges of HTTP responses. These are client and server error ranges, thus counted as failures. 2xx errors (of which there are only a low triple-digit number) are pages that responded with a success message in the 200 range, but it wasn't a 200 code. Finally, there were just over a hundred sites that hit a redirect loop that didn't seem to end. These are the 3xx errors. I failed a site in this range if it redirected more than 30 times. There are a negligible number of sites that returned status codes in the 600 and 700 range [4].

By most common

There are, as expected, many URLs that failed that appeared frequently in the sample set. Below is a list of the top 50 [3] URLs that are in posts most often, but failed three times over the course of three weeks.

http://docs.jquery.com/Plugins/validation
http://www.eclipse.org/eclipselink/moxy.php
http://jackson.codehaus.org/
http://xstream.codehaus.org/
http://opencv.willowgarage.com/wiki/
http://developer.android.com/resources/articles/painless-threading.html
http://valums.com/ajax-upload/
http://sqlite.phxsoftware.com/
http://qt.nokia.com/
http://www.oracle.com/technetwork/java/codeconv-138413.html
http://download.java.net/jdk8/docs/api/java/time/package-summary.html
http://docs.oracle.com/javase/1.4.2/docs/api/java/text/SimpleDateFormat.html
http://watin.sourceforge.net/
http://leandrovieira.com/projects/jquery/lightbox/
https://graph.facebook.com/
https://ccrma.stanford.edu/courses/422/projects/WaveFormat/
http://www.postsharp.org/
http://www.erichynds.com/jquery/jquery-ui-multiselect-widget/
http://ha.ckers.org/xss.html
http://jetty.codehaus.org/jetty/
http://cpp-next.com/archive/2009/08/want-speed-pass-by-value/
http://codespeak.net/lxml/
http://www.hpl.hp.com/personal/Hans_Boehm/gc/
http://jquery.com/demo/thickbox/
http://book.git-scm.com/5_submodules.html
http://monotouch.net/
http://developer.android.com/resources/articles/timed-ui-updates.html
http://jquery.bassistance.de/validate/demo/
http://codeigniter.com/user_guide/database/active_record.html
http://www.phantomjs.org/
http://watin.org/
http://www.db4o.com/
http://qt.nokia.com/products/
http://referencesource.microsoft.com/netframework.aspx
https://github.com/facebook/php-sdk/
http://java.decompiler.free.fr/
http://pivotal.github.com/jasmine/
http://api.jquery.com/category/plugins/templates/
http://code.google.com/closure/library
http://www.w3schools.com/tags/ref_entities.asp
http://xstream.codehaus.org/tutorial.html
https://github.com/facebook/php-sdk
http://download.java.net/maven/1/jstl/jars/jstl-1.2.jar
https://developers.facebook.com/docs/offline-access-deprecation/
http://www.parashift.com/c++-faq-lite/pointers-to-members.html
https://developers.facebook.com/docs/mobile/ios/build/
http://downloads.php.net/pierre/
http://fluentnhibernate.org/
http://net.tutsplus.com/tutorials/javascript-ajax/5-ways-to-make-ajax-calls-with-jquery/
http://dev.iceburg.net/jquery/jqModal/

By post score

Count of posts by score (top 10) (covers 94% of all broken links):

| Score | Percentage of Total Broken |
|-------|----------------------------|
| 0     | 36.4087% |
| 1     | 25.1674% |
| 2     | 13.4089% |
| 3     | 7.2806%  |
| 4     | 4.2971%  |
| 5     | 2.7065%  |
| 6     | 1.8068%  |
| 7     | 1.2854%  |
| -1    | 1.1935%  |
| 8     | 0.9415%  |

By number of views

Note: this is the number of views at the time the data dump was created, not as of today.

Count of posts by number of views (top 10):

| Views        | Percentage of Total Broken |
|--------------|----------------------------|
| (0, 200]     | 24.4709% |
| (200, 400]   | 14.2186% |
| (400, 600]   | 9.5045%  |
| (600, 800]   | 6.9793%  |
| (800, 1000]  | 5.2574%  |
| (1000, 1200] | 4.1864%  |
| (1200, 1400] | 3.3699%  |
| (1400, 1600] | 2.7766%  |
| (1600, 1800] | 2.3477%  |
| (1800, 2000] | 1.9550%  |

By days since post created

Note: this is the number of days since creation at the time the data dump was created, not from today.

Count of posts by days since creation (top 10) (covers 64% of broken links):

| Days since Creation | Percentage of Total Broken |
|---------------------|----------------------------|
| (1110, 1140] | 7.2938% |
| (1140, 1170] | 6.7648% |
| (1470, 1500] | 6.6579% |
| (1080, 1110] | 6.6535% |
| (750, 780]   | 6.5535% |
| (720, 750]   | 6.5516% |
| (1500, 1530] | 6.3978% |
| (390, 420]   | 5.8508% |
| (360, 390]   | 5.8258% |
| (780, 810]   | 5.5175% |

By ratio of views:days

Ratio views:days (top 20) (covers 90% of broken links):

| Views:Days Ratio | Percentage of Total Broken |
|------------------|----------------------------|
| (0, 0.25]    | 27.2369% |
| (0.25, 0.5]  | 18.8496% |
| (0.5, 0.75]  | 11.4321% |
| (0.75, 1]    | 7.2481%  |
| (1, 1.25]    | 5.1668%  |
| (1.25, 1.5]  | 3.7907%  |
| (1.5, 1.75]  | 2.9310%  |
| (1.75, 2]    | 2.4033%  |
| (2, 2.25]    | 1.9788%  |
| (2.25, 2.5]  | 1.6850%  |
| (2.5, 2.75]  | 1.4080%  |
| (2.75, 3]    | 1.1879%  |
| (3, 3.25]    | 1.0654%  |
| (3.25, 3.5]  | 0.9391%  |
| (3.5, 3.75]  | 0.8334%  |
| (3.75, 4]    | 0.7165%  |
| (4, 4.25]    | 0.6634%  |
| (4.25, 4.5]  | 0.5789%  |
| (4.5, 4.75]  | 0.5508%  |
| (4.75, 5]    | 0.4833%  |

Discussion

What can we do with all of this? How do we, as a community, solve the issue of 10% of our outbound links pointing to places on the internet that no longer exist? Assuming that my sample was indicative of the entire data dump, there are close to 600K broken links (150K broken unique links x 4, because I took 1/4 of the data dump as a sample) posted in questions and answers on Stack Overflow. I assume a large number of links posted in comments would be broken as well, but that's an activity for another month.

We encourage posters to provide snippets from their links just in case a link dies. That definitely helps, but the resources behind the links and the (presumably) expanded explanation behind them are still gone. How can we properly deal with this? It looks like there have been a few previous discussions:

Utilize the Wayback API to automatically fix broken links.
Development appeared to stall on this due to the large number of edits the Community user would be making. This would also hide posts that depended on said link from being surfaced for the community to fix.

Link review queue. It was in alpha, but disappeared in early 2014.

Badge proposal for fixing broken links.

Footnotes

1. This is how it ultimately played out. Originally I sent HEAD requests, in an effort to save bandwidth. This turned out to waste a whole bunch of time because there are a whole bunch of sites around the internet that return a 405 Method Not Allowed when sent a HEAD request. The next step was to send GET requests, but utilize the default Python requests user agent. A lot of sites were returning 401 or 404 responses to this user agent.

2. Links to Stack Exchange sites were not counted in the above results. The failures seen are almost 100% due to a question/answer/comment being deleted. The process ran as an anonymous user, thus didn't have any reputation and was served a 404. A user with appropriate permissions can still visit the link. I verified a number of 404'd links to Stack Overflow posts and this was the case.

3. The 4th most common failure was to localhost. The 16th and 17th most common were localhost on ports other than 80. I removed these from the result table with the knowledge that these shouldn't be accessible from the internet.

4. There were 7 total URLs that returned status codes in the 600 and 700 range. One such site was code.org, with a status code of 752. Sadly, this is not even defined in the joke RFC.

A: I really think that, at least at this point, there isn't a problem. To the extent it is a problem, it is difficult to fix. Stack Overflow is meant to be a Q&A site, not a repository of links. Encountering a dead link is an annoyance, but it doesn't instantly invalidate the answer, and often barely has any impact at all.
This site has a policy of encouraging answers consisting of more than links exactly for this reason: so even if the link dies, the answer still survives and remains meaningful. If an answer consists of just a link, then this is the problem, not the dead links. I'd go as far as to say the question hasn't really been answered. Many of the links are dead simply because the resource they pointed to has been moved to a slightly different location that any user could discover with a tiny bit of effort (for example, typing the name into Google). Take the link http://www.eclipse.org/eclipselink/moxy.php for example. Even though I don't trust casual users to actually fix the link, I do trust them not to be total idiots and just google eclipse moxy and follow one of the top three results to the new location. In other cases, it's simply impossible to fix a link at all, except by a person who is familiar with the subject. This is a more significant problem, but unfortunately not one that is fixable automatically. For example, take the link http://www.db4o.com, to the object database db4o. db4o hasn't existed for a while now and is no longer supported by the developer. You might be able to find the source code or the binaries, but I would not fix the link to point to them, because I would not recommend it to anyone (since it's dead). The problem is not really that the link is dead, but rather that the product has ceased to exist, and the answer that recommends it is no longer valid. It can only be fixed by posting a new answer, voting, and comments. These things might already exist on the questions you looked at. Also, a major problem with any automatic scheme to fix dead links is the potential for error. A link that points to something else, or to something that is no longer a valid answer, is a lot worse than a dead link, in the same way that misinformation is a lot worse than a lack of information. It really might confuse users, or have them using outdated software. 
If the bulk of the dead links continues to grow, and if popular answers get hit as well, I really would like to do something about it, largely because it makes the site look dated and unprofessional. As it stands, an attempt at fixing it would be nice, but not something I think is important. Personally, I have encountered very few dead links as a casual user. A: The world wide web's sole purpose was to link relevant documents together. With no (working) links, there's no web. So I think every effort that can be undertaken to fix broken links is a good effort. We shouldn't rely on users fixing their own posts. We have way more inactive than active users. Perhaps there could be something like a "broken link queue", where users can report a broken link (A) and suggest a replacement (B). Then when agreed upon by reviewers and/or moderators, the system (Community user) could replace all instances in all posts of link A with link B. Of course this is very spam-sensitive, so the actual implementation details need to be worked out pretty tightly. A: I propose another hybrid of the previous broken link queue (as was mentioned above in comments and other answers) and an automated process to fix broken links with an archived version (which has also been suggested). The broken link queue should focus on editing and fixing the links in a post (as opposed to closing it). It'd be similar to the suggested edits queue, but with the focus on correcting links, not spelling and grammar. This could be done by only allowing a user to edit the links. One possibility I envision is presenting the user with the links in the post and a status on whether or not the link is available. If it's not available, give the user a way to change that specific link. Utilizing this post, I have a quick mock-up of what I propose such a review task looks like: All the links that appear in the post are on the right hand side of the screen. The links that are accessible have a green check mark.
The ones that are broken (and the reason for being in this queue) have a red X. When a user elects to fix a post, they are presented with a modal showing only the broken URLs. With this queue, though, I think an automated process would be helpful as well. The idea is that this would operate similarly to the Low Quality queue, where the system can automatically add a post to the queue if certain criteria are met or a user can flag a post as having broken links. I've based my idea on what Tim Post outlined in the comments to a previous post. Automated process performs a "Today in History" type check. This keeps the fixes limited to a small subset of posts per day. It also focuses on older posts, which were more likely to have a broken link than something posted recently. Example: On July 31, 2015, the only posts being checked for bad links would be anything posted on July 31 in any year 2008 through current year - 1. Utilizing the Wayback Machine API, or similar service, the system attempts to change broken links into an archived version of the URL. This archived version should probably be from "close" to the time the post was originally made. If the automated process isn't able to find an archived version of the link, the post should be tossed into the Broken Link queue When the Community edits a post to fix a link, a new Post History event is utilized to show that a link was changed. This would allow anyone looking at revision history to easily see that a specific change was only to fix links. Actions performed in the previous bullets are exposed to 10K users in the moderator tools. Much like recent close/delete posts show up, these do as well. This allows higher rep users to spot check (if they so desire). I think this portion is important when the automated process fixes a link. For community edits in the queue, the history tab in /review seems sufficient. 
If a post consists of a large percentage of a link (or links) and these links were changed by Community, the post should have further action taken on it in some queue. Example: A post where X+% of the text is hyperlinks is very dependent on the links being active. If one or more of the links are broken, the post may no longer be relevant (or may be a link-only post). One example I found while doing this was this answer. I don't think that this type of edit from the Community user should bump a post to the front page. Edits done in the broken link queue, though, should bump the post just like a suggested edit does today. By preventing the automated Community posts from being bumped, we prevent the front page from being flooded, daily, with old posts and these edits. I think that the exposure in the 10K tools and the broken link queue will provide the visibility needed to check the process is working correctly. Queue Flow: Automated process flow: The automated link checking will likely run into several of the problems I did. Mainly: Sites modify the HEAD request to send a 404 instead of a 405. My solution to this was to issue GET requests for everything. Sites don't like certain user agents. My solution to this was to mimic the Firefox user agent. To be a good internet citizen, Stack Exchange probably shouldn't go that far, but providing a unique user agent that is easily identifiable as "StackExchangeBot" (think "GoogleBot") should be helpful in identifying where traffic is coming from. Sites that are down one week and up another. I solved this by spreading my tests over a period of 3 weeks. With the queue and automatic linking to an archived version of the site, this may not be necessary. However, immediately converting a link to an archived copy should be discussed by the community. Do we convert the broken link immediately? Or do we try again in X days and, if it's still down, then convert it?
It was suggested in another answer that we first offer the poster the chance to make changes before an automatic process takes place. The need to throttle requests so that you don't flood a site with requests. I solved this by only querying unique URLs. This still issues a lot of requests to certain, popular, domains. This could be solved by staggering the checks over a period of minutes/hours versus spewing 100s - 1000s of GET requests at midnight daily. With the broken link queue, I feel the first two would be acceptable. Much like posts in the Low Quality queue appear because of a heuristic, despite not being low quality, links will be the same way. The system will flag them as broken and the queue will determine if that is true (if an archived version of the site can't be found by the automated process). The bullet about throttling requests is an implementation detail that I'm sure the developers would be able to figure out.
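To make the automated flow described above concrete, here is a minimal Python sketch of a checker along those lines. It is an illustration only, built on the standard library; the "StackExchangeBot" agent string, the delay value, and the "status ≥ 400 means broken" rule are assumptions for the sketch, not anything Stack Exchange actually runs:

```python
import time
import urllib.error
import urllib.request

# Hypothetical, easily identifiable agent, per the "StackExchangeBot" idea above.
HEADERS = {"User-Agent": "StackExchangeBot/1.0 (+https://stackexchange.com/bot)"}

def is_broken(status):
    """Treat 4xx/5xx -- and oddballs like the 752 seen above -- as broken."""
    return status >= 400

def check_url(url, timeout=10):
    """Issue a GET (not HEAD, to dodge spurious 405s) and classify the result."""
    request = urllib.request.Request(url, headers=HEADERS)
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return is_broken(response.status)
    except urllib.error.HTTPError as err:
        return is_broken(err.code)
    except (urllib.error.URLError, OSError):
        return True  # unreachable host counts as broken

def check_urls(urls, delay=1.0):
    """Check unique URLs only, pausing between requests to avoid flooding a host."""
    results = {}
    for url in set(urls):
        results[url] = check_url(url)
        time.sleep(delay)
    return results
```

Spreading the per-host delay out further (or batching by domain) would address the throttling concern in the last bullet.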
{ "pile_set_name": "StackExchange" }
Q: Xcode: "The working copy ____ has uncommitted changes" vs. git status: "nothing to commit, working directory clean" In Xcode 5.0.2, I try to pull from a remote and am given the following message: "The working copy 'project-name' has uncommitted changes. Commit or discard the changes and try again." Fair enough. I pull up the commit dialog, and am then given the message, "This file does not exist at the requested revision." Clicking 'OK' brings me on into the commit dialog. (There is no revision displayed in the right pane, presumably for the same reason I was given the most recent message.) Selecting the flat view, I see that there is only one modified file: project.pbxproj. I enter a commit message and click 'Commit 1 File'. When I then go to pull, I find that I am in exactly the same position as before--the same messages appear and I am unable to pull (or push) no matter how many times I make a commit. Curious, I run git diff to see just what has changed. Nothing. git status provides me with equally helpful output: nothing to commit, working directory clean. git push or git pull? Yep, those work just fine from the command line. So what gives? Why does Xcode insist that I have changes in my working directory? Why won't it tell me what they are? Have tried restarting Xcode and system. While I'm happy that I still have some way to push and pull, it would be really nice if the Xcode git integration was behaving nicely. Any ideas? I've found these similar questions, but none address this particular issue (or provide an acceptable solution): Cannot push, pull or merge git. "Working copy has uncommited changes" Commit or discard the changes and try again Git pull fails: You have unstaged changes. Git status: nothing to commit (working directory clean) Xcode Version Controll GIT - has uncommitted changes, just after commit Xcode says "Uncommitted Changes" Whenever I try to git pull or push A: Okay, so I fixed my issue. 
With Xcode open: Open terminal - cd to your project directory. Type in: "git reset --hard" Type in git status Restart Xcode and make a commit (Just a comment or something ) Repeat the above steps. This sorted out my issue for me. A: You must fix it with command line git. Go to your working folder in Terminal, type: git status That will show you what files have uncommitted changes. Crashlytics, for instance, will update itself as soon as you run it, and even using Xcode/Source Control/Discard Changes won't get rid of it. Once you see the files that have uncommitted changes (ignore added files), use: git checkout -- Folder/filename.ext That's the same as a "discard" in Xcode. After you've done that, go back to Xcode and you should be able to switch branches.
Q: Are space and time hierarchies even comparable? I am wondering if there are any results to what extent the space and time hierarchies "disagree" on which problem is harder. For example, is it known whether there are languages $L_1$ and $L_2$ such that $L_1 \in \DeclareMathOperator{TIME}{TIME} \TIME(f(n))\setminus SPACE(g(n)),L_2\in \DeclareMathOperator{SPACE}{SPACE} \SPACE(g(n)) \setminus \TIME(f(n))$? How often does this occur? P.S.- The question Function with space-depending computation time seems to ask something similar but was worded confusingly and none of the answers seem to be what I'm looking for. A: You can get the situation you describe by choosing weird functions $f(n)$ and $g(n)$. For example, let $g(n) = n^3$ and $$f(n) = \begin{cases} n & \text{if $n$ is odd}, \\\ 2^{n^5} & \text{if $n$ is even}. \end{cases} $$ Then choose $L_1$ and $L_2$ as follows: $L_1$ is a language containing only strings of even length which can be decided in time $O(2^{n^5})$ but not in time $O(2^{n^4})$. The existence of such a language is pretty easy to prove from the time hierarchy theorem. $L_2$ is a language containing only strings of odd length which can be decided in space $O(n^3)$ but not in space $O(n^2)$. The existence of such a language is pretty easy to prove from the space hierarchy theorem. Then we have the following facts: $L_1 \in TIME(f(n))$: To decide whether a string is in $L_1$, simply check whether the length $n$ is even. If it is, then continue to use the $O(2^{n^5})$ time decider for $L_1$ whose existence is guaranteed by the definition of $L_1$. If $n$ is odd, immediately reject since $L_1$ does not include any odd length strings anyway. This procedure decides $L_1$, runs in time $O(n)$ when $n$ is odd, and runs in time $O(2^{n^5})$ when $n$ is even. In other words, this procedure decides $L_1$ in time $O(f(n))$. As desired, $L_1 \in TIME(f(n))$. $L_2 \in SPACE(g(n))$: By the definition of $L_2$, $L_2$ can be decided in space $O(n^3)$. 
Thus, $L_2 \in SPACE(n^3) = SPACE(g(n))$, as desired. $L_1 \not\in SPACE(g(n))$: Suppose for the sake of contradiction that $L_1 \in SPACE(g(n)) = SPACE(n^3)$. We know that $SPACE(n^3) \subseteq TIME(2^{O(n^3)}) \subsetneq TIME(2^{n^4})$. Thus, there exists a decider for $L_1$ which runs in time $O(2^{n^4})$. This directly contradicts the definition of $L_1$. Then by contradiction, we see that $L_1 \not\in SPACE(g(n))$. $L_2 \not\in TIME(f(n))$: Suppose for the sake of contradiction that $L_2 \in TIME(f(n))$. This means that there exists a constant $c$ and an algorithm $A$ deciding $L_2$ such that on any input of size $n$, algorithm $A$ terminates in time $c\times f(n)$. We construct a new algorithm $A'$ as follows: given some input, walk through the entire input, keeping track of whether the input length is even or odd; if at the end of the input the length is determined to be odd, return to the start of the input and run $A$; otherwise, reject. For any input of odd length, $A'$ returns the same answer as $A$. For any input of even length, $A'$ rejects, which matches the expected behavior since $L_2$ contains no even length strings. Thus, $A'$ also decides $L_2$. On even length inputs, $A'$ runs for exactly $n$ steps. On odd length inputs, $A'$ runs for exactly $2n$ steps more than $A$ requires. But $A$ requires at most $c\times f(n)$ steps, which for odd $n$ is $cn$. Thus, in all cases, $A'$ runs in at most $(c+2)n$ steps. In other words, algorithm $A'$ decides $L_2$ in time $O(n)$. But since $TIME(n) \subseteq SPACE(n)$, we can conclude that $L_2 \in SPACE(n) \subsetneq SPACE(n^2)$. This contradicts the definition of $L_2$. Thus, by contradiction we see that $L_2 \not\in TIME(f(n))$.
Q: Get output outside the ajax call I'm using the library request for my ajax call; this library gives me a response, and after that I'm using JSON. I've got a constant id inside, and where I'm blocked is using this constant outside the request(). I know that I need a promise but I don't understand how to use it ... const options = { *** }; request(options, function(error, response, issue) { const json = JSON.parse(issue); for (let i = 0; i < json.versions.length; i++) { const test = json.versions[i].name; if (test === version) { const id = json.versions[i].id; //here id } } }); console.log(id); // I need to retrieve the const id here but id is undefined, so how can I specify id A: Try using: const options = { *** }; let id; function getMyBody(options, callback) { request(options, function(error, response, issue) { const json = JSON.parse(issue); for (let i = 0; i < json.versions.length; i++) { const test = json.versions[i].name; if (test === version) { const id = json.versions[i].id; //here id callback(id); } } }); } getMyBody(options, (id) => { this.id = id; console.log(id); })
Q: Approximating π via Monte Carlo simulation Inspired by a tweet linked to me by a friend and a Haskell implementation by her for the same problem, I decided to try my hand at approximating the value of π using everything in the Haskell standard library I could find for the job. Here’s what I came up with: module Pi where import Data.List (genericLength) import Control.Arrow (Arrow, (<<<), (***), arr) import System.Random (newStdGen, randoms) type Point a = (a, a) chunk2 :: [a] -> [(a, a)] chunk2 [] = [] chunk2 [_] = error "list of uneven length" chunk2 (x:y:r) = (x, y) : chunk2 r both :: Arrow arr => arr a b -> arr (a, a) (b, b) both f = f *** f unsplit :: Arrow arr => (a -> b -> c) -> arr (a, b) c unsplit = arr . uncurry randomFloats :: IO [Float] randomFloats = randoms <$> newStdGen randomPoints :: IO [Point Float] randomPoints = chunk2 <$> randomFloats isInUnitCircle :: (Floating a, Ord a) => Point a -> Bool isInUnitCircle (x, y) = x' + y' < 0.25 where x' = (x - 0.5) ** 2 y' = (y - 0.5) ** 2 lengthRatio :: (Fractional c) => [b] -> [b] -> c lengthRatio = curry (unsplit (/) <<< both genericLength) approximatePi :: [Point Float] -> Float approximatePi points = circleRatio * 4.0 where circlePoints = filter isInUnitCircle points circleRatio = circlePoints `lengthRatio` points main :: IO () main = do putStrLn "How many points do you want to generate to approximate π?" numPoints <- read <$> getLine points <- take numPoints <$> randomPoints print $ approximatePi points I’m interested in a general review, but I’m especially curious about my use of arrows: is there a better way to write lengthRatio? Are anything like both and unsplit provided anywhere in the standard library? If not, do any packages help? A: About those arrows I’m interested in a general review, but I’m especially curious about my use of arrows: is there a better way to write lengthRatio? Compare the following two lines. 
Both do the same, but which one would you rather see if you need to change your code drunk in three months, with only 5% battery left? lengthRatio = curry (unsplit (/) <<< both genericLength) lengthRatio xs ys = genericLength xs / genericLength ys Also, which one has which type, and which one is more general? Arrows are great if you want to abstract functions. But throughout your small script, you're still just working with (->), not any other instance of Arrow. For a small script like this, Arrow is too much. For example, the pointwise definition above is actually a character shorter than the pointfree one. Sure, the pointfree one is clever, but it's also very beginner-unfriendly. About randomness randomPoints introduces a dependency between your point coordinates \$x\$ and \$y\$, since both draw from the same sequence. This usually leads to points on hyperplanes (see disadvantages of LCG and spectral test). Your friend's variant doesn't have this immediate problem: randomTuples :: Int -> IO [(Float, Float)] randomTuples n = do seed1 <- newStdGen seed2 <- newStdGen let xs = randoms seed1 :: [Float] -- two different ys = randoms seed2 :: [Float] -- generators being used return $ take n $ zipWith (,) xs ys However, since newStdGen is merely a split, it's more or less hiding the dependency at another place. Still, it's something to keep in mind if you don't want to end up with something like this. But how would you check this? Well, you would run tests, over and over. Here's the second design critique on randomPoints: it doesn't take a RandomGen. Truth be told, if I say that Arrow is too much for a small script, then randomPoints :: RandomGen g => g -> [Point Float] is too much as well. Also, if you know you're going to generate Points, a newtype Point a together with instance Random a => Random (Point a) where is feasible and doesn't introduce a potential error via chunk2. Keep possible problems with Random in mind, though.
About names The function isInUnitCircle lies. It's not testing whether the point \$(x,y)\$ lies in the circle with radius \$r = 1\$ with center in the origin, e.g. $$ \sqrt{x^2 + y^2} \le 1 \Leftrightarrow x^2 + y^2 \le 1 $$ but in the circle with diameter \$d = 2r = 1\$ with center in \$(0.5, 0.5)\$. In the following picture, the green region is where you generate your random values. In the left one, you see the regular unit circle; in the right one, you see the circle size you're actually testing (after shifting your values from the green square into the red one): Therefore, you're not calculating the "usual" fourth of a circle, but instead a circle with a fourth of the original size (\$\pi(\frac{1}{2})^2 = \frac{\pi}{4}\$). Luckily, it doesn't matter for the convergence. A real test that checks whether a point is in the unit circle is tremendously easier: isInUnitCircle :: (Num a, Ord a) => Point a -> Bool isInUnitCircle (x, y) = x ^ 2 + y ^ 2 <= 1 About optimization Last, but not least, there's an issue with approximatePi, or rather the use of lengthRatio on the same list twice. Actually, taking the length of the list again is a little bit strange, since you know how large the sample is: numPoints <- read <$> getLine -- sample size points <- take numPoints <$> randomPoints print $ approximatePi points -- sample size still known (?) But let's say that you don't actually know how many points you have. Let's assume that someone wants to check many points. Suddenly, the memory usage of your program explodes: $ echo 10000000 | ./CalcPi +RTS -s How many points do you want to generate to approximate π?
3.141744 33,724,505,920 bytes allocated in the heap 5,288,250,096 bytes copied during GC 1,319,621,976 bytes maximum residency (17 sample(s)) 5,554,344 bytes maximum slop 2587 MB total memory in use (0 MB lost due to fragmentation) Tot time (elapsed) Avg pause Max pause Gen 0 63626 colls, 0 par 1.606s 3.155s 0.0000s 0.0006s Gen 1 17 colls, 0 par 2.732s 3.373s 0.1984s 1.2720s INIT time 0.000s ( 0.000s elapsed) MUT time 17.158s ( 15.542s elapsed) GC time 4.337s ( 6.528s elapsed) EXIT time 0.019s ( 0.166s elapsed) Total time 21.514s ( 22.236s elapsed) %GC time 20.2% (29.4% elapsed) Alloc rate 1,965,530,983 bytes per MUT second Productivity 79.8% of total user, 77.2% of total elapsed Even though randoms generates a lazy list, approximatePi needs to hold onto it completely due to lengthRatio. A classic space leak. The alternative version of lengthRatio won't save you from that. Instead, provide a function to check the ratio of filtered elements: -- Rational from Data.Ratio filterRatio :: (a -> Bool) -> [a] -> Rational filterRatio p xs = -- exercise That way, you can define a version of approximatePi that works for large lists: approximatePi :: [Point Float] -> Double approximatePi points = circleRatio * 4 where circleRatio = fromRational $ filterRatio isInUnitCircle points $ echo 10000000 | ./GenPIRatio +RTS -s How many points do you want to generate to approximate π?
3.1421592 24,445,866,792 bytes allocated in the heap 15,555,552 bytes copied during GC 77,896 bytes maximum residency (2 sample(s)) 21,224 bytes maximum slop 1 MB total memory in use (0 MB lost due to fragmentation) Tot time (elapsed) Avg pause Max pause Gen 0 46874 colls, 0 par 0.012s 0.113s 0.0000s 0.0001s Gen 1 2 colls, 0 par 0.000s 0.000s 0.0001s 0.0001s INIT time 0.000s ( 0.000s elapsed) MUT time 10.809s ( 10.746s elapsed) GC time 0.012s ( 0.113s elapsed) EXIT time 0.000s ( 0.000s elapsed) Total time 10.821s ( 10.859s elapsed) %GC time 0.1% (1.0% elapsed) Alloc rate 2,261,657,279 bytes per MUT second Productivity 99.9% of total user, 99.5% of total elapsed Summary Food for thought: Use the right level of abstraction for your problem. Arrow is an overkill for such a small script, but alright for learning. Try to decrease the amount of IO wherever possible, but again, that might be too abstract for a small script. Bad: Don't lie, give things the right name. Don't overcomplicate, keep pointfree to a sane minimum. Major space leak in approximatePi. Read the linked section of RWH and try to define filterRatio or a similar function. Good: Type signatures! Yay! Explicit imports! Type synonym instead of (a, a) everywhere! So beside the slight arrow-overkill, well done.
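To see the recommended shape of the computation outside Haskell, here is a rough Python sketch of the same estimator with the two fixes the review asks for: an honest unit-circle test, and a running count instead of holding the whole point list, so memory stays constant regardless of the sample size.

```python
import random

def approximate_pi(n, seed=None):
    """Estimate pi from n random points, keeping only a running hit count."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        # Honest unit-circle test: squared distance from the origin at most 1.
        if x * x + y * y <= 1.0:
            inside += 1
    # inside/n estimates the quarter-circle area pi/4, hence the factor 4.
    return 4.0 * inside / n
```

With n = 10,000,000 this runs in constant memory, mirroring what the filterRatio rewrite achieves above.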
Q: Find the sum of the series $S = \sum_{k=1}^{n} \frac{k}{k^{4} + k^{2} + 1} $ $$ S = \sum_{k=1}^{n} \frac{k}{k^{4} + k^{2} + 1} $$ I started by factorizing the denominator as $k^2+k+1$ and $k^2-k+1$. The numerator leaves a quadratic with $k$ and $k-1$ or a constant with $k+1$ and $k-1.$ I tried writing the individual terms; of course, it was useless. How do I do this? A: So your term is equal to $$\frac{1}{2}\left(\frac{1}{k^2-k+1}-\frac{1}{k^2+k+1}\right)$$ Now note $(k+1)^2-(k+1)+1=k^2+k+1$, so your term is: $$\frac{1}{2}\left(\frac{1}{k^2-k+1}-\frac{1}{(k+1)^2-(k+1)+1}\right)$$ and you can apply a telescoping series technique to establish that the sum to $n$ is just half of $\frac{1}{1^2 -1 +1}-\frac{1}{(n+1)^2-(n+1)+1}$. And if you are also looking for the limit, half of $\frac{1}{1^2 -1 +1}$.
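Carrying the telescoping through, the partial sum collapses to a closed form, which also gives the limit:

```latex
S_n = \frac{1}{2}\sum_{k=1}^{n}\left(\frac{1}{k^2-k+1}-\frac{1}{(k+1)^2-(k+1)+1}\right)
    = \frac{1}{2}\left(1-\frac{1}{n^2+n+1}\right)
    \;\xrightarrow{\;n\to\infty\;}\; \frac{1}{2}.
```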
Q: Obtaining EntityManager in Spring + Hibernate configuration I have a Spring MVC 4.0 application, and I am learning JPA. I use Hibernate as the JPA implementation. I can configure Hibernate as described in this tutorial. It works fine, but I have to use Hibernate's Session object: @Autowired SessionFactory sessionFactory; ... Session session = sessionFactory.openSession(); Now, I want to use JPA's EntityManager instead. I have followed this tutorial on the same web site (the configuration is very similar). And I tried to obtain an EntityManager object this way: @PersistenceContext EntityManager entityManager; I got a runtime message: java.lang.IllegalStateException: No transactional EntityManager available Then, I followed the suggestion in this answer, and tried to use the following code: @PersistenceContext EntityManager entityManager; ... entityManager=entityManager.getEntityManagerFactory().createEntityManager(); It works a few times (about 9 repetitive method invocations), and then the application freezes. What is the right way to get EntityManager in Spring + Hibernate configuration? I do not need any Spring transaction functionality for now. I just want to get access to EntityManager and play with JPA.
Spring/Hibernate configuration file (hibernate.xml) <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context" xmlns:tx="http://www.springframework.org/schema/tx" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-4.0.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-4.0.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-4.0.xsd"> <bean id="dataSource" class="org.apache.tomcat.dbcp.dbcp.BasicDataSource"> <property name="driverClassName" value="com.mysql.jdbc.Driver" /> <property name="url" value="jdbc:mysql://localhost:3306/test_db" /> <property name="username" value="test" /> <property name="password" value="test" /> </bean> <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="dataSource" ref="dataSource" /> <property name="packagesToScan" value="net.myproject" /> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter" /> </property> <property name="jpaProperties"> <props> <prop key="hibernate.hbm2ddl.auto">update</prop> <prop key="hibernate.dialect">org.hibernate.dialect.MySQL5Dialect</prop> <prop key="hibernate.show_sql">true</prop> </props> </property> </bean> <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager"> <property name="entityManagerFactory" ref="entityManagerFactory" /> </bean> <bean id="persistenceExceptionTranslationPostProcessor" class="org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor" /> <tx:annotation-driven /> </beans> The class where I attempt to use EntityManager @Repository public class ProductsService { @PersistenceContext 
EntityManager entityManager; @Transactional public GridResponse<Product> getProducts(GridRequest dRequest) { // The following line causes the exception: "java.lang.IllegalStateException: No transactional EntityManager available" Session session = entityManager.unwrap(Session.class); //... } ... A: For the @PersistenceContext EntityManager entityManager; approach, add tx:annotation-driven to your .xml configuration and mark your methods where you use entityManager as @Transactional. A: It can be used with @Autowired as shown in https://stackoverflow.com/a/33742769/2028440 @Autowired private EntityManager entityManager;
Q: Die if anything is written to STDERR ? How can I force a Perl script to die if anything is written to STDERR ? Such action should be done instantly, when such output happens, or even before, to prevent that output... A: This doesn't seem like an especially smart idea, but a tied filehandle should work. According to the perltie manpage: When STDERR is tied, its PRINT method will be called to issue warnings and error messages. This feature is temporarily disabled during the call, which means you can use warn() inside PRINT without starting a recursive loop. So something like this (adapted from the manpage example) ought to work: package FatalHandle; use strict; use warnings; sub TIEHANDLE { my $i; bless \$i, shift } sub PRINT { my $r = shift; die "message to STDERR: ", @_; } package main; tie *STDERR, "FatalHandle"; warn "this should be fatal."; print "Should never get here."; And that outputs (with exit code 255): message to STDERR: this should be fatal. at fh.pl line 17. A: Here's a method that works no matter how STDERR (fd 2) is written to, even if it's a C extension that doesn't use Perl's STDERR variable to do so. It will even kill child processes that write to STDERR! { pipe(my $r, my $w) or die("Can't create pipe: $!\n"); open(STDERR, '>&', $w) or die("Can't dup pipe: $!\n"); close($r); } print "abc\n"; print "def\n"; print STDERR "xxx\n"; print "ghi\n"; print "jkl\n"; $ perl a.pl abc def $ echo $? 141 Doesn't work on Windows. Doesn't work if you add a SIGPIPE handler.
Q: Using ionicons in a rails app I have a rails app I'd like to use these in. Following the instructions, I ensured the font path in .css was assets/fonts/ionicons... but it doesn't seem to be working. Anyone ever use these before? A: If anyone else has trouble using ionicons in your rails projects, I suggest the gem font-ionicons-rails that I built. It's very simple to use, as below: Installation: Add this to your Gemfile: gem "font-ionicons-rails" Usage: In your application.css, include the css file: /* *= require ionicons */ Sass Support If you prefer SCSS, add this to your application.css.scss file: @import "ionicons"; If you use the Sass indented syntax, add this to your application.css.sass file: @import ionicons Then restart your webserver if it was previously running. That's all. Now you are ready to use ionicons in your project using the tag i or using the gem helper to improve use. Helpers ion_icon "camera" # => <i class="ion-camera"></i> ion_icon "camera", text: "Take a photo" # => <i class="ion-camera"></i> Take a photo ion_icon "chevron-right", text: "Get started", right: true # => Get started <i class="ion-chevron-right"></i> content_tag(:li, ion_icon("checkmark-round", text: "Bulleted list item")) # => <li><i class="ion-checkmark-round"></i> Bulleted list item</li> It's pretty easy now, yay. A: These are the steps I usually take: Add the following to config/application.rb config.assets.paths << Rails.root.join('app', 'assets', 'fonts') Make the directory app/assets/fonts and copy the font files to that directory.
Copy ionicons.css to app/assets/stylesheets Edit ionicons.css file and update the url() calls to work with the asset pipeline: src: font-url("ionicons.eot?v=1.3.0"); src: font-url("ionicons.eot?v=1.3.0#iefix") format("embedded-opentype"), font-url("ionicons.ttf?v=1.3.0") format("truetype"), font-url("ionicons.woff?v=1.3.0") format("woff"), font-url("ionicons.svg?v=1.3.0#Ionicons") format("svg"); Restart webrick/thin/whatever and you should be good. :)
Q: ASP.NET DetailsView Update exception handling - truncated data I'm using a DetailsView for updating a record. If the edit input of some fields is too long, the system produces a "data will be truncated" exception. I can see where I can detect the error in DetailsViewItemUpdating or DetailsViewItemUpdated, and provide a user message. However, I believe the visual feedback should be sufficient for this release, i.e. "hey, it didn't take my 30 characters, even though the header label said it would only allow 20". Is there a way to force the DetailsView to do the truncation and accept the update? Or some other approach to this data handling exception, which must be pretty common. A: ANSWER: from Ammar Gaffar at EE: Convert to template field In EditItemTemplate Set DataBindings > MaxLength property to desired max length of field Works fine.
Q: How to Post to a Discord Webhook with Discord.js (code 400 Bad Request)? I'm trying to access a discord Webhook using Nodejs for simple messages (for now). I have looked at several attempts here and at other places, but didn't quite understand them, nor was I able to replicate them myself. Reading through the docs and searching online I found node-fetch, which in my eyes should work fine in principle, while seemingly being simpler. const fetch = require('node-fetch'); var webhook = { "id":"my webhook id", "token":"my webhook token" }; var URL = `https://discordapp.com/api/webhooks/${webhook.id}/${webhook.token}`; fetch(URL, { "method":"POST", "payload": JSON.stringify({ "content":"test" }) }) .then(res=> console.log(res)); The only output I ever get is a Response Object with status code 400. The only time I do get something else is when I remove the method; then I get code 200, which doesn't help much... Is my payload somehow completely wrong, or did I make a mistake with the URL or fetch syntax? A: Instead of making your own POST request, you can use the WebhookClient built into Discord.js like so... const id = ''; const token = ''; const webhook = new Discord.WebhookClient(id, token); webhook.send('Hello world.') .catch(console.error);
Q: How to print only the largest file in a directory in Linux? I would like to know how I can sort a directory and print out to the terminal only the largest file in that specific directory? This is my directory: file1 2 file2 3 file3 1 file4 5 file5 2 The wanted result is to print "file4" to the terminal A: For just files in the directory you can use this: ls -Shld * | head -n 1 To include directories you can use du: du -a * | sort -n -r | head -n 1
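Note that ls -l prints the whole long-listing line, not just the name. A variant that prints only the filename and skips directories — a sketch assuming GNU find (for -printf) and coreutils:

```shell
# Print just the name of the largest regular file in the current
# directory: emit "size path" per file, sort numerically descending,
# take the first line, then drop the size column.
find . -maxdepth 1 -type f -printf '%s %p\n' | sort -rn | head -n 1 | cut -d' ' -f2-
```

For the directory in the question this prints ./file4 (the leading ./ comes from find).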
Q: Extending form validation rule in codeigniter I have a form with two fields <input type="text" name="total_plots" value="" placeholder="Enter Total plots" /> <input type="text" name="available_plots" value="" placeholder="Enter Available Plots " /> Available plot "available_plots" field value should be less than total plots "total_plots" field value I don't want to write callbacks. I want to extend the form validation rule. How to ? MY_Form_validation <?php if ( ! defined('BASEPATH')) exit('No direct script access allowed'); class MY_Form_validation extends CI_Form_validation { public function __construct() { parent::__construct(); $this->CI =& get_instance(); } public function check_avail($str) { $this->CI->form_validation->set_message('check_avail', 'Available plot should be less than Total plot.'); $total_plots = $this->CI->input->post('total_plots'); //echo '------'.$total_plots; //echo '------'.$str; if($str > $total_plots){ return false; } } } // class I have written rules in config <?php $config['plot_settings'] = array( array( 'field' => 'total_plots', 'label' => 'Total Plots', 'rules' => 'trim|xss_clean' ), array( 'field' => 'available_plots', 'label' => 'Available Plots', 'rules' => 'trim|xss_clean|check_avail' ) ); ?> Controller <?php defined('BASEPATH') OR exit('No direct script access allowed'); class Plot extends CI_Controller { public function __construct() { parent::__construct(); $this->load->library('Admin_layout'); $this->load->model('admin/plot_model'); $this->config->load('plot_rules'); $this->output->enable_profiler(TRUE); $this->new_name=''; } public function add(){ $this->form_validation->set_rules($this->config->item('plot_settings')); $this->form_validation->set_error_delimiters('<p><b>', '</b></p>'); if ($this->form_validation->run('submit') == FALSE ) { $this->admin_layout->set_title('Post Plot'); $this->admin_layout->view('admin/post_plot'); } }//add } A: I think you can do this without writing a callback or extending the validation 
rule. CI already provides a validation rule to check for less_than value. $total_plots = $this->input->post('total_plots'); $this->form_validation->set_rules('available_plots', 'Available Plots', "less_than[$total_plots]"); It should work.
Q: parse xml with many childs (XmlRpc Response) This is the monster I receive after a GET request <?xml version="1.0" encoding="UTF-8"?> <methodResponse> <params> <param> <value> <array> <data> <value> <struct> <member> <name>DateCreated</name> <value><dateTime.iso8601>20160830T12:57:13</dateTime.iso8601></value> </member> <member> <name>Id</name> <value><i4>17</i4></value> </member> </struct> </value> <value> <struct> <member> <name>DateCreated</name> <value><dateTime.iso8601>20160830T15:57:25</dateTime.iso8601></value> </member> <member> <name>Id</name> <value><i4>43</i4></value> </member> </struct> </value> </data> </array> </value> </param> </params> </methodResponse> I want to get the DateCreated and Id values. The server may send multiple Ids with different created dates. Is it possible to compare the DateCreated values to get the most recent value? Here is what I've been able to come with after looking at the docs Dom.Document doc = res.getBodyDocument(); Dom.XmlNode methodResponse = doc.getRootElement(); String dateCreated = methodResponse.getChildElement('params', null) .getChildElement('value', null) .getChildElement('array', null) .getChildElement('data', null) .getChildElement('value', null) .getChildElement('struct', null) .getChildElement('member', null) .getChildElement('value', null).getText(); String theId = methodResponse.getChildElement('params', null) .getChildElement('value', null) .getChildElement('array', null) .getChildElement('data', null) .getChildElement('value', null) .getChildElement('struct', null) .getChildElement('member', null) .getChildElement('value', null).getText(); return theId; The return value from theId is Text Node Value 
[154]|methodResponse|"XMLNode[ELEMENT,methodResponse,null,null,null,[XMLNode[ELEMENT,params,null,null,null,[XMLNode[ELEMENT,param,null,null,null,[XMLNode[ELEMENT,value,null,null,null,[XMLNode[ELEMENT,array,null,null,null,[XMLNode[ELEMENT,data,null,null,null,[XMLNode[ELEMENT,value,null,null,null,[XMLNode[ELEMENT,struct,null,null,null,[XMLNode[ELEMENT,member,null,null,null,[XMLNode[ELEMENT,name,null,null,null,[XMLNode[TEXT,null,null,null,null,null,DateCreated,]],null,], XMLNode[ELEMENT,value,null,null,null,[XMLNode[ELEMENT,dateTime.iso8601,null,null,null,[XMLNode[TEXT,null,null,null,null,null,20160830T12:57:13,]],null,]],null,]],null,], XMLNode[ELEMENT,member,null,null,null,[XMLNode[ELEMENT,name,null,null,null,[XMLNode[TEXT,null,null,null,null,null,Id,]],null,], XMLNode[ELEMENT,value,null,null,null,[XMLNode[ELEMENT,i4,null,null,null,[XMLNode[TEXT,null,null,null,null,null,17,]],null,]],null,]],null,]],null,]],null,], XMLNode[ELEMENT,value,null,null,null,[XMLNode[ELEMENT,struct,null,null,null,[XMLNode[ELEMENT,member,null,null,null,[XMLNode[ELEMENT,name,null,null,null,[XMLNode[TEXT,null,null,null,null,null,DateCreated,]],null,], XMLNode[ELEMENT,value,null,null,null,[XMLNode[ELEMENT,dateTime.iso8601,null,null,null,[XMLNode[TEXT,null,null,null,null,null,20160830T15:57:25,]],null,]],null,]],null,], XMLNode[ELEMENT,member,null,null,null,[XMLNode[ELEMENT,name,null,null,null,[XMLNode[TEXT,null,null,null,null,null,Id,]],null,], XMLNode[ELEMENT,value,null,null,null,[XMLNode[ELEMENT,i4,null,null,null,[XMLNode[TEXT,null,null,null,null,null,43,]],null,]],null,]],null,]],null,]],null,]],null,]],null,]],null,]],null,]],null,]],null,]"|0x7d6901eb A: The below function will return the most recent Id from your HttpResponse. 
public String getLatestId(HttpResponse res){ Dom.Document doc = res.getBodyDocument(); Dom.XmlNode methodResponse = doc.getRootElement(); List<Dom.XmlNode> dataNodes = methodResponse.getChildElement('params', null) .getChildElement('param', null) .getChildElement('value', null) .getChildElement('array', null) .getChildElement('data', null) .getChildElements(); List<DateTime> datesToSort = new List<DateTime>(); Map<DateTime, String> dateToIdMap = new Map<DateTime, String>(); for(Dom.XmlNode dNode : dataNodes){ List<Dom.XmlNode> memberNodes = dNode.getChildElement('struct', null) .getChildElements(); DateTime createdDate = null; String id = ''; for(Dom.XmlNode mNode : memberNodes){ String name = mNode.getChildElement('name', null) .getText(); if(name == 'DateCreated'){ String dt = mNode.getChildElement('value', null) .getChildElement('dateTime.iso8601', null) .getText(); //need to prepare the DateTime string for JSON parsing dt = dt.substring(0, 4) + '-' + dt.substring(4,6) + '-' + dt.substring(6, dt.length()); createdDate = (DateTime) JSON.deserialize('"'+dt+'"', DateTime.class); }else if(name == 'Id'){ id = mNode.getChildElement('value', null) .getChildElement('i4', null) .getText(); } } datesToSort.add(createdDate); dateToIdMap.put(createdDate, id); } datesToSort.sort(); String latestId = dateToIdMap.get(datesToSort.get(datesToSort.size()-1)); System.debug(latestId); return latestId; } Note the return type is String rather than Id: the i4 values here ("17", "43") are not valid Salesforce Ids, and returning them through an Id-typed method would throw a StringException at runtime.
Q: MPI - Use multiple threads to listen for incoming messages I am working on a project that uses MPI routines and multiple threads for sending and receiving messages. I would like each receiving thread to focus on a different incoming message instead of having two or more trying to receive the same one. Is there a way to achieve this? I don't know if this helps but I am currently using Iprobe() to check for incoming messages and Irecv() with Test() to check if the thread has received the whole message. A: Starting with version 3 of the standard, MPI allows for the removal of matched messages from the message queue so that they are no longer visible to subsequent probes/receives. This is done using the so-called matched probes. Just replace MPI_Iprobe with MPI_Improbe, which is the non-blocking matched probe operation: int flag; MPI_Status status; MPI_Message msg; MPI_Improbe(source, tag, comm, &flag, &msg, &status); Once MPI_Improbe returns 1 in flag, a message matching (source, tag, comm) has arrived. A handle to the message is stored into msg and the message is removed from the queue. Subsequent probes or receives with a matching (source, tag, comm) triplet - by the same thread or in another - won't see the same message again and therefore won't interfere with its reception by the thread that matched it originally. To receive a matched message, use MPI_Imrecv (or the blocking MPI_Mrecv): MPI_Request req; MPI_Imrecv(buffer, count, dtype, &msg, &req); do { ... MPI_Test(&req, &flag, &status); } while (!flag); Versions of MPI before 3.0 do not provide similar functionality. But, if I understand you correctly, you only need to guarantee that no matching probe will be posted before MPI_Irecv has had the opportunity to remove the message from the queue (which is what matched probe+receive is meant to prevent). 
If you are probing in a master thread and then dispatching the messages to different threads, then you could use a semaphore to delay the execution of the next probe by the main thread until after the worker has issued MPI_Irecv. If you have multiple threads doing probe+receive, then you may simply issue the MPI_Irecv call in the same critical section (or whatever synchronisation primitive you use to achieve the serialisation of the MPI calls as required by MPI_THREAD_SERIALIZED) as MPI_Iprobe once the probe turns out successful: // Worker thread CRITICAL(mpi) { MPI_Iprobe(source, tag, comm, &flag, &status); if (flag) MPI_Irecv(buffer, count, dtype, status.MPI_SOURCE, status.MPI_TAG, comm, &req); } Replace the CRITICAL(name) { ... } notation with whatever primitives your programming environment provides.
Q: Crosses topology In the plane $X=\mathbb{R^2}$ we consider the collection $\mathcal{T}$ of subsets $U\subset X$ such that for all $(a,b)\in U$ there exists $\epsilon>0$ with $$((a-\epsilon,a+\epsilon)\times\{b\})\cup(\{a\}\times(b-\epsilon,b+\epsilon))\subset U$$ It's easy to see that $\mathcal{T}$ is a topology on $\mathbb{R}^2$. Also, every open set in the usual topology of $\mathbb{R}^2$ is also an open set in this topology (because every open ball contains a "cross" like the ones defined above), and no one-point subset of $X$ is open, so $\mathcal{T}$ is not the discrete topology. So we have $\mathcal{T}$ is a topology finer than the usual topology on $X$ and strictly coarser than the discrete topology. I need to find out if $\mathcal{T}$ is different from the usual topology (I suspect it is) and, in that case, find a basis of $\mathcal{T}$. But I don't know how to construct an open set for this topology that is not a usual open set. A: Yes, the topology is different. This answer gives an explicit open set which is not standard-open. The topology itself is always a base. The "open crosses" themselves aren't open, so only function as a network, not a base.
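The explicit set from the linked answer is not reproduced above. One standard example of a cross-open set that is not Euclidean-open (my illustration, not necessarily the linked construction) is the complement of a parabola with its vertex removed:

```latex
\[
  A = \{\, (x, x^{2}) : x \neq 0 \,\}, \qquad U = \mathbb{R}^{2} \setminus A .
\]
% U \in \mathcal{T}: at the origin both arms of every cross avoid A, since A
% meets neither the line y = 0 nor the line x = 0; at any other point of U,
% the horizontal and vertical lines through that point each meet A in at most
% two points, so a small enough \epsilon avoids them.
% U is not open in the usual topology: every Euclidean ball around (0,0)
% contains points (x, x^{2}) with small x \neq 0, all of which lie in A.
```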
Q: Do many people even care about "good" grammar in novels? I realise people like me are probably many, i.e., amateurs who want to write a book and then get frustrated by having to learn all this very intricate grammar just to please the odd poindexter who may read their book. Especially these days with grammar on the decline, it seems those who can even identify good grammar are very scarce. Do many people even care about "good" grammar in novels? A: People do care about good grammar in novels. I understand what you mean about the "intricate details" if you're referring to the Oxford comma and grammar rules of that nature that even many editors won't really pay attention to. But, basic rules of grammar apply, unless you have a specific reason to throw those rules out the window. For example, if your character doesn't speak English well, you might use less grammar in his dialogue. Or, if you're writing about an entire civilization of people that never learned proper English, you may not want to use as much grammar, especially if your narrator is a person of the civilization. But, outside of cases like those, the rules of grammar do apply. This article may provide some insight: http://simplewriting.org/does-grammar-matter/ This one may also be of assistance, though it argues the contrary: http://www.writersdigest.com/editor-blogs/there-are-no-rules/general/why-i-dont-care-about-grammar-and-why-you-should-stop-worrying The way I've always been taught, and the way I've come to understand grammar, is that you must learn the rules before you can break them. And, if you break them too much, as I've learned the hard way, people can't understand your writing. So, no, novels aren't expected to be written in MLA or APA format, but they are expected to be coherent. A: You have to make a distinction between good grammar and what we might call the grammar of the good. Or perhaps I should say between grammar and the grammar of the good. Grammar is the mechanics of how language works.
Every comprehensible sentence is comprehensible because of grammar. Either your language works -- conveys meaning to a reader -- or it does not. You cannot make yourself understood at all without grammar. Then there is "Grammar", which is the study of how grammar works, and an attempt to define its operation, which, it turns out, we don't completely understand. You don't need to know anything at all about the study of Grammar in order to speak or write with grammar. In fact, grammar has to come before Grammar because you need grammar to understand anything that is said about Grammar. Then there is the grammar of the good, which is a set of prescriptions, nominally derived from Grammar, ostensibly for the purpose of teaching people better grammar, but with the larger purpose of separating the speech of the educated from the speech of the uneducated so as to create effective barriers to social mobility. The grammar of the good contains all kinds of rules that have no justification in actual grammar, or in Grammar. Some of them even come from other languages. But just as with Grammar, you don't need to know the grammar of the good in order to write a compelling story. What you do need to do is to become as fluent in written English as you are in spoken English. The two are substantially different. To become fluent in written English you are going to need to read and write it about as much as you had to speak and hear spoken English to become fluent in it. Which means a lot. Lots of adults just aren't there simply because they have not done enough practice to be fluent. Unfortunately, while the lack of fluency in written English can manifest itself in a number of different ways, most of which are not actually grammatical, people who don't have a vocabulary for talking about these flaws will fall back on calling them "bad grammar". Unfortunately, this leads many to suppose that you have to go learn Grammar and conform to the grammar of the good in order to write well.
This does not work terribly well, however, if for no other reason than that most of the faults people are complaining about are not actually grammatical, but are matters of convention, style, usage, or other things. Reading good books will do more to improve your fluency than reading Grammar books. That said, the gatekeepers of the publishing world number themselves among the good. While a few have the good sense to know that a great story does not always come across the transom in the grammar of the good, many of them will use it as their first filter. If it is not in the grammar of the good, they won't read far enough to find out if it is a great story. And if they do find a great story that is not written using the grammar of the good, they will insist on editing it into the grammar of the good before they will publish it. In short, while you don't need to master the grammar of the good in order to write a compelling story, you almost certainly need to present your story in the grammar of the good in order to get it published. On the upside, since the vast majority of (professionally) published work is published in the grammar of the good, if you read and write enough to become fluent in written English, you will have picked up both grammar and the grammar of the good by osmosis. This is not to say that you won't ever be criticized, since the grammar of the good is being constantly reshaped by the social mobility that it is designed to resist. Thus most of the Latinate rules (no split infinitives, no ending a sentence with a preposition) which were the frontier of the grammar of the good fifty years ago are now mere ruins to be gawked at by tourists, while whole new prescriptions have grown up around things like pronouns. But that is the writing life.
Q: Is it required to be exact about previous salary when dealing with recruiting agencies? All jobs I applied for have been through recruiting agencies and all of them are asking for my current/previous salary. It is much much lower than industry average (nearly half) and I don't want that to influence my next salary... I remember reading somewhere - though I don't remember where - that it doesn't need to be precise, and that it should also include in the sum a money value of perks, benefits, bonuses and maybe even equity. For example, if my basic salary is 50, my bonus is 10, my laptop is 3, other benefits amount to 2... can I say that my salary is 65 to the recruiting agency? Could there be any negative consequences for not disclosing the exact base salary? A: You can say whatever you like. The recruiting company wants to know in order to factor it into what roles they might consider putting you forward for, and in order to determine how much they should be putting forward as your salary demands. So obviously if you fudge/mislead/use creative accounts/outright lie, it just means that they'll be using the fudged number in those considerations. However, before you do this, bear in mind that it is in the recruiters' best interests to put you in a job at the best salary they can get. Their commission is based on your salary. The more they can get for you, the more they get for themselves. So when dealing with a good recruiter, you should absolutely be able to get the most effective service by telling them what your current situation is, what you are hoping for, and what you are not willing to settle for less than. Feel free to say "my current salary is $50k, but taking perks and bonus into account, I value it at $65k, and am not interested in changing jobs for anything less than $75k". A: I disagree with Carson63000's point that "it is in the recruiters' best interests to put you in a job at the best salary they can get." Technically, this is true. 
However, their primary goal is really just getting the placement at all. If they can sell the employer on the fact that you will accept less than another candidate, they will absolutely do so, and hope to earn 90% of a commission, vs losing it entirely. You have to be very clear to the recruiter (and to yourself) about what you'd really do in various scenarios. You say your salary is only about half the industry average. Using your example, would you stay at your current $50k job even if the recruiter could get you a $70k one? That still may be below the industry average, but it's a nice bump for you. If you know you'd turn it down, be honest and tell the recruiter that -- but be prepared if the recruiter says demanding more will take you out of consideration. A: You have two professional choices, and neither involve lying. Put down what you want your salary to be, providing the answer is clear that it is not your current salary. Some forms will ask for a desired salary, and so current salary is immaterial. Put down your current base salary and then when salary comes up, point out what you were getting with bonuses, and what it will actually take to move, based on being underpaid. The reason for giving the base salary, is that many companies will, in the process of checking references, confirm salaries with the previous employer(s). If the number you have provided does not match the numbers they hear, and is wildly different, you may be dropped from consideration without a chance to explain. Here are some related links on how they might verify your salary, and a couple more about why it is inappropriate to include the value of your benefits and whether you should lie about your salary.
Q: string modifications are not working? This is my code and in my url string white spaces are not encoded by the code NSString *str = item.fileArtPath; NSCharacterSet *set = [NSCharacterSet URLQueryAllowedCharacterSet]; [str stringByAddingPercentEncodingWithAllowedCharacters:set]; [str stringByReplacingOccurrencesOfString:@" " withString:@"%20"]; NSURL *urlString = [NSURL URLWithString:str]; for below string: http://xx.xx.xx.xxx:xx/xx/xx/xx/xxx/Audio Adrenaline.jpg ^^^^^^^^^^^^^^^^^^^^ The white space after Audio is not converted to %20 after I used string replacement. And in debugging, urlString is nil. Why so? A: From the NSString Class Reference: Returns a new string made from the receiver by replacing all characters not in the specified set with percent encoded characters. Meaning it doesn't mutate the instance it's called on; it returns a new string that you are discarding. You need to say something like str = [str stringByAddingPercentEncodingWithAllowedCharacters:set]; Same with str = [str stringByReplacingOccurrencesOfString:@" " withString:@"%20"];
Q: Linking a twitter search to a custom tableview cell okay, here's what I've been scratching my head over for the last few days. I have created a custom cell for my table view. I have created a separate class (customCell.h) for this cell and linked them together in Xcode. The custom cell has four UIlabels which I have declared in the .h file of the custom cell and linked to the custom cell via storyboard. I have imported the customCell.h header file into the .h file of my table view controller I am trying to do a search on twitter, and then populate the table view and custom cell with the details of the various tweets. The issue is I don't know how to link the results of the tweet to the 4 UIlabel outlets in my custom cell. When I am declaring some of the outlets of the custom cell in my table view implementation file (even though I've imported the .h file of the custom cell) xcode is saying that it does not recognise the name I've copied the coding below detailing as far as I can get. Any help would be much appreciated. 
Thanks in advance - (void)fetchTweets { dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{ NSData* data = [NSData dataWithContentsOfURL: [NSURL URLWithString: @"THIS IS WHERE MY TWITTER SEARCH STRING WILL GO.json"]]; NSError* error; tweets = [NSJSONSerialization JSONObjectWithData:data options:kNilOptions error:&error]; dispatch_async(dispatch_get_main_queue(), ^{ [self.tableView reloadData]; }); }); } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { return tweets.count; } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"TweetCell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier]; } NSDictionary *tweet = [tweets objectAtIndex:indexPath.row]; NSString *text = [tweet objectForKey:@"text"]; NSString *name = [[tweet objectForKey:@"user"] objectForKey:@"name"]; NSArray *arrayForCustomcell = [tweet componentsSeparatedByString:@":"]; cell.textLabel.text = text; cell.detailTextLabel.text = [NSString stringWithFormat:@"by %@", name]; return cell; } A: You are creating an instance of the UITableViewCell, which is the default class for tableview cells. In your case, you have to create an instance of your customCell class (which extends the UITableViewCell class). You have to do this in you cellForRowAtIndexPath method: static NSString *CellIdentifier = @"TweetCell"; customCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if ( cell == nil ) { cell = [[customCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier]; } // Get the tweet NSDictionary *tweet = [tweets objectAtIndex:indexPath.row]; I hope this helped you out! Steffen.
Q: Setting a form field's value during validation I read about this issue already, but I'm having trouble understanding why I can't change the value of a form's field during validation. I have a form where a user can enter a decimal value. This value has to be higher than the initial value of the item the user is changing. During clean(), the value that was entered is checked against the item's previous value. I would like to be able to re-set the form field's value to the item's initial value when a user enters a lower value. Is this possible from within the clean() method, or am I forced to do this in the view? Somehow, it doesn't feel right to do this in the view... (To make matters more complicated, the form's fields are built up dynamically, meaning I have to override the form's clean() method instead of using the clean_<fieldname>() method). A: I agree with Jack M's comment above. However, if you are going to change a form field's value, the view is likely the best place to do it. Validation methods should only be concerned with determining whether or not the current values are valid. In the view, you are already assigning flow control depending on a bound form's validity - whether or not to redirect to a 'success' page, or redisplay the form. In many cases you are also pre-populating a form, as in the example of a form used to edit existing parameters. It seems a logical extension of this functionality to add extra control over a particular value.
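Whichever layer ends up doing it, the operation itself is just "clamp the posted value back to the item's initial value before re-binding the form". A framework-neutral sketch in plain Python (function and field names are illustrative, not Django API):

```python
from decimal import Decimal


def clamp_posted_values(posted, minimums):
    """Return a copy of form POST data in which any value that fell
    below its item's initial value is reset to that initial value.
    Values stay strings, as they would in real form data."""
    cleaned = dict(posted)
    for field, minimum in minimums.items():
        if field in cleaned and Decimal(cleaned[field]) < minimum:
            cleaned[field] = str(minimum)
    return cleaned

# In a Django view one would apply this to request.POST.copy() before
# re-binding the form (request.POST itself is an immutable QueryDict).
```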
Q: DateTime Display in Windows Form application I have a Windows Forms application. I have a query which returns me a datetime in the format dd-MM-yyyy. But when this datetime is displayed in a datagrid it is displayed as dd/MM/yyyy. I wish to display dd-MM-yyyy in my datagrid. How can this be achieved? A: You set this in the column's DefaultCellStyle.Format: this.dataGridView1.Columns["Date"].DefaultCellStyle.Format = "dd-MM-yyyy";
Q: Spacing between boxes using bootstrap responsive scheme I have a this layout based on bootstrap. I've manually added a margin between the rows, but when I resize the page I want the columns to have vertical spacing between them as well. How should I do this? Full-size browser: Resized browser: Ans here's the code: <link href="http://maxcdn.bootstrapcdn.com/bootstrap/3.3.1/css/bootstrap.min.css" rel="stylesheet" /> <div class="container-fluid"> <div class="row top-buffer"> <!-- VISITORS --> <div class="col-md-4"> <div class="widget-body widget-white fixed-h-single-number-chart-sm"> <span class="light-grey">VISITORS</span> <span class="badge badge-question" data-container="body" data-toggle="tooltip" data-placement="right" title="Number of unique visitors to your web site"><small>?</small></span> <br /> <span class="single-number-md">4,700</span> <small>visitors</small> <span class="badge alert-success" data-container="body" data-toggle="tooltip" data-placement="right" title="Previous 30 days">10%&#x25B2;</span> <div class="chart" style="height:105px;" id="visitors-chart"></div> </div> </div> <!--/col-md-4--> <!-- TRIALS IN PROGRESS --> <div class="col-md-4"> <div class="widget-body widget-white fixed-h-single-number-chart-sm"> <span class="light-grey">TRIALS IN PROGRESS</span> <span class="badge badge-question" data-container="body" data-toggle="tooltip" data-placement="right" title="Number of trials in progress"><small>?</small></span> <br /> <span class="single-number-md" data-toggle="counterup">235</span> <small>trials</small> <span class="badge alert-success" data-container="body" data-toggle="tooltip" data-placement="right" title="Previous 30 days">7%&#x25B2;</span> <div class="chart" style="height:105px;" id="trials-in-progress-chart"></div> </div> </div> <!--/col-md-4--> <!-- NEW CUSTOMERS --> <div class="col-md-4"> <div class="widget-body widget-white fixed-h-single-number-chart-sm"> <span class="light-grey">NEW CUSTOMERS</span> <span class="badge 
badge-question" data-container="body" data-toggle="tooltip" data-placement="right" title="Number of new customers acquired this month"><small>?</small></span> <br /> <span class="single-number-md">56</span> <small>customers</small> <span class="badge alert-success" data-container="body" data-toggle="tooltip" data-placement="right" title="Previous 30 days">2%&#x25B2;</span> <div class="chart" style="height:105px;" id="new-customers-chart"></div> </div> </div> <!--/col-md-4--> </div> <!--/row--> <div class="row top-buffer"> <!-- VISITORS TO TRIALS --> <div class="col-md-4"> <div class="widget-body widget-white fixed-h-single-number-chart-sm"> <span class="light-grey">VISITORS TO TRIALS</span> <span class="badge badge-question" data-container="body" data-toggle="tooltip" data-placement="right" title="Percentage of visitors that have signed up for trial accounts"><small>?</small></span> <br /> <span class="single-number-md" data-toggle="counterup">5%</span> <small>conversion</small> <span class="badge alert-success" data-container="body" data-toggle="tooltip" data-placement="right" title="Previous 30 days">7%&#x25B2;</span> <div class="chart" style="height:105px;" id="visitors-to-trials-chart"></div> </div> </div> <!--/col-md-4--> <!-- TRIALS TO PURCHASE --> <div class="col-md-4"> <div class="widget-body widget-white fixed-h-single-number-chart-sm"> <span class="light-grey">TRIALS TO PURCHASE <span class="badge badge-question" data-container="body" data-toggle="tooltip" data-placement="right" title="Percentage of trials converted to purchases"><small>?</small></span></span> <br /> <span class="single-number-md">17%</span> <small>conversion</small> <span class="badge alert-success" data-container="body" data-toggle="tooltip" data-placement="right" title="Previous 30 days">9%&#x25B2;</span> <div class="chart" style="height:105px;" id="trials-to-purchase-chart"></div> </div> </div> <!--/col-md-4--> <!-- TOTAL CUSTOMERS --> <div class="col-md-4"> <div class="widget-body 
widget-white fixed-h-single-number-chart-sm"> <span class="light-grey">TOTAL CUSTOMERS</span> <span class="badge badge-question" data-container="body" data-toggle="tooltip" data-placement="right" title="Number of total active customers"><small>?</small></span> <br /> <span class="single-number-md">488</span> <small>customers</small> <span class="badge alert-success" data-container="body" data-toggle="tooltip" data-placement="right" title="Previous 30 days">2%&#x25B2;</span> <div class="chart" style="height:105px;" id="total-customer-chart"></div> </div> </div> <!--/col-md-4--> </div> <!--/row--> <div class="row top-buffer bottom-buffer"> <!-- FTE SALES REPS --> <div class="col-md-4"> <div class="widget-body widget-white fixed-h-single-number-chart-sm"> <span class="light-grey">FTE SALES REPRESENTATIVES </span> <span class="badge badge-question" data-container="body" data-toggle="tooltip" data-placement="right" title="Percentage of visitors that have signed up for trial accounts"><small>?</small></span> <br /> <span class="single-number-md" data-toggle="counterup">6</span> <small>sales reps</small> <span class="badge alert-success" data-container="body" data-toggle="tooltip" data-placement="right" title="Previous 30 days">20%&#x25B2;</span> <div class="chart" style="height:105px;" id="fte-sales-reps-chart"></div> </div> </div> <!--/col-md-4--> <!-- QUOTA PER SALES REP --> <div class="col-md-4"> <div class="widget-body widget-white fixed-h-single-number-chart-sm"> <span class="light-grey">QUOTA PER SALES REP <span class="badge badge-question" data-container="body" data-toggle="tooltip" data-placement="right" title="Percentage of trials converted to purchases"><small>?</small></span></span> <br /> <span class="single-number-md">$4,200</span> <small>dollars</small> <span class="badge alert-success" data-container="body" data-toggle="tooltip" data-placement="right" title="Previous 30 days">9%&#x25B2;</span> <div class="chart" style="height:105px;" 
id="quota-per-sales-rep-chart"></div> </div> </div> <!--/col-md-4--> <!-- FORECASTED SALES --> <div class="col-md-4"> <div class="widget-body widget-white fixed-h-single-number-chart-sm"> <span class="light-grey">FORECASTED SALES </span> <span class="badge badge-question" data-container="body" data-toggle="tooltip" data-placement="right" title="Number of total active customers"><small>?</small></span> <br /> <span class="single-number-md">$25,200</span> <small>dollars</small> <span class="badge alert-success" data-container="body" data-toggle="tooltip" data-placement="right" title="Previous 30 days">2%&#x25B2;</span> <div class="chart" style="height:105px;" id="forecasted-sales-chart"></div> </div> </div> <!--/col-md-4--> </div> <!--/row--> </div> <!--container-fluid-->

Thanks!!

A: Can't you simply add a margin-bottom: 30px on your widget class and remove it from the rows?

.widget-body {
  margin-bottom: 30px;
}

.widget-white {
  background-color: #fff;
}

.container-fluid {
  background-color: #eee;
}

<link href="http://maxcdn.bootstrapcdn.com/bootstrap/3.3.1/css/bootstrap.min.css" rel="stylesheet" />

<div class="container-fluid"> <div class="row top-buffer"> <!-- VISITORS --> <div class="col-md-4"> <div class="widget-body widget-white fixed-h-single-number-chart-sm"> <span class="light-grey">VISITORS</span> <span class="badge badge-question" data-container="body" data-toggle="tooltip" data-placement="right" title="Number of unique visitors to your web site"><small>?</small></span> <br /> <span class="single-number-md">4,700</span> <small>visitors</small> <span class="badge alert-success" data-container="body" data-toggle="tooltip" data-placement="right" title="Previous 30 days">10%&#x25B2;</span> <div class="chart" style="height:105px;" id="visitors-chart"></div> </div> </div> <!--/col-md-4--> <!-- TRIALS IN PROGRESS --> <div class="col-md-4"> <div class="widget-body widget-white fixed-h-single-number-chart-sm"> <span class="light-grey">TRIALS IN PROGRESS</span> <span
class="badge badge-question" data-container="body" data-toggle="tooltip" data-placement="right" title="Number of trials in progress"><small>?</small></span> <br /> <span class="single-number-md" data-toggle="counterup">235</span> <small>trials</small> <span class="badge alert-success" data-container="body" data-toggle="tooltip" data-placement="right" title="Previous 30 days">7%&#x25B2;</span> <div class="chart" style="height:105px;" id="trials-in-progress-chart"></div> </div> </div> <!--/col-md-4--> <!-- NEW CUSTOMERS --> <div class="col-md-4"> <div class="widget-body widget-white fixed-h-single-number-chart-sm"> <span class="light-grey">NEW CUSTOMERS</span> <span class="badge badge-question" data-container="body" data-toggle="tooltip" data-placement="right" title="Number of new customers acquired this month"><small>?</small></span> <br /> <span class="single-number-md">56</span> <small>customers</small> <span class="badge alert-success" data-container="body" data-toggle="tooltip" data-placement="right" title="Previous 30 days">2%&#x25B2;</span> <div class="chart" style="height:105px;" id="new-customers-chart"></div> </div> </div> <!--/col-md-4--> </div> <!--/row--> <div class="row top-buffer"> <!-- VISITORS TO TRIALS --> <div class="col-md-4"> <div class="widget-body widget-white fixed-h-single-number-chart-sm"> <span class="light-grey">VISITORS TO TRIALS</span> <span class="badge badge-question" data-container="body" data-toggle="tooltip" data-placement="right" title="Percentage of visitors that have signed up for trial accounts"><small>?</small></span> <br /> <span class="single-number-md" data-toggle="counterup">5%</span> <small>conversion</small> <span class="badge alert-success" data-container="body" data-toggle="tooltip" data-placement="right" title="Previous 30 days">7%&#x25B2;</span> <div class="chart" style="height:105px;" id="visitors-to-trials-chart"></div> </div> </div> <!--/col-md-4--> <!-- TRIALS TO PURCHASE --> <div class="col-md-4"> <div 
class="widget-body widget-white fixed-h-single-number-chart-sm"> <span class="light-grey">TRIALS TO PURCHASE <span class="badge badge-question" data-container="body" data-toggle="tooltip" data-placement="right" title="Percentage of trials converted to purchases"><small>?</small></span></span> <br /> <span class="single-number-md">17%</span> <small>conversion</small> <span class="badge alert-success" data-container="body" data-toggle="tooltip" data-placement="right" title="Previous 30 days">9%&#x25B2;</span> <div class="chart" style="height:105px;" id="trials-to-purchase-chart"></div> </div> </div> <!--/col-md-4--> <!-- TOTAL CUSTOMERS --> <div class="col-md-4"> <div class="widget-body widget-white fixed-h-single-number-chart-sm"> <span class="light-grey">TOTAL CUSTOMERS</span> <span class="badge badge-question" data-container="body" data-toggle="tooltip" data-placement="right" title="Number of total active customers"><small>?</small></span> <br /> <span class="single-number-md">488</span> <small>customers</small> <span class="badge alert-success" data-container="body" data-toggle="tooltip" data-placement="right" title="Previous 30 days">2%&#x25B2;</span> <div class="chart" style="height:105px;" id="total-customer-chart"></div> </div> </div> <!--/col-md-4--> </div> <!--/row--> <div class="row top-buffer bottom-buffer"> <!-- FTE SALES REPS --> <div class="col-md-4"> <div class="widget-body widget-white fixed-h-single-number-chart-sm"> <span class="light-grey">FTE SALES REPRESENTATIVES </span> <span class="badge badge-question" data-container="body" data-toggle="tooltip" data-placement="right" title="Percentage of visitors that have signed up for trial accounts"><small>?</small></span> <br /> <span class="single-number-md" data-toggle="counterup">6</span> <small>sales reps</small> <span class="badge alert-success" data-container="body" data-toggle="tooltip" data-placement="right" title="Previous 30 days">20%&#x25B2;</span> <div class="chart" style="height:105px;" 
id="fte-sales-reps-chart"></div> </div> </div> <!--/col-md-4--> <!-- QUOTA PER SALES REP --> <div class="col-md-4"> <div class="widget-body widget-white fixed-h-single-number-chart-sm"> <span class="light-grey">QUOTA PER SALES REP <span class="badge badge-question" data-container="body" data-toggle="tooltip" data-placement="right" title="Percentage of trials converted to purchases"><small>?</small></span></span> <br /> <span class="single-number-md">$4,200</span> <small>dollars</small> <span class="badge alert-success" data-container="body" data-toggle="tooltip" data-placement="right" title="Previous 30 days">9%&#x25B2;</span> <div class="chart" style="height:105px;" id="quota-per-sales-rep-chart"></div> </div> </div> <!--/col-md-4--> <!-- FORECASTED SALES --> <div class="col-md-4"> <div class="widget-body widget-white fixed-h-single-number-chart-sm"> <span class="light-grey">FORECASTED SALES </span> <span class="badge badge-question" data-container="body" data-toggle="tooltip" data-placement="right" title="Number of total active customers"><small>?</small></span> <br /> <span class="single-number-md">$25,200</span> <small>dollars</small> <span class="badge alert-success" data-container="body" data-toggle="tooltip" data-placement="right" title="Previous 30 days">2%&#x25B2;</span> <div class="chart" style="height:105px;" id="forecasted-sales-chart"></div> </div> </div> <!--/col-md-4--> </div> <!--/row--> </div> <!--container-fluid-->
Q: Priorities and the last match in a regex

Good afternoon! I'm writing a small extension for a corporate site in javascript. The site provides the original of an email in plain/text format. I need to find the last IP address OR the IP address that follows the word unknown; that is, if the word unknown occurs in the message, take the IP address that comes immediately after it. At the moment I have this regular expression:

/[([0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3})]|.nknown.[([0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3})]/gi

With this regex I only get the FIRST IP address. How do I find the last match and implement search priorities?

A: Your question isn't entirely clear, but I suggest this variant:

/(?:.*unknown|.*[^\d]|^)([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})/i

The result is returned in group 1. The search priority is set by (?:.*unknown|.*[^\d]|^): first it tries to find the last (.*unknown) occurrence of the word unknown immediately followed by an IP address. If no such match exists, it looks for the last (.*[^\d]) IP address, including an IP at the start of the string (^).

Example 1 https://regex101.com/r/EEhyIg/4
Example 2 https://regex101.com/r/EEhyIg/5

If you need to find the first unknownIP occurrence, or otherwise the last IP, change the regex like so:

/(?:.*?unknown|.*[^\d]|^)([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})/i

Example 3 https://regex101.com/r/EEhyIg/6
Example 4 https://regex101.com/r/EEhyIg/7
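Since the extension runs in JavaScript, the suggested pattern can be dropped straight into a small helper. A quick sketch (the sample text is made up for illustration):

```javascript
// Extract the "unknown" IP if one exists, otherwise the last IP in the text.
// Uses the first regex from the answer; returns null when no IP is found.
function lastOrUnknownIp(text) {
  var re = /(?:.*unknown|.*[^\d]|^)([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})/i;
  var m = text.match(re);
  return m ? m[1] : null;
}

// "unknown" immediately followed by an IP wins over the last IP:
lastOrUnknownIp("from 1.2.3.4 via unknown10.20.30.40 relay 50.60.70.80"); // "10.20.30.40"
// with no such "unknown", the last IP is returned:
lastOrUnknownIp("from 1.2.3.4 relay 50.60.70.80"); // "50.60.70.80"
```

Note the helper uses a single non-global match, since only one result is wanted per message.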
Q: Read file with structure, show it and write it

I have the following two files (these are language files from a Ruby on Rails project):

en:
  calendar:
    check:
      invalid_date: 'Date is invalid'
      wrong_input_format: "The date should have following format<br/>%{format}"
  globals:
    yestext: "Yes"
    notext: "No"

Second file:

de:
  calendar:
    check:
      invalid_date: 'Datum ist ungültig'
      wrong_input_format: "Das Datum muss das folgende Format haben <br/>%{format}"
  globals:
    yestext: "Ja"
    notext: "Nein"

I need a simple WinForms application that shows me the files for editing and saving in a spreadsheet. What is the best way to realize this? I am a newbie.

A: What I would do is probably:

Build a reader class for the files that is reusable for any number of files (languages) and save the values in a Dictionary<String, Dictionary<String, String>>, i.e. it would hold Dictionary<Language, Dictionary<Key, Value>>.

Now walk the outer and inner dictionaries and add the items to a table to show. Find the key in the first column; if found, add the value to the column you are currently handling. If not found, add a new row at the end.

To save changes I would again write a class that takes care of a single file and saves all keys that have a value.

I see there are ' and " used. If that is essential, save the quote type too. In that case I would not use String for the value, but create a simple class holding the value and the quote sign.

HINT: This would be much easier using a tree for editing the entries. In that case you don't have to translate the nested design of the original file back and forth. Sample: http://www.codeproject.com/Articles/23746/TreeView-with-Columns but there are many out there.
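The table-building step the answer describes, one dictionary per language merged into rows keyed by the translation key, is language-agnostic. Here is a quick sketch of that merge in JavaScript rather than C# (the function and field names are illustrative):

```javascript
// Merge per-language key/value maps into one row per translation key,
// so a key missing in one language simply leaves that cell empty.
function mergeLocales(locales) {
  var rows = new Map(); // key -> { key, en: ..., de: ..., ... }
  for (var lang of Object.keys(locales)) {
    for (var key of Object.keys(locales[lang])) {
      if (!rows.has(key)) rows.set(key, { key: key });
      rows.get(key)[lang] = locales[lang][key];
    }
  }
  return Array.from(rows.values());
}

mergeLocales({
  en: { "globals.yestext": "Yes" },
  de: { "globals.yestext": "Ja", "globals.notext": "Nein" }
});
// -> [ { key: "globals.yestext", en: "Yes", de: "Ja" },
//      { key: "globals.notext", de: "Nein" } ]
```

The same shape translates directly to the nested Dictionary approach in C#; the missing-key case is what makes the "add a new row at the end" step necessary.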
Q: How to generate random match-up using round robin system?

Given this table Players:

| id | name    |
+----+---------+
| 1  | tawing  |
| 2  | master  |
| 3  | pepe    |
| 4  | bethel  |
| 5  | richard |

I want matches like:

tawing vs master
master vs pepe
master vs bethel
master vs richard
...

Here's what I've tried so far:

select t1.id, t1.name
from Players t1
cross apply Players t2

A: Is this what you are looking for:

select concat(t1.name, ' vs ', t2.name) "Match-up"
from players t1
cross join players t2
where t1.name <> t2.name;

DEMO
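One caveat about the cross join in the answer: it keeps both orderings of every pair (tawing vs master and master vs tawing); adding a condition like t1.id < t2.id would keep each pairing once. Outside of SQL, the round-robin pairing logic is just every unordered pair playing exactly once; a sketch in JavaScript:

```javascript
// Generate each unordered pairing exactly once. The i < j loop bound plays
// the same role a `t1.id < t2.id` condition would in the SQL version.
function roundRobinPairs(players) {
  var pairs = [];
  for (var i = 0; i < players.length; i++) {
    for (var j = i + 1; j < players.length; j++) {
      pairs.push(players[i] + " vs " + players[j]);
    }
  }
  return pairs;
}

roundRobinPairs(["tawing", "master", "pepe"]);
// -> ["tawing vs master", "tawing vs pepe", "master vs pepe"]
```

For n players this yields n*(n-1)/2 match-ups, so the five players in the question produce 10 matches.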
Q: High performance primitive array builder in Java

I currently use Google OR-Tools to solve a max-flow problem, which has me create a few int[] arrays in Java to pass into OR-Tools. OR-Tools itself is very fast and not the issue here, but I'm open to performance-minded alternatives. Most of the time goes into building the arrays, plus GC when the results are returned, which I chalk up to JNI overhead I probably can't do much about.

The primitive arrays approach the 5 to 7 million element mark, and the values are large enough to require int; short is not an option. Do I have any options or tricks, or does anyone have any insight into how to build these most efficiently? Memory is not really an issue, and I'm open to any solution for absolute bleeding-edge performance, even if it requires a different representation of my data, but this still must plug into OR-Tools (unless you have an idea to replace it). Mind you, I don't know the length of the arrays ahead of time, and I don't do updates or deletes, only appends. I'm happy to provide any more details. Thanks for any suggestions.

A: Too long for a comment.

If building the problem representation takes a lot of time compared to solving, then you're doing something wrong. I guess you're using something like

int[] appendTo(int[] array, int element) {
    int[] result = Arrays.copyOf(array, array.length + 1);
    result[result.length - 1] = element;
    return result;
}

which has quadratic complexity. The solution is similar to what ArrayList does: grow by some fixed factor and ignore trailing array elements. This may not be what you need at the end, but shrinking all arrays once (just before passing them to the library) is cheap.
You could use a class like

class MyIntArray {
    private int length;
    private int[] data = new int[4];

    // This does the final shrinking.
    public int[] toArray() {
        return Arrays.copyOf(data, length);
    }

    public MyIntArray append(int element) {
        if (data.length == length) {
            data = Arrays.copyOf(data, 2 * length);
        }
        data[length++] = element;
        return this;
    }
}

or misuse the last element of an int[] for tracking the logical length (slightly more efficient, but very hacky). There are various trade-offs, e.g., you could reduce your growth factor to 1.5 by using length + (length >> 1) instead of 2 * length, start with shorter or longer arrays, or even with an empty array (like ArrayList does; then you'd need to adapt the growth factor as well).
Q: Why won't my link open in a new tab?

I create websites all the time, but this time I am stumped. I cannot seem to get a link to open in a new tab. Here is the specific code I am trying to troubleshoot:

<aside id="bnk_widget_donation-2" class="bnk-widget bnk_widget_donation">
  <div class="bnk-donation clickable">
    <span class="donation-icon mobile-hide">&nbsp;</span>
    <h3 class="replace inset">
      <a href="https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=WQZ2PBSENFF2C" target="_blank">
        Donate Now
      </a>
    </h3>
    <p class="subhead">
      support our mission
    </p>
  </div>
</aside>

The line number when looking at the source is line 197. The page is available here. The problem is the Donate link to PayPal that does not open in a new tab. Any thoughts?

A: JavaScript is handling the click. The HTML includes a JavaScript file, "http://www.3e.oneofakind.ws/wp-content/themes/bhinneka/js/p2-init.js?ver=3.6", which binds a click event to $(".clickable, .landing-mod"), so that handler navigates before the anchor's target="_blank" can take effect.
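One way around it is to bind your own handler after the theme's script and open the link explicitly. The selector and load order below are assumptions based on the markup above, and the DOM/window pieces are passed in as parameters so the core logic stays plain JavaScript:

```javascript
// Open the widget's anchor href in a new tab. `widget` is the clicked
// ".bnk-donation.clickable" element; `open` is normally window.open.
function openWidgetLink(widget, open) {
  var link = widget.querySelector("a"); // the Donate Now anchor
  if (link) {
    open(link.getAttribute("href"), "_blank");
    return true;
  }
  return false;
}

// Hypothetical wiring with jQuery, replacing the theme's handler
// (must run after p2-init.js has loaded):
// $(".bnk-donation.clickable").off("click").on("click", function () {
//   openWidgetLink(this, window.open.bind(window));
// });
```

Note that `.off("click")` only removes handlers bound through jQuery, which is how the theme script appears to bind them; if it bound them another way this sketch would need adjusting.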
Q: Infinite scrolling on extJS local data store

Is it possible to have infinite scrolling in an extJS (4.1) grid whose data store is loaded manually?

myStore = Ext.create('Ext.data.Store', {
    fields: givenStoreFields,
    data: [[]],
});

myGrid = Ext.create('Ext.grid.Panel', {
    store: myStore,
    columns: givenColumns,
});

In my case I fetch data from the server, the data is tweaked, and then loaded into the store manually:

myStore.loadData(fetchedAndTweaked);

Since fetchedAndTweaked contains many rows, rendering is very slow and slows down the entire browser. Therefore I want to add parameters to myGrid and myStore to get "infinite" scrolling (on the data set fetchedAndTweaked). However, in all examples I find, the data store has some proxy/reader etc.

Thanks

A: You can, if you use the buffered: true config on your store, as described in the Ext JS 4.1.3 docs:

buffered : Boolean

Allows the Store to prefetch and cache in a page cache, pages of Records, and to then satisfy loading requirements from this page cache.

To use buffered Stores, initiate the process by loading the first page. The number of rows rendered are determined automatically, and the range of pages needed to keep the cache primed for scrolling is requested and cached. Example:

myStore.loadPage(1); // Load page 1

A PagingScroller is instantiated which will monitor the scrolling in the grid, and refresh the view's rows from the page cache as needed. It will also pull new data into the page cache when scrolling of the view draws upon data near either end of the prefetched data.

The margins which trigger view refreshing from the prefetched data are Ext.grid.PagingScroller.numFromEdge, Ext.grid.PagingScroller.leadingBufferZone and Ext.grid.PagingScroller.trailingBufferZone.

The margins which trigger loading more data into the page cache are leadingBufferZone and trailingBufferZone.
By default, only 5 pages of data are cached in the page cache, with pages "scrolling" out of the buffer as the view moves down through the dataset. Setting this value to zero means that no pages are ever scrolled out of the page cache, and that eventually the whole dataset may become present in the page cache. This is sometimes desirable as long as datasets do not reach astronomical proportions.

Selection state may be maintained across page boundaries by configuring the SelectionModel not to discard records from its collection when those Records cycle out of the Store's primary collection. This is done by configuring the SelectionModel like this:

selModel: {
    pruneRemoved: false
}

Defaults to: false

Available since: 4.0.0

As noted above, you will also have to set the pageSize config on the store to what you want it.

A word of warning: you don't find any examples of local stores with infinite scrolling because the number of records needed to make infinite scrolling viable exceeds the number of records you should reasonably keep in a local store. In other words, rendering is not the only thing slowing down the browser; it's also the amount of data you are trying to process locally. If you feel you need to implement infinite scrolling, it's probably time to convert to a remotely loaded data store.

A: After an upgrade I found out that this is much easier in extJS 4.2 (beta). The infinite scrolling is detached from the data store, i.e. it does not matter what type of data store you use. Sorting also works as you want it.

store = Ext.create('Ext.data.SimpleStore', {
    autoLoad: true,
    pageSize: 100,
    data: [[]]
});

Ext.require('Ext.grid.plugin.BufferedRenderer');

var grid = Ext.create('Ext.grid.Panel', {
    plugins: 'bufferedrenderer',
    store: store
});

// I load matrix data directly in the store for speed
store.loadRawData(matrixData);

The application is so much faster now.
Q: How to pass different config for each instance of a browser created by protractor in sharded config enabled?

I am passing the login details of a tested website using browser.params in the Protractor login suite. But the problem is that my web application has a restriction of a single login per user account. Hence running tests with multiCapabilities in Firefox and Chrome simultaneously fails, since only one browser user session can exist at a time. Please provide a workaround to solve this. It would be nice to pass different login params to Firefox and Chrome inside multiCapabilities. Is it possible?

A: The browser instance can be fetched using browser.getProcessedConfig, and the login credentials can be assigned accordingly in onPrepare of protractor.conf.js. Refer to the browser.getProcessedConfig API doc.

onPrepare: function() {
    browser.getProcessedConfig().then(function(config) {
        switch (config.capabilities.browserName) {
            case 'chrome':
                // logic for chrome
                browser.params.username = 'blahblah';
                break;
            case 'firefox':
                browser.params.username = 'blahblah2';
                break;
            default:
                browser.params.username = 'blahblah3';
                break;
        }
    });
},