Q: Can a capacitor increase the flow/quantity of water from a water pump? We are already using a 25 uF capacitor. Can the water quantity be increased if we use a 30 uF capacitor, or any higher value you would suggest? A: No, a wrongly sized capacitor will make things worse. The purpose of a run capacitor is to energise a second winding in the motor. Its capacitance is chosen to produce the correct phase difference between the windings. A smaller value will produce less torque; a higher value will overheat the motor. [Schematic of a capacitor-run single-phase induction motor omitted.] See Single-phase induction motors and the Motor Capacitor FAQ.
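As a back-of-the-envelope illustration of why the capacitor value matters (not part of the original answer, and assuming 50 Hz mains), the run capacitor's reactance sets the auxiliary-winding current and hence the phase split:

```latex
% Reactance of the run capacitor, assuming 50 Hz mains:
\[
X_C = \frac{1}{2\pi f C}
    = \frac{1}{2\pi \cdot 50\,\mathrm{Hz} \cdot 25\times 10^{-6}\,\mathrm{F}}
    \approx 127\ \Omega
\]
% Moving to 30 uF lowers X_C to roughly 106 ohms, shifting the
% auxiliary-winding current and phase away from the design value --
% risking extra heating rather than giving more torque or flow.
```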
{ "pile_set_name": "StackExchange" }
Q: What is wrong with this for loop? Hi guys, I have an array that I'm trying to append to the DOM! Here is the code: function inputFeedContent(data){ for(var k=0; k < columns; k++) { var col = "<div class='col-1'>"; for(var j=0; j < data.sample[k].length; j++) { col += "<p>"+data.sample[k][j]+"</p>"; } col += "</div>"; $('.sliding-window').append(col); } } where columns = 12. The problem is I only get five of these: <div class="col-1"> <p>Some text</p> </div> What am I doing wrong here? Keep in mind I'm a noob :) Thanks! A: I think you're indexing the data the wrong way around — printing each column separately rather than row after row — so data.sample[k] runs out before k reaches columns, which is why you only get five divs. Try swapping the two indices, so the inner loop walks over the rows of data.sample and the k-th value is taken from each row: function inputFeedContent(data) { for(var k=0; k < columns; k++) { var col = "<div class='col-1'>"; for(var j=0; j < data.sample.length; j++) { col += "<p>"+data.sample[j][k]+"</p>"; } col += "</div>"; $('.sliding-window').append(col); } }
{ "pile_set_name": "StackExchange" }
Q: HOWTO: Call Managed C# Interface From Unmanaged C++ On WindowsCE Compact Framework I have extensive unmanaged Windows CE 5 C++ code that provides a UI that I want to use in a new product by combining it with a large amount of newer business and communications logic written in managed C# on Windows CE 6 and the Compact Framework. The UI may know about the business logic, but I want the business logic ignorant of the UI such that I can later replace it with a managed version, or any other UI that I choose as a front-end. I found an article that describes how to use COM as the bridge in the Windows world, but I'm having difficulty applying it in the .NET CF under WinCE. In the past, I've imported type libraries and used COM calls (CoInitialize(), CoCreateInstance()) to obtain pointers to interfaces on other Windows platforms, and that's the strategy I'm pursuing at the moment: using COM directly in the unmanaged C++ library to access the C# interfaces in my managed library, assuming that the same facility is provided in WinCE. Here's my problem: the typelib. It's not available from my managed C# library as I've used it in the past via a '#import "SomeCPPLibrary.dll"' statement. I believe it's buried in the .dll assembly, stored in a different manner than it has been in the past and hence, not directly available through a #import of the library itself. I think that I can #import a typelib, but I cannot find a way to extract the typelib from my managed .dll, and while I might be able to hack together an interface definition file (.idl) and use the platform's midl.exe to generate a .tlb from it, there's no guarantee that my .idl, and hence, resulting .tlb, would really match what is in my C# .dll. I don't even know if the platform midl.exe works in this manner but assume that it does. Am I barking up the wrong tree? Is it possible to use a managed C# interface in unmanaged C++ through a corresponding COM interface? Does setting the [assembly: ComVisible(true)] attribute in its AssemblyInfo.cs file make all interfaces in the managed assembly available through COM in the unmanaged world via the GUID the AssemblyInfo.cs defines, or do I have to do something more? How do I get the typelib out of the managed .dll so that my unmanaged C++ library can #import it? I tried adding my managed C# library project as a reference in the unmanaged C++ library project, but that didn't seem to help. Is such a reference relevant at all in this situation? Is there a better approach to solving the basic problem of calling managed C# code from the unmanaged C++ world? Something I just read about here is a mixed mode libarary with a managed translation layer to bridge the unmanaged/managed gap. I'm not sure that is a good strategy as call response speed is an important factor, but might it be better in the long run as I plan to rewrite the UI to managed C# at some point, and thus puts all the effort on the throw-away UI rather than mucking with the more permanent business/comms logic? Regardless of the answer to this question, I'd still like to solve the problem of using COM, if for no other reason than curiosity. A: I have attempted to call C# from C++ in WinCE. I don't believe there is any COM support provided by the Compact Framework, so you can't use ComVisible(true). I also couldn't find a way to host .NET in C++ because again the functionality wasn't exposed in the Compact Framework. My solution was to create a stub C# application and communicate with the C++ host via Msg Queues. 
This also solves the data marshaling issue. Performance for my usage is fine. The biggest cost is the startup time of the stub, which you'd have to pay anyway if your complete app were C#.
{ "pile_set_name": "StackExchange" }
Q: Sum of numbers in array I am new to programming and I am trying to solve problems in an online judging system. There is a problem that looks very interesting and important, but unfortunately I have no idea how to solve it. I would appreciate any hints. Here is the problem: given an array as input, print 1 if the array can be split into two parts with the same sum of numbers, otherwise print 0 (in each input below, the first number appears to be the array length). Input: 2 1 1 Output: 1 Input: 3 2 5 3 Output: 1 Input: 3 1 4 7 Output: 0 Thanks A: This is known as the partition problem (or at least a variant of the partition problem). It is usually solved with what is called dynamic programming. This is a pretty advanced problem for someone who is just beginning to learn how to program, so I advise starting on some easier challenges. If, however, you are interested in tackling this beast, check out this link: http://people.csail.mit.edu/bdean/6.046/dp/. The link on this page to the 'partition problem' shows a video explanation of a working solution.
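A minimal sketch of the dynamic-programming idea the answer above alludes to — a subset-sum table over half of the total — written in Java. The input format (first number = array length) and the assumption of non-negative values are inferred from the examples, not stated in the problem:

```java
import java.util.Scanner;

public class EqualSumPartition {
    // Returns true if the array can be split into two groups with equal sum.
    // Assumes non-negative values, as in the examples.
    static boolean canPartition(int[] a) {
        int total = 0;
        for (int v : a) total += v;
        if (total % 2 != 0) return false;        // an odd total can never split evenly
        int half = total / 2;
        boolean[] reachable = new boolean[half + 1];
        reachable[0] = true;                     // the empty subset sums to 0
        for (int v : a) {
            for (int s = half; s >= v; s--) {    // iterate downwards so each value is used once
                if (reachable[s - v]) reachable[s] = true;
            }
        }
        return reachable[half];
    }

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int n = in.nextInt();                    // assumed: first number is the array length
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = in.nextInt();
        System.out.println(canPartition(a) ? 1 : 0);
    }
}
```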
{ "pile_set_name": "StackExchange" }
Q: What does it mean to start a server on your own computer and access the application through your computer? I understand that the rails s command starts a server and you can access your Rails application because of this command. However, don't you usually have to connect to some other server outside of your own computer? A: Why would you? A server is just a "computer", and it can be your own. One difference is that the HTTP request stays inside your local network (it goes from the browser to the server on your own machine), while a request to a remote server also starts from your browser but reaches a server somewhere else.
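To make the answer's point concrete: the same HTTP client code works whether the server is the rails s process on your own machine or a machine somewhere else — only the host in the URL changes. A small Java 11+ illustration (port 3000 is the Rails default; example.com stands in for any remote server):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class WhereIsTheServer {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // A request to the local `rails s` process (default port 3000)...
        HttpRequest local = HttpRequest.newBuilder(
                URI.create("http://localhost:3000/")).build();

        // ...is made exactly the same way as a request to a remote server.
        HttpRequest remote = HttpRequest.newBuilder(
                URI.create("https://example.com/")).build();

        for (HttpRequest req : new HttpRequest[] { local, remote }) {
            HttpResponse<String> resp =
                    client.send(req, HttpResponse.BodyHandlers.ofString());
            System.out.println(req.uri() + " -> HTTP " + resp.statusCode());
        }
    }
}
```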
{ "pile_set_name": "StackExchange" }
Q: Weird directory entries in FAT file system So I'm trying to figure out how the FAT FS works and got confused by the root directory table. I have two files in the partition, test.txt and innit.eh, which results in the following table: The entries starting with 0xE5 are deleted, so I assume these were created due to renaming. The entries for the actual files look like this: TEST TXT *snip* INNIT EH *snip* What I don't understand is where entries like At.e.s.t......t.x.t Ai.n.n.i.t.....e.h. are coming from and what they are for. They do not start with 0xE5, so they should be treated as existing files. By the way, I'm using Debian Linux to create the filesystems and files, but I noticed similar behaviour with filesystems and files created on Windows. A: The ASCII parts of the name (where the letters are close to each other) are the legacy 8.3 DOS short name. Notice it only uses capital letters. In DOS, only these would be there. The longer parts (with 0x00 in between) are the long name (shown in Windows), which is Unicode and uses 16 bits per character.
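As a concrete illustration of the answer: the leading 'A' in those entries (byte 0x41) is the sequence byte of a long-file-name entry (ordinal 1 with the "last entry" flag 0x40 set), and the interleaved 0x00 bytes are the high bytes of UTF-16LE characters. A hedged Java sketch of decoding one such 32-byte entry, assuming the standard VFAT long-file-name layout (attribute 0x0F at offset 11, name characters at byte offsets 1–10, 14–25 and 28–31):

```java
public class LfnEntryDecoder {
    // Offsets (inclusive) of the UTF-16LE name characters inside one 32-byte VFAT LFN entry.
    private static final int[][] NAME_RANGES = { {1, 10}, {14, 25}, {28, 31} };

    // Long-name entries are marked by the attribute value 0x0F at offset 11.
    static boolean isLongNameEntry(byte[] entry) {
        return entry.length == 32 && (entry[11] & 0xFF) == 0x0F;
    }

    // Extracts the name characters carried by a single LFN entry.
    static String decodeNamePart(byte[] entry) {
        StringBuilder name = new StringBuilder();
        for (int[] range : NAME_RANGES) {
            for (int i = range[0]; i <= range[1]; i += 2) {
                int ch = (entry[i] & 0xFF) | ((entry[i + 1] & 0xFF) << 8); // little-endian code unit
                if (ch == 0x0000 || ch == 0xFFFF) return name.toString();  // NUL terminator or 0xFFFF padding
                name.append((char) ch);
            }
        }
        return name.toString();
    }
}
```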
{ "pile_set_name": "StackExchange" }
Q: How to prove the following statement for this matrix. Let $A:=[a_{ij}]_{n\times n}$, $a_{ij}=0$ or $a_{ij}=1$, and $\exists m \in\mathbb N$ such that $A^m=J-I$, where $I$ is the identity matrix and $J=[1]_{n\times n}$ (each entry is $1$). How to prove: $\exists a \in\mathbb N$ such that $n=a^m+1$, and $m$ is odd. Thanks in advance. A: (The problem statement is false when $n=1$, but we will ignore this degenerate case.) We have at least three proofs. Proof 1 (adapted from the proof by Anon; see his/her comment). We have $$\det (A)^m=\det (A^m)=\det(J-I)=(-1)^{n-1}(n-1)$$ and therefore $n=|\det(A)|^m+1$. Proof 2 (adapted from Theorem 1 of C. W. H. Lam, J. H. van Lint, Directed Graphs with Unique Paths of Fixed Length, Journal of Combinatorial Theory B, vol. 24, No. 3, 1978; thanks to @darij_grinberg for the information): $A^m=J-I$ implies that $AJ-A=A^{m+1}=JA-A$. Hence $AJ=JA$, i.e. all row sums and column sums of $A$ are equal to some natural number $c$. Thus $AJ=JA=cJ$ and in turn $A^mJ=c^mJ$. But by the defining property of $A$, we also have $A^mJ=(J-I)J=(n-1)J$. Therefore $c^m=n-1$. Proof 3: As $2 = 1^m+1$, we may assume that $n\ge3$. Since $A$ is entrywise nonnegative, by the Perron-Frobenius theorem the spectral radius $\rho(A)$ of $A$ is a maximal eigenvalue of $A$. Hence $\rho(A)^m$ is a maximal eigenvalue of $A^m$. But when $n\ge3$, the maximal eigenvalue of $A^m=J-I$ is unique, namely $n-1$. Hence $\rho(A)^m=n-1$, or $n=\rho(A)^m+1$. Finally, as the eigenvalues of $A^m=J-I$ are $n-1$ (a simple eigenvalue) and $-1$ (with multiplicity $n-1$), the eigenvalues of $A$ are $\rho(A)=(n-1)^{1/m}$ and a number of $m$-th roots of $-1$. Hence $\rho(A)=|\det(A)|$ and in turn $\rho(A)$ is an integer.
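For completeness, the eigenvalue facts used in Proof 3 can be checked directly:

```latex
% Eigenvalues of J - I, as used in Proof 3.
% J is the all-ones n x n matrix, so J = v v^T with v = (1,...,1)^T.
\[
J\,v = n\,v, \qquad J\,w = 0 \ \text{ for every } w \perp v ,
\]
% hence J has eigenvalues n (simple) and 0 (multiplicity n-1), and
\[
(J - I)\,v = (n-1)\,v, \qquad (J - I)\,w = -w ,
\]
% i.e. A^m = J - I has the simple maximal eigenvalue n-1 and the
% eigenvalue -1 with multiplicity n-1, exactly as claimed.
```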
{ "pile_set_name": "StackExchange" }
Q: Convert DOM childs to JSON var allmenus = $('.dragger-menu').map(function() { var li = {}; $(this).children('li').each(function() { switch ($(this).data("menu")) { case "page": li.page = { id: $(this).data("menu-id") }; break; case "external-link": li["external-link"] = { title: $(this).text().trim(), url: $(this).data("menu-link"), icon: $(this).children("i").attr("class") } break; case "dropdown": li.dropdown = {}; li.dropdown.title = $(this).contents().filter(function() { return this.nodeType == Node.TEXT_NODE; }).text().trim(); li.dropdown.data = $(this).children("ol").map(function() { var data = {}; $(this).children("li").each(function() { switch ($(this).data("menu")) { case "page": data.page = { id: $(this).data("menu-id") }; break; case "external-link": data["external-link"] = { title: $(this).text().trim(), url: $(this).data("menu-link"), icon: $(this).children("i").attr("class") } break; } }); return data; }).get(); break; } }); return li; }).get(); var obj = { menu: allmenus }; var jsondata = JSON.stringify(obj, null, 2); console.log(jsondata); <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <ol class="dragger-menu"> <li class="list-group-item" data-menu="page" data-menu-id="24">Online Register</li> <li class="list-group-item" data-menu="page" data-menu-id="26">Secondly Page</li> <li class="list-group-item" data-menu="dropdown"> <i class="fa fa-caret-square-o-down"></i> Dropdown Menu <ol class=""> <li class="list-group-item" data-menu="page" data-menu-id="25">Contact Us</li> <li class="list-group-item" data-menu="external-link" data-menu-link="https://twitter.com/your-page"><i class="fa fa-Twitter"></i> Twitter</li> <li class="list-group-item" data-menu="external-link" data-menu-link="https://facebook.com/your-page"><i class="fa fa-Facebook"></i> Facebook</li> </ol> </li> </ol> <ol class="dragger-menu"> <li class="list-group-item" data-menu="page" data-menu-id="28">Ahmet Deneme</li> <li class="list-group-item" data-menu="page" data-menu-id="21">Secondly Page</li> </ol> The above javascript code does not pass the same type element to json. Example: If you are run the code, you see only page 26, not 24. This code only get last elem in same elem. I want the all elements in json data. What should i do in javascript code ? Sorry for my bad englısh . Thanks for all. A: Don't build JSON by concatenating strings. Create an object and then use JSON.stringify. And use jQuery's .each() and .map() methods to loop through the DOM, rather than for loops. 
var allmenus = $('.dragger-menu').map(function() { var li = {}; $(this).children('li').each(function() { switch ($(this).data("menu")) { case "page": li.page = { id: $(this).data("menu-id") }; break; case "external-link": li["external-link"] = { title: $(this).text().trim(), url: $(this).data("menu-link"), icon: $(this).children("i").attr("class") } break; case "dropdown": li.dropdown = {}; li.dropdown.title = $(this).contents().filter(function() { return this.nodeType == Node.TEXT_NODE; }).text().trim(); li.dropdown.data = $(this).children("ol").map(function() { var data = {}; $(this).children("li").each(function() { switch ($(this).data("menu")) { case "page": data.page = { id: $(this).data("menu-id") }; break; case "external-link": data["external-link"] = { title: $(this).text().trim(), url: $(this).data("menu-link"), icon: $(this).children("i").attr("class") } break; } }); return data; }).get(); break; } }); return li; }).get(); var obj = { menu: allmenus }; var jsondata = JSON.stringify(obj, null, 2); console.log(jsondata); <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <ol class="dragger-menu"> <li class="list-group-item" data-menu="page" data-menu-id="24">Online Register</li> <li class="list-group-item" data-menu="external-link" data-menu-link="https://facebook.com/your-page"><i class="fa fa-facebook"></i> Facebook</li> <li class="list-group-item" data-menu="dropdown"> <i class="fa fa-caret-square-o-down"></i> Dropdown Menu <ol class=""> <li class="list-group-item" data-menu="page" data-menu-id="25">Contact Us</li> <li class="list-group-item" data-menu="external-link" data-menu-link="https://twitter.com/your-page"><i class="fa fa-Twitter"></i> Twitter</li> </ol> </li> </ol> <ol class="dragger-menu"> <li class="list-group-item" data-menu="page" data-menu-id="28">Ahmet Deneme</li> </ol> The code to get the title of the dropdown came from How to get the text node of an element?. It would be easier if you put that text inside a <span> so you can write a selector for it.
{ "pile_set_name": "StackExchange" }
Q: Categories with post from child categories on front page I am developing my first WordPress site and I am using the Customizr theme. I need to build a front page, that shows the latest posts from 2 categories - News and Events and all their child categories. So my category tree looks like this: Articles -News --First news child category --Second news child category -Events --First events child category --Second events child category and I want the front page to show the category News and the category Events with all the posts in their child categories packed in 2 containers with the headlines News and Events respectively. I am a total newbie in WordPress and I've searched for days for a solution, but I think I'm searching with the wrong keywords, because I didn't found what I need. I've created a child theme, and I'm currently trying to make an index.php with The Loop to achieve that. I've also tried making a static front page which uses a different template. So which is the right way to do that? Can you please at least give me some pointers? Thank you! A: Normally you'd use the pre_get_posts filter for this, but for your specific case, since you probably want to customize how the posts are displayed (separately), I'd recommend setting a static front page and using a shortcode in the content that uses get_posts (yet this also means that your theme's "Blog" template won't be used.) eg. add_shortcode('custom_news_and_events','my_custom_news_and_events'); function my_custom_news_and_events() { $news = get_posts(array('category'=>'1','posts_per_page'=>'7')); $childnews1 = get_posts(array('category'=>'2','posts_per_page'=>'3')); $childnews2 = get_posts(array('category'=>'3','posts_per_page'=>'3')); $events = get_posts(array('category'=>'4','posts_per_page'=>'7')); $childevents1 = get_posts(array('category'=>'5','posts_per_page'=>'3')); $childevents2 = get_posts(array('category'=>'6','posts_per_page'=>'3')); $output = '<div id="frontpagenews">'; $output .= '<h3>News</h3>'; if (count($childnews1) > 0)) {foreach $news as $post) { $output .= custom_post_display($post); } } $output .= '<h4>Child News 1</h4>'; if (count($childnews1) > 0)) {foreach $childnews1 as $post) { $output .= custom_post_display($post); } } $output .= '<h4>Child News 2</h4>'; if (count($childnews2) > 0)) {foreach $childnews2 as $post) { $output .= custom_post_display($post); } } $output .= '</div>'; $output = '<div id="frontpageevents">'; $output .= '<h3>Events</h3>'; if (count($events) > 0)) {foreach $events as $post) { $output .= custom_post_display($post); } } $output .= '<h4>Child Events 1</h4>'; if (count($childevents1) > 0)) {foreach $childevents1 as $post) { $output .= custom_post_display($post); } } $output .= '<h4>Child Events 2</h4>'; if (count($childevents2) > 0)) {foreach $childevents2 as $post) { $output .= custom_post_display($post); } } $output .= '</div>'; return $output; } (setting the correct category IDs for each get_posts of course.) the posts_per_page settings gives you more fine-grained control over how many posts from each category you get, but of course you can pass more arguments to get_posts (see the codex page) ...this is an example function called in each display loop: function custom_post_display($post) { $display = '<div class="postitem">'; $display .= '<h5><a href=".get_permalink($post->ID).">'; $display .= $post->post_title.'</a></h5>'; $display .= '<p>'.$post->post_excerpt.'</p>'; $display .= '</div>'; return $display; } ...or displaying anything else you want from the WP_Post object... 
Now that all of that is in place, use the shortcode [custom_news_and_events] on the static page. You can then style the #frontpagenews and #frontpageevents divs and the .postitem class, etc. If you don't need the child category subheadings, you can remove them and replace the get_posts calls with simply: $news = get_posts(array('category'=>'1,2,3','posts_per_page'=>'10')); $events = get_posts(array('category'=>'4,5,6','posts_per_page'=>'10')); which will then get the latest 10 posts from the main and subcategories (as the default for the orderby argument is date).
{ "pile_set_name": "StackExchange" }
Q: Using fromarray from PIL to save an image vector of 4 channels and then re-reading it I have an image vector v with dimensions (100, 100, 4). To save this image vector, I used PIL as follows: im = Image.fromarray(v) The image vector is not RGB, as it has 4 channels. I got the following error: TypeError: Cannot handle this data type I also got a few more errors. I think there is some issue with the type of my array. The type of the vector v is as follows: print(type(v)) <class 'numpy.ndarray'> A: You will get this error if your underlying datatype is unacceptable to Image.fromarray(). So, for example, it will happily accept an array of unsigned 8-bit integers: i=np.zeros((100,100,4),dtype=np.uint8) # specify unsigned 8-bit ints print(i.dtype) # prints dtype('uint8') im = Image.fromarray(i) # works fine Now try an unacceptable type: i=np.zeros((100,100,4),dtype=np.int16) print(i.dtype) # prints dtype('int16') im = Image.fromarray(i) TypeError: Cannot handle this data type So, the answer is that your underlying datatype is unacceptable. Check it with v.dtype.
{ "pile_set_name": "StackExchange" }
Q: How can I insert a node into SQL Server from a TreeView? C# and Windows Forms I have a TreeView in 3 layers. The TreeView already loads from SQL into Visual Studio through a recursive function, but I can't get the new node to be saved to the database. It is saved in the view, because I can see the node with its name, but when I close the application it disappears, and that is because it is not being saved in the database. Can you tell me whether my InsertarNodos function in the data layer is wrong? Data layer: public void InsertarNodos(datostreeview parametros) { comandSql.Connection = con.AbrirConexion(); comandSql.CommandText = "sp_InsertarNodos"; comandSql.CommandType = CommandType.StoredProcedure; comandSql.Parameters.AddWithValue("@Codigo", parametros.Codigo); comandSql.Parameters.AddWithValue("@Nombre", parametros.Nombre); comandSql.Parameters.AddWithValue("@CodigoRapido", parametros.Codigorapido); comandSql.Parameters.AddWithValue("@Idpadre", parametros.Idpadre); comandSql.ExecuteNonQuery(); comandSql.Parameters.Clear(); con.CCerraConexion(); } Business layer: public void InsertarAttr(string codigo, string nombre, string codigorapido, int idpadre) { obje_cdtreeview.Codigo = codigo; obje_cdtreeview.Nombre = nombre; obje_cdtreeview.Codigorapido = Convert.ToInt32(codigorapido); obje_cdtreeview.Idpadre = idpadre; } Presentation layer: private void button1_Click(object sender, EventArgs e) { if (editar == false) { try { if (textBox1.Text != "" || textBox2.Text != "") { nodoSeleccionado = treeView1.SelectedNode; padre = int.Parse(dataTableNodos.Rows[int.Parse(nodoSeleccionado.Tag.ToString())]["IdPCuentas"].ToString()); textBox1.Text = textBox1.Text + "."; string Codigo = textBox1.Text; textBox2.Text = textBox2.Text.ToUpper(); string Nombre = textBox2.Text; textBox3.Text = Codigo.Replace(".", ""); string Codigorapido = textBox3.Text; objc_treeview.InsertarAttr(Codigo, Nombre, Codigorapido, padre); MessageBox.Show("Se guardo el Registro"); textBox1.Enabled = false; textBox2.Enabled = false; TreeNode nodoInsertado = new TreeNode(); nodoInsertado.Text = Codigo + " " + Nombre; nodoSeleccionado.Nodes.Add(nodoInsertado); } I have fixed the error, but it still does not save. A: EDIT, after the correction of the previous error: the InsertarNodos method looks correct, as far as one can tell without knowing the contents of the sp_InsertarNodos stored procedure. The problem is that you never call this method anywhere, so the database insert is never reached. ** END OF EDIT ** It looks like a very simple error: you add the parameter comandSql.Parameters.AddWithValue("Idpadre", parametros.Idpadre); when it should be comandSql.Parameters.AddWithValue("@Idpadre", parametros.Idpadre); Also, make sure the parameters @Codigo, @Nombre, @CodigoRapido and @Idpadre are written exactly like that in your stored procedure.
{ "pile_set_name": "StackExchange" }
Q: CFZip of certain file type Is it possible to use cfzip to create a zip archive containing only files of a certain type? I need to do this to take the .bak files out of a folder with different file types and put them into a zip archive. Thanks Colin A: Steve already pointed you to the CF documentation, which has an example of delete. To create a zip, the simplest way is as follows: <cfset fileName = createUUID() /> <cfzip file="D:\#fileName#.zip" action="zip" source="D:\myfolder" filter="*.bak" recurse="No" > If you want to include the files of sub-directories as well, set recurse="yes". To filter multiple file types you can simply use comma-separated file types in filter, like this: filter="*.jpg, *.gif, *.png" Note: I have used a dynamic file name in case you want to run this script multiple times with different file names, or multiple users access this script at the same time.
{ "pile_set_name": "StackExchange" }
Q: How can I get my CheckBoxList to have three possible states? More precisely, is there a way to have three possible states for each checkbox: checked, unchecked, undefined (in most GUIs this is represented as a full square)? Alternatively, do you recommend another control that would meet these needs? UPDATE: OK, given that HTML does not support tri-state checkboxes, I'm looking for a way to 'CSS' (color fill, highlight, etc.) the checkboxes of the items that are in that 'undefined' state from my server's point of view. A: Not all GUIs represent undefined as a full square. That's why it's going to be clearer to use one of the following: 1. Radio boxes 2. Dropdown 3. ListBox A: I have decided to use CheckBoxList anyway, since I need two states accessible by the user, and a third one that is not accessible by the user but is used to display inconsistencies. The way I have solved this is through JavaScript + CSS. Some bits of ideas here: http://css-tricks.com/indeterminate-checkboxes/
{ "pile_set_name": "StackExchange" }
Q: Skipping the BufferedReader readLine() method in Java Is there an easy way to skip the readLine() method in Java if it takes longer than, say, 2 seconds? Here's the context in which I'm asking this question: public void run() { boolean looping = true; while(looping) { for(int x = 0; x<clientList.size(); x++) { try { Comm s = clientList.get(x); String str = s.recieve(); // code that does something based on the string in the line above } // other stuff like catch methods } } } Comm is a class I wrote, and the receive method, which contains a BufferedReader called "in", is this: public String recieve() { try { if(active) return in.readLine(); } catch(Exception e) { System.out.println("Comm Error 2: "+e); } return ""; } I've noticed that the program stops and waits for the input stream to have something to read before continuing, which is bad, because I need the program to keep looping (as it loops, it goes to all the other clients and asks for input). Is there a way to skip the readLine() call if there's nothing to read? I'm also pretty sure that I'm not explaining this well, so please ask me questions if I'm being confusing. A: The timeout alone is not a good idea. Use one thread per client (or use asynchronous I/O, but unless you're building some high-performance application, that's unnecessarily complicated). As for the timeout itself, it must be set on the stream that the BufferedReader encapsulates. See for example How can I set a timeout against a BufferedReader based upon a URLConnection in Java?
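For the timeout mechanics the answer mentions (setting it on the underlying stream rather than on the BufferedReader): if Comm wraps a java.net.Socket, Socket.setSoTimeout makes readLine() throw SocketTimeoutException instead of blocking forever. A sketch under that assumption — the socket field and the 2-second figure are illustrative, and, as the answer says, one thread per client is usually the cleaner design:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class Comm {
    private final Socket socket;          // assumed: Comm wraps a TCP socket
    private final BufferedReader in;

    public Comm(Socket socket) throws IOException {
        this.socket = socket;
        this.socket.setSoTimeout(2000);   // any read blocks for at most 2 seconds
        this.in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
    }

    public String receive() {
        try {
            String line = in.readLine();
            return line != null ? line : "";
        } catch (SocketTimeoutException e) {
            return "";                    // nothing arrived within 2 s, move on to the next client
        } catch (IOException e) {
            System.out.println("Comm Error 2: " + e);
            return "";
        }
    }
}
```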
{ "pile_set_name": "StackExchange" }
Q: Dynamically add href on an <a> I have a logout link <div id="titleInfo"> <div id="signOut"> <a href="" class="signOutLink" >Log Out</a> </div> </div> I need to add the navigation destination dynamically. $(document).ready(function() { //Navigation Handling $('.signOutLink').click(function() { alert('h'); window.location.href("LogOut.aspx"); //window.location.replace("LogOut.aspx"); }); }); But it is not working. How can we correct it? A: $(document).ready(function () { $('a.signOutLink').click(function () { window.location = "LogOut.aspx"; }); }); A: Thanks to all. I used the following (note that window.location.href is a property to assign to, not a function to call): $('.signOutLink').click(function () { window.location.href = "LogOut.aspx?a=b"; //Can use replace() also return false; });
{ "pile_set_name": "StackExchange" }
Q: Best resources to learn origami art immediately? I saw one of my class fellows creating beautiful origami art work with paper. I was quite amazed, so I would like to learn origami art. Can you provide resources or recommendations on how to learn origami art immediately? A: As with all crafts and arts, you can not expect to make beautiful items when you just start. You will have to start with the simple models and big sheets of paper. Once you have some experience, you will go up to more difficult patterns and smaller paper. Depending on how you take to it, it might take days or weeks, but origami can be done beautiful with little experience when you work precise and with good paper. Using a good book or a good website, cheap paper for your first few tries but nice paper for as soon as you are happy you can do the model. Depending on how you learn best, still pictures with how to fold, as in the books, or videos with fold along instructions will work. If you go for one style, try to get a site or person who has done more than just one or two, as while you are learning, following the same kind of instructions is often helpful. On the other hand, if you do not understand the instructions, other instructions for the same item may well help you. Some websites, (I have no connection with any, I learned my folding from age 4, well before internet was invented.) Beginners as well as more advanced. Not as easy but clear indication which models are not starter ones. You can say that all sites that make you fold a crane as first item are over ambitious. I am out of the books, as the last one I purchased was 20 years back or so, but I know that if you go to a good bookshop, they should stock some. Look and read, buy a kid book if you can understand what the pictures instruct. You can go on to adult books after you have learned the basics. There are also many youtube channels and individual videos dedicated to origami. Again I would suggest to start with instructions aimed at children. While the boxes or packages with nice paper seem quite expensive, you will find that one pack will last quite a few models. One way to make bigger results of small and relatively simple origami is to put them together and make scenes, birthday cards or pictures. Or to go modular, where you make the same piece over and over and put them together. If you dream of big and intricate figures you will need to keep improving on what you have learned, using bigger and more difficult models. But do start with baby steps, and keep improving step by step. If you try to go too difficult before you have the basics, you may give up well before you are good enough.
{ "pile_set_name": "StackExchange" }
Q: Should a dialog close before a mutating change is accepted? I have a simple settings dialog Some setting: [ ] [Update] [Cancel] When the user clicks on "Update", the app will send a request to the server to update the user's settings. Should the dialog close before or after this request is acknowledged? I can think of three different behaviors here, and I'm not sure what's best: The dialog stays open until the server responds with success (in which case the dialog closes) or failure (in which case the error is reported). During this time, perhaps the "Update" button turns into a spinner, or some other indication that something's happening. The dialog immediately closes, and the settings displayed in the main app page are immediately updated client-side. When the server returns, if it returns with a failure, then some notification (popup? error message underneath the settings?) tells the user it failed. The dialog immediately closes, but the settings on the main app page turn into a spinner, which resolves when the server returns. A: I feel that it depends. To be as unobtrusive as possible, if the settings that you are referring to, does not impact the immediate usage/experience for the user, we could just use option 3 - Immediately close the dialog + spinner on the settings However, if the experience of the user depends on the settings that he has invoked, I would generally prefer to use option 1. If it is taking longer than expected eg after 10 sec, you could also inform the user that he will know the status of that transaction later, and fall-back to option 3. Option 2 in my personal experience tends to confuse the user and lead to a poorer experience. eg He has assumed that the settings change was successful and is later informed that it was not, despite having seen that it was reflected at the client side.
{ "pile_set_name": "StackExchange" }
Q: Read .txt file into 2D Array There are a few of these topics out there, but this problem has a slight twist that makes it different. I'm focused on only half of a larger problem. I'm sure many of you are aware of the magic square problem. Prompt: Assume a file with lines and numbers on each line like the square shown. Write a program that reads info into a two dimensional array of intS. The program should determine if the matrix is a magic square or not. Working Solution: public static int[][] create2DIntMatrixFromFile(String filename) throws Exception { int[][] matrix = {{1}, {2}}; File inFile = new File(filename); Scanner in = new Scanner(inFile); int intLength = 0; String[] length = in.nextLine().trim().split("\\s+"); for (int i = 0; i < length.length; i++) { intLength++; } in.close(); matrix = new int[intLength][intLength]; in = new Scanner(inFile); int lineCount = 0; while (in.hasNextLine()) { String[] currentLine = in.nextLine().trim().split("\\s+"); for (int i = 0; i < currentLine.length; i++) { matrix[lineCount][i] = Integer.parseInt(currentLine[i]); } lineCount++; } return matrix; } public static boolean isMagicSquare(int[][] square) { return false; } Here is my (old) code for reading info from a text file into a 2D array: public static int[][] create2DIntMatrixFromFile(String filename) throws Exception { int[][] matrix = {{1}, {2}}; File inFile = new File(filename); Scanner in = new Scanner(inFile); in.useDelimiter("[/n]"); String line = ""; int lineCount = 0; while (in.hasNextLine()) { line = in.nextLine().trim(); Scanner lineIn = new Scanner(line); lineIn.useDelimiter(""); for (int i = 0; lineIn.hasNext(); i++) { matrix[lineCount][i] = Integer.parseInt(lineIn.next()); lineIn.next(); } lineCount++; } return matrix; } public static boolean isMagicSquare(int[][] square) { return false; } And here is the text file I am reading from. It is in the shape of a 9x9 2D array, but the program must accommodate an array of ambiguous size. 37 48 59 70 81 2 13 24 35 36 38 49 60 71 73 3 14 25 26 28 39 50 61 72 74 4 15 16 27 29 40 51 62 64 75 5 6 17 19 30 41 52 63 65 76 77 7 18 20 31 42 53 55 66 67 78 8 10 21 32 43 54 56 57 68 79 9 11 22 33 44 46 47 58 69 80 1 12 23 34 45 There are two spaces proceeding each line on purpose. Before I state the exact problem, this is a homework template so the method declaration and variable initialization was pre-determined. I'm not positive that the method even correctly creates a 2D Array from the file because I can't run it yet. The issue is that for some reason "matrix" was initialized with 1 column and 2 rows. For what reason I'm not sure, but in order to fill an array with the numbers from the file I need to create a 2D array with dimensions equal to the number of values in a line. I previously had written code to create a new 2D array int[line.length()][line.length()] but it created a 36x36 array because that's how many individual characters are in one line. I have a feeling it's as simple as looping through the first line and having a counter keep track of each sequence of numbers separated by a zero. To me, that solution seems too inefficient and time consuming just to find the dimensions of the new array. What's the best way to accomplish this? Without using ArrayLists as I have to rewrite this program after using ArrayLists. 
A: I produced the following 2D array from the file you provided: 37 | 48 | 59 | 70 | 81 | 2 | 13 | 24 | 35 ----+----+----+----+----+----+----+----+---- 36 | 38 | 49 | 60 | 71 | 73 | 3 | 14 | 25 ----+----+----+----+----+----+----+----+---- 26 | 28 | 39 | 50 | 61 | 72 | 74 | 4 | 15 ----+----+----+----+----+----+----+----+---- 16 | 27 | 29 | 40 | 51 | 62 | 64 | 75 | 5 ----+----+----+----+----+----+----+----+---- 6 | 17 | 19 | 30 | 41 | 52 | 63 | 65 | 76 ----+----+----+----+----+----+----+----+---- 77 | 7 | 18 | 20 | 31 | 42 | 53 | 55 | 66 ----+----+----+----+----+----+----+----+---- 67 | 78 | 8 | 10 | 21 | 32 | 43 | 54 | 56 ----+----+----+----+----+----+----+----+---- 57 | 68 | 79 | 9 | 11 | 22 | 33 | 44 | 46 ----+----+----+----+----+----+----+----+---- 47 | 58 | 69 | 80 | 1 | 12 | 23 | 34 | 45 The array figures out the size of the square when it reads the first line of the file. This is very dynamic. Its works as long as the input file is a perfect square. I have no further error handling. Here is a simple approach which should adhere to your guidelines. import java.io.BufferedReader; import java.io.InputStream; import java.io.InputStreamReader; public class ReadMagicSquare { public static int[][] create2DIntMatrixFromFile(String filename) throws Exception { int[][] matrix = null; // If included in an Eclipse project. InputStream stream = ClassLoader.getSystemResourceAsStream(filename); BufferedReader buffer = new BufferedReader(new InputStreamReader(stream)); // If in the same directory - Probably in your case... // Just comment out the 2 lines above this and uncomment the line // that follows. //BufferedReader buffer = new BufferedReader(new FileReader(filename)); String line; int row = 0; int size = 0; while ((line = buffer.readLine()) != null) { String[] vals = line.trim().split("\\s+"); // Lazy instantiation. if (matrix == null) { size = vals.length; matrix = new int[size][size]; } for (int col = 0; col < size; col++) { matrix[row][col] = Integer.parseInt(vals[col]); } row++; } return matrix; } public static void printMatrix(int[][] matrix) { String str = ""; int size = matrix.length; if (matrix != null) { for (int row = 0; row < size; row++) { str += " "; for (int col = 0; col < size; col++) { str += String.format("%2d", matrix[row][col]); if (col < size - 1) { str += " | "; } } if (row < size - 1) { str += "\n"; for (int col = 0; col < size; col++) { for (int i = 0; i < 4; i++) { str += "-"; } if (col < size - 1) { str += "+"; } } str += "\n"; } else { str += "\n"; } } } System.out.println(str); } public static void main(String[] args) { int[][] matrix = null; try { matrix = create2DIntMatrixFromFile("square.txt"); } catch (Exception e) { e.printStackTrace(); } printMatrix(matrix); } } This approach is more refined and optimized. import java.io.BufferedReader; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; public class ReadMagicSquare { private int[][] matrix; private int size = -1; private int log10 = 0; private String numberFormat; public ReadMagicSquare(String filename) { try { readFile(filename); } catch (IOException e) { e.printStackTrace(); } } public void readFile(String filename) throws IOException { // If included in an Eclipse project. InputStream stream = ClassLoader.getSystemResourceAsStream(filename); BufferedReader buffer = new BufferedReader(new InputStreamReader(stream)); // If in the same directory - Probably in your case... // Just comment out the 2 lines above this and uncomment the line // that follows. 
//BufferedReader buffer = new BufferedReader(new FileReader(filename)); String line; int row = 0; while ((line = buffer.readLine()) != null) { String[] vals = line.trim().split("\\s+"); // Lazy instantiation. if (matrix == null) { size = vals.length; matrix = new int[size][size]; log10 = (int) Math.floor(Math.log10(size * size)) + 1; numberFormat = String.format("%%%dd", log10); } for (int col = 0; col < size; col++) { matrix[row][col] = Integer.parseInt(vals[col]); } row++; } } @Override public String toString() { StringBuffer buff = new StringBuffer(); if (matrix != null) { for (int row = 0; row < size; row++) { buff.append(" "); for (int col = 0; col < size; col++) { buff.append(String.format(numberFormat, matrix[row][col])); if (col < size - 1) { buff.append(" | "); } } if (row < size - 1) { buff.append("\n"); for (int col = 0; col < size; col++) { for (int i = 0; i <= log10 + 1; i++) { buff.append("-"); } if (col < size - 1) { buff.append("+"); } } buff.append("\n"); } else { buff.append("\n"); } } } return buff.toString(); } public static void main(String[] args) { ReadMagicSquare square = new ReadMagicSquare("square.txt"); System.out.println(square.toString()); } }
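Neither the question nor the answer fills in the isMagicSquare stub, so here is one possible way to complete it — it checks that every row, every column and both diagonals sum to the same value (it does not check that the entries are exactly 1..n², which stricter definitions also require):

```java
public static boolean isMagicSquare(int[][] square) {
    int n = square.length;
    if (n == 0) return false;

    // Reference value: the sum of the first row.
    int target = 0;
    for (int col = 0; col < n; col++) target += square[0][col];

    // Every row and every column must match the target.
    for (int i = 0; i < n; i++) {
        if (square[i].length != n) return false;   // not actually square
        int rowSum = 0, colSum = 0;
        for (int j = 0; j < n; j++) {
            rowSum += square[i][j];
            colSum += square[j][i];
        }
        if (rowSum != target || colSum != target) return false;
    }

    // Both diagonals must match as well.
    int diag = 0, antiDiag = 0;
    for (int i = 0; i < n; i++) {
        diag += square[i][i];
        antiDiag += square[i][n - 1 - i];
    }
    return diag == target && antiDiag == target;
}
```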
{ "pile_set_name": "StackExchange" }
Q: AWS Beanstalk and Apache VirtualHost SSL custom DocumentRoot I already configured my .ebextensions directory to install SLL files and configure ssl.conf apache file. Everything is working perfect, instead the DocumentRoot of my ssl.conf that is not overwriting my Elastic Beanstalk default DocumentRoot. Problem: When I access https://dashboard.mydomain.com its also pointing to /home instead /dashboard folder. Elastic BeanStalk Default DocumentRoot: Directory Files: home/ -> Accessed by http://www.mydomain.com/ ... dashboard/ -> Accessed by https://dashboard.mydomain.com (DocumentRoot isn't working, its also pointing to /home) ... framework/ (Secure) ssl.conf: LoadModule ssl_module modules/mod_ssl.so Listen 443 <VirtualHost *:443> ServerName dashboard.mydomain.com DocumentRoot /var/www/html/dashboard -- NOT WORKING <Proxy *> Order deny,allow Allow from all </Proxy> SSLEngine on SSLCertificateChainFile "/etc/httpd/ssl/gd_bundle.crt" SSLCertificateFile "/etc/httpd/ssl/cert.crt" SSLCertificateKeyFile "/etc/httpd/ssl/key.key" ProxyPass / http://localhost:80/ retry=0 ProxyPassReverse / http://localhost:80/ ProxyPreserveHost on LogFormat "%h (%{X-Forwarded-For}i) %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" ErrorLog /var/log/httpd/elasticbeanstalk-error_log TransferLog /var/log/httpd/elasticbeanstalk-access_log </VirtualHost> A: UPDATE 2014 feb 08 I did some tests and I found out that the ProxyPass directive does not simply redirect every request from port 443 to localhost:80 (as one could easily thought), but basically repeats the request to Apache from scratch, through the port 80 (at least, that's what I understood). So, if you want to set any environment variable, you need to do it in the matching VirtualHost, adding to your .conf something like this: Listen 80 <VirtualHost *:80> DocumentRoot /var/www/html/dashboard </VirtualHost> This will be executed with every request (80 or 443). END UPDATE I have no idea why, but here two suggestions: the .conf in /etc/httpd/conf.d are processed in alphabetical order. This doesn't explain why your configuration don't take over, because the environmental variables are set in aws_env.conf BUT... take a look to aws_env.conf: you'll find some hints (it looks like the name of documentRoot is somewhat changed: for example, in my version is PHP_DOCUMENT_ROOT). Good luck, let us know if you find out.
{ "pile_set_name": "StackExchange" }
Q: Efficient and Succinct Vector Transformation of Weekly to Daily hourly Data in R I've got a working function, but I'm hoping there is a more succinct way of going about this. I have a dataset of events that are captured with the hour of the week they occurred in. For example, 4 AM on Sunday= 4, 4 AM on Monday = 28 etc. I want to analyze this data on a daily basis. For instance, all of the events that happen between 8 and 10 am on any day. To do this I have built a function that returns a dichotomous value for the given range for an ordered list. Function two_break accepts an ordered list of integers between 0:168 representing the hours of a week and a range (b1 and b2) for the desired periods of a 24 hour day. b1 and b2 divide the range of the 24 hour day that are desired. i.e. if b1=8 and b2=10 two_break will return all all values of 9, (9+24)=33, (9+48)=57...etc. as 1 and all others 0. two_break <- function(test_hr,b1,b2){ test_hr<-ifelse(test_hr==1,1.1,test_hr) for(i in 0:6){ test_hr<-ifelse(test_hr> (b1+24*i) & test_hr< (b2+24*i), 1 ,test_hr) } test_hr<-ifelse(test_hr==1,1,0) return(test_hr) } This function works fine, but I'm wondering if anybody out there could do it more efficiently/succinctly. See full code and data set at my github: anthonyjp87 168 hr transformation file/data. Cheers! A: You can use integer division %/% to capture the day of the week, and modulus, %% to capture the hour in the day: weekHours <- 1:168 # return the indices of all elements where the hour is between 8AM and 10AM, inclusive test_hr <- weekHours[weekHours %% 24 %in% 8:10] Note that midnight is represented by 0. If you want to wrap this into a function, you might use getTest_hr <- function(weekHours, startTime, stopTime) { weekHours[weekHours %% 24 %in% seq(startTime, stopTime)] } To get the day of the week, you can use integer division: # get all indices for the third day of the week dayOfWeek3 <- weekHours[(weekHours %/% 24 + 1) == 3] To get a binary vector of the selected time periods, simply pull the logical out of the index: allTimesBinary <- (weekHours %% 24) %in% 8:10
{ "pile_set_name": "StackExchange" }
Q: How to specify anonymous object as generic parameter? Let's suppose I have the following task: var task = _entityManager.UseRepositoryAsync(async (repo) => { IEnumerable<Entity> found = //... Get from repository return new { Data = found.ToList() }; } What is the type of task? Actually, it turns out to be: System.Threading.Tasks.Task<'a>, where 'a is anonymous type: { List<object> Data } How can I explicitly state this type without using var? I have tried Task<a'> task = ... or Task<object> task = ... but can't manage it to compile. Why do I need to do this? I have a method (UseApplicationCache<T>), that takes a Func<Task<T>> as a parameter. I also have a variable cache that the user might set to true or false. If true, the above said method should be called and my task should be passed as argument, if false, I should execute my task without giving it as an argument to the method. My end result would be something like this: Func<Task<?>> fetch = () => _entityManager.UseRepositoryAsync(async (repo) => { IEnumerable<Entity> found = //... Get from repository return new { Data = found.ToList() }; } return await (cache ? UseApplicationCache(fetch) : fetch()); A: How can I explicitly state this type You cannot. An anonymous type has no name, hence cannot be explicitly mentioned. The anonymous type can be inferred if you create a generic helper method. In your case, you can do: static Func<TAnon> InferAnonymousType<TAnon>(Func<TAnon> f) { return f; } With that you can just do: var fetch = InferAnonymousType(() => _entityManager.UseRepositoryAsync(async (repo) => { IEnumerable<Entity> found = //... Get from repository return new { Data = found.ToList() }; } )); return await (cache ? UseApplicationCache(fetch) : fetch()); The "value" of TAnon will be automatically inferred.
{ "pile_set_name": "StackExchange" }
Q: mix the result of 2 queries in one table? I am new to SQL, can i mix the result of two queries in one table with different attributes ? Below 3 tables of my database and the result i want to get. Is this possible ? client |id|name|adress| ---------------- |1 |a |x | |2 |b |y | order |id|client.id|product|date | -------------------------------- |1 |1 |px |2018-01-1| |2 |1 |py |2018-05-1| |3 |2 |px |2018-06-1| pay |id|client.id|amount|date | ------------------------------- |1 |1 |1000 |2018-03-1| |2 |2 |500 |2018-09-1| Output |name |order.id |product |pay.id |amount |date | ---------------------------------------------------- |a |1 |px |- |- |2018-01-1| |a |- |- |1 |1000 |2018-03-1| |a |2 |py |- |- |2018-05-1| |b |3 |px |- |- |2018-06-1| |b |- |- |2 |500 |2018-09-1| A: Edited for Access. Use UNION for 2 subqueries: SELECT * FROM ( SELECT client.name, order.id AS orderid, order.product AS product, "-" AS payid, "-" AS amount, order.date AS [date] FROM client INNER JOIN [order] ON client.id = order.clientid UNION SELECT client.name, "-" AS orderid, "-" AS product, pay.id AS payid, pay.amount AS amount, pay.date AS [date] FROM client INNER JOIN [pay] ON client.id = pay.clientid ) ORDER BY name, date the result is: name orderid product payid amount date a 1 px - - 2018-01-1 a - - 1 1000 2018-03-1 a 2 py - - 2018-05-1 b 3 px - - 2018-06-1 b - - 2 500 2018-09-1
{ "pile_set_name": "StackExchange" }
Q: Programming a custom redirect in a Drupal Site I would like to know what is the best option to setup a Drupal redirect process that will look at the current time of day and then redirect the user to a page based on this information? Since I have not had to do this before in Drupal, I'm looking for one or two suggestions on how to do this with the user starting the action from a menu item. One thought I had is that I could create a page that is PHP content and determine the redirect there, but I'm unsure of how to code that in that location. Also, I'm not sure if I've overlooked a module that could help with a conditional redirect or if all the references to form redirects might be applicable to what I'm trying to do. A: You could do something along the lines of this: function MYMODULE_menu() { $items['MYMODULE/whattodo'] = array( 'title' => 'Where should I go now, George', 'page callback' => 'MYMODULE_whattodo', 'access callback' => TRUE, 'type' => MENU_CALLBACK, ); return $items; } function MYMODULE_whattodo() { $foo= // code to get the current time of day in whatever form you want that matches the switch cases below switch($foo) { case // morning: drupal_goto(// page for the morning); break; case // afternoon: drupal_goto(// page for the afternoon); break; case // evening: drupal_goto(// page for the evening); break; } } which means when a user went to www.mysite.com/MYMODULE/whattodo, they would be redirected to whatever page you set up via drupal_goto()
{ "pile_set_name": "StackExchange" }
Q: Gulp: write file only if none exists How can I make gulp write a file only if there is no existing file? The below solution works for gulp 4.0, which is in alpha. // for when gulp 4.0 releases .pipe(gulp.dest(conf.plugScss.dist, {overwrite: false})) A: There is no exact equivalent in gulp 3.x, but you can use gulp-changed to achieve the same thing. gulp-changed is usually used to write only those files that have changed since the last time they were written to the destination folder. However, you can provide a custom hasChanged function. In your case you can write a function that does nothing but check if the file already exists using fs.stat(): var gulp = require('gulp'); var changed = require('gulp-changed'); var fs = require('fs'); function compareExistence(stream, cb, sourceFile, targetPath) { fs.stat(targetPath, function(err, stats) { if (err) { stream.push(sourceFile); } cb(); }); } gulp.task('default', function() { return gulp.src(/*...*/) .pipe(changed(conf.plugScss.dist, {hasChanged: compareExistence})) .pipe(gulp.dest(conf.plugScss.dist)); });
{ "pile_set_name": "StackExchange" }
Q: In Gradle, how do I declare common dependencies in a single place? In Maven there is a very useful feature when you can define a dependency in the <dependencyManagement> section of the parent POM, and reference that dependency from child modules without specifying the version or scope or whatever. What are the alternatives in Gradle? A: You can declare common dependencies in a parent script: ext.libraries = [ // Groovy map literal spring_core: "org.springframework:spring-core:3.1", junit: "junit:junit:4.10" ] From a child script, you can then use the dependency declarations like so: dependencies { compile libraries.spring_core testCompile libraries.junit } To share dependency declarations with advanced configuration options, you can use DependencyHandler.create: libraries = [ spring_core: dependencies.create("org.springframework:spring-core:3.1") { exclude module: "commons-logging" force = true } ] Multiple dependencies can be shared under the same name: libraries = [ spring: [ // Groovy list literal "org.springframework:spring-core:3.1", "org.springframework:spring-jdbc:3.1" ] ] dependencies { compile libraries.spring } will then add both dependencies at once. The one piece of information that you cannot share in this fashion is what configuration (scope in Maven terms) a dependency should be assigned to. However, from my experience it is better to be explicit about this anyway. A: It's a late reply, yet you might also want to have a look at: http://plugins.gradle.org/plugin/io.spring.dependency-management It provides possibility to import a maven 'bom', and reuse the definitions defined in the 'bom'. It's certainly a nice help when gradually migrating from maven to gradle ! Enjoying it right now. A: As of Gradle 4.6, dependency constraints are suggested in the documentation as the way to achieve this. From https://docs.gradle.org/current/userguide/declaring_dependencies.html#declaring_a_dependency_without_version: A recommended practice for larger projects is to declare dependencies without versions and use dependency constraints for version declaration. The advantage is that dependency constraints allow you to manage versions of all dependencies, including transitive ones, in one place. In your parent build.gradle file: allprojects { plugins.withType(JavaPlugin).whenPluginAdded { dependencies { constraints { implementation("com.google.guava:guava:27.0.1-jre") } } } } Wrapping the dependencies block with a check for the Java plugin (... whenPluginAdded {) isn't strictly necessary, but it will then handle adding a non-Java project to the same build. Then in a child gradle project you can simply omit the verison: apply plugin: "java" dependencies { implementation("com.google.guava:guava") } Child builds can still choose to specify a higher version. If a lower version is specified it is automatically upgraded to the version in the constraint.
{ "pile_set_name": "StackExchange" }
Q: CSS sub-menu overlay issue I'm trying to set up this 2 column menu that is side by side with each of the columns submenus opening to a different side. The left menu is working correctly, and the submenu buttons are clickable, but the right menu submenu items are located under the left menu elements. I tried using z-index, which worked, but made the left-menu items not clickable (hover did not work as well). Please see http://kink.cz/najforever/index_copy.html for reference. Could you please suggest how I can have the right menu submenu items shown correctly? Your help is much appreciated. HTML: <div class='people'> <div id='cssmenu'> <ul> <li class='active has-sub' id='fake'><a href='#'><span>A fake artist</span></a> <ul> <li class='has-sub'><a href='#'><span>Blog</span></a></li> <li class='has-sub'><a href='#'><span>Facebook</span></a></li> <li class='has-sub'><a href='#'><span>Instagram</span></a></li> </ul> </li> <li class='active has-sub' id='danny'><a href='#'><span>Danny Rose Fashion</span></a> <ul> <li class='has-sub'><a href='#'><span>Blog</span></a></li> <li class='has-sub'><a href='#'><span>Facebook</span></a></li> <li class='has-sub'><a href='#'><span>Instagram</span></a></li> </ul> </li> <li class='active has-sub' id='heels'><a href='#'><span>Heels in Prague</span></a> <ul> <li class='has-sub'><a href='#'><span>Blog</span></a></li> <li class='has-sub'><a href='#'><span>Facebook</span></a></li> <li class='has-sub'><a href='#'><span>Instagram</span></a></li> </ul> </li> <li class='active has-sub' id='hodanajan'><a href='#'><span>Hodanajan</span></a> <ul> <li class='has-sub'><a href='#'><span>Blog</span></a></li> <li class='has-sub'><a href='#'><span>Facebook</span></a></li> <li class='has-sub'><a href='#'><span>Instagram</span></a></li> </ul> </li> <li class='active has-sub' id='jakub'><a href='#'><span>Jakub Mařík</span></a> <ul> <li class='has-sub'><a href='#'><span>Web</span></a></li> <li class='has-sub'><a href='#'><span>Facebook</span></a></li> <li class='has-sub'><a href='#'><span>Instagram</span></a></li> </ul> </li> </ul> </div> <div id='cssmenu2'> <ul> <li class='active has-sub' id='kaa'><a href='#'><span>Kaa Glo</span></a> <ul> <li class='has-sub'><a href='#'><span>Blog</span></a></li> <li class='has-sub'><a href='#'><span>Facebook</span></a></li> <li class='has-sub'><a href='#'><span>Instagram</span></a></li> </ul> </li> <li class='active has-sub' id='pau'><a href='#'><span>Paulinemma</span></a> <ul> <li class='has-sub'><a href='#'><span>Blog</span></a></li> <li class='has-sub'><a href='#'><span>Facebook</span></a></li> <li class='has-sub'><a href='#'><span>Instagram</span></a></li> </ul> </li> <li class='active has-sub' id='red'><a href='#'><span>Red Poppy Stories</span></a> <ul> <li class='has-sub'><a href='#'><span>Blog</span></a></li> <li class='has-sub'><a href='#'><span>Facebook</span></a></li> <li class='has-sub'><a href='#'><span>Instagram</span></a></li> </ul> </li> <li class='active has-sub' id='kisic'><a href='#'>Sandra Kisic</a> <ul> <li class='has-sub'><a href='#'><span>Blog</span></a></li> <li class='has-sub'><a href='#'><span>Facebook</span></a></li> <li class='has-sub'><a href='#'><span>Instagram</span></a></li> </ul> </li> <li class='active has-sub' id='aesthet'><a href='#'><span>The Aesthet</span></a> <ul> <li class='has-sub'><a href='#'><span>Blog</span></a></li> <li class='has-sub'><a href='#'><span>Facebook</span></a></li> <li class='has-sub'><a href='#'><span>Instagram</span></a></li> </ul> </li> </ul> </div> CSS: /* =========================== 
====== Name Menu Right ====== =========================== */ #cssmenu { padding: 0; margin: 0; border: 0; line-height: 1; text-align:left } #cssmenu ul, #cssmenu ul li, #cssmenu ul ul { list-style: none; margin: 0; padding: 0; } #cssmenu ul { position: relative; z-index: 597; float: left; } #cssmenu ul li { float: left; min-height: 1px; line-height: 1em; vertical-align: middle; position: relative; } #cssmenu ul li.hover, #cssmenu ul li:hover { position: relative; z-index: 599; cursor: default; } #cssmenu ul ul { visibility: hidden; position: absolute; top: 100%; left: 200px; z-index: 598; } #cssmenu ul ul li { float: none; right:250px; } #cssmenu ul ul ul { top: -2px; right: 0; } #cssmenu ul li:hover > ul { visibility: visible; } #cssmenu ul ul { top: 1px; left: 99%; } #cssmenu ul li { float: none; } #cssmenu ul ul { margin-top: 1px; } #cssmenu ul ul li { font-weight: normal; } /* Custom CSS Styles Menu Right*/ #cssmenu { width: 130px; background: white; font-family: 'Oxygen Mono', Tahoma, Arial, sans-serif; zoom: 1; font-size: 12px; float:right; margin-left:5px; } #cssmenu:before { content: ''; display: block; } #cssmenu:after { content: ''; display: table; clear: both; } #cssmenu a { display: block; padding: 6px 0px; color: black; text-decoration: none; padding-right:5px; } #cssmenu > ul { width: 130px; } #cssmenu ul ul { width: 130px; } #cssmenu > ul > li > a { color: black; } #cssmenu > ul > li > a:hover { color: black; } #cssmenu > ul > li.active a { background: white; } #cssmenu > ul > li a:hover, #cssmenu > ul > li:hover a { background: white; } #cssmenu li { position: relative; } #cssmenu ul ul li.first { -webkit-border-radius: 0 3px 0 0; -moz-border-radius: 0 3px 0 0; border-radius: 0 3px 0 0; } #cssmenu ul ul li.last { -webkit-border-radius: 0 0 3px 0; -moz-border-radius: 0 0 3px 0; border-radius: 0 0 3px 0; border-bottom: 0; } #cssmenu ul ul { -webkit-border-radius: 0 3px 3px 0; -moz-border-radius: 0 3px 3px 0; border-radius: 0 3px 3px 0; } #cssmenu ul ul { margin-left:2px; text-align:right; } #cssmenu ul ul a { font-size: 12px; color: black; } #cssmenu ul ul a:hover { color: black; } #cssmenu ul ul li { } #cssmenu ul ul li:hover > a { background: black; color: #ffffff; } #cssmenu.align-right > ul > li > a { border-left: 4px solid black; border-right: none; } #cssmenu.align-right { float: right; } #cssmenu.align-right li { text-align: right; } #cssmenu.align-right ul li.has-sub > a:after { content: none; } #cssmenu.align-right ul ul { visibility: hidden; position: absolute; top: 0; left: -100%; z-index: 598; width: 100%; } #cssmenu.align-right ul ul li.first { -webkit-border-radius: 3px 0 0 0; -moz-border-radius: 3px 0 0 0; border-radius: 3px 0 0 0; } #cssmenu.align-right ul ul li.last { -webkit-border-radius: 0 0 0 3px; -moz-border-radius: 0 0 0 3px; border-radius: 0 0 0 3px; } #cssmenu.align-right ul ul { -webkit-border-radius: 3px 0 0 3px; -moz-border-radius: 3px 0 0 3px; border-radius: 3px 0 0 3px; } /* =========================== ====== Name Menu Left ====== =========================== */ #cssmenu2 { padding: 0; margin: 0; border: 0; line-height: 1; text-align:right; } #cssmenu2 ul, #cssmenu2 ul li, #cssmenu2 ul ul { list-style: none; margin: 0; padding: 0; } #cssmenu2 ul { position: relative; z-index: 597; float: left; } #cssmenu2 ul li { float: left; min-height: 1px; line-height: 1em; vertical-align: middle; position: relative; } #cssmenu2 ul li.hover, #cssmenu2 ul li:hover { position: relative; z-index: 599; cursor: default; } #cssmenu2 ul ul { visibility: hidden; 
position: absolute; top: 100%; left: 0px; z-index: 598; width: 100%; text-align:left; } #cssmenu2 ul ul li { float: none; margin-left:4px; } #cssmenu2 ul ul ul { top: -2px; right: 0; } #cssmenu2 ul li:hover > ul { visibility: visible; } #cssmenu2 ul ul { top: 1px; left: 99%; } #cssmenu2 ul li { float: none; } #cssmenu2 ul ul { margin-top: 1px; } #cssmenu2 ul ul li { font-weight: normal; } /* Custom CSS Styles Menu Left*/ #cssmenu2 { width: 130px; background: white; font-family: 'Oxygen Mono', Tahoma, Arial, sans-serif; zoom: 1; font-size: 12px; float:right; margin-left:5px; } #cssmenu2:before { content: ''; display: block; } #cssmenu2:after { content: ''; display: table; clear: both; } #cssmenu2 a { display: block; padding: 6px 0px; color: black; text-decoration: none; padding-right:5px; } #cssmenu2 > ul { width: 130px; } #cssmenu2 ul ul { width: 130px; } #cssmenu2 > ul > li > a { border-right: 4px solid black; color: black; } #cssmenu2 > ul > li > a:hover { color: black; } #cssmenu2 > ul > li.active a { background: white; } #cssmenu2 > ul > li a:hover, #cssmenu2 > ul > li:hover a { background: white; } #cssmenu2 li { position: relative; } #cssmenu2 ul ul li.first { -webkit-border-radius: 0 3px 0 0; -moz-border-radius: 0 3px 0 0; border-radius: 0 3px 0 0; } #cssmenu2 ul ul li.last { -webkit-border-radius: 0 0 3px 0; -moz-border-radius: 0 0 3px 0; border-radius: 0 0 3px 0; border-bottom: 0; } #cssmenu2 ul ul { border-right: 2px solid black; background:white; margin-top:-2px; } #cssmenu2 ul ul { margin-left:2px; } #cssmenu2 ul ul a { font-size: 12px; color: black; } #cssmenu2 ul ul a:hover { color: black; } #cssmenu2 ul ul li { } #cssmenu2 ul ul li:hover > a { background: black; color: #ffffff; } #cssmenu2.align-right > ul > li > a { border-left: 4px solid black; border-right: none; } #cssmenu2.align-right { float: right; } #cssmenu2.align-right li { text-align: right; } #cssmenu2.align-right ul li.has-sub > a:after { content: none; } #cssmenu2.align-right ul ul { visibility: hidden; position: absolute; top: 0; left: -100%; z-index: 598; width: 100%; } #cssmenu2.align-right ul ul li.first { -webkit-border-radius: 3px 0 0 0; -moz-border-radius: 3px 0 0 0; border-radius: 3px 0 0 0; } #cssmenu2.align-right ul ul li.last { -webkit-border-radius: 0 0 0 3px; -moz-border-radius: 0 0 0 3px; border-radius: 0 0 0 3px; } #cssmenu2.align-right ul ul { -webkit-border-radius: 3px 0 0 3px; -moz-border-radius: 3px 0 0 3px; border-radius: 3px 0 0 3px; } /* =========================== ====== Contact Form ====== =========================== */ input, textarea { padding: 10px; border: 1px solid #E5E5E5; width: 400px; color: #999999; box-shadow: rgba(0, 0, 0, 0.1) 0px 0px 8px; -moz-box-shadow: rgba(0, 0, 0, 0.1) 0px 0px 8px; -webkit-box-shadow: rgba(0, 0, 0, 0.1) 0px 0px 8px; } textarea { width: 400px; height: 150px; max-width: 400px; line-height: 18px; } input:hover, textarea:hover, input:focus, textarea:focus { border-color: 1px solid #C9C9C9; box-shadow: rgba(0, 0, 0, 0.2) 0px 0px 8px; -moz-box-shadow: rgba(0, 0, 0, 0.2) 0px 0px 8px; -webkit-box-shadow: rgba(0, 0, 0, 0.2) 0px 0px 8px; } .form label { color: #999999; } /* Submit Button */ .submit input { width: 100px; height: 40px; background-color: #474E69; color: #FFF; border-radius: 3px; -moz-border-radius: 3px; -webkit-border-radius: 3px; } A: Hopefully It will work, I have checked @MarianRehak.. just remove z-index from #cssmenu2 ul{} and add this class #cssmenu2 ul li:hover ul { z-index: 999999 !important; } Rest you can play with the position.. 
The basic issue should be resolved now.
{ "pile_set_name": "StackExchange" }
Q: Run a simple shell command What is the best WinAPI function to use when you only want to run a simple shell command like hg > test.txt? A: To simply run a file, ShellExecute() and CreateProcess() are the best options. Since you want to redirect output to a file, i.e. run a shell command, it gets more complicated: output redirection is a feature of the command prompt, so the command you want to run needs to be passed to cmd.exe (on NT/XP and later), with /c and your command as the parameters (either ShellExecute or CreateProcess will do). cmd /c "ipconfig >c:\debug\blah.txt" The best way, however, is to use CreateProcess() and create your own pipes to talk to the stdin and stdout of the program (this is all cmd does internally).
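A minimal sketch of the cmd.exe approach (my own illustration, not from the answer; error handling is omitted and hg is assumed to be on the PATH):

#include <windows.h>

int main(void)
{
    STARTUPINFOA si = { sizeof(si) };   /* the cb member must be set */
    PROCESS_INFORMATION pi;
    /* cmd.exe performs the > redirection; lpCommandLine must be a writable buffer */
    char cmd[] = "cmd.exe /c \"hg > test.txt\"";

    if (CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
    {
        WaitForSingleObject(pi.hProcess, INFINITE);   /* wait for the command to finish */
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    return 0;
}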
{ "pile_set_name": "StackExchange" }
Q: Is there downtime involved when using Azure vertical scaling? I have a two websites on a VM with WHM/cPanel and MySql. I am looking to move this into Azure and use vertical scaling. Visits to the website are usually stable but three or four times a year there is a big increase in traffic which historically has caused big problems for the existing host. I am looking to move it into the A5-A7 range of servers for the vertical scaling. I cannot find anything anywhere about whether there is any downtime involved when Azure scales up my VM from A5 to A6 or whatever. Does anyone have any experience with this and can give me a definitive answer as to whether there is downtime when using vertical scaling, and if there is any downtime involved then the kind of downtime I would expect Thank you for your time. A: Yes, it will incur downtime. Post on it Azure will restart your VM. Quote from the page (I highlighted the important bit): When considering the ability to resize virtual machines there are three key concepts that will impact how simple it is to change the size of your VM. The region in which your VM is deployed. Different VM sizes require different physical hardware. In some instances, an Azure region may not contain the hardware required to support the desired VM size. All Azure regions support the VM sizes Standard_A0 – A7 and Basic_A0 – A4. You can then find which other VM sizes are supported in each region under the Services tab of the Azure Regions web page. The physical hardware currently hosting your VM. If the physical hardware currently running your virtual machine also supports your desired new size, then it is very easy to change the VM size through a simple size change operation which results in a VM reboot. The deployment model used for the VM. The two deployment models are Classic and Resource Manager. The Resource Manager model is the newer model, and it supports some ease of use functionality not available in the classic deployment model.
{ "pile_set_name": "StackExchange" }
Q: No common factors implies functional independence Let $f$ and $g$ be two polynomials in two real variables $x$ and $y$, both of which vanish at $(0,0)$. Suppose that they have no common factors. Is it true that their Jacobian $$J(f,g)=\begin{bmatrix}\partial_x f & \partial_y f\\\partial_x g & \partial_y g\end{bmatrix}$$ is invertible at $(0,0)$? Typical example would be $f(x,y)=x$ and $g(x,y)=y$. A one-dimensional analogue seems to be that if two polynomials have no common factors, they are linearly independent as functions. A: No, that's not true. You can take $f=y-x^2$ and $g=y$, then $$ J(f,g)=\left[\begin{matrix} -2x & 1 \\ 0 & 1 \end{matrix}\right] $$ which is not invertible at $(0,0)$. The reason is that the parabola cut out by $f$ and the line cut out by $g$ intersect doubly at the origin, so the resulting point is not a smooth point.
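Spelling out the last step: the determinant of that Jacobian is $$ \det J(f,g)=\begin{vmatrix} -2x & 1 \\ 0 & 1 \end{vmatrix}=-2x, $$ which vanishes at $x=0$, so the matrix is indeed not invertible at $(0,0)$ even though $f$ and $g$ share no common factor.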
{ "pile_set_name": "StackExchange" }
Q: Chaining .apply() and .bind() surprising behaviour I need your help to better understand the following behavior. In the snippet below, we can see that fooBar outputs foo as this and then returns bar 1, 2, 3 as expected, meaning bar is called with foo as the context. const arguments = [1,2,3]; function bar(...args) { console.log('bar this ->', this); return 'bar ' + args; } function foo(...args) { console.log('foo this ->', this); return 'foo ' + args; } const fooBar = bar.bind(foo, null)(arguments); console.log(fooBar); // <--- no surprises! Let's now have const bar = Math.max.apply; instead. const arguments = [1,2,3]; const bar = Math.max.apply; function foo(...args) { console.log('foo this ->', this); return 'foo ' + args; } const fooBar = bar.bind(foo, null)(arguments); console.log(fooBar); // <--- surprise! In this case, foo is being called as opposed to bar. Why? What exactly is bind() doing under the hood in this case? I'd have assumed that, again, bar would be called with foo as the context. The context, in this case, is window. I always thought someFunction.apply(someContext, args) behaves as someFunction.bind(someContext, null)(args), but in the second example someFunction.bind(someContext, null)(args) behaves as someContext(args). A: This is because of the specific purpose of apply: to call a given function. Remember that bar is the generic Function.prototype.apply function. bind essentially creates a copy of the original function, with the context (this value) and (optionally) arguments preset. A polyfill for bind would use apply internally. So fooBar = bar.bind(foo, null) is the same as function fooBar(...args) { return Function.prototype.apply.apply(foo, [null, args]); } The double use of apply is obviously confusing! Let's step through what bar.bind(foo, null)(arguments) would do: Function.prototype.apply.bind(foo, null)(arguments) which can be reduced to Function.prototype.apply.apply(foo, [null, arguments]) which in this specific instance is the same as foo.apply(null, arguments), i.e. foo(...arguments) with this falling back to the global object. The reason this is so confusing is that you are doing a complex invocation of the apply function, which is designed for complex invocations of functions!
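A quick runnable check of that reduction (my own sketch; the logged this value is the global object because null is what apply receives as the this argument):

function foo(...args) { console.log('foo this ->', this); return 'foo ' + args; }
const bar = Function.prototype.apply;   // the same function object as Math.max.apply

console.log(bar.bind(foo, null)([1, 2, 3]));                           // "foo 1,2,3"
console.log(Function.prototype.apply.apply(foo, [null, [1, 2, 3]]));   // "foo 1,2,3"
console.log(foo.apply(null, [1, 2, 3]));                               // "foo 1,2,3"

All three lines invoke foo with the same arguments, and bar is never called.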
{ "pile_set_name": "StackExchange" }
Q: Cannot infer instance using evaluator I've started to work through http://www.cs.nott.ac.uk/~pszgmh/monads for a intro on functional programming course. What better way to try and understand stuff than to actually try and test the code. Alas, on the second page I encounter the following: data Expr = Val Int | Div Expr Expr eval :: Expr -> Int eval (Val n) = n eval (Div x y) = eval x `div` eval y Which produces an error when I try to run it. I'm not quite sure why this happens. When I try eval (Val 4) `div` eval (Val 2) in the repl-loop, it works just fine, but eval 4 `div` eval 2 Ends in a type inference error. When I update my definition to the following: data Expr = Val Int | Div Expr Expr eval :: Expr -> Int eval (Val n) = n eval (Div x y) = eval (Val x) `div` eval (Val y) I get a type error in definition. What is wrong with the first definition? The course uses Hugs by the way. A: What eval expects is an argument of a type that eval is defined for. Looking at the signature, it requires an argument of type Expr, either a Val or a Div. eval 4 means that you're passing an Int to the function. For that to work, eval would have to be defined as: eval :: Int -> Int By writing (Val 4), you are invoking one of the data constructors of Expr type, creating a new value of type Expr, which you can pass to eval and make the compiler happy.
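A short usage sketch (assuming the original, first set of definitions), showing the argument being built from the constructors before eval ever sees it:

e1 :: Expr
e1 = Div (Val 4) (Val 2)

main :: IO ()
main = print (eval e1)   -- prints 2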
{ "pile_set_name": "StackExchange" }
Q: Default filename of image dragged from the browser window In Safari, when I drag an image from the browser window to the desktop the image take its filename from the last part of the URL. For example: http://www.mysite.com/images/05 the image name is 05.jpeg Is this a behaviour consistent across all (recent IE8+) browsers? Can I decide an arbitrary filename the image will get when dragged out of the browser? I tried (in Safari) to set the name and alt tag of the image but this doesn't have any effect. Maybe can I decide the filename setting it in the header of the server response when the image is served? A: One method is to specify the desired filename in the header of the response when the file is served. I'm on php so... header('Content-disposition: inline; filename=the-image.jpg'); When the image is dragged from the browser window to the desktop the file name is the-image.jpg Unfortunately this is not consistent across all browsers, in particular Firefox doesn't follow the rule and sticks to the last part of the URL for giving the name. The solution that works across all browsers is to avoid specifying the name in the header of the response and set the name as the last part of the URL. As I can manage the routes for my website the solution I adopted is to let the route to images end with a string that is ignored by the server and has the sole purpose of defining a filename for the image in case it's dragged out of the browser. For example: http://www.my-site.com/images/05/my%20custom%20filename.jpg What tells the server what image the client wants is the parameter following images, so 05 in the example. It's important to note that the filename must be URI-component encoded, escaping spaces, slashes, percents, and so on... The filename, to be OS friendly, should then be scrubbed from slashes, back-slashes and other characters that may eventually create mess.
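A small PHP sketch of that last-segment trick (my own illustration; the route and base URL are made up): only /images/05 is meaningful to the server, and the final segment exists purely to give the browser a friendly default filename.

<?php
$label = rawurlencode('my custom filename.jpg');   // "my%20custom%20filename.jpg"
$url   = 'http://www.my-site.com/images/05/' . $label;
echo $url;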
{ "pile_set_name": "StackExchange" }
Q: Who Am I? (A Dead Guy) You seek him who died at sea, Try and find who he might be. A heart that did not wilt in flame, Writ on Roman stone is his name. Nothing of him that doth fade, But doth suffer a sea-change, Into something rich and strange. Now with his Adonis he lies, Who thought not himself worthy of fame, And writ on stone not his name, That lies beneath lit and unlit skies. Who am I? A: I am... Percy Shelley, the early Romantic poet. You seek him who died at sea, Try and find who he might be. Shelley drowned when his boat sank. A heart that did not wilt in flame, Legend has it that Trelawny plucked the heart from Shelley’s funeral pyre to be buried with his son. Writ on Roman stone is his name. Shelley is buried in Rome Nothing of him that doth fade, But doth suffer a sea-change, Into something rich and strange. These lines from The Tempest are written on his grave. Now with his Adonis he lies, This could have a few meanings: Shelley died with a copy of Keats’s poetry in his pocket, and his poem Adonaïs is written in Keats’s honour. His heart is said to have been kept inside his widow’s manuscript of Adonaïs. His ashes were later found in an envelope inside his daughter-in-law’s copy of Adonaïs. He was also buried in the same cemetery as Keats. Who thought not himself worthy of fame, And writ on stone not his name, Keats himself requested to be buried with no name or date on his tombstone. That lies beneath lit and unlit skies. I don’t know what this refers to.
{ "pile_set_name": "StackExchange" }
Q: Storing SHA1 hex value in PostgreSQL Seemingly simple question, corresponds to another question that was asked with regards to MySQL: How does one store the hex value that results from a SHA1 hash in a PostgreSQL database? Note: I realize I could use a VARCHAR(40) field, but this isn't efficient, as the data is in hex. Also, I am using PHP to interact with the database, so I can use PHP functions if necessary, but if this is the case, what do I store the result as in the database? A: I would store as bytea, hex encoded. Converting the human-readable hex data to bytea is simply a matter of: ('\x' || sha1_hex_value)::bytea The only real disadvantage here is that depending on your app framework you may get a binary representation out. If not you will get an escaped version and depending on the escape settings, may want to convert to binary yourself (if it is hex though you can just strip off the \x at the front of the value and use as hex).
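A minimal SQL sketch (the table and digest below are made-up examples): decode() and encode() with the 'hex' format convert between the 40-character text form and the 20-byte bytea form, and PHP's sha1() already produces the hex string that decode() expects.

CREATE TABLE file_hash (sha1 bytea PRIMARY KEY);

-- store a sample 40-character hex digest as 20 raw bytes
INSERT INTO file_hash (sha1)
VALUES (decode('0123456789abcdef0123456789abcdef01234567', 'hex'));

-- read it back in human-readable hex
SELECT encode(sha1, 'hex') AS sha1_hex FROM file_hash;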
{ "pile_set_name": "StackExchange" }
Q: Search for multiple strings and print out match I have a command that prints out this long line, and i'm looking for a way to search for 3 different strings and when one is found it should be printed. The text will always only have one of the 3 options in it. b"bid: 5.0\r\ncompute_on: cpu\r\nconcent_enabled: true\r\ncost: null\r\nduration: 4.870952844619751\r\nestimated_cost: '1666666666666666667'\r\nestimated_fee: '56000000000000'\r\nfee: null\r\nid: a197d3fa-dfb4-11e8-9f77-a6389e8e7978\r\nlast_updated: 1541282756.6588786\r\nname: '4444'\r\noptions:\r\n compositing: false\r\n format: PNG\r\n frame_count: 1\r\n frames: '1'\r\n output_path: C:/Users/me/Google Drive/GolemProject/var/media/e/output/4444\r\n resolution:\r\n - 222\r\n - 222\r\npreview: C:\Users\me\AppData\Local\golem\golem\default\rinkeby\res\a197d3fa-dfb4-11e8-9f77-a6389e8e7978\tmp\current_preview.PNG\r\nprogress: 0.00 %\r\nresources:\r\n- C:/Users/me/Google Drive/GolemProject/var/media/e/fa3ee533-2020-45e7-9f5c-5501baa49285/bmw27/bmw27_cpu.blend\r\n- C:\Users\me\Google Drive\GolemProject\var\media\e\fa3ee533-2020-45e7-9f5c-5501baa49285\bmw27\bmw27_cpu.blend\r\nstatus: Waiting\r\nsubtask_timeout: 0:20:00\r\nsubtasks: 1\r\ntime_remaining: ???\r\ntime_started: 1541282753.4829328\r\ntimeout: 0:40:00\r\ntype: Blender\r\n\r\n" My current code looks like this. status = subprocess.check_output(["golemcli", "tasks", "show", line], shell=True) findstatus = ['Waiting', 'Finished', 'Timeout'] printstatus = str(status) for line in printstatus: if any(word in line for word in findstatus): print(line) But it doesnt seem like it finds anything because nothing ever gets printed. A: You are iterating over characters - not lines. status = b"bid: 5.0\r\ncompute_on: cpu\r\nconcent_enabled: true\r\ncost: null\r\nduration: 4.870952844619751\r\nestimated_cost: '1666666666666666667'\r\nestimated_fee: '56000000000000'\r\nfee: null\r\nid: a197d3fa-dfb4-11e8-9f77-a6389e8e7978\r\nlast_updated: 1541282756.6588786\r\nname: '4444'\r\noptions:\r\n compositing: false\r\n format: PNG\r\n frame_count: 1\r\n frames: '1'\r\n output_path: C:/Users/me/Google Drive/GolemProject/var/media/e/output/4444\r\n resolution:\r\n - 222\r\n - 222\r\npreview: C:\Users\me\AppData\Local\golem\golem\default\rinkeby\res\a197d3fa-dfb4-11e8-9f77-a6389e8e7978\tmp\current_preview.PNG\r\nprogress: 0.00 %\r\nresources:\r\n- C:/Users/me/Google Drive/GolemProject/var/media/e/fa3ee533-2020-45e7-9f5c-5501baa49285/bmw27/bmw27_cpu.blend\r\n- C:\Users\me\Google Drive\GolemProject\var\media\e\fa3ee533-2020-45e7-9f5c-5501baa49285\bmw27\bmw27_cpu.blend\r\nstatus: Waiting\r\nsubtask_timeout: 0:20:00\r\nsubtasks: 1\r\ntime_remaining: ???\r\ntime_started: 1541282753.4829328\r\ntimeout: 0:40:00\r\ntype: Blender\r\n\r\n" findstatus = ['Waiting', 'Finished', 'Timeout'] printstatus = str(status) # you need to split it here, by literal \r\n - not the special characters # for carriage return, linefeed \r\n: for line in printstatus.split(r"\r\n"): # split here by _literal_ \\r\\n if any(word in line for word in findstatus): print(line) Alternate way using sets: findstatus = set([ 'Waiting', 'Finished', 'Timeout'] ) printstatus = str(status) # you need to split it here, by literal \r\n - not the special characters # for carriage return, linefeed \r\n: for line in printstatus.split(r"\r\n"): # split here by _literal_ \\r\\n status = set( line.split() ) & findstatus if status: print(*status) Output: status: Waiting
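An alternative sketch (my own, not part of the answer): decode the bytes before splitting, so the loop works with real line endings instead of the escaped str() representation.

import subprocess

# "line" is the same variable used in the question's check_output call
status = subprocess.check_output(["golemcli", "tasks", "show", line])
findstatus = {"Waiting", "Finished", "Timeout"}

for out_line in status.decode("utf-8", errors="replace").splitlines():
    if any(word in out_line for word in findstatus):
        print(out_line)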
{ "pile_set_name": "StackExchange" }
Q: COUNTA and INDIRECT together Any help would be appreciated. I have seen the indirect formula used a lot but I'm not sure if I can string together all that I am trying to do here. I have created within the X column of SS Sales a formula that calculates the data range based on install dates. So 1/1/13 to 4/1/13 may equal (DT14:BL41). The X column gives me this answer depending on dates that change. I need to use the range determined by cell X2 in SS Sales(DT14:BL41) in a COUNTA formula to count what is actually open on the calendar, which is on a separate sheet within the same workbook. This is what I am trying but it doesn't work: =COUNTA('install calendar copy'!(INDIRECT('SS Sales'!X2)) A: You need 'SS Sales'!X2 to contain the text string 'install calendar copy'!DT14:BL41 then =COUNTA(INDIRECT('SS Sales'!X2)) should give you what you want.
{ "pile_set_name": "StackExchange" }
Q: Jenkins + SonarQube 4.0 ClassNotFoundException occurs durng XHTML validation check using XML profile I am getting a ClassNotFoundException while attempting to run a SonarQube 4.0 analysis from Jenkins on a Maven project using Sonar's XML language profile. Within the SonarQube analysis, the exception is occurring while attempting to perform the XML Schema Validation check. What might be wrong? This is the exception from the Jenkins build job: 0.0.0.0 ERROR - Could not analyze the file D:\Jenkins_home\.jenkins\jobs\XXX with Sonar Runner\workspace\XXX\WebContent\Login.xhtml org.sonar.api.utils.SonarException: java.lang.ClassNotFoundException: org.apache.xerces.dom.DOMImplementationSourceImpl at org.sonar.plugins.xml.schemas.SchemaResolver.createLSInput(SchemaResolver.java:122) ~[na:na] at org.sonar.plugins.xml.schemas.SchemaResolver.resolveResource(SchemaResolver.java:269) ~[na:na] at com.sun.org.apache.xerces.internal.util.DOMEntityResolverWrapper.resolveEntity(DOMEntityResolverWrapper.java:106) ~[na:1.6.0_24] at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.resolveEntity(XMLEntityManager.java:1100) ~[na:1.6.0_24] at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaLoader.resolveDocument(XMLSchemaLoader.java:595) ~[na:1.6.0_24] at com.sun.org.apache.xerces.internal.impl.xs.traversers.XSDHandler.resolveSchema(XSDHandler.java:1671) ~[na:1.6.0_24] at com.sun.org.apache.xerces.internal.impl.xs.traversers.XSDHandler.constructTrees(XSDHandler.java:909) ~[na:1.6.0_24] at com.sun.org.apache.xerces.internal.impl.xs.traversers.XSDHandler.parseSchema(XSDHandler.java:569) ~[na:1.6.0_24] at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaLoader.loadSchema(XMLSchemaLoader.java:552) ~[na:1.6.0_24] at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaLoader.loadGrammar(XMLSchemaLoader.java:519) ~[na:1.6.0_24] at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaLoader.loadGrammar(XMLSchemaLoader.java:485) ~[na:1.6.0_24] at com.sun.org.apache.xerces.internal.jaxp.validation.XMLSchemaFactory.newSchema(XMLSchemaFactory.java:211) ~[na:1.6.0_24] at org.sonar.plugins.xml.checks.XmlSchemaCheck.createSchema(XmlSchemaCheck.java:147) ~[na:na] at org.sonar.plugins.xml.checks.XmlSchemaCheck.validate(XmlSchemaCheck.java:234) ~[na:na] at org.sonar.plugins.xml.checks.XmlSchemaCheck.validate(XmlSchemaCheck.java:227) ~[na:na] at org.sonar.plugins.xml.checks.XmlSchemaCheck.validate(XmlSchemaCheck.java:259) ~[na:na] at org.sonar.plugins.xml.XmlSensor.analyse(XmlSensor.java:69) ~[na:na] at org.sonar.batch.phases.SensorsExecutor.execute(SensorsExecutor.java:72) [sonar-batch-4.0.jar:na] at org.sonar.batch.phases.PhaseExecutor.execute(PhaseExecutor.java:114) [sonar-batch-4.0.jar:na] at org.sonar.batch.scan.ModuleScanContainer.doAfterStart(ModuleScanContainer.java:150) [sonar-batch-4.0.jar:na] at org.sonar.api.platform.ComponentContainer.startComponents(ComponentContainer.java:92) [sonar-plugin-api-4.0.jar:na] at org.sonar.api.platform.ComponentContainer.execute(ComponentContainer.java:77) [sonar-plugin-api-4.0.jar:na] at org.sonar.batch.scan.ProjectScanContainer.scan(ProjectScanContainer.java:190) [sonar-batch-4.0.jar:na] at org.sonar.batch.scan.ProjectScanContainer.scanRecursively(ProjectScanContainer.java:185) [sonar-batch-4.0.jar:na] at org.sonar.batch.scan.ProjectScanContainer.doAfterStart(ProjectScanContainer.java:178) [sonar-batch-4.0.jar:na] at org.sonar.api.platform.ComponentContainer.startComponents(ComponentContainer.java:92) [sonar-plugin-api-4.0.jar:na] at 
org.sonar.api.platform.ComponentContainer.execute(ComponentContainer.java:77) [sonar-plugin-api-4.0.jar:na] at org.sonar.batch.scan.ScanTask.scan(ScanTask.java:58) [sonar-batch-4.0.jar:na] at org.sonar.batch.scan.ScanTask.execute(ScanTask.java:45) [sonar-batch-4.0.jar:na] at org.sonar.batch.bootstrap.TaskContainer.doAfterStart(TaskContainer.java:82) [sonar-batch-4.0.jar:na] at org.sonar.api.platform.ComponentContainer.startComponents(ComponentContainer.java:92) [sonar-plugin-api-4.0.jar:na] at org.sonar.api.platform.ComponentContainer.execute(ComponentContainer.java:77) [sonar-plugin-api-4.0.jar:na] at org.sonar.batch.bootstrap.BootstrapContainer.executeTask(BootstrapContainer.java:155) [sonar-batch-4.0.jar:na] at org.sonar.batch.bootstrap.BootstrapContainer.doAfterStart(BootstrapContainer.java:143) [sonar-batch-4.0.jar:na] at org.sonar.api.platform.ComponentContainer.startComponents(ComponentContainer.java:92) [sonar-plugin-api-4.0.jar:na] at org.sonar.api.platform.ComponentContainer.execute(ComponentContainer.java:77) [sonar-plugin-api-4.0.jar:na] at org.sonar.batch.bootstrapper.Batch.startBatch(Batch.java:92) [sonar-batch-4.0.jar:na] at org.sonar.batch.bootstrapper.Batch.execute(Batch.java:74) [sonar-batch-4.0.jar:na] at org.sonar.runner.batch.IsolatedLauncher.execute(IsolatedLauncher.java:45) [sonar-runner-batch1703873637256551857.jar:na] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.6.0_24] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) ~[na:1.6.0_24] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) ~[na:1.6.0_24] at java.lang.reflect.Method.invoke(Method.java:597) ~[na:1.6.0_24] at org.sonar.runner.impl.BatchLauncher$1.delegateExecution(BatchLauncher.java:87) [sonar-runner-dist-2.3.jar:na] at org.sonar.runner.impl.BatchLauncher$1.run(BatchLauncher.java:75) [sonar-runner-dist-2.3.jar:na] at java.security.AccessController.doPrivileged(Native Method) [na:1.6.0_24] at org.sonar.runner.impl.BatchLauncher.doExecute(BatchLauncher.java:69) [sonar-runner-dist-2.3.jar:na] at org.sonar.runner.impl.BatchLauncher.execute(BatchLauncher.java:50) [sonar-runner-dist-2.3.jar:na] at org.sonar.runner.api.EmbeddedRunner.doExecute(EmbeddedRunner.java:102) [sonar-runner-dist-2.3.jar:na] at org.sonar.runner.api.Runner.execute(Runner.java:90) [sonar-runner-dist-2.3.jar:na] at org.sonar.runner.Main.executeTask(Main.java:70) [sonar-runner-dist-2.3.jar:na] at org.sonar.runner.Main.execute(Main.java:59) [sonar-runner-dist-2.3.jar:na] at org.sonar.runner.Main.main(Main.java:41) [sonar-runner-dist-2.3.jar:na] Caused by: java.lang.ClassNotFoundException: org.apache.xerces.dom.DOMImplementationSourceImpl at java.net.URLClassLoader$1.run(URLClassLoader.java:202) ~[na:1.6.0_24] at java.security.AccessController.doPrivileged(Native Method) [na:1.6.0_24] at java.net.URLClassLoader.findClass(URLClassLoader.java:190) ~[na:1.6.0_24] at java.lang.ClassLoader.loadClass(ClassLoader.java:307) ~[na:1.6.0_24] at java.lang.ClassLoader.loadClass(ClassLoader.java:248) ~[na:1.6.0_24] at org.w3c.dom.bootstrap.DOMImplementationRegistry.newInstance(DOMImplementationRegistry.java:146) ~[na:1.6.0_24] at org.sonar.plugins.xml.schemas.SchemaResolver.createLSInput(SchemaResolver.java:115) ~[na:na] ... 52 common frames omitted Configuration: Jenkins 1.509.2 job using Maven 2.2.1. Jenkins is running under Tomcat 7.0.11. Jenkins Sonar plugin version 2.1. Sonar XML plugin version 1.0.1. SonarQube 4.0 analysis using XML language profile. 
I tried initiating the Sonar analysis from the Jenkins job using both a) SonarQube runner 2.3 via Maven post-build step; and b) directly using a Sonar post-build action. Both give the same result. Note that the SonarQube analysis still completes and I can see the results in SonarQube. I also tried running the job using Maven 3.0.3. Same result. A: Ticket created, indeed I've managed to reproduce this issue locally : https://jira.codehaus.org/browse/SONARPLUGINS-3356 Thanks for your feedback.
{ "pile_set_name": "StackExchange" }
Q: Little bit confusing about free pointer inside function Some weeks ago I learned about the proper way pointers should be allocated and deallocated inside a function, so given for example this linked list: typedef struct _node { void *data; struct _node *next; } Node; typedef struct _linkedList { Node *head; Node *tail; Node *current; } LinkedList; In my opinion the right way to destroy the queue is by using a function that gets a pointer to a pointer to the node I want to delete: void destroy (Node ** node) Or, in a more real-world example, a pointer to a pointer and a pointer to the list to delete from: void destroy (LinkedList * list, Node ** node) But now I am reading the book "Understanding C pointers" and I ran into a problem, because in chapter 6 (pointers to structures) there is the following example of a delete function: void delete(LinkedList *list, Node *node) { if (node == list->head) { if (list->head->next == NULL) { list->head = list->tail = NULL; } else { list->head = list->head->next; } } else { Node *tmp = list->head; while (tmp != NULL && tmp->next != node) { tmp = tmp->next; } if (tmp != NULL) { tmp->next = node->next; } } free(node); //free on pointer copy value not pointer } So in this example the author runs free on a pointer that is passed by value, which in my opinion shouldn't work. But I looked into the errata and there are no notes about this example. In this case I understand that the list is a function argument so the correct instance of the list is used, but shouldn't it rather be: free(list->head) ; //It should work? And then also free the memory allocated for the list. Am I right? Reading this example I have the feeling that I have a problem with a good understanding of this topic. I also went to the Linux man page and saw the free prototype: void free(void *ptr); // http://linux.die.net/man/3/free So why does everybody say to pass a pointer to a pointer when you want to free memory, while the free function in the standard library takes not a pointer to a pointer but a plain pointer? How does that work correctly? A: As you state correctly, free( void *p ) frees the memory p points to, but as the address is passed by value it will remain unchanged for the caller. Thus you could run into problems like this: int *p = malloc( sizeof *p ); *p = 1; free( p ); ... if( p ) *p = 2; // Undefined Behaviour!!! although p is free()'d it's still != NULL So you will often find free( p ); p = NULL; Nevertheless, in my opinion it's ok to write a free()-like function where you pass a pointer by value, as long as the function's description states clearly that the pointer must not be used afterwards (no matter what type of pointer that may be). But of course you're free to define the function with double pointers and set everything you have free()'d to NULL inside, like in this very simple example: void myfree( void **pp ) { free( *pp ); *pp = NULL; } ... int *p = malloc( sizeof *p ); ... myfree( &p ); // now p == NULL
{ "pile_set_name": "StackExchange" }
Q: Encrypt web.config file using Data Protection API I have encrypted part of my web.config file using the Data Protection API. Now, my question is, what does a hacker need in order to decrypt the web.config file? Does he require physical access to the machine in order to decrypt it? Or can he decrypt it from a remote location? A: It depends. If the attacker gains access to your web.config alone, then he's not able to decrypt it without the key, which we're assuming he doesn't possess. If you've set useMachineProtection to true in your DpapiProtectedConfigurationProvider configurations, and the attacker gains access to your machine (remote or not) with any account, then any process running on the machine could decrypt the web.config file, including anything the attacker could run. If you've set useMachineProtection to false in your DpapiProtectedConfigurationProvider configurations, then the attacker needs access to the user account used for the process (remote or not). You need to know that DPAPI provides password-based protection. So, assuming you're using option #3, then even if the attacker gains physical access to the machine, they still need the account's password to decrypt encryption keys which will decrypt the data. Of course, the attacker can easily reset the password, but that would render the encrypted keys useless and leaves your data inaccessible. Note that until .Net 3.5 SP1, useMachineProtection is set to true (bad) by default. I have no information on later versions. Update: .Net 4.0 uses the default value true for useMachineProtection as well. A: I'm assuming that you are working in .Net or perhaps lower level like C++. From having recently consumed those APIs here is what I would recommend. There are 2 Protected Configuration Providers available, DPAPI (more appropriate for client side desktop applications) and the RSA Provider. The latter is more appropriate for encrypting web.configs as this is a public key configuration where only the ASP.NET service has the private key to decrypt the data, this can be scoped at both user and machine, similar to DPAPI. Here is an old but sound walkthrough of it in ASP.Net. Apologies if you are not using .Net, it was unclear. Still though I think the RSA provider is the correct way to go here. With regards to breaking the DPAPI encryption, yes most of the attacks would need to be orchestrated on the target machine, usually involving either an attempt at the password SAM files or password reset scripts targeted at systems admins. Related conference paper - breaking DPAPI offline : BlackHat 2010.
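For reference, encrypting a web.config section with the RSA provider is normally done with the aspnet_regiis tool rather than from code; the section name and virtual path below are placeholders, and the exact switches are worth double-checking against your .NET version:

aspnet_regiis -pe "connectionStrings" -app "/MyApp" -prov "RsaProtectedConfigurationProvider"
aspnet_regiis -pd "connectionStrings" -app "/MyApp"

The first command encrypts the named section in place; the second decrypts it again.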
{ "pile_set_name": "StackExchange" }
Q: Datatable on row-reorder data oldData and newData is undefined I am currently rewriting my datatable from CodeIgniter (datatables v1.10.12 and RowReorder v1.1.2) to Laravel 6 (datatables v1.10.20 and RowReorder v1.2.6). On the 'row-reorder' event I need to collect data about the changes, so I use this script. $('#category-table').on('row-reorder.dt', function (dragEvent, data, nodes) { var newSequences = []; $.each(data, function(key, change) { console.log(change); newSequences.push({ id: $(change.oldData).data('id'), sequence: $(change.newData).data('sequence') }); }); doThingsWithTheResult(newSequences); }); In the old situation (CodeIgniter) 'change.oldData' and 'change.newData' are filled with the old and new elements that are affected by the 'row-reorder' event, but in the new situation (Laravel) both 'change.oldData' and 'change.newData' are 'undefined'. (Screenshots omitted: old/working situation vs. new/not working situation.) What could be the reason why these crucial properties are 'undefined'? A: Problem solved! Although it worked out-of-the-box in the old situation, in the new situation I needed to add/provide the rowReorder.dataSrc setting.
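A hedged sketch of what that setting can look like (the column names here are assumptions, not taken from the question):

$('#category-table').DataTable({
    rowReorder: {
        dataSrc: 'sequence'          // the data property RowReorder should read and rewrite
    },
    columns: [
        { data: 'sequence' },
        { data: 'id' },
        { data: 'name' }
    ]
});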
{ "pile_set_name": "StackExchange" }
Q: Do I have to use the entire loan amount? I am 18 and I want to apply for a loan for a used car. I do not plan on paying more than $4000 for the car but the bank has a minimum of $7500 for car loans. If I only used $4000 could I just use the rest to pay it off? I also do not have any credit. A: I doubt they will let you even get an auto loan for almost double what the car is worth. A car loan is secured by the car. If it isn't worth what you borrowed they wouldn't be able to recover their money by repossessing the car if you didn't make your payments. A: You should shop for a car loan at another bank or credit union. There are lots of lenders willing to lend just the amount you need. According to Craigslist, you can buy a very nice 15-20 year old vehicle for the amount you have budgeted. Perhaps you can borrow against a credit card or other personal line of credit. That might even be cheaper. If so, it will be much more convenient -- you won't have any hassles getting a car lender's name added to, or removed from, the title. It would also eliminate the possibility of repossession. Have you set up a checking account and a savings account? If so, ask your bank or credit union about a car loan or credit card or personal line of credit. If not, shop for all of them at the same time. If you can get a $ 4,000 car loan at 7 % APR for 36 months, your monthly payment would be about $ 125 per month. There is a major U.S. on-line bank that (as of December 2015) offers these terms for some U.S. buyers with "rebuilding" credit (on approved credit). The bank has a risk-free on-line pre-qualification process. They don't actually make the loan until after you submit your loan application at the dealer. An Oregon credit union that posted its rate sheet in October 2015 was offering (on approved credit) a $ 4,000 unsecured loan at 17% APR for up to 48 months to members with credit ratings of "559 or below". This works out to 36 monthly payments of about $ 145/month, or 48 monthly payments of about $ 120/month. (They also offered auto loans at 15.5% APR with a substantial downpayment.)
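For the curious, the "about $125 per month" figure for a $4,000 loan at 7% APR over 36 months comes from the standard amortization formula; here is a quick illustrative sketch (fees and taxes ignored):

principal = 4000
annual_rate = 0.07
months = 36
r = annual_rate / 12
payment = principal * r / (1 - (1 + r) ** -months)
print(round(payment, 2))   # about 123.5 per month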
{ "pile_set_name": "StackExchange" }
Q: sapply function(x) where x is subsetted argument So, I want to generate a new vector from the information in two existing ones (numerical), one which sets the id for the participant, the other indicating the observation number. Each participant has been observed a different number of times. Now, the new vector should state: 0 when obs_no=1; 1 when obs_no=last observation for that id; NA for cases in between.
id obs_no new_vector
1  1      0
1  2      NA
1  3      NA
1  4      NA
1  5      1
2  1      0
2  2      1
3  1      0
3  2      NA
3  3      1
I figure I could do this separately for every id using code like this new_vector <- c(0, rep(NA, times=length(obs_no[id==1])-2), 1) Or I guess just using max() but it wouldn't make any difference. But adding each participant manually is really inconvenient since I have a lot of cases. I can't figure out how to make a generic function. I tried to define a function(x) using sapply but can't get it to work since x is positioned within subsetting brackets. Any advice would be helpful. Thanks. A: ave to the rescue: dat$newvar <- NA dat$newvar <- with(dat, ave(newvar, id, FUN=function(x) replace(x, c(length(x),1), c(1,0)) ) ) Or use a bit of duplicated() fun: dat$newvar <- NA dat$newvar[!duplicated(dat$id, fromLast=TRUE)] <- 1 dat$newvar[!duplicated(dat$id)] <- 0 Both giving:
#   id obs_no new_vector newvar
#1   1      1          0      0
#2   1      2         NA     NA
#3   1      3         NA     NA
#4   1      4         NA     NA
#5   1      5          1      1
#6   2      1          0      0
#7   2      2          1      1
#8   3      1          0      0
#9   3      2         NA     NA
#10  3      3          1      1
{ "pile_set_name": "StackExchange" }
Q: ifElse statement with state? Background I am making an app that receives messages from several devices. Upon receiving a messages an event is fired with the given message: on( "data", message => { //doSomething } ); Challenge This function receives two types of messages: A and B: message A has the id of the device message B has some info about a deivce My first approach to dealing with this was the following: const { ifElse } = require("ramda"); const evalData = ifElse( isTypeA, // Returns true if type A, false otherwise evalTypeA, // Returns device Id evalTypeB // Processes data in message and returns bytes read ); on( "data", evalData ); Problem The problem here is that messages of type B don't have the Id of the device they belong to. So to properly process the message I need the deviceId that evalTypeA returned in a previous message. My idea to tackle this was to pass the Id to evalTypeB: const evalData = messages => { let id = undefined; id = ifElse( isTypeA, // Returns true if type A, false otherwise evalTypeA, // Returns device Id evalTypeB( id ) // Processes data in message and returns bytes read )( message ); } The problem here is that this wouldn't work! evalTypeB also returns a number and then I would have no idea if what the ifElse expression is given me is a number of bytes read or an Id! Question How would you solve this without mutation and side effects? A: By definition, a pure function that is called repeated times as the callback for on('data', callback) can not keep track of state from previous calls. With that in mind, there are a couple of options to consider to help try to minimise or localise the side-effects: Close over the state, keeping the logic free of side-effects: const processMsg = ifElse(isTypeA, evalTypeA, evalTypeB) const handler = initialState => { let state = initialState return msg => { state = processMsg(state, msg) } } on('data', handler(42)) Recursively attach a new handler that will only be called once for each message at the end of each call, limiting the effects to the handler registration (this assumes something like once is supported by the event emitter): const processMsg = ifElse(isTypeA, evalTypeA, evalTypeB) const handler = state => msg => once('data', handler(processMsg(state, msg))) once('data', handler(42))
{ "pile_set_name": "StackExchange" }
Q: DistinctCount extension method Here I go again. I have been finding a fairly common pattern in business logic code. And that pattern looks like this: int sprocketCount = datastore.GetSprocketOrders(parameters).Distinct().Count(); I decided I wanted to build DistinctCount() (again from "first principles") as Distinct() will create a second enumerable off of the first before Count() is executed. With that, here are four variations of DistinctCount(): public static int DistinctCount<TSource>(this IEnumerable<TSource> source) => source?.DistinctCount((IEqualityComparer<TSource>)null) ?? throw new ArgumentNullException(nameof(source)); public static int DistinctCount<TSource>(this IEnumerable<TSource> source, IEqualityComparer<TSource> comparer) { if (source is null) { throw new ArgumentNullException(nameof(source)); } ISet<TSource> set = new HashSet<TSource>(comparer); int num = 0; using (IEnumerator<TSource> enumerator = source.GetEnumerator()) { while (enumerator.MoveNext()) { // ReSharper disable once AssignNullToNotNullAttribute if (set.Add(enumerator.Current)) { checked { ++num; } } } } return num; } public static int DistinctCount<TSource>(this IEnumerable<TSource> source, Func<TSource, bool> predicate) { if (source is null) { throw new ArgumentNullException(nameof(source)); } if (predicate is null) { throw new ArgumentNullException(nameof(predicate)); } return source.DistinctCount(predicate, null); } public static int DistinctCount<TSource>( this IEnumerable<TSource> source, Func<TSource, bool> predicate, IEqualityComparer<TSource> comparer) { if (source is null) { throw new ArgumentNullException(nameof(source)); } if (predicate is null) { throw new ArgumentNullException(nameof(predicate)); } ISet<TSource> set = new HashSet<TSource>(comparer); int num = 0; foreach (TSource source1 in source) { if (predicate(source1) && set.Add(source1)) { checked { ++num; } } } return num; } And here are a battery of unit tests: [TestMethod] [ExpectedException(typeof(ArgumentNullException))] public void TestNull() { int[] nullArray = null; // ReSharper disable once ExpressionIsAlwaysNull Assert.AreEqual(0, nullArray.DistinctCount()); } [TestMethod] [ExpectedException(typeof(ArgumentNullException))] public void TestNullPredicate() { int[] zero = Array.Empty<int>(); Func<int, bool> predicate = null; // ReSharper disable once ExpressionIsAlwaysNull Assert.AreEqual(0, zero.DistinctCount(predicate)); } [TestMethod] public void TestZero() { int[] zero = Array.Empty<int>(); Assert.AreEqual(0, zero.DistinctCount()); } [TestMethod] public void TestOne() { int[] one = { 1 }; Assert.AreEqual(1, one.DistinctCount()); } [TestMethod] public void TestOneWithDuplicate() { int[] oneWithDuplicate = { 1, 1, 1, 1, 1 }; Assert.AreEqual(1, oneWithDuplicate.DistinctCount()); } [TestMethod] public void TestTwo() { int[] two = { 1, 2 }; Assert.AreEqual(2, two.DistinctCount()); } [TestMethod] public void TestTwoWithDuplicate() { int[] twoWithDuplicate = { 2, 1, 2, 1, 2, 2, 1, 2 }; Assert.AreEqual(2, twoWithDuplicate.DistinctCount()); } [TestMethod] public void TestTwoWithDuplicateUsingPredicate() { int[] twoWithDuplicate = { 2, 1, 3, 2, 1, 2, 2, 1, 2, 3 }; Assert.AreEqual(2, twoWithDuplicate.DistinctCount(x => x > 1)); } [TestMethod] public void TestTwoUsingNullComparer() { int[] two = { 1, 2 }; IEqualityComparer<int> comparer = null; // ReSharper disable once ExpressionIsAlwaysNull Assert.AreEqual(2, two.DistinctCount(comparer)); } [TestMethod] public void TestOneWithDuplicateUsingComparer() { string[] one = { "one", "One", 
"oNe", "ONE" }; Assert.AreEqual(1, one.DistinctCount(StringComparer.InvariantCultureIgnoreCase)); } [TestMethod] public void TestTwoWithDuplicateUsingPredicateAndComparer() { string[] two = { "one", "two", "One", "Two", "oNe", "TWO", "ONE", "tWo", "three" }; Assert.AreEqual(2, two.DistinctCount(x => x != "three", StringComparer.InvariantCultureIgnoreCase)); } As always, ooking for overall review - is the code readable, maintainable, performant? Do the tests have the right amount of coverage or are there more particular cases to consider? A: As slepic in his comment, I also wonder why you use an enumerator in the first and foreach in the second place? You can eliminate null checks in the versions that call other overrides: public static int DistinctCount<TSource>(this IEnumerable<TSource> source) => source?.DistinctCount((IEqualityComparer<TSource>)null) ?? throw new ArgumentNullException(nameof(source)); can be reduced to: public static int DistinctCount<TSource>(this IEnumerable<TSource> source) => DistinctCount(source, (IEqualityComparer<TSource>)null); And the other to: public static int DistinctCount<TSource>(this IEnumerable<TSource> source, Func<TSource, bool> predicate) => DistinctCount(source, predicate, null); Do you really need num? Couldn't you just return set.Count? By using ToHashSet<T>() directly as show below I only find a minor loss (if any) in performance compared to your versions: public static class ExtensionsReview { public static int DistinctCount<TSource>(this IEnumerable<TSource> source) => DistinctCount(source, (IEqualityComparer<TSource>)null); public static int DistinctCount<TSource>(this IEnumerable<TSource> source, IEqualityComparer<TSource> comparer) { if (source is null) { throw new ArgumentNullException(nameof(source)); } return source.ToHashSet(comparer).Count; } public static int DistinctCount<TSource>(this IEnumerable<TSource> source, Func<TSource, bool> predicate) => DistinctCount(source, predicate, null); public static int DistinctCount<TSource>( this IEnumerable<TSource> source, Func<TSource, bool> predicate, IEqualityComparer<TSource> comparer) { if (source is null) { throw new ArgumentNullException(nameof(source)); } if (predicate is null) { throw new ArgumentNullException(nameof(predicate)); } return source.Where(predicate).ToHashSet(comparer).Count; } } According to your tests, I think you should test reference types (classes) with override of Equals()/GetHashCode() (and implementation of IEquatable<T>) with and without a custom comparer. A: Slepic and Henrik are wondering about the use of foreach and enumerator, and I'm too. Anyway, instead of having different versions with actual implementations for the same purpose (count the distinct elements), you can create one private method with the full implementation, and then, just call back this method on the other methods. So, the main implementation would be like this : private static int CountDistinctIterator<TSource>(IEnumerable<TSource> source, Func<TSource, bool> predicate, IEqualityComparer<TSource> comparer) { if (source == null) throw new ArgumentNullException(nameof(source)); var set = new HashSet<TSource>(comparer); var count = 0; foreach (TSource element in source) { checked { if (set.Add(element) && predicate(element)) { count++; } } } return count; } Now, it's a matter of calling back this method with the appropriate arguments. 
Like this : public static int CountDistinct<TSource>(this IEnumerable<TSource> source) { return CountDistinctIterator<TSource>(source, (s) => true, null); } public static int CountDistinct<TSource>(this IEnumerable<TSource> source, IEqualityComparer<TSource> comparer) { return CountDistinctIterator<TSource>(source, (s) => true, comparer); } public static bool AnyDistinct<TSource>(this IEnumerable<TSource> source, Func<TSource, bool> predicate) { return CountDistinctIterator<TSource>(source, predicate, null) == 1; } public static bool AnyDistinct<TSource>(this IEnumerable<TSource> source) { return CountDistinctIterator<TSource>(source, (s) => true, null) == 1; } although, for this Distinct I don't see any usage for Func<TSource, bool> predicate except for checking if the element exists or not. As the Distinct would get the unique elements, and if you say element == xxx it'll always return 1 if exists, and 0 if not. Unless there is any other uses except this one, in my opinion, I find it beneficial if rename this method: DistinctCount<TSource>(this IEnumerable<TSource> source, Func<TSource, bool> predicate) to something meaningful other than DistinctCount like for instance DistinctAny which return boolean (true if DistinctCount returns 1, false if 0). UPDATE : I have changed the methods name from DistinctCount to CountDistinct the reason of this is because the method is Counting, so the Count needs to be first so it would be easier to picked up, the other reason is doing this will make it appear after Count on the intellisense list. I also added AnyDistinct methods which replaced the mentioned method (the one with Func<TSource, bool>). A: Just looking at your tests, there's a couple of points to consider... Naming Having Test at the front of every test case is usually redundant (public methods in test classes are tests...). The beginning of the test name is also quite valuable real-estate since your test runner/window is likely to truncate what it displays after a certain number of characters. Consider removing the 'Test'. A better prefix might be the name of the method under test (although you may be using the name of the `TestClass for that, since you don't include that part of your code). Make it clear what you're testing I found your test methods that are testing for exceptions to be less than clear. [TestMethod] [ExpectedException(typeof(ArgumentNullException))] public void TestNullPredicate() { int[] zero = Array.Empty<int>(); Func<int, bool> predicate = null; // ReSharper disable once ExpressionIsAlwaysNull Assert.AreEqual(0, zero.DistinctCount(predicate)); } Initially I skipped over the method annotation and just rest the test code. On the face of it, it look like if there was a null predicate, you are expecting the method to return 0. This seemed odd, however possible behaviour. There's nothing in the test name (such as DistinctCount_NullPredicate_Throws) to indicate what the expected outcome was, then eventually there's the ExpectedException attribute, which explains that actually the test is expecting an ArgumentNullException. Having an Assert statement when you're not actually expecting a value to be returned from the call is misleading. It would be better to just call the method (zero.DistinctCount(predicate)). The lack of an assertion helps to make it more obvious that the attributes indicate the success criteria for the test.
{ "pile_set_name": "StackExchange" }
Q: proving convergence of this sequence and calculating limit So I have this sequence: $a_{n}=1+ \frac{1}{3}\cos1 +\frac{1}{3^2}\cos2+ ... +\frac{1}{3^n}\cos(n) $ I have to prove it is convergent, and then calculate the limit. I'm not totally sure how to find the limit of this sequence, so I am stuck at the beginning. Because of the 2nd task, it's probably not the best idea to try to prove it's a Cauchy sequence, so I guess it's best to find the limit (at least a candidate) and then prove that the sequence converges to that limit by definition. But I am stuck at the beginning. Thanks in advance. A: Hint: $$ \cos k=\operatorname{Re}\bigl(e^{ik}\bigr) $$ and $$ \sum_{k=0}^n\frac{\cos k}{3^k}=\operatorname{Re}\Bigl(\sum_{k=0}^n\Bigl(\frac{e^i}{3}\Bigr)^k\Bigr). $$ The sum is the sum of a geometric progression of ratio $e^i/3$. $$ \sum_{k=0}^n\Bigl(\frac{e^i}{3}\Bigr)^k=\frac{(e^i/3)^{n+1}-1}{e^i/3-1}. $$ Since $|e^i/3|=1/3<1$ we have $$ \lim_{n\to\infty}\sum_{k=0}^n\Bigl(\frac{e^i}{3}\Bigr)^k=\frac{1}{1-e^i/3}=\frac{3}{3-\cos1-i\sin1}. $$ The desired limit is the real part, that is $$ \frac{3(3-\cos1)}{(3-\cos1)^2+\sin^21}. $$
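A quick numerical sanity check of that closed form (my own addition, not part of the hint):

import math

partial = sum(math.cos(k) / 3 ** k for k in range(60))
closed  = 3 * (3 - math.cos(1)) / ((3 - math.cos(1)) ** 2 + math.sin(1) ** 2)
print(partial, closed)   # both are roughly 1.0919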
{ "pile_set_name": "StackExchange" }
Q: CORS preflight request returning HTTP 401 with windows authentication I searched a lot on Google and Stack overflow to find a solution for my problem, but nothing worked. Here is my problem: I use IIS 7 with a special programming environment called WebDEV that does not allow direct manipulation of OPTIONS HTTP method. So, all solutions suggesting some kind of server-side request handling using code are not feasible. I have to use Window authentication and disable anonymous access I have a page that uses CORS to POST to this server. As this POST should have Content-type: Octet-stream, a preflight is issued by the browser. When I enable anonymous access, everything works fine (CORS is well configured) When I disable anonymous access, the server replies with HTTP 401 unauthorized response to the preflight request, as it does not contain credentials information. I tried to write a module for IIS that accepts OPTIONS requests like this, but it did not work (couldn't add the module correctly to IIS, maybe) public class CORSModule : IHttpModule { public void Dispose() { } public void Init(HttpApplication context) { context.PreSendRequestHeaders += delegate { if (context.Request.HttpMethod == "OPTIONS") { var response = context.Response; response.StatusCode = (int)HttpStatusCode.OK; } }; } } The question is: How can I make IIS respond with HTTP 200 to the preflight request without enabling anonymous access or writing some server-side code? Is there an easy configuration or a ready-made module for IIS to do so? At least, what are the detailed steps to install the above module into IIS 7? A: Here is the solution that uses "URL Rewrite" IIS module. It works perfectly. 1- Stop IIS service (maybe not necessary) 2- Install "web platform installer" from https://www.microsoft.com/web/downloads/platform.aspx 3- Go to "Applications" tab and search for "URL Rewrite" and download it 4- Install this hotfix KB2749660 (maybe not necessary) 5- Open IIS configuration tool, double click "URL Rewrite" 6- Add a new blankrule 7- Give it any name 8- In "Match URL", specify this pattern: .* 9- In "Conditions", specify this condition entry: {REQUEST_METHOD} and this pattern: ^OPTIONS$ 10- In "Action", specify: action type Personalized response, state code 200, reason Preflight, description Preflight 11- Start the server Now, the server should reply with a 200 status code response to the preflight request, regardless of the authentication. Remarks: I also disabled all compression, I don't know if it matters. A: From AhmadWabbi's answer, easy XML pasting into your web.config: <system.webServer> <rewrite> <rules> <rule name="CORS Preflight Anonymous Authentication" stopProcessing="true"> <match url=".*" /> <conditions> <add input="{REQUEST_METHOD}" pattern="^OPTIONS$" /> </conditions> <action type="CustomResponse" statusCode="200" statusReason="Preflight" statusDescription="Preflight" /> </rule> </rules> </rewrite> </system.webServer>
{ "pile_set_name": "StackExchange" }
Q: pre-soaking tea in cold water prior to brewing My friend recently gave me the advice that tea bag should be soaked in a little bit (barely enough to submerge the tea bag) of cold water for a couple of minutes. Then hot water should be added to the cold to brew it. The idea is that scalding hot water is hot enough to burn flavour compounds and pre-soaking protects against this. I tried googling to no avail. Does anyone have any references that prove/dis-prove this? A: I don't think the couple minutes of soaking is actually doing anything; it'll pull a bit of stuff out of the leaves, and get them wet, but what really matters is the hot water. It sounds like this is a way of getting lower temperature water, similar to your proposed "protect the tea from hot water" explanation. This is indeed good for green and white tea, and maybe oolong, but essentially unnecessary for most other teas. You don't actually always want boiling water for tea. Joe provided this table of temperatures in his comment. Some temperatures for common types of tea, in decreasing order of temperature: maté, rooibos or herbal (208F / 98C); black (195-205F / 91-96C); oolong (195F / 91C); blooming (180F / 82C); white or green (175F / 80C). So for some teas (black, maté, rooibos, herbal), it's pretty close to boiling - by the time the water's poured in, and transfers some heat to the cup, it'll be a few degrees below boiling, so you don't need to worry about it much. But other kinds of tea (green or white tea), you ideally want to add somewhat lower temperature water. If you have a way to get water somewhere around 80C - for example, some electric kettles can automatically turn off at a lower temperature - then just do that. But if it's easiest to make boiling water, then if you fill your cup a bit less than 1/4 of the way with water at room temperature (20C) then fill it the rest of the way with boiling water, the result will be around 80C, just right for green tea!
{ "pile_set_name": "StackExchange" }
Q: array not getting values I am running a script that changes values in a formula with a message box. var searchtext = Browser.inputBox("Enter search text"); var replacetext = Browser.inputBox("Enter replace text"); var form = ss1.getRange("D3"); var formula = form.getFormula(); var updated =formula; updated.indexOf(searchtext); updated = updated.replace(searchtext, replacetext); form.setFormula(updated); var form2 = ss1.getRange("D10"); var formula2 = form2.getFormula(); var updated2 =formula2; updated2.indexOf(searchtext); updated2 = updated2.replace(searchtext, replacetext); form2.setFormula(updated2); As you notice I have to repeat the code for the different ranges I have. In the code above I have D3 and D10 ranges. I have around another 20 ranges that I need to replace formula from. I have created this array to hopefully do them all together while the script runs but I am not seeing any changes. Any ideas why would this be happening? function dash(){ var ss1 = SpreadsheetApp.getActiveSpreadsheet(); var searchtext = Browser.inputBox("Enter search text"); var replacetext = Browser.inputBox("Enter replace text"); var rangeArray = ss1.setActiveSheet(ss1.getSheetByName("Ranges").getRange("A1:A5").getValues()); var daily = ss1.setActiveSheet(ss1.getSheetByName("Daily")); for(var i in rangeArray){ var form = daily.getRange(rangeArray[i][0]); var formula = getRange(form).getFormula(); var updated =formula; updated.indexOf(searchtext); updated = updated.replace(searchtext, replacetext); form.setFormula(updated);} } A: There are a few informtion about your sheet layout that I ignore so I had to make some assumptions... I suppose the ranges you want to process are columns in the sheet so I would do something like this (see comments in code): (I didn't have the opportunity to test this code, it might need some debugging) function dash(){ var ss1 = SpreadsheetApp.getActiveSpreadsheet(); var searchtext = Browser.inputBox("Enter search text"); var replacetext = Browser.inputBox("Enter replace text"); var rangeArray = ss1.getSheetByName("Ranges").getRange("A1:A4").getValues(); // I suppose these cells contains A1 notation of the useful ranges var daily = ss1.setActiveSheet(ss1.getSheetByName("Daily")); Logger.log(rangeArray) for(var i in rangeArray){ var formula = daily.getRange(rangeArray[i][0].toString()).getFormula();// Logger.log(formula) var updated =formula.toString().replace(searchtext, replacetext); Logger.log(updated) } daily.getRange(rangeArray[i][0].toString()).setFormula(updated);// } EDIT : removed first code and replaced following your comment and example sheet
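A follow-up sketch of the debugging the answer says may still be needed (my own guess): setFormula has to be called inside the loop, otherwise only the last range is ever written back.

function dash() {
  var ss1 = SpreadsheetApp.getActiveSpreadsheet();
  var searchtext = Browser.inputBox("Enter search text");
  var replacetext = Browser.inputBox("Enter replace text");
  var rangeArray = ss1.getSheetByName("Ranges").getRange("A1:A4").getValues();  // A1-notation strings
  var daily = ss1.getSheetByName("Daily");
  for (var i = 0; i < rangeArray.length; i++) {
    var cell = daily.getRange(rangeArray[i][0].toString());
    var updated = cell.getFormula().replace(searchtext, replacetext);
    cell.setFormula(updated);   // write back inside the loop so every range is updated
  }
}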
{ "pile_set_name": "StackExchange" }
Q: "It was him" auf Deutsch In English, we could answer the question "Who ate the cake?" with It was him. I don't know the reason why we use the accusative "him", even though the person is the subject of eating the cake, not the object. In German, is it the same? Wer aß den Kuchen? Es war ihn. A: German is different and uses the nominative: Er war's. Er war es. Das war er. Note the word order; es can't be in initial position here.
{ "pile_set_name": "StackExchange" }
Q: QT 5.7 QML - Reference Error: Class is not defined I get the "qrc:/main_left.qml:23: ReferenceError: CppClass is not defined" when I run the below code. This code tries to change the position of a rectangle in a window. Main.cpp #include <QGuiApplication> #include <QQmlApplicationEngine> #include <QQmlContext> #include "cppclass.h" #include "bcontroller.h" #include <QApplication> int main(int argc, char *argv[]) { QApplication app(argc, argv); //QGuiApplication app(argc, argv); BController c; CppClass cppClass; QQmlApplicationEngine engine; engine.rootContext()->setContextProperty("CppClass", &cppClass); engine.load(QUrl(QStringLiteral("qrc:/main_left.qml"))); return app.exec(); } main_left.qml import QtQuick 2.7 import QtQuick.Window 2.2 import QtQuick.Controls 1.2 Rectangle { visible: true width: 640 height: 480 property int index: 0 Text { text: controller.name anchors.centerIn: parent } Image{ id:imageLeft anchors.fill: parent source:"imageLeft.jpg"; } Connections { target: CppClass onPosUpdate: { rect.x = currentPos } } Button { id: button1 x: 163 y: 357 text: qsTr("Change Position") anchors.bottom: parent.bottom anchors.bottomMargin: 20 anchors.horizontalCenter: parent.horizontalCenter onClicked: CppClass.getCurrentPos() } Rectangle { id: rect width: parent.width/2 height: parent.height/2 color: "transparent" border.color: "red" border.width: 5 radius: 10 } MouseArea { anchors.fill: parent onClicked: controller.setName(++index) } } cppclass.cpp #include "cppclass.h" #include <QtQuick> #include <string> CppClass::CppClass(QObject *parent) : QObject(parent) { } CppClass::~CppClass() { } void CppClass::getCurrentPos() { int pos = rand() % 400; std::string s = std::to_string(pos); QString qstr = QString::fromStdString(s); emit posUpdate(qstr); } Please help! A: I think there is a problem with CppClass declaration in your main.cpp => CppClass cppClass; and your CppClass constructor is CppClass::CppClass(QObejct *parent); which means that you are missing the constructor parameter. Therefore,you have two possibilities 1st : Try use your class without QObject *parent 2nd: provide the QObject* parent for the contructor of CppClass when declaring it in main.cpp
{ "pile_set_name": "StackExchange" }
Q: Help required to get full code covered in test class I am having the trigger and handler class in the quote Line Item object it's work like, the I am having two same named custom field in the OpportunityLineItem and Quote Line item respectively.so that field is to be updated by the condition of when issyncing of the quote of the quoteline item to its opportunity is to be true. that custom field name SyncCheck__c. My trigger: trigger customSyncHandlerTrigger on QuoteLineItem (after update) { if (Trigger.isUpdate && Trigger.isAfter) { CustomSyncHandler.UpdateTrigger(Trigger.New, Trigger.OldMap); } } My handler class: public class CustomSyncHandler { public static void UpdateTrigger (List<QuoteLineItem> InsertedQuote, Map<Id,QuoteLineItem> OldInsertedQuoteMap) { Set<Id> ProductIdset = new Set<Id>(); Set<Id> QuoteIdset = new Set<Id>(); Set<Id> OpportunityIdset = new Set<Id>(); List<QuoteLineItem> QuoteLineItemList = new List<QuoteLineItem>(); List<OpportunityLineItem> OpportunityLineItemList = new List<OpportunityLineItem>(); for (QuoteLineItem RecordQuoteItem: InsertedQuote) { QuoteLineItem OldQuoteLineItemREC = OldInsertedQuoteMap.get(RecordQuoteItem.Id); if (OldQuoteLineItemREC.SyncCheck__c != RecordQuoteItem.SyncCheck__c) { ProductIdset.add(RecordQuoteItem.Product2Id); QuoteIdset.add(RecordQuoteItem.QuoteId); } } If (QuoteIdset.size()>0) { QuoteLineItemList = [SELECT Id, QuoteId, Product2Id, SyncCheck__c, Quote.issyncing, Quote.OpportunityId FROM QuoteLineItem WHERE Product2Id IN :ProductIdset AND Quote.issyncing = True ]; } If (QuoteLineItemList.size() > 0) { for (QuoteLineItem quoteLineitemvalue: QuoteLineItemList) { OpportunityIdset.add(quoteLineitemvalue.Quote.OpportunityId); } } List<OpportunityLineItem> OpportunityLineitemvalueList = [SELECT Id, Name, OpportunityId, SyncCheck__c, Product2Id FROM OpportunityLineItem WHERE OpportunityId IN :OpportunityIdset AND Product2Id IN :ProductIdset]; Map<Id, List<OpportunityLineItem>> OpportunityandOppolineitemMap = new Map<Id, List<OpportunityLineItem>>(); for (OpportunityLineItem OpportunityLIRecord : OpportunityLineitemvalueList) { If (!OpportunityandOppolineitemMap.Containskey(OpportunityLIRecord.Id)) { OpportunityandOppolineitemMap.put(OpportunityLIRecord.OpportunityId, new List<OpportunityLineItem>()); } OpportunityandOppolineitemMap.get(OpportunityLIRecord.OpportunityId).add(OpportunityLIRecord); } system.debug('@@@ OpportunityandOppolineitemMap value is'+OpportunityandOppolineitemMap); for (QuoteLineItem QuoteLineItemRecord : QuoteLineItemList) { if (OpportunityandOppolineitemMap.containsKey(QuoteLineItemRecord.Quote.OpportunityId)) { for (OpportunityLineItem OpporVar : OpportunityandOppolineitemMap.get(QuoteLineItemRecord.Quote.OpportunityId)) { if (OpporVar.Product2Id == QuoteLineItemRecord.Product2Id) { OpporVar.SyncCheck__c = QuoteLineItemRecord.SyncCheck__c; OpportunityLineItemList.add(OpporVar); } } } } if (OpportunityLineItemList.size() > 0) { Update OpportunityLineItemList; } } } My test class : @isTest public class TestCustomSyncHandlerTrigger { static testMethod void UpdateCustomSyncHandler() { Account acc1 = new Account(); acc1.Name = 'test account'; insert acc1; system.debug('insert acc1 is success'); Opportunity Opp1 = new Opportunity(); Opp1.Name = 'testOpp'; Opp1.AccountId = acc1.Id; Opp1.StageName = 'Closed Won'; Opp1.CloseDate = system.Today(); insert Opp1; system.debug('insert opp1 success'); Product2 Pro1 = new Product2(); Pro1.Name = 'SLA: Bronze'; pro1.isActive = True; Insert pro1; system.debug('insert pro1 is 
success'); Pricebook2 pb = new pricebook2(); pb.Name = 'Standard Price Book 2009'; pb.description = 'Price Book 2009 Products'; pb.isActive = True; insert pb; system.debug('pb value is'+ pb.Id); system.debug('insert pricebook2 is success'); Id pricebookId = Test.getStandardPricebookId(); PricebookEntry StandardPriceBookEntry = new PricebookEntry(); StandardPriceBookEntry.Pricebook2Id = pricebookId; StandardPriceBookEntry.Product2Id = pro1.Id; StandardPriceBookEntry.UnitPrice = 10000; StandardPriceBookEntry.IsActive =True; insert StandardPriceBookEntry; system.debug('insert StandardPriceBookEntry'); PricebookEntry pbe = new PricebookEntry(pricebook2id=pb.id, product2id=pro1.id, unitprice=10000, isActive = True); insert pbe; system.debug('insert pbe is success'); Quote Quo1 = new Quote(); Quo1.OpportunityId = Opp1.Id; Quo1.Pricebook2Id = pb.Id; Quo1.Name = 'test Quo1'; insert Quo1; system.debug('insert Quo1 success'); QuoteLineItem QLI = new QuoteLineItem(); QLI.Product2Id = pro1.Id; QLI.QuoteId = Quo1.Id; QLI.PricebookEntryId = Pbe.Id; QLI.Quantity = 2; QLI.UnitPrice = 150000.0; insert QLI; system.debug('insert QLI is success'); OpportunityLineItem OLI = new OpportunityLineItem(); OLI.UnitPrice = 150000; OLI.OpportunityId = Opp1.Id; OLI.PriceBookEntryId = Pbe.Id; OLI.Quantity = 2; insert OLI; system.debug('insert OLI is success'); List<QuoteLineItem> QuoteLIList = [SELECT Id, Quote.issyncing, Quote.OpportunityId FROM QuoteLineItem WHERE Quote.issyncing = True]; for(QuoteLineItem q:QuoteLIList) { system.assertEquals(Opp1.Id, q.Quote.OpportunityId, 'the value of q.Quote.OpportunityId value is not null'); } if (QLI.Quote.isSyncing == True ) { if (QLI.Product2id == OLI.Product2id) { QLI.SyncCheck__c = 'check'; OLI.SyncCheck__c = QLI.SyncCheck__c; system.assertEquals (OLI.product2Id, QLI.product2id); system.assertEquals (OLI.SyncCheck__c, 'check' ); } } Update QLI; } } Here my test class only provide the 53% only it's my knowledge in test class so please help me to get the full code coverage of my class and trigger. For answer's thanks in advance. 
A: the answer is, @isTest public class TestCustomSyncHandlerTrigger { static testMethod void UpdateCustomSyncHandler() { Account acc1 = new Account(); acc1.Name = 'test account'; insert acc1; system.debug('insert acc1 is success'); Opportunity Opp1 = new Opportunity(); Opp1.Name = 'testOpp'; Opp1.AccountId = acc1.Id; Opp1.StageName = 'Closed Won'; Opp1.CloseDate = system.Today(); insert Opp1; system.debug('insert opp1 success'); Product2 Pro1 = new Product2(); Pro1.Name = 'SLA: Bronze'; pro1.isActive = True; Insert pro1; system.debug('insert pro1 is success'); Pricebook2 pb = new pricebook2(); pb.Name = 'Standard Price Book 2009'; pb.description = 'Price Book 2009 Products'; pb.isActive = True; insert pb; system.debug('pb value is'+ pb.Id); system.debug('insert pricebook2 is success'); Id pricebookId = Test.getStandardPricebookId(); PricebookEntry StandardPriceBookEntry = new PricebookEntry(); StandardPriceBookEntry.Pricebook2Id = pricebookId; StandardPriceBookEntry.Product2Id = pro1.Id; StandardPriceBookEntry.UnitPrice = 10000; StandardPriceBookEntry.IsActive =True; insert StandardPriceBookEntry; system.debug('insert StandardPriceBookEntry'); PricebookEntry pbe = new PricebookEntry(pricebook2id=pb.id, product2id=pro1.id, unitprice=10000, isActive = True); insert pbe; system.debug('insert pbe is success'); Quote Quo1 = new Quote(); Quo1.OpportunityId = Opp1.Id; Quo1.Pricebook2Id = pb.Id; Quo1.Name = 'test Quo1'; insert Quo1; system.debug('insert Quo1 success'); QuoteLineItem QLI = new QuoteLineItem(); QLI.Product2Id = pro1.Id; QLI.QuoteId = Quo1.Id; QLI.PricebookEntryId = Pbe.Id; QLI.Quantity = 2; QLI.UnitPrice = 150000.0; insert QLI; system.debug('insert QLI is success'); List<QuoteLineItem> QuoteLIList = [SELECT Id, Quote.issyncing, Quote.OpportunityId FROM QuoteLineItem WHERE Quote.issyncing = True]; for(QuoteLineItem q:QuoteLIList) { system.assertEquals(Opp1.Id, q.Quote.OpportunityId, 'the value of q.Quote.OpportunityId value is not null'); } Opp1.SyncedQuoteId = Quo1.Id; Update Opp1; QLI.SyncCheck__c = 'Check value'; Update QLI; } }
{ "pile_set_name": "StackExchange" }
Q: Python search imap email for a string New to python, having some trouble getting past this. Am getting back emails from gmail via imap (with starter code from https://yuji.wordpress.com/2011/06/22/python-imaplib-imap-example-with-gmail/) and want to search a specific email (which I am able to fetch) for a specific string. Something like this ids = data[0] id_list = ids.split() ids = data[0] id_list = ids.split() latest_email_id = id_list[-1] result, data = mail.fetch(latest_email_id, "(RFC822)") raw_email = data[0][1] def search_raw(): if 'gave' in raw_email: done = 'yes' else: done = 'no' and it always sets done to no. Here's the output for the email (for the body section of the email) Content-Type multipart/related;boundary=1_56D8EAE1_29AD7EA0;type="text/html" --1_56D8EAE1_29AD7EA0 Content-Type text/html;charset="UTF-8" Content-Transfer-Encoding base64 PEhUTUw+CiAgICAgICAgPEhFQUQ+CiAgICAgICAgICAgICAgICA8VElUTEU+PC9USVRMRT4KICAg ICAgICA8L0hFQUQ+CiAgICAgICAgPEJPRFk+CiAgICAgICAgICAgICAgICA8UCBhbGlnbj0ibGVm dCI+PEZPTlQgZmFjZT0iVmVyZGFuYSIgY29sb3I9IiNjYzAwMDAiIHNpemU9IjIiPlNlbnQgZnJv bSBteSBtb2JpbGUuCiAgICAgICAgICAgICAgICA8QlI+X19fX19fX19fX19fX19fX19fX19fX19f X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXzwvRk9OVD48L1A+CgogICAgICAg ICAgICAgICAgPFBSRT4KR2F2ZQoKPC9QUkU+CiAgICAgICAgPC9CT0RZPgo8L0hUTUw+Cg== --1_56D8EAE1_29AD7EA0-- I know the issue is the html, but can't seem to figure out how to parse the email properly. Thank you! A: The text above is base64 encoding. Python has a module named base64 which gives you the ability to decode it. import base64 import re def has_gave(raw_email): email_body = base64.b64decode(raw_email) match = re.search(r'.*gave.*', email_body , re.IGNORECASE) if match: done = 'yes' print 'match found for word ', match.group() else: done = 'no' print 'no match found' return done
{ "pile_set_name": "StackExchange" }
Q: How to smoothly rasterize PDF using the API? I'd like to rasterize a PDF source (in this case to PNG, 500x500 pixels) using these golang bindings for ImageMagick6. On the CLI, I can do this using convert -density 5000 -define psd:fit-page=500x test.pdf -resize 500x test.png which results in a smoothly rendered image. What I'm failing to do right now is to produce something similar using the API: either the resulting image has scaled pixels or is blurry and has a size 500x500 pixels, or it's in the "original" size. Here's a minimum snippet of my playground code: package main import "gopkg.in/gographics/imagick.v2/imagick" func main() { imagick.Initialize() defer imagick.Terminate() mw := imagick.NewMagickWand() defer mw.Destroy() mw.SetImageResolution(5000,5000) mw.SetOption("psd:fit-page", "500x") mw.ReadImage("test.pdf") mw.ResizeImage(500, 500, imagick.FILTER_POINT, 1) mw.SetIteratorIndex(0) // This being the page offset mw.SetImageFormat("png") mw.WriteImage("test.png") } I got confused by density, image size, image resolution and canvas size I guess. How is it meant to be used? A: Your GO version of the convert command is missing the "density" argument. Replacing the call to SetImageResolution with one to SetOption and using a filter to smooth out the edges results in a smooth image: package main import "gopkg.in/gographics/imagick.v2/imagick" func main() { imagick.Initialize() defer imagick.Terminate() mw := imagick.NewMagickWand() defer mw.Destroy() mw.SetOption("density", "5000") mw.SetOption("psd:fit-page", "500x") mw.ReadImage("test.pdf") mw.ResizeImage(500, 500, imagick.FILTER_LAGRANGE, 1) mw.SetIteratorIndex(0) // This being the page offset mw.SetImageFormat("png") mw.WriteImage("test.png") }
{ "pile_set_name": "StackExchange" }
Q: Which is faster to read and edit, a database or a .txt file? It would be for storing and rewriting just a single INT digit, erasing one and writing another, with the condition that when it reaches 5 it goes back to 1, looping like that every time someone accesses the page. Whenever someone accesses the page, the value would be incremented by 1 and saved, unless it was already at 5, in which case it would go back to 1. Anyway: PHP + MySQL, or PHP + .txt, which is faster for this purpose? Considering that hundreds of simultaneous requests must not break the system, which is better?

A: A flat text file will always be faster; it hardly has to do anything. A database performs a monumental amount of work to guarantee data integrity and to do it in an easy, standardized way. That does not mean it is the best option.

If you are going to access the file concurrently you have to know what you are doing, otherwise you will have problems, even with a pattern as simple as the one described. What keeps a system free of bugs is mastering the whole process of software development, deployment and infrastructure maintenance; it is not just knowing whether MySQL or .txt is better. But if you don't know which is better, go with the safer choice, which is the database. At least it demands less care.

An intermediate alternative would be SQLite, which has the characteristics of a database with some of the conveniences of plain file access, including better performance. In some cases another database may be more useful, perhaps a NoSQL one.

A: According to what you described, there will be hundreds of simultaneous connections. Each connection increments a value and, when it reaches 5, it goes back to 1. With a txt file this can be a problem, because you will have to create conditions to prevent inconsistencies. One approach is to lock editing of, or access to, the file when it is already open by another user:

$f = fopen('fit.txt', 'c+'); // open for reading and writing so the current value can be read back
if (flock($f, LOCK_EX | LOCK_NB)) {
    $n = (int) fread($f, 4);
    ($n == 5) ? $n = 1 : $n++;
    rewind($f);
    ftruncate($f, 0);
    fwrite($f, $n);
    flock($f, LOCK_UN);
}
fclose($f);

Using a database this operation is safer; however, it obviously has a much higher processing cost. Before thinking about performance, think about consistency. If the routine is safe and you are sure there will be no failures, then you move on to the "next stage", which is optimization.

In the example above with flock(), the process is "super fast", but a failure can still happen: something unexpected where it takes too long to release the file for the next user. Now imagine a scenario where 200 users hit it at the exact same time. The first one is the "lucky" one: it reads and writes the number and releases the lock for the second, the third, the fourth. But will the last one in the queue manage to read and write the value correctly, or will it return an error after a long wait?

Consider that if the system has hundreds of simultaneous accesses, say in a single second it receives 150 connections, 2 seconds later another 200, and 2 seconds after that another 100, then in a window of 5 seconds you already have 350 requests queued up waiting to read and write to that txt file. The system may abort execution at around number 200 because of the long wait. It may be a case of rethinking the business logic. If you do not have that large number of simultaneous connections, then yes, a simple flock(), as in the example, can solve it and still be a more viable option than a database in terms of performance.
{ "pile_set_name": "StackExchange" }
Q: Best way to put user input into generated javascript? I need for someone to be able to put some text into a page; this then gets sent to the server, saved in the database, and elsewhere this text is put into a javascript variable. Basically like this:

Write("var myVar=\""+MyData+"\";");

What is the best way of escaping this data? Is there anything out there already to deal with things like ' and " and new lines? Is base64 my only option? My serverside framework/language is ASP.Net/C#

A: You should use WPL:

Write("var myVar=" + Encoder.JavaScriptEncode(MyData, true) + ";");

if you don't want to reference the library, you can use the following function (adapted from the .Net source):

public static string QuoteString(this string value)
{
    if (String.IsNullOrEmpty(value))
        return String.Empty;

    StringBuilder b = null;
    int startIndex = 0;
    int count = 0;
    for (int i = 0; i < value.Length; i++)
    {
        char c = value[i];

        // Append the unhandled characters (that do not require special treatment)
        // to the string builder when special characters are detected.
        if (c == '\r' || c == '\t' || c == '\"' || c == '\'' || c == '<' || c == '>' ||
            c == '\\' || c == '\n' || c == '\b' || c == '\f' || c < ' ')
        {
            if (b == null)
            {
                b = new StringBuilder(value.Length + 5);
            }

            if (count > 0)
            {
                b.Append(value, startIndex, count);
            }

            startIndex = i + 1;
            count = 0;
        }

        switch (c)
        {
            case '\r':
                b.Append("\\r");
                break;
            case '\t':
                b.Append("\\t");
                break;
            case '\"':
                b.Append("\\\"");
                break;
            case '\\':
                b.Append("\\\\");
                break;
            case '\n':
                b.Append("\\n");
                break;
            case '\b':
                b.Append("\\b");
                break;
            case '\f':
                b.Append("\\f");
                break;
            case '\'':
            case '>':
            case '<':
                AppendCharAsUnicode(b, c);
                break;
            default:
                if (c < ' ')
                {
                    AppendCharAsUnicode(b, c);
                }
                else
                {
                    count++;
                }
                break;
        }
    }

    if (b == null)
    {
        return value; // no special characters found, the original string is already safe
    }

    if (count > 0)
    {
        b.Append(value, startIndex, count);
    }

    return b.ToString();
}

// Helper used above: writes the character as a \uXXXX JavaScript escape.
private static void AppendCharAsUnicode(StringBuilder b, char c)
{
    b.Append("\\u");
    b.Append(((int)c).ToString("x4"));
}
{ "pile_set_name": "StackExchange" }
Q: A strange, but probably simple, shell_exec issue If I run this command on the command line (on my Mac OS X): echo -n "hello" > foo-cmd.txt I get the expected result, namely a file foo-cmd.txt containing "hello" without any newline at the end. However, if I run this PHP code: <?php shell_exec("echo -n \"hello\" > foo-php.txt"); ?> I get a file foo-php.txt containing the text "-n hello" followed by a newline! In other words, the argument -n sneaks in as output, instead of being treated as an argument! How can I resolve this issue? A: Your command is using the shell's built-in version of echo which doesn't support the -n option. Try /bin/echo instead.
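A minimal sketch of that fix, assuming /bin/echo exists at that path on the target system; the printf variant is an alternative that sidesteps the built-in entirely:

<?php
// Call the external echo binary, which understands -n, instead of the
// shell built-in that sh -c resolves "echo" to.
shell_exec('/bin/echo -n "hello" > foo-php.txt');

// Or avoid echo's portability quirks altogether with printf.
shell_exec('printf %s "hello" > foo-php.txt');
?>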
{ "pile_set_name": "StackExchange" }
Q: Working with the database in flask-security I'm learning flask-security. The Quickstart contains this code:

class UserRoles(db.Model):
    # Because peewee does not come with built-in many-to-many
    # relationships, we need this intermediary class to link
    # user to roles.
    user = ForeignKeyField(User, related_name='roles')
    role = ForeignKeyField(Role, related_name='users')
    name = property(lambda self: self.role.name)
    description = property(lambda self: self.role.description)

I just can't figure out what these lines do:

name = property(lambda self: self.role.name)
description = property(lambda self: self.role.description)

Where does the property function come from, and what exactly is happening in these lines?

A: property is a standard built-in Python function. It lets you work with an attribute of a class instance, attaching different actions to setting, changing and deleting it. See the documentation for details. In this case it is simply the equivalent of the decorator version:

class Foo(object):
    # just for the example
    role = '123'

    @property
    def name(self):
        return self.role

>>> Foo().name
'123'

It simply didn't make sense to write out the full form of this construct here; this property is just the usual syntactic sugar for getting self.role.name.
{ "pile_set_name": "StackExchange" }
Q: Windows 7 running Virtualbox 4.1.12 with Ubuntu 10.04 LTS guest but no full screen After a recent windows update (though may not be the issue), I am unable to run Ubuntu in the virtualbox with full screen mode. I can no longer stretch the window to any resolution. It is locked at 1024x768 in the guest window (Ubuntu). For sanity, I re-installed The Linux Guest Additions. I also tried uninstalling it and reinstalling it even though the script does this for you. No matter what, I can't get past the limited 1024x768 workspace. Furthermore, I noticed that xorg.conf is not in the /etc/X11 folder but I believe this is normal as my other PC running a similar setup also does not have xorg.conf yet scales the guest window of Ubuntu properly to any resolution (including true full screen mode). I have set the guest settings to 64MB video space and also tried 128MB. No difference there. I also tested running Virtualbox 4.1.8 (to try an older working version). Nothing. Now I am back to Vbox version 4.1.12 with no avail. I am also unable to run the Ubuntu Desktop with advanced effects. Maybe the Linux Guest Additions is not running even though it is installed? How do I verify this? Any other suggestions? A: try with this: 1. Open terminal and enter the following command: sudo apt-get update sudo apt-get install build-essential linux-headers-$(uname -r) sudo apt-get install virtualbox-ose-guest-x11 2. Once installation is finished, restart your virtualBox machine. 3. Go to System -->Preferences -->Monitors and change the resolution of your screen. source: http://tutorial.downloadatoz.com/how-to-fix-ubuntu-10-10-virtualbox-guest-additions-problems.html
{ "pile_set_name": "StackExchange" }
Q: String escaping with twig? QUESTION: How do you append the following twig variable at the end of the first parameter of the anchor function? {{ anchor('welcome/play/', 'Play', {'class': 'btn btn-primary'})|raw }} I have tried variations of: 'welcome/play/{{t.CompetitionID}}' etc which doesnt work. A: You could use ~ to concatenate strings http://twig.sensiolabs.org/doc/templates.html#expressions : ~: Converts all operands into strings and concatenates them. {{ "Hello " ~ name ~ "!" }} would return (assuming name is 'John') Hello John!. Thus: {{ anchor('welcome/play/' ~ t.competitionID, 'Play', {'class': 'btn btn-primary'})|raw }}
{ "pile_set_name": "StackExchange" }
Q: Need 3 outcomes from IF statement I have a list of names in excel which have a random number next to them in brackets, the numbers are 1 to 3 digits long. I was using the formula =IF(LEFT(RIGHT(B8,4),1)=("("),RIGHT(B8,3),RIGHT(B8,4)) to get rid of the first bracket and then =IF(RIGHT(LEFT(AX8,3),1)=")",LEFT(AX8,2),LEFT(AX8,3)) to remove the last bracket. This was working until I found a name with a 4 digit number in the brackets. Is there a way to add another outcome to the if statement? A: The VALUE function will interpret bracketed numbers as negative but the ABS function will take care of that. =ABS(VALUE(MID(A2, FIND("(", A2), 99))) An additional bonus is that you end up with true numbers, not text that looks like a number.
{ "pile_set_name": "StackExchange" }
Q: Transmitting live video over long distances with ethernet? I'm having trouble thinking of the best way to transmit live video using a Pi Zero W and a camera module through a wired ethernet connection of around 100 ft (a Model 3 could also be used). I want to overlay a couple of sensor readouts like temperature and humidity onto the footage, and some control over ethernet, possibly using RS485 or another protocol, would be great for changing the camera angle with a servo driven by a joystick on the other end (RS485 uses 2 wires). HDMI to ethernet exists, however is there a solution which does not use all 8 wires of the ethernet cable? Would there be a problem with interference between both signals? I am not sure how twisted pairs work with cancelling EMF in an ethernet cable. Ethernet cable is desirable for its cheap cost/ft!

A: There are great HDMI-over-CAT6 extenders. Some use 2 cables and can go up to 300 feet. Others use two cables and go 100 meters. Then there are also single-cable adapters that actually go 300 feet over a single cable, supporting 1080p and audio. Here is the search I used: HDMI to Ethernet converter single cable

So there's your answer. They are quite available, and affordable. Whatever your software can generate to the HDMI port can run a large screen at the 100 ft specified.
{ "pile_set_name": "StackExchange" }
Q: How do we test a `Range` type in Elixir How do we test for a Range type? What would be the equivalent of is_range? Erlang/OTP 21 [erts-10.1] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:1] Interactive Elixir (1.7.3) - press Ctrl+C to exit (type h() ENTER for help) iex(1)> a = 1..10 1..10 iex(2)> is_list a false iex(3)> i a Term 1..10 Data type Range Description This is a struct. Structs are maps with a __struct__ key. Reference modules Range, Map Implemented protocols IEx.Info, Enumerable, Inspect iex(4)> A: Why would you need this function in the first place? Range is a struct. We have pattern matching everywhere you might need it. Just pattern match to %Range{} and you are all set.
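A small sketch of that pattern-matching approach; the helper name range?/1 is just an illustrative choice, not part of any library:

defmodule RangeCheck do
  # Matches only when the argument is a %Range{} struct.
  def range?(%Range{}), do: true
  def range?(_other), do: false
end

RangeCheck.range?(1..10)   # => true
RangeCheck.range?([1, 2])  # => false

# For a one-off check, Kernel.match?/2 works inline:
match?(%Range{}, 1..10)    # => true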
{ "pile_set_name": "StackExchange" }
Q: Use route as url in config with Symfony I use CKEditor and want to use uploadimage plugin. I need to specify uploadUrl in config.yml. How can I put here a route instead of direct url? ivory_ck_editor: default_config: my_config configs: my_config: extraPlugins: "lineutils,widget,notificationaggregator,uploadwidget,notification,uploadimage,wordcount" uploadUrl: '/admin/upload' I know I can redefine config with form builder $builder->add('field', 'ckeditor', array( 'config' => array('uploadUrl' => ...), )); But I want to do it once for every form. Which is the best way? A: If you define your form as a service, you could inject the router and use it to generate the path in your form. (This ignores the possibility of setting it in config.yml.) services: app.form.type.yourformtype: class: AppBundle\Form\YourFormType arguments: [@router] tags: - { name: form.type } Then, in your form: <?php namespace AppBundle\Form use Symfony\Bundle\FrameworkBundle\Routing\Router; use Symfony\Component\Form\FormBuilderInterface; use Symfony\Component\Form\AbstractType; class YourFormType extends AbstractType { private $router; public __construct(Router $router) { $this->router = $router; } public function buildForm(FormBuilderInterface $builder, array $options) { $uploadRouteName = 'app_admin_upload'; // Or whatever maps out to /app/admin, re: your original question ... $builder->add('field', 'ckeditor', array( 'config' => array('uploadUrl' => $this->router->generate($uploadRouteName)), )); ... } } To truly do this once for every form you should consider extending the ckeditor formtype and adding your route to uploadUrl in the configureOptions method using the OptionsResolver.. Then update the service definition to inject the router to that class, and in place of ckeditor in the second argument to add methods, use YourCkeditorExtendedType::class, and you won't need to add config each time.
{ "pile_set_name": "StackExchange" }
Q: Best Algorithm to encrypt password I want to encrypt the password & store the encrypted value in MySQL database. What algorithm is the best way to go for encryption in Java? For each user, I've userid, useremail field as well & for all these users, I need algo to encrypt password. Note: I should be able to decrypt the password as well, since one of our support page shows the password only to support team (Legacy page that cannot be removed). Thanks A: There are standard algorithms for encrypting passwords. You can use PBEWITHSHA256AND256BITAES-CBC-BC from BouncyCastle.
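A rough sketch of how that BouncyCastle PBE algorithm can be driven through the standard JCE API; the master key, salt and iteration count are placeholder assumptions, and storing passwords reversibly at all is only done here because the legacy requirement in the question demands it:

import java.nio.charset.StandardCharsets;
import java.security.Security;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.PBEParameterSpec;
import org.bouncycastle.jce.provider.BouncyCastleProvider;

public class PasswordCrypto {

    private static final String ALGO = "PBEWITHSHA256AND256BITAES-CBC-BC";

    public static byte[] encrypt(char[] masterKey, byte[] salt, String password) throws Exception {
        Security.addProvider(new BouncyCastleProvider());          // register the BC provider
        SecretKeyFactory factory = SecretKeyFactory.getInstance(ALGO, "BC");
        SecretKey key = factory.generateSecret(new PBEKeySpec(masterKey));
        Cipher cipher = Cipher.getInstance(ALGO, "BC");
        cipher.init(Cipher.ENCRYPT_MODE, key, new PBEParameterSpec(salt, 10000));
        return cipher.doFinal(password.getBytes(StandardCharsets.UTF_8));
    }

    public static String decrypt(char[] masterKey, byte[] salt, byte[] ciphertext) throws Exception {
        Security.addProvider(new BouncyCastleProvider());
        SecretKeyFactory factory = SecretKeyFactory.getInstance(ALGO, "BC");
        SecretKey key = factory.generateSecret(new PBEKeySpec(masterKey));
        Cipher cipher = Cipher.getInstance(ALGO, "BC");
        cipher.init(Cipher.DECRYPT_MODE, key, new PBEParameterSpec(salt, 10000));
        return new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8);
    }
}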
{ "pile_set_name": "StackExchange" }
Q: Shrinking a large transaction log on a full drive Someone fired off an update statement as part of some maintenance which did a cross join update on two tables with 200,000 records in each. That's 40 trillion statements, which would explain part of how the log grew to 200GB. I also did not have the log file capped, which is another problem I will be taking care of server wide - where we have almost 200 databases residing. The 'solution' I used was to backup the database, backup the log with truncate_only, and then backup the database again. I then shrunk the log file and set a cap on the log. Seeing as there were other databases using the log drive, I was in a bit of a rush to clean it out. I might have been able to back the log file up to our backup drive, hoping that no other databases needed to grow their log file. Paul Randal from http://technet.microsoft.com/en-us/magazine/2009.02.logging.aspx Under no circumstances should you delete the transaction log, try to rebuild it using undocumented commands, or simply truncate it using the NO_LOG or TRUNCATE_ONLY options of BACKUP LOG (which have been removed in SQL Server 2008). These options will either cause transactional inconsistency (and more than likely corruption) or remove the possibility of being able to properly recover the database. Were there any other options I'm not aware of? A: You could've put the database into Simple recovery model (then use the CHECKPOINT command to make sure that the log is as truncated as possible) and then back into the Full recovery model. Then take a full database backup so that the system realises that it's in Full. Then you can shrink the log at your convenience. Shrinking a log isn't nearly as awful as shrinking the database, which you should almost never do. You haven't deleted the log, you've just truncated it - which kills your option to restore to a point in time. So do a full backup of your database as soon as possible. Also bear in mind that if your log can't grow, your database will stop. So leaving autogrow on isn't a bad option at all... but perhaps set it up to send you an alert when it's filling up.
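A rough T-SQL sketch of the recovery-model flip described above; the database name, backup path and logical log file name are placeholders, and the full backup afterwards is essential because switching to SIMPLE breaks the log chain:

-- Flip to SIMPLE so the log can be truncated at the next checkpoint
ALTER DATABASE MyDb SET RECOVERY SIMPLE;
CHECKPOINT;

-- Back to FULL, then take a full backup to restart the log chain
ALTER DATABASE MyDb SET RECOVERY FULL;
BACKUP DATABASE MyDb TO DISK = N'D:\Backups\MyDb_after_truncate.bak';

-- Now the (mostly empty) log file can be shrunk at your convenience
USE MyDb;
DBCC SHRINKFILE (MyDb_log, 1024);   -- target size in MB; logical file name comes from sys.database_files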
{ "pile_set_name": "StackExchange" }
Q: Passing the text to function on Click Please see the following code: function liReplace(txt) { $('#srch > a').text(txt); } <ul class="dropdown"> <li id='srch'> <a href="#">All Categories</a> <ul class="sub_menu"> <li> <a href="#">Gadgets</a> <ul> <li onclick='liReplace("DVD")'><a href="#" >DVD</a></li> <li><a href="#">XBOX</a></li> <li><a href="#">Ps2</a></li> <li><a href="#">Cellphone</a></li> </ul> </li> <li> <a href="#">Locations</a> <ul> <li><a href="#">Indoor</a></li> <li><a href="#">Outdoor</a></li> </ul> </li> </ul> </li> </ul> I wanted to remove the code onclick='liReplace("DVD")' and change it to jquery. I could use the class="dropdown" to identify it. How do I code and call the clicking of <li> tags inside that ul "dropdown" and pass the text? i was thinking like $("li").click(function () { $("li").val(this.text); }); I know its very wrong, my excuse is that I am still in learning phase. ---ANSWER $("ul.dropdown li > a").click(function () { var old_text = $(this).text(); liReplace(old_text); }); A: Something like that should work: $(".dropdown .sub_menu ul li").on("click", function() { $("#srch > a").text($(this).text()); }); A: function liReplace(txt) { $('#srch > a').text(txt); } $("li a").click(function () { var old_text = $(this).text(); liReplace(old_text); });
{ "pile_set_name": "StackExchange" }
Q: Converting .m3u playlists from Windows for Android media players using Notepad++ Winamp saves playlists that are saved in same folder as the music as relative paths for Windows, but copying and pasting into Android doesn't work unless I convert it to Linux relative paths. So #EXTM3U #EXTINF:262,Corona - Rhythm Of The Night Unsorted\Corona - Rhythm Of The Night.mp3 #EXTINF:324,The B-52's - Love Shack The B-52's - Love Shack.mp3 needs conversion to #EXTM3U #EXTINF:262,Corona - Rhythm Of The Night ./Unsorted/Corona - Rhythm Of The Night.mp3 #EXTINF:324,The B-52's - Love Shack ./The B-52's - Love Shack.mp3 for VLC Player on Android to read the playlist properly. Well, figuring out how to convert \ to / on Notepad++ without regular expressions enabled was easy enough, but I'm too new at regex to get a grip on how to even read the table of contents on its guides even though all I want to do after that is to add ./ to the start of every odd line after the first line. A: You may use (?:.*\R){2}\K and replace with ./. Details (?:.*\R){2} - two consecutive occurrences ({2}) of any 0+ chars other than line break chars, as many as possible (.*), \K - match reset operator discarding all text matched so far from the match buffer. The replacement is ./, i.e. it is inserted at the end of the match.
{ "pile_set_name": "StackExchange" }
Q: Project rotation matrix to closest rotation matrix on a specific plain I have a rotation matrix created by some values of roll, pitch, yaw and I'm looking for the value yaw2 s.t the rotation matrix created by [0, 0, yaw2] is closest to the roration matrix created by [roll, pitch, yaw]. I think It can be rephrased as projecting the rotation matrix to the horizontal plane. I'm defining my rotation matrix as Ryaw X Rpitch X Rroll, but does the answer change if I reverse the order? A: Here is my best guess as to what you are asking. Typically if yaw, pitch, and roll are given by a vector of the form $\left [ \alpha \;\; \beta \;\; \gamma \right ]^T$ then the corresponding rotation matrices associated to this representation are given by: $$ R_z(\alpha) \;\; =\;\; \left [ \begin{array}{ccc} \cos \alpha & - \sin \alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0& 1 \\ \end{array} \right ] \hspace{2pc} R_y(\beta) \;\; =\;\; \left [ \begin{array}{ccc} \cos \beta & 0 & -\sin \beta \\ 0 & 1 & 0 \\ \sin \beta & 0 & \cos \beta \\ \end{array} \right ] \hspace{2pc} R_x(\gamma) \;\; =\;\; \left [ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & \cos\gamma & - \sin \gamma \\ 0 & \sin \gamma & \cos \gamma \\ \end{array} \right ] $$ and a rotation matrix in $SO(3)$ can be represented by $R(\alpha,\beta, \gamma) = R_z(\alpha)R_y(\beta)R_x(\gamma)$. I have a rotation matrix created by some values of roll, pitch, yaw and I'm looking for the value yaw2 s.t the rotation matrix created by [0, 0, yaw2] is closest to the roration matrix created by [roll, pitch, yaw]. What I believe you're asking here is to find some matrix of the form $R_z(\eta)$ which closest represents the matrix $R(\alpha,\beta,\gamma)$ given above. The way we can do this depends on what you mean by "closest". This often has to do with how you measure distance between rotation matrices. How you want to measure the distance is up to you, but you have a couple of options at your disposal: \begin{eqnarray*} \text{Euclidean Norm}: && d(R_z(\eta), R(\alpha, \beta, \gamma)) \;\; =\;\; \left| \left| R_z(\eta) - R(\alpha,\beta,\gamma) \right| \right|_F \\ \text{Left-Invariant Norm}: && d(R_z(\eta), R(\alpha,\beta,\gamma)) \;\; =\;\; \left | \left | \text{Log}\left ( R(\alpha,\beta,\gamma)^TR_z(\eta) \right )\right | \right |_F \\ \end{eqnarray*} If you are to take this approach, my best advice would be to perform some sort of gradient descent procedure, particularly using geodesics. I think It can be rephrased as projecting the rotation matrix to the horizontal plane. If instead you want to interpret this problem simply in terms of projection, then you have to be careful in exactly how you do this. You may be tempted to simply pick a Euclidean basis (i.e. $\hat{x}$ and $\hat{y}$ for your $xy$-plane) and then express your vector components under the rotation $R$ as follows: \begin{eqnarray*} R\hat{x} & = & a_x \hat{x} + a_y\hat{y} + a_z\hat{z} \\ R\hat{y} & = & b_x \hat{x} + b_y\hat{y} + b_z\hat{z}. \end{eqnarray*} It would be an incorrect choice to construct the following matrix: $$ R_z \;\; =\;\; \left[ \begin{array}{ccc} a_x & a_y & 0 \\ b_x & b_y & 0 \\ 0 & 0 & 1 \\ \end{array} \right ]. $$ The reason why here is that the columns of the above matrix are not guaranteed to be orthogonal, let alone of unit norm. The above matrix might also be strangely dubious in the case where we have particular types of rotations (for instance, imagine if $R = R_x(\gamma)$ for any $\gamma$). 
Instead what I would recommend would be to use the exponential and logarithmic maps for matrix Lie groups. Compute the matrix $\Omega = \log(R)$ (in Matlab this would be the function logm). This essentially linearizes rotation matrix $R$ and gives you a matrix of the form: $$ \Omega \;\; =\;\; \left [ \begin{array}{ccc} 0 & \alpha & -\beta \\ -\alpha & 0 & \gamma \\ \beta & -\gamma & 0 \\ \end{array} \right ]. $$ This matrix will essentially contain all of the yaw, pitch, roll data that you need. Because this space is linear (set of skew-symmetric matrices) project to your yaw component here: $$ P_\alpha(\Omega) \;\; =\;\; \left [ \begin{array}{ccc} 0 & \alpha & 0 \\ -\alpha & 0 & 0 \\ 0 & 0 & 0 \\ \end{array} \right ] $$ and now use the exponential map (in Matlab this is expm) and compute: $$ R_z \;\; =\;\; \text{Exp}(P_\alpha (\text{Log}(R))). $$ I'm defining my rotation matrix as Ryaw X Rpitch X Rroll, but does the answer change if I reverse the order? Yes it does. These matrices do NOT commute. If you change their order, you change the rotation that's being represented. A: After thinking and researching the problem further, the solution is to project the perpendicular vector. e.g when the Yaw is the only rotation that is not zero, the up vector remains pointing at (0,0,1), regardless of the yaw value. When taking the full rotation matrix, calculate the direction of the "up vector" by rotating it using the rotation matrix, and then find the rotation matrix that will rotate this new up vector back to (0, 0, 1). This is a common and easy problem, easiest using quaternions. The multiplication of the original rotation matrix, with the new rotation matrix to align the up-vector back to its position will generate the rotation matrix. the last step is to extract the yaw value, this is easy as the rotation matrix is constructed only using sin,cos of the yaw
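A small numerical sketch of the log/exp projection from the first answer, using SciPy's matrix logarithm and exponential; the tiny imaginary parts that logm can introduce are simply discarded:

import numpy as np
from scipy.linalg import expm, logm

def project_to_yaw(R):
    """Closest pure-yaw rotation to R in the sense of the log map."""
    Omega = np.real(logm(R))                       # 3x3 skew-symmetric "linearized" rotation
    P = np.zeros((3, 3))
    P[0, 1], P[1, 0] = Omega[0, 1], Omega[1, 0]    # keep only the alpha (yaw) entries
    return np.real(expm(P))                        # back to a rotation about z

# For small pitch and roll, project_to_yaw(R) stays close to the pure yaw rotation
# contained in R.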
{ "pile_set_name": "StackExchange" }
Q: Manually specify constraints in Xcode 4 interfacebuilder/storyboard This seems like a really basic question, but I can't find the answer to it. When I add elements to a scene in my storyboard, or drag them around or resize them, Xcode automatically adds and removes constraints to describe their sizing and positioning. Now, once Xcode has created a constraint of a particular type - say, a height constraint - I know that I can modify its attributes - Relation, Constant and Priority in the case of the height constraint - via the attributes inspector. What I can't figure out is how to manually add or remove constraints of a given type rather than relying on Xcode's magic to do so for me. For example, in my current scenario, I have a ViewController that contains a toolbar and a table view. Now, I know exactly what constraints I want to use to describe the vertical positioning and sizing of those two views: The top of the toolbar has Vertical Space 0 from the top of the screen The toolbar has a fixed height The bottom of the table view has Vertical Space 0 from the bottom of the screen The top of the table view has Vertical Space 0 from the bottom of the toolbar This way the table view's height will adjust appropriately to the screen size. However, Xcode, in its wisdom, has decided that this isn't what I want, and has instead inflicted the following constraints on me (which it doesn't seem to want to change no matter how much I randomly drag stuff around and pray): The top of the toolbar has Vertical Space 0 from the top of the screen The bottom of the toolbar has Vertical Space 526 from the bottom of the screen The table view has Vertical Space 0 from the bottom of the screen The table view has a fixed height of 526 As a result, everything goes wrong when I try viewing my ViewController on a smaller screen, or in a container: Not being able to just manually set my own constraints when I know exactly what I need is frustrating. How can I explicitly delete the bullshit constraints that Xcode has automatically created and manually add my own instead? A: The problem when working with constraints in IB/storyboard is that Xcode will never allow you to have ambiguous constraints. Ever. Including when you are in the process if editing them. So whenever you may want to edit multiple constraints, Xcode may decide to automatically add some to prevent a disallowed state. This can be painful to work with. What I have found kind of works is adding bogus constraints on all four edges while I am setting other constraints. This - hopefully - keeps IB in check and prevents it from adding completely dumb stuff. You can add constraints from the top menu. Click the object and then Editor from the to menu. You can add constraints with the Align and Pin submenus (which then can be edited later on). Note: not all constraints can be added in IB. Aspect ratio for example can only be done in code. PS. Xcode will only allow you to delete constraints that are redundant, i.e that are not going to leave an ambiguous state. So in order to delete the "b****t" constraints, you have to first add enough others to create an allowed state.
{ "pile_set_name": "StackExchange" }
Q: list files with first char within specific range I need to list all files that have first char within a specific range. If I use Powershell I can do this with gci [a-c]* How can I do it from command line? A: You may use the following command: dir /b | findstr /R "^[a-c].*"
{ "pile_set_name": "StackExchange" }
Q: Z values way too large compared to x and y in reconstruction I reconstruct x,y,z from disparity using triangulation formula .My problem is that x,y, and z values are in very different orders .For eg order of x is like 0.001 and similar for y but z is in the order of 10 .Because of this I see a straight line instead of seeing a face .Is there any way I could apply some transform preserving the structure of face but getting a better reconstruction. EDITED: here is a sample L image and the disparity map ( normalized to 0-255 for visualization not the true values).My point of giving this is to show that disparity comes out fairly decently. A: Assuming that you are solving for the fundamental matrix using point correspondences between the left and right images, this is the expected result. Because the fundamental matrix is rank-deficient, it is only defined up to a scale factor. If you define everything in terms of pixel units, there is no way to reconstruct the scene in real-world units. Solving this requires an additional piece of information: the relationship between a pair of corresponding points in a three-dimensional coordinate frame. For a stereo system, this is most often the baseline, the distance between the left and right camera centers.
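For reference, a hedged sketch of metric triangulation for a rectified stereo pair, assuming the focal length f (in pixels), principal point (cx, cy) and baseline B (in metres) are known from calibration; with these, X, Y and Z come out in the same real-world units instead of wildly different scales:

import numpy as np

def reproject(u, v, disparity, f, cx, cy, B):
    """Pixel coordinates plus disparity -> 3-D points in the left camera frame."""
    Z = f * B / disparity        # depth in metres
    X = (u - cx) * Z / f
    Y = (v - cy) * Z / f
    return np.stack([X, Y, Z], axis=-1)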
{ "pile_set_name": "StackExchange" }
Q: async_network_io issues with BIDS This is related to my previous question drastically different runtimes in BIDS and the SSRS Web Portal. I am struggling with refreshing some reports in BIDS because they run forever. I have now noticed that when I refresh the report in BIDS the query will experience ASYNC_NETWORK_IO waits. (also CXPACKET but I understand that's more an effect than a cause) The post Need help with ASYNC_NETWORK_IO seems to indicate SQL Server is waiting on the client, which I assume in this case is BIDS on my desktop. If it can be identified from this snippet of information, where is the problem here? Is it my desktop and its lack of oomph? Could it be the network? Is my report doing too many calculations on the report-side? Where else should I look? How can I fix it? I will add that oftentimes the data never full returns. Instead I receive the following (redundant) error message: An error occurred during local report processing. An error has occurred during report processing. Exception of type 'System.OutOfMemoryException' was thrown. A: We actually ran into a similar issue at StackOverflow and Kyle blogged about it: http://blog.serverfault.com/2011/03/16/views-of-the-same-problem-network-admin-dba-and-developer/ The problem can be a number of things: Queries bringing back too much data (like select * from a wide table with a lot of XML or binary fields) Client-side apps processing data row by row instead of pulling it all into memory and then doing whatever work necessary Underpowered app server hardware (or in your case, client machines) that are paging to disk or churning on CPU. I see this a lot on over-committed virtual machines. To tune it, start by looking at Perfmon counters on your own machine. My tutorial on it is at http://www.BrentOzar.com/go/perfmon, and it includes a list of counters to gather, how to analyze 'em, and how to interpret your bottleneck.
{ "pile_set_name": "StackExchange" }
Q: What are the differences between the spirit of God and spirit of Man? First, I thought that the spirit Christians get from God is the same as the spirit of God. Recently, I heard a sermon that asserted they are different, but gave no more details. Which verses describe this in the Bible? When we get the Holy Spirit are we getting the spirit of God? A: It helps to first know what the word "Spirit" actually means, which is more of a question for the Hermeneutics site. I do see you've been asking questions about the meaning of the word there, so I'll include a brief bit here as introduction. From http://www.pickle-publishing.com/papers/soul-and-spirit.htm In the New Testament the word for "spirit" is pneuma. Pneuma is translated the following ways: ghost 2 Ghost (with Holy) 90 life 1 spirit 151 Spirit 137 spiritual gift 1 spiritually 1 wind 1 The entire article is very long, but it does a good job of illustrating how a simple word can be interpreted different ways based on the context of the verse. The word "Spirit" can carry the more religious/supernatural connotation associated with the word today, or it can simply mean "life" or "the breath of life". Quite commonly it is used as a literary form of personification: One brief thought before we look at the verses: There are some verses that seem to use "soul" and "spirit" in ways that harmonize with the common concept of the nature of man. How can this be? Is the Bible contradicting itself? Here is one suggestion: It was commonplace for the Bible writers to take parts of man’s being and personify them, give them attributes they did not in actuality possess. Perhaps sometimes they personified the "soul" and "spirit" as well. The most familiar example of a part of a person being personified is the heart. The heart, simply an organ that pumps blood, is said to have qualities that the mind does have, but that the heart definitely does not have. Another example which is not so familiar is the personification of the kidneys, called the "reins" (Ps. 7:9; 16:7; 26:2; 73:21; Prov. 23:16; Jer. 11:20; 12:2; 17:10; 20:12; Rev. 2:23). The kidneys seem to have been made the seat of the affections and emotions. Another example is the use of the words for "bowels" (Ps. 40:8 (translated "heart"); Cant. 5:4; Is. 16:11; 63:15; Jer. 4:19; 31:20; Lam. 1:20; 2:11; Luke 1:78 (translated "tender"); 2 Cor 7:15 (translated "inward affection"); Php. 2:1; Col. 3:12; Phm. 1:7, 20; 1 Jn. 3:17). In the light of these scriptures, the possibility that the Bible writers also occasionally personify the "soul" and "spirit" should be considered. In other words, the "soul" and the "spirit" may in some verses be given qualities that they do not in actuality possess. The article shows the various verses in which the word "Spirit" is used. I'll include just a few: "And the LORD God formed man of the dust of the ground, and breathed into his nostrils the breath (neshamah, pnoe) of life; and man became a living soul" (Gen. 2:7). [God put the "spirit" into Adam's nose.] "And, behold, I, even I, do bring a flood of waters upon the earth, to destroy all flesh, wherein is the breath (ruach, pneuma) of life, from under heaven; and every thing that is in the earth shall die" (Gen. 6:17). [Animals have the "spirit" in them too.] "All the while my breath (neshamah, pnoe) is in me, and the spirit (ruach, pneuma) of God is in my nostrils" (Job 27:3). [The "spirit" lives in the nose.] 
"The Spirit (neshamah, pnoe) of God hath made me, and the breath (ruach, pneuma) of the Almighty hath given me life" (Job 33:4). [God's "spirit" gives us life.] I'm assuming your question deals with verses like the last two shown above. In the context, and general meaning. Job 27:3 has been translates several ways by scholars, in the various versions of the Bible. Examples: GOD'S WORD® Translation (©1995) 'As long as there is one breath [left] in me and God's breath fills my nostrils, American King James Version All the while my breath is in me, and the spirit of God is in my nostrils; Both of these clearly indicate that the Spirit of God is distinct from the breath of life (spirit of man). It indicates that man has a spirit in him, given to him by God, not that the spirit in him is the same spirit that God possesses. The spirit that God gives may be from God, but the connotation of the word "spirit" when it comes to man is simply life itself. We each have our own life - our own spirit - our own ?breath that gives us life" that is certainly a gift from God, but is not the same as God's spirit. It's also completely different than the Holy Spirit, which is the third person of God in the Trinity and, according to most traditions, indwells us at the point of, and remains in us after salvation.
{ "pile_set_name": "StackExchange" }
Q: What is the right xpath/css to use to get each paragraph text in a single string? First I create a HtmlResponse And read it using scrapy: from scrapy.http import HtmlResponse from scrapy.selector import Selector body = """ <div class="a"> <p> text1<br> text2 </p> </div> <div class="a"> <p> text3 </p> </div> """ response = HtmlResponse(url='http://example.com/', body=body) sel = Selector(response) Now, I would like to extract text from this html But I get a list with 2 elements. This is what I have so far tried: sel.xpath('//div[@class="a"]/p/text()').extract() # [u'\n text1', u' text2\n ', u'\n text3\n '] As you note I get 3 text elements for 2 paragraphs? How can I do to get only 2 text elements? [u'text1 text2',u'text3'] Note that I prefer not to use BeautifulSoup since performance is a requirement here. A: With CSS selectors (including Scrapy's ::text extension): >>> from scrapy.http import HtmlResponse >>> from scrapy.selector import Selector >>> >>> body = """ ... <div class="a"> ... <p> ... text1<br> text2 ... </p> ... </div> ... <div class="a"> ... <p> ... text3 ... </p> ... </div> ... """ >>> response = HtmlResponse(url='http://example.com/', body=body) >>> sel = Selector(response) >>> [u''.join(paragraph.css('::text').extract()).strip() for paragraph in sel.css('div.a > p')] [u'text1 text2', u'text3'] >>>
{ "pile_set_name": "StackExchange" }
Q: call functions on events NodeJs Actually I'm trying to call the function demandConnexion after an event, but it is not working for me; it tells me that "this.demandeConnexion is not a function". How can I get it to work? Help, this is the code:

Serveur.prototype.demandConnexion = function(idZEP) {
    if (this.ZP.createZE(idZEP)) {
        console.log(' ==> socket : demande de creation ZE pour '+idZEP +' accepte');
    } else {
        console.log(' ==> socket : demande de creation ZE pour '+idZEP +' refuse');
    }
};

Serveur.prototype.traitementSurConnection = function(socket) {
    // console.log('connexion');
    console.log(' ==> socket connexion');
    // handling of the "ZE connection request" event
    socket.on('connection', (function(idZEP) {
        this.demandConnexion(idZEP)
        console.log('good')
    }))

A: It's because when the callback is called, "this" isn't your "Serveur" instance. In your case try something like

var that = this;
socket.on('connection', (function(idZEP) {
    that.demandConnexion(idZEP)
    console.log('good')
}))

or

socket.on('connection', this.demandConnexion.bind(this));

Another solution (the best in my opinion) would be to use arrow functions to keep the same scope as the enclosing closure:

socket.on('connection', () => {
    // here this refers to your Serveur (the enclosing scope)
});
{ "pile_set_name": "StackExchange" }
Q: Tab-completion of shell patterns On my Debian servers I'm used to hitting Tab to "preview" the expansion of shell patterns: $ cp *some*<Tab> something somewhat have-some-cake $ cp *some*_ When the pattern expands to one entry, Tab replaces the pattern with the actual entry; otherwise it shows a list of matching entries. This is intuitive and useful because it's consistent with the regular "prefix" Tab completion. But my Ubuntu servers and desktops behave differently: even when it would expand to more than one entry, Tab replaces the pattern with the first entry. I have checked the usual suspects (/etc/bash.bashrc, /etc/inputrc, and the local versions) and I couldn't find any difference. Does anybody know which setting controls this behaviour? A: Contrary the other answer, this particular problem is probably a direct result of using bash-completion. The bash-completion package has several bugs (as noted in this U&L answer about a similar problem, for instance). If I comment out this section in my .bashrc: # enable programmable completion features (you don't need to enable # this, if it's already enabled in /etc/bash.bashrc and /etc/profile # sources /etc/bash.bashrc). if ! shopt -oq posix; then if [ -f /usr/share/bash-completion/bash_completion ]; then . /usr/share/bash-completion/bash_completion elif [ -f /etc/bash_completion ]; then . /etc/bash_completion fi fi and start a new instance of bash, then I get: $ echo *o*<tab><tab> foo food foo.sh $ echo *o* And then if I source the /usr/share/bash-completion/bash_completion script like it was in the .bashrc: $ . /usr/share/bash-completion/bash_completion $ echo foo The *o* was immediately autocompleted to foo without showing the other matches. I'm using 16.04, by the way. I don't know if this has been fixed in newer releases. $ dpkg-query --show --showformat='${Package} ${version}\n' bash bash-completion bash 4.3-14ubuntu1.2 bash-completion 1:2.1-4.2ubuntu1.1
{ "pile_set_name": "StackExchange" }
Q: Connecting to Google via OAuth 2, "invalid_request" when requesting an access token There are several questions along these lines already on SE, but I've read everything I can find that seems relevant, and I'm still not quite there. I got an authentication code, so now I need to exchange it for an access token and a refresh token. However, Google returns the wonderfully non-specific error "invalid_request". Here's my code: private const string BaseAccessTokenUrl = "https://accounts.google.com/o/oauth2/token"; private const string ContentType = "application/x-www-form-urlencoded"; public static string GetRefreshToken(string clientId, string clientSecret, string authCode) { Dictionary<string, string> parameters = new Dictionary<string, string> { { "code", authCode }, { "client_id", clientId }, { "client_secret", clientSecret }, { "redirect_uri", "http://localhost" }, { "grant_type", "authorization_code" } }; string rawJson = WebUtilities.Post(BaseAccessTokenUrl, parameters, ContentType); return rawJson; // TODO: Parse out the actual refresh token } My Post() method URL-encodes the parameters keys and values and concatenates them: public static string Post(string uri, Dictionary<string, string> properties, string contentType = "application/x-www-form-urlencoded") { string content = String.Join("&", from kvp in properties select UrlEncode(kvp.Key) + "=" + UrlEncode(kvp.Value) ); return Post(uri, content); } The two-parameter Post() method just handles converting the content to bytes, adding content-length, etc., then returns the contents of the response even if it came as a WebException. I can include it if it's of any interest. The authorization code looks right, it's similar to others I've seen: 62 characters, and it starts with "4/". The client ID, secret, and redirect URL I've carefully copied from the Google API Console. The app is registered as an "Other" app, and I'm connecting from a Windows machine. Per this and this post, I've tried NOT URL-encoding, with no change. The OAuth Playground suggests that URL-encoding is correct. Per this post and this one, the properties are concatenated on a single line. Per this post, I've tried approval_prompt=force in the authorization request, but the new auth code did not work any better. Do auth codes expire? I'm using new codes within a few seconds, usually. Per the Google docs and this post, I'm using content-type "application/x-www-form-encoded". My authorization request is for scope "https://www.googleapis.com/auth/analytics.readonly". Per this post, there's no leading question mark in the parameters. There is a Google .NET OAuth library, but I was not able to get it working easily, and ~50,000 lines of code is more than I'd like to study if I have a choice. I prefer to write something clean from the ground up than to blindly copy over a bunch of libraries, cargo cult-style. A: Found it. The redirect_uri used to request tokens needs to match what was used when getting the authorization code. 
Here's my working code to get an auth code: private const string BaseAuthorizationUrl = "https://accounts.google.com/o/oauth2/auth"; public string GetAuthorizationUrl(string clientId, IEnumerable<string> scopes) { var parameters = new Dictionary<string, string> { { "response_type", "code" }, { "client_id", clientId }, { "redirect_uri", RedirectUrl }, { "scope", String.Join(" ", scopes) }, { "approval_prompt", "auto" } }; return WebUtilities.BuildUrl(BaseAuthorizationUrl, parameters); } ...here's my code to get an access token and a refresh token: private const string BaseAccessTokenUrl = "https://accounts.google.com/o/oauth2/token"; public void GetTokens(string clientId, string clientSecret, string authorizationCode, out string accessToken, out string refreshToken) { var parameters = new Dictionary<string, string> { { "code", authorizationCode }, { "redirect_uri", RedirectUrl }, // Must match that used when authorizing an app { "client_id", clientId }, { "scope", String.Empty }, { "client_secret", clientSecret }, { "grant_type", "authorization_code" } }; string rawJson = WebUtilities.Post(BaseAccessTokenUrl, parameters, "application/x-www-form-urlencoded"); dynamic parsedJson = JsonUtilities.DeserializeObject(rawJson); accessToken = parsedJson.access_token; refreshToken = parsedJson.refresh_token; } ...here's the code to get a fresh access token: public string GetAccessToken(string clientId, string clientSecret, string refreshToken) { var parameters = new Dictionary<string, string> { { "client_id", clientId }, { "client_secret", clientSecret }, { "refresh_token", refreshToken }, { "grant_type", "refresh_token" } }; string rawJson = WebUtilities.Post(BaseAccessTokenUrl, parameters, "application/x-www-form-urlencoded"); dynamic parsedJson = JsonUtilities.DeserializeObject(rawJson); return parsedJson.access_token; }
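For completeness, a rough idea of how those three methods might be wired together in a console flow, called from within the same class (RedirectUrl, WebUtilities and JsonUtilities are the same assumed helpers as above, and clientId/clientSecret come from the API Console; the paste-the-code prompt is just the simplest way to close the loop for an installed app):

// Hypothetical driver for the helpers above.
var scopes = new[] { "https://www.googleapis.com/auth/analytics.readonly" };
string authUrl = GetAuthorizationUrl(clientId, scopes);
Console.WriteLine("Open this URL in a browser, grant access, then paste the code here:");
Console.WriteLine(authUrl);
string authorizationCode = Console.ReadLine();

string accessToken, refreshToken;
GetTokens(clientId, clientSecret, authorizationCode, out accessToken, out refreshToken);
// Store refreshToken somewhere durable; access tokens expire, so later calls just do:
string freshAccessToken = GetAccessToken(clientId, clientSecret, refreshToken);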
{ "pile_set_name": "StackExchange" }
Q: How to avoid or report targeting I have heard many reports of a user being watched by a specific person, and that person constantly downvoting all posts by that user, and adding comments to put them down, such as "How do you not know this" or "Have you ever used a computer before?", as well as other such verbiage. He reported the person several times but no action was taken for almost 8 months. Is there any expedited process to report and remove these people from commenting on questions? A: Hi! I'm a moderator! Although that makes me one of a declining breed around here, there are a few of us who still exist. As a moderator, I do a variety of things to help make Stack Overflow a better place, including removing non-answers, closing off-topic questions, investigating vote fraud, and investigating trolling/harassment. If you notice something going wrong on the site, and you want to bring it to a moderator's attention, you can do so by raising a flag. Flags are strictly confidential—the only people who will see them are moderators, and we're bound by a confidentiality agreement that prevents us from disclosing any details about flags or flaggers in public. This ensures that you can report something to us without any fear of retaliation. We consider and investigate every flag that is raised. We may not always act on them, but we do pay them careful attention. If you want us to look at something, or something is going on that makes you feel uncomfortable, please raise a flag to let us know. In most cases, what you'll be flagging is a post (question or answer) or comment. The former have a handy "flag" link underneath them, while the latter have a little flag icon in the left-hand margin. The UI explicitly supports flagging for these because content is the focus of this site, so most of what gets reported and addressed by moderators is content. We would prefer in every case that you report content, if possible, not users. For example, flag the abusive comment, not the user who left it. Sometimes, though, it's not possible. For example, you want to report a pattern of behavior by a particular user. That's okay, you are allowed to do that, and since flags are confidential, there's no way the user you're reporting will find out. To report a user, just flag one of their posts. Or one of your own posts. It really doesn't matter. Choose the "needs moderator attention" option, which gives you a text box to type into. Use that text box to explain what your concern is and provide as much evidence as possible—links to Q&A where you see evidence of harassment or targeting behavior, for example. It's okay if they're deleted: moderators can see deleted content. Either way, we'll investigate, and if we conclude that something untoward is happening, then we'll deal with it appropriately. We have the ability to see lots of different information (including, as I mentioned, deleted posts and comments), as well as to see histories of users' behavior, which allows us to judge patterns. When targeting becomes harassment, it is not appropriate here. It is a clear violation of the Code of Conduct even in its simplest form (be nice, and respect one another), so it would absolutely not be allowed. Now, of course, as Alexei mentions in his answer, you need to be discerning about what constitutes "targeting". If you're saying to yourself, why whenever I post a question about x86 assembly language does this fellow Peter Cordes show up and leave a comment or an answer—is he targeting me? 
Well, yeah, I guess you could kind of say that in a certain sense, but not really. He's targeting assembly language questions, because that's one of his major areas of expertise. If you ask an assembly language question on Stack Overflow, you're very likely to run into him. If you post an incorrect assembly language answer on Stack Overflow, you're very likely to get a downvote from him. It doesn't mean that he's targeting you, and it's only a problem if he, as you mention in the question, were to start leaving rude comments or otherwise harassing you. (I'm picking on Peter here because I know him. There are dozens or hundreds of equivalent examples in other tags. We call these "subject matter experts", and we're very thankful for their regular high-quality contributions.) Thankfully, moderators are smart enough to tell the difference between innocent, coincidental targeting and abusive behavior, and that's one of the things we'll investigate in response to a flag. In summary, if you want us to take a look at something that is bothering you or seems suspicious, please raise a flag to let us know and we'll be happy to do so. A: I expect "this user follows me" to happen very frequently, as a person who has multiple questions on the same topic ends up with a relatively small group of answerers who see those questions. While it may look like the answerers "follow you", it's most likely the other way around. It's like waking up at 7am daily and finding the Sun out by then - it's strange to blame the Sun for watching you... The best way I know of (and have seen used) to not be targeted by a single user is to post subsequent questions/groups of questions in different, unrelated tags, preferably at very different times of the day. Most tags have a relatively small set of people who answer/curate questions, and each person tends to be most active at a particular time of day. This leads to the perceived "this @#$@#$ follows me" when one posts similarly styled questions on the same subject around the same time. The other things to consider: do not ask questions that request a continuation of your work. I.e., if your assignment is "read multiple values into a list and print them in reverse order", then asking "how to read a value", "what is a list", "previous questions showed how to read a value, how do I put it in a list", "previous questions showed how to add a value, how do I reverse it"... may look like a lack of understanding of basic concepts as well as a lack of effort to learn things. Some people can lash out when that happens. Avoid any personal writing style that stands out while asking not-so-on-topic questions. While SO gets a lot of low/average-quality questions, it is rare to see those questions asked with a good quality of writing. Those stand out and are likely to trigger "I think I've seen something from that person... totally off-topic too... let me double-check all the questions they asked". Note that if you do see genuinely offensive/unkind comments - flag them appropriately. But I would not expect to see punishment for a "did you even read anything" type of comment on a "how to declare a variable in JavaScript" type of question... What is expected to happen is that the author of the comment gets notified of their undesired behavior and changes that behavior as a result - this outcome is not really visible to the flagger by design. If the behavior continues despite flags, bringing up "why did my specific question attract unkind comments" may be a way to get help (assuming you believe the post is of good quality), or you can contact the community managers via "contact us".
{ "pile_set_name": "StackExchange" }
Q: Oracle SQL How can I separate values from a column in two different columns? I want to code a query that returns the description of some concepts and their respective price, but I want to make two different columns to categorise two different categories of items. Is it possible? SELECT b.descripcion CONCEPTO, a.cantidad, a.importe, c.descripcion FROM detalles_liquidaciones a JOIN conceptos b ON (a.codigo_concepto = b.codigo) JOIN tipos_conceptos c ON (b.codigo_tipo = c.codigo) WHERE a.numero_liquidacion = 13802 AND c.descripcion IN ('HABER', 'RETENCION', 'ANTICIPO'); Output Query I want to code something like this: Ideal query A: Could this work? (Perhaps there's a better solution, it feels a bit tricky... literally "switching"): SELECT Concepto , (CASE WHEN descripcion LIKE 'HABER' THEN importe ELSE NULL END) haberes, (CASE WHEN descripcion LIKE 'HABER' THEN cantidad ELSE NULL END) cantidad, (CASE WHEN descripcion LIKE 'RETENCION' OR descripcion LIKE 'ANTICIPO' THEN importe ELSE NULL END) retenciones FROM (SELECT b.descripcion concepto, a.cantidad, a.importe, c.descripcion FROM detalles_liquidaciones a JOIN conceptos b ON (a.codigo_concepto = b.codigo) JOIN tipos_conceptos c ON (b.codigo_tipo = c.codigo) WHERE a.numero_liquidacion = 13802 AND c.descripcion IN ('HABER', 'RETENCION', 'ANTICIPO'));
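If the database is Oracle 11g or newer, the same idea can also be written with a PIVOT clause. This is only a sketch built on the tables named above (the inner query aliases c.descripcion as tipo so it does not clash with the concept description); adjust the aggregate and column list to taste:

SELECT concepto, cantidad, haber, retencion, anticipo
FROM (SELECT b.descripcion concepto, a.cantidad, a.importe, c.descripcion tipo
        FROM detalles_liquidaciones a
        JOIN conceptos b ON (a.codigo_concepto = b.codigo)
        JOIN tipos_conceptos c ON (b.codigo_tipo = c.codigo)
       WHERE a.numero_liquidacion = 13802
         AND c.descripcion IN ('HABER', 'RETENCION', 'ANTICIPO'))
PIVOT (MAX(importe) FOR tipo IN ('HABER' AS haber, 'RETENCION' AS retencion, 'ANTICIPO' AS anticipo));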
{ "pile_set_name": "StackExchange" }
Q: how to add UILabel within a UILabel? This question looks very usual.. but I had a problem yesterday, I have added a label in xib and created outlet for it, I want to have multiple lines, buttons inside that label, So I have decided to add new sub labels inside it... UILabel *label; label = [[UILabel alloc] initWithFrame:initial_rect]; label.font = [UIFont systemFontOfSize:[UIFont smallSystemFontSize]]; label.text = sel_obj->start_time; label.backgroundColor = [UIColor whiteColor]; [xib_label addSubview:label] When I tried like that, it was not working, then I have added the same label in self.view it works fine.. So what do I need to do when I want to add a label within a label which was added using xib. Am I missing anything here.. thanks... A: I came across this situation long ago. Just FYI : I think the problem is the initial_rect, which is in the view's frame but out of the xib_label's frame. The frame of label is relative to xib_label, not self.view. You can try this : UILabel *label; CGRect rect = (CGRect){0, 0, 10, 10}; label = [[UILabel alloc] initWithFrame:rect]; label.font = [UIFont systemFontOfSize:[UIFont smallSystemFontSize]]; label.text = sel_obj->start_time; label.backgroundColor = [UIColor redColor]; [xib_label addSubview:label]; Change the backgroundColor to red to see the label clearly. And if you want to show multiple lines in a label, you can : label.numberOfLines = 0;.
{ "pile_set_name": "StackExchange" }
Q: pyftpdlib slow .read on file blocks entire mainloop Hello, I am using a custom AbstractFS on pyftpdlib that maps files on a HTTP server to FTP. These files are returned by my implementation of open (of AbstractFS) which returns a httplib.HTTPResponse wrapped by the following class: class HTTPConnWrapper: def __init__(self, obj, filename): # make it more file obj like self.obj = obj self.closed = True self.name = filename.split(os.sep)[-1] def seek(self, arg): pass def read(self, bytes): #print 'read', bytes read = self.obj.read(100) #we DON'T read var bytes, but 100 bytes #print 'ok' return read The problem is that if a client is downloading files the entire server becomes sluggish. What can I do? Any ideas? PS: And why doesn't just monkey patching everything with eventlet magically make everything work? A: Ok, I posted a bug report on pyftpdlib: I wouldn't even know what to recommend exactly as it's a problem which is hard to resolve and there's no easy or standard way to deal with it. But I got a crazy solution to solve this problem without using pyftpdlib. rewrite everything using wsgidav (which uses the cherrypy wsgiserver, so it's threaded) mount that WebDAV filesystem as native filesystem (net use on windows, mount.davfs on linux) serve this mounted filesystem with any ftp server that can handle blocking file systems
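For anyone who wants to try that last route, the moving parts might look roughly like this (ports, paths and the exact wsgidav flags are illustrative and depend on the wsgidav version, so check its --help first):

$ pip install wsgidav cheroot    # cheroot (or the bundled CherryPy server on older versions) does the threading
$ wsgidav --host=127.0.0.1 --port=8080 --root=/srv/files --auth=anonymous
$ sudo mount -t davfs http://127.0.0.1:8080/ /mnt/dav    # mount.davfs on Linux; "net use" on Windows
# then point any FTP server that tolerates blocking filesystems (vsftpd, etc.) at /mnt/dav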
{ "pile_set_name": "StackExchange" }
Q: How can I duplicate the terminal/prompt in a file like using history + sink I know the sink command can divert the stdout to a file, but basically if I do this in the command window: library(data.table) a = 1; b = 2; a [1] 1 Only the last line [1] 1 will be printed in the file. Is there a way my whole command window could be printed to a file like it is done with sink? NOTE: I want it to be done each time I write something to avoid losing everything if R crashes, meaning I do not want to have to type printAllCommandToFile() for this to be done A: What about txtStart from the "TeachingDemos" package? See here. Sometimes, when introducing students to R, I've recommended it to help them remember what they did and what the results were, a situation somewhat like you describe. In my experience on a Linux machine, even if you close R without calling txtStop, the output is saved to whatever text file you had specified at the start of your session.
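For reference, a minimal session with it might look like this (the file name is arbitrary, and the package has to be installed once with install.packages("TeachingDemos")):

library(TeachingDemos)
txtStart("session_log.txt")   # from here on, commands and their output are echoed to the file
a <- 1; b <- 2
a + b
txtStop()                     # closes the log; per the answer above, the file tends to survive a crash even without this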
{ "pile_set_name": "StackExchange" }
Q: Estimating the $L^1$ norm of the Dirichlet kernel Suppose $D_N(x)=\frac{\cos\frac{x}{2}-\cos(N+\frac{1}{2})x}{\sin\frac{x}{2}}$. How to prove the inequality below$$\int_{-\pi}^\pi|D_N(x)|\text{d}x\leq c\log N$$ for some constant $c>0$ ? A: By recalling that $$D_n(x)=\sum_{k=-n}^n e^{ikx}=1+2\sum_{k=1}^n\cos(kx)=\frac{\sin\left(\left(n +1/2\right) x \right)}{\sin(x/2)}\tag{1}$$ it is not difficult to locate the stationary points of $D_n(x)$ in $(-\pi,\pi)$ and conclude that $$ \left|D_N(x)\right|\leq \min\left(2N+1,\frac{\pi}{|x|}\right)\tag{2}$$ from which: $$ \int_{-\pi}^{\pi}|D_N(x)|\,dx\leq \int_{-\frac{\pi}{2N+1}}^{\frac{\pi}{2N+1}}N\,dx+2\pi\int_{\frac{\pi}{2N+1}}^{\pi}\frac{dx}{x}\leq 10\log N\tag{3} $$ for any $N\geq 8$.
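For completeness, here is the short calculation behind the pointwise bound $(2)$ (a standard estimate, spelled out): $$\left|D_N(x)\right|=\Big|\sum_{k=-N}^{N}e^{ikx}\Big|\le 2N+1,\qquad \left|D_N(x)\right|=\frac{\left|\sin\left(\left(N+\tfrac12\right)x\right)\right|}{\left|\sin\tfrac{x}{2}\right|}\le\frac{\pi}{|x|},$$ where the second estimate uses $|\sin t|\ge \tfrac{2}{\pi}|t|$ for $|t|\le\tfrac{\pi}{2}$, i.e. $\left|\sin\tfrac{x}{2}\right|\ge \tfrac{|x|}{\pi}$ for $|x|\le\pi$. Taking the minimum of the two bounds gives $(2)$, and the logarithm in $(3)$ then comes from $\int_{\pi/(2N+1)}^{\pi}\frac{dx}{x}=\log(2N+1)$.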
{ "pile_set_name": "StackExchange" }
Q: Magento 2 - How to show tax and shipping costs in mini-cart pop up How to show tax and shipping costs in mini cart popup in Magento 2? A: To show additional stuff or costs in the minicart (like taxes or shipping costs), you need to understand where the data in the minicart comes from. The Customer Data Section Pool If you add a product to the cart, you can see in your inspector a XHR request is made to /customer/section/load with the arguments ?sections=cart,messages. This controller checks in the section pool (Magento\Customer\CustomerData\SectionPoolInterface) for objects that provide data for the required sections. This data is saved in the local storage of your browser and utilized in the minicart. This design pattern is called an Object Pool and they are used thoroughly in Magento 2. You can hook into this Object Pool very easily by adding the following to your modules' di.xml: <type name="Magento\Customer\CustomerData\SectionPoolInterface"> <arguments> <argument name="sectionSourceMap" xsi:type="array"> <item name="my-section" xsi:type="string">Vendor\Module\CustomerData\Something</item> </argument> </arguments> </type> As for the Something-class in this example: the only requirement is that it implements \Magento\Customer\CustomerData\SectionSourceInterface so it has the getSectionData()-method. This method should return an array with data that is added to the JSON object when you ask the controller to load it (for example: /customer/section/load?sections=my-section. Extending existing functionality Now as for your question the answer is even simpler: since you want to add some extra quote information (like taxes and shipping costs) you can simply use a plugin to hook into Magento\Checkout\CustomerData\Cart::getSectionData() to add your information to the data that is fetched when /customer/section/load?sections=cart is called. In your di.xml add: <type name="Magento\Checkout\CustomerData\Cart"> <plugin name="my_custom_stuff" type="Vendor\Module\Plugin\Magento\Checkout\CustomerData\Cart"/> </type> And in your plugin: /** * @param \Magento\Checkout\CustomerData\Cart $subject * @param array $result * @return array */ public function afterGetSectionData(\Magento\Checkout\CustomerData\Cart $subject, array $result) { $result['something'] = 'Stuff'; return $result; } Now what you do in your plugin is beyond the scope of this help, so you have to figure out on your own how to fetch the Tax and/or shipping costs at this point, but I'm pretty sure you'll figure that out. Implementing it in the Frontend Now we have a hook where we can add data from the backend to the frontend. Please note that this is only executed when the customer section data is loaded! or even better: as soon as you add a product to the cart. In other words: you won't have this data on the frontend without updating your cart first. Now, the data in our plugin is added to our cart-node in the JSON, and we can access this in the Magento_Checkout/js/view/checkout/minicart/subtotal/totals UI Component. This has a property called cart which is an observable that contains everything from our JSON response. This becomes very clear if you look at the template Magento_Checkout/minicart/subtotal/totals.html: <div class="amount"> <span data-bind="html: cart().subtotal"></span> </div> In this file, cart().subtotal reflects that cart.subtotal-item in the JSON data. Adding an extra rule to the minicart Now here comes the most important part (and probably the answer to your question): How can we show our extra data in the minicart? 
Well the minicart is made up of a bunch of UI Components tangled into each other, so without too much further explanation this is how you set it up. Add checkout_cart_sidebar_total_renderers.xml to your modules' frontend/layout-folder: <?xml version="1.0"?> <page xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="urn:magento:framework:View/Layout/etc/page_configuration.xsd"> <body> <referenceBlock name="minicart"> <arguments> <argument name="jsLayout" xsi:type="array"> <item name="components" xsi:type="array"> <item name="minicart_content" xsi:type="array"> <item name="children" xsi:type="array"> <item name="subtotal.container" xsi:type="array"> <item name="children" xsi:type="array"> <!-- Include stuff in MiniCart: --> <item name="stuff" xsi:type="array"> <item name="component" xsi:type="string">uiComponent</item> <item name="config" xsi:type="array"> <item name="template" xsi:type="string">Vendor_Module/checkout/minicart/stuff</item> </item> <item name="children" xsi:type="array"> <item name="subtotal.totals" xsi:type="array"> <item name="component" xsi:type="string">Magento_Checkout/js/view/checkout/minicart/subtotal/totals</item> <item name="config" xsi:type="array"> <item name="template" xsi:type="string">Vendor_Module/checkout/minicart/stuff/totals</item> </item> </item> </item> </item> </item> </item> </item> </item> </item> </argument> </arguments> </referenceBlock> </body> </page> And the template frontend/web/checkout/minicart/stuff.html: <div class="my-stuff"> <span class="label"> <!-- ko i18n: 'My Custom Stuff' --><!-- /ko --> </span> <!-- ko foreach: elems --> <!-- ko template: getTemplate() --><!-- /ko --> <!-- /ko --> </div> And the template frontend/web/checkout/minicart/stuff/totals.html: <div class="amount price-container"> <span class="price-wrapper" data-bind="html: cart().something"></span> </div> This should do the trick. Haven't tested this example, but it's taken from an implementation I did where I need to add the FPT, Shipping costs and Grand Total to the minicart and it worked like a charm! It's so easy! ;-)
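To make the earlier plugin a little more concrete, its body could look something like the sketch below. The session and price-helper classes are standard Magento 2 APIs, but treat this strictly as an illustration: how you inject them, and which totals you expose (tax, shipping, or both), is up to your implementation.

// Assumed constructor-injected dependencies:
//   \Magento\Checkout\Model\Session $checkoutSession
//   \Magento\Framework\Pricing\Helper\Data $priceHelper
public function afterGetSectionData(\Magento\Checkout\CustomerData\Cart $subject, array $result)
{
    $quote = $this->checkoutSession->getQuote();
    $address = $quote->isVirtual() ? $quote->getBillingAddress() : $quote->getShippingAddress();

    // Exposed as cart().tax and cart().shipping in the knockout templates
    $result['tax'] = $this->priceHelper->currency($address->getTaxAmount(), true, false);
    $result['shipping'] = $this->priceHelper->currency($address->getShippingAmount(), true, false);

    return $result;
}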
{ "pile_set_name": "StackExchange" }
Q: Drywall taping/floating steps if finish will be skim coating I am finishing up a master bath remodel which involved removing some walls, and creating new walls. I have: Hung new sheetrock for the new walls Patched the ceiling where the old walls were with drywall (~4.5" wide strips) Patched some gaps with hot mud Taped all the joints and covered with the first layer of all-purpose joint compound. The next step will be to sand down this first layer as prep for the subsequent layers, repeating until the joints 'disappear'. For the final finish, we have decided to skim coat as we like the way this worked and looked in our previous bathroom. My question is: Do I need to do the subsequent (2 or 3) layers of joint and screw-divot feathering if I intend to put 2 or 3 coats on for skimming? I have not skim-coated a wall before, and I just don't know how much of the surface topography I will be able to hide. Thanks! A: Read the datasheet for the compound you want to use to skim coat and see what's the maximum allowable depth for a layer. (For European products I'm familiar with it's about 2 or 3 mm.) Then you can decide what dimples need pre-covering and which can simply get covered in the skim coat. If you exceed the max depth you might get hairline cracks, but [in my experience] that only happens on larger areas, not screw-size... and you can fix it anyway by filling the crack(s) with the same compound, which usually takes less time than strictly following the prescribed layer depth. YMMV.
{ "pile_set_name": "StackExchange" }
Q: How to get the value from JSON in JS I'm trying to get the "formatted_address" value from this JSON file. I'm new to this and found the documentation quite confusing. The code I have now is the following where the variable "location" is the url generated like the one above. $.getJSON(location, function( data ){ stad = data.results.formatted_address; console.log(stad); }); How would I achieve this? A: results is an array, so you need to access it as one. Given your example with only one item, you can access it directly by index: var stad = data.results[0].formatted_address; // = "'s-Hertogenbosch, Netherlands" If there were multiple items in the array you would need to loop through them: for (var i = 0; i < data.results.length; i++) { var stad = data.results[i].formatted_address; // do something with the value for each iteration here... }
{ "pile_set_name": "StackExchange" }
Q: How to append the string variables using stringWithFormat method in Objective-C I want to append the strings into a single variable using stringWithFormat. I know how to do it using stringByAppendingString. Please help me append the strings using stringWithFormat for the code below. NSString* curl = @"https://invoices?ticket="; curl = [curl stringByAppendingString:self.ticket]; curl = [curl stringByAppendingString:@"&apikey=bfc9c6ddeea9d75345cd"]; curl = [curl stringByReplacingOccurrencesOfString:@"\n" withString:@""]; Thank You, Madan Mohan. A: You can construct your curl string using the -stringWithFormat: method: NSString *apiKey = @"bfc9c6ddeea9d75345cd"; NSString* curl = [NSString stringWithFormat:@"https://invoices?ticket=%@&apikey=%@", self.ticket, apiKey];
{ "pile_set_name": "StackExchange" }
Q: Recover files from a failed RAID 1 - How do I rsync if source file size is bigger I have 2 drives that were on a mirrored RAID. The server crashed (failed array) and there are no backups. The client was setup with Carbonite, but failed to renew their plan, which expired this past July. Now they're stuck with a down server and no backups. I pulled the drives, both of which appear when I connect them individually to my computer using a SATA to USB adapter. Both drives obviously have issues, but I've been able to recover most (almost all) of the files off of one of the drives. The recovery destination is a NAS in my office. Now, I would like to attempt a recovery from the second drive. I think it makes sense to ONLY overwrite the files on the destination NAS if the source (2nd hard drive) file size is bigger. My theory is that if files were corrupted on the first drive but not corrupted on the 2nd drive, they would be smaller (and vice-versa for good files on the 1st drive that are bad on the 2nd). Is this a stupid theory? Should I just stop acting like an idiot and create 2 separate directories for "disk 1" and "disk 2" and let the client sort it out? If my idea is sound, how would I go about this using rsync? I know how to use rsync in general, and I know how to use rsync if the file sizes are different or if the file timestamps are different, but I don't know how to do a conditional copy if (and only if) the source is bigger than the destination. A: I think it makes sense to ONLY overwrite the files on the destination NAS if the source (2nd hard drive) file size is bigger. File size would typically be the same; the filesystem tracks file size and corruption won't care about file length boundaries. Should I just stop acting like an idiot and create 2 separate directories for "disk 1" and "disk 2" and let the client sort it out? Best to leave data recovery to the professionals if you're not quite sure what you're doing. No article you read online or answer you'll get here will give you the background and experience needed to do a full and proper recovery. If the data is critical for your customer they need to pay up and get it professionally taken care of.
{ "pile_set_name": "StackExchange" }
Q: Why canvas is not working locally but working on TRYIT I Have build some code related to canvas but code is working on TRYIT but code is not working locally when i have copied all code to file and tried to run it . This is what this code is doing , it takes an image and set the width and height of canvas with respect to that image and draw a filled circle with text in it on that image(canvas). Here is code <head> <meta charset=utf-8 /> <title>Draw a circle</title> </head> <body onload="draw();"> <canvas id="circle"></canvas> </body> <script> var canvas = document.getElementById('circle'), context = canvas.getContext('2d'); function draw() { base_image = new Image(); base_image.src = 'http://sst-system.com/old/Planos/C21E34.JPG'; var canvas = document.getElementById('circle'); if (canvas.getContext) { base_image = new Image(); base_image.src = 'http://sst-system.com/old/Planos/C21E34.JPG'; var ctx = canvas.getContext('2d'); ctx.beginPath(); ctx.canvas.width = base_image.width; ctx.canvas.height = base_image.height; var X = 500; var Y = 229; var R = 6.4; ctx.font = "15px Arial bold"; ctx.beginPath(); ctx.arc(X, Y, R, 0, 2 * Math.PI, false); ctx.lineWidth = 12; ctx.strokeStyle = '#FF0000'; ctx.drawImage(base_image, 0, 0) ctx.stroke(); ctx.fillText("TT", X-9, Y+5); } } </script> There are no errors on console , but it shows these warnings in console : A: As @DBS mentioned, it was a speed thing. You weren't waiting for the image to load before working with it. The fix is to attach a listener to the load event of the image, either using image.addEventListener('load', () => { or the deprecated-but-still-works style of image.onload = () => {, which I have used below. The reason that it works in the TryIt example is that the image is cached by the browser from the second load onwards, so it is available immediately and you don't need to wait for it to load. I suspect that when you run it locally, if you have Devtools open, the cache is disabled due to an option in Devtools settings called "Disable cache (while DevTools is open)". So it will never be pulled from the cache, and thus never work. The following code works: <html> <head> <meta charset=utf-8 /> <title>Draw a circle</title> </head> <body onload="draw();"> <canvas id="circle"></canvas> </body> <script> var canvas = document.getElementById('circle'), context = canvas.getContext('2d'); function draw() { base_image = new Image(); base_image.src = 'http://sst-system.com/old/Planos/C21E34.JPG'; var canvas = document.getElementById('circle'); if (canvas.getContext) { base_image = new Image(); base_image.src = 'http://sst-system.com/old/Planos/C21E34.JPG'; // The key change: put the rest of the code inside the onload callback // to wait for the image to load before using it. base_image.onload = function() { var ctx = canvas.getContext('2d'); ctx.beginPath(); ctx.canvas.width = base_image.width; ctx.canvas.height = base_image.height; var X = 500; var Y = 229; var R = 6.4; ctx.font = "15px Arial bold"; ctx.beginPath(); ctx.arc(X, Y, R, 0, 2 * Math.PI, false); ctx.lineWidth = 12; ctx.strokeStyle = '#FF0000'; ctx.drawImage(base_image, 0, 0) ctx.stroke(); ctx.fillText("TT", X - 9, Y + 5); }; } } </script> </html>
{ "pile_set_name": "StackExchange" }
Q: RequiredValidator is not working in asp.net I want to validate some text box and dropdownlist control not to empty, like below highlight part: and my GridView control code looks like below: <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False" DataKeyNames="EMPLOYEEID" DataSourceID="SqlDataSource1" ShowFooter="True"> <Columns> <asp:CommandField ShowDeleteButton="True" ShowEditButton="True" /> <asp:TemplateField> <FooterTemplate> <asp:LinkButton ID="LinkButton1" runat="server">Insert</asp:LinkButton>&nbsp;&nbsp; </FooterTemplate> </asp:TemplateField> <asp:TemplateField HeaderText="EMPLOYEEID" SortExpression="EMPLOYEEID"> <EditItemTemplate> <asp:Label ID="Label1" runat="server" Text='<%# Eval("EMPLOYEEID") %>'></asp:Label> </EditItemTemplate> <ItemTemplate> <asp:Label ID="Label1" runat="server" Text='<%# Bind("EMPLOYEEID") %>'></asp:Label> </ItemTemplate> <FooterTemplate> <asp:TextBox ID="txtInsertEmpID" runat="server"></asp:TextBox> <asp:RequiredFieldValidator ID="rfvInsertEmpID" ControlToValidate="txtInsertEmpID" Text="*" ForeColor="Red" ValidationGroup="Insert" runat="server" ErrorMessage="EmployeeID is required" /> </FooterTemplate> </asp:TemplateField> <asp:TemplateField HeaderText="NAME" SortExpression="NAME"> <EditItemTemplate> <asp:TextBox ID="TextBox1" runat="server" Text='<%# Bind("NAME") %>'></asp:TextBox> <asp:RequiredFieldValidator ID="rfvEditName" ControlToValidate="TextBox1" Text="*" ForeColor="Red" runat="server" ErrorMessage="EmployeeName is required" /> </EditItemTemplate> <ItemTemplate> <asp:Label ID="Label2" runat="server" Text='<%# Bind("NAME") %>'></asp:Label> </ItemTemplate> <FooterTemplate> <asp:TextBox ID="txtInsertName" runat="server"></asp:TextBox> <asp:RequiredFieldValidator ID="rfvInsertName" ControlToValidate="txtInsertName" Text="*" ForeColor="Red" ValidationGroup="Insert" runat="server" ErrorMessage="EmployeeName is required" /> </FooterTemplate> </asp:TemplateField> <asp:TemplateField HeaderText="DEPTID" SortExpression="DEPTID"> <EditItemTemplate> <asp:DropDownList ID="DropDownList1" SelectedValue='<%# Bind("DEPTID") %>' runat="server"> <asp:ListItem>Select Department</asp:ListItem> <asp:ListItem Value="1">SM</asp:ListItem> <asp:ListItem Value="2">CDS</asp:ListItem> <asp:ListItem Value="3">AM</asp:ListItem> <asp:ListItem Value="4">FS</asp:ListItem> </asp:DropDownList> <asp:RequiredFieldValidator ID="rfvEditDept" ControlToValidate="DropDownList1" Text="*" ForeColor="Red" runat="server" ErrorMessage="Department is required" InitialValue="Select Department" /> </EditItemTemplate> <ItemTemplate> <asp:Label ID="Label3" runat="server" Text='<%# Bind("DEPTID") %>'></asp:Label> </ItemTemplate> <FooterTemplate> <asp:DropDownList ID="ddlInsertDeptID" runat="server"> <asp:ListItem>Select Department</asp:ListItem> <asp:ListItem Value="1">SM</asp:ListItem> <asp:ListItem Value="2">CDS</asp:ListItem> <asp:ListItem Value="3">AM</asp:ListItem> <asp:ListItem Value="4">FS</asp:ListItem> </asp:DropDownList> <asp:RequiredFieldValidator ID="rfvInsertDept" ControlToValidate="ddlInsertDeptID" Text="*" ForeColor="Red" ValidationGroup="Insert" runat="server" ErrorMessage="Department is required" InitialValue="Select Department" /> </FooterTemplate> </asp:TemplateField> </Columns> </asp:GridView> <br /> <asp:ValidationSummary ID="ValidationSummary1" runat="server" ValidationGroup="Insert" ForeColor="Blue" /> <asp:ValidationSummary ID="ValidationSummary2" runat="server" ForeColor="Red" /> I'm not sure what's the problem so that when I click Insert link button the 
page is submitted without any error message, even though I haven't typed anything into the text boxes at the bottom. Can anybody help me? A: You are just missing the ValidationGroup on the Insert LinkButton. <asp:LinkButton ID="LinkButton1" runat="server" ValidationGroup="Insert">Insert</asp:LinkButton>&nbsp;&nbsp;
{ "pile_set_name": "StackExchange" }
Q: Java 6: how to pass multiple parameters to APT I have a Java Annotation Processor that extends from AbstractProcessor. I have two supported options, addResDir and verbose, and I am trying to set them like this: -AaddResDir=src/main/webapp -Averbose=true I have also tried this: -AaddResDir=src/main/webapp,verbose=true While a single parameter works, e.g. -AaddResDir=src/main/webapp I can't get the multiple parameters to work and I can't find any relevant docs. Do I need to parse the parameters manually in APT? The only thing I have is the output of javac -help: -Akey[=value] Options to pass to annotation processors EDIT It turns out to be a maven problem, after all. Here is my maven config: <plugin> <inherited>true</inherited> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>2.3.1</version> <configuration> <source>1.6</source> <target>1.6</target> <optimize>true</optimize> <debug>true</debug> <compilerArgument>-AaddResDir=src/main/webapp -Averbose=true</compilerArgument> </configuration> </plugin> Unfortunately, maven sends the argument to Javac as a single string in the args array, while it should of course be two Strings. The Map Version <compilerArguments> is no help either, because <Averbose>true</Averbose> <AaddResDir>src/main/webapp</AaddResDir> generates the output: [... , -Averbose, true, -AaddResDir, src/main/webapp] While javac requires the syntax [... , -Averbose=true, -AaddResDir=src/main/webapp ] and <Averbose=true /> <AaddResDir=src/main/webapp /> is invalid XML. (See Mapping Maps from the Guide to Configuring Maven Plugins) And I am afraid there is no way to change this, argh. EDIT: I have now filed a bug report. A: There is no real answer as of yet. The bug is filed: MCOMPILER-135 and I have submitted three different patches, the last of which introduces a variable of type Properties: <additionalCompilerArguments> <property> <name>-Akey=value</name> </property> <property> <name>-verbose</name> </property> <property> <name>-Xmaxerrs</name> <value>1000</value> </property> </additionalCompilerArguments> This solution is the most flexible one because it supports many different parameter syntax formats. (If the existing parameter <compilerArguments> were also of type Properties my problem would be solved)
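For readers hitting this today: if I remember correctly, maven-compiler-plugin 3.1 and later added a <compilerArgs> list that side-steps the whole problem, because every <arg> is passed to javac as its own string:

<configuration>
  <source>1.6</source>
  <target>1.6</target>
  <compilerArgs>
    <arg>-AaddResDir=src/main/webapp</arg>
    <arg>-Averbose=true</arg>
  </compilerArgs>
</configuration>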
{ "pile_set_name": "StackExchange" }
Q: Generating R ggplot line graph with color/type conditional on different variables I'm struggling to get the exact output needed for a ggplot line graph. As an example, see the code below. Overall, I have two conditions (A/B), and two treatments (C/D). So four total series, but in a factorial way. The lines can be viewed as a time series but with ordinal markings (rather than numeric). I'd like to generate a connected line graph for the four types, where the color depends on the condition, and the line type depends on the treatment. Thus two different colors and two line types. To make things a bit more complicated, one condition (B) does not have data for the third time period. I cannot seem to generate the graph needed for these constraints. The closest I got is shown below. What am I doing wrong? I try to remove the group=condition code, but that doesn't help either. library(ggplot2) set.seed<-1 example_df <- data.frame(time = c('time1','time2','time3','time1','time2','time3','time1','time2','time1','time2'), time_order = c(1,2,3,1,2,3,1,2,1,2), condition = c('A','A','A','A','A','A','B','B','B','B'), treatment = c('C','C','C','D','D','D','C','C','D','D'), value = runif(10)) ggplot(example_df, aes(x=reorder(time,time_order), y=value, color=condition , line_type=treatment, group=condition)) + geom_line() A: You've got 3 problems, from what I can tell. linetype doesn't have an underscore in it. With a categorical axis, you need to use the group aesthetic to set which lines get connected. You've made a start with group = conidition, but this would imply one line for each condition type (2 lines), but you want one line for each condition:treamtment interaction (2 * 2 = 4 lines), so you need group = interaction(condition, treatment). Your sample data doesn't quite make sense. Your condition B values have two treatment Cs at time 1 and two Ds at time 2, so there is no connection between times 1 and 2. This doesn't much matter, and your real data is probably fine. This should work: ggplot( example_df, aes( x = reorder(time, time_order), y = value, color = condition , linetype = treatment, group = interaction(condition, treatment) ) ) + geom_line()
{ "pile_set_name": "StackExchange" }
Q: Ajax request returns empty response using ServiceStack total n00b when it comes to restful stuff, ajax, and so forth so please be gentle. I have an issue whereby I have taken the example ServiceStack "Todo" service, and am trying to develop a mobile client using this service as a data source. I'm trying to learn how it all works so I can build a specific service which I feel SS is more suited to as opposed to WCF/WebAPI. Anyway let's say that the Service is running on http://localhost:1234/api/todos I have enabled CORS support based on cobbling together information found in various other posts. So my Configure function looks like this: Plugins.Add(new CorsFeature()); this.RequestFilters.Add((httpReq, httpRes, requestDto) => { httpRes.AddHeader("Access-Control-Allow-Origin", "*"); //Handles Request and closes Responses after emitting global HTTP Headers if (httpReq.HttpMethod == "OPTIONS") { httpRes.AddHeader("Access-Control-Allow-Methods", "POST, GET, OPTIONS"); httpRes.AddHeader("Access-Control-Allow-Headers", "X-Requested-With, Content-Type"); httpRes.StatusCode = 204; httpRes.End(); } }); and I have a service method like this on the TodoService: [EnableCors] public object Post(Todo todo) { var t = Repository.Store(todo); return t; } Using a browser (FF/IE) If I call this ajax function: var todo = { content: "this is a test" }; $.ajax( { type: "POST", contentType: "application/json", data: JSON.stringify(todo), timeout:20000, url: "http://localhost:1234/api/todos", success: function (e) { alert("added"); app.navigate("Todo"); }, error: function (x, a, t) { alert("Error"); console.log(x); console.log(a); console.log(t); } } ); from http://localhost:1234, then it all works fine. The todo gets added and in the success function, "e" contains the returned todo object the service created. However, if I call this from anywhere else (http://localhost:9999 i.e the asp.net dev server that the mobile client app is running under) then, although the Service method executes, and the todo does get added on the server side, the response back to jquery is empty, and it hits the error function right away. I'm convinced I am doing something dumb but I can't for the life of me see it. Anyone have any clue? Thanks in advance... Update: Well it seems to work OK now, the problem appeared to be httpRes.AddHeader("Access-Control-Allow-Origin", "*"); outside of the "OPTIONS" block. So the code that works in apphost is Plugins.Add(new CorsFeature()); this.RequestFilters.Add((httpReq, httpRes, requestDto) => { //Handles Request and closes Responses after emitting global HTTP Headers if (httpReq.HttpMethod == "OPTIONS") { httpRes.AddHeader("Access-Control-Allow-Methods", "POST, GET, OPTIONS"); httpRes.AddHeader("Access-Control-Allow-Origin", "*"); httpRes.AddHeader("Access-Control-Allow-Headers", "X-Requested-With, Content-Type"); httpRes.StatusCode = 204; httpRes.End(); } }); A: so it turns out there was a problem in my original code; the amdended code is: Plugins.Add(new CorsFeature()); this.RequestFilters.Add((httpReq, httpRes, requestDto) => { //Handles Request and closes Responses after emitting global HTTP Headers if (httpReq.HttpMethod == "OPTIONS") { httpRes.AddHeader("Access-Control-Allow-Methods", "POST, GET, OPTIONS"); //this line used to be outside the if block so was added to every header twice. 
httpRes.AddHeader("Access-Control-Allow-Origin", "*"); httpRes.AddHeader("Access-Control-Allow-Headers", "X-Requested-With, Content-Type"); httpRes.StatusCode = 204; httpRes.End(); } }); So the CorsFeature() plugin would appear to be correctly handling all CORs stuff for POST, GET and the pre-flight OPTIONS request is being handled by the RequestFilter (confusion - why doesn't the plugin just handle the OPTIONS request?) ; in the old code, the allow-origin header was being added twice for every request (by the plugin and by the filter) and this seems to have been confusing either jquery or the browser. Not that I fully understand any of this , I have some reading to do :) and it's all been rendered moot anyway since the mobile framework I am using (DXTreme) can't seem to handle anything other than JSONP (no good for me since I need POST/PUT) for a cross-domain Rest Data source, so I am already going to have to go the proxy route, or dump the framework, or find some other way around my issues.
{ "pile_set_name": "StackExchange" }
Q: Can you die when your plugged-in mobile phone falls in the bathtub? I have questions related to the safety of using mobile devices in the bathtub. I always thought the danger in dropping it or getting water on it is mostly to the device, not to the person, but a recent accident made me doubt that. Is it possible to die from the charging end of a high-power USB charger (which can deliver up to 20V) in the bathtub? (The charger itself would not be in the water, nor would be the wall cable.) Is a mobile device dropped in the bathtub dangerous (to the person, not the device ;-) )? Are there known instances when a submerged mobile device started to burn? (After all, the battery contains Lithium which can burn under water, and Lithium battery fires cannot be extinguished with water.) A: The phone itself can’t develop enough potential to cause a shock with burns - the battery is only 3.8V. Further, any stray currents would have been local to the phone, and would not have found a body path. What might have happened is that she reached for the cord to plug in the phone, and she got shocked by the cord, and dropped the phone... and was not able to get the cord away. Very sad - what an unfortunate way to lose someone. Then the underlying problem leading to this would be two things: a faulty charger, and a non-functioning or not-present GFCI that should have tripped.
{ "pile_set_name": "StackExchange" }
Q: Need to create a transparent window or area for displaying information There is a game running in full-screen windowed mode, and on top of it some information can be displayed, the way certain programs do. I would like to create a similar area that is transparent, and where mouse clicks do not stop at this window but pass straight through into the game. In other words, simply stretching a form across the whole screen and making it topmost won't do, because it would then be impossible to click on the game with the mouse. How can I implement such a window? A: In Windows Forms 2.0 there is a new property called ShowWithoutActivation – which you would need to override on the Form. In native applications you can use SetWindowPos with the SWP_NOACTIVATE flag or the ShowWindow with the SW_SHOWNA flag. Taken from here (more information at the link): https://stackoverflow.com/questions/2423234/make-a-form-not-focusable-in-c-sharp
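Note that ShowWithoutActivation only keeps the overlay from stealing focus; for clicks to fall through to the game you also need the WS_EX_TRANSPARENT (and usually WS_EX_LAYERED) extended window styles. A rough, untested Windows Forms sketch combining the two ideas (the constants are the usual Win32 values):

using System.Drawing;
using System.Windows.Forms;

public class OverlayForm : Form
{
    const int WS_EX_TRANSPARENT = 0x20;       // mouse clicks pass through to whatever is underneath
    const int WS_EX_LAYERED = 0x80000;
    const int WS_EX_NOACTIVATE = 0x8000000;

    public OverlayForm()
    {
        FormBorderStyle = FormBorderStyle.None;
        TopMost = true;
        BackColor = Color.Lime;            // pick any key colour you never draw with
        TransparencyKey = Color.Lime;      // that colour becomes see-through
    }

    protected override bool ShowWithoutActivation
    {
        get { return true; }               // do not take focus away from the game
    }

    protected override CreateParams CreateParams
    {
        get
        {
            CreateParams cp = base.CreateParams;
            cp.ExStyle |= WS_EX_LAYERED | WS_EX_TRANSPARENT | WS_EX_NOACTIVATE;
            return cp;
        }
    }
}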
{ "pile_set_name": "StackExchange" }
Q: “管你是[surname] + [name]还是[same surname] + [different name]” There's a set phrase that goes something like: (我)管你张三还是李四 I've seen variations on it like: 我管你张静还是李静呢 I'm trying to remember one where the surname stays the same though, something like: 管你是张飞还是张家辉 What common variants on this phrase exist where the surname stays the same? A: "我管你张三还是李四" literally means "I care whether you are John, Dick or Harry". And the actual meaning is "我(不)管你张三还是李四", meaning "I (don't) care whether you are John, Dick or Harry". Using the opposite term "管" (care) instead of the actual term "不管" (don't care) is a sarcastic way of making a statement. What common variants on this phrase exist where the surname stays the same? It is not a fixed set phrase; you can substitute any names/nouns to indicate "I don't care who you are". For example: "我(不)管你是皇帝还是王八", "我(不)管你成龍还是成蟲" When someone says he is the emperor but you don't care, you can say "我管你皇帝还是王八". When Jackie Chan demands that you do something because he is Jackie Chan, and you don't want to do it and don't care that it is Jackie who demanded it, you can say "我管你成龍还是成蟲". One more: "我管你特朗普还是特朗通,你是个混蛋" roughly translates as "I don't care whether you are Donald Trump or Donald Dumb, you are an asshole".
{ "pile_set_name": "StackExchange" }
Q: footable, how to initialize the filter of a table just after the loading of the page I use the excellent plugin "footable" to order and filter my tables. But in some cases, my page must be initialized with a certain filter. For example, my url is: myurl/mypage/19 On the server side I get the '19' and send it to the view. In the view I put this value into an input field. How can I filter this table with this input value just after the page is loaded? I tried: $('table').footable(); $('table').trigger('footable_redraw'); and $('table').footable(); $('table').trigger('footable_initialize'); Without success. Update: to be clear, the filter itself works. It's just that the filter is not initialized when I put something in the field during the load of the page. The js code: $(document).ready( function() { $('table').footable(); $('table').trigger('footable_initialize'); }) The html code: <input class="form-control" id="filter" type="text" placeholder="Rechercher ..." value="{{ $libelle_classe }}"> <table class="table footable table-hover table-bordered" data-filter="#filter"> A: A possible solution is to trigger the filter event manually, using the input value: $(document).ready( function() { $('table').trigger('footable_filter', { filter: $("#filter").val() }); }); I hope this helps!
{ "pile_set_name": "StackExchange" }
Q: C# GroupBy 30 minute interval returning incorrect result I am using Highcharts to give visual displays of information. As of right now, I am grouping all of the database table records by the hour: var lstGroupSummariesByHour = lstAllSummaries.Where(x => x.DateTimeProperty.Year == year && !x.deleted) .GroupBy(x => x.DateTimeProperty.Hour) .Select(x => new object[] {x.Count()}).ToArray(); This is one line on my line chart.. but I am looking to create a new line where it shows all summaries for every half hour. Is there a simple LINQ lambda way to achieve this? According to the this question if I wanted 30 minute intervals my code would look something like this since in the second answer he divides the minute by 12 to get 5 minute intervals? var lstGroupSummariesByHalfHour = lstAllSummaries.Where(x => x.DateTimeProperty.Year == year && !x.deleted) .GroupBy( x => new DateTime(x.DateTimeProperty.Year, x.DateTimeProperty.Month, x.DateTimeProperty.Day, x.DateTimeProperty.Hour, x.DateTimeProperty.Minute / 2, 0)) .Select(x => new object[] {x.Count()}).ToArray(); This is returning right under 3500 records which is causing a Highcharts error of not enough ticks on the X-Axis.. which is 0 through 24 (based on hour).. so shouldn't it be returning 48 since there are 48 half-hours in a day? 24 * 2 (which would still return the highcharts error, but I will deal with that later)? How do I fix the code above to get results for every half hour? UPDATE What I'm looking for (for example), is how many summaries are between 0100 & 0130, 0131 - 0200, 0201 - 0230.. and so on and so on. Here is what my graph currently looks like: I want to get the count of summaries for the tick that is in between each number (Hour).. 0030.. 0130.. 0230.. My graph currently is based on the entire year.. so throughout the year.. 'x' number of summaries during the 0 hour.. 'y' number of summaries during the 1 hour.. and so on and so on.. so I'm looking for a total count of number of summaries for the entire year that happened between 0000 - 0030, 0031-0100, 0101 - 0130.. and so on and so on. A: you need to quantize the minutes to either 0 or 30 new DateTime( x.DateTimeProperty.Year, x.DateTimeProperty.Month, x.DateTimeProperty.Day, x.DateTimeProperty.Hour, x.DateTimeProperty.Minute > 30 ? 30 : 0, 0));
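If, per the update, the goal is one total per half-hour of the day across the whole year (48 buckets rather than one per calendar half-hour), a sketch along these lines may be closer to what you want - note the >= 30 comparison so that minute 30 itself falls into the second half of the hour:

// 48 buckets: one per half-hour of the day, aggregated over the whole year
var lstGroupSummariesByHalfHour = lstAllSummaries
    .Where(x => x.DateTimeProperty.Year == year && !x.deleted)
    .GroupBy(x => new
    {
        x.DateTimeProperty.Hour,
        HalfHour = x.DateTimeProperty.Minute >= 30 ? 30 : 0
    })
    .OrderBy(g => g.Key.Hour).ThenBy(g => g.Key.HalfHour)
    .Select(g => new object[] { g.Count() })
    .ToArray();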
{ "pile_set_name": "StackExchange" }
Q: Write file with specific permissions in Python I'm trying to create a file that is only user-readable and -writable (0600). Is the only way to do so by using os.open() as follows? import os fd = os.open('/path/to/file', os.O_WRONLY, 0o600) myFileObject = os.fdopen(fd) myFileObject.write(...) myFileObject.close() Ideally, I'd like to be able to use the with keyword so I can close the object automatically. Is there a better way to do what I'm doing above? A: What's the problem? file.close() will close the file even though it was open with os.open(). with os.fdopen(os.open('/path/to/file', os.O_WRONLY | os.O_CREAT, 0o600), 'w') as handle: handle.write(...) A: This answer addresses multiple concerns with the answer by vartec, especially the umask concern. import os import stat # Define file params fname = '/tmp/myfile' flags = os.O_WRONLY | os.O_CREAT | os.O_EXCL # Refer to "man 2 open". mode = stat.S_IRUSR | stat.S_IWUSR # This is 0o600. umask = 0o777 ^ mode # Prevents always downgrading umask to 0. # For security, remove file with potentially elevated mode try: os.remove(fname) except OSError: pass # Open file descriptor umask_original = os.umask(umask) try: fdesc = os.open(fname, flags, mode) finally: os.umask(umask_original) # Open file handle and write to file with os.fdopen(fdesc, 'w') as fout: fout.write('something\n') If the desired mode is 0600, it can more clearly be specified as the octal number 0o600. Even better, just use the stat module. Even though the old file is first deleted, a race condition is still possible. Including os.O_EXCL with os.O_CREAT in the flags will prevent the file from being created if it exists due to a race condition. This is a necessary secondary security measure to prevent opening a file that may already exist with a potentially elevated mode. In Python 3, FileExistsError with [Errno 17] is raised if the file exists. Failing to first set the umask to 0 or to 0o777 ^ mode can lead to an incorrect mode (permission) being set by os.open. This is because the default umask is usually not 0, and it will be applied to the specified mode. For example, if my original umask is 2 i.e. 0o002, and my specified mode is 0o222, if I fail to first set the umask, the resulting file can instead have a mode of 0o220, which is not what I wanted. Per man 2 open, the mode of the created file is mode & ~umask. The umask is restored to its original value as soon as possible. This getting and setting is not thread safe, and a threading.Lock must be used in a multithreaded application. For more info about umask, refer to this thread. A: update Folks, while I thank you for the upvotes here, I myself have to argue against my originally proposed solution below. The reason is doing things this way, there will be an amount of time, however small, where the file does exist, and does not have the proper permissions in place - this leave open wide ways of attack, and even buggy behavior. Of course creating the file with the correct permissions in the first place is the way to go - against the correctness of that, using Python's with is just some candy. So please, take this answer as an example of "what not to do"; original post You can use os.chmod instead: >>> import os >>> name = "eek.txt" >>> with open(name, "wt") as myfile: ... os.chmod(name, 0o600) ... myfile.write("eeek") ... >>> os.system("ls -lh " + name) -rw------- 1 gwidion gwidion 4 2011-04-11 13:47 eek.txt 0 >>> (Note that the way to use octals in Python is by being explicit - by prefixing it with "0o" like in "0o600". 
In Python 2.x it would work writing just 0600 - but that is both misleading and deprecated.) However, if your security is critical, you probably should resort to creating it with os.open, as you do, and use os.fdopen to retrieve a Python file object from the file descriptor returned by os.open.
{ "pile_set_name": "StackExchange" }
Q: Global variable undefined in nested loop I am trying to write an odd/even number generator and print the percentage of even numbers WITHOUT using an if-then-else statement. However, my global variable cannot be read in my nested loop. Any advice? Thanks! I have tried this in other languages and it works, however it doesn't seem to work in Python. import random; numberArr = []; noRandomNum = 4; isEven = 0; for i in range (0, noRandomNum): numberArr.append(random.randint(1,10)); for i in range(len(numberArr)): x = numberArr[i]%2; # print(isEven); while x == 0: print("test") # isEven++; //UNDEFINED ERROR HERE break; print(isEven); isEven is a global variable, thus it should work. A: There is no ++ operator in Python. You have to write it as isEven += 1. Also, the semicolons are not mandatory. This is not an issue of variable scope.
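For what it's worth, once the increment is written the Python way, the counting can also be done without any if/else at all; a minimal sketch of the idea (not your exact program):

import random

number_arr = [random.randint(1, 10) for _ in range(4)]

# x % 2 is 0 for even numbers, so (1 - x % 2) contributes 1 per even value - no if/else needed
is_even = sum(1 - x % 2 for x in number_arr)

print(is_even, "even numbers out of", len(number_arr))
print("percentage even:", 100.0 * is_even / len(number_arr))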
{ "pile_set_name": "StackExchange" }