Dataset columns:
  content:            string (86 to 88.9k chars)
  title:              string (0 to 150 chars)
  question:           string (1 to 35.8k chars)
  answers:            sequence
  answers_scores:     sequence
  non_answers:        sequence
  non_answers_scores: sequence
  tags:               sequence
  name:               string (30 to 130 chars)
Q: PHP mail using Gmail In my PHP web app, I want to be notified via email whenever certain errors occur. I'd like to use my Gmail account for sending these. How could this be done? A: Gmail's SMTP server requires a very specific configuration. From Gmail help: Outgoing Mail (SMTP) Server (requires TLS) - smtp.gmail.com - Use Authentication: Yes - Use STARTTLS: Yes (some clients call this SSL) - Port: 465 or 587 Account Name: your full email address (including @gmail.com) Email Address: your email address (username@gmail.com) Password: your Gmail password You can probably set these settings up in Pear::Mail or PHPMailer. Check out their documentation for more details. A: You could use PEAR's Mail package with Gmail's SMTP server. Note that when sending e-mail using Gmail's SMTP server, it will look like it came from your Gmail address, regardless of what value you use for $from. (The following code is taken from About.com Programming Tips.) <?php require_once "Mail.php"; $from = "Sandra Sender <sender@example.com>"; $to = "Ramona Recipient <recipient@example.com>"; $subject = "Hi!"; $body = "Hi,\n\nHow are you?"; // stick your GMAIL SMTP info here! ------------------------------ $host = "mail.example.com"; $username = "smtp_username"; $password = "smtp_password"; // -------------------------------------------------------------- $headers = array ('From' => $from, 'To' => $to, 'Subject' => $subject); $smtp = Mail::factory('smtp', array ('host' => $host, 'auth' => true, 'username' => $username, 'password' => $password)); $mail = $smtp->send($to, $headers, $body); if (PEAR::isError($mail)) { echo("<p>" . $mail->getMessage() . "</p>"); } else { echo("<p>Message successfully sent!</p>"); } ?>
PHP mail using Gmail
In my PHP web app, I want to be notified via email whenever certain errors occur. I'd like to use my Gmail account for sending these. How could this be done?
[ "Gmail's SMTP-server requires a very specific configuration.\nFrom Gmail help:\nOutgoing Mail (SMTP) Server (requires TLS)\n - smtp.gmail.com\n - Use Authentication: Yes\n - Use STARTTLS: Yes (some clients call this SSL)\n - Port: 465 or 587\nAccount Name: your full email address (including @gmail.com)\nEmail Address: your email address (username@gmail.com)\nPassword: your Gmail password \n\nYou can probably set these settings up in Pear::Mail or PHPMailer. Check out their documentation for more details.\n", "You could use PEAR's mail function with Gmail's SMTP Server\nNote that when sending e-mail using Gmail's SMTP server, it will look like it came from your Gmail address, despite what you value is for $from.\n(following code taken from About.com Programming Tips )\n<?php\nrequire_once \"Mail.php\";\n\n$from = \"Sandra Sender <sender@example.com>\";\n$to = \"Ramona Recipient <recipient@example.com>\";\n$subject = \"Hi!\";\n$body = \"Hi,\\n\\nHow are you?\";\n\n// stick your GMAIL SMTP info here! ------------------------------\n$host = \"mail.example.com\";\n$username = \"smtp_username\";\n$password = \"smtp_password\";\n// --------------------------------------------------------------\n\n$headers = array ('From' => $from,\n 'To' => $to,\n 'Subject' => $subject);\n$smtp = Mail::factory('smtp',\n array ('host' => $host,\n 'auth' => true,\n 'username' => $username,\n 'password' => $password));\n\n$mail = $smtp->send($to, $headers, $body);\n\nif (PEAR::isError($mail)) {\n echo(\"<p>\" . $mail->getMessage() . \"</p>\");\n } else {\n echo(\"<p>Message successfully sent!</p>\");\n }\n?>\n\n" ]
[ 10, 4 ]
[]
[]
[ "email", "gmail", "php" ]
stackoverflow_0000036079_email_gmail_php.txt
Q: How much should one DataSet represent? How much should one DataSet represent? Using the example of an ordering system: While showing your order I also show a list of items similar to one of yours as well as a list of our most popular items. While your items are tangled in a web of relationships involving you and your past orders, preferred suppliers, and the various other kinds of information related to you as a client, the other items do not have these same relationships. The set of queries I use to navigate the set of stuff representing you is different than the queries I use for one of these other lists of items. My inclination is to create different DataSets for different kinds of relationships but then I create ten separate item DataTables and that seems wrong. When I instantiate the larger DataSet even though I am only interested in a small subset that seems wrong, and when I try to pack all of these into one DataSet I have a big messy looking thing with several items tables next to each other and I am pretty sure that IS wrong. Maybe I am over-valuing the relationship feature of DataSets or maybe I just need to get over myself, either way I could use some guidance. A: The DataSet is vastly overrated and overused. Use strongly-typed collections (thank you, generics and automatic properties!). As icing on the cake, you can now even do cool query things against your custom objects with LINQ. Good Esposito article on datasets versus custom objects: http://msdn.microsoft.com/en-us/magazine/cc163751.aspx Automatic properties: http://weblogs.asp.net/dwahlin/archive/2007/12/04/c-3-0-features-automatic-properties.aspx LINQ with your objects: http://blogs.msdn.com/wriju/archive/2006/09/16/linq-custom-object-query.aspx A: This is why I don't use datasets. If you use strongly-typed datasets you benefit from the strong typing, but you pay for it in the time it takes to create one (even if you're only using part of it) and in the extensibility of the code base. If you want to modify an existing one and you modify a row definition then this will create "shotgun" breaks in the code base, as each definition for adding a new row will have to be modified because it won't compile any more. To avoid the above scenario the most sensible approach is to generally give up on sensible re-use. Define a dataset per purpose and per use. However the main issue with this is API use: you end up with a dataset that is similar to another dataset, but because it is a different dataset type you have to transform it to use the common API, which is both painful and inelegant. This, plus the fact that strongly typed datasets make your code look horrid (the length of the type declarations), is pretty much the reason I've given up on datasets and switched to business objects instead.
How much should one DataSet represent?
How much should one DataSet represent? Using the example of an ordering system: While showing your order I also show a list of items similar to one of yours as well as a list of our most popular items. While your items are tangled in a web of relationships involving you and your past orders, preferred suppliers, and the various other kinds of information related to you as a client, the other items do not have these same relationships. The set of queries I use to navigate the set of stuff representing you is different than the queries I use for one of these other lists of items. My inclination is to create different DataSets for different kinds of relationships but then I create ten separate item DataTables and that seems wrong. When I instantiate the larger DataSet even though I am only interested in a small subset that seems wrong, and when I try to pack all of these into one DataSet I have a big messy looking thing with several items tables next to each other and I am pretty sure that IS wrong. Maybe I am over-valuing the relationship feature of DataSets or maybe I just need to get over myself, either way I could use some guidance.
[ "The DataSet is vastly overrated and overused. Use strongly-typed collections (thank you, generics and automatic properties!). As icing on the cake, you can now even do cool query things against your custom objects with LINQ. \nGood Esposito article on datasets versus custom objects:\nhttp://msdn.microsoft.com/en-us/magazine/cc163751.aspx\nAutomatic properties:\nhttp://weblogs.asp.net/dwahlin/archive/2007/12/04/c-3-0-features-automatic-properties.aspx\nLINQ with your objects:\nhttp://blogs.msdn.com/wriju/archive/2006/09/16/linq-custom-object-query.aspx\n", "This is why I don't use datasets. If you use strongly-typed datasets you benefit from the strong typing but you pay for it in terms of the time it takes to create one even if you're just using part of it and its extensibility in terms of the code base. If you want to modify an existing one and you modify a row definition then this will create \"shotgun\" breaks in the code base as each definition for adding a new row will have to be modified as it wont compile anymore.\nTo avoid the above scenario the most sensible approach is to generally give up on sensible re-use. Define a dataset per purpose and per use. However the main issue with this is API use, you end up with dataset that is simliar to another dataset but because it is a different dataset type you have to transform it to use the common API which is both painful and inelegant.\nThis, plus the fact that strongly typed datasets make your code look horrid (the length of the type declarations) are pretty much the reasons i've given up on datasets and switched to business objects instead. \n" ]
[ 4, 1 ]
[]
[]
[ ".net", "dataset" ]
stackoverflow_0000036262_.net_dataset.txt
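A minimal C# sketch of the strongly-typed-collection-plus-LINQ approach the first answer recommends; the Item class and sample data are hypothetical, for illustration only:

using System;
using System.Collections.Generic;
using System.Linq;

// A plain business object standing in for one row of an "Items" DataTable.
public class Item
{
    public int Id { get; set; }           // C# 3.0 automatic properties
    public string Name { get; set; }
    public int TimesOrdered { get; set; }
}

public static class ItemDemo
{
    public static void Main()
    {
        // A strongly-typed collection instead of a loosely-typed DataSet.
        var items = new List<Item>
        {
            new Item { Id = 1, Name = "Widget", TimesOrdered = 42 },
            new Item { Id = 2, Name = "Gadget", TimesOrdered = 7 }
        };

        // LINQ over the custom objects: most popular items first.
        foreach (var item in items.OrderByDescending(i => i.TimesOrdered))
            Console.WriteLine("{0}: {1}", item.Name, item.TimesOrdered);
    }
}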
Q: "The system cannot find the file specified" when invoking subprocess.Popen in python I'm trying to use svnmerge.py to merge some files. Under the hood it uses python, and when I use it I get an error - "The system cannot find the file specified". Colleagues at work are running the same version of svnmerge.py, and of python (2.5.2, specifically r252:60911) without an issue. I found this link, which describes my problem. Trying what was outlined there, I confirmed Python could find SVN (it's in my path): P:\>python Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import os >>> i,k = os.popen4("svn --version") >>> i.close() >>> k.readline() 'svn, version 1.4.2 (r22196)\n' Looking at the svnmerge.py code, though, I noticed for python versions 2.4 and higher it was following a different execution path. Rather than invoking os.popen4() it uses subprocess.Popen(). Trying that reproduces the error: C:\>python Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import subprocess >>> p = subprocess.Popen("svn --version", stdout=subprocess.PIPE, >>> close_fds=False, stderr=subprocess.PIPE) Traceback (most recent call last): File "", line 1, in File "C:\Python25\lib\subprocess.py", line 594, in __init__ errread, errwrite) File "C:\Python25\lib\subprocess.py", line 816, in _execute_child startupinfo) WindowsError: [Error 2] The system cannot find the file specified >>> For now, I've commented out the 2.4-and-higher specific code, but I'd like to find a proper solution. If it's not obvious, I'm a complete python newbie, but Google hasn't helped. A: It's a bug, see the documentation of subprocess.Popen. There either needs to be a "shell=True" option, or the first argument needs to be a sequence ['svn', '--version']. As it is now, Popen is looking for an executable named, literally, "svn --version" which it doesn't find. I don't know why it would work for your colleagues though, if they are running the same OS and version of Python... FWIW it gives me the same error message on a mac, and either of the two ways I gave fixes it.
"The system cannot find the file specified" when invoking subprocess.Popen in python
I'm trying to use svnmerge.py to merge some files. Under the hood it uses python, and when I use it I get an error - "The system cannot find the file specified". Colleagues at work are running the same version of svnmerge.py, and of python (2.5.2, specifically r252:60911) without an issue. I found this link, which describes my problem. Trying what was outlined there, I confirmed Python could find SVN (it's in my path): P:\>python Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import os >>> i,k = os.popen4("svn --version") >>> i.close() >>> k.readline() 'svn, version 1.4.2 (r22196)\n' Looking at the svnmerge.py code, though, I noticed for python versions 2.4 and higher it was following a different execution path. Rather than invoking os.popen4() it uses subprocess.Popen(). Trying that reproduces the error: C:\>python Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import subprocess >>> p = subprocess.Popen("svn --version", stdout=subprocess.PIPE, ... close_fds=False, stderr=subprocess.PIPE) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Python25\lib\subprocess.py", line 594, in __init__ errread, errwrite) File "C:\Python25\lib\subprocess.py", line 816, in _execute_child startupinfo) WindowsError: [Error 2] The system cannot find the file specified >>> For now, I've commented out the 2.4-and-higher specific code, but I'd like to find a proper solution. If it's not obvious, I'm a complete python newbie, but Google hasn't helped.
[ "It's a bug, see the documentation of subprocess.Popen. There either needs to be a \"shell=True\" option, or the first argument needs to be a sequence ['svn', '--version']. As it is now, Popen is looking for an executable named, literally, \"svn --version\" which it doesn't find.\nI don't know why it would work for your colleagues though, if they are running the same OS and version of Python... FWIW it gives me the same error message on a mac, and either of the two ways I gave fixes it.\n" ]
[ 21 ]
[]
[]
[ "python", "svn_merge" ]
stackoverflow_0000036324_python_svn_merge.txt
Q: How can I store user-tweakable configuration in app.config? I know it is a good idea to store configuration data in app.config (e.g. database connection strings) instead of hardcoding it, even if I am writing an application just for myself. But is there a way to update the configuration data stored in app.config from the program that is using it? A: If you use the Settings for the project, you can mark each setting as either application or user. If they're set as user, they will be stored per-user and when you call the Save method it will be updated in the config for that user. Code project has a really detailed article on saving all types of settings. A: app.config isn't what you want to use for user-tweakable data, as it'll be stored somewhere in Program Files (which the user shouldn't have write permissions to). Instead, settings marked with a UserScopedSettingAttribute will end up in a user-scoped .config file somewhere in %LocalAppData%. I found the best way to learn this stuff was to mess with the Visual Studio "Settings" tab (on your project's property pages), then look at the code that it generates and look in %LocalAppData% to see the file that it generates.
How can I store user-tweakable configuration in app.config?
I know it is a good idea to store configuration data in app.config (e.g. database connection strings) instead of hardcoding it, even if I am writing an application just for myself. But is there a way to update the configuration data stored in app.config from the program that is using it?
[ "If you use the Settings for the project, you can mark each setting as either application or user.\nIf they're set as user, they will be stored per-user and when you call the Save method it will be updated in the config for that user.\nCode project has a really detailed article on saving all types of settings.\n", "app.config isn't what you want to use for user-tweakable data, as it'll be stored somewhere in Program Files (which the user shouldn't have write permissions to). Instead, settings marked with a UserScopedSettingAttribute will end up in a user-scoped .config file somewhere in %LocalAppData%.\nI found the best way to learn this stuff was to mess with the Visual Studio \"Settings\" tab (on your project's property pages), then look at the code that it generates and look in %LocalAppData% to see the file that it generates.\n" ]
[ 6, 1 ]
[]
[]
[ ".net", "app_config", "c#" ]
stackoverflow_0000036326_.net_app_config_c#.txt
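A minimal C# sketch of the user-scoped settings approach both answers describe, assuming a user-scoped string setting named WindowTitle was added on the project's Settings tab (the setting name is hypothetical):

using System;

public static class SettingsDemo
{
    public static void Main()
    {
        // Read the user-scoped value (the project default applies on first run).
        Console.WriteLine(Properties.Settings.Default.WindowTitle);

        // Changing and saving a user-scoped setting writes a user.config
        // file under %LocalAppData%, not the app.config in Program Files.
        Properties.Settings.Default.WindowTitle = "My tweaked title";
        Properties.Settings.Default.Save();
    }
}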
Q: Preview theme in WordPress In the latest version of WordPress, it gives you the opportunity to view a preview of what your site would look like using a different theme. You basically just click on the theme, it takes over the screen and you have a chance to activate or close it (and return to the previous screen, which is grayed out in the background). I have seen a similar technique used on a number of websites recently for displaying images as well. I'm wondering what technology/code they use to do this? A: It's open source - use the source, Luke. Look in wp-admin/js/theme-preview.js
Preview theme in WordPress
In the latest version of WordPress, it gives you the opportunity to view a preview of what your site would look like using a different theme. You basically just click on the theme, it takes over the screen and you have a chance to activate or close it (and return to the previous screen, which is grayed out in the background). I have seen a similar technique used on a number of websites recently for displaying images as well. I'm wondering what technology/code they use to do this?
[ "It's open source - use the source, Luke.\nLook in wp-admin/js/theme-preview.js\n" ]
[ 5 ]
[]
[]
[ "html", "jquery", "wordpress" ]
stackoverflow_0000036333_html_jquery_wordpress.txt
Q: How can I authenticate using client credentials in WCF just once? What is the best approach to make sure you only need to authenticate once when using an API built on WCF? My current bindings and behaviors are listed below <bindings> <wsHttpBinding> <binding name="wsHttp"> <security mode="TransportWithMessageCredential"> <transport/> <message clientCredentialType="UserName" negotiateServiceCredential="false" establishSecurityContext="true"/> </security> </binding> </wsHttpBinding> </bindings> <behaviors> <serviceBehaviors> <behavior name="NorthwindBehavior"> <serviceMetadata httpGetEnabled="true"/> <serviceAuthorization principalPermissionMode="UseAspNetRoles"/> <serviceCredentials> <userNameAuthentication userNamePasswordValidationMode="MembershipProvider"/> </serviceCredentials> </behavior> </serviceBehaviors> </behaviors> Next is what I am using in my client app to authenticate (currently I must do this every time I want to make a call into WCF) Dim client As ProductServiceClient = New ProductServiceClient("wsHttpProductService") client.ClientCredentials.UserName.UserName = "foo" client.ClientCredentials.UserName.Password = "bar" Dim ProductList As List(Of Product) = client.GetProducts() What I would like to do is auth w/ the API once using these credentials, then get some type of token for the period of time my client application is using the web service project. I thought establishsecuritycontext=true did this for me? A: If you're on an intranet, Windows authentication can be handled for "free" by configuration alone. If this isn't appropriate, token services work just fine, but for some situations they may be just too much. The application I'm working on needed bare-bones authentication. Our server and client run inside a (very secure) intranet, so we didn't care too much for the requirement to use an X.509 certificate to encrypt the communication, which is required if you're using username authentication. So we added a custom behavior to the client that adds the username and (encrypted) password to the message headers, and another custom behavior on the server that verifies them. All very simple, required no changes to the client side service access layer or the service contract implementation. And as it's all done by configuration, if and when we need to move to something a little stronger it'll be easy to migrate. A: While I hate to give an answer I'm not 100% certain of, the lack of responses so far makes me think a potentially correct answer might be okay in this case. As far as I'm aware there isn't the kind of session token mechanism you're looking for out-of-the-box with WCF which means you're going to have to do some heavy lifting to get things working in the way you want. I should make it clear there is a session mechanism in WCF but it's focused on guaranteeing message order and is not the ideal tool for creating an authentication session. I just finished working on a project where we implemented our own session mechanism to handle all manner of legacy SOAP stacks, but I believe the recommended way to implement authenticated sessions is to use a Secure Token Service (STS) like Pablo Cibraro's. If you want more details please shout, but I suspect Pablo's blog will have more than enough info for you to steam ahead.
How can I authenticate using client credentials in WCF just once?
What is the best approach to make sure you only need to authenticate once when using an API built on WCF? My current bindings and behaviors are listed below <bindings> <wsHttpBinding> <binding name="wsHttp"> <security mode="TransportWithMessageCredential"> <transport/> <message clientCredentialType="UserName" negotiateServiceCredential="false" establishSecurityContext="true"/> </security> </binding> </wsHttpBinding> </bindings> <behaviors> <serviceBehaviors> <behavior name="NorthwindBehavior"> <serviceMetadata httpGetEnabled="true"/> <serviceAuthorization principalPermissionMode="UseAspNetRoles"/> <serviceCredentials> <userNameAuthentication userNamePasswordValidationMode="MembershipProvider"/> </serviceCredentials> </behavior> </serviceBehaviors> </behaviors> Next is what I am using in my client app to authenticate (currently I must do this every time I want to make a call into WCF) Dim client As ProductServiceClient = New ProductServiceClient("wsHttpProductService") client.ClientCredentials.UserName.UserName = "foo" client.ClientCredentials.UserName.Password = "bar" Dim ProductList As List(Of Product) = client.GetProducts() What I would like to do is auth w/ the API once using these credentials, then get some type of token for the period of time my client application is using the web service project. I thought establishsecuritycontext=true did this for me?
[ "If you're on an intranet, Windows authentication can be handled for \"free\" by configuration alone. \nIf this isn't appropriate, token services work just fine, but for some situations they may be just too much.\nThe application I'm working on needed bare-bones authentication. Our server and client run inside a (very secure) intranet, so we didn't care too much for the requirement to use an X.509 certificate to encrypt the communication, which is required if you're using username authentication.\nSo we added a custom behavior to the client that adds the username and (encrypted) password to the message headers, and another custom behavior on the server that verifies them.\nAll very simple, required no changes to the client side service access layer or the service contract implementation. And as it's all done by configuration, if and when we need to move to something a little stronger it'll be easy to migrate.\n", "While I hate to give an answer I'm not 100% certain of, the lack of responses so far makes me think a potentially correct answer might be okay in this case.\nAs far as I'm aware there isn't the kind of session token mechanism you're looking for out-of-the-box with WCF which means you're going to have to do some heavy lifting to get things working in the way you want. I should make it clear there is a session mechanism in WCF but it's focused on guaranteeing message orders and is not the ideal tool for creating an authentication session.\nI just finished working on a project where we implemented our own session mechanism to handle all manner of legacy SOAP stacks, but I believe the recommended way to implement authenticated sessions is to use a Secure Token Service (STS) like Pablo Cibraro's.\nIf you want more details please shout, but I suspect Pablo's blog will have more than enough info for you to steam ahead.\n" ]
[ 3, 1 ]
[]
[]
[ "authentication", "security", "wcf", "ws_security" ]
stackoverflow_0000030800_authentication_security_wcf_ws_security.txt
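A C# sketch of authenticating once per proxy, as implied by establishSecurityContext="true": WCF negotiates a WS-SecureConversation token on the first call and reuses it for later calls on the same client, so the trick is to keep one proxy alive rather than constructing a new one per call. ProductServiceClient is the generated proxy from the question; the credentials are placeholders:

using System;

public static class WcfSessionDemo
{
    public static void Main()
    {
        // Create the proxy once; credentials must be set before first use.
        var client = new ProductServiceClient("wsHttpProductService");
        client.ClientCredentials.UserName.UserName = "foo";
        client.ClientCredentials.UserName.Password = "bar";

        try
        {
            // First call: the username/password is validated and a secure
            // conversation token is established for this channel.
            var products = client.GetProducts();

            // Later calls on the same proxy reuse the session token, so the
            // membership provider is not consulted again.
            var productsAgain = client.GetProducts();
            Console.WriteLine("{0} products", products.Count);
        }
        finally
        {
            client.Close();
        }
    }
}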
Q: Relative Root with Visual Studio ASP.NET debugger I am working on an ASP.NET project which is physically located at C:\Projects\MyStuff\WebSite2. When I run the app with the Visual Studio debugger it seems that the built in web server considers "C:\Projects\MyStuff\" to be the relative root, not "C:\Projects\MyStuff\WebSite2". Is there a web.config setting or something that will allow tags like <img src='/img/logo.png' /> to render correctly without having to resort to the ASP.NET specific tags like <asp:image />? If I code for the debugger's peculiarities then when I upload to the production IIS server everything is off. How do you resolve this? A: You can try this trick that Scott Guthrie posted on his blog http://weblogs.asp.net/scottgu/archive/2006/12/19/tip-trick-how-to-run-a-root-site-with-the-local-web-server-using-vs-2005-sp1.aspx To cut to the fix: select your project/solution in solution explorer and then open the Properties tab like you would if you were editing a textbox. If you right click and go to "Property Pages" that is the wrong place.
Relative Root with Visual Studio ASP.NET debugger
I am working on an ASP.NET project which is physically located at C:\Projects\MyStuff\WebSite2. When I run the app with the Visual Studio debugger it seems that the built in web server considers "C:\Projects\MyStuff\" to be the relative root, not "C:\Projects\MyStuff\WebSite2". Is there a web.config setting or something that will allow tags like <img src='/img/logo.png' /> to render correctly without having to resort to the ASP.NET specific tags like <asp:image />? If I code for the debugger's peculiarities then when I upload to the production IIS server everything is off. How do you resolve this?
[ "you can try this trick that Scott Guthrie posted on his blog http://weblogs.asp.net/scottgu/archive/2006/12/19/tip-trick-how-to-run-a-root-site-with-the-local-web-server-using-vs-2005-sp1.aspx\nto cut to the fix: select your project/solution in solution explorer and then open the Properties tab like you would if you were editing a textbox. If you right click and go to \"Property Pages\" that is the wrong place.\n" ]
[ 2 ]
[]
[]
[ "asp.net" ]
stackoverflow_0000036406_asp.net.txt
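Besides the virtual-path fix above, app-relative URLs sidestep the problem entirely. A hedged C# code-behind sketch using Control.ResolveUrl; the page and control names are hypothetical:

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class DefaultPage : Page
{
    // Would normally be declared in the .aspx markup as <asp:Image ... />.
    protected Image LogoImage = new Image();

    protected void Page_Load(object sender, EventArgs e)
    {
        // "~" expands to the application root, so the same code works when
        // the app runs at "/" on production IIS and at "/WebSite2" under
        // the Visual Studio development server.
        LogoImage.ImageUrl = ResolveUrl("~/img/logo.png");
    }
}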
Q: I Am Not Getting the Result I Expect Using readLine() in Java I am using the code snippet below, however it's not working quite as I understand it should. public static void main(String[] args) { BufferedReader br = new BufferedReader(new InputStreamReader(System.in)); String line; try { line = br.readLine(); while(line != null) { System.out.println(line); line = br.readLine(); } } catch (IOException e) { e.printStackTrace(); } } From reading the Javadoc about readLine() it says: Reads a line of text. A line is considered to be terminated by any one of a line feed (\n), a carriage return (\r), or a carriage return followed immediately by a linefeed. Returns: A String containing the contents of the line, not including any line-termination characters, or null if the end of the stream has been reached Throws: IOException - If an I/O error occurs From my understanding of this, readLine should return null the first time no input is entered other than a line termination, like \r. However, this code just ends up looping infinitely. After debugging, I have found that instead of null being returned when just a termination character is entered, it actually returns an empty string (""). This doesn't make sense to me. What am I not understanding correctly? A: From my understanding of this, readLine should return null the first time no input is entered other than a line termination, like '\r'. That is not correct. readLine will return null if the end of the stream is reached. That is, for example, if you are reading a file, and the file ends, or if you're reading from a socket and the socket closes. But if you're simply reading the console input, hitting the return key on your keyboard does not constitute an end of stream. It's simply a character that is returned (\n or \r\n depending on your OS). So, if you want to break on both the empty string and the end of line, you should do: while (line != null && !line.equals("")) Also, your current program should work as expected if you pipe some file directly into it, like so: java -cp . Echo < test.txt A: No input is not the same as the end of the stream. You can usually simulate the end of the stream in a console by pressing Ctrl+D (AFAIK some systems use Ctrl+Z instead). But I guess this is not what you want, so you'd better test for empty strings in addition to null strings. A: There's a nice Apache Commons Lang library, which has a good API for common :) actions. You could statically import StringUtils and use its method isNotEmpty(String) to get: while(isNotEmpty(line)) { System.out.println(line); line = br.readLine(); } It might be useful someday :) There are also other useful classes in this lib.
I Am Not Getting the Result I Expect Using readLine() in Java
I am using the code snippet below, however it's not working quite as I understand it should. public static void main(String[] args) { BufferedReader br = new BufferedReader(new InputStreamReader(System.in)); String line; try { line = br.readLine(); while(line != null) { System.out.println(line); line = br.readLine(); } } catch (IOException e) { e.printStackTrace(); } } From reading the Javadoc about readLine() it says: Reads a line of text. A line is considered to be terminated by any one of a line feed (\n), a carriage return (\r), or a carriage return followed immediately by a linefeed. Returns: A String containing the contents of the line, not including any line-termination characters, or null if the end of the stream has been reached Throws: IOException - If an I/O error occurs From my understanding of this, readLine should return null the first time no input is entered other than a line termination, like \r. However, this code just ends up looping infinitely. After debugging, I have found that instead of null being returned when just a termination character is entered, it actually returns an empty string (""). This doesn't make sense to me. What am I not understanding correctly?
[ "\nFrom my understanding of this, readLine should return null the first time no input is entered other than a line termination, like '\\r'.\n\nThat is not correct. readLine will return null if the end of the stream is reached. That is, for example, if you are reading a file, and the file ends, or if you're reading from a socket and the socket closses.\nBut if you're simply reading the console input, hitting the return key on your keyboard does not constitute an end of stream. It's simply a character that is returned (\\n or \\r\\n depending on your OS).\nSo, if you want to break on both the empty string and the end of line, you should do:\nwhile (line != null && !line.equals(\"\"))\n\nAlso, your current program should work as expected if you pipe some file directly into it, like so:\njava -cp . Echo < test.txt\n\n", "No input is not the same as the end of the stream. You can usually simulate the end of the stream in a console by pressing Ctrl+D (AFAIK some systems use Ctrl+Z instead). But I guess this is not what you want so better test for empty strings additionally to null strings.\n", "There's a nice apache commons lang library which has a good api for common :) actions. You could use statically import StringUtils and use its method isNotEmpty(String ) to get:\nwhile(isNotEmpty(line)) {\n System.out.println(line);\n line = br.readLine();\n}\n\nIt might be useful someday:) There are also other useful classes in this lib.\n" ]
[ 10, 3, 1 ]
[]
[]
[ "java", "java_io" ]
stackoverflow_0000025033_java_java_io.txt
Q: What are the current best options for parallelizing a CPU-intensive .NET app? This is an open-ended question. What approaches should I consider? A: Your first step is to find and understand the parallelism in your problem. It is really easy to write multi-threaded code that performs no better than the single-threaded code it replaces. "Patterns for Parallel Programming" (Amazon) is a great introduction to the key concepts. Once you have a workable design, start reading the articles in the "Concurrency" topic in the MSDN Magazine archives (link), particularly anything written by Jeff Richter. Those will give you the nuts and bolts stuff on the threading constructs specific to Windows and .NET. (The multi-threading section in Richter's "CLR via C#" (Amazon) is short, but very insightful - highly recommended.) A: There are some parallel extensions to .NET that are currently in testing and available at Microsoft's Parallel Computing Developer Center. They have a few interesting items that you would expect like Parallel foreach and a parallel version of LINQ called PLINQ. Some of the best information about the extensions is on Channel 9. A: I think we could also include non-.NET-specific approaches to parallel processing if those are among the best options to consider. A: @Larsenal If you want to branch outside of .NET there has been a lot of discussion about Intel's Threading Building Blocks which is a parallel library for C++. A: There are many options and the best solution will depend on the nature of the problem you are trying to solve. If you are trying to solve an embarrassingly parallel problem then dividing and parallelising the tasks will be trivial. In that case the challenge will come in distributing and managing the data used. Some suggestions would be: ICE Grid which has bindings for .Net and other common languages Velocity which is Microsoft's version of Oracle (Tangosol) Coherence The forthcoming HPC offering from Microsoft Compute Cluster Server Data Synapse Grid Server
What are the current best options for parallelizing a CPU-intensive .NET app?
This is an open-ended question. What approaches should I consider?
[ "Your first step is to find and understand the parallelism in your problem. It is really easy to write multi-threaded code that performs no better than the single-threaded code it replaces. \"Patterns for Parallel Programming\" (Amazon) is a great introduction to the key concepts.\nOnce you have a workable design, start reading the articles in the \"Concurrency\" topic in the MSDN Magazine archives (link), particularly anything written by Jeff Richter. Those will give you the nuts and bolts stuff on the threading constructs specific to Windows and .NET. (The multi-threading section in Richter's \"CLR via C# (Amazon)is short, but very insightful - highly recommended.)\n", "There are some parallel extensions to .NET that are currently in testing and available at Microsoft's Parallel Computing Developer Center. They have a few interesting items that you would expect like Parallel foreach and a parallel version of LINQ called PLINQ. Some of the best information about the extensions is on Channel 9.\n", "I think we could also include non-.NET-specific approaches to parallel processing if those are among the best options to consider.\n", "@Larsenal\nIf you want to branch outside of .NET there has been a lot of discussion about Intel's Threading Building Blocks which is a parallel library for C++.\n", "There are many options and the best solution will depend on the nature of the problem you are trying to solve. If you are trying to solve an embarassingly parallel problem then dividing and parallelising the tasks will be trivial. In that case the challenge will come in distributing and managing the data used. \nSome suggestions would be:\n\nICE Grid which has bindings for .Net and other common languages\nVelocity which is Microsoft's version of Oracle (Tangersol) Coherence\nThe forthcoming HPC offering from Microsoft Compute Cluster Server\nData Synapse Grid Server\n\n" ]
[ 10, 6, 2, 2, 0 ]
[]
[]
[ ".net", "parallel_processing" ]
stackoverflow_0000001387_.net_parallel_processing.txt
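A small C# sketch of the Parallel Extensions constructs mentioned above, Parallel.ForEach and PLINQ; the workload is a hypothetical stand-in for real CPU-bound work:

using System;
using System.Linq;
using System.Threading.Tasks;

public static class ParallelDemo
{
    // Placeholder for the CPU-intensive part of the application.
    static double Crunch(int n)
    {
        double acc = 0;
        for (int i = 1; i < 100000; i++) acc += Math.Sqrt(n * i);
        return acc;
    }

    public static void Main()
    {
        int[] inputs = Enumerable.Range(1, 1000).ToArray();

        // Data parallelism: iterations are spread across available cores.
        Parallel.ForEach(inputs, n => Crunch(n));

        // The same work expressed as a PLINQ query.
        double total = inputs.AsParallel().Select(Crunch).Sum();
        Console.WriteLine(total);
    }
}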
Q: merge rss feeds I want to merge multiple rss feeds into a single feed, removing any duplicates. Specifically, I'm interested in merging the feeds for the tags I'm interested in. [A quick search turned up some promising links, which I don't have time to visit at the moment] Broadly speaking, the ideal would be a reader that would list all the available tags on the site and toggle them on and off, allowing me to explore what's available, keep track of questions I've visited, new answers on interesting feeds, etc, etc . . . though I don't suppose such a thing exists right now. As I randomly explore the site and see questions I think are interesting, I inevitably find "oh yes, that one looked interesting a couple days ago when I read it the first time, and hasn't been updated since". It would be much nicer if my machine would keep track of such details for me :) Update: You can now use "and", "or", and "not" to combine multiple tags into a single feed: Tags AND Tags OR Tags Update: You can now use Filters to watch tags across one or multiple sites: Improved Tag Sets A: Have you heard of Yahoo's Pipes? It's an interactive feed aggregator and manipulator. List of 'hot pipes' to subscribe to, and ability to create your own (yahoo account required). I played with it during beta back in the day, and I had a blast. It's really fun and easy to aggregate different feeds and you can add logic or filters to the "pipes". You can even do more than just RSS, like import images from flickr. A: I created the stackoverflow tag feeds pipe. You can list your tags of choice into the text box and it will combine them into a single feed with all the unique posts. It escapes '#' and '+' characters for you. Alternatively, you can use the pipe's rss feed by appending your html-encoded tags separated by '+'s: http://pipes.yahoo.com/pipes/pipe.run?_id=uP22vN923RG_c71O1ZzWFw&_render=rss&tags=.net+c%23+powershell Unfortunately, though, this seems to strip out the content of the posts. The content is visible in the debug view, but the output only contains the post title. [Thanks to everyone for suggesting Yahoo Pipes! Had heard of it before, but never tried it until now :-] A: Here is an article on Merge Multiple RSS Feeds Into One with Yahoo! Pipes + FeedBurner. Another option is Feed Rinse, but they have a paid version as well as the free version. Additionally: I have heard good things about AideRss A: SimplePie is a PHP library that supports merging RSS feeds into one combined feed. I don't believe it does dupe checking out-of-the-box, but I found it trivial to write a little function to eliminate duplicate content via their GUIDs. A: Yahoo Pipes? 23 minutes later: Aww, I got answer-sniped by @Bernie Perez. Oh well :) A: In the latest Podcast, Jeff and Joel talked about the RSS feeds for tags, and Joel noted that there is only the current ability to do AND on tags, not OR. Jeff suggested that this would be included at some stage in the future. I think that you should request this on uservoice, or vote for it if it is already there.
merge rss feeds
I want to merge multiple rss feeds into a single feed, removing any duplicates. Specifically, I'm interested in merging the feeds for the tags I'm interested in. [A quick search turned up some promising links, which I don't have time to visit at the moment] Broadly speaking, the ideal would be a reader that would list all the available tags on the site and toggle them on and off, allowing me to explore what's available, keep track of questions I've visited, new answers on interesting feeds, etc, etc . . . though I don't suppose such a thing exists right now. As I randomly explore the site and see questions I think are interesting, I inevitably find "oh yes, that one looked interesting a couple days ago when I read it the first time, and hasn't been updated since". It would be much nicer if my machine would keep track of such details for me :) Update: You can now use "and", "or", and "not" to combine multiple tags into a single feed: Tags AND Tags OR Tags Update: You can now use Filters to watch tags across one or multiple sites: Improved Tag Sets
[ "Have you heard of Yahoo's Pipes. \n\nIts an interactive feed aggregator and\n manipulator. List of 'hot pipes' to\n subscribe to, and ability to create\n your own (yahoo account required).\n\nI played with it during beta back in the day, however I had a blast. Its really fun and easy to aggregate different feeds and you can add logic or filters to the \"pipes\". You can even do more then just RSS like import images from flickr.\n", "I create a the stackoverflow tag feeds pipe. You can list your tags of choice into the text box and it will combine them into a single feed with all the unique posts. It escapes '#' and '+' characters for you.\nAlternatively, you can use the pipe's rss feed by appending your html-encoded tags separated by '+'s:\nhttp://pipes.yahoo.com/pipes/pipe.run?_id=uP22vN923RG_c71O1ZzWFw&_render=rss&tags=.net+c%23+powershell\n\nUnfortunatley, though, this seems to strip out the content of the posts. The content is visible in the debug view, but the output only contains the post title.\n[Thanks to everyone for suggesting Yahoo Pipes! Had heard of it before, but never tried it until now :-]\n", "Here is an article on Merge Multiple RSS Feeds Into One with Yahoo! Pipes + FeedBurner. \nAnother option is Feed Rinse, but they have a paid version as well as the free version. \nAdditionally:\nI have heard good things about AideRss\n", "SimplePie is a PHP library that supports merging RSS feeds into one combined feed. I don't believe it does dupe checking out-of-the-box, but I found it trivial to write a little function to eliminate duplicate content via their GUIDs.\n", "Yahoo Pipes?\n23 minutes later:\nAww, I got answer-sniped by @Bernie Perez. Oh well :)\n", "In the latest Podcast, Jeff and Joel talked about the RSS feeds for tags, and Joel noted that there is only the current ability to do AND on tags, not OR.\nJeff suggested that this would be included at some stage in the future.\nI think that you should request this on uservoice, or vote for it if it is already there.\n" ]
[ 17, 4, 2, 2, 0, 0 ]
[]
[]
[ "feed", "rss" ]
stackoverflow_0000027148_feed_rss.txt
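For a code-level route in .NET, the SyndicationFeed type (System.ServiceModel.Syndication, .NET 3.5) can merge feeds and drop duplicates by item Id; a hedged C# sketch with placeholder feed URLs:

using System;
using System.Collections.Generic;
using System.Linq;
using System.ServiceModel.Syndication;
using System.Xml;

public static class FeedMerger
{
    public static void Main()
    {
        // Placeholder URLs; substitute the tag feeds you actually follow.
        string[] urls =
        {
            "http://example.com/feeds/tag/csharp",
            "http://example.com/feeds/tag/dotnet"
        };

        var seen = new HashSet<string>();
        var merged = new List<SyndicationItem>();

        foreach (string url in urls)
        {
            using (XmlReader reader = XmlReader.Create(url))
            {
                foreach (SyndicationItem item in SyndicationFeed.Load(reader).Items)
                {
                    // De-duplicate on the item's Id (the RSS guid).
                    if (item.Id != null && seen.Add(item.Id))
                        merged.Add(item);
                }
            }
        }

        // Newest first in the combined feed.
        var combined = new SyndicationFeed("Merged tags", "Combined tag feed",
            new Uri("http://example.com/merged"),
            merged.OrderByDescending(i => i.PublishDate));

        Console.WriteLine("{0} unique items", merged.Count);
    }
}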
Q: What is the best way to use a console when developing? For scripting languages, what is the most effective way to utilize a console when developing? Are there ways to be more productive with a console than a "compile and run" only language? Added clarification: I am thinking more along the lines of Ruby, Python, Boo, etc. Languages that are used for full blown apps, but also have a way to run small snippets of code in a console. A: I am thinking more along the lines of Ruby, ... Well for Ruby the irb interactive prompt is a great tool for "practicing" something simple. Here are the things I'll mention about the irb to give you an idea of effective use: Automation. You are allowed a .irbrc file that will be automatically executed when launching irb. That means you can load your favorite libraries or do whatever you want in full Ruby automatically. To see what I mean check out some of the ones at dotfiles.org. Autocompletion. That even makes writing code easier. Can't remember that string method to remove newlines? "".ch<tab> produces chop and chomp. NOTE: you have to enable autocompletion for irb yourself Divide and Conquer. irb makes the small things really easy. If you're writing a function to manipulate strings, the ability to test the code interactively right in the prompt saves a lot of time! For instance you can just open up irb and start running functions on an example string and have working and tested code already ready for your library/program. Learning, Experimenting, and Hacking. Something like this would take a very long time to test in C/C++, even Java. If you tried testing them all at once you might seg-fault and have to start over. Here I'm just learning how the String#[] function works. joe[~]$ irb >> "12341:asdf"[/\d+/] # => "12341" >> "12341:asdf"[/\d*/] # => "12341" >> "12341:asdf"[0..5] # => "12341:" >> "12341:asdf"[0...5] # => "12341" >> "12341:asdf"[0, ':'] TypeError: can't convert String into Integer from (irb):5:in `[]' from (irb):5 >> "12341:asdf"[0, 5] # => "12341" Testing and Benchmarking. Now they are nice and easy to perform. Here is someone's idea to emulate the Unix time function for quick benchmarking. Just add it to your .irbrc file and it's always there! Debugging - I haven't used this much myself but there is always the ability to debug code like this. Or pull out some code and run it in the irb to see what it's actually doing. I'm sure I'm missing some things but I hit on my favorite points. You really have zero limitation in shells so you're limited only by what you can think of doing. I almost always have a few shells running. Bash, JavaScript, and Ruby's irb to name a few. I use them for a lot of things! A: I think it depends on the console. The usefulness of a CMD console on Windows pales in comparison to a PowerShell console. A: You didn't say what OS you're using but on Linux I've been using a tabbed window manager (wmii) for a year or so and it has radically changed the way I use applications - consoles or otherwise. I often have four or more consoles and other apps on a virtual desktop and with wmii I don't have to fiddle with resizing windows to line everything up just so. I can trivially rearrange them into vertical columns, stack them up vertically, have them share equal amounts of vertical or horizontal space, and move them between screens. Say you open two consoles on your desktop. You'd get this (with apologies for the cronkey artwork): ---------------- | | | 1 | | | ---------------- ---------------- | | | 2 | | | ---------------- Now I want them side-by-side. I enter SHIFT-ALT-L in window 2 to move it rightwards and create two columns: ------- ------- | || | | || | | 1 || 2 | | || | | || | ------- ------- Now I could open another console and get ------- ------- | || 2 | | || | | | ------- | 1 | ------- | || 3 | | || | ------- ------- Then I want to temporarily view console 3 full-height, so I hit ALT-s in it and get: ------- ------- | | ------- | || | | 1 || 3 | | || | | || | ------- ------- Consoles 2 and 3 are stacked up now. I could also give windows tags. For example, in console 2 I could say ALT-SHIFT-twww+dev and that console would be visible in the 'www' and 'dev' virtual desktops. (The desktops are created if they don't already exist.) Even better, the console can be in a different visual configuration (e.g., stacked and full-screen) on each of those desktops. Anyway, I can't do tabbed window managers justice here. I don't know if it's relevant to your environment but if you get the chance to try this way of working you probably won't look back. A: I've added a shortcut to my Control-Shift-C key combination to bring up my Visual Studio 2008 Console. This alone has saved me countless seconds when needing to register a dll or do any other command. I imagine if you leverage this with another command tool you may have some massive productivity increases. A: Are you kidding? In my Linux environment, the console is my lifeblood. I'm proficient in bash scripting, so to me a console is very much like sitting in a REPL for Python or Lisp. You can quite literally do anything. I actually write tools used by my team in bash, and the console is the perfect place to do that development. I really only need an editor as a backing store for things as I figure them out.
What is the best way to use a console when developing?
For scripting languages, what is the most effective way to utilize a console when developing? Are there ways to be more productive with a console than a "compile and run" only language? Added clarification: I am thinking more along the lines of Ruby, Python, Boo, etc. Languages that are used for full blown apps, but also have a way to run small snippets of code in a console.
[ "\nI am thinking more along the lines of Ruby, ...\n\nWell for Ruby the irb interactive prompt is a great tool for \"practicing\" something simple. Here are the things I'll mention about the irb to give you an idea of effective use:\n\nAutomation. You are allowed a .irbrc file that will be automatically executed when launching irb. That means you can load your favorite libraries or do whatever you want in full Ruby automatically. To see what I mean check out some of the ones at dotfiles.org.\nAutocompletion. That even makes writing code easier. Can't remember that string method to remove newlines? \"\".ch<tab> produces chop and chomp. NOTE: you have to enable autocompletion for irb yourself\nDivide and Conquer. irb makes the small things really easy. If you're writing a function to manipulate strings, the ability to test the code interactively right in the prompt saves a lot of time! For instance you can just open up irb and start running functions on an example string and have working and tested code already ready for your library/program. \nLearning, Experimenting, and Hacking. Something like this would take a very long time to test in C/C++, even Java. If you tried testing them all at once you might seg-fault and have to start over.\nHere I'm just learning how the String#[] function works.\njoe[~]$ irb\n>> \"12341:asdf\"[/\\d+/]\n# => \"12341\" \n>> \"12341:asdf\"[/\\d*/]\n# => \"12341\" \n>> \"12341:asdf\"[0..5]\n# => \"12341:\" \n>> \"12341:asdf\"[0...5]\n# => \"12341\" \n>> \"12341:asdf\"[0, ':']\nTypeError: can't convert String into Integer\n from (irb):5:in `[]'\n from (irb):5\n>> \"12341:asdf\"[0, 5]\n# => \"12341\" \n\nTesting and Benchmarking. Now they are nice and easy to perform. Here is someone's idea to emulate the Unix time function for quick benchmarking. Just add it to your .irbrc file and its always there!\nDebugging - I haven't used this much myself but there is always the ability to debug code like this. Or pull out some code and run it in the irb to see what its actually doing.\n\nI'm sure I'm missing some things but I hit on my favorite points. You really have zero limitation in shells so you're limited only by what you can think of doing. I almost always have a few shells running. Bash, Javascript, and Ruby's irb to name a few. I use them for a lot of things!\n", "I think it depends on the console. The usefulness of a CMD console on windows pails in comparison to a Powershell console.\n", "You didn't say what OS you're using but on Linux I been using a tabbed window manager (wmii) for a year or so and it has radically changed the way I use applications - consoles or otherwise.\nI often have four or more consoles and other apps on a virtual desktop and with wmii I don't have to fiddle with resizing windows to line everything up just so. I can trivially rearrange them into vertical columns, stack them up vertically, have them share equal amounts of vertical or horizontal space, and move them between screens.\nSay you open two consoles on your desktop. You'd get this (with apologies for the cronkey artwork):\n ----------------\n| |\n| 1 |\n| |\n ----------------\n ----------------\n| |\n| 2 |\n| |\n ----------------\n\nNow I want them side-by-side. 
I enter SHIFT-ALT-L in window 2 to move it rightwards and create two columns:\n ------- -------\n| || |\n| || |\n| 1 || 2 |\n| || |\n| || |\n ------- -------\n\nNow I could open another console and get\n ------- -------\n| || 2 |\n| || |\n| | -------\n| 1 | -------\n| || 3 |\n| || |\n ------- -------\n\nThen I want to temporarily view console 3 full-height, so I hit ALT-s in it and get:\n ------- -------\n| | -------\n| || |\n| 1 || 3 |\n| || |\n| || |\n ------- -------\n\nConsoles 2 and 3 are stacked up now.\nI could also give windows tags. For example, in console 2 I could say ALT-SHIFT-twww+dev and that console would be visible in the 'www' and 'dev' virtual desktops. (The desktops are created if they don't already exist.) Even better, the console can be in a different visual configuration (e.g., stacked and full-screen) on each of those desktops.\nAnyway, I can't do tabbed window managers justice here. I don't know if it's relevant to your environment but if you get the chance to try this way of working you probably won't look back.\n", "I've added a shortcut to my Control-Shift-C key combination to bring up my Visual Studio 2008 Console. This alone has saved me countless seconds when needing to register a dll or do any other command. I imagine if you leverage this with another command tool and you may have some massive productivity increases.\n", "Are you kidding?\nIn my Linux environment, the console is my lifeblood. I'm proficient in bash scripting, so to me a console is very much like sitting in a REPL for Python or Lisp. You can quite literally do anything.\nI actually write tools used by my team in bash, and the console is the perfect place to do that development. I really only need an editor as a backing store for things as I figure them out.\n" ]
[ 2, 1, 1, 0, 0 ]
[]
[]
[ "console", "scripting" ]
stackoverflow_0000032537_console_scripting.txt
Q: Should DB layer members be static or instance? I've seen projects where the classes in the DB layer have just static functions in them and other projects where those classes need to be instantiated to get access to the member functions. Which is "better" and why? A: I like a single object to be correlated to a single record in the database, i.e. an object must be instantiated. This is your basic ActiveRecord pattern. In my experience, the one-object-to-one-row approach creates a much more fluid and literate presentation in code. Also, I like to treat objects as records and the class as the table. For example to change the name of a record I do: objPerson = new Person(id) objPerson.name = "George" objPerson.save() while to get all people who live in Louisiana I might do aryPeople = Person::getPeopleFromState("LA") There are plenty of criticisms of Active Record. You can especially run into problems where you are querying the database for each record or your classes are tightly coupled to your database, creating inflexibility in both. In that case you can move up a level and go with something like DataMapper. Many of the modern frameworks and ORM's are aware of some of these drawbacks and provide solutions for them. Do a little research and you will start to see that this is a problem that has a number of solutions and it all depends on your needs. A: It's all about the purpose of the DB Layer. If you use an instance to access the DB layer, you are allowing multiple versions of that class to exist. This is desirable if you want to use the same DB layer to access multiple databases for example. So you might have something like this: DbController archive = new DbController("dev"); DbController prod = new DbController("prod"); Which allows you to use multiple instances of the same class to access different databases. Conversely you might want to allow only one database to be used within your application at a time. If you want to do this then you could look at using a static class for this purpose. A: As lomaxx mentioned, it's all about the purpose of the DB model. I find it best to use static classes, as I usually only want one instance of my DAL classes being created. I'd rather use static methods than deal with the overhead of potentially creating multiple instances of my DAL classes where only 1 should exist that can be queried multiple times. A: I would say that it depends on what you want the "DB layer" to do... If you have general routines for executing a stored procedure, or sql statement, that return a dataset, then using static methods would make more sense to me, since you don't need a permanent reference to an object that created the dataset for you. I'd use a static method as well if I created a DB Layer that returned a strongly-typed class or collection as its result. If on the other hand you want to create an instance of a class, using a given parameter like an ID (see @barret-conrad's answer), to connect to the DB and get the necessary record, then you'd probably not want to use a static method on the class. But even then I'd say you'd probably have some sort of DB Helper class that DID have static methods that your other class was relying on. A: Another "it depends". However, I can also think of a very common scenario where static just won't work. If you have a web site that gets a decent amount of traffic, and you have a static database layer with a shared connection, you could be in trouble. In ASP.NET, there is one instance of your application created by default, and so if you have a static database layer you may only get one connection to the database for everyone who uses your web site.
Should DB layer members be static or instance?
I've seen projects where the classes in the DB layer have just static functions in them and other projects where those classes need to be instantiated to get access to the member functions. Which is "better" and why?
[ "I like a single object to be correlated to a single record in the database, i.e. an object must be instantiated. This is your basic ActiveRecord pattern. In my experience, the one-object-to-one-row approach creates a much more fluid and literate presentation in code. Also, I like to treat objects as records and the class as the table. For example to change the name of a record I do:\nobjPerson = new Person(id)\n\nobjPerson.name = \"George\"\n\nobjPerson.save()\n\nwhile to get all people who live in Louisiana I might do\naryPeople = Person::getPeopleFromState(\"LA\")\n\nThere are plenty of criticisms of Active Record. You can especially run into problems where you are querying the database for each record or your classes are tightly coupled to your database, creating inflexibility in both. In that case you can move up a level and go with something like DataMapper. \nMany of the modern frameworks and ORM's are aware of some of these drawbacks and provide solutions for them. Do a little research and you will start to see that this is a problem that has a number of solutions and it all depend on your needs. \n", "It's all about the purpose of the DB Layer.\nIf you use an instance to access the DB layer, you are allowing multiple versions of that class to exist. This is desirable if you want to use the same DB layer to access multiple databases for example.\nSo you might have something like this:\nDbController acrhive = new DbController(\"dev\");\nDbController prod = new DbController(\"prod\");\n\nWhich allows you to use multiple instances of the same class to access different databases.\nConversely you might want to allow only one database to be used within your application at a time. If you want to do this then you could look at using a static class for this purpose.\n", "As lomaxx mentioned, it's all about the purpose of the DB model.\nI find it best to use static classes, as I usually only want one instance of my DAL classes being created. I'd rather use static methods than deal with the overhead of potentially creating multiple instances of my DAL classes where only 1 should exist that can be queried multiple times.\n", "I would say that it depends on what you want the \"DB layer\" to do...\nIf you have general routines for executing a stored procedure, or sql statement, that return a dataset, then using static methods would make more sense to me, since you don't need a permanent reference to an object that created the dataset for you.\nI'd use a static method as well if I created a DB Layer that returned a strongly-typed class or collection as its result.\nIf on the other hand you want to create an instance of a class, using a given parameter like an ID (see @barret-conrad's answer), to connect to the DB and get the necessary record, then you'd probably not want to use a static method on the class. But even then I'd say you'd probably have some sort of DB Helper class that DID have static methods that your other class was relying on.\n", "Another \"it depends\". However, I can also think of a very common scenario where static just won't work. If you have a web site that gets a decent amount of traffic, and you have a static database layer with a shared connection, you could be in trouble. In ASP.Net, there is one instance of your application created by default, and so if you have a static database layer you may only get one connection to the database for everyone who uses your web site.\n" ]
[ 2, 1, 0, 0, 0 ]
[ "It depends which model you subscribe to. ORM (Object Relational Model) or Interface Model. ORM is very popular right now because of frameworks like nhibernate, LINQ to SQL, Entity Framework, and many others. The ORM lets you customize some business constraints around your object model and pass it around with out actually knowing how it should be committed to the database. Everything related to inserting, updating, and deleting happens in the object and doesn't really have to worry the developer too much.\nThe Interface Model like the Enterprise Data Pattern made popular by Microsoft, requires you to know what state your object is in and how it should be handled. It also requires you to create the necessary SQL to perform the actions.\nI would say go with ORM.\n" ]
[ -2 ]
[ "class_design", "database", "orm" ]
stackoverflow_0000016320_class_design_database_orm.txt
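A minimal C# sketch of the two shapes the answers above contrast (DbController, DbHelper, and the connection strings are placeholders, not names from the original answers):

using System.Data;
using System.Data.SqlClient;

// Instance-based layer: one object per target database, as in lomaxx's example.
public class DbController
{
    private readonly string _connectionString;

    public DbController(string connectionString)
    {
        _connectionString = connectionString;
    }

    public DataTable Query(string sql)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var adapter = new SqlDataAdapter(sql, connection))
        {
            var table = new DataTable();
            adapter.Fill(table);   // Fill opens and closes the connection itself
            return table;
        }
    }
}

// Static layer: one application-wide entry point, one database at a time.
public static class DbHelper
{
    private static readonly string ConnectionString = "...";  // placeholder

    public static DataTable Query(string sql)
    {
        using (var connection = new SqlConnection(ConnectionString))
        using (var adapter = new SqlDataAdapter(sql, connection))
        {
            var table = new DataTable();
            adapter.Fill(table);
            return table;
        }
    }
}

Note that because both versions open a fresh connection per call, neither runs into the shared-connection problem the last answer warns about; that issue bites when a static layer caches one open connection.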
Q: Is there anything similar to the OS X InputManager on Windows? Is there anything similar on Windows that would achieve the same as the InputManager on OS X? A: I'm pretty sure Windows has an API that developers can use to create new kinds of text input systems. I gather there are a wide variety of text input systems in use in non-Roman-derived markets, many of which are provided by third parties. It's unclear if that's what you were really asking about, though, because you just assumed everyone knows what you would want to use an Input Manager for on Mac OS X. If you want to create a new type of input method, ask how to do that. If you want to get your own code running inside other applications, ask how to do that. Don't just assume people can read your mind when asking questions, and don't assume that they have the same experience that you do and will recognize all the same platform-specific terminology. A: If you are looking to inject code into processes (which is what Input Managers are most commonly used for), the Windows equivalents are: AppInit_DLLs to automatically load your DLL into new processes, CreateRemoteThread to start a new thread in a particular existing process, and SetWindowsHookEx to allow the capture of window events (keyboard, mouse, window creating, drawing, etc). All of these methods require a DLL which will be injected into the remote process. C would be the best language to write such a DLL in as such a DLL needs to be quite lightweight so as not to bog the system down. RPC methods such as named pipes can be used to communicate with a master process should this be required. Googling for these three APIs will turn up general sample code for these methods.
Is there anything similar to the OS X InputManager on Windows?
Is there anything similar on Windows that would achieve the same as the InputManager on OS X?
[ "I'm pretty sure Windows has an API that developers can use to create new kinds of text input systems. I gather there are a wide variety of text input systems in use in non-Roman-derived markets, many of which are provided by third parties.\nIt's unclear if that's what you were really asking about, though, because you just assumed everyone knows what you would want to use an Input Manager for on Mac OS X.\n\nIf you want to create a new type of input method, ask how to do that.\nIf you want to get your own code running inside other applications, ask how to do that.\n\nDon't just assume people can read your mind when asking questions, and don't assume that they have the same experience that you do and will recognize all the same platform-specific terminology.\n", "If you are looking to inject code into processes (which is what Input Managers are most commonly used for), the Windows equivalents are:\n\nAppInit_DLLs to automatically load your DLL into new processes,\nCreateRemoteThread to start a new thread in a particular existing process, and\nSetWindowsHookEx to allow the capture of window events (keyboard, mouse, window creating, drawing, etc).\n\nAll of these methods require a DLL which will be injected into the remote process. C would be the best language to write such a DLL in as such a DLL needs to be quite light weight as to not bog the system down. RPC methods such as named pipes can be used to communicate to a master process should this be required.\nGoogling for these three APIs will turn up general sample code for these methods.\n" ]
[ 1, 1 ]
[]
[]
[ "macos", "winapi", "windows" ]
stackoverflow_0000030972_macos_winapi_windows.txt
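For the SetWindowsHookEx route named in the second answer, here is a minimal C# sketch of a low-level keyboard hook. It is a simplification, not the DLL-injection variant the answer describes: WH_KEYBOARD_LL runs the callback in your own process, so no DLL needs to be injected. Everything beyond the documented Win32 imports is a placeholder.

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Windows.Forms;

class HookSketch
{
    const int WH_KEYBOARD_LL = 13;
    delegate IntPtr HookProc(int nCode, IntPtr wParam, IntPtr lParam);

    // Keep the delegate in a field so the GC can't collect it while the hook is set.
    static readonly HookProc Proc = KeyboardProc;
    static IntPtr _hook = IntPtr.Zero;

    [DllImport("user32.dll", SetLastError = true)]
    static extern IntPtr SetWindowsHookEx(int idHook, HookProc lpfn, IntPtr hMod, uint dwThreadId);
    [DllImport("user32.dll")]
    static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode, IntPtr wParam, IntPtr lParam);
    [DllImport("user32.dll")]
    static extern bool UnhookWindowsHookEx(IntPtr hhk);
    [DllImport("kernel32.dll")]
    static extern IntPtr GetModuleHandle(string lpModuleName);

    static IntPtr KeyboardProc(int nCode, IntPtr wParam, IntPtr lParam)
    {
        // Inspect or log the keystroke here, then always hand it on.
        return CallNextHookEx(_hook, nCode, wParam, lParam);
    }

    static void Main()
    {
        using (Process p = Process.GetCurrentProcess())
        using (ProcessModule m = p.MainModule)
            _hook = SetWindowsHookEx(WH_KEYBOARD_LL, Proc, GetModuleHandle(m.ModuleName), 0);
        Application.Run();                 // low-level hooks need a message loop
        UnhookWindowsHookEx(_hook);
    }
}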
Q: How expensive is ST_GeomFromText In postgis, is the ST_GeomFromText call very expensive? I ask mostly because I have a frequently called query that attempts to find the point that is nearest another point that matches some criteria, and which is also within a certain distance of that other point, and the way I currently wrote it, it's doing the same ST_GeomFromText twice: $findNearIDMatchStmt = $postconn->prepare( "SELECT internalid " . "FROM waypoint " . "WHERE id = ? AND " . " category = ? AND ". " (b.category in (1, 3) OR type like ?) AND ". " ST_DWithin(point, ST_GeomFromText(?," . SRID . " ),". SMALL_EPSILON . ") " . " ORDER BY ST_Distance(point, ST_GeomFromText(?,", SRID . " )) " . " LIMIT 1"); Is there a better way to re-write this? Slightly OT: In the preview screen, all my underscores are being rendered as & # 9 5 ; - I hope that's not going to show up that way in the post. A: I don't believe ST_GeomFromText() is particularly expensive, although in the past I've optimized PostGIS queries by creating a function, declaring a variable and then assigning the result of ST_GeomFromText to the variable. Have you tried checking the execution plan for your query with a variety of different parameters because that should give you a definite idea of which bits of the query are taking the time? I'm guessing most of the execution time will be in the calls to ST_DWithin() and ST_Distance(), although if the id and category columns aren't indexed then it might be doing some interesting table scanning. A: @Ubiguch It appears that ST_DWithin uses the spatial index, so that seems to cut down on the number of points to be queried pretty quickly. navaid=> explain select internalid from waypoint where id != 'KROC' AND ST_DWithin(point, ST_GeomFromText('POINT(-77.6723888888889 43.1188611111111)',4326), 0.05) order by st_distance(point, st_geomfromtext('POINT(-77.6723888888889 43.1188611111111)',4326)) limit 1; QUERY PLAN ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit (cost=8.37..8.38 rows=1 width=104) -> Sort (cost=8.37..8.38 rows=1 width=104) Sort Key: (st_distance(point, '0101000020E61000002FFE676B086B53C0847E44D7368F4540'::geometry)) -> Index Scan using waypoint_point_idx on waypoint (cost=0.00..8.36 rows=1 width=104) Index Cond: (point && '0103000020E61000000100000005000000000000C03B6E53C000000060D0884540000000C03B6E53C0000000409D95454000000020D56753C0000000409D95454000000020D56753C000000060D0884540000000C03B6E53C000000060D0884540'::geometry) Filter: (((id)::text <> 'KROC'::text) AND (point && '0103000020E61000000100000005000000000000C03B6E53C000000060D0884540000000C03B6E53C0000000409D95454000000020D56753C0000000409D95454000000020D56753C000000060D0884540000000C03B6E53C000000060D0884540'::geometry) AND ('0101000020E61000002FFE676B086B53C0847E44D7368F4540'::geometry && st_expand(point, 0.05::double precision)) AND (st_distance(point, '0101000020E61000002FFE676B086B53C0847E44D7368F4540'::geometry) < 0.05::double precision)) (6 rows) Without the order by and the limit, it looks like a typical query is only returning 5-10 waypoints max. So I probably shouldn't worry about the additional cost of the filter that's applied to the points returned.
How expensive is ST_GeomFromText
In postgis, is the ST_GeomFromText call very expensive? I ask mostly because I have a frequently called query that attempts to find the point that is nearest another point that matches some criteria, and which is also within a certain distance of that other point, and the way I currently wrote it, it's doing the same ST_GeomFromText twice: $findNearIDMatchStmt = $postconn->prepare( "SELECT internalid " . "FROM waypoint " . "WHERE id = ? AND " . " category = ? AND ". " (b.category in (1, 3) OR type like ?) AND ". " ST_DWithin(point, ST_GeomFromText(?," . SRID . " ),". SMALL_EPSILON . ") " . " ORDER BY ST_Distance(point, ST_GeomFromText(?,", SRID . " )) " . " LIMIT 1"); Is there a better way to re-write this? Slightly OT: In the preview screen, all my underscores are being rendered as & # 9 5 ; - I hope that's not going to show up that way in the post.
[ "I don't believe ST_GeomFromText() is particularly expensive, although in the past I've optimized PostGIS queries by creating a function, declaring a variable and then assigning the result of ST_GeomFromText to the variable.\nHave you tried checking the execution plan for you query with a variety of different parameters because that should give you a definite idea of which bits of the query are taking the time? \nI'm guessing most of the execution time will be in the calls to ST_DWithin() and ST_Distance(), although if the id and category columns aren't indexed then it might be doing some interesting table scanning.\n", "@Ubiguch\nIt appears that ST_DWithin uses the spatial index, so that seems to cut down on the number of points to be queried pretty quickly.\n navaid=> explain select internalid from waypoint where id != 'KROC' AND ST_DWithin(point, ST_GeomFromText('POINT(-77.6723888888889 43.1188611111111)',4326), 0.05) order by st_distance(point, st_geomfromtext('POINT(-77.6723888888889 43.1188611111111)',4326)) limit 1;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=8.37..8.38 rows=1 width=104)\n -> Sort (cost=8.37..8.38 rows=1 width=104)\n Sort Key: (st_distance(point, '0101000020E61000002FFE676B086B53C0847E44D7368F4540'::geometry))\n -> Index Scan using waypoint_point_idx on waypoint (cost=0.00..8.36 rows=1 width=104)\n Index Cond: (point && '0103000020E61000000100000005000000000000C03B6E53C000000060D0884540000000C03B6E53C0000000409D95454000000020D56753C0000000409D95454000000020D56753C000000060D0884540000000C03B6E53C000000060D0884540'::geometry)\n Filter: (((id)::text <> 'KROC'::text) AND (point && '0103000020E61000000100000005000000000000C03B6E53C000000060D0884540000000C03B6E53C0000000409D95454000000020D56753C0000000409D95454000000020D56753C000000060D0884540000000C03B6E53C000000060D0884540'::geometry) AND ('0101000020E61000002FFE676B086B53C0847E44D7368F4540'::geometry && st_expand(point, 0.05::double precision)) AND (st_distance(point, '0101000020E61000002FFE676B086B53C0847E44D7368F4540'::geometry) < 0.05::double precision))\n(6 rows)\n\nWithout the order by and the limit, it looks like a typical query is only returning 5-10 waypoints max. So I probably shouldn't worry about the additional cost of the filter that's applied to the points returned.\n" ]
[ 1, 1 ]
[]
[]
[ "gis", "postgis" ]
stackoverflow_0000036182_gis_postgis.txt
Q: Closing and Disposing a WCF Service The Close method on an ICommunicationObject can throw two types of exceptions as MSDN outlines here. I understand why the Close method can throw those exceptions, but what I don't understand is why the Dispose method on a service proxy calls the Close method without a try around it. Isn't your Dispose method the one place where you want to make sure you don't throw any exceptions? A: It seems to be a common design pattern in .NET code. Here is a citation from Framework design guidelines Consider providing method Close(), in addition to the Dispose(), if close is standard terminology in the area. When doing so, it is important that you make the Close implementation identical to Dispose ... Here is a blog post in which you can find a workaround for this System.ServiceModel.ClientBase design problem A: Yes, typically Dispose is one of the places you want to ensure exceptions aren't thrown. However, based on this MSDN forum thread there were some historical reasons for this behavior. As such, the recommended pattern is the try{Close}/catch{Abort} paradigm.
Closing and Disposing a WCF Service
The Close method on an ICommunicationObject can throw two types of exceptions as MSDN outlines here. I understand why the Close method can throw those exceptions, but what I don't understand is why the Dispose method on a service proxy calls the Close method without a try around it. Isn't your Dispose method the one place where you want to make sure you don't throw any exceptions?
[ "It seems to be a common design pattern in .NET code. Here is a citation from Framework design guidelines \n\nConsider providing method Close(), in addition to the Dispose(), if close is standard terminology in the area. When doing so, it is important that you make the Close implementation identical to Dispose ...\n\nHere is a blog post in which you can find workaround for this System.ServiceModel.ClientBase design problem \n", "Yes, typically Dispose is one of the places you want to ensure exceptions aren't thrown. However, based on this MSDN forum thread there were some historical reasons for this behavior. As such, the recommended pattern is the try{Close}/catch{Abort} paradigm.\n" ]
[ 10, 10 ]
[]
[]
[ "wcf", "web_services" ]
stackoverflow_0000023867_wcf_web_services.txt
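The try{Close}/catch{Abort} pattern the second answer recommends, sketched in C# against a hypothetical generated proxy (MyServiceClient and DoWork are placeholder names):

using System;
using System.ServiceModel;

var client = new MyServiceClient();
try
{
    client.DoWork();                // placeholder service call
    client.Close();                 // can throw CommunicationException or TimeoutException
}
catch (CommunicationException)
{
    client.Abort();                 // moves the channel to Closed without throwing
}
catch (TimeoutException)
{
    client.Abort();
}

The point of the pattern is exactly the question's complaint: because Dispose() just calls Close(), a using block can throw on the closing brace, so you call Close() explicitly and fall back to Abort() on failure.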
Q: Add service reference to Amazon service fails Add service reference to Amazon service fails, saying "Could not load file or assembly 'System.Core, Version=3.5.0.0,...' or one or more of its dependencies. The module was expected to contain an assembly manifest." This is in VS 2008, haven't installed SP1 on this machine yet. Any ideas? A: This can happen if ASP.NET isn't installed. Go to Add/Remove Windows Components and look under IIS; make sure that ASP.NET is checked (meaning that it's installed.) That should clear up your problem!
Add service reference to Amazon service fails
Add service reference to Amazon service fails, saying "Could not load file or assembly 'System.Core, Version=3.5.0.0,...' or one or more of its dependencies. The module was expected to contain an assembly manifest." This is in VS 2008, haven't installed SP1 on this machine yet. Any ideas?
[ "This can happen if ASP.NET isn't installed. Go to Add/Remove Windows Components and look under IIS; make sure that ASP.NET is checked (meaning that it's installed.) That should clear up your problem!\n" ]
[ 1 ]
[]
[]
[ "amazon", "web_services" ]
stackoverflow_0000036575_amazon_web_services.txt
Q: Is there an easy way to do transparent forms in a VB .NET app? I'm writing a simple app that's going to have a tiny form sitting in one corner of the screen, updating itself. I'd really love for that form to be transparent and to have the transparency be user-configurable. Is there any easy way to achieve this? A: You could try using the Opacity property of the Form. Here's the relevant snippet from the MSDN page: Private Sub CreateMyOpaqueForm() ' Create a new form. Dim form2 As New Form() ' Set the text displayed in the caption. form2.Text = "My Form" ' Set the opacity to 75%. form2.Opacity = 0.75 ' Size the form to be 300 pixels in height and width. form2.Size = New Size(300, 300) ' Display the form in the center of the screen. form2.StartPosition = FormStartPosition.CenterScreen ' Display the form as a modal dialog box. form2.ShowDialog() End Sub A: You can set the Form.Opacity property. It should do what you want. A: Set Form.Opacity = 0.0 on page load. I set up something like what you're talking about on an app about a year ago. Using a While loop with a small Sleep you can set up a nice fading effect. A: I don't know exactly what you mean by transparent, but if you use WPF you can set AllowTransparency = True on your form and then remove the form's style/border and then set the background to a color that has a zero alpha channel. Then, you can draw on the form all you want and the background will be see-through and the other stuff will be fully visible. Additionally, you could set the background to a low-opacity layer so you can half see through the form.
Is there an easy way to do transparent forms in a VB .NET app?
I'm writing a simple app that's going to have a tiny form sitting in one corner of the screen, updating itself. I'd really love for that form to be transparent and to have the transparency be user-configurable. Is there any easy way to achieve this?
[ "You could try using the Opacity property of the Form. Here's the relevant snippet from the MSDN page:\nprivate Sub CreateMyOpaqueForm()\n ' Create a new form.\n Dim form2 As New Form()\n ' Set the text displayed in the caption.\n form2.Text = \"My Form\"\n ' Set the opacity to 75%.\n form2.Opacity = 0.75\n ' Size the form to be 300 pixels in height and width.\n form2.Size = New Size(300, 300)\n ' Display the form in the center of the screen.\n form2.StartPosition = FormStartPosition.CenterScreen\n\n ' Display the form as a modal dialog box.\n form2.ShowDialog()\nEnd Sub\n\n", "You can set the Form.Opacity property. It should do what you want.\n", "Set Form.Opacity = 0.0 on page load\nI set something like what your talking about on an app about a year ago. Using a While loop with a small Sleep you can setup a nice fading effect.\n", "I don't know exactly what you mean by transparent, but if you use WPF you can set AllowTransparency = True on your form and then remove the form's style/border and then set the background to a color that has a zero alpha channel. Then, you can draw on the form all you want and the background will be see-through and the other stuff will be fully visible. Additionally, you could set the background to a low-opacity layer so you can half see through the form.\n" ]
[ 4, 0, 0, 0 ]
[]
[]
[ "transparency", "vb.net" ]
stackoverflow_0000036563_transparency_vb.net.txt
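One way to get the fading effect the third answer describes, sketched in C# WinForms (the original used a loop with Sleep; a Timer avoids blocking the UI thread, and the 0.75 target and 50 ms interval are arbitrary choices):

using System;
using System.Windows.Forms;

public class FadingForm : Form
{
    private readonly Timer _timer = new Timer { Interval = 50 };

    public FadingForm()
    {
        Opacity = 0.0;                            // start fully transparent
        _timer.Tick += (s, e) =>
        {
            Opacity += 0.05;                      // step the opacity up each tick
            if (Opacity >= 0.75) _timer.Stop();   // stop at the configured transparency
        };
        Load += (s, e) => _timer.Start();
    }
}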
Q: C# Linq Grouping I'm experimenting with Linq and am having trouble figuring out grouping. I've gone through several tutorials but for some reason can't figure this out. As an example, say I have a table (SiteStats) with multiple website IDs that stores a count of how many visitors by type have accessed each site in total and for the past 30 days. ╔════════╦═════════════╦════════╦══════╗ ║ SiteId ║ VisitorType ║ Last30 ║ Total║ ╠════════╬═════════════╬════════╬══════╣ ║ 1 ║ 1 ║ 10 ║ 100 ║ ║ 1 ║ 2 ║ 40 ║ 140 ║ ║ 2 ║ 1 ║ 20 ║ 180 ║ ╚════════╩═════════════╩════════╩══════╝ In SQL, I can easily get the counts for SiteID 1 with the following: SELECT SiteId, SUM(Last30) AS Last30Sum FROM Sites WHERE SiteId = 1 GROUP BY SiteId and should get a row like... ╔════════╦════════════╗ ║ SiteId ║ Last30Total║ ╠════════╬════════════╣ ║ 1 ║ 50 ║ ╚════════╩════════════╝ However I'm not sure how to get this result using Linq. I've tried: var statsRecord = from ss in db.SiteStats where ss.SiteId == siteId group ss by ss.SiteId into ss select ss; but I'm not able to get back the total with something like statsRecord.Last30 Can someone please let me know where I'm going wrong? Any help is appreciated. A: Actually, although Thomas' code will work, it is more succinct to use a lambda expression: var totals = from s in sites group s by s.SiteID into grouped select new { SiteID = grouped.Key, Last30Sum = grouped.Sum( s => s.Last30 ) }; which uses the Sum extension method without the need for a nested LINQ operation. as per the LINQ 101 examples - http://msdn.microsoft.com/en-us/vcsharp/aa336747.aspx#sumGrouped A: Easiest way for me to illustrate is using in-memory objects so it's clear what's happening. LINQ to SQL should be able to take that same LINQ query and translate it into appropriate SQL. public class Site { static void Main() { List<Site> sites = new List<Site>() { new Site() { SiteID = 1, VisitorType = 1, Last30 = 10, Total = 100, }, new Site() { SiteID = 1, VisitorType = 2, Last30 = 40, Total = 140, }, new Site() { SiteID = 2, VisitorType = 1, Last30 = 20, Total = 180, }, }; var totals = from s in sites group s by s.SiteID into grouped select new { SiteID = grouped.Key, Last30Sum = (from value in grouped select value.Last30).Sum(), }; foreach (var total in totals) { Console.WriteLine("Site: {0}, Last30Sum: {1}", total.SiteID, total.Last30Sum); } } public int SiteID { get; set; } public int VisitorType { get; set; } public int Last30 { get; set; } public int Total { get; set; } }
C# Linq Grouping
I'm experimenting with Linq and am having trouble figuring out grouping. I've gone through several tutorials but for some reason can't figure this out. As an example, say I have a table (SiteStats) with multiple website IDs that stores a count of how many visitors by type have accessed each site in total and for the past 30 days. ╔════════╦═════════════╦════════╦══════╗ ║ SiteId ║ VisitorType ║ Last30 ║ Total║ ╠════════╬═════════════╬════════╬══════╣ ║ 1 ║ 1 ║ 10 ║ 100 ║ ║ 1 ║ 2 ║ 40 ║ 140 ║ ║ 2 ║ 1 ║ 20 ║ 180 ║ ╚════════╩═════════════╩════════╩══════╝ In SQL, I can easily get the counts for SiteID 1 with the following: SELECT SiteId, SUM(Last30) AS Last30Sum FROM Sites WHERE SiteId = 1 GROUP BY SiteId and should get a row like... ╔════════╦════════════╗ ║ SiteId ║ Last30Total║ ╠════════╬════════════╣ ║ 1 ║ 50 ║ ╚════════╩════════════╝ However I'm not sure how to get this result using Linq. I've tried: var statsRecord = from ss in db.SiteStats where ss.SiteId == siteId group ss by ss.SiteId into ss select ss; but I'm not able to get back the total with something like statsRecord.Last30 Can someone please let me know where I'm going wrong? Any help is appreciated.
[ "Actually, although Thomas' code will work, it is more succint to use a lambda expression:\nvar totals =\nfrom s in sites\ngroup s by s.SiteID into grouped\nselect new\n{\n SiteID = grouped.Key,\n Last30Sum = grouped.Sum( s => s.Last30 )\n};\n\nwhich uses the Sum extension method without the need for a nested LINQ operation.\nas per the LINQ 101 examples - http://msdn.microsoft.com/en-us/vcsharp/aa336747.aspx#sumGrouped\n", "Easiest way for me to illustrate is using in-memory objects so it's clear what's happening. LINQ to SQL should be able to take that same LINQ query and translate it into appropriate SQL.\npublic class Site\n{\n static void Main()\n {\n List<Site> sites = new List<Site>()\n {\n new Site() { SiteID = 1, VisitorType = 1, Last30 = 10, Total = 100, },\n new Site() { SiteID = 1, VisitorType = 2, Last30 = 40, Total = 140, },\n new Site() { SiteID = 2, VisitorType = 1, Last30 = 20, Total = 180, },\n };\n\n var totals =\n from s in sites\n group s by s.SiteID into grouped\n select new\n {\n SiteID = grouped.Key,\n Last30Sum = \n (from value in grouped\n select value.Last30).Sum(),\n };\n\n foreach (var total in totals)\n {\n Console.WriteLine(\"Site: {0}, Last30Sum: {1}\", total.SiteID, total.Last30Sum);\n }\n }\n\n public int SiteID { get; set; }\n public int VisitorType { get; set; }\n public int Last30 { get; set; }\n public int Total { get; set; }\n}\n\n" ]
[ 34, 4 ]
[]
[]
[ "c#", "linq" ]
stackoverflow_0000034913_c#_linq.txt
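The same grouped sum can also be written in method syntax; a sketch reusing the question's db.SiteStats and siteId (both assumed to exist as in the original post):

using System.Linq;

var totals = db.SiteStats
    .Where(s => s.SiteId == siteId)                 // the SQL WHERE clause
    .GroupBy(s => s.SiteId)                         // the SQL GROUP BY
    .Select(g => new { SiteId = g.Key, Last30Sum = g.Sum(s => s.Last30) });

// With a single-site filter, totals has at most one element; First() pulls it out.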
Q: Boundary Tests For a Networked App Besides "no connection", what other failure modes should I test for? How do I simulate a high-latency link, an unreliable link, or all the other sorts of crazy stuff that will undoubtedly happen "in the wild"? How about wireless applications? How do I test the performance in a less-than-ideal WL environment? A: To add to TimK's answer, if you have a router, test pulling the upstream link on the router, this will test a bad connection without your system knowing that you lost the physical link. Also if you plug it back in after a few seconds it's possible that the connection won't be lost*. This can simulate a very high latency. *this depends on your ISP and your router. A: If you're using Linux, try Virtual Distributed Ethernet (VDE). VDE gives you virtualised switches/hubs and Ethernet cables. You can tune network characteristics such as latency, delay, MTU, errored bits per MB, bandwidth, duplicates, etc on individual cables - all in real time! A: You definitely want to test physically pulling the cable out. Lots of networking code will throw different exceptions in that scenario vs when the connection has just been lost. A: To add to TimK's answer, if you have a router, test pulling the upstream link on the router, this will test a bad connection without your system knowing that you lost the physical link. A: Our network/server closet is a spaghetti-mess of wires; I'm not going to walk in there and start unplugging stuff lest I hit something mission-critical. (At least I have access to it; I'm sure many readers don't even know where their routers are.) Similarly, both ends of the ethernet cable require a hands-and-knees adventure to reach. I tested enabling/disabling the network adapter, and I'm going to test from my cable internet connection from home as well. Also, I had the idea of installing Tor to create a high latency connection. For wireless connections, I have a metal box to test what happens when the signal dies, but I notice that network connection behavior is very different depending on how I test: put the transmitter/receiver in a metal box go stand next to the microwave in the kitchen and turn it on go stand in a little closet which has concrete walls
Boundary Tests For a Networked App
Besides "no connection", what other failure modes should I test for? How do I simulate a high-latency link, an unreliable link, or all the other sorts of crazy stuff that will undoubtedly happen "in the wild"? How about wireless applications? How do I test the performance in a less-than-ideal WL environment?
[ "\nTo add to TimK's answer, if you have a router, test pulling the upstream link on the router, this will test a bad connection without your system knowing that you lost the physical link.\n\nAlso if you plug it back in after a few seconds it's possible that the connection won't be lost*. This can simulate a very high latency.\n*this depends on your ISP and your router. \n", "If you're using Linux, try Virtual Distributed Ethermet (VDE).\nVDE gives you virtualised switches/hubs and Ethernet cables. You can tune network characteristics such as latency, delay, MTU, errored bits per MB, bandwidth, duplicates, etc on individual cables - all in real time!\n", "You definitely want to test physically pulling the cable out. Lots of networking code will throw different exceptions in that scenario vs when the connection has just been lost.\n", "To add to TimK's answer, if you have a router, test pulling the upstream link on the router, this will test a bad connection without your system knowing that you lost the physical link.\n", "Our network/server closet is a spaghetti-mess of wires; I'm not going to walk in there and start unplugging stuff lest I hit something mission-critical. (At least I have access to it; I'm sure many readers don't even know where their routers are.) Similarly, both ends of the ethernet cable require a hands-and-knees adventure to reach.\nI tested enabling/disabling the network adapter, and I'm going to test from my cable internet connection from home as well. Also, I had the idea of installing Tor to create a high latency connection.\nFor wireless connections, I have a metal box to test what happens when the signal dies, but I notice that network connection behavior is very different depending on how I test:\n\nput the transmitter/reciever in a metal box\ngo stand next to the microwave in the kitchen and turn it on\ngo stand in a little closet which has concrete walls\n\n" ]
[ 1, 1, 0, 0, 0 ]
[]
[]
[ "networking", "testing", "wireless" ]
stackoverflow_0000023205_networking_testing_wireless.txt
Q: How to get controls in WPF to fill available space? Some WPF controls (like the Button) seem to happily consume all the available space in its container if you don't specify the height it is to have. And some, like the ones I need to use right now, the (multiline) TextBox and the ListBox seem more worried about just taking the space necessary to fit their contents, and no more. If you put these guys in a cell in a UniformGrid, they will expand to fit the available space. However, UniformGrid instances are not right for all situations. What if you have a grid with some rows set to a * height to divide the height between itself and other * rows? What if you have a StackPanel and you have a Label, a List and a Button, how can you get the list to take up all the space not eaten by the label and the button? I would think this would really be a basic layout requirement, but I can't figure out how to get them to fill the space that they could (putting them in a DockPanel and setting it to fill also doesn't work, it seems, since the DockPanel only takes up the space needed by its subcontrols). A resizable GUI would be quite horrible if you had to play with Height, Width, MinHeight, MinWidth etc. Can you bind your Height and Width properties to the grid cell you occupy? Or is there another way to do this? A: There are also some properties you can set to force a control to fill its available space when it would otherwise not do so. For example, you can say: HorizontalContentAlignment="Stretch" ... to force the contents of a control to stretch horizontally. Or you can say: HorizontalAlignment="Stretch" ... to force the control itself to stretch horizontally to fill its parent. A: Each control deriving from Panel implements distinct layout logic performed in Measure() and Arrange(): Measure() determines the size of the panel and each of its children Arrange() determines the rectangle where each control renders The last child of the DockPanel fills the remaining space. You can disable this behavior by setting the LastChildFill property to false. The StackPanel asks each child for its desired size and then stacks them. The stack panel calls Measure() on each child, with an available size of Infinity and then uses the child's desired size. A Grid occupies all available space, however, it will set each child to their desired size and then center them in the cell. You can implement your own layout logic by deriving from Panel and then overriding MeasureOverride() and ArrangeOverride(). See this article for a simple example. A: Well, I figured it out myself, right after posting, which is the most embarrassing way. :) It seems every member of a StackPanel will simply fill its minimum requested size. In the DockPanel, I had docked things in the wrong order. If the TextBox or ListBox is the only docked item without an alignment, or if they are the last added, they WILL fill the remaining space as wanted. I would love to see a more elegant method of handling this, but it will do. A: Use the HorizontalAlignment and VerticalAlignment layout properties. They control how an element uses the space it has inside its parent when more room is available than is required by the element. The width of a StackPanel, for example, will be as wide as the widest element it contains. So, all narrower elements have a bit of excess space. The alignment properties control what the child element does with the extra space. The default value for both properties is Stretch, so the child element is stretched to fill all available space. Additional options include Left, Center and Right for HorizontalAlignment and Top, Center and Bottom for VerticalAlignment.
How to get controls in WPF to fill available space?
Some WPF controls (like the Button) seem to happily consume all the available space in its container if you don't specify the height it is to have. And some, like the ones I need to use right now, the (multiline) TextBox and the ListBox seem more worried about just taking the space necessary to fit their contents, and no more. If you put these guys in a cell in a UniformGrid, they will expand to fit the available space. However, UniformGrid instances are not right for all situations. What if you have a grid with some rows set to a * height to divide the height between itself and other * rows? What if you have a StackPanel and you have a Label, a List and a Button, how can you get the list to take up all the space not eaten by the label and the button? I would think this would really be a basic layout requirement, but I can't figure out how to get them to fill the space that they could (putting them in a DockPanel and setting it to fill also doesn't work, it seems, since the DockPanel only takes up the space needed by its subcontrols). A resizable GUI would be quite horrible if you had to play with Height, Width, MinHeight, MinWidth etc. Can you bind your Height and Width properties to the grid cell you occupy? Or is there another way to do this?
[ "There are also some properties you can set to force a control to fill its available space when it would otherwise not do so. For example, you can say:\nHorizontalContentAlignment=\"Stretch\"\n\n... to force the contents of a control to stretch horizontally. Or you can say:\nHorizontalAlignment=\"Stretch\"\n\n... to force the control itself to stretch horizontally to fill its parent.\n", "Each control deriving from Panel implements distinct layout logic performed in Measure() and Arrange():\n\nMeasure() determines the size of the panel and each of its children\nArrange() determines the rectangle where each control renders\n\nThe last child of the DockPanel fills the remaining space. You can disable this behavior by setting the LastChild property to false.\nThe StackPanel asks each child for its desired size and then stacks them. The stack panel calls Measure() on each child, with an available size of Infinity and then uses the child's desired size. \nA Grid occupies all available space, however, it will set each child to their desired size and then center them in the cell.\nYou can implement your own layout logic by deriving from Panel and then overriding MeasureOverride() and ArrangeOverride().\nSee this article for a simple example.\n", "Well, I figured it out myself, right after posting, which is the most embarassing way. :)\nIt seems every member of a StackPanel will simply fill its minimum requested size.\nIn the DockPanel, I had docked things in the wrong order. If the TextBox or ListBox is the only docked item without an alignment, or if they are the last added, they WILL fill the remaining space as wanted.\nI would love to see a more elegant method of handling this, but it will do.\n", "Use the HorizontalAlignment and VerticalAlignment layout properties. They control how an element uses the space it has inside its parent when more room is available than it required by the element.\nThe width of a StackPanel, for example, will be as wide as the widest element it contains. So, all narrower elements have a bit of excess space. The alignment properties control what the child element does with the extra space.\nThe default value for both properties is Stretch, so the child element is stretched to fill all available space. Additional options include Left, Center and Right for HorizontalAlignment and Top, Center and Bottom for VerticalAlignment.\n" ]
[ 270, 174, 24, 6 ]
[]
[]
[ "layout", "user_interface", "wpf" ]
stackoverflow_0000036108_layout_user_interface_wpf.txt
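A sketch of the fix the asker's own answer describes, here built in C# code-behind (the label and button contents are placeholders): dock the label and button explicitly, and leave the list as the last, undocked child so DockPanel's LastChildFill hands it the remaining space.

using System.Windows;
using System.Windows.Controls;

// Build the Label / List / Button layout so the list soaks up the leftover space.
var panel = new DockPanel { LastChildFill = true };

var label = new Label { Content = "Tasks" };          // placeholder content
DockPanel.SetDock(label, Dock.Top);

var button = new Button { Content = "OK" };           // placeholder content
DockPanel.SetDock(button, Dock.Bottom);

var list = new ListBox();                             // last, undocked child: fills the rest

panel.Children.Add(label);
panel.Children.Add(button);
panel.Children.Add(list);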
Q: Error using Team Foundation Server merge function When merging two code branches in Team Foundation Server I get the following error: The given key was not present in the dictionary. Some files are checked out and show up in "Pending Changes", but no changes are actually made. I have a workaround: Attempt to merge (fails with error) Get latest from trunk Undo all pending changes with "merge, edit" or "merge" Merge Again (works this time) Any ideas on what's causing this error? Edit after answer: Seems like a bug. And it's extremely repeatable. Every single merge does it. I'll send a bug report to MS and see what happens. A: Sounds like a bug. If you can replicate this, I recommend you contact Microsoft Support or use the Microsoft Connect bug reporting web site. I did not find any mention of this in a preliminary search.
Error using Team Foundation Server merge function
When merging two code branches in Team Foundation Server I get the following error: The given key was not present in the dictionary. Some files are checked out and show up in "Pending Changes", but no changes are actually made. I have a workaround: Attempt to merge (fails with error) Get latest from trunk Undo all pending changes with "merge, edit" or "merge" Merge Again (works this time) Any ideas on what's causing this error? Edit after answer: Seems like a bug. And it's extremely repeatable. Every single merge does it. I'll send a bug report to MS and see what happens.
[ "Sounds like a bug. If you can replicate this, I recommend you contact Microsoft Support or use the Microsoft Connect bug reporting web site. I did not find any mention of this in a preliminary search.\n" ]
[ 1 ]
[]
[]
[ "merge", "tfs" ]
stackoverflow_0000035191_merge_tfs.txt
Q: Redirecting users from edit page back to calling page I am working on a project management web application. The user has a variety of ways to display a list of tasks. When viewing a list page, they click on a task and are redirected to the task edit page. Since they are coming from a variety of ways, I am just curious as to the best way to redirect the user back to the calling page. I have some ideas, but would like to get other developers' input. Would you store the calling url in session? as a cookie? I like the concept of using an object to handle the redirection. A: I would store the referring URL using the ViewState. Storing this outside the scope of the page (i.e. in the Session state or cookie) may cause problems if more than one browser window is open. The example below validates that the page was called internally (i.e. not requested directly) and bounces back to the referring page after the user submits their response. public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { if (Request.UrlReferrer == null) { //Handle the case where the page is requested directly throw new Exception("This page has been called without a referring page"); } if (!IsPostBack) { ReturnUrl = Request.UrlReferrer.PathAndQuery; } } public string ReturnUrl { get { return ViewState["returnUrl"].ToString(); } set { ViewState["returnUrl"] = value; } } protected void btn_Click(object sender, EventArgs e) { //Do what you need to do to save the page //... //Go back to calling page Response.Redirect(ReturnUrl, true); } } A: I personally would store the required redirection info in an object and handle globally. I would avoid using a QueryString param or the like since they could try bouncing themselves back to a page they are not supposed to (possible security issue?). You could then create a static method to handle the redirection object, which could read the information and act accordingly. This encapsulates your redirection process within one page. Using an object also means you can later extend it if required (such as adding return messages and other info). For example (this is a 2 minute rough guideline BTW!): public partial class _Default : System.Web.UI.Page { void Redirect(string url, string message) { RedirectionParams paras = new RedirectionParams(url, message); RedirectionHandler(paras); // pass to some global method (or this could BE the global method) } protected void Button1_Click(object sender, EventArgs e) { Redirect("mypage.aspx", "you have been redirected"); } } public class RedirectionParams { private string _url; public string URL { get { return _url; } set { _url = value; } } private string _message; public string Message { get { return _message; } set { _message = value; } } public RedirectionParams(string url, string message) { this.URL = url; this.Message = message; } } A: This message may be tagged asp.net but I think it is a platform independent issue that pains all new web developers as they seek a 'clean' way to do this. I think the two options in achieving this are: A param in the url A url stored in the session I don't like the url method, it is a bit messy, and you have to remember to include the param in every relevant URL. I'd just use an object with static methods for this. The object would wrap around the session item you use to store redirect URLs. The methods would probably be as follows (all public static): setRedirectUrl(string URL) doRedirect(string defaultURL) setRedirectUrl would be called in any action that produces links / forms which need to redirect to a given url. So say you had a projects view action that generates a list of projects, each with tasks that can be performed on them (e.g. delete, edit) you would call RedirectClass.setRedirectUrl("/project/view-all") in the code for this action. Then let's say the user clicks delete, they need to be redirected to the view page after a delete action, so in the delete action you would call RedirectClass.doRedirect("/project/view-all"). This method would look to see if the redirect variable was set in the session. If so redirect to that URL. If not, redirect to the default url (the string passed to the doRedirect method). A: I agree with "rmbarnes.myopenid.com" regarding this issue as being platform independent. I would store the calling page URL in the QueryString or in a hidden field (for example in ViewState for ASP.NET). If you will store it outside of the page scope (such as Session, global variable - Application State and so on) then it will not be just overkill as Tom said but it will bring you trouble. What kind of trouble? Trouble if the user has more than one tab (window) of that browser open. The tabs (or windows) of the same browser will probably share the same session and the redirection will not be the one expected and all the user will feel is that it is a bug. My 2 eurocents..
Redirecting users from edit page back to calling page
I am working on a project management web application. The user has a variety of ways to display a list of tasks. When viewing a list page, they click on a task and are redirected to the task edit page. Since they are coming from a variety of ways, I am just curious as to the best way to redirect the user back to the calling page. I have some ideas, but would like to get other developers' input. Would you store the calling url in session? as a cookie? I like the concept of using an object to handle the redirection.
[ "I would store the referring URL using the ViewState. Storing this outside the scope of the page (i.e. in the Session state or cookie) may cause problems if more than one browser window is open.\nThe example below validates that the page was called internally (i.e. not requested directly) and bounces back to the referring page after the user submits their response.\npublic partial class _Default : System.Web.UI.Page\n{\n protected void Page_Load(object sender, EventArgs e)\n {\n if (Request.UrlReferrer == null)\n {\n //Handle the case where the page is requested directly\n throw new Exception(\"This page has been called without a referring page\");\n }\n\n if (!IsPostBack)\n { \n ReturnUrl = Request.UrlReferrer.PathAndQuery;\n }\n }\n\n public string ReturnUrl\n {\n get { return ViewState[\"returnUrl\"].ToString(); }\n set { ViewState[\"returnUrl\"] = value; }\n }\n\n protected void btn_Click(object sender, EventArgs e)\n {\n //Do what you need to do to save the page\n //...\n\n //Go back to calling page\n Response.Redirect(ReturnUrl, true);\n }\n}\n\n", "I personally would store the required redirection info in an object and handle globally. I would avoid using a QueryString param or the like since they could try bouncing themselves back to a page they are not supposed to (possible security issue?). You could then create a static method to handle the redirection object, which could read the information and act accordingly. This encapsulates your redirection process within one page.\nUsing an object also means you can later extend it if required (such as adding return messages and other info).\nFor example (this is a 2 minute rough guideline BTW!):\npublic partial class _Default : System.Web.UI.Page \n{\n\n void Redirect(string url, string messsage)\n {\n RedirectionParams paras = new RedirectionParams(url, messsage);\n RedirectionHandler(paras); // pass to some global method (or this could BE the global method)\n }\n protected void Button1_Click(object sender, EventArgs e)\n {\n Redirect(\"mypage.aspx\", \"you have been redirected\");\n }\n}\n\npublic class RedirectionParams\n{\n private string _url;\n\n public string URL\n {\n get { return _url; }\n set { _url = value; }\n }\n\n private string _message;\n\n public string Message\n {\n get { return _message; }\n set { _message = value; }\n }\n\n public RedirectionParams(string url, string message)\n {\n this.URL = url;\n this.Message = message;\n }\n}\n\n", "This message my be tagged asp.net but I think it is a platform independent issue that pains all new web developers as they seek a 'clean' way to do this.\nI think the two options in achieving this are:\n\nA param in the url\nA url stored in the session\n\nI don't like the url method, it is a bit messy, and you have to remember to include the param in every relevent URL.\nI'd just use an object with static methods for this. The object would wrap around the session item you use to store redirect URLS.\nThe methods would probably be as follows (all public static):\n\nsetRedirectUrl(string URL)\ndoRedirect(string defaultURL)\n\nsetRedirectUrl would be called in any action that produces links / forms which need to redirect to a given url. So say you had a projects view action that generates a list of projects, each with tasks that can be performed on them (e.g. 
delete, edit) you would call RedirectClass.setRedirectUrl(\"/project/view-all\") in the code for this action.\nThen lets say the user clicks delete, they need to be redirected to the view page after a delete action, so in the delete action you would call RedirectClass.setRedirectUrl(\"/project/view-all\"). This method would look to see if the redirect variable was set in the session. If so redirect to that URL. If not, redirect to the default url (the string passed to the setRedirectUrl method).\n", "I agree with \"rmbarnes.myopenid.com\" regarding this issue as being platform independent.\nI would store the calling page URL in the QueryString or in a hidden field (for example in ViewState for ASP.NET). If you will store it outside of the page scope (such as Session, global variable - Application State and so on) then it will not be just overkill as Tom said but it will bring you trouble.\nWhat kind of trouble? Trouble if the user has more than one tab (window) of that browser open. The tabs (or windows) of the same browser will probably share the same session and the redirection will not be the one expected and all the user will feel is that it is a bug.\nMy 2 eurocents..\n" ]
[ 5, 1, 1, 1 ]
[]
[]
[ "asp.net", "redirect" ]
stackoverflow_0000036733_asp.net_redirect.txt
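A sketch of the setRedirectUrl/doRedirect pair the third answer outlines, wrapping a session item (the class, method, and key names are placeholders, and the multi-tab caveat from the other answers applies to any session-based version):

using System.Web;

public static class RedirectClass
{
    private const string SessionKey = "ReturnUrl";   // placeholder key

    // Called by any action that renders links/forms which should come back here.
    public static void SetRedirectUrl(string url)
    {
        HttpContext.Current.Session[SessionKey] = url;
    }

    // Called by the action that finishes the work (e.g. the delete handler).
    public static void DoRedirect(string defaultUrl)
    {
        string url = HttpContext.Current.Session[SessionKey] as string ?? defaultUrl;
        HttpContext.Current.Session.Remove(SessionKey);   // consume the stored URL
        HttpContext.Current.Response.Redirect(url);
    }
}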
Q: Listing Items and Displaying Data Inline I use asp.net 3.5 and have also begun looking at 3.5 sp1 I like the clean urls that mvc tends to have but use asp.net webforms for my primary development. I normally use a url rewriter in order to accomplish this type of stuff. When I say clean urls I mean like /products to get a list of products and /products/Product_One to look at the info about product called Product_One. I've used this on sites where the listing is on one page and when you pick the item it goes to a different page that shows the info about the item selected. But I also like the way that the update panel works and changing stuff on screen without flashing the screen. When I do this I tend to have a list on the left with the different items that are selectable and then have on the right the data about the selected item, then I use an update panel so that when the item on the left is selected its data shows up on the right without flashing. I need opinions on what you all think of the two different methods of displaying a list and seeing the selected item's data. 1) Which is better in your opinion? 2) What do you all do to display a list and show the data on one of the items? 3) Is there another way of doing this? 4) Is it possible to combine the update panel method and the nice urls? (i.e. change the url to match the url that would get you to the current displayed data even though the update panel was used, and add to the history the new clean url for the current page) A: What you are referring to is AJAX URL history management but you will not be able to modify the URL besides the "#" anchor. At least not without reloading the page.
Listing Items and Displaying Data Inline
I use asp.net 3.5 and have also begun looking at 3.5 sp1 I like the clean urls that mvc tends to have but use asp.net webforms for my primary development. I normally use a url rewriter in order to accomplish this type of stuff. When I say clean urls I mean like /products to get a list of products and /products/Product_One to look at the info about product called Product_One. I've used this on sites where the listing is on one page and when you pick the item it goes to a different page that shows the info about the item selected. But I also like the way that the update panel works and changing stuff on screen without flashing the screen. When I do this I tend to have a list on the left with the different items that are selectable and then have on the right the data about the selected item, then I use an update panel so that when the item on the left is selected its data shows up on the right without flashing. I need opinions on what you all think of the two different methods of displaying a list and seeing the selected item's data. 1) Which is better in your opinion? 2) What do you all do to display a list and show the data on one of the items? 3) Is there another way of doing this? 4) Is it possible to combine the update panel method and the nice urls? (i.e. change the url to match the url that would get you to the current displayed data even though the update panel was used, and add to the history the new clean url for the current page)
[ "What you are referring to is AJAX URL history management but you will not be able to modify the URL besides the \"#\" anchor.\nAt least not without reloading the page.\n" ]
[ 1 ]
[]
[]
[ ".net_3.5", "asp.net" ]
stackoverflow_0000036642_.net_3.5_asp.net.txt
Q: How can I disable DLL Caching in Windows Vista via CMD? I know Windows Vista (and XP) cache recently loaded DLLs in memory... How can this be disabled via the command prompt? A: The only thing you can do is disable SuperFetch, which can be done from the command prompt with this command (there has to be a space between the = sign and disabled). sc config Superfetch start= disabled There is a myth out there that you can disable DLL caching, but that only worked for systems prior to Windows 2000. [source] A: Perhaps it would be helpful to know why you want to do this and then try to help solve the original problem... A: Windows does not cache recently used DLLs in memory. It does cache the contents of the files in the file cache, like it would normally do with data files.
How can I disable DLL Caching in Windows Vista via CMD?
I know Windows Vista (and XP) cache recently loaded DLLs in memory... How can this be disabled via the command prompt?
[ "The only thing you can do is disable SuperFetch, which can be done from the command prompt with this command (there has to be a space between the = sign and disabled).\nsc config Superfetch start= disabled\n\nThere is a myth out there that you can disable DLL caching, but that only worked for systems prior to Windows 2000. [source]\n", "Perhaps it would be helpful to know why you want to do this and then try to help solve the original problem...\n", "Windows does not cache recently used DLLs in memory.\nIt does cache the contents of the files in the file cache, like it would normally do with data files.\n" ]
[ 6, 0, 0 ]
[]
[]
[ "command_prompt", "windows_vista" ]
stackoverflow_0000036502_command_prompt_windows_vista.txt
Q: Updating/Intercepting HttpContext.Current.Request.QueryString Here's a weird one. I'm reusing a code base that unfortunately must not be updated. This code makes a call to HttpContext.Current.Request.QueryString. Ideally, I need to push a value into this collection with every request that is made. Is this possible - perhaps in an HTTP Module? A: Without using reflection, the simplest way to do it would be to use the RewritePath function on the current HttpContext object in order to modify the querystring. Using an IHttpModule, it might look something like: context.RewritePath(context.Request.Path, context.Request.PathInfo, newQueryStringHere!); Hope this helps! A: Ditto Espo's answer and I would like to add that usually in medium trust (specific to many shared hostings) you will not have access to reflection so ... RewritePath will remain probably your only choice.
Updating/Intercepting HttpContext.Current.Request.QueryString
Here's a weird one. I'm reusing a code base that unfortunately must not be updated. This code makes a call to HttpContext.Current.Request.QueryString. Ideally, I need to push a value into this collection with every request that is made. Is this possible - perhaps in an HTTP Module?
[ "Without using reflection, the simplest way to do it would be to use the RewritePath function on the current HttpContext object in order to modify the querystring. \nUsing an IHttpModule, it might look something like:\ncontext.RewritePath(context.Request.Path, context.Request.PathInfo, newQueryStringHere!);\n\nHope this helps!\n", "Ditto Espo's answer and I would like to add that usually in medium trust (specific to many shared hostings) you will not have access to reflection so ... RewritePath will remain your probably only choice.\n" ]
[ 6, 0 ]
[]
[]
[ ".net_3.5", "asp.net", "query_string" ]
stackoverflow_0000034365_.net_3.5_asp.net_query_string.txt
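A minimal C# sketch of the RewritePath approach from the answers above, written as an IHttpModule. The module and parameter names (QueryStringInjectorModule, "injected") are illustrative assumptions, not from the original answers, and the module would still need to be registered in web.config under <httpModules>.

using System;
using System.Web;

public class QueryStringInjectorModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += delegate(object sender, EventArgs e)
        {
            HttpContext context = ((HttpApplication)sender).Context;
            string query = context.Request.QueryString.ToString();

            // Append our value while preserving anything already in the query string.
            string newQuery = string.IsNullOrEmpty(query)
                ? "injected=1"
                : query + "&injected=1";

            // Rewrites the request so legacy code reading
            // HttpContext.Current.Request.QueryString sees the new value.
            context.RewritePath(context.Request.Path, context.Request.PathInfo, newQuery);
        };
    }

    public void Dispose() { }
}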
Q: What's the answer to this Microsoft PDC challenge? In today's channel9.msdn.com video, the PDC guys posted a challenge to decipher this code: 2973853263233233753482843823642933243283 6434928432937228939232737732732535234532 9335283373377282333349287338349365335325 3283443783243263673762933373883363333472 8936639338428833535236433333237634438833 3275387394324354374325383293375366284282 3323383643473233852922933873933663333833 9228632439434936334633337636632933333428 9285333384346333346365364364365365336367 2873353883543533683523253893663653393433 8837733538538437838338536338232536832634 8284348375376338372376377364368392352393 3883393733943693253343433882852753933822 7533337432433532332332328232332332932432 3323323323323336323333323323323327323324 2873323253233233233892792792792792792792 7934232332332332332332332733432333832336 9344372376326339329376282344 Decipher it and win a t-shirt. (Lame, I know, was hoping for a free trip to the PDC.) I notice some interesting patterns in this code, such as the 332 pattern towards the end, but I'm at a loss as to where to go from here. They've said the answer is a text question. Any ideas on deciphering this code? A: I'm still fiddling with this -- no answer yet, or even a clear direction, but some of this random assortment of facts might be useful to someone.. Meta: Is there any way to mark "read more" in an answer? Sorry in advance for all the scrolling this answer will cause! The code is 708 digits long. Prime factorization: 2 2 3 59. Unless they're being tricky by padding the ends, the chunk size must be 1, 2, 4, 6, or 12; the higher factors are silly. This assumes, of course, that the code is based on concatenated chunks, which may not be the case. Mike Stone suggested a chunk size of 3. Here's the distribution for that: Number of distinct chunks: 64 Number of chunks: 236 (length of message) 275: ### 279: ####### 282: #### 283: # 284: #### 285: ## 286: # 287: ### 288: # 289: ### 292: # 293: #### 297: # 323: ############################# 324: ####### 325: ####### 326: #### 327: #### 328: ## 329: ##### 332: ### 333: ########### 334: ### 335: ###### 336: ### 337: # 338: #### 339: ### 342: # 343: ## 344: ### 345: # 346: ### 347: ## 348: ### 349: ### 352: #### 353: # 354: ## 363: ## 364: ####### 365: ##### 366: ##### 367: ## 368: ### 369: ## 372: ### 373: ## 374: ## 375: ### 376: ####### 377: #### 378: ## 382: ### 383: ### 384: ### 385: #### 387: ## 388: ###### 389: ## 392: ### 393: #### 394: ### 449: # If it's base64 encoded then we might have something ;) but my gut tells me that there are too many distinct chunks of length 3 for plain English text. There is indeed that odd blip for the symbol "323" though. Somewhat more interesting is a chunk size of 2: Number of distinct chunks: 49 Number of chunks: 354 (length of message) 22: ## 23: ######################## 24: ##### 25: ###### 26: # 27: ###### 28: ######### 29: #### 32: ################################## 33: ################################################ 34: ########### 35: ######## 36: ############## 37: ############ 38: ################## 39: #### 42: ## 43: ########### 44: ### 45: # 46: # 47: # 49: ## 52: # 53: ######### 54: ## 62: # 63: ############# 64: #### 65: ### 66: ## 67: ## 68: # 72: ### 73: ############ 74: # 75: #### 76: ##### 77: # 79: #### 82: ###### 83: ########### 84: ##### 85: #### 88: #### 89: # 92: ######### 93: ################ 94: ## As for letter frequency, that's a good strategy, but remember that the text is likely to contain spaces and punctuation. Space might be the most common character by far! Meta: This question re-asks a question found elsewhere. Does that count as homework? :) A: Well, based on the 332 pattern you pointed out and the fact that the number of numbers is divisible by 3, and that several of the first 3 digit groups have matches... it might be that each 3 digits represent a character. Get a distribution of the number matches for all the 3 digit groups, then see if that distribution looks like the distribution of common letters. If so, each 3 digit code could then be mapped to a character, and you might get a lot of the characters filled in for you this way, then just see if you can fill in the blanks of the less common letters that may not match the distribution perfectly. A quick google search revealed this source for distribution of frequency in the English language. This, of course, may not be fruitful, but it's a good first attempt. A: I wrote some C# code to scan the cipher and give me some stats back. Here are some interesting results: With a chunk size of 3, There are 236 chunks. There are 172 duplicates. The 323 code shows up a whopping total of 29 times! The 333 code shows up 11 times. All other codes show up 7 times or less. 35 chunks start with a 2. 200 chunks start with a 3. (Interesting!) 1 chunk starts with a 4. Despite the cipher containing 2s, 3s, 4s, 5s, 6s, 7s, 8s, and 9s, chunks only start with 2 and 3, except the 1 chunk that starts with 4. There are no 0s. There are no 1s. There are 115 2s. There are 293 3s. There are 56 4s. There are 38 5s. There are 49 6s. There are 52 7s. There are 63 8s. There are 42 9s. I'd describe the 323 appearance count as highly irregular. I'd also suggest that the fact that all of the chunks start with either 3 or 2 (barring the 1 appearance of a 4 chunk) is also highly irregular. I've run the same analysis using chunks of 2, 4, and 8, and the results look more or less random. At this point, I'm leaning towards a 3 chunk. A: I'd say that anyone that finds the answer should keep it to themselves, and instead of posting it should just add a note that you can go read a particular url to find it, or send someone an email or something if they want to know the answer to it. At the time when Channel9 says it's broken or posts the answer themselves, post it here, but until then, just keep the discussion and pondering going. Much better for the brain.
What's the answer to this Microsoft PDC challenge?
In today's channel9.msdn.com video, the PDC guys posted a challenge to decipher this code: 2973853263233233753482843823642933243283 6434928432937228939232737732732535234532 9335283373377282333349287338349365335325 3283443783243263673762933373883363333472 8936639338428833535236433333237634438833 3275387394324354374325383293375366284282 3323383643473233852922933873933663333833 9228632439434936334633337636632933333428 9285333384346333346365364364365365336367 2873353883543533683523253893663653393433 8837733538538437838338536338232536832634 8284348375376338372376377364368392352393 3883393733943693253343433882852753933822 7533337432433532332332328232332332932432 3323323323323336323333323323323327323324 2873323253233233233892792792792792792792 7934232332332332332332332733432333832336 9344372376326339329376282344 Decipher it and win a t-shirt. (Lame, I know, was hoping for a free trip to the PDC.) I notice some interesting patterns in this code, such as the 332 pattern towards the end, but I'm at a loss as to where to go from here. They've said the answer is a text question. Any ideas on deciphering this code?
[ "I'm still fiddling with this -- no answer yet, or even a clear direction, but some of this random assortment of facts might be useful to someone..\nMeta: Is there any way to mark \"read more\" in an answer? Sorry in advance for all the scrolling this answer will cause!\nThe code is 708 digits long. Prime factorization: 2 2 3 59. Unless they're being tricky by padding the ends, the chunk size must be 1, 2, 4, 6, or 12; the higher factors are silly. This assumes, of course, that the code is based on concatenated chunks, which may not be the case.\nMike Stone suggested a chunk size of 3. Here's the distribution for that:\n\n Number of distinct chunks: 64\n Number of chunks: 236 (length of message)\n\n 275: ###\n 279: #######\n 282: ####\n 283: #\n 284: ####\n 285: ##\n 286: #\n 287: ###\n 288: #\n 289: ###\n 292: #\n 293: ####\n 297: #\n 323: #############################\n 324: #######\n 325: #######\n 326: ####\n 327: ####\n 328: ##\n 329: #####\n 332: ###\n 333: ###########\n 334: ###\n 335: ######\n 336: ###\n 337: #\n 338: ####\n 339: ###\n 342: #\n 343: ##\n 344: ###\n 345: #\n 346: ###\n 347: ##\n 348: ###\n 349: ###\n 352: ####\n 353: #\n 354: ##\n 363: ##\n 364: #######\n 365: #####\n 366: #####\n 367: ##\n 368: ###\n 369: ##\n 372: ###\n 373: ##\n 374: ##\n 375: ###\n 376: #######\n 377: ####\n 378: ##\n 382: ###\n 383: ###\n 384: ###\n 385: ####\n 387: ##\n 388: ######\n 389: ##\n 392: ###\n 393: ####\n 394: ###\n 449: #\n\nIf it's base64 encoded then we might have something ;) but my gut tells me that there are too many distinct chunks of length 3 for plain English text. There is indeed that odd blip for the symbol \"323\" though.\nSomewhat more interesting is a chunk size of 2:\n\n Number of distinct chunks: 49\n Number of chunks: 354 (length of message)\n\n 22: ##\n 23: ########################\n 24: #####\n 25: ######\n 26: #\n 27: ######\n 28: #########\n 29: ####\n 32: ##################################\n 33: ################################################\n 34: ###########\n 35: ########\n 36: ##############\n 37: ############\n 38: ##################\n 39: ####\n 42: ##\n 43: ###########\n 44: ###\n 45: #\n 46: #\n 47: #\n 49: ##\n 52: #\n 53: #########\n 54: ##\n 62: #\n 63: #############\n 64: ####\n 65: ###\n 66: ##\n 67: ##\n 68: #\n 72: ###\n 73: ############\n 74: #\n 75: ####\n 76: #####\n 77: #\n 79: ####\n 82: ######\n 83: ###########\n 84: #####\n 85: ####\n 88: ####\n 89: #\n 92: #########\n 93: ################\n 94: ##\n\nAs for letter frequency, that's a good strategy, but remember that the text is likely to contain spaces and punctuation. Space might be the most common character by far!\nMeta: This question re-asks a question found elsewhere. Does that count as homework? :)\n", "Well, based on the 332 pattern you pointed out and the fact that the number of numbers is divisible by 3, and that several of the first 3 digit groups have matches... it might be that each 3 digits represent a character. Get a distribution of the number matches for all the 3 digit groups, then see if that distribution looks like the distribution of common letters.\nIf so, each 3 digit code could then be mapped to a character, and you might get a lot of the characters filled in for you this way, then just see if you can fill in the blanks of the less common letters that may not match the distribution perfectly. \nA quick google search revealed this source for distribution of frequency in the English language. 
\nThis, of course, may not be fruitful, but it's a good first attempt.\n", "I wrote some C# code to scan the cipher and give me some stats back. Here are some interesting results:\nWith a chunk size of 3, \n\nThere are 236 chunks.\nThere are 172 duplicates.\nThe 323 code shows up a whopping\ntotal of 29 times!\nThe 333 code shows up 11 times.\nAll other codes show up 7 times or less.\n35 chunks start with a 2.\n200 chunks start with a 3. (Interesting!)\n1 chunk starts with a 4.\nDespite the cipher containing 2s, 3s, 4s, 5s, 6s, 7s, 8s, and 9s, chunks only start with 2 and 3, except the 1 chunk that starts with 4.\nThere are no 0s.\nThere are no 1s.\nThere are 115 2s.\nThere are 293 3s.\nThere are 56 4s.\nThere are 38 5s.\nThere are 49 6s.\nThere are 52 7s.\nThere are 63 8s.\nThere are 42 9s.\n\nI'd describe the 323 appearance count highly irregular. I'd also suggest that the fact that all of the chunks start with either 3 or 2 (barring the 1 appearance of a 4 chunk) is also highly irregular.\nI've ran the same analysis using chunks of 2, 4, and 8, and the results look more or less random. At this point, I'm leaning towards a 3 chunk.\n", "I'd say that anyone that finds the answer should keep it to themselves, and instead of posting it should just add a note that you can go read a particular url to find it, or send someone an email or something if they want to know the answer to it. At the time when Channel9 says its broken or posts the answer themselves, post it here, but until then, just let the discussion and pondering going. Much better for the brain.\n" ]
[ 3, 2, 0, 0 ]
[]
[]
[ "encryption", "pdc" ]
stackoverflow_0000036296_encryption_pdc.txt
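For readers who want to reproduce the chunk statistics quoted in the answers above, here is a small C# sketch along the lines of the frequency-count approach they describe. This is a reconstruction, not the answerer's actual code.

using System;
using System.Collections.Generic;
using System.Linq;

class CipherStats
{
    static void Main()
    {
        // Paste the 708-digit string from the question; spaces are stripped.
        string cipher = Console.ReadLine().Replace(" ", "");
        int chunkSize = 3; // also try 1, 2, 4, 6, 12 per the factorization note

        var counts = new Dictionary<string, int>();
        for (int i = 0; i + chunkSize <= cipher.Length; i += chunkSize)
        {
            string chunk = cipher.Substring(i, chunkSize);
            counts[chunk] = counts.ContainsKey(chunk) ? counts[chunk] + 1 : 1;
        }

        Console.WriteLine("Number of distinct chunks: {0}", counts.Count);
        foreach (var pair in counts.OrderBy(p => p.Key))
            Console.WriteLine("{0}: {1}", pair.Key, new string('#', pair.Value));
    }
}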
Q: Can I generate ASP.NET MVC routes from a Sitemap? I'm thinking of learning the ASP.NET MVC framework for an upcoming project. Can I use the advanced routing to create long URLs based on the sitemap hierarchy? Example navigation path: Home > Shop > Products > Household > Kitchen > Cookware > Cooksets > Nonstick Typical (I think) MVC URL: http://example.com/products/category/NonstickCooksets Desired URL: http://example.com/shop/products/household/kitchen/cookware/cooksets/nonstick Can I do this? A: Zack, if I understand right you want unlimited depth of the subcategories. No biggie, since MVC Preview 3 (I think 3 or 4) this has been solved. Just define a route like "{controller}/{action}/{*categoryPath}" for a URL such as: http://example.com/shop/products/household/kitchen/cookware/cooksets/nonstick you should have a ShopController with a Products action: public class ShopController : Controller { ... public ActionResult Products(string categoryPath) { // the categoryPath value would be // "household/kitchen/cookware/cooksets/nonstick". Process it (for ex. split it) // and then decide what you do.. return View(); } } A: The MVC routing lets you define pretty much any structure you want, you just need to define what each of the pieces means semantically. You can have bits that are "hard-coded", like "shop/products", and then define the rest as variable, "{category}/{subcategory}/{speciality}", etc. You can also define several routes that all map to the same end point if you like. Basically, when a URL comes into your MVC app, it goes through the routing table until it finds a pattern that matches, fills in the variables and passes the request off to the appropriate controller for processing. While the default route is a simple Controller, Action, Id kind of setup, that's certainly not the extent of what you can do.
Can I generate ASP.NET MVC routes from a Sitemap?
I'm thinking of learning the ASP.NET MVC framework for an upcoming project. Can I use the advanced routing to create long URLs based on the sitemap hierarchy? Example navigation path: Home > Shop > Products > Household > Kitchen > Cookware > Cooksets > Nonstick Typical (I think) MVC URL: http://example.com/products/category/NonstickCooksets Desired URL: http://example.com/shop/products/household/kitchen/cookware/cooksets/nonstick Can I do this?
[ "Zack, if I understand right you want unlimited depth of the subcategories. No biggie, since MVC Preview 3 (I think 3 or 4) this has been solved.\nJust define a route like\n\"{controller}/{action}/{*categoryPath}\"\nfor an url such as :\nhttp://example.com/shop/products/household/kitchen/cookware/cooksets/nonstick\nyou should have a ShopController with a Products action :\npublic class ShopController : Controller\n{\n...\n public ActionResult Products(string categoryPath)\n {\n // the categoryPath value would be\n // \"household/kitchen/cookware/cooksets/nonstick\". Process it (for ex. split it)\n // and then decide what you do..\n return View();\n }\n\n", "The MVC routing lets you define pretty much any structure you want, you just need to define what each of the pieces mean semantically. You can have bits that are \"hard-coded\", like \"shop/products\", and then define the rest as variable, \"{category}/{subcategory}/{speciality}\", etc.\nYou can also define several routes that all map to the same end point if you like. Basically, when a URL comes into your MVC app, it goes through the routing table until it finds a pattern that matches, fills in the variables and passes the request off to the appropriate controller for processing.\nWhile the default route is a simple Controller, Action, Id kind of setup, that's certainly not the extent of what you can do.\n" ]
[ 10, 2 ]
[]
[]
[ "asp.net", "asp.net_mvc", "routing", "sitemap", "url" ]
stackoverflow_0000014923_asp.net_asp.net_mvc_routing_sitemap_url.txt
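A hedged sketch of how the catch-all route in the first answer would be registered in Global.asax, using the released ASP.NET MVC 1.0 API rather than the preview builds mentioned in the answer; the route name "ProductCategories" is an invented example.

using System.Web.Mvc;
using System.Web.Routing;

public class MvcApplication : System.Web.HttpApplication
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        // {*categoryPath} greedily captures the rest of the URL, so
        // /shop/products/household/kitchen/cookware/cooksets/nonstick
        // arrives as a single "household/kitchen/..." string.
        routes.MapRoute(
            "ProductCategories",
            "shop/products/{*categoryPath}",
            new { controller = "Shop", action = "Products", categoryPath = "" });
    }

    protected void Application_Start()
    {
        RegisterRoutes(RouteTable.Routes);
    }
}

Inside the Products action, categoryPath.Split('/') then yields the individual sitemap levels.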
Q: Why do .Net WPF DependencyProperties have to be static members of the class Learning WPF nowadays. Found something new today with .Net dependency properties. What they bring to the table is Support for Callbacks (Validation, Change, etc) Property inheritance Attached properties among others. But my question here is why do they need to be declared as static in the containing class? The recommended way is to then add an instance 'wrapper' property for them. Why? edit: @Matt, but doesn't that also mandate that the property value is shared across instances - unless of course it is a derived value? A: Dependency properties are static because of a key optimization in WPF: Many of the controls in WPF have tens, if not hundreds of properties. Most of the properties in these classes are set to their default value. If DPs were instance properties, memory would need to be allocated for every property in every object you create. Since DPs are static, WPF is free to manage each property's memory usage more effectively. The reason why you should supply a default value for any DP you register is because WPF will take care not to allocate extra memory for your property when it's set to its default value, no matter how many objects containing that property you create. A: I think the reason you need the static instance of a dependency property is really just because that's how they were designed. The static bit holds all the property metadata - its default value, its owner type (handy if it's an attached property) etc, its callback methods for when it changes - that sort of thing. Makes sense to store these things statically across all instances of the class rather than per-instance. A: I see 2 reasons behind that requirement: You can't register the same DP twice. To comply with this constraint you should use a static variable; it will be initialized only one time, thus you will register the DP one time only. The DP should be registered before any instance of a class (which uses that DP) is created
Why do .Net WPF DependencyProperties have to be static members of the class
Learning WPF nowadays. Found something new today with .Net dependency properties. What they bring to the table is Support for Callbacks (Validation, Change, etc) Property inheritance Attached properties among others. But my question here is why do they need to be declared as static in the containing class? The recommended way is to then add an instance 'wrapper' property for them. Why? edit: @Matt, but doesn't that also mandate that the property value is shared across instances - unless of course it is a derived value?
[ "Dependency properties are static because of a key optimization in WPF: Many of the controls in WPF have tens, if not hundreds of properties. Most of the properties in these classes are set to their default value. If DP's were instance properties, memory would need to be allocated for every property in every object you create. Since DP's are static, WPF is free to manage each property's memory usage more effectively.\nThe reason why you should supply a default value for any DP you register is because WPF will take care not to allocate extra memory for your property when it's set to its default value, no matter how many objects containing that property you create.\n", "I think the reason you need the static instance of a dependency property is really just because that's how they were designed. The static bit holds all the property metadata - its default value, its owner type (handy if it's an attached property) etc, its callback methods for when it changes - that sort of thing. Makes sense to store these things statically across all instances of the class rather than per-instance.\n", "I see 2 reasons behind that requirement:\n\nYou can't register same DP twice. To comply with this constraint you should use static variable, it will be initialized only one time thus you will register DP one time only.\nDP should be registered before any class (which uses that DB) instance created\n\n" ]
[ 7, 5, 2 ]
[]
[]
[ ".net", "wpf" ]
stackoverflow_0000036682_.net_wpf.txt
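A compact C# example of the pattern both answers describe - one static registration plus a per-instance wrapper. The Person/Age names are illustrative only, not from the original answers.

using System.Windows;

public class Person : DependencyObject
{
    // Registered once per property and shared by every Person instance;
    // WPF stores a per-instance value only when it differs from the default.
    public static readonly DependencyProperty AgeProperty =
        DependencyProperty.Register(
            "Age", typeof(int), typeof(Person),
            new PropertyMetadata(0, OnAgeChanged));

    // The recommended instance wrapper: it merely delegates to GetValue/SetValue,
    // so the value itself is still resolved per instance, not shared.
    public int Age
    {
        get { return (int)GetValue(AgeProperty); }
        set { SetValue(AgeProperty, value); }
    }

    private static void OnAgeChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        // change callback supplied through the static metadata
    }
}

Note how this addresses the question's edit: only the registration and metadata are shared across instances; GetValue/SetValue operate on the individual object.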
Q: How to stop NTFS volume auto-mounting on OS X? I'm a bit newbieish when it comes to the deeper parts of OSX configuration and am having to put up with a fairly irritating niggle which, while I can put up with it, I know I could have sorted in minutes under Windows. Basically, I have an external disk with two volumes: One is an HFS+ volume which I use for TimeMachine backups. The other, an NTFS volume that I use for general file copying etc on Mac and Windows boxes. So what happens is that whenever I plug the disk into my Mac's USB, OSX goes off and mounts both volumes and shows an icon on the desktop for each. The thing is that to remove the disk you have to eject the volume and in this case do it for both volumes, which causes an annoying warning dialog to be shown every time. What I'd prefer is some way to prevent the NTFS volume from auto-mounting altogether. I've done some hefty googling and here's a list of things I've tried so far: I've tried going through options in Disk Utility I've tried setting AutoMount to No in /etc/hostconfig but that is a bit too global for my liking. I've also tried the suggested approach to putting settings in fstab but it appears that OSX (10.5) is ignoring these settings. Any other suggestions would be welcomed. Just a little disappointed that I can't just tick a box somewhere (or untick). EDIT: Thanks heaps to hop for the answer, it worked a treat. For the record it turns out that it wasn't OSX not picking up the settings - I actually had "msdos" instead of "ntfs" in the fs type column. A: The following entry in /etc/fstab will do what you want, even on 10.5 (Leopard): LABEL=VolumeName none ntfs noauto If the file is not already there, just create it. Do not use /etc/fstab.hd! No reloading of diskarbitrationd needed. If this still doesn't work for you, maybe you can find a hint in the syslog. A: This is not directly an answer, but The thing is that to remove the disk you have to eject the volume and in this case do it for both volumes I have a similar situation. OSX remembers where you put your icons on the desktop - I've moved the icons for both of my removable drives to just above where the trash can lives. Eject procedure becomes Hit top-left of screen with mouse to show desktop Drag small box around both removable drives Drag 2cm onto trash so they both get ejected Remove firewire cable
How to stop NTFS volume auto-mounting on OS X?
I'm a bit newbieish when it comes to the deeper parts of OSX configuration and am having to put up with a fairly irritating niggle which, while I can put up with it, I know I could have sorted in minutes under Windows. Basically, I have an external disk with two volumes: One is an HFS+ volume which I use for TimeMachine backups. The other, an NTFS volume that I use for general file copying etc on Mac and Windows boxes. So what happens is that whenever I plug the disk into my Mac's USB, OSX goes off and mounts both volumes and shows an icon on the desktop for each. The thing is that to remove the disk you have to eject the volume and in this case do it for both volumes, which causes an annoying warning dialog to be shown every time. What I'd prefer is some way to prevent the NTFS volume from auto-mounting altogether. I've done some hefty googling and here's a list of things I've tried so far: I've tried going through options in Disk Utility I've tried setting AutoMount to No in /etc/hostconfig but that is a bit too global for my liking. I've also tried the suggested approach to putting settings in fstab but it appears that OSX (10.5) is ignoring these settings. Any other suggestions would be welcomed. Just a little disappointed that I can't just tick a box somewhere (or untick). EDIT: Thanks heaps to hop for the answer, it worked a treat. For the record it turns out that it wasn't OSX not picking up the settings - I actually had "msdos" instead of "ntfs" in the fs type column.
[ "The following entry in /etc/fstab will do what you want, even on 10.5 (Leopard):\nLABEL=VolumeName none ntfs noauto\n\nIf the file is not already there, just create it. Do not use /etc/fstab.hd! No reloading of diskarbitrationd needed.\nIf this still doesn't work for you, maybe you can find a hint in the syslog.\n", "This is not directly an answer, but\n\nThe thing is that to remove the disk you have to eject the volume and in this case do it for both volumes\n\nI have a similar situation.\nOSX remembers where you put your icons on the desktop - I've moved the icons for both of my removable drives to just above where the trash can lives.\nEject procedure becomes\n\nHit top-left of screen with mouse to show desktop\nDrag small box around both removable drives\nDrag 2cm onto trash so they both get ejected\nRemove firewire cable\n\n" ]
[ 2, 1 ]
[]
[]
[ "hardware", "macos" ]
stackoverflow_0000020850_hardware_macos.txt
Q: Is there any way to "sticky" a file in subversion? We have been working with CVS for years, and frequently find it useful to "sticky" a single file here and there. Is there any way to do this in subversion, specifically from TortoiseSVN? A: Short answer: no. Long answer: Working copies are sticky to a branch by definition, as changing to a different branch means changing the base-Subversion-URL used to access the repository. However sticky-revision files... that's not a concept that Subversion has. See: Subversion update command reference Appendix A of the subversion book: Subversion for CVS Users One workaround might be to manually return the file to a specific revision after doing an update. Perhaps putting something like the following into a script. svn update svn update -r1234 that/particular/file.txt Another workaround, as tweakt suggests, is to have a partial branch with just one file in it. This needs very careful management though and things can get a bit.. er... sticky :-), if you're not vigilant. A: You can technically "branch" as little as a single file if you'd like... you can use 'svn switch' on any level directory or file. SVN tracks resources on a per-file basis just as CVS does, so it can do 'sticky' to the same effect. Committing a working copy containing mixed paths has very different effects though. See: http://svnbook.red-bean.com/en/1.0/re27.html http://svn.haxx.se/dev/archive-2002-11/0336.shtml
Is there any way to "sticky" a file in subversion?
We have been working with CVS for years, and frequently find it useful to "sticky" a single file here and there. Is there any way to do this in subversion, specifically from TortoiseSVN?
[ "Short answer: no.\nLong answer:\nWorking copies are sticky to a branch by definition, as changing to a different branch means changing the base-Subversion-URL used to access the repository.\nHowever sticky-revision files... that's not a concept that Subversion has.\nSee:\n\nSubversion update command reference\nAppendix A of the subversion book: Subversion for CVS Users\n\nOne workaround might be to manually return the file to a specific revision after doing an update. Perhaps putting something like the following into a script.\nsvn update\nsvn update -r1234 that/particular/file.txt\n\nAnother workaround, as tweakt suggests, is to have a partial branch with just one file in it. This needs very careful management though and things can get a bit.. er... sticky :-)\n, if you're not vigilant.\n", "You can technically \"branch\" as little as a single file if you'd like... you can use 'svn switch' on any level directory or file. SVN tracks resources on a per-file basis just as CVS does, so it can do 'sticky' to the same effect. Committing a working copy containing mixed paths has very different effects though.\nSee: \n\nhttp://svnbook.red-bean.com/en/1.0/re27.html\nhttp://svn.haxx.se/dev/archive-2002-11/0336.shtml\n\n" ]
[ 2, 2 ]
[]
[]
[ "svn" ]
stackoverflow_0000036915_svn.txt
Q: Communication between pages I want to enable a user to be able to communicate with other users through a site. I know that ASP.net is stateless, but what can I use for this synced communication? Java servlets? A: I don't think you need to set up Java just to use a servlet for this. I would use AJAX and the database. I don't know ASP.NET but PHP is similar in this case, being also basically "stateless". If you want to display some kind of asynchronous communication between two different users, say, from two different sessions, without a lot of refreshing (like chat), you can have the AJAX page constantly poll the database for new messages, and display them when they come in. You can also use AJAX to insert the new messages, giving the user read/write access to this messages data structure. Since the "other" user is doing the same thing, user A should see new messages pop up when user B types them in. Is that what you mean? A: You probably don't want to use sessions for things like chat messages but you probably could use some type of implementation of queueing using MSMQ. The approach to chat could be done in many different ways, this is just a suggestion off the top of my head. A: ASP.NET is "stateless" but it maintains state using Sessions. You can use them by default just using the Session[] keyword. Look at ASP.NET Session State for some details from Microsoft. A: Could do a messaging solution in Java Servlets using the application context. Objects stored as attributes in the application context are visible from anywhere in your webapp. Update: Chat-like functionality... I guess that would be AJAX polling your message structure stored in the app context unless you want to use something like applets. A: Don't know if it's any good, but there's a chat servlet here that might be useful to use or learn from if you decide to go the Java route...
Communication between pages
I want to enable a user to be able to communicate with other users through a site. I know that ASP.net is stateless, but what can I use for this synced communication? Java servlets?
[ "I don't think you need to set up Java just to use a servlet for this. I would use AJAX and the database. I don't know ASP.NET but I PHP is similar in this case, being also basically \"stateless\". If you want to display some kind of asynchronous communication between two different users, say, from two different sessions, without a lot of refreshing (like chat), you can have the AJAX page constantly poll the database for new messages, and display them when they come in. You can also use AJAX to insert the new messages, giving the user read/write access to this messages data structure. Since the \"other\" user is doing the same thing, user A should see new messages pop up when user B types them in.\nIs that what you mean?\n", "You probably don't want to use sessions for things like chat messages but you probably could use some type of implementation of queueing using MSMQ.\nThe approach to chat could be done in many different ways, this is just a suggesting off the top of my head.\n", "ASP.NET is \"stateless\" but it maintains state using Sessions. You can use them by default just using the Session[] keyword. \nLook at ASP.NET Session State for some details from Microsoft.\n", "Could do a messaging solution in Java Servlets using the application context. Objects stored as attributes in the application context are visible from anywhere in your webapp.\nUpdate: Chat like functionality... I guess that would be AJAX polling your message structure stored in the app context unless you want to use something like applets.\n", "Don't know if it's any good, but there's a chat servlet here that might be useful to use or learn from if you decide to go the Java route...\n" ]
[ 2, 1, 0, 0, 0 ]
[]
[]
[ "asp.net" ]
stackoverflow_0000036916_asp.net.txt
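A rough C# sketch of the polling endpoint the first answer describes, written as an ASP.NET generic handler. MessageStore is a hypothetical data-access class standing in for whatever database code you use; it is not named in the answers above.

using System.Web;

public class PollMessagesHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // The AJAX client calls this URL on a timer, passing the id of the
        // last message it has seen, and appends whatever comes back.
        int since = int.Parse(context.Request.QueryString["since"] ?? "0");

        context.Response.ContentType = "text/plain";
        foreach (string message in MessageStore.GetMessagesAfter(since))
            context.Response.Write(message + "\n");
    }

    public bool IsReusable { get { return true; } }
}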
Q: How do I add data to an existing model in Django? Currently, I am writing up a bit of a product-based CMS as my first project. Here is my question. How can I add additional data (products) to my Product model? I have added '/admin/products/add' to my urls.py, but I don't really know where to go from there. How would I build both my view and my template? Please keep in mind that I don't really know all that much Python, and I am very new to Django. How can I do this all without using the existing Django admin interface? A: You will want to wire your URL to the Django create_object generic view, and pass it either "model" (the model you want to create) or "form_class" (a customized ModelForm class). There are a number of other arguments you can also pass to override default behaviors. Sample URLconf for the simplest case: from django.conf.urls.defaults import * from django.views.generic.create_update import create_object from my_products_app.models import Product urlpatterns = patterns('', url(r'^admin/products/add/$', create_object, {'model': Product})) Your template will get the context variable "form", which you just need to wrap in a <form> tag and add a submit button. The simplest working template (by default should go in "my_products_app/product_form.html"): <form action="." method="POST"> {{ form }} <input type="submit" name="submit" value="add"> </form> Note that your Product model must have a get_absolute_url method, or else you must pass in the post_save_redirect parameter to the view. Otherwise it won't know where to redirect to after save.
How do I add data to an existing model in Django?
Currently, I am writing up a bit of a product-based CMS as my first project. Here is my question. How can I add additional data (products) to my Product model? I have added '/admin/products/add' to my urls.py, but I don't really know where to go from there. How would I build both my view and my template? Please keep in mind that I don't really know all that much Python, and I am very new to Django. How can I do this all without using the existing Django admin interface?
[ "You will want to wire your URL to the Django create_object generic view, and pass it either \"model\" (the model you want to create) or \"form_class\" (a customized ModelForm class). There are a number of other arguments you can also pass to override default behaviors.\nSample URLconf for the simplest case:\nfrom django.conf.urls.defaults import *\nfrom django.views.generic.create_update import create_object\n\nfrom my_products_app.models import Product\n\nurlpatterns = patterns('',\n url(r'^admin/products/add/$', create_object, {'model': Product}))\n\nYour template will get the context variable \"form\", which you just need to wrap in a <form> tag and add a submit button. The simplest working template (by default should go in \"my_products_app/product_form.html\"):\n<form action=\".\" method=\"POST\">\n {{ form }}\n <input type=\"submit\" name=\"submit\" value=\"add\">\n</form>\n\nNote that your Product model must have a get_absolute_url method, or else you must pass in the post_save_redirect parameter to the view. Otherwise it won't know where to redirect to after save.\n" ]
[ 7 ]
[ "This topic is covered in Django tutorials.\n", "Follow the Django tutorial for setting up the \"admin\" part of an application. This will allow you to modify your database.\nDjango Admin Setup\nAlternatively, you can just connect directly to the database using the standard tools for whatever database type you are using.\n" ]
[ -1, -2 ]
[ "django", "python" ]
stackoverflow_0000036812_django_python.txt
Q: IE 6 CSS Hover non Anchor Tag What is the simplest and most elegant way to simulate the hover pseudo-class for non-Anchor tags in IE6? I am specifically trying to change the cursor in this instance to that of a pointer. A: I think the simplest way is to use the hover.htc approach. You add the hover.htc file to your site, then reference it in your stylesheet: body { behavior:url("csshover.htc"); } If you want to keep things as clean as possible, you can use IE conditional comments so that line is only rendered for users with IE6. A: Regarding your request -- I am specifically trying to change the cursor in this instance to that of a pointer -- the easiest way is to specify cursor:pointer in your css. I think you will find that works in IE 6. Try this to verify (where div can be any element): <div style="background:orange; cursor:pointer; height:100px; width:100px;"> Hover </div> A: I would say that the simplest method would be to add onmouseover/out Javascript functions. A: Another alternative that will fix many more issues in one go is to use IE7.js. A: Another approach, depending on what the item is, is to add a non-link anchor and set its display to block. Either put the anchor within or surrounding the item you want the pseudo hover behavior on. A: Aside: I actually already needed to swap the image anyhow Make sure you take a look at Image Sprites. Sometimes it's much nicer to use one image and "shift" the image than to use two separate images and "toggle" or "swap" between them. In my experience it's been much nicer; as a user interacts with it there is a single request for the one image rather than multiple requests for multiple images. A: I liked the mouseover/out best since I actually already needed to swap the image anyhow. I really should have thought of doing this with javascript to begin with. Thanks for the quick answers. @Joseph Thanks for that link. I had never heard of this technique before and really like the idea. I will definitely try that out and see how I fare with it. A: If you're willing to use jQuery, I would use the Set Hover Class for Anything technique.
IE 6 CSS Hover non Anchor Tag
What is the simplest and most elegant way to simulate the hover pseudo-class for non-Anchor tags in IE6? I am specifically trying to change the cursor in this instance to that of a pointer.
[ "I think the simplest way is to use the hover.htc approach. You add the hover.htc file to your site, then reference it in your stylesheet:\nbody { behavior:url(\"csshover.htc\"); }\n\nIf you want to keep things as clean as possible, you can use IE conditional comments so that line is only rendered users with IE6.\n", "Regarding your request -- I am specifically trying to change the cursor in this instance to that of a pointer -- the easiest way is to specify cursor:pointer in your css. I think you will find that works in IE 6.\nTry this to verify (where div can be any element):\n<div style=\"background:orange; cursor:pointer; height:100px; width:100px;\">\n Hover\n</div>\n\n", "I would say that the simplest method would be to add onmouseover/out Javascript functions.\n", "Another alternative that will fix many more issues in one go is to use IE7.js.\n", "Another approach, depending on what the item is, is to add a non link anchor and set its display to block. Either put the anchor within or surrounding the item you want the pseudo hover behavior on.\n", "Aside:\n\nI actually already needed to swap the image anyhow\n\nMake sure you take a look at Image Sprites. Sometimes its much nicer to use one image and \"shift\" the image then to use two separate images and \"toggle\" or \"swap\" between them. In my experience its been much nice when as user interacts with it is sometimes an advantage that there is a single request for the 1 image then multiple requests for multiple images.\n", "I liked the mouseover/out best since I actually already needed to swap the image anyhow. I really should have thought of doing this with javascript to begin with.\nThanks for the quick answers.\n@Joseph\nThanks for that link. I had never heard of this technique before and really like the idea.\nI will definitely try that out and see how I fare with it.\n", "If your willing to use JQuery, I would use Set Hover Class for Anything technique.\n" ]
[ 12, 6, 4, 3, 1, 1, 0, 0 ]
[]
[]
[ "css", "internet_explorer_6" ]
stackoverflow_0000036605_css_internet_explorer_6.txt
Q: Designing Panels without a parent Form in VS? Are there any tools or plugins to design a Panel independently of a Form (Windows, not Web Form) within Visual Studio? I've been using the designer and manually extracting the bits I want from the source, but surely there is a nicer way. A: You could just write the code by hand! A: You could do all the design work inside of a UserControl. If you go that route, instead of just copying the bits out of the user control, simply use the user control itself. A: As Chris Karcher said, you should probably use a user control. This will allow easy, VS-supported/-integrated reuse without having to manually fiddle with designer code.
Designing Panels without a parent Form in VS?
Are there any tools or plugins to design a Panel independently of a Form (Windows, not Web Form) within Visual Studio? I've been using the designer and manually extracting the bits I want from the source, but surely there is a nicer way.
[ "You could just write the code by hand!\n", "You could do all the design work inside of a UserControl.\nIf you go that route, instead of just copying the bits out of the user control, simply use the user control itself.\n", "As Chris Karcher said, you should probably use a user control. This will allow easy, VS-supported/-integrated reuse without having to manually fiddle with designer code.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "visual_studio" ]
stackoverflow_0000036968_visual_studio.txt
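A minimal sketch of the UserControl suggestion in the answers above: design the panel's contents once as a UserControl, then host that control on any form. The class names here are invented for illustration.

using System.Windows.Forms;

// Design this class's surface in the VS designer exactly as you would a form.
public class ProductPanel : UserControl
{
}

public class HostForm : Form
{
    public HostForm()
    {
        // Reuse the designed panel directly instead of copying designer code.
        ProductPanel panel = new ProductPanel();
        panel.Dock = DockStyle.Fill;
        Controls.Add(panel);
    }
}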
Q: How do I use ADAM to run unit tests? I'm writing a web site that uses Active Directory to validate users. I don't have access to an Active Directory instance that I can edit in any way. I've heard that some people are using Active Directory Application Mode (ADAM) to create AD data to be used in Unit and Integration Testing. Has anyone else done this? Are there any good sites/blogs that tell how to do this? What are the issues? Is this even a good idea? A: I don't think this is a good idea just like reading files or accessing the database in unit tests isn't a good idea. Your tests will become dependent on the state of an external piece of software. Or you will have a lot of setup and teardown code. If you write tests this way you can expect you'll spend a lot of extra time maintaining your test-code. Setting up and maintaining a build server will become harder too and setting up the development environment for new programmers will take more time. The way to go in cases like this is to set up an adapter class around the infrastructure for calling into AD and to use something like rhino-mocks or another mocking framework to setup a mock-active-directory in your tests. If you're not familiar with mocking it sounds like a lot of work. But in practice it's usually only a couple of lines of code per test.
How do I use ADAM to run unit tests?
I'm writing a web site that uses Active Directory to validate users. I don't have access to an Active Directory instance that I can edit in any way. I've heard that some people are using Active Directory Application Mode (ADAM) to create AD data to be used in Unit and Integration Testing. Has anyone else done this? Are there any good sites/blogs that tell how to do this? What are the issues? Is this even a good idea?
[ "I don't think this is a good idea just like reading files or accessing the database in unit tests isn't a good idea. Your tests will become dependent on the state of an external piece of software. Or you will have a lot of setup and teardown code. If you write tests this way you can expect you'll spend a lot of extra time maintaining your test-code. Setting up and maintaining a build server will become harder too and setting up the development environment for new programmers will take more time.\nThe way to go in cases like this is to set up an adapter class around the infrastructure for calling into AD and to use something like rhino-mocks or another mocking framework to setup a mock-active-directory in your tests. If you're not familiar with mocking it sounds like a lot of work. But in practice it's usually only a couple of lines of code per test.\n" ]
[ 4 ]
[]
[]
[ "active_directory", "adam", "testing" ]
stackoverflow_0000036949_active_directory_adam_testing.txt
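A brief C# illustration of the adapter idea in the answer above: hide the Active Directory call behind an interface so unit tests can substitute a fake instead of needing AD or ADAM at all. The interface and class names are invented for this sketch.

public interface IUserValidator
{
    bool Validate(string userName, string password);
}

// The production implementation would wrap System.DirectoryServices here;
// it is the only piece that ever talks to a real directory.

// A hand-rolled fake for tests (a framework like Rhino Mocks can generate
// the equivalent in a line or two):
public class FakeUserValidator : IUserValidator
{
    public bool ResultToReturn = true;

    public bool Validate(string userName, string password)
    {
        return ResultToReturn;
    }
}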
Q: How do you make a post request into a new browser tab using JavaScript / XUL? I'm trying to open a new browser tab with the results of a POST request. I'm trying to do so using a function containing the following code: var windowManager = Components.classes["@mozilla.org/appshell/window-mediator;1"] .getService(Components.interfaces.nsIWindowMediator); var browserWindow = windowManager.getMostRecentWindow("navigator:browser"); var browser = browserWindow.getBrowser(); if(browser.mCurrentBrowser.currentURI.spec == "about:blank") browserWindow.loadURI(url, null, postData, false); else browser.loadOneTab(url, null, null, postData, false, false); I'm using a string as url, and JSON data as postData. Is there something I'm doing wrong? What happens is a new tab is created, the location shows the URL I want to post to, but the document is blank. The Back, Forward, and Reload buttons are all grayed out on the browser. It seems like it did everything except execute the POST. If I leave the postData parameter off, then it properly runs a GET. Build identifier: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.0.1) Gecko/2008070206 Firefox/3.0.1 A: Something which is less Mozilla specific and should work reasonably well with most of the browsers: Create a hidden form with the fields set up the way you need them Make sure that the "target" attribute of the form is set to "_BLANK" Submit the form programmatically A: The answer to this was found by shog9. The postData parameter needs to be a nsIMIMEInputStream object as detailed in here. A: Try with addTab instead of loadOneTab, and remove the last parameter. Check out this page over at the Mozilla Development Center for information on how to open tabs. You could use this function, for example: function openAndReuseOneTabPerURL(url) { var wm = Components.classes["@mozilla.org/appshell/window-mediator;1"] .getService(Components.interfaces.nsIWindowMediator); var browserEnumerator = wm.getEnumerator("navigator:browser"); // Check each browser instance for our URL var found = false; while (!found && browserEnumerator.hasMoreElements()) { var browserInstance = browserEnumerator.getNext().getBrowser(); // Check each tab of this browser instance var numTabs = browserInstance.tabContainer.childNodes.length; for(var index=0; index<numTabs; index++) { var currentBrowser = browserInstance.getBrowserAtIndex(index); if ("about:blank" == currentBrowser.currentURI.spec) { // The URL is already opened. Select this tab. browserInstance.selectedTab = browserInstance.tabContainer.childNodes[index]; // Focus *this* browser browserInstance.focus(); found = true; break; } } } // Our URL isn't open. Open it now. if (!found) { var recentWindow = wm.getMostRecentWindow("navigator:browser"); if (recentWindow) { // Use an existing browser window recentWindow.delayedOpenTab(url, null, null, null, null); } else { // No browser windows are open, so open a new one. window.open(url); } } }
How do you make a post request into a new browser tab using JavaScript / XUL?
I'm trying to open a new browser tab with the results of a POST request. I'm trying to do so using a function containing the following code: var windowManager = Components.classes["@mozilla.org/appshell/window-mediator;1"] .getService(Components.interfaces.nsIWindowMediator); var browserWindow = windowManager.getMostRecentWindow("navigator:browser"); var browser = browserWindow.getBrowser(); if(browser.mCurrentBrowser.currentURI.spec == "about:blank") browserWindow.loadURI(url, null, postData, false); else browser.loadOneTab(url, null, null, postData, false, false); I'm using a string as url, and JSON data as postData. Is there something I'm doing wrong? What happens is a new tab is created, the location shows the URL I want to post to, but the document is blank. The Back, Forward, and Reload buttons are all grayed out on the browser. It seems like it did everything except execute the POST. If I leave the postData parameter off, then it properly runs a GET. Build identifier: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.0.1) Gecko/2008070206 Firefox/3.0.1
[ "Something which is less Mozilla specific and should work reasonably well with most of the browsers:\n\nCreate a hidden form with the fields set up the way you need them\nMake sure that the \"target\" attribute of the form is set to \"_BLANK\"\nSubmit the form programatically\n\n", "The answer to this was found by shog9. The postData parameter needs to be a nsIMIMEInputStream object as detailed in here.\n", "try with addTab instead of loadOneTab, and remove the last parameter. \nCheck out this page over at the Mozilla Development Center for information on how to open tabs. \nYou could use this function, for example:\nfunction openAndReuseOneTabPerURL(url) {\n var wm = Components.classes[\"@mozilla.org/appshell/window-mediator;1\"]\n .getService(Components.interfaces.nsIWindowMediator);\n var browserEnumerator = wm.getEnumerator(\"navigator:browser\");\n\n // Check each browser instance for our URL\n var found = false;\n while (!found && browserEnumerator.hasMoreElements()) {\n var browserInstance = browserEnumerator.getNext().getBrowser();\n\n // Check each tab of this browser instance\n var numTabs = browserInstance.tabContainer.childNodes.length;\n for(var index=0; index<numTabs; index++) {\n var currentBrowser = browserInstance.getBrowserAtIndex(index);\n if (\"about:blank\" == currentBrowser.currentURI.spec) {\n\n // The URL is already opened. Select this tab.\n browserInstance.selectedTab = browserInstance.tabContainer.childNodes[index];\n\n // Focus *this* browser\n browserInstance.focus();\n found = true;\n break;\n }\n }\n }\n\n // Our URL isn't open. Open it now.\n if (!found) {\n var recentWindow = wm.getMostRecentWindow(\"navigator:browser\");\n if (recentWindow) {\n // Use an existing browser window\n recentWindow.delayedOpenTab(url, null, null, null, null);\n }\n else {\n // No browser windows are open, so open a new one.\n window.open(url);\n }\n }\n}\n\n" ]
[ 3, 3, 0 ]
[]
[]
[ "firefox", "javascript", "ubiquity", "xul" ]
stackoverflow_0000036144_firefox_javascript_ubiquity_xul.txt
Q: Do you have to register a Dialog Box? So, I am a total beginner in any kind of Windows related programming. I have been playing around with the Windows API and came across a couple of examples on how to initialize and create windows and such. One example creates a regular window (I abbreviated some of the code): int WINAPI WinMain( [...] ) { [...] // Windows Class setup wndClass.cbSize = sizeof( wndClass ); wndClass.style = CS_HREDRAW | CS_VREDRAW; [...] // Register class RegisterClassEx( &wndClass ); // Create window hWnd = CreateWindow( szAppName, "Win32 App", WS_OVERLAPPEDWINDOW, 0, 0, 512, 384, NULL, NULL, hInstance, NULL ); [...] } The second example creates a dialog box (no abbreviations except the WinMain arguments): int WINAPI WinMain( [...] ) { // Create dialog box DialogBox(hInstance, MAKEINTRESOURCE(IDD_MAIN_DLG), NULL, (DLGPROC)DialogProc); } The second example does not contain any call to the register function. It just creates the DialogBox with its DialogProc process attached. This works fine, but I am wondering if there is a benefit of registering the window class and then creating the dialog box (if this is at all possible). A: You do not have to register a dialog box. Dialog boxes are predefined so (as you noted) there is no reference to a window class when you create a dialog. If you want more control of a dialog (like you get when you create your own window class) you would subclass the dialog, which is a method by which you replace the dialog's window procedure with your own. When your procedure is called you modify the behavior of the dialog window; you then might or might not call the original window procedure depending upon what you're trying to do. A: It's been a while since I've done this, but IIRC, the first case is for creating a dialog dynamically, from an in-memory template. The second example is for the far more common case of creating a dialog using a resource. The dynamic dialog stuff in Win32 was fairly complex, but it allowed you to create a true data-driven interface, and avoid issues with bundling resources with DLLs. As for why use Win32 - if you need a windows app and you don't want to depend on MFC or the .NET runtime, then that's what you use.
Do you have to register a Dialog Box?
So, I am a total beginner in any kind of Windows related programming. I have been playing around with the Windows API and came across a couple of examples on how to initialize and create windows and such. One example creates a regular window (I abbreviated some of the code): int WINAPI WinMain( [...] ) { [...] // Windows Class setup wndClass.cbSize = sizeof( wndClass ); wndClass.style = CS_HREDRAW | CS_VREDRAW; [...] // Register class RegisterClassEx( &wndClass ); // Create window hWnd = CreateWindow( szAppName, "Win32 App", WS_OVERLAPPEDWINDOW, 0, 0, 512, 384, NULL, NULL, hInstance, NULL ); [...] } The second example creates a dialog box (no abbreviations except the WinMain arguments): int WINAPI WinMain( [...] ) { // Create dialog box DialogBox(hInstance, MAKEINTRESOURCE(IDD_MAIN_DLG), NULL, (DLGPROC)DialogProc); } The second example does not contain any call to the register function. It just creates the DialogBox with its DialogProc process attached. This works fine, but I am wondering if there is a benefit of registering the window class and then creating the dialog box (if this is at all possible).
[ "You do not have to register a dialog box.\nDialog boxes are predefined so (as you noted) there is no reference to a window class when you create a dialog. If you want more control of a dialog (like you get when you create your own window class) you would subclass the dialog which is a method by which you replace the dialogs window procedure with your own. When your procedure is called you modify the behavior of the dialog window; you then might or might not call the original window procedure depending upon what you're trying to do.\n", "It's been a while since I've done this, but IIRC, the first case is for creating a dialog dynamically, from an in-memory template. The second example is for the far more common case of creating a dialog using a resource. The dynamic dialog stuff in Win32 was fairly complex, but it allowed you to create a true data-driven interface, and avoid issues with bundling resources with DLLs.\nAs for why use Win32 - if you need a windows app and you don't want to depend on MFC or the .NET runtime, then that's what you use.\n" ]
[ 2, 2 ]
[]
[]
[ "c++", "winapi" ]
stackoverflow_0000036991_c++_winapi.txt
Q: Is there a standard way to return values from custom dialogs in Windows Forms? So right now my project has a few custom dialogs that do things like prompt the user for his birthday, or whatever. Right now they're just doing things like setting a this.Birthday property once they get an answer (which is of type DateTime?, with the null indicating a "Cancel"). Then the caller inspects the Birthday property of the dialog it created to figure out what the user answered. My question is, is there a more standard pattern for doing stuff like this? I know we can set this.DialogResult for basic OK/Cancel stuff, but is there a more general way in Windows Forms for a form to indicate "here's the data I collected"? A: I would say exposing properties on your custom dialog is the idiomatic way to go because that is how standard dialogs (like the Select/OpenFileDialog) do it. Someone could argue it is more explicit and intention revealing to have a ShowBirthdayDialog() method that returns the result you're looking for, but following the framework's pattern is probably the wise way to go. A: is there a more standard pattern for doing stuff like this? No, it sounds like you're using the right approach. If the dialog returns DialogResult.OK, assume that all the necessary properties in the dialog are valid. A: For me sticking with the Dialog returning the standard dialog responses and then accessing the results via properties is the way to go. Two good reasons from where I sit: Consistency - you're always doing the same thing with a dialog and the very nature of the question suggests that patterns are good (-: Although equally the question is whether this is a good pattern? It allows for return of multiple values from the dialog - ok, there's a whole new discussion here too but applied pragmatism means that this is what one wants in some circumstances; it's not always appropriate or desirable to package values up just so that you can pass them back in all in one go. The flow of logic is nice too: if (Dialog == Ok) { // Do Stuff with the entered values } else { // Respond appropriately to the user cancelling the dialog } It's a good question - we're supposed to question stuff like this - but for me the current pattern is a decent one. Murph A: For modal input dialogs, I typically overload ShowDialog and pass out params for the data I need. DialogResult ShowDialog(out DateTime birthday) I generally find that it's easier to discover and understand vs mixing my properties with the 100+ that the Form class exposes. For forms, I normally have a Controller and an IView interface that uses readonly properties to pass data. A: I've always done it exactly the way you're describing. I'm curious to see if there's a more accepted approach.
Is there a standard way to return values from custom dialogs in Windows Forms?
So right now my project has a few custom dialogs that do things like prompt the user for his birthday, or whatever. Right now they're just doing things like setting a this.Birthday property once they get an answer (which is of type DateTime?, with the null indicating a "Cancel"). Then the caller inspects the Birthday property of the dialog it created to figure out what the user answered. My question is, is there a more standard pattern for doing stuff like this? I know we can set this.DialogResult for basic OK/Cancel stuff, but is there a more general way in Windows Forms for a form to indicate "here's the data I collected"?
[ "I would say exposing properties on your custom dialog is the idiomatic way to go because that is how standard dialogs (like the Select/OpenFileDialog) do it. Someone could argue it is more explicit and intention revealing to have a ShowBirthdayDialog() method that returns the result you're looking for, but following the framework's pattern is probably the wise way to go.\n", "\nis there a more standard pattern for doing stuff like this?\n\nNo, it sounds like you're using the right approach.\nIf the dialog returns DialogResult.OK, assume that all the necessary properties in the dialog are valid.\n", "For me sticking with the Dialog returning the standard dialog responses and then accessing the results via properties is the way to go.\nTwo good reasons from where I sit:\n\nConsistency - you're always doing the same thing with a dialog and the very nature of the question suggests that patterns are good (-: Although equally the question is whether this is a good pattern?\nIt allows for return of multiple values from the dialog - ok there's whole new discussion here too but applied pragmatism means that this is what one wants in some circumstances its not always appropriate or desirable to package values up just so that you can pass them back in all in one go.\n\nThe flow of logic is nice too:\nif (Dialog == Ok)\n{\n // Do Stuff with the entered values\n}\nelse\n{\n // Respond appropriately to the user cancelling the dialog\n}\n\nIts a good question - we're supposed to question stuff like this - but for me the current pattern is a decent one.\nMurph\n", "For modal input dialogs, I typically overload ShowDialog and pass out params for the data I need.\nDialogResult ShowDialog(out datetime birthday)\n\nI generally find that it's easier to discover and understand vs mixing my properties with the 100+ that the Form class exposes.\nFor forms, I normally have a Controller and a IView interface that uses readonly properties to pass data.\n", "I've always done it exactly the way you're describing. I'm curious to see if there's a more accepted approach.\n" ]
[ 9, 3, 2, 1, 0 ]
[]
[]
[ ".net", "user_interface", "winforms" ]
stackoverflow_0000036984_.net_user_interface_winforms.txt
Q: Integrating Perl and Oracle Advanced Queuing Is there any way to listen to an Oracle AQ using a Perl process as the listener?

A: This Introduction to Oracle Advanced Queuing states that you can interface to it through "Internet access using HTTP, HTTPS, and SMTP", so it should be straightforward to do that using a Perl script.
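If you would rather stay inside the database than go over HTTP, a blocking dequeue from Perl is also possible. The following is an untested sketch using DBI/DBD::Oracle to call Oracle's DBMS_AQ.DEQUEUE from an anonymous PL/SQL block; the connection details, queue name (MY_QUEUE), payload type (my_payload_t) and its text attribute are all hypothetical placeholders for your own setup:

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Oracle:mydb', 'aq_user', 'secret',
                       { RaiseError => 1, AutoCommit => 0 });

my $plsql = q{
    DECLARE
        l_opts    DBMS_AQ.DEQUEUE_OPTIONS_T;
        l_props   DBMS_AQ.MESSAGE_PROPERTIES_T;
        l_msgid   RAW(16);
        l_payload my_payload_t;            -- your queue's payload type
    BEGIN
        l_opts.wait := DBMS_AQ.FOREVER;    -- block until a message arrives
        DBMS_AQ.DEQUEUE(queue_name         => 'MY_QUEUE',
                        dequeue_options    => l_opts,
                        message_properties => l_props,
                        payload            => l_payload,
                        msgid              => l_msgid);
        :msg := l_payload.text;            -- assumes a "text" attribute
        COMMIT;
    END;
};

my $sth = $dbh->prepare($plsql);
my $msg;
$sth->bind_param_inout(':msg', \$msg, 4000);
$sth->execute;                             # blocks here until the dequeue returns
print "Received: $msg\n";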
Integrating Perl and Oracle Advanced Queuing
Is there any way to listen to an Oracle AQ using a Perl process as the listener?
[ "This Introduction to Oracle Advanced Queuing states that you can interface to it through \"Internet access using HTTP, HTTPS, and SMTP\" so it should be straightforward to do that using a Perl script.\n" ]
[ 1 ]
[]
[]
[ "advanced_queuing", "messaging", "oracle", "perl" ]
stackoverflow_0000036825_advanced_queuing_messaging_oracle_perl.txt
Q: How Do Sites Suppress Pasting Text? I've noticed that some sites (usually banks) suppress the ability to paste text into text fields. How is this done? I know that JavaScript can be used to swallow the keyboard shortcut for paste, but what about the right-click menu item?

A: Probably using the onpaste event, and either return false from it or use e.preventDefault() on the Event object. Note that onpaste is non-standard; don't rely on it for production sites, because it will not be there forever.

$(document).on("paste", function(e){
    console.log("paste")
    e.preventDefault()
    return false;
})

A: Even if it is somewhat possible to intercept the paste event in many browsers (but not all, as shown at the link in the previous answer), that is quite unreliable and possibly not complete (depending on the browser / OS it may be possible to do the paste operation in different ways that may not be trappable by JavaScript code). Here is a collection of notes regarding paste (and copy) in the context of rich text editors that may also be applied elsewhere.
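A plain-DOM version of the same idea, scoped to a single field instead of the whole document (a sketch - the confirm-email id is a made-up example, and paste-event support still varies by browser):

var field = document.getElementById("confirm-email");
field.addEventListener("paste", function (e) {
    e.preventDefault();   // covers Ctrl+V, Shift+Insert and the context menu's Paste
}, false);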
How Do Sites Suppress Pasting Text?
I've noticed that some sites (usually banks) suppress the ability to paste text into text fields. How is this done? I know that JavaScript can be used to swallow the keyboard shortcut for paste, but what about the right-click menu item?
[ "Probably using the onpaste event, and either return false from it or use e.preventDefault() on the Event object.\nNote that onpaste is non standard, don't rely on it for production sites, because it will not be there forever.\n\n\n$(document).on(\"paste\",function(e){\r\n console.log(\"paste\")\r\n e.preventDefault()\r\n return false;\r\n})\n\n\n\n", "Even if it is somewhat possible to intercept the paste event in many browsers (but not all as shown at the link on the previous answer), that is quite unreliable and posible not complete (depending on the browser / OS it may be possible to do the paste operation in different ways that may not be trappable by javascript code).\nHere is a collection of notes regarding paste (and copy) in the context of rich text editors that may be applied also elsewhere.\n" ]
[ 10, 2 ]
[]
[]
[ "browser", "clipboard", "javascript", "web_applications" ]
stackoverflow_0000033103_browser_clipboard_javascript_web_applications.txt
Q: How can I get notification when a mirrored SQL Server database has failed over We have a couple of mirrored SQL Server databases. My first problem - the key problem - is to get a notification when the db fails over. I don't need to know because, erm, its mirrored and so it (almost) all carries on working automagically but it would useful to be advised and I'm currently getting failovers when I don't think I should be so it want to know when they occur (without too much digging) to see if I can determine why. I have services running that I could fairly easily use to monitor this - so the alternative question would be "How do I programmatically determine which is the principal and which is the mirror" - preferably in a more intelligent fashion than just attempting to connect each in turn (which would mostly work but...). Thanks, Murph Addendum: One of the answers queries why I don't need to know when it fails over - the answer is that we're developing using ADO.NET and that has automatic failover support, all you have to do is add Failover Partner=MIRRORSERVER (where MIRRORSERVER is the name of your mirror server instance) to your connection string and your code will fail over transparently - you may get some errors depending on what connections are active but in our case very few. A: Right, The two answers and a little thought got me to something approaching an answer. First a little more clarification: The app is written in C# (2.0+) and uses ADO.NET to talk to SQL Server 2005. The mirror setup is two W2k3 servers hosting the Principal and the Mirror plus a third server hosting an express instance as a monitor. The nice thing about this is a failover is all but transparent to the app using the database, it will throw an error for some connections but fundamentally everything will carry on nicely. Yes we're getting the odd false positive but the whole point is to have the system carry on working with the least amount of fuss and mirror does deliver this very nicely. Further, the issue is not with serious server failure - that's usually a bit more obvious but with a failover for other reasons (c.f. the false positives above) as we do have a couple of things that can't, for various reasons, fail over and in any case so we can see if we can identify the circumstance where we get false positives. So, given the above, simply checking the status of the boxes is not quite enough and chasing through the event log is probably overly complex - the answer is, as it turns out, fairly simple: sp_helpserver The first column returned by sp_helpserver is the server name. If you run the request at regular intervals saving the previous server name and doing a comparison each time you'll be able to identify when a change has taken place and then take the appropriate action. The following is a console app that demonstrates the principal - although it needs some work (e.g. the connection ought to be non-pooled and new each time) but its enough for now (so I'd then accept this as "the" answer"). 
Parameters are Principal, Mirror, Database

using System;
using System.Data.SqlClient;

namespace FailoverMonitorConcept
{
    class Program
    {
        static void Main(string[] args)
        {
            string server = args[0];
            string failover = args[1];
            string database = args[2];

            string connStr = string.Format("Integrated Security=SSPI;Persist Security Info=True;Data Source={0};Failover Partner={1};Packet Size=4096;Initial Catalog={2}", server, failover, database);
            string sql = "EXEC sp_helpserver";

            SqlConnection dc = new SqlConnection(connStr);
            SqlCommand cmd = new SqlCommand(sql, dc);
            Console.WriteLine("Connection string: " + connStr);
            Console.WriteLine("Press any key to test, press q to quit");

            string priorServerName = "";
            char key = ' ';

            while (key.ToString().ToLower() != "q")
            {
                dc.Open();
                try
                {
                    string serverName = cmd.ExecuteScalar() as string;
                    Console.WriteLine(DateTime.Now.ToLongTimeString() + " - Server name: " + serverName);
                    if (priorServerName == "")
                    {
                        priorServerName = serverName;
                    }
                    else if (priorServerName != serverName)
                    {
                        Console.WriteLine("***** SERVER CHANGED *****");
                        Console.WriteLine("New server: " + serverName);
                        priorServerName = serverName;
                    }
                }
                catch (System.Data.SqlClient.SqlException ex)
                {
                    Console.WriteLine("Error: " + ex.ToString());
                }
                finally
                {
                    dc.Close();
                }
                key = Console.ReadKey(true).KeyChar;
            }

            Console.WriteLine("Finis!");
        }
    }
}

I wouldn't have arrived here without a) asking the question and then b) getting the responses, which made me actually think.
Murph

A: If the failover logic is in your application you could write a status screen that shows which box you're connected to by writing to a var when the first connection attempt fails. I think your best bet would be a ping daemon/cron job that checks the status of each box periodically and sends an email if one doesn't respond.

A: Use something like Host Monitor http://www.ks-soft.net/hostmon.eng/ to monitor the Event Log for messages related to the failover event, which can send you an alert via email/SMS. I'm curious though how you wouldn't need to know that the failover happened, because don't you have to then update the datasources in your applications to point to the new server that you failed over to? Mirroring takes place on different hosts (the primary and the mirror), unlike clustering which has multiple nodes that appear to be a single device from the outside. Also, are you using a witness server in order to automatically fail over from the primary to the mirror? This is the only way I know of to make it happen automatically, and in my experience, you get a lot of false-positives where network hiccups can fool the mirror and witness into thinking the primary is down when in fact it is not.
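If you have access to SQL Server 2005's catalog views, a lighter-weight variant of the same polling idea is to ask each server directly which role it currently plays. This is a sketch, not something tested against the poster's setup:

-- Returns 'PRINCIPAL' or 'MIRROR'; NULL means the database isn't mirrored.
SELECT mirroring_role_desc
FROM   sys.database_mirroring
WHERE  database_id = DB_ID('YourMirroredDb');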
How can I get notification when a mirrored SQL Server database has failed over
We have a couple of mirrored SQL Server databases. My first problem - the key problem - is to get a notification when the db fails over. I don't need to know because, erm, it's mirrored and so it (almost) all carries on working automagically, but it would be useful to be advised, and I'm currently getting failovers when I don't think I should be, so I want to know when they occur (without too much digging) to see if I can determine why. I have services running that I could fairly easily use to monitor this - so the alternative question would be "How do I programmatically determine which is the principal and which is the mirror" - preferably in a more intelligent fashion than just attempting to connect to each in turn (which would mostly work but...). Thanks, Murph Addendum: One of the answers queries why I don't need to know when it fails over - the answer is that we're developing using ADO.NET and that has automatic failover support, all you have to do is add Failover Partner=MIRRORSERVER (where MIRRORSERVER is the name of your mirror server instance) to your connection string and your code will fail over transparently - you may get some errors depending on what connections are active but in our case very few.
[ "Right, \nThe two answers and a little thought got me to something approaching an answer.\nFirst a little more clarification:\nThe app is written in C# (2.0+) and uses ADO.NET to talk to SQL Server 2005.\nThe mirror setup is two W2k3 servers hosting the Principal and the Mirror plus a third server hosting an express instance as a monitor. The nice thing about this is a failover is all but transparent to the app using the database, it will throw an error for some connections but fundamentally everything will carry on nicely. Yes we're getting the odd false positive but the whole point is to have the system carry on working with the least amount of fuss and mirror does deliver this very nicely.\nFurther, the issue is not with serious server failure - that's usually a bit more obvious but with a failover for other reasons (c.f. the false positives above) as we do have a couple of things that can't, for various reasons, fail over and in any case so we can see if we can identify the circumstance where we get false positives.\nSo, given the above, simply checking the status of the boxes is not quite enough and chasing through the event log is probably overly complex - the answer is, as it turns out, fairly simple: sp_helpserver\nThe first column returned by sp_helpserver is the server name. If you run the request at regular intervals saving the previous server name and doing a comparison each time you'll be able to identify when a change has taken place and then take the appropriate action.\nThe following is a console app that demonstrates the principal - although it needs some work (e.g. the connection ought to be non-pooled and new each time) but its enough for now (so I'd then accept this as \"the\" answer\"). Parameters are Principal, Mirror, Database\nusing System;\nusing System.Data.SqlClient;\n\nnamespace FailoverMonitorConcept\n{\n class Program\n {\n static void Main(string[] args)\n {\n string server = args[0];\n string failover = args[1];\n string database = args[2];\n\n string connStr = string.Format(\"Integrated Security=SSPI;Persist Security Info=True;Data Source={0};Failover Partner={1};Packet Size=4096;Initial Catalog={2}\", server, failover, database);\n string sql = \"EXEC sp_helpserver\";\n\n SqlConnection dc = new SqlConnection(connStr);\n SqlCommand cmd = new SqlCommand(sql, dc);\n Console.WriteLine(\"Connection string: \" + connStr);\n Console.WriteLine(\"Press any key to test, press q to quit\");\n\n string priorServerName = \"\";\n char key = ' ';\n\n while(key.ToString().ToLower() != \"q\")\n {\n dc.Open();\n try\n {\n string serverName = cmd.ExecuteScalar() as string;\n Console.WriteLine(DateTime.Now.ToLongTimeString() + \" - Server name: \" + serverName);\n if (priorServerName == \"\")\n {\n priorServerName = serverName;\n }\n else if (priorServerName != serverName)\n {\n Console.WriteLine(\"***** SERVER CHANGED *****\");\n Console.WriteLine(\"New server: \" + serverName);\n priorServerName = serverName;\n }\n }\n catch (System.Data.SqlClient.SqlException ex)\n {\n Console.WriteLine(\"Error: \" + ex.ToString());\n }\n finally\n {\n dc.Close();\n }\n key = Console.ReadKey(true).KeyChar;\n\n }\n\n Console.WriteLine(\"Finis!\");\n\n }\n }\n}\n\nI wouldn't have arrived here without a) asking the question and then b) getting the responses which made me actually think\nMurph\n", "If the failover logic is in your application you could write a status screen that shows which box you're connected by writing to a var when the first connection attempt fails.\nI think your best 
bet would be a ping daemon/cron job that checks the status of each box periodically and sends an email if one doesn't respond. \n", "Use something like Host Monitor http://www.ks-soft.net/hostmon.eng/ to monitor the Event Log for messages related to the failover event, which can send you an alert via email/SMS.\nI'm curious though how you wouldn't need to know that the failover happened, because don't you have to then update the datasources in your applications to point to the new server that you failed over to? Mirroring takes place on different hosts (the primary and the mirror), unlike clustering which has multiple nodes that appear to be a single device from the outside.\nAlso, are you using a witness server in order to automatically fail over from the primary to the mirror? This is the only way I know of to make it happen automatically, and in my experience, you get a lot of false-positives where network hiccups can fool the mirror and witness into thinking the primary is down when in fact it is not.\n" ]
[ 2, 1, 1 ]
[]
[]
[ "sql_server" ]
stackoverflow_0000028353_sql_server.txt
Q: Best Practices for versioning web site? What are the best practices for versioning web sites? Which revision control systems are well suited for such a job? What special-purpose tools exist? What other questions should I be asking?

A: Firstly you can - and should - use a revision control system; most will handle binary files, although unlike text files you can't merge two different sets of changes, so you may want to set the system up to lock these files whilst they are being changed (assuming that that's not the default mode of operation for your RCS in the first place).
Where things get a bit more interesting for Websites is managing those files that are required for the site but don't actually form part of the site - the most obvious example being something like .psd files from which web graphics are produced but which don't get deployed.
We therefore have a tree for each site which has two folders: assets and site. Assets are things that aren't in the site, and site is - well, the site.
What you have to watch with this is that designers tend to have their own "systems" for "versioning" graphic files (count the layers in the PSD). You don't necessarily need to stop them doing this but you do need to ensure that they commit each change too.
Other questions?
Deployment. We're still working on this one (-: But we're getting better (I'm happier now with what we do!)
Murph

A: In response to Christian Lescuyer's post, you also need to enable the "svn:keywords" property on the file with that line in it. Subversion won't bother looking in your files for keywords like $Revision$ unless that property is set. Also, if using PHP like in his example, you may want to put $Revision$ inside a single-quoted string instead of a double-quoted string to prevent PHP from trying to parse $Revision as a PHP variable and throwing a warning. :)

A: I use Subversion. As an easy way to reference the website version (production, testing, development), I use a very simple trick. I add the revision number somewhere on the site (eg in the admin footer). Something like this:

<?php print("$Revision: 1 $"); ?>

Each time you checkout (development versions) or export (for production), the "1" will be replaced by the revision number in your repository, thus making it easy to set up the customer version on your test server, for example.
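To make the svn:keywords point from the second answer concrete, here is the pair of commands involved (a sketch - the footer path is just an example):

# Tell Subversion to expand $Revision$ in this file on update/export
svn propset svn:keywords "Revision" includes/footer.php
svn commit -m "Enable Revision keyword expansion in the footer"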
Best Practices for versioning web site?
What are the best practices for versioning web sites? Which revision control systems are well suited for such a job? What special-purpose tools exist? What other questions should I be asking?
[ "Firstly you can - and should - use a revision control system, most will handle binary files although unlike text files you can't merge two different set of changes so you may want to set the system up to lock these files whilst they are being changed (assuming that that's not the default mode of operation for you rcs in the first place).\nWhere things get a bit more interesting for Websites is managing those files that are required for the site but don't actually form part of the site - the most obvious example being something like .psd files from which web graphics are produced but which don't get deployed.\nWe therefore have a tree for each site which has two folders: assets and site. Assets are things that aren't in the site, and site is - well the site. \nWhat you have to watch with this is that designers tend to have their own \"systems\" for \"versioning\" graphic files (count the layers in the PSD). You don't need necessarily to stop them doing this but you do need to ensure that they commit each change too.\nOther questions? \nDeployment. We're still working on this one (-: But we're getting better (I'm happier now with what we do!)\nMurph\n", "In response to Christian Lescuyer's post, you also need to enable the \"svn:keywords\" property on the file with that line in it. Subversion won't bother looking in your files for keywords like $Revision$ unless that property is set.\nAlso, if using PHP like in his example, you may want to put $Revision$ inside a single-quoted string instead of a double quoted string to prevent PHP from trying to parse $Revision as a PHP variable and throwing a warning. :)\n", "I use Subversion.\nAs an easy way to reference the website version (production, testing, development), I use a very simple trick. I add the revision number somewhere on the site (eg in the admin footer). Something like this:\n<?php print(\"$Revision: 1 $\"); ?>\n\nEach time you checkout (development versions) or export (for production), the \"1\" will be replaced by the revision number in your repository, thus making it easy to setup the customer version on your test server, for example.\n" ]
[ 5, 4, 2 ]
[]
[]
[ "version_control" ]
stackoverflow_0000037104_version_control.txt
Q: What logging is good logging for your app? So we've discussed logging in passing at my place of work and I was wondering if some of you guys here could give me some ideas of your approaches? Typically our scenario is, no logging really at all, and mostly .NET apps, winforms/WPF clients talking through web services or direct to a db. So, the real question is, where or what would you log? At the moment we have users reporting error messages - so I would assume log startups/shutdowns, exceptions... Do you take it to calls to the web services or db? Page loads? How do you get a good idea of what the user was trying to do at the time? Is it better to go all the way and log everything across multiple attempts/days, or log only what you need to (given hdd is cheap). I guess that's a few questions, but I wanted to get more of an idea of what the actual practice is out there in larger shops! A: The key thing for logging is good planning. I would suggest that you look into the enterprise library exception and logging application block (http://msdn.microsoft.com/en-us/library/cc467894.aspx). There is a wee bit of a learning curve but it does work quite well. The approach I favour at the moment is to define 4 priority levels. 4=Unhandled exception (error in event log), 3=Handled exception (warning in event log), 2=Access an external resource such as a webservice, db or mainframe system (information in event log), 1=Verbose/anything else of interest (information in event log). Using the application block it's then quite easy to tweak what level of priority you want to log. So in development you'd log everything but as you get a stable system in production, you'd probably only be interested in unhandled exceptions and possibly handled exceptions. Update: For clarity, I would suggest you have logging in both your winform/wpf app and your webservices. In a web scenario, I've had problems in the past where it can be difficult to tie an error on the client back through to the app servers. Mainly because any error through webservices gets wrapped up as a SOAP exception. I can't remember off the top of my head, but I think if you use a custom exception handler (that is part of the enterprise library) you can add data onto exceptions such as the handlinginstance id of the exception from the app server. This makes it easier to tie up exceptions on a client back to your app box by using LogParser (http://www.microsoft.com/downloads/details.aspx?FamilyID=890cd06b-abf8-4c25-91b2-f8d975cf8c07&displaylang=en). Second Update: I also like to give each different event a seperate event id and to track that in a text file or spreadsheet under source control. Yes, its a pain but if you're lucky enough to have an IT team looking after your systems in production, I find they tend to expect different events to have different event ids. A: Being an admin, I really appreciate apps that log to the Event Log (preferably their own, otherwise the application log) for all logging but trace logs. By logging to the event log, you make it much more likely that warnings or errors can be found and addressed by the admin staff before they become a major problem (if it is a issue they can address), or allows them to get in contact with the devs, who can use the trace logs to further troubleshoot the issue. My biggest pain point in supporting a custom .NET app right now is that there are 8 different applications (some console apps, some winforms, and some web) from the same vendor. 
None of them log to the event log, they all have their own custom log files. But for all the winforms and console apps, they keep the file open while they are running, so I can't monitor it for issues. Also, the logs are all written slightly differently, so I would have to parse them a bit differently to get useful information.
This forces me to monitor the appearance of an application (is it responding on the ports it is active on, is the process working set getting too high, etc..), rather than what the state of the application really is.
Please, please consider the folks who maintain your application after it is deployed and provide logging they can use. Thanks!

A: This post on highscalability.com provides a good perspective on logging in a large scale distributed system. (And coincidentally it starts out by mentioning a post on JoelOnSoftware).

A: Is it better to go all the way and log everything across multiple attempts/days, or log only what you need to (given hdd is cheap).
The fact harddrives are cheap really isn't a good reason to verbosely log everything possible, for a few reasons. For one, with a very busy application, you really don't want to slow it down and tie up disc-writes writing logs (harddrives are pretty slow). The second point, and the more important one - there's really very little to gain from terabytes worth of logs. For development, they can be useful, but you don't need to keep more than a few minutes of them.
Some logging is of course useful; having different levels is about the only way to go about it - for example debug() and info() only get logged if requested (in a config, or command line flag), then maybe warning() and error() get sent to a log file.
For most of the things I've written (smallish scripts) I generally just have a debug() function, that checks if --verbose is set, and prints the message. That way I can shove debug("some value: %s" % (avar)) in when needed, and not have to worry about going back and removing debugging print() statements everywhere.
For web applications, I generally just use the web-server logs for statistics, and the error log. I use things like mod_rewrite's log when needed, but it would be idiotic to leave this enabled beyond development (as it creates many many lines on each page request).
I suppose it depends on the application itself, but generally, for big applications use multiple levels of logs that can be activated when needed. For smaller things, a --verbose flag or equivalent; for web applications, log errors and (to a point) log hits.
Basically, in "production" log only the information you can use; in development log everything you could possibly need to fix problems.

A: As a quick answer I would say to come up with a series of categories and have switchable logging levels, e.g. info, warning, error, critical, etc. Then make it easy to set the logging level to tune the level of detail that you need. Typically, set the logging level in a config file and stop and restart the app. I would also publicize to the developers what the meaning is for each of the levels.
edit: I would also set up a system to rotate out, compress and archive log files on a regular basis, maybe nightly.

A: For a typical desktop app, I'd store everything on the current session, and maybe store info messages for the past n sessions or up to x in size.
I'm assuming that your messages are organized. We use 4 categories: errors, warnings, info, and trace. We're still figuring out what goes at which level.
As I'm getting used to parsing log files, I generally say "log more". Don't sweat readability, you're probably gonna have to process the log file a bit before you can use it. In the end, find a good logging framework that allows you to control your spool usage on lifetime and storage space, and a proper api that minimizes the effect on your code. Ideally you just type info("waaah") or warning("waah") and the API does all the fancy tagging for you. A: Thanks guys, lot of good info, but Martin has given me a bit more detail on how to proceed. I'll give him the answer, as it seems like now we're off the front few pages answers will drop off.
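Picking up the point about writing to the Event Log so admins can actually see warnings: in .NET that is a few lines with System.Diagnostics. A hedged sketch (the source name and event id are arbitrary examples; creating a source needs elevated rights the first time):

using System.Diagnostics;

const string source = "MyCompany.MyApp";      // hypothetical source name
if (!EventLog.SourceExists(source))
{
    EventLog.CreateEventSource(source, "Application");
}
EventLog.WriteEntry(source,
                    "Could not reach the billing web service; will retry.",
                    EventLogEntryType.Warning,
                    2001);                     // arbitrary event id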
What logging is good logging for your app?
So we've discussed logging in passing at my place of work and I was wondering if some of you guys here could give me some ideas of your approaches? Typically our scenario is, no logging really at all, and mostly .NET apps, winforms/WPF clients talking through web services or direct to a db. So, the real question is, where or what would you log? At the moment we have users reporting error messages - so I would assume log startups/shutdowns, exceptions... Do you take it to calls to the web services or db? Page loads? How do you get a good idea of what the user was trying to do at the time? Is it better to go all the way and log everything across multiple attempts/days, or log only what you need to (given hdd is cheap). I guess that's a few questions, but I wanted to get more of an idea of what the actual practice is out there in larger shops!
[ "The key thing for logging is good planning. I would suggest that you look into the enterprise library exception and logging application block (http://msdn.microsoft.com/en-us/library/cc467894.aspx). There is a wee bit of a learning curve but it does work quite well. The approach I favour at the moment is to define 4 priority levels. 4=Unhandled exception (error in event log), 3=Handled exception (warning in event log), 2=Access an external resource such as a webservice, db or mainframe system (information in event log), 1=Verbose/anything else of interest (information in event log).\nUsing the application block it's then quite easy to tweak what level of priority you want to log. So in development you'd log everything but as you get a stable system in production, you'd probably only be interested in unhandled exceptions and possibly handled exceptions.\nUpdate: For clarity, I would suggest you have logging in both your winform/wpf app and your webservices. In a web scenario, I've had problems in the past where it can be difficult to tie an error on the client back through to the app servers. Mainly because any error through webservices gets wrapped up as a SOAP exception. I can't remember off the top of my head, but I think if you use a custom exception handler (that is part of the enterprise library) you can add data onto exceptions such as the handlinginstance id of the exception from the app server. This makes it easier to tie up exceptions on a client back to your app box by using LogParser (http://www.microsoft.com/downloads/details.aspx?FamilyID=890cd06b-abf8-4c25-91b2-f8d975cf8c07&displaylang=en). \nSecond Update: I also like to give each different event a seperate event id and to track that in a text file or spreadsheet under source control. Yes, its a pain but if you're lucky enough to have an IT team looking after your systems in production, I find they tend to expect different events to have different event ids.\n", "Being an admin, I really appreciate apps that log to the Event Log (preferably their own, otherwise the application log) for all logging but trace logs. By logging to the event log, you make it much more likely that warnings or errors can be found and addressed by the admin staff before they become a major problem (if it is a issue they can address), or allows them to get in contact with the devs, who can use the trace logs to further troubleshoot the issue.\nMy biggest pain point in supporting a custom .NET app right now is that there are 8 different applications (some console apps, some winforms, and some web) from the same vendor. None of them log to the event log, they all have their own custom log files. But for all the winforms and console apps, they keep the file open while they are running, so I can't monitor it for issues. Also, the logs are all written slightly differently, so I would have to parse them a bit differently to get useful information. \nThis forces me to monitor the appearance of an application (is it responding on the ports it is active on, is the process working set getting too high, etc..), rather than what the state of the application really is.\nPlease, please consider the folks who maintain your application after it is deployed and provide logging they can use. Thanks!\n", "This post on highscalability.com provides a good perspective on logging in a large scale distributed system. 
(And coincidentally it starts out by mentioning a post on the JoelOnSoftware).\n", "\nIs it better to go all the way and log everything across multiple attempts/days, or log only what you need to (given hdd is cheap).\n\nThe fact harddrives are cheap really isn't a good reason to verbosely log everything possible, for a few reasons.. For one, with a very busy application, you really don't want to slow it down and tie up disc-writes writing logs (harddrives are pretty slow). The second point, and the more important one - there's really very little to gain from terabytes worth of logs.. For development, they can are useful, but you don't need to keep more than a few minutes of them..\nSome logging is of course useful, having different levels is about the only way to go about it - for example debug() info() only get logged if requested (in a config, or command line flag), then maybe warning() and error() get sent to a log file\nFor most of the things I've written (smallish scripts) I generally just have a debug() function, that checks if --verbose is set, and prints the message.. That way I can shove debug(\"some value: %s\" % (avar)) when needed, and not have to worry about going back and removing debugging print() statements everwhere.\nFor web applications, I generally just use the web-server logs for statistics, and the error log. I use things like mod_rewrite's log when needed, but it would be idiotic to leave this enabled beyond development (as it creates many many lines on each page request)\nI suppose it depends on the application itself, but generally, for big applications use multiple levels of logs that can be activated when needed. For smaller things, a --verbose flag or equivalent, for web applications, log errors and (to a point) log hits.\nBasically, in \"production\" log only the information you can use, in development log everything you could possible need to fix problems.\n", "As a quick answer I would say to come up with a series of categories and have switchable logging levels, e.g. info, warning, error, critical, etc.\nThen make it easy to set the logging level to tune the level of detail that you need. Typically, set the logging level in a config file and stop and restart the app.\nI would also publicize to the developers what the meaning is for each of the levels.\nedit: I would also set up a system to rotate out, compress and archive log files on a regular basis, maybe nightly.\n", "For a typical desktop app, I'd store everything on the current session, and maybe store info messages for the past n sessions or up to x in size.\nI'm assuming that your messages are organized. We use 4 categories; errors, warnings, info, and trace. We're still figuring out what goes at which level. As I'm getting used to parsing log files, I generally say \"log more\". Don't sweat readability, you're probably gonna have to process the log file a bit before you can use it.\nIn the end, find a good logging framework that allows you to control your spool usage on lifetime and storage space, and a proper api that minimizes the effect on your code. Ideally you just type info(\"waaah\") or warning(\"waah\") and the API does all the fancy tagging for you. \n", "Thanks guys, lot of good info, but Martin has given me a bit more detail on how to proceed. I'll give him the answer, as it seems like now we're off the front few pages answers will drop off.\n" ]
[ 10, 7, 3, 2, 1, 1, 0 ]
[]
[]
[ ".net", "client_applications", "logging" ]
stackoverflow_0000035849_.net_client_applications_logging.txt
Q: Event handling in Dojo Taking Jeff Atwood's advice, I decided to use a JavaScript library for the very basic to-do list application I'm writing. I picked the Dojo toolkit, version 1.1.1. At first, all was fine: the drag-and-drop code I wrote worked first time, you can drag tasks on-screen to change their order of precedence, and each drag-and-drop operation calls an event handler that sends an AJAX call to the server to let it know that order has been changed. Then I went to add in the email tracking functionality. Standard stuff: new incoming emails have a unique ID number attached to their subject line, all subsequent emails about that problem can be tracked by simply leaving that ID number in the subject when you reply. So, we have a list of open tasks, each with their own ID number, and each of those tasks has a time-ordered list of associated emails. I wanted the text of those emails to be available to the user as they were looking at their list of tasks, so I made each task box a Dijit "Tree" control - top level contains the task description, branches contain email dates, and a single "leaf" off of each of those branches contains the email text. First problem: I wanted the tree view to be fully-collapsed by default. After searching Google quite extensively, I found a number of solutions, all of which seemed to be valid for previous versions of Dojo but not the one I was using. I eventually figured out that the best solution would seem to be to have a event handler called when the Tree control had loaded that simply collapsed each branch/leaf. Unfortunately, even though the Tree control had been instantiated and its "startup" event handler called, the branches and leaves still hadn't loaded (the data was still being loaded via an AJAX call). So, I modified the system so that all email text and Tree structure is added server-side. This means the whole fully-populated Tree control is available when its startup event handler is called. So, the startup event handler fully collapses the tree. Next, I couldn't find a "proper" way to have nice formatted text for the email leaves. I can put the email text in the leaf just fine, but any HTML gets escaped out and shows up in the web page. Cue more rummaging around Dojo's documentation (tends to be out of date, with code and examples for pre-1.0 versions) and Google. I eventually came up with the solution of getting JavaScript to go and read the SPAN element that's inside each leaf node and un-escape the escaped HTML code in it's innerHTML. I figured I'd put code to do this in with the fully-collapse-the-tree code, in the Tree control's startup event handler. However... it turns out that the SPAN element isn't actually created until the user clicks on the expando (the little "+" symbol in a tree view you click to expand a node). Okay, fair enough - I'll add the re-formatting code to the onExpand() event handler, or whatever it's called. Which doesn't seem to exist. I've searched to documentation, I've searched Google... I'm quite possibly mis-understanding Dojo's "publish/subscribe" event handling system, but I think that mainly because there doesn't seem to be any comprehensive documentation for it anywhere (like, where do I find out what events I can subscribe to?). So, in the end, the best solution I can come up with is to add an onClick event handler (not a "Dojo" event, but a plain JavaScript event that Dojo knows nothing about) to the expando node of each Tree branch that re-formats the HTML inside the SPAN element of each leaf. 
Except... when that is called, the SPAN element still doesn't exist (sometimes - other times it's been cached, just to further confuse you). Therefore, I have the event handler set up a timer that periodically calls a function that checks to see if the relevant SPAN element has turned up yet before then re-formatting it.

// An event handler called whenever an "email title" tree node is expanded.
function formatTreeNode(nodeID) {
  if (dijit.byId(nodeID).getChildren().length != 0) {
    clearInterval(nodeUpdateIntervalID);
    messageBody = dijit.byId(nodeID).getChildren()[0].labelNode.innerHTML
    if (messageBody.indexOf("<b>Message text:</b>") == -1) {
      messageBody = messageBody.replace(/&gt;/g, ">");
      messageBody = messageBody.replace(/&lt;/g, "<");
      messageBody = messageBody.replace(/&amp;/g, "&");
      dijit.byId(nodeID).getChildren()[0].labelNode.innerHTML = "<b>Message text:</b><div style=\"font-family:courier\">"+messageBody+"</div>";
    }
  }
}

// An event handler called when a tree node has been set up - we changed the default fully-expanded to fully-collapsed.
function setupTree(theTree) {
  dijit.byId("tree-"+theTree).rootNode.collapse();
  messageNode = dijit.byId("tree-"+theTree).rootNode.getChildren();
  for (pl = 0; pl < messageNode.length; pl++) {
    messageNode[pl].collapse();
    messageNode[pl].expandoNode.onclick = eval("nodeUpdateIntervalID = setInterval(\"formatTreeNode('"+messageNode[pl].id+"')\",200); formatTreeNode('"+messageNode[pl].id+"');");
  }
}

The above has the feel of a truly horrible hack, and I feel sure I must have taken a wrong turn somewhere early on in my thought process. Can someone please tell me:

The correct way to go about putting nicely-formatted text inside a Dojo/Dijit Tree control.
The correct way to handle Dojo events, like where I can figure out what events are available for me to subscribe to.
A better JavaScript library to use (can I do what I want to with JQuery and avoid the all-around-the-houses approach seen above?).

PS: If you're naming a software project, give thought to its name's uniqueness in Google - I'm sure searching for "Dojo" documentation in Google would be easier without all the martial arts results getting in the way.
PPS: Firefox spellchecker knows how to spell "Atwood", correcting me when I put two 'T's instead of one. Is Jeff just that famous now?

A: I assume that you followed the dijit.Tree and dojo.data in Dojo 1.1 tutorial which directed you to pass the data to the tree control using a data store. That had me banging my head off a brick wall for a while. It's not really a great approach and the alternative is not really well documented. You need to create a custom model instead. I have included an example below of a tree model that I created for displaying the structure of an LDAP directory. You will find the default implementation of the model in your dojo distribution at ./dijit/_tree/model.js. The comments should help you understand the functions supported by the model. The IDirectoryService calls in the code below are stubs for server-side Java POJOs generated by Direct Web Remoting (DWR). I highly recommend DWR if you're going to be doing a lot of client-server interaction.
dojo.declare("LDAPDirectoryTreeModel", [ dijit.tree.model ], {
  getRoot : function(onItem) {
    IDirectoryService.getRoots( function(roots) {
      onItem(roots[0])
    });
  },

  mayHaveChildren : function(item) {
    return true;
  },

  getChildren : function(parentItem, onComplete) {
    IDirectoryService.getChildrenImpl(parentItem, onComplete);
  },

  getIdentity : function(item) {
    return item.dn;
  },

  getLabel : function(item) {
    return item.rdn;
  }
});

And here is an extract from my JSP page where I created the model and used it to populate the tree control.

<div
  dojoType="LDAPDirectoryTreeModel"
  jsid="treeModel"
  id="treeModel">
</div>
<div
  jsid="tree"
  id="tree"
  dojoType="dijit.Tree" model="treeModel"
  labelAttr="name"
  label="${directory.host}:${directory.port}">
</div>
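On the "where do I find the events" question: in Dojo 1.x any widget method can be treated as an event via dojo.connect, which fires your handler after the named method runs - dijit.Tree exposes extension points such as onClick(item, node). A small sketch (the tree id is a hypothetical example, and connecting to underscore-prefixed internal methods also works but is fragile across releases):

var tree = dijit.byId("tree-42");
dojo.connect(tree, "onClick", function(item, node) {
    // runs every time a tree row is clicked
    console.log("clicked node", node.id);
});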
Event handling in Dojo
Taking Jeff Atwood's advice, I decided to use a JavaScript library for the very basic to-do list application I'm writing. I picked the Dojo toolkit, version 1.1.1. At first, all was fine: the drag-and-drop code I wrote worked first time, you can drag tasks on-screen to change their order of precedence, and each drag-and-drop operation calls an event handler that sends an AJAX call to the server to let it know that order has been changed. Then I went to add in the email tracking functionality. Standard stuff: new incoming emails have a unique ID number attached to their subject line, all subsequent emails about that problem can be tracked by simply leaving that ID number in the subject when you reply. So, we have a list of open tasks, each with their own ID number, and each of those tasks has a time-ordered list of associated emails. I wanted the text of those emails to be available to the user as they were looking at their list of tasks, so I made each task box a Dijit "Tree" control - top level contains the task description, branches contain email dates, and a single "leaf" off of each of those branches contains the email text. First problem: I wanted the tree view to be fully-collapsed by default. After searching Google quite extensively, I found a number of solutions, all of which seemed to be valid for previous versions of Dojo but not the one I was using. I eventually figured out that the best solution would seem to be to have a event handler called when the Tree control had loaded that simply collapsed each branch/leaf. Unfortunately, even though the Tree control had been instantiated and its "startup" event handler called, the branches and leaves still hadn't loaded (the data was still being loaded via an AJAX call). So, I modified the system so that all email text and Tree structure is added server-side. This means the whole fully-populated Tree control is available when its startup event handler is called. So, the startup event handler fully collapses the tree. Next, I couldn't find a "proper" way to have nice formatted text for the email leaves. I can put the email text in the leaf just fine, but any HTML gets escaped out and shows up in the web page. Cue more rummaging around Dojo's documentation (tends to be out of date, with code and examples for pre-1.0 versions) and Google. I eventually came up with the solution of getting JavaScript to go and read the SPAN element that's inside each leaf node and un-escape the escaped HTML code in it's innerHTML. I figured I'd put code to do this in with the fully-collapse-the-tree code, in the Tree control's startup event handler. However... it turns out that the SPAN element isn't actually created until the user clicks on the expando (the little "+" symbol in a tree view you click to expand a node). Okay, fair enough - I'll add the re-formatting code to the onExpand() event handler, or whatever it's called. Which doesn't seem to exist. I've searched to documentation, I've searched Google... I'm quite possibly mis-understanding Dojo's "publish/subscribe" event handling system, but I think that mainly because there doesn't seem to be any comprehensive documentation for it anywhere (like, where do I find out what events I can subscribe to?). So, in the end, the best solution I can come up with is to add an onClick event handler (not a "Dojo" event, but a plain JavaScript event that Dojo knows nothing about) to the expando node of each Tree branch that re-formats the HTML inside the SPAN element of each leaf. Except... 
when that is called, the SPAN element still doesn't exist (sometimes - other times it's been cached, just to further confuse you). Therefore, I have the event handler set up a timer that periodically calls a function that checks to see if the relevant SPAN element has turned up yet before then re-formatting it.

// An event handler called whenever an "email title" tree node is expanded.
function formatTreeNode(nodeID) {
  if (dijit.byId(nodeID).getChildren().length != 0) {
    clearInterval(nodeUpdateIntervalID);
    messageBody = dijit.byId(nodeID).getChildren()[0].labelNode.innerHTML
    if (messageBody.indexOf("<b>Message text:</b>") == -1) {
      messageBody = messageBody.replace(/&gt;/g, ">");
      messageBody = messageBody.replace(/&lt;/g, "<");
      messageBody = messageBody.replace(/&amp;/g, "&");
      dijit.byId(nodeID).getChildren()[0].labelNode.innerHTML = "<b>Message text:</b><div style=\"font-family:courier\">"+messageBody+"</div>";
    }
  }
}

// An event handler called when a tree node has been set up - we changed the default fully-expanded to fully-collapsed.
function setupTree(theTree) {
  dijit.byId("tree-"+theTree).rootNode.collapse();
  messageNode = dijit.byId("tree-"+theTree).rootNode.getChildren();
  for (pl = 0; pl < messageNode.length; pl++) {
    messageNode[pl].collapse();
    messageNode[pl].expandoNode.onclick = eval("nodeUpdateIntervalID = setInterval(\"formatTreeNode('"+messageNode[pl].id+"')\",200); formatTreeNode('"+messageNode[pl].id+"');");
  }
}

The above has the feel of a truly horrible hack, and I feel sure I must have taken a wrong turn somewhere early on in my thought process. Can someone please tell me:

The correct way to go about putting nicely-formatted text inside a Dojo/Dijit Tree control.
The correct way to handle Dojo events, like where I can figure out what events are available for me to subscribe to.
A better JavaScript library to use (can I do what I want to with JQuery and avoid the all-around-the-houses approach seen above?).

PS: If you're naming a software project, give thought to its name's uniqueness in Google - I'm sure searching for "Dojo" documentation in Google would be easier without all the martial arts results getting in the way.
PPS: Firefox spellchecker knows how to spell "Atwood", correcting me when I put two 'T's instead of one. Is Jeff just that famous now?
[ "I assume that you followed the dijit.Tree and dojo.data in Dojo 1.1 tutorial which directed you to pass the data to the tree control using a data store. That had me banging my head of a brick wall for a while. \nIts not really a great approach and the alternative is not really well documented. You need to create a use model instead. I have included an example below of a tree model that I created for displaying the structure of an LDAP directory.\nYou will find the default implementation of the model in your dojo distribution at ./dijit/_tree/model.js. The comments should help you understand the functions supported by the model.\nThe IDirectoryService class the code below are stubs for server-side Java POJOs generated by Direct Web Remoting (DWR). I highly recommend DWR if you going to be doing a lot of client-server interaction.\n\ndojo.declare(\"LDAPDirectoryTreeModel\", [ dijit.tree.model ], {\n getRoot : function(onItem) {\n IDirectoryService.getRoots( function(roots) {\n onItem(roots[0])\n });\n },\n\n mayHaveChildren : function(item) {\n return true;\n },\n\n getChildren : function(parentItem, onComplete) {\n IDirectoryService.getChildrenImpl(parentItem, onComplete);\n },\n\n getIdentity : function(item) {\n return item.dn;\n },\n\n getLabel : function(item) {\n return item.rdn;\n }\n});\n\nAnd here is an extract from the my JSP page where I created the model and used it to populate the tree control.\n\n<div\n dojoType=\"LDAPDirectoryTreeModel\"\n jsid=\"treeModel\"\n id=\"treeModel\">\n</div>\n<div\n jsid=\"tree\"\n id=\"tree\"\n dojoType=\"dijit.Tree\" model=\"treeModel\"\n labelAttr=\"name\"\n label=\"${directory.host}:${directory.port}\">\n</div>\n\n" ]
[ 3 ]
[]
[]
[ "dojo", "javascript" ]
stackoverflow_0000037141_dojo_javascript.txt
Q: What is the role of the buried-buffer-list frame parameter in Emacs In emacs, I've read the following code snippet in simple.el:

(frame-parameter frame 'buried-buffer-list)

What is the exact meaning of the 'buried-buffer-list parameter? What is it used for?

A: The result of M-x describe-function RET frame-parameter is:

frame-parameter is a built-in function.
(frame-parameter FRAME PARAMETER)
Return FRAME's value for parameter PARAMETER. If FRAME is nil, describe the currently selected frame.

Also, have a look in the Elisp info manual for the node called "Frame/Frame Parameters". There isn't a specific reference to 'buried-buffer-list that I could find. You might be able to get the value of it by evaluating:

(cdr (frame-parameter FRAME 'buffer-list))

since a "buried buffer" is just a buffer that's been pushed to the back of the list of buffers for a particular frame. See the documentation for bury-buffer:

bury-buffer is an interactive compiled Lisp function in `window.el'.
(bury-buffer &optional BUFFER-OR-NAME)
Put BUFFER-OR-NAME at the end of the list of all buffers. There it is the least likely candidate for `other-buffer' to return; thus, the least likely buffer for C-x b to select by default. You can specify a buffer name as BUFFER-OR-NAME, or an actual buffer object. If BUFFER-OR-NAME is nil or omitted, bury the current buffer. Also, if BUFFER-OR-NAME is nil or omitted, remove the current buffer from the selected window if it is displayed there.

A: A quick look at http://www.update.uu.se/~ams/slask/emacs/src/frame.h returns:

List of buffers that were viewed, then buried in this frame. The most recently buried buffer is first.

So in theory you can use cdr to obtain the same list as Ben Collins said.
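A one-liner you can evaluate (e.g. with C-x C-e in *scratch*) to inspect the parameter directly - a sketch; it returns nil until something has actually been buried in the selected frame:

(mapcar #'buffer-name
        (frame-parameter (selected-frame) 'buried-buffer-list))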
What is the role of the buried-buffer-list frame parameter in Emacs
In emacs, I've read the following code snippet in simple.el:

(frame-parameter frame 'buried-buffer-list)

What is the exact meaning of the 'buried-buffer-list parameter? What is it used for?
[ "The result of M-x describe function RET frame-parameter is:\n\nframe-parameter is a built-in\n function.\n(frame-parameter FRAME PARAMETER)\n\nReturn FRAME's value for parameter\n PARAMETER. If FRAME is nil, describe\n the currently selected frame.\n\nAlso, have a look in the Elisp info manual for the node called \"Frame/Frame Parameters\". There isn't a specific reference to 'buried-buffer-list that I could find.\nYou might be able to get the value of it by evaluating:\n(cdr (frame-parameter FRAME 'buffer-list))\n\nsince a \"buried buffer\" is just a buffer that's been pushed to the back of the list of buffers for a particular frame. See the documentation for bury-buffer:\n\nbury-buffer is an interactive compiled Lisp function in `window.el'.\n(bury-buffer &optional BUFFER-OR-NAME)\nPut BUFFER-OR-NAME at the end of the list of all buffers. There it is\n the least likely candidate for `other-buffer' to return; thus, the\n least likely buffer for C-x b to select by default.\nYou can specify a buffer name as BUFFER-OR-NAME, or an actual buffer\n object. If BUFFER-OR-NAME is nil or omitted, bury the current buffer.\n Also, if BUFFER-OR-NAME is nil or omitted, remove the current buffer\n from the selected window if it is displayed there.\n\n", "A quick look at http://www.update.uu.se/~ams/slask/emacs/src/frame.h returns:\nList of buffers that were viewed, then buried in this frame. The\nmost recently buried buffer is first. \n\nSo in theory you can use cdr to obtain the same list as Ben Collins said.\n" ]
[ 1, 1 ]
[]
[]
[ "elisp", "emacs" ]
stackoverflow_0000035102_elisp_emacs.txt
Q: How do I avoid read locks in my database? How do I avoid read locks in my database? Answers for multiple databases welcome!

A: In SQL Server you can use the with(nolock) keyword in your select statements. For example:

Select table1.columna, table2.columna
from table1 with(nolock), table2 with(nolock)

Make sure to specify with(nolock) for each table/view in the query.

A: In Oracle the default mode of operation is the Read committed isolation level, where a select statement is not blocked by another transaction modifying the data it's reading. From Data Concurrency and Consistency: Each query executed by a transaction sees only data that was committed before the query (not the transaction) began. An Oracle query never reads dirty (uncommitted) data.

A: PostgreSQL also uses MVCC (Multi-Version Concurrency Control), so using the default transaction isolation level (read-committed), you should never block, unless somebody is doing maintenance on the DB (dropping / adding columns / tables / indexes / etc).

A: In Firebird writers never block readers and there are no dirty reads. Only read-committed and snapshot isolation levels. It uses a multi-generational engine (like Oracle, I believe) instead of simple page or record locking.
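For SQL Server specifically, an alternative to sprinkling with(nolock) everywhere - with its dirty-read risk - is to turn on row versioning at the database level (SQL Server 2005 and later), so ordinary reads stop taking shared locks. A sketch; note the statement needs no other active connections in the database to complete:

ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;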
How do I avoid read locks in my database?
How do I avoid read locks in my database? Answers for multiple databases welcome!
[ "In SQL Server you can use the with(nolock) keyword in your select statements. For example:\nSelect table1.columna, table2.columna\nfrom table1 with(nolock), table2 with(nolock)\n\nMake sure to specify with(nolock) for each table/view in the query.\n", "In Oracle the default mode of operation is the Read committed isolation level where a select statement is not blocked by another transaction modifying the data it's reading.\nFrom Data Concurrency and Consistency:\n\nEach query executed by a transaction sees only data that was committed before the query (not the transaction) began. An Oracle query never reads dirty (uncommitted) data.\n\n", "PostgreSQL also uses MVCC (Multi-Version Concurrency Control), so using the default transaction isolation level (read-committed), you should never block, unless somebody is doing maintenance on the DB (dropping / adding columns / tables / indexes / etc).\n", "In Firebird writers never block readers and there are no dirty-reads. Only read-committed and snapshot isolation levels.\nIt uses a multi-generational engine (like Oracle, I believe) instead of simple page or record locking.\n" ]
[ 3, 3, 3, 2 ]
[]
[]
[ "database", "locking", "performance", "sql" ]
stackoverflow_0000037095_database_locking_performance_sql.txt
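A hedged T-SQL sketch complementing the with(nolock) answer above: SQL Server can also apply the same dirty-read semantics connection-wide through the isolation level, which saves repeating the hint on every table (the table names come from the example; the join key is hypothetical):

    -- Applies to every subsequent statement on this connection.
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    SELECT table1.columna, table2.columna
    FROM table1
    JOIN table2 ON table1.id = table2.id;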
Q: How do I declare a list of fixed length in specman? In E (specman) I want to declare variables that are lists, and I want to fix their lengths. It's easy to do for a member of a struct: thread[2] : list of thread_t; while for a "regular" variable in a function the above doesn't work, and I have to do something like: var warned : list of bool; gen warned keeping { it.size() == 5; }; Is there a better way to declare a list of fixed size? A: A hard keep like you have is only going to fix the size at initialization but elements could still be added or dropped later, are you trying to guard against this condition? The only way I can think of to guarantee that elements aren't added or dropped later is emitting an event synced on the size != the predetermined amount: event list_size_changed is true (wanted.size() != 5) @clk; The only other thing that I can offer is a bit of syntactic sugar for the hard keep: var warned : list of bool; keep warned.size() == 5;
How do I declare a list of fixed length in specman?
In E (specman) I want to declare variables that are lists, and I want to fix their lengths. It's easy to do for a member of a struct: thread[2] : list of thread_t; while for a "regular" variable in a function the above doesn't work, and I have to do something like: var warned : list of bool; gen warned keeping { it.size() == 5; }; Is there a better way to declare a list of fixed size?
[ "A hard keep like you have is only going to fix the size at initialization but elements could still be added or dropped later, are you trying to guard against this condition? The only way I can think of to guarantee that elements aren't added or dropped later is emitting an event synced on the size != the predetermined amount:\nevent list_size_changed is true (wanted.size() != 5) @clk;\n\nThe only other thing that I can offer is a bit of syntactic sugar for the hard keep:\nvar warned : list of bool;\nkeep warned.size() == 5;\n\n" ]
[ 4 ]
[ "I know nothing of specman, but a fixed sized list is an array, so that might point you somewhere.\n" ]
[ -1 ]
[ "specman" ]
stackoverflow_0000020696_specman.txt
Q: Calculate Video Duration I suck at math. I need to figure out how to calculate a video duration with only a few examples of values. For example, a value of 70966 is displayed as 1:10 minutes. A value of 30533 displays as 30 seconds. A value of 7007 displays as 7 seconds. A: Looks like the numbers are in milliseconds. So to convert to seconds, divide by 1000, then divide by 60 to find minutes etc. A: It's a simple matter of division: 70966 / 70 seconds (1:10 minutes) = 1013.8 30533 / 30 = 1017.76 7007 / 7 = 1001 Looks like the numbers are nothing but milliseconds. 70966 displays as 1:10 minutes because it shaves off the millisecond part (last 3 digits). A: I'm not sure if I completely understand this, but: 70966 / 70 seconds = 1013.8 So dividing the "value" by 1013.8 should get the duration, approximately... Edit: Yes, Ben is right, you should divide by 1000. I got 1013.8 because the 70 seconds was rounded down from 70.966 seconds to 70. A: To expand on what Ben said, it looks like they are milliseconds, and the display value is rounded slightly, possibly to the nearest 100 milliseconds and then 'cropped' to seconds. This would explain why 30533 is 30s and 70966 is 70s.
Calculate Video Duration
I suck at math. I need to figure out how to calculate a video duration with only a few examples of values. For example, a value of 70966 is displayed as 1:10 minutes. A value of 30533 displays as 30 seconds. A value of 7007 displays as 7 seconds.
[ "Looks like the numbers are in milliseconds. So to convert to seconds, divide by 1000, then divide by 60 to find minutes etc.\n", "It's a simple matter of division:\n\n70966 / 70 seconds (1:10 minutes) = 1013.8\n30533 / 30 = 1017.76\n7007 / 7 = 1001\n\nLooks like the numbers are nothing but milliseconds. 70966 displays as 1:10 minutes because it shaves off the millisecond part (last 3 digits).\n", "I'm not sure if I completely understand this, but:\n 70966 / 70 seconds = 1013.8\n\nSo dividing the \"value\" by 1013.8 should get the duration, approximately...\nEdit: Yes, Ben is right, you should divide by 1000. I got 1013.8 because the 70 seconds was rounded down from 70.966 seconds to 70.\n", "To expand on what Ben said, it looks like they are milliseconds, and the display value is rounded slightly, possibly to the nearest 100 milliseconds and then 'cropped' to seconds. This would explain why 30533 is 30s and 70966 is 70s.\n" ]
[ 2, 1, 0, 0 ]
[]
[]
[ "duration", "math", "video" ]
stackoverflow_0000037483_duration_math_video.txt
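A small Python sketch of the conversion the answers converge on, assuming the values are milliseconds and the display simply drops the sub-second part:

    def format_duration(ms):
        total_seconds = ms // 1000            # 70966 -> 70
        minutes, seconds = divmod(total_seconds, 60)
        return "%d:%02d" % (minutes, seconds)

    for value in (70966, 30533, 7007):
        print(value, "->", format_duration(value))  # 1:10, 0:30, 0:07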
Q: Is it possible to convert project wizards created for MSVS 2005 to MSVS 2008 format automatically I have a bunch of wizards created for MSVS 2005. Is it possible to convert them to MSVS 2008 format automatically? A: Looking at these posts: How to use solutions and projects between Visual Studio 2005 and 2008 Running VS2008 & VS 2005 side by side - project file difference You will only need to change a few lines if you do it manually. So I guess this is trivial if you automate it.
Is it possible to convert project wizards created for MSVS 2005 to MSVS 2008 format automatically
I have a bunch of wizards created for MSVS 2005. Is it possible to convert them to MSVS 2008 format automatically?
[ "Looking at these posts:\n\nHow to use solutions and projects between Visual Studio 2005 and 2008 \nRunning VS2008 & VS 2005 side by side - project file difference \n\nYou will only need to change a few lines if you do it manually. So I guess this is trivial if you automate it.\n" ]
[ 1 ]
[]
[]
[ "data_conversion", "templates", "visual_studio", "visual_studio_2005", "visual_studio_2008" ]
stackoverflow_0000037503_data_conversion_templates_visual_studio_visual_studio_2005_visual_studio_2008.txt
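To illustrate the "trivial if you automate it" point, here is a hedged Python sketch that bumps the format version in native .vcproj files; Version="8.00" and Version="9.00" are, as far as I recall, the VS2005 and VS2008 project-format numbers, but verify them against your own files and back everything up before running anything like this:

    import glob

    # Rewrite the Version attribute that distinguishes VS2005 from VS2008 projects.
    for path in glob.glob("**/*.vcproj", recursive=True):
        with open(path) as f:
            text = f.read()
        if 'Version="8.00"' in text:
            with open(path, "w") as f:
                f.write(text.replace('Version="8.00"', 'Version="9.00"', 1))
            print("converted", path)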
Q: Recursion in production-quality VC++ Code When writing production-quality VC++ code, is the use of recursion acceptable? Why or why not? A: Is there a way to determine at what point I would encounter a stack overflow? Not really. A stack overflow happens when you exhaust the stack space - however... The initial stack size can be changed programmatically and may default to different amounts depending on your OS/compiler/etc How much of it is already used up depends on what your app (and the libraries your app uses) has previously done - this is often impossible to predict How much of the stack each call requires depends on what you do in your function. If you only allocate say 1 integer on the stack, you may be able to recurse an enormous amount of times, but if you are allocating a 200k buffer on the stack, not so much. The only times I've ever hit one is in an infinite loop, or using the aforementioned 200k buffer. I find it far more preferable for my app to just crash, than for it to loop forever using 100% CPU and have to be forcefully killed (this is a right PITA on a remote server over a bad connection as Windows lacks SSH) A rough guideline: Do you think your recursive function is likely to call itself more than say 10,000 times consecutively? Or are you doing something dumb like allocating 200k buffers on the stack? If yes, worry about it. If no, carry on with more important things. A: Yes. But never in dead code. That would be silly. A: Sure - e.g. if you want to traverse a tree structure what else would you use ? Maybe you would like to have something like a maximum depth to be sure you're not writing an infinite loop. (if this makes sense in your example) A: Is there a way to determine at what point I would encounter a stack overflow? Depends how deep you go, and how large the actual recursion is. I take it you understand what recursion does? A: Recursion is almost essential to traverse file structures like folders/directories. Traversing a tree-like structure is very easy if recursion is used.
Recursion in production-quality VC++ Code
When writing production-quality VC++ code, is the use of recursion acceptable? Why or why not?
[ "\nIs there a way to determine at what point I would encounter a stack overflow?\n\nNot really. A stack overflow happens when you exhaust the stack space - however...\n\nThe initial stack size can be changed programmatically and may default to different amounts depending on your OS/compiler/etc\nHow much of it is already used up depends on what your app (and the libraries your app uses) has previously done - this is often impossible to predict\nHow much of the stack each call requires depends on what you do in your function. If you only allocate say 1 integer on the stack, you may be able to recurse an enormous amount of times, but if you are allocating a 200k buffer on the stack, not so much.\n\nThe only times I've ever hit one is in an infinite loop, or using the aforementioned 200k buffer. \nI find it far more preferable for my app to just crash, than for it to loop forever using 100% CPU and have to be forcefully killed (this is a right PITA on a remote server over a bad connection as Windows lacks SSH)\nA rough guideline: Do you think your recursive function is likely to call itself more than say 10,000 times consecutively? Or are you doing something dumb like allocating 200k buffers on the stack?\nIf yes, worry about it.\nIf no, carry on with more important things.\n", "Yes. But never in dead code. That would be silly.\n", "Sure - e.g. if you want to traverse a tree structure what else would you use ? \nMaybe you would like to have something like a maximum depth to be sure you're not writing an infinite loop. (if this makes sense in your example)\n", "\nIs there a way to determine at what\n point I would encounter a stack\n overflow?\n\nDepends how deep you go, and how large the actual recursion is. I take it you understand what recursion does?\n", "Recursion is almost essential to traverse file structures like folders/directories.\nTraversing a tree-like structure is very easy if recursion is used.\n" ]
[ 6, 2, 0, 0, 0 ]
[]
[]
[ "recursion", "visual_c++" ]
stackoverflow_0000037516_recursion_visual_c++.txt
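A C++ sketch of the "maximum depth" guard suggested above; the Node type and the 10,000 bound are illustrative, and the right limit depends on your per-frame stack usage:

    #include <stdexcept>

    struct Node { Node* left; Node* right; };  // hypothetical tree type

    void visit(const Node* n, int depth = 0) {
        const int kMaxDepth = 10000;  // tune to your stack budget
        if (depth > kMaxDepth)
            throw std::runtime_error("recursion too deep; aborting traversal");
        if (!n) return;
        visit(n->left, depth + 1);
        visit(n->right, depth + 1);
    }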
Q: Why does TreeNodeCollection not implement IEnumerable<TreeNode>? TreeNodeCollection, like some of the other control collections in System.Windows.Forms, implements IEnumerable. Is there any design reason behind this or is it just a hangover from the days before generics? A: Yes, there are many .NET Framework collections that do not implement the generic IEnumerable. I think that's because after 2.0 there was no (at least not so much) development of the core part of FW. Meanwhile I suggest you make use of the following workaround: using System.Linq; ... var nodes = GetTreeNodeCollection().OfType<TreeNode>(); A: Yes, Windows Forms dates back to before generics in .NET
Why does TreeNodeCollection not implement IEnumerable<TreeNode>?
TreeNodeCollection, like some of the other control collections in System.Windows.Forms, implements IEnumerable. Is there any design reason behind this or is it just a hangover from the days before generics?
[ "Yes, there are many .NET Framework collections that do not implement the generic IEnumerable. \nI think that's because after 2.0 there was no (at least not so much) development of the core part of FW.\nMeanwhile I suggest you make use of the following workaround:\nusing System.Linq; \n... \nvar nodes = GetTreeNodeCollection().OfType<TreeNode>();\n\n", "Yes, Windows Forms dates back to before generics in .NET\n" ]
[ 7, 0 ]
[]
[]
[ ".net", "winforms" ]
stackoverflow_0000037597_.net_winforms.txt
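A short C# usage sketch for the OfType<TreeNode>() workaround above — once the non-generic TreeNodeCollection is projected to IEnumerable<TreeNode>, the rest of LINQ becomes available (treeView1 is a hypothetical control):

    using System.Linq;
    using System.Windows.Forms;

    // Collect the text of every expanded top-level node.
    var expandedNames = treeView1.Nodes
        .OfType<TreeNode>()
        .Where(n => n.IsExpanded)
        .Select(n => n.Text)
        .ToList();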
Q: How can I simply inherit methods from an existing instance? Below I have a very simple example of what I'm trying to do. I want to be able to use HTMLDecorator with any other class. Ignore the fact it's called decorator, it's just a name. import cgi class ClassX(object): pass # ... with own __repr__ class ClassY(object): pass # ... with own __repr__ inst_x=ClassX() inst_y=ClassY() inst_z=[ i*i for i in range(25) ] inst_b=True class HTMLDecorator(object): def html(self): # an "enhanced" version of __repr__ return cgi.escape(self.__repr__()).join(("<H1>","</H1>")) print HTMLDecorator(inst_x).html() print HTMLDecorator(inst_y).html() wrapped_z = HTMLDecorator(inst_z) inst_z[0] += 70 wrapped_z[0] += 71 print wrapped_z.html() print HTMLDecorator(inst_b).html() Output: Traceback (most recent call last): File "html.py", line 21, in print HTMLDecorator(inst_x).html() TypeError: default __new__ takes no parameters Is what I'm trying to do possible? If so, what am I doing wrong? A: Very close, but then I lose everything from ClassX. Below is something a colleague gave me that does do the trick, but it's hideous. There has to be a better way. Looks like you're trying to set up some sort of proxy object scheme. That's doable, and there are better solutions than your colleague's, but first consider whether it would be easier to just patch in some extra methods. This won't work for built-in classes like bool, but it will for your user-defined classes: def HTMLDecorator (obj): def html (): sep = cgi.escape (repr (obj)) return sep.join (("<H1>", "</H1>")) obj.html = html return obj And here is the proxy version: class HTMLDecorator(object): def __init__ (self, wrapped): self.__wrapped = wrapped def html (self): sep = cgi.escape (repr (self.__wrapped)) return sep.join (("<H1>", "</H1>")) def __getattr__ (self, name): return getattr (self.__wrapped, name) def __setattr__ (self, name, value): if not name.startswith ('_HTMLDecorator__'): setattr (self.__wrapped, name, value) return super (HTMLDecorator, self).__setattr__ (name, value) def __delattr__ (self, name): delattr (self.__wrapped, name) A: Both of John's solutions would work. Another option that allows HTMLDecorator to remain very simple and clean is to monkey-patch it in as a base class. This also works only for user-defined classes, not builtin types: import cgi class ClassX(object): pass # ... with own __repr__ class ClassY(object): pass # ... with own __repr__ inst_x=ClassX() inst_y=ClassY() class HTMLDecorator: def html(self): # an "enhanced" version of __repr__ return cgi.escape(self.__repr__()).join(("<H1>","</H1>")) ClassX.__bases__ += (HTMLDecorator,) ClassY.__bases__ += (HTMLDecorator,) print inst_x.html() print inst_y.html() Be warned, though -- monkey-patching like this comes with a high price in readability and maintainability of your code. When you go back to this code a year later, it can become very difficult to figure out how your ClassX got that html() method, especially if ClassX is defined in some other library. A: Is what I'm trying to do possible? If so, what am I doing wrong? It's certainly possible. What's wrong is that HTMLDecorator.__init__() doesn't accept parameters. Here's a simple example: def decorator (func): def new_func (): return "new_func %s" % func () return new_func @decorator def a (): return "a" def b (): return "b" print a() # new_func a print decorator (b)() # new_func b A: @John (37448): Sorry, I might have misled you with the name (bad choice). I'm not really looking for a decorator function, or anything to do with decorators at all. What I'm after is for the html(self) def to use ClassX or ClassY's __repr__. I want this to work without modifying ClassX or ClassY. A: Ah, in that case, perhaps code like this will be useful? It doesn't really have anything to do with decorators, but demonstrates how to pass arguments to a class's initialization function and to retrieve those arguments for later. import cgi class ClassX(object): def __repr__ (self): return "<class X>" class HTMLDecorator(object): def __init__ (self, wrapped): self.__wrapped = wrapped def html (self): sep = cgi.escape (repr (self.__wrapped)) return sep.join (("<H1>", "</H1>")) inst_x=ClassX() inst_b=True print HTMLDecorator(inst_x).html() print HTMLDecorator(inst_b).html() A: @John (37479): Very close, but then I lose everything from ClassX. Below is something a colleague gave me that does do the trick, but it's hideous. There has to be a better way. import cgi from math import sqrt class ClassX(object): def __repr__(self): return "Best Guess" class ClassY(object): pass # ... with own __repr__ inst_x=ClassX() inst_y=ClassY() inst_z=[ i*i for i in range(25) ] inst_b=True avoid="__class__ __init__ __dict__ __weakref__" class HTMLDecorator(object): def __init__(self,master): self.master = master for attr in dir(self.master): if ( not attr.startswith("__") or attr not in avoid.split() and "attr" not in attr): self.__setattr__(attr, self.master.__getattribute__(attr)) def html(self): # an "enhanced" version of __repr__ return cgi.escape(self.__repr__()).join(("<H1>","</H1>")) def length(self): return sqrt(sum(self.__iter__())) print HTMLDecorator(inst_x).html() print HTMLDecorator(inst_y).html() wrapped_z = HTMLDecorator(inst_z) print wrapped_z.length() inst_z[0] += 70 #wrapped_z[0] += 71 wrapped_z.__setitem__(0,wrapped_z.__getitem__(0)+ 71) print wrapped_z.html() print HTMLDecorator(inst_b).html() Output: <H1>Best Guess</H1> <H1><__main__.ClassY object at 0x891df0c></H1> 70.0 <H1>[141, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256, 289, 324, 361, 400, 441, 484, 529, 576]</H1> <H1>True</H1>
How can I simply inherit methods from an existing instance?
Below I have a very simple example of what I'm trying to do. I want to be able to use HTMLDecorator with any other class. Ignore the fact it's called decorator, it's just a name. import cgi class ClassX(object): pass # ... with own __repr__ class ClassY(object): pass # ... with own __repr__ inst_x=ClassX() inst_y=ClassY() inst_z=[ i*i for i in range(25) ] inst_b=True class HTMLDecorator(object): def html(self): # an "enhanced" version of __repr__ return cgi.escape(self.__repr__()).join(("<H1>","</H1>")) print HTMLDecorator(inst_x).html() print HTMLDecorator(inst_y).html() wrapped_z = HTMLDecorator(inst_z) inst_z[0] += 70 wrapped_z[0] += 71 print wrapped_z.html() print HTMLDecorator(inst_b).html() Output: Traceback (most recent call last): File "html.py", line 21, in print HTMLDecorator(inst_x).html() TypeError: default __new__ takes no parameters Is what I'm trying to do possible? If so, what am I doing wrong?
[ "\nVery close, but then I lose everything from ClassX. Below is something a colleague gave me that does do the trick, but it's hideous. There has to be a better way.\n\nLooks like you're trying to set up some sort of proxy object scheme. That's doable, and there are better solutions than your colleague's, but first consider whether it would be easier to just patch in some extra methods. This won't work for built-in classes like bool, but it will for your user-defined classes:\ndef HTMLDecorator (obj):\n def html ():\n sep = cgi.escape (repr (obj))\n return sep.join ((\"<H1>\", \"</H1>\"))\n obj.html = html\n return obj\n\nAnd here is the proxy version:\nclass HTMLDecorator(object):\n def __init__ (self, wrapped):\n self.__wrapped = wrapped\n\n def html (self):\n sep = cgi.escape (repr (self.__wrapped))\n return sep.join ((\"<H1>\", \"</H1>\"))\n\n def __getattr__ (self, name):\n return getattr (self.__wrapped, name)\n\n def __setattr__ (self, name, value):\n if not name.startswith ('_HTMLDecorator__'):\n setattr (self.__wrapped, name, value)\n return\n super (HTMLDecorator, self).__setattr__ (name, value)\n\n def __delattr__ (self, name):\n delattr (self.__wrapped, name)\n\n", "Both of John's solutions would work. Another option that allows HTMLDecorator to remain very simple and clean is to monkey-patch it in as a base class. This also works only for user-defined classes, not builtin types:\nimport cgi\n\nclass ClassX(object):\n pass # ... with own __repr__\n\nclass ClassY(object):\n pass # ... with own __repr__\n\ninst_x=ClassX()\ninst_y=ClassY()\n\nclass HTMLDecorator:\n def html(self): # an \"enhanced\" version of __repr__\n return cgi.escape(self.__repr__()).join((\"<H1>\",\"</H1>\"))\n\nClassX.__bases__ += (HTMLDecorator,)\nClassY.__bases__ += (HTMLDecorator,)\n\nprint inst_x.html()\nprint inst_y.html()\n\nBe warned, though -- monkey-patching like this comes with a high price in readability and maintainability of your code. When you go back to this code a year later, it can become very difficult to figure out how your ClassX got that html() method, especially if ClassX is defined in some other library.\n", "\nIs what I'm trying to do possible? If so, what am I doing wrong?\n\nIt's certainly possible. What's wrong is that HTMLDecorator.__init__() doesn't accept parameters.\nHere's a simple example:\ndef decorator (func):\n def new_func ():\n return \"new_func %s\" % func ()\n return new_func\n\n@decorator\ndef a ():\n return \"a\"\n\ndef b ():\n return \"b\"\n\nprint a() # new_func a\nprint decorator (b)() # new_func b\n\n", "@John (37448):\nSorry, I might have misled you with the name (bad choice). I'm not really looking for a decorator function, or anything to do with decorators at all. What I'm after is for the html(self) def to use ClassX or ClassY's __repr__. I want this to work without modifying ClassX or ClassY.\n", "Ah, in that case, perhaps code like this will be useful? It doesn't really have anything to do with decorators, but demonstrates how to pass arguments to a class's initialization function and to retrieve those arguments for later.\nimport cgi\n\nclass ClassX(object):\n def __repr__ (self):\n return \"<class X>\"\n\nclass HTMLDecorator(object):\n def __init__ (self, wrapped):\n self.__wrapped = wrapped\n\n def html (self):\n sep = cgi.escape (repr (self.__wrapped))\n return sep.join ((\"<H1>\", \"</H1>\"))\n\ninst_x=ClassX()\ninst_b=True\n\nprint HTMLDecorator(inst_x).html()\nprint HTMLDecorator(inst_b).html()\n\n", "@John (37479):\nVery close, but then I lose everything from ClassX. Below is something a colleague gave me that does do the trick, but it's hideous. There has to be a better way.\nimport cgi\nfrom math import sqrt\n\nclass ClassX(object): \n def __repr__(self): \n return \"Best Guess\"\n\nclass ClassY(object):\n pass # ... with own __repr__\n\ninst_x=ClassX()\n\ninst_y=ClassY()\n\ninst_z=[ i*i for i in range(25) ]\n\ninst_b=True\n\navoid=\"__class__ __init__ __dict__ __weakref__\"\n\nclass HTMLDecorator(object):\n def __init__(self,master):\n self.master = master\n for attr in dir(self.master):\n if ( not attr.startswith(\"__\") or \n attr not in avoid.split() and \"attr\" not in attr):\n self.__setattr__(attr, self.master.__getattribute__(attr))\n\n def html(self): # an \"enhanced\" version of __repr__\n return cgi.escape(self.__repr__()).join((\"<H1>\",\"</H1>\"))\n\n def length(self):\n return sqrt(sum(self.__iter__()))\n\nprint HTMLDecorator(inst_x).html()\nprint HTMLDecorator(inst_y).html()\nwrapped_z = HTMLDecorator(inst_z)\nprint wrapped_z.length()\ninst_z[0] += 70\n#wrapped_z[0] += 71\nwrapped_z.__setitem__(0,wrapped_z.__getitem__(0)+ 71)\nprint wrapped_z.html()\nprint HTMLDecorator(inst_b).html()\n\nOutput:\n<H1>Best Guess</H1>\n<H1><__main__.ClassY object at 0x891df0c></H1>\n70.0\n<H1>[141, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256, 289, 324, 361, 400, 441, 484, 529, 576]</H1>\n<H1>True</H1>\n" ]
[ 2, 2, 0, 0, 0, 0 ]
[]
[]
[ "inheritance", "object", "oop", "python" ]
stackoverflow_0000037479_inheritance_object_oop_python.txt
Q: What steps can I give a Windows user to make a given file writeable Imagine we have a program trying to write to a particular file, but failing. On the Windows platform, what are the possible things which might be causing the file to be un-writable, and what steps could be suggested to an end user/administrator to fix it. Please include steps which might require administrator permissions (obviously users may not be administrators, but for this question, let's assume they are (or can become) administrators. Also, I'm not really familiar with how permissions are calculated in Windows. - Does the user need write access to each directory up the tree, or anything similar to that? A: Some suggestions: No write permission (get permission through Security tab on file Properties window; you must be the file owner or an Administrator) File is locked (close any program that may have the file open, then reboot if that doesn't help) File has the read-only DOS attribute set (unset it from file Properties window, or with attrib -r; you must be the file owner or an Administrator) Edit 1: Only the second item (file is locked) has a possible solution that all users are likely to be able to do without help. For the first and third, you'll probably want to provide guidance (and hope the file wasn't made read-only intentionally!). Edit 2: Technically, the user does need write and execute (chdir) permissions on all directories up to the root. Windows may skip some of the recursive checks up the tree as a performance optimization, but you should not rely on this because admins can force on these so-called "traverse checks" for certain users. Edit 3: @RobM: Yes, you should check that there is no obvious reason that the user should not have the permissions she needs but does not have. I alluded to this in a less direct way in my first edit. However, in some cases users should have write permission to a file but do not because of filesystem corruption, a misbehaving program, or a mistake on their own part. A: If you are having trouble working out if the file is locked, try using Unlocker - it's a really useful free utility that shows you the process that has locked the file and lets you force an unlock if you need to. A: On Vista could it also be that it's "marked" as unsafe because it's been downloaded from the internet and you have to click the unblock button on its Explorer properties dialog? A: Let's change this around a bit. If your program is trying to write to a file and failing you either need to change the location of the file to one where the user can write to, or check the correct rights when the program starts and refuse to run if the user doesn't have them. Trampling over the system permissions is not the answer.
What steps can I give a Windows user to make a given file writeable
Imagine we have a program trying to write to a particular file, but failing. On the Windows platform, what are the possible things which might be causing the file to be un-writable, and what steps could be suggested to an end user/administrator to fix it. Please include steps which might require administrator permissions (obviously users may not be administrators, but for this question, let's assume they are (or can become) administrators. Also, I'm not really familiar with how permissions are calculated in Windows. - Does the user need write access to each directory up the tree, or anything similar to that?
[ "Some suggestions:\n\nNo write permission (get permission through Security tab on file Properties window; you must be the file owner or an Administrator)\nFile is locked (close any program that may have the file open, then reboot if that doesn't help)\nFile has the read-only DOS attribute set (unset it from file Properties window, or with attrib -r; you must be the file owner or an Administrator)\n\nEdit 1: Only the second item (file is locked) has a possible solution that all users are likely to be able to do without help. For the first and third, you'll probably want to provide guidance (and hope the file wasn't made read-only intentionally!).\nEdit 2: Technically, the user does need write and execute (chdir) permissions on all directories up to the root. Windows may skip some of the recursive checks up the tree as a performance optimization, but you should not rely on this because admins can force on these so-called \"traverse checks\" for certain users.\nEdit 3: @RobM: Yes, you should check that there is no obvious reason that the user should not have the permissions she needs but does not have. I alluded to this in a less direct way in my first edit. However, in some cases users should have write permission to a file but do not because of filesystem corruption, a misbehaving program, or a mistake on their own part.\n", "If you are having trouble working out if the file is locked, try using Unlocker - it's a really useful free utility that shows you the process that has locked the file and lets you force an unlock if you need to.\n", "On Vista could it also be that it's \"marked\" as unsafe because it's been downloaded from the internet and you have to click the unblock button on its Explorer properties dialog?\n", "Let's change this around a bit. If your program is trying to write to a file and failing you either need to change the location of the file to one where the user can write to, or check the correct rights when the program starts and refuse to run if the user doesn't have them. Trampling over the system permissions is not the answer.\n" ]
[ 3, 1, 0, 0 ]
[]
[]
[ "filesystems", "windows" ]
stackoverflow_0000037525_filesystems_windows.txt
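A C# sketch of the last answer's suggestion to check rights when the program starts; attempting the open and catching the failure covers ACLs, the read-only attribute and locks in one test (this is illustrative, not a full permissions audit):

    using System;
    using System.IO;

    static bool CanWrite(string path)
    {
        try
        {
            using (File.Open(path, FileMode.Open, FileAccess.Write, FileShare.None))
                return true;
        }
        catch (UnauthorizedAccessException) { return false; } // ACL or read-only attribute
        catch (IOException) { return false; }                 // locked by another process
    }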
Q: Displaying XML data in a Winforms control I would like to display details of an XML error log to a user in a winforms application and am looking for the best control to do the job. The error data contains all of the server variables at the time that the error occurred. These have been formatted into an XML document that looks something to the effect of: <error> <serverVariables> <item> <value> </item> </serverVariables> <queryString> <item name=""> <value string=""> </item> </queryString> </error> I would like to read this data from the string that it is stored in and display it to the user via a windows form in a useful way. XML Notepad does a cool job of formatting XML, but is not really what I am looking for since I would rather display item details in a Name : string format. Any suggestions or am I looking at a custom implementation? [EDIT] A section of the data that needs to be displayed: <?xml version="1.0" encoding="utf-8"?> <error host="WIN12" type="System.Web.HttpException" message="The file '' does not exist." source="System.Web" detail="System.Web.HttpException: The file '' does not exist. at System.Web.UI.Util.CheckVirtualFileExists(VirtualPath virtualPath) at" time="2008-09-01T07:13:08.9171250+02:00" statusCode="404"> <serverVariables> <item name="ALL_HTTP"> <value string="HTTP_CONNECTION:close HTTP_USER_AGENT:Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1) " /> </item> <item name="AUTH_TYPE"> <value string="" /> </item> <item name="HTTPS"> <value string="off" /> </item> <item name="HTTPS_KEYSIZE"> <value string="" /> </item> <item name="HTTP_USER_AGENT"> <value string="Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)" /> </item> </serverVariables> <queryString> <item name="tid"> <value string="196" /> </item> </queryString> </error> A: You can transform your XML data using XSLT Another option is to use XLinq. If you want a concrete code example, provide us with sample data EDIT: here is a sample XSLT transform for your XML file: <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output method="text"/> <xsl:template match="//error/serverVariables"> <xsl:text>Server variables: </xsl:text> <xsl:for-each select="item"> <xsl:value-of select="@name"/>:<xsl:value-of select="value/@string"/> <xsl:text> </xsl:text> </xsl:for-each> </xsl:template> <xsl:template match="//error/queryString"> <xsl:text>Query string items: </xsl:text> <xsl:for-each select="item"> <xsl:value-of select="@name"/>:<xsl:value-of select="value/@string"/> <xsl:text> </xsl:text> </xsl:for-each> </xsl:template> </xsl:stylesheet> You can apply this transform using XslCompiledTransform class. It should give output like this: Server variables: ALL_HTTP:HTTP_CONNECTION:close HTTP_USER_AGENT:Mozilla/4.0 (compatible MSIE 6.0; Windows NT 5.1; SV1) AUTH_TYPE: HTTPS:off HTTPS_KEYSIZE: HTTP_USER_AGENT:Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1;S ) Query string items: tid:196 A: You could try using the DataGridView control. To see an example, load an XML file in DevStudio and then right-click on the XML and select "View Data Grid". You'll need to read the API documentation on the control to use it. A: You could use a treeview control and use a recursive XLinq algorithm to put the data in there. I've done that myself with an interface allowing a user to build up a custom XML representation and it worked really well. A: See XML data binding. Use Visual Studio or xsd.exe to generate DataSet or classes from XSD, then use System.Xml.Serialization.XmlSerializer if needed to turn your XML into objects/DataSet. Massage the objects. Display them in grid.
Displaying XML data in a Winforms control
I would like to display details of an XML error log to a user in a winforms application and am looking for the best control to do the job. The error data contains all of the server variables at the time that the error occurred. These have been formatted into an XML document that looks something to the effect of: <error> <serverVariables> <item> <value> </item> </serverVariables> <queryString> <item name=""> <value string=""> </item> </queryString> </error> I would like to read this data from the string that it is stored in and display it to the user via a windows form in a useful way. XML Notepad does a cool job of formatting XML, but is not really what I am looking for since I would rather display item details in a Name : string format. Any suggestions or am I looking at a custom implementation? [EDIT] A section of the data that needs to be displayed: <?xml version="1.0" encoding="utf-8"?> <error host="WIN12" type="System.Web.HttpException" message="The file '' does not exist." source="System.Web" detail="System.Web.HttpException: The file '' does not exist. at System.Web.UI.Util.CheckVirtualFileExists(VirtualPath virtualPath) at" time="2008-09-01T07:13:08.9171250+02:00" statusCode="404"> <serverVariables> <item name="ALL_HTTP"> <value string="HTTP_CONNECTION:close HTTP_USER_AGENT:Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1) " /> </item> <item name="AUTH_TYPE"> <value string="" /> </item> <item name="HTTPS"> <value string="off" /> </item> <item name="HTTPS_KEYSIZE"> <value string="" /> </item> <item name="HTTP_USER_AGENT"> <value string="Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)" /> </item> </serverVariables> <queryString> <item name="tid"> <value string="196" /> </item> </queryString> </error>
[ "You can transform your XML data using XSLT\nAnother option is to use XLinq.\nIf you want a concrete code example, provide us with sample data\nEDIT:\nhere is a sample XSLT transform for your XML file: \n<xsl:stylesheet version=\"1.0\" xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">\n <xsl:output method=\"text\"/>\n <xsl:template match=\"//error/serverVariables\">\n <xsl:text>Server variables:\n </xsl:text>\n <xsl:for-each select=\"item\">\n <xsl:value-of select=\"@name\"/>:<xsl:value-of select=\"value/@string\"/>\n <xsl:text>\n </xsl:text>\n </xsl:for-each>\n </xsl:template>\n <xsl:template match=\"//error/queryString\">\n <xsl:text>Query string items:\n </xsl:text>\n <xsl:for-each select=\"item\">\n <xsl:value-of select=\"@name\"/>:<xsl:value-of select=\"value/@string\"/>\n <xsl:text>\n </xsl:text>\n </xsl:for-each>\n </xsl:template>\n</xsl:stylesheet>\n\nYou can apply this transform using XslCompiledTransform class.\nIt should give output like this:\n\nServer variables:\n ALL_HTTP:HTTP_CONNECTION:close HTTP_USER_AGENT:Mozilla/4.0 (compatible MSIE 6.0; Windows NT 5.1; SV1)\n AUTH_TYPE:\n HTTPS:off\n HTTPS_KEYSIZE:\n HTTP_USER_AGENT:Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1;S ) \nQuery string items:\n tid:196 \n\n", "You could try using the DataGridView control. To see an example, load an XML file in DevStudio and then right-click on the XML and select \"View Data Grid\". You'll need to read the API documentation on the control to use it.\n", "You could use a treeview control and use a recursive XLinq algorithm to put the data in there. I've done that myself with an interface allowing a user to build up a custom XML representation and it worked really well.\n", "See XML data binding.\nUse Visual Studio or xsd.exe to generate DataSet or classes from XSD, then use System.Xml.Serialization.XmlSerializer if needed to turn your XML into objects/DataSet. Massage the objects. Display them in grid.\n" ]
[]
[]
[ "c#", "formatting", "winforms", "xml" ]
stackoverflow_0000037591_c#_formatting_winforms_xml.txt
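Following the first answer's pointer, a minimal C# sketch of applying such a stylesheet with the XslCompiledTransform class (the file names are placeholders):

    using System.Xml.Xsl;

    var xslt = new XslCompiledTransform();
    xslt.Load("error-report.xslt");                  // the stylesheet shown above
    xslt.Transform("error.xml", "error-report.txt"); // Name : value listing for display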
Q: data 'security' with java and hibernate The system I am currently working on requires some role-based security, which is well catered for in the Java EE stack. The system intends to be a framework for business domain experts to write their code on top of. However, there is also a requirement for data security. That is, what information is visible to an end user. This effectively means reducing visibility to rows (and perhaps even columns) in the database. We are using Hibernate for our persistence. However, we are using our own annotations so as not to expose our persistence choice to the business domain experts. For row based security this means we could add an annotation such as @Secured at the entity level, which would cause an extra column to be added to the underlying table to constrain our selects? For column based security, we could perhaps have @Secured to either assist in query generation, or perhaps use an aspect to filter the information returned? I'm curious to know how this might affect hibernate's caching mechanisms as well? I'm sure a lot of others will have had the same issue, and I was wondering how you approached this? Much appreciated... A: Hibernate has a filter mechanism that may work for you. The filters will rewrite the queries hibernate generates to include an additional clause to limit the rows returned. I'm not aware of anything in hibernate to mask/hide columns. Your database may also have support for this functionality. Oracle, for example, has the Virtual Private Database (VPD) which will rewrite your queries at the database level. This solution has the added benefit that any external program (e.g. reporting tools) that goes against your db will have your security restrictions enforced. VPD also has support to mask restricted columns with NULLs. Unfortunately, the above solutions have not been adequate to support the security requirements for the types of projects I typically work on. There is usually some sort of context that cannot be easily expressed in the above solutions. For example, users can view data that they have created, or that have been marked as public, or belong to a project which they manage. We typically create query/finder/DAO objects where we pass in the values required to enforce the security and then create the query accordingly. I hope this helps A: When using Hibernate filters you need to be aware that the additional restrictions will not be applied to SQL statements generated by the load() or get() methods.
data 'security' with java and hibernate
The system I am currently working on requires some role-based security, which is well catered for in the Java EE stack. The system intends to be a framework for business domain experts to write their code on top of. However, there is also a requirement for data security. That is, what information is visible to an end user. This effectively means reducing visibility to rows (and perhaps even columns) in the database. We are using Hibernate for our persistence. However, we are using our own annotations so as not to expose our persistence choice to the business domain experts. For row based security this means we could add an annotation such as @Secured at the entity level, which would cause an extra column to be added to the underlying table to constrain our selects? For column based security, we could perhaps have @Secured to either assist in query generation, or perhaps use an aspect to filter the information returned? I'm curious to know how this might affect hibernate's caching mechanisms as well? I'm sure a lot of others will have had the same issue, and I was wondering how you approached this? Much appreciated...
[ "Hibernate has a filter mechanism that may work for you. The filters will rewrite the queries hibernate generates to include an additional clause to limit the rows returned. I'm not aware of anything in hibernate to mask/hide columns.\nYour database may also have support for this functionality. Oracle, for example, has the Virtual Private Database (VPD) which will rewrite your queries at the database level. This solution has the added benefit that any external program (e.g. reporting tools) that goes against your db will have your security restrictions enforced. VPD also has support to mask restricted columns with NULLs.\nUnfortunately, the above solutions have not been adequate to support the security requirements for the types of projects I typically work on. There is usually some sort of context that cannot be easily expressed in the above solutions. For example, users can view data that they have created, or that have been marked as public, or belong to a project which they manage.\nWe typically create query/finder/DAO objects where we pass in the values required to enforce the security and then create the query accordingly.\nI hope this helps\n", "When using Hibernate filters you need to be aware that the additional restrictions will not be applied to SQL statements generated by the load() or get() methods.\n" ]
[ 6, 1 ]
[]
[]
[ "hibernate", "jakarta_ee", "java", "security" ]
stackoverflow_0000034638_hibernate_jakarta_ee_java_security.txt
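A hedged Java sketch of the Hibernate filter mechanism the first answer mentions, using the org.hibernate.annotations API; the Document entity, the owner_id column and the parameter are illustrative, and remember the second answer's caveat that load()/get() bypass filters:

    @Entity
    @FilterDef(name = "byOwner",
               parameters = @ParamDef(name = "ownerId", type = "long"))
    @Filter(name = "byOwner", condition = "owner_id = :ownerId")
    public class Document { /* ... */ }

    // Switched on per session; the condition is appended to generated queries.
    session.enableFilter("byOwner").setParameter("ownerId", currentUserId);
    List documents = session.createQuery("from Document").list();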
Q: How do you prevent the IIS default site web.config file being inherited by virtual directories? I have the following code in a web.config file of the default IIS site. <httpModules> <add type="MDL.BexWebControls.Charts.ChartStreamHandler,Charts" name="ChartStreamHandler"/> </httpModules> Then when I set up and browse to a virtual directory I get this error Could not load file or assembly 'Charts' or one of its dependencies. The system cannot find the file specified. The virtual directory is inheriting the modules from the default web.config. How do you stop this inheritance? A: I've found the answer. Wrap the HttpModule section in location tags and set the inheritInChildApplications attribute to false. <location path="." inheritInChildApplications="false"> <system.web> <httpModules> <add type="MDL.BexWebControls.Charts.ChartStreamHandler,Charts" name="ChartStreamHandler"/> </httpModules> </system.web> </location> Now any virtual directories will not inherit the settings in this location section. @GateKiller This isn't another website, it's a virtual directory so inheritance does occur. @petrich I've had hit and miss results using <remove />. I have to remember to add it to every virtual directory which is a pain. A: Add the following to the virtual directory's web.config file: <httpModules> <remove name="ChartStreamHandler"/> </httpModules>
How do you prevent the IIS default site web.config file being inherited by virtual directories?
I have the following code in a web.config file of the default IIS site. <httpModules> <add type="MDL.BexWebControls.Charts.ChartStreamHandler,Charts" name="ChartStreamHandler"/> </httpModules> Then when I set up and browse to a virtual directory I get this error Could not load file or assembly 'Charts' or one of its dependencies. The system cannot find the file specified. The virtual directory is inheriting the modules from the default web.config. How do you stop this inheritance?
[ "I've found the answer. Wrap the HttpModule section in location tags and set the inheritInChildApplications attribute to false.\n<location path=\".\" inheritInChildApplications=\"false\">\n <system.web>\n <httpModules>\n <add type=\"MDL.BexWebControls.Charts.ChartStreamHandler,Charts\" name=\"ChartStreamHandler\"/>\n </httpModules>\n </system.web>\n</location>\n\nNow any virtual directories will not inherit the settings in this location section.\n@GateKiller This isn't another website, it's a virtual directory so inheritance does occur.\n@petrich I've had hit and miss results using <remove />. I have to remember to add it to every virtual directory which is a pain.\n", "Add the following to the virtual directory's web.config file:\n<httpModules>\n <remove name=\"ChartStreamHandler\"/>\n</httpModules>\n\n" ]
[ 20, 2 ]
[ "According to Microsoft, other websites do not inherit settings from the Default Website. Do you mean you are editing the default web.config which is located in the same folder as the machine.config?\n" ]
[ -2 ]
[ ".net", "asp.net", "configuration", "configuration_files" ]
stackoverflow_0000037759_.net_asp.net_configuration_configuration_files.txt
Q: Enforcing web coding standards The HTML standard defines a clear separation of concerns between CSS (presentation) and HTML (semantics or structure). Does anyone use a coding standards document for CSS and XHTML that has clauses which help to maintain this separation? What would be good clauses to include in such a coding standards document? A: We don't have a physical document we all adhere to where I work. There are a number of guidelines we try and keep in mind but there isn't really enough information to require a physical document. This article sums these guidelines up pretty well. You may also consider formatting your CSS to make it easier to read. Smashing Magazine has a great article on this subject. A: The article referred to by @Lee Theobald is a good start. Some basic ideas I try to keep in mind when marking up: Regarding html: Try to write for the next person - that is, think about how easy or difficult it might be for someone else to come and pick up your work and carry on. To support this principle, you should try and make sure your markup is as legible as possible - class and id tags in particular should relate as much as possible to their intended content. In other words, try to use your tags to describe the kind of content they will have. For example, "Sub-navigation", "content" etc. The aim is to provide markup that someone can pick up having not looked at before and get a sense of the logical structure of the document. Also, try to avoid the addition of markup that is purely to achieve a visual effect. But bear in mind that any website that requires even slightly sophisticated styling is unlikely to be able to avoid non-semantic markup, due to weaknesses in current implementations of CSS and browser-compatibility issues. Regarding CSS files: Many people divide their css up into sections using comments, separating them into functional or structural areas. So you might have a section for your header, your footer, or typography and so on. Others take this further and split css across files, having one for typography, one for layout etc. However, this can, according to YSlow!, have a negative impact on page loading, due to increased http requests. I could write more, but as you can see I struggle to be concise. I hope this is of some use to you.
Enforcing web coding standards
The HTML standard defines a clear separation of concerns between CSS (presentation) and HTML (semantics or structure). Does anyone use a coding standards document for CSS and XHTML that has clauses which help to maintain this separation? What would be good clauses to include in such a coding standards document?
[ "We don't have a physical document we all adhere to where I work. There are a number of guidelines we try and keep in mind but there isn't really enough information to require a physical document. This article sums these guidelines up pretty well. You may also consider formatting your CSS to make it easier to read. Smashing Magazine has a great article on this subject.\n", "The article referred to by @Lee Theobald is a good start.\nSome basic ideas I try to keep in mind when marking up:\nRegarding html:\nTry to write for the next person - that is, think about how easy or difficult it might be for someone else to come and pick up your work and carry on. \nTo support this principle, you should try and make sure your markup is as legible as possible - class and id tags in particular should relate as much as possible to their intended content. In other words, try to use your tags to describe the kind of content they will have. \nFor example, \"Sub-navigation\", \"content\" etc. \nThe aim is to provide markup that someone can pick up having not looked at before and get a sense of the logical structure of the document.\nAlso, try to avoid the addition of markup that is purely to achieve a visual effect. But bear in mind that any website that requires even slightly sophisticated styling is unlikely to be able to avoid non-semantic markup, due to weaknesses in current implementations of CSS and browser-compatibility issues.\nRegarding CSS files:\nMany people divide their css up into sections using comments, separating them into functional or structural areas. So you might have a section for your header, your footer, or typography and so on. Others take this further and split css across files, having one for typography, one for layout etc. However, this can, according to YSlow!, have a negative impact on page loading, due to increased http requests.\nI could write more, but as you can see I struggle to be concise. I hope this is of some use to you.\n" ]
[ 2, 0 ]
[]
[]
[ "coding_style", "css", "html", "xhtml" ]
stackoverflow_0000031128_coding_style_css_html_xhtml.txt
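A small CSS sketch of the comment-sectioned, semantically named stylesheet the answers describe; the selectors are illustrative:

    /* ---------- Typography ---------- */
    body { font: 13px/1.5 Arial, sans-serif; }

    /* ---------- Header ---------- */
    #header { /* ... */ }

    /* ---------- Sub-navigation ---------- */
    #sub-navigation li { display: inline; }

    /* ---------- Footer ---------- */
    #footer { clear: both; }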
Q: Concatenate several fields into one with SQL I have three tables tag, page, pagetag With the data below page ID NAME 1 page 1 2 page 2 3 page 3 4 page 4 tag ID NAME 1 tag 1 2 tag 2 3 tag 3 4 tag 4 pagetag ID PAGEID TAGID 1 2 1 2 2 3 3 3 4 4 1 1 5 1 2 6 1 3 I would like to get a string containing the correspondent tag names for each page with SQL in a single query. This is my desired output. ID NAME TAGS 1 page 1 tag 1, tag 2, tag 3 2 page 2 tag 1, tag 3 3 page 3 tag 4 4 page 4 Is this possible with SQL? I am using MySQL. Nonetheless, I would like a database vendor independent solution if possible. A: Yep, you can do it across the 3 something like the below: SELECT page_tag.id, page.name, group_concat(tag.name) FROM tag, page, page_tag WHERE page_tag.page_id = page.page_id AND page_tag.tag_id = tag.id; Has not been tested, and could probably be written a tad more efficiently, but should get you started! Also, MySQL is assumed, so may not play so nice with MSSQL! And MySQL isn't wild about hyphens in field names, so changed to underscores in the above examples. A: Sergio del Amo: However, I am not getting the pages without tags. I guess I need to write my query with left outer joins. SELECT pagetag.id, page.name, group_concat(tag.name) FROM ( page LEFT JOIN pagetag ON page.id = pagetag.pageid ) LEFT JOIN tag ON pagetag.tagid = tag.id GROUP BY page.id; Not a very pretty query, but should give you what you want - pagetag.id and group_concat(tag.name) will be null for page 4 in the example you've posted above, but the page shall appear in the results. A: As far as I'm aware SQL92 doesn't define how string concatenation should be done. This means that most engines have their own method. If you want a database independent method, you'll have to do it outside of the database. (untested in all but Oracle) Oracle SELECT field1 || ', ' || field2 FROM table; MS SQL SELECT field1 + ', ' + field2 FROM table; MySQL SELECT concat(field1,', ',field2) FROM table; PostgreSQL SELECT field1 || ', ' || field2 FROM table; A: I got a solution playing with joins. The query is: SELECT page.id AS id, page.name AS name, tagstable.tags AS tags FROM page LEFT OUTER JOIN ( SELECT pagetag.pageid, GROUP_CONCAT(distinct tag.name) AS tags FROM tag INNER JOIN pagetag ON tagid = tag.id GROUP BY pagetag.pageid ) AS tagstable ON tagstable.pageid = page.id GROUP BY page.id And this will be the output: id name tags --------------------------- 1 page 1 tag2,tag3,tag1 2 page 2 tag1,tag3 3 page 3 tag4 4 page 4 NULL Is it possible to boost the query speed writing it another way? A: I think you may need to use multiple updates. Something like (not tested): select ID as 'PageId', Name as 'PageName', null as 'Tags' into #temp from [PageTable] declare @lastOp int set @lastOp = 1 while @lastOp > 0 begin update p set p.tags = isnull(tags + ', ', '' ) + t.[Tagid] from #temp p inner join [TagTable] t on p.[PageId] = t.[PageId] where p.tags not like '%' + t.[Tagid] + '%' set @lastOp = @@rowcount end select * from #temp Ugly though. That example's T-SQL, but I think MySql has equivalents to everything used. A: pagetag.id and group_concat(tag.name) will be null for page 4 in the example you've posted above, but the page shall appear in the results. You can use the COALESCE function to remove the Nulls if you need to: select COALESCE(pagetag.id, '') AS id ... It will return the first non-null value from its list of parameters.
Concatenate several fields into one with SQL
I have three tables tag, page, pagetag With the data below page ID NAME 1 page 1 2 page 2 3 page 3 4 page 4 tag ID NAME 1 tag 1 2 tag 2 3 tag 3 4 tag 4 pagetag ID PAGEID TAGID 1 2 1 2 2 3 3 3 4 4 1 1 5 1 2 6 1 3 I would like to get a string containing the correspondent tag names for each page with SQL in a single query. This is my desired output. ID NAME TAGS 1 page 1 tag 1, tag 2, tag 3 2 page 2 tag 1, tag 3 3 page 3 tag 4 4 page 4 Is this possible with SQL? I am using MySQL. Nonetheless, I would like a database vendor independent solution if possible.
[ "Yep, you can do it across the 3 something like the below:\nSELECT page_tag.id, page.name, group_concat(tag.name)\nFROM tag, page, page_tag\nWHERE page_tag.page_id = page.page_id AND page_tag.tag_id = tag.id;\n\nHas not been tested, and could probably be written a tad more efficiently, but should get you started!\nAlso, MySQL is assumed, so may not play so nice with MSSQL! And MySQL isn't wild about hyphens in field names, so changed to underscores in the above examples.\n", "\nSergio del Amo:\n\nHowever, I am not getting the pages without tags. I guess I need to write my query with left outer joins.\n\n\nSELECT pagetag.id, page.name, group_concat(tag.name)\nFROM\n(\n page LEFT JOIN pagetag ON page.id = pagetag.pageid\n)\nLEFT JOIN tag ON pagetag.tagid = tag.id\nGROUP BY page.id;\n\nNot a very pretty query, but should give you what you want - pagetag.id and group_concat(tag.name) will be null for page 4 in the example you've posted above, but the page shall appear in the results.\n", "As far as I'm aware SQL92 doesn't define how string concatenation should be done. This means that most engines have their own method. \nIf you want a database independent method, you'll have to do it outside of the database.\n(untested in all but Oracle)\nOracle\nSELECT field1 || ', ' || field2\nFROM table;\n\nMS SQL \nSELECT field1 + ', ' + field2\nFROM table;\n\nMySQL\nSELECT concat(field1,', ',field2)\nFROM table;\n\nPostgreSQL\nSELECT field1 || ', ' || field2\nFROM table;\n\n", "I got a solution playing with joins. The query is: \nSELECT\n page.id AS id,\n page.name AS name,\n tagstable.tags AS tags\nFROM page \nLEFT OUTER JOIN \n(\n SELECT pagetag.pageid, GROUP_CONCAT(distinct tag.name) AS tags\n FROM tag INNER JOIN pagetag ON tagid = tag.id\n GROUP BY pagetag.pageid\n)\nAS tagstable ON tagstable.pageid = page.id\nGROUP BY page.id\n\nAnd this will be the output: \nid name tags\n---------------------------\n1 page 1 tag2,tag3,tag1\n2 page 2 tag1,tag3\n3 page 3 tag4\n4 page 4 NULL\n\nIs it possible to boost the query speed writing it another way?\n", "I think you may need to use multiple updates.\nSomething like (not tested):\nselect ID as 'PageId', Name as 'PageName', null as 'Tags'\ninto #temp \nfrom [PageTable]\n\ndeclare @lastOp int\nset @lastOp = 1\n\nwhile @lastOp > 0\nbegin\n update p\n set p.tags = isnull(tags + ', ', '' ) + t.[Tagid]\n from #temp p\n inner join [TagTable] t\n on p.[PageId] = t.[PageId]\n where p.tags not like '%' + t.[Tagid] + '%'\n\n set @lastOp = @@rowcount\nend\n\nselect * from #temp\n\nUgly though.\nThat example's T-SQL, but I think MySql has equivalents to everything used.\n", "\npagetag.id and group_concat(tag.name) will be null for page 4 in the example you've posted above, but the page shall appear in the results.\n\nYou can use the COALESCE function to remove the Nulls if you need to:\nselect COALESCE(pagetag.id, '') AS id ...\n\nIt will return the first non-null value from its list of parameters.\n" ]
[ 3, 3, 1, 1, 0, 0 ]
[]
[]
[ "mysql", "sql" ]
stackoverflow_0000037696_mysql_sql.txt
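For reference, the accepted derived-table approach above can be folded into a single LEFT JOIN query. A minimal MySQL sketch (untested against the data above; GROUP_CONCAT's ORDER BY and SEPARATOR options are MySQL-specific, so this is not vendor independent):

-- Pages with no tags still appear; COALESCE turns their NULL into ''.
SELECT p.id,
       p.name,
       COALESCE(GROUP_CONCAT(t.name ORDER BY t.name SEPARATOR ', '), '') AS tags
FROM page p
LEFT JOIN pagetag pt ON pt.pageid = p.id
LEFT JOIN tag t ON t.id = pt.tagid
GROUP BY p.id, p.name;

Drop the COALESCE if a NULL for untagged pages is acceptable, as in the output shown in the answer above.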
Q: Where can I get free Vista style developer graphics? What is the best source of free Vista style graphics for application development? I want 32x32 and 16x16 that I can use in a Winforms application. A: Best place I've found for commercial toolbar icons etc is glyfx.com. A: If you're using Visual Studio Professional or above, you've got a zip file of icons in your VS path under Common7\VS2008ImageLibrary. Some of the images use the Vista style. A: The Tango project has some good icons For areas that only need 16x16, the silk icons from famfamfam are good too Both are Creative Commons licensed
Where can I get free Vista style developer graphics?
What is the best source of free Vista style graphics for application development? I want 32x32 and 16x16 that I can use in a Winforms application.
[ "Best place I've found for commercial toolbar icons etc is glyfx.com.\n", "If you're using Visual Studio Professional or above, you've got a zip file of icons in your VS path under Common7\\VS2008ImageLibrary. Some of the images use the Vista style.\n", "The Tango project has some good icons\nFor areas that only need 16x16, the silk icons from famfamfam are good too\nBoth are Creative Commons licensed\n" ]
[ 3, 3, 2 ]
[]
[]
[ "graphics", "winforms" ]
stackoverflow_0000037593_graphics_winforms.txt
Q: C# console program can't send fax when run as a scheduled task I have a console program written in C# that I am using to send faxes. When I step through the program in Visual Studio it works fine. When I double click on the program in Windows Explorer it works fine. When I set up a Windows scheduled task to run the program it fails with this in the event log. EventType clr20r3, P1 consolefaxtest.exe, P2 1.0.0.0, P3 48bb146b, P4 consolefaxtest, P5 1.0.0.0, P6 48bb146b, P7 1, P8 80, P9 system.io.filenotfoundexception, P10 NIL. I wrote a batch file to run the fax program and it fails with this message. Unhandled Exception: System.IO.FileNotFoundException: Operation failed. at FAXCOMEXLib.FaxDocumentClass.ConnectedSubmit(FaxServer pFaxServer) Can anyone explain this behavior to me? A: I can't explain it - but I have a few ideas. Most of the time, when a program works fine when you test it but fails when you schedule it, security is the cause. Under which user context is your program scheduled to run? Maybe that user isn't granted enough access. Is the resource your program is trying to access a network drive that the user running the scheduled task simply doesn't have access to? A: Check that you set the correct working directory for your task. A: Is the scheduled task running on the same computer you're developing on, or is it on a dedicated server? It's quite common for paths to change when you change environments, so is the path to the document you're trying to send the same? A: I agree with MartinNH. Many of these problems stem from the fact that you develop while logged in as an administrator in Visual Studio (so the program has all the permissions for execution set properly) but you deploy as a user with lesser privileges. Try setting the privileges of the task scheduler user higher. A: If you are running in Vista, you may find that the elevation is getting in the way. You may need to ensure your task runs as a proper administrator, not as a restricted user. A: When you run a scheduled task you can have it run under a specific user. Verify that the user running the scheduled task has the same rights to the fax resource as you. That is why you can run it when you double-click it in Windows Explorer.
C# console program can't send fax when run as a scheduled task
I have a console program written in C# that I am using to send faxes. When I step through the program in Visual Studio it works fine. When I double click on the program in Windows Explorer it works fine. When I set up a Windows scheduled task to run the program it fails with this in the event log. EventType clr20r3, P1 consolefaxtest.exe, P2 1.0.0.0, P3 48bb146b, P4 consolefaxtest, P5 1.0.0.0, P6 48bb146b, P7 1, P8 80, P9 system.io.filenotfoundexception, P10 NIL. I wrote a batch file to run the fax program and it fails with this message. Unhandled Exception: System.IO.FileNotFoundException: Operation failed. at FAXCOMEXLib.FaxDocumentClass.ConnectedSubmit(FaxServer pFaxServer) Can anyone explain this behavior to me?
[ "I can't explain it - but I have a few ideas.\nMost of the times, when a program works fine testing it, and doesn't when scheduling it - security is the case. In the context of which user is your program scheduled? Maybe that user isn't granted enough access.\nIs the resource your programm is trying to access a network drive, that the user running the scheduled task simply haven't got?\n", "Check that you set correct working directory for your task\n", "Is the scheduled task running on the same computer you're developing on, or is it on a dedicated olp server? It's quite common for paths to change when you change environments, so is the path to the document you're trying to send the same?\n", "I agree with MartinNH.\nMany of these problems root from the fact that you develop while logged in as an administrator in Visual Studio (so the program has all the permissions for execution set properly) but you deploy as a user with lesser privileges.\nTry setting the priveleges of the task scheduler user higher.\n", "If you are running in Vista, you may find that the elevation is getting in the way. You may need to ensure your task runs as a proper administrator, not as a restricted user.\n", "When you run a schedule task you can have it run under a user. Verify the user that is running the schedule task has the same rights for the fax resource as you. Which is why you can run it when you double click in Windows explore.\n" ]
[ 5, 0, 0, 0, 0, 0 ]
[]
[]
[ "c#", "console", "fax" ]
stackoverflow_0000037189_c#_console_fax.txt
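A quick way to test the security and working-directory theories above is to log the runtime context before the fax call. A minimal C# sketch (untested; the log path and the SendFax() call are placeholders for your own code):

using System;
using System.IO;
using System.Security.Principal;

class FaxTaskDiagnostics
{
    static void Main()
    {
        // Record who the task actually runs as, and from where.
        string log = @"C:\Temp\faxtask.log"; // assumed writable path
        File.AppendAllText(log, string.Format("User={0}; Dir={1}{2}",
            WindowsIdentity.GetCurrent().Name,
            Environment.CurrentDirectory,
            Environment.NewLine));
        try
        {
            // SendFax(); // placeholder for the existing FAXCOMEXLib code
        }
        catch (Exception ex)
        {
            // Capture the full FileNotFoundException details the event log hides.
            File.AppendAllText(log, ex + Environment.NewLine);
            throw;
        }
    }
}

Comparing the logged user and directory between an interactive run and a scheduled run should show whether the two answers above apply.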
Q: Font-dependent control positioning I'd like to use Segoe UI 9 pt on Vista, and Tahoma 8 pt on Windows XP/etc. (Actually, I'd settle for Segoe UI on both, but my users probably don't have it installed.) But, these being quite different, they really screw up the layout of my forms. So... is there a good way to deal with this? An example: I have a Label, with some blank space in the middle, into which I place a NumericUpDown control. If I use Segoe UI, the NumericUpDown is about 5 pixels or so to the left of the blank space, compared to when I use Tahoma. This is a pain; I'm not sure what to do here. So most specifically, my question would be: how can I place controls in the middle of a blank space in my Labels (or CheckBoxes, etc.)? Most generally: is there a good way to handle varying fonts in Windows Forms? Edit: I don't think people understood the question. I know how to vary my fonts based on OS. I just don't know how to deal with the layout problems that arise from doing so. Reply to ajryan, quick_dry: OK, you guys understand the question. I guess MeasureString might work, although I'd be interested in further exploration of better ways to solve this problem. The problem with splitting the control is most apparent with, say, a CheckBox. There, if the user clicks on the "second half" of the CheckBox (which would be a separate Label control, I guess), the CheckBox doesn't change state. A: I second the usage of TableLayoutPanel for single-line inline controls. I usually set each column and the first row to AutoSize and set each child control's Dock property to Fill in the designer. That gets the horizontal layout to display properly. To make the text line up between labels/textboxes, set the TextAlign property to MiddleLeft. If your text flows onto the next line there's no easy solution. Using Graphics.MeasureString/TextRenderer.MeasureText and some fancy wrapping logic is your best bet :( A: First of all, you can find out which version of Windows you are using with the OperatingSystem.Platform property in the System namespace. Second, it is possible that you may put your font settings in Resource files, and determine which resource file to use depending on certain conditions (e.g., your operating system version). Personally though, I think it would be nice to let your user determine the fonts that they prefer as opposed to the font that you want for them to use. Finally, you might want to take a look at WPF as this is one of the problem spaces that it was designed to solve. A: Is the problem working out the placement of controls? i.e. you know font X and Y work on OS A and B, and give the layout you want with the text you're using on those systems? The MeasureString method might help in working out your layout in a way that you weren't tied to specific fonts. float textWidth = graphics.MeasureString(someString, someFont).Width; (would a change in text alignment work? I might be misunderstanding the problem too) A: It's strange to need to lay out one control within another. You might be solving an upstream problem wrong. Are you able to split the label into two labels with the updown between and maybe rely on a Windows Forms TableLayout panel? If it's essential to try to position based on font sizes, you could use Graphics.MeasureString("String before updown", myLabel.Font) If what you're after is font-dependent control positioning, you should probably retitle the question. 
[edit] You can handle the click event of the "second half" part of the label and change the checkbox state on that event. The whole thing seems like a hack though. What is the problem being solved by this weird control layout? Why do you need an up-down in the middle of a label?
Font-dependent control positioning
I'd like to use Segoe UI 9 pt on Vista, and Tahoma 8 pt on Windows XP/etc. (Actually, I'd settle for Segoe UI on both, but my users probably don't have it installed.) But, these being quite different, they really screw up the layout of my forms. So... is there a good way to deal with this? An example: I have a Label, with some blank space in the middle, into which I place a NumericUpDown control. If I use Segoe UI, the NumericUpDown is about 5 pixels or so to the left of the blank space, compared to when I use Tahoma. This is a pain; I'm not sure what to do here. So most specifically, my question would be: how can I place controls in the middle of a blank space in my Labels (or CheckBoxes, etc.)? Most generally: is there a good way to handle varying fonts in Windows Forms? Edit: I don't think people understood the question. I know how to vary my fonts based on OS. I just don't know how to deal with the layout problems that arise from doing so. Reply to ajryan, quick_dry: OK, you guys understand the question. I guess MeasureString might work, although I'd be interested in further exploration of better ways to solve this problem. The problem with splitting the control is most apparent with, say, a CheckBox. There, if the user clicks on the "second half" of the CheckBox (which would be a separate Label control, I guess), the CheckBox doesn't change state.
[ "I second the usage of TableLayoutPanel for single-line inline controls.\nI usually set each column and the first row to AutoSize and set each child control's Dock property to Fill in the designer. That gets the horizontal layout to display properly.\nTo make the the text line up between labels/textboxes, set the TextAlign property to MiddleLeft.\nIf your text flows onto to the next line there's no easy solution. Using Graphics.MeasureString/TextRenderer.MeasureText and some fancy wrapping logic is your best bet :(\n", "First of all, you can find out which version of Windows you are using with the OperatingSystem.Platform property in the System library.\nSecond, it is possible that you may put your font settings in Resource files, and determine which resource file to use depending on certain conditions (e.g., your operating system version).\nPersonally though, I think it would be nice to let your user determine the fonts that they prefer as opposed to the font that you want for them to use.\nFinally, you might want to take a look at WPF as this is one of the problem spaces that it was designed to solve.\n", "is the problem working out the placement of controls? i.e. you know font X and Y work on OS A and B, and give the layout you want with the text you're using on those systems?\nMeasureString method might help in working out your layout in a way that you weren't tied to specific fonts.\nfloat textWidth = graphics.MeasureString(someString, someFont).Width;\n(would a change in text alignment work? I might be misunderstanding the problem too)\n", "It's strange to need to layout one control within another. You might be solving an upstream problem wrong. Are you able to split the label into two labels with the updown between and maybe rely on a Windows Forms TableLayout panel? \nIf it's essential to try to position based on font sizes, you could use Graphics.MeasureString(\"String before updown\", myLabel.Font)\nIf what you're after is font-dependent control positioning, you should probably retitle the question.\n\n[edit] You can handle the click event of the \"second half\" part of the label and change the checkbox state on that event. The whole thing seems like a hack though. What is the problem being solved by this weird control layout? Why do you need an up-down in the middle of a label?\n" ]
[ 3, 1, 1, 1 ]
[]
[]
[ ".net", "fonts", "layout", "user_interface", "winforms" ]
stackoverflow_0000037306_.net_fonts_layout_user_interface_winforms.txt
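To make the MeasureString/MeasureText suggestion concrete: a minimal C# sketch of positioning the up-down after a label's text at runtime, whatever font is in effect. It needs System.Drawing and System.Windows.Forms; label1 and numericUpDown1 are hypothetical designer-generated controls, and the 4-pixel gap is arbitrary:

// Hypothetical helper inside the form; call after fonts have been assigned.
private void PositionUpDown()
{
    Size textSize = TextRenderer.MeasureText(label1.Text, label1.Font);
    // Place the up-down just past the measured text instead of a hard-coded offset.
    numericUpDown1.Left = label1.Left + textSize.Width + 4;
    // Roughly centre the control on the label's text line.
    numericUpDown1.Top = label1.Top + (textSize.Height - numericUpDown1.Height) / 2;
}

TextRenderer.MeasureText uses the same GDI text measurement WinForms uses for drawing, so the result tracks whichever font (Segoe UI or Tahoma) was chosen.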
Q: Adding server-side event to extender control I have an extender control that raises a textbox's OnTextChanged event 500ms after the user has finished typing. The problem with this is that OnTextChanged gets raised when the textbox loses focus, which causes problems (because of the postback). What I'd like to do is give the extender control its own server-side event (say, OnDelayedSubmit) so I can handle it separately. The event will originate in the extender control's behavior script (after the 500ms delay), so putting a __doPostBack in onchanged is not an option. Can anyone shed light on how to go about this? A: After plenty of reading up on extender controls and JavaScript, I've cobbled together a solution that seems to be working so far. The main trick was getting the necessary postback code from server-side to the client-side behavior script. I did this by using an ExtenderControlProperty (which is set in the control's OnPreRender function), and then eval'd in the behavior script. The rest was basic event-handling stuff. So now my extender control's .cs file looks something like this: public class DelayedSubmitExtender : ExtenderControlBase, IPostBackEventHandler { // This is where we'll give the behavior script the necessary code for the // postback event protected override void OnPreRender(EventArgs e) { string postback = Page.ClientScript.GetPostBackEventReference(this, "DelayedSubmit") + ";"; PostBackEvent = postback; } // This property matches up with a pair of get & set functions in the behavior script [ExtenderControlProperty] public string PostBackEvent { get { return GetPropertyValue<string>("PostBackEvent", ""); } set { SetPropertyValue<string>("PostBackEvent", value); } } // The event handling stuff public event EventHandler Submit; // Our event protected void OnSubmit(EventArgs e) // Called to raise the event { if (Submit != null) { Submit(this, e); } } public void RaisePostBackEvent(string eventArgument) // From IPostBackEventHandler { if (eventArgument == "DelayedSubmit") { OnSubmit(new EventArgs()); } } } And my behavior script looks something like this: DelayedSubmitBehavior = function(element) { DelayedSubmitBehavior.initializeBase(this, [element]); this._postBackEvent = null; // Stores the script required for the postback } DelayedSubmitBehavior.prototype = { // Delayed submit code removed for brevity, but normally this would be where // initialize, dispose, and client-side event handlers would go // This is the client-side part of the PostBackEvent property get_PostBackEvent: function() { return this._postBackEvent; }, set_PostBackEvent: function(value) { this._postBackEvent = value; }, // This is the client-side event handler where the postback is initiated from _onTimerTick: function(sender, eventArgs) { // The following line evaluates the string var as javascript, // which will cause the desired postback eval(this._postBackEvent); } } Now the server-side event can be handled the same way you'd handle an event on any other control.
Adding server-side event to extender control
I have an extender control that raises a textbox's OnTextChanged event 500ms after the user has finished typing. The problem with this is that OnTextChanged gets raised when the textbox loses focus, which causes problems (because of the postback). What I'd like to do is give the extender control its own server-side event (say, OnDelayedSubmit) so I can handle it separately. The event will originate in the extender control's behavior script (after the 500ms delay), so putting a __doPostBack in onchanged is not an option. Can anyone shed light on how to go about this?
[ "After plenty of reading up on extender controls and JavaScript, I've cobbled together a solution that seems to be working so far.\nThe main trick was getting the necessary postback code from server-side to the client-side behavior script. I did this by using an ExtenderControlProperty (which is set in the control's OnPreRender function), and then eval'd in the behavior script. The rest was basic event-handling stuff.\nSo now my extender control's .cs file looks something like this:\npublic class DelayedSubmitExtender : ExtenderControlBase, IPostBackEventHandler\n{\n // This is where we'll give the behavior script the necessary code for the \n // postback event\n protected override void OnPreRender(EventArgs e)\n {\n string postback = Page.ClientScript.GetPostBackEventReference(this, \"DelayedSubmit\") + \";\";\n PostBackEvent = postback;\n }\n\n // This property matches up with a pair of get & set functions in the behavior script\n [ExtenderControlProperty]\n public string PostBackEvent\n {\n get\n {\n return GetPropertyValue<string>(\"PostBackEvent\", \"\");\n }\n set\n {\n SetPropertyValue<string>(\"PostBackEvent\", value);\n }\n }\n\n // The event handling stuff\n public event EventHandler Submit; // Our event\n\n protected void OnSubmit(EventArgs e) // Called to raise the event\n {\n if (Submit != null)\n {\n Submit(this, e);\n }\n }\n\n public void RaisePostBackEvent(string eventArgument) // From IPostBackEventHandler\n {\n if (eventArgument == \"DelayedSubmit\")\n {\n OnSubmit(new EventArgs());\n }\n }\n\n}\n\nAnd my behavior script looks something like this:\nDelayedSubmitBehavior = function(element) {\n DelayedSubmitBehavior.initializeBase(this, [element]);\n\n this._postBackEvent = null; // Stores the script required for the postback\n}\n\nDelayedSubmitBehavior.prototype = {\n // Delayed submit code removed for brevity, but normally this would be where \n // initialize, dispose, and client-side event handlers would go\n\n // This is the client-side part of the PostBackEvent property\n get_PostBackEvent: function() {\n return this._postBackEvent;\n },\n set_PostBackEvent: function(value) {\n this._postBackEvent = value;\n }\n\n // This is the client-side event handler where the postback is initiated from\n _onTimerTick: function(sender, eventArgs) {\n // The following line evaluates the string var as javascript,\n // which will cause the desired postback\n eval(this._postBackEvent);\n }\n}\n\nNow the server-side event can be handled the same way you'd handle an event on any other control.\n" ]
[ 5 ]
[]
[]
[ ".net_3.5", "asp.net" ]
stackoverflow_0000037555_.net_3.5_asp.net.txt
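For completeness, wiring the new server-side event up in a page might look like this C# sketch (untested; the field name delayedSubmitExtender1 is assumed to match the extender declared in the markup):

// In the page's code-behind:
protected void Page_Load(object sender, EventArgs e)
{
    delayedSubmitExtender1.Submit += new EventHandler(DelayedSubmit_Submit);
}

private void DelayedSubmit_Submit(object sender, EventArgs e)
{
    // Runs only for the delayed postback, not when the textbox merely loses focus.
}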
Q: UI and event testing So I know that unit testing is a must. I get the idea that TDD is the way to go when adding new modules. Even if, in practice, I don't actually do it. A bit like commenting code, really. The real thing is, I'm struggling to get my head around how to unit-test the UI and more generally objects that generate events: user controls, asynchronous database operations, etc. So much of my code relates to UI events that I can't quite see how to even start the unit testing. There must be some primers and starter docs out there? Some hints and tips? I'm generally working in C# (2.0 and 3.5) but I'm not sure that this is strictly relevant to the question. A: The thing to remember is that unit testing is about testing the units of code you write. Your unit tests shouldn't test that clicking a button raises an event, but that the code being executed by that click event does as it's supposed to. What you're really wanting to do is test that the underlying code does what it should so that your UI layers can execute that code with confidence. A: Read this if you're struggling with UI Testing Manually test UI stuff where the benefit-to-cost ratio of automating it is minimal. Test everything under the UI skin ruthlessly. Use Humble Dialog, MVC or variants to keep logic and UI distinct and loosely coupled. A: You should separate logic and presentation. Using MVP (Model-View-Presenter)/MVC (Model-View-Controller) patterns you can unit test your logic without relying on UI events. Also you can use the White framework to simulate user input. I would highly recommend you visit Microsoft's Patterns&Practices developer center, especially take a look at the composite application block and Prism - you can get a lot of information on test-driven design. A: The parts of your application that talk to the outside world (i.e. UI, database, etc.) are always a problem when unit-testing. The way around this is actually not to test those layers but make them as thin as possible. For the UI you can use a humble dialog or a view that doesn't do anything worth testing and then put all the logic in a controller or presenter class. You can then use a mocking framework or write your own mock objects to make fake versions of the views to test the logic in the presenters or controller. On the database side you can do something similar. Testing events is not impossible. You can for example subscribe an anonymous method to the event that throws an exception if the event is raised or counts the number of times the event is raised.
UI and event testing
So I know that unit testing is a must. I get the idea that TDD is the way to go when adding new modules. Even if, in practice, I don't actually do it. A bit like commenting code, really. The real thing is, I'm struggling to get my head around how to unit-test the UI and more generally objects that generate events: user controls, asynchronous database operations, etc. So much of my code relates to UI events that I can't quite see how to even start the unit testing. There must be some primers and starter docs out there? Some hints and tips? I'm generally working in C# (2.0 and 3.5) but I'm not sure that this is strictly relevant to the question.
[ "the thing to remember is that unit testing is about testing the units of code you write. Your unit tests shouldn't test that clicking a button raises an event, but that the code being executed by that click event does as it's supposed to.\nWhat you're really wanting to do is test the underlying code does what it should so that your UI layers can execute that code with confidence. \n", "Read this if you're struggling with UI Testing \nManually test UI stuff where benefit to cost in automating it is minimal. Test everything under the UI skin ruthlessly. Use Humble Dialog, MVC or variants to keep logic and UI distinct and loosely coupled.\n", "You should separate logic and presentation. Using MVP(Model-View-Presenter)/MVC (Model-View-Controller) patterns you can unit test you logic without relying on UI events.\nAlso you can use White framework to simulate user input.\nI would highly recommend you to visit Microsoft's Patterns&Practices developer center, especially take a look at composite application block and Prism - you can get a lot of information on test driven design.\n", "The parts of your application that talk to the outside world (ie UI, database etc.) are always a problem when unit-testing. The way around this is actually not to test those layers but make them as thin as possible. For the UI you can use a humble dialog or a view that doesn't do anything worth testing and then put all the logic in a controller or presenter class. You can then use a mocking framework or write your own mock objects to make fake versions of the views to test the logic in the presenters or controller. On the database side you can do something similar.\nTesting events is not impossible. You can for example subscribe an anonymous method to the event that throws an exception if the event is thrown or counts the number of times the event is thrown.\n" ]
[ 2, 1, 0, 0 ]
[]
[]
[ "tdd", "unit_testing", "user_interface", "visual_studio" ]
stackoverflow_0000037832_tdd_unit_testing_user_interface_visual_studio.txt
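A minimal Humble Dialog/MVP sketch in C# showing how click-handler logic becomes testable once it lives in a presenter. All names here are invented for illustration; the point is the shape, not a specific framework:

public interface ISearchView
{
    string Query { get; }
    void ShowResults(string[] results);
}

public class SearchPresenter
{
    private readonly ISearchView view;
    public SearchPresenter(ISearchView view) { this.view = view; }

    // The form's button-click handler just calls this; the form itself stays dumb.
    public void Search()
    {
        view.ShowResults(new[] { "match for " + view.Query });
    }
}

// In a unit test, a hand-rolled fake view stands in for the real form:
class FakeView : ISearchView
{
    public string Query { get { return "foo"; } }
    public string[] Shown;
    public void ShowResults(string[] results) { Shown = results; }
}

A test constructs SearchPresenter with a FakeView, calls Search(), and asserts on Shown - no button, no form, no UI thread involved.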
Q: Exposing a remote interface or object model I have a question on the best way of exposing an asynchronous remote interface. The conditions are as follows:

The protocol is asynchronous
A third party can modify the data at any time
The command round-trip can be significant
The model should be well suited for UI interaction
The protocol supports queries over certain objects, and so must the model

As a means of improving my lacking skills in this area (and brushing up my Java in general), I have started a project to create an Eclipse-based front-end for xmms2 (described below). So, the question is: how should I expose the remote interface as a neat data model (in this case, track management and event handling)? I welcome anything from generic discussions to pattern name-dropping or concrete examples and patches :) My primary goal here is learning about this class of problems in general. If my project can gain from it, fine, but I present it strictly to have something to start a discussion around. I've implemented a protocol abstraction which I call 'client' (for legacy reasons) which allows me to access most exposed features using method calls, which I am happy with even if it's far from perfect. The features provided by the xmms2 daemon are things like track searching, meta-data retrieval and manipulation, changing playback state, loading playlists and so on and so forth. I'm in the middle of updating to the latest stable release of xmms2, and I figured I might as well fix some of the glaring weaknesses of my current implementation. My plan is to build a better abstraction on top of the protocol interface, one that allows a more natural interaction with the daemon. The current 'model' implementation is hard to use and is frankly quite ugly (not to mention the UI code, which is truly horrible atm). Today I have the Tracks interface which I can use to get instances of Track classes based on their id. Searching is performed through the Collections interface (unfortunate namespace clash) which I'd rather move to Tracks, I think. Any data can be modified by a third party at any time, and this should be properly reflected in the model and change-notifications distributed. These interfaces are exposed when connecting, by returning an object hierarchy that looks like this:

Connection
  Playback getPlayback()
    Play, pause, jump, current track etc
    Expose playback state changes
  Tracks getTracks()
    Track getTrack(id) etc
    Expose track updates
  Collections getCollection()
    Load and manipulate playlists or named collections
    Query media library
    Expose collection updates

A: For the asynchronous bit, I would suggest checking into java.util.concurrent, and especially the Future<T> interface. The Future interface is used to represent objects which are not ready yet, but are being created in a separate thread. You say that objects can be modified at any time by a third party, but I would still suggest you use immutable return objects here, and instead have a separate thread/event log you can subscribe to in order to get notified when objects expire. I have done little programming with UIs, but I believe using Futures for asynchronous calls would let you have a responsive GUI, rather than one that was waiting for a server reply. For the queries I would suggest using method chaining to build the query object, and each object returned by method chaining should be Iterable. Similar to how Django's model is. Say you have QuerySet which implements Iterable<Song>. You can then call allSongs() which would return a result iterating over all Songs. Or allSongs().artist("Beatles"), and you would have an iterable over all Beatles songs. Or even allSongs().artist("Beatles").years(1965,1967) and so on. Hope this helps as a starting place. A: @Staale: Thanks a bunch! Using Future for the async operations is interesting. The only drawback being that it doesn't provide callbacks. But then again, I tried that approach, and look where that got me :) I'm currently solving a similar problem using a worker thread and a blocking queue for dispatching the incoming command replies, but that approach doesn't translate very well. The remote objects can be modified, but since I do use threads, I try to keep the objects immutable. My current hypothesis is that I will send notification events on track updates on the form somehandlername(int changes, Track old_track, Track new_track) or similar, but then I might end up with several versions of the same track. I'll definitely look into Django's method chaining. I've been looking at some similar constructs but haven't been able to come up with a good variant. Returning something iterable is interesting, but the query could take some time to complete, and I wouldn't want to actually execute the query before it's completely constructed. Perhaps something like Tracks.allSongs().artist("Beatles").years(1965,1967).execute() returning a Future might work... A: Iterable only has the method Iterator get() or somesuch. So no need to build any query or execute any code until you actually start iterating. It does make the execute in your example redundant. However, the thread will be locked until the first result is available, so you might consider using an Executor to run the code for the query in a separate thread. A: @Staale It is certainly possible, but as you note, that would make it blocking (at home for something like 10 seconds due to sleeping disks), meaning I can't use it to update the UI directly. I could use the iterator to create a copy of the result in a separate thread and then send that to the UI, but while the iterator solution by itself is rather elegant, it won't fit in very well. In the end, something implementing IStructuredContentProvider needs to return an array of all the objects in order to display it in a TableViewer, so if I can get away with getting something like that out of a callback... :) I'll give it some more thought. I might just be able to work out something. It does give the code a nice look. A: My conclusions so far; I am torn on whether to use getters for the Track objects or just expose the members since the object is immutable. class Track { public final String album; public final String artist; public final String title; public final String genre; public final String comment; public final String cover_id; public final long duration; public final long bitrate; public final long samplerate; public final long id; public final Date date; /* Some more stuff here */ } Anybody who wants to know when something happened to a track in the library would implement this... interface TrackUpdateListener { void trackUpdate(Track oldTrack, Track newTrack); } This is how queries are built. Chain calls to your heart's content. The jury is still out on the get() though. There are some details missing, such as how I should handle wildcards and more advanced queries with disjunctions. I might just need some completion callback functionality, perhaps similar to the Asynchronous Completion Token, but we'll see about that. Perhaps that will happen in an additional layer. 
interface TrackQuery extends Iterable<Track> { TrackQuery years(int from, int to); TrackQuery artist(String name); TrackQuery album(String name); TrackQuery id(long id); TrackQuery ids(long id[]); Future<Track[]> get(); } Some examples: tracks.allTracks(); tracks.allTracks().artist("Front 242").album("Tyranny (For You)"); The tracks interface is mostly just the glue between the connection and the individual tracks. It will be the one implementing or managing meta-data caching, if any (as today, but I think I'll just remove it during the refactoring and see if I actually need it). Also, this provides medialib track updates as it would just be too much work to implement it by track. interface Tracks { TrackQuery allTracks(); void addUpdateListener(TrackUpdateListener listener); void removeUpdateListener(TrackUpdateListener listener); }
Exposing a remote interface or object model
I have a question on the best way of exposing an asynchronous remote interface. The conditions are as follows:

The protocol is asynchronous
A third party can modify the data at any time
The command round-trip can be significant
The model should be well suited for UI interaction
The protocol supports queries over certain objects, and so must the model

As a means of improving my lacking skills in this area (and brushing up my Java in general), I have started a project to create an Eclipse-based front-end for xmms2 (described below). So, the question is: how should I expose the remote interface as a neat data model (in this case, track management and event handling)? I welcome anything from generic discussions to pattern name-dropping or concrete examples and patches :) My primary goal here is learning about this class of problems in general. If my project can gain from it, fine, but I present it strictly to have something to start a discussion around. I've implemented a protocol abstraction which I call 'client' (for legacy reasons) which allows me to access most exposed features using method calls, which I am happy with even if it's far from perfect. The features provided by the xmms2 daemon are things like track searching, meta-data retrieval and manipulation, changing playback state, loading playlists and so on and so forth. I'm in the middle of updating to the latest stable release of xmms2, and I figured I might as well fix some of the glaring weaknesses of my current implementation. My plan is to build a better abstraction on top of the protocol interface, one that allows a more natural interaction with the daemon. The current 'model' implementation is hard to use and is frankly quite ugly (not to mention the UI code, which is truly horrible atm). Today I have the Tracks interface which I can use to get instances of Track classes based on their id. Searching is performed through the Collections interface (unfortunate namespace clash) which I'd rather move to Tracks, I think. Any data can be modified by a third party at any time, and this should be properly reflected in the model and change-notifications distributed. These interfaces are exposed when connecting, by returning an object hierarchy that looks like this:

Connection
  Playback getPlayback()
    Play, pause, jump, current track etc
    Expose playback state changes
  Tracks getTracks()
    Track getTrack(id) etc
    Expose track updates
  Collections getCollection()
    Load and manipulate playlists or named collections
    Query media library
    Expose collection updates
[ "For the asynchronous bit, I would suggest checking into java.util.concurrent, and especially the Future<T> interface. The future interface is used to represent objects which are not ready yet, but are being created in a separate thread. You say that objects can be modified at any time by a third party, but I would still suggest you use immutable return objects here, and instead have a separate thread/event log you can subscribe to to get noticed when objects expire. I have little programming with UIs, but I believe using Futures for asynchronous calls would let you have a responsive GUI, rather than one that was waiting for a server reply.\nFor the queries I would suggest using method chaining to build the query object, and each object returned by method chaining should be Iterable. Similar to how Djangos model is. Say you have QuerySet which implements Iterable<Song>. You can then call allSongs() which would return a result iterating over all Songs. Or allSongs().artist(\"Beatles\"), and you would have an iterable over all Betles songs. Or even allSongs().artist(\"Beatles\").years(1965,1967) and so on. \nHope this helps as a starting place.\n", "@Staale: Thanks a bunch!\nUsing Future for the async operations is interesting. The only drawback being that it is doesn't provide callbacks. But then again, I tried that approach, and look where that got me :)\nI'm currently solving a similar problem using a worker thread and a blocking queue for dispatching the incoming command replies, but that approach doesn't translate very well.\nThe remote objects can be modified, but since I do use threads, I try to keep the objects immutable. My current hypothesis is that I will send notification events on track updates on the form\nsomehandlername(int changes, Track old_track, Track new_track)\n\nor similar, but then I might end up with several versions of the same track.\nI'll definitely look into Djangos method chaining. I've been looking at some similar constructs but haven't been able to come up with a good variant. Returning something iterable is interesting, but the query could take some time to complete, and I wouldn't want to actually execute the query before it's completely constructed.\nPerhaps something like\nTracks.allSongs().artist(\"Beatles\").years(1965,1967).execute()\n\nreturning a Future might work...\n", "Iterable only has the method Iterator get() or somesuch. So no need to build any query or execute any code until you actually start iterating. It does make the execute in your example redundant. However, the thread will be locked until the first result is available, so you might consider using an Executor to run the code for the query in a separate thread.\n", "@Staale\nIt is certainly possibly, but as you note, that would make it blocking (at home for something like 10 seconds due to sleeping disks), meaning I can't use it to update the UI directly.\nI could use the iterator to create a copy of the result in a separate thread and then send that to the UI, but while the iterator solution by itself is rather elegant, it won't fit in very well. In the end, something implementing IStructuredContentProvider needs to return an array of all the objects in order to display it in a TableViewer, so if I can get away with getting something like that out of a callback... :)\nI'll give it some more thought. I might just be able to work out something. 
It does give the code a nice look.\n", "My conclusions so far;\nI am torn on whether to use getters for the Track objects or just expose the members since the object is immutable.\nclass Track {\n public final String album;\n public final String artist;\n public final String title;\n public final String genre;\n public final String comment;\n\n public final String cover_id;\n\n public final long duration;\n public final long bitrate;\n public final long samplerate;\n public final long id;\n public final Date date;\n\n /* Some more stuff here */\n}\n\nAnybody who wants to know when something happened to a track in the library would implement this...\ninterface TrackUpdateListener {\n void trackUpdate(Track oldTrack, Track newTrack);\n}\n\nThis is how querys are built. Chain calls to your hearts content. the jury is still out on the get() though. There are some details missing, such as how I should handle wildcards and more advanced queries with disjunctions. I might just need some completion callback functionality, perhaps similar to the Asynchronous Completion Token, but we'll see about that. Perhaps that will happen in an additional layer.\ninterface TrackQuery extends Iterable<Track> {\n TrackQuery years(int from, int to);\n TrackQuery artist(String name);\n TrackQuery album(String name);\n TrackQuery id(long id);\n TrackQuery ids(long id[]);\n\n Future<Track[]> get();\n}\n\nSome examples:\ntracks.allTracks();\ntracks.allTracks().artist(\"Front 242\").album(\"Tyranny (For You)\");\n\nThe tracks interface is mostly just the glue between the connection and the individual tracks. It will be the one implementing or managing meta-data caching, if any (as today, but I think I'll just remove it during the refactoring and see if I actually need it). Also, this provides medialib track updates as it would just be too much work to implement it by track.\ninterface Tracks {\n TrackQuery allTracks();\n\n void addUpdateListener(TrackUpdateListener listener);\n void removeUpdateListener(TrackUpdateListener listener);\n}\n\n" ]
[ 2, 0, 0, 0, 0 ]
[]
[]
[ "eclipse", "java", "oop", "osgi" ]
stackoverflow_0000037041_eclipse_java_oop_osgi.txt
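One way to get the callback that a bare Future lacks, as discussed in the thread above, is to fire a listener from the worker thread that runs the query. A rough Java 5 sketch (untested; Track is the class from the answer, while QueryRunner and TrackQueryListener are hypothetical names):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

interface TrackQueryListener {
    void queryComplete(Track[] tracks);
}

class QueryRunner {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    // Runs the blocking query off the UI thread; the returned Future is still
    // available for anyone who prefers to poll or block.
    Future<Track[]> submit(final Callable<Track[]> query, final TrackQueryListener listener) {
        return executor.submit(new Callable<Track[]>() {
            public Track[] call() throws Exception {
                Track[] result = query.call();   // the slow protocol work
                listener.queryComplete(result);  // callback fires on the worker thread;
                                                 // marshal to the SWT/UI thread as needed
                return result;
            }
        });
    }
}

In the Eclipse case the listener body would typically wrap its work in Display.asyncExec before touching the TableViewer.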
Q: Handling XSD Dataset ConstraintExceptions Does anyone have any tips for dealing with ConstraintExceptions thrown by XSD datasets? This is the exception with the cryptic message: System.Data.ConstraintException : Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints. A: A couple of tips that I've found lately. It's much better to use the TableAdapter FillByDataXXXX() methods instead of GetDataByXXXX() methods because the DataTable passed into the fill method can be interrogated for clues: DataTable.GetErrors() returns an array of DataRow instances in error DataRow.RowError contains a description of the row error DataRow.GetColumnsInError() returns an array of DataColumn instances in error Recently, I wrapped up some interrogation code into a subclass of ConstraintException that's turned out to be a useful starting point for debugging. C# Example usage: Example.DataSet.fooDataTable table = new DataSet.fooDataTable(); try { tableAdapter.Fill(table); } catch (ConstraintException ex) { // pass the DataTable to DetailedConstraintException to get a more detailed Message property throw new DetailedConstraintException("error filling table", table, ex); } Output: DetailedConstraintException : table fill failed Errors reported for ConstraintExceptionHelper.DataSet+fooDataTable [foo] Columns in error: [1] [PRODUCT_ID] - total rows affected: 1085 Row errors: [4] [Column 'PRODUCT_ID' is constrained to be unique. Value '1' is already present.] - total rows affected: 1009 [Column 'PRODUCT_ID' is constrained to be unique. Value '2' is already present.] - total rows affected: 20 [Column 'PRODUCT_ID' is constrained to be unique. Value '4' is already present.] - total rows affected: 34 [Column 'PRODUCT_ID' is constrained to be unique. Value '6' is already present.] - total rows affected: 22 ----> System.Data.ConstraintException : Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints. I don't know if this is too much code to include in a Stack Overflow answer but here's the C# class in full. Disclaimer: this works for me, please feel free to use/modify as appropriate. 
using System; using System.Collections.Generic; using System.Text; using System.Data; namespace ConstraintExceptionHelper { /// <summary> /// Subclass of ConstraintException that explains row and column errors in the Message property /// </summary> public class DetailedConstraintException : ConstraintException { private const int InitialCountValue = 1; /// <summary> /// Initialises a new instance of DetailedConstraintException with the specified string and DataTable /// </summary> /// <param name="message">exception message</param> /// <param name="ErroredTable">DataTable in error</param> public DetailedConstraintException(string message, DataTable erroredTable) : base(message) { ErroredTable = erroredTable; } /// <summary> /// Initialises a new instance of DetailedConstraintException with the specified string, DataTable and inner Exception /// </summary> /// <param name="message">exception message</param> /// <param name="ErroredTable">DataTable in error</param> /// <param name="inner">the original exception</param> public DetailedConstraintException(string message, DataTable erroredTable, Exception inner) : base(message, inner) { ErroredTable = erroredTable; } private string buildErrorSummaryMessage() { if (null == ErroredTable) { return "No errored DataTable specified"; } if (!ErroredTable.HasErrors) { return "No Row Errors reported in DataTable=[" + ErroredTable.TableName + "]"; } foreach (DataRow row in ErroredTable.GetErrors()) { recordColumnsInError(row); recordRowsInError(row); } StringBuilder sb = new StringBuilder(); appendSummaryIntro(sb); appendErroredColumns(sb); appendRowErrors(sb); return sb.ToString(); } private void recordColumnsInError(DataRow row) { foreach (DataColumn column in row.GetColumnsInError()) { if (_erroredColumns.ContainsKey(column.ColumnName)) { _erroredColumns[column.ColumnName]++; continue; } _erroredColumns.Add(column.ColumnName, InitialCountValue); } } private void recordRowsInError(DataRow row) { if (_rowErrors.ContainsKey(row.RowError)) { _rowErrors[row.RowError]++; return; } _rowErrors.Add(row.RowError, InitialCountValue); } private void appendSummaryIntro(StringBuilder sb) { sb.AppendFormat("Errors reported for {1} [{2}]{0}", Environment.NewLine, ErroredTable.GetType().FullName, ErroredTable.TableName); } private void appendErroredColumns(StringBuilder sb) { sb.AppendFormat("Columns in error: [{1}]{0}", Environment.NewLine, _erroredColumns.Count); foreach (string columnName in _erroredColumns.Keys) { sb.AppendFormat("\t[{1}] - rows affected: {2}{0}", Environment.NewLine, columnName, _erroredColumns[columnName]); } } private void appendRowErrors(StringBuilder sb) { sb.AppendFormat("Row errors: [{1}]{0}", Environment.NewLine, _rowErrors.Count); foreach (string rowError in _rowErrors.Keys) { sb.AppendFormat("\t[{1}] - rows affected: {2}{0}", Environment.NewLine, rowError, _rowErrors[rowError]); } } /// <summary> /// Get the DataTable in error /// </summary> public DataTable ErroredTable { get { return _erroredTable; } private set { _erroredTable = value; } } /// <summary> /// Get the original ConstraintException message with extra error information /// </summary> public override string Message { get { return base.Message + Environment.NewLine + buildErrorSummaryMessage(); } } private readonly SortedDictionary<string, int> _rowErrors = new SortedDictionary<string, int>(); private readonly SortedDictionary<string, int> _erroredColumns = new SortedDictionary<string, int>(); private DataTable _erroredTable; } }
Handling XSD Dataset ConstraintExceptions
Does anyone have any tips for dealing with ConstraintExceptions thrown by XSD datasets? This is the exception with the cryptic message: System.Data.ConstraintException : Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints.
[ "A couple of tips that I've found lately.\n\nIt's much better to use the TableAdapter FillByDataXXXX() methods instead of GetDataByXXXX() methods because the DataTable passed into the fill method can be interrogated for clues:\n\nDataTable.GetErrors() returns an\narray of DataRow instances in error\nDataRow.RowError contains a\ndescription of the row error\nDataRow.GetColumnsInError() returns\nan array of DataColumn instances in\nerror\n\nRecently, I wrapped up some interrogation code into a subclass of ConstraintException that's turned out to be a useful starting point for debugging.\n\nC# Example usage:\nExample.DataSet.fooDataTable table = new DataSet.fooDataTable();\n\ntry\n{\n tableAdapter.Fill(table);\n}\ncatch (ConstraintException ex)\n{\n // pass the DataTable to DetailedConstraintException to get a more detailed Message property\n throw new DetailedConstraintException(\"error filling table\", table, ex);\n}\n\nOutput:\n\n\nDetailedConstraintException : table fill failed\n Errors reported for ConstraintExceptionHelper.DataSet+fooDataTable [foo]\n Columns in error: [1]\n [PRODUCT_ID] - total rows affected: 1085\n Row errors: [4]\n [Column 'PRODUCT_ID' is constrained to be unique. Value '1' is already present.] - total rows affected: 1009\n [Column 'PRODUCT_ID' is constrained to be unique. Value '2' is already present.] - total rows affected: 20\n [Column 'PRODUCT_ID' is constrained to be unique. Value '4' is already present.] - total rows affected: 34\n [Column 'PRODUCT_ID' is constrained to be unique. Value '6' is already present.] - total rows affected: 22\n ----> System.Data.ConstraintException : Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints.\n\n\nI don't know if this is too much code to include in a Stack Overflow answer but here's the C# class in full.\nDisclaimer: this works for me, please feel free to use/modify as appropriate.\nusing System;\nusing System.Collections.Generic;\nusing System.Text;\nusing System.Data;\n\nnamespace ConstraintExceptionHelper\n{\n\n /// <summary>\n /// Subclass of ConstraintException that explains row and column errors in the Message property\n /// </summary>\n public class DetailedConstraintException : ConstraintException\n {\n\n private const int InitialCountValue = 1;\n\n\n /// <summary>\n /// Initialises a new instance of DetailedConstraintException with the specified string and DataTable\n /// </summary>\n /// <param name=\"message\">exception message</param>\n /// <param name=\"ErroredTable\">DataTable in error</param>\n public DetailedConstraintException(string message, DataTable erroredTable)\n : base(message)\n {\n ErroredTable = erroredTable;\n }\n\n\n /// <summary>\n /// Initialises a new instance of DetailedConstraintException with the specified string, DataTable and inner Exception\n /// </summary>\n /// <param name=\"message\">exception message</param>\n /// <param name=\"ErroredTable\">DataTable in error</param>\n /// <param name=\"inner\">the original exception</param>\n public DetailedConstraintException(string message, DataTable erroredTable, Exception inner)\n : base(message, inner)\n {\n ErroredTable = erroredTable;\n }\n\n\n private string buildErrorSummaryMessage()\n {\n if (null == ErroredTable) { return \"No errored DataTable specified\"; }\n if (!ErroredTable.HasErrors) { return \"No Row Errors reported in DataTable=[\" + ErroredTable.TableName + \"]\"; }\n\n foreach (DataRow row in ErroredTable.GetErrors())\n {\n recordColumnsInError(row);\n 
recordRowsInError(row);\n }\n\n StringBuilder sb = new StringBuilder();\n\n appendSummaryIntro(sb);\n appendErroredColumns(sb);\n appendRowErrors(sb);\n\n return sb.ToString();\n }\n\n\n private void recordColumnsInError(DataRow row)\n {\n foreach (DataColumn column in row.GetColumnsInError())\n {\n if (_erroredColumns.ContainsKey(column.ColumnName))\n {\n _erroredColumns[column.ColumnName]++;\n continue;\n }\n\n _erroredColumns.Add(column.ColumnName, InitialCountValue);\n }\n }\n\n\n private void recordRowsInError(DataRow row)\n {\n if (_rowErrors.ContainsKey(row.RowError))\n {\n _rowErrors[row.RowError]++;\n return;\n }\n\n _rowErrors.Add(row.RowError, InitialCountValue);\n }\n\n\n private void appendSummaryIntro(StringBuilder sb)\n {\n sb.AppendFormat(\"Errors reported for {1} [{2}]{0}\", Environment.NewLine, ErroredTable.GetType().FullName, ErroredTable.TableName);\n }\n\n\n private void appendErroredColumns(StringBuilder sb)\n {\n sb.AppendFormat(\"Columns in error: [{1}]{0}\", Environment.NewLine, _erroredColumns.Count);\n\n foreach (string columnName in _erroredColumns.Keys)\n {\n sb.AppendFormat(\"\\t[{1}] - rows affected: {2}{0}\",\n Environment.NewLine,\n columnName,\n _erroredColumns[columnName]);\n }\n }\n\n\n private void appendRowErrors(StringBuilder sb)\n {\n sb.AppendFormat(\"Row errors: [{1}]{0}\", Environment.NewLine, _rowErrors.Count);\n\n foreach (string rowError in _rowErrors.Keys)\n {\n sb.AppendFormat(\"\\t[{1}] - rows affected: {2}{0}\",\n Environment.NewLine,\n rowError,\n _rowErrors[rowError]);\n }\n }\n\n\n /// <summary>\n /// Get the DataTable in error\n /// </summary>\n public DataTable ErroredTable\n {\n get { return _erroredTable; }\n private set { _erroredTable = value; }\n }\n\n\n /// <summary>\n /// Get the original ConstraintException message with extra error information\n /// </summary>\n public override string Message\n {\n get { return base.Message + Environment.NewLine + buildErrorSummaryMessage(); }\n }\n\n\n private readonly SortedDictionary<string, int> _rowErrors = new SortedDictionary<string, int>();\n private readonly SortedDictionary<string, int> _erroredColumns = new SortedDictionary<string, int>();\n private DataTable _erroredTable;\n }\n}\n\n" ]
[ 20 ]
[]
[]
[ "constraintexception", "dataset", "xsd" ]
stackoverflow_0000037936_constraintexception_dataset_xsd.txt
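If the full subclass above is more than you need, the same interrogation can be done inline at the catch site. A minimal C# sketch (untested), assuming tableAdapter and table are the same objects as in the example usage earlier:

try
{
    tableAdapter.Fill(table);
}
catch (ConstraintException)
{
    // GetErrors(), RowError and GetColumnsInError() are the standard
    // ADO.NET interrogation points listed in the answer above.
    foreach (DataRow row in table.GetErrors())
    {
        Console.WriteLine(row.RowError);
        foreach (DataColumn col in row.GetColumnsInError())
            Console.WriteLine("  column in error: " + col.ColumnName);
    }
    throw;
}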
Q: What exactly is WPF? I have seen lots of questions recently about WPF... What is it? What does it stand for? How can I begin programming WPF? A: WPF is a new technology that will supersede Windows Forms. WPF stands for Windows Presentation Foundation. Here are some useful topics on SO: What WPF books would you recommend What real world WPF applications are out there. From my practice I can say that WPF is a truly amazing technology; however, it takes some time to get used to because it's totally different from WinForms. I would recommend you take a look at this demo. A: WPF is the next frontier with Windows UIs. Built on top of DirectX, it opens up hardware acceleration support for your .Net 3.0+ user-interfaces. Emphasis on Vector Graphics - UIs scale and render better. Composable UIs. You could nest animated buttons in combo boxes... the world's your oyster. It is a rewrite with only minimal core components written in unmanaged code, versus the GDI/User DLL-based WinForms approach, which is a thin managed layer over largely unmanaged code. Declarative approach to UI programming: User Interfaces are largely specified in an XML variant called XAML (eXtensible Application Markup Language), pronounced Zammel. This opens up WPF to designer folks who can use specialized tools to craft UIs that the developers can then code up. No translation losses from wireframes to final product. MS 'allegedly' will not provide any future updates to Winforms. Heavily invested in WPF as the way forward. Oh yeah, before I forget. Works best on Vista :) You can get either Adam Nathan's WPF Unleashed book or Chris Sells' Programming WPF... those seem to be the way to go. I just read the first chapter of Adam's (Lead for WPF at MS) book. Hence the WPF praise fountains :) A: Take a look here http://windowsclient.net/ and here Windows Presentation Foundation (WPF) Basically, WPF is created to make Windows Forms easier to design: because of the use of XAML, designers can work on the design and programmers on the underlying code A: WPF is the Windows Presentation Foundation. It is Microsoft's newest API for building applications with User Interfaces (UIs), working for both standalone and web-based applications. Unsurprisingly, there is a very detailed but not all that helpful Windows Presentation Foundation page at Wikipedia. The WPF Getting Started Page at the Microsoft MSDN site is probably a better place to start. A: It is the new Windows GUI system. I don't believe its aim is to make development easier per se but more to address fundamental issues with WinForm, such as transparency and scaling, neither of which WinForm can effectively address. Furthermore it seeks to address the "one resolution only" paradigm of WinForm by mapping sizes to real-pixel sizes and making flow layout easier and more fundamental. It's also based on an XML derivative making it easier to change the UI and forcing a separation of the UI and the core code (although technically you can still badly hack it together in this manner). This separation also drives a desire to be able to divide the work into two camps, the designers taking charge of the XAML and layout and the programmers taking care of developing the objects used in the XAML. A: Check out Eric Sink's Twelve days of WPF 3D. A: Windows Presentation Foundation. It's basically Microsoft's latest attempt to make development easier, and provide a whole heap of nice functionality out of the box. I'm not sure where to start, but googling "WPF 101" should throw up a few useful links. 
A: WPF is part of the .net 3.0 stack. It's Microsoft's next generation graphical user interface system. All the information you need can be found on Wikipedia and MSDN's WPF site. To get started programming, check out the essential downloads on Windows Client
What exactly is WPF?
I have seen lots of questions recently about WPF... What is it? What does it stand for? How can I begin programming WPF?
[ "WPF is a new technology that will supersede Windows Forms.\nWPF stands for Windows Presentation Foundation\nHere are some useful topics on SO:\n\nWhat WPF books would you recommend\nWhat real world WPF applications are out there\n\nFrom my practice I can say that WPF is a truly amazing technology however it takes some time to get used to because it's totally different from the WinForms.\nI would recommend you to take a look at this demo.\n", "WPF is the next frontier with Windows UIs. \n\nBuilt on top of DirectX, it opens up hardware acceleration support for\nyour .Net 3.0+ user-interfaces. \nEmphasis on Vector Graphics - UIs scale and render better \nComposable UIs. You could nest animated buttons in combo boxes.. the world's your oyster.\nIs a rewrite with only minimal core components written in unmanaged code VS GDI-User Dll based Winforms approach which is a thin managed layer over largely unmanaged code.\nDeclarative approach to UI programming, User Interfaces are largely specified in a XML variant called XAML (eXtensible Application markup language) pronounced Zammel. This opens up WPF to designer folks who can specialized tools to craft UIs that the developers can then code up. No translation losses between wireframes to final product.\nMS 'allegedly' will not provide any future updates to Winforms. Heavily invested in WPF as the way forward\nOh yeah, before I forget. Works best on Vista :)\n\nYou can get either Adam Nathan's WPF Unleashed Book or Chris Sells Programming WPF .. those seem to be the way to go. I just read the first chapter of Adam's (Lead for WPF at MS) book. Hence the WPF praise fountains :)\n", "Take a look here http://windowsclient.net/ and here Windows Presentation Foundation (WPF)\nBasically WPF is created to make windows form easier to design because of the use of XAML, designers can work on the design and programmers on the underlying code\n", "WPF is the Windows Presentation Foundation. It is Microsoft's newest API for building applications with User Interfaces (UIs), working for both standalone and web-based applications.\nUnsurprisingly, there is a very detailed but not all that helpful Windows Presentation Foundation page at Wikipedia.\nThe WPF Getting Started Page at the Microsoft MSDN site is probably a better place to start.\n", "Is the new Windows Gui system. I don't believe its aim is to make development easier per se but more to address fundamental issues with WinForm, such as transparency and scaling, neither of which WinForm can effectively address. Furthermore it seeks to address the \"one resolution only\" paradigm of WinForm by mapping sizes to real-pixel sizes and making flow layout easier and more fundamental.\nIt's also based on an XML derivative making it easier to change the UI and forcing a separation of the UI and the core code (although technically you can still badly hack it together in this manner).\nThis separation also drives a desire to be able to divide the work into two camps, the designers taking charge of the XAML and layout and the programmers taking care of developing the objects used in the XAML.\n", "Check out Eric Sink's Twelve days of WPF 3D. \n", "Windows Presentation Foundation. It's basically Microsoft's latest attempt to make development easier, and provide a whole heap of nice functionality out of the box. I'm not sure where to start, but googling \"WPF 101\" should throw up a few useful links. \n", "WPF is part of the .net 3.0 stack. Its microsoft's next generation Graphical User Interface system. 
All the information you need can be found on Wikipedia and MSDN's WPF site.\nTo get started programming, I guess check out the essential downloads on Windows Client. \n" ]
[ 10, 5, 1, 1, 1, 1, 0, 0 ]
[]
[]
[ "windows", "wpf" ]
stackoverflow_0000037843_windows_wpf.txt
Q: What's the use of value types in .Net? The official guidelines suggest that there can be very few practical uses for these. Does anyone have examples of where they've put them to good use? A: Au Contrare... you'll find C/C++ people flocking to structs a.k.a. value types. An example would be data packets. If you have a large number of data packets to transfer/transmit, you'd use value structs to model your data packets. reason: Turning something into a class adds an overhead of (approx 8-16 Bytes I forget) of overhead in the object header in addition to the instance data. In scenarios where this is unacceptable, value types are your safest bet Another use would be situations where you need value type semantics - once you create-initialize a object, it is readonly/immutable and can be passed around to n clients. A: For the most part, it's good to emulate the behaviour of the framework. Many elementary data types such as ints are value types. If you have types that have similar properties, use value types. For example, when writing a Complex data type or a BigInteger, value types are the logical solution. The same goes for the other cases where the framework used value types: DateTime, Point, etc. When in doubt, use a reference type instead. A: Enums are first class citizens of .NET world. As for structures I found that in most cases classes can be used, however for memory-intense scenarios consider using structures. As a practical example I used structures as data structures for OSCAR (ICQ) protocols primitives. A: I tend to use enum for avoiding magic numbers, this can be overcome by const I guess, but enum allows you to group them up. i.e enum MyWeirdType { TypeA, TypeB, TypeC}; switch(value){ case MyWeirdType.TypeA: ... A: You should use a value type whenever: The use of a class isn't necessary (no need for inheritance) You want to make sure there's no need to initialize the type. You have a reason to want the type to be allocated in stack space You want the type to be a complete independent entity on assigment instead of a "link" to the instance as it is in reference types. A: Exactly what most other people use them for.. Fast and light data/value access. As well as being ideal for making grouping properties (where it makes sense of course) into an object. For example: Display/Data value differences, such as String pairs of image names and a path for a control (or whatever). You want the path for the work under the hood, but the name to be visible to the user. Obvious grouping of values for the metrics of objects. We all know Size etc but there may be plenty of situations where the base "metric" types are not enough for you. "Typing" of enum values, being more than a fixed enum, but less that a full blown class (already has been mentioned, just want to advocate). Its important to remember the differences between value and reference types. Used properly, they can really improve efficiency of your code as well as make the object model more robust. A: Value types, specifically, structs and enums, and have proper uses in object-oriented programming. Enums are, as aku said, first class citizens in .NET, which can be used from all sorts of things from Colors to DialogBox options to various types of flags. Structs, as far as my experience goes, are great as Data Transfer Objects; logicless containers of data especially when they comprise mostly of primitive types. 
And of course, primitive types are all value types, which resolve to System.Object (unlike in Java where primitive types aren't related to structs and need some sort of wrapper). A: Actually prior to .net 3.5 SP1 there has been a performance issue with the intensive use of value types as mentioned here in Vance Morrison's blog. As far as I can see the vast majority of the time you should be using classes and the JITter should guarantee a good level of performance. structs have 'value type semantics', so will pass by value rather than by reference. We can see this difference in behaviour in the following example:- using System; namespace StructClassTest { struct A { public string Foobar { get; set; } } class B { public string Foobar { get; set; } } class Program { static void Main() { A a = new A(); a.Foobar = "hi"; B b = new B(); b.Foobar = "hi"; StructTest(a); ClassTest(b); Console.WriteLine("a.Foobar={0}, b.Foobar={1}", a.Foobar, b.Foobar); Console.ReadKey(true); } static void StructTest(A a) { a.Foobar = "hello"; } static void ClassTest(B b) { b.Foobar = "hello"; } } } The struct will be passed by value so StructTest() will get it's own A struct and when it changes a.Foobar will change the Foobar of its new type. ClassTest() will receive a reference to b and thus the .Foobar property of b will be changed. Thus we'd obtain the following output:- a.Foobar=hi, b.Foobar=hello So if you desire value type semantics then that would be another reason to declare something as a struct. Note interestingly that the DateTime type in .net is a value type, so the .net architects decided that it was appropriate to assign it as such, it'd be interesting to determine why they did that :-)
What's the use of value types in .Net?
The official guidelines suggest that there can be very few practical uses for these. Does anyone have examples of where they've put them to good use?
[ "Au Contrare... you'll find C/C++ people flocking to structs a.k.a. value types.\nAn example would be data packets. If you have a large number of data packets to transfer/transmit, you'd use value structs to model your data packets.\nreason: Turning something into a class adds an overhead of (approx 8-16 Bytes I forget) of overhead in the object header in addition to the instance data. In scenarios where this is unacceptable, value types are your safest bet\nAnother use would be situations where you need value type semantics - once you create-initialize a object, it is readonly/immutable and can be passed around to n clients.\n", "For the most part, it's good to emulate the behaviour of the framework. Many elementary data types such as ints are value types. If you have types that have similar properties, use value types. For example, when writing a Complex data type or a BigInteger, value types are the logical solution. The same goes for the other cases where the framework used value types: DateTime, Point, etc.\nWhen in doubt, use a reference type instead.\n", "Enums are first class citizens of .NET world. As for structures I found that in most cases classes can be used, however for memory-intense scenarios consider using structures. As a practical example I used structures as data structures for OSCAR (ICQ) protocols primitives.\n", "I tend to use enum for avoiding magic numbers, this can be overcome by const I guess, but enum allows you to group them up.\ni.e\nenum MyWeirdType {\nTypeA, TypeB, TypeC};\n\nswitch(value){\ncase MyWeirdType.TypeA:\n...\n\n", "You should use a value type whenever:\n\nThe use of a class isn't necessary (no need for inheritance)\nYou want to make sure there's no need to initialize the type.\nYou have a reason to want the type to be allocated in stack space\nYou want the type to be a complete independent entity on assigment instead of a \"link\" to the instance as it is in reference types.\n\n", "Exactly what most other people use them for.. Fast and light data/value access. As well as being ideal for making grouping properties (where it makes sense of course) into an object.\nFor example:\n\nDisplay/Data value differences, such as String pairs of image names and a path for a control (or whatever). You want the path for the work under the hood, but the name to be visible to the user.\nObvious grouping of values for the metrics of objects. We all know Size etc but there may be plenty of situations where the base \"metric\" types are not enough for you.\n\"Typing\" of enum values, being more than a fixed enum, but less that a full blown class (already has been mentioned, just want to advocate).\n\nIts important to remember the differences between value and reference types. 
Used properly, they can really improve efficiency of your code as well as make the object model more robust.\n", "Value types, specifically, structs and enums, and have proper uses in object-oriented programming.\nEnums are, as aku said, first class citizens in .NET, which can be used from all sorts of things from Colors to DialogBox options to various types of flags.\nStructs, as far as my experience goes, are great as Data Transfer Objects; logicless containers of data especially when they comprise mostly of primitive types.\nAnd of course, primitive types are all value types, which resolve to System.Object (unlike in Java where primitive types aren't related to structs and need some sort of wrapper).\n", "Actually prior to .net 3.5 SP1 there has been a performance issue with the intensive use of value types as mentioned here in Vance Morrison's blog.\nAs far as I can see the vast majority of the time you should be using classes and the JITter should guarantee a good level of performance.\nstructs have 'value type semantics', so will pass by value rather than by reference. We can see this difference in behaviour in the following example:-\nusing System;\n\nnamespace StructClassTest {\n\n struct A {\n public string Foobar { get; set; }\n }\n\n class B {\n public string Foobar { get; set; }\n }\n\n class Program {\n static void Main() {\n A a = new A();\n a.Foobar = \"hi\";\n B b = new B();\n b.Foobar = \"hi\";\n\n StructTest(a);\n ClassTest(b);\n\n Console.WriteLine(\"a.Foobar={0}, b.Foobar={1}\", a.Foobar, b.Foobar);\n\n Console.ReadKey(true);\n }\n\n static void StructTest(A a) {\n a.Foobar = \"hello\";\n }\n\n static void ClassTest(B b) {\n b.Foobar = \"hello\";\n }\n }\n}\n\nThe struct will be passed by value so StructTest() will get it's own A struct and when it changes a.Foobar will change the Foobar of its new type. ClassTest() will receive a reference to b and thus the .Foobar property of b will be changed. Thus we'd obtain the following output:-\na.Foobar=hi, b.Foobar=hello\n\nSo if you desire value type semantics then that would be another reason to declare something as a struct. Note interestingly that the DateTime type in .net is a value type, so the .net architects decided that it was appropriate to assign it as such, it'd be interesting to determine why they did that :-)\n" ]
[ 1, 1, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ ".net" ]
stackoverflow_0000037931_.net.txt
Q: How can an application use multiple cores or CPUs in .NET or Java? When launching a thread or a process in .NET or Java, is there a way to choose which processor or core it is launched on? How does the shared memory model work in such cases? A: If you're using multiple threads, the operating system will automatically take care of using multiple cores. A: is there a way to choose which processor or core it is launched on? You can use the task manager to tell windows what CPU(s) your program should be allowed to run on. Normally this is only useful for troubleshooting legacy programs which have broken implementations of multi-threading. To do this, Run task manager Find your process in the Processes window. Right click and choose Set Affinity... Tick the checkboxes next to the CPU's you want to allow your application to run on. Windows will then only schedule threads from that process onto those particular CPU's If I recall correctly, windows will 'remember' these settings for subsequent times your process is run, but please don't quote me on that - run some tests yourself :-) You can also do this programatically in .NET after your program has launched using using the System.Diagnostics.Process.ProcessorAffinity property, but I don't think it will 'remember' the settings, so there will always be a short period in which your app is run on whichever CPU windows sees fit. I don't know how to do this in java sorry. Note: This applies at the entire process level. If you set affinity for CPU0 only, and then launch 50 threads, all 50 of those threads will run on CPU0, and CPU1, 2, 3, etc will sit around doing nothing. Just to reiterate the point, this is primarily useful for troubleshooting broken legacy software. If your software is not broken, you really shouldn't mess with any of these settings, and let windows decide the best CPU(s) to run your program on, so it can take the rest of the system's performance into account. As for the 'shared memory' model, it works the same, but there are more things that can go subtly wrong when your app is running on multiple CPU's as opposed to just timeslices on a single one. For an eye-opening example, read this ridiculousfish article about CPU's and Memory Barriers. It's aimed at OSX development on PowerPC, but general enough that it should apply everywhere. IMHO it's one of the top ten 'all developers should read this' articles I've read. A: The operating system takes care of multi-threading when the virtual machine is using native threads (as opposed to green-threads), and you can't specify low level details, like choosing a processor for a certain thread. It is better that way because you usually have many more threads than you have processors available, so the operating system needs to do time-slicing to give all threads a chance to run. That being said, you can set threads priorities if you have a critical task, and a threading API usually provides this possibility. See the Java API for example: http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Thread.html#setPriority(int) PS: there's something broken in the parsing engine ... I had to add the above link as plain text A: I have used this in a couple of programs because my core 0 was kind of messed up. // Programmatically set process affinity var process = System.Diagnostics.Process.GetCurrentProcess(); // Set Core 0 process.ProcessorAffinity = new IntPtr(0x0001); or // Set Core 1 process.ProcessorAffinity = new IntPtr(0x0002); More on this in "Process.ProcessorAffinity Property". 
A: I would have a look at the Parallel extensions to the .NET framework. It is still in CTP, however it supposed to make the best use of multi core processors. The easiest place to get started for .NET is on the parallel teams blog. As for Java I have no idea.
How can an application use multiple cores or CPUs in .NET or Java?
When launching a thread or a process in .NET or Java, is there a way to choose which processor or core it is launched on? How does the shared memory model work in such cases?
[ "If you're using multiple threads, the operating system will automatically take care of using multiple cores.\n", "\nis there a way to choose which processor or core it is launched on?\n\nYou can use the task manager to tell windows what CPU(s) your program should be allowed to run on. Normally this is only useful for troubleshooting legacy programs which have broken implementations of multi-threading. To do this, \n\nRun task manager\nFind your process in the Processes window.\nRight click and choose Set Affinity...\nTick the checkboxes next to the CPU's you want to allow your application to run on. Windows will then only schedule threads from that process onto those particular CPU's\n\nIf I recall correctly, windows will 'remember' these settings for subsequent times your process is run, but please don't quote me on that - run some tests yourself :-)\nYou can also do this programatically in .NET after your program has launched using using the System.Diagnostics.Process.ProcessorAffinity property, but I don't think it will 'remember' the settings, so there will always be a short period in which your app is run on whichever CPU windows sees fit. I don't know how to do this in java sorry.\nNote:\nThis applies at the entire process level. If you set affinity for CPU0 only, and then launch 50 threads, all 50 of those threads will run on CPU0, and CPU1, 2, 3, etc will sit around doing nothing.\nJust to reiterate the point, this is primarily useful for troubleshooting broken legacy software. If your software is not broken, you really shouldn't mess with any of these settings, and let windows decide the best CPU(s) to run your program on, so it can take the rest of the system's performance into account.\n\nAs for the 'shared memory' model, it works the same, but there are more things that can go subtly wrong when your app is running on multiple CPU's as opposed to just timeslices on a single one.\nFor an eye-opening example, read this ridiculousfish article about CPU's and Memory Barriers.\nIt's aimed at OSX development on PowerPC, but general enough that it should apply everywhere. IMHO it's one of the top ten 'all developers should read this' articles I've read.\n", "The operating system takes care of multi-threading when the virtual machine is using native threads (as opposed to green-threads), and you can't specify low level details, like choosing a processor for a certain thread. It is better that way because you usually have many more threads than you have processors available, so the operating system needs to do time-slicing to give all threads a chance to run.\nThat being said, you can set threads priorities if you have a critical task, and a threading API usually provides this possibility. See the Java API for example: http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Thread.html#setPriority(int)\nPS: there's something broken in the parsing engine ... I had to add the above link as plain text\n", "I have used this in a couple of programs because my core 0 was kind of messed up.\n// Programmatically set process affinity\nvar process = System.Diagnostics.Process.GetCurrentProcess();\n\n// Set Core 0\nprocess.ProcessorAffinity = new IntPtr(0x0001);\n\nor\n// Set Core 1\nprocess.ProcessorAffinity = new IntPtr(0x0002);\n\nMore on this in \"Process.ProcessorAffinity Property\".\n", "I would have a look at the Parallel extensions to the .NET framework. It is still in CTP, however it supposed to make the best use of multi core processors. 
The easiest place to get started for .NET is on the parallel teams blog.\nAs for Java I have no idea.\n" ]
[ 7, 4, 1, 0, 0 ]
[]
[]
[ "c#", "java", "multithreading" ]
stackoverflow_0000037089_c#_java_multithreading.txt
Q: C# string concatenation and string interning When performing string concatenation of an existing string in the intern pool, is a new string entered into the intern pool or is a reference returned to the existing string in the intern pool? According to this article, String.Concat and StringBuilder will insert new string instances into the intern pool? http://community.bartdesmet.net/blogs/bart/archive/2006/09/27/4472.aspx Can anyone explain how concatenation works with the intern pool? A: If you create new strings, they will not automatically be put into the intern pool, unless you concatenate constants compile-time, in which case the compiler will create one string result and intern that as part of the JIT process. A: You can see whether a string has been interned by calling String.IsInterned. The call will return a reference to an interned string equal to the string that was passed as an argument, or null if the string was not interned.
C# string concatenation and string interning
When performing string concatenation of an existing string in the intern pool, is a new string entered into the intern pool or is a reference returned to the existing string in the intern pool? According to this article, String.Concat and StringBuilder will insert new string instances into the intern pool? http://community.bartdesmet.net/blogs/bart/archive/2006/09/27/4472.aspx Can anyone explain how concatenation works with the intern pool?
[ "If you create new strings, they will not automatically be put into the intern pool, unless you concatenate constants compile-time, in which case the compiler will create one string result and intern that as part of the JIT process.\n", "You can see whether a string has been interned by calling String.IsInterned. The call will return a reference to an interned string equal to the string that was passed as an argument, or null if the string was not interned.\n" ]
[ 4, 0 ]
[]
[]
[ ".net", "c#", "string" ]
stackoverflow_0000038010_.net_c#_string.txt
Q: How to get your network support team behind click-once? I'm trying to make the case for click-once and smart client development but my network support team wants to keep with web development for everything. What is the best way to convince them that click-once and smart client development have a place in the business? A: Here are a couple of ideas that may help: long running processes, they are not ASP.NET's best friend; scaling, using client side processing as compared to bigger or more servers reduces cost etc. A: We use ClickOnce where I work; in terms of comparison to a web release I would base the case around the need for providing users with a rich client app, otherwise it might well actually be better to use web applications. In terms of releasing a rich client app ClickOnce is fantastic; you can set it up to enforce updates on startup thus enforcing a version throughout the network. You can make the case that ClickOnce gives you the same benefit of having a single deployment point that web deployment possesses. Personally I've found ClickOnce to be unbelievably useful. If you're developing rich client .NET apps (in Windows, though let's face it the vast majority of real .NET development is in Windows) and want to deploy it across a network nothing else compares. A: They have a place in the Windows environment but not in any other environment, and so if you intend on writing applications for external clients, then you're probably best sticking with web based development. I heard this "Write Once, Run Many" before from Microsoft when ASP.NET 1.1 was released; it never happened in practice. A: @Mark scaling, using client side processing as compared to bigger or more servers reduces cost etc. I'm not sure I would entirely agree with this. It would seem to cost less to buy 1 powerful server and 1,000s of "dumb terminals" than an average powerful server and 1,000 powerful desktop computers. A: @GateKiller when I speak of scaling I was talking about the cost of buying more servers and not clients. Most workstations in an organization barely use 50% of their computing power right through the day. If I was to use a ClickOnce-deployed application I would be using the grunt of existing workstations, therefore not having any further cost on the organization.
How to get your network support team behind click-once?
I'm trying to make the case for click-once and smart client development but my network support team wants to keep with web development for everything. What is the best way to convince them that click-once and smart client development have a place in the business?
[ "Here are a couple of ideas that may help\n\nlong running processes, they are not ASP.NET's best friend.\nscaling, using client side processing as compared to bigger or more servers reduces cost etc.\n\n", "We use ClickOnce where I work; in terms of comparison to a web release I would base the case around the need for providing users with a rich client app, otherwise it might well actually be better to use web applications.\nIn terms of releasing a rich client app ClickOnce is fantastic; you can set it up to enforce updates on startup thus enforcing a version throughout the network. You can make the case that ClickOnce gives you the same benefit of having a single deployment point that web deployment possesses.\nPersonally I've found ClickOnce to be unbelievably useful. If you're developing rich client .NET apps (in Windows, though let's face it the vast majority of real .NET development is in Windows) and want to deploy it across a network nothing else compares.\n", "They have a place in the Windows environment but not in any other environment, and so if you intend on writing applications for external clients, then you're probably best sticking with web based development.\nI heard this \"Write Once, Run Many\" before from Microsoft when ASP.NET 1.1 was released; it never happened in practice.\n", "@Mark\n\nscaling, using client side processing as compared to bigger or more servers reduces cost etc.\n\nI'm not sure I would entirely agree with this. It would seem to cost less to buy 1 powerful server and 1,000s of \"dumb terminals\" than an average powerful server and 1,000 powerful desktop computers.\n", "@GateKiller\nwhen I speak of scaling I was talking about the cost of buying more servers and not clients.\nMost workstations in an organization barely use 50% of their computing power right through the day. If I was to use a ClickOnce-deployed application I would be using the grunt of existing workstations, therefore not having any further cost on the organization.\n" ]
[ 1, 1, 0, 0, 0 ]
[]
[]
[ "smartclient" ]
stackoverflow_0000038002_smartclient.txt
Q: Ruby blocks/Java closures in C I've been trying to understand how Ruby blocks work, and to do that I've been trying to implement them in C. One easy way to implement closures is to pass a void* to the enclosing stack to the closure/function but Ruby blocks also seem to handle returns and break statements from the scope that uses the block. loop do break i if (i >= 4000) i *= 2 end I think one of the closures proposals for Java works like this also. So, how would you implement Ruby-blocks/Java-closures in C? A: The concept of closures requires the concept of contexts. C's context is based on the stack and the registers of the CPU, so to create a block/closure, you need to be able to manipulate the stack pointer in a correct (and reentrant) way, and store/restore registers as needed. The way this is done by interpreters or virtual machines is to have a context structure or something similar, and not use the stack and registers directly. This structure keeps track of a stack and optionally some registers, if you're designing a register based VM. At least, that's the simplest way to do it (though slightly less performant than actually mapping things correctly). A: I haven't actually implemented any of this, so take it with a sack of salt. There are two parts to a closure: the data environment and the code environment. Like you said, you can probably pass a void* to handle references to data. You could probably use setjmp and longjmp to implement the non-linear control flow jumps that the Ruby break requires. If you want closures you should probably be programming in a language that actually supports them. :-) UPDATE: Interesting things are happening in Clang. They've prototyped a closure for C. http://lists.cs.uiuc.edu/pipermail/cfe-dev/2008-August/002670.html might prove to be interesting reading. A: There's a good set of slides on Ruby Blocks as part of the "Rails with Passion" course: Ruby_Blocks.pdf This covers representing a block, how they get passed arguments and executed, and even further into things like Proc objects. It's very clearly explained. It might then be of interest to look at how the JRuby guys handled these in their parsing to Java. Take a look at the source at codehaus.
Ruby blocks/Java closures in C
I've been trying to understand how Ruby blocks work, and to do that I've been trying to implement them in C. One easy way to implement closures is to pass a void* to the enclosing stack to the closure/function but Ruby blocks also seem to handle returns and break statements from the scope that uses the block. loop do break i if (i >= 4000) i *= 2 end I think one of the closures proposals for Java works like this also. So, how would you implement Ruby-blocks/Java-closures in C?
[ "The concept of closures requires the concept of contexts. C's context is based on the stack and the registers of the CPU, so to create a block/closure, you need to be able to manipulate the stack pointer in a correct (and reentrant) way, and store/restore registers as needed.\nThe way this is done by interpreters or virtual machines is to have a context structure or something similar, and not use the stack and registers directly. This structure keeps track of a stack and optionally some registers, if you're designing a register based VM. At least, that's the simplest way to do it (though slightly less performant than actually mapping things correctly).\n", "I haven't actually implemented any of this, so take it with a sack of salt.\nThere are two parts to a closure: the data environment and the code environment. Like you said, you can probably pass a void* to handle references to data. You could probably use setjmp and longjmp to implement the non-linear control flow jumps that the Ruby break requires. \nIf you want closures you should probably be programming in a language that actually supports them. :-)\nUPDATE: Interesting things are happening in Clang. They've prototyped a closure for C. http://lists.cs.uiuc.edu/pipermail/cfe-dev/2008-August/002670.html might prove to be interesting reading.\n", "There's a good set of slides on Ruby Blocks as part of the \"Rails with Passion\" course:\nRuby_Blocks.pdf\nThis covers representing a block, how they get passed arguments and executed, and even further into things like Proc objects. It's very clearly explained.\nIt might then be of interest to look at how the JRuby guys handled these in their parsing to Java. Take a look at the source at codehaus.\n" ]
[ 10, 3, 2 ]
[]
[]
[ "c", "java", "ruby" ]
stackoverflow_0000019838_c_java_ruby.txt
Q: Viewing event log via a web interface I'd like to be able to view the event log for a series of asp.net websites running on IIS. Can I do this externally, for example, through a web interface? A: No, but there are two solutions I would recommend: Adiscon EventLogger is a third-party product that will send your Windows EventLog to a SQL database. You can either send all events or create filters. Of course, once the events are in a SQL database, you can use any of the usual tools to create a web interface. You can use ASP.NET's HealthMonitoring configuration section to configure .NET to send all ASP.NET-related events directly to a SQL database. This covers exceptions, heartbeats, and a host of other event types. The SqlWebEventProvider is a cinch to setup. A: Do you want to know if you can home-roll something or are you looking for an app you can get off the shelf? I'm not a Windows guy, but I think Microsoft's MOM/SCOM solution will probably let you view the event log over a web UI - probably really heavy and expensive if that's all you need though. A quick google found http://www.codeproject.com/KB/XML/Event_Logger.aspx which shows that you can get in if you want to roll your own... also an MS tool on msdn Sorry I can't be more help
Viewing event log via a web interface
I'd like to be able to view the event log for a series of asp.net websites running on IIS. Can I do this externally, for example, through a web interface?
[ "No, but there are two solutions I would recommend:\n\nAdiscon EventLogger is a third-party product that will send your Windows EventLog to a SQL database. You can either send all events or create filters. Of course, once the events are in a SQL database, you can use any of the usual tools to create a web interface.\nYou can use ASP.NET's HealthMonitoring configuration section to configure .NET to send all ASP.NET-related events directly to a SQL database. This covers exceptions, heartbeats, and a host of other event types. The SqlWebEventProvider is a cinch to setup.\n\n", "Do you want to know if you can home-roll something or are you looking for an app you can get off the shelf?\nI'm not a Windows guy, but I think Microsoft's MOM/SCOM solution will probably let you view the event log over a web UI - probably really heavy and expensive if that's all you need though.\nA quick google found http://www.codeproject.com/KB/XML/Event_Logger.aspx which shows that you can get in if you want to roll your own... also an MS tool on msdn\nSorry I can't be more help\n" ]
[ 2, 0 ]
[]
[]
[ "asp.net", "iis", "logging", "monitoring" ]
stackoverflow_0000037821_asp.net_iis_logging_monitoring.txt
Q: What is the easiest-to-use web "rich text editor" I am looking for a text editor to be used in a web page. Where users can format the text and get a WYSIWYG experience. Doesn't need to be too fancy. But has to be easy to use and integrate into the page. Has to generate HTML as output. Support AJAX (one I checked works only with standard form submit) and has to be small in terms of download to the user's browser. A: Well it depends what platform you are on if you are looking for server-side functionality as well, but the defacto badass WYSIWYg in my opinion is FCKeditor. I have worked with this personally in numerous environments (both professional and hobby level) and have always been impressed. It's certainly worth a look. I believe it is employed by open source projects such as SubText as well. Perhaps, Jon Galloway can add to this if he reads this question. Or Phil if he is currently a user. A: TinyMCE is the simplest I've found to use. I've never used it in an AJAX-enabled application, but there are instructions on how to do so on the project's wiki. A: Try FCKeditor. It supports integration with most popular platforms, and it's fairly lightweight. A: You might also want to look at YUI's Rich Text Editor. If you're starting your site from scratch or haven't invested a lot of effort into another JavaScript platform, Yahoo User Interface (YUI) is a very complete JavaScript library that could help you add other AJAX elements beyond a text editor. A: I just did a full day of evaluation of all the ones mentioned so far (and then some), and the one I liked the best is Obout Editor. I think it might be for ASP.NET only, so it might not work for you, but if you are using .NET, it's great. The HTML output is clean and nicely styled, and the rendered output looks the same in the editor as it does when you output it to the page (something I had trouble with when using the others due to doctype settings in the editor). It costs a few bucks, but it was worth it for us. A: I found TinyMCE pretty easy to implement. And it's light on bandwidth usage too. A: Using fck for some tine now, after "free text box", or something like that. Had problems only once, when I put fck inside asp.net ajax updatepanel, but found fix on forums. Problem was solved in next release. I would like to see some nice photo browser in it, because fck comes only with simple browser that displays filename, no thumbs. The other one, that has thumbs costs bunch of money. Didn't try it with asp.net mvc, don't know how will uploading work. It uses one ascx for wrapping js functionality. A: i started out using free text box when i was doing a lot of asp.net programming, but now that most of what i do is php i've moved to the FCK editor. while the change wasn't necessarily prompted by the language, i feel that the fck editor is a better choice because of it's versatility. A: For something minimalist, take a look at Widg Editor, it's truly tiny and very simple. It's only haphazardly supported as a hobby project though. I'm currently using the RTE component of DynarchLib, which is highly customisable - definitely does AJAX - but a bit complicated and not very pretty. It is actively supported, and you can get answers on their forum very quickly. I previously tried Dojo's editor, and found it broken and badly undocumented. YMMV. Edit: In response to other people's answers, I've now tried TinyMCE and found it to be excellent. More easily configurable and far fewer problems than anything else I've tried. Use TinyMCE!
What is the easiest-to-use web "rich text editor"
I am looking for a text editor to be used in a web page. Where users can format the text and get a WYSIWYG experience. Doesn't need to be too fancy. But has to be easy to use and integrate into the page. Has to generate HTML as output. Support AJAX (one I checked works only with standard form submit) and has to be small in terms of download to the user's browser.
[ "Well it depends what platform you are on if you are looking for server-side functionality as well, but the defacto badass WYSIWYg in my opinion is FCKeditor. I have worked with this personally in numerous environments (both professional and hobby level) and have always been impressed.\nIt's certainly worth a look. I believe it is employed by open source projects such as SubText as well. Perhaps, Jon Galloway can add to this if he reads this question. Or Phil if he is currently a user.\n", "TinyMCE is the simplest I've found to use. I've never used it in an AJAX-enabled application, but there are instructions on how to do so on the project's wiki.\n", "Try FCKeditor. It supports integration with most popular platforms, and it's fairly lightweight. \n", "You might also want to look at YUI's Rich Text Editor. \nIf you're starting your site from scratch or haven't invested a lot of effort into another JavaScript platform, Yahoo User Interface (YUI) is a very complete JavaScript library that could help you add other AJAX elements beyond a text editor.\n", "I just did a full day of evaluation of all the ones mentioned so far (and then some), and the one I liked the best is Obout Editor. I think it might be for ASP.NET only, so it might not work for you, but if you are using .NET, it's great. The HTML output is clean and nicely styled, and the rendered output looks the same in the editor as it does when you output it to the page (something I had trouble with when using the others due to doctype settings in the editor). It costs a few bucks, but it was worth it for us.\n", "I found TinyMCE pretty easy to implement. And it's light on bandwidth usage too.\n", "Using fck for some tine now, after \"free text box\", or something like that. Had problems only once, when I put fck inside asp.net ajax updatepanel, but found fix on forums. Problem was solved in next release.\nI would like to see some nice photo browser in it, because fck comes only with simple browser that displays filename, no thumbs. The other one, that has thumbs costs bunch of money.\nDidn't try it with asp.net mvc, don't know how will uploading work. It uses one ascx for wrapping js functionality.\n", "i started out using free text box when i was doing a lot of asp.net programming, but now that most of what i do is php i've moved to the FCK editor.\nwhile the change wasn't necessarily prompted by the language, i feel that the fck editor is a better choice because of it's versatility.\n", "For something minimalist, take a look at Widg Editor, it's truly tiny and very simple. It's only haphazardly supported as a hobby project though.\nI'm currently using the RTE component of DynarchLib, which is highly customisable - definitely does AJAX - but a bit complicated and not very pretty. It is actively supported, and you can get answers on their forum very quickly.\nI previously tried Dojo's editor, and found it broken and badly undocumented. YMMV.\n\nEdit: In response to other people's answers, I've now tried TinyMCE and found it to be excellent. More easily configurable and far fewer problems than anything else I've tried. Use TinyMCE!\n" ]
[ 11, 9, 4, 3, 2, 2, 1, 1, 1 ]
[]
[]
[ "editor", "html" ]
stackoverflow_0000021274_editor_html.txt
Q: How to implement mouse dragging in Visual Basic? I need to create a quick-n-dirty knob control in Visual Basic 2005 Express, the value of which is incremented/decremented by "grabbing" it with the mouse and moving the cursor up/down. Because the knob itself doesn't move, I need to keep tracking the mouse movement outside of the rectangle of the control. I use a Label with an ImageList to implement this (I have a list of 127 bitmaps representing the knob in various positions). Which events should I react to? A: You need the control to handle three events: Mouse Down, Mouse Move and Mouse Up. On the Mouse Down event, you will need to capture the mouse. This means the mouse messages are sent to the control that has the capture. In the mouse move event, if the input is captured then update the displayed image depending on the amount the mouse moved. In the mouse up event, release the capture if the input is captured. The boolean jjnguy suggests is unnecessary as the Capture property of a Control is readable, so it's possible to determine if the capture has been set. A: Your problem will be to determine which bitmap you have to display based upon the coordinates the mouse reports in the mouse_move event. You'll need to perform some magic to transform the coordinates and come up with a value that you can use to pick the right image. It doesn't sound too complicated, just a little bit of trial and error in the math. Skizz has already shown you how to capture the events.
How to implement mouse dragging in Visual Basic?
I need to create a quick-n-dirty knob control in Visual Basic 2005 Express, the value of which is incremented/decremented by "grabbing" it with the mouse and moving the cursor up/down. Because the knob itself doesn't move, I need to keep tracking the mouse movement outside of the rectangle of the control. I use a Label with an ImageList to implement this (I have a list of 127 bitmaps representing the knob in various positions). Which events should I react to?
[ "You need the control to handle three events: Mouse Down, Mouse Move and Mouse Up. On the Mouse Down event, you will need to capture the mouse. This means the mouse messages are sent to the control that has the capture. In the mouse move event, if the input is captured then update the displayed image depending on the amount the mouse moved. In the mouse up event, release the capture if the input is captured.\nThe boolean jjnguy suggests is unnecessary as the Capture property of a Control is readable, so it's possible to determine if the capture has been set.\n", "Your problem will be to determine which bitmap you have to display based upon the coordinates the mouse reports in the mouse_move event. You'll need to perform some magic to transform the coordinates and come up with a value that you can use to pick the right image. \nIt doesn't sound too complicated, just a little bit of trial and error in the math. Skizz has already shown you how to capture the events.\n" ]
[ 0, 0 ]
[]
[]
[ "drag_and_drop", "user_controls", "vb.net" ]
stackoverflow_0000038081_drag_and_drop_user_controls_vb.net.txt
Q: Cross Page Postback doesn't work for client-side enabled button I am using a cross page postback for Page A to pass data to Page B. The button that causes the postback has its postbackurl set but is disabled until the user selects a value from a DDL at which point the button is enable using javascript. However this prevents the cross page postback from occurring, Page A just postbacks to itself. If the button is never disabled it works fine. Anyone know how to solve this? A: It looks like when the button is disabled .Net doesn't bother adding the necessary bits to handle the cross page postback on the client, so they will be missing when the button is enable client-side. I guess one solution would be to have the button enabled to start with (so that .Net adds the cross page postback controls) and then disable it using javascript as soon as the control loads on the client. But this sounds a bit clunky.
Cross Page Postback doesn't work for client-side enabled button
I am using a cross page postback for Page A to pass data to Page B. The button that causes the postback has its PostBackUrl set but is disabled until the user selects a value from a DDL, at which point the button is enabled using JavaScript. However this prevents the cross page postback from occurring; Page A just posts back to itself. If the button is never disabled it works fine. Anyone know how to solve this?
[ "It looks like when the button is disabled .NET doesn't bother adding the necessary bits to handle the cross page postback on the client, so they will be missing when the button is enabled client-side.\nI guess one solution would be to have the button enabled to start with (so that .NET adds the cross page postback controls) and then disable it using JavaScript as soon as the control loads on the client. But this sounds a bit clunky.\n" ]
[ 2 ]
[]
[]
[ "asp.net", "postback" ]
stackoverflow_0000038107_asp.net_postback.txt
Q: What is a good maintainability index using Visual Studio 2008 code analysis? My company recently purchased TFS and I have started looking into the code analysis tools to help drive up code quality and noticed a good looking metric "maintainability index". Is anyone using this metric for code reviews/checkins/etc? If so, what is an acceptable index for developers to work toward? A: The maintainability index is not as much a fixed value you look at, it's more of an indication that code is hard to understand, test and/or debug. I usually try to keep high-level code (basically anything except for the real plumbing code) above 80, where 90+ would be good. It adds a competitive element to programming as maintainable as possible to me. The code analysis tool really shines in the area of dependencies and the number of branches within a method though. More branches mean harder testing, which makes it more error-prone. Dependencies, same thing. In other people's code, I use the maintainability index to spot possible bad parts in the code, so I know where to review it. Also, methods/classes with a high number of lines are an indication of poor code to me (unless it can't be avoided, again, the plumbing works). In the end, I think it mainly depends on how often your code will change. Code that's expected to change a lot has to score higher in maintainability than your typical 'write once' code.
What is a good maintainability index using Visual Studio 2008 code analysis?
My company recently purchased TFS and I have started looking into the code analysis tools to help drive up code quality and noticed a good looking metric "maintainability index". Is anyone using this metric for code reviews/checkins/etc? If so, what is an acceptable index for developers to work toward?
[ "The maintainability index is not as much a fixed value you look at, it's more of an indication that code is hard to understand, test and/or debug. I usually try to keep high-level code (basically anything except for the real plumbing code) above 80, where 90+ would be good. It adds a competitive element to programming as maintainable as possible to me.\nThe code analysis tool really shines in the area of dependencies and the number of branches within a method though. More branches mean harder testing, which makes it more error-prone. Dependencies, same thing.\nIn other people's code, I use the maintainability index to spot possible bad parts in the code, so I know where to review it. Also, methods/classes with a high number of lines are an indication of poor code to me (unless it can't be avoided, again, the plumbing works).\nIn the end, I think it mainly depends on how often your code will change. Code that's expected to change a lot has to score higher in maintainability than your typical 'write once' code.\n" ]
[ 22 ]
[]
[]
[ "code_analysis", "visual_studio" ]
stackoverflow_0000038158_code_analysis_visual_studio.txt
Q: Inbox Management (in Outlook) I've gone back and forth between having an organized inbox and having an inbox with absolutely everything I've received in it. Would you recommend leaving everything in an inbox, or organize it? If you organize it, is there any method to your madness or possibly an Outlook (2003) plug-in to aid in this task? For what it's worth, I feel way more productive with everything in my inbox, grouped by date. I feel like a spend way more time doing inbox management any other way. A: I would recommend following the inbox zero approach advocated by 43 folders. Joel Spolsky apparently uses it and a lot of people feel it's a great way of decluttering and organising your email life :-). A: If you don't want to actually clear out your inbox, you could use a good search utility like Google Desktop, Yahoo Desktop Search (is that what it's called) or my current favorite, Xobni. With these tools you don't have to worry about where you put the mails you saved. Just save them all and let the tools find it. A: I switched to gMail and have never been happier. You could also try using a tags plugin like http://www.taglocity.com/index.html A: I'm going with the Microsoft way; Delete it Defer it Delegate it Do it It works for me great. You can read about it at http://www.microsoft.com/atwork/manageinfo/email.mspx. A: We've invested in a few licenses of Simply File for our employees. Works a treat at managing your inbox - it learns (don't ask me how, but it is very good) how to file things for you and does it automatically. I was sceptical about it at first, until I tried it then I was a convert. A: Keep to the ideal of inbox zero in the actual inbox, then employ a decent search engine (Google Desktop or Xobni for example). I have a handful of project- or filter-specific folders (e.g. for system generated status messages that go to a mailing list), but generally all archived email is dumped in one folder. In Outlook 2007 categories (which can approach the usefulness of tags) do add a potentially useful dimension. A: I use message flags for my "action folders" and shunt everything into one big Archive folder after I process it (use the Ctrl+Shift+V shortcut to do this). As an example, I might flag a received message with a red flag (reply), a blue flag (pending, meaning I have to do something about it first), or maybe a green flag (reference). I then have search folders for each of my flag colors. This flagging/search folder method is explained fairly well in this blog post. I've also implemented a Gmail-like conversation view search folder which has been pretty handy. A: The best place to start with getting control of your email is definitely Merlin Mann's excellent Inbox Zero series. In particular his Google Tech Talk video is a great talk.
Inbox Management (in Outlook)
I've gone back and forth between having an organized inbox and having an inbox with absolutely everything I've received in it. Would you recommend leaving everything in an inbox, or organizing it? If you organize it, is there any method to your madness or possibly an Outlook (2003) plug-in to aid in this task? For what it's worth, I feel way more productive with everything in my inbox, grouped by date. I feel like I spend way more time doing inbox management any other way.
[ "I would recommend following the inbox zero approach advocated by 43 folders. Joel Spolsky apparently uses it and a lot of people feel it's a great way of decluttering and organising your email life :-).\n", "If you don't want to actually clear out your inbox, you could use a good search utility like Google Desktop, Yahoo Desktop Search (is that what it's called) or my current favorite, Xobni.\nWith these tools you don't have to worry about where you put the mails you saved. Just save them all and let the tools find it.\n", "I switched to gMail and have never been happier.\nYou could also try using a tags plugin like http://www.taglocity.com/index.html\n", "I'm going with the Microsoft way;\n\nDelete it\nDefer it\nDelegate it\nDo it\n\nIt works for me great.\nYou can read about it at http://www.microsoft.com/atwork/manageinfo/email.mspx.\n", "We've invested in a few licenses of Simply File for our employees. Works a treat at managing your inbox - it learns (don't ask me how, but it is very good) how to file things for you and does it automatically.\nI was sceptical about it at first, until I tried it then I was a convert.\n", "Keep to the ideal of inbox zero in the actual inbox, then employ a decent search engine (Google Desktop or Xobni for example). \nI have a handful of project- or filter-specific folders (e.g. for system generated status messages that go to a mailing list), but generally all archived email is dumped in one folder. \nIn Outlook 2007 categories (which can approach the usefulness of tags) do add a potentially useful dimension. \n", "I use message flags for my \"action folders\" and shunt everything into one big Archive folder after I process it (use the Ctrl+Shift+V shortcut to do this). As an example, I might flag a received message with a red flag (reply), a blue flag (pending, meaning I have to do something about it first), or maybe a green flag (reference). I then have search folders for each of my flag colors.\nThis flagging/search folder method is explained fairly well in this blog post.\nI've also implemented a Gmail-like conversation view search folder which has been pretty handy.\n", "The best place to start with getting control of your email is definitely Merlin Mann's excellent Inbox Zero series. In particular his Google Tech Talk video is a great talk.\n" ]
[ 7, 4, 3, 2, 2, 1, 1, 0 ]
[]
[]
[ "email", "gtd", "outlook" ]
stackoverflow_0000038117_email_gtd_outlook.txt
Q: Why is the subprocess.Popen class not named Subprocess? The primary class in the subprocess module is named Popen, and represents a subprocess. Popen sounds like someone was trying to force the name to follow some function naming format, rather than choosing a name that actually represents what the object is. Does anyone know why it was chosen over something simple like, say, Subprocess? A: Now, I'm not saying that this is the greatest name in the world, but here was the idea as I understand it. Originally, the popen family was in the os module and was an implementation of the venerable posix popen. The movement to the subprocess module would have been an opportune time to rename them, but I guess that keeping Popen makes it easier to find in the docs for those who have a long history with python or even to the venerable posix functions. From its earliest posix incarnation, Popen has always been meant to open a Process and allow you to read and write from its stdio like a file. Thus the mnemonic for Popen is that it is short for ProcessOpen in an attempt to kind of, sorta, look like open. A: subprocess.Popen replaces the group of os.popenX POSIX functions (which have a long history). I suppose that the name Popen makes it more likely for people used to the old functions to find and use the new ones. The PEP for subprocess (PEP 324) has a little bit of discussion on the name of the module but not of class Popen. The list of PEPs (Python enhancement proposals) is in general an excellent place to start if you're looking for the rationale for features of Python.
Why is the subprocess.Popen class not named Subprocess?
The primary class in the subprocess module is named Popen, and represents a subprocess. Popen sounds like someone was trying to force the name to follow some function naming format, rather than choosing a name that actually represents what the object is. Does anyone know why it was chosen over something simple like, say, Subprocess?
[ "Now, I'm not saying that this is the greatest name in the world, but here was the idea as I understand it.\nOriginally, the popen family was in the os module and was an implementation of the venerable posix popen. The movement to the subprocess module would have been an opportune time to rename them, but I guess that keeping Popen makes it easier to find in the docs for those who have a long history with python or even to the venerable posix functions.\nFrom its earliest posix incarnation, Popen has always been meant to open a Process and allow you to read and write from its stdio like a file. Thus the mnemonic for Popen is that it is short for ProcessOpen in an attempt to kind of, sorta, look like open.\n", "subprocess.Popen replaces the group of os.popenX POSIX functions (which have a long history). I suppose that the name Popen makes it more likely for people used to the old functions to find and use the new ones.\nThe PEP for subprocess (PEP 324) has a little bit of discussion on the name of the module but not of class Popen. The list of PEPs (Python enhancement proposals) is in general an excellent place to start if you're looking for the rationale for features of Python.\n" ]
[ 8, 5 ]
[ "I suppose the name was chosen because the functionality subprocess is replacing was formerly in the os module as the os.popen function. There could be even ways to automate migration between the two.\n" ]
[ -1 ]
[ "python", "subprocess" ]
stackoverflow_0000038197_python_subprocess.txt
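A hedged aside on the record above: Popen's file-like behaviour is exactly what the answers describe, in that you open a process and read its stdout the way you would read a file. A minimal sketch using the modern Python 3 API (the command itself is only an illustration):

import subprocess

# Open a child process and read its stdout like a file -- this is the
# behaviour the name Popen ("process open") was coined for.
proc = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE, text=True)
for line in proc.stdout:
    print(line.rstrip())
proc.wait()  # reap the child and collect its exit status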
Q: C++ std::tr2 for VS2005 Is Boost the only way for VS2005 users to experience TR2? Also is there an idiot-proof way of downloading only the TR2 related packages? I was looking at the boost installer provided by BoostPro Consulting. If I select the options for all the threading options with all the packages for MSVC8 it requires 1.1GB. While I am not short of space, it seems ridiculous that a library needs over a gigabyte of space and it takes BPC a long time to catch up with the current release. What packages do I need? I'm really only interested in those that comprise std::tr2 and can find that out by comparing those on offer to those in the TR2 report and selecting those from the list but even then it isn't clear what is needed and the fact that it is a version behind annoys me. I know from previous encounters with Boost (1.33.1) that self compiling is a miserable experience: A lot of time wasted to get it started and then a horde of errors passes across your screen faster than you can read, so what you are left with is an uneasy feeling that something is broken but you don't quite know what. I've never had these problems with any Apache library but that is another rant... A: I believe you're actually referring to TR1, rather than TR2. The call for proposals for TR2 is open, but don't expect to see much movement until the new C++ standard is out. Also, although boost is a provider of an implementation of TR1, dinkumware and the GNU FSF are other providers - on VC2005 boost is probably the easiest way to access this functionality. The libraries from boost which are likely to be of most importance are reference smart pointer bind type traits array regular expressions The documentation for building boost has been gradually improving for the last few releases, the current getting started guide is quite detailed. smart pointer and bind, should work from header files, and IMO, these are the most useful elements of TR1. A: Part of the beauty of Boost is that all code is in header files. They have to for template reasons. So probably downloading the code and including it in your project will work. There are some libraries in Boost that do need compiling, but as long as you don't need those... A: The libraries I am most interested in from TR1/TR2 are threads and the related atomics. A: Compiling the boost libraries for yourself is actually quite simple, if not that well documented. The documentation is in the jamroot file. Run bjam --help in the boost root directory for a detailed list of options. As an example I used the following command line to build my current set up with boost 1.36.0: bjam --build-type=complete --toolset=msvc --build-dir=c:\boost\build install It ran for about a half hour on my machine and put the resulting files into c:\boost
C++ std::tr2 for VS2005
Is Boost the only way for VS2005 users to experience TR2? Also is there an idiot-proof way of downloading only the TR2 related packages? I was looking at the boost installer provided by BoostPro Consulting. If I select the options for all the threading options with all the packages for MSVC8 it requires 1.1GB. While I am not short of space, it seems ridiculous that a library needs over a gigabyte of space and it takes BPC a long time to catch up with the current release. What packages do I need? I'm really only interested in those that comprise std::tr2 and can find that out by comparing those on offer to those in the TR2 report and selecting those from the list but even then it isn't clear what is needed and the fact that it is a version behind annoys me. I know from previous encounters with Boost (1.33.1) that self compiling is a miserable experience: A lot of time wasted to get it started and then a horde of errors passes across your screen faster than you can read, so what you are left with is an uneasy feeling that something is broken but you don't quite know what. I've never had these problems with any Apache library but that is another rant...
[ "I believe you're actually referring to TR1, rather than TR2. The call for proposals for TR2 is open, but don't expect to see much movement until the new C++ standard is out. Also, although boost is a provider of an implementation of TR1, dinkumware and the GNU FSF are other providers - on VC2005 boost is probably the easiest way to access this functionality. \nThe libraries from boost which are likely to be of most importance are\n\nreference\nsmart pointer \nbind \ntype traits\narray\nregular expressions\n\nThe documentation for building boost has been gradually improving for the last few releases, the current getting started guide is quite detailed. smart pointer and bind, should work from header files, and IMO, these are the most useful elements of TR1.\n", "Part of the beauty of Boost is that all code is in header files. They have to for template reasons. So probably downloading the code and including it in your project will work. There are some libraries in Boost that do need compiling, but as long as you don't need those...\n", "The libraries I am most interested in from TR1/TR2 are threads and the related atomics.\n", "Compiling the boost libraries for yourself is actually quite simple, if not that well documented. The documentation is in the jamroot file. Run bjam --help in the boost root directory for a detailed list of options. As an example I used the following command line to build my current set up with boost 1.36.0:\nbjam --build-type=complete --toolset=msvc --build-dir=c:\\boost\\build install\n\nIt ran for about a half hour on my machine and put the resulting files into c:\\boost\n" ]
[ 4, 1, 0, 0 ]
[]
[]
[ "boost", "c++", "c++_tr2", "visual_studio_2005" ]
stackoverflow_0000017117_boost_c++_c++_tr2_visual_studio_2005.txt
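To make the answer's point about header-only TR1 pieces concrete, here is a hedged sketch: shared_ptr, function and bind used straight from Boost headers, which on VC8 needs no compiled Boost libraries at all. The Greeter type is purely illustrative.

#include <boost/shared_ptr.hpp>
#include <boost/function.hpp>
#include <boost/bind.hpp>
#include <iostream>
#include <string>

// Illustrative type; the interesting part is the Boost/TR1 machinery around it.
struct Greeter {
    void greet(const std::string& who) const {
        std::cout << "Hello, " << who << '\n';
    }
};

int main() {
    boost::shared_ptr<Greeter> g(new Greeter);                  // tr1::shared_ptr equivalent
    boost::function<void()> call =                              // tr1::function equivalent
        boost::bind(&Greeter::greet, g, std::string("world"));  // tr1::bind equivalent
    call();                                                     // prints "Hello, world"
    return 0;
}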
Q: Optimizing/Customizing Sharepoint Search Crawling With SharePoint Server 2007, there is also a Search Feature and a Crawler. However, the Crawler is somewhat limited in that it only supports Basic Auth when crawling external sites and that there is no way to tell it to ignore no-index,no-follow attributes. Now, there is a site I'd like to index, unfortunately this site uses its own Authentication System, and it uses no-index,no-follow on the pages. As I control that site, I can remove the Attributes, but it's a PITA to do so. Also, it does not solve the Authentication issue. So I just wonder if it's possible to extend Sharepoint's Crawler somehow? A: The limitation of MOSS crawling sites with different forms authentication should have been addressed in MOSS SP1: http://www.microsoft.com/downloads/details.aspx?FamilyID=ad59175c-ad6a-4027-8c2f-db25322f791b&displaylang=en Here's a link to a post which describes how to get the hotfix for pre-SP1 MOSS to enable the crawling of sites with forms authentication: http://blogs.microsoft.co.il/blogs/adir_ron/archive/2007/10/11/moss-search-for-sso-form-based-authentication-sites.aspx Hope that helps!
Optimizing/Customizing Sharepoint Search Crawling
With SharePoint Server 2007, there is also a Search Feature and a Crawler. However, the Crawler is somewhat limited in that it only supports Basic Auth when crawling external sites and that there is no way to tell it to ignore no-index,no-follow attributes. Now, there is a site I'd like to index, unfortunately this site uses its own Authentication System, and it uses no-index,no-follow on the pages. As I control that site, I can remove the Attributes, but it's a PITA to do so. Also, it does not solve the Authentication issue. So I just wonder if it's possible to extend Sharepoint's Crawler somehow?
[ "The limitation of MOSS crawling sites with different forms authentication should have been addressed in MOSS SP1. :\nhttp://www.microsoft.com/downloads/details.aspx?FamilyID=ad59175c-ad6a-4027-8c2f-db25322f791b&displaylang=en\nHere's a link to a post which describes how to get the hotfix for pre-SP1 MOSS to enable the crawling of sites with forms authentication:\nhttp://blogs.microsoft.co.il/blogs/adir_ron/archive/2007/10/11/moss-search-for-sso-form-based-authentication-sites.aspx\nHope that helps!\n" ]
[ 3 ]
[]
[]
[ ".net", "c#", "sharepoint" ]
stackoverflow_0000035219_.net_c#_sharepoint.txt
Q: Variable Holding data in a while statement I know I must be missing something, but in a while statement how does the variable hold the data, when it finishes the first pass and goes into the second pass? { int num1 = 0 ; int num2 = 0; int num3 = 0; while (num1 < 10) {cout << "enter your first number: "; cin >> num1; cout << "Enter your second number: "; cin >> num2; num1 = num1 + num2 ; cout << "Number 1 is now: " << num1 <<endl; cout << "Enter Number 3: " ; cin >> num3; num1 = num1 + num3; cout << "Number 1 is now: " << num1 << endl; num1++; }; In this code, the variable doesn't hold the data. I'm not sure what I'm doing wrong! A: I'm not clear exactly what you're asking, but variables will maintain their value for each iteration of a loop, as long as they're declared outside of the loop itself. For example: int a = 0; while(a < 10) { int b = 0; cout << "a: " << a << " b: " << b << "\n"; a++; b++; } In the above, the value output for b will always be 0, as it's declared inside the loop and is being reinitialized each time, whereas a will maintain its value and get incremented each iteration. If b were an object, rather than an int, its constructor and destructor would get called each iteration. A: Is num1 the variable you're having trouble with? This line: cin >> num1; is setting num1 to the value input by the user. So the value calculated for it in the previous run through the loop is being overwritten each time by the new input. A: I'm not sure I understand your question. In C any data that's not overwritten is carried over into the next iteration of the loop, and I imagine that C++ works much the same way. A: Do you understand how when you say "num1" you're referring to the same variable each time, and that each time you change num1 you replace the previous value?
Variable Holding data in a while statement
I know I must be missing something, but in a while statement how does the variable hold the data, when it finishes the first pass and goes into the second pass? { int num1 = 0 ; int num2 = 0; int num3 = 0; while (num1 < 10) {cout << "enter your first number: "; cin >> num1; cout << "Enter your second number: "; cin >> num2; num1 = num1 + num2 ; cout << "Number 1 is now: " << num1 <<endl; cout << "Enter Number 3: " ; cin >> num3; num1 = num1 + num3; cout << "Number 1 is now: " << num1 << endl; num1++; }; In this code, the variable doesn't hold the data. I'm not sure what I'm doing wrong!
[ "I'm not clear exactly what you're asking, but variables will maintain their value for each iteration of a loop, as long as they're declared outside of the loop itself. For example:\nint a = 0;\n\nwhile(a < 10)\n{\n int b = 0;\n\n cout << \"a: \" << a << \" b: \" << b << \"\\n\";\n\n a++;\n b++;\n}\n\nIn the above, the value output for b will always be 0, as it's declared inside the loop and is being reinitialized each time, whereas a will maintain its value and get incremented each iteration. If b were an object, rather than an int, its constructor and destructor would get called each iteration.\n", "Is num1 the variable you're having trouble with? This line:\ncin >> num1;\n\nis setting num1 to the value input by the user. So the value calculated for it in the previous run through the loop is being overwritten each time by the new input.\n", "I'm not sure I understand your question. In C any data that's not overwritten is carried over into the next iteration of the loop, and imagine that C++ works much the same way.\n", "Do you understand how when you say \"num1\" you're referring to the same variable each time, and that each time you change num1 you replace the previous value?\n" ]
[ 2, 2, 1, 1 ]
[]
[]
[ "c++" ]
stackoverflow_0000036114_c++.txt
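A small hedged sketch of the fix the answers above point toward: keep the running total in a variable that is only ever accumulated into, and read input into a separate variable so that cin never overwrites the total. This is illustrative, not the asker's exact program.

#include <iostream>
using namespace std;

int main() {
    int total = 0;  // declared outside the loop, so it keeps its value
    int input = 0;

    while (total < 10) {
        cout << "Enter a number: ";
        cin >> input;    // only 'input' is overwritten on each pass
        total += input;  // 'total' carries over between iterations
        cout << "Total is now: " << total << endl;
    }
    return 0;
}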
Q: after opening target in new window - new window cannot be closed I've got a page with an control - the link is to a gif file. Right clicking on the link (in IE7) and selecting "open target in new window" correctly displays the image. However I can't then close the new IE window. MORE INFO: Works OK in Firefox 3 What might I be doing wrong? TIA Tom
after opening target in new window - new window cannot be closed
I've got a page with an control - the link is to a gif file. Right clicking on the link (in IE7) and selecting "open target in new window" correctly displays the image. However I can't then close the new IE window. MORE INFO: Works OK in Firefox 3 What might I be doing wrong? TIA Tom
[ "There isn't really something you can do wrong to prevent a window from being closed on the client. \nMy guess is this is a problem with the system installation. \nTest this again using another browser on the same computer, and then on another computer. \n" ]
[ 1 ]
[]
[]
[ "asp.net" ]
stackoverflow_0000038151_asp.net.txt
Q: How do I unit test a WCF service? We have a whole bunch of DLLs that give us access to our database and other applications and services. We've wrapped these DLLs with a thin WCF service layer which our clients then consume. I'm a little unsure on how to write unit tests that only test the WCF service layer. Should I just write unit tests for the DLLs, and integration tests for the WCF services? I'd appreciate any wisdom... I know that if my unit tests actually go to the database they won't actually be true unit tests. I also understand that I don't really need to test the WCF service host in a unit test. So, I'm confused about exactly what to test and how. A: If you want to unit test your WCF service classes make sure you design them with loose coupling in mind so you can mock out each dependency as you only want to test the logic inside the service class itself. For example, in the below service I break out my data access repository using "Poor Man's Dependency Injection". Public Class ProductService Implements IProductService Private mRepository As IProductRepository Public Sub New() mRepository = New ProductRepository() End Sub Public Sub New(ByVal repository As IProductRepository) mRepository = repository End Sub Public Function GetProducts() As System.Collections.Generic.List(Of Product) Implements IProductService.GetProducts Return mRepository.GetProducts() End Function End Class On the client you can mock the WCF service itself using the interface of the service contract. <TestMethod()> _ Public Sub ShouldPopulateProductsListOnViewLoadWhenPostBackIsFalse() mMockery = New MockRepository() mView = DirectCast(mMockery.Stub(Of IProductView)(), IProductView) mProductService = DirectCast(mMockery.DynamicMock(Of IProductService)(), IProductService) mPresenter = New ProductPresenter(mView, mProductService) Dim ProductList As New List(Of Product)() ProductList.Add(New Product) Using mMockery.Record() SetupResult.For(mView.PageIsPostBack).Return(False).Repeat.Once() Expect.Call(mProductService.GetProducts()).Return(ProductList).Repeat.Once() End Using Using mMockery.Playback() mPresenter.OnViewLoad() End Using 'Verify that we hit the service dependency during the method when postback is false Assert.AreEqual(1, mView.Products.Count) mMockery.VerifyAll() End Sub A: It depends on what the thin WCF service does. If it's really thin and there's no interesting code there, don't bother unit testing it. Don't be afraid to not unit test something if there's no real code there. If the test cannot be at least one level simpler than the code under test, don't bother. If the code is dumb, the test will also be dumb. You don't want to have more dumb code to maintain. If you can have tests that go all the way to the db then great! It's even better. It's not a "true unit test"? Not a problem at all. A: The consumer of your service doesn't care what's underneath your service. To really test your service layer, I think your layer needs to go down to DLLs and the database and write at least a CRUD test.
How do I unit test a WCF service?
We have a whole bunch of DLLs that give us access to our database and other applications and services. We've wrapped these DLLs with a thin WCF service layer which our clients then consume. I'm a little unsure on how to write unit tests that only test the WCF service layer. Should I just write unit tests for the DLLs, and integration tests for the WCF services? I'd appreciate any wisdom... I know that if my unit tests actually go to the database they won't actually be true unit tests. I also understand that I don't really need to test the WCF service host in a unit test. So, I'm confused about exactly what to test and how.
[ "If you want to unit test your WCF service classes make sure you design them with loose coupling in mind so you can mock out each dependancy as you only want to test the logic inside the service class itself.\nFor example, in the below service I break out my data access repository using \"Poor Man's Dependency Injection\".\nPublic Class ProductService\n Implements IProductService\n\n Private mRepository As IProductRepository\n\n Public Sub New()\n mRepository = New ProductRepository()\n End Sub\n\n Public Sub New(ByVal repository As IProductRepository)\n mRepository = repository\n End Sub\n\n Public Function GetProducts() As System.Collections.Generic.List(Of Product) Implements IProductService.GetProducts\n Return mRepository.GetProducts()\n End Function\nEnd Class\n\nOn the client you can mock the WCF service itself using the interface of the service contract.\n<TestMethod()> _\nPublic Sub ShouldPopulateProductsListOnViewLoadWhenPostBackIsFalse()\n mMockery = New MockRepository()\n mView = DirectCast(mMockery.Stub(Of IProductView)(), IProductView)\n mProductService = DirectCast(mMockery.DynamicMock(Of IProductService)(), IProductService)\n mPresenter = New ProductPresenter(mView, mProductService)\n Dim ProductList As New List(Of Product)()\n ProductList.Add(New Product)\n Using mMockery.Record()\n SetupResult.For(mView.PageIsPostBack).Return(False).Repeat.Once()\n Expect.Call(mProductService.GetProducts()).Return(ProductList).Repeat.Once()\n End Using\n Using mMockery.Playback()\n mPresenter.OnViewLoad()\n End Using\n 'Verify that we hit the service dependency during the method when postback is false\n Assert.AreEqual(1, mView.Products.Count)\n mMockery.VerifyAll()\nEnd Sub\n\n", "It depends on what the thin WCF service does. If it's really thin and there's no interesting code there, don't bother unit testing it. Don't be afraid to not unit test something if there's no real code there. If the test cannot be at least one level simpler then the code under the test, don't bother. If the code is dumb, the test will also be dumb. You don't want to have more dumb code to maintain.\nIf you can have tests that go all the way to the db then great! It's even better. It's not a \"true unit test?\" Not a problem at all. \n", "The consumer of your service doesn't care what's underneath your service.\nTo really test your service layer, I think your layer needs to go down to DLLs and the database and write at least CRUD test. \n" ]
[ 7, 7, 4 ]
[]
[]
[ "unit_testing", "wcf" ]
stackoverflow_0000037375_unit_testing_wcf.txt
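As a C# counterpart to the VB sample in the answers above, here is a hedged sketch of the same loose-coupling idea: constructor injection plus a hand-rolled fake, so the service logic is tested without any WCF hosting or database. All type names here are hypothetical.

using System.Collections.Generic;
using System.Diagnostics;

// Hypothetical contract mirroring the VB example's repository.
public interface IProductRepository
{
    List<string> GetProducts();
}

public class ProductService
{
    private readonly IProductRepository _repository;

    // Inject the dependency so tests never touch WCF hosting or the database.
    public ProductService(IProductRepository repository)
    {
        _repository = repository;
    }

    public List<string> GetProducts()
    {
        return _repository.GetProducts();
    }
}

// Hand-rolled fake; no mocking framework needed for a case this simple.
public class FakeProductRepository : IProductRepository
{
    public List<string> GetProducts()
    {
        return new List<string> { "widget" };
    }
}

public static class ProductServiceTests
{
    public static void ShouldReturnProductsFromRepository()
    {
        ProductService service = new ProductService(new FakeProductRepository());
        Debug.Assert(service.GetProducts().Count == 1);
    }
}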
Q: C#.Net: Why is my Process.Start() hanging? I'm trying to run a batch file, as another user, from my web app. For some reason, the batch file hangs! I can see "cmd.exe" running in the task manager, but it just sits there forever, unable to be killed, and the batch file is not running. Here's my code: SecureString password = new SecureString(); foreach (char c in "mypassword".ToCharArray()) password.AppendChar(c); ProcessStartInfo psi = new ProcessStartInfo(); psi.WorkingDirectory = @"c:\build"; psi.FileName = Environment.SystemDirectory + @"\cmd.exe"; psi.Arguments = "/q /c build.cmd"; psi.UseShellExecute = false; psi.UserName = "builder"; psi.Password = password; Process.Start(psi); If you didn't guess, this batch file builds my application (a different application than the one that is executing this command). The Process.Start(psi); line returns immediately, as it should, but the batch file just seems to hang, without executing. Any ideas? EDIT: See my answer below for the contents of the batch file. The output.txt never gets created. I added these lines: psi.RedirectStandardOutput = true; Process p = Process.Start(psi); String outp = p.StandardOutput.ReadLine(); and stepped through them in debug mode. The code hangs on the ReadLine(). I'm stumped! A: I believe I've found the answer. It seems that Microsoft, in all their infinite wisdom, has blocked batch files from being executed by IIS in Windows Server 2003. Brenden Tompkins has a work-around here: http://codebetter.com/blogs/brendan.tompkins/archive/2004/05/13/13484.aspx That won't work for me, because my batch file uses IF and GOTO, but it would definitely work for simple batch files. A: Why not just do all the work in C# instead of using batch files? I was bored so I wrote this real quick, it's just an outline of how I would do it since I don't know what the command line switches do or the file paths. using System; using System.IO; using System.Text; using System.Security; using System.Diagnostics; namespace asdf { class StackoverflowQuestion { private const string MSBUILD = @"path\to\msbuild.exe"; private const string BMAIL = @"path\to\bmail.exe"; private const string WORKING_DIR = @"path\to\working_directory"; private string stdout; private Process p; public void DoWork() { // build project StartProcess(MSBUILD, "myproject.csproj /t:Build", true); } public void StartProcess(string file, string args, bool redirectStdout) { SecureString password = new SecureString(); foreach (char c in "mypassword".ToCharArray()) password.AppendChar(c); ProcessStartInfo psi = new ProcessStartInfo(); p = new Process(); psi.WindowStyle = ProcessWindowStyle.Hidden; psi.WorkingDirectory = WORKING_DIR; psi.FileName = file; psi.UseShellExecute = false; psi.RedirectStandardOutput = redirectStdout; psi.UserName = "builder"; psi.Password = password; p.StartInfo = psi; p.EnableRaisingEvents = true; p.Exited += new EventHandler(p_Exited); p.Start(); if (redirectStdout) { stdout = p.StandardOutput.ReadToEnd(); } } void p_Exited(object sender, EventArgs e) { if (p.ExitCode != 0) { // failed StringBuilder args = new StringBuilder(); args.Append("-s k2smtpout.secureserver.net "); args.Append("-f build@example.com "); args.Append("-t josh@example.com "); args.Append("-a \"Build failed.\" "); args.AppendFormat("-m {0} -h", stdout); // send email StartProcess(BMAIL, args.ToString(), false); } } } } A: Without seeing the build.cmd it's hard to tell what is going on, however, you should build the path using Path.Combine(arg1, arg2); It's the correct way to build a path. Path.Combine( Environment.SystemDirectory, "cmd.exe" ); I don't remember now but don't you have to set UseShellExecute = true? A: Another possibility to "debug" it is to use standardoutput and then read from it: psi.RedirectStandardOutput = true; Process proc = Process.Start(psi); String whatever = proc.StandardOutput.ReadLine(); A: In order to "see" what's going on, I'd suggest you transform the process into something more interactive (turn off Echo off) and put some "prints" to see if anything is actually happening. What is in the output.txt file after you run this? Does the bmail actually execute? Put some prints after/before to see what's going on. Also add "@" to the arguments, just in case: psi.Arguments = @"/q /c build.cmd"; It has to be something very simple :) A: My guess would be that the build.cmd is waiting for some sort of user-interaction/reply. If you log the output of the command with the "> logfile.txt" operator at the end, it might help you find the problem. A: Here's the contents of build.cmd: @echo off set path=C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727;%path% msbuild myproject.csproj /t:Build > output.txt IF NOT ERRORLEVEL 1 goto :end :error bmail -s k2smtpout.secureserver.net -f build@example.com -t josh@example.com -a "Build failed." -m output.txt -h :end del output.txt As you can see, I'm careful not to output anything. It all goes to a file that gets emailed to me if the build happens to fail. I've actually been running this file as a scheduled task nightly for quite a while now. I'm trying to build a web app that allows me to run it on demand. Thanks for everyone's help so far! The Path.Combine tip was particularly useful. A: I think cmd.exe hangs if the parameters are incorrect. If the batch executes correctly then I would just shell execute it like this instead. ProcessStartInfo psi = new ProcessStartInfo(); Process p = new Process(); psi.WindowStyle = ProcessWindowStyle.Hidden; psi.WorkingDirectory = @"c:\build"; psi.FileName = @"C:\build\build.cmd"; psi.UseShellExecute = true; psi.UserName = "builder"; psi.Password = password; p.StartInfo = psi; p.Start(); Also it could be that cmd.exe just can't find build.cmd so why not give the full path to the file? A: What are the endlines of your batch? If the code hangs on ReadLine, then the problem might be that it's unable to read the batch file…
C#.Net: Why is my Process.Start() hanging?
I'm trying to run a batch file, as another user, from my web app. For some reason, the batch file hangs! I can see "cmd.exe" running in the task manager, but it just sits there forever, unable to be killed, and the batch file is not running. Here's my code: SecureString password = new SecureString(); foreach (char c in "mypassword".ToCharArray()) password.AppendChar(c); ProcessStartInfo psi = new ProcessStartInfo(); psi.WorkingDirectory = @"c:\build"; psi.FileName = Environment.SystemDirectory + @"\cmd.exe"; psi.Arguments = "/q /c build.cmd"; psi.UseShellExecute = false; psi.UserName = "builder"; psi.Password = password; Process.Start(psi); If you didn't guess, this batch file builds my application (a different application than the one that is executing this command). The Process.Start(psi); line returns immediately, as it should, but the batch file just seems to hang, without executing. Any ideas? EDIT: See my answer below for the contents of the batch file. The output.txt never gets created. I added these lines: psi.RedirectStandardOutput = true; Process p = Process.Start(psi); String outp = p.StandardOutput.ReadLine(); and stepped through them in debug mode. The code hangs on the ReadLine(). I'm stumped!
[ "I believe I've found the answer. It seems that Microsoft, in all their infinite wisdom, has blocked batch files from being executed by IIS in Windows Server 2003. Brenden Tompkins has a work-around here:\nhttp://codebetter.com/blogs/brendan.tompkins/archive/2004/05/13/13484.aspx\nThat won't work for me, because my batch file uses IF and GOTO, but it would definitely work for simple batch files.\n", "Why not just do all the work in C# instead of using batch files?\nI was bored so i wrote this real quick, it's just an outline of how I would do it since I don't know what the command line switches do or the file paths.\nusing System;\nusing System.IO;\nusing System.Text;\nusing System.Security;\nusing System.Diagnostics;\n\nnamespace asdf\n{\n class StackoverflowQuestion\n {\n private const string MSBUILD = @\"path\\to\\msbuild.exe\";\n private const string BMAIL = @\"path\\to\\bmail.exe\";\n private const string WORKING_DIR = @\"path\\to\\working_directory\";\n\n private string stdout;\n private Process p;\n\n public void DoWork()\n {\n // build project\n StartProcess(MSBUILD, \"myproject.csproj /t:Build\", true);\n }\n\n public void StartProcess(string file, string args, bool redirectStdout)\n {\n SecureString password = new SecureString();\n foreach (char c in \"mypassword\".ToCharArray())\n password.AppendChar(c);\n\n ProcessStartInfo psi = new ProcessStartInfo();\n p = new Process();\n psi.WindowStyle = ProcessWindowStyle.Hidden;\n psi.WorkingDirectory = WORKING_DIR;\n psi.FileName = file;\n psi.UseShellExecute = false;\n psi.RedirectStandardOutput = redirectStdout;\n psi.UserName = \"builder\";\n psi.Password = password;\n p.StartInfo = psi;\n p.EnableRaisingEvents = true;\n p.Exited += new EventHandler(p_Exited);\n p.Start();\n\n if (redirectStdout)\n {\n stdout = p.StandardOutput.ReadToEnd();\n }\n }\n\n void p_Exited(object sender, EventArgs e)\n {\n if (p.ExitCode != 0)\n {\n // failed\n StringBuilder args = new StringBuilder();\n args.Append(\"-s k2smtpout.secureserver.net \");\n args.Append(\"-f build@example.com \");\n args.Append(\"-t josh@example.com \");\n args.Append(\"-a \\\"Build failed.\\\" \");\n args.AppendFormat(\"-m {0} -h\", stdout);\n\n // send email\n StartProcess(BMAIL, args.ToString(), false);\n }\n }\n }\n}\n\n", "Without seeing the build.cmd it's hard to tell what is going on, however, you should build the path using Path.Combine(arg1, arg2); It's the correct way to build a path. \nPath.Combine( Environment.SystemDirectory, \"cmd.exe\" );\n\nI don't remember now but don't you have to set UseShellExecute = true ?\n", "Another possibility to \"debug\" it is to use standardoutput and then read from it:\npsi.RedirectStandardOutput = True;\nProcess proc = Process.Start(psi);\nString whatever = proc.StandardOutput.ReadLine();\n\n", "In order to \"see\" what's going on, I'd suggest you transform the process into something more interactive (turn off Echo off) and put some \"prints\" to see if anything is actually happening. What is in the output.txt file after you run this?\nDoes the bmail actually executes?\nPut some prints after/before to see what's going on. \nAlso add \"@\" to the arguments, just in case: \npsi.Arguments = @\"/q /c build.cmd\";\n\nIt has to be something very simple :)\n", "My guess would be that the build.cmd is waiting for some sort of user-interaction/reply. 
If you log the output of the command with the \"> logfile.txt\" operator at the end, it might help you find the problem.\n", "Here's the contents of build.cmd:\n@echo off\nset path=C:\\WINDOWS\\Microsoft.NET\\Framework\\v2.0.50727;%path%\n\nmsbuild myproject.csproj /t:Build > output.txt\nIF NOT ERRORLEVEL 1 goto :end\n\n:error\nbmail -s k2smtpout.secureserver.net -f build@example.com -t josh@example.com -a \"Build failed.\" -m output.txt -h\n\n:end\ndel output.txt\n\nAs you can see, I'm careful not to output anything. It all goes to a file that gets emailed to me if the build happens to fail. I've actually been running this file as a scheduled task nightly for quite a while now. I'm trying to build a web app that allows me to run it on demand.\nThanks for everyone's help so far! The Path.Combine tip was particularly useful.\n", "I think cmd.exe hangs if the parameters are incorrect.\nIf the batch executes correctly then I would just shell execute it like this instead.\nProcessStartInfo psi = new ProcessStartInfo();\nProcess p = new Process();\npsi.WindowStyle = ProcessWindowStyle.Hidden;\npsi.WorkingDirectory = @\"c:\\build\";\npsi.FileName = @\"C:\\build\\build.cmd\";\npsi.UseShellExecute = true;\npsi.UserName = \"builder\";\npsi.Password = password;\np.StartInfo = psi;\np.Start();\n\nAlso it could be that cmd.exe just can't find build.cmd so why not give the full path to the file?\n", "What are the endlines of you batch? If the code hangs on ReadLine, then the problem might be that it's unable to read the batch file…\n" ]
[ 5, 3, 2, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ ".net", "c#" ]
stackoverflow_0000034183_.net_c#.txt
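One more hedged note on the ReadLine() hang in the record above: with redirected output the parent must drain the child's stdout, and should do so before waiting, or a full pipe buffer can block the child while ReadLine() blocks the parent. A minimal sketch of the usual pattern (paths and arguments are illustrative):

using System;
using System.Diagnostics;

class BuildRunner
{
    static void Main()
    {
        ProcessStartInfo psi = new ProcessStartInfo("cmd.exe", "/q /c build.cmd");
        psi.WorkingDirectory = @"c:\build";
        psi.UseShellExecute = false;        // required for redirection
        psi.RedirectStandardOutput = true;  // redirecting stderr as well would call for async reads

        using (Process p = Process.Start(psi))
        {
            string output = p.StandardOutput.ReadToEnd(); // drain the pipe first...
            p.WaitForExit();                              // ...then wait for exit
            Console.WriteLine(output);
        }
    }
}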
Q: Integrating a custom gui framework with the VS designer Imagine you homebrew a custom gui framework that doesn't use windows handles (compact framework, so please don't argue with "whys"). One of the main disadvantages of developing such a framework is that you lose compatibility with the winform designer. So my question is to all of you who know a lot about VS customisation, would there be a clever mechanism by which one could incorporate the gui framework into the designer and get it to spit out your custom code instead of the standard windows stuff in the InitializeComponent() method? A: I recently watched a video of these guys who built a WoW AddOn designer for Visual Studio. They overcame the task of getting their completely custom controls to render correctly in the designer. I'm not sure if this is exactly what you need, but might be worth looking at. It's open-source: http://www.codeplex.com/WarcraftAddOnStudio A: I've also since discovered that DXCore from DevExpress is a tool that simplifies plugin development. The default implementation wouldn't let me dock as document (central) but regardless one can still easily generate a plugin with it that can compile a file on the fly and render the contents of it which may well do the job for me. :)
Integrating a custom gui framework with the VS designer
Imagine you homebrew a custom gui framework that doesn't use windows handles (compact framework, so please don't argue with "whys"). One of the main disadvantages of developing such a framework is that you lose compatibility with the winform designer. So my question is to all of you who know a lot about VS customisation, would there be a clever mechanism by which one could incorporate the gui framework into the designer and get it to spit out your custom code instead of the standard windows stuff in the InitializeComponent() method?
[ "I recently watched a video of these guys who built a WoW AddOn designer for Visual Studio.\nThey overcame the task of getting their completely custom controls to render correctly in the designer. I'm not sure if this is exactly what you need, but might be worth looking at. It's open-source:\nhttp://www.codeplex.com/WarcraftAddOnStudio\n", "I've also since discovered that DXCore from DevExpress is a tool that simplifies plugin development. The default implementation wouldn't let me dock as document (central) but regardless one can still easily generate a plugin with it that can compile a file on the fly and render the contents of it which may well do the job for me. :)\n" ]
[ 1, 1 ]
[]
[]
[ "gui_designer", "visual_studio" ]
stackoverflow_0000033233_gui_designer_visual_studio.txt
Q: Develop on local Oracle instance I want our team to develop against local instances of an Oracle database. With MS SQL, I can use SQL Express Edition. What are my options? A: Oracle has an express edition as well. I believe it is more limited though (IIRC, you can only have one database on an instance) Oracle XE A: I have had a lot of success using Oracle 10g Express Edition. It comes with Oracle Application Express which allows the simple admin and creation of software via a web interface. It is limited to 4Gb of Disk Space, 1Gb of Ram and will only use 1 processor. It's free and in my experience has been 100% reliable. It can easily be hosted within a Virtual machine. Also Oracle SQL Developer is a cross platform application that can be used with any version of Oracle and is also free. Oracle 10g is superb. Go for it :-) A: I'm happy with Oracle XE for development purposes. I do have this piece of wisdom to share; if you're having problems like ORA-12519: TNS:no appropriate service handler found or ORA-12560: TNS:protocol adapter error from time to time then try to change your PROCESSES parameter, logon to Oracle using sys as sysdba and execute the following: ALTER SYSTEM SET PROCESSES=150 SCOPE=SPFILE; After changing the PROCESSES parameter restart your Oracle service. A: Oracle allows developers to download and use Oracle for free for the purpose of developing software (at least for the initial prototype, best to read the license terms). Downloads here. A: We ended up using Oracle XE. Install client, install express, reboot, it just works. A: I don't recommend Oracle XE. My co-workers and I have been doing a project in Oracle and got severely tripped up after trying to use XE for our local development instances. The database worked fine until we started running local stress tests, at which point it started dropping connections. I don't know whether this is an intentional, documented limitation or if perhaps we each just hit a weird bug, but I strongly recommend that you stay away from XE. When we both switched over to the full version, our problems immediately went away. Also, Oracle doesn't require any kind of licensing confirmation for the full server; you have to click something to say that you have indeed acquired a license, but it doesn't make you prove it. So if you indeed have a license to use Oracle, there's no reason why you can't just install the full version on your development machines.
Develop on local Oracle instance
I want our team to develop against local instances of an Oracle database. With MS SQL, I can use SQL Express Edition. What are my options?
[ "Oracle has an express edition as well. I believe it is more limited though (IIRC, you can only have one database on an instance)\nOracle XE\n", "I have had a lot of success using Oracle 10g Express Edition. It comes with Oracle Aplication Express which allows the simple admin and creation of software via a web interface. It is limited to 4Gb of Disk Space, 1Gb of Ram and will only use 1 processor.\nIt's free and in my experience has been 100% reliable. It can easily be hosted within a Virtual machine.\nAlso Oracle SQL Developer is a cross platform application that can be used with any version of Oracle and is also free. Oracle 10g is superb. Go for it :-)\n", "I'm happy with Oracle XE for development purposes.\nI do have this piece of wisdow to share; if you're having problems like ORA-12519: TNS:no appropriate service handler found or ORA-12560: TNS:protocol adapter error from time to time then try to change your PROCESSES parameter, logon to Oracle using sys as sysdba and execute the following:\nALTER SYSTEM SET PROCESSES=150 SCOPE=SPFILE;\n\nAfter changing the PROCESSES parameter restart your Oracle service. \n", "Oracle allows developers to download and use Oracle for free for the purpose of developing software (at least for the initial prototype, best to read the license terms). Downloads here.\n", "We ended up using Oracle XE. Install client, install express, reboot, it just works.\n", "I don't recommend Oracle XE. My co-workers and I have been doing a project in Oracle and got severely tripped up after trying to use XE for our local development instances. The database worked fine until we started running local stress tests, at which point it started dropping connections.\nI don't know whether this is an intentional, documented limitation or if perhaps we each just hit a weird bug, but I strongly recommend that you stay away from XE. When we both switched over to the full version, our problems immediately went away.\nAlso, Oracle doesn't require any kind of licensing confirmation for the full server; you have to click something to say that you have indeed acquired a license, but it doesn't make you prove it. So if you indeed have a license to use Oracle, there's no reason why you can't just install the full version on your development machines.\n" ]
[ 21, 8, 6, 4, 2, 0 ]
[]
[]
[ "oracle" ]
stackoverflow_0000026002_oracle.txt
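To make the PROCESSES tip above reproducible, here is a hedged SQL*Plus session sketch; PROCESSES is a static parameter, so the change only takes effect after the instance restarts.

-- Connect as SYS first, e.g.: sqlplus / as sysdba
SHOW PARAMETER processes

ALTER SYSTEM SET PROCESSES=150 SCOPE=SPFILE;

-- Static parameter: bounce the instance for it to take effect
SHUTDOWN IMMEDIATE
STARTUP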
Q: Link to samba shares in html First off if you're unaware, samba or smb == Windows file sharing, \\computer\share etc. I have a bunch of different files on a bunch of different computers. It's mostly media and there is quite a bit of it. I'm looking into various ways of consolidating this into something more manageable. Currently there are a few options I'm looking at, the most insane of which is some kind of samba share indexer that would generate a list of things shared on the various samba servers I tell it about and upload them to a website which could then be searched and browsed. It's a cheap solution, OK? Ignoring the fact that the idea is obviously a couple of methods short of a class, do you chaps know of any way to link to samba file shares in html in a cross-browser way? In windows one does \\computer\share, in linux one does smb://computer/share, neither of which work afaik from browsers that aren't also used as file managers (e.g. any browser that isn't Internet Explorer). Some Clarifications The computers used to access this website are a mixture of Windows (XP) and Linux (Ubuntu) with a mixture of browsers (Opera and Firefox). In linux entering smb://computer/share only seems to work in Nautilus (and presumably Konqueror / Dolphin for you KDE3.5/4 people). It doesn't work in Firefox or Opera (Firefox does nothing, Opera complains the URL is invalid). I don't have a Windows box handy atm so I'm unsure if \\computer\share works in anything apart from IE (e.g. Firefox / Opera). If you have a better idea for consolidating a bunch of random samba shares (it certainly can't get much worse than mine ;-)) it's worth knowing that there is no guarantee that any of the servers I would be wanting to index / consolidate would be up at any particular moment. Moreover, I wouldn't want the knowledge of what they have shared lost or hidden just because they weren't available. I would want to know that they share 'foo' but they are currently down. A: Hmm, protocol handlers look interesting. As Mark said, in Windows protocol handlers can be dealt with at the OS level Protocol handlers can also be done at the browser level (which is preferred, as it is cross platform and doesn't involve installing anything). Summary of how it works in Firefox Summary of how it works in Opera A: I'd probably just setup Apache on the SAMBA servers and let it serve the files via HTTP. That'd give you a nice autoindex default page too, and you could just wget and concatenate each index for your master list. A couple of other thoughts: file://server/share/file is the defacto Windows way of doing it You can register protocol handlers in Windows, so you could register smb and redirect it to file://. I'd suspect GNOME/KDE/etc. would offer the same. A: To make the links work cross platform you could look at the User Agent either in a CGI script or in JavaScript and update your URLs appropriately. Alternatively, if you want to consolidate SMB shares you could try using Microsoft DFS (which also works with Samba). You set up a DFS root and tell it about all the other SMB/Samba shares you have in your environment. Clients then connect to the root and see all the shares as if they were hosted on that single root machine; the root silently redirects clients to the correct system when they open a share. Think of it as like symbolic links or a virtual file system for SMB. It would solve your browsing problem. I'm not sure if it would solve your searching one.
Link to samba shares in html
First off if you're unaware, samba or smb == Windows file sharing, \\computer\share etc. I have a bunch of different files on a bunch of different computers. It's mostly media and there is quite a bit of it. I'm looking into various ways of consolidating this into something more manageable. Currently there are a few options I'm looking at, the most insane of which is some kind of samba share indexer that would generate a list of things shared on the various samba servers I tell it about and upload them to a website which could then be searched and browsed. It's a cheap solution, OK? Ignoring the fact that the idea is obviously a couple of methods short of a class, do you chaps know of any way to link to samba file shares in html in a cross-browser way? In windows one does \\computer\share, in linux one does smb://computer/share, neither of which work afaik from browsers that aren't also used as file managers (e.g. any browser that isn't Internet Explorer). Some Clarifications The computers used to access this website are a mixture of Windows (XP) and Linux (Ubuntu) with a mixture of browsers (Opera and Firefox). In linux entering smb://computer/share only seems to work in Nautilus (and presumably Konqueror / Dolphin for you KDE3.5/4 people). It doesn't work in Firefox or Opera (Firefox does nothing, Opera complains the URL is invalid). I don't have a Windows box handy atm so I'm unsure if \\computer\share works in anything apart from IE (e.g. Firefox / Opera). If you have a better idea for consolidating a bunch of random samba shares (it certainly can't get much worse than mine ;-)) it's worth knowing that there is no guarantee that any of the servers I would be wanting to index / consolidate would be up at any particular moment. Moreover, I wouldn't want the knowledge of what they have shared lost or hidden just because they weren't available. I would want to know that they share 'foo' but they are currently down.
[ "Hmm, protocol handlers look interesting.\nAs Mark said, in Windows protocol handlers can be dealt with at the OS level\nProtocol handlers can also be done at the browser level (which is preferred, as it is cross platform and doesn't involve installing anything).\nSummary of how it works in Firefox\nSummary of how it works in Opera\n", "I'd probably just setup Apache on the SAMBA servers and let it serve the files via HTTP. That'd give you a nice autoindex default page too, and you could just wget and concatenate each index for your master list.\nA couple of other thoughts:\n\nfile://server/share/file is the defacto Windows way of doing it\nYou can register protocol handlers in Windows, so you could register smb and redirect it to file://. I'd suspect GNOME/KDE/etc. would offer the same.\n\n", "To make the links work cross platform you could look at the User Agent either in a CGI script or in JavaScript and update your URLs appropriately.\nAlternatively, if you want to consolidate SMB shares you could try using Microsoft DFS (which also works with Samba).\nYou set up a DFS root and tell it about all the other SMB/Samba shares you have in your environment. Clients then connect to the root and see all the shares as if they were hosted on that single root machine; the root silently redirects clients to the correct system when they open a share.\nThink of it as like symbolic links or a virtual file system for SMB.\nIt would solve your browsing problem. I'm not sure if it would solve your searching one.\n" ]
[ 6, 3, 1 ]
[]
[]
[ "html", "samba", "smb" ]
stackoverflow_0000037804_html_samba_smb.txt
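To illustrate the user-agent idea from the answers above, here is a hedged, purely illustrative snippet that picks a link scheme per platform; note that many browsers refuse to follow file:// or smb:// links from ordinary web pages no matter how they are written.

<script type="text/javascript">
// Illustrative only: choose a share URL scheme based on the client platform.
function shareLink(host, share) {
    var isWindows = navigator.userAgent.indexOf("Windows") !== -1;
    return isWindows ? "file://" + host + "/" + share
                     : "smb://" + host + "/" + share;
}
document.write('<a href="' + shareLink("computer", "share") + '">share</a>');
</script>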
Q: Can you set, or where is, the local document root? When opening a file from your hard drive into your browser, where is the document root? To illustrate, given the following HTML code, if the page is opened from the local machine (file:///) then where should the css file be for the browser to find it? <link href="/temp/test.css" rel="stylesheet" type="text/css" /> A: You can, but probably don't want to, set the document root on a per-file basis in the head of your file: <base href="my-root"> A: It depends on what browser you use, but Internet Explorer, for example, would take you to the root directory of your harddrive (eg. C:/), while browsers such as Firefox do nothing. A: On a Mac, the document root is what you see in the window that appears after you double click on the main hard drive icon on your desktop. The temp folder needs to be in there for a browser to find the CSS file as you have it written in your code. Actually, you could also write the code like this: <link href="file:///temp/test.css" rel="stylesheet" type="text/css" /> A: Eric, the document root is the folder in which your file is, wherever it may be. A: As far as local, static html goes, unless you specify it, most browsers will take the location of the html file you are viewing as the root. So any css put in there can just be referenced by its name only. The lazy way to get the correct reference for your css file is to open it in your browser. Then just grab the url that you see there - something like: file:///blah/test.css and copy that into your stylesheet link on your html: <link href="file:///blah/test.css" rel="stylesheet" type="text/css"> Either that or you can just take the url for the html file and amend it to refer to the stylesheet. Then your local page should load fine with the local stylesheet. A: If you're interested in setting the document root, you might look at getting a web server installed on your machine, or, if you already have one (like Apache or IIS), storing your project-in-development in the web root of that server (htdocs in Apache, not entirely sure in IIS). If you'd rather leave your files where they are, you can set up virtual hosts and even map them to addresses that you can type into your browser (for example, I have a local.mrwarshaw.com address that resolves to the web root of my personal site's development folder). If you're on Windows and don't want to mess around with setting up a server on your own, you could get a package like XAMPP or WAMPP, though bear in mind that those carry the extra weight of PHP and MySQL with them. Still, if you've got the space, they're a pretty easy drop-in development environment for your machine.
Can you set, or where is, the local document root?
When opening a file from your hard drive into your browser, where is the document root? To illustrate, given the following HTML code, if the page is opened from the local machine (file:///) then where should the css file be for the browser to find it? <link href="/temp/test.css" rel="stylesheet" type="text/css" />
[ "You can, but probably don't want to, set the document root on a per-file basis in the head of your file:\n\n<base href=\"my-root\">\n\n", "It depends on what browser you use, but Internet Explorer, for example, would take you to the root directory of your harddrive (eg. C:/), while browsers such as Firefox does nothing. \n", "On a Mac, the document root is what you see in the window that appears after you double click on the main hard drive icon on your desktop. The temp folder needs to be in there for a browser to find the CSS file as you have it written in your code. \nActually, you could also write the code like this:\n<link href=\"file:///temp/test.css\" rel=\"stylesheet\" type=\"text/css\" />\n\n", "Eric, the document root is the folder in which your file is, wherever it may be.\n", "As far as local, static html goes, unless you specify it, most browsers will take the location of the html file you are viewing as the root. So any css put in there can just be referenced by it's name only. \nThe lazy way to get the correct reference for your css file is to open it in your browser. Then just grab the url that you see there - something like: file:///blah/test.css and copy that into your stylesheet link on your html: <link href=\"file:///blah/test.css\" rel=\"stylesheet\" type=\"text/css\">\nEither that or you can just take the url for the html file and amend it to refer to the stylesheet.\nThen your local page should load fine with the local stylesheet.\n", "If you're interested in setting the document root, you might look at getting a web server installed on your machine, or, if you already have one (like Apache or IIS), storing your project-in-development in the web root of that server (htdocs in Apache, not entirely sure in IIS). If you'd rather leave your files where they are, you can set up virtual hosts and even map them to addresses that you can type into your browser (for example, I have a local.mrwarshaw.com address that resolves to the web root of my personal site's development folder).\nIf you're on Windows and don't want to mess around with setting up a server on your own, you could get a package like XAMPP or WAMPP, though bear in mind that those carry the extra weight of PHP and MySQL with them. Still, if you've got the space, they're a pretty easy drop-in development environment for your machine.\n" ]
[ 14, 2, 1, 0, 0, 0 ]
[]
[]
[ "css", "directory", "html" ]
stackoverflow_0000018920_css_directory_html.txt
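A small sketch combining the answers above: a base element makes relative URLs resolve against a folder of your choosing when the page is opened from disk. The path below is hypothetical.

<html>
  <head>
    <base href="file:///C:/projects/site/">
    <link href="temp/test.css" rel="stylesheet" type="text/css">
  </head>
  <body>...</body>
</html>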
Q: What is the best way to tell if an object is modified? I have an object that is mapped to a cookie as a serialized base-64 string. I only want to write out a new cookie if there are changes made to the object stored in the cookie on server-side. What I want to do is get a hash code when the object is pulled from the cookie/initialized and compare the original hash code to the hash code that exists just before I send the cookie header off to the client to ensure I don't have to re-serialize/send the cookie unless changes were made. I was going to override .NET's Object.GetHashCode() method, but I wasn't sure that this is the best way to go about checking if an object is modified. Are there any other ways I can check if an object is modified, or should I override the GetHashCode() method? Update I decided to accept @rmbarnes's answer as it had an interesting solution to the problem, and because I decided to use his advice at the end of his post and not check for modification. I'd still be interested to hear any other solutions anyone may have to my scenario however. A: GetHashCode() should always be in sync with Equals(), and Equals() isn't necessarily guaranteed to check for all of the fields in your object (there are certain situations where you want that to not be the case). Furthermore, GetHashCode() isn't guaranteed to return unique values for all possible object states. It's conceivable (though unlikely) that two object states could result in the same HashCode (which does, after all, only have an int's worth of possible states; see the Pigeonhole Principle for more details). If you can ensure that Equals() checks all of the appropriate fields, then you could possibly clone the object to record its state and then check it with Equals() against the new state to see if it's changed. BTW: Your mention of serialization gave me an idea. You could serialize the object, record it, and then when you check for the object changing, repeat the process and compare the serialized values. That would let you check for state changes without having to make any code changes to your object. However, this isn't a great solution, because: It's probably very inefficient It's prone to serialization changes in the object; you might get false positives on the object state change. A: At the end of the object's constructor you could serialize the object to a base 64 string just like the cookie stores it, and store this in a member variable. When you want to check if the cookie needs recreating, re-serialize the object and compare this new base 64 string against the one stored in a member variable. If it has changed, reset the cookie with the new value. Watch out for the gotcha - don't include the member variable storing the base 64 serialization in the serialization itself. I presume your language uses something like a sleep() function (this is how PHP does it) to serialize itself, so just make sure the member is not included in that function. This will always work because you are comparing the exact value you'd be saving in the cookie, and wouldn't need to override GetHashCode(), which sounds like it could have nasty consequences. All that said, I'd probably just drop the test and always reset the cookie; there can't be that much overhead in it when compared to doing the change check, and there's far less likelihood of bugs. A: I personally would say go with the plan you have.. A good hash code is the best way to see if an object is "as-is".. There's tons of hashing algorithms you can look at; check out the obvious Wikipedia page on hash functions and go from there.. Override GetHashCode and go for it! Just make sure ALL the elements of the information make up part of the hash :) A: Seems odd to me why you'd want to store the same object both server side and client side - especially if you're comparing them on each trip. I'd guess that deserializing the cookie and comparing it to the server side object would be equivalent in performance to just serializing the object again. But, if you wanted to do this, I'd compare the serialized server side object with the cookie's value and update accordingly. Worst case, you did the serialization for naught. Best case, you did a string compare. The alternative, deserializing and comparing the objects, has a worst case of deserializing, comparing n fields, and then serializing. Best case is deserializing and comparing n fields.
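A minimal C# sketch of the serialize-and-compare approach from the accepted answer, assuming the object is marked [Serializable]; TrackedObject and its fields are hypothetical, while BinaryFormatter and Convert.ToBase64String are standard .NET APIs:

using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
public class TrackedObject   // hypothetical example type
{
    public string Name;
    public int Quantity;

    [NonSerialized]          // keep the snapshot out of the serialization itself (the "gotcha" above)
    private string snapshot;

    public void TakeSnapshot()
    {
        snapshot = ToBase64();   // call this right after loading from the cookie
    }

    public bool IsModified()
    {
        return ToBase64() != snapshot;   // string compare against the stored snapshot
    }

    private string ToBase64()
    {
        using (var ms = new MemoryStream())
        {
            new BinaryFormatter().Serialize(ms, this);
            return Convert.ToBase64String(ms.ToArray());
        }
    }
}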
What is the best way to tell if an object is modified?
I have an object that is mapped to a cookie as a serialized base-64 string. I only want to write out a new cookie if there are changes made to the object stored in the cookie on server-side. What I want to do is get a hash code when the object is pulled from the cookie/initialized and compare the original hash code to the hash code that exists just before I send the cookie header off to the client to ensure I don't have to re-serialize/send the cookie unless changes were made. I was going to override .NET's Object.GetHashCode() method, but I wasn't sure that this is the best way to go about checking if an object is modified. Are there any other ways I can check if an object is modified, or should I override the GetHashCode() method? Update I decided to accept @rmbarnes's answer as it had an interesting solution to the problem, and because I decided to use his advice at the end of his post and not check for modification. I'd still be interested to hear any other solutions anyone may have to my scenario however.
[ "GetHashCode() should always be in sync with Equals(), and Equals() isn't necessarily guaranteed to check for all of the fields in your object (there's certain situations where you want that to not be the case).\nFurthermore, GetHashCode() isn't guaranteed to return unique values for all possible object states. It's conceivable (though unlikely) that two object states could result in the same HashCode (which does, after all, only have an int's worth of possible states; see the Pigeonhole Principle for more details).\nIf you can ensure that Equals() checks all of the appropriate fields, then you could possibly clone the object to record its state and then check it with Equals() against the new state to see if its changed.\nBTW: Your mention of serialization gave me an idea. You could serialize the object, record it, and then when you check for object changing, repeat the process and compare the serialized values. That would let you check for state changes without having to make any code changes to your object. However, this isn't a great solution, because:\n\nIt's probably very inefficient\nIt's prone to serialization changes in the object; you might get false positives on the object state change.\n\n", "At the end of the object's constructor you could serialize the object to a base 64 string just like the cookie stores it, and store this in a member variable. \nWhen you want to check if the cookie needs recreating, re - serialize the object and compare this new base 64 string against the one stored in a member variable. If it has changed, reset the cookie with the new value.\nWatch out for the gotcha - don't include the member variable storing the base 64 serialization in the serialization itself. I presume your language uses something like a sleep() function (is how PHP does it) to serialize itself, so just make sure the member is not included in that function.\nThis will always work because you are comparing the exact value you'd be saving in the cookie, and wouldn't need to override GetHashCode() which sounds like it could have nasty consequences.\nAll that said I'd probably just drop the test and always reset the cookie, can't be that much overhead in it when compared to doing the change check, and far less likelyhood of bugs.\n", "I personally would say go with the plan you have.. A good hash code is the best way to see if an object is \"as-is\".. Theres tons of hashing algorithms you can look at, check out the obvious Wikipedia page on hash functions and go from there..\nOverride GetHashCode and go for it! Just make sure ALL the elements of the information make up part of the hash :)\n", "Seems odd to me why you'd want to store the same object both server side and client side - especially if you're comparing them on each trip.\nI'd guess that deserializing the cookie and comparing it to the server side object would be equivalent in performance to just serializing the object again.\nBut, if you wanted to do this, I'd compare the serialized server side object with the cookie's value and update accordingly. Worst case, you did the serialization for naught. Best case, you did a string compare.\nThe alternative, deserializing and comparing the objects, has a worst case of deserializing, comparing n fields, and then serializing. Best case is deserializing and comparing n fields.\n" ]
[ 2, 1, 0, 0 ]
[]
[]
[ ".net", "c#" ]
stackoverflow_0000034809_.net_c#.txt
Q: Clone a control in Silverlight What's the best way to clone a control in Silverlight (including its children)? UPDATE Is there a better way in Silverlight 2? A: Here's a great thread about serializing and deserializing objects in Silverlight 1.1. As for a "best way," I'd say it would definitely be caching the xaml for the control and calling createFromXaml on it.
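In Silverlight 2 terms, the createFromXaml approach maps to XamlReader.Load. A hedged sketch, assuming you already have (or cached at creation time) the control's XAML as a string, since Silverlight has no built-in way to serialize a live control back to XAML:

using System.Windows.Controls;
using System.Windows.Markup;

// xaml is assumed to be the cached markup for the control
string xaml = "<Button xmlns=\"http://schemas.microsoft.com/winfx/2006/xaml/presentation\" Content=\"Click\" />";
Button clone = (Button)XamlReader.Load(xaml);   // builds a fresh tree, children included
containerPanel.Children.Add(clone);             // containerPanel is a hypothetical Panel in scope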
Clone a control in Silverlight
What's the best way to clone a control in Silverlight (including its children)? UPDATE Is there a better way in Silverlight 2?
[ "Here's a great thread about serializing and deserializing objects in Silverlight 1.1. \nAs for a \"best way,\" I'd say it would definitely be caching the xaml for the control and calling createFromXaml on it.\n" ]
[ 1 ]
[]
[]
[ "silverlight" ]
stackoverflow_0000038378_silverlight.txt
Q: Auto defines in C editors... Why? When Eclipse creates a new file (.c or .h file) in a C project the editor always auto creates a #define at the top of the file like this: If the file is named 'myCFile.c' there will be a #define at the start of the file like this #ifndef MYCFILE_C_ #define MYCFILE_C_ I have seen other editors do this as well (Codewright and SlikEdit I think). The #defines don't seem to do anything for the editor as I can just delete them without any problem, and I can't think of a reason why I would want to use them. Does anyone know why they are there? A: It's to guard against multiple definitions. A: Sometimes people include a whole .c file in other .c files (or even .h files), so it has the exact same purpose of preventing an include file from getting included multiple times and the compiler spitting out multiple definition errors. It is strange, though, that it would be the default behavior of an editor to put this in anything but a .h file. This would be a rarely needed feature. A: A more modern version of this is to use: #pragma once It is quite unusual to see this in a .c file, normally it is in the header files only. A: I think it's a throwback of C include issues, where multiple copies of the source would get included - unless you are meticulous with include chains (One file includes n others). Checking if a symbol is defined and including only if the symbol is defined - was a way out of this.
Auto defines in C editors... Why?
When Eclipse creates a new file (.c or .h file) in a C project the editor always auto creates a #define at the top of the file like this: If the file is named 'myCFile.c' there will be a #define at the start of the file like this #ifndef MYCFILE_C_ #define MYCFILE_C_ I have seen other editors do this as well (Codewright and SlikEdit I think). The #defines don't seem to do anything for the editor as I can just delete them without any problem, and I can't think of a reason why I would want to use them. Does anyone know why they are there?
[ "It's to guard against multiple definitions.\n", "Sometimes people include a whole .c file in other .c files (or even .h files), so it has the exact same purpose of preventing an include file from getting included multiple times and the compiler spitting out multiple definition errors.\nIt is strange, though, that it would be the default behavior of an editor to put this in anything but a .h file. This would be a rarely needed feature.\n", "A more modern version of this is to use:\n#pragma once\n\nIt is quite unusual to see this in a .c file, normally it is in the header files only.\n", "I think it's a throwback of C include issues, where multiple copies of the source would get included - unless you are meticulous with include chains (One file includes n others).\nChecking if a symbol is defined and including only if the symbol is defined - was a way out of this.\n" ]
[ 4, 2, 1, 0 ]
[]
[]
[ "c", "c_preprocessor", "eclipse", "header_files", "include_guards" ]
stackoverflow_0000037665_c_c_preprocessor_eclipse_header_files_include_guards.txt
Q: When should I mock? I have a basic understanding of mock and fake objects, but I'm not sure I have a feeling about when/where to use mocking - especially as it would apply to this scenario here. A: Mock objects are useful when you want to test interactions between a class under test and a particular interface. For example, we want to test that method sendInvitations(MailServer mailServer) calls MailServer.createMessage() exactly once, and also calls MailServer.sendMessage(m) exactly once, and no other methods are called on the MailServer interface. This is when we can use mock objects. With mock objects, instead of passing a real MailServerImpl, or a test TestMailServer, we can pass a mock implementation of the MailServer interface. Before we pass a mock MailServer, we "train" it, so that it knows what method calls to expect and what return values to return. At the end, the mock object asserts that all expected methods were called as expected. This sounds good in theory, but there are also some downsides. Mock shortcomings If you have a mock framework in place, you are tempted to use a mock object every time you need to pass an interface to the class under test. This way you end up testing interactions even when it is not necessary. Unfortunately, unwanted (accidental) testing of interactions is bad, because then you're testing that a particular requirement is implemented in a particular way, instead of testing that the implementation produced the required result. Here's an example in pseudocode. Let's suppose we've created a MySorter class and we want to test it: // the correct way of testing testSort() { testList = [1, 7, 3, 8, 2] MySorter.sort(testList) assert testList equals [1, 2, 3, 7, 8] } // incorrect, testing implementation testSort() { testList = [1, 7, 3, 8, 2] MySorter.sort(testList) assert that compare(1, 2) was called once assert that compare(1, 3) was not called assert that compare(2, 3) was called once .... } (In this example we assume that it's not a particular sorting algorithm, such as quick sort, that we want to test; in that case, the latter test would actually be valid.) In such an extreme example it's obvious why the latter example is wrong. When we change the implementation of MySorter, the first test does a great job of making sure we still sort correctly, which is the whole point of tests - they allow us to change the code safely. On the other hand, the latter test always breaks and it is actively harmful; it hinders refactoring. Mocks as stubs Mock frameworks often also allow less strict usage, where we don't have to specify exactly how many times methods should be called and what parameters are expected; they allow creating mock objects that are used as stubs. Let's suppose we have a method sendInvitations(PdfFormatter pdfFormatter, MailServer mailServer) that we want to test. The PdfFormatter object can be used to create the invitation. Here's the test: testInvitations() { // train as stub pdfFormatter = create mock of PdfFormatter let pdfFormatter.getCanvasWidth() returns 100 let pdfFormatter.getCanvasHeight() returns 300 let pdfFormatter.addText(x, y, text) returns true let pdfFormatter.drawLine(line) does nothing // train as mock mailServer = create mock of MailServer expect mailServer.sendMail() called exactly once // do the test sendInvitations(pdfFormatter, mailServer) assert that all pdfFormatter expectations are met assert that all mailServer expectations are met } In this example, we don't really care about the PdfFormatter object so we just train it to quietly accept any call and return some sensible canned return values for all methods that sendInvitation() happens to call at this point. How did we come up with exactly this list of methods to train? We simply ran the test and kept adding the methods until the test passed. Notice that we trained the stub to respond to a method without having a clue why it needs to call it; we simply added everything that the test complained about. We are happy, the test passes. But what happens later, when we change sendInvitations(), or some other class that sendInvitations() uses, to create fancier PDFs? Our test suddenly fails because now more methods of PdfFormatter are called and we didn't train our stub to expect them. And usually it's not only one test that fails in situations like this, it's any test that happens to use, directly or indirectly, the sendInvitations() method. We have to fix all those tests by adding more trainings. Also notice that we can't remove methods that are no longer needed, because we don't know which of them are not needed. Again, it hinders refactoring. Also, the readability of the test suffered terribly; there's lots of code there that we wrote not because we wanted to, but because we had to; it's not us who want that code there. Tests that use mock objects look very complex and are often difficult to read. The tests should help the reader understand how the class under test should be used, thus they should be simple and straightforward. If they are not readable, nobody is going to maintain them; in fact, it's easier to delete them than to maintain them. How to fix that? Easily: Try using real classes instead of mocks whenever possible. Use the real PdfFormatterImpl. If it's not possible, change the real classes to make it possible. Not being able to use a class in tests usually points to some problems with the class. Fixing the problems is a win-win situation - you fixed the class and you have a simpler test. On the other hand, not fixing it and using mocks is a no-win situation - you didn't fix the real class and you have more complex, less readable tests that hinder further refactorings. Try creating a simple test implementation of the interface instead of mocking it in each test, and use this test class in all your tests. Create TestPdfFormatter that does nothing. That way you can change it once for all tests and your tests are not cluttered with lengthy setups where you train your stubs. All in all, mock objects have their use, but when not used carefully, they often encourage bad practices, testing implementation details, hinder refactoring and produce difficult to read and difficult to maintain tests. For some more details on shortcomings of mocks see also Mock Objects: Shortcomings and Use Cases. A: A unit test should test a single codepath through a single method. When the execution of a method passes outside of that method, into another object, and back again, you have a dependency. When you test that code path with the actual dependency, you are not unit testing; you are integration testing. While that's good and necessary, it isn't unit testing. If your dependency is buggy, your test may be affected in such a way as to return a false positive. For instance, you may pass the dependency an unexpected null, and the dependency may not throw on null as it is documented to do. Your test does not encounter a null argument exception as it should have, and the test passes. Also, you may find it's hard, if not impossible, to reliably get the dependent object to return exactly what you want during a test. That also includes throwing expected exceptions within tests. A mock replaces that dependency. You set expectations on calls to the dependent object, set the exact return values it should give you to perform the test you want, and/or what exceptions to throw so that you can test your exception handling code. In this way you can test the unit in question easily. TL;DR: Mock every dependency your unit test touches. A: Rule of thumb: If the function you are testing needs a complicated object as a parameter, and it would be a pain to simply instantiate this object (if, for example, it tries to establish a TCP connection), use a mock. A: You should mock an object when you have a dependency in a unit of code you are trying to test that needs to be "just so". For example, when you are trying to test some logic in your unit of code but you need to get something from another object and what is returned from this dependency might affect what you are trying to test - mock that object. A great podcast on the topic can be found here
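A concrete, hedged C# rendering of the state-based test above, using NUnit-style assertions; MySorter is the hypothetical class from the pseudocode, so treat this as a sketch rather than the answerer's own code:

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class MySorterTests
{
    [Test]
    public void Sort_OrdersTheList()
    {
        // state-based: assert on the result, not on which comparisons were made
        var testList = new List<int> { 1, 7, 3, 8, 2 };
        MySorter.Sort(testList);   // hypothetical sorter under test
        CollectionAssert.AreEqual(new[] { 1, 2, 3, 7, 8 }, testList);
    }
}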
When should I mock?
I have a basic understanding of mock and fake objects, but I'm not sure I have a feeling about when/where to use mocking - especially as it would apply to this scenario here.
[ "Mock objects are useful when you want to test interactions between a class under test and a particular interface.\nFor example, we want to test that method sendInvitations(MailServer mailServer) calls MailServer.createMessage() exactly once, and also calls MailServer.sendMessage(m) exactly once, and no other methods are called on the MailServer interface. This is when we can use mock objects.\nWith mock objects, instead of passing a real MailServerImpl, or a test TestMailServer, we can pass a mock implementation of the MailServer interface. Before we pass a mock MailServer, we \"train\" it, so that it knows what method calls to expect and what return values to return. At the end, the mock object asserts, that all expected methods were called as expected.\nThis sounds good in theory, but there are also some downsides.\nMock shortcomings\nIf you have a mock framework in place, you are tempted to use mock object every time you need to pass an interface to the class under the test. This way you end up testing interactions even when it is not necessary. Unfortunately, unwanted (accidental) testing of interactions is bad, because then you're testing that a particular requirement is implemented in a particular way, instead of that the implementation produced the required result.\nHere's an example in pseudocode. Let's suppose we've created a MySorter class and we want to test it:\n// the correct way of testing\ntestSort() {\n testList = [1, 7, 3, 8, 2] \n MySorter.sort(testList)\n\n assert testList equals [1, 2, 3, 7, 8]\n}\n\n\n// incorrect, testing implementation\ntestSort() {\n testList = [1, 7, 3, 8, 2] \n MySorter.sort(testList)\n\n assert that compare(1, 2) was called once \n assert that compare(1, 3) was not called \n assert that compare(2, 3) was called once \n ....\n}\n\n(In this example we assume that it's not a particular sorting algorithm, such as quick sort, that we want to test; in that case, the latter test would actually be valid.)\nIn such an extreme example it's obvious why the latter example is wrong. When we change the implementation of MySorter, the first test does a great job of making sure we still sort correctly, which is the whole point of tests - they allow us to change the code safely. On the other hand, the latter test always breaks and it is actively harmful; it hinders refactoring.\nMocks as stubs\nMock frameworks often allow also less strict usage, where we don't have to specify exactly how many times methods should be called and what parameters are expected; they allow creating mock objects that are used as stubs.\nLet's suppose we have a method sendInvitations(PdfFormatter pdfFormatter, MailServer mailServer) that we want to test. The PdfFormatter object can be used to create the invitation. 
Here's the test:\ntestInvitations() {\n // train as stub\n pdfFormatter = create mock of PdfFormatter\n let pdfFormatter.getCanvasWidth() returns 100\n let pdfFormatter.getCanvasHeight() returns 300\n let pdfFormatter.addText(x, y, text) returns true \n let pdfFormatter.drawLine(line) does nothing\n\n // train as mock\n mailServer = create mock of MailServer\n expect mailServer.sendMail() called exactly once\n\n // do the test\n sendInvitations(pdfFormatter, mailServer)\n\n assert that all pdfFormatter expectations are met\n assert that all mailServer expectations are met\n}\n\nIn this example, we don't really care about the PdfFormatter object so we just train it to quietly accept any call and return some sensible canned return values for all methods that sendInvitation() happens to call at this point. How did we come up with exactly this list of methods to train? We simply ran the test and kept adding the methods until the test passed. Notice, that we trained the stub to respond to a method without having a clue why it needs to call it, we simply added everything that the test complained about. We are happy, the test passes.\nBut what happens later, when we change sendInvitations(), or some other class that sendInvitations() uses, to create more fancy pdfs? Our test suddenly fails because now more methods of PdfFormatter are called and we didn't train our stub to expect them. And usually it's not only one test that fails in situations like this, it's any test that happens to use, directly or indirectly, the sendInvitations() method. We have to fix all those tests by adding more trainings. Also notice, that we can't remove methods no longer needed, because we don't know which of them are not needed. Again, it hinders refactoring.\nAlso, the readability of test suffered terribly, there's lots of code there that we didn't write because of we wanted to, but because we had to; it's not us who want that code there. Tests that use mock objects look very complex and are often difficult to read. The tests should help the reader understand, how the class under the test should be used, thus they should be simple and straightforward. If they are not readable, nobody is going to maintain them; in fact, it's easier to delete them than to maintain them.\nHow to fix that? Easily:\n\nTry using real classes instead of mocks whenever possible. Use the real PdfFormatterImpl. If it's not possible, change the real classes to make it possible. Not being able to use a class in tests usually points to some problems with the class. Fixing the problems is a win-win situation - you fixed the class and you have a simpler test. On the other hand, not fixing it and using mocks is a no-win situation - you didn't fix the real class and you have more complex, less readable tests that hinder further refactorings.\nTry creating a simple test implementation of the interface instead of mocking it in each test, and use this test class in all your tests. Create TestPdfFormatter that does nothing. That way you can change it once for all tests and your tests are not cluttered with lengthy setups where you train your stubs.\n\nAll in all, mock objects have their use, but when not used carefully, they often encourage bad practices, testing implementation details, hinder refactoring and produce difficult to read and difficult to maintain tests.\nFor some more details on shortcomings of mocks see also Mock Objects: Shortcomings and Use Cases.\n", "A unit test should test a single codepath through a single method. 
When the execution of a method passes outside of that method, into another object, and back again, you have a dependency.\nWhen you test that code path with the actual dependency, you are not unit testing; you are integration testing. While that's good and necessary, it isn't unit testing.\nIf your dependency is buggy, your test may be affected in such a way to return a false positive. For instance, you may pass the dependency an unexpected null, and the dependency may not throw on null as it is documented to do. Your test does not encounter a null argument exception as it should have, and the test passes.\nAlso, you may find its hard, if not impossible, to reliably get the dependent object to return exactly what you want during a test. That also includes throwing expected exceptions within tests.\nA mock replaces that dependency. You set expectations on calls to the dependent object, set the exact return values it should give you to perform the test you want, and/or what exceptions to throw so that you can test your exception handling code. In this way you can test the unit in question easily.\nTL;DR: Mock every dependency your unit test touches.\n", "Rule of thumb:\nIf the function you are testing needs a complicated object as a parameter, and it would be a pain to simply instantiate this object (if, for example it tries to establish a TCP connection), use a mock.\n", "You should mock an object when you have a dependency in a unit of code you are trying to test that needs to be \"just so\". \nFor example, when you are trying to test some logic in your unit of code but you need to get something from another object and what is returned from this dependency might affect what you are trying to test - mock that object.\nA great podcast on the topic can be found here\n" ]
[ 213, 161, 73, 7 ]
[]
[]
[ "language_agnostic", "mocking", "unit_testing" ]
stackoverflow_0000038181_language_agnostic_mocking_unit_testing.txt
Q: Positioning controls in the middle of a CheckBox This is a follow-up to my previous question "Font-dependent control positioning." It's an attempt to solve the real problem behind that question, perhaps in ways different than the one I was asking about. Example of the problem statement: I want a checkbox that says "Adjust prices by <X> <Y> after loading," where <X> is a number---adjustable with a NumericUpDown---and <Y> is either "percent" or "dollars," with the choices being made by a ComboBox. This will be on a single line. The complication: I want to be able to change my fonts for all these controls (basically setting them to System.Drawing.Fonts.MessageBoxFont, which is Tahoma 8 pt on Windows XP/etc. and Segoe UI 9 pt on Vista), without messing up my layout, which with my current Position-property--setting paradigm does not work. More generally, I'd like the controls to be dynamically laid out in a font-independent way, so that the <X> NumericUpDown fits snugly into the space between "by " and the <Y> ComboBox, and similarly the <Y> ComboBox fits in with respect to the <X> NumericUpDown and the string " after loading" to its right. The part everyone seems to miss: This is all nested within a CheckBox. So, ideally, clicking on the words "after loading" should check/uncheck the checkbox, and draw that little highlight rectangle around "Adjust prices by          after loading." So just slapping an extra Label on the end doesn't work, because then it doesn't toggle the CheckBox; similarly, trying to band-aid things by hooking up such a Label's Click event won't produce the desired highlight-rectangle. Solutions? At this point I'm thinking either: Rethink the problem, somehow, maybe with an ugly solution like two separate lines of text: "Adjust found prices after loading" (CheckBox), "Adjustment amount:" (NumericUpDown and ComboBox). This is really bad because my options box is absolutely full of options of this type (i.e. the type in the example), so it would at least double in vertical size. Some sort of custom control? SplittableCheckBox? Some kind of magic with a TableLayout control? (Pretty sure this fails at "the part everyone seems to miss.") Give up and either go back to MS Sans Serif, or use Tahoma uniformly, or package Segoe UI with my application, thus disrespecting the system default fonts. (New, via edit) Switch to WPF, if someone can convince me that it supports this scenario exactly. A: If you have several options that follow this layout, why not create a user control? The user control will contain the CheckBox, a NumericUpDown, a ComboBox and a label for the "after loading". You can override OnFontChanged to adjust the location of the controls based on the rendering of the text with the given font. Add an EventHandler to the Label to check/uncheck the CheckBox. As for having the focus rectangle surround all of the controls, you should be able to give the user control focus when one of its inner controls is clicked.
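A hedged WinForms sketch of the suggested user control; the child control names are made up, and the spacing logic is deliberately minimal:

using System;
using System.Windows.Forms;

public class AdjustPriceOption : UserControl   // hypothetical composite control
{
    private CheckBox checkBox = new CheckBox { Text = "Adjust prices by", AutoSize = true };
    private NumericUpDown amountUpDown = new NumericUpDown { Width = 60 };
    private ComboBox unitCombo = new ComboBox { Width = 80 };
    private Label trailingLabel = new Label { Text = "after loading", AutoSize = true };

    public AdjustPriceOption()
    {
        Controls.AddRange(new Control[] { checkBox, amountUpDown, unitCombo, trailingLabel });
        trailingLabel.Click += (s, e) => checkBox.Checked = !checkBox.Checked; // trailing text toggles the box
        LayoutRow();
    }

    protected override void OnFontChanged(EventArgs e)
    {
        base.OnFontChanged(e); // children inherit the new font, then we re-measure
        LayoutRow();
    }

    private void LayoutRow()
    {
        // AutoSize means checkBox.Right tracks the rendered width of its text
        amountUpDown.Left = checkBox.Right + 2;
        unitCombo.Left = amountUpDown.Right + 2;
        trailingLabel.Left = unitCombo.Right + 2;
    }
}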
Positioning controls in the middle of a CheckBox
This is a follow-up to my previous question "Font-dependent control positioning." It's an attempt to solve the real problem behind that question, perhaps in ways different than the one I was asking about. Example of the problem statement: I want a checkbox that says "Adjust prices by <X> <Y> after loading," where <X> is a number---adjustable with a NumericUpDown---and <Y> is either "percent" or "dollars," with the choices being made by a ComboBox. This will be on a single line. The complication: I want to be able to change my fonts for all these controls (basically setting them to System.Drawing.Fonts.MessageBoxFont, which is Tahoma 8 pt on Windows XP/etc. and Segoe UI 9 pt on Vista), without messing up my layout, which with my current Position-property--setting paradigm does not work. More generally, I'd like the controls to be dynamically laid out in a font-independent way, so that the <X> NumericUpDown fits snugly into the space between "by " and the <Y> ComboBox, and similarly the <Y> ComboBox fits in with respect to the <X> NumericUpDown and the string " after loading" to its right. The part everyone seems to miss: This is all nested within a CheckBox. So, ideally, clicking on the words "after loading" should check/uncheck the checkbox, and draw that little highlight rectangle around "Adjust prices by          after loading." So just slapping an extra Label on the end doesn't work, because then it doesn't toggle the CheckBox; similarly, trying to band-aid things by hooking up such a Label's Click event won't produce the desired highlight-rectangle. Solutions? At this point I'm thinking either: Rethink the problem, somehow, maybe with an ugly solution like two separate lines of text: "Adjust found prices after loading" (CheckBox), "Adjustment amount:" (NumericUpDown and ComboBox). This is really bad because my options box is absolutely full of options of this type (i.e. the type in the example), so it would at least double in vertical size. Some sort of custom control? SplittableCheckBox? Some kind of magic with a TableLayout control? (Pretty sure this fails at "the part everyone seems to miss.") Give up and either go back to MS Sans Serif, or use Tahoma uniformly, or package Segoe UI with my application, thus disrespecting the system default fonts. (New, via edit) Switch to WPF, if someone can convince me that it supports this scenario exactly.
[ "If you have several options that follow this layout, why not create a user control? The user control will contain the CheckBox, a NumericUpDown, a ComboBox and a label for the \"after loading\". You can override OnFontChanged to adjust the location of the controls based on the rendering of the text with the given font. Add an EventHandler to the Label to check/uncheck the CheckBox.\nAs for having the focus rectangle surround all of the controls, you should be able to give the user control focus when one of its inner controls is clicked.\n" ]
[ 0 ]
[]
[]
[ "fonts", "layout", "winforms" ]
stackoverflow_0000038428_fonts_layout_winforms.txt
Q: Security advice for jquery ajax data post? I'm using jquery ajax to post updates back to my server. I'm concerned about making sure I have put in place appropriate measures so that only my AJAX calls can post data. My stack is PHP on Apache against a MySQL backend. Advice greatly appreciated! A: Any request that the AJAX calls in your pages can make can also be made by someone outside of the application. If done right, you will not be able to tell if they were made as part of an AJAX call from your webapp or by hand/other means. There are two scenarios I can think of which you might be talking about when you say you want to make sure that only your AJAX calls can post data: either you don't want a malicious user to be able to post data that interferes with another user's data or you actually want to restrict the posts to being in the "flow" of a multi-request operation. If you are concerned with the first case (someone posting malicious data to/as another user) the solution is the same whether you are using AJAX or not -- you just have to authenticate the user through whatever means is necessary -- usually via session cookie. If you are concerned with the second case, then you are going to have to do something like issue a unique token at each step of the process, and store the expected token on the server side. Then when a request is made, check that there is a corresponding entry on the server side for the action that is being taken and that the expected tokens match and that that token has not been used yet. If there is not, you reject the request; if there is, you mark that token as used and process the request. If what you are concerned about is something other than one of these two scenarios then the answer will depend on more specifics than you have provided. A: Use sessions to ensure that any Ajax posts are done in an authenticated context. Think of your Ajax code as just another client to your server; it becomes easier to tackle authentication issues that way.
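The question's stack is PHP, but the one-time token scheme in the first answer is language-agnostic; here is a hedged sketch in C#/ASP.NET terms (the session key and method names are made up for illustration):

using System;
using System.Web.SessionState;

public static class AjaxTokens
{
    // issue a token when rendering the step, storing the expected value server-side
    public static string IssueToken(HttpSessionState session)
    {
        string token = Guid.NewGuid().ToString("N");
        session["ajax_token"] = token;
        return token;   // embed this in the page for the AJAX call to post back
    }

    // validate-and-burn the token when the AJAX post arrives
    public static bool ConsumeToken(HttpSessionState session, string posted)
    {
        string expected = session["ajax_token"] as string;
        session["ajax_token"] = null;                      // one-time use
        return expected != null && expected == posted;
    }
}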
Security advice for jquery ajax data post?
I'm using jquery ajax to post updates back to my server. I'm concerned about making sure I have put in place appropriate measures so that only my AJAX calls can post data. My stack is PHP on Apache against a MySQL backend. Advice greatly appreciated!
[ "Any request that the AJAX calls in your pages can make can also be made by someone outside of the application. If done right, you will not be able to tell if they were made as part of an AJAX call from your webapp or by hand/other means.\nThere are two scenarios I can think of which you might be talking about when you say you want to make sure that only your AJAX calls can post data: either you don't want a malicious user to be able to post data that interferes with another user's data or you actually want to restrict the posts to being in the \"flow\" of a multi-request operation. \nIf you are concerned with the first case (someone posting malicious data to/as another user) the solution is the same whether you are using AJAX or not -- you just have to authenticate the user through whatever means is necessary -- usually via session cookie.\nIf you are concerned with the second case, then you are going to have to do something like issue a unique token at each step of the process, and store the expected token on the server side. Then when a request is made, check that there is a corresponding entry on the server side for the action that is being taken and that the expected tokens match and that that token has not been used yet. If there is no, you reject the request, if there is, then you mark that token as used and process the request.\nIf what you are concerned about is something other than one of these two scenarios then the answer will depend on more specifics than you have provided.\n", "Use sessions to ensure that any Ajax posts are done in an authenticated context. Think of your Ajax code as just another client to your server, it becomes easier to tackle authentication issues that way.\n" ]
[ 28, 5 ]
[]
[]
[ "ajax", "jquery", "post", "security" ]
stackoverflow_0000038421_ajax_jquery_post_security.txt
Q: Is this minimum spanning tree algorithm correct? The minimum spanning tree problem is to take a connected weighted graph and find the subset of its edges with the lowest total weight while keeping the graph connected (and as a consequence resulting in an acyclic graph). The algorithm I am considering is: Find all cycles. Remove the largest edge from each cycle. The impetus for this version is an environment that is restricted to "rule satisfaction" without any iterative constructs. It might also be applicable to insanely parallel hardware (i.e. a system where you expect to have several times more degrees of parallelism than cycles). Edits: The above is done in a stateless manner (all edges that are not the largest edge in any cycle are selected/kept/ignored, all others are removed). A: What happens if two cycles overlap? Which one has its longest edge removed first? Does it matter if the longest edge of each is shared between the two cycles or not? For example: V = { a, b, c, d } E = { (a,b,1), (b,c,2), (c,a,4), (b,d,9), (d,a,3) } There's an a -> b -> c -> a cycle, and an a -> b -> d -> a A: Your algorithm isn't quite clearly defined. If you have a complete graph, your algorithm would seem to entail, in the first step, removing all but the two minimum elements. Also, listing all the cycles in a graph can take exponential time. Elaboration: In a graph with n nodes and an edge between every pair of nodes, there are, if I have my math right, n!/(2k(n-k)!) cycles of size k, if you're counting a cycle as some subgraph of k nodes and k edges with each node having degree 2. A: @shrughes.blogspot.com: I don't know about removing all but two - I've been sketching out various runs of the algorithm and assuming that parallel runs may remove an edge more than once I can't find a situation where I'm left without a spanning tree. Whether or not it's minimal I don't know. A: For this to work, you'd have to detail how you would want to find all cycles, apparently without any iterative constructs, because that is a non-trivial task. I'm not sure that's possible. If you really want to find a MST algorithm that doesn't use iterative constructs, take a look at Prim's or Kruskal's algorithm and see if you could modify those to suit your needs. Also, is recursion barred in this theoretical architecture? If so, it might actually be impossible to find a MST on a graph, because you'd have no means whatsoever of inspecting every vertex/edge on the graph. A: I dunno if it works, but no matter what, your algorithm is not even worth implementing. Finding all cycles will be the freaking huge bottleneck that will kill it. Also doing that without iterations is impossible. Why don't you implement some standard algorithm, let's say Prim's. A: @Tynan The system can be described (somewhat over simplified) as a system of rules describing categorizations. "Things are in category A if they are in B but not in C", "Nodes connected to nodes in Z are also in Z", "Every category in M is connected to a node N and has 'child' categories, also in M for every node connected to N". It's slightly more complicated than this. (I have shown that by creating unstable rules you can model a Turing machine but that's beside the point.) It can't explicitly define iteration or recursion but can operate on recursive data with rules like the 2nd and 3rd ones. @Marcin, Assume that there are an unlimited number of processors. It is trivial to show that the program can be run in O(n^2) for n being the longest cycle. With better data structures, this can be reduced to O(n*O(set lookup function)). I can envision hardware (quantum computers?) that can evaluate all cycles in constant time, giving an O(1) solution to the MST problem. The Reverse-delete algorithm seems to provide a partial proof of correctness (that the proposed algorithm will not produce a non-minimal spanning tree); this is derived by arguing that my algorithm will remove every edge that the Reverse-delete algorithm will. However I'm not sure how to show that my algorithm won't delete more than that algorithm. Hhmm.... A: OK this is an attempt to finish the proof of correctness. By analogy to the Reverse-delete algorithm, we know that enough edges will be removed. What remains is to show that there will not be too many edges removed. Removing too many edges can be described as removing all the edges between the sides of a binary partition of the graph nodes. However only edges in a cycle are ever removed, therefore, for all edges between partitions to be removed, there needs to be a return path to complete the cycle. If we only consider edges between the partitions then the algorithm can at most remove the larger of each pair of edges; this can never remove the smallest bridging edge. Therefore, for any arbitrary binary partitioning, the algorithm can't sever all links between the sides. What remains is to show that this extends to >2 way partitions.
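Since the answers point at Prim's and Kruskal's, here is a hedged sketch of Kruskal's algorithm in modern C# with a minimal union-find, for comparison with the cycle-removal idea; the edge representation is made up for illustration:

using System.Collections.Generic;
using System.Linq;

static class Mst
{
    // edges as (u, v, weight) tuples; nodes are numbered 0..n-1
    public static List<(int U, int V, int W)> Kruskal(int n, List<(int U, int V, int W)> edges)
    {
        var parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;

        // union-find root lookup with path halving
        int Find(int x) { while (parent[x] != x) x = parent[x] = parent[parent[x]]; return x; }

        var mst = new List<(int U, int V, int W)>();
        foreach (var e in edges.OrderBy(e => e.W))   // cheapest edges first
        {
            int ru = Find(e.U), rv = Find(e.V);
            if (ru == rv) continue;                  // would close a cycle - skip
            parent[ru] = rv;                         // union the two components
            mst.Add(e);
        }
        return mst;                                  // n-1 edges if the graph is connected
    }
}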
Is this minimum spanning tree algorithm correct?
The minimum spanning tree problem is to take a connected weighted graph and find the subset of its edges with the lowest total weight while keeping the graph connected (and as a consequence resulting in an acyclic graph). The algorithm I am considering is: Find all cycles. Remove the largest edge from each cycle. The impetus for this version is an environment that is restricted to "rule satisfaction" without any iterative constructs. It might also be applicable to insanely parallel hardware (i.e. a system where you expect to have several times more degrees of parallelism than cycles). Edits: The above is done in a stateless manner (all edges that are not the largest edge in any cycle are selected/kept/ignored, all others are removed).
[ "What happens if two cycles overlap? Which one has its longest edge removed first? Does it matter if the longest edge of each is shared between the two cycles or not?\nFor example:\nV = { a, b, c, d }\nE = { (a,b,1), (b,c,2), (c,a,4), (b,d,9), (d,a,3) }\n\nThere's an a -> b -> c -> a cycle, and an a -> b -> d -> a\n", "Your algorithm isn't quite clearly defined. If you have a complete graph, your algorithm would seem to entail, in the first step, removing all but the two minimum elements. Also, listing all the cycles in a graph can take exponential time.\nElaboration:\nIn a graph with n nodes and an edge between every pair of nodes, there are, if I have my math right, n!/(2k(n-k)!) cycles of size k, if you're counting a cycle as some subgraph of k nodes and k edges with each node having degree 2.\n", "@shrughes.blogspot.com:\nI don't know about removing all but two - I've been sketching out various runs of the algorithm and assuming that parallel runs may remove an edge more than once I can't find a situation where I'm left without a spanning tree. Whether or not it's minimal I don't know.\n", "For this to work, you'd have to detail how you would want to find all cycles, apparently without any iterative constructs, because that is a non-trivial task. I'm not sure that's possible. If you really want to find a MST algorithm that doesn't use iterative constructs, take a look at Prim's or Kruskal's algorithm and see if you could modify those to suit your needs.\nAlso, is recursion barred in this theoretical architecture? If so, it might actually be impossible to find a MST on a graph, because you'd have no means whatsoever of inspecting every vertex/edge on the graph.\n", "I dunno if it works, but no matter what your algorithm is not even worth implementing. Finding all cycles will be the freaking huge bottleneck that will kill it. Also doing that without iterations is impossible. Why don't you implement some standard algorithm, let's say Prim's.\n", "@Tynan The system can be described (somewhat over simplified) as a systems of rules describing categorizations. \"Things are in category A if they are in B but not in C\", \"Nodes connected to nodes in Z are also in Z\", \"Every category in M is connected to a node N and has 'child' categories, also in M for every node connected to N\". It's slightly more complicated than this. (I have shown that by creating unstable rules you can model a turning machine but that's beside the point.) It can't explicitly define iteration or recursion but can operate on recursive data with rules like the 2nd and 3rd ones.\n@Marcin, Assume that there are an unlimited number of processors. It is trivial to show that the program can be run in O(n^2) for n being the longest cycle. With better data structures, this can be reduced to O(n*O(set lookup function)), I can envision hardware (quantum computers?) that can evaluate all cycles in constant time. giving a O(1) solution to the MST problem.\nThe Reverse-delete algorithm seems to provide a partial proof of correctness (that the proposed algorithm will not produce a non-minimal spanning tree) this is derived by arguing that mt algorithm will remove every edge that the Reverse-delete algorithm will. However I'm not sure how to show that my algorithm won't delete more than that algorithm. \nHhmm....\n", "OK this is an attempt to finish the proof of correctness. By analogy to the Reverse-delete algorithm, we know that enough edges will be removed. What remains is to show that there will not be to many edges removed. 
\nRemoving to many edges can be described as removing all the edges between the side of a binary partition of the graph nodes. However only edges in a cycle are ever removed, therefor, for all edge between partitions to be removed, there needs to be a return path to complete the cycle. If we only consider edges between the partitions then the algorithm can at most remove the larger of each pair of edges, this can never remove the smallest bridging edge. Therefor for any arbitrary binary partitioning, the algorithm can't sever all links between the side.\nWhat remains is to show that this extends to >2 way partitions.\n" ]
[ 1, 1, 1, 1, 1, 0, 0 ]
[]
[]
[ "algorithm", "correctness" ]
stackoverflow_0000037471_algorithm_correctness.txt
Q: New Project : MySQL or SQL 2005 Express I am starting a new client/server project at work and I want to start using some of the newer technologies I've been reading about, LINQ and Generics being the main ones. Up until now I have been developing these types of applications with MySQL as clients were unwilling to pay the large licence costs for MSSQL. I have played around a small amount with the express versions but have never actually developed anything with them. The new application will not have more than 5 concurrent connections but will be needed for daily reporting. Can MSSQL 2005 express still be downloaded? I can't seem to find it on the Microsoft site. I would be hesitant to use MSSQL 2008 on a project so soon after its release. Are the express versions adequate for my needs? I'm sure loads of people reading this have used them. Did you encounter any problems? A: The answer to the question on any project in regards to what platform/technologies to use is: What does everyone know best? Yes express can still be downloaded. Will it fit your requirements? That depends on your requirements, of course. I have deployed MSSQL2005 Express on several enterprise level projects which I knew had a fixed database size that would never be exceeded (Express has a limit of each database of 4Gb). Also keep in mind there are other hardware constraints such as a 1 cpu limit. Another thing to consider is if you need the Enterprise level tools that come with a paid edition of SQL Server. If you are moving a lot of flat data around you are stuck writing your own Bulk Copy Procs, which rule the house, but it's an extra step, no doubt. A: Not sure about #2 but you can download SQL Server Express 2005 here. A: Sql express has more features, and is a lot more powerful, but will only run on windows boxes. If you ever need to scale, Sql express can be switched easily to a commercial variant. MySql doesn't support half the features, but does have most of the basic ones you actually need, and will run on windows or *nix boxes. It's also not throttled in the same way as Sql express is. In my opinion (having used both extensively, but not touched MySql for a few years) Sql express is a far better DB system. If you're building .Net applications the Linq support is a deal clincher. If you aren't going for pure Sql server support, I wouldn't go for pure MySql support instead. Use a DBFactory design pattern to load your data layer or use simple SQL:92 syntax that's a lowest common denominator. A: Why not go to Sql server express 2008? A: I'm mostly going to advocate MS SQL Server because of .NET integration. Linq To Sql is pretty much my favorite way to deal with databases these days: anonymous functions make everything better! My current place of work has also used MSSQL Express for real projects, so you have at least two of us confirming that the restrictions aren't too harsh. A: I have about 50 web sites running perl/apache/mysql and about 10 running C#/ASP.Net/SQL Server (Lite) and other (large) applications running on SQL Server (Heavy). I never have problems with SQL Server - it just works. I often have problems with MySQL. My advice would be to go for the SQL Server based option even if you had to pay for it.
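A hedged sketch of the "DBFactory design pattern" idea from one of the answers, using the System.Data.Common provider-factory API so the same data layer can target SQL Server Express or MySQL; the provider name, connection string and SQL are illustrative:

using System;
using System.Data.Common;

// swap the invariant name (e.g. "MySql.Data.MySqlClient" with MySQL's connector installed) via config
string connectionString = "...";   // assumed to be read from app.config
DbProviderFactory factory = DbProviderFactories.GetFactory("System.Data.SqlClient");
using (DbConnection conn = factory.CreateConnection())
{
    conn.ConnectionString = connectionString;
    conn.Open();
    using (DbCommand cmd = conn.CreateCommand())
    {
        cmd.CommandText = "SELECT COUNT(*) FROM Orders";   // hypothetical table, SQL:92 syntax
        int rows = Convert.ToInt32(cmd.ExecuteScalar());
    }
}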
New Project : MySQL or SQL 2005 Express
I am starting a new client/server project at work and I want to start using some of the newer technologies I've been reading about, LINQ and Generics being the main ones. Up until now I have been developing these types of applications with MySQL as clients were unwilling to pay the large licence costs for MSSQL. I have played around a small amount with the express versions but have never actually developed anything with them. The new application will not have more than 5 concurrent connections but will be needed for daily reporting. Can MSSQL 2005 express still be downloaded? I can't seem to find it on the Microsoft site. I would be hesitant to use MSSQL 2008 on a project so soon after its release. Are the express versions adequate for my needs? I'm sure loads of people reading this have used them. Did you encounter any problems?
[ "The answer to the question on any project in regards to what platform/technologies to use is: What does everyone know best?\n\nYes express can still be downloaded.\nWill it fit your requirements? That depends on your requirements, of course. I have deployed MSSQL2005 Express on several enterprise level projects which I knew had a fixed database size that would never be exceeded (Express has a limit of each database of 4Gb). Also keep in mind there are other hardware constraints such as a 1 cpu limit.\n\nAnother thing to consider is if you need the Enterprise level tools that come with a paid edition of SQL Server. If you are moving a lot of flat data around you are stuck writing your own Bulk Copy Procs, which rule the house, but its an extra step, no doubt.\n", "Note sure about #2 but you can download SQL Server Express 2005 here.\n", "Sql express has more features, and is a lot more powerful, but will only run on windows boxes. If you ever need to scale Sql express can be switched easily to a commercial variant.\nMySql doesn't support half the features, but does have most of the basic ones you actually need, and will run on windows or *nix boxes. It's also not throttled in the same way as Sql express is.\nIn my opinion (having used both extensively, but not touched MySql for a few years) Sql express is a far better DB system. If you're building .Net applications the Linq support is a deal clincher.\nIf you aren't going for pure Sql server support, I wouldn't go for pure MySql support instead. Use a DBFactory design pattern to load your data layer or use simple SQL:92 syntax that's a lowest common denominator.\n", "Why not go to Sql server express 2008?\n", "I'm mostly going to advocate MS SQL Server because of .NET integration. Linq To Sql is pretty much my favorite way to do deal with databases these days: anonymous functions make everything better! My current place of work has also used MSSQL Express for real projects, so you have at least two of us confirming that the restrictions aren't too harsh.\n", "I have about 50 web sites running perl/apache/mysql and about 10 running C#/ASP.Net/SQL Server (Lite) and other (large) applications running on SQL Server (Heavy). I never have problems with SQL Server - it just works. I often have problems with MySQL.\nMy advice would be to go for the SQL Server based option even if you had to pay for it.\n" ]
[ 6, 1, 1, 1, 1, 1 ]
[]
[]
[ "mysql", "sql_server" ]
stackoverflow_0000026843_mysql_sql_server.txt
Q: Simple audio input API on a Mac? I'd like to pull a stream of PCM samples from a Mac's line-in or built-in mic and do a little live analysis (the exact nature doesn't pertain to this question, but it could be an FFT every so often, or some basic statistics on the sample levels, or what have you). What's a good fit for this? Writing an AudioUnit that just passes the sound through and incidentally hands it off somewhere for analysis? Writing a JACK-aware app and figuring out how to get it to play with the JACK server? Ecasound? This is a cheesy proof-of-concept hobby project, so simplicity of API is the driving factor (followed by reasonable choice of programming language). A: The principal framework for audio development in Mac OS X is Core Audio; it's the basis for all audio I/O. There are layers on top of it like Audio Toolbox, Audio Queue Services, QuickTime, and QTKit that you can use if you want a simplified API for common tasks. To just pull a stream of samples, you'd probably want to use Audio Queue Services; the AudioQueueNewInput function will set up recording of PCM data and pass it to a callback you supply. On your Mac there's a set of Core Audio examples in /Developer/Examples/CoreAudio/SimpleSDK that includes a use (AQRecord in AudioQueueTools) of the Audio Queue Services recording APIs. A: I think portaudio is what you need. Reading from the mike from a console app is a 10 line C file (see patests in the portaudio distrib). A: Apple provides sample code for reading and writing audio data. Additionally there is a lot of good information in the Audio section of the Apple Developer site.
Simple audio input API on a Mac?
I'd like to pull a stream of PCM samples from a Mac's line-in or built-in mic and do a little live analysis (the exact nature doesn't pertain to this question, but it could be an FFT every so often, or some basic statistics on the sample levels, or what have you). What's a good fit for this? Writing an AudioUnit that just passes the sound through and incidentally hands it off somewhere for analysis? Writing a JACK-aware app and figuring out how to get it to play with the JACK server? Ecasound? This is a cheesy proof-of-concept hobby project, so simplicity of API is the driving factor (followed by reasonable choice of programming language).
[ "The principal framework for audio development in Mac OS X is Core Audio; it's the basis for all audio I/O. There are layers on top of it like Audio Toolbox, Audio Queue Services, QuickTime, and QTKit that you can use if you want a simplified API for common tasks.\nTo just pull a stream of samples, you'd probably want to use Audio Queue Services; the AudioQueueNewInput function will set up recording of PCM data and pass it to a callback you supply.\nOn your Mac there's a set of Core Audio examples in /Developer/Examples/CoreAudio/SimpleSDK that includes a use (AQRecord in AudioQueueTools) of the Audio Queue Services recording APIs.\n", "I think portaudio is what you need.\nReading from the mike from a console app is a 10 line C file (see patests in the portaudio distrib).\n", "Apple provides sample code for reading and writing audio data. Additionally there is a lot of good information in the Audio section of the Apple Developer site.\n" ]
[ 6, 5, 3 ]
[]
[]
[ "audio", "macos" ]
stackoverflow_0000037529_audio_macos.txt
Q: C# WinForms - DataGridView/SQL Compact - Negative integer in primary key column I'm just getting dirty in WinForms, and I've discovered, through a lovely tutorial, the magic of dragging a database table onto the design view of my main form. So, all is lovely, I've got my DataGridView with all of the columns represented beautifully. BUT... When I run my application against this brand new, empty .sdf (empty save for the two tables I've created, which are themselves empty), I get a -1 in the column corresponding to my primary key/identity column whenever I try to create that first record. Any idea why this might be happening? If it helps, the column is an int. A: @Brian -1 is a good choice for the default value since no "real" rows are likely to have identities less than zero. If it defaulted to 0 or 1 then there'd be a chance that it'd clash with an existing row, causing a primary key violation. For applications that stay offline and create multiple rows before saving, a common practice is to continue counting backwards (-2, -3, -4) for each new row's identity. Then when they're saved, the server can replace them with the true "next" value from the table. A: Since it is an Identity column and you haven't saved it to the database yet it is -1. I am assuming here that this is before you save the table back to the database, correct? You need to perform the insert before that value will be set correctly.
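A hedged ADO.NET sketch of the counting-backwards convention from the first answer; seeding the client-side DataTable this way hands out -1, -2, -3 locally until the insert returns real identities (table and column names are illustrative):

using System.Data;

var table = new DataTable("Customers");          // hypothetical table
DataColumn id = table.Columns.Add("Id", typeof(int));
id.AutoIncrement = true;
id.AutoIncrementSeed = -1;                       // first offline row gets -1
id.AutoIncrementStep = -1;                       // then -2, -3, ... never clashing with real keys

DataRow row = table.NewRow();
table.Rows.Add(row);                             // row["Id"] == -1 until saved to the .sdf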
C# WinForms - DataGridView/SQL Compact - Negative integer in primary key column
I'm just getting dirty in WinForms, and I've discovered, through a lovely tutorial, the magic of dragging a database table onto the design view of my main form. So, all is lovely, I've got my DataGridView with all of the columns represented beautifully. BUT... When I run my application against this brand new, empty .sdf (empty save for the two tables I've created, which are themselves empty), I get a -1 in the column corresponding to my primary key/identity column whenever I try to create that first record. Any idea why this might be happening? If it helps, the column is an int.
[ "@Brian -1 is a good choice for the default value since no \"real\" rows are likely to have identities less than zero. If it defaulted to 0 or 1 then there'd be a chance that it'd clash with an existing row, causing a primary key violation.\nFor applications that stay offline and create multiple rows before saving, a common practice is to continue counting backwards (-2, -3, -4) for each new row's identity. Then when they're saved, the server can replace them with the true \"next\" value from the table.\n", "Since it is an Identity column and you haven't saved it to the database yet it is -1. I am assuming here that this is before you save the table back to the database, correct? You need to perform the insert before that value will be set correctly.\n" ]
[ 5, 3 ]
[]
[]
[ "c#", "data_binding", "sql_server", "sql_server_ce", "winforms" ]
stackoverflow_0000038510_c#_data_binding_sql_server_sql_server_ce_winforms.txt
Q: WCF - Domain Objects and IExtensibleDataObject Typical scenario. We use old-school XML Web Services internally for communicating between a server farm and several distributed and local clients. No third parties involved, only our applications used by ourselves and our customers.
We're currently pondering moving from XML WS to a WCF/object-based model and have been experimenting with various approaches. One of them involves transferring the domain objects/aggregates directly over the wire, possibly invoking DataContract attributes on them. By using IExtensibleDataObject and a DataContract using the Order property on the DataMembers, we should be able to cope with simple property versioning issues (remember, we control all clients and can easily force-update them).
I keep hearing that we should use dedicated, transfer-only Data Transfer Objects (DTOs) over the wire. Why? Is there still a reason to do so? We use the same domain model on the server side and client side, of course, prefilling collections, etc. only when deemed right and "necessary." Collection properties utilize the service locator principle and IoC to invoke either an NHibernate-based "service" to fetch data directly (on the server side) or a WCF "service" client (on the client side) to talk to the WCF server farm.
So - why do we need to use DTOs?
A: Having worked with both approaches (shared domain objects and DTOs) I'd say the big problem with shared domain objects is when you don't control all clients; from my past experience I'd usually use DTOs unless development speed was of the essence.
If there's any chance that you won't always be in control of the clients then I'd definitely recommend DTOs, because as soon as you share your domain objects with someone else's client application you start tying your internals to someone else's dev cycle.
I've also found DTOs useful when working in a versioned service environment, which allowed us to radically change the internals of our app but still accept calls to the old versions of our service interfaces.
Finally, if you have a lot of client applications it might also be beneficial to use DTOs, as you're then protected by an easily versioned service.
A: In my experience DTOs are most useful for:

Strictly defining what will be sent over the wire and having a type specifically devoted to that definition.
Isolating the rest of your application, client and server, from future changes.
Interoperability with non-.Net systems. DTOs certainly aren't a requirement, but they make it easier to design "safe" types.

In your scenario these design features may not matter that much. I've used WCF with both strict DTOs and shared Domain Objects and in both scenarios it worked great. The only thing I noticed when sending Domain Objects over the wire was that I tended to send more data (and in unexpected ways) than I needed to. This was likely more due to my lack of experience with WCF than anything else; but it's something you should definitely be wary of should you choose to go that route.
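For reference, here is a minimal sketch of the versioning pattern the question describes — the type name, member names, and namespace are invented for illustration, not taken from any real contract:

using System.Runtime.Serialization;

// Hypothetical contract illustrating the pattern from the question:
// IExtensibleDataObject round-trips fields this version doesn't know about,
// and explicit Order values keep the wire format stable as members are added.
[DataContract(Name = "Customer", Namespace = "http://example.com/contracts")]
public class Customer : IExtensibleDataObject
{
    [DataMember(Order = 1)]
    public int Id { get; set; }

    [DataMember(Order = 2)]
    public string Name { get; set; }

    // Added in a later version; marked optional so version-1 messages,
    // which omit it, still deserialize cleanly.
    [DataMember(Order = 3, IsRequired = false)]
    public string Email { get; set; }

    // Unknown data from newer versions is parked here during a round-trip
    // instead of being silently dropped.
    public ExtensionDataObject ExtensionData { get; set; }
}

Whether this type is the domain object itself or a dedicated DTO is exactly the trade-off the answers below debate; the serialization mechanics are the same either way.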
WCF - Domain Objects and IExtensibleDataObject
Typical scenario. We use old-school XML Web Services internally for communicating between a server farm and several distributed and local clients. No third parties involved, only our applications used by ourselves and our customers. We're currently pondering moving from XML WS to a WCF/object-based model and have been experimenting with various approaches. One of them involves transferring the domain objects/aggregates directly over the wire, possibly invoking DataContract attributes on them. By using IExtensibleDataObject and a DataContract using the Order property on the DataMembers, we should be able to cope with simple property versioning issues (remember, we control all clients and can easily force-update them). I keep hearing that we should use dedicated, transfer-only Data Transfer Objects (DTOs) over the wire. Why? Is there still a reason to do so? We use the same domain model on the server side and client side, of course, prefilling collections, etc. only when deemed right and "necessary." Collection properties utilize the service locator principle and IoC to invoke either an NHibernate-based "service" to fetch data directly (on the server side) or a WCF "service" client (on the client side) to talk to the WCF server farm. So - why do we need to use DTOs?
[ "Having worked with both approaches (shared domain objects and DTOs) I'd say the big problem with shared domain objects is when you don't control all clients, but from my past experiences I'd usually use DTOs unless it development speed were of the essence.\nIf there's any chance that you won't always be in control of the clients then I'd definately recommend DTOs, because as soon as you share your domain objects with someone else's client application you start tying your internals to someone else's dev cycle.\nI've also found DTOs useful when working in a versioned service environment, which allowed us to radically change the internals of our app but still accept calls to the old versions of our service interfaces. \nFinally, if you have a lot of client applications it might also be beneficial to use DTOs as you're then protected with an easily versionable service.\n", "In my experience DTOs are most useful for:\n\nStrictly defining what will be sent over the wire and having a type specifically devoted to that definition.\nIsolating the rest of your application, client and server, from future changes.\nInteroperability with non-.Net systems. DTOs certainly aren't a requirement, but they make it easier to design \"safe\" types.\n\nIn your scenario these design features may not matter that much. I've used WCF with both strict DTOs and shared Domain Objects and in both scenarios it worked great. The only thing I noticed when sending Domain Objects over the wire was that I tended to send more data (and in unexpected ways) then I needed to. This was likely more due to my lack of experience with WCF than anything else; but it's something you should definitely be wary of should you choose to go that route. \n" ]
[ 7, 6 ]
[]
[]
[ "domain_driven_design", "serialization", "soa", "soap", "wcf" ]
stackoverflow_0000025323_domain_driven_design_serialization_soa_soap_wcf.txt
Q: Is .NET 3.5 SP1 Required on the server to use Dynamic Data? Is .NET 3.5 SP1 Required on the server to use Dynamic Data? It looks like it generates a lot of code and therefore wouldn't require anything special on the server side. I ask because I would like to use it but the shared hosting provider my client is using only has 3.5 installed and not 3.5 SP1.
A: Yes, SP1 is required.
There are several bits of SP1 that Dynamic Data uses, notably the ASP.NET routing extensions and the new data annotation classes in System.ComponentModel.DataAnnotations.
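To make the dependency concrete, here is a hedged sketch of the kind of code Dynamic Data leans on. The Product and ProductMetadata names are hypothetical, but every attribute shown lives in System.ComponentModel.DataAnnotations, which only ships with 3.5 SP1 — code like this simply won't compile or run on a server with plain 3.5:

using System.ComponentModel.DataAnnotations;

// Hypothetical "buddy class" metadata for a LINQ to SQL entity named Product.
// Dynamic Data reads these attributes to drive validation and scaffolding.
[MetadataType(typeof(ProductMetadata))]
public partial class Product
{
    // The other half of this partial class would be designer-generated.
}

public class ProductMetadata
{
    [Required(ErrorMessage = "A product name is required.")]
    [StringLength(50)]
    public object Name { get; set; }

    [Range(0, 10000)]
    public object Price { get; set; }

    [ScaffoldColumn(false)] // hide internal columns from the generated UI
    public object InternalCode { get; set; }
}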
Is .NET 3.5 SP1 Required on the server to use Dynamic Data?
Is .NET 3.5 SP1 Required on the server to use Dynamic Data? It looks like it generates a lot of code and therefore wouldn't require anything special on the server side. I ask because I would like to use it but the shared hosting provider my client is using only has 3.5 installed and not 3.5 SP1.
[ "Yes, SP1 is required.\nThere are several bits of SP1 that Dynamic Data uses, notably the ASP.NET routing extensions and the new data annotation classes in System.ComponentModel.\n" ]
[ 2 ]
[]
[]
[ "asp.net", "deployment", "dynamic_data" ]
stackoverflow_0000038572_asp.net_deployment_dynamic_data.txt