content: stringlengths (86 to 88.9k)
title: stringlengths (0 to 150)
question: stringlengths (1 to 35.8k)
answers: sequence
answers_scores: sequence
non_answers: sequence
non_answers_scores: sequence
tags: sequence
name: stringlengths (30 to 130)
Q: How do I find broken NMEA log sentences with grep? My GPS logger occasionally leaves "unfinished" lines at the end of the log files. I think they're only at the end, but I want to check all lines just in case. A sample complete sentence looks like: $GPRMC,005727.000,A,3751.9418,S,14502.2569,E,0.00,339.17,210808,,,A*76 The line should start with a $ sign, and end with an * and a two character hex checksum. I don't care if the checksum is correct, just that it's present. It also needs to ignore "ADVER" sentences which don't have the checksum and are at the start of every file. The following Python code might work: import re from path import path nmea = re.compile("^\$.+\*[0-9A-F]{2}$") for log in path("gpslogs").files("*.log"): for line in log.lines(): if not nmea.match(line) and not "ADVER" in line: print "%s\n\t%s\n" % (log, line) Is there a way to do that with grep or awk or something simple? I haven't really figured out how to get grep to do what I want. Update: Thanks @Motti and @Paul, I was able to get the following to do almost what I wanted, but had to use single quotes and remove the trailing $ before it would work: grep -nvE '^\$.*\*[0-9A-F]{2}' *.log | grep -v ADVER | grep -v ADPMB Two further questions arise: how can I make it ignore blank lines, and can I combine the last two greps? A: The minimum of testing shows that this should do it: grep -Ev "^\$.*\*[0-9A-Fa-f]{2}$" a.txt | grep -v ADVER -E use extended regexp -v Show lines that do not match ^ starts with .* anything \* an asterisk [0-9A-Fa-f] hexadecimal digit {2} exactly two of the previous $ end of line | grep -v ADVER weed out the ADVER lines HTH, Motti. A: @Motti's answer doesn't ignore ADVER lines, but you can easily pipe the results of that grep to another: grep -Ev "^\$.*\*[0-9A-Fa-f]{2}$" a.txt |grep -v ADVER A: @Tom (rephrased) I had to remove the trailing $ for it to work Removing the $ means that the line may end with something else (e.g. the following will be accepted) $GPRMC,005727.000,A,3751.9418,S,14502.2569,E,0.00,339.17,210808,,,A*76xxx @Tom And can I combine the last two greps? grep -Ev "ADVER|ADPMB" A: @Motti: Combining the greps isn't working, it's having no effect. I understand that without the trailing $ something else may follow the checksum & still match, but it didn't work at all with it so I had no choice... GNU grep 2.5.3 and GNU bash 3.2.39(1) if that makes any difference. And it looks like the log files are using DOS line-breaks (CR+LF). Does grep need a switch to handle that properly? A: @Tom GNU grep 2.5.3 and GNU bash 3.2.39(1) if that makes any difference. And it looks like the log files are using DOS line-breaks (CR+LF). Does grep need a switch to handle that properly? I'm using grep (GNU grep) 2.4.2 on Windows (for shame!) and it works for me (and DOS line-breaks are naturally accepted), I don't really have access to other OSs at the moment so I'm sorry but I won't be able to help you any further :o(
How do I find broken NMEA log sentences with grep?
My GPS logger occasionally leaves "unfinished" lines at the end of the log files. I think they're only at the end, but I want to check all lines just in case. A sample complete sentence looks like: $GPRMC,005727.000,A,3751.9418,S,14502.2569,E,0.00,339.17,210808,,,A*76 The line should start with a $ sign, and end with an * and a two character hex checksum. I don't care if the checksum is correct, just that it's present. It also needs to ignore "ADVER" sentences which don't have the checksum and are at the start of every file. The following Python code might work: import re from path import path nmea = re.compile("^\$.+\*[0-9A-F]{2}$") for log in path("gpslogs").files("*.log"): for line in log.lines(): if not nmea.match(line) and not "ADVER" in line: print "%s\n\t%s\n" % (log, line) Is there a way to do that with grep or awk or something simple? I haven't really figured out how to get grep to do what I want. Update: Thanks @Motti and @Paul, I was able to get the following to do almost what I wanted, but had to use single quotes and remove the trailing $ before it would work: grep -nvE '^\$.*\*[0-9A-F]{2}' *.log | grep -v ADVER | grep -v ADPMB Two further questions arise: how can I make it ignore blank lines, and can I combine the last two greps?
[ "The minimum of testing shows that this should do it:\ngrep -Ev \"^\\$.*\\*[0-9A-Fa-f]{2}$\" a.txt | grep -v ADVER\n\n\n-E use extended regexp\n-v Show lines that do not match\n^ starts with\n.* anything\n\\* an asterisk \n[0-9A-Fa-f] hexadecimal digit\n{2} exactly two of the previous\n$ end of line\n| grep -v ADVER weed out the ADVER lines \n\nHTH, Motti.\n", "@Motti's answer doesn't ignore ADVER lines, but you easily pipe the results of that grep to another:\ngrep -Ev \"^\\$.*\\*[0-9A-Fa-f]{2}$\" a.txt |grep -v ADVER\n\n", "\n@Tom (rephrased) I had to remove the trailing $ for it to work\n\nRemoving the $ means that the line may end with something else (e.g. the following will be accepted)\n$GPRMC,005727.000,A,3751.9418,S,14502.2569,E,0.00,339.17,210808,,,A*76xxx\n\n\n@Tom And can I combine the last two greps?\n\ngrep -Ev \"ADVER|ADPMB\"\n\n", "@Motti: Combining the greps isn't working, it's having no effect.\nI understand that without the trailing $ something else may folow the checksum & still match, but it didn't work at all with it so I had no choice...\nGNU grep 2.5.3 and GNU bash 3.2.39(1) if that makes any difference.\nAnd it looks like the log files are using DOS line-breaks (CR+LF). Does grep need a switch to handle that properly?\n", "@Tom\n\nGNU grep 2.5.3 and GNU bash 3.2.39(1) if that makes any difference.\n And it looks like the log files are using DOS line-breaks (CR+LF). Does grep need a switch to handle that properly?\n\nI'm using grep (GNU grep) 2.4.2 on Windows (for shame!) and it works for me (and DOS line-breaks are naturally accepted) , I don't really have access to other OSs at the moment so I'm sorry but I won't be able to help you any further :o(\n" ]
[ 3, 1, 1, 0, 0 ]
[]
[]
[ "gps", "grep", "nmea", "regex" ]
stackoverflow_0000036742_gps_grep_nmea_regex.txt
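Pulling the thread's loose ends together, both follow-up questions and the CR+LF problem can be handled in one pipeline. This is only a sketch, assuming GNU grep and bash (the gpslogs directory and the ADVER/ADPMB exclusions come from the question itself); bash's $'...' quoting turns \r into a literal carriage return, so the trailing $ anchor still matches DOS-terminated lines:

# Flag lines that are not complete NMEA sentences, merging the two
# exclusion greps into one and dropping blank lines in a final filter.
grep -nvE $'^\\$.*\\*[0-9A-Fa-f]{2}\r?$' gpslogs/*.log \
  | grep -vE 'ADVER|ADPMB' \
  | grep -vE ':[[:space:]]*$'   # grep -n output for a blank line ends in ":"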
Q: What options are available to get cron's results and how to set them up? I know that cron's default behavior is to send normal and error output to the cron owner's local mailbox. Are there other ways to get these results (for example, to send them by email to a bunch of people, to store them somewhere, and so on)? A: To email the output to a different email address just add the line MAILTO="user@example.com" To the crontab before the command A: You could chuck file redirection onto either the command shown or the actual command in the crontab for both stdout and stderr - like command > /tmp/log.txt 2>&1 . If you want several users to receive this log, you could insert a MAILTO=nameofmailinglist at the top of your cron file. A: The cron line is just like any other unix command line so you can redirect output to another program. I.e. * * * * * /path/my/command > /my/email/script 2>&1 A: This may be an unnecessary addition, but to qualify the redirection commands: > redirects standard output 2 is a Bourne shell specific term that means standard error 1 is a Bourne shell specific term that means standard output 2>&1 means redirect the standard error to standard output Also see the following useful article Standard Input and Output Redirection A: As far as I see it you've got three options: Redirect the output: either to a file, or to a program that will email the results as you want them Use MAILTO in cron, and redirect the email to any other single address for all your cron jobs. Do the redirection in your mail server or client, after cron has sent it.
What options are available to get cron's results and how to set them up?
I know that cron's default behavior is to send normal and error output to the cron owner's local mailbox. Are there other ways to get these results (for example, to send them by email to a bunch of people, to store them somewhere, and so on)?
[ "To email the output to a different email address just add the line \nMAILTO=\"user@example.com\"\n\nTo the crontab before the command \n", "You could chuck file redirection onto either the command shown or the actual command in the crontab for both stdout and stderr - like command > /tmp/log.txt 2>&1 .\nIf you want several users to receive this log, you could insert a MAILTO=nameofmailinglist at the top of you cron file.\n", "The cron line is just like any other unix command line so you can redirect output to another program. Ie.\n* * * * * /path/my/command > /my/email/script 2&>1\n\n", "This may be an unnecessary addition, but to qualify the redirection commands:\n> redirects standard output\n2 is a Bourne shell specific term that means standard error\n1 is a Bourne shell specific term that means standard output\n2>&1 means redirect the standard error to standard output\nAlso see the following useful article Standard Input and Output Redirection\n", "As far as I see it you've got three options:\n\nRedirect the output: either to a file, or to a program that will email the results as you want them\nUse MAILTO in cron, and redirect the email to any other single address for all your cron jobs.\nDo the redirection in your mail server or client, after cron has sent it.\n\n" ]
[ 7, 5, 4, 4, 2 ]
[]
[]
[ "batch_file", "cron", "linux", "unix" ]
stackoverflow_0000043349_batch_file_cron_linux_unix.txt
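As a concrete illustration of the answers above, a crontab might combine MAILTO and redirection like this. A sketch only: the address, paths, and schedules are placeholders, and note the correct redirection operator is 2>&1:

MAILTO="team-alerts@example.com"

# Any output from this job is mailed to the MAILTO address above:
0 2 * * * /usr/local/bin/nightly-backup.sh

# This job logs stdout and stderr to a file instead of sending mail:
*/15 * * * * /usr/local/bin/poll-feeds.sh >> /var/log/poll-feeds.log 2>&1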
Q: Switching Consumption of Web Services from Web Site to Web Application I am trying to consume multiple Web Services that redefine some of the same common classes in their WSDL. I currently have them referenced in a Web Site, but I want to convert to a Web Application. Since some of the same classes are redefined from multiple Web Services, the problem is that when adding them in a Web Application there is no way to make the namespaces the same across multiple Web Services, like you can in a Web Site. Is there any way to do what I am looking for? A: Show all files in your project, then hand-edit the autogenerated Reference.cs files to change the namespaces (and remove duplicates)? Ugly, but it ought to work. Alternatively, use wsdl.exe from the command line -- it can generate a single proxy for multiple services -- and then add the generated file to the project manually. The syntax is something like: wsdl http://svr/foo.asmx http://svr/bar.asmx /namespace:Fnord.Proxies
Switching Consumption of Web Services from Web Site to Web Application
I am trying to consume multiple Web Services that redefine some of the same common classes in their WSDL. I currently have them referenced in a Web Site, but I want to convert to a Web Application. Since some of the same classes are redefined from multiple Web Services, the problem is that when adding them in a Web Application there is no way to make the namespaces the same across multiple Web Services, like you can in a Web Site. Is there any way to do what I am looking for?
[ "Show all files in your project, then hand-edit the autogenerated Reference.cs files to change the namespaces (and remove duplicates)? Ugly, but it ought to work.\nAlternatively, use wsdl.exe from the command line -- it can generate a single proxy for multiple services -- and then add the generated file to the project manually. The syntax is something like: wsdl http://svr/foo.asmx http://svr/bar.asmx /namespace:Fnord.Proxies\n" ]
[ 2 ]
[]
[]
[ "web_applications", "web_services", "wsdl" ]
stackoverflow_0000042262_web_applications_web_services_wsdl.txt
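A fuller version of the wsdl.exe command from the answer might look like the sketch below; the URLs, namespace, and output file are placeholders. The /sharetypes switch, available from the .NET 2.0 version of wsdl.exe, makes the tool reuse identical types across the listed services instead of duplicating them:

wsdl.exe /sharetypes /namespace:MyCompany.Proxies /out:Proxies.cs ^
    http://svr/foo.asmx?WSDL http://svr/bar.asmx?WSDL

Then add Proxies.cs to the Web Application project by hand, as the answer suggests.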
Q: Is there a standard HTML layout with multiple CSS styles available? When it comes to web-design, I am horrible at producing anything remotely good looking. Thankfully there are a lot of free sources for design templates. However, a problem with these designs is that they just cover a single page, and not many use cases. If you take a look at CSS Zen Gardens, they have 1 single HTML file, and can radically style it differently by just changing the CSS file. Now I am wondering if there is a standard HTML layout (tags and ids), that covers a lot of use cases, and can be generically themed with different CSS files like Zen Garden. What I am imagining is a set of rules of how you write your HTML, and what boxes, lists, menus and styles you are supposed to use. A set of standard test pages covering the various uses can be created, and a new CSS file will have to support all the different pages in a nice view. Are there any projects that cover anything similar to what I am describing? A: Check out the Grids framework from YUI. Particularly awesome is the Grid Builder. Also, they have a set of reset, base, and font CSS files that will give you a good baseline to build on. A: I generally just try to follow the guidelines set by the HTML standard itself. Headings go in "h" tags (so one H1 tag for the main heading, then one or more H2 tags under that etc). Free text gets grouped in paragraphs in P tags. Logically-grouped sections of information go in DIV tags. Any kind of list (even menus that you eventually might want horizontally laid out) belong in list tags like UL, OL or DL. Tables of information go in TABLE tags. DON'T use table tags for layout. Be smart with your ID and CLASS attributes. Keep IDs unique and assign them to elements that you know represent something unique on the page, like a navigation menu or a page footer. Assign the same class to elements that are repeated but similar (which you might want to render with a similar visual style). I always start with a very plain, vertical page - just run everything I want down the page in black and white. Then I start adding CSS to make sure the bits are formatted and laid out the way I want. Take a look at the source of my home page for an example of what I'm talking about. A: I've used Blueprint CSS, it's easy and useful as you'll see. It also has some ruby scripts that allow you to change the number of columns and the distance between them. By default it's 950px for a span-24 element. A: BlueprintCSS was, from what I know, the first CSS framework. Like the YUI CSS Framework, it helps you handle layout. That kind of framework will help you build multiple CSS files for your site. BlueprintCSS is a quite mature project so I encourage you to check it out.
Is there a standard HTML layout with multiple CSS styles available?
When it comes to web-design, I am horrible at producing anything remotely good looking. Thankfully there are a lot of free sources for design templates. However, a problem with these designs is that they just cover a single page, and not many use cases. If you take a look at CSS Zen Gardens, they have 1 single HTML file, and can radically style it differently by just changing the CSS file. Now I am wondering if there is a standard HTML layout (tags and ids), that covers a lot of use cases, and can be generically themed with different CSS files like Zen Garden. What I am imagining is a set of rules of how you write your HTML, and what boxes, lists, menus and styles you are supposed to use. A set of standard test pages covering the various uses can be created, and a new CSS file will have to support all the different pages in a nice view. Are there any projects that cover anything similar to what I am describing?
[ "Check out the Grids framework from YUI. Particularly awesome is the Grid Builder. Also, they have a set of reset, base, and font CSS files that will give you a good baseline to build on.\n", "I generally just try to follow the guidelines set by the HTML standard itself. \n\nHeadings go in \"h\" tags (so one H1 tag for the main heading, then one or more H2 tags under that etc).\nFree text gets grouped in paragraphs in P tags.\nLogically-grouped sections of information go in DIV tags.\nAny kind of list (even menus that you eventually might want horizontally laid out) belong in list tags like UL, OL or DL.\nTables of information go in TABLE tags. DON'T use table tags for layout.\nBe smart with your ID and CLASS attributes. Keep IDs unique and assign them to elements that you know represent something unique on the page, like a navigation menu or a page footer. Assign the same class to elements that are repeated but similar (which you might want to render with a similar visual style).\n\nI always start with a very plain, vertical page - just run everything I want down the page in black and white. Then I start adding CSS to make sure the bits are formatted and laid out the way I want.\nTake a look at the source of my home page for an example of what I'm talking about.\n", "I've used Bluprint CSS, it's easy and useful as you'll see. It also has some ruby scripts that allow you to change the number of columns and the distance between them. By default it's 950px for a span-24 element.\n", "BluePrintCSS was, from what I know, the first CSS framework.\nAs YUI CSS Framework, It's help you to handle layout.\nThat kind of framework will help you to build multiple CSS for your site.\nBluePrintCSS is a quite mature project so I encourage you to check it out.\n" ]
[ 5, 2, 1, 1 ]
[]
[]
[ "css", "html" ]
stackoverflow_0000043400_css_html.txt
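To make the question's idea concrete, a Zen-Garden-style skeleton could look like the sketch below. This is illustrative only; the ids and classes are common conventions rather than a published standard. Every page reuses the same markup, and each theme ships only a different stylesheet:

<body id="home">
  <div id="header"><h1>Site title</h1></div>
  <ul id="nav">
    <li class="active"><a href="/">Home</a></li>
    <li><a href="/about">About</a></li>
  </ul>
  <div id="content">
    <h2>Page heading</h2>
    <p>Body copy goes here.</p>
  </div>
  <div id="footer"><p>Footer text</p></div>
</body>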
Q: How do I serialize a DOM to XML text, using JavaScript, in a cross browser way? I have an XML object (loaded using XMLHTTPRequest's responseXML). I have modified the object (using jQuery) and would like to store it as text in a string. There is apparently a simple way to do it in Firefox et al: var xmlString = new XMLSerializer().serializeToString( doc ); (from rosettacode) But how does one do it in IE6 and other browsers (without, of course, breaking Firefox)? A: You can use doc.xml in Internet Explorer. You'll get something like this: function xml2Str(xmlNode) { try { // Gecko- and Webkit-based browsers (Firefox, Chrome), Opera. return (new XMLSerializer()).serializeToString(xmlNode); } catch (e) { try { // Internet Explorer. return xmlNode.xml; } catch (e) { //Other browsers without XML Serializer alert('Xmlserializer not supported'); } } return false; } Found it here.
How do I serialize a DOM to XML text, using JavaScript, in a cross browser way?
I have an XML object (loaded using XMLHTTPRequest's responseXML). I have modified the object (using jQuery) and would like to store it as text in a string. There is apparently a simple way to do it in Firefox et al: var xmlString = new XMLSerializer().serializeToString( doc ); (from rosettacode) But how does one do it in IE6 and other browsers (without, of course, breaking Firefox)?
[ "You can use doc.xml in internet exlporer.\nYou'll get something like this:\nfunction xml2Str(xmlNode) {\n try {\n // Gecko- and Webkit-based browsers (Firefox, Chrome), Opera.\n return (new XMLSerializer()).serializeToString(xmlNode);\n }\n catch (e) {\n try {\n // Internet Explorer.\n return xmlNode.xml;\n }\n catch (e) { \n //Other browsers without XML Serializer\n alert('Xmlserializer not supported');\n }\n }\n return false;\n}\n\nFound it here.\n" ]
[ 35 ]
[]
[]
[ "dom", "javascript", "serialization", "xml" ]
stackoverflow_0000043455_dom_javascript_serialization_xml.txt
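For context, usage of the xml2Str helper from the answer might look like this sketch (the handler name is invented; it assumes the XMLHttpRequest has already completed with an XML response):

function onXhrComplete(xmlHttp) {
    var doc = xmlHttp.responseXML;      // DOM document parsed from the response
    // ... modify doc with jQuery or raw DOM calls ...
    var xmlString = xml2Str(doc);
    if (xmlString === false) {
        return;                         // neither serializer was available
    }
    // xmlString now holds the markup as text, e.g. for re-POSTing.
}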
Q: SQL Server 2005 One-way Replication In the business I work for we are discussing methods to reduce the read load on our primary database. One option that has been suggested is to have live one-way replication from our primary database to a slave database. Applications would then read from the slave database and write directly to the primary database. So... Application Reads From Slave Application Writes to Primary Primary Updates Slave Automatically What are the major pros and cons for this method? A: A few cons: 2 points of failure Application logic will have to take into account the delay between writing something and then reading it, since it won't be available immediately from the secondary database A strategy I have used is to send key reporting data to a secondary database nightly, de-normalizing it on the way, so that beefy queries can run on that database instead of locking up tables and stealing resources from the OLTP server. I'm not using any formal data warehousing or replication tools, rather I identify problem queries that are Ok without up-to-the-minute data and create data structures on the secondary server specifically for those queries. There are definitely pros to the "replicate everything" approach: You can run any ad-hoc query on the secondary, since it has all of your data If your primary server dies, you can re-purpose the secondary quickly to take over A: We are using one-way replications, but not from the same application. Our applications are reading-writing to the master database, the data gets synchronized to the replica database, and the reporting tools are using this replica. We don't want our application to read from a different database, so in this scenario I would suggest using file groups and partitioning on the master database. Using file groups (especially on different drives) and partitioning of files and indexes can help on performance a lot.
SQL Server 2005 One-way Replication
In the business I work for we are discussing methods to reduce the read load on our primary database. One option that has been suggested is to have live one-way replication from our primary database to a slave database. Applications would then read from the slave database and write directly to the primary database. So... Application Reads From Slave Application Writes to Primary Primary Updates Slave Automatically What are the major pros and cons for this method?
[ "A few cons:\n\n2 points of failure\nApplication logic will have to take into account the delay between writing something and then reading it, since it won't be available immediately from the secondary database\n\nA strategy I have used is to send key reporting data to a secondary database nightly, de-normalizing it on the way, so that beefy queries can run on that database instead of locking up tables and stealing resources from the OLTP server. I'm not using any formal data warehousing or replication tools, rather I identify problem queries that are Ok without up-to-the-minute data and create data structures on the secondary server specifically for those queries.\nThere are definitely pros to the \"replicate everything\" approach:\n\nYou can run any ad-hoc query on the secondary, since it has all of your data\nIf your primary server dies, you can re-purpose the secondary quickly to take over\n\n", "We are using one-way replications, but not from the same application. Our applications are reading-writing to the master database, the data gets synchronized to the replca database, and the reporting tools are using this replica.\nWe don't want our application to read from a different database, so in this scenario I would suggest using file groups and partitioning on the master database. Using file groups (especially on different drives) and partitioning of files and indexes can help on performance a lot.\n" ]
[ 2, 1 ]
[]
[]
[ "replication", "sql_server" ]
stackoverflow_0000043504_replication_sql_server.txt
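At the application level, the read/write split described in the question often reduces to choosing a connection per operation. A minimal C# sketch, assuming two connection strings named "Primary" and "ReplicaReadOnly" in the config file (hypothetical names):

using System.Configuration;
using System.Data.SqlClient;

public static class ConnectionFactory
{
    // All writes go to the primary.
    public static SqlConnection ForWrites()
    {
        return new SqlConnection(
            ConfigurationManager.ConnectionStrings["Primary"].ConnectionString);
    }

    // Reads that can tolerate replication latency go to the slave; reads
    // that must see the latest write should use ForWrites() instead.
    public static SqlConnection ForReads()
    {
        return new SqlConnection(
            ConfigurationManager.ConnectionStrings["ReplicaReadOnly"].ConnectionString);
    }
}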
Q: Can I prevent an inherited virtual method from being overridden in subclasses? I have some classes laid out like this class A { public virtual void Render() { } } class B : A { public override void Render() { // Prepare the object for rendering SpecialRender(); // Do some cleanup } protected virtual void SpecialRender() { } } class C : B { protected override void SpecialRender() { // Do some cool stuff } } Is it possible to prevent the C class from overriding the Render method, without breaking the following code? A obj = new C(); obj.Render(); // calls B.Render -> c.SpecialRender A: You can seal individual methods to prevent them from being overridable: public sealed override void Render() { // Prepare the object for rendering SpecialRender(); // Do some cleanup } A: Yes, you can use the sealed keyword in the B class's implementation of Render: class B : A { public sealed override void Render() { // Prepare the object for rendering SpecialRender(); // Do some cleanup } protected virtual void SpecialRender() { } } A: In B, do protected override sealed void Render() { ... } A: try sealed class B : A { protected sealed override void SpecialRender() { // do stuff } } class C : B { protected override void SpecialRender() { // not valid } } Of course, I think C can get around it by being new. A: Another (better?) way is probably using the new keyword to prevent a particular virtual method from being overridden: class A { public virtual void Render() { } } class B : A { public override void Render() { // Prepare the object for rendering SpecialRender(); // Do some cleanup } protected virtual void SpecialRender() { } } class B2 : B { public new void Render() { } } class C : B2 { protected override void SpecialRender() { } //public override void Render() // compiler error //{ //} }
Can I prevent an inherited virtual method from being overridden in subclasses?
I have some classes laid out like this class A { public virtual void Render() { } } class B : A { public override void Render() { // Prepare the object for rendering SpecialRender(); // Do some cleanup } protected virtual void SpecialRender() { } } class C : B { protected override void SpecialRender() { // Do some cool stuff } } Is it possible to prevent the C class from overriding the Render method, without breaking the following code? A obj = new C(); obj.Render(); // calls B.Render -> c.SpecialRender
[ "You can seal individual methods to prevent them from being overridable:\npublic sealed override void Render()\n{\n // Prepare the object for rendering \n SpecialRender();\n // Do some cleanup \n}\n\n", "Yes, you can use the sealed keyword in the B class's implementation of Render:\nclass B : A\n{\n public sealed override void Render()\n {\n // Prepare the object for rendering\n SpecialRender();\n // Do some cleanup\n }\n\n protected virtual void SpecialRender()\n {\n }\n}\n\n", "In B, do \nprotected override sealed void Render() { ... }\n\n", "try sealed\nclass B : A\n{\n protected sealed override void SpecialRender()\n {\n // do stuff\n }\n}\n\nclass C : B\n protected override void SpecialRender()\n {\n // not valid\n }\n}\n\nOf course, I think C can get around it by being new.\n", "An other (better ?) way is probablby using the new keyword to prevent a particular virtual method from being overiden:\nclass A\n{\n public virtual void Render()\n {\n }\n}\nclass B : A\n{\n public override void Render()\n {\n // Prepare the object for rendering \n SpecialRender();\n // Do some cleanup \n }\n protected virtual void SpecialRender()\n {\n }\n}\nclass B2 : B\n{\n public new void Render()\n {\n }\n}\nclass C : B2\n{\n protected override void SpecialRender()\n {\n }\n //public override void Render() // compiler error \n //{\n //}\n}\n\n" ]
[ 35, 3, 1, 1, 0 ]
[ "yes. If you mark a method as Sealed then it can not be overriden in a derived class.\n" ]
[ -1 ]
[ "c#", "polymorphism" ]
stackoverflow_0000043511_c#_polymorphism.txt
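Putting the accepted answer back into the question's class layout gives the following sketch; the commented-out override shows the compiler error a further subclass would hit:

class A
{
    public virtual void Render() { }
}

class B : A
{
    public sealed override void Render()   // sealed: no further overrides
    {
        SpecialRender();
    }

    protected virtual void SpecialRender() { }
}

class C : B
{
    protected override void SpecialRender() { }  // still allowed

    // public override void Render() { }
    // error CS0239: cannot override inherited member 'B.Render()'
    // because it is sealed
}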
Q: How are partial methods used in C# 3.0? I have read about partial methods in the latest C# language specification, so I understand the principles, but I'm wondering how people are actually using them. Is there a particular design pattern that benefits from partial methods? A: Partial methods have been introduced for similar reasons to why partial classes were in .Net 2. A partial class is one that can be split across multiple files - the compiler builds them all into one file as it runs. The advantage for this is that Visual Studio can provide a graphical designer for part of the class while coders work on the other. The most common example is the Form designer. Developers don't want to be positioning buttons, input boxes, etc by hand most of the time. In .Net 1 it was auto-generated code in a #region block In .Net 2 these became separate designer classes - the form is still one class, it's just split into one file edited by the developers and one by the form designer This makes maintaining both much easier. Merges are simpler and there's less risk of the VS form designer accidentally undoing coders' manual changes. In .Net 3.5 Linq has been introduced. Linq has a DBML designer for building your data structures, and that generates auto-code. The extra bit here is that the code needed to provide methods that developers might want to fill in. As developers will extend these classes (with extra partial files) they couldn't use abstract methods here. The other issue is that most of the time these methods won't be called, and calling empty methods is a waste of time. Empty methods are not optimised out. So Linq generates empty partial methods. If you don't create your own partial to complete them the C# compiler will just optimise them out. So that it can do this partial methods always return void. If you create a new Linq DBML file it will auto-generate a partial class, something like [System.Data.Linq.Mapping.DatabaseAttribute(Name="MyDB")] public partial class MyDataContext : System.Data.Linq.DataContext { ... partial void OnCreated(); partial void InsertMyTable(MyTable instance); partial void UpdateMyTable(MyTable instance); partial void DeleteMyTable(MyTable instance); ... Then in your own partial file you can extend this: public partial class MyDataContext { partial void OnCreated() { //do something on data context creation } } If you don't extend these methods they get optimised right out. Partial methods can't be public - as then they'd have to be there for other classes to call. If you write your own code generators I can see them being useful, but otherwise they're only really useful for the VS designer. The example I mentioned before is one possibility: //this code will get optimised out if no body is implemented partial void DoSomethingIfCompFlag(); #if COMPILER_FLAG //this code won't exist if the flag is off partial void DoSomethingIfCompFlag() { //your code } #endif Another potential use is if you had a large and complex class split across multiple files you might want partial references in the calling file. However I think in that case you should consider simplifying the class first. A: Partial methods are very similar in concept to the GoF Template Method behavioural pattern (Design Patterns, p325). They allow the behaviour of an algorithm or operation to be defined in one place and implemented or changed elsewhere enabling extensibility and customisation. I've started to use partial methods in C# 3.0 instead of template methods because I think the code is cleaner. One nice feature is that unimplemented partial methods incur no runtime overhead as they're compiled away. A: Code generation is one of the main reasons they exist and one of the main reasons to use them. EDIT: Even though that link is to information specific to Visual Basic, the same basic principles are relevant to C#. A: I see them as lightweight events. You can have a reusable code file (usually autogenerated but not necessarily) and for each implementation, just handle the events you care about in your partial class. In fact, this is how it's used in LINQ to SQL (and why the language feature was invented). A: Here is the best resource for partial classes in C#.NET 3.0: http://msdn.microsoft.com/en-us/library/wa80x488(VS.85).aspx I try to avoid using partial classes (with the exception of partials created by Visual Studio for designer files; those are great). To me, it's more important to have all of the code for a class in one place. If your class is well designed and represents one thing (single responsibility principle), then all of the code for that one thing should be in one place.
How are partial methods used in C# 3.0?
I have read about partial methods in the latest C# language specification, so I understand the principles, but I'm wondering how people are actually using them. Is there a particular design pattern that benefits from partial methods?
[ "Partial methods have been introduced for similar reasons to why partial classes were in .Net 2.\nA partial class is one that can be split across multiple files - the compiler builds them all into one file as it runs.\nThe advantage for this is that Visual Studio can provide a graphical designer for part of the class while coders work on the other.\nThe most common example is the Form designer. Developers don't want to be positioning buttons, input boxes, etc by hand most of the time.\n\nIn .Net 1 it was auto-generated code in a #region block\nIn .Net 2 these became separate designer classes - the form is still one class, it's just split into one file edited by the developers and one by the form designer\n\nThis makes maintaining both much easier. Merges are simpler and there's less risk of the VS form designer accidentally undoing coders' manual changes.\nIn .Net 3.5 Linq has been introduced. Linq has a DBML designer for building your data structures, and that generates auto-code.\nThe extra bit here is that code needed to provide methods that developers might want to fill in.\nAs developers will extend these classes (with extra partial files) they couldn't use abstract methods here.\nThe other issue is that most of the time these methods wont be called, and calling empty methods is a waste of time.\nEmpty methods are not optimised out.\nSo Linq generates empty partial methods. If you don't create your own partial to complete them the C# compiler will just optimise them out.\nSo that it can do this partial methods always return void.\nIf you create a new Linq DBML file it will auto-generate a partial class, something like\n[System.Data.Linq.Mapping.DatabaseAttribute(Name=\"MyDB\")]\npublic partial class MyDataContext : System.Data.Linq.DataContext\n{\n ...\n\n partial void OnCreated();\n partial void InsertMyTable(MyTable instance);\n partial void UpdateMyTable(MyTable instance);\n partial void DeleteMyTable(MyTable instance);\n\n ...\n\nThen in your own partial file you can extend this:\npublic partial class MyDataContext\n{\n partial void OnCreated() {\n //do something on data context creation\n }\n}\n\nIf you don't extend these methods they get optimised right out.\nPartial methods can't be public - as then they'd have to be there for other classes to call. If you write your own code generators I can see them being useful, but otherwise they're only really useful for the VS designer.\nThe example I mentioned before is one possibility:\n//this code will get optimised out if no body is implemented\npartial void DoSomethingIfCompFlag();\n\n#if COMPILER_FLAG\n//this code won't exist if the flag is off\npartial void DoSomethingIfCompFlag() {\n //your code\n}\n#endif\n\nAnother potential use is if you had a large and complex class spilt across multiple files you might want partial references in the calling file. However I think in that case you should consider simplifying the class first.\n", "Partial methods are very similar in concept to the GoF Template Method behavioural pattern (Design Patterns, p325). \nThey allow the behaviour of an algorithm or operation to be defined in one place and implemented or changed elsewhere enabling extensibility and customisation. 
I've started to use partial methods in C# 3.0 instead of template methods because the I think the code is cleaner.\nOne nice feature is that unimplemented partial methods incur no runtime overhead as they're compiled away.\n", "Code generation is one of main reasons they exist and one of the main reasons to use them.\n\nEDIT: Even though that link is to information specific to Visual Basic, the same basic principles are relevant to C#.\n", "I see them as lightweight events. You can have a reusable code file (usually autogenerated but not necessarily) and for each implementation, just handle the events you care about in your partial class. In fact, this is how it's used in LINQ to SQL (and why the language feature was invented).\n", "Here is the best resource for partial classes in C#.NET 3.0: http://msdn.microsoft.com/en-us/library/wa80x488(VS.85).aspx\nI try to avoid using partial classes (with the exception of partials created by Visual Studio for designer files; those are great). To me, it's more important to have all of the code for a class in one place. If your class is well designed and represents one thing (single responsibility principle), then all of the code for that one thing should be in one place.\n" ]
[ 23, 10, 2, 2, 0 ]
[]
[]
[ ".net_3.5", "c#", "design_patterns", "partial_methods" ]
stackoverflow_0000042187_.net_3.5_c#_design_patterns_partial_methods.txt
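A minimal two-file sketch of the rules described above (the class and method names are invented; partial methods must return void and are implicitly private):

// File 1, e.g. generated code: declares the hook and calls it.
public partial class Order
{
    partial void OnValidate();   // declaration only

    public void Save()
    {
        OnValidate();            // compiled away if no body exists anywhere
        // ... persist the order ...
    }
}

// File 2, hand-written: optionally supplies the body.
public partial class Order
{
    partial void OnValidate()
    {
        // custom validation here
    }
}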
Q: When is Control.DestroyHandle called? When is this called? More specifically, I have a control I'm creating - how can I release handles when the window is closed? In normal Win32 I'd do it during WM_CLOSE - is DestroyHandle the .NET equivalent? I don't want to destroy the window handle myself - my control is listening for events on another object and when my control is destroyed, I want to stop listening to those events. Eg: void Dispose(bool disposing) { otherObject.Event -= myEventHandler; } A: Normally DestroyHandle is called in the Dispose method. So you need to make sure that all controls are disposed to avoid resource leaks. A: Dispose does call DestroyHandle, but not always. If I close the parent window, then Windows will destroy all child windows. In this situation Dispose won't call DestroyHandle (since it is already destroyed). In other words, DestroyHandle is called to destroy the window, it is not called when the window is destroyed. The solution is to override either OnHandleDestroyed, or Dispose. I'm opting for Dispose.
When is Control.DestroyHandle called?
When is this called? More specifically, I have a control I'm creating - how can I release handles when the window is closed? In normal Win32 I'd do it during WM_CLOSE - is DestroyHandle the .NET equivalent? I don't want to destroy the window handle myself - my control is listening for events on another object and when my control is destroyed, I want to stop listening to those events. Eg: void Dispose(bool disposing) { otherObject.Event -= myEventHandler; }
[ "Normally DestroyHandle is being called in Dispose method. So you need to make sure that all controls are disposed to avoid resource leaks.\n", "Dispose does call DestroyHandle, but not always. If I close the parent window, then Windows will destroy all child windows. In this situation Dispose won't call DestroyHandle (since it is already destroyed). In other words, DestroyHandle is called to destroy the window, it is not called when the window is destroyed.\nThe solution is to override either OnHandleDestroyed, or Dispose. I'm opting for Dispose.\n" ]
[ 3, 2 ]
[]
[]
[ ".net", "winforms" ]
stackoverflow_0000043490_.net_winforms.txt
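A sketch of the unhook-on-dispose pattern the question is aiming for (otherObject and myEventHandler are the question's own placeholders, assumed to be members of the control):

using System.Windows.Forms;

public class MyControl : Control
{
    // otherObject and myEventHandler are declared elsewhere in the control.
    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            // Stop listening before the control goes away.
            otherObject.Event -= myEventHandler;
        }
        base.Dispose(disposing);   // Control releases its window handle here
    }
}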
Q: Can a fixture be changed dynamically between test methods in CakePHP? Is it possible to have a fixture change between test methods? If so, how can I do this? My scenario for this problem: In the CakePHP framework I am building tests for a behavior that is configured by adding fields to the table. This is intended to work in the same way that adding the "created" and "modified" fields will auto-populate these fields on save. To test this I could create dozens of fixtures/model combos to test the different setups, but it would be a hundred times better, faster and easier to just have the fixture change "shape" between test methods. If you are not familiar with the CakePHP framework, you can maybe still help me as it uses SimpleTest Edit: rephrased question to be more general A: I'm not familiar specifically with CakePHP, but this kind of thing seems to happen anywhere with fixtures. There is no built in way in rails at least for this to happen, and I imagine not in cakePHP or anywhere else either because the whole idea of a fixture is that it is fixed There are 2 'decent' workarounds I'm aware of Write a changefixture method, and just before you do your asserts/etc, run it with the parameters of what to change. It should go and update the database or whatever needs to be done. Don't use fixtures at all, and use some kind of object factory or object generator to create your objects each time A: This is not an answer to my question, but a solution to my issue example. Instead of using multiple fixtures or changing the fixtures, I edit the Model::_schema arrays by removing the fields that I wanted to test without. This has the effect that the model acts as if the fields were not there, but I am unsure if this is a 100% test. I do not think it is for all cases, but it works for my example.
Can a fixture be changed dynamically between test methods in CakePHP?
Is it possible to have a fixture change between test methods? If so, how can I do this? My scenario for this problem: In the CakePHP framework I am building tests for a behavior that is configured by adding fields to the table. This is intended to work in the same way that adding the "created" and "modified" fields will auto-populate these fields on save. To test this I could create dozens of fixtures/model combos to test the different setups, but it would be a hundred times better, faster and easier to just have the fixture change "shape" between test methods. If you are not familiar with the CakePHP framework, you can maybe still help me as it uses SimpleTest Edit: rephrased question to be more general
[ "I'm not familiar specifically with CakePHP, but this kind of thing seems to happen anywhere with fixtures.\nThere is no built in way in rails at least for this to happen, and I imagine not in cakePHP or anywhere else either because the whole idea of a fixture, is that it is fixed\nThere are 2 'decent' workarounds I'm aware of\n\nWrite a changefixture method, and just before you do your asserts/etc, run it with the parameters of what to change. It should go and update the database or whatever needs to be done.\nDon't use fixtures at all, and use some kind of object factory or object generator to create your objects each time\n\n", "This is not an answer to my quetion, but a solution to my issue example.\nInstead of using multiple fixtures or changing the fixtures, I edit the Model::_schema arrays by removing the fields that I wanted to test without. This has the effect that the model acts as if the fields was not there, but I am unsure if this is a 100% test. I do not think it is for all cases, but it works for my example.\n" ]
[ 0, 0 ]
[]
[]
[ "cakephp", "fixture", "unit_testing" ]
stackoverflow_0000037785_cakephp_fixture_unit_testing.txt
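The schema-editing workaround from the second answer might look like this hypothetical sketch against a CakePHP 1.x-era test case (the model, fixture, and field names are all invented):

class MyBehaviorTest extends CakeTestCase {
    var $fixtures = array('app.thing');

    function testBehaviorWithoutCreatedField() {
        $Thing = ClassRegistry::init('Thing');
        $Thing->schema();                   // make sure the schema is loaded
        unset($Thing->_schema['created']);  // model now acts as if the column is gone
        // ... exercise the behavior and assert on the saved data ...
    }
}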
Q: Dynamic Form Controls Using C# 2.0 what is the best way to implement dynamic form controls? I need to provide a set of controls per data object, so should I just do it manually and lay them out while incrementing the top value, or is there a better way? A: You can use panels with automatic layout such as FlowLayoutPanel and TableLayoutPanel. Unfortunately there are only 2 panels with automatic layout out of the box but you can create a custom layout panel. I would recommend you to read the following articles: How to: Create a Resizable Windows Form for Data Entry Walkthrough: Creating a Resizable Windows Form for Data Entry Another option would be using WPF (Windows Presentation Foundation). WPF is a perfect match for your task. WPF controls can be hosted in WinForms apps so you don't have to switch to it completely. A: @Sam I know this question was about Windows Forms, but you should definitely start looking at WPF. This sort of scenario is really easy in WPF with DataTemplates and TemplateSelectors. A: What do you mean by "dynamic"? A new, fixed set of controls for each data row in the data set? Then use a UserControl that contains your controls. Or do you mean that, depending on your data layout, you want to provide the user with a customized set of controls, say, one TextBox for each column? A: Yeah, I've found manually laying out controls (incrementing their Top property by the height of the control plus a margin as I go) to be reasonably effective. Another approach is to place your controls in Panels with Dock set to Top, so that each successive panel docks up against the one above. Then you can toggle the visibility of individual panels and the controls underneath will snap up to fill the available space. Be aware that this can be a bit unpredictable: showing a hidden panel that's docked can sometimes change its position relative to other docked controls. A: Well that's the way we are doing it right now on a project. But that's only useful for simple cases. I suggest you use some sort of template for more complex cases. For instance I used Reflection to map a certain type of control to a certain property on my domain objects on an older project. You could try generating the code from templates using t4 see T4 Templates in Visual Studio for Code Generation Screencast for a simple example. You can apply this to WinForms. Also DevExpress has a nice (expensive) framework, see DevExpress eXpressApp Framework™.
Dynamic Form Controls
Using C# 2.0 what is the best way to implement dynamic form controls? I need to provide a set of controls per data object, so should I just do it manually and lay them out while incrementing the top value, or is there a better way?
[ "You can use panels with automatic layout such as FlowLayoutPanel and TableLayoutPanel. \nUnfortunately there are only 2 panels with automatic layout out of box but you can create custom layout panel.\nI would recommend you to read following articles: \nHow to: Create a Resizable Windows Form for Data Entry \nWalkthrough: Creating a Resizable Windows Form for Data Entry\nAnother option would be using of WPF (Windows Presentation Presentation).\nWPF is a perfect match for your task.\nWPF controls can be hosted in WinForms apps so you don't have to switch to it completely.\n", "@Sam I know this question was about Windows Forms, but you should definitely start looking at WPF. This sort of scenario is really easy in WPF with DataTemplates and TemplateSelectors.\n", "What do you mean by “dynamic”? A new, fixed set of controls for each data row in the data set? Then use a UserControl that contains your controls.\nOr do you mean that, depending on your data layout, you want to provide the user with a customized set of controls, say, one TextBox for each column?\n", "Yeah, I've found manually layout out controls (incrementing their Top property by the height of the control plus a margin as I go) to be reasonably effective.\nAnother approach is to place your controls in Panels with Dock set to Top, so that each successive panel docks up against the one above. Then you can toggle the visibility of individual panels and the controls underneath will snap up to fill the available space. Be aware that this can be a bit unpredictable: showing a hidden panel that's docked can sometimes change its position relative to other docked controls.\n", "Well that's the way we are doing it right now on a project. but that's only useful for simple cases. I suggest you use some sort of template for more complex cases.\nFor instance I used Reflection to map a certain type of control to a certain property on my domain objects on an older project.\nYou could try generating the code from templates using t4 see T4 Templates in Visual Studio for Code Generation Screencast for a simple example. You can apply this to WinForms.\nAlso DevExperience has a nice ( expensive ) framework, see DevExpress eXpressApp Framework™ .\n" ]
[ 8, 2, 1, 1, 1 ]
[]
[]
[ ".net", "c#", "winforms" ]
stackoverflow_0000043536_.net_c#_winforms.txt
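A C# 2.0 sketch of the reflection-plus-FlowLayoutPanel approach suggested above (myDataObject is a placeholder for whatever data object drives the form):

using System.Reflection;
using System.Windows.Forms;

public class DataEntryForm : Form
{
    // Builds one labelled TextBox per public property of the data object.
    private void BuildControls(object myDataObject)
    {
        FlowLayoutPanel panel = new FlowLayoutPanel();
        panel.Dock = DockStyle.Fill;
        panel.FlowDirection = FlowDirection.TopDown;
        panel.AutoScroll = true;

        foreach (PropertyInfo prop in myDataObject.GetType().GetProperties())
        {
            Label label = new Label();
            label.Text = prop.Name;
            label.AutoSize = true;
            panel.Controls.Add(label);

            TextBox box = new TextBox();
            box.Width = 200;
            box.Tag = prop.Name;   // remember which property this box edits
            panel.Controls.Add(box);
        }

        Controls.Add(panel);
    }
}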
Q: How to select an SQL database? We're living in a golden age of databases, with numerous high quality commercial and free databases. This is great, but the downside is there's not a simple obvious choice for someone who needs a database for his next project. What are the constraints/criteria you use for selecting a database? How well do the various databases you've used meet those constraints/criteria? What special features do the databases have? Which databases do you feel comfortable recommending to others? etc... A: I would think first on what the system requirements are for data access, data security, scalability, performance, disconnected scenarios, data transformation, data sizing. On the other side, consider also the experience and background of developers, operators, platform administrators. You should also think on what constraints you have regarding programming languages, operating systems, memory footprint, network bandwidth, hardware. Last, but not least, you have to think about business issues like budget for licences, support, operation. After all those considerations you should end up with just a couple of options and the selection should be easier. In other words, select the technology that suits the best the constraints and needs of your organization and project. I certainly think that you are right on saying that it is not an obvious choice given the wide number of alternatives, but this is the only way I think you can narrow them to the ones that are really feasible for your project. A: My selection criteria (mainly programming centric): Maintenance: How are updates/hotfixes installed? Transaction control: How it is implemented Are Stored Procedures supported? Can you use exception handling in Stored Procedures? Costs As a benefit: Can you use recursion on Stored Procedures? (E.g. in SQL Server 2000 the recursion stops after 32 passes IIRC) A: For most people in a corporate environment the choice comes down to "the one we have". Since you seem to be fortunate enough to have a choice, I'll take a quick run through the questions and maybe pose a few more at the end. The biggest criterion may be cost. Do you want/are you prepared to pay for your DBMS platform? If not, then Oracle, MS SQL Server, Sybase and others are probably out, although if you're not building a commercial app then there may be some wiggle room. Also, platform - can you run the software on your hardware? Other dimensions for consideration might include expected number of concurrent connections, transactional vs mostly reads, size, availability and I guess lots of others. "Special features" are, in the main, to be avoided - in my cynical world-view they're intended to lock you into a platform. So something like Oracle's PL/SQL is a feature that, while powerful (and likely to mean the need for extra CPU power at more licensing cost) is not portable. If you expect extremely high volumes then partitioning may be useful, I suppose. I have worked with Oracle, MS SQL Server, MySQL, PostgreSQL, SQLite and Sybase that I can think of. I'd happily recommend all but Sybase, about which I have some concerns these days (I could easily be wrong, but personally I think the money could be better spent elsewhere) but not all for the same applications. Ideally, I like to have the warm feeling that it doesn't really matter what DB platform I'm using because I can port easily. With a good abstraction layer between data and business logic, I should be able to develop locally against, say, the excellent SQLite and implement painlessly on, for example, Postgres. With something like ActiveRecord from Rails coupled with a little awareness of things like differences in reserved words, this is almost completely cost-free. A: Surely the most compelling factor is the expertise of you or your team...or the pool of resources you are likely to hire in the future. I would tend to go with the grain most of the time, using MySQL in a LAMP team and SQL Server in a MS team, since either of these products is capable of doing everything necessary even in a high-load environment. The benefits of any other database are going to be marginal compared to the pain of learning how to use it well. The only exception to this, in my opinion, would be in a high-demand environment where: a. the obvious choice has been tried and is failing b. the benefits of scaling multiply the marginal benefit to such a degree that it will be worth the cost of using something unexpected. I would assume the need to hire at least two and preferably three excellent DBAs with long term familiarity with the new database. And first I would try to hire them for the technology that was failing, because it is more likely to be the way it's used than the technology itself that is causing the problem. A: The existing answers are great. It's worth bearing in mind that Oracle now has an XE version of its 10g database which is available for free and comes with Application Express, a great web based development environment. It is limited, 4GB HD, 1 GB Ram and uses only one CPU. This is enough to run a smaller system though and can be upgraded easily at a later date if necessary. Oracle can be one of the toughest to learn but is also one of the best to have on your CV :-) I think SQLServer from Microsoft also has a 'starter' type database. Don't discount the commercial products - if you are going to bet your company on a database technology I would rather be using a product from Oracle or Microsoft personally. That's not to say there is anything wrong with Open Source. Spend a while evaluating them :-) A: Linux, Web Hosted - MySQL (PostgreSQL maybe) Mainstream SME - MS SQL Big Iron (banking etc) - Oracle Thinking about anything other than those three is masturbation - any of the other databases becomes a discussion about niche products to solve particular problems that you probably haven't encountered yet. If you choose anything other than the three above you will - Struggle to find people to work on the project or keep the database going Struggle to motivate your decision without an academic discussion Someone will curse you, your ancestors and your lineage a few years down the line - and replace your choice anyway. Niche databases are not where architectural strides are made - it is technologies like middleware, messaging, cloud services etc where you can afford to (and should) go out on a limb to find good products.
How to select an SQL database?
We're living in a golden age of databases, with numerous high quality commercial and free databases. This is great, but the downside is there's not a simple obvious choice for someone who needs a database for his next project. What are the constraints/criteria you use for selecting a database? How well do the various databases you've used meet those constraints/criteria? What special features do the databases have? Which databases do you feel comfortable recommending to others? etc...
[ "I would think first on what the system requirements are for data access, data security, scalability, performance, disconnected scenarios, data transformation, data sizing. \nOn the other side, consider also the experience and background of developers, operators, platform administrators.\nYou should also think on what constraints you have regarding programming languages, operating systems, memory footprint, network bandwidth, hardware.\nLast, but not least, you have to think about business issues like budget for licences, support, operation.\nAfter all those considerations you should end up with just a couple of options and the selection should be easier.\nIn other words, select the technology that suits the best the constraints and needs of your organization and project. \nI certainly think that you are right on saying that it is not an obvious choice given the wide number of alternatives, but this is the only way I think you can narrow them to the ones that are really feasible for your project.\n", "My selection criteria (mainly programming centric):\n\nMaintenance: How are updates/hotfixes installed?\nTransaction control: How it is implemented\nAre Stored Procedures supported?\nCan you use exception handling in Stored Procedures?\nCosts\nAs a benefit: Can you use recursion on Stored Procedures? (E.g. in SQL Server 2000 the recursion stops after 32 passes IIRC)\n\n", "For most people in a corporate environment the choice comes down to \"the one we have\".\nSince you seem to be fortunate enough to have a choice, I'll take a quick run through the questions and maybe pose a few more at the end.\nThe biggest criterion may be cost. Do you want/are you prepared to pay for your DBMS platform? If not, then Oracle, MS SQL Server, Sybase and others are probably out, although if you're not building a commercial app then there may be some wiggle room. Also, platform - can you run the software on your hardware?\nOther dimensions for consideration might include expected number of concurrent connections, transactional vs mostly reads, size, availability and I guess lots of others.\n\"Special features\" are, in the main, to be avoided - in my cynical world-view they're intended to lock you into a platform. So something like Oracle's PL/SQL is a feature that, while powerful (and likely to mean the need for extra CPU power at more licensing cost) is not portable. If you expect extremely high volumes then partitioning may be useful, I suppose.\nI have worked with Oracle, MS SQL Server, MySQL, PostreSQL, SQLite and Sybase that I can think of. I'd happily recommend all but Sybase, about which I have some concerns these days (I could easily be wrong, but personally I think the money could be better spent elsewhere) but not all for the same applications.\nIdeally, I like to have the warm feeling that it doesn't really matter what DB platform I'm using because I can port easily. With a good abstraction layer between data and business logic, I should be able to develop locally against, say, the excellent SQLite and implement painlessly on, for example, Postgres. With something like ActiveRecord from Rails coupled with a little awareness of things like differences in reserved words, this is almost completely cost-free.\n", "Surely the most compelling factor is the expertise of you or your team...or the pool of resource you are likely to hire in the future. 
I would tend to go with the grain most of the time, using MySQL in a LAMP team and SQL Server in a MS team, since either of these products is capable of doing everything necessary even in a high-load environment. \nThe benefits of any other database are going to be marginal compared to the pain of learning how to use it well. The only exception to this, in my opinion, would be in a high-demand environment where:\na. the obvious choice has been tried and is failing\nb. the benefits of scaling multiply the marginal benefit to such a degree that it will be worth the cost of using something unexpected. \nI would assume the need to hire at least two and preferably three excellent DBAs with long term familiarity with the new database. \nAnd first I would try to hire them for the technology that was failing, because it is more likely to be the way it's used than the technology itself that is causing the problem.\n", "The existing answers are great. It's worth bearing in mind that Oracle now has an XE version of it's 10g database which is available for free and comes with Application Express, a great web based development environment.\nIt is limited, 4GB HD, 1 GB Ram and uses only one CPU. This is enough to run smaller system though and can be upgraded easily at a later date if necessary. Oracle can be one of the toughest to learn but is also one of the best to have on your CV :-)\nI think SQLServer from Microsoft also has a 'starter' type database. Don't discount the commercial products - if you are going to bet your company on a database technology I would rather be using a product from Oracle or Microsoft personally. Thats not to say there is anything wrong with Open Source. \nSpend a while evaluating them :-)\n", "\nLinux, Web Hosted - MySQL (PostreSQL maybe)\nMainstream SME - MS SQL\nBig Iron (banking etc) - Oracle\n\nThinking about anything other than those three is masturbation - any of the other databases becomes a discussion about niche products to solve particular problems that you probably haven't encountered yet. If you choose anything other than the three above you will -\n\nStruggle to find people to work on the project or keep the database going\nStruggle to motivate your decision without an academic discussion\nSomeone will curse you, your ancestors and your lineage a few years down the line - and replace your choice anyway.\n\nNiche databases are not where architectural strides are made - it is technologies like middleware, messaging, cloud services etc where you can afford to (and should) go out on a limb to find good products.\n" ]
[ 8, 3, 3, 2, 2, 0 ]
[]
[]
[ "database", "sql" ]
stackoverflow_0000029743_database_sql.txt
Q: How sophisticated should my Ajax code be? I have seen simple example Ajax source codes in many online tutorials. What I want to know is whether using the source code in the examples are perfectly alright or not? Is there anything more to be added to the code that goes into a real world application? What all steps are to be taken to make the application more robust and secure? Here is a sample source code I got from the web: function getChats() { xmlHttp=GetXmlHttpObject(); if (xmlHttp==null) { return; } var url="getchat.php?latest="+latest; xmlHttp.onreadystatechange=stateChanged; xmlHttp.open("GET",url,true); xmlHttp.send(null); } function GetXmlHttpObject() { var xmlHttp=null; try { xmlHttp=new XMLHttpRequest(); } catch (e) { try { xmlHttp=new ActiveXObject("Msxml2.XMLHTTP"); } catch (e) { xmlHttp=new ActiveXObject("Microsoft.XMLHTTP"); } } return xmlHttp; } A: The code you posted is missing one important ingredient: the function stateChanged. If you don't quite understand the code you posted yourself, then what happens is when the call to getchats.php is complete, a function "stateChanged" is called and that function will be responsible for handling the response. Since the script you're calling and the function itself is prefixed with "gets" then I'm pretty sure the response is something you're going to be interested in. That aside, there are a number of ways to improve on the code you posted. I'd guess it works by declaring a single "xmlHttp" object and then making that available to every function (because if it doesn't, the stateChanged function has no way of getting the response). This is fine until you run an AJAX request before the last one (or last few) haven't replied yet, which in that case the object reference is overwritten to the latest request each time. Also, any AJAX code worth its salt provides functionality for sucess and failure (server errors, page not found, etc.) cases so that the appriopiate message can be delivered to the user. If you just want to use AJAX functionality on your website then I'd point you in the direction of jQuery or a similar framework. BUT if you actually want to understand the technology and what is happening behind the scenes, I'd continue doing what you're doing and asking specific questions as you try to build a small lightweight AJAX class on your own. This is how I done it, and although I use the jQuery framework today.. I'm still glad I know how it works behind the scenes. A: I would use a framework like DOMAssistant which has already done the hard work for you and will be more robust as well as adding extra useful features. Apart from that, you code looks like it would do the job. A: I would honestly recommend using one of the many libraries available for Ajax. I use prototype myself, while others prefer jQuery. I like prototype because it's pretty minimal. The Prototype Ajax tutorial explains it well. It also allows you to handle errors easily. new Ajax.Request('/some_url', { method:'get', onSuccess: function(transport){ var response = transport.responseText || "no response text"; alert("Success! \n\n" + response); }, onFailure: function(){ alert('Something went wrong...') } });
How sophisticated should my Ajax code be?
I have seen simple example Ajax source code in many online tutorials. What I want to know is whether using the source code in the examples is perfectly alright or not? Is there anything more to be added to the code that goes into a real world application? What steps should be taken to make the application more robust and secure? Here is a sample source code I got from the web: function getChats() { xmlHttp=GetXmlHttpObject(); if (xmlHttp==null) { return; } var url="getchat.php?latest="+latest; xmlHttp.onreadystatechange=stateChanged; xmlHttp.open("GET",url,true); xmlHttp.send(null); } function GetXmlHttpObject() { var xmlHttp=null; try { xmlHttp=new XMLHttpRequest(); } catch (e) { try { xmlHttp=new ActiveXObject("Msxml2.XMLHTTP"); } catch (e) { xmlHttp=new ActiveXObject("Microsoft.XMLHTTP"); } } return xmlHttp; }
[ "The code you posted is missing one important ingredient: the function stateChanged.\nIf you don't quite understand the code you posted yourself, then what happens is when the call to getchats.php is complete, a function \"stateChanged\" is called and that function will be responsible for handling the response. Since the script you're calling and the function itself is prefixed with \"gets\" then I'm pretty sure the response is something you're going to be interested in. \nThat aside, there are a number of ways to improve on the code you posted. I'd guess it works by declaring a single \"xmlHttp\" object and then making that available to every function (because if it doesn't, the stateChanged function has no way of getting the response). This is fine until you run an AJAX request before the last one (or last few) haven't replied yet, which in that case the object reference is overwritten to the latest request each time.\nAlso, any AJAX code worth its salt provides functionality for sucess and failure (server errors, page not found, etc.) cases so that the appriopiate message can be delivered to the user.\nIf you just want to use AJAX functionality on your website then I'd point you in the direction of jQuery or a similar framework.\nBUT if you actually want to understand the technology and what is happening behind the scenes, I'd continue doing what you're doing and asking specific questions as you try to build a small lightweight AJAX class on your own. This is how I done it, and although I use the jQuery framework today.. I'm still glad I know how it works behind the scenes.\n", "I would use a framework like DOMAssistant which has already done the hard work for you and will be more robust as well as adding extra useful features.\nApart from that, you code looks like it would do the job.\n", "I would honestly recommend using one of the many libraries available for Ajax. I use prototype myself, while others prefer jQuery. I like prototype because it's pretty minimal. The Prototype Ajax tutorial explains it well. It also allows you to handle errors easily.\nnew Ajax.Request('/some_url',\n {\n method:'get',\n onSuccess: function(transport){\n var response = transport.responseText || \"no response text\";\n alert(\"Success! \\n\\n\" + response);\n },\n onFailure: function(){ alert('Something went wrong...') }\n });\n\n" ]
[ 3, 0, 0 ]
[]
[]
[ "ajax" ]
stackoverflow_0000043507_ajax.txt
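A note on the record above: the snippet in the question references a stateChanged callback that it never defines, and the top answer explains what it is for without showing one. A minimal sketch of such a handler, assuming the question's global xmlHttp object (and inheriting the one-request-at-a-time limitation the answer warns about); displayChats is a hypothetical placeholder for whatever should consume the response:

    function stateChanged() {
        // readyState 4 means the request has completed
        if (xmlHttp.readyState == 4) {
            if (xmlHttp.status == 200) {
                // Success case: hand the response text to a renderer
                displayChats(xmlHttp.responseText);
            } else {
                // Failure case: server error, page not found, etc.
                alert("Request failed with status " + xmlHttp.status);
            }
        }
    }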
Q: Very simple C++ DLL that can be called from .net I'm trying to call a 3rd party vendor's C DLL from vb.net 2005 and am getting P/Invoke errors. I'm successfully calling other methods but have hit a bottle-neck on one of the more complex. The structures involved are horrendous and in an attempt to simplify the troubleshooting I'd like to create a C++ DLL to replicate the problem. Can somebody provide the smallest code snippet for a C++ DLL that can be called from .Net? I'm getting a Unable to find entry point named XXX in DLL error in my C++ dll. It should be simple to resolve but I'm not a C++ programmer. I'd like to use a .net declaration for the DLL of Declare Function Multiply Lib "C:\MyDll\Debug\MyDLL.DLL" Alias "Multiply" (ByVal ParOne As Integer, ByVal byvalParTwo As Integer) As Integer A: Try using the __decspec(dllexport) magic pixie dust in your C++ function declaration. This declaration sets up several things that you need to successfully export a function from a DLL. You may also need to use WINAPI or something similar: __declspec(dllexport) WINAPI int Multiply(int p1, int p2) { return p1 * p2; } The WINAPI sets up the function calling convention such that it's suitable for calling from a language such as VB.NET. A: You can try to look at the exported functions (through DumpBin or Dependency Walker) and see if the names are mangled. A: Using Greg's suggestion I found the following works. As mentioned I'm not a C++ programmer but just needed something practical. myclass.cpp #include "stdafx.h" BOOL APIENTRY DllMain( HANDLE hModule, DWORD ul_reason_for_call, LPVOID lpReserved ) { return TRUE; } int _stdcall multiply(int x , int y) { return x*y; } myclass.def LIBRARY myclass EXPORTS multiply @1 stdafx.cpp #include "stdafx.h" stdafx.h // stdafx.h : include file for standard system include files, // or project specific include files that are used frequently, but // are changed infrequently // #if !defined(AFX_STDAFX_H__5DB9057C_BAE6_48D8_8E38_464F6CB80026__INCLUDED_) #define AFX_STDAFX_H__5DB9057C_BAE6_48D8_8E38_464F6CB80026__INCLUDED_ #if _MSC_VER > 1000 #pragma once #endif // _MSC_VER > 1000 // Insert your headers here #define WIN32_LEAN_AND_MEAN // Exclude rarely-used stuff from Windows headers #include <windows.h> //{{AFX_INSERT_LOCATION}} // Microsoft Visual C++ will insert additional declarations immediately before the previous line. #endif // !defined(AFX_STDAFX_H__5DB9057C_BAE6_48D8_8E38_464F6CB80026__INCLUDED_)
Very simple C++ DLL that can be called from .net
I'm trying to call a 3rd party vendor's C DLL from VB.NET 2005 and am getting P/Invoke errors. I'm successfully calling other methods but have hit a bottleneck on one of the more complex ones. The structures involved are horrendous, and in an attempt to simplify the troubleshooting I'd like to create a C++ DLL to replicate the problem. Can somebody provide the smallest code snippet for a C++ DLL that can be called from .NET? I'm getting an Unable to find entry point named XXX in DLL error in my C++ DLL. It should be simple to resolve but I'm not a C++ programmer. I'd like to use a .NET declaration for the DLL of Declare Function Multiply Lib "C:\MyDll\Debug\MyDLL.DLL" Alias "Multiply" (ByVal ParOne As Integer, ByVal byvalParTwo As Integer) As Integer
[ "Try using the __decspec(dllexport) magic pixie dust in your C++ function declaration. This declaration sets up several things that you need to successfully export a function from a DLL. You may also need to use WINAPI or something similar:\n__declspec(dllexport) WINAPI int Multiply(int p1, int p2)\n{\n return p1 * p2;\n}\n\nThe WINAPI sets up the function calling convention such that it's suitable for calling from a language such as VB.NET.\n", "You can try to look at the exported functions (through DumpBin or Dependency Walker) and see if the names are mangled.\n", "Using Greg's suggestion I found the following works. As mentioned I'm not a C++ programmer but just needed something practical.\nmyclass.cpp\n #include \"stdafx.h\"\nBOOL APIENTRY DllMain( HANDLE hModule, \n DWORD ul_reason_for_call, \n LPVOID lpReserved\n )\n{\n return TRUE;\n}\n\nint _stdcall multiply(int x , int y)\n{\n return x*y;\n}\n\nmyclass.def\n LIBRARY myclass\nEXPORTS\n\nmultiply @1\n\nstdafx.cpp\n #include \"stdafx.h\"\nstdafx.h\n// stdafx.h : include file for standard system include files,\n// or project specific include files that are used frequently, but\n// are changed infrequently\n//\n\n#if !defined(AFX_STDAFX_H__5DB9057C_BAE6_48D8_8E38_464F6CB80026__INCLUDED_)\n#define AFX_STDAFX_H__5DB9057C_BAE6_48D8_8E38_464F6CB80026__INCLUDED_\n\n#if _MSC_VER > 1000\n#pragma once\n#endif // _MSC_VER > 1000\n\n\n// Insert your headers here\n#define WIN32_LEAN_AND_MEAN // Exclude rarely-used stuff from Windows headers\n\n#include <windows.h>\n\n\n//{{AFX_INSERT_LOCATION}}\n// Microsoft Visual C++ will insert additional declarations immediately before the previous line.\n\n#endif // !defined(AFX_STDAFX_H__5DB9057C_BAE6_48D8_8E38_464F6CB80026__INCLUDED_)\n\n" ]
[ 2, 0, 0 ]
[]
[]
[ ".net", "c++", "dll" ]
stackoverflow_0000039064_.net_c++_dll.txt
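One hedged footnote on the first answer in the record above: the calling-convention keyword (WINAPI, i.e. __stdcall) belongs after the return type, not before it, so the snippet as posted will not compile. A corrected sketch, assuming MSVC; note that extern "C" prevents C++ name mangling, but a 32-bit __stdcall export may still be decorated as _Multiply@8, which is why the accepted answer's .def file remains a reliable way to get the plain "Multiply" name that the VB.NET Declare statement expects:

    #include <windows.h>

    // Return type first, then the calling convention
    extern "C" __declspec(dllexport) int WINAPI Multiply(int p1, int p2)
    {
        return p1 * p2;
    }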
Q: Override WebClientProtocol.Timeout via web.config Is it possible to override the default value of the WebClientProtocol.Timeout property via web.config? <httpRuntime executionTimeout="500" /> <!-- this doesn't help --> A: I can't think of a way to have just the Timeout property changed automatically via the web.config. Manually configure the value or use DI to read the value in for you. It may also be possible to change the value globally in the machine.config.
Override WebClientProtocol.Timeout via web.config
Is it possible to override the default value of the WebClientProtocol.Timeout property via web.config? <httpRuntime executionTimeout="500" /> <!-- this doesn't help -->
[ "I cant think of a way to have just the Timeout property changed automatically via the webconfig.\nManually configure the value or use DI to read the value in for you. \nIt maybe possible also to change the value globally on the machine config.\n" ]
[ 2 ]
[]
[]
[ ".net", "configuration" ]
stackoverflow_0000043591_.net_configuration.txt
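To make the manual workaround in the record above concrete: store the timeout in appSettings and apply it when the proxy is constructed. A minimal sketch in C#; the key name and the MyWebService proxy class are hypothetical:

    // web.config:
    //   <appSettings>
    //     <add key="ServiceTimeoutMs" value="500" />
    //   </appSettings>

    MyWebService proxy = new MyWebService(); // a WebClientProtocol-derived proxy
    proxy.Timeout = int.Parse(
        System.Configuration.ConfigurationManager.AppSettings["ServiceTimeoutMs"]);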
Q: What happened to the .Net Framework Configuration tool? Older versions of the .Net Framework used to install "Microsoft .NET Framework v1.0 / v1.1 / v2.0 Configuration" in the Control Panel, under Administrative Tools. I just noticed that there isn't a v3.0 or v3.5 version of this. Is this functionality now hiding somewhere else, or do I have to use the command-line tools instead? A: For 3.5, you must install this tool: http://www.microsoft.com/downloads/details.aspx?FamilyID=4377f86d-c913-4b5c-b87e-ef72e5b4e065&displaylang=en And for 3.0 you must use the 2.0 config tool. Source of Answer. A: Both 3 and 3.5 still use the Common Language Runtime of .NET Framework 2.0. So no control panel is needed, as you can still use the 2.0 control panel. A: The .NET Framework versions 3.0 and 3.5 have been built incrementally on the .NET Framework version 2.0. This version can be used to manage code access security policy for the .NET Framework 3.0, 3.5, and later versions as well. A: To sort out the confusion between the apparently conflicting answers above, this is my current understanding of the answer: Use the 2.0 version, as DAC and Codeslayer recommended If you don't have the 2.0 version (mine was helpfully uninstalled when I removed VS2005 and installed VS2008), then you can either install VS2005, or download the Windows SDK, as per GateKiller's link On my PC, even downloading the SDK didn't work; it installed mscorcfg.msc but not mscorcfg.dll. Digging about in the GAC, I notice mscorcfg.dll v3.5, which confuses me even more. Anyway, there is an iffy-looking copy-dlls-and-hack-registry solution at http://home.hot.rr.com/graye/Articles/CodeAccessSecurity.htm, and that's what I'm going to try next. Wish me luck!
What happened to the .Net Framework Configuration tool?
Older versions of the .Net Framework used to install "Microsoft .NET Framework v1.0 / v1.1 / v2.0 Configuration" in the Control Panel, under Administrative Tools. I just noticed that there isn't a v3.0 or v3.5 version of this. Is this functionality now hiding somewhere else, or do I have to use the command-line tools instead?
[ "For 3.5, you must install this tool:\n\nhttp://www.microsoft.com/downloads/details.aspx?FamilyID=4377f86d-c913-4b5c-b87e-ef72e5b4e065&displaylang=en\n\nAnd for 3.0 you must use the 2.0 config tool.\nSource of Answer.\n", "Both 3 and 3.5 still use the Common Language Runtime of .NET Framework 2.0. So no control panel is needed, as you can still use the 2.0 control panel.\n", "The .NET Framework versions 3.0 and 3.5 have been built incrementally on the .NET Framework version 2.0. This version can be used to manage code access security policy for the .NET Framework 3.0, 3.5, and later versions as well.\n", "To sort out the confusion between the apparently conflicting answers above, this is my current understanding of the answer:\n\nUse the 2.0 version, as DAC and Codeslayer recommended\nIf you don't have the 2.0 version (mine was helpfully uninstalled when I removed VS2005 and installed VS2008), then you can either install VS2005, or download the Windows SDK, as per GateKiller's link\n\nOn my PC, even downloading the SDK didn't work; it installed mscorcfg.msc but not mscorcfg.dll. Digging about in the GAC, I notice mscorcfg.dll v3.5, which confuses me even more. Anyway, there is an iffy-looking copy-dlls-and-hack-registry solution at http://home.hot.rr.com/graye/Articles/CodeAccessSecurity.htm, and that's what I'm going to try next. Wish me luck!\n" ]
[ 3, 2, 1, 1 ]
[]
[]
[ ".net", ".net_3.5", "visual_studio" ]
stackoverflow_0000043509_.net_.net_3.5_visual_studio.txt
Q: What types of executables can be decompiled? I think that java executables (jar files) are trivial to decompile and get the source code. What about other languages? .net and all? Which all languages can compile only to a decompile-able code? A: In general, languages like Java, C#, and VB.NET are relatively easy to decompile because they are compiled to an intermediary language, not pure machine language. In their IL form, they retain more metadata than C code does when compiled to machine language. Technically you aren't getting the original source code out, but a variation on the source code that, when compiled, will give you the compiled code back. It isn't identical to the source code, as things like comments, annotations, and compiler directives usually aren't carried forward into the compiled code. A: Managed languages can be easily decompiled because executable must contain a lot of metadata to support reflection. Languages like C++ can be compiled to native code. Program structure can be totally changed during compilation\translation processes. Compiler can easily replace\merge\delete parts of your code. There is no 1 to 1 relationship between original and compiled (native) code. A: .NET is very easy to decompile. The best tool to do that would be the .NET reflector recently acquired by RedGate. A: Most languages can be decompiled but some are easier to decompile than others. .Net and Java put more information about the original program in the executables (method names, variable names etc.) so you get more of your original information back. C++ for example will translate variables and functions etc. to memory adresses (yeah I know this is a gross simplification) so the decompiler won't know what stuff was called. But you can still get some of the structure of the program back though. A: VB6 if compiled to pcode is also possible to decompile to almost full source using P32Dasm, Flash (or actionscript) is also possible to decompile to full source using something like Flare
What types of executables can be decompiled?
I think that Java executables (jar files) are trivial to decompile and get the source code from. What about other languages, .NET and the rest? Which languages compile only to decompilable code?
[ "In general, languages like Java, C#, and VB.NET are relatively easy to decompile because they are compiled to an intermediary language, not pure machine language. In their IL form, they retain more metadata than C code does when compiled to machine language. \nTechnically you aren't getting the original source code out, but a variation on the source code that, when compiled, will give you the compiled code back. It isn't identical to the source code, as things like comments, annotations, and compiler directives usually aren't carried forward into the compiled code.\n", "Managed languages can be easily decompiled because executable must contain a lot of metadata to support reflection.\nLanguages like C++ can be compiled to native code. Program structure can be totally changed during compilation\\translation processes.\nCompiler can easily replace\\merge\\delete parts of your code. There is no 1 to 1 relationship between original and compiled (native) code.\n", ".NET is very easy to decompile. The best tool to do that would be the .NET reflector recently acquired by RedGate.\n", "Most languages can be decompiled but some are easier to decompile than others. .Net and Java put more information about the original program in the executables (method names, variable names etc.) so you get more of your original information back. \nC++ for example will translate variables and functions etc. to memory adresses (yeah I know this is a gross simplification) so the decompiler won't know what stuff was called. But you can still get some of the structure of the program back though.\n", "VB6 if compiled to pcode is also possible to decompile to almost full source using P32Dasm, Flash (or actionscript) is also possible to decompile to full source using something like Flare\n" ]
[ 11, 5, 2, 1, 1 ]
[]
[]
[ "decompiling" ]
stackoverflow_0000043672_decompiling.txt
Q: Conditional Display in ASPX Pages on Sharepoint I wonder what the best practice for this scenario is: I have a Sharepoint Site (MOSS2007) with an ASPX Page on it. However, I cannot use any inline source and stuff like Event handlers do not work, because Sharepoint does not allow Server Side Script on ASPX Pages per default. Two solutions: Change the PageParserPath in web.config as per this site <PageParserPaths> <PageParserPath VirtualPath="/pages/test.aspx" CompilationMode="Always" AllowServerSideScript="true" /> </PageParserPaths> Create all the controls and Wire them up to Events in the .CS File, thus completely eliminating some of the benefits of ASP.net I wonder, what the best practice would be? Number one looks like it's the correct choice, but changing the web.config is something I want to use sparingly whenever possible. A: What does the ASPX page do? What functionality does it add? How are you adding the page into the site? By the looks of it this is just a "Web Part Page" in a document library. I would have to do a little research to be 100%, but my understanding is that inline code is ok, providing it's in a page that remains ghosted, and thereby trusted. Can you add your functionality into the site via a feature? I would avoide option 1, seems like bad advice to me. Allowing server side code in your page is a security risk as it then becomes possible for someone to inject malicious code. Sure you can secure the page, but we are talking remote execution with likely some pretty serious permissions. A: So in that case I would wrap it up in a feature and deploy it via a solution. This way I think you will avoid the issue you are seeing. This is especially useful if you plan to use this functionality within other sites too. You can also embed web parts directly in the page, much like you do a WebControl, thereby avoiding any gallery clutter. A: Thanks so far. I've successfully tried Andrew Connel's solution: http://www.andrewconnell.com/blog/articles/UsingCodeBehindFilesInSharePointSites.aspx Wrapping it into a solution is part of that, but the main problem was how to get the code into that, and it's more leaning towards Option 2 without having to create the controls in code. What I was missing: In the .cs File, it is required to manually add the "protected Button Trigger;" stuff, because there is no automatically generated .designer.cs file when using a class library. A: Well, it's a page that hosts user controls. It's a custom .aspx Page that will be created on the site, specially because I do not want to create WebParts. It's essentially an application running within Sharepoint, utilizing Lists and other functions, but all the functionality is only useful within the application, so flooding the web part gallery with countless web parts that only work in one place is something i'd like to avoid.
Conditional Display in ASPX Pages on Sharepoint
I wonder what the best practice for this scenario is: I have a Sharepoint Site (MOSS2007) with an ASPX Page on it. However, I cannot use any inline source and stuff like Event handlers do not work, because Sharepoint does not allow Server Side Script on ASPX Pages per default. Two solutions: Change the PageParserPath in web.config as per this site <PageParserPaths> <PageParserPath VirtualPath="/pages/test.aspx" CompilationMode="Always" AllowServerSideScript="true" /> </PageParserPaths> Create all the controls and Wire them up to Events in the .CS File, thus completely eliminating some of the benefits of ASP.net I wonder, what the best practice would be? Number one looks like it's the correct choice, but changing the web.config is something I want to use sparingly whenever possible.
[ "What does the ASPX page do? What functionality does it add? How are you adding the page into the site? By the looks of it this is just a \"Web Part Page\" in a document library. \nI would have to do a little research to be 100%, but my understanding is that inline code is ok, providing it's in a page that remains ghosted, and thereby trusted. Can you add your functionality into the site via a feature? \nI would avoide option 1, seems like bad advice to me. Allowing server side code in your page is a security risk as it then becomes possible for someone to inject malicious code. Sure you can secure the page, but we are talking remote execution with likely some pretty serious permissions.\n", "So in that case I would wrap it up in a feature and deploy it via a solution. This way I think you will avoid the issue you are seeing. This is especially useful if you plan to use this functionality within other sites too. \nYou can also embed web parts directly in the page, much like you do a WebControl, thereby avoiding any gallery clutter.\n", "Thanks so far. I've successfully tried Andrew Connel's solution:\nhttp://www.andrewconnell.com/blog/articles/UsingCodeBehindFilesInSharePointSites.aspx\nWrapping it into a solution is part of that, but the main problem was how to get the code into that, and it's more leaning towards Option 2 without having to create the controls in code.\nWhat I was missing:\nIn the .cs File, it is required to manually add the \"protected Button Trigger;\" stuff, because there is no automatically generated .designer.cs file when using a class library.\n", "Well, it's a page that hosts user controls. It's a custom .aspx Page that will be created on the site, specially because I do not want to create WebParts.\nIt's essentially an application running within Sharepoint, utilizing Lists and other functions, but all the functionality is only useful within the application, so flooding the web part gallery with countless web parts that only work in one place is something i'd like to avoid.\n" ]
[ 1, 1, 1, 0 ]
[]
[]
[ "asp.net", "sharepoint" ]
stackoverflow_0000041576_asp.net_sharepoint.txt
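To make the third answer's point in the record above concrete: a class library has no auto-generated .designer.cs, so each control declared in the .aspx markup needs a hand-written protected field whose name matches the control's ID. A hedged sketch with illustrative names:

    using System;
    using System.Web.UI.WebControls;

    public class MyPage : Microsoft.SharePoint.WebPartPages.WebPartPage
    {
        // Declared by hand: must match <asp:Button ID="Trigger" runat="server" />
        protected Button Trigger;

        protected void Trigger_Click(object sender, EventArgs e)
        {
            // Handle the click; wired in markup via OnClick="Trigger_Click"
        }
    }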
Q: Can Database and transaction logs on the same drive cause problems? Can we have the database and transaction logs on the same drive? What will be its consequences if it is not recommended? A: The only downside is that it causes more thrashing on the disk, so worse performance. A single write will require 2 seeks (between: write transaction log, write data, commit log). Having the transaction log on a separate disk means as few as zero seeks, because the drive heads can remain on the transaction log and the data. A: The problem with having both on the same drive is that if the drive fails you lose both. If they are on different drives and the drive containing the data fails you can apply the log to the last backup so you don't lose any data. A: An company I worked for earlier had transaction logs and datafiles side by side on the same drive, in the same folder on several servers. This didn't cause any problems datawise. As others have noted it may well have impact on performance. And if you lose the drive you lose both. A: Just to add briefly to Ted Percival's comment above... A hard disk drive will perform fastest if it is doing sequential writes or sequential reads, because the drive head doesn't need to move around. SQL Server log files happen to be sequential, so if you dedicate a hard drive to ONLY the logs, you will see a noticeable performance improvement. That said, for smaller databases where performance is not an issue, it doesn't matter. And as for Nir's comment on drive failures -- hopefully you are handling that at a lower level, by putting both your data and logs on RAID arrays.
Can Database and transaction logs on the same drive cause problems?
Can we have the database and transaction logs on the same drive? What would the consequences be if this is not recommended?
[ "The only downside is that it causes more thrashing on the disk, so worse performance.\nA single write will require 2 seeks (between: write transaction log, write data, commit log). Having the transaction log on a separate disk means as few as zero seeks, because the drive heads can remain on the transaction log and the data.\n", "The problem with having both on the same drive is that if the drive fails you lose both.\nIf they are on different drives and the drive containing the data fails you can apply the log to the last backup so you don't lose any data.\n", "An company I worked for earlier had transaction logs and datafiles side by side on the same drive, in the same folder on several servers.\nThis didn't cause any problems datawise.\nAs others have noted it may well have impact on performance. And if you lose the drive you lose both.\n", "Just to add briefly to Ted Percival's comment above...\nA hard disk drive will perform fastest if it is doing sequential writes or sequential reads, because the drive head doesn't need to move around. \nSQL Server log files happen to be sequential, so if you dedicate a hard drive to ONLY the logs, you will see a noticeable performance improvement. That said, for smaller databases where performance is not an issue, it doesn't matter. \nAnd as for Nir's comment on drive failures -- hopefully you are handling that at a lower level, by putting both your data and logs on RAID arrays. \n" ]
[ 3, 3, 0, 0 ]
[ "In some scenarios you don't need transaction log at all. In that case you can switch database to Simple Recovery Mode and you gain performance and simpler administration benefits.\n" ]
[ -1 ]
[ "database", "sql_server" ]
stackoverflow_0000043259_database_sql_server.txt
Q: What is the replacement of Controller.ReadFromRequest in ASP.NET MVC? I am attempting to update a project from ASP.NET MVC Preview 3 to Preview 5 and it seems that Controller.ReadFromRequest(string key) has been removed from the Controller class. Does anyone know of any alternatives to retrieving information based on an identifier from a form? A: Looks like they've added controller.UpdateModel to address this issue, signature is: UpdateModel(object model, string[] keys) I haven't upgraded my app personally, so I'm not sure of the actual usage. I'll be interested to find out about this myself, as I'm using controller.ReadFromRequest as well. A: Not sure where it went. You could roll your own extension though: public static class MyBindingExtensions { public static T ReadFromRequest < T > (this Controller controller, string key) { // Setup HttpContextBase context = controller.ControllerContext.HttpContext; object val = null; T result = default(T); // Gaurd if (context == null) return result; // no point checking request // Bind value (check form then query string) if (context.Request.Form[key] != null) val = context.Request.Form[key]; if (val == null) { if (context.Request.QueryString[key] != null) val = context.Request.QueryString[key]; } // Cast value if (val != null) result = (t)val; return result; } } A: could you redo that link in something like tinyurl.com? I need this info too but can get that mega-link to work.
What is the replacement of Controller.ReadFromRequest in ASP.NET MVC?
I am attempting to update a project from ASP.NET MVC Preview 3 to Preview 5 and it seems that Controller.ReadFromRequest(string key) has been removed from the Controller class. Does anyone know of any alternatives to retrieving information based on an identifier from a form?
[ "Looks like they've added controller.UpdateModel to address this issue, signature is:\nUpdateModel(object model, string[] keys)\n\nI haven't upgraded my app personally, so I'm not sure of the actual usage. I'll be interested to find out about this myself, as I'm using controller.ReadFromRequest as well.\n", "Not sure where it went. You could roll your own extension though:\npublic static class MyBindingExtensions \n{\npublic static T ReadFromRequest < T > (this Controller controller, string key) \n{\n // Setup\n HttpContextBase context = controller.ControllerContext.HttpContext;\n object val = null;\n T result = default(T);\n\n // Gaurd\n if (context == null)\n return result; // no point checking request\n\n // Bind value (check form then query string)\n if (context.Request.Form[key] != null)\n val = context.Request.Form[key];\n if (val == null) \n {\n if (context.Request.QueryString[key] != null)\n val = context.Request.QueryString[key];\n }\n\n // Cast value\n if (val != null)\n result = (t)val;\n\n return result;\n}\n\n}\n\n", "could you redo that link in something like tinyurl.com?\nI need this info too but can get that mega-link to work.\n" ]
[ 3, 2, 0 ]
[]
[]
[ ".net_3.5", "asp.net_mvc", "c#", "entity_framework_ctp5" ]
stackoverflow_0000036064_.net_3.5_asp.net_mvc_c#_entity_framework_ctp5.txt
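The extension method in the record above has two slips: the generic parameter is written with stray spaces (ReadFromRequest < T >) and the cast uses a lowercase (t)val. A compacted sketch of the same idea, form first and query string second, with those fixed:

    public static class BindingExtensions
    {
        public static T ReadFromRequest<T>(this Controller controller, string key)
        {
            HttpContextBase context = controller.ControllerContext.HttpContext;
            if (context == null)
                return default(T);

            // Check the form first, then fall back to the query string
            object val = context.Request.Form[key];
            if (val == null)
                val = context.Request.QueryString[key];

            return val == null ? default(T) : (T)val;
        }
    }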
Q: WPF control performance What is a good (and preferably simple) way to test the rendering performance of WPF custom controls? I have several complex controls in which rendering performance is highly crucial. I want to be able to make sure that I can have lots of them drawing in a designer with a minimal impact on performance. A: A tool called Perforator will help you. See the following article for details: Performance Profiling Tools for WPF
WPF control performance
What is a good (and preferably simple) way to test the rendering performance of WPF custom controls? I have several complex controls in which rendering performance is highly crucial. I want to be able to make sure that I can have lots of them drawing in a designer with a minimal impact on performance.
[ "Tool called Perforator will help you.\nSee following article for details:\nPerformance Profiling Tools for WPF\n" ]
[ 2 ]
[]
[]
[ ".net", "performance", "wpf" ]
stackoverflow_0000043768_.net_performance_wpf.txt
Q: How to implement a "related" degree measure algorithm? I was going to Ask a Question earlier today when I was presented to a surprising functionality in Stackoverflow. When I wrote my question title stackoverflow suggested me several related questions and I found out that there was already two similar questions. That was stunning! Then I started thinking how I would implement such function. How I would order questions by relatedness: Question that have higher number of words matchs with the new question If the number of matchs are the same, the order of words is considered Words that appears in the title has higher relevancy That would be a simple workflow or a complex score algortithm? Some stemming to increase the recall, maybe? Is there some library the implements this function? What other aspects would you consider? Maybe Jeff could answer himself! How did you implemented this in Stackoverflow? :) A: One such way to implement such an algorithm would involve ranking the questions as per a heuristic function which assigns a 'relevance' weight factor using the following steps: Apply a noise filter to the 'New' question to remove words that are common across a large number of objects such as: 'the', 'and', 'or', etc. Get the number of words contained in the 'New' question which match the words the set of questions already posted on the website. [A] Get the number of tag matches between the words in the 'New' question and the available. [B] Compute the 'relevance weight' based on [A] and [B] as 'x[A] + y[B]', where x and y are weight multipliers (Assign a higher weight multiplier to [B] as tagging is more relevant than simple word search) Get the top 5 questions which have the highest 'relevance weight'. The heuristic might require tweaking to get optimal results, but it should work. A: Your question seems similar to this one, which has some additional answers. A: @marcio Sorry, I am not aware of any direct API reference that I could suggest here and I have never worked with Lucene. However, I am aware that Google Desktop uses a Query API to rank and suggest the relevant search results. More information on the API can be found here. Perhaps others could chime in and guide you.
How to implement a "related" degree measure algorithm?
I was going to Ask a Question earlier today when I was presented with a surprising piece of functionality on Stack Overflow. When I wrote my question title, Stack Overflow suggested several related questions, and I found out that there were already two similar questions. That was stunning! Then I started thinking about how I would implement such a function. How I would order questions by relatedness: Questions that have a higher number of word matches with the new question If the number of matches is the same, the order of words is considered Words that appear in the title have higher relevancy Would that be a simple workflow or a complex scoring algorithm? Some stemming to increase the recall, maybe? Is there some library that implements this function? What other aspects would you consider? Maybe Jeff could answer himself! How did you implement this in Stack Overflow? :)
[ "One such way to implement such an algorithm would involve ranking the questions as per a heuristic function which assigns a 'relevance' weight factor using the following steps:\n\nApply a noise filter to the 'New' question to remove words that are common across a large number of objects such as: 'the', 'and', 'or', etc.\nGet the number of words contained in the 'New' question which match the words the set of questions already posted on the website. [A]\nGet the number of tag matches between the words in the 'New' question and the available. [B]\nCompute the 'relevance weight' based on [A] and [B] as 'x[A] + y[B]', where x and y are weight multipliers (Assign a higher weight multiplier to [B] as tagging is more relevant than simple word search)\nGet the top 5 questions which have the highest 'relevance weight'.\n\nThe heuristic might require tweaking to get optimal results, but it should work.\n", "Your question seems similar to this one, which has some additional answers.\n", "@marcio\nSorry, I am not aware of any direct API reference that I could suggest here and I have never worked with Lucene.\nHowever, I am aware that Google Desktop uses a Query API to rank and suggest the relevant search results. More information on the API can be found here.\nPerhaps others could chime in and guide you. \n" ]
[ 3, 1, 0 ]
[ "Isn't StackOverflow going to be open sourced at some point? If so, you can always find out how they did it there.\nUpdate: It appears that they say they might open source it. I hope they do.\n" ]
[ -1 ]
[ "algorithm", "full_text_search", "indexing" ]
stackoverflow_0000042489_algorithm_full_text_search_indexing.txt
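A compact sketch of the first answer's heuristic in the record above: a stop-word noise filter, word overlap [A], tag overlap [B], and a weighted score x[A] + y[B] with y > x. The weights and the stop-word list are illustrative assumptions:

    STOP_WORDS = {"the", "and", "or", "a", "an", "of", "to", "in"}

    def relevance(new_words, new_tags, question, x=1.0, y=3.0):
        """Score one existing question against the new one as x*[A] + y*[B]."""
        words = {w for w in new_words if w not in STOP_WORDS}  # noise filter
        a = len(words & question["words"])    # [A] word matches
        b = len(new_tags & question["tags"])  # [B] tag matches
        return x * a + y * b

    def top_related(new_title, new_tags, questions, n=5):
        # questions: list of dicts with precomputed "words" and "tags" sets
        new_words = set(new_title.lower().split())
        ranked = sorted(questions,
                        key=lambda q: relevance(new_words, new_tags, q),
                        reverse=True)
        return ranked[:n]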
Q: How do I resize and convert an uploaded image to a PNG using GD? I want to allow users to upload avatar-type images in a variety of formats (GIF, JPEG, and PNG at least), but to save them all as PNG database BLOBs. If the images are oversized, pixelwise, I want to resize them before DB-insertion. What is the best way to use GD to do the resizing and PNG conversion? Edit: Sadly, only GD is available on the server I need to use, no ImageMagick. A: <?php /* Resizes an image and converts it to PNG returning the PNG data as a string */ function imageToPng($srcFile, $maxSize = 100) { list($width_orig, $height_orig, $type) = getimagesize($srcFile); // Get the aspect ratio $ratio_orig = $width_orig / $height_orig; $width = $maxSize; $height = $maxSize; // resize to height (orig is portrait) if ($ratio_orig < 1) { $width = $height * $ratio_orig; } // resize to width (orig is landscape) else { $height = $width / $ratio_orig; } // Temporarily increase the memory limit to allow for larger images ini_set('memory_limit', '32M'); switch ($type) { case IMAGETYPE_GIF: $image = imagecreatefromgif($srcFile); break; case IMAGETYPE_JPEG: $image = imagecreatefromjpeg($srcFile); break; case IMAGETYPE_PNG: $image = imagecreatefrompng($srcFile); break; default: throw new Exception('Unrecognized image type ' . $type); } // create a new blank image $newImage = imagecreatetruecolor($width, $height); // Copy the old image to the new image imagecopyresampled($newImage, $image, 0, 0, 0, 0, $width, $height, $width_orig, $height_orig); // Output to a temp file $destFile = tempnam(); imagepng($newImage, $destFile); // Free memory imagedestroy($newImage); if ( is_file($destFile) ) { $f = fopen($destFile, 'rb'); $data = fread($f); fclose($f); // Remove the tempfile unlink($destFile); return $data; } throw new Exception('Image conversion failed.'); } A: Your process steps should look like this: Verify the filetype Load the image if it is a supported filetype into GD using imagecreatefrom* Resizing using imagecopyresize or imagecopyresampled Save the image using imagepng($handle, 'filename.png', $quality, $filters) ImageMagick is faster, generates better images, is more configurable, and finally is (IMO) much easier to code for. @ceejayoz Just wait for the new GD - it's OOP like MySQLi and it's actually not bad :) A: If you want to use gdlib, use gdlib 2 or higher. It has a function called imagecopyresampled(), which will interpolate pixels while resizing and look much better. Also, I've always heard noted around the net that storing images in the database is bad form: It's slower to access than the disk Your server will need to run a script to get to the image instead of simply serving a file Your script now is responsible for a lot of stuff the web server used to handle: Setting the proper Content-Type header Setting the proper caching/timeout/E-tag headers, so clients can properly cache the image. If do not do this properly, the image serving script will be hit on every request, increasing the load on the server even more. The only advantage I can see is that you don't need to keep your database and image files synchronized. I would still recommend against it though. A: Are you sure you have no ImageMagick on server? I guest you use PHP (question is tagged with PHP). Hosting company which I use has no ImageMagick extension turned on according to phpinfo(). But when I asked them about they said here is the list of ImageMagick programs available from PHP code. 
So simply -- there are no IM interface in PHP, but I can call IM programs directly from PHP. I hope you have the same option. And I strongly agree -- storing images in database is not good idea. A: Something like this, perhaps: <?php //Input file $file = "myImage.png"; $img = ImageCreateFromPNG($file); //Dimensions $width = imagesx($img); $height = imagesy($img); $max_width = 300; $max_height = 300; $percentage = 1; //Image scaling calculations if ( $width > $max_width ) { $percentage = ($height / ($width / $max_width)) > $max_height ? $height / $max_height : $width / $max_width; } elseif ( $height > $max_height) { $percentage = ($width / ($height / $max_height)) > $max_width ? $width / $max_width : $height / $max_height; } $new_width = $width / $percentage; $new_height = $height / $percentage; //scaled image $out = imagecreatetruecolor($new_width, $new_height); imagecopyresampled($out, $img, 0, 0, 0, 0, $new_width, $new_height, $width, $height); //output image imagepng($out); ?> I haven't tested the code so there might be some syntax errors, however it should give you a fair presentation on how it could be done. Also, I assumed a PNG file. You might want to have some kind of switch statement to determine the file type. A: Is GD absolutely required? ImageMagick is faster, generates better images, is more configurable, and finally is (IMO) much easier to code for. A: This article seems like it would fit what you want. You'll need to change the saving imagejpeg() function to imagepng() and have it save the file to a string rather than output it to the page, but other than that it should be easy copy/paste into your existing code. A: I think this page is a good starting point. It uses imagecreatefrom(jpeg/gif/png) and resize and converts the image and then outputs to the browser. Instead of outputting the browser you could output to a BLOB in a DB without many minuttes of code-rewrite. A: phpThumb is a high-level abstraction that may be worth looking at.
How do I resize and convert an uploaded image to a PNG using GD?
I want to allow users to upload avatar-type images in a variety of formats (GIF, JPEG, and PNG at least), but to save them all as PNG database BLOBs. If the images are oversized, pixelwise, I want to resize them before DB-insertion. What is the best way to use GD to do the resizing and PNG conversion? Edit: Sadly, only GD is available on the server I need to use, no ImageMagick.
[ "<?php \n/*\nResizes an image and converts it to PNG returning the PNG data as a string\n*/\nfunction imageToPng($srcFile, $maxSize = 100) { \n list($width_orig, $height_orig, $type) = getimagesize($srcFile); \n\n // Get the aspect ratio\n $ratio_orig = $width_orig / $height_orig;\n\n $width = $maxSize; \n $height = $maxSize;\n\n // resize to height (orig is portrait) \n if ($ratio_orig < 1) {\n $width = $height * $ratio_orig;\n } \n // resize to width (orig is landscape)\n else {\n $height = $width / $ratio_orig;\n }\n\n // Temporarily increase the memory limit to allow for larger images\n ini_set('memory_limit', '32M'); \n\n switch ($type) \n {\n case IMAGETYPE_GIF: \n $image = imagecreatefromgif($srcFile); \n break; \n case IMAGETYPE_JPEG: \n $image = imagecreatefromjpeg($srcFile); \n break; \n case IMAGETYPE_PNG: \n $image = imagecreatefrompng($srcFile);\n break; \n default:\n throw new Exception('Unrecognized image type ' . $type);\n }\n\n // create a new blank image\n $newImage = imagecreatetruecolor($width, $height);\n\n // Copy the old image to the new image\n imagecopyresampled($newImage, $image, 0, 0, 0, 0, $width, $height, $width_orig, $height_orig);\n\n // Output to a temp file\n $destFile = tempnam();\n imagepng($newImage, $destFile); \n\n // Free memory \n imagedestroy($newImage);\n\n if ( is_file($destFile) ) {\n $f = fopen($destFile, 'rb'); \n $data = fread($f); \n fclose($f);\n\n // Remove the tempfile\n unlink($destFile); \n return $data;\n }\n\n throw new Exception('Image conversion failed.');\n}\n\n", "Your process steps should look like this:\n\nVerify the filetype\nLoad the image if it is a supported filetype into GD using imagecreatefrom*\nResizing using imagecopyresize or imagecopyresampled\nSave the image using imagepng($handle, 'filename.png', $quality, $filters)\n\n\nImageMagick is faster, generates better images, is more configurable, and finally is (IMO) much easier to code for.\n\n@ceejayoz Just wait for the new GD - it's OOP like MySQLi and it's actually not bad :)\n", "If you want to use gdlib, use gdlib 2 or higher. It has a function called imagecopyresampled(), which will interpolate pixels while resizing and look much better.\nAlso, I've always heard noted around the net that storing images in the database is bad form:\n\nIt's slower to access than the disk\nYour server will need to run a script to get to the image instead\nof simply serving a file\nYour script now is responsible for a lot of stuff the web server used\nto handle:\n\n\nSetting the proper Content-Type header\nSetting the proper caching/timeout/E-tag headers, so clients can properly cache the image. If do not do this properly, the image serving script will be hit on every request, increasing the load on the server even more.\n\n\nThe only advantage I can see is that you don't need to keep your database and image files synchronized. I would still recommend against it though.\n", "Are you sure you have no ImageMagick on server?\nI guest you use PHP (question is tagged with PHP). Hosting company which I use has no ImageMagick extension turned on according to phpinfo().\nBut when I asked them about they said here is the list of ImageMagick programs available from PHP code. 
So simply -- there are no IM interface in PHP, but I can call IM programs directly from PHP.\nI hope you have the same option.\nAnd I strongly agree -- storing images in database is not good idea.\n", "Something like this, perhaps: \n\n<?php\n //Input file\n $file = \"myImage.png\";\n $img = ImageCreateFromPNG($file);\n\n //Dimensions\n $width = imagesx($img);\n $height = imagesy($img);\n $max_width = 300;\n $max_height = 300;\n $percentage = 1;\n\n //Image scaling calculations\n if ( $width > $max_width ) { \n $percentage = ($height / ($width / $max_width)) > $max_height ?\n $height / $max_height :\n $width / $max_width;\n }\n elseif ( $height > $max_height) {\n $percentage = ($width / ($height / $max_height)) > $max_width ? \n $width / $max_width :\n $height / $max_height;\n }\n $new_width = $width / $percentage;\n $new_height = $height / $percentage;\n\n //scaled image\n $out = imagecreatetruecolor($new_width, $new_height);\n imagecopyresampled($out, $img, 0, 0, 0, 0, $new_width, $new_height, $width, $height);\n\n //output image\n imagepng($out);\n?>\n\nI haven't tested the code so there might be some syntax errors, however it should give you a fair presentation on how it could be done. Also, I assumed a PNG file. You might want to have some kind of switch statement to determine the file type.\n", "Is GD absolutely required? ImageMagick is faster, generates better images, is more configurable, and finally is (IMO) much easier to code for.\n", "This article seems like it would fit what you want. You'll need to change the saving imagejpeg() function to imagepng() and have it save the file to a string rather than output it to the page, but other than that it should be easy copy/paste into your existing code.\n", "I think this page is a good starting point. It uses imagecreatefrom(jpeg/gif/png) and resize and converts the image and then outputs to the browser. Instead of outputting the browser you could output to a BLOB in a DB without many minuttes of code-rewrite.\n", "phpThumb is a high-level abstraction that may be worth looking at.\n" ]
[ 24, 6, 3, 3, 3, 1, 0, 0, 0 ]
[]
[]
[ "database", "gd", "image", "php", "png" ]
stackoverflow_0000022259_database_gd_image_php_png.txt
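Two small repairs worth flagging in the top answer of the record above: tempnam() requires a directory and a prefix, and fread() requires a length, so the temp-file tail of that function will error as written. The temp file can be skipped entirely by capturing the PNG bytes with output buffering; a hedged sketch of the replacement tail:

    // Capture the PNG data in a string instead of using a temp file
    ob_start();
    imagepng($newImage);      // with no filename, writes to the output buffer
    $data = ob_get_clean();   // collect the buffered bytes

    imagedestroy($newImage);

    if ($data === false || $data === '') {
        throw new Exception('Image conversion failed.');
    }
    return $data;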
Q: Which JSTL URL should I reference in my JSPs? I'm getting the following error when trying to run a JSP. I'm using Tomcat 6.0.18, and I'd like to use the latest version of JSTL. What version of JSTL should I use, and which URL goes with which version of JSTL? I'm getting this error "According to TLD or attribute directive in tag file, attribute key does not accept any expressions" <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %> <%@ taglib prefix="c" uri="http://java.sun.com/jstl/core" %> I'll just say I had this working, but I want to switch the JSTL jar file that has the TLD files in the jar file. (instead of having to deploy them somewhere in the web application and define the references in web.xml). A: Go with <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %> More on this topic here
Which JSTL URL should I reference in my JSPs?
I'm getting the following error when trying to run a JSP. I'm using Tomcat 6.0.18, and I'd like to use the latest version of JSTL. What version of JSTL should I use, and which URL goes with which version of JSTL? I'm getting this error: "According to TLD or attribute directive in tag file, attribute key does not accept any expressions" <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %> <%@ taglib prefix="c" uri="http://java.sun.com/jstl/core" %> I'll just say I had this working, but I want to switch to the JSTL jar that has the TLD files inside the jar (instead of having to deploy them somewhere in the web application and define the references in web.xml).
[ "Go with \n<%@ taglib prefix=\"c\" uri=\"http://java.sun.com/jsp/jstl/core\" %>\n\nMore on this topic here\n" ]
[ 5 ]
[]
[]
[ "jsp", "jstl", "taglib", "uri" ]
stackoverflow_0000043809_jsp_jstl_taglib_uri.txt
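For the record above: the JSP 2.0 URI in the accepted answer pairs with JSTL 1.1/1.2 and accepts EL in attributes, which is exactly what the "does not accept any expressions" error complains about under the older http://java.sun.com/jstl/core URI. A minimal usage sketch:

    <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>

    <%-- EL expressions are accepted under the JSP 2.0 taglib URI --%>
    <c:forEach var="item" items="${itemList}">
        <c:out value="${item}" />
    </c:forEach>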
Q: jQuery slicing and click events This is probably a really simple jQuery question, but I couldn't answer it after 10 minutes in the documentation so... I have a list of checkboxes, and I can get them with the selector 'input[type=checkbox]'. I want the user to be able to shift-click and select a range of checkboxes. To accomplish this, I need to get the index of a checkbox in the list, so I can pass that index to .slice(start, end). How do I get the index when the user clicks a box? A: The following selector should also work in jQuery: input:checkbox. You can then string the :gt(index) and :lt(index) filters together, so if you want the 5th to 7th checkboxes, you'd use input:checkbox:gt(4):lt(2). To get the index of the currently clicked checkbox, just use $("input:checkbox").index($(this)). A: This is a quick solution, but I would give each checkbox a unique ID, perhaps with an index hint, like so: <input id="checkbox-0" type="checkbox" /> <input id="checkbox-1" type="checkbox" /> <input id="checkbox-2" type="checkbox" /> <input id="checkbox-3" type="checkbox" /> <input id="checkbox-4" type="checkbox" /> You can then easily obtain the index: $(document).ready(function() { $("input:checkbox").click(function() { index = /checkbox-(\d+)/.exec(this.id)[1]; alert(index); }); }); A: Thanks for the answer, samjudson. After further experimentation, I found that you can even use just $(':checkbox') to select them. It's interesting that you can use the .slice() function to get the range, but you also have the option of doing it in the selector with :gt and :lt. I do find the syntax of .slice() to be cleaner than using the selector filters, though. I'm going to have to say that I don't like Ryan Duffield's solution as much, because it requires changes to the markup, and involves repeating code. A: @Gorgapor: I guess I need to take questions a little less literally sometimes. :-) I figured you were locked down to requiring some sort of index. I think you'll find though that as you use jQuery more, you usually don't need to do that sort of thing.
jQuery slicing and click events
This is probably a really simple jQuery question, but I couldn't answer it after 10 minutes in the documentation so... I have a list of checkboxes, and I can get them with the selector 'input[type=checkbox]'. I want the user to be able to shift-click and select a range of checkboxes. To accomplish this, I need to get the index of a checkbox in the list, so I can pass that index to .slice(start, end). How do I get the index when the user clicks a box?
[ "The following selector should also work in jQuery: input:checkbox.\nYou can then string the :gt(index) and :lt(index) filters together, so if you want the 5th to 7th checkboxes, you'd use input:checkbox:gt(4):lt(2).\nTo get the index of the currently clicked checkbox, just use $(\"input:checkbox\").index($(this)).\n", "This is a quick solution, but I would give each checkbox a unique ID, perhaps with an index hint, like so:\n<input id=\"checkbox-0\" type=\"checkbox\" />\n<input id=\"checkbox-1\" type=\"checkbox\" />\n<input id=\"checkbox-2\" type=\"checkbox\" />\n<input id=\"checkbox-3\" type=\"checkbox\" />\n<input id=\"checkbox-4\" type=\"checkbox\" />\n\nYou can then easily obtain the index:\n$(document).ready(function() {\n $(\"input:checkbox\").click(function() {\n index = /checkbox-(\\d+)/.exec(this.id)[1];\n alert(index);\n });\n});\n\n", "Thanks for the answer, samjudson.\nAfter further experimentation, I found that you can even use just $(':checkbox') to select them. It's interesting that you can use the .slice() function to get the range, but you also have the option of doing it in the selector with :gt and :lt. I do find the syntax of .slice() to be cleaner than using the selector filters, though.\nI'm going to have to say that I don't like Ryan Duffield's solution as much, because it requires changes to the markup, and involves repeating code.\n", "@Gorgapor: I guess I need to take questions a little less literally sometimes. :-) I figured you were locked down to requiring some sort of index. I think you'll find though that as you use jQuery more, you usually don't need to do that sort of thing.\n" ]
[ 9, 1, 0, 0 ]
[]
[]
[ "javascript", "jquery" ]
stackoverflow_0000043811_javascript_jquery.txt
Q: PHP and Java EE Backend Can I use Struts as a backend and PHP as front end for a web application? If yes, what may be the implications? A: The first thing that came to mind is Quercus (from the makers of the Resin servlet engine), as Jordi mentioned. It is a Java implementation of the PHP runtime and purportedly allows you to access Java objects directly from your PHP (part of me says "yay, at last"). On the other hand, while I have been itching to try a project this way, I would probably keep the separation between Java EE and PHP unless there was a real reason to integrate on the code-level. Instead, why don't you try an SOA approach, where your PHP "front-end" calls into the Struts application over a defined REST or SOAP API (strong vote for REST here) over HTTP. http://mydomain.com/rest/this-is-a-method-call?parameter1=foo You can use Struts to build your entire "backend" model, dealing only with business logic and data, and completely ignoring presentation. As you expose the API with these URLs, you are basically building a REST API (which may come in handy later if you ever need to provide greater access to your backend, perhaps by other client apps). Your PHP application can be built separately (and rather thinly), calling into the REST API (perhaps using Curl) as if it were calling into a database or some native PHP class library. Anyway, that's what I'd do. But, if you do use Quercus, please post how it went. A: I don't know much about Java, but I remember running into Quercus a while ago. It's a 100% Java interpreter for PHP code. So yes, you could have PHP templates on your Java app. Update: see Quercus: PHP in Java for more info. A: What do you mean by backend and frontend? If you mean using Java for the admin side of your site and PHP for the part that the public will see then there is nothing stopping you. The implications are that you will have to maintain two applications in different languages. A: I think what you mean is you want to use PHP as your templating language and Struts as your middleware (actions, etc.). I would imagine the answer would be no, not without some kind of bridge between the Struts session and the PHP. If you, say, change x to 3 in Java in a Struts action, you couldn't just go <?php echo x ?> or whatever to get the value out; you would need to transfer that information back and forth somehow. Submitting would be OK though, I would imagine. Not recommended though.
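To make the SOA suggestion concrete, here is a minimal sketch of the PHP side calling a Struts-exposed REST endpoint with cURL. The URL, parameter, and XML field names are hypothetical, and it assumes the backend returns XML:

```php
<?php
// Hypothetical REST endpoint exposed by the Struts backend.
$url = 'http://mydomain.com/rest/getBook?id=42';

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);

// Parse the (assumed) XML response and use it in the PHP front end.
$book = simplexml_load_string($response);
echo htmlspecialchars($book->title);
?>
```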
PHP and Java EE Backend
Can I use Struts as a backend and PHP as front end for a web application? If yes, what may be the implications?
[ "The first thing to came to mind is Quercus (from the makers of the Resin servlet engine), as Jordi mentioned. It is a Java implementation of the PHP runtime and purportedly allows you to access Java objects directly from your PHP (part of me says \"yay, at last\").\nOn the other hand, while I have been itching to try a project this way, I would probably keep the separation between Java EE and PHP unless there was a real reason to integrate on the code-level.\nInstead, why don't you try an SOA approach, where your PHP \"front-end\" calls into the Struts application over a defined REST or SOAP API (strong vote for REST here) over HTTP. \nhttp://mydomain.com/rest/this-is-a-method-call?parameter1=foo\n\nYou can use Struts to build your entire \"backend\" model, dealing only with business logic and data, and completely ignoring presentation. As you expose the API with these URLs, and you are basically building a REST API (which may come in handy later if you ever need to provide greater access to your backend, perhaps by other client apps).\nYour PHP application can be built separately (and rather thinly), calling into the REST API (perhaps using Curl) as if it would call into a database or some native PHP class library.\nAnyway, that's what I'd do. But, if you do use Quercus, please post how it went.\n", "I don't know much about Java, but I remember running into Quercus a while ago. It's a 100% Java interpreter for PHP code.\nSo yes, you could have PHP templates on your Java app. Update: see Quercus: PHP in Java for more info.\n", "What do you mean by backend and and frontend?\nIf you mean using Java for the admin side of your site and PHP for the part that the public will see then there is nothing stopping you.\nThe implications are that you will have to maintain two applications in different languages.\n", "I think what you mean is you want to use PHP as your templating language and structs as your middleware (actions etc).\nI would imaging the answer would be no, not without some kind of bridge between the structs session and the PHP.\nIf you say change x to 3 in java in a structs action, you couldn't just go <?php echo x ?> or whatever to get the value out, you would need to transfer that information back and forth somehow.\nSubmitting would be OK though, I would imagine.\nNot recommended though.\n" ]
[ 3, 1, 0, 0 ]
[]
[]
[ "php", "struts" ]
stackoverflow_0000038948_php_struts.txt
Q: PHP best practices? What is a good way to remove the code from display pages when developing with PHP? Often the pages I work on need to be edited by an outside person. This person is often confused by lots of blocks of PHP, and also likes to break my code. I've tried moving blocks of code out into functions, so now there are functions spread out all throughout the HTML. As some pages become more complex it becomes a program again, and processing POSTs becomes questionable. What can I be doing better in my PHP development? A: You don't need a "system" to do templating. You can do it on your own by keeping presentation & logic separate. This way the designer can screw up the display, but not the logic behind it. Here's a simple example: <?php $people = array('derek','joel','jeff'); $people[0] = 'martin'; // all your logic goes here include 'templates/people.php'; ?> Now here's the people.php file (which you give your designer): <html> <body> <?php foreach($people as $name):?> <b>Person:</b> <?=$name?> <br /> <?php endforeach;?> </body> </html> A: Take a look at how some of the popular PHP frameworks use templating. Examples include cakePHP, Zend Framework, and Code Igniter. Even if you are not going to base your site on these frameworks, the template design pattern is a good way to keep php code away from your web designers, so they can focus on layout and not functionality. A: It sounds to me like you need to begin implementing what is known as "separation of concerns" across your application generally. The examples folks give about templating, in response to your specific complaint about page editors breaking your code, are important, but represent just one example of this tactic. As your program gets larger and more complex it becomes harder to modify and debug--even if your designer is not breaking your code. Probably the most common separation is a three way split between data, logic and presentation as described in the design pattern Model-View-Controller (MVC). You do not need a full blown MVC framework in place to implement the same basic principles. The idea is simply to encapsulate code that deals with your data (model) in one place, the code that presents this data to the user (view) in another. You tie that code together with code that is only concerned with presenting the right data to the right user at the right time (controller). From your description, it sounds like what you have right now is a Transaction Script pattern, where you have a php file "dothis.php" that is loaded in the browser, and all the function definitions and HTML for the display are together. You already have functions, so you are already beginning to encapsulate pieces of logic. The way I would approach this, in keeping with the other answers here about templating, is to remove all of the HTML into another file only referencing simple PHP variables and maybe some loops (but as little conditional switching as you can). That will make the template easier to read and harder to break. When your page editor wants to modify the layout, give them THAT file. You then separate all of your data access functions to another file, ideally creating a class (or several classes, depending on how complex your data is and how frequently you need to reuse it). At this point your "dothis.php" has been stripped down to maybe some configuration code (which you can separate out to an include), and some authentication code (which you can separate out to its own class), and is only calling the data access functions, and calling the included template file. Your controller itself is therefore greatly simplified and easier to manage. A: I would highly recommend reading the book PHP In Action. It takes you through abstracting your database connections, templating systems and all the other basics of a web application. If every PHP developer read this book then the language would have a much better reputation. It also has chapters on refactoring, unit testing and the MVC control pattern. A: Does the outside person need to edit the logic, or just the display (HTML)? If it's the latter case, check out the Smarty template engine. A: I think I'd like to stay away from an unwieldy framework. Just some approach I can use that generally makes the pages more readable with cleaner code. Stack Overflow wants me to decide which answer is best, when best is a subjective opinion. Who is to say what the 'best' practice is? A: If you decide to continue using functions, you can get some inspiration from WordPress. You can probably reduce the "program" to a minimum by making templates more granular. Also, good tools (e.g. HTML editors) can help designers ignore your PHP and work on the design without breaking the code. (But I have no suggestions, sorry.) The other way to do some things is to create your own template system instead of SMARTY, but it would probably take too long to create a working system to satisfy your needs that would go past just replacing something like %%VARIABLE%% with text. Our company uses SMARTY and even with a lot of code in templates, designers know how to work with it. For simple CMS sites we use ExpressionEngine, which uses HTML-like tags for inserting logic into templates. A: There's a lot that can be said on this topic but a very basic starting point would be to move as much code as possible out into separate files and then use include statements. A: I usually use includes, as they can be very useful for organising and grouping functions together. Also, comment your code. There's nothing worse than someone else seeing your work and not knowing why you've done something. Naming variables and functions sensibly can go a long way too - for example: $userName = "John Doe"; $dateOfBirth = "04/02/1982"; function calculateUserAgeFromBirth($userName, $dateOfBirth) Naming variables like this also helps minimise comments about what your code actually does.
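Pulling those answers together, here is a minimal, hypothetical sketch of the controller/template split being described, including the POST handling the question worries about. The file and function names (lib/guestbook_data.php, guestbook_add, and so on) are invented for the example:

```php
<?php
// guestbook.php -- the "controller": POST handling and data access live here.
require 'lib/guestbook_data.php';   // hypothetical data-access functions

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    guestbook_add($_POST['name'], $_POST['message']);
    header('Location: ' . $_SERVER['PHP_SELF']);   // redirect-after-POST
    exit;
}

$entries = guestbook_entries();      // logic ends here...
include 'templates/guestbook.php';   // ...designer-owned markup takes over
?>
```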
PHP best practices?
What is a good way to remove the code from display pages when developing with PHP? Often the pages I work on need to be edited by an outside person. This person is often confused by lots of blocks of PHP, and also likes to break my code. I've tried moving blocks of code out into functions, so now there are functions spread out all throughout the HTML. As some pages become more complex it becomes a program again, and processing POSTs becomes questionable. What can I be doing better in my PHP development?
[ "You don't need a \"system\" to do templating.\nYou can do it on your own by keeping presentation & logic separate.\nThis way the designer can screw up the display, but not the logic behind it.\nHere's a simple example:\n<?php \n$people = array('derek','joel','jeff');\n$people[0] = 'martin'; // all your logic goes here\ninclude 'templates/people.php';\n?>\n\nNow here's the people.php file (which you give your designer):\n<html> \n<body>\n<?php foreach($people as $name):?>\n <b>Person:</b> <?=$name?> <br />\n<?php endforeach;?> \n</body>\n</html>\n\n", "Take a look at how some of the popular PHP frameworks use templating. Examples include cakePHP, Zend Framework, and Code Igniter. Even if you are not going to base your site on these frameworks, the template design pattern is a good way to keep php code away from your web designers, so they can focus on layout and not functionality.\n", "It sounds to me like you need to begin implementing what is known as \"separation of concerns\" across your application generally. The examples folks give about templating, in response to your specific complaint about page editors breaking your code, are important, but represent just one example of this tactic. As your program gets larger and more complex it becomes harder to modify and debug--even if your designer is not breaking your code.\nProbably the most common separation is a three way split between data, logic and presentation as described in the design pattern Model-View-Controller (MVC). You do not need a full blown MVC framework in place to implement the same basic principles. The idea is simply to encapsulate code that deals with your data (model) in one place, the code that presents this data to the user (view) in another. You tie that code together with code that is only concerned with presenting the right data to the right user at a the right time (controller).\nFrom your description, it sounds like you have right now is a Transaction Script pattern, where you have a php file \"dothis.php\" that is loaded in the browser, and all the function definitions and HTML for the display are together. You already have functions, so you are already beginning to encapsulate pieces of logic.\nThe way I would approach this would be, in keeping with the other answers here about templating, is to remove all of the HTML into another file only referencing simple PHP variables and maybe some loops (but as little conditional switching as you can). That will make the template easier to read and harder to break. When your page editor wants to modify the layout, give them THAT file.\nYou then separate all of your data access functions to another file, ideally creating a class (or several classes, depending on how complex your data is and how frequently you need to reuse it). \nAt this point your \"dothis.php\" has been stripped down to maybe some configuration code (which you can separate out to an include, and some authentication code (which you can separate out to its own class), and is only calling the data access functions, and calling the included template file. Your controller itself is therefore greatly simplified and easier to manage.\n", "I would highly recommend reading the book PHP In Action. It takes you through abstracting your database connections, templating systems and all the other basics of a web application. If every PHP developer read this book then the language would have a much better reputation.\nIt also has chapters on refactoring, unit testing and the MVC control pattern. 
\n", "Does the outside person need to edit the logic, or just the display (HTML)?\nIf it's the latter case, check out the Smarty template engine.\n", "I think I'd like to stay away from an unweildy framework. Just some approach I can use that generally makes the pages more readable with cleaner code. \nStack Overflow wants me to decide which answer is best, when best is a subjective opinion. Who is to say what the 'best' practice is. \n", "If you decide to continue using functions, you can get some inspiration from WordPress. You can probably reduce the \"program\" to a minimum by making templates more granular.\nAlso, good tools (i.e. HTML editors) can help designers ignore your PHP and work on the design without breaking the code. (But I have no suggestions, sorry.)\nThe other way to some things is to create own template system instead of SMARTY, but it would probably take too long to create a working system to satisfy your needs that would go past just a replacing something like %%VARIABLE%% with a text.\nOur company uses SMARTY and even with a lot of code in templates, designers know how to work with it. For simple CMS sites we use ExpressionEngine, which uses HTML-like tags for inserting logic into templates.\n", "There's a lot that can be said on this topic but a very basic starting point would be to move as much code as possible out into separate files and then use include statements.\n", "I usually use includes, as they can be very useful for organising and grouping functions together. Also, comment your code. There's nothing worse than for someone else to see your work and not know why you've done this. Naming variables and functions sensibly can go a long way too - for example:\n$userName = \"John Doe\";\n$dateOfBirth = \"04/02/1982\";\n\nfunction calculateUserAgeFromBirth($userName, $dateOfBirth)\n\nNaming variables like this also helps minimise comments about what your code actually does.\n" ]
[ 19, 6, 5, 4, 2, 1, 1, 0, 0 ]
[]
[]
[ "php" ]
stackoverflow_0000036417_php.txt
Q: Targeting multiple versions of .net framework Suppose I have some code that would, in theory, compile against any version of the .net framework. Think "Hello World", if you like. If I actually compile the code, though, I'll get an executable that runs against one particular version. Is there any way to arrange things so that the compiled exe will just run against whatever version it finds? I strongly suspect that the answer is no, but I'd be happy to be proven wrong... Edit: Well, I'll go to the foot of our stairs. I had no idea that later frameworks would happily run exes compiled under earlier versions. Thanks for all the responses! A: I'm not sure if this is correct, but I'd try to compile it for the lowest version; the higher versions should be able to run the lower versions' exes. A: Read ScottGu's post about VS 2008 Multi-Targeting Support One of the big changes we are making starting with the VS 2008 release is to support what we call "Multi-Targeting" - which means that Visual Studio will now support targeting multiple versions of the .NET Framework, and developers will be able to start taking advantage of the new features Visual Studio provides without having to always upgrade their existing projects and deployed applications to use a new version of the .NET Framework library. Now when you open an existing project or create a new one with VS 2008, you can pick which version of the .NET Framework to work with - and the IDE will update its compilers and feature-set to match this. Among other things, this means that features, controls, projects, item-templates, and assembly references that don't work with that version of the framework will be hidden, and when you build your application you'll be able to take the compiled output and copy it onto a machine that only has an older version of the .NET Framework installed, and you'll know that the application will work. That way you can use VS2008 to develop .NET 2.0 projects that will work on both .NET 2.0, 3.0 and 3.5 A: Alongside multi-targeting, the frameworks are backwards compatible, so something compiled to 1.0 will run on 1.1 and 2. Something compiled on 1.1 will run on 2 ... etc. A: I know @John Boker is correct when it comes to .Net class libraries. You can compile a class library against .Net 1.1 and then use it in a .Net 2.0 or higher project. I suspect the same is also true for executables. A: With 2005 & 2008, yes (on CLR 2.0) With 2003, no... because it compiles down to CLR 1.1 You could theoretically write some code using #if (DOTNET35) and such so that you don't use features outside the compiler's knowledge and then run the desired compiler on the app... I question the usefulness of this though. A: Well, AFAIK, all .NET versions (except version 1.x) compile to the same bytecode. In case of C#, all new features are simply syntactic sugar, which get transformed into C# 2.0 constructs when compiling. The key point where things could go wrong is when you use C# 3.0 or 3.5 specific DLLs. They don't work well with the .NET 2.0 framework, so you can't use those. I can't really think of a workaround for this, sorry :( A: On the subject of which .NET framework the user has installed, there is also a new option with the Client Profile that's available with .NET 3.5 SP1. This basically allows you to ship a small (277k) bootstrap program which downloads and installs the required files (a subset of the full .NET framework). For more information, and general tips on creating a small .NET installation, see this great blog entry by Scott Hanselman.
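As a rough sketch of the #if (DOTNET35) idea mentioned above: the symbol name is arbitrary and you must supply it to the compiler yourself (for example csc /define:DOTNET35). Compiled with the symbol against .NET 3.5 it uses LINQ; compiled without it, the same file builds against .NET 2.0:

```csharp
#if DOTNET35
using System.Linq;
#endif

class Program
{
    static void Main()
    {
#if DOTNET35
        // LINQ extension methods only exist when targeting 3.5.
        System.Console.WriteLine(new[] { 1, 2, 3 }.Sum());
#else
        // Plain 2.0-safe equivalent.
        System.Console.WriteLine(1 + 2 + 3);
#endif
    }
}
```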
Targeting multiple versions of .net framework
Suppose I have some code that would, in theory, compile against any version of the .net framework. Think "Hello World", if you like. If I actually compile the code, though, I'll get an executable that runs against one particular version. Is there any way to arrange things so that the compiled exe will just run against whatever version it finds? I strongly suspect that the answer is no, but I'd be happy to be proven wrong... Edit: Well, I'll go to the foot of our stairs. I had no idea that later frameworks would happily run exes compiled under earlier versions. Thanks for all the responses!
[ "Im not sure if this is correct, but i'd try to compile it for the lowest version, the higher versions should be able to run the lower versions exe's.\n", "Read ScuttGu's post about VS 2008 Multi-Targeting Support\n\nOne of the big changes we are making\n starting with the VS 2008 release is\n to support what we call\n \"Multi-Targeting\" - which means that\n Visual Studio will now support\n targeting multiple versions of the\n .NET Framework, and developers will be\n able to start taking advantage of the\n new features Visual Studio provides\n without having to always upgrade their\n existing projects and deployed\n applications to use a new version of\n the .NET Framework library.\nNow when you open an existing project\n or create a new one with VS 2008, you\n can pick which version of the .NET\n Framework to work with - and the IDE\n will update its compilers and\n feature-set to match this. Among\n other things, this means that\n features, controls, projects,\n item-templates, and assembly\n references that don't work with that\n version of the framework will be\n hidden, and when you build your\n application you'll be able to take the\n compiled output and copy it onto a\n machine that only has an older version\n of the .NET Framework installed, and\n you'll know that the application will\n work.\n\nThat way you can use VS2008 to develop .NET 2.0 projects that will work on both .NET 2.0, 3.0 and 3.5\n", "Along side multi targeting, the frameworks are backwards compatible, so something compiled to 1.0 will run on 1.1 and 2. Somthing compiled on 1.1 will run on 2 ... etc.\n", "I know @John Boker is correct when it comes to .Net class libraries. You can compile a class library against .Net 1.1 and then use it in a .Net 2.0 or higher project.\nI suspect the same is also true for executables.\n", "with 2005 & 2008, yes (on CLR 2.0)\nWith 2003, no.. because it compiles down to CLR 1.1\nYou could theorectically write some code using #if (DOTNET35) and such so that you don't use features outside the compilers knowledge and then run the desired compiler on the app... I question the usefulness of this though.\n", "Well, AFAIK, all .NET versions (except version 1.x) compile to the same bytecode. In case of C#, all new features are simply syntactic sugar, which get transformed into C# 2.0 constructs when compiling.\nThe key point where things could go wrong is when you use C# 3.0 or 3.5 specific DLLs. They don't work well with the .NET 2.0 framework, so you can't use those.\nI can't really think of a workaround for this, sorry :(\n", "On the subject of which .NET framework the user has installed, there is also a new option with the Client Profile that’s available with .NET 3.5 SP1. This basically allows you to ship a small (277k) bootstrap program which downloads and installs the required files (A subset od the full .NET framework).\nFor more information, and general tips on creating a small .NET installation, see this great blog entry by Scott Hanselman.\n" ]
[ 6, 3, 0, 0, 0, 0, 0 ]
[]
[]
[ ".net", "compilation", "version" ]
stackoverflow_0000043939_.net_compilation_version.txt
Q: Bigger than a char but smaller than a blob Chars are great because they are fixed size and thus make for a faster table. They are, however, limited to 255 characters. I want to hold 500 characters, but a blob is variable length and that's not what I want. Is there some way to have a fixed-length field of 500 characters in MySQL or am I going to have to use 2 char fields? A: I would suggest using a varchar(500). Even though varchar isn't a fixed length, the database should reserve the correct amount of space. You shouldn't notice any performance difference using varchar(500) over 2xchar(255). You're also probably going to cause extra overhead by joining two char fields together. A: I would suggest using a varchar(500) ... if you have MySQL 5.0.3 or higher. In previous versions, VARCHAR was restricted to 255 characters. Also, CHAR and VARCHAR do not work the same regarding trailing spaces. Be sure to read 10.4.1. The CHAR and VARCHAR Types (this is for MySQL 5.0). A: You're worrying too much about internal implementation details. Don't pre-optimize. Go with VARCHAR(500)
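For illustration, a hypothetical table using the suggested approach (VARCHAR(500) requires MySQL 5.0.3 or later, as one answer notes; the table and column names are examples):

```sql
-- VARCHAR stores up to the declared length, using only the bytes needed
-- per row plus a small length prefix.
CREATE TABLE notes (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    body VARCHAR(500) NOT NULL
);
```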
Bigger than a char but smaller than a blob
Chars are great because they are fixed size and thus make for a faster table. They are, however, limited to 255 characters. I want to hold 500 characters, but a blob is variable length and that's not what I want. Is there some way to have a fixed-length field of 500 characters in MySQL or am I going to have to use 2 char fields?
[ "I would suggest using a varchar(500). Even though varchar isn't a fixed length, the database should reserve the correct amount of space. You shouldn't notice any performance difference using varchar(500) over 2xchar(255).\nYou're also probably going to cause extra overhead by joining two char fields together.\n", "\nI would suggest using a varchar(500)\n\n... if you have MySQL 5.0.3 or higher. In previous versions, VARCHAR was restricted to 255 characters.\nAlso, CHAR and VARCHAR do not work the same regarding trailing spaces. Be sure to read 10.4.1. The CHAR and VARCHAR Types (this is for MySQL 5.0).\n", "You're worrying too much about internal implementation details. Don't pre-optimize. \nGo with VARCHAR(500)\n" ]
[ 7, 2, 0 ]
[]
[]
[ "database", "mysql" ]
stackoverflow_0000005075_database_mysql.txt
Q: How do I style (css) radio buttons and labels? Given the code below, how do I style the radio buttons to be next to the labels and style the label of the selected radio button differently than the other labels? <link href="http://yui.yahooapis.com/2.5.2/build/reset-fonts-grids/reset-fonts-grids.css" rel="stylesheet"> <link href="http://yui.yahooapis.com/2.5.2/build/base/base-min.css" rel="stylesheet"> <div class="input radio"> <fieldset> <legend>What color is the sky?</legend> <input type="hidden" name="color" value="" id="SubmitQuestion" /> <input type="radio" name="color" id="SubmitQuestion1" value="1" /> <label for="SubmitQuestion1">A strange radiant green.</label> <input type="radio" name="color" id="SubmitQuestion2" value="2" /> <label for="SubmitQuestion2">A dark gloomy orange</label> <input type="radio" name="color" id="SubmitQuestion3" value="3" /> <label for="SubmitQuestion3">A perfect glittering blue</label> </fieldset> </div> Also let me state that I use the yui css styles as base. If you are not familiar with them, they can be found here: reset-fonts-grids.css base-min.css Documentation for them both here : Yahoo! UI Library @pkaeding: Thanks. I tried some floating, but it just looked messed up. Styling the active radio button seemed doable with some input[type=radio]:active notation found in a Google search, but I didn't get it to work properly. So the question I guess is more: Is this possible on all of today's modern browsers, and if not, what is the minimal JS needed? A: The first part of your question can be solved with just HTML & CSS; you'll need to use Javascript for the second part. Getting the Label Near the Radio Button I'm not sure what you mean by "next to": on the same line and near, or on separate lines? If you want all of the radio buttons on the same line, just use margins to push them apart. If you want each of them on their own line, you have two options (unless you want to venture into float: territory): Use <br />s to split the options apart and some CSS to vertically align them: <style type='text/css'> .input input { width: 20px; } </style> <div class="input radio"> <fieldset> <legend>What color is the sky?</legend> <input type="hidden" name="data[Submit][question]" value="" id="SubmitQuestion" /> <input type="radio" name="data[Submit][question]" id="SubmitQuestion1" value="1" /> <label for="SubmitQuestion1">A strange radiant green.</label> <br /> <input type="radio" name="data[Submit][question]" id="SubmitQuestion2" value="2" /> <label for="SubmitQuestion2">A dark gloomy orange</label> <br /> <input type="radio" name="data[Submit][question]" id="SubmitQuestion3" value="3" /> <label for="SubmitQuestion3">A perfect glittering blue</label> </fieldset> </div> Follow A List Apart's article: Prettier Accessible Forms Applying a Style to the Currently Selected Label + Radio Button Styling the <label> is why you'll need to resort to Javascript. A library like jQuery is perfect for this: <style type='text/css'> .input label.focused { background-color: #EEEEEE; font-style: italic; } </style> <script type='text/javascript' src='jquery.js'></script> <script type='text/javascript'> $(document).ready(function() { $('.input :radio').focus(updateSelectedStyle); $('.input :radio').blur(updateSelectedStyle); $('.input :radio').change(updateSelectedStyle); }) function updateSelectedStyle() { $('.input :radio').removeClass('focused').next().removeClass('focused'); $('.input :radio:checked').addClass('focused').next().addClass('focused'); } </script> The focus and blur hooks are needed to make this work in IE. A: For any CSS3-enabled browser you can use an adjacent sibling selector for styling your labels input:checked + label { color: white; } MDN's browser compatibility table says essentially all of the current, popular browsers (Chrome, IE, Firefox, Safari), on both desktop and mobile, are compatible. A: This will get your buttons and labels next to each other, at least. I believe the second part can't be done in css alone, and will need javascript. I found a page that might help you with that part as well, but I don't have time right now to try it out: http://www.webmasterworld.com/forum83/6942.htm <style type="text/css"> .input input { float: left; } .input label { margin: 5px; } </style> <div class="input radio"> <fieldset> <legend>What color is the sky?</legend> <input type="hidden" name="data[Submit][question]" value="" id="SubmitQuestion" /> <input type="radio" name="data[Submit][question]" id="SubmitQuestion1" value="1" /> <label for="SubmitQuestion1">A strange radiant green.</label> <input type="radio" name="data[Submit][question]" id="SubmitQuestion2" value="2" /> <label for="SubmitQuestion2">A dark gloomy orange</label> <input type="radio" name="data[Submit][question]" id="SubmitQuestion3" value="3" /> <label for="SubmitQuestion3">A perfect glittering blue</label> </fieldset> </div>
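Combining the two CSS-only suggestions above into one sketch (modern, CSS3-capable browsers only; it assumes each label immediately follows its radio button, as in the markup above):

```css
/* Put some space between each button/label pair. */
.input input[type="radio"] { margin-right: 0.25em; }
.input label { margin-right: 1.5em; }

/* Style the label of whichever button is checked -- no JS needed. */
.input input[type="radio"]:checked + label {
    background-color: #EEEEEE;
    font-style: italic;
}
```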
How do I style (css) radio buttons and labels?
Given the code below, how do I style the radio buttons to be next to the labels and style the label of the selected radio button differently than the other labels? <link href="http://yui.yahooapis.com/2.5.2/build/reset-fonts-grids/reset-fonts-grids.css" rel="stylesheet"> <link href="http://yui.yahooapis.com/2.5.2/build/base/base-min.css" rel="stylesheet"> <div class="input radio"> <fieldset> <legend>What color is the sky?</legend> <input type="hidden" name="color" value="" id="SubmitQuestion" /> <input type="radio" name="color" id="SubmitQuestion1" value="1" /> <label for="SubmitQuestion1">A strange radiant green.</label> <input type="radio" name="color" id="SubmitQuestion2" value="2" /> <label for="SubmitQuestion2">A dark gloomy orange</label> <input type="radio" name="color" id="SubmitQuestion3" value="3" /> <label for="SubmitQuestion3">A perfect glittering blue</label> </fieldset> </div> Also let me state that I use the yui css styles as base. If you are not familiar with them, they can be found here: reset-fonts-grids.css base-min.css Documentation for them both here : Yahoo! UI Library @pkaeding: Thanks. I tried some floating, but it just looked messed up. Styling the active radio button seemed doable with some input[type=radio]:active notation found in a Google search, but I didn't get it to work properly. So the question I guess is more: Is this possible on all of today's modern browsers, and if not, what is the minimal JS needed?
[ "The first part of your question can be solved with just HTML & CSS; you'll need to use Javascript for the second part.\nGetting the Label Near the Radio Button\nI'm not sure what you mean by \"next to\": on the same line and near, or on separate lines? If you want all of the radio buttons on the same line, just use margins to push them apart. If you want each of them on their own line, you have two options (unless you want to venture into float: territory):\n\nUse <br />s to split the options apart and some CSS to vertically align them:\n\n<style type='text/css'>\n .input input\n {\n width: 20px;\n }\n</style>\n<div class=\"input radio\">\n <fieldset>\n <legend>What color is the sky?</legend>\n <input type=\"hidden\" name=\"data[Submit][question]\" value=\"\" id=\"SubmitQuestion\" />\n\n <input type=\"radio\" name=\"data[Submit][question]\" id=\"SubmitQuestion1\" value=\"1\" />\n <label for=\"SubmitQuestion1\">A strange radient green.</label>\n <br />\n <input type=\"radio\" name=\"data[Submit][question]\" id=\"SubmitQuestion2\" value=\"2\" />\n <label for=\"SubmitQuestion2\">A dark gloomy orange</label>\n <br />\n <input type=\"radio\" name=\"data[Submit][question]\" id=\"SubmitQuestion3\" value=\"3\" />\n <label for=\"SubmitQuestion3\">A perfect glittering blue</label>\n </fieldset>\n</div>\n\n\nFollow A List Apart's article: Prettier Accessible Forms\n\nApplying a Style to the Currently Selected Label + Radio Button\nStyling the <label> is why you'll need to resort to Javascript. A library like jQuery\nis perfect for this:\n<style type='text/css'>\n .input label.focused\n {\n background-color: #EEEEEE;\n font-style: italic;\n }\n</style>\n<script type='text/javascript' src='jquery.js'></script>\n<script type='text/javascript'>\n $(document).ready(function() {\n $('.input :radio').focus(updateSelectedStyle);\n $('.input :radio').blur(updateSelectedStyle);\n $('.input :radio').change(updateSelectedStyle);\n })\n\n function updateSelectedStyle() {\n $('.input :radio').removeClass('focused').next().removeClass('focused');\n $('.input :radio:checked').addClass('focused').next().addClass('focused');\n }\n</script>\n\nThe focus and blur hooks are needed to make this work in IE.\n", "For any CSS3-enabled browser you can use an adjacent sibling selector for styling your labels\ninput:checked + label {\n color: white;\n} \n\nMDN's browser compatibility table says essentially all of the current, popular browsers (Chrome, IE, Firefox, Safari), on both desktop and mobile, are compatible.\n", "This will get your buttons and labels next to each other, at least. I believe the second part can't be done in css alone, and will need javascript. 
I found a page that might help you with that part as well, but I don't have time right now to try it out: http://www.webmasterworld.com/forum83/6942.htm\n<style type=\"text/css\">\n.input input {\n float: left;\n}\n.input label {\n margin: 5px;\n}\n</style>\n<div class=\"input radio\">\n <fieldset>\n <legend>What color is the sky?</legend>\n <input type=\"hidden\" name=\"data[Submit][question]\" value=\"\" id=\"SubmitQuestion\" />\n\n <input type=\"radio\" name=\"data[Submit][question]\" id=\"SubmitQuestion1\" value=\"1\" />\n <label for=\"SubmitQuestion1\">A strange radient green.</label>\n\n <input type=\"radio\" name=\"data[Submit][question]\" id=\"SubmitQuestion2\" value=\"2\" />\n <label for=\"SubmitQuestion2\">A dark gloomy orange</label>\n <input type=\"radio\" name=\"data[Submit][question]\" id=\"SubmitQuestion3\" value=\"3\" />\n <label for=\"SubmitQuestion3\">A perfect glittering blue</label>\n </fieldset>\n</div>\n\n" ]
[ 33, 27, 5 ]
[]
[]
[ "css", "html", "radio_button", "styles" ]
stackoverflow_0000043643_css_html_radio_button_styles.txt
Q: DefaultValue for System.Drawing.SystemColors I have a line color property in my custom grid control. I want it to default to Drawing.SystemColors.InactiveBorder. I tried: [DefaultValue(typeof(System.Drawing.SystemColors), "InactiveBorder")] public Color LineColor { get; set; } But it doesn't seem to work. How do I do that with the default value attribute? A: You need to change the first argument from SystemColors to Color. It seems that there is no type converter for the SystemColors type, only for the Color type. [DefaultValue(typeof(Color),"InactiveBorder")] A: According to the link Matt posted, the DefaultValue attribute doesn't set the default value of the property, it just lets the form designer know that the property has a default value. If you change a property from the default value it is shown as bold in the properties window. You can't set a default value using automatic properties - you'll have to do it the old-fashioned way: class MyClass { Color lineColor = SystemColors.InactiveBorder; [DefaultValue(typeof(Color), "InactiveBorder")] public Color LineColor { get { return lineColor; } set { lineColor = value; } } }
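A related sketch, not from the answers above: WinForms also supports a naming-convention alternative to [DefaultValue] — ShouldSerializeX/ResetX methods, which the designer finds by reflection — handy when the default isn't a compile-time constant. The class name here is hypothetical:

```csharp
using System.Drawing;
using System.Windows.Forms;

public class MyGrid : Control
{
    private Color lineColor = SystemColors.InactiveBorder;

    public Color LineColor
    {
        get { return lineColor; }
        set { lineColor = value; Invalidate(); }
    }

    // The designer matches these to LineColor by name; together they
    // control serialization and the "Reset" context-menu command.
    private bool ShouldSerializeLineColor()
    {
        return lineColor != SystemColors.InactiveBorder;
    }

    private void ResetLineColor()
    {
        lineColor = SystemColors.InactiveBorder;
    }
}
```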
DefaultValue for System.Drawing.SystemColors
I have a line color property in my custom grid control. I want it to default to Drawing.SystemColors.InactiveBorder. I tried: [DefaultValue(typeof(System.Drawing.SystemColors), "InactiveBorder")] public Color LineColor { get; set; } But it doesn't seem to work. How do I do that with the default value attribute?
[ "You need to change first argument from SystemColors to Color.\nIt seems that there is no type converter for the SystemColors type, only for the Color type.\n[DefaultValue(typeof(Color),\"InactiveBorder\")]\n\n", "According to the link Matt posted, the DefaultValue attribute doesn't set the default value of the property, it just lets the form designer know that the property has a default value. If you change a property from the default value it is shown as bold in the properties window.\nYou can't set a default value using automatic properties - you'll have to do it the old-fashioned way:\nclass MyClass\n{\n Color lineColor = SystemColors.InactiveBorder;\n\n [DefaultValue(true)]\n public Color LineColor {\n get {\n return lineColor;\n }\n\n set {\n lineColor = value;\n }\n }\n}\n\n" ]
[ 13, 2 ]
[]
[]
[ ".net", "c#", "user_controls", "winforms" ]
stackoverflow_0000043738_.net_c#_user_controls_winforms.txt
Q: What are some compact algorithms for generating interesting time series data? The question sort of says it all. Whether it's for code testing purposes, or you're modeling a real-world process, or you're trying to impress a loved one, what are some algorithms that folks use to generate interesting time series data? Are there any good resources out there with a consolidated list? No constraints on values (except plus or minus infinity) or dimensions, but I'm looking for examples that people have found useful or exciting in practice. Bonus points for parsimonious and readable code samples. A: Don't have an answer for the algorithm part but you can see how "realistic" your data is with Benford's law A: There are a ton of PRN generators out there, and you can always get free random bits, or even buy them on CD or DVD. I've used simple sine wave generators mixed together with some phase and amplitude noise thrown in to get signals that sound and look interesting to humans when put through speakers or lights, but I don't know what you mean by interesting. There are ways to generate data that looks interesting in a chart form, but that would be different than data used on a stock chart, and neither would make a nice "static" image such as produced by an analog television tuned to a null channel. You can use Conway's game of life as a PRN, and "listen" to cells (or run all the cells through a logic circuit) to get some interesting time based signals. It would be interesting to look at the graph of DB updates/inserts for Stackoverflow over time, and you could mine that data. There really are infinite ways to generate an "interesting" time series data. Can you narrow the scope of your question? A: Try the kind of recurrences that can give variously simple or chaotic series based on the part of their phase spaces you explore: the simplest I can think of is the logistic map x(n+1) = r * x(n) * ( 1 - x(n) ). With r approx. 3.57 you get chaotic results that depend on the initial point. If you graph this versus time you can get lots of different series just by manipulating that parameter r. If you were to graph it as x(n+1) v. x(n) without connecting dots, you see a simple parabola take shape over time. This is one of the most basic functions from chaos theory and trying more interesting polynomials, graphing them as x(n+1) v. x(n) and watching a shape form, and then graphing x(n) v. n is a fun and interesting way to create series. Graphing x(n+1) v. x(n) makes it quickly obvious if you're only visiting a small number of points. Deeper recurrences become more interesting as well, and using different values of x(0) to check on sensitivity to initial conditions is also of interest. But for simplicity, control by a single parameter, and ability to find something to read about your recurrence, it'll be hard to beat the logistic map. I recommend: http://en.wikipedia.org/wiki/Logistic_map. It has a nice description of what to expect from different values of r.
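A minimal sketch of the logistic-map recurrence from the last answer (the parameter values are just examples):

```python
def logistic_series(r, x0, n):
    """Generate n terms of x(k+1) = r * x(k) * (1 - x(k))."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# r near 3.57 and above gives chaotic-looking series; r = 2.5 settles down.
series = logistic_series(r=3.9, x0=0.5, n=200)
print(series[:5])
```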
What are some compact algorithms for generating interesting time series data?
The question sort of says it all. Whether it's for code testing purposes, or you're modeling a real-world process, or you're trying to impress a loved one, what are some algorithms that folks use to generate interesting time series data? Are there any good resources out there with a consolidated list? No constraints on values (except plus or minus infinity) or dimensions, but I'm looking for examples that people have found useful or exciting in practice. Bonus points for parsimonious and readable code samples.
[ "Don't have an answer for the algorithm part but you can see how \"realistic\" your data is with Benford's law\n", "There are a ton of PRN generators out there, and you can always get free random bits, or even buy them on CD or DVD.\nI've used simple sine wave generators mixed together with some phase and amplitude noise thrown in to get signals that sound and look interesting to humans when put through speakers or lights, but I don't know what you mean by interesting.\nThere are ways to generate data that looks interesting in a chart form, but that would be different than data used on a stock chart, and neither would make a nice \"static\" image such as produced by an analog television tuned to a null channel.\nYou can use Conway's game of life as a PRN, and \"listen\" to cells (or run all the cells through a logic circuit) to get some interesting time based signals.\nIt would be interesting to look at the graph of DB updates/inserts for Stackoverflow over time, and you could mine that data.\nThere really are infinite ways to generate an \"interesting\" time series data. Can you narrow the scope of your question?\n", "Try the kind of recurrences that can give variously simple or chaotic series based on the part of their phase spaces you explore: the simplest I can think of is the logistic map x(n+1) = r * x(n) * ( 1 - x(n) ). With r approx. 3.57 you get chaotic results that depend on the initial point.\nIf you graph this versus time you can get lots of different series just by manipulating that parameter r. If you were to graph it as x(n+1) v. x(n) without connecting dots, you see a simple parabola take shape over time.\nThis is one of the most basic functions from chaos theory and trying more interesting polynomials, graphing them as x(n+1) v. x(n) and watching a shape form, and then graphing x(n) v. n is a fun and interesting way to create series.\nGraphing x(n+1) v. x(n) makes it quickly obvious if you're only visiting a small number of points. Deeper recurrences become more interesting as well, and using different values of x(0) to check on sensitivity to initial conditions is also of interest.\nBut for simplicity, control by a single parameter, and ability to find something to read about your recurrence, it'll be hard to beat the logistic map.\nI recommend: http://en.wikipedia.org/wiki/Logistic_map. It has a nice description of what to expect from different values of r.\n" ]
[ 2, 2, 2 ]
[]
[]
[ "algorithm", "language_agnostic", "time_series" ]
stackoverflow_0000041097_algorithm_language_agnostic_time_series.txt
Q: How do I best populate an HTML table in ASP.NET? This is what I've got. It works. But, is there a simpler or better way? ASPX Page… <asp:Repeater ID="RepeaterBooks" runat="server"> <HeaderTemplate> <table class="report"> <tr> <th>Published</th> <th>Title</th> <th>Author</th> <th>Price</th> </tr> </HeaderTemplate> <ItemTemplate> <tr> <td><asp:Literal ID="LiteralPublished" runat="server" /></td> <td><asp:Literal ID="LiteralTitle" runat="server" /></td> <td><asp:Literal ID="LiteralAuthor" runat="server" /></td> <td><asp:Literal ID="LiteralPrice" runat="server" /></td> </tr> </ItemTemplate> <FooterTemplate> </table> </FooterTemplate> </asp:Repeater> ASPX.VB Code Behind… Protected Sub Page_Load( ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load Dim db As New BookstoreDataContext RepeaterBooks.DataSource = From b In db.Books _ Order By b.Published _ Select b RepeaterBooks.DataBind() End Sub Sub RepeaterBooks_ItemDataBound( ByVal sender As Object, ByVal e As System.Web.UI.WebControls.RepeaterItemEventArgs) Handles RepeaterBooks.ItemDataBound If e.Item.ItemType = ListItemType.Item Or e.Item.ItemType = ListItemType.AlternatingItem Then Dim b As Book = DirectCast(e.Item.DataItem, Book) DirectCast(e.Item.FindControl("LiteralPublished"), Literal).Text = "<nobr>" + b.Published.ToShortDateString + "</nobr>" DirectCast(e.Item.FindControl("LiteralTitle"), Literal).Text = "<nobr>" + TryNbsp(HttpContext.Current.Server.HtmlEncode(b.Title)) + "</nobr>" DirectCast(e.Item.FindControl("LiteralAuthor"), Literal).Text = "<nobr>" + TryNbsp(HttpContext.Current.Server.HtmlEncode(b.Author)) + "</nobr>" DirectCast(e.Item.FindControl("LiteralPrice"), Literal).Text = "<nobr>" + Format(b.Price, "c") + "</nobr>" End If End Sub Function TryNbsp(ByVal s As String) As String If s = "" Then Return "&nbsp;" Else Return s End If End Function A: @Geoff That sort of Eval statement was actually added in 2.0, but if performance is important Eval should be avoided since it uses Reflection. The repeater is a pretty good way of doing it, although it might be faster to generate the table in code: ASPX Page: <table class="report" id="bookTable" runat="server"> <tr> <th>Published</th> <th>Title</th> <th>Author</th> <th>Price</th> </tr> </table> Code Behind: Protected Sub Page_Load( ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load If Not Page.IsPostback Then BuildTable() End If End Sub Private Sub BuildTable() Dim db As New BookstoreDataContext Dim bookCollection = From b In db.Books _ Order By b.Published _ Select b Dim row As HtmlTableRow Dim cell As HtmlTableCell For Each book As Book In bookCollection row = New HtmlTableRow() cell = New HtmlTableCell With { .InnerText = book.Published.ToShortDateString } row.Controls.Add(cell) cell = New HtmlTableCell With { .InnerText = TryNbsp(HttpContext.Current.Server.HtmlEncode(book.Title)) } row.Controls.Add(cell) cell = New HtmlTableCell With { .InnerText = TryNbsp(HttpContext.Current.Server.HtmlEncode(book.Author)) } row.Controls.Add(cell) cell = New HtmlTableCell With { .InnerText = Format(book.Price, "c") } row.Controls.Add(cell) bookTable.Controls.Add(row) Next End Sub I guess it depends on how important speed is to you. For simplicity's sake I think I would go with the Repeater. A: The ListView control introduced with framework 3.5 might be a little better solution. Your markup would look like this: <asp:ListView runat="server" ID="ListView1" DataSourceID="SqlDataSource1"> <LayoutTemplate> <table runat="server" id="table1" > <tr runat="server" id="itemPlaceholder" ></tr> </table> </LayoutTemplate> <ItemTemplate> <tr runat="server"> <td runat="server"> <asp:Label ID="NameLabel" runat="server" Text='<%#Eval("Name") %>' /> </td> </tr> </ItemTemplate> </asp:ListView> You'll want to set your data source ID from a public or private property in the code-behind class. A: In .Net 3.0+ you can replace your ItemDataBound handler and the asp:Literal by doing something like this: <ItemTemplate> <tr> <td><%# Eval("published") %></td> ... where "published" is the name of a field in the data you have bound to the repeater Edit: @Alassek: I think the performance hit of reflection is often over-emphasized. Obviously you need to benchmark performance of your app, but the hit of the Eval is likely measured in milliseconds. Unless your app is serving many concurrent hits, this probably isn't an issue, and the simplicity of the code using Eval, along with it being a good separation of the presentation, makes it a good solution. A: I agree with Geoff; the only time we use Literals is if we want to do something different with the data. For example, we might want a DueDate field to say "Today" or "Yesterday" instead of the actual date. A: This is what the GridView is for. <asp:GridView runat="server" DataSourceID="SqlDataSource1"> <Columns> <asp:BoundField HeaderText="Published" DataField="Published" /> <asp:BoundField HeaderText="Author" DataField="Author" /> </Columns> </asp:GridView> A: I would use a GridView (or DataGrid, if you are using an older version of ASP.NET). <asp:GridView ID="gvBooks" runat="server" AutoGenerateColumns="False"> <Columns> <asp:BoundField HeaderText="Published" DataField="Published" /> <asp:BoundField HeaderText="Title" DataField="Title" /> <asp:BoundField HeaderText="Author" DataField="Author" /> <asp:BoundField HeaderText="Price" DataField="Price" /> </Columns> </asp:GridView> With some code-behind: Private Sub gvBooksRowDataBound(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.GridViewRowEventArgs) Handles gvBooks.RowDataBound Select Case e.Row.RowType Case DataControlRowType.DataRow ''' Your code here ''' End Select End Sub You can bind it in a similar way. The RowDataBound event is what you need. A: ALassek wrote: …generate the table in code… I like the look of that! It seems MUCH less likely to produce a run-time exception due to a typo or field name change. A: If you don't need ASP.NET-handled edit capabilities I would stay away from the DataGrid and the GridView ... they provide unnecessary bloat.
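For reference, here is roughly what the original Repeater's ItemTemplate could shrink to using the Eval suggestion — a sketch only: it assumes Title and Author are never null and drops the TryNbsp empty-cell handling from the original code-behind:

```aspx
<ItemTemplate>
    <tr>
        <td><nobr><%# Eval("Published", "{0:d}") %></nobr></td>
        <td><nobr><%# Server.HtmlEncode(Eval("Title").ToString()) %></nobr></td>
        <td><nobr><%# Server.HtmlEncode(Eval("Author").ToString()) %></nobr></td>
        <td><nobr><%# Eval("Price", "{0:c}") %></nobr></td>
    </tr>
</ItemTemplate>
```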
How do I best populate an HTML table in ASP.NET?
This is what I've got. It works. But, is there a simpler or better way? ASPX Page… <asp:Repeater ID="RepeaterBooks" runat="server"> <HeaderTemplate> <table class="report"> <tr> <th>Published</th> <th>Title</th> <th>Author</th> <th>Price</th> </tr> </HeaderTemplate> <ItemTemplate> <tr> <td><asp:Literal ID="LiteralPublished" runat="server" /></td> <td><asp:Literal ID="LiteralTitle" runat="server" /></td> <td><asp:Literal ID="LiteralAuthor" runat="server" /></td> <td><asp:Literal ID="LiteralPrice" runat="server" /></td> </tr> </ItemTemplate> <FooterTemplate> </table> </FooterTemplate> </asp:Repeater> ASPX.VB Code Behind… Protected Sub Page_Load( ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load Dim db As New BookstoreDataContext RepeaterBooks.DataSource = From b In db.Books _ Order By b.Published _ Select b RepeaterBooks.DataBind() End Sub Sub RepeaterBooks_ItemDataBound( ByVal sender As Object, ByVal e As System.Web.UI.WebControls.RepeaterItemEventArgs) Handles RepeaterBooks.ItemDataBound If e.Item.ItemType = ListItemType.Item Or e.Item.ItemType = ListItemType.AlternatingItem Then Dim b As Book = DirectCast(e.Item.DataItem, Book) DirectCast(e.Item.FindControl("LiteralPublished"), Literal).Text = "<nobr>" + b.Published.ToShortDateString + "</nobr>" DirectCast(e.Item.FindControl("LiteralTitle"), Literal).Text = "<nobr>" + TryNbsp(HttpContext.Current.Server.HtmlEncode(b.Title)) + "</nobr>" DirectCast(e.Item.FindControl("LiteralAuthor"), Literal).Text = "<nobr>" + TryNbsp(HttpContext.Current.Server.HtmlEncode(b.Author)) + "</nobr>" DirectCast(e.Item.FindControl("LiteralPrice"), Literal).Text = "<nobr>" + Format(b.Price, "c") + "</nobr>" End If End Sub Function TryNbsp(ByVal s As String) As String If s = "" Then Return "&nbsp;" Else Return s End If End Function
[ "@Geoff\nThat sort of Eval statement was actually added in 2.0, but if performance is important Eval should be avoided since it uses Reflection.\nThe repeater is a pretty good way of doing it, although it might be faster to generate the table in code:\nASPX Page:\n<table class=\"report\" id=\"bookTable\" runat=\"server\">\n <tr>\n <th>Published</th>\n <th>Title</th>\n <th>Author</th>\n <th>Price</th>\n </tr>\n </table>\n\nCode Behind:\nProtected Sub Page_Load( ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load\n If Not Page.IsPostback Then\n BuildTable()\n End If\nEnd Sub\n\nPrivate Sub BuildTable()\n Dim db As New BookstoreDataContext\n Dim bookCollection = from b in db.Books _\n Order By b.Published _\n Select b\n Dim row As HtmlTableRow\n Dim cell As HtmlTableCell\n\n For Each book As Books In bookCollection\n row = New HtmlTableRow()\n cell = New HtmlTableCell With { .InnerText = b.Published.ToShortDateString }\n row.Controls.Add(cell)\n cell = New HtmlTableCell With { .InnerText = TryNbsp(HttpContext.Current.Server.HtmlEncode(b.Title)) }\n row.Controls.Add(cell)\n cell = New HtmlTableCell With { .InnerText = TryNbsp(HttpContext.Current.Server.HtmlEncode(b.Author))\n row.Controls.Add(cell)\n cell = New HtmlTableCell With { .InnerText = Format(b.Price, \"c\") }\n row.Controls.Add(cell)\n bookTable.Controls.Add(row)\n Next\n\nI guess it depends on how important speed is to you. For simplicity's sake I think I would go with the Repeater.\n", "The ListView control introduced with framework 3.5 might be a little bit better solution. Your markup would look like this:\n<asp:ListView runat=\"server\" ID=\"ListView1\"\n DataSourceID=\"SqlDataSource1\">\n <LayoutTemplate>\n <table runat=\"server\" id=\"table1\" runat=\"server\" >\n <tr runat=\"server\" id=\"itemPlaceholder\" ></tr>\n </table>\n </LayoutTemplate>\n <ItemTemplate>\n <tr runat=\"server\">\n <td runat=\"server\">\n <asp:Label ID=\"NameLabel\" runat=\"server\"\n Text='<%#Eval(\"Name\") %>' />\n </td>\n </tr>\n </ItemTemplate>\n</asp:ListView>\n\nYou'll want to set your data source ID from a public or private property in the code-behind class.\n", "In .Net 3.0+ you can replace your ItemDataBound to the asp:Literal by doing something like this:\n<ItemTemplate>\n <tr>\n <td><%# Eval(\"published\") %></td>\n ...\n\nwhere \"published\" is the name of a field in the data you have bound to the repeater\nEdit:\n@Alassek: I think the performance hit of reflection is often over-emphasized. Obviously you need to benchmark performance of your app, but the hit of the Eval is likely measured in milliseconds. 
Unless your app is serving many concurrent hits, this probably isn't an issue, and the simplicity of the code using Eval, along with it being a good separation of the presentation, make it a good solution.\n", "I agree with Geoff, the only time we use Literals is if we want to do something different with the data.\nFor example, we might want a DueDate field to say \"Today\" or \"Yesterday\" instead of the actual date.\n", "This is what the GridView is for.\n<asp:GridView runat=\"server\" DataSourceID=\"SqlDataSource1\">\n <Columns>\n <asp:BoundField HeaderText=\"Published\" DataField=\"Published\" />\n <asp:BoundField HeaderText=\"Author\" DataField=\"Author\" />\n </Columns>\n</asp:GridView>\n\n", "I would use a GridView (or DataGrid, if you are using an older version of ASP.NET).\n<asp:GridView ID=\"gvBooks\" runat=\"server\" AutoGenerateColumns=\"False\">\n <Columns>\n <asp:BoundField HeaderText=\"Published\" DataField=\"Published\" />\n <asp:BoundField HeaderText=\"Title\" DataField=\"Title\" /> \n <asp:BoundField HeaderText=\"Author\" DataField=\"Author\" />\n <asp:BoundField HeaderText=\"Price\" DataField=\"Price\" />\n </Columns>\n</asp:GridView>\n\nWith some code-behind:\nPrivate Sub gvBooksRowDataBound(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.GridViewRowEventArgs) Handles gvBooks.RowDataBound\n Select Case e.Row.RowType\n Case DataControlRowType.DataRow\n\n ''' Your code here '''\n\n End Select\nEnd Sub\n\nYou can bind it in a similar way. The RowDataBound event is what you need.\n", "\nALassek wrote:\n…generate the table in code…\n\nI like the look of that! It seems MUCH less likely to produce a run-time exception due to a typo or field name change.\n", "If you don't need ASP.NET handled edit capabilities I would stay away from the DataGrid and the GridView ... they provide unnecessary bloat.\n" ]
[ 4, 3, 2, 1, 1, 1, 0, 0 ]
[]
[]
[ "asp.net", "html", "vb.net" ]
stackoverflow_0000043803_asp.net_html_vb.net.txt
Q: Hudson can't build my Maven 2 project because it says artifacts are missing from the repository? (they aren't) I'm using Hudson and Maven 2 for my automated build/CI. I can build fine with Maven from the command line, but when I run the same goal with Hudson, the build fails complaining of missing artifacts. I'm running Hudson as a Windows XP service. A: Make sure you're running Hudson as the same user that you are using to run Maven from the command line. Maven creates a separate repository for each user. If you are running Hudson as a Windows service, this won't be the same user as you have logged on as and will be running "mvn" commands with. This means the artifacts in the repositories may be different. To fix, either start Hudson manually as the user which works, or update the repository for the user which Hudson is running as. A: Obvious question, but have you got Hudson set up to point to the same Maven repository as your command line build? You can check this from the Hudson admin gui - look in the Maven section of the Manage Hudson page. This should have a MAVEN_HOME environment variable listed. Look in the settings.xml file under: MAVEN_HOME\conf\settings.xml The localRepository configuration item is the location of the Maven repository that the Hudson build is using.
Hudson can't build my Maven 2 project because it says artifacts are missing from the repository? (they aren't)
I'm using Hudson and Maven 2 for my automated build/CI. I can build fine with Maven from the command line, but when I run the same goal with Hudson, the build fails complaining of missing artifacts. I'm running Hudson as a Windows XP service.
[ "Make sure you're running Hudson as the same user that you are using to run Maven from the command line. Maven creates a separate repository for each user. If you are running Hudson as a Windows service, this won't be the same user as you have logged on as and will be running \"mvn\" commands with. This means the artifacts in the repositories may be different.\nTo fix, either start Hudson manually as the user which works, or update the repository for the user which Hudson is running as.\n", "Obvious question, but have you got Hudson set up to point to the same Maven repository as your command line build? You can check this from the Hudson admin gui - look in the Maven section of the Manage Hudson page. This should have a MAVEN_HOME environment variable listed. Look in the settings.xml file under:\nMAVEN_HOME\\conf\\settings.xml\n\nThe localRepository configuration item is the location of the Maven repository that the Hudson build is using.\n" ]
[ 3, 3 ]
[]
[]
[ "continuous_integration", "hudson", "maven_2" ]
stackoverflow_0000044144_continuous_integration_hudson_maven_2.txt
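A quick way to confirm the per-user repository mismatch described in the first answer is to diff the artifact trees of the two local repositories. A minimal Python sketch, assuming the default ~/.m2 location for the interactive user and the XP LocalSystem profile path for the service account (both paths are assumptions; point them at whatever the two accounts actually use):

    import os

    def list_artifacts(repo_root):
        """Collect the relative paths of every .jar/.pom under a Maven local repo."""
        found = set()
        for dirpath, _, filenames in os.walk(repo_root):
            for name in filenames:
                if name.endswith((".jar", ".pom")):
                    found.add(os.path.relpath(os.path.join(dirpath, name), repo_root))
        return found

    mine = list_artifacts(os.path.expanduser("~/.m2/repository"))
    # Assumed location for a service running as LocalSystem on Windows XP.
    service = list_artifacts(r"C:\WINDOWS\system32\config\systemprofile\.m2\repository")

    for missing in sorted(mine - service):
        print("service account is missing:", missing)

Anything the service account is missing is an artifact Hudson will report as unresolvable even though your own command-line builds work.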
Q: IE6 and XML prolog With an XML prolog like <?xml version="1.0" encoding="iso-8859-1"?> and a doctype like <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Frameset//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-frameset.dtd"> I can get my page to render as expected. However, in IE7 the same page does not render correctly. (a span inside a div does not align vertically) Articles on the web suggest that XML prolog + doctype will throw IE6 into quirks mode. However this article seems to suggest otherwise, although it does not mention the version (is it 6 or 7) it applies to, though the article is dated Sep 2005 which makes me believe it applies to IE6. Does XML prolog + doctype throw IE6 into quirks mode? What about IE7? Any recommendations on for or against using the prolog + doctype? A: Adding an XML prolog before the doctype will throw IE6 into quirks rendering mode. (See here.) In fact, any space before the doctype will throw IE6 into quirks mode. This is not the case for IE7 and above. You can use document.compatMode (example) to have the browser tell you what mode it is using to do the rendering. The IE blog entry on MSDN is referring to changes made to IE7 that allow IE7 to stay in standards mode when using the appropriate doctype even if it is preceded by an XML prolog. I would generally recommend omitting the prolog and keeping the browser in standards mode; I think this will make your life easier moving forward.
IE6 and XML prolog
With an XML prolog like <?xml version="1.0" encoding="iso-8859-1"?> and a doctype like <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Frameset//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-frameset.dtd"> I can get my page to render as expected. However, in IE7 the same page does not render correctly. (a span inside a div does not align vertically) Articles on the web suggest that XML prolog + doctype will throw IE6 into quirks mode. However this article seems to suggest otherwise, although it does not mention the version (is it 6 or 7) it applies to, though the article is dated Sep 2005 which makes me believe it applies to IE6. Does XML prolog + doctype throw IE6 into quirks mode? What about IE7? Any recommendations on for or against using the prolog + doctype?
[ "Adding an XML prolog before the doctype will throw IE6 into quirks rendering mode. (See here.) In fact, any space before the doctype will throw IE6 into quirks mode. This is not the case for IE7 and above. You can use document.compatMode (example) to have the browser tell you what mode it is using to do the rendering.\nThe IE blog entry on MSDN is referring to changes made to IE7 that allow IE7 to stay in standards mode when using the appropriate doctype even if it is preceded by an XML prolog.\nI would generally recommend omitting the prolog and keeping the browser in standards mode; I think this will make your life easier moving forward. \n" ]
[ 2 ]
[]
[]
[ "internet_explorer_6", "standards" ]
stackoverflow_0000044087_internet_explorer_6_standards.txt
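Since any bytes before the doctype are what trip IE6 into quirks mode, this is easy to lint for before deploying. A small Python sketch (the file name and the iso-8859-1 encoding are assumptions taken from the prolog in the question):

    import re

    def quirks_risk(html_path):
        """True if anything (XML prolog, whitespace, comments) precedes the doctype."""
        with open(html_path, "r", encoding="iso-8859-1") as f:
            text = f.read()
        match = re.search(r"<!DOCTYPE", text, re.IGNORECASE)
        if match is None:
            return True  # no doctype at all: IE6 renders in quirks mode anyway
        return text[:match.start()].strip() != ""

    print(quirks_risk("index.html"))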
Q: SQL Server Alter Computed Column Does anyone know of a way to alter a computed column without dropping the column in SQL Server? I want to stop using the column as a computed column and start storing data directly in the column, but would like to retain the current values. Is this even possible? A: Not that I know of but here is something you can do add another column to the table update that column with the values of the computed column then drop the computed column A: If you need to maintain the name of the column (so as not to break client code), you will need to drop the column and add back a stored column with the same name. You can do this without downtime by making the changes (along the lines of SQLMenace's solution) in a single transaction. Here's some pseudo-code: begin transaction drop computed column X add stored column X populate column using the old formula commit transaction A: Ok, so let me see if I got this straight. You want to take a column that is currently computed and make it a plain-jane data column. Normally this would drop the column but you want to keep the data in the column. Make a new table with the primary key columns from your source table and the generated column. Copy the data from your source table into the new table. Change the column on your source table. Copy the data back. No matter what you do I am pretty sure changing the column will drop it. This way is a bit more complex but not that bad and it saves your data. [Edit: @SqlMenace's answer is much easier. :) Curse you Menace!! :)]
SQL Server Alter Computed Column
Does anyone know of a way to alter a computed column without dropping the column in SQL Server? I want to stop using the column as a computed column and start storing data directly in the column, but would like to retain the current values. Is this even possible?
[ "Not that I know of but here is something you can do\nadd another column to the table\nupdate that column with the values of the computed column then drop the computed column\n", "If you need to maintain the name of the column (so as not to break client code), you will need to drop the column and add back a stored column with the same name. You can do this without downtime by making the changes (along the lines of SQLMenace's solution) in a single transaction. Here's some pseudo-code:\n\nbegin transaction\n drop computed colum X\n add stored column X\n populate column using the old formula\ncommit transaction\n\n", "Ok, so let me see if I got this straight. You want to take a column that is currently computed and make it a plain-jane data column. Normally this would drop the column but you want to keep the data in the column.\n\nMake a new table with the primary key columns from your source table and the generated column.\nCopy the data from your source table into the new table.\nChange the column on your source table.\nCopy the data back.\n\nNo matter what you do I am pretty sure changing the column will drop it. This way is a bit more complex but not that bad and it saves your data.\n[Edit: @SqlMenace's answer is much easier. :) Curse you Menace!! :)]\n" ]
[ 10, 2, 1 ]
[]
[]
[ "alter_table", "sql_server" ]
stackoverflow_0000044118_alter_table_sql_server.txt
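The single-transaction swap from the second answer can be scripted from Python with pyodbc. The connection string, table name (dbo.Orders), column name (Total) and its money type are hypothetical placeholders, and it assumes sp_rename is permitted inside your transaction, so treat this as a sketch of the approach rather than a drop-in script:

    import pyodbc

    conn = pyodbc.connect("DSN=MyServer;Trusted_Connection=yes", autocommit=False)
    cur = conn.cursor()
    try:
        # 1. Add a plain column and snapshot the computed values into it.
        cur.execute("ALTER TABLE dbo.Orders ADD Total_tmp money NULL")
        cur.execute("UPDATE dbo.Orders SET Total_tmp = Total")
        # 2. Drop the computed column and give the snapshot its old name.
        cur.execute("ALTER TABLE dbo.Orders DROP COLUMN Total")
        cur.execute("EXEC sp_rename 'dbo.Orders.Total_tmp', 'Total', 'COLUMN'")
        conn.commit()
    except Exception:
        conn.rollback()
        raise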
Q: SQL: Select like column from two tables I have a database with two tables (Table1 and Table2). They both have a common column [ColumnA] which is an nvarchar. How can I select this column from both tables and return it as a single column in my result set? So I'm looking for something like: ColumnA in Table1: a b c ColumnA in Table2: d e f Result set should be: a b c d e f A: SELECT ColumnA FROM Table1 UNION SELECT ColumnA FROM Table2 ORDER BY 1 Also, if you know the contents of Table1 and Table2 will NEVER overlap, you can use UNION ALL in place of UNION instead. Saves a little bit of resources that way. -- Kevin Fairchild A: Do you care if you get dups or not? UNION will be slower than UNION ALL because UNION will filter out dups A: Use the UNION operator: SELECT ColumnA FROM Table1 UNION SELECT ColumnA FROM Table2 A: Note that UNION already returns distinct rows across both sets, so adding DISTINCT is redundant: SELECT DISTINCT ColumnA FROM Table1 UNION SELECT DISTINCT ColumnA FROM Table2 If 'd' appeared in Table1 or 'c' appeared in Table2, a plain UNION would still return each value only once; use UNION ALL if you want to keep the duplicates. A: You can use a union select: Select columnA from table1 union select columnA from table2 A: SELECT Table1.*, Table2.d, Table2.e, Table2.f FROM Table1 JOIN Table2 ON Table1.a = Table2.a Or am I misunderstanding your question? Edit: It appears I did. A: I believe it's: SELECT columna FROM table1 UNION SELECT columna FROM table2; A: In Oracle (at least) there is UNION and UNION ALL, UNION ALL will return all results from both sets even if there are duplicates, whereas UNION will return the distinct results from both sets.
SQL: Select like column from two tables
I have a database with two tables (Table1 and Table2). They both have a common column [ColumnA] which is an nvarchar. How can I select this column from both tables and return it as a single column in my result set? So I'm looking for something like: ColumnA in Table1: a b c ColumnA in Table2: d e f Result set should be: a b c d e f
[ "SELECT ColumnA FROM Table1 UNION Select ColumnB FROM Table2 ORDER BY 1\n\nAlso, if you know the contents of Table1 and Table2 will NEVER overlap, you can use UNION ALL in place of UNION instead. Saves a little bit of resources that way.\n-- Kevin Fairchild\n", "Do you care if you get dups or not?\nUNION will be slower than UNION ALL because UNION will filter out dups\n", "Use the UNION operator:\nSELECT ColumnA FROM Table1\nUNION\nSELECT ColumnA FROM Table2\n\n", "The union answer is almost correct, depending on overlapping values: \nSELECT distinct ColumnA FROM Table1\nUNION\nSELECT distinct ColumnA FROM Table2\n\nIf 'd' appeared in Table1 or 'c' appeared in Table2 you would have multiple rows with them. \n", "You can use a union select: \nSelect columnA from table1 union select columnA from table2 \n\n", "SELECT Table1.*, Table2.d, Table2.e, Table2.f \nFROM Table1 JOIN Table2 ON Table1.a = Table2.a\n\nOr am I misunderstanding your question?\nEdit: It appears I did.\n", "I believe it's:\nSELECT columna FROM table1 UNION SELECT columnb FROM table2;\n\n", "In Oracle (at least) there is UNION and UNION ALL, UNION ALL will return all results from both sets even if there are duplicates, where as UNION will return the distinct results from both sets.\n" ]
[ 16, 3, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "sql" ]
stackoverflow_0000044181_sql.txt
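The UNION vs UNION ALL distinction is easy to see with a throwaway in-memory database. A Python/sqlite3 sketch (sqlite3 is used here only to demonstrate the semantics; the same SQL runs on SQL Server, Oracle, etc.):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE Table1 (ColumnA TEXT);
        CREATE TABLE Table2 (ColumnA TEXT);
        INSERT INTO Table1 VALUES ('a'), ('b'), ('c');
        INSERT INTO Table2 VALUES ('c'), ('d'), ('e');
    """)

    union = conn.execute(
        "SELECT ColumnA FROM Table1 UNION SELECT ColumnA FROM Table2 ORDER BY 1").fetchall()
    union_all = conn.execute(
        "SELECT ColumnA FROM Table1 UNION ALL SELECT ColumnA FROM Table2 ORDER BY 1").fetchall()

    print([r[0] for r in union])      # ['a', 'b', 'c', 'd', 'e'], duplicates removed
    print([r[0] for r in union_all])  # ['a', 'b', 'c', 'c', 'd', 'e'], 'c' kept twice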
Q: Connecting private IPs A friend of mine told me there was a way to connect two private IPs without using a proxy server. The idea was that both computers connected to a public server and somehow the server joined the private connections and won't use any more bandwidth. Is this true? What is this technique called? A: There is a technique called "Hole Punching" that works well with "Cone" NAT (Cone is a technical family of router). That's not a 100% reliable technique; today, it works well with UDP on about 80% of routers. There are some library implementations of hole punching: STUN (wikipedia) A: This is true. It's the way FogCreek Copilot works. Take a look at item 2 on Joel's Copilot 2.0 post. A: Your friend might be referring to VIP's (Virtual IP's). From my understanding a VIP is usually controlled by a piece of hardware like a router and then redirects to one of your 2 private IP's. We use this with a cluster of machines behind a VIP. I'm not a network guy so that's pretty much the extent of my knowledge. A: If you're looking at joining two private networks (two networks of machines behind a NAT), the best way to do this is with a VPN. There are many pieces of equipment available to accomplish this. A: I'm not sure it's what you're thinking of, but you could do something similar with ssh tunneling. Let's say you wanted userA on 10.1.2.3/24 to connect to a mysql server on userB's 192.168.0.3/24. There's no direct network connectivity between the two networks, but both machines can connect to serverA on the public internet. userB runs this command: ssh -R localhost:13306:localhost:3306 username@serverA userA runs this command: ssh -L 3306:localhost:13306 username@serverA Now userA can use whatever tool they please to connect to mysql on localhost and the cxn will be tunneled through serverA and to the mysql daemon running on localhost on userB's machine. (hopefully no typos, typed with one hand as I hold my two day old daughter =))
Connecting private IPs
A friend of mine told me there was a way to connect two private IPs without using a proxy server. The idea was that both computers connected to a public server and somehow the server joined the private connections and won't use any more bandwidth. Is this true? What is this technique called?
[ "There is a technique called \"Hole Punching\" that works well with \"Cone\" NAT (Cone is a technical familly of router). That's not an 100% sure technique, today, it works well with UDP on about 80% of the router.\nThere is some implementations of library to realize Hole Punching: STUN (wikipedia)\n", "This is true. It's the way FogCreek Copilot works\nTake a look at item 2 on Joel's Copilot 2.0 post.\n", "Your friend might be referring to VIP's (Virtual IP's). From my understanding a VIP is usually controlled by a piece of hardware like a router and then redirects to one of your 2 private IP's. We use this with a cluster of machines behind a VIP. I'm not a network guy so that's pretty much the extent of my knowledge.\n", "If you're looking at joining two private networks (two networks of machines behind a NAT), the best way to do this is with a VPN. There are many pieces of equipment available to accomplish this. \n", "I'm not sure it's what you're thinking of, but you could do something similar with ssh tunneling. Let's say you wanted userA on 10.1.2.3/24 to connect a mysql server on userB's on 192.168.0.3/24. There's no direct network connectivity between the two networks, but both machines can connect to serverA on the public internet.\nuserB runs this command:\nssh -R localhost:13306:localhost:3306 username@serverA\n\nuserA runs this command:\nssh -L 3306:localhost:13306 username@serverA\n\nNow userA can use whatever tool they please to connect to mysql on localhost and the cxn will be tunneled through serverA and to the mysql daemon running on localhost on userB's machine.\n(hopefully no typos, typed with one hand as I hold my two day old daughter =))\n" ]
[ 2, 0, 0, 0, 0 ]
[]
[]
[ "ip_address", "tcp" ]
stackoverflow_0000044177_ip_address_tcp.txt
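A minimal sketch of the UDP hole punching the first answer describes, in Python. It assumes a rendezvous server has already told each peer the other's public (IP, port), the endpoint below is hypothetical, and both peers run this at roughly the same time:

    import socket

    peer = ("203.0.113.7", 40000)  # hypothetical public endpoint of the other side
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", 40000))
    sock.settimeout(2.0)

    # Sending first punches an outbound mapping into our own NAT so the
    # peer's packets are let back in; both sides do this simultaneously.
    for _ in range(5):
        sock.sendto(b"punch", peer)
        try:
            data, addr = sock.recvfrom(1024)
            print("direct path established with", addr)
            break
        except socket.timeout:
            continue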
Q: Best way to use a property to reference a Key-Value pair in a dictionary This is a fairly trivial matter, but I'm curious to hear people's opinions on it. If I have a Dictionary which I'm accessing through properties, which of these formats would you prefer for the property? /// <summary> /// This class's FirstProperty property /// </summary> [DefaultValue("myValue")] public string FirstProperty { get { return Dictionary["myKey"]; } set { Dictionary["myKey"] = value; } This is probably the typical way of doing it. It's fairly efficient, easy to understand, etc. The only disadvantage is with a longer or more complex key it would be possible to misspell it or change only one instance or something, leading me to this: /// <summary> /// This class's SecondProperty property /// </summary> [DefaultValue("myValue")] private const string DICT_MYKEY = "myKey"; public string SecondProperty { get { return Dictionary[DICT_MYKEY]; } set { Dictionary[DICT_MYKEY] = value; } Which is marginally more complicated, but seems to offer additional safety, and is closer to what I would think of as the "Code Complete" solution. The downside is that when you also have a /// block and a [DefaultValue()] block above the property already, it starts getting a bit crowded up there. So which do you like better, and why? Does anybody have any better ideas? A: I like the second one purely because any avoidance of magic strings/numbers in code is a good thing. IMO if you need to reference a number or string literal in code more than once, it should be a constant. In most cases even if it's only used once it should be in a constant A: I agree with @Glenn for a purely nit-picky point of view. The answer is whatever works for you. All this code takes place in 10 lines (if you include the omitted last curly brace). Nobody is going to get lost and the chance of mistyping is pretty slim (not impossible but very slim). On the other hand, if you used the key somewhere else, then DEFINITELY go with the constant. Personally, I would go off on you about your curly brace style. :) Just kidding! It really is a matter of style. A: This isn't answering your question, but I don't think "DefaultValue" means what you think it means. It doesn't set a default value for your property. See MSDN and this question for more details. A: A lot of people would probably argue that the second option is "correct", because any value used more than once should be refactored into a constant. I would most likely use the first option. You have already gotten close to the "Code Complete" solution by encapsulating the dictionary entry in a strongly typed property. This reduces the chance of screwing up retrieving the wrong Dictionary entry in your implementation. There are only 2 places where you could mess up typing "myKey", in the getter and setter, and this would be very easy to spot. The second option would just get too messy. A: You could match the property names up to the keys and use reflection to get the name for the lookup. public string FirstProperty { get { return Dictionary[PropertyName()]; } set { Dictionary[PropertyName()] = value; } private string PropertyName() { return new StackFrame(1).GetMethod().Name.Substring(4); } This has the added benefit of making all your property implementation identical, so you could set them up in Visual Studio as code snippets if you want. A: When you only use a magic string in one context, like you do, I think it's alright. But if you ever need to use the key in another part of the class, go const. 
A: @Joel you don't want to count on StackFrame. In-lining can ruin your day when you least expect it. But to the question: Either way doesn't really matter a whole lot.
Best way to use a property to reference a Key-Value pair in a dictionary
This is a fairly trivial matter, but I'm curious to hear people's opinions on it. If I have a Dictionary which I'm accessing through properties, which of these formats would you prefer for the property? /// <summary> /// This class's FirstProperty property /// </summary> [DefaultValue("myValue")] public string FirstProperty { get { return Dictionary["myKey"]; } set { Dictionary["myKey"] = value; } This is probably the typical way of doing it. It's fairly efficient, easy to understand, etc. The only disadvantage is with a longer or more complex key it would be possible to misspell it or change only one instance or something, leading me to this: /// <summary> /// This class's SecondProperty property /// </summary> [DefaultValue("myValue")] private const string DICT_MYKEY = "myKey"; public string SecondProperty { get { return Dictionary[DICT_MYKEY]; } set { Dictionary[DICT_MYKEY] = value; } Which is marginally more complicated, but seems to offer additional safety, and is closer to what I would think of as the "Code Complete" solution. The downside is that when you also have a /// block and a [DefaultValue()] block above the property already, it starts getting a bit crowded up there. So which do you like better, and why? Does anybody have any better ideas?
[ "I like the second one purely because any avoidance of magic strings/numbers in code is a good thing. IMO if you need to reference a number or string literal in code more than once, it should be a constant. In most cases even if it's only used once it should be in a constant \n", "I agree with @Glenn for a purely nit-picky point of view. The answer is whatever works for you. All this code takes place in 10 lines (if you include the omitted last curly brace). Nobody is going to get lost and the chance of mistyping is pretty slim (not impossible but very slim). On the other hand, if you used the key somewhere else, then DEFINATELY go with the constant.\nPersonally, I would go off on you about your curly brace style. :) Just kidding! It really is a matter of style.\n", "This isn't answering your question, but I don't think \"DefaultValue\" means what you think it means. It doesn't set a default value for your property.\nSee MSDN and this question for more details.\n", "A lot of people would probably argue that the second option is \"correct\", because any value used more than once should be refactored into a constant. I would most likely use the first option. You have already gotten close to the \"Code Complete\" solution by encapsulating the dictionary entry in a strong typed property. This reduces the chance of screwing up retrieving the wrong Dictionary entry in your implementation. \nThere are only 2 places where you could mess up typing \"myKey\", in the getter and setter, and this would be very easy to spot. \nThe second option would just get too messy. \n", "You could match the property names up to the keys and use reflection to get the name for the lookup.\npublic string FirstProperty {\nget {\n return Dictionary[PropertyName()];\n}\nset {\n Dictionary[PropertyName()] = value;\n}\n\nprivate string PropertyName()\n{\n return new StackFrame(1).GetMethod().Name.Substring(4);\n}\n\nThis has the added benefit of making all your property implementation identical, so you could set them up in visual studio as code snippets if you want.\n", "When you only use a magic string in one context, like you do, I think it's alright.\nBut if you ever need to use the key in another part of the class, go const.\n", "@Joel you don't want to count on StackFrame. In-lining can ruin your day when you least expect it.\nBut to the question: Either way doesn't really matter a whole lot.\n" ]
[ 4, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ ".net", "constants", "dictionary", "properties" ]
stackoverflow_0000044100_.net_constants_dictionary_properties.txt
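For comparison, the "named key constant" option looks like this as a Python property backed by a dictionary. The key is defined once, so it cannot silently diverge between getter and setter, and a default is returned when the key is absent (names and the default value are illustrative):

    _PUBLISHED_KEY = "published"  # single definition removes the misspelling risk

    class Record:
        def __init__(self):
            self._values = {}

        @property
        def published(self):
            return self._values.get(_PUBLISHED_KEY, "myValue")  # default value

        @published.setter
        def published(self, value):
            self._values[_PUBLISHED_KEY] = value

    r = Record()
    print(r.published)            # myValue
    r.published = "2008-09-04"
    print(r.published)            # 2008-09-04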
Q: Direct TCP/IP connections in P2P apps From Joel's post on Copilot: Direct Connect! We’ve always done everything we can to make sure that Fog Creek Copilot can connect in any networking situation, no matter what firewalls or NATs are in place. To make this happen, both parties make outbound connections to our server, which relays traffic on their behalf. Well, in many cases, this isn’t necessary. So version 2.0 does something rather clever: it sets up the initial connection through our servers, so you get connected right away with 100% reliability. But then once you’re all connected, it quietly, in the background, looks for a way to make a direct connection. If it can’t, no big deal: you just keep relaying through our server. If you can make a direct peer-to-peer connection, it silently shifts your data onto the direct connection. You won’t notice anything except, probably, much faster communication. How do they change the server connection to a P2P connection? A: It's pretty tricky and interesting. I'm sure I have some details wrong, but the overview is this: The programs can already talk to each other through Joel's server, so they can exchange information with each other and Joel's server. Further, Joel has their external IP addresses, and they give Joel information about their internal IP addresses. They decide to try this hole punch technique. Computer A initiates a TCP connection with Computer B using B's external IP address. It won't go through, but what it does is tells A's router that it needs to allow incoming packets from B on a given port. Computer B does the same thing, but its message gets through to A since A's router opened a port/ip combination that matches what B sent (there's some port magic that happens here - this is non-trivial, but doable). B's router remembers that B initiated a connection with A on a given port and IP, and so A's packets now flow into B past their router correctly as well. So it's actually pretty straightforward, but the implementation has details, especially regarding how ports are given to new TCP connections, and how NAT routers typically deal with TCP requests and how they map to external ports. These details are the interesting, and difficult, bit. -Adam A: There is a technique called "Hole Punching" that works well with "Cone" NAT (Cone is a technical family of router). That's not a 100% reliable technique; today, it works well with UDP on about 80% of routers. There are some library implementations of hole punching: STUN (wikipedia) A: I believe the simple version is that they drop the server connection and replace it with the P2P connection. Something along the lines of: Machine1 connects to copilot's servers. Machine2 subsequently connects, and they begin screen sharing. Machine2 opens a port intended for Machine1 to connect to. Machine1 tries to connect to the now open port on Machine2. If this connection is established: The connection to copilot's servers is severed. Data is instead transferred over the direct (P2P) connection between the two machines.
Direct TCP/IP connections in P2P apps
From Joel's post on Copilot: Direct Connect! We’ve always done everything we can to make sure that Fog Creek Copilot can connect in any networking situation, no matter what firewalls or NATs are in place. To make this happen, both parties make outbound connections to our server, which relays traffic on their behalf. Well, in many cases, this isn’t necessary. So version 2.0 does something rather clever: it sets up the initial connection through our servers, so you get connected right away with 100% reliability. But then once you’re all connected, it quietly, in the background, looks for a way to make a direct connection. If it can’t, no big deal: you just keep relaying through our server. If you can make a direct peer-to-peer connection, it silently shifts your data onto the direct connection. You won’t notice anything except, probably, much faster communication. How do they change the server connection to a P2P connection?
[ "It's pretty tricky and interesting. I'm sure I have some details wrong, but the overview is this:\nThe programs can already talk to each other through Joel's server, so they can exchange information with each other and Joel's server. Further, Joel has their external IP addresses, and they give joel information about their internal IP addresses.\nThey decide to try this hole punch technique. Computer A initiates a TCP connection with Computer B using B's external IP address. It won't go through, but what it does is tell's A's router that it needs to allow incoming packets from B on a given port.\nComputer B does the same thing, but its message gets through to A since A's router opened a port/ip combination that matches what B sent (there's some port magic that happens here - this is non trivial, but doable).\nB's router remembers that B initiated a connection with A on a given port and IP, and so A's packets now flow into B past their router correctly as well.\nSo it's actually pretty straight forward, but the implementation has details, especially regarding how ports are given to new TCP connections, and how NAT routers typically deal with TCP requests and how they map to external ports. These details are the interesting, and difficult, bit.\n-Adam\n", "There is a technique called \"Hole Punching\" that works well with \"Cone\" NAT (Cone is a technical familly of router). That's not an 100% sure technique, today, it works well with UDP on about 80% of the router.\nThere is some implementations of library to realize Hole Punching: STUN (wikipedia)\n", "I believe the simple version is that they drop the server connection and replace it with the P2P connection.\nSomething along the lines of:\n\nMachine1 connects to copilot's servers.\nMachine1 connects to copilot's servers.\nMachine1 connects to copilot's servers.\nMachine2 subsequently connects, and they begin screen sharing.\nMachine2 opens a port intended for Machine1 to connect to.\nMachine1 tries to connect to the now open port on Machine2.\n\nIf this connection is established:\n\nThe connection to copilot's servers is severed.\nData is instead transfered over the direct (P2P) connection between the two machines.\n\n" ]
[ 10, 1, 1 ]
[]
[]
[ "networking", "p2p", "tcp" ]
stackoverflow_0000044205_networking_p2p_tcp.txt
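The relay-first, upgrade-if-possible behaviour described in the question can be sketched in a few lines. This Python sketch assumes the rendezvous server has handed out both the relay address and the peer's public endpoint (both hypothetical) and that the relay connection always succeeds:

    import socket

    def open_channel(relay_addr, peer_addr, timeout=3.0):
        relay = socket.create_connection(relay_addr)  # the 100%-reliable path
        try:
            direct = socket.create_connection(peer_addr, timeout=timeout)
        except OSError:
            return relay   # NAT too strict: keep relaying
        relay.close()      # silently shift traffic onto the direct link
        return direct

    channel = open_channel(("relay.example.com", 9000), ("203.0.113.7", 40001))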
Q: Looking for a simple JavaScript example that updates DOM I am looking for a simple JavaScript example that updates DOM. Any suggestions? A: Here is a short pure-javascript example. Assume you have a div with the id "maincontent". var newnode = document.createTextNode('Here is some text.'); document.getElementById('maincontent').appendChild(newnode); Of course, things are a lot easier (especially when you want to do more complicated things) with jQuery. A: @Ravi Here's working example of your code <html> <head> <title>Font Detect please</title> <script src="prototype.js" type="text/javascript"></script> <script type="text/javascript"> function changeTD() { $('Myanmar3').innerHTML = 'False'; } </script> </head> <body> <table border="1"> <tr><td>Font</td><td>Installed</td></tr> <tr><td>Myanmar3</td><td id="Myanmar3">True</td></tr> </table> <a href="javascript:void(0);" onclick="changeTD();">Click Me</a> </body> </html> You'll notice that I added a little link that you have to click to actually make the change. I thought this might make it easier to try out for real. A: I believe that this tutorial on jQuery has an example that might help you: http://docs.jquery.com/Tutorials:Getting_Started_with_jQuery A: A more specific question might give more helpful results, but here's a simple pair of snippets that shows and later updates text in a status container element. // give some visual cue that you're waiting container.appendChild( document.createTextNode( "Getting stuff from remote server..." ) ); // then later... // update request status container.replaceChild( document.createTextNode( "Done." ), container.firstChild ); A: <html> <head> <title>Font Detect please</title> <script src="prototype.js" type="text/javascript"></script> <script type="text/javascript"> $('Myanmar3').update('False'); $('Myanmar3').innerHTML; </script> </head> <body> <table border="1"> <tr><td>Font</td><td>Installed</td></tr> <tr><td>Myanmar3</td><td id=Myanmar3>True</td></tr> </table> </body> </html> I have a simple code like that above and am trying to change the result True to false via Javascript using Prototype. What might I be doing wrong? Edit: Got it. I didn't call it. :D
Looking for a simple JavaScript example that updates DOM
I am looking for a simple JavaScript example that updates DOM. Any suggestions?
[ "Here is a short pure-javascript example. Assume you have a div with the id \"maincontent\".\nvar newnode = document.createTextNode('Here is some text.');\ndocument.getElementById('maincontent').appendChild(newnode);\n\nOf course, things are a lot easier (especially when you want to do more complicated things) with jQuery.\n", "@Ravi\nHere's working example of your code\n<html>\n <head>\n <title>Font Detect please</title>\n\n <script src=\"prototype.js\" type=\"text/javascript\"></script>\n <script type=\"text/javascript\">\n function changeTD()\n {\n $('Myanmar3').innerHTML = 'False'; \n }\n </script>\n </head>\n <body> \n\n <table border=\"1\">\n <tr><td>Font</td><td>Installed</td></tr>\n <tr><td>Myanmar3</td><td id=\"Myanmar3\">True</td></tr>\n </table> \n\n <a href=\"javascript:void(0);\" onclick=\"changeTD();\">Click Me</a>\n\n </body>\n</html>\n\nYou'll notice that I added a little link that you have to click to actually make the change. I thought this might make it easier to try out for real.\n", "I believe that this tutorial on jQuery has an example that might help you: http://docs.jquery.com/Tutorials:Getting_Started_with_jQuery\n", "A more specific question might give more helpful results, but here's a simple pair of snippets that shows and later updates text in a status container element.\n\n// give some visual cue that you're waiting\ncontainer.appendChild( document.createTextNode( \"Getting stuff from remote server...\" ) );\n\n// then later... \n// update request status \ncontainer.replaceChild( document.createTextNode( \"Done.\" ), container.firstChild );\n\n\n", "<html>\n <head>\n <title>Font Detect please</title>\n\n <script src=\"prototype.js\" type=\"text/javascript\"></script>\n <script type=\"text/javascript\">\n $('Myanmar3').update('False'); \n $('Myanmar3').innerHTML; \n </script>\n </head>\n <body> \n\n <table border=\"1\">\n <tr><td>Font</td><td>Installed</td></tr>\n <tr><td>Myanmar3</td><td id=Myanmar3>True</td></tr>\n </table> \n\n </body>\n</html>\n\nI have a simple code like that above and am trying to change the result True to false via Javascript using Prototype. What might I be doing wrong?\nEdit: Got it. I didn't call it. :D\n" ]
[ 5, 1, 0, 0, 0 ]
[]
[]
[ "dom", "javascript" ]
stackoverflow_0000044190_dom_javascript.txt
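The createTextNode/appendChild calls in the first answer are the standard W3C DOM API, so the same shape can be exercised outside a browser. A Python xml.dom.minidom sketch, purely to illustrate the calls; in a real page you would run the JavaScript versions above:

    from xml.dom.minidom import parseString

    doc = parseString('<div id="maincontent"></div>')
    node = doc.createTextNode("Here is some text.")
    doc.documentElement.appendChild(node)  # same calls the browser exposes
    print(doc.documentElement.toxml())     # <div id="maincontent">Here is some text.</div>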
Q: COTS Workshop Registration System Does anyone have any experience with any COTS systems for managing workshops and the associated registrations, courses, communications, etc.? We have a home-built Perl system that is about 8 years old and is currently embedded as an iframe in a SharePoint portal site (externally facing). Needless to say, it isn't integrated into our site well, looks like crap, needs an overhaul, lacks features, etc. It would be nice to find either a product we can install or a service that provides those features. Thanks! A: You might also look into Moodle - it's a platform developed to supplement classroom teaching (or implement online learning courses) but should have all the major features you listed, and would support your needs reasonably well, as well as enhancing your event with an online component such as slide/presentation distribution only to registered users or users that took a particular class, etc)
COTS Workshop Registration System
Does anyone have any experience with any COTS systems for managing workshops and the associated registrations, courses, communications, etc.? We have a home-built Perl system that is about 8 years old and is currently embedded as an iframe in a SharePoint portal site (externally facing). Needless to say, it isn't integrated into our site well, looks like crap, needs an overhaul, lacks features, etc. It would be nice to find either a product we can install or a service that provides those features. Thanks!
[ "You might also look into Moodle - it's a platform developed to supplement classroom teaching (or implement online learning courses) but should have all the major features you listed, and would support your needs reasonably well, as well as enhancing your event with an online component such as slide/presentation distribution only to registered users or users that took a particular class, etc)\n" ]
[ 2 ]
[]
[]
[ "cots" ]
stackoverflow_0000043960_cots.txt
Q: C# .Net 3.5 Code to replace a file extension using LINQ I've written this very simple function to replace a file extension using LINQ in C#.NET 3.5; however, I have a feeling that there's a more elegant way to do this. (I'm not committed to using LINQ here - just looking for a more elegant approach.) Ideas? private string ReplaceFileExtension(string fileName, string newExtension) { string[] dotSplit = fileName.Split('.'); return String.Join(".", dotSplit.Take(dotSplit.Length - 1).ToArray()) + "." + newExtension; } (I'm aware of the fact that this won't work if the original file name doesn't have a dot.) A: It's very easy... just use System.IO.Path.ChangeExtension
C# .Net 3.5 Code to replace a file extension using LINQ
I've written this very simple function to replace a file extension using LINQ in C#.NET 3.5; however, I have a feeling that there's a more elegant way to do this. (I'm not committed to using LINQ here - just looking for a more elegant approach.) Ideas? private string ReplaceFileExtension(string fileName, string newExtension) { string[] dotSplit = fileName.Split('.'); return String.Join(".", dotSplit.Take(dotSplit.Length - 1).ToArray()) + "." + newExtension; } (I'm aware of the fact that this won't work if the original file name doesn't have a dot.)
[ "It's very easy... just use System.IO.Path.ChangeExtension\n" ]
[ 16 ]
[]
[]
[ ".net_3.5", "c#", "linq" ]
stackoverflow_0000044404_.net_3.5_c#_linq.txt
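The "use the library routine" answer has a direct Python analogue: os.path.splitext already copes with the no-dot case the questioner worries about. A small sketch:

    import os

    def replace_file_extension(file_name, new_extension):
        root, _ = os.path.splitext(file_name)  # 'report.txt' -> ('report', '.txt')
        return root + "." + new_extension

    print(replace_file_extension("report.txt", "bak"))  # report.bak
    print(replace_file_extension("README", "txt"))      # README.txt, no dot required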
Q: BLOB Storage - 100+ GB, MySQL, SQLite, or PostgreSQL + Python I have an idea for a simple application which will monitor a group of folders, index any files it finds. A GUI will allow me to quickly tag new files and move them into a single database for storage and also provide an easy mechanism for querying the db by tag, name, file type and date. At the moment I have about 100+ GB of files on a couple of removable hard drives, the database will be at least that big. If possible I would like to support full text search of the embedded binary and text documents. This will be a single user application. Not trying to start a DB war, but what open source DB is going to work best for me? I am pretty sure SQLite is off the table but I could be wrong. A: Why store the files in the database at all? Simply store your meta-data and a filename. If you need to copy them to a new location for some reason, just do that as a file system copy. Once you remove the file contents then any competent database will be able to handle the meta-data for a few hundred thousand files. A: I'm still researching this option for one of my own projects, but CouchDB may be worth a look. A: My preference would be to store the document with the metadata. One reason is relational integrity. You can't easily move the files or modify the files without the action being brokered by the db. I am sure I can handle these problems but it isn't as clean as I would like and my experience has been that most vendors can handle huge amounts of binary data in the database these days. I guess I was wondering if PostgreSQL or MySQL have any obvious advantages in these areas, I am primarily familiar with Oracle. Anyway, thanks for the response, if the DB knows where the external file is it will also be easy to bring the file in at a later date if I want. Another aspect of the question was if either database is easier to work with when using Python. I'm assuming that is a wash. A: I always hate to answer "don't", but you'd be better off indexing with something like Lucene (PyLucene). That and storing the paths in the database rather than the file contents is almost always recommended. To add to that, none of those database engines will store LOBs in a separate dataspace (they'll be embedded in the table's data space) so any of those engines should perform nearly equally as well (well except SQLite). You need to move to Informix, DB2, SQLServer or others to get that kind of binary object handling. A: Pretty much any of them would work (even though SQLite wasn't meant to be used in a concurrent multi-user environment, which could be a problem...) since you don't want to index the actual contents of the files. The only limiting factor is the maximum "packet" size of the given DB (by packet I'm referring to a query/response). Usually these limits are around 2MB, meaning that your files must be smaller than 2MB. Of course you could increase this limit, but the whole process is rather inefficient, since for example to insert a file you would have to: Read the entire file into memory Transform the file into a query (which usually means hex encoding it - thus doubling the size from the start) Executing the generated query (which itself means - for the database - that it has to parse it) I would go with a simple DB and the associated files stored using a naming convention which makes them easy to find (for example based on the primary key). Of course this design is not "pure", but it will perform much better and is also easier to use.
BLOB Storage - 100+ GB, MySQL, SQLite, or PostgreSQL + Python
I have an idea for a simple application which will monitor a group of folders, index any files it finds. A GUI will allow me to quickly tag new files and move them into a single database for storage and also provide an easy mechanism for querying the db by tag, name, file type and date. At the moment I have about 100+ GB of files on a couple of removable hard drives, the database will be at least that big. If possible I would like to support full text search of the embedded binary and text documents. This will be a single user application. Not trying to start a DB war, but what open source DB is going to work best for me? I am pretty sure SQLite is off the table but I could be wrong.
[ "Why store the files in the database at all? Simply store your meta-data and a filename. If you need to copy them to a new location for some reason, just do that as a file system copy.\nOnce you remove the file contents then any competent database will be able to handle the meta-data for a few hundred thousand files.\n", "I'm still researching this option for one of my own projects, but CouchDB may be worth a look.\n", "My preference would be to store the document with the metadata. One reason, is relational integrity. You can't easily move the files or modify the files without the action being brokered by the db. I am sure I can handle these problems but it isn't as clean as I would like and my experience has been that most vendors can handle huge amounts of binary data in the database these days. I guess I was wondering if PostgreSQL or MySQL have any obvious advantages in these areas, I am primarily familiar with Oracle. Anyway, thanks for the response, if the DB knows where the external file is it will also be easy to bring the file in at a later date if I want. Another aspect of the question was if either database is easier to work with when using Python. I'm assuming that is a wash.\n", "I always hate to answer \"don't\", but you'd be better off indexing with something like Lucene (PyLucene). That and storing the paths in the database rather than the file contents is almost always recommended.\nTo add to that, none of those database engines will store LOBs in a separate dataspace (they'll be embedded in the table's data space) so any of those engines should perfom nearly equally as well (well except sqllite). You need to move to Informix, DB2, SQLServer or others to get that kind of binary object handling.\n", "Pretty much any of them would work (even though SQLLite wasn't meant to be used in a concurrent multi-user environment, which could be a problem...) since you don't want to index the actual contents of the files.\nThe only limiting factor is the maximum \"packet\" size of the given DB (by packet I'm referring to a query/response). Usually these limit are around 2MB, meaning that your files must be smaller than 2MB. Of course you could increase this limit, but the whole process is rather inefficient, since for example to insert a file you would have to:\n\nRead the entire file into memory\nTransform the file in a query (which usually means hex encoding it - thus doubling the size from the start)\nExecuting the generated query (which itself means - for the database - that it has to parse it)\n\nI would go with a simple DB and the associated files stored using a naming convention which makes them easy to find (for example based on the primary key). Of course this design is not \"pure\", but it will perform much better and is also easier to use.\n" ]
[ 2, 2, 1, 0, 0 ]
[ "why are you wasting time emulating something that the filesystem should be able to handle? more storage + grep is your answer.\n" ]
[ -1 ]
[ "blob", "database" ]
stackoverflow_0000044372_blob_database.txt
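A minimal sketch of the metadata-in-the-database, files-on-disk design the first answer recommends, using Python and sqlite3 (the schema, folder and tag values are illustrative; the same shape works against MySQL or PostgreSQL):

    import os
    import sqlite3

    conn = sqlite3.connect("catalog.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS files (
                        id INTEGER PRIMARY KEY,
                        path TEXT UNIQUE,   -- the file itself stays on disk
                        tag TEXT,
                        added TEXT DEFAULT CURRENT_TIMESTAMP)""")

    def index_folder(folder, tag):
        for dirpath, _, names in os.walk(folder):
            for name in names:
                conn.execute("INSERT OR IGNORE INTO files (path, tag) VALUES (?, ?)",
                             (os.path.join(dirpath, name), tag))
        conn.commit()

    index_folder("/mnt/drive1", "unsorted")
    for (path,) in conn.execute("SELECT path FROM files WHERE tag = ?", ("unsorted",)):
        print(path)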
Q: How do you reference a bitmap on the stage in actionscript? How do you reference a bitmap on the stage in flash using actionscript 3? I have a bitmap on the stage in flash and at the end of the movie I would like to swap it out for the next in the sequence before the movie loops. in my library I have 3 images, exported for actionscript, with the class name img1/img2/img3. here is how my layers in flash are set out. layer 5 : mask2:MovieClip layer 4 : img2:Bitmap layer 3 : mask1:MovieClip layer 2 : img1:Bitmap layer 1 : background:Bitmap at the end of the movie I would like to swap img1 with img2, so the movie loops seamlessly, then ideally swap img2 (on layer 4) with img3 and so on until I get to the end of my images. but I can not find out how to reference the images that have already been put on the stage (in design time), anyone have any idea how to do this? The end movie will hopefully load images dynamically from the web server (I have the code for this bit) and display them as well as img1/img2/img3. Any help would be appreciated. EDIT: @81bronco , I tried this but the instance name is greyed out for graphics, it will only allow me to do it with movieclips and buttons. I half got it to work by turning them into movieclips, and clearing the images in the movieclip out before adding a new one (using something simpler than what vanhornRF suggested), but for some odd reason when the mask kicks in the images I cleared out come back for the mask animation. A: To reference something on the stage, you need to give the stage instance a name - not give the symbol in the library a class name. Click on the item on the stage and look at the properties panel. There should be a text entry box just above the entry boxes for the item's dimensions. Enter a name there. Elsewhere in your code, you can then refer to that item on stage by its instance name. A: It should be something like this: imageHolder.removeChild( imageIndex ) or imageHolder.removeChildByName( imageName ) and after that imageHolder.addChild( newImage ) A: I would probably do something like this in your document class for(var i:int=0; i<numChildren; i++){ trace(getChildAt(i),"This is the child at position "+i); } I do this because I still code in the flash IDE and its debugger is so very painful to get working most of the time it's easier to just trace variables out, so you can either use that for loop to print the object names of the items currently on your stage, or use a debugger program to find the objects as well. Now that you have the children and at what index they actually are within the stage, you can reference them by calling getChildAt(int), you can removeChildAt(int), you can addChildAt(displayObject, int) and swapChildrenAt(int, int). The int in these arguments would represent the index position that was returned by your trace statement and the displayObject would obviously just represent anything you wanted to add to the stage or parent DisplayObject. Using those 4 commands you should be able to freely re-arrange any movieclips you have on stage so that they will appear to transition seamlessly. @81bronco One should definitely name your assets on stage if you want to uniquely reference them specifically to avoid any confusion if there ends up being a lot of items on stage A: Hey Re0sless, when you remove those items from the stage do they have any event listeners attached to them, any timers or loaders? Any of those things can make an object stick around in flash's memory and not remove properly. 
Also on top of just removing the item, perhaps try nulling it as well? Sometimes that helps in clearing out its references so it can be properly destroyed. Of course it could also be something silly like removing the item at one instance doesn't remove the item from future frames as well, but I really don't think that's the case.
How do you reference a bitmap on the stage in actionscript?
How do you reference a bitmap on the stage in flash using actionscript 3? I have a bitmap on the stage in flash and at the end of the movie I would like to swap it out for the next in the sequence before the movie loops. in my library I have 3 images, exported for actionscript, with the class name img1/img2/img3. here is how my layers in flash are set out. layer 5 : mask2:MovieClip layer 4 : img2:Bitmap layer 3 : mask1:MovieClip layer 2 : img1:Bitmap layer 1 : background:Bitmap at the end of the movie I would like to swap img1 with img2, so the movie loops seamlessly, then ideally swap img2 (on layer 4) with img3 and so on until I get to the end of my images. but I can not find out how to reference the images that have already been put on the stage (in design time), anyone have any idea how to do this? The end movie will hopefully load images dynamically from the web server (I have the code for this bit) and display them as well as img1/img2/img3. Any help would be appreciated. EDIT: @81bronco , I tried this but the instance name is greyed out for graphics, it will only allow me to do it with movieclips and buttons. I half got it to work by turning them into movieclips, and clearing the images in the movieclip out before adding a new one (using something simpler than what vanhornRF suggested), but for some odd reason when the mask kicks in the images I cleared out come back for the mask animation.
[ "To reference something on the stage, you need to give the stage instance a name - not give the symbol in the library a class name.\nClick on the item on the stage and look at the properties panel. There should be a text entry box just above the entry boxes for the item's dimensions. Enter a name there.\nElsewhere in your code, you can then refer to that item on stage by it's instance name.\n", "It should be something like this:\nimageHolder.removeChild( imageIndex )\n\nor\nimageHolder.removeChildByName( imageName )\n\nand after that\nimageHolder.addChild( newImage )\n\n", "I would probably do something like this in your document class\nfor(var i:int=0; i<numChildren; i++){\n trace(getChildAt(i),\"This is the child at position \"+i);\n}\n\nI do this because I still code in the flash IDE and its debugger is so very painful to get working most of the time it's easier to just trace variables out, so you can either use that for loop to print the object names of the items currently on your stage, or use a debugger program to find the objects as well.\nNow that you have the children and at what index they actually are at within the stage, you can reference them by calling getChildAt(int), you can removeChildAt(int), you can addChildAt(displayObject, int) and swapChildrenAt(int, int). The int in these arguments would represent the index position that was returned by your trace statement and the displayObject would obviously just represent anything you wanted to add to the stage or parent DisplayObject.\nUsing those 4 commands you should be able to freely re-arrange any movieclips you have on stage so that they will appear to transition seamlessly.\n@81bronco One should definitely name your assets on stage if you want to uniquely reference them specifically to avoid any confusion if there ends up being a lot of items on stage\n", "Hey Re0sless, when you remove those items from the stage do they have any event listeners attached to them, any timers or loaders? Any of those things can make an object stick around in flash's memory and not remove properly. Also on top of just removing the item, perhaps try nulling it as well? Sometimes that helps in clearing out its references so it can be properly destroyed.\nOf course it could also be something silly like removing the item at one instance doesn't remove the item from future frames as well, but I really don't think that's the case.\n" ]
[ 3, 1, 0, 0 ]
[]
[]
[ "actionscript_3", "flash" ]
stackoverflow_0000043354_actionscript_3_flash.txt
Q: Outlook Email via a Webpage I have a web application developed with ASP.net and C# that is running on my company's intranet. Because all the users for this application are using Microsoft Outlook without exception, I would like for the application to open up an Outlook message on the client-side. I understand that Office is designed to be run on the desktop and not from a server, however I have no trouble creating a Word or Excel document on the client-side. I have code that instantiates the Outlook object using the Microsoft.Office.Interop.Outlook namespace and Outlook installed on the server. When I try to run the code from the server, I get a DCOM source error message that states "The machine-default permission settings do not grant Local Activation permission for the COM Server application with CLSID {000C101C-0000-0000-C000-000000000046} to the user This security permission can be modified using the Component Services administrative tool." I have modified the permissions using the Component Services tool, but still get this same error. Is there a way to overcome this or is this a fruitless exercise because Outlook cannot be opened on the client side from the server-side code? Mailto will not work due to the extreme length that the emails can reach. Also, the user that sends it needs to add eye-candy to the text for the recipients. A: You cannot open something on the client from server side code. You'd have to use script on the page to do what you're wanting (or something else client-side like ActiveX or embedded .NET or something) Here's a sample Javascript that invokes an Outlook MailItem from a webpage. This could easily be injected into the page from your server-side code so it executes on the client. http://www.codeproject.com/KB/aspnet/EmailUsingJavascript.aspx A: (hint: formatting in your question) I'm not understanding what's wrong with a mailto link or a formmail-type page. A: If everyone in the company uses Outlook, then just using a standard "mailto" link should always open Outlook. It sounds like you're over-engineering this. A: Do you want to open an existing E-Mail or create a new one? Perhaps I misunderstood your question; could you provide a link like: mailto:recipient@email.tld?subject=This%20is%20the%20subject&body=Hello%20there! When the user clicks on that link, a new Outlook-E-Mail will be opened and the: Recipient: recipient@email.tld Subject: This is the subject Body: Hello there! All these fields are already filled from the link. A: I'll just throw this out there cuz it's been asked. Mailto has a lot of disadvantages; mainly size. Since the sender needs to do a lot of formatting on the email text, the html code generated can take up a lot of space that fails when using mailto. thanks for the suggestion though.
Outlook Email via a Webpage
I have a web application developed with ASP.net and C# that is running on my company's intranet. Because all the users of this application are using Microsoft Outlook without exception, I would like for the application to open up an Outlook message on the client-side. I understand that Office is designed to be run on the desktop and not from a server, however I have no trouble creating a Word or Excel document on the client-side. I have code that instantiates the Outlook object using the Microsoft.Office.Interop.Outlook namespace and Outlook installed on the server. When I try to run the code from the server, I get a DCOM source error message that states "The machine-default permission settings do not grant Local Activation permission for the COM Server application with CLSID {000C101C-0000-0000-C000-000000000046} to the user This security permission can be modified using the Component Services administrative tool." I have modified the permissions using the Component Services tool, but still get this same error. Is there a way to overcome this, or is this a fruitless exercise because Outlook cannot be opened on the client side from the server-side code? Mailto will not work due to the extreme length that the emails can reach. Also, the user that sends it needs to add eye-candy to the text for the recipients.
[ "You cannot open something on the client from server-side code. You'd have to use script on the page to do what you're wanting (or something else client-side like ActiveX or embedded .NET or something) \nHere's a sample Javascript that invokes an Outlook MailItem from a webpage. This could easily be injected into the page from your server-side code so it executes on the client. \nhttp://www.codeproject.com/KB/aspnet/EmailUsingJavascript.aspx\n", "(hint: formatting in your question)\nI'm not understanding what's wrong with a mailto link or a formmail-type page.\n", "If everyone in the company uses Outlook, then just using a standard \"mailto\" link should always open Outlook. It sounds like you're over-engineering this.\n", "Do you want to open an existing E-Mail or create a new one?\nPerhaps I misunderstood your question; could you provide a link like: \nmailto:recipient@email.tld?subject=This%20is%20the%20subject&body=Hello%20there!\n\nWhen the user clicks on that link, a new Outlook E-Mail will be opened and the:\n\nRecipient: recipient@email.tld \nSubject: This is the subject\nBody: Hello there! \n\nAll these fields are already filled from the link.\n", "I'll just throw this out there because it's been asked.\nMailto has a lot of disadvantages; mainly size. Since the sender needs to do a lot of formatting on the email text, the generated HTML code can take up more space than a mailto link can handle.\nThanks for the suggestion though.\n" ]
[ 6, 2, 1, 1, 1 ]
[]
[]
[ "asp.net", "c#", "ms_office", "outlook" ]
stackoverflow_0000044421_asp.net_c#_ms_office_outlook.txt
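A hedged JavaScript sketch of the client-side route from the first answer: script emitted by the server drives Outlook through its COM object model. This only works in IE with ActiveX allowed (typical for an intranet zone), and the function name and fields are illustrative assumptions:

function openOutlookMail(to, subject, htmlBody) {
    var outlook = new ActiveXObject("Outlook.Application");
    var mail = outlook.CreateItem(0);   // 0 = olMailItem
    mail.To = to;
    mail.Subject = subject;
    mail.HTMLBody = htmlBody;           // room for the formatted "eye-candy"
    mail.Display();                     // open the compose window for the user
}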
Q: Error handling / error logging in C++ for library/app combo I've encountered the following problem pattern frequently over the years: I'm writing complex code for a package comprised of a standalone application and also a library version of the core that people can use from inside other apps. Both our own app and presumably ones that users create with the core library are likely to be run both in batch mode (off-line, scripted, remote, and/or from command line), as well as interactively. The library/app takes complex and large runtime input and there may be a variety of error-like outputs including severe error messages, input syntax warnings, status messages, and run statistics. Note that these are all incidental outputs, not the primary purpose of the application which would be displayed or saved elsewhere and using different methods. Some of these (probably only the very severe ones) might require a dialog box if run interactively; but it needs to log without stalling for user input if run in batch mode; and if run as a library the client program obviously wants to intercept and/or examine the errors as they occur. It all needs to be cross-platform: Linux, Windows, OSX. And we want the solution to not be weird on any platform. For example, output to stderr is fine for Linux, but won't work on Windows when linked to a GUI app. Client programs of the library may create multiple instances of the main class, and it would be nice if the client app could distinguish a separate error stream with each instance. Let's assume everybody agrees it's good enough for the library methods to log errors via a simple call (error code and/or severity, then printf-like arguments giving an error message). The contentious part is how this is recorded or retrieved by the client app. I've done this many times over the years, and am never fully satisfied with the solution. Furthermore, it's the kind of subproblem that's actually not very important to users (they want to see the error log if something goes wrong, but they don't really care about our technique for implementing it), but the topic gets the programmers fired up and they invariably waste inordinate time on this detail and are never quite happy. Anybody have any wisdom for how to integrate this functionality into a C++ API, or is there an accepted paradigm or a good open source solution (not GPL, please, I'd like a solution I can use in commercial closed apps as well as OSS projects)? A: We use Apache's Log4cxx for logging which isn't perfect, but provides a lot of infrastructure and a consistent approach across projects. I believe it is cross-platform, though we only use it on Windows. It provides for run time configuration via an ini file which allows you to control how the log file is output, and you could write your own appenders if you want specific behaviour (e.g. an error dialog under the UI). If clients of your library also adopt it then it would integrate their logging output into the same log file(s). Differentiation between instances of the main class could be supported using the nested diagnostic context (NDC) feature. A: Log4Cxx should work for you. You need to implement a provider that allows the library user to catch the log output in callbacks. The library would export a function to install the callbacks. That function should, behind the scenes, reconfigure log4cxx to get rid of all appenders and set up the "custom" appender. Of course, the library user has an option to not install the callbacks and use log4cxx as is.
Error handling / error logging in C++ for library/app combo
I've encountered the following problem pattern frequently over the years: I'm writing complex code for a package comprised of a standalone application and also a library version of the core that people can use from inside other apps. Both our own app and presumably ones that users create with the core library are likely to be run both in batch mode (off-line, scripted, remote, and/or from command line), as well as interactively. The library/app takes complex and large runtime input and there may be a variety of error-like outputs including severe error messages, input syntax warnings, status messages, and run statistics. Note that these are all incidental outputs, not the primary purpose of the application which would be displayed or saved elsewhere and using different methods. Some of these (probably only the very severe ones) might require a dialog box if run interactively; but it needs to log without stalling for user input if run in batch mode; and if run as a library the client program obviously wants to intercept and/or examine the errors as they occur. It all needs to be cross-platform: Linux, Windows, OSX. And we want the solution to not be weird on any platform. For example, output to stderr is fine for Linux, but won't work on Windows when linked to a GUI app. Client programs of the library may create multiple instances of the main class, and it would be nice if the client app could distinguish a separate error stream with each instance. Let's assume everybody agrees it's good enough for the library methods to log errors via a simple call (error code and/or severity, then printf-like arguments giving an error message). The contentious part is how this is recorded or retrieved by the client app. I've done this many times over the years, and am never fully satisfied with the solution. Furthermore, it's the kind of subproblem that's actually not very important to users (they want to see the error log if something goes wrong, but they don't really care about our technique for implementing it), but the topic gets the programmers fired up and they invariably waste inordinate time on this detail and are never quite happy. Anybody have any wisdom for how to integrate this functionality into a C++ API, or is there an accepted paradigm or a good open source solution (not GPL, please, I'd like a solution I can use in commercial closed apps as well as OSS projects)?
[ "We use Apache's Log4cxx for logging which isn't perfect, but provides a lot of infrastructure and a consistent approach across projects. I believe it is cross-platform, though we only use it on Windows. \nIt provides for run time configuration via an ini file which allows you to control how the log file is output, and you could write your own appenders if you want specific behaviour (e.g. an error dialog under the UI).\nIf clients of your library also adopt it then it would integrate their logging output into the same log file(s).\nDifferentiation between instances of the main class could be supported using the nested diagnostic context (NDC) feature.\n", "Log4Cxx should work for you. You need to implement a provider that allows the library user to catch the log output in callbacks. The library would export a function to install the callbacks. That function should, behind the scenes, reconfigure log4cxx to get rid of all appenders and set up the \"custom\" appender.\nOf course, the library user has an option to not install the callbacks and use log4cxx as is.\n" ]
[ 1, 1 ]
[]
[]
[ "api", "api_design", "c++", "error_handling", "error_logging" ]
stackoverflow_0000039525_api_api_design_c++_error_handling_error_logging.txt
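A minimal C++ sketch of the callback idea from the second answer, independent of log4cxx; every name here is invented for illustration. The library logs through a caller-installable handler, defaulting to stderr, so a GUI client can route severe messages to a dialog while batch runs keep streaming to a file:

#include <cstdarg>
#include <cstdio>

typedef void (*LogHandler)(int severity, const char* message);

static void defaultHandler(int severity, const char* message) {
    std::fprintf(stderr, "[%d] %s\n", severity, message);
}

static LogHandler g_handler = defaultHandler;

// exported by the library so the client can install its own callback
void setLogHandler(LogHandler handler) {
    g_handler = handler ? handler : defaultHandler;
}

// printf-style logging call used by the library internals
void logMessage(int severity, const char* fmt, ...) {
    char buffer[1024];
    va_list args;
    va_start(args, fmt);
    std::vsnprintf(buffer, sizeof(buffer), fmt, args);
    va_end(args);
    g_handler(severity, buffer);
}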
Q: Does Microsoft ASP.NET Ajax Cause DOM Object Leaks? We've been using "Drip" to try and identify why pages with UpdatePanels in them tend to use a lot of client-side memory. With a page with a regular postback, we are seeing 0 leaks detected by Drip. However, when we add an update panel to the mix, every single DOM object that is inside of the update panel appears to leak (according to Drip). I am not certain if Drip is reliable enough to report these kinds of things - the reported leaks do seem to indicate Drip is modifying the page slightly. Does anyone have any experience with this? Should I panic and stop using Microsoft Ajax? I'm not above doubting Microsoft, but it seems fishy to me that it could be this bad. Also, if you know of a tool that is better than Drip, that would be helpful as well. A: According to ASP.NET AJAX in Action, p. 257 Just before the old markup is replaced with the updated HTML, all the DOM elements in the panel are examined for Microsoft Ajax behaviours or controls attached to them. To avoid memory leaks, the components associated with DOM elements are disposed, and then destroyed when the HTML is replaced. So as far as I know, any ASP.NET Ajax components within the update panel are disposed to prevent memory leaks, but anything else in there will just be replaced with the HTML received. So if you don't have any ASP.NET Ajax components in the target container for the response, it would be basically the same as an inner HTML replacement with any other JS framework / Ajax request, so I would say that it's just how the browser handles this, rather than ASP.NET Ajax causing this. Also, while it may be "leaking", it may be by design, meaning that the browser might not have reclaimed the DOM elements yet and released them. Also, Drip might be causing those to leak, as it is attaching to those DOM elements. A: That's very likely. This was pretty much what we assumed (browser problem, not necessarily Ajax). Our problem is now, with this application being accessed by many people via a Citrix environment, with each page continually creating DOM objects and not releasing them, the Citrix environment starts thrashing after some usage. I've seen similar complaints online (especially where you are dumb enough to access an Ajax website via Citrix), but it doesn't make me feel much better that this is the intended behavior. I'm wondering now if anyone has come up with a clever workaround. We also have a client app where we are using the .NET BrowserControl to access these websites, rather than just straight IE7, so if anyone knows a secret API call (FreeStaleDomObjectsFTW()) we can utilize from that end of the stack, that would be useful as well. A: You could attach to the pageLoading event of the PageRequestManager class, go through the panelsUpdating property, and remove the DOM elements in each.
Does Microsoft ASP.NET Ajax Cause DOM Object Leaks?
We've been using "Drip" to try and identify why pages with UpdatePanels in them tend to use a lot of client-side memory. With a page with a regular postback, we are seeing 0 leaks detected by Drip. However, when we add an update panel to the mix, every single DOM object that is inside of the update panel appears to leak (according to Drip). I am not certain if Drip is reliable enough to report these kinds of things - the reported leaks do seem to indicate Drip is modifying the page slightly. Does anyone have any experience with this? Should I panic and stop using Microsoft Ajax? I'm not above doubting Microsoft, but it seems fishy to me that it could be this bad. Also, if you know of a tool that is better than Drip, that would be helpful as well.
[ "According to ASP.NET AJAX in Action, p. 257\n\nJust before the old markup is replaced with the updated HTML, all the DOM elements in the panel are examined for Microsoft Ajax behaviours or controls attached to them. To avoid memory leaks, the components associated with DOM elements are disposed, and then destroyed when the HTML is replaced.\n\nSo as far as I know, any ASP.NET Ajax components within the update panel are disposed to prevent memory leaks, but anything else in there will just be replaced with the HTML received.\nSo if you don't have any ASP.NET Ajax components in the target container for the response, it would be basically the same as an inner HTML replacement with any other JS framework / Ajax request, so I would say that it's just how the browser handles this, rather than ASP.NET Ajax causing this.\nAlso, while it may be \"leaking\", it may be by design, meaning that the browser might not have reclaimed the DOM elements yet and released them. Also, Drip might be causing those to leak, as it is attaching to those DOM elements.\n", "That's very likely. This was pretty much what we assumed (browser problem, not necessarily Ajax).\nOur problem is now, with this application being accessed by many people via a Citrix environment, with each page continually creating DOM objects and not releasing them, the Citrix environment starts thrashing after some usage. I've seen similar complaints online (especially where you are dumb enough to access an Ajax website via Citrix), but it doesn't make me feel much better that this is the intended behavior.\nI'm wondering now if anyone has come up with a clever workaround. We also have a client app where we are using the .NET BrowserControl to access these websites, rather than just straight IE7, so if anyone knows a secret API call (FreeStaleDomObjectsFTW()) we can utilize from that end of the stack, that would be useful as well.\n", "You could attach to the pageLoading event of the PageRequestManager class, go through the panelsUpdating property, and remove the DOM elements in each.\n" ]
[ 3, 0, 0 ]
[]
[]
[ "asp.net", "asp.net_ajax", "dom", "memory_leaks" ]
stackoverflow_0000044080_asp.net_asp.net_ajax_dom_memory_leaks.txt
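A hedged JavaScript sketch of the pageLoading suggestion in the last answer (ASP.NET AJAX client library; the wholesale node removal is an assumption, and get_panelsUpdating should be verified against your library version before relying on it):

Sys.WebForms.PageRequestManager.getInstance().add_pageLoading(function (sender, args) {
    var panels = args.get_panelsUpdating();   // panels about to receive new HTML
    for (var i = 0; i < panels.length; i++) {
        var panel = panels[i];
        while (panel.firstChild) {
            panel.removeChild(panel.firstChild);   // explicitly detach old DOM nodes
        }
    }
});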
Q: What's the best way to insert/update/delete multiple records in a database from an application? Given a small set of entities (say, 10 or fewer) to insert, delete, or update in an application, what is the best way to perform the necessary database operations? Should multiple queries be issued, one for each entity to be affected? Or should some sort of XML construct that can be parsed by the database engine be used, so that only one command needs to be issued? I ask this because a common pattern at my current shop seems to be to format up an XML document containing all the changes, then send that string to the database to be processed by the database engine's XML functionality. However, using XML in this way seems rather cumbersome given the simple nature of the task to be performed. A: You didn't mention what database you are using, but in SQL Server 2008, you can use table variables to pass complex data like this to a stored procedure. Parse it there and perform your operations. For more info, see Scott Allen's article on OdeToCode. A: It depends on how many you need to do, and how fast the operations need to run. If it's only a few, then doing them one at a time with whatever mechanism you have for doing single operations will work fine. If you need to do thousands or more, and it needs to run quickly, you should re-use the connection and command, changing the arguments for the parameters to the query during each iteration. This will minimize resource usage. You don't want to re-create the connection and command for each operation. A: Most databases support BULK UPDATE or BULK DELETE operations. A: From a "business entity" design standpoint, if you are doing different operations on each of a set of entities, you should have each entity handle its own persistence. If there are common batch activities (like "delete all older than x date", for instance), I would write a static method on a collection class that executes the batch update or delete. I generally let entities handle their own inserts atomically. A: The answer depends on the volume of data you're talking about. If you've got a fairly small set of records in memory that you need to synchronise back to disk then multiple queries is probably appropriate. If it's a larger set of data you need to look at other options. I recently had to implement a mechanism where an external data feed gave me ~17,000 rows of data that I needed to synchronise with a local table. The solution I chose there was to load the external data into a staging table and call a stored proc that did the synchronisation completely within the database.
What's the best way to insert/update/delete multiple records in a database from an application?
Given a small set of entities (say, 10 or fewer) to insert, delete, or update in an application, what is the best way to perform the necessary database operations? Should multiple queries be issued, one for each entity to be affected? Or should some sort of XML construct that can be parsed by the database engine be used, so that only one command needs to be issued? I ask this because a common pattern at my current shop seems to be to format up an XML document containing all the changes, then send that string to the database to be processed by the database engine's XML functionality. However, using XML in this way seems rather cumbersome given the simple nature of the task to be performed.
[ "You didn't mention what database you are using, but in SQL Server 2008, you can use table variables to pass complex data like this to a stored procedure. Parse it there and perform your operations. For more info, see Scott Allen's article on OdeToCode.\n", "It depends on how many you need to do, and how fast the operations need to run. If it's only a few, then doing them one at a time with whatever mechanism you have for doing single operations will work fine.\nIf you need to do thousands or more, and it needs to run quickly, you should re-use the connection and command, changing the arguments for the parameters to the query during each iteration. This will minimize resource usage. You don't want to re-create the connection and command for each operation.\n", "Most databases support BULK UPDATE or BULK DELETE operations. \n", "From a \"business entity\" design standpoint, if you are doing different operations on each of a set of entities, you should have each entity handle its own persistence.\nIf there are common batch activities (like \"delete all older than x date\", for instance), I would write a static method on a collection class that executes the batch update or delete. I generally let entities handle their own inserts atomically.\n", "The answer depends on the volume of data you're talking about. If you've got a fairly small set of records in memory that you need to synchronise back to disk then multiple queries is probably appropriate. If it's a larger set of data you need to look at other options.\nI recently had to implement a mechanism where an external data feed gave me ~17,000 rows of data that I needed to synchronise with a local table. The solution I chose there was to load the external data into a staging table and call a stored proc that did the synchronisation completely within the database.\n" ]
[ 1, 1, 0, 0, 0 ]
[]
[]
[ "database", "sql" ]
stackoverflow_0000044469_database_sql.txt
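A hedged ADO.NET (C#) sketch of the reuse-the-connection-and-command advice above; the connection string, table, column names, and the items collection are all illustrative assumptions:

using System.Data;
using System.Data.SqlClient;

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("UPDATE Items SET Name = @name WHERE Id = @id", conn))
{
    cmd.Parameters.Add("@id", SqlDbType.Int);
    cmd.Parameters.Add("@name", SqlDbType.NVarChar, 100);
    conn.Open();
    foreach (var item in items)                 // hypothetical in-memory entities
    {
        cmd.Parameters["@id"].Value = item.Id;
        cmd.Parameters["@name"].Value = item.Name;
        cmd.ExecuteNonQuery();                  // same command object, new arguments
    }
}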
Q: Running DB Migrations from application I have a rails application where each user has a separate database. (taking Joel Spolsky's advice on this). I want to run DB migrations from the rails application to create a new database and tables for this user. What is the easiest way to do this? Maybe the db migration is not the best for this type of thing. Thanks! It would be nice if it could be a completely automated process. The following process would be ideal: A user signs up on our site to use this web app. Migrations are run to create this user's database and get tables set up correctly. Is there a way of calling a rake task from a ruby application? A: We use separate configuration files for each user. So in the config/ dir we would have roo.database.yml which would connect to my personal database, and I would copy that over the database.yml file that is used by rails. We were thinking of expanding the rails Rakefile so we could specify the developer as an environment variable, which would then select a specific database configuration, allowing us to only have one database.yml file. We haven't done this though as the above method works well enough. A: To answer part of your question, here's how you'd run a rake task from inside Rails code: require 'rake' load 'path/to/task.rake' Rake::Task['foo:bar:baz'].invoke Mind you, I have no idea how (or why) you could have one database per user. A: Actually I have discovered a good way to run DB migrations from an application: ActiveRecord::Migrator.migrate("db/migrate/")
Running DB Migrations from application
I have a rails application where each user has a separate database. (taking Joel Spolsky's advice on this). I want to run DB migrations from the rails application to create a new database and tables for this user. What is the easiest way to do this? Maybe the db migration is not the best for this type of thing. Thanks! It would be nice if it could be a completely automated process. The following process would be ideal: A user signs up on our site to use this web app. Migrations are run to create this user's database and get tables set up correctly. Is there a way of calling a rake task from a ruby application?
[ "We use separate configuration files for each user. So in the config/ dir we would have roo.database.yml which would connect to my personal database, and I would copy that over the database.yml file that is used by rails.\nWe were thinking of expanding the rails Rakefile so we could specify the developer as an environment variable, which would then select a specific database configuration, allowing us to only have one database.yml file. We haven't done this though as the above method works well enough.\n", "To answer part of your question, here's how you'd run a rake task from inside Rails code:\nrequire 'rake'\nload 'path/to/task.rake'\n\nRake::Task['foo:bar:baz'].invoke\n\nMind you, I have no idea how (or why) you could have one database per user.\n", "Actually I have discovered a good way to run DB migrations from an application:\n\nActiveRecord::Migrator.migrate(\"db/migrate/\")\n\n" ]
[ 1, 1, 0 ]
[]
[]
[ "ruby_on_rails" ]
stackoverflow_0000038922_ruby_on_rails.txt
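A hedged Ruby sketch gluing the pieces above together at signup time; the adapter, credentials, and per-user naming scheme are all illustrative assumptions, and ActiveRecord::Migrator.migrate is the call from the last answer:

require 'active_record'

def provision_database_for(user)
  db_name = "app_user_#{user.id}"   # hypothetical naming scheme
  ActiveRecord::Base.connection.execute("CREATE DATABASE #{db_name}")
  ActiveRecord::Base.establish_connection(
    :adapter  => 'mysql',
    :database => db_name,
    :username => 'app',
    :password => 'secret'
  )
  ActiveRecord::Migrator.migrate('db/migrate/')   # run every pending migration
end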
Q: How do you generate a random number in C#? I would like to generate a random floating point number between 2 values. What is the best way to do this in C#? A: The only thing I'd add to Eric's response is an explanation; I feel that knowledge of why code works is better than knowing what code works. The explanation is this: let's say you want a number between 2.5 and 4.5. The range is 2.0 (4.5 - 2.5). NextDouble only returns a number between 0 and 1.0, but if you multiply this by the range you will get a number between 0 and range. So, this would give us random doubles between 0.0 and 2.0: rng.NextDouble() * 2.0 But, we want them between 2.5 and 4.5! How do we do this? Add the smallest number, 2.5: 2.5 + rng.NextDouble() * 2.0 Now, we get a number between 0.0 and 2.0; if you add 2.5 to each of these values we see that the range is now between 2.5 and 4.5. At first I thought that it mattered if b > a or a > b, but if you work it out both ways you'll find it works out identically so long as you don't mess up the order of the variables used. I like to express it with longer variable names so I don't get mixed up: double NextDouble(Random rng, double min, double max) { return min + (rng.NextDouble() * (max - min)); } A: System.Random r = new System.Random(); double rnd( double a, double b ) { return a + r.NextDouble()*(b-a); } A: // generate a random number starting with 5 and less than 15 Random r = new Random(); int num = r.Next(5, 15); For doubles you can replace Next with NextDouble A: How random? If you can deal with pseudo-random then simply: Random randNum = new Random(); double num = Min + randNum.NextDouble() * (Max - Min); If you want a "better" random number, then you probably should look at the Mersenne Twister algorithm. Plenty of people have already implemented it for you though A: Here is a snippet of how to get Cryptographically safe random numbers: This will fill in the 8 bytes with a cryptographically strong sequence of random values. byte[] salt = new byte[8]; RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider(); rng.GetBytes(salt); For more details see How Random is your Random??" (inspired by a CodingHorror article on deck shuffling) A: For an explanation of why Longhorn has been downmodded so much: http://msdn.microsoft.com/en-us/magazine/cc163367.aspx Look for the implementation of NextDouble and the explanation of what is a random double. That link is also a good example of how to use cryptographic random numbers (like Sameer mentioned) only with actual useful outputs instead of a bit stream.
How do you generate a random number in C#?
I would like to generate a random floating point number between 2 values. What is the best way to do this in C#?
[ "The only thing I'd add to Eric's response is an explanation; I feel that knowledge of why code works is better than knowing what code works.\nThe explanation is this: let's say you want a number between 2.5 and 4.5. The range is 2.0 (4.5 - 2.5). NextDouble only returns a number between 0 and 1.0, but if you multiply this by the range you will get a number between 0 and range.\nSo, this would give us random doubles between 0.0 and 2.0:\nrng.NextDouble() * 2.0\nBut, we want them between 2.5 and 4.5! How do we do this? Add the smallest number, 2.5:\n2.5 + rng.NextDouble() * 2.0\nNow, we get a number between 0.0 and 2.0; if you add 2.5 to each of these values we see that the range is now between 2.5 and 4.5.\nAt first I thought that it mattered if b > a or a > b, but if you work it out both ways you'll find it works out identically so long as you don't mess up the order of the variables used. I like to express it with longer variable names so I don't get mixed up:\ndouble NextDouble(Random rng, double min, double max)\n{\n return min + (rng.NextDouble() * (max - min));\n}\n", "System.Random r = new System.Random();\n\ndouble rnd( double a, double b )\n{\n return a + r.NextDouble()*(b-a);\n}\n\n", "// generate a random number starting with 5 and less than 15\nRandom r = new Random();\nint num = r.Next(5, 15); \n\nFor doubles you can replace Next with NextDouble\n", "How random? If you can deal with pseudo-random then simply:\nRandom randNum = new Random();\ndouble num = Min + randNum.NextDouble() * (Max - Min);\n\nIf you want a \"better\" random number, then you probably should look at the Mersenne Twister algorithm. Plenty of people have already implemented it for you though\n", "Here is a snippet of how to get Cryptographically safe random numbers:\nThis will fill in the 8 bytes with a cryptographically strong sequence of random values.\nbyte[] salt = new byte[8];\nRNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();\nrng.GetBytes(salt);\n\nFor more details see How Random is your Random??\" (inspired by a CodingHorror article on deck shuffling)\n", "For an explanation of why Longhorn has been downmodded so much: http://msdn.microsoft.com/en-us/magazine/cc163367.aspx Look for the implementation of NextDouble and the explanation of what is a random double.\nThat link is also a good example of how to use cryptographic random numbers (like Sameer mentioned) only with actual useful outputs instead of a bit stream.\n" ]
[ 60, 20, 2, 1, 1, 1 ]
[]
[]
[ "c#", "floating_point", "random" ]
stackoverflow_0000044408_c#_floating_point_random.txt
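A minimal C# sketch pulling the top answers together; note the single shared Random instance, since re-creating one per call can repeat values because the default seed is time-based:

using System;

class RandomDemo
{
    static readonly Random Rng = new Random();

    // random double in [min, max), per the min + NextDouble() * (max - min) formula
    static double NextDouble(double min, double max)
    {
        return min + Rng.NextDouble() * (max - min);
    }

    static void Main()
    {
        Console.WriteLine(NextDouble(2.5, 4.5));
    }
}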
Q: catching button clicks in javascript without server interaction I've got a sign up form that requires the user to enter their email and password, both are in two separate text boxes. I want to provide a button that the user can click so that the password (which is masked) will appear in a popup when the user clicks the button. Currently my JavaScript code for this is as follows: function toggleShowPassword() { var button = $get('PASSWORD_TEXTBOX_ID'); var password; if (button) { password = button.value; alert(password); button.value = password; } } The problem is that every time the user clicks the button, the password is cleared in both Firefox and IE. I want them to be able to see their password in clear text to verify without having to retype their password. My questions are: Why does the password field keep getting reset with each button click? How can I make it so the password field is NOT cleared once the user has seen his/her password in clear text? A: I would assume that the browser has some issue with the script attempting to set the value of a password field: button.value = password; This line of code has no real purpose. password.value is not affected in the previous lines where you are reading the value and using it in the alert(). This should be a simpler version of your code: function toggleShowPassword() { var button = $get('PASSWORD_TEXTBOX_ID'); if (button) { alert(button.value); } } Edit: actually I just did a quick test, and Firefox has no problem setting the password field's value with code such as button.value = "blah". So it doesn't seem like this would be the case ... I would check if your ASP.NET code is causing a postback as others have suggested. A: It sounds like you're doing a request to the server on each click, the password box being reset in each page load is typical behavior of the browsers. A: You didn't say you were using ASP.NET, but... By design, ASP.NET clears during postback the value of TextBox controls whose Mode is Password. I work around this in a subclass with the following code: // If the TextMode is "password", the Text property won't work if ( TextMode == System.Web.UI.WebControls.TextBoxMode.Password ) Attributes[ "value" ] = stringValue; A: If you don't want the button to submit the form, then be sure it has type 'button' rather than 'submit'. For example, you might do something like this: <input type="button" value="Show My Password" onclick="toggleShowPassword()"/> A: In your HTML: <input type="button" onclick="toggleShowPassword();"> You need to use "button" rather than "submit" to prevent your form from posting. A: I did a quick example of a working version: <html> <head> <script type="text/javascript" src="prototype.js"></script> <script type="text/javascript"> function toggleShowPassword() { var textBox = $('PasswordText'); if (textBox) { alert(textBox.value); } } </script> </head> <body> <input type="password" id="PasswordText" /><input type="button" onclick="toggleShowPassword();" value="Show Password" /> </body> </html> The key is that the input is of type button and not submit. I used the prototype library for retrieving the element by ID. A: You do not need to do button.value = password; since reading the value does not change it. I'm not sure why it's being cleared, maybe JavaScript does not allow password field values to be modified. A: hah! The answer is here: http://forums.asp.net/p/1067527/1548528.aspx I figured out the solution... the fix was simple: change OnClientClick="myOnClick()" to OnClientClick="return myOnClick()" Here's the fully corrected code... function myOnClick() { //perform some other actions... return false; }
catching button clicks in javascript without server interaction
I've got a sign up form that requires the user to enter their email and password, both are in two separate text boxes. I want to provide a button that the user can click so that the password (which is masked) will appear in a popup when the user clicks the button. Currently my JavaScript code for this is as follows: function toggleShowPassword() { var button = $get('PASSWORD_TEXTBOX_ID'); var password; if (button) { password = button.value; alert(password); button.value = password; } } The problem is that every time the user clicks the button, the password is cleared in both Firefox and IE. I want them to be able to see their password in clear text to verify without having to retype their password. My questions are: Why does the password field keep getting reset with each button click? How can I make it so the password field is NOT cleared once the user has seen his/her password in clear text?
[ "I would assume that the browser has some issue with the script attempting to set the value of a password field:\nbutton.value = password;\n\nThis line of code has no real purpose. password.value is not affected in the previous lines where you are reading the value and using it in the alert().\nThis should be a simpler version of your code:\nfunction toggleShowPassword() { \n var button = $get('PASSWORD_TEXTBOX_ID');\n if (button)\n {\n alert(button.value);\n } \n\n}\nEdit: actually I just did a quick test, and Firefox has no problem setting the password field's value with code such as button.value = \"blah\". So it doesn't seem like this would be the case ... I would check if your ASP.NET code is causing a postback as others have suggested.\n", "It sounds like you're doing a request to the server on each click, the password box being reset in each page load is typical behavior of the browsers.\n", "You didn't say you were using ASP.NET, but...\nBy design, ASP.NET clears during postback the value of TextBox controls whose Mode is Password. I work around this in a subclass with the following code:\n\n// If the TextMode is \"password\", the Text property won't work\nif ( TextMode == System.Web.UI.WebControls.TextBoxMode.Password )\n Attributes[ \"value\" ] = stringValue;\n\n", "If you don't want the button to submit the form, then be sure it has type 'button' rather than 'submit'. For example, you might do something like this:\n<input type=\"button\" value=\"Show My Password\" onclick=\"toggleShowPassword()\"/>\n\n", "In your HTML:\n<input type=\"button\" onclick=\"toggleShowPassword();\">\n\nYou need to use \"button\" rather than \"submit\" to prevent your form from posting.\n", "I did a quick example of a working version:\n<html>\n <head>\n <script type=\"text/javascript\" src=\"prototype.js\"></script>\n <script type=\"text/javascript\">\n function toggleShowPassword() { \n var textBox = $('PasswordText');\n if (textBox)\n {\n alert(textBox.value); \n } \n }\n </script>\n </head>\n <body>\n <input type=\"password\" id=\"PasswordText\" /><input type=\"button\" onclick=\"toggleShowPassword();\" value=\"Show Password\" />\n </body>\n</html>\n\nThe key is that the input is of type button and not submit. I used the prototype library for retrieving the element by ID.\n", "You do not need to do button.value = password; since reading the value does not change it. I'm not sure why it's being cleared, maybe JavaScript does not allow password field values to be modified.\n", "hah!\nThe answer is here:\nhttp://forums.asp.net/p/1067527/1548528.aspx\nI figured out the solution... the fix was simple: change\n OnClientClick=\"myOnClick()\"\n\nto\n OnClientClick=\"return myOnClick()\"\n\nHere's the fully corrected code...\nfunction myOnClick() {\n //perform some other actions...\n return false;\n}\n" ]
[ 1, 1, 1, 1, 1, 1, 0, 0 ]
[]
[]
[ "ajax", "javascript" ]
stackoverflow_0000044401_ajax_javascript.txt
Q: Which way do you prefer to create your forms in MVC? Which way do you prefer to create your forms in MVC? <% Html.Form() { %> <% } %> Or <form action="<%= Url.Action("ManageImage", "UserAccount") %>" method="post"> </form> I understand that Html.Form() as of PR5 now just uses the URL provided by the request. However something about that doesn't sit well with me, especially since I will be getting all the baggage of any querystrings that are included. What is your take? A: The second way, definitely. The first way is programmer-centric, which is not what the V part of MVC is about. The second way is more designer centric, only binding to the model where it is necessary, leaving the HTML as natural as possible. A: On the whole, I think I'm kinda old-school as I prefer to roll my own HTML elements. I also prefer a view engine like NHaml, which makes writing HTML almost an order of magnitude simpler. A: I have to agree with both of you; I really do not like this simplistic WebForms style that seems to be integrating its way into MVC. This stuff almost seems like it should be a 3rd party library or at the very least an extensions library that can be included if needed or wanted. A: I am totally of the opinion that old-school HTML is the way to go; that is what designers use. I don't like to include too much code-centric syntax for this reason. I treat the web form view engine like a third party library, because I replaced it with a different view engine. If you do not like the way the web form view model works or the direction it is going, you can always go a different route. That is one of the main reasons I love ASP.NET MVC. A: I agree with Andrew Peters, DRY. It should also be pointed out that you can specify your controller, action, and params to the .Form() helper and if they fit into your routing rules then no query string parameters will be used. I also understand what Will was saying about the V in MVC. In my opinion I do not think it is a problem to put code in the view as long as it is for the view. It is really easy to cross the line between controller and view if you are not careful. Personally I can not stand to use C# as a template engine without my eyes bleeding or getting the urge to murder someone. This helps me keep my logic separated, controller logic in C#, view logic in Brail. A: The reason for using helpers is that they allow you to encapsulate common patterns in a consistent and DRY fashion. Think of them as a way of refactoring views to remove duplication just as you would with regular code. For example, I blogged about some RESTful NHaml helpers that can build URLs based on a model.
Which way do you prefer to create your forms in MVC?
Which way do you prefer to create your forms in MVC? <% Html.Form() { %> <% } %> Or <form action="<%= Url.Action("ManageImage", "UserAccount") %>" method="post"> </form> I understand that Html.Form() as of PR5 now just uses the URL provided by the request. However something about that doesn't sit well with me, especially since I will be getting all the baggage of any querystrings that are included. What is your take?
[ "The second way, definitely. The first way is programmer-centric, which is not what the V part of MVC is about. The second way is more designer centric, only binding to the model where it is necessary, leaving the HTML as natural as possible.\n", "On the whole, I think I'm kinda old-school as I prefer to roll my own HTML elements.\nI also prefer a view engine like NHaml, which makes writing HTML almost an order of magnitude simpler.\n", "I have to agree with both of you; I really do not like this simplistic WebForms style that seems to be integrating its way into MVC. This stuff almost seems like it should be a 3rd party library or at the very least an extensions library that can be included if needed or wanted.\n", "I am totally of the opinion that old-school HTML is the way to go; that is what designers use. I don't like to include too much code-centric syntax for this reason. I treat the web form view engine like a third party library, because I replaced it with a different view engine. If you do not like the way the web form view model works or the direction it is going, you can always go a different route. That is one of the main reasons I love ASP.NET MVC.\n", "I agree with Andrew Peters, DRY. It should also be pointed out that you can specify your controller, action, and params to the .Form() helper and if they fit into your routing rules then no query string parameters will be used.\nI also understand what Will was saying about the V in MVC. In my opinion I do not think it is a problem to put code in the view as long as it is for the view. It is really easy to cross the line between controller and view if you are not careful. Personally I can not stand to use C# as a template engine without my eyes bleeding or getting the urge to murder someone. This helps me keep my logic separated, controller logic in C#, view logic in Brail.\n", "The reason for using helpers is that they allow you to encapsulate common patterns in a consistent and DRY fashion. Think of them as a way of refactoring views to remove duplication just as you would with regular code. \nFor example, I blogged about some RESTful NHaml helpers that can build URLs based on a model.\n" ]
[ 7, 3, 1, 1, 1, 0 ]
[]
[]
[ "asp.net_mvc", "forms", "model_view_controller" ]
stackoverflow_0000042282_asp.net_mvc_forms_model_view_controller.txt
Q: Is it possible to craft a glob that matches files in the current directory and all subdirectories? For this directory structure: . |-- README.txt |-- firstlevel.rb `-- lib |-- models | |-- foo | | `-- fourthlevel.rb | `-- thirdlevel.rb `-- secondlevel.rb 3 directories, 5 files The glob would match: firstlevel.rb lib/secondlevel.rb lib/models/thirdlevel.rb lib/models/foo/fourthlevel.rb A: Apologies if I've missed the real point of the question but, if I was using sh/bash/etc., then I would probably use find to do the job: find . -name '*.rb' -type f Globs can get a bit nasty when used from within a script and find is much more flexible. A: In zsh, **/*.rb works A: In Ruby itself: Dir.glob('**/*.rb') perhaps? A: Looks like it can't be done from bash If you using zsh then ls **/*.rb will produce the correct result. Otherwise you can hijack the ruby interpreter (and probably those of other languages) ruby -e "puts Dir.glob('**/*.rb')" Thanks to Chris and Gaius for your answers.
Is it possible to craft a glob that matches files in the current directory and all subdirectories?
For this directory structure: . |-- README.txt |-- firstlevel.rb `-- lib |-- models | |-- foo | | `-- fourthlevel.rb | `-- thirdlevel.rb `-- secondlevel.rb 3 directories, 5 files The glob would match: firstlevel.rb lib/secondlevel.rb lib/models/thirdlevel.rb lib/models/foo/fourthlevel.rb
[ "Apologies if I've missed the real point of the question but, if I was using sh/bash/etc., then I would probably use find to do the job:\nfind . -name '*.rb' -type f\n\nGlobs can get a bit nasty when used from within a script and find is much more flexible.\n", "In zsh, **/*.rb works\n", "In Ruby itself:\nDir.glob('**/*.rb') perhaps?\n\n", "Looks like it can't be done from bash\nIf you using zsh then\nls **/*.rb\n\nwill produce the correct result. \nOtherwise you can hijack the ruby interpreter (and probably those of other languages) \nruby -e \"puts Dir.glob('**/*.rb')\"\n\nThanks to Chris and Gaius for your answers.\n" ]
[ 4, 2, 1, 0 ]
[]
[]
[ "glob", "shell" ]
stackoverflow_0000044481_glob_shell.txt
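A short Ruby sketch of the recursive glob from the answers above; the ** pattern descends into subdirectories at any depth:

Dir.glob('**/*.rb').each { |path| puts path }

From a plain bash shell, the equivalent is the find command shown in the first answer: find . -name '*.rb' -type f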
Q: preferred way to implement visitor pattern in dynamic languages? As an exercise for myself, I was translating a sample program into various languages. Starting in C#, I had a visitor-pattern interface like so: interface Visitor { void Accept(Bedroom x); void Accept(Bathroom x); void Accept(Kitchen x); void Accept(LivingRoom x); } Moving to ruby (or python, or whatever), I no longer get dispatch from the compiler based on type. Should I do the dispatch in the visitor itself? class Cleaner def accept(x) acceptBedroom(x) if Bedroom === x acceptBathroom(x) if Bathroom === x acceptKitchen(x) if Kitchen===x acceptLivingRoom(x) if LivingRoom===x end ... Or should I do the dispatch in the different specializations of the room: class Bathroom<Room def initialize(name) super(name) end def accept(visitor) visitor.acceptBathroom(self) end end Or is there another preferred idiom that is used in dynamic languages? A: My recommendation is to use the former approach. There are pluses and minuses for each approach. The former is harder to maintain as the number of Room types grows; the latter is harder as the number of Cleaner types grows. In Ruby, you could try def accept(x) send "accept#{x.class}".to_sym, x end PS: not all dynamically typed languages are unable to do dispatch based on type; some can infer type, or failing that, can use forced casting to pick the proper method among the overloaded options. A: I would go with the second version. The first one looks like the kind of code smell that Visitor is supposed to solve: long if-else-if or switch-case statements.
preferred way to implement visitor pattern in dynamic languages?
As an exercise for myself, I was translating a sample program into various languages. Starting in C#, I had a visitor-pattern interface like so: interface Visitor { void Accept(Bedroom x); void Accept(Bathroom x); void Accept(Kitchen x); void Accept(LivingRoom x); } Moving to ruby (or python, or whatever), I no longer get dispatch from the compiler based on type. Should I do the dispatch in the visitor itself? class Cleaner def accept(x) acceptBedroom(x) if Bedroom === x acceptBathroom(x) if Bathroom === x acceptKitchen(x) if Kitchen===x acceptLivingRoom(x) if LivingRoom===x end ... Or should I do the dispatch in the different specializations of the room: class Bathroom<Room def initialize(name) super(name) end def accept(visitor) visitor.acceptBathroom(self) end end Or is there another preferred idiom that is used in dynamic languages?
[ "My recommendation is to use the former approach. There are pluses and minuses for each approach. The former is harder to maintain as the number of Room types grows; the latter is harder as the number of Cleaner types grows.\nIn Ruby, you could try\ndef accept(x)\n send \"accept#{x.class}\".to_sym, x\nend\n\nPS: not all dynamically typed languages are unable to do dispatch based on type; some can infer type, or failing that, can use forced casting to pick the proper method among the overloaded options.\n", "I would go with the second version. The first one looks like the kind of code smell that Visitor is supposed to solve: long if-else-if or switch-case statements.\n" ]
[ 3, 3 ]
[]
[]
[ "c#", "design_patterns", "dynamic_languages", "ruby" ]
stackoverflow_0000044337_c#_design_patterns_dynamic_languages_ruby.txt
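A hedged Ruby sketch expanding the send-based dispatch from the first answer; the Bathroom class and the body of acceptBathroom are illustrative assumptions:

class Bathroom; end

class Cleaner
  def accept(x)
    send("accept#{x.class}", x)   # dispatches to acceptBathroom, acceptKitchen, ...
  end

  def acceptBathroom(x)
    puts "cleaning the bathroom"
  end
end

Cleaner.new.accept(Bathroom.new)   # => cleaning the bathroom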
Q: How to set up a robots.txt which only allows the default page of a site Say I have a site on http://example.com. I would really like to allow bots to see the home page, but any other page needs to be blocked as it is pointless to spider. In other words http://example.com & http://example.com/ should be allowed, but http://example.com/anything and http://example.com/someendpoint.aspx should be blocked. Further it would be great if I can allow certain query strings to pass through to the home page: http://example.com?okparam=true but not http://example.com?anythingbutokparam=true A: So after some research, here is what I found - a solution acceptable by the major search providers: Google, Yahoo & MSN (I could only find a validator here): User-Agent: * Disallow: /* Allow: /?okparam= Allow: /$ The trick is using the $ to mark the end of URL. A: Google's Webmaster Tools report that disallow always takes precedence over allow, so there's no easy way of doing this in a robots.txt file. You could accomplish this by putting a noindex,nofollow META tag in the HTML of every page but the home page. A: Basic robots.txt: Disallow: /subdir/ I don't think that you can create an expression saying 'everything but the root'; you have to fill in all subdirectories. The query string limitation is also not possible from robots.txt. You have to do it in the background code (the processing part), or maybe with server rewrite-rules. A: Disallow: * Allow: index.ext If I remember correctly the second clause should override the first. A: As far as I know, not all the crawlers support the Allow tag. One possible solution might be putting everything except the home page into another folder and disallowing that folder.
How to set up a robots.txt which only allows the default page of a site
Say I have a site on http://example.com. I would really like to allow bots to see the home page, but any other page needs to be blocked as it is pointless to spider. In other words http://example.com & http://example.com/ should be allowed, but http://example.com/anything and http://example.com/someendpoint.aspx should be blocked. Further it would be great if I can allow certain query strings to pass through to the home page: http://example.com?okparam=true but not http://example.com?anythingbutokparam=true
[ "So after some research, here is what I found - a solution acceptable by the major search providers: Google, Yahoo & MSN (I could only find a validator here):\nUser-Agent: *\nDisallow: /*\nAllow: /?okparam=\nAllow: /$\n\nThe trick is using the $ to mark the end of URL.\n", "Google's Webmaster Tools report that disallow always takes precedence over allow, so there's no easy way of doing this in a robots.txt file.\nYou could accomplish this by putting a noindex,nofollow META tag in the HTML of every page but the home page.\n", "Basic robots.txt:\nDisallow: /subdir/\n\nI don't think that you can create an expression saying 'everything but the root'; you have to fill in all subdirectories.\nThe query string limitation is also not possible from robots.txt. You have to do it in the background code (the processing part), or maybe with server rewrite-rules.\n", "Disallow: *\nAllow: index.ext\n\nIf I remember correctly the second clause should override the first.\n", "As far as I know, not all the crawlers support the Allow tag. One possible solution might be putting everything except the home page into another folder and disallowing that folder.\n" ]
[ 53, 1, 0, 0, 0 ]
[]
[]
[ "bots", "googlebot", "robots.txt", "slurp", "web_crawler" ]
stackoverflow_0000043427_bots_googlebot_robots.txt_slurp_web_crawler.txt
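For the META-tag fallback mentioned in the second answer, a minimal HTML sketch; placing this in the head of every page except the home page tells compliant crawlers to skip indexing and link-following:

<meta name="robots" content="noindex, nofollow" />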
Q: How to have two remote origins for Git? Our git server will be local, but we want a server where our local repo is also kept online, but only used in a push-to fashion. How can one do that? A: You can add remotes with git remote add <name> <url> You can then push to a remote with git push <name> master:master to push your local master branch to the remote master branch. When you create a repo with git clone the remote is named origin but you can create a public repository for your online server and push to it with git push public master:master
How to have two remote origins for Git?
Our git server will be local, but we want a server where our local repo is also kept online, but only used in a push-to fashion. How can one do that?
[ "You can add remotes with git remote add <name> <url>\nYou can then push to a remote with git push <name> master:master to push your local master branch to the remote master branch.\nWhen you create a repo with git clone the remote is named origin but you can create a public repository for your online server and push to it with git push public master:master\n" ]
[ 34 ]
[]
[]
[ "git" ]
stackoverflow_0000044714_git.txt
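A short shell sketch of the answer above; the remote name public and the repository URL are illustrative assumptions:

git remote add public ssh://git.example.com/srv/git/project.git
git push public master:master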
Q: JavaFX video encoding On JavaFX's Wikipedia In May 2008 (...) Sun also announced a multi-year agreement with On2 Technologies to bring comprehensive video capabilities to the JavaFX product family using the company's TrueMotion Video codec. Do you know if it will include encoding capabilities for webcam video like Flash or just playback/streaming? Thanks A: The JavaFX API just supports media playback at the moment (see here: javafx.scene.media.MediaView). There might very well be pure Java APIs for encoding, however.
JavaFX video encoding
On JavaFX's Wikipedia In May 2008 (...) Sun also announced a multi-year agreement with On2 Technologies to bring comprehensive video capabilities to the JavaFX product family using the company's TrueMotion Video codec. Do you know if it will include encoding capabilities for webcam video like Flash or just playback/streaming? Thanks
[ "The JavaFX API just supports media playback at the moment (see here: javafx.scene.media.MediaView). There might very well be pure Java APIs for encoding, however.\n" ]
[ 1 ]
[]
[]
[ "encoding", "flash", "javafx", "video" ]
stackoverflow_0000044516_encoding_flash_javafx_video.txt
Q: Is there a plugin for targetting .NET 1.1 with VS 2008? Is there a plugin for targetting .NET 1.1 with VS 2008? A: From what I know, you can hack the build files to target the 1.1 runtime instead. Google for your question and you should turn up pages like this one. A: According to Scott Guthrie, the reason VS 2008 does not support 1.0 or 1.1... "...is that there were significant CLR engine changes between .NET 1.x and 2.x that make debugging very difficult to support. In the end the costing of the work to support that was so large and impacted so many parts of Visual Studio that we weren't able to add 1.1 support in this release." Sounds like it would be difficult to really create such a plugin. The only hope you might find in his statement is that they "weren't able to add 1.1 support in this release" (emphasis mine). i.e. maybe they will add it down the road. I wouldn't hold my breath though. EDIT: Looks like the link @lassevk provided shows some promise for those people that can't accept running VS 2003 side-by-side with VS 2008. Looks like a lot of work though. :)
Is there a plugin for targetting .NET 1.1 with VS 2008?
Is there a plugin for targetting .NET 1.1 with VS 2008?
[ "From what I know, you can hack the build files to target the 1.1 runtime instead.\nGoogle for your question and you should turn up pages like this one.\n", "According to Scott Guthrie, the reason VS 2008 does not support 1.0 or 1.1...\n\n\"...is that there were significant CLR engine changes between .NET 1.x and 2.x that make debugging very difficult to support. In the end the costing of the work to support that was so large and impacted so many parts of Visual Studio that we weren't able to add 1.1 support in this release.\"\n\nSounds like it would be difficult to really create such a plugin. The only hope you might find in his statement is that they \"weren't able to add 1.1 support in this release\" (emphasis mine). i.e. maybe they will add it down the road.\nI wouldn't hold my breath though.\n\nEDIT: Looks like the link @lassevk provided shows some promise for those people that can't accept running VS 2003 side-by-side with VS 2008. Looks like a lot of work though. :)\n" ]
[ 4, 4 ]
[]
[]
[ ".net", ".net_1.1", "multi_targeting", "visual_studio_2008" ]
stackoverflow_0000044737_.net_.net_1.1_multi_targeting_visual_studio_2008.txt
Q: How to cache ASP.NET user controls? I heard on a recent podcast (Polymorphic) that it is possible to cache a user control as opposed to the entire page. I think my header control which displays static content and my footer control could benefit from being cached. How can I go about caching just those controls? A: Take a look here You can use VaryByParam and VaryByControl in the output cache. A: I think you can specify OutputCache in the control's markup file like you'd do on an ASPX page. And it'd get properly cached automatically. Just read up on the OutputCache page directive on MSDN and get the parameters right and it should do what you want it to. It's been a long time since I wrote classic ASP.NET, but I believe that's how it's done.
How to cache ASP.NET user controls?
I heard on a recent podcast (Polymorphic) that it is possible to cache a user control as opposed to the entire page. I think my header control which displays static content and my footer control could benefit from being cached. How can I go about caching just those controls?
[ "Take a look here\nYou can use VaryByParam and VaryByControl in the output cache.\n", "I think you can specify OutputCache in the control's markup file like you'd do on an ASPX page. And it'd get properly cached automatically.\nJust read up on OutputCache page directive on MSDN and get the parameters right and it should do what you want it to.\nIt's been a long time since I write classic ASP.NET but I believe that's how it's done.\n" ]
[ 4, 1 ]
[]
[]
[ "asp.net", "caching", "user_controls" ]
stackoverflow_0000044757_asp.net_caching_user_controls.txt
Q: NT authentication login I am working on a site where users can log in to get more private information. My client has another site elsewhere that uses NT authentication for access. What they want to do is have a button on the site I am working on, under the private area, that will send them to the NT-authenticated site, but not require them to log on to that site, instead passing the username and password that they used to log into my site to the other site for them. Is it possible to do this? And how would I accomplish it? Is there a better way to do this? A: Here's an (untested) theory, the details of which will greatly depend on what types of authentication the SharePoint site will accept. I'll tackle Basic, since it's the easiest. You'll write out some JavaScript that uses XMLHttpRequest to submit a request to the SharePoint site, and add their username and password to the request headers. Their browser will run that JavaScript, and get logged into the SharePoint site. Now, when they click the link, the client's browser should have the cached credentials to send to the SharePoint site. Possible issues:

XMLHttpRequest does not allow cross-domain auth
Browser and XHR don't share auth info
SharePoint and XHR can't agree on an auth method

Another option is to proxy the connection to SharePoint, which allows you to log in server side (bypassing XHR limitations and browser security) - but it adds load on your server and possibly some URL target issues. A: How will the other site validate your username and password? Ideally your site shouldn't even be remembering the user's password to be able to pass it to another site (you store hashes of the password, not the password itself, and only use the actual password during validation). What if your site provided a token to the user, who presents that token to the new site, which in turn asks your site to validate the token? Basically the second site is trusting you to tell it who the user is. This all breaks down if the second site is actually using the Windows accounts for anything other than just retrieving a user name (for example, permissions on the underlying file), since the user is not logged on as the actual Windows user account in this scenario. A: If you need to authenticate against the second site, you may need to spawn a new thread and call the Windows LogonUser API. Once you have the security token, assign it to the new thread and do your connection via that thread. LogonUser requires enhanced privileges, and isn't managed code, so there are some pretty severe hiccups to using it. But that's been the only workaround I've been able to find to get a Forms-authenticated site talking to a Windows-authenticated service/site. Hope this helps. A: Is this an intranet environment? If so, they shouldn't have to log in anyway. If SharePoint is set up using "Integrated Authentication" and the site is listed as a trusted site in IE, the browser will use their network credentials for automatic login. This can be set up in Firefox as well. A: Your users will not be able to connect to the NTLM site directly without getting an NTLM challenge. I would write what would effectively be a proxy to the NTLM site; i.e. your server-side code will have credentials to connect to the NTLM site, and it passes through the requests from your users. As you mention it's SharePoint (spit), bear in mind that SharePoint has a bunch of Web Services you could use for this (rather than doing screen-scraping).
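The LogonUser suggestion above looks roughly like this in C#. This is a minimal sketch under stated assumptions - the P/Invoke declaration is the standard advapi32 one, but the wrapper class and its error handling are hypothetical, and a production version would also close the token handle:

using System;
using System.Runtime.InteropServices;
using System.Security.Principal;

class Win32Logon
{
    [DllImport("advapi32.dll", SetLastError = true)]
    static extern bool LogonUser(string user, string domain, string password,
        int logonType, int logonProvider, out IntPtr token);

    const int LOGON32_LOGON_NETWORK = 3;   // enough for outgoing authenticated calls
    const int LOGON32_PROVIDER_DEFAULT = 0;

    // Dispose the returned context to revert to the original identity.
    public static WindowsImpersonationContext Impersonate(
        string user, string domain, string password)
    {
        IntPtr token;
        if (!LogonUser(user, domain, password,
                LOGON32_LOGON_NETWORK, LOGON32_PROVIDER_DEFAULT, out token))
            throw new System.ComponentModel.Win32Exception(); // wraps GetLastError

        // Requests made while impersonating carry this Windows identity.
        return new WindowsIdentity(token).Impersonate();
    }
}

Usage: wrap the call to the NTLM-protected site in a using block around Impersonate(...) so the identity is reverted afterwards.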
NT authentication login
I am working on a site where users can log in to get more private information. My client has another site elsewhere that uses NT authentication for access. What they want to do is have a button on the site I am working on, under the private area, that will send them to the NT-authenticated site, but not require them to log on to that site, instead passing the username and password that they used to log into my site to the other site for them. Is it possible to do this? And how would I accomplish it? Is there a better way to do this?
[ "Here's an (untested) theory, the details of which will greatly depend on what types of authentication the Sharepoint site will accept. I'll tackle Basic, since it's the easiest.\nYou'll write out some JavaScript that uses XMLHttpRequest to submit a request to the Sharepoint site, and add their username and password to the request headers. Their browser will run that JavaScript, and get logged into the Sharepoint site.\nNow, when they click the link, the client's browser should have the cached credentials to send to the Sharepoint site.\nPossible issues:\n\nXMLHttpRequest does not allow cross domain auth\nBrowser and XHR don't share auth info\nSharepoint and XHR can't agree on auth method\n\nAnother option is to proxy the connection to Sharepoint, which allows you to login server side (bypassing XHR limitations and browser security) - but requiring load on your server and possibly some URL target issues.\n", "How will the other site validate your username and password?\nIdeally your site shouldn't even be remembering the user's password to be able to pass it to another site (you store hashes of the password, not the password itself, and only use the actually password during validation).\nWhat if your site provided a token to the user, who presents that token to the new site, which in turn asks your site to validate the token. Basically the second site is trusting you to tell them who the user is.\nThis all breaks down if the second site is actually using the Windows accounts for anything other than just retrieving a user name (for example permissions on the underlying file), since the user is not logged on as the actual Windows user account in this scenario.\n", "If you need to authenticate against the second site, you may need to spawn a new thread and call the windows LogonUser API. Once you have the security token, assign it to the new thread and do your connection via that thread.\nLogonUser requires enhanced privileges, and isn't Managed code, so there are some pretty severe hiccups to using it. But that's been the only work around I've been able to find to get a Forms authenticated site talking to a Windows Authenticated Service/Site.\nHope this helps.\n", "Is this an intranet environment? If so they shouldn't have to login anyways. If sharepoint is setup using \"Integrated Authentication\" and the site is listed as a trusted site in IE, the browser will use there network cred for auto login. This can be setup on firefox as well.\n", "Your users will not be able to connect to the NTLM site directly without getting an NTLM challenge. I would write what would effectively be a proxy to the NTLM site; i.e your server-side code will have credentials to connect to the NTLM site, and it passes through the requests from your users. \nAs you mention it's SharePoint (spit) bear in mind that SharePoint has a bunch of Web Services you could use for this (rather than doing screen-scraping).\n" ]
[ 1, 0, 0, 0, 0 ]
[]
[]
[ "authentication", "ntlm" ]
stackoverflow_0000044102_authentication_ntlm.txt
Q: C++ linker unresolved external symbols I'm building an application against some legacy, third-party libraries, and having problems with the linking stage. I'm trying to compile with Visual Studio 9. My compile command is:

cl -DNT40 -DPOMDLL -DCRTAPI1=_cdecl -DCRTAPI2=_cdecl -D_WIN32 -DWIN32 -DWIN32_LEAN_AND_MEAN -DWNT -DBYPASS_FLEX -D_INTEL=1 -DIPLIB=none -I. -I"D:\src\include" -I"C:\Program Files\Microsoft Visual Studio 9.0\VC\include" -c -nologo -EHsc -W1 -Ox -Oy- -MD mymain.c

The code compiles cleanly. The link command is:

link -debug -nologo -machine:IX86 -verbose:lib -subsystem:console mymain.obj wsock32.lib advapi32.lib msvcrt.lib oldnames.lib kernel32.lib winmm.lib [snip large list of dependencies] D:\src\lib\app_main.obj -out:mymain.exe

The errors that I'm getting are:

app_main.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: void __thiscall std::locale::facet::_Register(void)" (__imp_?_Register@facet@locale@std@@QAEXXZ) referenced in function "class std::ctype<char> const & __cdecl std::use_facet<class std::ctype<char> >(class std::locale const &)" (??$use_facet@V?$ctype@D@std@@@std@@YAABV?$ctype@D@0@ABVlocale@0@@Z)

app_main.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: static unsigned int __cdecl std::ctype<char>::_Getcat(class std::locale::facet const * *)" (__imp_?_Getcat@?$ctype@D@std@@SAIPAPBVfacet@locale@2@@Z) referenced in function "class std::ctype<char> const & __cdecl std::use_facet<class std::ctype<char> >(class std::locale const &)" (??$use_facet@V?$ctype@D@std@@@std@@YAABV?$ctype@D@0@ABVlocale@0@@Z)

app_main.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: static unsigned int __cdecl std::ctype<unsigned short>::_Getcat(class std::locale::facet const * *)" (__imp_?_Getcat@?$ctype@G@std@@SAIPAPBVfacet@locale@2@@Z) referenced in function "class std::ctype<unsigned short> const & __cdecl std::use_facet<class std::ctype<unsigned short> >(class std::locale const &)" (??$use_facet@V?$ctype@G@std@@@std@@YAABV?$ctype@G@0@ABVlocale@0@@Z)

mymain.exe : fatal error LNK1120: 3 unresolved externals

Notice that these errors are coming from the legacy code, not my code - app_main.obj is part of the legacy code, while mymain.c is my source. I've done some searching around, and what I've read says that this type of error is caused by a mismatch in the -MD switch between my code and the library that I'm linking to. Since I'm dealing with legacy code, a solution has to come from my environment. It's been a long time since I've done C++ work, and even longer since I've used Visual Studio, so I'm hoping that this is just some ignorance on my part. Any ideas on how to get these resolved? A: These are standard library references. Make sure that all libraries (including the standard library) are using the same linkage. E.g. you can't link statically while linking the standard lib dynamically. The same goes for the threading model used. Take special care that you and the 3rd-party library use the same linkage options. This can be a real pain in the *ss. A: Check this on MSDN:

/MD Causes your application to use the multithread- and DLL-specific version of the run-time library.
/MT Causes your application to use the multithread, static version of the run-time library.

Note: "... so that the linker will use LIBCMT.lib to resolve external symbols" So you'll need a different set of libraries. How I went about finding out which libraries to link:

Find a configuration that does link, and add the /verbose option.
Pipe the output to a text file.
Try the configuration that doesn't link.
Look in the verbose output from step 2 for the symbols that are unresolved ("__declspec(dllimport) public: void __thiscall std::locale::facet::_Register(void)" in your case) and find the used libraries.
Add those libraries to the list of libraries you're linking to.

Old skool but it worked for me. Jan A: If you still wish to get the project to compile using VS2008 (or in the future), I can suggest using a binary editor to view the object file in question, app_main.obj. Here is an example from a small project of mine. The zdbException.obj contains the following excerpt:

DEFAULTLIB:"libc
pmtd" /DEFAULTLI
B:"uuid.lib" /DE
FAULTLIB:"uuid.l
ib" /include:?id
@?$num_put@DV?$o
streambuf_iterat
or@DU?$char_trai
ts@D@std@@@std@@
@std@@2V0locale@
2@A /include:?id
@?$numpunct@D@st
d@@2V0locale@2@A
 /DEFAULTLIB:"LI
BCMTD" /DEFAULTL
IB:"OLDNAMES" /E
DITANDCONTINUE

Note the entry /DEFAULTLIB:"LIBCMTD". This indicates the object file was compiled with the static, multi-threaded debug C run-time. There is also the possibility that the functions referenced in the obj are deprecated in the standard run-time lib shipped with VS2008. A: After trying to get this stuff to compile under VS 2008, I tried earlier versions of VS - 2005 worked with warnings, and 2003 just worked. I double-checked the linkages and couldn't find any problems, so either I just couldn't find it, or that wasn't the problem. So to reiterate, downgrading to VS 2003 fixed it.
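A gentler alternative to the binary-editor approach, for reference (not from the original answers, but a standard VC++ tool): dumpbin can print an object file's embedded linker directives directly, which tells you which C run-time it was built against. The command is real; the output below is a hypothetical excerpt of what to look for:

dumpbin /directives app_main.obj

   Linker Directives
   -----------------
   /DEFAULTLIB:"LIBCMT"    <- static CRT: build your code with /MT to match
   /DEFAULTLIB:"MSVCRT"    <- DLL CRT: build your code with /MD to match

Whichever /DEFAULTLIB entries show up for the CRT, compile mymain.c with the matching /MD, /MT, /MDd or /MTd flag so both objects pull in the same run-time library.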
C++ linker unresolved external symbols
I'm building an application against some legacy, third-party libraries, and having problems with the linking stage. I'm trying to compile with Visual Studio 9. My compile command is: cl -DNT40 -DPOMDLL -DCRTAPI1=_cdecl -DCRTAPI2=_cdecl -D_WIN32 -DWIN32 -DWIN32_LEAN_AND_MEAN -DWNT -DBYPASS_FLEX -D_INTEL=1 -DIPLIB=none -I. -I"D:\src\include" -I"C:\Program Files\Microsoft Visual Studio 9.0\VC\include" -c -nologo -EHsc -W1 -Ox -Oy- -MD mymain.c The code compiles cleanly. The link command is: link -debug -nologo -machine:IX86 -verbose:lib -subsystem:console mymain.obj wsock32.lib advapi32.lib msvcrt.lib oldnames.lib kernel32.lib winmm.lib [snip large list of dependencies] D:\src\lib\app_main.obj -out:mymain.exe The errors that I'm getting are: app_main.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: void __thiscall std::locale::facet::_Register(void)" (__imp_?_Register@facet@locale@std@@QAEXXZ) referenced in function "class std::ctype<char> const & __cdecl std::use_facet<class std::ctype<char> >(class std::locale const &)" (??$use_facet@V?$ctype@D@std@@@std@@YAABV?$ctype@D@0@ABVlocale@0@@Z) app_main.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: static unsigned int __cdecl std::ctype<char>::_Getcat(class std::locale::facet const * *)" (__imp_?_Getcat@?$ctype@D@std@@SAIPAPBVfacet@locale@2@@Z) referenced in function "class std::ctype<char> const & __cdecl std::use_facet<class std::ctype<char> >(class std::locale const &)" (??$use_facet@V?$ctype@D@std@@@std@@YAABV?$ctype@D@0@ABVlocale@0@@Z) app_main.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: static unsigned int __cdecl std::ctype<unsigned short>::_Getcat(class std::locale::facet const * *)" (__imp_?_Getcat@?$ctype@G@std@@SAIPAPBVfacet@locale@2@@Z) referenced in function "class std::ctype<unsigned short> const & __cdecl std::use_facet<class std::ctype<unsigned short> >(class std::locale const &)" (??$use_facet@V?$ctype@G@std@@@std@@YAABV?$ctype@G@0@ABVlocale@0@@Z) mymain.exe : fatal error LNK1120: 3 unresolved externals Notice that these errors are coming from the legacy code, not my code - app_main.obj is part of the legacy code, while mymain.c is my source. I've done some searching around, and what I've read says that this type of error is caused by a mismatch in the -MD switch between my code and the library that I'm linking to. Since I'm dealing with legacy code, a solution has to come from my environment. It's been a long time since I've done C++ work, and even longer since I've used Visual Studio, so I'm hoping that this is just some ignorance on my part. Any ideas on how to get these resolved?
[ "These are standard library references. Make sure that all libraries (including the standard library) are using the same linkage. E.g. you can't link statically while linking the standard lib dynamically. The same goes for the threading model used. Take special care that you and the 3rd party library use the same linkage options.\nThis can be a real pain in the *ss.\n", "Check this on MSDN:\n\n/MD Causes your application to use the multithread- and DLL-specific version of the run-time library.\n/MT Causes your application to use the multithread, static version of the run-time library.\n\nNote: \"... so that the linker will use LIBCMT.lib to resolve external symbols\"\nSo you'll need a different set of libraries.\nHow I went about finding out which libraries to link:\n\nFind a configuration that does link, and add /verbose option.\nPipe the output to a text file.\nTry the configuration that doesn't link.\nLook in the verbose output from step 2 for the symbols that are unresolved (\"_declspec(dllimport) public: void thiscall std::locale::facet::Register(void)\" in your case) and find the used libraries.\nAdd those libraries to the list of libraries you're linking to.\n\nOld skool but it worked for me.\nJan\n", "If you still wish to get the project to compile using VS2008 (or in the future) I can suggest using a binary editor to view the object file in question mainapp.obj.\nHere is an example from a small project of mine.\nThe zdbException.obj contains the following excerpt\nDEFAULTLIB:\"libc\npmtd\" /DEFAULTLI\nB:\"uuid.lib\" /DE\nFAULTLIB:\"uuid.l\nib\" /include:?id\n@?$num_put@DV?$o\nstreambuf_iterat\nor@DU?$char_trai\nts@D@std@@@std@@\n@std@@2V0locale@\n2@A /include:?id\n@?$numpunct@D@st\nd@@2V0locale@2@A\n /DEFAULTLIB:\"LI\nBCMTD\" /DEFAULTL\nIB:\"OLDNAMES\" /E\nDITANDCONTINUE \n\nNote the entry /DEFAULTLIB:\"LIBCMTD\". This indicates the object file was compiled with the static c run-time multi-threaded debug.\nThere is also the possibility that the functions referenced in the obj are deprecated in the standard run-time lib shipped with VS2008.\n", "After trying to get this stuff to compile under VS 2008, I tried earlier versions of VS - 2005 worked with warnings, and 2003 just worked. I double checked the linkages and couldn't find any problems, so either I just couldn't find it, or that wasn't the problem. \nSo to reiterate, downgrading to VS 2003 fixed it.\n" ]
[ 4, 2, 2, -2 ]
[]
[]
[ "c++" ]
stackoverflow_0000023209_c++.txt
Q: What can cause .NET assembly registration to fail? We've seen an issue where one of our installers (msi) returns the error code 2908, which is used to indicate that an assembly failed to register. Later in the installation, we get the following (sanitized) error: MyAssemblyName, version="1.0.1.1", culture="neutral", publicKeyToken="119EFC79848A50". Please refer to Help and Support for more information. HRESULT: 0x8002802F. The assembly registers properly on most systems. Has anyone else encountered this issue? How did you solve it? A: I found a pair of blog postings that appear to cover this topic.
What can cause .NET assembly registration to fail?
We've seen an issue where one of our installers (msi) returns the error code 2908, which is used to indicate that an assembly failed to register. Later in the installation, we get the following (sanitized) error: MyAssemblyName, version="1.0.1.1", culture="neutral", publicKeyToken="119EFC79848A50". Please refer to Help and Support for more information. HRESULT: 0x8002802F. The assembly registers properly on most systems. Has anyone else encountered this issue? How did you solve it?
[ "I found a pair of blog postings that appear to cover this topic.\n" ]
[ 1 ]
[]
[]
[ ".net", "windows", "windows_installer" ]
stackoverflow_0000044467_.net_windows_windows_installer.txt
Q: Ajax Autocomplete Webservice Call - Service Method, am I calling this correctly? OK, so my method in my web service requires a type to be passed; it is called in the ServiceMethod property of the AutoCompleteExtender. I am fuzzy about how I should do that, so I called it like this:

ServiceMethod="DropDownLoad<<%=(typeof)subCategory%>>"

where subCategory is a page property that looks like this:

protected SubCategory subCategory
{
    get
    {
        var subCategory = NHibernateObjectHelper.LoadDataObject<SubCategory>(Convert.ToInt32(Request.QueryString["SCID"]));
        return subCategory;
    }
}

A: You could use the AutoCompleteExtender's ContextKey parameter to use a single web method that accepts a type name as its context key. Then in the web method, use reflection and that parameter to return the desired string[]. A: I don't think calling a generic method on a web service is possible. If you look at the service description of two identical methods, one generic, one not:

[WebMethod]
public string[] GetSearchList(string prefixText, int count)
{
}

[WebMethod]
public string[] GetSearchList2<T>(string prefixText, int count)
{
}

They are identical. It appears that both SOAP 1.x and HTTP POST do not allow this type of operation.
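A rough sketch of the ContextKey approach from the first answer. The method and helper names here are hypothetical, and it assumes the AJAX Control Toolkit extender is configured with UseContextKey="true" so the toolkit passes the key as a third parameter (set ContextKey to the type's name from code-behind):

[System.Web.Services.WebMethod]
[System.Web.Script.Services.ScriptMethod]
public string[] DropDownLoad(string prefixText, int count, string contextKey)
{
    // contextKey carries a type name, e.g. typeof(SubCategory).AssemblyQualifiedName
    Type t = Type.GetType(contextKey);

    // Dispatch on the type instead of using generics, which web services can't expose.
    if (t == typeof(SubCategory))
        return SearchSubCategories(prefixText, count); // hypothetical helper

    return new string[0];
}

One web method then serves every type, which sidesteps the generic-method limitation described in the second answer.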
Ajax Autocomplete Webservice Call - Service Method, am I calling this correctly?
OK, so my method in my web service requires a type to be passed; it is called in the ServiceMethod property of the AutoCompleteExtender. I am fuzzy about how I should do that, so I called it like this: ServiceMethod="DropDownLoad<<%=(typeof)subCategory%>>" where subCategory is a page property that looks like this: protected SubCategory subCategory { get { var subCategory = NHibernateObjectHelper.LoadDataObject<SubCategory>(Convert.ToInt32(Request.QueryString["SCID"])); return subCategory; } }
[ "You could use the AutoCompleteExtender's ContextKey parameter to use a single web method that accepted a type name as its context key. Then in the web method, use reflection and that parameter to return the desired string[].\n", "I dont' think calling a Generic Method on a webservice is possible.\nIf you look at the service description of two identical methods, one generic, one not:\n[WebMethod]\npublic string[] GetSearchList(string prefixText, int count)\n{\n}\n\n[WebMethod]\npublic string[] GetSearchList2<T>(string prefixText, int count)\n{\n}\n\nThey are identical. It appears that both SOAP 1.x and HTTP POST do not allow this type of operation.\n" ]
[ 2, 1 ]
[]
[]
[ "asp.net", "asp.net_ajax", "c#" ]
stackoverflow_0000044771_asp.net_asp.net_ajax_c#.txt
Q: How to prefetch Oracle sequence IDs in a distributed environment I have a distributed Java application running on 5 application servers. The servers all use the same Oracle 9i database running on a 6th machine. The application needs to prefetch a batch of 100 IDs from a sequence. It's relatively easy to do in a single-threaded, non-distributed environment; you can just issue these queries:

SELECT seq.nextval FROM dual;
ALTER SEQUENCE seq INCREMENT BY 100;
SELECT seq.nextval FROM dual;

The first select fetches the first sequence ID that the application can use, the second select returns the last one that can be used. Things get way more interesting in a multithreaded environment. You can't be sure that another thread doesn't increase the sequence by 100 again before the second select. This issue can be solved by synchronizing the access on the Java side - you only let one thread begin fetching the IDs at one time. The situation becomes really hard when you can't synchronize, because parts of the application don't run on the same JVM, not even on the same physical machine. I found some references on forums that others have problems solving this too, but none of the answers really work, not to mention being reasonable. Can the community provide a solution for this problem? Some more information: I can't really play with the transaction isolation levels. I use JPA, and the change would affect the entire application, not only the prefetching queries, and that's not acceptable for me. On PostgreSQL I could do the following:

SELECT setval('seq', NEXTVAL('seq') + n - 1)

The solution by Matthew works when you can use a fixed increment value (which is perfectly acceptable in my case). However, is there a solution when you don't want to fix the size of the increment, but want to adjust it dynamically? A: Why not just have the sequence increment by 100 all the time? Each "nextval" gives you 100 sequence numbers to work with:

SQL> create sequence so_test start with 100 increment by 100 nocache;

Sequence created.

SQL> select so_test.nextval - 99 as first_seq, so_test.currval as last_seq from dual;

 FIRST_SEQ   LAST_SEQ
---------- ----------
         1        100

SQL> /

 FIRST_SEQ   LAST_SEQ
---------- ----------
       101        200

SQL> /

 FIRST_SEQ   LAST_SEQ
---------- ----------
       201        300

SQL>

A note on your example: watch out for DDL. It will produce an implicit commit. Example of a commit produced by DDL:

SQL> select * from xx;

no rows selected

SQL> insert into xx values ('x');

1 row created.

SQL> alter sequence so_test increment by 100;

Sequence altered.

SQL> rollback;

Rollback complete.

SQL> select * from xx;

Y
-----
x

SQL>

A: Why do you need to fetch the sequence IDs in the first place? In most cases you would insert into a table and return the ID.

insert into t (my_pk, my_data) values (mysequence.nextval, :the_data)
returning my_pk into :the_pk;

It sounds like you are trying to pre-optimize the processing. If you REALLY need to pre-fetch the IDs, then just call the sequence 100 times. The entire point of a sequence is that it manages the numbering. You're not supposed to assume that you can get 100 consecutive numbers. A: Matthew has the correct approach here. In my opinion, it is very unusual for an application to reset a sequence's current value after every use. Much more conventional to set the increment size to whatever you need upfront. Also, this way is much more performant. Selecting nextval from a sequence is a highly optimised operation in Oracle, whereas running DDL to alter the sequence is much more expensive. I guess that doesn't really answer the last point in your edited question... A: For when you don't want a fixed-size increment, sequences aren't really what you are after; all they really guarantee is that you will be getting a unique number always bigger than the last one you got. There is always the possibility that you'll end up with gaps, and you can't really adjust the increment amount on the fly safely or effectively. I can't really think of any case where I've had to do this kind of thing, but likely the easiest way is just to store the "current" number somewhere and update it as you need it. Something like this:

drop table t_so_test;

create table t_so_test (curr_num number(10));

insert into t_so_test values (1);

create or replace procedure p_get_next_seq (inc IN NUMBER, v_next_seq OUT NUMBER) As
BEGIN
  update t_so_test set curr_num = curr_num + inc RETURNING curr_num into v_next_seq;
END;
/

SQL> var p number;
SQL> execute p_get_next_seq(100,:p);

PL/SQL procedure successfully completed.

SQL> print p;

         P
----------
       101

SQL> execute p_get_next_seq(10,:p);

PL/SQL procedure successfully completed.

SQL> print p;

         P
----------
       111

SQL> execute p_get_next_seq(1000,:p);

PL/SQL procedure successfully completed.

SQL> print p;

         P
----------
      1111

SQL>
How to prefetch Oracle sequence IDs in a distributed environment
I have a distributed Java application running on 5 application servers. The servers all use the same Oracle 9i database running on a 6th machine. The application needs to prefetch a batch of 100 IDs from a sequence. It's relatively easy to do in a single-threaded, non-distributed environment; you can just issue these queries: SELECT seq.nextval FROM dual; ALTER SEQUENCE seq INCREMENT BY 100; SELECT seq.nextval FROM dual; The first select fetches the first sequence ID that the application can use, the second select returns the last one that can be used. Things get way more interesting in a multithreaded environment. You can't be sure that another thread doesn't increase the sequence by 100 again before the second select. This issue can be solved by synchronizing the access on the Java side - you only let one thread begin fetching the IDs at one time. The situation becomes really hard when you can't synchronize, because parts of the application don't run on the same JVM, not even on the same physical machine. I found some references on forums that others have problems solving this too, but none of the answers really work, not to mention being reasonable. Can the community provide a solution for this problem? Some more information: I can't really play with the transaction isolation levels. I use JPA, and the change would affect the entire application, not only the prefetching queries, and that's not acceptable for me. On PostgreSQL I could do the following: SELECT setval('seq', NEXTVAL('seq') + n - 1) The solution by Matthew works when you can use a fixed increment value (which is perfectly acceptable in my case). However, is there a solution when you don't want to fix the size of the increment, but want to adjust it dynamically?
[ "Why not just have the sequence as increment by 100 all the time? each \"nextval\" gives you 100 sequence numbers to work with\nSQL> create sequence so_test start with 100 increment by 100 nocache;\n\nSequence created.\n\nSQL> select so_test.nextval - 99 as first_seq, so_test.currval as last_seq from dual;\n\n FIRST_SEQ LAST_SEQ\n---------- ----------\n 1 100\n\nSQL> /\n\n FIRST_SEQ LAST_SEQ\n---------- ----------\n 101 200\n\nSQL> /\n\n FIRST_SEQ LAST_SEQ\n---------- ----------\n 201 300\n\nSQL> \n\nA note on your example.. Watch out for DDL.. It will produce an implicit commit\nExample of commit produced by DDL\nSQL> select * from xx;\n\nno rows selected\n\nSQL> insert into xx values ('x');\n\n1 row created.\n\nSQL> alter sequence so_test increment by 100;\n\nSequence altered.\n\nSQL> rollback;\n\nRollback complete.\n\nSQL> select * from xx;\n\nY\n-----\nx\n\nSQL> \n\n", "Why do you need to fetch the sequence IDs in the first place? In most cases you would insert into a table and return the ID.\ninsert into t (my_pk, my_data) values (mysequence.nextval, :the_data)\nreturning my_pk into :the_pk;\n\nIt sounds like you are trying to pre-optimize the processing.\nIf you REALLY need to pre-fetch the IDs then just call the sequence 100 times. The entire point of a sequence is that it manages the numbering. You're not supposed to assume that you can get 100 consecutive numbers.\n", "Matthew has the correct approach here. In my opinion, it is very unusual for an application to reset a sequence's current value after every use. Much more conventional to set the increment size to whatever you need upfront.\nAlso, this way is much more performant. Selecting nextval from a sequence is a highly optimised operation in Oracle, whereas running ddl to alter the sequence is much more expensive.\nI guess that doesn't really answer the last point in your edited question...\n", "For when you don't want a fixed size increment, sequences aren't really what you are after, all they really guarantee is that you will be getting a unique number always bigger than the last one you got. There is always the possibility that you'll end up with gaps, and you can't really adjust the increment amount on the fly safely or effectively. \nI can't really think of any case where I've had to do this kind of thing, but likely the easiest way is just to store the \"current\" number somewhere and update it as you need it.\nSomething like this.\ndrop table t_so_test;\n\ncreate table t_so_test (curr_num number(10));\n\ninsert into t_so_test values (1);\ncreate or replace procedure p_get_next_seq (inc IN NUMBER, v_next_seq OUT NUMBER) As\nBEGIN\n update t_so_test set curr_num = curr_num + inc RETURNING curr_num into v_next_seq;\nEND;\n/\n\n\nSQL> var p number;\nSQL> execute p_get_next_seq(100,:p);\n\nPL/SQL procedure successfully completed.\n\nSQL> print p;\n\n P\n----------\n 101\n\nSQL> execute p_get_next_seq(10,:p); \n\nPL/SQL procedure successfully completed.\n\nSQL> print p;\n\n P\n----------\n 111\n\nSQL> execute p_get_next_seq(1000,:p);\n\nPL/SQL procedure successfully completed.\n\nSQL> print p;\n\n P\n----------\n 1111\n\nSQL> \n\n" ]
[ 11, 3, 1, 1 ]
[]
[]
[ "java", "oracle" ]
stackoverflow_0000043808_java_oracle.txt
Q: Can the HTTP version or headers affect the visual appearance of a web page? I know, I would have thought the answer was obviously "no" as well, but I am experiencing a strange situation where when I view my site from our staging server it appears slightly larger than when I view it from my local dev server. I have used Charles to confirm that all of the content -- the HTML, the images, the CSS, the javascript, everything -- is the same. The ONLY difference in the traffic is that (because the local site is served from the Django development mode server) the response headers look like this:

HTTP/1.0 200 OK
Server WSGIServer/0.1 Python/2.5.2
Date Thu, 04 Sep 2008 23:56:10 GMT
Vary Cookie
Content-Length 2301
Content-Type text/html; charset=utf-8

Whereas on the staging server (where Django is running inside Apache) the headers look like this:

HTTP/1.1 200 OK
Date Thu, 04 Sep 2008 23:56:06 GMT
Server Apache/2.2.8 (Ubuntu) mod_python/3.3.1 Python/2.5.2 PHP/5.2.4-2ubuntu5 with Suhosin-Patch
Vary Cookie
Content-Length 2301
Content-Type text/html; charset=utf-8

So, as far as I can tell the only differences are HTTP/1.1 vs HTTP/1.0, the server identifier (Apache vs WSGIServer) and the order of the Date/Server headers. To elaborate a bit further on the differences in appearance, basically it appears as if the version of the site on the staging server is "zoomed in" by about 10%. For example, the primary logo which dominates our home page is 220 pixels wide, but when served from our staging server it shows up as 245 pixels wide. Everything else on the page (other images, text, spacing, etc.) is also proportionately larger. This is all in Firefox 3. I don't have any other browsers available to test with at the moment. Has anyone else encountered any bizarre behavior anything like this before? I am at a loss. A: Have you tried View -> Zoom -> Reset on both sites?
Can the HTTP version or headers affect the visual appearance of a web page?
I know, I would have thought the answer was obviously "no" as well, but I am experiencing a strange situation where when I view my site from our staging server it appears slightly larger than when I view it from my local dev server. I have used Charles to confirm that all of the content -- the HTML, the images, the CSS, the javascript, everything -- is the same. The ONLY difference in the traffic is that (because the local site is served from the Django development mode server) the response headers look like this: HTTP/1.0 200 OK Server WSGIServer/0.1 Python/2.5.2 Date Thu, 04 Sep 2008 23:56:10 GMT Vary Cookie Content-Length 2301 Content-Type text/html; charset=utf-8 Whereas on the staging server (where Django is running inside Apache) the headers look like this: HTTP/1.1 200 OK Date Thu, 04 Sep 2008 23:56:06 GMT Server Apache/2.2.8 (Ubuntu) mod_python/3.3.1 Python/2.5.2 PHP/5.2.4-2ubuntu5 with Suhosin-Patch Vary Cookie Content-Length 2301 Content-Type text/html; charset=utf-8 So, as far as I can tell the only differences are HTTP/1.1 vs HTTP/1.0, the server identifier (Apache vs WSGIServer) and the order of the Date/Server headers. To elaborate a bit further on the differences in appearance, basically it appears as if the version of the site on the staging server is "zoomed in" by about 10%. For example, the primary logo which dominates our home page is 220 pixels wide, but when served from our staging server it shows up as 245 pixels wide. Everything else on the page (other images, text, spacing, etc.) is also proportionately larger. This is all in Firefox 3. I don't have any other browsers available to test with at the moment. Has anyone else encountered any bizarre behavior anything like this before? I am at a loss.
[ "Have you tried View -> Zoom -> Reset on both sites?\n" ]
[ 9 ]
[]
[]
[ "django", "firefox", "python" ]
stackoverflow_0000045013_django_firefox_python.txt
Q: How to use the SharePoint MultipleLookupField control? I want to use the MultipleLookupField control in a web page that will run in the context of SharePoint. I was wondering if anyone would help me with an example, which shows step by step how to use the control to display two SPField Collections. A: I'm not entirely sure I understand your question, especially the bit about displaying two SPField collections. Sorry if this turns out to be the answer to a completely different question! Anyway, here's a quick demo walkthrough of using the MultipleLookupField in a web part. Create a team site. Add a few tasks to the task list. Also put a document in the Shared Documents library. Create a new column in the Shared Documents library; call it "Related", have it be a Lookup into the Title field of the Tasks list, and allow multiple values. Now create a web part, do all the usual boilerplate and then add this:

Label l;
MultipleLookupField mlf;

protected override void CreateChildControls()
{
    base.CreateChildControls();
    SPList list = SPContext.Current.Web.Lists["Shared Documents"];
    if (list != null && list.Items.Count > 0)
    {
        LiteralControl lit = new LiteralControl("Associate tasks to " +
            list.Items[0].Name);
        this.Controls.Add(lit);

        mlf = new MultipleLookupField();
        mlf.ControlMode = SPControlMode.Edit;
        mlf.FieldName = "Related";
        mlf.ItemId = list.Items[0].ID;
        mlf.ListId = list.ID;
        mlf.ID = "Related";
        this.Controls.Add(mlf);

        Button b = new Button();
        b.Text = "Change";
        b.Click += new EventHandler(bClick);
        this.Controls.Add(b);

        l = new Label();
        this.Controls.Add(l);
    }
}

void bClick(object sender, EventArgs e)
{
    l.Text = "";
    foreach (SPFieldLookupValue val in (SPFieldLookupValueCollection)mlf.Value)
    {
        l.Text += val.LookupValue.ToString() + " ";
    }
    SPListItem listitem = mlf.List.Items[0];
    listitem["Related"] = mlf.Value;
    listitem.Update();
    mlf.Value = listitem["Related"];
}

protected override void OnInit(EventArgs e)
{
    base.OnInit(e);
    EnsureChildControls();
}

Granted, this is borderline ridiculous -- everything is hard-coded, there is no error-handling at all, and it serves no useful purpose -- but it's only meant as a quick demo. Now build and deploy this web part and add an instance of it to your team site's homepage; it should allow you to get and set the tasks which are associated with the first document in the library. The strange bit towards the end of the button Click handler, where we read a value from mlf.Value and then write it back again, appears to be required if you want the UI to stay in sync with the actual list values. Try omitting the last line of bClick to see what I mean. This has been driving me nuts for the last hour or so, and I'm hoping another commenter can come up with a better approach... A: Hm. Works fine on mine, so let's see if we can work out how your setup is different... It looks as though it's having trouble populating the control; my first guess would be that this is because the code makes so many assumptions about the lists it's talking to. Can you check that you've got a plain vanilla Team site, with (assume these names are case-sensitive):

A list called Tasks, with several items in it
A library called Shared Documents with at least one document
A column called Related in the Shared Documents library
The Related column is a Lookup field into the Title column of Tasks, and allows multiple values.
The first document in Shared Documents has a value for Related

Then add the webpart. Fingers crossed... A: Hm. OK, I'm still trying to break mine... so I went to the layouts directory and created a file foo.aspx. Here it is:

<%@ Page Language="C#" Inherits="System.Web.UI.Page" MasterPageFile="~/_layouts/simple.master" %>
<%@ Register Tagprefix="foo" Namespace="Foople" Assembly="Foople, Version=1.0.0.0, Culture=neutral, PublicKeyToken=9f4da00116c38ec5"%>
<asp:Content ContentPlaceHolderId="PlaceHolderMain" runat="server">
<foo:WebPart1 id="fred" runat="server" />
<foo:WebPart1a id="barney" runat="server" />
</asp:Content>

WebPart1 is the webpart from before. WebPart1a is the exact same code, but in a class that inherits directly from WebControl rather than from WebPart. It works fine, apart from a security validation problem on the postback that I can't be bothered to debug. Changing the masterpage to ~masterurl/default.master, I uploaded foo.aspx to the Shared Documents library, and it works fine from there too -- both the WebControl and the WebPart behave properly, and the security problem is gone too. So I'm at a loss. Although I did notice this page with an obscure might-be-bug which is also in SPFolder.get_ContentTypeOrder(): http://forums.msdn.microsoft.com/en-US/sharepointdevelopment/thread/63baf273-7f36-453e-8293-26417759e2e1/ Any chance you could post your code?
How to use the SharePoint MultipleLookupField control?
I want to use the MultipleLookupField control in a web page that will run in the context of SharePoint. I was wondering if anyone would help me with an example, which shows step by step how to use the control to display two SPField Collections.
[ "I'm not entirely sure I understand your question, especially the bit about displaying two SPField collections. Sorry if this turns out to be the answer to a completely different question!\nAnyway here's a quick demo walkthrough of using the MultipleLookupField in a web part.\nCreate a team site. Add a few tasks to the task list. Also put a document in the Shared Documents library. Create a new column in the Shared Documents library; call it \"Related\", have it be a Lookup into the Title field of the Tasks list, and allow multiple values.\nNow create a web part, do all the usual boilerplate and then add this:\nLabel l;\nMultipleLookupField mlf;\n\nprotected override void CreateChildControls()\n{\n base.CreateChildControls();\n SPList list = SPContext.Current.Web.Lists[\"Shared Documents\"];\n if (list != null && list.Items.Count > 0)\n {\n LiteralControl lit = new LiteralControl(\"Associate tasks to \" + \n list.Items[0].Name);\n this.Controls.Add(lit);\n\n mlf = new MultipleLookupField();\n mlf.ControlMode = SPControlMode.Edit;\n mlf.FieldName = \"Related\";\n mlf.ItemId = list.Items[0].ID;\n mlf.ListId = list.ID;\n mlf.ID = \"Related\";\n this.Controls.Add(mlf);\n\n Button b = new Button();\n b.Text = \"Change\";\n b.Click += new EventHandler(bClick);\n this.Controls.Add(b);\n\n l = new Label();\n this.Controls.Add(l);\n }\n\n}\n\nvoid bClick(object sender, EventArgs e)\n{\n l.Text = \"\";\n foreach (SPFieldLookupValue val in (SPFieldLookupValueCollection)mlf.Value)\n {\n l.Text += val.LookupValue.ToString() + \" \";\n }\n SPListItem listitem = mlf.List.Items[0];\n listitem[\"Related\"] = mlf.Value;\n listitem.Update();\n mlf.Value = listitem[\"Related\"];\n}\n\nprotected override void OnInit(EventArgs e)\n{\n base.OnInit(e);\n EnsureChildControls();\n}\n\nGranted, this is borderline ridiculous -- everything is hard-coded, there is no error-handling at all, and it serves no useful purpose -- but it's only meant as a quick demo. Now build and deploy this web part and add an instance of it to your team site's homepage; it should allow you to get and set the tasks which are associated with the first document in the library.\nThe strange bit towards the end of the button Click handler, where we read a value from mlf.Value and then write it back again, appears to be required if you want the UI to stay in sync with the actual list values. Try omitting the last line of bClick to see what I mean. This has been driving me nuts for the last hour or so, and I'm hoping another commenter can come up with a better approach...\n", "Hm. Works fine on mine, so let's see if we can work out how your setup is different...\nIt looks as though it's having trouble populating the control; my first guess would be that this is because the code makes so many assumptions about the lists it's talking to. Can you check that you've got a plain vanilla Team site, with (assume these names are case-sensitive): \n\nA list called Tasks, with several items in it\nA library called Shared Documents with at least one document\nA column called Related in the Shared Documents library\nThe Related column is a Lookup field into the Title column of Tasks, and allows multiple values.\nThe first document in Shared Documents has a value for Related\n\nThen add the webpart. Fingers crossed...\n", "Hm. OK, I'm still trying to break mine... so I went to the layouts directory and created a file foo.aspx. 
Here it is:\n<%@ Page Language=\"C#\" Inherits=\"System.Web.UI.Page\" MasterPageFile=\"~/_layouts/simple.master\" %> \n<%@ Register Tagprefix=\"foo\" Namespace=\"Foople\" Assembly=\"Foople, Version=1.0.0.0, Culture=neutral, PublicKeyToken=9f4da00116c38ec5\"%>\n<asp:Content ContentPlaceHolderId=\"PlaceHolderMain\" runat=\"server\">\n<foo:WebPart1 id=\"fred\" runat=\"server\" />\n<foo:WebPart1a id=\"barney\" runat=\"server\" />\n</asp:Content>\n\nWebPart1 is the webpart from before. WebPart1a is the exact same code, but in a class that inherits directly from WebControl rather than from WebPart.\nIt works fine, apart from a security validation problem on the postback that I can't be bothered to debug.\nChanging the masterpage to ~masterurl/default.master, I uploaded foo.aspx to the Shared Documents library, and it works fine from there too -- both the WebControl and the WebPart behave properly, and the security problem is gone too.\nSo I'm at a loss. Although I did notice this page with an obscure might-be-bug which is also in SPFolder.get_ContentTypeOrder(): http://forums.msdn.microsoft.com/en-US/sharepointdevelopment/thread/63baf273-7f36-453e-8293-26417759e2e1/\nAny chance you could post your code?\n" ]
[ 2, 0, 0 ]
[]
[]
[ "sharepoint" ]
stackoverflow_0000039910_sharepoint.txt
Q: reassign value to query string parameter I have a "showall" query string parameter in the URL; the parameter is being added dynamically when the "Show All/Show Pages" button is clicked. I want the ability to toggle the "showall" query string parameter value depending on the user clicking the "Show All/Show Pages" button. I'm doing some nested "if's" and string.Replace() on the URL; is there a better way? All manipulations are done on the server. P.S. Toran, good suggestion; however, I HAVE TO USE URL PARAMETER due to some other issues. A: Just to elaborate on Toran's answer: Use:

<asp:HiddenField ID="ShowAll" Value="False" runat="server" />

To toggle your state:

protected void ToggleState(object sender, EventArgs e)
{
    //parse string as boolean, invert, and convert back to string
    ShowAll.Value = (!Boolean.Parse(ShowAll.Value)).ToString();
}

A: Another dirty alternative could be just to use a hidden input and set that on/off instead of manipulating the URL. A: Would it be too much of an effort just to have the value hard-coded into the URL (I know it's not too nice) with a default value of true, then just have

booleanVar = !booleanVar;

run on every page load? At least that would move away from the need of having nested ifs to manipulate the URL. A: I am not sure based upon the question, but isn't this where HttpHandlers come to the rescue? Shouldn't you be handling the variable alteration on the object prior to page rendering in this case then?
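Since the question asks for something better than nested ifs and string.Replace(), here's a sketch of a server-side toggle that rewrites just the one parameter. This is not from the answers above - it's a suggested approach using System.Web's HttpUtility.ParseQueryString, which handles the encoding and the ?/& plumbing for you:

// Build a link that flips the current "showall" value.
var query = System.Web.HttpUtility.ParseQueryString(Request.Url.Query);

bool showAll;
bool.TryParse(query["showall"], out showAll); // false if absent or malformed

query["showall"] = (!showAll).ToString();     // toggle (adds the key if missing)

string toggleUrl = Request.Url.AbsolutePath + "?" + query.ToString();

The collection returned by ParseQueryString re-serializes itself with proper URL encoding when ToString() is called, so no manual string surgery is needed.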
reassign value to query string parameter
I have a "showall" query string parameter in the url, the parameter is being added dynamically when "Show All/Show Pages" button is clicked. I want the ability to toggle "showall" query string parameter value depending on user clicking the "Show All/Show Pages" button. I'm doing some nested "if's" and string.Replace() on the url, is there a better way? All manipulations are done on the server. p.s. Toran, good suggestion, however I HAVE TO USE URL PARAMETER due to some other issues.
[ "Just to elaborate on Toran's answer:\nUse:\n<asp:HiddenField ID=\"ShowAll\" Value=\"False\" runat=\"server\" />\nTo toggle your state:\nprotected void ToggleState(object sender, EventArgs e)\n{\n //parse string as boolean, invert, and convert back to string\n ShowAll.Value = (!Boolean.Parse(ShowAll.Value)).ToString();\n}\n\n", "Another dirty alternative could be just to use a hidden input and set that on/off instead of manipulating the url.\n", "Would it be too much of an effort just to have the value hard-coded into the URL (I know it's not too nice) with a default value or true then just have \nbooleanVar = !booleanVar; \nrun on every page load?\nAt least that would move away from the need of having nested ifs to manipulate the URL. \n", "I am not sure based upon the question, but isn't this where HTTPHandlers come to the rescue? Shouldn't you be handling the variable alteration on the object prior to page rendering in this case then?\n" ]
[ 2, 0, 0, 0 ]
[]
[]
[ "c#", "query_string" ]
stackoverflow_0000044999_c#_query_string.txt
Q: Substitution Control at the User Control Level? I am trying to create some cached user controls. Basically Header and Footer are static, except the footer has one link that reads in the URL of the page and puts it into the javascript for sending a link to a friend. So I need that link to be dynamic. I set up a substitution control and had the static method return the dynamic link. I go to run it and find that substitution controls are not supported at the user control level. Is there any workaround for this? Is there another control like Substitution that works on user controls that I am not aware of? A: I would forget about server-side caching in this instance and rely on the simplicity of client-side caching. Your Javascript code could be client-side cached just as easily as HTML, either by linking to an external javascript file and adding the necessary headers/expiries, or by embedding the script within the page itself and ensuring the page itself is cached. Another possible method is making an Ajax call on page load to fetch the generated footer complete with the correct link. This may take time on the first page load, but subsequent ajax requests would be cached on the client, thus seeing no penalty for future requests.
Substitution Control at the User Control Level?
I am trying to create some cached user controls. Basically Header and Footer are static, except the footer has one link that reads in the URL of the page and puts it into the javascript for sending a link to a friend. So I need that link to be dynamic. I set up a substitution control and had the static method return the dynamic link. I go to run it and find that substitution controls are not supported at the user control level. Is there any workaround for this? Is there another control like Substitution that works on user controls that I am not aware of?
[ "I would forget about server side caching in this instance and rely on the simplicity of client side caching.\nYour Javascript code could be client side cached just as easily as HTML, either by linking to an external javascript file and adding the necessary headers/expiries, or by embedding the script within the page itself and ensuring the page itself is cached.\nAnother possible method is by making an Ajax call on the page load to fetch the generated footer complete with correct link. This may take time on the first page load, but subsequent ajax requests would be cached on the client, thus seeing no penalty to future requests.\n" ]
[ 1 ]
[]
[]
[ "asp.net", "caching", "user_controls" ]
stackoverflow_0000044851_asp.net_caching_user_controls.txt
Q: How do you retrieve the commit message and file list for a particular revision? I need to deploy a few files that were checked in some time ago (I can't remember the exact ones), so I'm looking to get a list so I can deploy just those files. What is the svn command to do this? A: @Dana & @John Actually, svn log -v -r <#> http://my.svn.server/repository-root will work and show you all modified files within this repository. Or if you wanted this to work from within a working copy, you could use the output of svn info | grep 'Repository Root' or something to find the actual repository root. --verbose is the same as -v, and those options simply list all of the affected files. A: svn log has a --verbose parameter. I don't have a repository here to test with, but does that return a list of modified files? You can also use svn diff -r <revision> to retrieve the full change details, which you can parse or read manually to find out which files were changed.
How do you retrieve the commit message and file list for a particular revision?
I need to deploy a few files that were checked in some time ago (I can't remember the exact ones), so I'm looking to get a list so I can deploy just those files. What is the svn command to do this?
[ "@Dana & @John\nActually, svn log -v -r <#> http://my.svn.server/repository-root will work and show you all modified files within this repository. Or if you wanted this to work from within a working copy, you could use the output of svn info | grep Repository Root or something to find the actual repository root.\n--verbose is the same as -v, and those options simply list all of the affected files.\n", "svn log has a --verbose parameter. I don't have a repository here to test with, but does that return a list of modified files?\nYou can also use svn diff -r <revision> to retrieve the full change details, which you can parse or read manually to find out which files were changed.\n" ]
[ 8, 4 ]
[]
[]
[ "svn", "version_control" ]
stackoverflow_0000045042_svn_version_control.txt
Q: Categories of controllers in MVC Routing? (Duplicate Controller names in separate Namespaces) I'm looking for some examples or samples of routing for the following sort of scenario: The general example of doing things is:

{controller}/{action}/{id}

So in the scenario of doing a product search for a store you'd have:

public class ProductsController: Controller
{
    public ActionResult Search(string id) // id being the search string
    {
        ...
    }
}

Say you had a few stores to do this and you wanted that consistently; is there any way to then have:

{category}/{controller}/{action}/{id}

So that you could have a particular search for a particular store, but use a different search method for a different store? (If you required the store name to be a higher priority than the function itself in the URL.) Or would it come down to:

public class ProductsController: Controller
{
    public ActionResult Search(int category, string id) // id being the search string
    {
        if(category == 1) return Category1Search();
        if(category == 2) return Category2Search();
        ...
    }
}

It may not be a great example, but basically the idea is to use the same controller name and therefore have a simple URL across a few different scenarios, or are you kind of stuck with requiring unique controller names, and no way to put them in slightly different namespaces/directories? Edit to add: The other reason I want this is because I might want a URL that has the categories, and certain controllers will only work under certain categories. I.e.:

/this/search/items/search+term <-- works
/that/search/items/search+term <-- won't work - because the search controller isn't allowed.

A: I actually found it not even by searching, but by scanning through the ASP.NET forums in this question. Using this you can have controllers of the same name under any part of the namespace, so long as you qualify which routes belong to which namespaces (you can have multiple namespaces per route if need be!) But from here, you can put in a directory under your Controllers folder, so if your controller namespace was "MyWebShop.Controllers", you'd put in a directory of "Shop1" and the namespace would be "MyWebShop.Controllers.Shop1". Then this works:

public static void RegisterRoutes(RouteCollection routes)
{
    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

    var shop1namespace = new RouteValueDictionary();
    shop1namespace.Add("namespaces", new HashSet<string>(new string[]
    {
        "MyWebShop.Controllers.Shop1"
    }));

    routes.Add("Shop1", new Route("Shop1/{controller}/{action}/{id}", new MvcRouteHandler())
    {
        Defaults = new RouteValueDictionary(new
        {
            action = "Index",
            id = (string)null
        }),
        DataTokens = shop1namespace
    });

    var shop2namespace = new RouteValueDictionary();
    shop2namespace.Add("namespaces", new HashSet<string>(new string[]
    {
        "MyWebShop.Controllers.Shop2"
    }));

    routes.Add("Shop2", new Route("Shop2/{controller}/{action}/{id}", new MvcRouteHandler())
    {
        Defaults = new RouteValueDictionary(new
        {
            action = "Index",
            id = (string)null
        }),
        DataTokens = shop2namespace
    });

    var defaultnamespace = new RouteValueDictionary();
    defaultnamespace.Add("namespaces", new HashSet<string>(new string[]
    {
        "MyWebShop.Controllers"
    }));

    routes.Add("Default", new Route("{controller}/{action}/{id}", new MvcRouteHandler())
    {
        Defaults = new RouteValueDictionary(new { controller = "Home", action = "Index", id = "" }),
        DataTokens = defaultnamespace
    });
}

The only other thing is that it will reference a view still in the base directory, so if you put the view into directories to match, you will have to put the view name in when you return it inside the controller. A: The best way to do this without any compromises would be to implement your own ControllerFactory by implementing IControllerFactory. The CreateController method that you will implement handles creating the controller instance to handle the request by the RouteHandler and the ControllerActionInvoker. The convention is to use the name of the controller when creating it; therefore you will need to override this functionality. This will be where you put your custom logic for creating the controller based on the route, since you will have multiple controllers with the same name but in different folders. Then you will need to register your custom controller factory in the application startup, just like your routes. Another area you will need to take into consideration is finding your views when creating the controller. If you plan on using the same view for all of them, then you shouldn't have to do anything different from the convention being used. If you plan on organizing your views also, then you will need to create your own ViewLocator as well and assign it to the controller when creating it in your controller factory. To get an idea of code, there are a few questions I have answered on SO that relate to this question, but this one is different to some degree, because the controller names will be the same. I included links for reference:

Views in separate assemblies in ASP.NET MVC
asp.net mvc - subfolders

Another route, which may require some compromises, would be to use the new AcceptVerbs attribute. Check this question out for more details. I haven't played with this new functionality yet, but it could be another route.
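For the second answer's ControllerFactory suggestion, a minimal sketch of what that might look like. Treat this as an assumption-laden outline rather than working code for the 2008 preview builds — the IControllerFactory interface shown here matches the later released API, and the class, namespace convention, and route value name are hypothetical:

public class CategoryControllerFactory : System.Web.Mvc.IControllerFactory
{
    public System.Web.Mvc.IController CreateController(
        System.Web.Routing.RequestContext requestContext, string controllerName)
    {
        // Pull the {category} segment captured by the route and use it as a sub-namespace.
        string category = (string)requestContext.RouteData.Values["category"];
        string typeName = string.Format("MyWebShop.Controllers.{0}.{1}Controller",
            category, controllerName);

        Type type = Type.GetType(typeName, true /* throw if missing */);
        return (System.Web.Mvc.IController)Activator.CreateInstance(type);
    }

    public void ReleaseController(System.Web.Mvc.IController controller)
    {
        var disposable = controller as IDisposable;
        if (disposable != null) disposable.Dispose();
    }
}

// registered once at startup, e.g. in Application_Start:
// ControllerBuilder.Current.SetControllerFactory(new CategoryControllerFactory());

This is how "/this/search/..." and "/that/search/..." can resolve to two different SearchController classes while sharing one route pattern.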
Categories of controllers in MVC Routing? (Duplicate Controller names in separate Namespaces)
I'm looking for some examples or samples of routing for the following sort of scenario: The general example of doing things is: {controller}/{action}/{id} So in the scenario of doing a product search for a store you'd have: public class ProductsController: Controller { public ActionResult Search(string id) // id being the search string { ... } } Say you had a few stores to do this and you wanted that consistently, is there any way to then have: {category}/{controller}/{action}/{id} So that you could have a particular search for a particular store, but use a different search method for a different store? (If you required the store name to be a higher priority than the function itself in the url) Or would it come down to: public class ProductsController: Controller { public ActionResult Search(int category, string id) // id being the search string { if(category == 1) return Category1Search(); if(category == 2) return Category2Search(); ... } } It may not be a great example, but basically the idea is to use the same controller name and therefore have a simple URL across a few different scenarios, or are you kind of stuck with requiring unique controller names, and no way to put them in slightly different namespaces/directories? Edit to add: The other reason I want this is because I might want a url that has the categories, and that certain controllers will only work under certain categories. IE: /this/search/items/search+term <-- works /that/search/items/search+term <-- won't work - because the search controller isn't allowed.
[ "I actually found it not even by searching, but by scanning through the ASP .NET forums in this question.\nUsing this you can have the controllers of the same name under any part of the namespace, so long as you qualify which routes belong to which namespaces (you can have multiple namespaces per routes if you need be!)\nBut from here, you can put in a directory under your controller, so if your controller was \"MyWebShop.Controllers\", you'd put a directory of \"Shop1\" and the namespace would be \"MyWebShop.Controllers.Shop1\"\nThen this works:\n public static void RegisterRoutes(RouteCollection routes)\n {\n routes.IgnoreRoute(\"{resource}.axd/{*pathInfo}\");\n\n var shop1namespace = new RouteValueDictionary();\n shop1namespace.Add(\"namespaces\", new HashSet<string>(new string[] \n { \n \"MyWebShop.Controllers.Shop1\"\n }));\n\n routes.Add(\"Shop1\", new Route(\"Shop1/{controller}/{action}/{id}\", new MvcRouteHandler())\n {\n Defaults = new RouteValueDictionary(new\n {\n action = \"Index\",\n id = (string)null\n }),\n DataTokens = shop1namespace\n });\n\n var shop2namespace = new RouteValueDictionary();\n shop2namespace.Add(\"namespaces\", new HashSet<string>(new string[] \n { \n \"MyWebShop.Controllers.Shop2\"\n }));\n\n routes.Add(\"Shop2\", new Route(\"Shop2/{controller}/{action}/{id}\", new MvcRouteHandler())\n {\n Defaults = new RouteValueDictionary(new\n {\n action = \"Index\",\n id = (string)null\n }),\n DataTokens = shop2namespace\n });\n\n var defaultnamespace = new RouteValueDictionary();\n defaultnamespace.Add(\"namespaces\", new HashSet<string>(new string[] \n { \n \"MyWebShop.Controllers\"\n }));\n\n routes.Add(\"Default\", new Route(\"{controller}/{action}/{id}\", new MvcRouteHandler())\n {\n Defaults = new RouteValueDictionary(new { controller = \"Home\", action = \"Index\", id = \"\" }),\n DataTokens = defaultnamespace \n });\n }\n\nThe only other thing is that it will reference a view still in the base directory, so if you put the view into directories to match, you will have to put the view name in when you return it inside the controller.\n", "The best way to do this without any compromises would be to implement your own ControllerFactory by inheriting off of IControllerFactory. The CreateController method that you will implement handles creating the controller instance to handle the request by the RouteHandler and the ControllerActionInvoker. The convention is to use the name of the controller, when creating it, therefore you will need to override this functionality. This will be where you put your custom logic for creating the controller based on the route since you will have multiple controllers with the same name, but in different folders. Then you will need to register your custom controller factory in the application startup, just like your routes.\nAnother area you will need to take into consideration is finding your views when creating the controller. If you plan on using the same view for all of them, then you shouldn't have to do anything different than the convention being used. If you plan on organizing your views also, then you will need to create your own ViewLocator also and assign it to the controller when creating it in your controller factory.\nTo get an idea of code, there are a few questions I have answered on SO that relate to this question, but this one is different to some degree, because the controller names will be the same. 
I included links for reference.\n\nViews in separate assemblies in ASP.NET MVC\nasp.net mvc - subfolders\n\nAnother route, but may require some compromises will be to use the new AcceptVerbs attribute. Check this question out for more details. I haven't played with this new functionality yet, but it could be another route.\n" ]
[ 3, 1 ]
[]
[]
[ "asp.net_mvc", "controller", "routing" ]
stackoverflow_0000043201_asp.net_mvc_controller_routing.txt
Q: Is there any easy way to determine what factors are contributing to the size of an HTML element? For example I have a situation where I have something like this (contrived) example: <div id="outer" style="margin: auto> <div id="inner1" style="float: left">content</div> <div id="inner2" style="float: left">content</div> <div id="inner3" style="float: left">content</div> <br style="clear: both"/> </div> where there are no widths set on any elements, and what I want is #inner1, #inner2 and #inner3 to appear next to each other horizontally inside #outer but what is happening is that #inner1 and #inner2 are appearing next to each other and then #inner3 is wrapping on to the next line. In the actual page where this is happening there is a lot more going on, but I have inspected all of the elements very carefully with Firebug and do not understand why the #inner3 element is not appearing on the same line as #inner1 and #inner2 and causing #outer to get wider. So, my question is: Is there any way to determine why the browser is sizing #outer the way it is, or why it is choosing to wrap #inner3 even though there is plenty of room to put it on the previous "line"? Baring specific solutions to this problem, what tips or techniques do you hardcore HTML/CSS/Web UI guys have for a poor back end developer who has found himself working on the front end? A: It would be nice to have a tool that could tell you exactly what all your layout problems are, but in this case the browser rendered the page exactly how it should have -- the combined width of the floats exceeded the width of the containing block, so the last one drops to a new line (this is slightly different than the IE6 expanding box/float drop problem which is typically caused by content inside the float, not the floats themselves). So in this case, there was nothing wrong with your page. Debugging this is simply a matter of walking through your HTML in Firebug and figuring out which children of a block is exceeding the block's width. Firebug provides plenty of information for this purpose, although sometimes I need to use a calculator. I think what you described about being able to see which elements constrain other elements would simply be too complex and overwhelming, especially for elements that are removed from normal flow (such as floats or positioned elements). Also, a deeper understanding of how CSS layout helps a lot as well. It can get pretty complicated. For example, it is generally recommended to assign explicit widths to floated elements -- the W3C CSS2 spec states that floats need to have an explicit width, and does not provide instructions of what to do without it. I think most modern browsers use the "shrink to fit" method, and will constrain themselves to the width of the content. However, this is not guaranteed in older browsers, and in something like a 3-column layout, you'll be at the mercy of at the width of content inside the floats. Also, if you're striving for IE6 compatibility, there are a number of float related bugs that could also cause similar problems. A: Try the Web Developer Plugin for Firefox. Specifically, the Information -> Display Block Size and Outline -> Outline Block Level Elements options. This will allow to see the borders of your elements, and their size as Firefox sees them. A: In Firebug's CSS tab, you can see what style rules apply to a selected elements in the cascading order. This may or may not help you in your problem. 
My guess would be that something about the content of #inner3 is causing it to wrap below the first line, and the #outer is just getting sized to accommodate the smaller needed space. A: So I found the answer in my specific case -- there was a div much further up in the DOM that had specific left/right margins set which compressed it and everything in it. But the heart of the question is really how can you easily debug this sort of issue? What would be perfect in this case for example would be something in Firebug that, when hovering over an element's size in the layout panel would display a tool tip that says something like "width constrained by outer element X; height constrained by style Z on element Q" or "width contributed to by inner elements A, B and C". I wish I had the time to write something like this, although I suspect it would be difficult (if not impossible) to get that information out of Firefox's rendering engine.
Is there any easy way to determine what factors are contributing to the size of an HTML element?
For example I have a situation where I have something like this (contrived) example: <div id="outer" style="margin: auto"> <div id="inner1" style="float: left">content</div> <div id="inner2" style="float: left">content</div> <div id="inner3" style="float: left">content</div> <br style="clear: both"/> </div> where there are no widths set on any elements, and what I want is #inner1, #inner2 and #inner3 to appear next to each other horizontally inside #outer but what is happening is that #inner1 and #inner2 are appearing next to each other and then #inner3 is wrapping on to the next line. In the actual page where this is happening there is a lot more going on, but I have inspected all of the elements very carefully with Firebug and do not understand why the #inner3 element is not appearing on the same line as #inner1 and #inner2 and causing #outer to get wider. So, my question is: Is there any way to determine why the browser is sizing #outer the way it is, or why it is choosing to wrap #inner3 even though there is plenty of room to put it on the previous "line"? Barring specific solutions to this problem, what tips or techniques do you hardcore HTML/CSS/Web UI guys have for a poor back end developer who has found himself working on the front end?
[ "It would be nice to have a tool that could tell you exactly what all your layout problems are, but in this case the browser rendered the page exactly how it should have -- the combined width of the floats exceeded the width of the containing block, so the last one drops to a new line (this is slightly different than the IE6 expanding box/float drop problem which is typically caused by content inside the float, not the floats themselves). So in this case, there was nothing wrong with your page.\nDebugging this is simply a matter of walking through your HTML in Firebug and figuring out which children of a block is exceeding the block's width. Firebug provides plenty of information for this purpose, although sometimes I need to use a calculator. I think what you described about being able to see which elements constrain other elements would simply be too complex and overwhelming, especially for elements that are removed from normal flow (such as floats or positioned elements). \nAlso, a deeper understanding of how CSS layout helps a lot as well. It can get pretty complicated.\nFor example, it is generally recommended to assign explicit widths to floated elements -- the W3C CSS2 spec states that floats need to have an explicit width, and does not provide instructions of what to do without it. I think most modern browsers use the \"shrink to fit\" method, and will constrain themselves to the width of the content. However, this is not guaranteed in older browsers, and in something like a 3-column layout, you'll be at the mercy of at the width of content inside the floats.\nAlso, if you're striving for IE6 compatibility, there are a number of float related bugs that could also cause similar problems. \n", "Try the Web Developer Plugin for Firefox. Specifically, the Information -> Display Block Size and Outline -> Outline Block Level Elements options. This will allow to see the borders of your elements, and their size as Firefox sees them.\n", "In Firebug's CSS tab, you can see what style rules apply to a selected elements in the cascading order. This may or may not help you in your problem.\nMy guess would be that something about the content of #inner3 is causing it to wrap below the first line, and the #outer is just getting sized to accommodate the smaller needed space.\n", "So I found the answer in my specific case -- there was a div much further up in the DOM that had specific left/right margins set which compressed it and everything in it. \nBut the heart of the question is really how can you easily debug this sort of issue? What would be perfect in this case for example would be something in Firebug that, when hovering over an element's size in the layout panel would display a tool tip that says something like \"width constrained by outer element X; height constrained by style Z on element Q\" or \"width contributed to by inner elements A, B and C\". \nI wish I had the time to write something like this, although I suspect it would be difficult (if not impossible) to get that information out of Firefox's rendering engine.\n" ]
[ 4, 2, 0, 0 ]
[]
[]
[ "css", "html" ]
stackoverflow_0000044864_css_html.txt
Q: IRAPIStream COM Interface in .NET I'm trying to use the OpenNETCF RAPI class to interact with a Windows Mobile device using the RAPI.Invoke() method. According to the following article: http://blog.opennetcf.com/ncowburn/2007/07/27/HOWTORetrieveTheDeviceIDFromTheDesktop.aspx You can do the communication in either block or stream mode. I have used block mode before, but now I need to do something a bit more complicated with a lot more data and continuous communication and therefore need to use the stream mode. Unfortunately on that article, and basically everywhere else, there is no explanation of how to use IRAPIStream in .NET. I have found C/C++ documentation, but my desktop app needs to be written in C#. Does anyone know how to properly implement the IRAPIStream COM interface in .NET? And better yet, has anyone actually used RAPI.Invoke() with IRAPIStream before? Examples would be much appreciated. Edit: Upon a closer look at the RAPI class documentation, I realized that the Invoke() method doesn't support the stream interface.... so OpenNETCF is likely out, but maybe there is still a way to do it? A: I have found that generally the most performant and stable way to push/pull large amounts of data off a device over ActiveSync is to use a socket. Early on we used CeRapiInvoke and a stream to pull data off the device but ditched this early on in favour of using TCP/IP over a socket.
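Since RAPI.Invoke() turned out not to expose the stream interface, a minimal desktop-side sketch of the socket approach from the answer follows. The address and port are assumptions: the device application must already be listening, and over ActiveSync the device is typically reachable at the virtual address 169.254.2.1 (verify for your setup): using System; using System.IO; using System.Net.Sockets; class DeviceStreamClient { static void Main() { using (TcpClient client = new TcpClient("169.254.2.1", 5555)) using (NetworkStream stream = client.GetStream()) { BinaryReader reader = new BinaryReader(stream); int length = reader.ReadInt32(); // device sends a 4-byte length prefix... byte[] payload = reader.ReadBytes(length); // ...followed by the payload Console.WriteLine("Received {0} bytes", payload.Length); } } } The framing (a length prefix here) is only a design choice; for genuinely continuous communication you would loop on the read and agree on a protocol with the device-side code.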
IRAPIStream COM Interface in .NET
I'm trying to use the OpenNETCF RAPI class to interact with a Windows Mobile device using the RAPI.Invoke() method. According to the following article: http://blog.opennetcf.com/ncowburn/2007/07/27/HOWTORetrieveTheDeviceIDFromTheDesktop.aspx You can do the communication in either block or stream mode. I have used block mode before, but now I need to do something a bit more complicated with a lot more data and continuous communication and therefore need to use the stream mode. Unfortunately on that article, and basically everywhere else, there is no explanation of how to use IRAPIStream in .NET. I have found C/C++ documentation, but my desktop app needs to be written in C#. Does anyone know how to properly implement the IRAPIStream COM interface in .NET? And better yet, has anyone actually used RAPI.Invoke() with IRAPIStream before? Examples would be much appreciated. Edit: Upon a closer look at the RAPI class documentation, I realized that the Invoke() method doesn't support the stream interface.... so OpenNETCF is likely out, but maybe there is still a way to do it?
[ "I have found that generally the most performant and stable way to push/pull large amounts of data of a device over activesync is to use a socket.\nEarly on we used CeRapiInvoke and a stream to pull data down of the device but ditched this early on in favour of using tcp/ip over a socket.\n" ]
[ 1 ]
[]
[]
[ ".net", "c#", "compact_framework", "rapi", "windows_mobile" ]
stackoverflow_0000041009_.net_c#_compact_framework_rapi_windows_mobile.txt
Q: ASP.NET MVC - How do action names affect the url? Using MVC out of the box I found the generated URLs can be misleading and I wanted to know if this can be fixed or if my approach/understanding is wrong. Suppose I have a CreateEgg page, which has a form on it, and once the form is filled in and submitted the user is taken to a ListEggs page with the new egg in it. So my egg controller will look something like this: public class EggController : Controller { public void Add() { //do stuff RenderView("CreateEgg", viewData); } public void Create() { //do stuff RenderView("ListEggs", viewData); } } So my first page will have a url of something like http://localhost/egg/add and the form on the page will have an action of: using (Html.Form<EggController>(c => c.Create()) Meaning the second page will have a url of http://localhost/Egg/Create, to me this is misleading, the action should be called Create, because I'm creating the egg, but a list view is being displayed so the url of http://localhost/Egg/List would make more sense. How do I achieve this without making my view or action names misleading? A: The problem is your action does two things, violating the Single Responsibility Principle. If your Create action redirects to the List action when it's done creating the item, then this problem disappears. A: ActionVerbs outlined in Scott Gu's post seem to be a good approach; Scott says: You can create overloaded implementations of action methods, and use a new [AcceptVerbs] attribute to have ASP.NET MVC filter how they are dispatched. For example, below we can declare two Create action methods - one that will be called in GET scenarios, and one that will be called in POST scenarios [AcceptVerbs("GET")] public object Create() {} [AcceptVerbs("POST")] public object Create(string productName, Decimal unitPrice) {} A: How A Method Becomes An Action by Phil Haack
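A minimal sketch of the redirect fix from the first answer, combined with the POST-only Create from the second; note it is written against the released MVC API (the question's RenderView calls are from an early preview): public class EggController : Controller { public ActionResult Add() { return View("CreateEgg"); } [AcceptVerbs("POST")] public ActionResult Create() { // ... create the egg ... // Redirect rather than render, so the browser ends up on /Egg/List // and the URL matches what is actually displayed. return RedirectToAction("List"); } public ActionResult List() { // ... load the eggs into the model ... return View("ListEggs"); } }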
ASP.NET MVC - How do action names affect the url?
Using MVC out of the box I found the generated URLs can be misleading and I wanted to know if this can be fixed or if my approach/understanding is wrong. Suppose I have a CreateEgg page, which has a form on it, and once the form is filled in and submitted the user is taken to a ListEggs page with the new egg in it. So my egg controller will look something like this: public class EggController : Controller { public void Add() { //do stuff RenderView("CreateEgg", viewData); } public void Create() { //do stuff RenderView("ListEggs", viewData); } } So my first page will have a url of something like http://localhost/egg/add and the form on the page will have an action of: using (Html.Form<EggController>(c => c.Create()) Meaning the second page will have a url of http://localhost/Egg/Create, to me this is misleading, the action should be called Create, because I'm creating the egg, but a list view is being displayed so the url of http://localhost/Egg/List would make more sense. How do I achieve this without making my view or action names misleading?
[ "The problem is your action does two things, violating the Single Responsibility Principle.\nIf your Create action redirects to the List action when it's done creating the item, then this problem disappears.\n", "ActionVerbs Outlined in Scott Gu's post seem to be a good approch;\nScott says:\n\nYou can create overloaded\n implementations of action methods, and\n use a new [AcceptVerbs] attribute to\n have ASP.NET MVC filter how they are\n dispatched. For example, below we can\n declare two Create action methods -\n one that will be called in GET\n scenarios, and one that will be called\n in POST scenarios\n\n[AcceptVerbs(\"GET\")]\npublic object Create() {}\n[AcceptVerbs(\"POST\")]\npublic object Create(string productName, Decimal unitPrice) {}\n\n", "How A Method Becomes An Action by Phil Haack\n" ]
[ 4, 0, 0 ]
[]
[]
[ "asp.net_mvc", "c#" ]
stackoverflow_0000038431_asp.net_mvc_c#.txt
Q: What is the best way to send html/image email? Do you attach the images? Use absolute urls? How do you best avoid getting flagged as spam? A: One of the biggest causes, that I have found, for email to be flagged as spam is DNS. Make sure the domain / MX records from which you are sending the email actually resolve correctly back from the server used for sending. As for images, you could attach them, but the most common way is to host them and use absolute urls. Primarily this is a bandwidth issue - you have to figure you're going to get an open rate of 10 - 15%: if you have to attach all the assets to every email, 85% of the bandwidth you'll use will be wasted. A: You attach the emails then reference them in your HTML like so: <img src="cid:imagefilename.jpg" /> Outlook, at least, recognizes this as a reference to an attached image and dumps it in appropriately. A: You'll want to use absolute URLs to link out to images on a server. Users won't want to download your attachments. Also most email clients will not displays images by default, so it's a good idea to keep the really important content as text. Email clients generally all use very different rendering methods. For example, Outlook 2007 uses Word's HTML rendering engine, whereas previous versions used Internet Explorer. Do be aware that CSS support is also very limited to in emails. Most clients, especially web mail, will strip out everything outside of the <body> tag, as well as <style> tags. This means that external or embedded CSS will not work, and that inline styles are the safest bet (the style="" attribute). There is also poor support for many CSS rules in Outlook 2007. This means that a lot people have returned to using tables for laying out email. As it was pointed out, Campaign Monitor is an excellent resource, and I especially recommend their CSS Compatibility Chart A: Campaign Monitor is a great resources for html email: http://www.campaignmonitor.com/resources/#building Also http://www.email-standards.org/, but seems down right now.
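A sketch combining the answers above using System.Net.Mail (.NET 2.0+): the image is attached as a linked resource and referenced through cid:, and the host, addresses and file path are placeholders: using System.Net.Mail; using System.Net.Mime; class HtmlMailSender { static void Send() { MailMessage message = new MailMessage("from@example.com", "to@example.com"); message.Subject = "Newsletter"; // Keep the important content readable as text, since many clients block images. string html = "<html><body><img src=\"cid:logo\" /><p>Readable without images.</p></body></html>"; AlternateView htmlView = AlternateView.CreateAlternateViewFromString(html, null, MediaTypeNames.Text.Html); LinkedResource logo = new LinkedResource("logo.jpg", MediaTypeNames.Image.Jpeg); logo.ContentId = "logo"; // matches the cid: reference in the markup htmlView.LinkedResources.Add(logo); message.AlternateViews.Add(htmlView); new SmtpClient("smtp.example.com").Send(message); } } For the absolute-URL approach you would instead point the img src at a hosted image and skip the LinkedResource entirely, trading attachment bandwidth for a hosting dependency.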
What is the best way to send html/image email?
Do you attach the images? Use absolute urls? How do you best avoid getting flagged as spam?
[ "One of the biggest causes, that I have found, for email to be flagged as spam is DNS. Make sure the domain / MX records from which you are sending the email actually resolve correctly back from the server used for sending.\nAs for images, you could attach them, but the most common way is to host them and use absolute urls. Primarily this is a bandwidth issue - you have to figure you're going to get an open rate of 10 - 15%: if you have to attach all the assets to every email, 85% of the bandwidth you'll use will be wasted.\n", "You attach the emails then reference them in your HTML like so:\n<img src=\"cid:imagefilename.jpg\" />\n\nOutlook, at least, recognizes this as a reference to an attached image and dumps it in appropriately.\n", "You'll want to use absolute URLs to link out to images on a server. Users won't want to download your attachments. Also most email clients will not displays images by default, so it's a good idea to keep the really important content as text.\nEmail clients generally all use very different rendering methods. For example, Outlook 2007 uses Word's HTML rendering engine, whereas previous versions used Internet Explorer.\nDo be aware that CSS support is also very limited to in emails. Most clients, especially web mail, will strip out everything outside of the <body> tag, as well as <style> tags. This means that external or embedded CSS will not work, and that inline styles are the safest bet (the style=\"\" attribute). There is also poor support for many CSS rules in Outlook 2007. This means that a lot people have returned to using tables for laying out email.\nAs it was pointed out, Campaign Monitor is an excellent resource, and I especially recommend their CSS Compatibility Chart\n", "Campaign Monitor is a great resources for html email:\nhttp://www.campaignmonitor.com/resources/#building\nAlso http://www.email-standards.org/, but seems down right now.\n" ]
[ 14, 9, 5, 2 ]
[]
[]
[ "email", "html" ]
stackoverflow_0000044532_email_html.txt
Q: Python packages - import by class, not file Say I have the following file structure: app/ app.py controllers/ __init__.py project.py plugin.py If app/controllers/project.py defines a class Project, app.py would import it like this: from app.controllers.project import Project I'd like to just be able to do: from app.controllers import Project How would this be done? A: You need to put from project import Project in controllers/__init__.py. Note that when Absolute imports become the default (Python 2.7?), you will want to add a dot before the module name (to avoid collisions with a top-level model named project), i.e., from .project import Project
Python packages - import by class, not file
Say I have the following file structure: app/ app.py controllers/ __init__.py project.py plugin.py If app/controllers/project.py defines a class Project, app.py would import it like this: from app.controllers.project import Project I'd like to just be able to do: from app.controllers import Project How would this be done?
[ "You need to put\nfrom project import Project\n\nin controllers/__init__.py.\nNote that when Absolute imports become the default (Python 2.7?), you will want to add a dot before the module name (to avoid collisions with a top-level model named project), i.e.,\nfrom .project import Project\n\n" ]
[ 103 ]
[]
[]
[ "package", "python" ]
stackoverflow_0000045122_package_python.txt
Q: Interfaces and Versioning I am designing a new System and I have a lot of Interfaces that will grow over time with the system. What is the best practice to name this interfaces ISomethingV01 ISomethingV02 etc and I do this public interface ISomething{ void method(); } then I have to add method 2 so now what I do? public interface ISomethingV2:ISomething{ void method2(); } or same other way? A: I think you're overrusing interfaces. Meyer and Martin told us: "Open for extension but closed for modification!" and then Cwalina (et al) reiterated: From Framework Design Guidelines... In general, classes are the preferred construct for exposing abstractions. The main drawback of interfaces is that they are much less flexible than classes when it comes to allowing for evolution of APIs. Once you ship an interface, the set of its members is fixed forever. Any additions to the interface would break existing types implementing the interface. A class offers much more flexibility. You can add members to classes that have already shipped. As long as the method is not abstract (i.e., as long as you provide a default implementation of the method), any existing derived classes continue to function unchanged. A: Ideally, you shouldn't be changing your interfaces very often (if at all). If you do need to change an interface, you should reconsider its purpose and see if the original name still applies to it. If you still feel that the interfaces will change, and the interfaces changes are small (adding items) and you have control of the whole code base, then you should just modify the interface and fix all the compilation errors. If your change is a change in how the interface is to be used, then you need to create a separate interface (most likely with a different name) to support that alternative usage pattern. Even if you end up creating ISomething, ISomething2 and ISomething3, the consumers of your interfaces will have a hard time figuring out what the differences are between the interfaces. When should they use ISomething2 and when should they use ISomething3? Then you have to go about the process of obsoleting ISomething and ISomething2. A: I agree with Garo Yeriazarian, changing interface is a serious decision. Also, if you want to promote usage of new version of interface you should mark old version as obsolete. In .NET you can add ObsoleteAttribute. A: The purpose of an interface is to define an abstract pattern that at type must implement. It would be better implement as: public interface ISomething public class Something1 : ISomething public class Something2 : ISomething You do not gain anything in the form of code reusability or scalable design by creating multiple versions of the same interface. A: I don't know why people downvote your post. I think that good naming guidelines are very important. If you need to maintain compatibility with prev. version of the same interface consider using inheritance. If you need to introduce new version of interface consider following rule: Try to add meaningful suffix to you interface. If it's not possible to create concise name, consider adding version number.
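A minimal sketch of the ObsoleteAttribute suggestion from the answers, combined with extending via inheritance (all names illustrative): using System; public interface ISomething { void Method(); } public interface ISomethingV2 : ISomething { void Method2(); } public class Widget : ISomethingV2 { public void Method() { } public void Method2() { } } public static class Api { // Callers still passing the old interface get a compile-time warning // steering them toward the V2 overload, without breaking their build. [Obsolete("Use Run(ISomethingV2) instead.")] public static void Run(ISomething s) { s.Method(); } public static void Run(ISomethingV2 s) { s.Method(); s.Method2(); } }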
Interfaces and Versioning
I am designing a new System and I have a lot of Interfaces that will grow over time with the system. What is the best practice to name these interfaces ISomethingV01 ISomethingV02 etc and I do this public interface ISomething{ void method(); } then I have to add method 2 so now what do I do? public interface ISomethingV2:ISomething{ void method2(); } or some other way?
[ "I think you're overrusing interfaces. \nMeyer and Martin told us: \"Open for extension but closed for modification!\"\nand then Cwalina (et al) reiterated: \nFrom Framework Design Guidelines...\n\nIn general, classes are the preferred\n construct for exposing abstractions.\n The main drawback of interfaces is\n that they are much less flexible than\n classes when it comes to allowing for\n evolution of APIs. Once you ship an\n interface, the set of its members is\n fixed forever. Any additions to the\n interface would break existing types\n implementing the interface.\nA class offers much more flexibility.\n You can add members to classes that\n have already shipped. As long as the\n method is not abstract (i.e., as long\n as you provide a default\n implementation of the method), any\n existing derived classes continue to\n function unchanged.\n\n\n", "Ideally, you shouldn't be changing your interfaces very often (if at all). If you do need to change an interface, you should reconsider its purpose and see if the original name still applies to it.\nIf you still feel that the interfaces will change, and the interfaces changes are small (adding items) and you have control of the whole code base, then you should just modify the interface and fix all the compilation errors.\nIf your change is a change in how the interface is to be used, then you need to create a separate interface (most likely with a different name) to support that alternative usage pattern.\nEven if you end up creating ISomething, ISomething2 and ISomething3, the consumers of your interfaces will have a hard time figuring out what the differences are between the interfaces. When should they use ISomething2 and when should they use ISomething3? Then you have to go about the process of obsoleting ISomething and ISomething2.\n", "I agree with Garo Yeriazarian, changing interface is a serious decision. Also, if you want to promote usage of new version of interface you should mark old version as obsolete. In .NET you can add ObsoleteAttribute.\n", "The purpose of an interface is to define an abstract pattern that at type must implement.\nIt would be better implement as:\npublic interface ISomething\n\npublic class Something1 : ISomething\npublic class Something2 : ISomething\n\nYou do not gain anything in the form of code reusability or scalable design by creating multiple versions of the same interface.\n", "I don't know why people downvote your post. I think that good naming guidelines are very important. \nIf you need to maintain compatibility with prev. version of the same interface consider using inheritance.\nIf you need to introduce new version of interface consider following rule:\n\nTry to add meaningful suffix to you\n interface. If it's not possible to\n create concise name, consider adding\n version number.\n\n" ]
[ 7, 5, 4, 2, 2 ]
[]
[]
[ "interface", "naming_conventions", "versioning" ]
stackoverflow_0000045123_interface_naming_conventions_versioning.txt
Q: Default smart device project can't find dependencies When running the default C++ project in Visual Studio for a Windows CE 5.0 device, I get an error complaining about missing resources. Depends says that my executable needs aygshell.dll (the Windows Mobile shell), and CoreDll.dll. Does this mean that my executable can only be run on Windows Mobile devices, instead of any generic Windows CE installation? If that's the case, how do I create an executable targeting generic WinCE? A: Depends what you mean by a generic Windows CE installation. Windows CE itself is a modularised operating system, so different devices can have different modules included. Therefore each Windows CE device can have a radically different OS installed (headless even). Coredll is the standard "common" library that gets included in a Windows CE installation, however it can contain different components depending on the other modules in the system. If you want to target a relatively standard version of Windows CE either target the Standard SDK set of components, or go for a Windows Mobile platform. If you have an SDK then install and use that. If none is available then you can generate an SDK using Platform Builder and the OS project files. To get your application to work on a non-Windows Mobile installation of Windows CE you just have to remove the code that uses the aygshell library, and not link to those libraries.
Default smart device project can't find dependencies
When running the default C++ project in Visual Studio for a Windows CE 5.0 device, I get an error complaining about missing resources. Depends says that my executable needs aygshell.dll (the Windows Mobile shell), and CoreDll.dll. Does this mean that my executable can only be run on Windows Mobile devices, instead of any generic Windows CE installation? If that's the case, how do I create an executable targeting generic WinCE?
[ "Depends what you mean by a generic Windows CE installation. Windows CE itself is a modularised operating system, so different devices can have different modules included. Therefore each Windows CE device can have a radically different OS installed (headless even).\nCoredll is the standard \"common\" library that gets included in a Windows CE installation, however it can contain different components depending on the other modules in the system.\nIf you want to target a relatively standard version of Windows CE either target the Standard SDK set of components, or go for a Windows Mobile platform.\nIf you have an SDK then install and use that. If none is available then you can generate an SDK using Platform Builder and the OS project files.\nTo get your application to work on a non-Windows Mobile installation of Windows CE you just have to remove the code that uses the aygshell library, and not link to those libraries.\n" ]
[ 3 ]
[]
[]
[ "c++", "visual_studio", "windows_ce", "windows_mobile" ]
stackoverflow_0000044821_c++_visual_studio_windows_ce_windows_mobile.txt
Q: Problems with disabling IIS shutdown of idle worker process? I ran into an issue with an IIS web app shutting down an idle worker process! The next request would then have to re-initialize the application, leading to delays. I disabled the IIS shutdown of idle worker processes on the application pool to resolve this. Are there any issues associated with turning this off? If the process is leaking memory, I imagine it is nice to recycle the process every now and then. Are there any other benefits to having this process shutdown? A: I'm assuming that you're referring to IIS 6. Instead of disabling shutdown altogether, maybe you can just increase the amount of time it waits before killing the process. The server is essentially conserving resources - if your server can stand the resource allocation for a process that mostly sits around doing nothing, then there isn't any harm in letting it be. As you mentioned, setting the auto-recycling of the process on a memory limit would be a good idea, if the possibility of a memory leak is there.
Problems with disabling IIS shutdown of idle worker process?
I ran into an issue with an IIS web app shutting down an idle worker process! The next request would then have to re-initialize the application, leading to delays. I disabled the IIS shutdown of idle worker processes on the application pool to resolve this. Are there any issues associated with turning this off? If the process is leaking memory, I imagine it is nice to recycle the process every now and then. Are there any other benefits to having this process shutdown?
[ "I'm assuming that you're referring to IIS 6.\nInstead of disabling shutdown altogether, maybe you can just increase the amount of time it waits before killing the process. The server is essentially conserving resources - if your server can stand the resource allocation for a process that mostly sits around doing nothing, then there isn't any harm in letting it be.\nAs you mentioned, setting the auto-recycling of the process on a memory limit would be a good idea, if the possibility of a memory leak is there.\n" ]
[ 1 ]
[]
[]
[ "iis" ]
stackoverflow_0000045180_iis.txt
Q: ASP.NET Merge: Virtual path 'obal.asax' is not allowed I am doing a Web Deployment of my website and I have the merge assemblies property set to true. For some reason I get the following error. aspnet_merge : error occurred: An error occurred when merging assemblies: The relative virtual path 'Global.asax' is not allowed here. It seems to have something to do with the Global.asax, but I'm not really sure why it's getting truncated. My code compiles locally fine, but it's only the merge that is messing up. Any ideas? A: As a shot in the dark: Is it a slash issue? I vaguely remember MSBuild forcibly requiring a trailing slash on some of its properties.
ASP.NET Merge: Virtual path 'obal.asax' is not allowed
I am doing a Web Deployment of my website and I have the merge assemblies property set to true. For some reason I get the following error. aspnet_merge : error occurred: An error occurred when merging assemblies: The relative virtual path 'Global.asax' is not allowed here. It seems to have something to do with the Global.asax, but I'm not really sure why it's getting truncated. My code compiles locally fine, but it's only the merge that is messing up. Any ideas?
[ "As a shot in the dark:\nIs it a slash issue? I vaguely remember MSBuild forcibly requiring a trailing slash on some of its properties.\n" ]
[ 1 ]
[]
[]
[ "asp.net", "merge", "msbuild" ]
stackoverflow_0000044076_asp.net_merge_msbuild.txt
Q: Java: Flash a window to grab user's attention Is there a better way to flash a window in Java than this: public static void flashWindow(JFrame frame) throws InterruptedException { int sleepTime = 50; frame.setVisible(false); Thread.sleep(sleepTime); frame.setVisible(true); Thread.sleep(sleepTime); frame.setVisible(false); Thread.sleep(sleepTime); frame.setVisible(true); Thread.sleep(sleepTime); frame.setVisible(false); Thread.sleep(sleepTime); frame.setVisible(true); } I know that this code is scary...But it works alright. (I should implement a loop...) A: There are two common ways to do this: use JNI to set urgency hints on the taskbar's window, and create a notification icon/message. I prefer the second way, since it's cross-platform and less annoying. See documentation on the TrayIcon class, particularly the displayMessage() method. The following links may be of interest: New System Tray Functionality in Java SE 6 Java Programming - Iconified window blinking TrayIcon for earlier versions of Java A: Well, there are a few minor improvements we could make. ;) I would use a Timer to make sure callers don't have to wait for the method to return. And preventing more than one flashing operation at a time on a given window would be nice too. import java.util.Map; import java.util.Timer; import java.util.TimerTask; import java.util.concurrent.ConcurrentHashMap; import javax.swing.JFrame; public class WindowFlasher { private final Timer timer = new Timer(); private final Map<JFrame, TimerTask> flashing = new ConcurrentHashMap<JFrame, TimerTask>(); public void flashWindow(final JFrame window, final long period, final int blinks) { TimerTask newTask = new TimerTask() { private int remaining = blinks * 2; @Override public void run() { if (remaining-- > 0) window.setVisible(!window.isVisible()); else { window.setVisible(true); cancel(); } } @Override public boolean cancel() { flashing.remove(this); return super.cancel(); } }; TimerTask oldTask = flashing.put(window, newTask); // if the window is already flashing, cancel the old task if (oldTask != null) oldTask.cancel(); timer.schedule(newTask, 0, period); } }
Java: Flash a window to grab user's attention
Is there a better way to flash a window in Java than this: public static void flashWindow(JFrame frame) throws InterruptedException { int sleepTime = 50; frame.setVisible(false); Thread.sleep(sleepTime); frame.setVisible(true); Thread.sleep(sleepTime); frame.setVisible(false); Thread.sleep(sleepTime); frame.setVisible(true); Thread.sleep(sleepTime); frame.setVisible(false); Thread.sleep(sleepTime); frame.setVisible(true); } I know that this code is scary...But it works alright. (I should implement a loop...)
[ "There are two common ways to do this: use JNI to set urgency hints on the taskbar's window, and create a notification icon/message. I prefer the second way, since it's cross-platform and less annoying.\nSee documentation on the TrayIcon class, particularly the displayMessage() method.\nThe following links may be of interest:\n\nNew System Tray Functionality in Java SE 6\nJava Programming - Iconified window blinking\nTrayIcon for earlier versions of Java\n\n", "Well, there are a few minor improvements we could make. ;)\nI would use a Timer to make sure callers don't have to wait for the method to return. And preventing more than one flashing operation at a time on a given window would be nice too.\nimport java.util.Map;\nimport java.util.Timer;\nimport java.util.TimerTask;\nimport java.util.concurrent.ConcurrentHashMap;\nimport javax.swing.JFrame;\n\npublic class WindowFlasher {\n\n private final Timer timer = new Timer();\n private final Map<JFrame, TimerTask> flashing\n = new ConcurrentHashMap<JFrame, TimerTask>();\n\n public void flashWindow(final JFrame window,\n final long period,\n final int blinks) {\n TimerTask newTask = new TimerTask() {\n private int remaining = blinks * 2;\n\n @Override\n public void run() {\n if (remaining-- > 0)\n window.setVisible(!window.isVisible());\n else {\n window.setVisible(true);\n cancel();\n }\n }\n\n @Override\n public boolean cancel() {\n flashing.remove(this);\n return super.cancel();\n }\n };\n TimerTask oldTask = flashing.put(window, newTask);\n\n // if the window is already flashing, cancel the old task\n if (oldTask != null)\n oldTask.cancel();\n timer.schedule(newTask, 0, period);\n }\n}\n\n" ]
[ 6, 1 ]
[]
[]
[ "java", "user_interface" ]
stackoverflow_0000045075_java_user_interface.txt
Q: How do you get the current image name from an ASP.Net website? Scenario: You have an ASP.Net webpage that should display the next image in a series of images. If 1.jpg is currently loaded, the refresh should load 2.jpg. Assuming I would use this code, where do you get the current images name. string currImage = MainPic.ImageUrl.Replace(".jpg", ""); currImage = currImage.Replace("~/Images/", ""); int num = (Convert.ToInt32(currImage) + 1) % 3; MainPic.ImageUrl = "~/Images/" + num.ToString() + ".jpg"; The problem with the above code is that the webpage used is the default site with the image set to 1.jpg, so the loaded image is always 2.jpg. So in the process of loading the page, is it possible to pull the last image used from the pages properties? A: You can store data in your page's ViewState dictionary So in your Page_Load you could write something like... var lastPicNum = (int)ViewState["lastPic"]; lastPicNum++; MainPic.ImageUrl = string.Format("~/Images/{0}.jpg", lastPicNum); ViewState["lastPic"] = lastPicNum; you should get the idea. And if you're programming ASP.NET and still does not understands how ViewState and web forms work, you should read this MSDN article Understanding ViewState from the beginning will help with a lot of ASP.NET gotchas as well. A: int num = 1; if(Session["ImageNumber"] != null) { num = Convert.ToInt32(Session["ImageNumber"]) + 1; } Session["ImageNumber"] = num; A: You'll have to hide the last value in a HiddenField or ViewState or somewhere like that... A: If you need to change images to the next in the sequence if you hit the F5 or similar refresh button, then you need to store the last image id or something in a server-side storage, or in a cookie. Use a Session variable or similar. A: It depends on how long you want it to persist (remember) the last viewed value. My preferred choice would be the SESSION. A: @chakrit does this really work if refreshing the page? i thought the viewstate was stored on the page, and had to be sent to the server on a postback, with a refresh that is not happening. A: @John ah Sorry I thought that your "refresh" meant postbacks. In that case, just use a Session variable. FYI, I suggested you use the ViewState dictionary instead of Session because the variable is used inside only that single page, so it shouldn't be using session-wide variable, that's bad practice.
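One wrinkle worth noting about the snippets above, sketched here with illustrative numbers: the first answer's unguarded (int) cast throws on the very first load because the stored value is still null, and as the later comments point out, ViewState only survives postbacks, so a plain refresh (F5) needs Session instead. A null-safe code-behind sketch: protected void Page_Load(object sender, EventArgs e) { int num = Session["ImageNumber"] == null ? 0 : (int)Session["ImageNumber"]; num = (num % 3) + 1; // cycle 1.jpg -> 2.jpg -> 3.jpg -> 1.jpg MainPic.ImageUrl = string.Format("~/Images/{0}.jpg", num); Session["ImageNumber"] = num; }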
How do you get the current image name from an ASP.Net website?
Scenario: You have an ASP.Net webpage that should display the next image in a series of images. If 1.jpg is currently loaded, the refresh should load 2.jpg. Assuming I would use this code, where do you get the current image's name? string currImage = MainPic.ImageUrl.Replace(".jpg", ""); currImage = currImage.Replace("~/Images/", ""); int num = (Convert.ToInt32(currImage) + 1) % 3; MainPic.ImageUrl = "~/Images/" + num.ToString() + ".jpg"; The problem with the above code is that the webpage used is the default site with the image set to 1.jpg, so the loaded image is always 2.jpg. So in the process of loading the page, is it possible to pull the last image used from the page's properties?
[ "You can store data in your page's ViewState dictionary\nSo in your Page_Load you could write something like...\nvar lastPicNum = (int)ViewState[\"lastPic\"];\nlastPicNum++;\n\nMainPic.ImageUrl = string.Format(\"~/Images/{0}.jpg\", lastPicNum);\n\nViewState[\"lastPic\"] = lastPicNum;\n\nyou should get the idea.\nAnd if you're programming ASP.NET and still does not understands how ViewState and web forms work, you should read this MSDN article\nUnderstanding ViewState from the beginning will help with a lot of ASP.NET gotchas as well.\n", "int num = 1;\n\nif(Session[\"ImageNumber\"] != null)\n{\n num = Convert.ToInt32(Session[\"ImageNumber\"]) + 1;\n}\n\nSession[\"ImageNumber\"] = num;\n\n", "You'll have to hide the last value in a HiddenField or ViewState or somewhere like that...\n", "If you need to change images to the next in the sequence if you hit the F5 or similar refresh button, then you need to store the last image id or something in a server-side storage, or in a cookie. Use a Session variable or similar.\n", "It depends on how long you want it to persist (remember) the last viewed value. My preferred choice would be the SESSION.\n", "@chakrit\ndoes this really work if refreshing the page?\ni thought the viewstate was stored on the page, and had to be sent to the server on a postback, with a refresh that is not happening.\n", "@John ah Sorry I thought that your \"refresh\" meant postbacks.\nIn that case, just use a Session variable.\nFYI, I suggested you use the ViewState dictionary instead of Session because the variable is used inside only that single page, so it shouldn't be using session-wide variable, that's bad practice.\n" ]
[ 5, 5, 0, 0, 0, 0, 0 ]
[]
[]
[ "asp.net", "c#" ]
stackoverflow_0000044787_asp.net_c#.txt
Q: Windows Forms Application Performance My app has many controls on its surface, and more are added dynamically at runtime. Although i am using tabs to limit the number of controls shown, and double-buffering too, it still flickers and stutters when it has to redraw (resize, maximize, etc). What are your tips and tricks to improve WinForms app performance? A: I know of two things you can do but they don't always apply to all situations. You're going to get better performance if you're using absolute positioning for each control (myNewlyCreatedButton.Location.X/Y) as opposed to using a flow layout panel or a table layout panel. WinForms has to do a lot less math trying to figure out where controls should be placed. If there is a single operation in which you're adding/removing/modifying a lot of controls, call "SuspendLayout()" on the container of the affected controls (whether it is a panel or the whole form), and when you're done with your work call "ResumeLayout()" on the same panel. If you don't, the form will have to do a layout pass each and every time you add/remove/modify a control, which cost a lot more time. see: http://msdn.microsoft.com/en-us/library/system.windows.forms.control.suspendlayout(VS.80).aspx Although, I'm not sure how these approaches could apply when resizing a window. A: Although more general than some of the other tips, here is mine: When using a large number of "items", try to avoid creating a control for each one of them, rather reuse the controls. For example if you have 10 000 items, each corresponding to a button, it is very easy to (programatically) create a 10 000 buttons and wire up their event handlers, such that when you enter in the event handler, you know exactly which element you must work on. However it is much more efficient if you create, lets say, 500 buttons (because you know that only 500 buttons will be visible on the screen at any one time) and introduce a "mapping layer" between the buttons and the items, which dynamically reassigns the buttons to different items every time the user does something which would result in changing the set of buttons which should be visible (like moving a scrollbar for example). A: Although, I'm not sure how these approaches could apply when resizing a window. Handle the ResizeBegin and ResizeEnd events to call SuspendLayout() and ResumeLayout(). These events are only on the System.Windows.Form class (although I wish they were also on Control). A: Are you making good use of SuspendLayout() and ResumeLayout()? http://msdn.microsoft.com/en-us/library/system.windows.forms.control.suspendlayout(VS.80).aspx
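A short sketch pulling together the SuspendLayout/ResumeLayout advice and the ResizeBegin/ResizeEnd tip from the answers (those two events exist on Form, not on every Control): using System.Drawing; using System.Windows.Forms; public class MainForm : Form { public MainForm() { // Avoid a layout pass per mouse movement while the user drags the border. ResizeBegin += delegate { SuspendLayout(); }; ResizeEnd += delegate { ResumeLayout(true); }; } void AddButtons(Panel panel, int count) { panel.SuspendLayout(); try { for (int i = 0; i < count; i++) { Button b = new Button(); b.Text = "Item " + i; b.Location = new Point(8, 8 + i * 28); // absolute positioning, no layout math panel.Controls.Add(b); } } finally { panel.ResumeLayout(true); // one layout pass for the whole batch } } }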
Windows Forms Application Performance
My app has many controls on its surface, and more are added dynamically at runtime. Although I am using tabs to limit the number of controls shown, and double-buffering too, it still flickers and stutters when it has to redraw (resize, maximize, etc). What are your tips and tricks to improve WinForms app performance?
[ "I know of two things you can do but they don't always apply to all situations.\n\nYou're going to get better performance if you're using absolute positioning for each control (myNewlyCreatedButton.Location.X/Y) as opposed to using a flow layout panel or a table layout panel. WinForms has to do a lot less math trying to figure out where controls should be placed.\nIf there is a single operation in which you're adding/removing/modifying a lot of controls, call \"SuspendLayout()\" on the container of the affected controls (whether it is a panel or the whole form), and when you're done with your work call \"ResumeLayout()\" on the same panel. If you don't, the form will have to do a layout pass each and every time you add/remove/modify a control, which cost a lot more time. see: http://msdn.microsoft.com/en-us/library/system.windows.forms.control.suspendlayout(VS.80).aspx\n\nAlthough, I'm not sure how these approaches could apply when resizing a window.\n", "Although more general than some of the other tips, here is mine:\nWhen using a large number of \"items\", try to avoid creating a control for each one of them, rather reuse the controls. For example if you have 10 000 items, each corresponding to a button, it is very easy to (programatically) create a 10 000 buttons and wire up their event handlers, such that when you enter in the event handler, you know exactly which element you must work on. However it is much more efficient if you create, lets say, 500 buttons (because you know that only 500 buttons will be visible on the screen at any one time) and introduce a \"mapping layer\" between the buttons and the items, which dynamically reassigns the buttons to different items every time the user does something which would result in changing the set of buttons which should be visible (like moving a scrollbar for example).\n", "\nAlthough, I'm not sure how these approaches could apply when resizing a window.\n\nHandle the ResizeBegin and ResizeEnd events to call SuspendLayout() and ResumeLayout(). These events are only on the System.Windows.Form class (although I wish they were also on Control).\n", "Are you making good use of SuspendLayout() and ResumeLayout()?\nhttp://msdn.microsoft.com/en-us/library/system.windows.forms.control.suspendlayout(VS.80).aspx\n" ]
[ 5, 4, 2, 1 ]
[]
[]
[ "performance", "winforms" ]
stackoverflow_0000044914_performance_winforms.txt
Q: Visual Studio 2005 closes slowly I experience that on several different machines, with plugins, without plugins, with VB.NET or C# solutions of many different sizes, closing the solution in VS 2005 generally takes significantly more time than actually building the solution. This has always been the case for me since I started using Visual Studio 2005, so I have learned to live with it, but I am curious: What on earth is Visual Studio doing when you have actually told it to shut down? Is it significant? Is it configurable, can you turn it off? A: What on earth is Visual Studio doing when you have actually told it to shut down? You can use Process Monitor from sysinternals. It may be because of some plugins. Try resetting your Visual Studio settings (Tools->Import and Export Settings->Reset All Settings). A: I've found that closing all the open documents before you close the solution helps speed it up. Or maybe it's just a perception thing, but it seems faster :)
Visual Studio 2005 closes slowly
I experience that on several different machines, with plugins, without plugins, with VB.NET or C# solutions of many different sizes, closing the solution in VS 2005 generally takes significantly more time than actually building the solution. This has always been the case for me since I started using Visual Studio 2005, so I have learned to live with it, but I am curious: What on earth is Visual Studio doing when you have actually told it to shut down? Is it significant? Is it configurable, can you turn it off?
[ "\nWhat on earth is visual studio doing\n when you have actually told it to shut\n down?\n\nYou can use Process Monitor from sysinternals. It maybe because of some plugins. Try resetting your Visual Studio settings (Tools->Import and Export Settings->Reset All Settings).\n", "I've found that closing all the open documents before you close the solution helps speed it up. Or maybe it's just a perception thing, but it seems faster :)\n" ]
[ 1, 0 ]
[]
[]
[ "c#", "solution_explorer", "vb.net", "visual_studio", "visual_studio_2005" ]
stackoverflow_0000045313_c#_solution_explorer_vb.net_visual_studio_visual_studio_2005.txt
Q: Where do I begin to learn about SQL Server alerts or notifications? Just recently started having issues with an SQL Server Agent Job that contains an SSIS package to extract production data and summarize it into a separate reporting database. I think that some of the Alerts/Notifications settings I tried playing with caused the problem as the job had been running to completion unattended for the previous two weeks. So... Where's a good place to start reading up on SQL Agent Alerts and Notifications? I want to enable some sort of alert/notification so that I'm always informed: That the job completes successfully (as a check to ensure that it's always executed), or That the job ran into some sort of error, which should include enough info (such as error number) that I can diagnose the cause of the error As always, any help will be greatly appreciated! A: Books Online is probably a good place to start (or at least I like it and generally find it useful). SQLMenace and bofe made some good points. Here's my additional two cents: I'd recommend configuring Database Mail rather than SQL Mail (i.e. SMTP vs. MAPI, which I think is deprecated anyway). Once you get the mail profile configured, you'll have to also configure the SQL agent to use that mail profile (which is just a page of settings for the agent properties), or else your SSIS job notifications won't actually get sent, even though you can successfully send a test email from Management Studio. I don't use alerts as often as job notifications, so the only tricky thing I can recall about them is that if you're raising an error and you want the alert to email you when that happens, you have to make sure that the raised error gets written to the log. I think that just boils down to "RAISERROR ... WITH LOG"; here's the BOL link for the syntax details. A: You'll want to have "When the job completes" marked in your notifications page on the job's properties. Just go to that dropdown and switch it to job completion instead of failure (which is on the screenshot). You'll also want to make sure that your server has e-mail configured. I think it's under SQL Surface Area Configuration for Features. A: In each step of the job click on advanced then from there you can log to a file or to a table, this will have all error codes and other details about why the job failed. You should be able to see this also from the job history. Right click on the job-->view history, click on the + sign to expand, then click on each step and it will be in the lower panel. To set up notifications you need to set up an operator and then in the job, on the notification tab, you pick it from the email dropdown.
Where do I begin to learn about SQL Server alerts or notifications?
Just recently started having issues with an SQL Server Agent Job that contains an SSIS package to extract production data and summarize it into a separate reporting database. I think that some of the Alerts/Notifications settings I tried playing with caused the problem as the job had been running to completion unattended for the previous two weeks. So... Where's a good place to start reading up on SQL Agent Alerts and Notifications? I want to enable some sort of alert/notification so that I'm always informed: That the job completes successfully (as a check to ensure that it's always executed), or That the job ran into some sort of error, which should include enough info (such as error number) that I can diagnose the cause of the error As always, any help will be greatly appreciated!
[ "Books Online is probably a good place to start (or at least I like it and generally find it useful).\nSQLMenace and bofe made some good points. Here's my additional two cents:\nI'd recommend configuring Database Mail rather than SQL Mail (i.e. SMTP vs. MAPI, which I think is deprecated anyway). Once you get the mail profile configured, you'll have to also configure the SQL agent to use that mail profile (which is just a page of settings for the agent properties), or else your SSIS job notifications won't actually get sent, even though you can successfully send a test email from Management Studio.\nI don't use alerts as often as job notifications, so the only tricky thing I can recall about them is that if you're raising an error and you want the alert to email you when that happens, you have to make sure that the raised error gets written to the log. I think that just boils down to \"RAISERROR ... WITH LOG\"; here's the BOL link for the syntax details. \n", "You'll want to have \"When the job completes\" marked in your notifications page on the job's properties.\nJust go to that dropdown and switch it to job completion instead of failure (which is on the screenshot).\n\nYou'll also want to make sure that your server has e-mail configured. I think it's under SQL Surface Area Configuration for Features.\n", "In each step of the job click on advanced then from there you can log to a file or to a table, this will have all errorcodes and other things why the job failed\nYou should be able to see this also from the job history. \nRight click on the job-->view history, click on the + sign to expand, the click on each step and it will be in the lower panel\nTo set up notifications you need to set up an operator and the in the job on the notification tab you pick it from the email dropdown\n" ]
[ 2, 0, 0 ]
[]
[]
[ "alert", "etl", "notifications", "sql_server", "ssis" ]
stackoverflow_0000022306_alert_etl_notifications_sql_server_ssis.txt
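A quick way to exercise the "RAISERROR ... WITH LOG" advice above is to raise a logged error on purpose and see whether a configured alert fires. A minimal C# sketch only — the connection string and message text are placeholders, and it assumes an alert (severity- or message-based) has already been set up:

using System;
using System.Data.SqlClient;

class RaiseLoggedError
{
    static void Main()
    {
        // Placeholder connection string -- point it at your own server.
        string conn = "Data Source=.;Initial Catalog=master;Integrated Security=SSPI";

        using (SqlConnection c = new SqlConnection(conn))
        using (SqlCommand cmd = c.CreateCommand())
        {
            c.Open();
            // Severity 16, state 1. WITH LOG writes the error to the SQL Server
            // error log (and the application event log), which is what lets an
            // alert see the error and trigger its e-mail notification.
            cmd.CommandText = "RAISERROR('Alert smoke test', 16, 1) WITH LOG";
            try { cmd.ExecuteNonQuery(); }
            catch (SqlException ex) { Console.WriteLine("Raised: " + ex.Message); }
        }
    }
}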
Q: Mixing C/C++ Libraries Is it possible for gcc to link against a library that was created with Visual C++? If so, are there any conflicts/problems that might arise from doing so? A: Some of the comments in the answers here are slightly too generalistic. Whilst no, in the specific case mentioned gcc binaries won't link with a VC++ library (AFAIK). The actual means of interlinking code/libraries is a question of the ABI standard being used. An increasingly common standard in the embedded world is the EABI (or ARM ABI) standard (based on work done during Itanium development http://www.codesourcery.com/cxx-abi/). If compilers are EABI compliant they can produce executables and libraries which will work with each other. An example of multiple toolchains working together is ARM's RVCT compiler which produces binaries which will work with GCC ARM ABI binaries. (The code sourcery link is down at the moment but can be google cached) A: I would guess not. Usually c++ compilers have quite different methods of name-mangling which means that the linkers will fail to find the correct symbols. This is a good thing by the way, because C++ compilers are allowed by the standard to have much greater levels of incompatibility than just this that will cause your program to crash, die, eat puppies and smear paint all over the wall. Usual schemes to work around this usually involve language independent techniques like COM or CORBA. A simpler sanctified method is to use C "wrappers" around your C++ code. A: It is not possible. It's usually not even possible to link libraries produced by different versions of the same compiler. A: No. Plain and simple :-) A: Yes, if you make it a dynamic link and make the interface c-style. lib.exe will generate import libraries which are compatible with the gcc toolchain. That will resolve your linking problems. However that is just the start of the problem. Your larger problems will be things like exceptions, and memory allocation. You must ensure that no exception cross from VC++ to gcc code, there are no guarantees of compatibility. Every object from the VC++ library will need to live on the heap because: Do not mix gcc new/delete with anything from VC++, bad things will happen. This goes for object construction on the stack too. However, if you make an interface like create_some_obj()/delete_some_obj() you do not end up using gcc new to construct VC++ objects. Maybe make a small handler object that handles construction and destruction. This way you preserve RAII, but still use the c-interface for the true interface. Calling convention must be correct. In VC++ there is cdecl and stdcall. If gcc tried to call an imported function with the wrong calling type, bad things will happen. The bottom line is keep a simple interface that is ANSI C compliant, and you should be fine. The fact that crazy C++ goes on behind is okay, as long as it is contained. Oh and make sure all the code is re-entrant, or you risk opening a whole nother can-o-worms.
Mixing C/C++ Libraries
Is it possible for gcc to link against a library that was created with Visual C++? If so, are there any conflicts/problems that might arise from doing so?
[ "Some of the comments in the answers here are slightly too generalistic. \nWhilst no, in the specific case mentioned gcc binaries won't link with a VC++ library (AFAIK). The actual means of interlinking code/libraries is a question of the ABI standard being used.\nAn increasingly common standard in the embedded world is the EABI (or ARM ABI) standard (based on work done during Itanium development http://www.codesourcery.com/cxx-abi/). If compilers are EABI compliant they can produce executables and libraries which will work with each other. An example of multiple toolchains working together is ARM's RVCT compiler which produces binaries which will work with GCC ARM ABI binaries.\n(The code sourcery link is down at the moment but can be google cached)\n", "I would guess not. Usually c++ compilers have quite different methods of name-mangling which means that the linkers will fail to find the correct symbols. This is a good thing by the way, because C++ compilers are allowed by the standard to have much greater levels of incompatibility than just this that will cause your program to crash, die, eat puppies and smear paint all over the wall.\nUsual schemes to work around this usually involve language independent techniques like COM or CORBA. A simpler sanctified method is to use C \"wrappers\" around your C++ code.\n", "It is not possible. It's usually not even possible to link libraries produced by different versions of the same compiler.\n", "No. Plain and simple :-)\n", "Yes, if you make it a dynamic link and make the interface c-style. lib.exe will generate import libraries which are compatible with the gcc toolchain.\nThat will resolve your linking problems. However that is just the start of the problem.\nYour larger problems will be things like exceptions, and memory allocation.\n\nYou must ensure that no exception cross from VC++ to gcc code, there are no guarantees of compatibility.\nEvery object from the VC++ library will need to live on the heap because:\nDo not mix gcc new/delete with anything from VC++, bad things will happen. This goes for object construction on the stack too. However, if you make an interface like create_some_obj()/delete_some_obj() you do not end up using gcc new to construct VC++ objects. Maybe make a small handler object that handles construction and destruction. This way you preserve RAII, but still use the c-interface for the true interface.\nCalling convention must be correct. In VC++ there is cdecl and stdcall. If gcc tried to call an imported function with the wrong calling type, bad things will happen.\n\nThe bottom line is keep a simple interface that is ANSI C compliant, and you should be fine. The fact that crazy C++ goes on behind is okay, as long as it is contained.\nOh and make sure all the code is re-entrant, or you risk opening a whole nother can-o-worms.\n" ]
[ 4, 1, 1, 1, 1 ]
[]
[]
[ "c++", "gcc", "linker", "visual_studio" ]
stackoverflow_0000043194_c++_gcc_linker_visual_studio.txt
Q: Accessible controls for ASP.NET In my last job we ended up rewriting the complete ASP.NET stack (forms, controls, validation, postback handling, ajax library etc...) - the reason I was given was that the ASP.NET controls were not accessible enough, nor were any of the third party controls that were assessed for the project. Can anyone point me to good accessible ASP.NET controls that do ajax as well? Failing that, how would you approach creating accessible, ajax enabled controls? A: You could take a look at the 'App_Browsers' feature in .NET. It gives you the opportunity to hook into the rendering engine for each control. The original intention for this was to be able to alter the HTML output of controls depending on the user's browser - but you can also do it for all browsers. You could also take a look at these control adapters, which make the normal ASP.NET controls 'CSS Friendly'.
Accessible controls for ASP.NET
In my last job we ended up rewriting the complete ASP.NET stack (forms, controls, validation, postback handling, ajax library etc...) - the reason I was given was that the ASP.NET controls were not accessible enough, nor were any of the third party controls that were assessed for the project. Can anyone point me to good accessible ASP.NET controls that do ajax as well? Failing that, how would you approach creating accessible, ajax enabled controls?
[ "You could take a look at the 'App_Browsers' feature in .NET.\nIt gives you the opportunity to hook into the rendering engine for each control. The original intention for this was to be able to alter the HTML output of controls depending on the user's browser - but you can also do it for all browsers.\nYou could also take a look at these control adapters, which make the normal ASP.NET controls 'CSS Friendly'.\n" ]
[ 1 ]
[]
[]
[ "accessibility", "ajax", "asp.net" ]
stackoverflow_0000045387_accessibility_ajax_asp.net.txt
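To make the control-adapter suggestion concrete, here is a minimal sketch of a ControlAdapter that swaps a control's default markup for plainer, more accessible HTML. The class name and the choice of BulletedList are mine, not from the answer; in practice the adapter would be mapped to the control via a .browser file under App_Browsers:

using System.Web.UI;
using System.Web.UI.Adapters;
using System.Web.UI.WebControls;

// Re-renders a BulletedList as a bare <ul>, free of extra attributes,
// so screen readers and CSS get clean semantic markup.
public class AccessibleBulletedListAdapter : ControlAdapter
{
    protected override void Render(HtmlTextWriter writer)
    {
        BulletedList list = (BulletedList)Control;
        writer.RenderBeginTag(HtmlTextWriterTag.Ul);
        foreach (ListItem item in list.Items)
        {
            writer.RenderBeginTag(HtmlTextWriterTag.Li);
            writer.WriteEncodedText(item.Text);
            writer.RenderEndTag(); // </li>
        }
        writer.RenderEndTag(); // </ul>
    }
}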
Q: Any Windows APIs to get file handles besides createfile and openfile? I am trying to snoop on a log file that an application is writing to. I have successfully hooked createfile with the detours library from MSR, but createfile never seems to be called with file I am interested in snooping on. I have also tried hooking openfile with the same results. I am not an experienced Windows/C++ programmer, so my initial two thoughts were either that the application calls createfile before I hook the apis, or that there is some other API for creating files/obtaining handles for them. A: You can use Sysinternal's FileMon. It is an excellent monitor that can tell you exactly which file-related system calls are being made and what are the parameters. I think that this approach is much easier than hooking API calls and much less intrusive. A: Here's a link which might be of use: Guerilla-Style File Monitoring with C# and C++ It is possible to create a file without touching CreateFile API but can I ask what DLL injection method you're using? If you're using something like Windows Hooks your DLL won't be installed until sometime after the target application initializes and you'll miss early calls to CreateFile. Whereas if you're using something like DetourCreateProcessWithDll your CreateFile hook can be installed prior to any of the application startup code running. In my experience 99.9% of created/opened files result in a call to CreateFile, including files opened through C and C++ libs, third-party libs, etc. Maybe there are some undocumented DDK functions which don't route through CreateFile, but for a typical log file, I doubt it. A: Process Monitor from sysinternals could help too.
Any Windows APIs to get file handles besides createfile and openfile?
I am trying to snoop on a log file that an application is writing to. I have successfully hooked createfile with the detours library from MSR, but createfile never seems to be called with file I am interested in snooping on. I have also tried hooking openfile with the same results. I am not an experienced Windows/C++ programmer, so my initial two thoughts were either that the application calls createfile before I hook the apis, or that there is some other API for creating files/obtaining handles for them.
[ "You can use Sysinternal's FileMon. \nIt is an excellent monitor that can tell you exactly which file-related system calls are being\nmade and what are the parameters.\nI think that this approach is much easier than hooking API calls and much less intrusive.\n", "Here's a link which might be of use:\nGuerilla-Style File Monitoring with C# and C++\nIt is possible to create a file without touching CreateFile API but can I ask what DLL injection method you're using? If you're using something like Windows Hooks your DLL won't be installed until sometime after the target application initializes and you'll miss early calls to CreateFile. Whereas if you're using something like DetourCreateProcessWithDll your CreateFile hook can be installed prior to any of the application startup code running.\nIn my experience 99.9% of created/opened files result in a call to CreateFile, including files opened through C and C++ libs, third-party libs, etc. Maybe there are some undocumented DDK functions which don't route through CreateFile, but for a typical log file, I doubt it.\n", "Process Monitor from sysinternals could help too.\n" ]
[ 8, 6, 3 ]
[]
[]
[ "api", "c++", "logfile", "windows" ]
stackoverflow_0000013806_api_c++_logfile_windows.txt
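Aside from hooking CreateFile: if the underlying goal is only to watch a log file grow, a purely managed approach may be enough. A C# sketch under that assumption — the path is hypothetical, and the key detail is opening with FileShare.ReadWrite so the logging application is not blocked:

using System;
using System.IO;

class LogTail
{
    static void Main()
    {
        string path = @"C:\logs\app.log"; // hypothetical log file
        long offset = 0;

        FileSystemWatcher fsw = new FileSystemWatcher(
            Path.GetDirectoryName(path), Path.GetFileName(path));
        fsw.NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.Size;
        fsw.Changed += delegate
        {
            // Share read/write so the writer can keep appending.
            using (FileStream fs = new FileStream(path, FileMode.Open,
                       FileAccess.Read, FileShare.ReadWrite))
            {
                fs.Seek(offset, SeekOrigin.Begin); // skip what we've already seen
                StreamReader sr = new StreamReader(fs);
                Console.Write(sr.ReadToEnd());
                offset = fs.Position; // remember how far we've read
            }
        };
        fsw.EnableRaisingEvents = true;
        Console.ReadLine(); // run until Enter is pressed
    }
}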
Q: Is there any difference between the box models of IE8 and Firefox3? What are the main differences (if any) between the box models of IE8 and Firefox3? Are they the same now? What are the other main differences between these two browsers? Can a web developer assume that these two browsers are the same since they (seem to) support the latest web standards? A: The Internet Explorer box model has been "fixed" since Internet Explorer 6 so long as your pages are in standards-compliant mode. See: Quirks mode and Internet Explorer box model bug. Until I learnt about doctype declarations, getting IE to work properly was a real PAIN, because IE runs in "quirks mode" by default. So having a standards mode doctype will eliminate a whole bunch of the most painful CSS problems. A: I would never assume that any browser renders a page exactly the same... always test! Even though they support standards, there are plenty of variations between different browsers and even different versions. FF1 renders differently to FF2, which renders differently to FF3. You also have to remember that each browser has its own JavaScript engine which, again, will cause some scripts to work and others to fail. You can of course reduce these differences by using CSS and JavaScript frameworks which have been developed to support multiple browsers. However, you still must test in all browsers. There will always be something that doesn't quite look or behave right. A: Things that will always differ between the two (and other browsers) are default values (font sizes in headings, for example). The way they achieve default visuals is often different, as well, such as whether or not they use padding or margin to achieve the indentation in bulleted lists. Something quite positive that I just noticed is that IE8 finally fixes IE's handling of margin: 0 auto for block elements that you want horizontally centered in their respective parents.
Is there any difference between the box models of IE8 and Firefox3?
What are the main differences (if any) between the box models of IE8 and Firefox3? Are they the same now? What are the other main differences between these two browsers? Can a web developer assume that these two browsers are the same since they (seem to) support the latest web standards?
[ "The Internet Explorer box model has been \"fixed\" since Internet Explorer 6 so long as your pages are in standard compliants mode.\nSee: Quirks mode and Internet Explorer box model bug.\nUntil I learnt about doctype declerations getting IE to work properly was a real PAIN, because IE runs in \"quirks mode\" by default. So having a standards mode doctype will eliminate a whole bunch of the most painful CSS problems.\n", "I would never assume that any browser renders a page exactly the same.. always test!\nEven though they support standards, there are plenty of variations between different browsers and even different versions. FF1 renders differently to FF2 which renders differently to FF3.\nYou also have to remember that each browser has their own JavaScript engine which again, will cause some scripts to work and other to fail.\nYou can ofcourse reduce these differences by using CSS and JavaScript frameworks which have been developed to support multiple browsers.\nHowever, you still must test in all browsers. There will always be something that doesn't quite look or behave right.\n", "Things that will always differ between the two (and other browsers) are default values (font sizes in headings, for example). The way they achieve default visuals is often different, as well, such as whether or not they use padding or margin to achieve the indentation in bulleted lists.\nSomething quite positive that I just noticed is that IE8 finally fixes IE's handling of margin: 0 auto for block elements that you want horizontally centered in their respective parents. \n" ]
[ 11, 3, 1 ]
[]
[]
[ "browser", "firefox", "internet_explorer_8" ]
stackoverflow_0000045407_browser_firefox_internet_explorer_8.txt
Q: GUI toolkit for rapid development? I want to write a front-end to an application written in C/C++. I use Solaris 10 and plan to port the application to some other architectures (Windows first). A: I'd recommend taking a look at wxWidgets to provide some cross platform UI widgets that will work on Solaris and Windows. A: Qt 4 is the best tool for this job. If you want to work with other languages, it also has bindings for Java and Python. A: On a Mac, this would be easy. The Cocoa API is great when programming in Objective C (which compiles fine with C/C++ files). Otherwise the situation is a bit more grim. As for rapid prototyping, you might want to check the CodeGear (Borland/C++ Builder) tools. I think their VCL library is cross-platform. Otherwise, you could interface with a scripting language like Ruby and use fantastic front end libraries like Shoes. Python also interfaces with wxWidgets to make writing cross-platform front ends easy. Keep in mind that this all requires taking time to make sure your C/C++ code can talk to the scripting language. This is not trivial, and the amount of effort required depends upon the style of your code base. (Oh my God.) Lastly, you could just use wxWidgets itself. This might be your best bet since it requires no additional overhead than coding the UI itself. That said, C++ is not the greatest language for designing UIs. And super lastly, consider writing a code generator that converts from say Shoes to whatever wxWidgets code is needed to generate the same Shoes app. That way you can do easier UI design but still get C++ code in the end. Likewise, you could code gen off of the Python/wxWidgets code. Then sell such a code generator. :-) A: GTK-- and Glade. That's the C++ bindings on GTK. GTK will work on Windows (just look at GIMP). Works everywhere, no QT license to mess with your millions-making. A: I use wxWidgets myself. It makes good use of the C++ language features and uses smart pointers, so object and memory management is not that hard. In fact, it feels like writing in a scripting language. Coupled with a dialog editor/code generator like wxFormBuilder or wxDesigner, (links to screenshots) it becomes a good toolkit for rapid development. A: Have a look at FLTK which supports X11 and Windows. A: Ultimate++ is a cross platform rapid application development framework for C++. It is aimed specifically at rapid development. The Ultimate++ website provides some comparisons to other frameworks mentioned such as Qt and wxWidgets. A: I have used ASP.NET Web Forms to make a UI front-end to a collection of command line applications written in a legacy language, a RESTful-ish web service, and bash scripts. Once it works on Firefox, it should work at least on Firefox on other architectures. If you haven't played around with it, you should give ASP.NET a try (ASP.NET MVC seems to be the current trend). Not quite the same as RAD, but it does give you visual design of forms etc.
GUI toolkit for rapid development?
I want to write a front-end to an application written in C/C++. I use Solaris 10 and plan to port the application to some other architectures (Windows first).
[ "I'd recommend taking a look at wxWidgets to provide some cross platform UI widgets that will work on Solaris and Windows.\n", "Qt 4 is the best tool for this job. If you want to work with other languages, it also has bindings for Java and Python\n", "On a Mac, this would be easy. The Cocoa API is great when programming in Objective C (which compiles fine with C/C++ files).\nOtherwise the situation is a bit more grim. As for Rapid prototype, you might want to check the CodeGear (Borland/C++ Builder) tools. I think their VCL library is cross-platform.\nOtherwise, you could interface with a scripting language like Ruby and use fantastic front end libraries like Shoes. Python also interfaces with wxWidgets to make writing cross-platform front ends easy. Keep in mind that this all requires taking time to make sure your C/C++ code can talk to the scripting language. This is not trivial, and the amount of effort required depends upon the style of your code base. (Oh my God.)\nLastly, you could just use wxWidgets itself. This might be your best bet since it requires no additional overhead than coding the UI itself. That said, C++ is not the greatest language for designing UIs.\nAnd super lastly, consider writing a code generator that converts from say Shoes to whatever wxWidgets code is needed to generate the same Shoes app. That way you can do easier UI design but still get C++ code in the end. Likewise, you could code gen off of the Python/wxWidgets code. Then sell such a code generator. :-)\n", "GTK-- and Glade.\nThats' the C++ bindings on GTK\nGTK will work on windows ( just look at GIMP )\nWorks everywhere, no QT license to mess with your millions-making.\n", "I use wxWidgets myself. It makes good use of the C++ language features and uses smart pointers, so object and memory management is not that hard. In fact, it feels like writing in a scripting language.\nCoupled with a dialog editor/code generator like wxFormBuilder or wxDesigner, (links to screenshots) it becomes a good toolkit for rapid development.\n", "Have a look at FLTK which supports X11 and Windows.\n", "Ultimate++ is a cross platform rapid application development framework for C++. It is aimed specifically at rapid development. The Ultimate++ website provides some comparisons to other frameworks mentioned such as Qt and wxWidgets.\n", "I have used ASP.NET Web Forms to make UI front-end to collection of command line application written in legacy language, RESTful-ish web service, and bash scripts.\nOnce it works on Firefox, it should work at least on Firefox on other architecture. If you haven't played around with it, you should give ASP.NET a try (ASP.NET MVC seems to be the current trend). Not quite the same as RAD, but it does give you visual design of forms etc. \n" ]
[ 3, 3, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "solaris", "unix", "user_interface" ]
stackoverflow_0000029555_solaris_unix_user_interface.txt
Q: Scripting the Visual Studio IDE I'd like to create a script that will configure the Visual Studio IDE the way I like it. Nothing vastly complicated, just a few Tools/Options settings, adding some External Tools, that kind of thing. I know that this can be done inside VS with Import/Export Settings, but I'd like to be able to automate it from outside of VS. Is this possible, and if so, how? Edited to add: doing it from outside of VS is important to me -- I'm hoping to use this as part of a more general "configure this newly-Ghosted PC just the way I like it" script. Edited again: the solution seems to be to hack CurrentSettings.vssettings, or use AutoIt. Details below. A: Answering my own question, in two ways: In VS2005/8, the things I mentioned (Tools/Options, External Tools) are all stored in the CurrentSettings.vssettings file, in the folder "Visual Studio 200{5|8}\Settings". This file is just XML, and it can be edited programmatically by anything that knows how to parse XML. You can also just paste a new vssettings file over the top of the default one (at least, this works for me). The larger question of configuring a virgin PC. It turns out that not everything I want to change has an API, so I need some way of pretending to be a user who is actually sitting there clicking on things. The best approach to this seems to be AutoIt, whose scripting language I will now have to learn in my Copious Free Time. A: An easy way is to use the macro recorder to do something simple, then look at the code it produces and edit it as you see fit. A: On my machine Visual Studio stores its local settings in a file called VCComponents.dat. It's a text file, so perhaps you could find a way of placing your settings directly in there. The file is stored in my user's local AppData\Local\Microsoft\VC folder
Scripting the Visual Studio IDE
I'd like to create a script that will configure the Visual Studio IDE the way I like it. Nothing vastly complicated, just a few Tools/Options settings, adding some External Tools, that kind of thing. I know that this can be done inside VS with Import/Export Settings, but I'd like to be able to automate it from outside of VS. Is this possible, and if so, how? Edited to add: doing it from outside of VS is important to me -- I'm hoping to use this as part of a more general "configure this newly-Ghosted PC just the way I like it" script. Edited again: the solution seems to be to hack CurrentSettings.vssettings, or use AutoIt. Details below.
[ "Answering my own question, in two ways:\n\nIn VS2005/8, the things I mentioned (Tools/Options, External Tools) are all stored in the CurrentSettings.vssettings file, in the folder \"Visual Studio 200{5|8}\\Settings\". This file is just XML, and it can be edited programmatically by anything that knows how to parse XML. You can also just paste a new vssettings file over the top of the default one (at least, this works for me).\nThe larger question of configuring a virgin PC. It turns out that not everything I want to change has an API, so I need some way of pretending to be a user who is actually sitting there clicking on things. The best approach to this seems to be AutoIt, whose scripting language I will now have to learn in my Copious Free Time.\n\n", "An easy way is to use the macro recorder to do something simple, then look at the code it produces and edit it as you see fit. \n", "On my machine Visual Studio stores it's local settings in a file called VCComponents.dat. Its a text file, so perhaps you could find a way of placing your settings directly in there.\nThe file is stored in my users local AppData\\Local\\Microsoft\\VC folder\n" ]
[ 2, 1, 0 ]
[]
[]
[ "ide", "scripting", "visual_studio" ]
stackoverflow_0000042643_ide_scripting_visual_studio.txt
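Building on the accepted answer: since CurrentSettings.vssettings is plain XML, a small program can tweak it directly. A sketch only — the path and the PropertyValue node name below are examples of what such a file contains; inspect your own exported settings file for the exact nodes before relying on them:

using System;
using System.Xml;

class TweakVsSettings
{
    static void Main()
    {
        // Example path; the real file lives under the user's
        // "Visual Studio 2005\Settings" folder.
        string path = @"C:\Users\me\Documents\Visual Studio 2005\Settings\CurrentSettings.vssettings";

        XmlDocument doc = new XmlDocument();
        doc.Load(path);

        // Node name is illustrative -- check your own file.
        XmlNode tabSize = doc.SelectSingleNode("//PropertyValue[@name='TabSize']");
        if (tabSize != null)
            tabSize.InnerText = "4";

        doc.Save(path);
        Console.WriteLine("Settings updated.");
    }
}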
Q: WebDAV query trouble - unable to read body of e-mail Our group (corporate environment) needs to monitor a couple of faceless accounts' Outlook inbox for specific types of bounced e-mails. WebDAV (using C# 2.0) is one of the paths we've traveled and we're almost there, except for one minor problem: we're getting the response below for the e-mail body element <a:propstat> <a:status>HTTP/1.1 404 Resource Not Found</a:status> - <a:prop> <a:htmldescription /> <a:textdescription /> </a:prop> </a:propstat> The only real commonality is that it only happens on messages that our Exchange server is returning to us as "Undeliverable". Note: All other e-mails come across just fine. Any thoughts? A: It looks like undeliverable messages in Exchange have a content-type of "multipart/report; report-type=delivery-status". Probably because they don't have a body, just a summary of the delivery attempt which can actually all be gathered from the Headers of the message. Perhaps the WebDAV access (I don't have access to an OWA account right now to check) doesn't know what to do with that, i.e. it just thinks the e-mails don't have a body.
WebDAV query trouble - unable to read body of e-mail
Our group (corporate environment) needs to monitor a couple of faceless accounts' Outlook inbox for specific types of bounced e-mails. WebDAV (using C# 2.0) is one of the paths we've traveled and we're almost there, except for one minor problem: we're getting the response below for the e-mail body element <a:propstat> <a:status>HTTP/1.1 404 Resource Not Found</a:status> - <a:prop> <a:htmldescription /> <a:textdescription /> </a:prop> </a:propstat> The only real commonality is that it only happens on messages that our Exchange server is returning to us as "Undeliverable". Note: All other e-mails come across just fine. Any thoughts?
[ "It looks like undeliverable messages in Exchange have a content-type of \"multipart/report; report-type=delivery-status\". Probably because they don't have a body, just a summary of the delivery attempt which can actually all be gathered from the Headers of the message. Perhaps the WebDAV access (I don't have access to an OWA account right now to check) doesn't know what to do with that, i.e. is just thinks the e-mails don't have a body.\n" ]
[ 1 ]
[]
[]
[ "c#", "c#_2.0", "email", "outlook", "webdav" ]
stackoverflow_0000045155_c#_c#_2.0_email_outlook_webdav.txt
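One way to test the theory above from the same C# 2.0 code path: instead of asking for the body properties, issue a PROPFIND for the standard DAV:getcontenttype property and check whether the bounced items really report multipart/report. A sketch — the mailbox URL and credentials are placeholders:

using System;
using System.IO;
using System.Net;
using System.Text;

class CheckContentType
{
    static void Main()
    {
        // Placeholder item URL for a message in the faceless mailbox.
        string url = "http://mail.example.com/exchange/faceless/Inbox/bounce.EML";

        string body =
            "<?xml version=\"1.0\"?>" +
            "<a:propfind xmlns:a=\"DAV:\">" +
            "<a:prop><a:getcontenttype/></a:prop>" +
            "</a:propfind>";

        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(url);
        req.Method = "PROPFIND";
        req.Credentials = CredentialCache.DefaultCredentials;
        req.ContentType = "text/xml";

        byte[] bytes = Encoding.UTF8.GetBytes(body);
        using (Stream s = req.GetRequestStream())
            s.Write(bytes, 0, bytes.Length);

        using (WebResponse resp = req.GetResponse())
        using (StreamReader reader = new StreamReader(resp.GetResponseStream()))
            Console.WriteLine(reader.ReadToEnd()); // look for multipart/report here
    }
}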
Q: dotNetNuke/Moodle integration anyone out there have a moodle module for dotnetnuke, or some kind of integration setup that at least allows SSO? A: This webpage provides details on how to implement Single Sign-on between DotNetNuke and Moodle.
dotNetNuke/Moodle integration
anyone out there have a moodle module for dotnetnuke, or some kind of integration setup that at least allows SSO?
[ "This webpage provides details on how to implement Single Sign-on between DotNetNuke and Moodle.\n" ]
[ 0 ]
[]
[]
[ "dotnetnuke", "moodle", "single_sign_on" ]
stackoverflow_0000044692_dotnetnuke_moodle_single_sign_on.txt
Q: Will the Garbage Collector call IDisposable.Dispose for me? The .NET IDisposable Pattern implies that if you write a finalizer, and implement IDisposable, that your finalizer needs to explicitly call Dispose. This is logical, and is what I've always done in the rare situations where a finalizer is warranted. However, what happens if I just do this: class Foo : IDisposable { public void Dispose(){ CloseSomeHandle(); } } and don't implement a finalizer, or anything. Will the framework call the Dispose method for me? Yes I realise this sounds dumb, and all logic implies that it won't, but I've always had 2 things at the back of my head which have made me unsure. Someone a few years ago once told me that it would in fact do this, and that person had a very solid track record of "knowing their stuff." The compiler/framework does other 'magic' things depending on what interfaces you implement (eg: foreach, extension methods, serialization based on attributes, etc), so it makes sense that this might be 'magic' too. While I've read a lot of stuff about it, and there's been lots of things implied, I've never been able to find a definitive Yes or No answer to this question. A: The .Net Garbage Collector calls the Object.Finalize method of an object on garbage collection. By default this does nothing and must be overridden if you want to free additional resources. Dispose is NOT automatically called and must be explicitly called if resources are to be released, such as within a 'using' or 'try finally' block. See http://msdn.microsoft.com/en-us/library/system.object.finalize.aspx for more information A: I want to emphasize Brian's point in his comment, because it is important. Finalizers are not deterministic destructors like in C++. As others have pointed out, there is no guarantee of when it will be called, and indeed, if you have enough memory, whether it will ever be called. But the bad thing about finalizers is that, as Brian said, it causes your object to survive a garbage collection. This can be bad. Why? As you may or may not know, the GC is split into generations - Gen 0, 1 and 2, plus the Large Object Heap. Split is a loose term - you get one block of memory, but there are pointers to where the Gen 0 objects start and end. The thought process is that you'll likely use lots of objects that will be short lived. So those should be easy and fast for the GC to get to - Gen 0 objects. So when there is memory pressure, the first thing it does is a Gen 0 collection. Now, if that doesn't resolve enough pressure, then it goes back and does a Gen 1 sweep (redoing Gen 0), and then if still not enough, it does a Gen 2 sweep (redoing Gen 1 and Gen 0). So cleaning up long lived objects can take a while and be rather expensive (since your threads may be suspended during the operation). This means that if you do something like this: ~MyClass() { } Your object, no matter what, will live to Generation 2. This is because the GC has no way of calling the finalizer during garbage collection. So objects that have to be finalized are moved to a special queue to be cleaned out by a different thread (the finalizer thread - which if you kill makes all kinds of bad things happen). This means your objects hang around longer, and potentially force more garbage collections. So, all of that is just to drive home the point that you want to use IDisposable to clean up resources whenever possible and seriously try to find ways around using the finalizer. It's in your application's best interests. A: There's lots of good discussion already here, and I'm a little late to the party, but I wanted to add a few points myself. The Garbage collector will never directly execute a Dispose method for you. The GC will execute finalizers when it feels like it. One common pattern that is used for objects that have a finalizer is to have it call a method which is by convention defined as Dispose(bool disposing) passing false to indicate that the call was made due to finalization rather than an explicit Dispose call. This is because it is not safe to make any assumptions about other managed objects while finalizing an object (they may have already been finalized). class SomeObject : IDisposable { IntPtr _SomeNativeHandle; FileStream _SomeFileStream; // Something useful here ~ SomeObject() { Dispose(false); } public void Dispose() { Dispose(true); } protected virtual void Dispose(bool disposing) { if(disposing) { GC.SuppressFinalize(this); //Because the object was explicitly disposed, there will be no need to //run the finalizer. Suppressing it reduces pressure on the GC //The managed reference to an IDisposable is disposed only if the _SomeFileStream.Dispose(); } //Regardless, clean up the native handle ourselves. Because it is simply a member // of the current instance, the GC can't have done anything to it, // and this is the only place to safely clean up if(IntPtr.Zero != _SomeNativeHandle) { NativeMethods.CloseHandle(_SomeNativeHandle); _SomeNativeHandle = IntPtr.Zero; } } } That's the simple version, but there are a lot of nuances that can trip you up on this pattern. The contract for IDisposable.Dispose indicates that it must be safe to call multiple times (calling Dispose on an object that was already disposed should do nothing). It can get very complicated to properly manage an inheritance hierarchy of disposable objects, especially if different layers introduce new Disposable and unmanaged resources. In the pattern above Dispose(bool) is virtual to allow it to be overridden so that it can be managed, but I find it to be error-prone. In my opinion, it is much better to completely avoid having any types that directly contain both disposable references and native resources that may require finalization. SafeHandles provide a very clean way of doing this by encapsulating native resources into disposables that internally provide their own finalization (along with a number of other benefits like removing the window during P/Invoke where a native handle could be lost due to an asynchronous exception). Simply defining a SafeHandle makes this trivial: private class SomeSafeHandle : SafeHandleZeroOrMinusOneIsInvalid { public SomeSafeHandle() : base(true) { } protected override bool ReleaseHandle() { return NativeMethods.CloseHandle(handle); } } Allows you to simplify the containing type to: class SomeObject : IDisposable { SomeSafeHandle _SomeSafeHandle; FileStream _SomeFileStream; // Something useful here public virtual void Dispose() { _SomeSafeHandle.Dispose(); _SomeFileStream.Dispose(); } } A: I don't think so. You have control over when Dispose is called, which means you could in theory write disposal code that makes assumptions about (for instance) the existence of other objects. You have no control over when the finalizer is called, so it would be iffy to have the finalizer automatically call Dispose on your behalf. EDIT: I went away and tested, just to make sure: class Program { static void Main(string[] args) { Fred f = new Fred(); f = null; GC.Collect(); GC.WaitForPendingFinalizers(); Console.WriteLine("Fred's gone, and he's not coming back..."); Console.ReadLine(); } } class Fred : IDisposable { ~Fred() { Console.WriteLine("Being finalized"); } void IDisposable.Dispose() { Console.WriteLine("Being Disposed"); } } A: Not in the case you describe, but the GC will call the Finalizer for you, if you have one. HOWEVER. The next garbage collection, instead of being collected, the object will go into the finalization queue, everything gets collected, then its finalizer is called. The next collection after that it will be freed. Depending on the memory pressure of your app, you may not have a GC for that object generation for a while. So in the case of say, a file stream or a db connection, you may have to wait a while for the unmanaged resource to be freed in the finalizer call for a while, causing some issues. A: The GC will not call dispose. It may call your finalizer, but even this isn't guaranteed under all circumstances. See this article for a discussion of the best way to handle this. A: No, it's not called. But this makes it easy to avoid forgetting to dispose your objects. Just use the using keyword. I did the following test for this: class Program { static void Main(string[] args) { Foo foo = new Foo(); foo = null; Console.WriteLine("foo is null"); GC.Collect(); Console.WriteLine("GC Called"); Console.ReadLine(); } } class Foo : IDisposable { public void Dispose() { Console.WriteLine("Disposed!"); } } A: The documentation on IDisposable gives a pretty clear and detailed explanation of the behavior, as well as example code. The GC will NOT call the Dispose() method on the interface, but it will call the finalizer for your object. A: The IDisposable pattern was created primarily to be called by the developer. If you have an object that implements IDisposable, the developer should either implement the using keyword around the context of the object or call the Dispose method directly. The fail safe for the pattern is to implement the finalizer calling the Dispose() method. If you don't do that you may create some memory leaks, i.e.: if you create some COM wrapper and never call System.Runtime.InteropServices.Marshal.ReleaseComObject(comObject) (which would be placed in the Dispose method). There is no magic in the CLR to call Dispose methods automatically other than tracking objects that contain finalizers and storing them in the Finalizer table by the GC and calling them when some clean up heuristics kick in by the GC.
Will the Garbage Collector call IDisposable.Dispose for me?
The .NET IDisposable Pattern implies that if you write a finalizer, and implement IDisposable, that your finalizer needs to explicitly call Dispose. This is logical, and is what I've always done in the rare situations where a finalizer is warranted. However, what happens if I just do this: class Foo : IDisposable { public void Dispose(){ CloseSomeHandle(); } } and don't implement a finalizer, or anything. Will the framework call the Dispose method for me? Yes I realise this sounds dumb, and all logic implies that it won't, but I've always had 2 things at the back of my head which have made me unsure. Someone a few years ago once told me that it would in fact do this, and that person had a very solid track record of "knowing their stuff." The compiler/framework does other 'magic' things depending on what interfaces you implement (eg: foreach, extension methods, serialization based on attributes, etc), so it makes sense that this might be 'magic' too. While I've read a lot of stuff about it, and there's been lots of things implied, I've never been able to find a definitive Yes or No answer to this question.
[ "The .Net Garbage Collector calls the Object.Finalize method of an object on garbage collection. By default this does nothing and must be overidden if you want to free additional resources.\nDispose is NOT automatically called and must be explicity called if resources are to be released, such as within a 'using' or 'try finally' block\nsee http://msdn.microsoft.com/en-us/library/system.object.finalize.aspx for more information\n", "I want to emphasize Brian's point in his comment, because it is important.\nFinalizers are not deterministic destructors like in C++. As others have pointed out, there is no guarantee of when it will be called, and indeed if you have enough memory, if it will ever be called.\nBut the bad thing about finalizers is that, as Brian said, it causes your object to survive a garbage collection. This can be bad. Why?\nAs you may or may not know, the GC is split into generations - Gen 0, 1 and 2, plus the Large Object Heap. Split is a loose term - you get one block of memory, but there are pointers of where the Gen 0 objects start and end. \nThe thought process is that you'll likely use lots of objects that will be short lived. So those should be easy and fast for the GC to get to - Gen 0 objects. So when there is memory pressure, the first thing it does is a Gen 0 collection. \nNow, if that doesn't resolve enough pressure, then it goes back and does a Gen 1 sweep (redoing Gen 0), and then if still not enough, it does a Gen 2 sweep (redoing Gen 1 and Gen 0). So cleaning up long lived objects can take a while and be rather expensive (since your threads may be suspended during the operation).\nThis means that if you do something like this:\n~MyClass() { }\n\nYour object, no matter what, will live to Generation 2. This is because the GC has no way of calling the finalizer during garbage collection. So objects that have to be finalized are moved to a special queue to be cleaned out by a different thread (the finalizer thread - which if you kill makes all kinds of bad things happen). This means your objects hang around longer, and potentially force more garbage collections.\nSo, all of that is just to drive home the point that you want to use IDisposable to clean up resources whenever possible and seriously try to find ways around using the finalizer. It's in your application's best interests.\n", "There's lots of good discussion already here, and I'm a little late to the party, but I wanted to add a few points myself.\n\nThe Garbage collecter will never directly execute a Dispose method for you.\nThe GC will execute finalizers when it feels like it.\nOne common pattern that is used for objects that have a finalizer is to have it call a method which is by convention defined as Dispose(bool disposing) passing false to indicate that the call was made due to finalization rather than an explicit Dispose call.\nThis is because it is not safe to make any assumptions about other managed objects while finalizing an object (they may have already been finalized).\n\n\nclass SomeObject : IDisposable {\n IntPtr _SomeNativeHandle;\n FileStream _SomeFileStream;\n\n // Something useful here\n\n ~ SomeObject() {\n Dispose(false);\n }\n\n public void Dispose() {\n Dispose(true);\n }\n\n protected virtual void Dispose(bool disposing) {\n if(disposing) {\n GC.SuppressFinalize(this);\n //Because the object was explicitly disposed, there will be no need to \n //run the finalizer. 
Suppressing it reduces pressure on the GC\n\n //The managed reference to an IDisposable is disposed only if the \n _SomeFileStream.Dispose();\n }\n\n //Regardless, clean up the native handle ourselves. Because it is simple a member\n // of the current instance, the GC can't have done anything to it, \n // and this is the onlyplace to safely clean up\n\n if(IntPtr.Zero != _SomeNativeHandle) {\n NativeMethods.CloseHandle(_SomeNativeHandle);\n _SomeNativeHandle = IntPtr.Zero;\n }\n }\n}\n\nThat's the simple version, but there are a lot of nuances that can trip you up on this pattern. \n\nThe contract for IDisposable.Dispose indicates that it must be safe to call multiple times (calling Dispose on an object that was already disposed should do nothing)\nIt can get very complicated to properly manage an inheritance hierarchy of disposable objects, especially if different layers introduce new Disposable and unmanaged resources. In the pattern above Dispose(bool) is virtual to allow it to be overridden so that it can be managed, but I find it to be error-prone.\n\nIn my opinion, it is much better to completely avoid having any types that directly contain both disposable references and native resources that may require finalization. SafeHandles provide a very clean way of doing this by encapsulating native resources into disposable that internally provide their own finalization (along with a number of other benefits like removing the window during P/Invoke where a native handle could be lost due to an asynchronous exception).\nSimply defining a SafeHandle makes this Trivial:\n\nprivate class SomeSafeHandle\n : SafeHandleZeroOrMinusOneIsInvalid {\n public SomeSafeHandle()\n : base(true)\n { }\n\n protected override bool ReleaseHandle()\n { return NativeMethods.CloseHandle(handle); }\n}\n\nAllows you to simplify the containing type to:\n\nclass SomeObject : IDisposable {\n SomeSafeHandle _SomeSafeHandle;\n FileStream _SomeFileStream;\n // Something useful here\n public virtual void Dispose() {\n _SomeSafeHandle.Dispose();\n _SomeFileStream.Dispose();\n }\n}\n\n", "I don't think so. You have control over when Dispose is called, which means you could in theory write disposal code that makes assumptions about (for instance) the existence of other objects. You have no control over when the finalizer is called, so it would be iffy to have the finalizer automatically call Dispose on your behalf.\n\nEDIT: I went away and tested, just to make sure:\nclass Program\n{\n static void Main(string[] args)\n {\n Fred f = new Fred();\n f = null;\n GC.Collect();\n GC.WaitForPendingFinalizers();\n Console.WriteLine(\"Fred's gone, and he's not coming back...\");\n Console.ReadLine();\n }\n}\n\nclass Fred : IDisposable\n{\n ~Fred()\n {\n Console.WriteLine(\"Being finalized\");\n }\n\n void IDisposable.Dispose()\n {\n Console.WriteLine(\"Being Disposed\");\n }\n}\n\n", "Not in the case you describe,\nBut the GC will call the Finalizer for you, if you have one.\nHOWEVER. The next garbage collection ,instead of being collected, the object will go into the finalization que, everything gets collected, then it's finalizer called. The next collection after that it will be freed.\nDepending on the memory pressure of your app, you may not have a gc for that object generation for a while. So in the case of say, a file stream or a db connection, you may have to wait a while for the unmanaged resource to be freed in the finalizer call for a while, causing some issues.\n", "The GC will not call dispose. 
It may call your finalizer, but even this isn't guaranteed under all circumstances.\nSee this article for a discussion of the best way to handle this.\n", "No, it's not called.\nBut this makes easy to don't forget to dispose your objects. Just use the using keyword.\nI did the following test for this:\nclass Program\n{\n static void Main(string[] args)\n {\n Foo foo = new Foo();\n foo = null;\n Console.WriteLine(\"foo is null\");\n GC.Collect();\n Console.WriteLine(\"GC Called\");\n Console.ReadLine();\n }\n}\n\nclass Foo : IDisposable\n{\n public void Dispose()\n {\n\n Console.WriteLine(\"Disposed!\");\n }\n\n", "The documentation on IDisposable gives a pretty clear and detailed explaination of the behavior, as well as example code. The GC will NOT call the Dispose() method on the interface, but it will call the finalizer for your object.\n", "The IDisposable pattern was created primarily to be called by the developer, if you have an object that implements IDispose the developer should either implement the using keyword around the context of the object or call the Dispose method directly.\nThe fail safe for the pattern is to implement the finalizer calling the Dispose() method. If you don't do that you may create some memory leaks i.e.: If you create some COM wrapper and never call the System.Runtime.Interop.Marshall.ReleaseComObject(comObject) (which would be placed in the Dispose method).\nThere is no magic in the clr to call Dispose methods automatically other than tracking objects that contain finalizers and storing them in the Finalizer table by the GC and calling them when some clean up heuristics kick in by the GC.\n" ]
[ 129, 71, 35, 6, 5, 2, 1, 0, 0 ]
[]
[]
[ ".net", "dispose", "idisposable" ]
stackoverflow_0000045036_.net_dispose_idisposable.txt
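The practical takeaway repeated in several answers above is the using statement. For completeness, a minimal sketch of it; the compiler expands using into a try/finally that calls Dispose, so cleanup is deterministic and does not involve the GC at all:

using System;

class Foo : IDisposable
{
    public void Dispose() { Console.WriteLine("Disposed!"); }
}

class Demo
{
    static void Main()
    {
        // Dispose runs at the end of the block, even if an exception
        // is thrown inside it -- no finalizer or GC involvement needed.
        using (Foo foo = new Foo())
        {
            Console.WriteLine("working with foo");
        } // foo.Dispose() executes here
    }
}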
Q: Document or RPC based web services My gut feel is that document based web services are preferred in practice - is this other people's experience? Are they easier to support? (I noted that SharePoint uses Any for the "document type" in its WSDL interface, I guess that makes it Document based). Also - are people offering both WSDL and REST-type services now for the same functionality? WSDL is popular for code generation, but for front ends like PHP and Rails they seem to prefer REST. A: Document versus RPC is only a question if you are using SOAP Web Services which require a service description (WSDL). RESTful web services do not use WSDL because the service can't be described by it, and the feeling is that REST is simpler and easier to understand. Some people have proposed WADL as a way to describe REST services. Languages like Python, Ruby and PHP make it easier to work with REST. The WSDL is used to generate C# code (a web service proxy) that can be easily called from a static language. This happens when you add a Service Reference or Web Reference in Visual Studio. Whether you provide SOAP or REST services depends on your user population. Whether the services are to be used over the internet or just inside your organization affects your choice. SOAP may have some features (WS-* standards) that work well for B2B or internal use, but suck for an internet service. Document/literal versus RPC for SOAP services is described in this IBM developerWorks article. Document/literal is generally considered the best to use in terms of interoperability (Java to .NET etc). As to whether it is easier to support, that depends on your circumstances. My personal view is that people tend to make this stuff more complicated than it needs to be, and REST's simpler approach is superior. A: As mentioned it is better to choose the Document Literal over RPC encoded whenever possible. It is true that the old java libraries (Axis1, Glue and other prehistoric stuff) support only RPC encoded, however today's most modern Java SOAP libs simply do not support it (e.g. AXIS2, XFire, CXF). Therefore try to expose an RPC encoded service only if you know that you need to deal with a consumer that cannot do better. But then again maybe just XML RPC could help for these legacy implementations. A: BiranLy's answer is excellent. I would just like to add that document-vs-RPC can come down to implementation issues as well. We have found Microsoft to be Document-preferring, while our Java-based libraries were RPC-based. Whatever you choose, make sure you know what other potential clients will assume as well.
Document or RPC based web services
My gut feel is that document based web services are preferred in practice - is this other people's experience? Are they easier to support? (I noted that SharePoint uses Any for the "document type" in its WSDL interface, I guess that makes it Document based). Also - are people offering both WSDL and REST-type services now for the same functionality? WSDL is popular for code generation, but for front ends like PHP and Rails they seem to prefer REST.
[ "Document versus RPC is only a question if you are using SOAP Web Services which require a service description (WSDL). RESTful web services do not not use WSDL because the service can't be described by it, and the feeling is that REST is simpler and easier to understand. Some people have proposed WADL as a way to describe REST services.\nLanguages like Python, Ruby and PHP make it easier to work with REST. the WSDL is used to generate C# code (a web service proxy) that can be easily called from a static language. This happens when you add a Service Reference or Web Reference in Visual Studio.\nWhether you provide SOAP or REST services depends on your user population. Whether the services are to be used over the internet or just inside your organization affects your choice. SOAP may have some features (WS-* standards) that work well for B2B or internal use, but suck for an internet service.\nDocument/literal versus RPC for SOAP services are described on this IBM DevelopWorks article. Document/literal is generally considered the best to use in terms of interoperability (Java to .NET etc). As to whether it is easier to support, that depends on your circumstances. My personal view is that people tend to make this stuff more complicated than it needs to be, and REST's simpler approach is superior.\n", "As mentioned it is better to choose the Document Literal over RPC encoded whenever possible.\nIt is true that the old java libraries (Axis1, Glue and other prehistoric stuff) support only RPC encoded, however in today's most modern Java SOAP libs just does not support it (e.x. AXIS2, XFire, CXF). \nTherefore try to expose RPC encoded service only if you know that you need to deal with a consumer that can not do better. But then again maybe just XML RPC could help for these legacy implementations.\n", "BiranLy's answer is excellent. I would just like to add that document-vs-RPC can come down to implementation issues as well. We have found Microsoft to be Document-preferring, while our Java-based libraries were RPC-based. Whatever you choose, make sure you know what other potential clients will assume as well.\n" ]
[ 30, 5, 1 ]
[]
[]
[ "rest", "web_services", "wsdl" ]
stackoverflow_0000005598_rest_web_services_wsdl.txt
Q: fprintf returns success but can't write to an existing file In my code fprintf returns successfully by returning the number of bytes written to the stream, but in the actual file the string I wrote is not there.
A: The output is probably just buffered. Try closing the file with fclose() or calling fflush() on the stream to force the string out to the file.
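A minimal C sketch of the fix described above (the file name is arbitrary). fprintf's return value only says how many characters were handed to the stdio buffer, so the data may not reach the file until the buffer is flushed or the stream is closed:

    #include <stdio.h>

    int main(void)
    {
        FILE *fp = fopen("out.txt", "a");
        if (fp == NULL)
            return 1;

        /* "Success" here only means the bytes entered the stdio buffer. */
        int n = fprintf(fp, "hello\n");

        fflush(fp);   /* push the buffered bytes out to the file now */
        fclose(fp);   /* flushes any remainder and releases the stream */
        return n < 0;
    }

Note that the right call for a FILE* stream is fclose(), not the lower-level close() used for file descriptors; close() operates on the descriptor and bypasses the stdio buffer entirely.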
fprintf returns success but can't write to an existing file
In my code fprintf returns successfully by returning the number of bytes written to the stream, but in the actual file the string I wrote is not there.
[ "The output is probably just buffered. Try closing the file using close() or call fflush() on the stream to force the string to the file.\n" ]
[ 5 ]
[]
[]
[ "c", "file", "io", "printf", "stream" ]
stackoverflow_0000045571_c_file_io_printf_stream.txt
Q: Path to Program-Files on remote computer How do I determine the (local) path of the "Program Files" directory on a remote computer? There does not appear to be any version of SHGetFolderPath (or a related function) that takes the name of a remote computer as a parameter. I guess I could try to query HKLM\Software\Microsoft\Windows\CurrentVersion\ProgramFilesDir using the remote registry, but I was hoping there would be a "documented" way of doing it.
A: Many of the standard paths require a user to be logged in, especially the SH* functions, as those are provided by the "shell", that is, Explorer. I suspect the only way you're going to get the right path is through the registry, like you already mentioned.
A: This is what I ended up doing (pszComputer must be of the form "\\name"; nPath is the size of pszPath, in TCHARs):
DWORD GetProgramFilesDir(PCTSTR pszComputer, PTSTR pszPath, DWORD& nPath) 
{
    DWORD n;
    HKEY hHKLM;
    if ((n = RegConnectRegistry(pszComputer, HKEY_LOCAL_MACHINE, &hHKLM)) == ERROR_SUCCESS)
    {
        HKEY hWin;
        if ((n = RegOpenKeyEx(hHKLM, _T("Software\\Microsoft\\Windows\\CurrentVersion"), 0, KEY_READ, &hWin)) == ERROR_SUCCESS)
        {
            DWORD nType, cbPath = nPath * sizeof(TCHAR);
            n = RegQueryValueEx(hWin, _T("ProgramFilesDir"), NULL, &nType, reinterpret_cast<PBYTE>(pszPath), &cbPath);
            nPath = cbPath / sizeof(TCHAR);
            RegCloseKey(hWin);
        }
        RegCloseKey(hHKLM);
    }
    return n;
}
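For reference, a hypothetical call site for the function above (the machine name \\server01 is made up, and error handling is elided):

    TCHAR szPath[MAX_PATH];
    DWORD nPath = MAX_PATH;
    // Ask the remote machine's registry where its Program Files directory lives.
    if (GetProgramFilesDir(_T("\\\\server01"), szPath, nPath) == ERROR_SUCCESS)
    {
        // szPath now holds the remote machine's local path, e.g. "C:\Program Files".
    }

Note that the Remote Registry service must be running on the target machine, and the caller needs sufficient rights, for RegConnectRegistry to succeed.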
Path to Program-Files on remote computer
How do I determine the (local) path of the "Program Files" directory on a remote computer? There does not appear to be any version of SHGetFolderPath (or a related function) that takes the name of a remote computer as a parameter. I guess I could try to query HKLM\Software\Microsoft\Windows\CurrentVersion\ProgramFilesDir using the remote registry, but I was hoping there would be a "documented" way of doing it.
[ "Many of the standard paths require a user to be logged in, especially the SH* functions as those are provided by the \"shell\", that is, Explorer. I suspect the only way you're going to get the right path is through the registry like you already mentioned.\n", "This is what I ended up doing: (pszComputer must be on the form \"\\\\name\". nPath is size of pszPath (in TCHARs))\nDWORD GetProgramFilesDir(PCTSTR pszComputer, PTSTR pszPath, DWORD& nPath) \n{\n DWORD n;\n HKEY hHKLM;\n if ((n = RegConnectRegistry(pszComputer, HKEY_LOCAL_MACHINE, &hHKLM)) == ERROR_SUCCESS)\n {\n HKEY hWin;\n if ((n = RegOpenKeyEx(hHKLM, _T(\"Software\\\\Microsoft\\\\Windows\\\\CurrentVersion\"), 0, KEY_READ, &hWin)) == ERROR_SUCCESS)\n {\n DWORD nType, cbPath = nPath * sizeof(TCHAR);\n n = RegQueryValueEx(hWin, _T(\"ProgramFilesDir\"), NULL, &nType, reinterpret_cast<PBYTE>(pszPath), &cbPath);\n nPath = cbPath / sizeof(TCHAR);\n RegCloseKey(hWin);\n }\n RegCloseKey(hHKLM);\n }\n return n;\n}\n\n" ]
[ 1, 1 ]
[]
[]
[ "winapi" ]
stackoverflow_0000040769_winapi.txt
Q: Postback events from within DataView I'm presenting information from a DataTable on my page and would like to add some sorting functionality which goes a bit beyond a straightforward column sort. As such I have been trying to place LinkButtons in the HeaderItems of my GridView which post back to functions that change session information before reloading the page. Clicking my links DOES cause a post-back, but they don't seem to generate any OnClick events, as my OnClick functions don't get executed. I have AutoEventWireup set to true, and if I move the links out of the GridView they work fine. I've got around the problem by creating regular anchors, appending queries to their hrefs and checking for them at page load, but I'd prefer C# to be doing the grunt work. Any ideas? Update: To clarify, the IDs of the controls match their OnClick function names.
A: You're on the right track, but try working with the Command Name/Argument of the LinkButton. Try something like this:
In the HeaderTemplate of the TemplateField, add a LinkButton and set the CommandName and CommandArgument
<HeaderTemplate> 
    <asp:LinkButton ID="LinkButton1" runat="server" CommandName="sort" CommandArgument="Products" Text='<%# Bind("ProductName") %>' />
</HeaderTemplate>

Next, handle the RowCommand event of the GridView
protected void GridView1_RowCommand(object sender, GridViewCommandEventArgs e)
{
    if (e.CommandName == "sort")
    {
        //Now sort by e.CommandArgument

    }
}

This way, you have a lot of control of your LinkButtons and you don't need to do much work to keep track of them.
A: Two things to keep in mind when using events on dynamically generated controls in ASP.Net: Firstly, the controls should ideally be created in the Page.Init event handler. This is to ensure that the controls have already been created before the event handling code is run. Secondly, you must assign the same value to the control's ID property, so that the event handler code knows that that was the control that should handle the event.
A: You can specify the method to call when the link is clicked.
<HeaderTemplate>
    <asp:LinkButton
        ID="lnkHdr1"
        Text="Hdr1"
        OnCommand="lnkHdr1_OnCommand"
        CommandArgument="Hdr1"
        runat="server"></asp:LinkButton>
</HeaderTemplate>

The code-behind:
protected void lnkHdr1_OnCommand(object sender, CommandEventArgs e)
{
    // e.CommandArgument
}
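One detail the snippets above assume: the grid itself must be wired to the RowCommand handler, either declaratively or in code. A minimal sketch (the control ID is illustrative):

    <asp:GridView ID="GridView1" runat="server" OnRowCommand="GridView1_RowCommand">
        ...
    </asp:GridView>

With AutoEventWireup, only the Page-level events (Page_Load and friends) are bound by method name; events on controls such as the GridView still need an explicit On... attribute or a += subscription in code, which is consistent with the behaviour described in the question.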
Postback events from within DataView
I'm presenting information from a DataTable on my page and would like to add some sorting functionality which goes a bit beyond a straightforward column sort. As such I have been trying to place LinkButtons in the HeaderItems of my GridView which post back to functions that change session information before reloading the page. Clicking my links DOES cause a post-back, but they don't seem to generate any OnClick events, as my OnClick functions don't get executed. I have AutoEventWireup set to true, and if I move the links out of the GridView they work fine. I've got around the problem by creating regular anchors, appending queries to their hrefs and checking for them at page load, but I'd prefer C# to be doing the grunt work. Any ideas? Update: To clarify, the IDs of the controls match their OnClick function names.
[ "You're on the right track but try working with the Command Name/Argument of the LinkButton. Try something like this:\nIn the HeaderTemplate of the the TemplateField, add a LinkButton and set the CommandName and CommandArgument\n<HeaderTemplate> \n <asp:LinkButton ID=\"LinkButton1\" runat=\"server\" CommandName=\"sort\" CommandArgument=\"Products\" Text=\"<%# Bind('ProductName\")' />\n</HeaderTemplate>\n\nNext, set the RowCommand event of the GridView\nprotected void GridView1_RowCommand(object sender, GridViewCommandEventArgs e)\n{\n if (e.CommandName == \"sort\")\n {\n //Now sort by e.CommandArgument\n\n }\n}\n\nThis way, you have a lot of control of your LinkButtons and you don't need to do much work to keep track of them.\n", "Two things to keep in mind when using events on dynamically generated controls in ASP.Net:\n\nFirstly, the controls should ideally be created in the Page.Init event handler. This is to ensure that the controls have already been created before the event handling code is ran.\nSecondly, you must assign the same value to the controls ID property, so that the event handler code knows that that was the control that should handle the event.\n\n", "You can specify the method to call when the link is clicked.\n<HeaderTemplate>\n <asp:LinkButton\n ID=\"lnkHdr1\"\n Text=\"Hdr1\"\n OnCommand=\"lnkHdr1_OnCommand\"\n CommandArgument=\"Hdr1\"\n runat=\"server\"></asp:LinkButton>\n</HeaderTemplate>\n\nThe code-behind:\nprotected void lnkHdr1_OnCommand(object sender, CommandEventArgs e)\n{\n // e.CommandArgument\n}\n\n" ]
[ 2, 0, 0 ]
[]
[]
[ ".net", "asp.net", "c#", "gridview", "postback" ]
stackoverflow_0000045475_.net_asp.net_c#_gridview_postback.txt
Q: How to get browser IP or hostname? I have a web application that should behave differently for internal users than external ones. The web application is available over the Internet, and therefore obviously to the internal users as well. All the users are anonymous, not authenticated, but the page should render differently for internal users than external. What I'm doing in my code is to use Request.UserHostName and then Dns.GetHostEntry. The result is then compared to a setting in my web.config (that holds something like *.mydomain.local). If the comparison gives a positive result, then I render the HTML that the internal user should see; otherwise I render the HTML the external user should see. However, my problem is that I don't always get the expected value from Request.UserHostName. On the development site I get the IP-number (?) of the machine running the browser, but on the customer site I don't get the IP-number of the user machine, I get some other IP-number. The browsers don't have any proxies set or anything like that. Should I be using something other than Request.UserHostName?
A: I recommend using IP addresses as well. I'm dealing with this exact same situation setting up an authentication system right now, and the conditions described by Epso and Robin M are exactly what is happening. External users coming to the site give me their actual IP address, while all internal users provide the IP of the gateway machine (router) onto the private subnet the webservers sit on. To deal with it I just check for that one IP. If I get the IP of the gateway, I provide the internal access. If I get anything else, they get the external one, which requires additional authentication in my case. In yours, it would just mean a different interface.
A: Try Request.UserHostAddress, which returns the client's IP address. Assuming your internal network uses IP addresses reserved for LANs, it should be relatively simple to check if an IP is internal or external.
A: There might be a firewall that is doing some sort of NAT, to enable inside clients to use the external DNS name to reach the server. Is the IP-number you get on the customer site the same as the external customer server's IP? In that case you can hard-code for that one IP address. All internal computers behind that firewall will appear to have the same IP address, and you can classify them as "internal".
A: It looks like you're being returned a public-facing IP address. Get the user to go to http://www.myipaddress.com . If this is the same as the IP address returned to your software, then this is definitely the case. The only solution I can see to get around this is to either get them to connect to the machine holding the asp.net application via a VPN, or to use some other kind of authentication. The latter is probably the best option.
A: It does sound like there is a proxy between users and the server on the customer site (it doesn't need to be configured in the browser). It may be an internal or external proxy depending on your network configuration. I would avoid using the UserHostName for what is effectively authentication, as it is presented by the browser during the request and would be easy to spoof. IP address would be much more effective, as it's difficult to spoof an IP address in a TCP/IP connection (and maintain the connection). It's still weak authentication, but may be sufficient in this scenario.
Even if you are using IP address, if there's a NAT proxy between client and server, you may have to accept that anything coming through that proxy is trusted (I'm assuming that external/untrusted clients don't come through that proxy). If that isn't acceptable, you're back to other methods of authentication. Rather than requiring a logon or VPN connection, you might consider a permanent cookie or client certificates and only give those to internal clients, but you would need some way of delivering those to the client. You could certainly deliver a permanent cookie based on a one-time logon. Cookies can be spoofed in a similar way to the UserHostName; however, you've got a better opportunity to create a cookie value that is less guessable than a domain name.
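A rough C# sketch of the "reserved LAN ranges" check suggested above; the helper name is invented, it assumes IPv4, and it covers the RFC 1918 blocks plus loopback:

    // Hypothetical helper: returns true for RFC 1918 and loopback addresses.
    private static bool IsInternalAddress(string ip)
    {
        return ip == "127.0.0.1"
            || ip.StartsWith("10.")
            || ip.StartsWith("192.168.")
            || System.Text.RegularExpressions.Regex.IsMatch(ip, @"^172\.(1[6-9]|2[0-9]|3[01])\.");
    }

    // In the page:
    bool isInternal = IsInternalAddress(Request.UserHostAddress);

As the last answer points out, treat this as weak authentication: if a NAT device sits between the clients and the server, every request behind it will present the same (possibly internal-looking) address.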
How to get browser IP or hostname?
I have a web application that should behave differently for internal users than external ones. The web application is available over the Internet, and therefore obviously to the internal users as well. All the users are anonymous, not authenticated, but the page should render differently for internal users than external. What I'm doing in my code is to use Request.UserHostName and then Dns.GetHostEntry. The result is then compared to a setting in my web.config (that holds something like *.mydomain.local). If the comparison gives a positive result, then I render the HTML that the internal user should see; otherwise I render the HTML the external user should see. However, my problem is that I don't always get the expected value from Request.UserHostName. On the development site I get the IP-number (?) of the machine running the browser, but on the customer site I don't get the IP-number of the user machine, I get some other IP-number. The browsers don't have any proxies set or anything like that. Should I be using something other than Request.UserHostName?
[ "I recommend using IP addresses as well. I'm dealing with this exact same situation setting up an authentication system right now as well and the conditions described by Epso and Robin M are exactly what is happening. External users coming to the site give me their actual IP address while all internal users provide the IP of the gateway machine(router) on to the private subnet the webservers sit on.\nTo deal with it I just check for that one IP. If I get the IP of the gateway, I provide the internal access. If I get anything else they get the external one which requires additional authentication in my case. In yours, it would just mean a different interface.\n", "Try Request.UserHostAddress, which returns the client's IP address. Assuming your internal network uses IP addresses reserved for LANs, it should be relatively simple to check if an IP is internal or external.\n", "There might be a firewall that is doing some sort of NAT, to enable inside clients to use the external dns-name to reach the server.\nIs the IP-number you get on customer site the same at the external customer-server ip? In that case you can hard code for that one IP-address. All internal computers behind that firewall will appear to have to same ip-address and you can classify them as \"internal\".\n", "It looks like you're being returned a public facing IP Address. Get the user to go to http://www.myipaddress.com . If this is the same as the IP Address returned to your software, then this is definitely the case. \nThe only solution I can see to get around this is to either get them to connect to the machine holding the asp.net application via a VPN, or to use some other kind of authentication. The latter is probably the best option. \n", "It does sound like there is a proxy between users and the server on the customer site (it doesn't need to be configured in the browser). It may be an internal or external proxy depending on your network configuration.\nI would avoid using the UserHostName for what is effectively authentication as it is presented by the browser duing the request and would be easy to spoof. IP address would be much more effective as it's difficult to spoof an IP address in a TCP/IP connection (and maintain a connection). It's still weak authentication but may be sufficient in this scenario.\nEven if you are using IP address, if there's a NAT proxy between client and server, you may have to accept that anything coming through that proxy is trusted (I'm assuming that external/untrusted clients don't come through that proxy).\nIf that isn't acceptable, you're back to other methods of authentication. Rather than requiring a logon or VPN connection, you might consider a permanent cookie or client certificates and only give those to internal clients but you would need some way of delivering those to the client. You could certainly deliver a permanent cookie based on a one-time logon. Cookies can be spoofed in a similar way in that the UserHostName can be however you've got a better opportunity to create a cookie value that is less guessable than a domain name.\n" ]
[ 3, 2, 0, 0, 0 ]
[]
[]
[ "asp.net", "visual_studio" ]
stackoverflow_0000045553_asp.net_visual_studio.txt
Q: Adding functionality to Rails I'm working on a Rails app and am looking to include some functionality from "Getting the Hostname or IP in Ruby on Rails", a question I asked earlier. I'm having problems getting it to work. I was under the impression that I should just make a file in the lib directory, so I named it 'get_ip.rb', with the contents:
require 'socket'

module GetIP
  def local_ip
    orig, Socket.do_not_reverse_lookup = Socket.do_not_reverse_lookup, true # turn off reverse DNS resolution temporarily

    UDPSocket.open do |s|
      s.connect '64.233.187.99', 1
      s.addr.last
    end
  ensure
    Socket.do_not_reverse_lookup = orig
  end
end

I had also tried defining GetIP as a class but when I run the usual ruby script/console, I'm not able to use the local_ip method at all. Any ideas?
A: require will load a file. If that file contains any class/module definitions, then your other code will now be able to use them. If the file just contains code which is not in any modules, it will get run as if it were in the same place as your 'require' call (like a PHP include).
include is to do with modules.
It takes all the methods in the module, and adds them to your class. Like this:
class Orig
end

Orig.new.first_method # no such method

module MyModule
  def first_method
  end
end

class Orig
  include MyModule
end
Orig.new.first_method # will now run first_method as it's been added.

There's also extend which works like include does, but instead of adding the methods as instance methods, adds them as class methods, like this:
Note above, how when I wanted to access first_method, I created a new object of Orig class. That's what I mean by instance method.
class SecondClass
  extend MyModule
end
SecondClass.first_method # will call first_method

Note that in this example I'm not making any new objects, just calling the method directly on the class, as if it had been defined as self.first_method all along.
So there you go :-)
A: You haven't described how you're trying to use the method, so I apologize in advance if this is stuff you already know. The methods on a module never come into use unless the module is included into a class. Instance methods on a class require there to be an instance of the class. You probably want a class method instead. And the file itself should be loaded, generally through the require statement. If the following code is in the file getip.rb,
require 'socket'

class GetIP
  def self.local_ip
    orig, Socket.do_not_reverse_lookup = Socket.do_not_reverse_lookup, true

    UDPSocket.open do |s|
      s.connect '64.233.187.99', 1
      s.addr.last
    end
  ensure
    Socket.do_not_reverse_lookup = orig
  end
end

Then you should be able to run it by saying,
require 'getip'
GetIP.local_ip

A: require and include are two different things. require is to strictly load a file once from a load path. The loadpath is a string and this is the key used to determine if the file has already been loaded. include is used to "mix-in" modules into other classes. include is called on a module and the module methods are included as instance methods on the class.
module MixInMethods
  def mixed_in_method
    "I'm a part of #{self.class}"
  end
end

class SampleClass
  include MixInMethods
end

mixin_class = SampleClass.new
puts mixin_class.mixed_in_method # >> I'm a part of SampleClass

But many times the module you want to mix in is not in the same file as the target class. So you do a require 'module_file_name' and then inside the class you do an include module.
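Tying the answers back to the question: a minimal sketch of how the lib file could be pulled into the app, assuming GetIP stays a module as written (the controller placement is one option among several):

    # e.g. at the top of app/controllers/application.rb
    require 'get_ip'

    class ApplicationController < ActionController::Base
      include GetIP   # local_ip becomes an instance method on every controller
    end

From script/console, a require 'get_ip' followed by a top-level include GetIP mixes the module into the main object, after which local_ip can be called directly; calling GetIP.local_ip on the bare module will not work unless the method is defined with self (or module_function).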
Adding functionality to Rails
I'm working on a Rails app and am looking to include some functionality from "Getting the Hostname or IP in Ruby on Rails", a question I asked earlier. I'm having problems getting it to work. I was under the impression that I should just make a file in the lib directory, so I named it 'get_ip.rb', with the contents:
require 'socket'

module GetIP
  def local_ip
    orig, Socket.do_not_reverse_lookup = Socket.do_not_reverse_lookup, true # turn off reverse DNS resolution temporarily

    UDPSocket.open do |s|
      s.connect '64.233.187.99', 1
      s.addr.last
    end
  ensure
    Socket.do_not_reverse_lookup = orig
  end
end

I had also tried defining GetIP as a class but when I run the usual ruby script/console, I'm not able to use the local_ip method at all. Any ideas?
[ "require will load a file. If that file contains any class/module definitions, then your other code will now be able to use them. If the file just contains code which is not in any modules, it will get run as if it were in the same place as your 'require' call (like PHP include)\ninclude is to do with modules.\nIt takes all the methods in the module, and adds them to your class. Like this:\nclass Orig\nend\n\nOrig.new.first_method # no such method\n\nmodule MyModule\n def first_method\n end\nend\n\nclass Orig\n include MyModule\nend\nOrig.new.first_method # will now run first_method as it's been added.\n\nThere's also extend which works like include does, but instead of adding the methods as instance methods, adds them as class methods, like this:\nNote above, how when I wanted to access first_method, I created a new object of Orig class. That's what I mean by instance method.\nclass SecondClass\n extend MyModule\nend\nSecondClass.first_method # will call first_method\n\nNote that in this example I'm not making any new objects, just calling the method directly on the class, as if it had been defined as self.first_method all along.\nSo there you go :-)\n", "You haven't described how you're trying to use the method, so I apologize in advance if this is stuff you already know.\nThe methods on a module never come into use unless the module is included into a class. Instance methods on a class require there to be an instance of the class. You probably want a class method instead. And the file itself should be loaded, generally through the require statement.\nIf the following code is in the file getip.rb,\nrequire 'socket'\n\nclass GetIP\n def self.local_ip\n orig, Socket.do_not_reverse_lookup = Socket.do_not_reverse_lookup, true\n\n UDPSocket.open do |s|\n s.connect '64.233.187.99', 1\n s.addr.last\n end\n ensure\n Socket.do_not_reverse_lookup = orig\n end\nend\n\nThen you should be able to run it by saying,\nrequire 'getip'\nGetIP.local_ip\n\n", "require and include are two different things.\nrequire is to strictly load a file once from a load path. The loadpath is a string and this is the key used to determine if the file has already been loaded.\ninclude is used to \"mix-in\" modules into other classes. include is called on a module and the module methods are included as instance methods on the class.\n module MixInMethods\n def mixed_in_method\n \"I'm a part of #{self.class}\"\n end\n end\n\n class SampleClass\n include MixInMethods\n end\n\n mixin_class = SampleClass.new\n puts my_class.mixed_in_method # >> I'm a part of SampleClass\n\nBut many times the module you want to mix in is not in the same file as the target class. So you do a require 'module_file_name' and then inside the class you do an include module. \n" ]
[ 5, 4, 0 ]
[]
[]
[ "ruby", "ruby_on_rails" ]
stackoverflow_0000045253_ruby_ruby_on_rails.txt