Q:
NHibernate SetTimeout on ICriteria
Could someone tell me what units the SetTimeout(int) method in the ICriteria interface uses?
Is it milliseconds, seconds, minutes or other?
A:
A little bit of poking around suggests that it could be seconds:
Assuming that ICriteria is the same as the Criteria interface in Hibernate core, then the JavaDoc for org.hibernate.Criteria provides a hint - the "see also" link to java.sql.Statement.setQueryTimeout(). The latter refers to its timeout parameter as seconds.
Assuming that the NHibernate implementation follows the implied contract of that method, then that should be fine. However, for peace of mind's sake, I went and looked for some NHibernate specific stuff. There are various references to CommandTimeout; for example, here, related to NHibernate. Sure enough, the documentation for CommandTimeout states that it's seconds.
I almost didn't post the above, because I don't know the answer outright, and can't find any concrete documentation - but since there is so little on the issue, I figured it couldn't hurt to present these findings.
A:
I think it's seconds. The NHibernate API closely mirrors Hibernate Core for Java, where the Criteria.setTimeout(int) method uses seconds as the units (see also Statement.setQueryTimeout(int)).
Also, after looking at some NHibernate source, it appears that it's using that value to set the timeout for the underlying ADO.NET query, which uses seconds.
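For illustration, a minimal sketch assuming the seconds interpretation above is correct (the Order entity, its mapping and the query itself are hypothetical):

using NHibernate;

public class Order { /* hypothetical mapped entity */ }

public class OrderQueries
{
    public System.Collections.IList FindOrders(ISession session)
    {
        ICriteria criteria = session.CreateCriteria(typeof(Order))
            .SetTimeout(30); // intended as 30 seconds, not milliseconds

        return criteria.List();
    }
}

If the value really were milliseconds, a setting of 30 would make almost every query time out, which is an easy way to sanity-check the assumption against your own database.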
Q:
XML Parser Validation Report
Most XML parsers will give up after the first error in a document. In fact, IIRC, that's actually part of the 'official' spec for parsers.
I'm looking for something that will break that rule. It should take a given schema (assuming a valid schema) and an xml input and attempt to keep going after the first error and either raise an event for each error or return a list when finished, so I can use it to generate some kind of a report of the errors in the document. This requirement comes from above, so let's try to keep the purist "but it wouldn't make sense to keep going" comments to a minimum.
I'm looking for something that will evaluate both whether the document is well-formed and whether or not it conforms to the schema. Ideally it would evaluate those as different classes of error. I'd prefer a .Net solution but I could use a standalone .exe as well. If you know of one that uses a different platform go ahead and post it because someone else might find it useful.
Update:
I expect that most of the documents where I use this will be mostly well-formed. Maybe an & included as data instead of &amp; here and there, or an occasional misplaced tag. I don't expect the parser to be able to recover from anything, just to make a best effort to keep going. If a document is too out of whack it should spit out as much as it can followed by some kind of 'fatal, unable to continue' error. Otherwise the schema validation part is pretty easy.
A:
In fact, IIRC, that's actually part of the 'official' spec for parsers.
Official does not need to be quoted :)
fatal error
[Definition:] An error which a conforming XML processor must detect and report to the application. After encountering a fatal error, the processor may continue processing the data to search for further errors and may report such errors to the application. In order to support correction of errors, the processor may make unprocessed data from the document (with intermingled character data and markup) available to the application. Once a fatal error is detected, however, the processor must not continue normal processing (i.e., it must not continue to pass character data and information about the document's logical structure to the application in the normal way).
You could use xmllint with the --recover option.
A:
Sounds like you might want TagSoup. It may not be exactly what you want, but as far as bad-document-handling parsers go it's the gold standard.
A:
Xerces has a feature you can set on to try and continue after a fatal error:
http://apache.org/xml/features/continue-after-fatal-error
True: Attempt to continue parsing after a fatal error.
False: Stops parse on first fatal error.
Default: false
Note: The behavior of the parser when this feature is set to true is undetermined! Therefore use this feature with extreme caution because the parser may get stuck in an infinite loop or worse.
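For the .NET side of the question, a hedged sketch: System.Xml can report every schema violation in a single pass through a ValidationEventHandler, while well-formedness problems still surface as a fatal XmlException - which gives the two classes of error asked for, though without recovery from broken markup. The schema.xsd and input.xml file names are placeholders:

using System;
using System.Collections.Generic;
using System.Xml;
using System.Xml.Schema;

class ValidationReport
{
    static void Main()
    {
        List<string> errors = new List<string>();

        XmlReaderSettings settings = new XmlReaderSettings();
        settings.ValidationType = ValidationType.Schema;
        settings.Schemas.Add(null, "schema.xsd");        // placeholder schema file
        settings.ValidationEventHandler += delegate(object sender, ValidationEventArgs e)
        {
            // Schema violations are collected here and reading continues.
            errors.Add(e.Severity + " (line " + e.Exception.LineNumber + "): " + e.Message);
        };

        try
        {
            using (XmlReader reader = XmlReader.Create("input.xml", settings))  // placeholder input
            {
                while (reader.Read()) { }                // just walk the document
            }
        }
        catch (XmlException ex)
        {
            // Well-formedness errors are fatal, as the XML spec quoted above requires.
            errors.Add("Fatal: " + ex.Message);
        }

        foreach (string error in errors)
            Console.WriteLine(error);
    }
}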
Q:
How to embed control change commands inside of a MIDI file
I am making a simple game in order to learn a new language. I am in the process of collecting some music for the game and would like to use the MIDI format so that I can control the flow of the track (i.e., I would like to have an introduction that only plays once and does not play again when the song loops.)
I am having a tough time finding information on how to modify existing MIDI files so that they may send a control change signal to the synthesizer. Has anyone had experience with this?
I think that I should have been more clear with my original question. I am using an existing game engine which takes care of playing the music. I am under the impression that this control change value must be embedded directly in the MIDI file itself as I have no control over the synthesizer. From the manual:
MIDI files are played via the DirectMusic Synthesizer. If a BGM MIDI file contains the control change value 111, that value is recognized as where the song will start repeating after it reaches the end.
I wish I could do it programmatically. I suppose what I am after here is some sort of editor which will allow me to modify the MIDI file that I already have.
A:
Sounds like what you really want is a MIDI editor.
A:
Try looking in the MIDI 1.0 spec.
Here's a table of the control change messages, though it looks like you're looking for a way to do this in software, yes?
You could try just sending it as raw MIDI data (i.e. the messages on that table).
Looking over your question again... my answer is not that useful...
What I would do if I were you is separate the introduction into its own file, so that you then have a file containing just what you want to loop.
You could also look at the spec for the Standard MIDI File format (SMF).
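To make the raw-data route concrete, here is a minimal sketch that writes a tiny Format 0 MIDI file containing nothing but a controller-111 event (status byte 0xB0 = control change on channel 1, controller number 111, value 0) followed by end-of-track. It only illustrates what the event looks like at the byte level - in practice you would insert that event at the desired loop point of your existing file with a sequencer or editor, and the value byte your engine expects is an assumption here:

using System.IO;

class LoopMarkerDemo
{
    static void Main()
    {
        // "loop-marker.mid" is a hypothetical output path.
        using (FileStream fs = new FileStream("loop-marker.mid", FileMode.Create))
        using (BinaryWriter w = new BinaryWriter(fs))
        {
            // Header chunk "MThd": length 6, format 0, one track, 96 ticks per quarter note.
            w.Write(new byte[] { 0x4D, 0x54, 0x68, 0x64, 0, 0, 0, 6, 0, 0, 0, 1, 0, 96 });

            // Track events: delta 0 + control change 111, then delta 0 + end-of-track meta event.
            byte[] track = {
                0x00, 0xB0, 111, 0x00,   // the loop marker the engine looks for
                0x00, 0xFF, 0x2F, 0x00   // end of track
            };

            // Track chunk "MTrk" with its length, then the events.
            w.Write(new byte[] { 0x4D, 0x54, 0x72, 0x6B, 0, 0, 0, (byte)track.Length });
            w.Write(track);
        }
    }
}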
A:
DirectMusicProducer is probably your best free option if you are playing using DirectMusic. I don't believe the MIDI record feature will include control changes, but your engine may support playing segment files which are much more flexible.
The only MIDI sequencer I use cost around $300 (USD) about 10 years ago (and no longer appears to exist), but I am not aware of any good quality free MIDI file sequencers. (Note that "MIDI editor" is probably different to "MIDI file editor" or "MIDI sequencer")
Q:
php scripts writing to non-world-writable files
How can you allow a PHP script to write to a file with high-security restrictions, such as only allowing a single user to write to it?
The difficulty seems to be that a PHP script is running as a low-permissions user (maybe apache, or www, or nobody?), and even if I chown apache the_writable_file, the directory it's in might not be writable for the low-level user. In general, what's the usual way that PHP can work with local files in a secure way?
A:
Unfortunately, in shared hosts that use mod_php, there is no way to restrict access to secure files to your web app and login user.
The solution is to run your web app as your login user. When you do that, UNIX file permissions can correctly lock everyone else out. There are several ways to implement that, including SuExec, suPHP, or running PHP with FastCGI with mod_fcgid or mod_proxy_fcgid. FastCGI is my favorite way.
Another solution is to use a dedicated host or virtual private server.
A:
Sure, chgrp apache the_writable_file and chmod g+w the_writable_file. After that, only your secure user and the apache user will be able to write to the file. Since the apache user is typically forbidden from logging in, you only have to worry about web users writing to your secure file through the HTTP daemon.
A:
All the containing folders need to have execute permissions.
For example, if the file's in /foo/bar/the_writable_file, the directories "foo" and "bar" both need to have executable permission to access the_writable_file, even if they don't have read/write permission.
Q:
Can't create a subversion repository with Eclipse 3.4.0, svn 1.5.1
I'm working on Mac OS X 10.4. I have a Subversion repository stored on an external drive connected via USB. I created a new Python project in Eclipse (using the PyDev plugin). When I right-click and use Team->Share Project to set up a new project with Subversion, I get the following error:
Error while creating module: org.tigris.subversion.javahl.ClientException: Couldn't open a repository
svn: Unable to open ra_local session to URL
svn: Unable to open repository 'file:///Volumes/svn-repos/Palindrome/Palindrome'
The subversion repository has the following permissions:
drwxrwxrwx 9 cameronl cameronl 306 Aug 23 10:08 svn-repos
The external drive is formatted as Mac OS extended
I'm completely stumped. Anybody have any suggestions?
A:
Try adding the repository first using the "SVN Repository Exploring" perspective (Window > Open Perspective > Other... > SVN Repository Exploring).
Make sure that the URL you are using points to the correct directory, which typically contains these default repository files:
conf/ dav/ db/ format hooks/ locks/ README.txt
Hope this helps.
Q:
Install the Radrails plugin for Aptana Studio offline
I downloaded and installed the Aptana Studio free version. But apparently, to install the Radrails plugin for ruby on rails development you have to connect to the internet. I don't have internet on my machine right now. So is there a way I could download the installer from another machine and copy it over my existing Aptana installation?
Update: Found a link for download here (Access denied now)
A:
I wrote down my duel with Aptana Rails - See if this helps you.
There is a link on manual installation that may be what you're looking for.
A:
If you're able to actually install it on the machine with the Internet connection, then you can simply copy over the directory you installed it in. Eclipse installations are completely self-contained in their installation directories.
Q:
Checking if userinput is a valid URI in XUL
Is there a built-in function/method that can check if a given string is a valid URI or not in the Mozilla XUL toolkit? I have looked for one but found none, but since this is my first time using XUL and its documentation it could be that I just overlooked it. So I'm just making sure before I start writing my own IsValidURI function.
A:
The nsIIOService.newURI(...) method is what you're looking for. It throws NS_ERROR_MALFORMED_URI if the URI string is invalid.
Example:
try {
var ioServ = Components.classes["@mozilla.org/network/io-service;1"]
.getService(Components.interfaces.nsIIOService);
var uriObj = ioServ.newURI(uriString, uriCharset, baseURI);
} catch (e) {
// catch the error here
}
Q:
Using Unsigned Primitive Types
Most of the time we represent concepts which can never be less than 0. For example, to declare a length, we write:
int length;
The name expresses its purpose well but you can assign negative values to it. It seems that for some situations, you can represent your intent more clearly by writing it this way instead:
uint length;
Some disadvantages that I can think of:
unsigned types (uint, ulong, ushort) are not CLS compliant so you can't use it with other languages that don't support this
.Net classes use signed types most of the time so you have to cast
Thoughts?
A:
If you decrement a signed number with a value of 0, it becomes negative and you can easily test for this. If you decrement an unsigned number with a value of 0, it underflows and becomes the maximum value for the type - somewhat more difficult to check for.
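A small illustration of that difference, using C#'s default unchecked arithmetic and the checked keyword mentioned in a later answer:

using System;

class UnderflowDemo
{
    static void Main()
    {
        int signedLength = 0;
        signedLength--;                      // becomes -1, easy to test with (signedLength < 0)
        Console.WriteLine(signedLength);

        uint unsignedLength = 0;
        unsignedLength--;                    // unchecked by default: wraps to uint.MaxValue (4294967295)
        Console.WriteLine(unsignedLength);

        try
        {
            uint length = 0;
            checked { length--; }            // with overflow checking this throws instead of wrapping
        }
        catch (OverflowException)
        {
            Console.WriteLine("Underflow caught by checked arithmetic");
        }
    }
}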
A:
Your second point is the most important. Generally you should just use int since that's a pretty good "catch-all" for integer values. I would only use uint if you absolutely need the ability to count higher than int, but without using the extra memory long requires (it's not much more memory, so don't be cheap :-p).
A:
“When in Rome, do as the Romans do.”
While there is theoretically an advantage in using unsigned values where applicable because it makes the code more expressive, this is simply not done in C#. I'm not sure why the developers initially didn't design the interfaces to handle uints and make the type CLS compliant but now the train has left the station.
Since consistency is generally important I'd advise taking the C# road and using ints.
A:
I think the subtle use of uint vs. int will cause confusion among developers unless it is written into the developer guidelines for the company.
If the length, for example, can't be less than zero then it should be expressed clearly in the business logic so future developers can read the code and know the true intent.
Just my 2 cents.
A:
I will point out that in C# you can turn on /checked to check for arithmetic overflow/underflow, which isn't a bad idea anyway. If performance matters in a critical section, you can still use unchecked to avoid this.
For internal code (i.e. code that won't be referenced in any interop manner with other languages) I vote for using unsigned when the situation warrants it, such as length variables as mentioned earlier. This - along with checked arithmetic - provides one more net for developers, catching subtle bugs earlier.
Another point in the signed vs unsigned debate is that some programmers use values such as -1 to indicate errors, when they wouldn't otherwise have meaning. I subscribe to the view that each variable should have only one purpose, but if you - or colleagues you code with - like to indicate errors in this way, leaving variables signed gives you the flexibility to add error states later.
A:
Your two points are good. The primary reason to avoid it is casting, though. Casting makes them incredibly annoying to use. I tried using unsigned variables once but I had to sprinkle casts absolutely everywhere because the framework methods all use signed integers. Therefore, whenever you call a framework method, you have to cast.
Q:
VS2008: Copy Web Site from command line
How do I execute the "Copy Web Site" command for an ASP.NET project in VS2008 from the command line? If I need to script this, let me have some pointers on where I can learn that.
A:
Would this help you get started?
Walkthrough: Deploying an ASP.NET Web Application Using XCOPY
Q:
Dividing a list of nodes in half
<xsl:for-each select="./node [position() <= (count(*) div 2)]">
<li>foo</li>
</xsl:for-each>
<xsl:for-each select="./node [count(*) div 2 < position()]">
<li>bar</li>
</xsl:for-each>
My list has 12 nodes, but the second list is always 8 and the first is always 4. What's wrong with my selects?
A:
When you do count(*), the current node is the node element being processed. You want either count(current()/node) or last() (preferable), or just calculate the midpoint to a variable for better performance and clearer code:
<xsl:variable name="nodes" select="node"/>
<xsl:variable name="mid" select="count($nodes) div 2"/>
<xsl:for-each select="$nodes[position() <= $mid]">
<li>foo</li>
</xsl:for-each>
<xsl:for-each select="$nodes[$mid < position()]">
<li>bar</li>
</xsl:for-each>
A:
You could try using the last() function which will give you the size of the current context:
<xsl:for-each select="./node [position() <= last() div 2]">
<li>foo</li>
</xsl:for-each>
<xsl:for-each select="./node [last() div 2 < position()]">
<li>bar</li>
</xsl:for-each>
A:
I'm not at all sure, but it seems to me that count(*) is not doing what you think it is. That counts the number of children of the current node, not the size of the current node list. Could you print it out to check that it's 8 or 9 instead of 12?
Use last() to get the context size.
A:
Try count(../node). The following gives the correct result on my test XML file (a simple nodes root with node elements), using the xsltproc XSLT processor.
<xsl:for-each select="node[position() <= (count(../node) div 2)]">
...
</xsl:for-each>
<xsl:for-each select="node[(count(../node) div 2) < position()]">
...
</xsl:for-each>
Q:
Microsoft Office 2007 automated installation - editing the config.xml file
I'm creating an automated installation of Office 2007. To customise your Office 2007 installation, the Office Customization Tool (OCT) does most of the work for you. One of the OCT's features is the ability to run additional programs during the Office installation. However, it is pretty poor at it.
Fortunately, by editing the appropriate config.xml file contained within the installer files you have more control over running these additional programs. Within the config.xml file this feature is defined by the Command element. This link on TechNet talks all about it.
In this documentation it states:
Attributes
You can specify double-quotation marks (") in the Path and Args attributes by specifying two double-quotation marks together ("").
<Command Path="myscript.exe" Args="/id ""123 abc"" /q" />
I would like to use double-quotation marks in an argument that I wish to pass to the command I'm executing. Unfortunately when I configure my config.xml file as shown in the example, the Office 2007 installer crashes and displays the following error message in the setup logs:
Parsing config.xml at: \\aumel1pc356\c$\Documents and Settings\nichollsd2\Desktop\source\office\Enterprise.WW\config.xml
Error: XML document load failed for file: \\aumel1pc356\c$\Documents and Settings\nichollsd2\Desktop\source\office\Enterprise.WW\config.xml HResult: 0x1.
Does anyone have any experience with this issue? I'd love to get another perspective on it.
A:
In standard XML you embed quotes in attribute values using the entity references &quot;, &#34; or &#x22;.
See the page on Wikipedia for a list of XML entity references.
I don't know if this will solve your problem, but seeing as it is an XML parser error it should.
Q:
How do you pass a variable number of parameters to a web service
We are trying to create a web service to which we plan to pass a variable number of parameters.
Can this be done?
Basically, instead of passing all possible parameters we wish to pass only the set values and use the defaults set in the web service.
Here is an example of the XML we are looking to send; we would send an unknown number of functions depending on the needed return.
<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Body>
<WebMethod xmlns="http://tempuri.org/">
<domains>
<function1>
<title>Some Title</title>
<type>25</type>
</function1>
<function2 />
<function3>
<param>13</param>
</function3>
</domains>
</WebMethod>
</soap:Body>
</soap:Envelope>
Will this work or should we do a different way?
A:
I would pass in an XML document instead of creating concrete functions for this.
The web service in your example is leaky - the consumer needs to know too much about the interface and the internal implementation of the web service.
Pass an XML document and tie it to an XSD. That way you can pre-validate the input to the web service.
Take a look at these
IBM Developer
ASP.NET Forum
I would also recommend using this for testing web services, and it's free:
WSStudio
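As a hedged sketch of what the XML-document approach could look like in an ASMX service - the service name, method names and placeholder logic below are illustrative assumptions, not an actual implementation:

using System.Web.Services;
using System.Xml;

public class FunctionService : WebService
{
    // Option 1: accept the <domains> element as raw XML, optionally validate it
    // against an XSD first, and handle whichever functions the caller sent.
    [WebMethod]
    public string Execute(XmlElement domains)
    {
        int handled = 0;
        foreach (XmlNode function in domains.ChildNodes)
        {
            // function.Name is "function1", "function2", ...; a real service would
            // dispatch on it here - this sketch just counts the functions received.
            handled++;
        }
        return "handled " + handled + " functions";
    }

    // Option 2: a variable-length list of simple name/value pairs.
    [WebMethod]
    public string ExecuteFlat(string[] names, string[] values)
    {
        return "received " + names.Length + " parameters";
    }
}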
A:
You can simply pass a variable-length array as a parameter.
A:
If you don't like the idea of an array (this is not slating Konrad's answer - you may have differing param types) you can pass complex objects (i.e. objects that you made yourself). The downside is that you cannot then test using the ASMX page, but would need to do it all in code (which isn't really a bad thing, especially if you are used to it).
A:
I agree with littlegeek. Do not make your web service a hard-coded method. Make it a receiving end point for messages - particularly, a Command Message.
http://www.eaipatterns.com/CommandMessage.html
Q:
How to handle variable width FieldObjects in Crystal Reports
I have a Crystal Report which is viewed via a CrystalReportViewer control on an .aspx page (using VS2008).
The report has two data-driven FieldObjects (which can contain a variable number of chars) which I would like to display on the same line beside each other.
Problem is when the text in the first FieldObject is too long it overlaps the text in the second FieldObject.
I have tried setting the 'CanGrow=True' and 'MaxNumberOfLines=1' on the first FieldObject to 'push' the second FieldObject further to the right, but this didn't work.
How do I get the second FieldObject to always display immediately after the first FieldObject regardless of the length of the text in the first?
Cheers in advance of any knowledge you can drop.
A:
You can add a text object to the report. While editing the text of the text object, drag the field you want to show from the object explorer into the text box, then hit space, then drag the second field into the same text box. Your two fields will always be one space apart. You could, of course, add more spaces or any other text you want.
A:
Or you can create a function which returns field1 + " " + field2 and add the function to the report.
Q:
Using Hibernate to work with Text Files
I am using Hibernate in a Java application to access my database and it works pretty well with MS-SQL and MySQL. But some of the data I have to show on some forms has to come from text files, and by text files I mean human-readable files: they can be CSV, tab-delimited, or even one key/value pair per line since my data is as simple as this, but my preference of course is XML files.
My question is: Can I use hibernate to read those files using HQL, Query , EntityManager and all those resources Hibernate provides me to access files. Which file format should I use and How I configure My persistence.xml file to recognize files as Tables?
A:
Hibernate is written against the JDBC API. So, you need a JDBC driver that works with the file format you are interested in. Obviously, even for read-only access, this isn't going to perform well, but it might still be useful if that's not a high priority. On a Windows system, you can set up ODBC datasources for delimited text files, Excel files, etc. Then you can set up the JdbcOdbcDriver in your Java application to use this data source.
For most of the applications I work on, I would not consider this approach; I would use an import/export mechanism to convert from a real database (even if it's an in-process database like Berkeley DB or Derby) to the text files. Yes, it's an extra step, but it could be automated, and the performance isn't likely to be much worse than trying to use the text files directly (it will likely be much better, overall), and it will be more robust and easy to develop.
A:
A quick google came up with
JDBC driver for csv files
JDBC driver for XML files
Hope this might provide some inspiration?
A:
Like erickson said, your only hope is in finding a JDBC driver for that task. There is maybe xlsql (CSV, XML and Excel driver) which could fit the task. After that, you just have to either find or write the most simple Hibernate Dialect which fits your driver.
Q:
How to program user preferences
I'm using Ruby on Rails for an internal site. Different users of the site have access to a wide variety of data and highly disparate perspectives of the data. Within those different classes of users, there needs to be levels of access. Within the levels of access I need to be able to add features from other classes of users.
In the released "Version 1.0" of the intranet site I have implemented the general classes of users. I now need to implement much finer-grained control of a user's access.
The question is how?
What is the generally accepted practice for coding up user preferences (display the map (or not); access to this feature, but not this feature) without exploding the database schema and populating the view code with <% if feature_allowed %> tags everywhere.
A:
Another totally different approach would be to use acts_as_authenticated and authorization plugins. The tables will be built by the plugins (ie users, roles and roles_users). From the doc:
The authorization plugin provides the following:
A simple way of checking authorization at either the class or instance method level using #permit and #permit?
Authorization using roles for the entire application, a model class, or an instance of a model (i.e., a particular object).
Some english-like dynamic methods that draw on the defined roles. You will be able to use methods like "user.is_fan_of angelina" or "angelina.has_fans?", where a 'fan' is only defined in the roles table.
Pick-and-choose a mixin for your desired level of database complexity. For all the features, you will want to use "object roles table" (see below)
A:
populating the view code with <% if
feature_allowed %> tags everywhere.
I don't think you want to do that. Assuming none of the alternatives suggested are practicable, at the very least you should consider shifting those checks into your controllers, where you can refactor them into a before_filter.
See section 11.3 in "Agile Web Development With Rails" (page 158 in my copy of the 2nd edition) where they do exactly that.
Q:
COMException "Library not registered." while using System.DirectoryServices
I have only just started receiving the following error in my Windows Forms application under the .NET 2.0 framework on Windows 2000 when using System.DirectoryServices.
{System.Runtime.InteropServices.COMException}
System.Runtime.InteropServices.COMException: {"Library not registered."}
_className: Nothing
_COMPlusExceptionCode: -532459699
_data: Nothing
_dynamicMethods: Nothing
_exceptionMethod: Nothing
_exceptionMethodString: Nothing
_helpURL: Nothing
_HResult: -2147319779
_innerException: Nothing
_message: "Library not registered."
_remoteStackIndex: 0
_remoteStackTraceString: Nothing
_source: Nothing
_stackTrace: {System.Array}
_stackTraceString: Nothing
_xcode: -532459699
_xptrs: 0
Source: "System.DirectoryServices"
StackTrace: " at System.DirectoryServices.DirectoryEntry.Bind(Boolean throwIfFail)
at System.DirectoryServices.DirectoryEntry.Bind()
at System.DirectoryServices.DirectoryEntry.get_AdsObject()
at System.DirectoryServices.DirectorySearcher.FindAll(Boolean findMoreThanOne)
at System.DirectoryServices.DirectorySearcher.FindAll()
I have re-installed the framework and re-registered activeds.dll; however, this has not resolved the issue. I am guessing I need to find another DLL and re-register it, but it is not clear which DLL this would be.
A:
Having used Reflector to have a quick peek at the Directory Services code, it looks like your Active Directory Service Interfaces (ADSI) installation might be kaput.
You can download version 2.5 from Technet although I'm not sure if it's the latest version or if it works with Windows 2000.
Q:
What is the best way to add an event in JavaScript?
I see 2 main ways to set events in JavaScript:
Add an event directly inside the tag like this:
<a href="" onclick="doFoo()">do foo</a>
Set them by JavaScript like this:
<a id="bar" href="">do bar</a>
and add an event in a <script> section inside the <head> section or in an external JavaScript file, like this if you're using prototypeJS:
Event.observe(window, 'load', function() {
$('bar').observe('click', doBar);
}
I think the first method is easier to read and maintain (because the JavaScript action is directly bound to the link) but it's not so clean (because users can click on the link even if the page is not fully loaded, which may cause JavaScript errors in some cases).
The second method is cleaner (actions are added when the page is fully loaded) but it's more difficult to know that an action is linked to the tag.
Which method is the best?
A killer answer will be fully appreciated!
A:
I think the first method is easier to read and maintain
I've found the opposite to be true. Bear in mind that sometimes more than one event handler will be bound to a given control.
Declaring all events in one central place helps to organize the actions taking place on the site. If you need to change something you don't have to search for all places making a call to a function, you simply have to change it in one place. When adding more elements that should have the same functionality you don't have to remember to add the handlers to them; instead, it's often enough to let them declare a class, or even not change them at all because they logically belong to a container element of which all child elements get wired to an action. From an actual code:
$$('#itemlist table th > a').invoke('observe', 'click', performSort);
This wired an event handler to all column headers in a table to make the table sortable. Imagine the effort to make all column headers sortable separately.
A:
In my experience, there are two major points to this:
1) The most important thing is to be consistent. I don't think either of the two methods is necessarily easier to read, as long as you stick to it. I only get confused when both methods are used in a project (or even worse on the same page) because then I have to start searching for the calls and don't immediately know where to look.
2) The second kind, i.e. Event.observe() has advantages when the same or a very similar action is taken on multiple events because this becomes obvious when all those calls are in the same place. Also, as Konrad pointed out, in some cases this can be handled with a single call.
A:
I believe the second method is generally preferred because it keeps information about action (i.e. the JavaScript) separate from the markup in the same way CSS separates presentation from markup.
I agree that this makes it a little more difficult to see what's happening in your page, but good tools like firebug will help you with this a lot. You'll also find much better IDE support available if you keep the mixing of HTML and Javascript to a minimum.
This approach really comes into its own as your project grows, and you find you want to attach the same javascript event to a bunch of different element types on many different pages. In that case, it becomes much easier to have a single place which attaches events, rather than having to search many different HTML files to find where a particular function is called.
A:
You can also use addEventListener (not in IE) / attachEvent (in IE).
Check out: http://www.quirksmode.org/js/events_advanced.html
These allow you to attach a function (or multiple functions) to an event on an existing DOM object. They also have the advantage of allowing un-attachment later.
In general, if you're using a serious amount of javascript, it can be useful to make your javascript readable, as opposed to your html. So you could say that onclick=X in the html is very clear, but this is both a lack of separation of the code -- another syntactic dependency between pieces -- and a case in which you have to read both the html and the javascript to understand the dynamic behavior of the page.
A:
Libraries like YUI and jQuery provide methods to add events only once the DOM is ready, which can be before window.onload. They also ensure that you can add multiple event handlers so that you can use scripts from different sources without the different event handlers overwriting each other.
So your practical choices are;
One. If your script is simple and the only one that will ever run on the page, create an init function like so:
window.onload = function () {
init();
}
function init() {
// actual function calls go here
doFoo();
}
Two. If you have many scripts or plan to mashup scripts from different sources, use a library and its onDOMReady method to safely add your event handlers
A:
My personal preference is to use jQuery in external js files so the js is completely separate from the html. Javascript should be unobtrusive so inline (ie, the first example) is not really the best choice in my opinion. When looking at the html, the only sign that you are using js should be the script includes in the head.
An example of attaching (and handling) events might be something like this
var myObject = {
allLinkElements: null,
init: function()
{
// Set all the elements we need
myObject.setElements();
// Set event handlers for elements
myObject.setEventHandlers();
},
clickedLink: function()
{
// Handle the click event
alert('you clicked a link');
},
setElements: function()
{
// Find all <a> tags on the page
myObject.allLinkElements = $('a');
// Find other elements...
},
setEventHandlers: function()
{
// Loop through each link
myObject.allLinkElements.each(function(id)
{
// Assign the handler for the click event
$(this).click(myObject.clickedLink);
});
// Assign handlers for other elements...
}
}
// Wait for the DOM to be ready before initialising
$(document).ready(myObject.init);
I think this approach is useful if you want to keep all of your js organised, as you can use specific objects for tasks and everything is nicely contained.
Of course, the huge benefit of letting jQuery (or another well known library) do the hard work is that cross-browser support is (largely) taken care of which makes life much easier
| What is the best way to add an event in JavaScript? | I see 2 main ways to set events in JavaScript:
Add an event directly inside the tag like this:
<a href="" onclick="doFoo()">do foo</a>
Set them by JavaScript like this:
<a id="bar" href="">do bar</a>
and add an event in a <script> section inside the <head> section or in an external JavaScript file, like that if you're using prototypeJS:
Event.observe(window, 'load', function() {
$('bar').observe('click', doBar);
}
I think the first method is easier to read and maintain (because the JavaScript action is directly bound to the link) but it's not so clean (because users can click on the link even if the page is not fully loaded, which may cause JavaScript errors in some cases).
The second method is cleaner (actions are added when the page is fully loaded) but it's more difficult to know that an action is linked to the tag.
Which method is the best?
A killer answer will be fully appreciated!
| [
"\nI think the first method is easier to read and maintain\n\nI've found the opposite to be true. Bear in mind that sometimes more than one event handler will be bound to a given control.\nDeclaring all events in one central place helps to organize the actions taking place on the site. If you need to change something you don't have to search for all places making a call to a function, you simply have to change it in one place. When adding more elements that should have the same functionality you don't have to remember to add the handlers to them; instead, it's often enough to let them declare a class, or even not change them at all because they logically belong to a container element of which all child elements get wired to an action. From an actual code:\n$$('#itemlist table th > a').invoke('observe', 'click', performSort);\n\nThis wired an event handler to all column headers in a table to make the table sortable. Imagine the effort to make all column headers sortable separately.\n",
"In my experience, there are two major points to this:\n1) The most important thing is to be consistent. I don't think either of the two methods is necessarily easier to read, as long as you stick to it. I only get confused when both methods are used in a project (or even worse on the same page) because then I have to start searching for the calls and don't immediately know where to look.\n2) The second kind, i.e. Event.observe() has advantages when the same or a very similar action is taken on multiple events because this becomes obvious when all those calls are in the same place. Also, as Konrad pointed out, in some cases this can be handled with a single call.\n",
"I believe the second method is generally preferred because it keeps information about action (i.e. the JavaScript) separate from the markup in the same way CSS separates presentation from markup.\nI agree that this makes it a little more difficult to see what's happening in your page, but good tools like firebug will help you with this a lot. You'll also find much better IDE support available if you keep the mixing of HTML and Javascript to a minimum.\nThis approach really comes into its own as your project grows, and you find you want to attach the same javascript event to a bunch of different element types on many different pages. In that case, it becomes much easier to have a single pace which attaches events, rather than having to search many different HTML files to find where a particular function is called.\n",
"You can also use addEventListener (not in IE) / attachEvent (in IE).\nCheck out: http://www.quirksmode.org/js/events_advanced.html\nThese allow you to attach a function (or multiple functions) to an event on an existing DOM object. They also have the advantage of allowing un-attachment later.\nIn general, if you're using a serious amount of javascript, it can be useful to make your javascript readable, as opposed to your html. So you could say that onclick=X in the html is very clear, but this is both a lack of separation of the code -- another syntactic dependency between pieces -- and a case in which you have to read both the html and the javascript to understand the dynamic behavior of the page.\n",
"Libraries like YUI and jQuery provide methods to add events only once the DOM is ready, which can be before window.onload. They also ensure that you can add multiple event handlers so that you can use scripts from different sources without the different event handlers overwriting each other.\nSo your practical choices are;\nOne. If your script is simple and the only one that will ever run on the page, create an init function like so:\nwindow.onload = function () {\n init();\n}\nfunction init() {\n // actual function calls go here\n doFoo();\n}\n\nTwo. If you have many scripts or plan to mashup scripts from different sources, use a library and its onDOMReady method to safely add your event handlers\n",
"My personal preference is to use jQuery in external js files so the js is completely separate from the html. Javascript should be unobtrusive so inline (ie, the first example) is not really the best choice in my opinion. When looking at the html, the only sign that you are using js should be the script includes in the head.\nAn example of attaching (and handling) events might be something like this\nvar myObject = {\n\n allLinkElements: null, \n\n init: function()\n {\n // Set all the elements we need\n myObject.setElements();\n\n // Set event handlers for elements\n myObject.setEventHandlers();\n },\n\n clickedLink: function()\n {\n // Handle the click event\n alert('you clicked a link');\n },\n\n setElements: function()\n {\n // Find all <a> tags on the page\n myObject.allLinkElements = $('a');\n\n // Find other elements...\n },\n\n setEventHandlers: function()\n {\n // Loop through each link\n myObject.allLinkElements.each(function(id)\n { \n // Assign the handler for the click event\n $(this).click(myObject.clickedLink);\n });\n\n // Assign handlers for other elements...\n }\n}\n\n// Wait for the DOM to be ready before initialising\n$(document).ready(myObject.init);\n\nI think this approach is useful if you want to keep all of your js organised, as you can use specific objects for tasks and everything is nicely contained. \nOf course, the huge benefit of letting jQuery (or another well known library) do the hard work is that cross-browser support is (largely) taken care of which makes life much easier\n"
] | [
9,
4,
3,
1,
0,
0
] | [] | [] | [
"event_binding",
"events",
"html",
"javascript"
] | stackoverflow_0000034126_event_binding_events_html_javascript.txt |
Q:
Visually Tag/Mark a Window
I'm looking for a way to visually mark or tag a window (any OS) so that it stands out.
A while back, I accidentally replaced a live production database containing thousands of records with an empty dev version, simply because the two instances of Enterprise Manager looked identical to one another. I'd like to avoid that in the future!
A:
None that I'm aware of, but perhaps a virtual desktop system for your OS of choice would help keep the separation a little better for you.
A:
If you are using TOAD for your db access you can set a custom colour for each of your connections. (List of Quest products here; scroll down the page to the TOAD links)
The colour appears as a border around each TOAD window (at least it did on the Windows version I used in my last job)
I set production dbs to RED, pre-production to orange, and dev to green.
A:
Since you didn't restrict your question, I'll answer with my solution to this type of problem. I often am logged into multiple different machines with PuTTY as different users (including as root). If I have a window that I want to distinguish from the others, for example when I need to make sure I know I am typing commands as root, I change the background of the window to a different colour. A window that is mostly red really stands out as being something extraordinary.
| Visually Tag/Mark a Window | I'm looking for a way to visually mark or tag a window (any OS) so that it stands out.
A while back, I accidentally replaced a live production database containing thousands of records with an empty dev version, simply because the two instances of Enterprise Manager looked identical to one another. I'd like to avoid that in the future!
| [
"None that I'm aware of, but perhaps a virtual desktop system for your OS of choice would help keep the separation a little better for you.\n",
"If you are using TOAD for your db access you can set a custom colour for each of your connections. (List of Quest products here scroll down page to the TOAD links)\nThe colour appears as a border around each TOAD window (at least it did on the Windows version I used in my last job)\nI set production dbs to RED, pre-production to orange, and dev to green.\n",
"Since you didn't restrict your question, I'll answer with my solution to this type of problem. I often am logged into multiple different machines with PuTTY as different users (including as root). If I have a window that I want to distinguish from the others, for example when I need to make sure I know I am typing commands as root, I change the background of the window to a different colour. A window that is mostly red really stands out as being something extraordinary.\n"
] | [
1,
1,
0
] | [] | [] | [
"user_interface"
] | stackoverflow_0000034286_user_interface.txt |
Q:
Automatic image rotation based on a logo
We're looking for a package to help identify and automatically rotate faxed TIFF images based on a watermark or logo.
We use libtiff for rotation currently, but don't know of any other libraries or packages I can use for detecting this logo and determining how to rotate the images.
I have done some basic work with OpenCV but I'm not sure that it is the right tool for this job. I would prefer to use C/C++ but Java, Perl or PHP would be acceptable too.
A:
You are in the right place using OpenCV, it is an excellent utility. For example, this guy used it for template matching, which is fairly similar to what you need to do. Also, the link Roddy specified looks similar to what you want to do.
I feel that OpenCV is the best library out there for this kind of development.
@Brian, OpenCV and the Intel IPP are closely linked and very similar (both Intel libs). As far as I know, if OpenCV finds the Intel IPP on your computer it will automatically use it under the hood for improved speed.
A:
The Intel Performance Primitives (IPP) library has a lot of very efficient algorithms that help with this kind of a task. The library is callable from C/C++ and we have found it to be very fast. I should also note that it's not limited to just Intel hardware.
A:
That's quite a complex and specialized algorithm that you need.
Have a look at http://en.wikipedia.org/wiki/Template_matching. There's also a demo program (but no source) at http://www.lps.usp.br/~hae/software/cirateg/index.html
Obviously these require you to know the logo you are looking for in advance...
| Automatic image rotation based on a logo | We're looking for a package to help identify and automatically rotate faxed TIFF images based on a watermark or logo.
We use libtiff for rotation currently, but don't know of any other libraries or packages I can use for detecting this logo and determining how to rotate the images.
I have done some basic work with OpenCV but I'm not sure that it is the right tool for this job. I would prefer to use C/C++ but Java, Perl or PHP would be acceptable too.
| [
"You are in the right place using OpenCV, it is an excellent utility. For example, this guy used it for template matching, which is fairly similar to what you need to do. Also, the link Roddy specified looks similar to what you want to do.\nI feel that OpenCV is the best library out there for this kind of development.\n@Brian, OpenCV and the IntelIPP are closely linked and very similar (both Intel libs). As far as I know, if OpenCV finds the intel IPP on your computer it will automatically use it under the hood for improved speed.\n",
"The Intel Performance Primitives (IPP) library has a lot of very efficient algorithms that help with this kind of a task. The library is callable from C/C++ and we have found it to be very fast. I should also note that it's not limited to just Intel hardware.\n",
"That's quite a complex and specialized algorithm that you need.\nHave a look at http://en.wikipedia.org/wiki/Template_matching. There's also a demo program (but no source) at http://www.lps.usp.br/~hae/software/cirateg/index.html\nObviously these require you to know the logo you are looking for in advance...\n"
] | [
1,
0,
0
] | [] | [] | [
"image_rotation",
"opencv",
"tiff",
"watermark"
] | stackoverflow_0000032643_image_rotation_opencv_tiff_watermark.txt |
Q:
How do I filter nodes of TreeView and Menu controls with sitemap data sources based on user permissions?
I'm using the ASP.NET Login Controls and Forms Authentication for membership/credentials for an ASP.NET web application. And I'm using a site map for site navigation.
I have ASP.NET TreeView and Menu navigation controls populated using a SiteMapDataSource. But off-limits administrator-only pages are visible to non-administrator users.
Kevin Pang wrote:
I'm not sure how this question is any
different than your other question…
The other question deals with assigning and maintaining permissions.
This question just deals with presentation of navigation. Specifically TreeView and Menu controls with sitemap data sources.
<asp:Menu ID="Menu1" runat="server" DataSourceID="SiteMapDataSource1" />
<asp:SiteMapDataSource ID="SiteMapDataSource1" runat="server" ShowStartingNode="False" />
Nicholas wrote:
add role="SomeRole" in the sitemap
Does that only handle the display issue? Or are such page permissions enforced?
A:
I had to set securityTrimmingEnabled to "true" in my web.config file.
<?xml version="1.0"?>
<configuration>
...
<system.web>
...
<siteMap defaultProvider="default">
<providers>
<clear/>
<add name="default"
type="System.Web.XmlSiteMapProvider"
siteMapFile="web.sitemap"
securityTrimmingEnabled="true"/>
</providers>
</siteMap>
...
</system.web>
...
</configuration>
A:
I'm not sure how this question is any different than your other question, but I'll try to answer it anyways.
If you want a tutorial on how to implement role-based authentication, check out the one from 4GuysFromRolla.
A:
securityTrimmingEnabled="true" works for internal pages that have a config file restricting permissions; you can also add role="SomeRole" in the sitemap to override the display mechanism, which is useful if you have menu items to external sites.
| How do I filter nodes of TreeView and Menu controls with sitemap data sources based on user permissions? | I'm using the ASP.NET Login Controls and Forms Authentication for membership/credentials for an ASP.NET web application. And I'm using a site map for site navigation.
I have ASP.NET TreeView and Menu navigation controls populated using a SiteMapDataSource. But off-limits administrator-only pages are visible to non-administrator users.
Kevin Pang wrote:
I'm not sure how this question is any
different than your other question…
The other question deals with assigning and maintaining permissions.
This question just deals with presentation of navigation. Specifically TreeView and Menu controls with sitemap data sources.
<asp:Menu ID="Menu1" runat="server" DataSourceID="SiteMapDataSource1" />
<asp:SiteMapDataSource ID="SiteMapDataSource1" runat="server" ShowStartingNode="False" />
Nicholas wrote:
add role="SomeRole" in the sitemap
Does that only handle the display issue? Or are such page permissions enforced?
| [
"I had to set securityTrimmingEnabled to \"true\" in my web.config file.\n<?xml version=\"1.0\"?>\n<configuration>\n ...\n <system.web>\n ...\n <siteMap defaultProvider=\"default\">\n <providers>\n <clear/>\n <add name=\"default\"\n type=\"System.Web.XmlSiteMapProvider\"\n siteMapFile=\"web.sitemap\"\n securityTrimmingEnabled=\"true\"/>\n </providers>\n </siteMap>\n ...\n </system.web>\n ...\n</configuration>\n\n",
"I'm not sure how this question is any different than your other question, but I'll try to answer it anyways. \nIf you want a tutorial on how to implement role-based authentication, check out the one from 4GuysFromRolla.\n",
"securityTrimmingEnabled=\"true\" works for internal pages that have a config file restricting permissions, you can also add role=\"SomeRole\" in the sitemap to ovveride the display mechanism, which is useful if you have menu items to external sites.\n"
] | [
1,
1,
1
] | [] | [] | [
"asp.net",
"forms_authentication",
"sitemap"
] | stackoverflow_0000033395_asp.net_forms_authentication_sitemap.txt |
Q:
When do Request.Params and Request.Form differ?
I recently encountered a problem where a value was null if accessed with Request.Form but fine if retrieved with Request.Params. What are the differences between these methods that could cause this?
A:
Request.Form only includes variables posted through a form, while Request.Params includes both posted form variables and get variables specified as URL parameters.
A:
Request.Params contains a combination of QueryString, Form, Cookies and ServerVariables (added in that order).
The difference is that if you have a form variable called "key1" that is in both the QueryString and Form then Request.Params["key1"] will return the QueryString value and Request.Params.GetValues("key1") will return an array of [querystring-value, form-value].
If there are multiple form values or cookies with the same key then those values will be added to the array returned by GetValues (ie. GetValues will not return a jagged array)
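For illustration, here is a minimal ASP.NET code-behind sketch (the page class name, URL and the field name key1 are made up for this example) showing where each collection picks up its value:
using System;
using System.Web.UI;

public partial class ExamplePage : Page
{
    // Request like /example.aspx?key1=fromQuery where the posted form also contains key1=fromForm
    protected void Page_Load(object sender, EventArgs e)
    {
        string fromForm = Request.Form["key1"];          // posted form fields only
        string fromQuery = Request.QueryString["key1"];  // URL parameters only
        string combined = Request.Params["key1"];        // QueryString value wins when both exist
        string[] all = Request.Params.GetValues("key1"); // [querystring-value, form-value]
    }
}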
A:
The reason was that the value I was retrieving was from a form element, but the submit was done through a link + JQuery, not through a form button submit.
| When do Request.Params and Request.Form differ? | I recently encountered a problem where a value was null if accessed with Request.Form but fine if retrieved with Request.Params. What are the differences between these methods that could cause this?
| [
"Request.Form only includes variables posted through a form, while Request.Params includes both posted form variables and get variables specified as URL parameters.\n",
"Request.Params contains a combination of QueryString, Form, Cookies and ServerVariables (added in that order).\nThe difference is that if you have a form variable called \"key1\" that is in both the QueryString and Form then Request.Params[\"key1\"] will return the QueryString value and Request.Params.GetValues(\"key1\") will return an array of [querystring-value, form-value].\nIf there are multiple form values or cookies with the same key then those values will be added to the array returned by GetValues (ie. GetValues will not return a jagged array)\n",
"The reason was that the value I was retrieving was from a form element, but the submit was done through a link + JQuery, not through a form button submit.\n"
] | [
32,
21,
1
] | [] | [] | [
"asp.net",
"c#",
"request"
] | stackoverflow_0000005706_asp.net_c#_request.txt |
Q:
Query TFS for updated files
I want to get an overview of files that are updated in TFS (that someone else checked in) that I don't have the latest version for.
A:
In Visual Studio Source Control Explorer, right click on the directory you want to compare, and select "Compare". It will pop up a dialog with a couple of filtering options, and then show you what's out of date.
A:
if they checked them in as part of a single changeset then you can find them that way.
(right click file in solution explorer, view history, double-click on the relevant changeset and you'll see all the related files for that checkin)
Is your question about finding this info via the TFS API, via the website, or via the Visual Studio interface?
| Query TFS for updated files | I want to get an overview of files that are updated in TFS (that someone else checked in) that I don't have the latest version for.
| [
"In Visual Studio Source Control Explorer, right click on the directory you want to compare, and select \"Compare\". It will pop up a dialog with a couple of filtering options, and then show you what's out of date. \n",
"if they checked them in as part of a single changeset then you can find them that way. \n(right click file in solution explorer, view history, double-click on the relevant changeset and you'll see all the related files for that checkin)\nIs your question about finding this info via the TFS API via the website, or via the visual studio interface? \n"
] | [
2,
0
] | [] | [] | [
"tfs"
] | stackoverflow_0000034262_tfs.txt |
Q:
How do I truncate a string while converting to bytes in C#?
I would like to put a string into a byte array, but the string may be too big to fit. In the case where it's too large, I would like to put as much of the string as possible into the array. Is there an efficient way to find out how many characters will fit?
A:
In order to truncate a string to a UTF8 byte array without splitting in the middle of a character I use this:
static string Truncate(string s, int maxLength) {
if (Encoding.UTF8.GetByteCount(s) <= maxLength)
return s;
var cs = s.ToCharArray();
int length = 0;
int i = 0;
while (i < cs.Length){
int charSize = 1;
if (i < (cs.Length - 1) && char.IsSurrogate(cs[i]))
charSize = 2;
int byteSize = Encoding.UTF8.GetByteCount(cs, i, charSize);
if ((byteSize + length) <= maxLength){
i = i + charSize;
length += byteSize;
}
else
break;
}
return s.Substring(0, i);
}
The returned string can then be safely transferred to a byte array of length maxLength.
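For example, filling the byte array afterwards could look like this (the input string and the 100-byte limit are arbitrary placeholders):
using System.Text;

// Assumes the Truncate method above is in scope.
string input = "some possibly very long string";
byte[] buffer = Encoding.UTF8.GetBytes(Truncate(input, 100));
// buffer.Length is guaranteed to be <= 100 because Truncate counts UTF-8 bytes.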
A:
You should be using the Encoding class to do your conversion to byte array correct? All Encoding objects have an overridden method GetMaxCharCount, which will give you "The maximum number of characters produced by decoding the specified number of bytes." You should be able to use this value to trim your string and properly encode it.
A:
An efficient way would be to find how many bytes you will need per character (pessimistically) with
Encoding.GetMaxByteCount(1);
then dividing your string size by the result, then converting that many characters with
public virtual int Encoding.GetBytes (
string s,
int charIndex,
int charCount,
byte[] bytes,
int byteIndex
)
If you want to use less memory use
Encoding.GetByteCount(string);
but that is a much slower method.
A:
The Encoding class in .NET has a method called GetByteCount which can take in a string or char[]. If you pass in 1 character, it will tell you how many bytes are needed for that 1 character in whichever encoding you are using.
The method GetMaxByteCount is faster, but it does a worst case calculation which could return a higher number than is actually needed.
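As a quick illustration of that trade-off (the sample string is arbitrary):
using System;
using System.Text;

class ByteCountDemo
{
    static void Main()
    {
        string s = "héllo";                                       // arbitrary sample text
        int exact = Encoding.UTF8.GetByteCount(s);                // bytes actually needed
        int worstCase = Encoding.UTF8.GetMaxByteCount(s.Length);  // pessimistic upper bound
        Console.WriteLine("exact={0}, worstCase={1}", exact, worstCase);
        // worstCase is always >= exact, so it is safe (but wasteful) for sizing buffers.
    }
}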
| How do I truncate a string while converting to bytes in C#? | I would like to put a string into a byte array, but the string may be too big to fit. In the case where it's too large, I would like to put as much of the string as possible into the array. Is there an efficient way to find out how many characters will fit?
| [
"In order to truncate a string to a UTF8 byte array without splitting in the middle of a character I use this:\nstatic string Truncate(string s, int maxLength) {\n if (Encoding.UTF8.GetByteCount(s) <= maxLength)\n return s;\n var cs = s.ToCharArray();\n int length = 0;\n int i = 0;\n while (i < cs.Length){\n int charSize = 1;\n if (i < (cs.Length - 1) && char.IsSurrogate(cs[i]))\n charSize = 2;\n int byteSize = Encoding.UTF8.GetByteCount(cs, i, charSize);\n if ((byteSize + length) <= maxLength){\n i = i + charSize;\n length += byteSize;\n }\n else\n break;\n }\n return s.Substring(0, i);\n}\n\nThe returned string can then be safely transferred to a byte array of length maxLength.\n",
"You should be using the Encoding class to do your conversion to byte array correct? All Encoding objects have an overridden method GetMaxCharCount, which will give you \"The maximum number of characters produced by decoding the specified number of bytes.\" You should be able to use this value to trim your string and properly encode it.\n",
"Efficient way would be finding how much (pessimistically) bytes you will need per character with\nEncoding.GetMaxByteCount(1);\n\nthen dividing your string size by the result, then converting that much characters with\npublic virtual int Encoding.GetBytes (\n string s,\n int charIndex,\n int charCount,\n byte[] bytes,\n int byteIndex\n)\n\nIf you want to use less memory use\nEncoding.GetByteCount(string);\n\nbut that is a much slower method.\n",
"The Encoding class in .NET has a method called GetByteCount which can take in a string or char[]. If you pass in 1 character, it will tell you how many bytes are needed for that 1 character in whichever encoding you are using.\nThe method GetMaxByteCount is faster, but it does a worst case calculation which could return a higher number than is actually needed.\n"
] | [
6,
2,
1,
1
] | [] | [] | [
".net",
"arrays",
"c#",
"string",
"truncate"
] | stackoverflow_0000034395_.net_arrays_c#_string_truncate.txt |
Q:
How would you abbreviate XHTML to an arbitrary number of words?
How would you programmatically abbreviate XHTML to an arbitrary number of words without leaving unclosed or corrupted tags?
i.e.
<p>
Proin tristique dapibus neque. Nam eget purus sit amet leo
tincidunt accumsan.
</p>
<p>
Proin semper, orci at mattis blandit, augue justo blandit nulla.
<span>Quisque ante congue justo</span>, ultrices aliquet, mattis eget,
hendrerit, <em>justo</em>.
</p>
Abbreviated to 25 words would be:
<p>
Proin tristique dapibus neque. Nam eget purus sit amet leo
tincidunt accumsan.
</p>
<p>
Proin semper, orci at mattis blandit, augue justo blandit nulla.
<span>Quisque ante congue...</span>
</p>
A:
Recurse through the DOM tree, keeping a word count variable up to date. When the word count exceeds your maximum word count, insert "..." and remove all following siblings of the current node, then, as you go back up through the recursion, remove all the following siblings of each of its ancestors.
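A rough C# sketch of that approach using System.Xml; wrapping the fragment in a dummy root element so it parses, and counting words by splitting on whitespace, are simplifying assumptions of this sketch:
using System;
using System.Collections.Generic;
using System.Xml;

class XhtmlAbbreviator
{
    // Returns the fragment truncated to at most maxWords words, with all tags still balanced.
    public static string Abbreviate(string xhtml, int maxWords)
    {
        var doc = new XmlDocument();
        doc.LoadXml("<root>" + xhtml + "</root>"); // dummy root so multiple top-level elements parse
        int remaining = maxWords;
        Trim(doc.DocumentElement, ref remaining);
        return doc.DocumentElement.InnerXml;
    }

    private static void Trim(XmlNode node, ref int remaining)
    {
        // Copy the child list because nodes may be removed while iterating.
        var children = new List<XmlNode>();
        foreach (XmlNode child in node.ChildNodes) children.Add(child);

        foreach (XmlNode child in children)
        {
            if (remaining <= 0)
            {
                node.RemoveChild(child); // everything after the cut-off point is dropped
            }
            else if (child.NodeType == XmlNodeType.Text)
            {
                var words = child.Value.Split(new[] { ' ', '\t', '\r', '\n' },
                                              StringSplitOptions.RemoveEmptyEntries);
                if (words.Length > remaining)
                {
                    child.Value = string.Join(" ", words, 0, remaining) + "...";
                    remaining = 0; // later siblings (and ancestors' later siblings) get removed
                }
                else
                {
                    remaining -= words.Length;
                }
            }
            else
            {
                Trim(child, ref remaining); // recurse into child elements
            }
        }
    }
}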
A:
You need to think of the XHTML as a hierarchy of elements and treat it as such. This is basically the way XML is meant to be treated. Then just go through the hierarchy recursively, adding the number of words together as you go. When you hit your limit throw everything else away.
I work mainly in PHP, and I would use the DOMDocument class in PHP to help me do this, you need to find something like that in your chosen language.
To make things clearer, here is the hierarchy for your sample:
- p
- Proin tristique dapibus neque. Nam eget purus sit amet leo
tincidunt accumsan.
- p
- Proin semper, orci at mattis blandit, augue justo blandit nulla.
- span
- Quisque ante congue justo
- , ultrices aliquet, mattis eget, hendrerit,
- em
- justo
- .
You hit the 25 word limit inside the span element, so you remove all remaining text within the span and add the ellipsis. All other child elements (both text and tags) can be discarded, and all subsequent elements can be discarded.
This should always leave you with valid markup as far as I can see, because you are treating it as a hierarchy and not just plain text, all closing tags that are required will still be there.
Of course if the XHTML you are dealing with is invalid to begin with, don't expect the output to be valid.
Sorry for the poor hierarchy example, couldn't work out how to nest lists.
| How would you abbriviate XHTML to an arbitrary number of words? | How would you programmacially abbreviate XHTML to an arbitrary number of words without leaving unclosed or corrupted tags?
i.e.
<p>
Proin tristique dapibus neque. Nam eget purus sit amet leo
tincidunt accumsan.
</p>
<p>
Proin semper, orci at mattis blandit, augue justo blandit nulla.
<span>Quisque ante congue justo</span>, ultrices aliquet, mattis eget,
hendrerit, <em>justo</em>.
</p>
Abbreviated to 25 words would be:
<p>
Proin tristique dapibus neque. Nam eget purus sit amet leo
tincidunt accumsan.
</p>
<p>
Proin semper, orci at mattis blandit, augue justo blandit nulla.
<span>Quisque ante congue...</span>
</p>
| [
"Recurse through the DOM tree, keeping a word count variable up to date. When the word count exceeds your maximum word count, insert \"...\" and remove all following siblings of the current node, then, as you go back up through the recursion, remove all the following siblings of each of its ancestors.\n",
"You need to think of the XHTML as a hierarchy of elements and treat it as such. This is basically the way XML is meant to be treated. Then just go through the hierarchy recursively, adding the number of words together as you go. When you hit your limit throw everything else away.\nI work mainly in PHP, and I would use the DOMDocument class in PHP to help me do this, you need to find something like that in your chosen language.\nTo make things clearer, here is the hierarchy for your sample:\n- p\n - Proin tristique dapibus neque. Nam eget purus sit amet leo\n tincidunt accumsan.\n- p\n - Proin semper, orci at mattis blandit, augue justo blandit nulla.\n - span\n - Quisque ante congue justo\n - , ultrices aliquet, mattis eget, hendrerit, \n - em\n - justo\n - .\n\nYou hit the 25 word limit inside the span element, so you remove all remaining text within the span and add the ellipsis. All other child elements (both text and tags) can be discarded, and all subsequent elements can be discarded.\nThis should always leave you with valid markup as far as I can see, because you are treating it as a hierarchy and not just plain text, all closing tags that are required will still be there.\nOf course if the XHTML you are dealing with is invalid to begin with, don't expect the output to be valid.\nSorry for the poor hierarchy example, couldn't work out how to nest lists. \n"
] | [
1,
1
] | [] | [] | [
"dom",
"dom_traversal",
"html",
"regex",
"xhtml"
] | stackoverflow_0000034394_dom_dom_traversal_html_regex_xhtml.txt |
Q:
Getting files and their version numbers from sharepoint
As a temporary stopgap until all the designers are in place we are currently hand-cranking a whole bunch of xml configuration files at work. One of the issues with this is file-versioning because people forget to update version numbers when updating the files (which is to be expected as humans generally suck at perfection).
Therefore I figure that as we store the files in Sharepoint I should be able to write a script to pull the files down from Sharepoint, get the version number and automatically enter/update the version number from Sharepoint into the file. This means when someone wants the "latest" files they can run the script and get the latest files with the version numbers correct (there is slightly more to it than this so the reason for using the script isn't just the benefit of auto-versioning).
Does anyone know how to get the files + version numbers from Sharepoint?
A:
I am assuming you are talking about documents in a list or a library, not source files in the 12 hive. If so, each library has built-in versioning. You can access it by clicking on the Form Library Settings available from each library (with appropriate admin privs, of course). From there, select Versioning Settings, and choose a setup that works for your process.
As for getting the version number in code, if you pull a SPListItem from the collection, there is a SPListItemVersionCollection named Versions attached to each item.
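For example, a short sketch along these lines (the site URL and library name are placeholders, and it needs a reference to Microsoft.SharePoint.dll) would print the version label of every document version in a library:
using (SPSite site = new SPSite("http://yoursitename"))
{
    using (SPWeb web = site.OpenWeb())
    {
        SPList library = web.Lists["MyDocumentLibrary"];
        foreach (SPListItem item in library.Items)
        {
            foreach (SPListItemVersion version in item.Versions)
            {
                Console.WriteLine("{0}: {1}", item.Name, version.VersionLabel);
            }
        }
    }
}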
A:
There is a way to do it thru web services, but I have done more with implementing custom event handlers. Here is a bit of code that will do what you want. Keep in mind, you can only execute this from the server, so you may want to wrap this up in a web service to allow access from your embedded devices. Also, you will need to reference the Microsoft.SharePoint.dll in this code.
using (SPSite site = new SPSite("http://yoursitename/subsite"))
{
using (SPWeb web = site.OpenWeb())
{
SPListItemCollection list = web.Lists["MyDocumentLibrary"].GetItems(new SPQuery());
foreach(SPListItem itm in list) {
Stream inStream = itm.File.OpenBinaryStream();
XmlTextReader reader = new XmlTextReader(inStream);
XmlDocument xd = new XmlDocument();
xd.Load(reader);
//from here you can read whatever XML node that contains your version info
reader.Close();
inStream.Close();
}
}
}
The using() statements are to ensure that you do not create a memory leak, as the SPSite and SPWeb are unmanaged objects.
Edit: If the version number has been promoted to a library field, you can access it by the following within the for loop above:
itm["FieldName"]
| Getting files and their version numbers from sharepoint | As a temporary stopgap until all the designers are in place we are currently hand-cranking a whole bunch of xml configuration files at work. One of the issues with this is file-versioning because people forget to update version numbers when updating the files (which is to be expected as humans generally suck at perfection).
Therefore I figure that as we store the files in Sharepoint I should be able to write a script to pull the files down from Sharepoint, get the version number and automatically enter/update the version number from Sharepoint into the file. This means when someone wants the "latest" files they can run the script and get the latest files with the version numbers correct (there is slightly more to it than this so the reason for using the script isn't just the benefit of auto-versioning).
Does anyone know how to get the files + version numbers from Sharepoint?
| [
"I am assuming you are talking about documents in a list or a library, not source files in the 12 hive. If so, each library has built-in versioning. You can access it by clicking on the Form Library Settings available from each library (with appropriate admin privs, of course). From there, select Versioning Settings, and choose a setup that works for your process.\nAs for getting the version number in code, if you pull a SPListItem from the collection, there is a SPListItemVersionCollection named Versions attached to each item.\n",
"There is a way to do it thru web services, but I have done more with implementing custom event handlers. Here is a bit of code that will do what you want. Keep in mind, you can only execute this from the server, so you may want to wrap this up in a web service to allow access from your embedded devices. Also, you will need to reference the Microsoft.SharePoint.dll in this code.\nusing (SPSite site = new SPSite(\"http://yoursitename/subsite\"))\n{\n using (SPWeb web = site.OpenWeb())\n {\n SPListItemCollection list = web.Lists[\"MyDocumentLibrary\"].GetItems(new SPQuery());\n foreach(SPListItem itm in list) {\n Stream inStream = itm.File.OpenBinaryStream();\n XmlTextReader reader = new XmlTextReader(inStream);\n XmlDocument xd = new XmlDocument();\n xd.Load(reader);\n //from here you can read whatever XML node that contains your version info\n reader.Close();\n inStream.Close();\n }\n }\n}\n\nThe using() statements are to ensure that you do not create a memory leak, as the SPSite and SPWeb are unmanaged objects.\nEdit: If the version number has been promoted to a library field, you can access it by the following within the for loop above:\nitm[\"FieldName\"]\n\n"
] | [
1,
1
] | [] | [] | [
"sharepoint",
"versioning"
] | stackoverflow_0000033252_sharepoint_versioning.txt |
Q:
Do webtests need VS tester edition on the build server?
Using the TFS build server without VS 2008 Team System Tester Edition installed - is it possible to run a series of webtests as part of a build?
I know that Webtests can only be recorded using the Tester Edition of VS. Here's a post about this from Jeff, back when he was at Vertigo.
I'm just trying to run the tests, though. Does that require the Tester Edition of VS to be installed, as well?
A:
You don't have to have the tester's edition; the Developer Edition works, as long as you can code and run those tests locally.
I believe with the standard MSDN license, if you have Developer Edition, you can run a single build server with a copy of it. There might be some extra limitations, such as who can run builds on the server; you should review your license agreement to see if there are any issues.
| Do webtests need VS tester edition on the build server? | Using the TFS build server without VS 2008 Team System Tester Edition installed - is it possible to run a series of webtests as part of a build?
I know that Webtests can only be recorded using the Tester Edition of VS. Here's a post about this from Jeff, back when he was at Vertigo.
I'm just trying to run the tests, though. Does that require the Tester Edition of VS to be installed, as well?
| [
"You don't have to have the tester's edition; the Developer Edition works, as long as you can code and run those tests locally.\nI believe with the standard MSDN license, if you have Developer Edition, you can run a single build server with a copy of it. There might be some extra limitations, such as who can run builds on the server; you should review your license agreement to see if there are any issues.\n"
] | [
1
] | [] | [] | [
"build_process",
"tfs"
] | stackoverflow_0000034322_build_process_tfs.txt |
Q:
How to determine if an html tag splits across multiple lines
I'm writing a PHP script that involves scraping web pages. Currently, the script analyzes the page line by line, but it breaks if there is a tag that spans multiple lines, like
<img src="example.jpg"
alt="example">
If worse comes to worst, I could possibly preprocess the page by removing all line breaks, then re-inserting them at the closest >, but this seems like a kludge.
Ideally, I'd be able to detect a tag that spans lines, conjoin only those two lines, and continue processing.
So what's the best method to detect this?
A:
This is one of my pet peeves: never parse HTML by hand. Never parse HTML with regexps. Never parse HTML with string comparisons. Always use an HTML parser to parse HTML – that's what they're there for.
It's been a long time since I've done any PHP, but a quick search turned up this PHP5 HTML parser.
A:
Don't write a parser, use someone else's: DOMDocument::loadHTML - that's just one, I think there are a lot of others.
A:
Well, this doesn't answer the question and is more of an opinion, but...
I think that the best scraping strategy (and consequently, to eliminate this problem) is not to analyze an HTML line by line, which is unnatural to HTML, but to analyze it by its natural delimiter: <> pairs.
There will be two types of course:
Tag elements that are immediately closed, e.g., < br />
Tag elements that need a separate closing tag, e.g., < p > text < /p >
You can immediately see the advantage of using this strategy in the case of paragraph(p) tags: It will be easier to parse mutiline paragraphs instead of having to track where the closing tag is.
A:
Perhaps for future projects I'll use a parsing library, but that's kind of aside from the question at hand. This is my current solution. rstrpos is strpos, but from the reverse direction. Example use:
for($i=0; $i<count($lines); $i++)
{
    $line = handle_multiline_tags($i, $lines[$i], $lines);
}
And here's that implementation:
function rstrpos($string, $charToFind, $relativePos)
{
$searchPos = $relativePos;
$searchChar = '';
while (($searchChar != $charToFind)&&($searchPos>-1))
{
$newPos = $searchPos-1;
$searchChar = substr($string,$newPos,strlen($charToFind));
$searchPos = $newPos;
}
if (!empty($searchChar))
{
        return $searchPos;
}
else
{
return FALSE;
}
}
function handle_multiline_tags(&$i, $line, $lines)
{
//if a tag is opened but not closed before a line break,
$open = rstrpos($line, '<', strlen($line));
$close = rstrpos($line, '>', strlen($line));
if(($open > $close)&&($open > -1)&&($close > -1))
{
$i++;
        return trim($line).trim(handle_multiline_tags($i, $lines[$i], $lines));
}
else
{
return trim($line);
}
}
This could probably be optimized in some way, but for my purposes, it's sufficient.
A:
Why don't you read in a line and set it to a string, then check the string for tag openings and closings. If a tag spans more than one line, add the next line to the string and move the part before the opening brace to your processed string. Then just parse through the entire file doing this. It's not beautiful but it should work.
A:
If you've gotta stick to your current method of parsing, and it's a regex, you can use the multi-line flag "m" to span across multiple lines.
| How to determine if an html tag splits across multiple lines | I'm writing a PHP script that involves scraping web pages. Currently, the script analyzes the page line by line, but it breaks if there is a tag that spans multiple lines, like
<img src="example.jpg"
alt="example">
If worse comes to worst, I could possibly preprocess the page by removing all line breaks, then re-inserting them at the closest >, but this seems like a kludge.
Ideally, I'd be able to detect a tag that spans lines, conjoin only those to lines, and continue processing.
So what's the best method to detect this?
| [
"This is one of my pet peeves: never parse HTML by hand. Never parse HTML with regexps. Never parse HTML with string comparisons. Always use an HTML parser to parse HTML – that's what they're there for.\nIt's been a long time since I've done any PHP, but a quick search turned up this PHP5 HTML parser.\n",
"Don't write a parser, use someone else's: DOMDocument::loadHTML - that's just one, I think there are a lot of others.\n",
"Well, this doesn't answer the question and is more of an opinion, but...\nI think that the best scraping strategy (and consequently, to eliminate this problem) is not to analyze an HTML line by line, which is unnatural to HTML, but to analyze it by its natural delimiter: <> pairs.\nThere will be two types of course:\n\nTag elements that are immediately closed, e.g., < br />\nTag elements that need a separate closing tag, e.g., < p > text < /p >\n\nYou can immediately see the advantage of using this strategy in the case of paragraph(p) tags: It will be easier to parse mutiline paragraphs instead of having to track where the closing tag is. \n",
"Perhaps for future projects I'll use a parsing library, but that's kind of aside from the question at hand. This is my current solution. rstrpos is strpos, but from the reverse direction. Example use:\nfor($i=0; $i<count($lines); $i++)\n{\n $line = handle_mulitline_tags(&$i, $line, $lines);\n}\n\nAnd here's that implementation:\nfunction rstrpos($string, $charToFind, $relativePos)\n{\n $searchPos = $relativePos;\n $searchChar = '';\n\n while (($searchChar != $charToFind)&&($searchPos>-1))\n {\n $newPos = $searchPos-1;\n $searchChar = substr($string,$newPos,strlen($charToFind));\n $searchPos = $newPos;\n }\n\n if (!empty($searchChar))\n {\n return $searchPos;\n return TRUE;\n }\n else\n {\n return FALSE;\n }\n}\n\nfunction handle_multiline_tags(&$i, $line, $lines)\n{\n //if a tag is opened but not closed before a line break,\n\n $open = rstrpos($line, '<', strlen($line));\n $close = rstrpos($line, '>', strlen($line));\n if(($open > $close)&&($open > -1)&&($close > -1))\n {\n $i++;\n return trim($line).trim(handle_multiline_tags(&$i, $lines[$i], $lines));\n }\n else\n {\n return trim($line);\n }\n}\n\nThis could probably be optimized in some way, but for my purposes, it's sufficient.\n",
"Why don't you read in a line, and set it to a string, then check the string for tag openings and closings, If a tag spans more then one line add the next line to the string and move the part before the opening brace to your processed string. Then just parse through the entire file doing this. Its not beautiful but it should work.\n",
"If you've gotta stick to your current method of parsing, and it's a regex, you can use the multi-line flag \"m\" to span across multiple lines.\n"
] | [
7,
2,
1,
1,
0,
0
] | [] | [] | [
"html",
"php",
"scripting"
] | stackoverflow_0000033814_html_php_scripting.txt |
Q:
How should I model a field that can contain both numeric and string values in SQL Server 2005?
I have a new database table I need to create...
It logically contains an ID, a name, and a "value".
That value field could be either numeric or a character string in nature.
I don't think I want to just make the field a varchar, because I also want to be able to query with filters like WHERE value > 0.5 and such.
What's the best way to model this concept in SQL Server 2005?
EDIT:
I'm not opposed to creating multiple fields here (one for numbers, one for non-numbers), but since they're all really the same concept, I wasn't sure that was a great idea.
I guess I could create separate fields, then have a view that sort of coalesces them into a single logical column.
Any opinions on that?
What I want to achieve is really pretty simple... usually this data will just be blindly displayed in a grid-type view.
I want to be also able to filter on the numeric values in that grid. This table will end up being in the tens of millions of records, so I don't want to paint myself into a corner with querying performance.
That querying performance is my main concern.
A:
A good way to get the query support you want is to have two columns: numvalue that stores a number and textvalue that stores characters. They should be nullable or at least have some default that represents no value. Your application can then decide which column to store its value and which to leave with no value.
A:
Your issue with mixing data may be how Sql 2005 sorts text data. It's not a 'natural' sort.
If you have a varchar field and you do:
where value > '20.5'
Values like "5" will be in your result (as in a character based sort "5" comes after "20.5")
You're going to be better off with separate columns for storage.
Use Coalesce to merge them into one column if you need them merged in your results:
select [ID], [Name], Coalesce( [value_str], [value_num] )
from [tablename]
A:
If you want to store numeric and string values in the same column, I am not sure you can avoid doing a lot of casts and converts when using that column as a query filter.
A:
two columns.
Table: (ValueLabel as char(x), Value as numeric(p,s))
A:
I don't think it's possible to have a column with both varchar and int type. You could save your value as a varchar and cast it to int during your query. But this way you could get an exception if your value does contain any character. What are you trying to achieve?
A:
If you want it to be able to hold a character string, I think you have to make the column varchar, or similar.
An alternative could be to have 2 or 3 columns instead of the one value column. Maybe have the three columns, value_type (enum between "number" and "string"), number_value, string_value. Then you could reconstruct that query to be
WHERE value_type = 'number' AND number_value > 0.5
A:
I don't think you're going to be able to get around using VARCHAR or NVARCHAR as your data type. With mixed data like you're describing, you'll have to test the value when you pull the field out of the db and perform the appropriate CAST or CONVERT based on the data type.
A:
I guess I could create separate fields, then have a view that sort of coalesces them into a single logical column. Any opinions on that?
It depends on the source of the data. If you are getting the data from users (or some other system) in some free-form manner and don't really care what type of data it is, then the best way to store it is the most generic manner (varchar, etc). If the incoming data is more structured and you care about that structure, then it makes more sense to keep that structure in the database by using separate fields.
From the viewpoint of a SELECT it doesn't really matter; you can store it either way and read it as the same schema. Once you get into filters (as you mention) things get a bit more hairy, but still easily doable. However, you don't mention if you need to be able to update this data and if so, if you need to enforce any validation on the data.
From the sounds of it, you need to do different types of searches based on the "type" of value being stored. As such, it may make sense to add a Type field so that any filters can be quickly limited to the type of values that you care about. Note, by Type I mean a more logical, application scope, Type; not the actual datatype being stored.
My recommendation would be to use a single field with a Type column if you need to easily support UPDATEs or use multiple fields (or tables, if these are totally different data sets) if SELECTing and filtering is all that is needed.
A:
You might consider using two columns, one "string" and one "numeric" (whatever variants of those are appropriate) with the "string" column NOT NULL and the "numeric" column allowing NULL values. When inserting a value, always populate the "string" column independent of the type; however, if the value is numeric, ALSO store it in the "numeric" column. Now you have a built-in indicator as to the type (if the "numeric" column is populated it is numeric, if not it is a string), can always just pull the value for display from the "string" column, and can use the "numeric" value in calculations or for proper numeric sorting / comparison as needed. You could always add a third column indicating the value type, but this approach eliminates the need for that. Note that you might consider maintaining the numeric and string values using a set of INSERT and UPDATE triggers.
| How should I model a field that can contain both numeric and string values in SQL Server 2005? | I have a new database table I need to create...
It logically contains an ID, a name, and a "value".
That value field could be either numeric or a character string in nature.
I don't think I want to just make the field a varchar, because I also want to be able to query with filters like WHERE value > 0.5 and such.
What's the best way to model this concept in SQL Server 2005?
EDIT:
I'm not opposed to creating multiple fields here (one for numbers, one for non-numbers), but since they're all really the same concept, I wasn't sure that was a great idea.
I guess I could create separate fields, then have a view that sort of coalesces them into a single logical column.
Any opinions on that?
What I want to achieve is really pretty simple... usually this data will just be blindly displayed in a grid-type view.
I want to be also able to filter on the numeric values in that grid. This table will end up being in the tens of millions of records, so I don't want to paint myself into a corner with querying performance.
That querying performance is my main concern.
| [
"A good way to get the query support you want is to have two columns: numvalue that stores a number and textvalue that stores characters. They should be nullable or at least have some default that represents no value. Your application can then decide which column to store its value and which to leave with no value.\n",
"Your issue with mixing data may be how Sql 2005 sorts text data. It's not a 'natural' sort.\nIf you have a varchar field and you do:\nwhere value > '20.5'\n\nValues like \"5\" will be in your result (as in a character based sort \"5\" comes after \"20.5\")\nYou're going to be better off with separate columns for storage.\nUse Coalesce to merge them into one column if you need them merged in your results:\nselect [ID], [Name], Coalesce( [value_str], [value_num] )\nfrom [tablename]\n\n",
"If you want to store numeric and string values in the same column, I am not sure you can avoid doing a lot of casts and converts when using that column as a query filter. \n",
"two columns. \nTable: (ValueLable as char(x), Value as numerica(p,s))\n\n",
"I don't think it's possible to have a column with both varchar and int type. You could save your value as a varchar and cast it to int during your query. But this way you could get an exception if your value does contain any character. What are you trying to achieve?\n",
"If you want it to be able to hold a character string, I think you have to make the column varchar, or similar.\nAn alternative could be to have 2 or 3 columns instead of the one value column. Maybe have the three columns, value_type (enum between \"number\" and \"string\"), number_value, string_value. Then you could reconstruct that query to be\nWHERE value_type = 'number' AND number_value > 0.5\n\n",
"I don't think you're going to be able to get around using VARCHAR or NVARCHAR as your data type. With mixed data like you're describing, you'll have to test the value when you pull the field out of the db and perform the appropriate CAST or CONVERT based on the data type. \n",
"\nI guess I could create separate fields, then have a view that sort of coalesces them into a single logical column. Any opinions on that?\n\nIt depends on the source of the data. If you are getting the data from users (or some other system) in some free-form manner and don't really care what type of data it is, then the best way to store it is the most generic manner (varchar, etc). If the incoming data is more structured and you care about that structure, then it makes more sense to keep that structure in the database by using separate fields.\nFrom the viewpoint of a SELECT it doesn't really matter; you can store it either way and read it as the same schema. Once you get into filters (as you mention) things get a bit more hairy, but still easily doable. However, you don't mention if you need to be able to update this data and if so, if you need to enforce any validation on the data.\nFrom the sounds of it, you need to do different types of searches based on the \"type\" of value being stored. As such, it may make sense to add a Type field so that any filters can be quickly limited to the type of values that you care about. Note, by Type I mean a more logical, application scope, Type; not the actual datatype being stored.\nMy recommendation would be to use a single field with a Type column if you need to easily support UPDATEs or use multiple fields (or tables, if these are totally different data sets) if SELECTing and filtering is all that is needed.\n",
"You might consider using two columns, one \"string\" and one \"numeric\" (whatever variants of those are appropriate) with the \"string\" column NOT NULL and the \"numeric\" column allowing NULL values. When inserting a value, always populate the \"string\" column indpendent of the type, however if the value is numeric, ALSO store it in the \"numeric\" column. Now you have a built in indicator as to the type (if the \"numeric\" column is populated it is numeric, if not it is a string), can always just pull the value for display from the \"string\" column, and can use the \"numeric\" value in calculations or for proper numeric sorting / comparison as needed. You could always add a third column indicating the value type, but this approach eliminates the need for that. Note that you might consider maintaining the numeric and string values using a set of INSERT and UPDATE triggers.\n"
] | [
3,
2,
0,
0,
0,
0,
0,
0,
0
] | [] | [] | [
"database_design",
"sql_server"
] | stackoverflow_0000034398_database_design_sql_server.txt |
Q:
Standardised text editing behaviour across Mac applications
I've switched over to a Mac recently and, although things have been going quite well, the very different text-editing behaviours across applications is driving me insane.
Home, End, Page Up, Page Down, Apple-arrow, Ctrl-arrow, alt-arrow etc. quite often do different things depending on the application.
Is there a way to standardise this behaviour?
A:
There are standards, but they are not based around what you're used to from windows. It drove me mad until I got over myself and decided to learn what the actual standards were. Since then I've been sold.
The ones I use:
Command-Left/Right - Jump to start/end of line
Can also do this with ctrl-a/e which is great if you're used to ssh
Command-Up/Down - Jump to top/bottom of text field or document
Option-Left/Right - Jump to start/end of word or previous/next word
These basically replace home/end/pgup/pgdown, and ctrl-left/right from the windows world.
I find this to be a massive win due to the fact I have a macbook pro and almost no laptops have proper home/end/pgup/pgdown keys - not needing them in OSX is a godsend
Here's a big list of the rest of them
A:
And what's funny (and frustrating!) is that the Microsoft OS X apps (e.g. Entourage) use the Windows standards.
I develop on WinXP during the day but have an iMac at home, so it's confusing enough trying to switch modes between work and home. But then I have to remember if I'm writing an e-mail in Entourage, I need to revert back to Windows mode.
I can't think of any good reason why MS wouldn't follow the OS X keyboard standards...
| Standardised text editing behaviour across Mac applications | I've switched over to a Mac recently and, although things have been going quite well, the very different text-editing behaviours across applications is driving me insane.
Home, End, Page Up, Page Down, Apple-arrow, Ctrl-arrow, alt-arrow etc. quite often do different things depending on the application.
Is there a way to standardise this behaviour?
| [
"There are standards, but they are not based around what you're used to from windows. It drove me mad until I got over myself and decided to learn what the actual standards were. Since then I've been sold.\nThe ones I use:\n\nCommand-Left/Right - Jump to start/end of line\n\n\nCan also do this with ctrl-a/e which is great if you're used to ssh\n\nCommand-Up/Down - Jump to top/bottom of text field or document\nOption-Left/Right - Jump to start/end of word or previous/next word\n\nThese basically replace home/end/pgup/pgdown, and ctrl-left/right from the windows world.\nI find this to be a massive win due to the fact I have a macbook pro and almost no laptops have proper home/end/pgup/pgdown keys - not needing them in OSX is a godsend\nHere's a big list of the rest of them\n",
"And what's funny (and frustrating!) is that the Microsoft OS X apps (e.g. Entourage) use the Windows standards.\nI develop on WinXP during the day but have an iMac at home, so it's confusing enough trying to switch modes between work and home. But then I have to remember if I'm writing an e-mail in Entourage, I need to revert back to Windows mode.\nI can't think of any good reason why MS wouldn't follow the OS X keyboard standards...\n"
] | [
6,
0
] | [] | [] | [
"keyboard_shortcuts",
"macos"
] | stackoverflow_0000033971_keyboard_shortcuts_macos.txt |
Q:
Are there similar tools to Clone Detective for other languages/IDEs?
I just saw Clone Detective linked on YCombinator news, and the idea heavily appeals to me. It seems like it would be useful for many languages, not just C#, but I haven't seen anything similar elsewhere.
Edit: For those who don't want to follow the link, Clone Detective scans the codebase for duplicate code that may warrant refactoring to minimize duplication.
A:
Java has a few - some of the most popular static analysis tools have this built in along with many other useful rules.
Ones I have used, in the (purely subjective) order that I was happiest with:
PMD - comes with CPD - their copy and paste detector
Checkstyle - specific rules to look for duplicate code
Findbugs - the daddy of all Java static analysis tools. Includes duplicate code detection, along with just about anything else that you can think of, but quite resource intensive
There are some nice IDE plugins for all of these and many other reporting tools (for example, you can see results on a Hudson continuous build server, or your project's Maven site)
A:
The IntelliJ IDE (Java, Scala, Ruby, ...) has a Locate Duplicate... tool. Useful indeed!
| Are there similar tools to Clone Detective for other languages/IDEs? | I just saw Clone Detective linked on YCombinator news, and the idea heavily appeals to me. It seems like it would be useful for many languages, not just C#, but I haven't seen anything similar elsewhere.
Edit: For those who don't want to follow the link, Clone Detective scans the codebase for duplicate code that may warrant refactoring to minimize duplication.
| [
"Java has a few - some of the most popular static analysis tools have this built in along with many other useful rules.\nOnes I have used, in the (purely subjective) order that I was happiest with:\n\nPMD - comes with CPD - their copy and paste detector\nCheckstyle - specific rules to look for duplicate code\nFindbugs - the daddy of all Java static analysis tools. Includes duplicate code detection, along with just about anything else that you can think of, but quite resource intensive\n\nThere are some nice IDE plugins for all of these and many other reporting tools (for example, you can see results on a Hudson continuos build server, or your project's Maven site)\n",
"The IntelliJ IDE (Java, Scala, Ruby,...) has a Locate Duplicate... tool. Usefull indeed !\n"
] | [
2,
2
] | [] | [] | [
"c#",
"language_agnostic"
] | stackoverflow_0000032338_c#_language_agnostic.txt |
Q:
How to turn off sounds in TortoiseSVN?
I do not want TortoiseSVN to alert me with sounds - e.g. when it fails to update.
How do I turn off sounds in TortoiseSVN?
A:
Right click > TortoiseSVN > Settings > System Sounds..
Scroll down to the bottom.
A:
You can do this from the Sounds panel in Control Panel.
| How to turn off sounds in TortoiseSVN? | I do not want TortoiseSVN to alert me with sounds - e.g. when it fails to update.
How do I turn off sounds in TortoiseSVN?
| [
"Right click > TortoiseSVN > Settings > System Sounds..\nScroll down to the bottom.\n",
"You can do this from the Sounds panel in Control Panel.\n"
] | [
3,
1
] | [] | [] | [
"system_sounds",
"tortoisesvn"
] | stackoverflow_0000034698_system_sounds_tortoisesvn.txt |
Q:
MySQL "Error 1005" when adding tables
I've recently been working with a MySQL database, and using MySQL workbench to design the Database.
When I use the export to SQL function, so I can actually get the layout in to the Database, I get:
"Error 1005: Cannot create table"
This appears to be related to Foreign Keys in the create table statement.
Does anybody have a work around for this that doesn't involve taking the constraints out and putting them back in later? That's a less than ideal solution given the size of the database.
A:
When you get this (and other errors out of the InnoDB engine) issue:
SHOW ENGINE INNODB STATUS;
It will give a more detailed reason why the operation couldn't be completed. Make sure to run that from something that'll allow you to scroll or copy the data, as the response is quite long.
A:
I ran into this situation recently when I attempted (in InnoDB tables) to make a foreign key reference to a column that had a different data type.
MySQL 5.1 Documentation
| MySQL "Error 1005" when adding tables | I've recently been working with a MySQL database, and using MySQL workbench to design the Database.
When I use the export to SQL function, so I can actually get the layout in to the Database, I get:
"Error 1005: Cannot create table"
This appears to be related to Foreign Keys in the create table statement.
Does anybody have a work around for this that doesn't involve taking the constraints out and putting them back in later? That's a less than ideal solution given the size of the database.
| [
"When you get this (and other errors out of the InnoDB engine) issue:\nSHOW ENGINE INNODB STATUS;\n\nIt will give a more detailed reason why the operation couldn't be completed. Make sure to run that from something that'll allow you to scroll or copy the data, as the response is quite long.\n",
"I ran into this situation recently when I attempted (in InnoDB tables) to make a foreign key reference to a column that had a different data type.\nMySQL 5.1 Documentation\n"
] | [
7,
1
] | [] | [] | [
"mysql",
"mysql_error_1005",
"mysql_workbench"
] | stackoverflow_0000034579_mysql_mysql_error_1005_mysql_workbench.txt |
Q:
How do you swap DIVs on mouseover (jQuery)?
This must be the second most simple rollover effect, yet I can't find any simple solution.
Wanted: I have a list of items and a corresponding list of slides (DIVs). After loading, the first list item should be selected (bold) and the first slide should be visible. When the user hovers over another list item, that list item should be selected instead and the corresponding slide be shown.
The following code works, but is awful. How can I get this behaviour in an elegant way? jquery has dozens of animated and complicated rollover effects, but I didn't come up with a clean way for this effect.
<script type="text/javascript">
function switchTo(id) {
document.getElementById('slide1').style.display=(id==1)?'block':'none';
document.getElementById('slide2').style.display=(id==2)?'block':'none';
document.getElementById('slide3').style.display=(id==3)?'block':'none';
document.getElementById('slide4').style.display=(id==4)?'block':'none';
document.getElementById('switch1').style.fontWeight=(id==1)?'bold':'normal';
document.getElementById('switch2').style.fontWeight=(id==2)?'bold':'normal';
document.getElementById('switch3').style.fontWeight=(id==3)?'bold':'normal';
document.getElementById('switch4').style.fontWeight=(id==4)?'bold':'normal';
}
</script>
<ul id="switches">
<li id="switch1" onmouseover="switchTo(1);" style="font-weight:bold;">First slide</li>
<li id="switch2" onmouseover="switchTo(2);">Second slide</li>
<li id="switch3" onmouseover="switchTo(3);">Third slide</li>
<li id="switch4" onmouseover="switchTo(4);">Fourth slide</li>
</ul>
<div id="slides">
<div id="slide1">Well well.</div>
<div id="slide2" style="display:none;">Oh no!</div>
<div id="slide3" style="display:none;">You again?</div>
<div id="slide4" style="display:none;">I'm gone!</div>
</div>
A:
Rather than displaying all slides when JS is off (which would likely break the page layout) I would place inside the switch LIs real A links to server-side code which returns the page with the "active" class pre-set on the proper switch/slide.
$(document).ready(function() {
switches = $('#switches > li');
slides = $('#slides > div');
switches.each(function(idx) {
$(this).data('slide', slides.eq(idx));
}).hover(
function() {
switches.removeClass('active');
slides.removeClass('active');
$(this).addClass('active');
$(this).data('slide').addClass('active');
});
});
#switches .active {
font-weight: bold;
}
#slides div {
display: none;
}
#slides div.active {
display: block;
}
<html>
<head>
<title>test</title>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script type="text/javascript" src="switch.js"></script>
</head>
<body>
<ul id="switches">
<li class="active">First slide</li>
<li>Second slide</li>
<li>Third slide</li>
<li>Fourth slide</li>
</ul>
<div id="slides">
<div class="active">Well well.</div>
<div>Oh no!</div>
<div>You again?</div>
<div>I'm gone!</div>
</div>
</body>
</html>
A:
Here's my light-markup jQuery version:
<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript">
function switchTo(i) {
$('#switches li').css('font-weight','normal').eq(i).css('font-weight','bold');
$('#slides div').css('display','none').eq(i).css('display','block');
}
$(document).ready(function(){
$('#switches li').mouseover(function(event){
switchTo($('#switches li').index(event.target));
});
switchTo(0);
});
</script>
<ul id="switches">
<li>First slide</li>
<li>Second slide</li>
<li>Third slide</li>
<li>Fourth slide</li>
</ul>
<div id="slides">
<div>Well well.</div>
<div>Oh no!</div>
<div>You again?</div>
<div>I'm gone!</div>
</div>
This has the advantage of showing all the slides if the user has javascript turned off, uses very little HTML markup and the javascript is pretty readable. The switchTo function takes an index number of which <li> / <div> pair to activate, resets all the relevant elements to their default styles (non-bold for list items, display:none for the DIVs) and then sets the desired list-item and div to bold and display. As long as the client has javascript enabled, the functionality will be exactly the same as your original example.
A:
Here's the jQuery version:
<script type="text/javascript" src="http://jqueryjs.googlecode.com/files/jquery-1.2.6.min.js"></script>
<script type="text/javascript">
$(function () {
$("#switches li").mouseover(function () {
var $this = $(this);
$("#slides div").hide();
$("#slide" + $this.attr("id").replace(/switch/, "")).show();
$("#switches li").css("font-weight", "normal");
$this.css("font-weight", "bold");
});
});
</script>
<ul id="switches">
<li id="switch1" style="font-weight:bold;">First slide</li>
<li id="switch2">Second slide</li>
<li id="switch3">Third slide</li>
<li id="switch4">Fourth slide</li>
</ul>
<div id="slides">
<div id="slide1">Well well.</div>
<div id="slide2" style="display:none;">Oh no!</div>
<div id="slide3" style="display:none;">You again?</div>
<div id="slide4" style="display:none;">I'm gone!</div>
</div>
A:
<html>
<head>
<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript">
$(document).ready(
function(){
$( '#switches li' ).mouseover(
function(){
$( "#slides div" ).hide();
$( '#switches li' ).css( 'font-weight', 'normal' );
$( this ).css( 'font-weight', 'bold' );
$( '#slide' + $( this ).attr( 'id' ).replace( 'switch', '' ) ).show();
}
);
}
);
</script>
</head>
<body>
<ul id="switches">
<li id="switch1" style="font-weight:bold;">First slide</li>
<li id="switch2">Second slide</li>
<li id="switch3">Third slide</li>
<li id="switch4">Fourth slide</li>
</ul>
<div id="slides">
<div id="slide1">Well well.</div>
<div id="slide2" style="display:none;">Oh no!</div>
<div id="slide3" style="display:none;">You again?</div>
<div id="slide4" style="display:none;">I'm gone!</div>
</div>
</body>
</html>
A:
The only thing that's wrong with this code (at least to me) is that you're not using a loop to process all elements. Other than that, why not to it like that?
And with loop, I mean grabbing the container element via a JQuery and iterating over all child elements – basically a one-liner.
| How do you swap DIVs on mouseover (jQuery)? | This must be the second most simple rollover effect, yet I can't find any simple solution.
Wanted: I have a list of items and a corresponding list of slides (DIVs). After loading, the first list item should be selected (bold) and the first slide should be visible. When the user hovers over another list item, that list item should be selected instead and the corresponding slide be shown.
The following code works, but is awful. How can I get this behaviour in an elegant way? jquery has dozens of animated and complicated rollover effects, but I didn't come up with a clean way for this effect.
<script type="text/javascript">
function switchTo(id) {
document.getElementById('slide1').style.display=(id==1)?'block':'none';
document.getElementById('slide2').style.display=(id==2)?'block':'none';
document.getElementById('slide3').style.display=(id==3)?'block':'none';
document.getElementById('slide4').style.display=(id==4)?'block':'none';
document.getElementById('switch1').style.fontWeight=(id==1)?'bold':'normal';
document.getElementById('switch2').style.fontWeight=(id==2)?'bold':'normal';
document.getElementById('switch3').style.fontWeight=(id==3)?'bold':'normal';
document.getElementById('switch4').style.fontWeight=(id==4)?'bold':'normal';
}
</script>
<ul id="switches">
<li id="switch1" onmouseover="switchTo(1);" style="font-weight:bold;">First slide</li>
<li id="switch2" onmouseover="switchTo(2);">Second slide</li>
<li id="switch3" onmouseover="switchTo(3);">Third slide</li>
<li id="switch4" onmouseover="switchTo(4);">Fourth slide</li>
</ul>
<div id="slides">
<div id="slide1">Well well.</div>
<div id="slide2" style="display:none;">Oh no!</div>
<div id="slide3" style="display:none;">You again?</div>
<div id="slide4" style="display:none;">I'm gone!</div>
</div>
| [
"Rather than displaying all slides when JS is off (which would likely break the page layout) I would place inside the switch LIs real A links to server-side code which returns the page with the \"active\" class pre-set on the proper switch/slide.\n\n\n$(document).ready(function() {\r\n switches = $('#switches > li');\r\n slides = $('#slides > div');\r\n switches.each(function(idx) {\r\n $(this).data('slide', slides.eq(idx));\r\n }).hover(\r\n function() {\r\n switches.removeClass('active');\r\n slides.removeClass('active');\r\n $(this).addClass('active');\r\n $(this).data('slide').addClass('active');\r\n });\r\n});\n#switches .active {\r\n font-weight: bold;\r\n}\r\n#slides div {\r\n display: none;\r\n}\r\n#slides div.active {\r\n display: block;\r\n}\n<html>\r\n\r\n<head>\r\n\r\n <title>test</title>\r\n\r\n <script src=\"https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js\"></script>\r\n <script type=\"text/javascript\" src=\"switch.js\"></script>\r\n\r\n</head>\r\n\r\n<body>\r\n\r\n <ul id=\"switches\">\r\n <li class=\"active\">First slide</li>\r\n <li>Second slide</li>\r\n <li>Third slide</li>\r\n <li>Fourth slide</li>\r\n </ul>\r\n <div id=\"slides\">\r\n <div class=\"active\">Well well.</div>\r\n <div>Oh no!</div>\r\n <div>You again?</div>\r\n <div>I'm gone!</div>\r\n </div>\r\n\r\n</body>\r\n\r\n</html>\n\n\n\n",
"Here's my light-markup jQuery version:\n<script type=\"text/javascript\" src=\"jquery.js\"></script>\n<script type=\"text/javascript\">\nfunction switchTo(i) {\n $('#switches li').css('font-weight','normal').eq(i).css('font-weight','bold');\n $('#slides div').css('display','none').eq(i).css('display','block');\n}\n$(document).ready(function(){\n $('#switches li').mouseover(function(event){\n switchTo($('#switches li').index(event.target));\n });\n switchTo(0);\n});\n</script>\n<ul id=\"switches\">\n <li>First slide</li>\n <li>Second slide</li>\n <li>Third slide</li>\n <li>Fourth slide</li>\n</ul>\n<div id=\"slides\">\n <div>Well well.</div>\n <div>Oh no!</div>\n <div>You again?</div>\n <div>I'm gone!</div>\n</div>\n\nThis has the advantage of showing all the slides if the user has javascript turned off, uses very little HTML markup and the javascript is pretty readable. The switchTo function takes an index number of which <li> / <div> pair to activate, resets all the relevant elements to their default styles (non-bold for list items, display:none for the DIVs) and the sets the desired list-item and div to bold and display. As long as the client has javascript enabled, the functionality will be exactly the same as your original example.\n",
"Here's the jQuery version:\n<script type=\"text/javascript\" src=\"http://jqueryjs.googlecode.com/files/jquery-1.2.6.min.js\"></script>\n<script type=\"text/javascript\">\n$(function () {\n $(\"#switches li\").mouseover(function () {\n var $this = $(this);\n $(\"#slides div\").hide();\n $(\"#slide\" + $this.attr(\"id\").replace(/switch/, \"\")).show();\n $(\"#switches li\").css(\"font-weight\", \"normal\");\n $this.css(\"font-weight\", \"bold\");\n });\n});\n</script>\n\n<ul id=\"switches\">\n <li id=\"switch1\" style=\"font-weight:bold;\">First slide</li>\n <li id=\"switch2\">Second slide</li>\n <li id=\"switch3\">Third slide</li>\n <li id=\"switch4\">Fourth slide</li>\n</ul>\n<div id=\"slides\">\n <div id=\"slide1\">Well well.</div>\n <div id=\"slide2\" style=\"display:none;\">Oh no!</div>\n <div id=\"slide3\" style=\"display:none;\">You again?</div>\n <div id=\"slide4\" style=\"display:none;\">I'm gone!</div>\n</div>\n\n",
"<html>\n<head>\n<script type=\"text/javascript\" src=\"jquery.js\"></script>\n<script type=\"text/javascript\">\n\n$(document).ready(\n function(){\n $( '#switches li' ).mouseover(\n function(){\n $( \"#slides div\" ).hide();\n $( '#switches li' ).css( 'font-weight', 'normal' );\n $( this ).css( 'font-weight', 'bold' );\n $( '#slide' + $( this ).attr( 'id' ).replace( 'switch', '' ) ).show();\n }\n );\n }\n);\n\n</script>\n</head>\n<body>\n<ul id=\"switches\">\n <li id=\"switch1\" style=\"font-weight:bold;\">First slide</li>\n <li id=\"switch2\">Second slide</li>\n <li id=\"switch3\">Third slide</li>\n <li id=\"switch4\">Fourth slide</li>\n</ul>\n<div id=\"slides\">\n <div id=\"slide1\">Well well.</div>\n <div id=\"slide2\" style=\"display:none;\">Oh no!</div>\n <div id=\"slide3\" style=\"display:none;\">You again?</div>\n <div id=\"slide4\" style=\"display:none;\">I'm gone!</div>\n</div>\n</body>\n</html>\n\n",
"The only thing that's wrong with this code (at least to me) is that you're not using a loop to process all elements. Other than that, why not to it like that?\nAnd with loop, I mean grabbing the container element via a JQuery and iterating over all child elements – basically a one-liner.\n"
] | [
19,
6,
5,
2,
0
] | [] | [] | [
"css",
"html",
"javascript",
"jquery"
] | stackoverflow_0000034536_css_html_javascript_jquery.txt |
Q:
How do you use the new ModelBinder classes in ASP.NET MVC Preview 5
You'll notice that Preview 5 includes the following in their release notes:
Added support for custom model binders. Custom binders allow you to define complex types as parameters to an action method. To use this feature, mark the complex type or the parameter declaration with [ModelBinder(…)].
So how do you go about actually using this facility so that I can have something like this work in my Controller:
public ActionResult Insert(Contact contact)
{
if (this.ViewData.ModelState.IsValid)
{
this.contactService.SaveContact(contact);
        return this.RedirectToAction("Details", new { id = contact.ID });
}
}
A:
Well I looked into this. ASP.NET MVC provides a common location for registering implementations of IModelBinder. They also have the basics of this working via the new Controller.UpdateModel method.
So I essentially combined these two concepts by creating an implementation of IModelBinder that does the same thing as Controller.UpdateModel for all public properties of the modelClass.
public class ModelBinder : IModelBinder
{
public object GetValue(ControllerContext controllerContext, string modelName, Type modelType, ModelStateDictionary modelState)
{
object model = Activator.CreateInstance(modelType);
PropertyDescriptorCollection properties = TypeDescriptor.GetProperties(model);
foreach (PropertyDescriptor descriptor in properties)
{
string key = modelName + "." + descriptor.Name;
object value = ModelBinders.GetBinder(descriptor.PropertyType).GetValue(controllerContext, key, descriptor.PropertyType, modelState);
if (value != null)
{
try
{
descriptor.SetValue(model, value);
continue;
}
catch
{
string errorMessage = String.Format("The value '{0}' is invalid for property '{1}'.", value, key);
string attemptedValue = Convert.ToString(value);
modelState.AddModelError(key, attemptedValue, errorMessage);
}
}
}
return model;
}
}
In your Global.asax.cs you'd need to add something like this:
protected void Application_Start()
{
    ModelBinders.Binders.Add(typeof(Contact), new ModelBinder());
}
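As an illustration of the attribute route mentioned in the quoted release notes ("mark the complex type or the parameter declaration with [ModelBinder(…)]"), the binder above could also be attached to the type itself. ContactModelBinder below stands for the ModelBinder class implemented above (renamed only so it does not clash with the [ModelBinder] attribute shorthand); the Name property, IContactService, the constructor injection and the View(contact) fallback are assumptions added to keep the sketch self-contained, not something taken from the original question.

using System.Web.Mvc;

// ContactModelBinder is the IModelBinder implementation from the answer above.
[ModelBinder(typeof(ContactModelBinder))]
public class Contact
{
    public int ID { get; set; }
    public string Name { get; set; }          // placeholder property, not from the question
}

public interface IContactService              // stand-in for the service used in the question
{
    void SaveContact(Contact contact);
}

public class ContactsController : Controller
{
    private readonly IContactService contactService;

    public ContactsController(IContactService contactService)
    {
        this.contactService = contactService;
    }

    // The binder resolved from the attribute (or the Global.asax registration)
    // builds the Contact from the request before this action runs.
    public ActionResult Insert(Contact contact)
    {
        if (ViewData.ModelState.IsValid)
        {
            this.contactService.SaveContact(contact);
            return RedirectToAction("Details", new { id = contact.ID });
        }
        return View(contact);                 // assumption: redisplay the form on invalid input
    }
}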
| How do you use the new ModelBinder classes in ASP.NET MVC Preview 5 | You'll notice that Preview 5 includes the following in their release notes:
Added support for custom model binders. Custom binders allow you to define complex types as parameters to an action method. To use this feature, mark the complex type or the parameter declaration with [ModelBinder(…)].
So how do you go about actually using this facility so that I can have something like this work in my Controller:
public ActionResult Insert(Contact contact)
{
if (this.ViewData.ModelState.IsValid)
{
this.contactService.SaveContact(contact);
        return this.RedirectToAction("Details", new { id = contact.ID });
}
}
| [
"Well I looked into this. ASP.NET provides a common location for registering the implementation of IControlBinders. They also have the basics of this working via the new Controller.UpdateModel method. \nSo I essentially combined these two concepts by creating an implementation of IModelBinder that does the same thing as Controller.UpdateModel for all public properties of the modelClass.\npublic class ModelBinder : IModelBinder \n{\n public object GetValue(ControllerContext controllerContext, string modelName, Type modelType, ModelStateDictionary modelState)\n {\n object model = Activator.CreateInstance(modelType);\n\n PropertyDescriptorCollection properties = TypeDescriptor.GetProperties(model);\n foreach (PropertyDescriptor descriptor in properties)\n {\n string key = modelName + \".\" + descriptor.Name;\n object value = ModelBinders.GetBinder(descriptor.PropertyType).GetValue(controllerContext, key, descriptor.PropertyType, modelState);\n if (value != null)\n {\n try\n {\n descriptor.SetValue(model, value);\n continue;\n }\n catch\n {\n string errorMessage = String.Format(\"The value '{0}' is invalid for property '{1}'.\", value, key);\n string attemptedValue = Convert.ToString(value);\n modelState.AddModelError(key, attemptedValue, errorMessage);\n }\n }\n }\n\n return model;\n }\n}\n\nIn your Global.asax.cs you'd need to add something like this:\nprotected void Application_Start()\n{\n ModelBinders.Binders.Add(typeof(Contact), new ModelBinder());\n\n"
] | [
2
] | [] | [] | [
"asp.net_mvc"
] | stackoverflow_0000034709_asp.net_mvc.txt |
Q:
Odd behaviour for rowSpan in Flex
I am experiencing some oddities when working with a Grid component in flex, I have the following form that uses a grid to align the fields, as you can see, each GridRow has a border.
My problem is that the border is still visible through GridItems that span multiple rows (observe the TextArea that spans 4 rows; the GridRow borders go right through it!)
Any ideas of how to fix this?
A:
I think the problem is that when the Grid is drawn, it draws each row from top to bottom, and within each row the items left to right. So the row-spanned <mx:TextArea> item is drawn first extending down into the area of the 2 next rows, which get drawn after and on top.
The quickest way around I can see would be to draw the row borders on the <mx:GridItem>s instead, skipping the left and right edges based on the item's placement in the row. Something like this:
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute">
<mx:Style>
Grid {
background-color: white;
horizontal-gap: 0;
}
GridItem {
padding-top: 5;
padding-left: 5;
padding-right: 5;
padding-bottom: 5;
background-color: #efefef;
border-style: solid;
border-thickness: 1;
border-color: black;
}
.left {
border-sides: top, bottom, left;
}
.right {
border-sides: top, bottom, right;
}
.center {
border-sides: top, bottom;
}
</mx:Style>
<mx:Grid>
<mx:GridRow>
<mx:GridItem styleName="left">
<mx:Label text="Label"/>
</mx:GridItem>
<mx:GridItem styleName="center">
<mx:ComboBox/>
</mx:GridItem>
<mx:GridItem styleName="center">
<mx:Label text="Label"/>
</mx:GridItem>
<mx:GridItem styleName="right">
<mx:ComboBox/>
</mx:GridItem>
</mx:GridRow>
<mx:GridRow>
<mx:GridItem styleName="left">
<mx:Label text="Label"/>
</mx:GridItem>
<mx:GridItem styleName="center">
<mx:TextInput/>
</mx:GridItem>
<mx:GridItem colSpan="2" rowSpan="3">
<mx:VBox width="100%" height="100%">
<mx:Label text="Label"/>
<mx:TextArea width="100%" height="100%"/>
</mx:VBox>
</mx:GridItem>
</mx:GridRow>
<mx:GridRow>
<mx:GridItem styleName="left">
<mx:Label text="Label"/>
</mx:GridItem>
<mx:GridItem styleName="center">
<mx:TextInput/>
</mx:GridItem>
</mx:GridRow>
<mx:GridRow>
<mx:GridItem styleName="left">
<mx:Label text="Label"/>
</mx:GridItem>
<mx:GridItem styleName="center">
<mx:TextInput/>
</mx:GridItem>
</mx:GridRow>
</mx:Grid>
</mx:Application>
| Odd behaviour for rowSpan in Flex | I am experiencing some oddities when working with a Grid component in flex, I have the following form that uses a grid to align the fields, as you can see, each GridRow has a border.
My problem is that the border is still visible through GridItems that span multiple rows (observe the TextArea that spans 4 rows; the GridRow borders go right through it!)
Any ideas of how to fix this?
| [
"I think the problem is that when the Grid is drawn, it draws each row from top to bottom, and within each row the items left to right. So the row-spanned <mx:TextArea> item is drawn first extending down into the area of the 2 next rows, which get drawn after and on top.\nThe quickest way around I can see would be to draw the row borders on the <mx:GridItem>s instead, skipping the left and right edges based on the item's placement in the row. Something like this:\n<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<mx:Application xmlns:mx=\"http://www.adobe.com/2006/mxml\" layout=\"absolute\">\n <mx:Style>\n Grid {\n background-color: white;\n horizontal-gap: 0;\n }\n GridItem {\n padding-top: 5;\n padding-left: 5;\n padding-right: 5;\n padding-bottom: 5;\n background-color: #efefef;\n\n border-style: solid;\n border-thickness: 1;\n border-color: black;\n }\n .left {\n border-sides: top, bottom, left;\n }\n .right {\n border-sides: top, bottom, right;\n }\n .center {\n border-sides: top, bottom;\n }\n </mx:Style>\n <mx:Grid>\n <mx:GridRow>\n <mx:GridItem styleName=\"left\">\n <mx:Label text=\"Label\"/>\n </mx:GridItem>\n <mx:GridItem styleName=\"center\">\n <mx:ComboBox/>\n </mx:GridItem>\n <mx:GridItem styleName=\"center\">\n <mx:Label text=\"Label\"/>\n </mx:GridItem>\n <mx:GridItem styleName=\"right\">\n <mx:ComboBox/>\n </mx:GridItem>\n </mx:GridRow>\n <mx:GridRow>\n <mx:GridItem styleName=\"left\">\n <mx:Label text=\"Label\"/>\n </mx:GridItem>\n <mx:GridItem styleName=\"center\">\n <mx:TextInput/>\n </mx:GridItem>\n <mx:GridItem colSpan=\"2\" rowSpan=\"3\">\n <mx:VBox width=\"100%\" height=\"100%\">\n <mx:Label text=\"Label\"/>\n <mx:TextArea width=\"100%\" height=\"100%\"/>\n </mx:VBox>\n </mx:GridItem>\n </mx:GridRow>\n <mx:GridRow>\n <mx:GridItem styleName=\"left\">\n <mx:Label text=\"Label\"/>\n </mx:GridItem>\n <mx:GridItem styleName=\"center\">\n <mx:TextInput/>\n </mx:GridItem>\n </mx:GridRow>\n <mx:GridRow>\n <mx:GridItem styleName=\"left\">\n <mx:Label text=\"Label\"/>\n </mx:GridItem>\n <mx:GridItem styleName=\"center\">\n <mx:TextInput/>\n </mx:GridItem>\n </mx:GridRow>\n </mx:Grid>\n</mx:Application>\n\n"
] | [
1
] | [] | [] | [
"apache_flex"
] | stackoverflow_0000032596_apache_flex.txt |
Q:
How do you build a ratings implementation?
We have need for a "rating" system in a project we are working on, similar to the one in SO. However, in ours there are multiple entities that need to be "tagged" with a vote up (only up, never down, like an increment). Sometimes we will need to show all of the entities in order of what is rated highest, regardless of entity type, basically mixing the result sets, I guess. What data structures / algorithms do you use to implement this so that it is flexible and still scalable?
A:
Since reddit's ranking algorithm rocks, it makes very much sense to have a look at it, if not copy it:
Given the time the entry was posted A and the time of 7:46:43 a.m. December 8, 2005 B we have ts as their difference in seconds:
ts = A - B
and x as the difference between the number of up votes U and the number of down votes D:
x = U - D
Where
y = 1 if x > 0
y = 0 if x = 0
y = -1 if x < 0
and z as the maximal value of the absolute value of x and 1:
z = |x| if |x| >= 1
z = 1 if |x| < 1
we have the rating as a function ƒ(ts, y, z):
ƒ(ts, y, z) = log10 z + (y • ts)/45000
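For reference, a direct C# translation of the quoted formula. Only the epoch and the 45000 divisor come from the algorithm above; the method and parameter names are invented for this sketch.

using System;

public static class HotRanking
{
    // Epoch quoted above: 7:46:43 a.m. December 8, 2005 (treated as UTC here, an assumption).
    private static readonly DateTime Epoch = new DateTime(2005, 12, 8, 7, 46, 43, DateTimeKind.Utc);

    // f(ts, y, z) = log10(z) + (y * ts) / 45000
    public static double Score(DateTime postedUtc, int upVotes, int downVotes)
    {
        double ts = (postedUtc - Epoch).TotalSeconds;
        int x = upVotes - downVotes;
        int y = Math.Sign(x);                  // 1, 0 or -1
        double z = Math.Max(Math.Abs(x), 1);   // never below 1, so the log stays defined
        return Math.Log10(z) + (y * ts) / 45000.0;
    }
}

Since the question only ever increments votes, downVotes would simply stay at 0, and mixing entity types is then just a matter of ordering by the stored score.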
| How do you build a ratings implementation? | We have need for a "rating" system in a project we are working on, similar to the one in SO. However, in ours there are multiple entities that need to be "tagged" with a vote up (only up, never down, like an increment). Sometimes we will need to show all of the entities in order of what is rated highest, regardless of entity type, basically mixing the result sets, I guess. What data structures / algorithms do you use to implement this so that is flexible and still scalable?
| [
"Since reddit's ranking algorithm rocks, it makes very much sense to have a look at it, if not copy it:\n\nGiven the time the entry was posted A and the time of 7:46:43 a.m. December 8, 2005 B we have ts as their difference in seconds:\nts = A - B\nand x as the difference between the number of up votes U and the number of down votes D:\nx = U - D\nWhere\ny = 1 if x > 0\ny = 0 if x = 0\ny = -1 if x < 0\n\nand z as the maximal value of the absolute value of x and 1:\nz = |x| if |x| >= 1\nz = 1 if |x| < 1\n\nwe have the rating as a function ƒ(ts, y, z):\nƒ(ts, y, z) = log10 z + (y • ts)/45000\n\n"
] | [
6
] | [] | [] | [
"algorithm",
"architecture",
"data_structures",
"database_design"
] | stackoverflow_0000034781_algorithm_architecture_data_structures_database_design.txt |
Q:
How do I implement Search Functionality in a website?
I want to implement search functionality for a website (assume it is similar to SO). I don't want to use Google search or stuff like that.
My question is:
How do I implement this?
There are two methods I am aware of:
Search all the databases in the application when the user gives his query.
Index all the data I have and store it somewhere else and query from there (like what Google does).
Can anyone tell me which way to go? What are the pros and cons?
Better, are there any better ways to do this?
A:
Use lucene,
http://lucene.apache.org/java/docs/
Apache Lucene is a high-performance, full-featured text search engine library written entirely in Java. It is a technology suitable for nearly any application that requires full-text search, especially cross-platform.
It is available in java and .net. It is also in available in php in the form of a zend framework module.
Lucene does what you wanted(indexing of the searched items), you have to keep track of a lucene index but it is much better than doing a database search in terms of performance. BTW, SO search is powered by lucene. :D
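To see what the "index it up front" option from the question looks like in principle, here is a toy inverted index in C#. This is not the Lucene API; every name in it is invented for the sketch, and a real deployment would use Lucene's analyzers, ranking and persistence instead.

using System;
using System.Collections.Generic;
using System.Linq;

public class TinyIndex
{
    // term -> ids of the documents that contain the term
    private readonly Dictionary<string, HashSet<int>> postings =
        new Dictionary<string, HashSet<int>>(StringComparer.OrdinalIgnoreCase);

    public void Add(int docId, string text)
    {
        foreach (var term in Tokenize(text))
        {
            HashSet<int> docs;
            if (!postings.TryGetValue(term, out docs))
                postings[term] = docs = new HashSet<int>();
            docs.Add(docId);
        }
    }

    // AND semantics: return documents containing every query term.
    public IEnumerable<int> Search(string query)
    {
        var terms = Tokenize(query).Distinct(StringComparer.OrdinalIgnoreCase).ToList();
        if (terms.Count == 0)
            return Enumerable.Empty<int>();

        IEnumerable<int> result = Lookup(terms[0]);
        foreach (var term in terms.Skip(1))
            result = result.Intersect(Lookup(term));
        return result;
    }

    private IEnumerable<int> Lookup(string term)
    {
        HashSet<int> docs;
        return postings.TryGetValue(term, out docs) ? (IEnumerable<int>)docs : Enumerable.Empty<int>();
    }

    private static IEnumerable<string> Tokenize(string text)
    {
        return text.Split(new[] { ' ', '\t', '\r', '\n', '.', ',', ';', ':', '!', '?' },
                          StringSplitOptions.RemoveEmptyEntries);
    }
}

With this, index.Add(1, "full text search engine") followed by index.Search("text engine") yields document 1. Lucene does the same job with real tokenization, scoring and on-disk storage, which is why it is the usual recommendation once the data set grows.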
A:
It depends on how comprehensive your web site is and how much you want to do yourself.
If you are running a small website without further possibilities to add a custom search, let Google do the work (maybe add a sitemap) and use the Google custom search.
If you run a medium site with an sql engine use the search features of your sql engine.
If you run some heavier software stack like J2EE or .Net use Lucene, a great, powerful search engine or its .Net clone lucene.Net
If you want to abstract your search from your application and be able to query it in a language neutral way with XML/HTTP and JSON APIs, have a look at solr. Solr runs lucene in the background, but adds a nice web interface to it.
A:
You might want to have a look at xapian and the omega front end. It's essentially a toolkit on which you can build search functionality.
A:
The best way to approach this will depend on how you construct your pages.
If they're frequently composed from a lot of different records (as I imagine stack overflow pages are), the indexing approach is likely to give better results unless you put a lot of work into effectively reconstructing the pages on the database side.
The disadvantage you have with the indexing approach is the turn around time. There are workarounds (like the Google's sitemap stuff), but they're also complex to get right.
If you go with database path, also be aware that modern search engine systems function much better if they have link data to process, so finding a system which can understand links between 'pages' in the database will have a positive effect.
A:
If you are on the Microsoft platform you could use the Indexing Service. This integrates very easily with IIS websites.
It has all the basic features like full text search, ranking, the ability to exclude and include certain file types, and you can add your own meta information as well via meta tags in the HTML pages.
Do a google and you'll find tons!
A:
This is somewhat orthogonal to your question, but I highly recommend the idea of a RESTful search. That is, to perform a search that has never been performed, the website POSTs a query to /searches/. To re-run a search, the website GETs /searches/{some id}
There are some good documents to be found regarding this, for example here.
(That said, I like indexing where possible, though it is an optimization, and thus can be premature.)
| How do I implement Search Functionality in a website? | I want to implement search functionality for a website (assume it is similar to SO). I don't want to use Google search or stuff like that.
My question is:
How do I implement this?
There are two methods I am aware of:
Search all the databases in the application when the user gives his query.
Index all the data I have and store it somewhere else and query from there (like what Google does).
Can anyone tell me which way to go? What are the pros and cons?
Better, are there any better ways to do this?
| [
"Use lucene,\nhttp://lucene.apache.org/java/docs/\n\nApache Lucene is a high-performance, full-featured text search engine library written entirely in Java. It is a technology suitable for nearly any application that requires full-text search, especially cross-platform.\n\nIt is available in java and .net. It is also in available in php in the form of a zend framework module.\nLucene does what you wanted(indexing of the searched items), you have to keep track of a lucene index but it is much better than doing a database search in terms of performance. BTW, SO search is powered by lucene. :D\n",
"It depends on how comprehensive your web site is and how much you want to do yourself.\nIf you are running a a small website without further possibilities to add a custom search, let google do the work (maybe add a sitemap) and use the google custom search.\nIf you run a medium site with an sql engine use the search features of your sql engine.\nIf you run some heavier software stack like J2EE or .Net use Lucene, a great, powerful search engine or its .Net clone lucene.Net\nIf you want to abstract your search from your application and be able to query it in a language neutral way with XML/HTTP and JSON APIs, have a look at solr. Solr runs lucene in the background, but adds a nice web interface to it.\n",
"You might want to have a look at xapian and the omega front end. It's essentially a toolkit on which you can build search functionality.\n",
"The best way to approach this will depend on how you construct your pages.\nIf they're frequently composed from a lot of different records (as I imagine stack overflow pages are), the indexing approach is likely to give better results unless you put a lot of work into effectively reconstructing the pages on the database side.\nThe disadvantage you have with the indexing approach is the turn around time. There are workarounds (like the Google's sitemap stuff), but they're also complex to get right.\nIf you go with database path, also be aware that modern search engine systems function much better if they have link data to process, so finding a system which can understand links between 'pages' in the database will have a positive effect.\n",
"If you are on Microsoft plattform you could use the Indexing service. This integrates very easliy with IIS websites. \nIt has all the basic features like full text search, ranking, exlcude and include certain files types and you can add your own meta information as well via meta tags in the html pages.\nDo a google and you'll find tons!\n",
"This is somewhat orthogonal to your question, but I highly recommend the idea of a RESTful search. That is, to perform a search that has never been performed, the website POSTs a query to /searches/. To re-run a search, the website GETs /searches/{some id}\nThere are some good documents to be found regarding this, for example here.\n(That said, I like indexing where possible, though it is an optimization, and thus can be premature.)\n"
] | [
40,
36,
4,
1,
1,
0
] | [
"If you application uses the Java EE stack and you are using Hibernate you can use the Compass Framework maintain a searchable index of your database. The Compass Framework uses Lucene under the hood.\nThe only catch is that you cannot replicate your search index. So you need to use a clustered database to hold the index tables or use the newer grid based index storage mechanisms that have been added to the Compass Framework 2.x.\n"
] | [
-2
] | [
"search"
] | stackoverflow_0000034314_search.txt |
Q:
Convert Web.config from .NET 2.0 to 3.5
What is the minimum I need to add to a .NET 2.0 WebSite's web.config to make it .NET 3.5?
Visual Studio adds all the config sections and script handlers, but if you aren't using those are they are really necessary?
Is there a command line tool to "upgrade" a .NET 2.0 web.config to 3.5?
A:
There is a good description of the 3.5 web.config available here:
https://web.archive.org/web/20211020153237/https://www.4guysfromrolla.com/articles/121207-1.aspx
The assemblies and config sections are important because they tell the runtime to use the new 3.5 dlls instead of the 2.0 dlls
The codedom section tells the compiler to use 3.5.
If you're not using ASP.Net Ajax you can probably skip the rest. I've never tested that though.
A:
I don't think either of these answers are definitive. The 4guysfromrolla reference is helpful.
Deploying .NET 3.5 to 100+ sites will be a pain. You can't just upgrade the server to the new framework, you have to upgrade the web.config of each site. As far as I can tell, there is no command line tool to do it.
A:
If you want to upgrade every site on a server you could probably make changes to the machine.config
A:
It depends on which features you want to include. Most of the 3.5 ASP.NET extensions are optional. You will want to include the assembly for System.Core and System.Xml.Linq. You will also need to add compiler support for C# 3.0 if you plan to use that in your code behind. If you're deploying to IIS 7 there are HTTP handlers for the ASP.NET extensions and script modules.
| Convert Web.config from .NET 2.0 to 3.5 | What is the minimum I need to add to a .NET 2.0 WebSite's web.config to make it .NET 3.5?
Visual Studio adds all the config sections and script handlers, but if you aren't using those are they are really necessary?
Is there a command line tool to "upgrade" a .NET 2.0 web.config to 3.5?
| [
"There is a good description of the 3.5 web.config available here:\nhttps://web.archive.org/web/20211020153237/https://www.4guysfromrolla.com/articles/121207-1.aspx\nThe assemblies and config sections are important because they tell the runtime to use the new 3.5 dlls instead of the 2.0 dlls\nThe codedom section tells the compiler to use 3.5.\nIf you're not using ASP.Net Ajax you can probably skip the rest. I've never tested that though.\n",
"I don't think either of these answers are definitive. The 4guysfromrolla reference is helpful.\nDeploying .NET 3.5 to 100+ sites will be a pain. You can't just upgrade the server to the new framework, you have to upgrade the web.config of each site. As far as I can tell, there is no command line tool to do it.\n",
"If you want to upgrade every site on a server you could probably make changes to the machine.config\n",
"It depends on which features you want to include. Most of the 3.5 ASP.NET extensions are optional. You will want to include the assembly for System.Core and System.Xml.Linq. You will also to add compiler support for C# 3.0 if you plan to use that in your code behind. If you're deploying to IIS 7 there are HTTP handlers for the ASP.NET extensions and script modules.\n"
] | [
9,
1,
1,
0
] | [] | [] | [
".net",
"asp.net",
"configuration",
"migration"
] | stackoverflow_0000033949_.net_asp.net_configuration_migration.txt |
Q:
Wifi Management on XP (SP2/SP3)
Wifi support on Vista is fine, but Native Wifi on XP is half baked. NDIS 802.11 Wireless LAN Miniport Drivers only gets you part of the way there (e.g. network scanning). From what I've read (and tried), the 802.11 NDIS drivers on XP will not allow you to configure a wireless connection. You have to use the Native Wifi API in order to do this. (Please, correct me if I'm wrong here.) Applications like InSSIDer have helped me to understand the APIs, but InSSIDer is just a scanner and is not designed to configure Wifi networks.
So, the question is: where can I find some code examples (C# or C++) that deal with the configuration of Wifi networks on XP -- e.g. profile creation and connection management?
I should note that this is a XP Embedded application on a closed system where we can't use the built-in Wireless Zero Configuration (WZC). We have to build all Wifi management functionality into our .NET application.
Yes, I've Googled myself blue. It seems that someone should have a solution to this problem, but I can't find it. That's why I'm asking here.
Thanks.
A:
We use WZC on XP and Native WiFi on Vista, but here's the code which we use on Vista, FWIW.
Profile creation:
// open a handle to the service
if ((dwError = WlanOpenHandle(
WLAN_API_VERSION,
NULL, // reserved
&dwServiceVersion,
&hClient
)) != ERROR_SUCCESS)
{
hClient = NULL;
}
return dwError;
dwError=WlanSetProfile(hClient, &guid, 0, profile, NULL, TRUE, NULL, &reason_code);
Make a connection:
WLAN_CONNECTION_PARAMETERS conn;
conn.wlanConnectionMode=wlan_connection_mode_profile;
conn.strProfile=name;
conn.pDot11Ssid=NULL;
conn.pDesiredBssidList=NULL;
conn.dot11BssType=dot11_BSS_type_independent;
conn.dwFlags=NULL;
dwError = WlanConnect(hClient, &guid, &conn, NULL);
Check for connection:
BOOL ret=FALSE;
DWORD dwError;
DWORD size;
void *p=NULL;
WLAN_INTERFACE_STATE *ps;
dwError = WlanQueryInterface(hClient, &guid, wlan_intf_opcode_interface_state, NULL, &size, &p, NULL);
ps=(WLAN_INTERFACE_STATE *)p;
if(dwError!=0)
ret=FALSE;
else
if(*ps==wlan_interface_state_connected)
ret=TRUE;
if(p!=NULL) WlanFreeMemory(p);
return ret;
To keep connected to the network, just spawn a thread then keep checking for a connection, then re-connecting if need be.
EDIT: Man this markup stuff is lame. Takes me like 3 edits to get the farking thing right.
A:
Thanks for the feedback Nick. I've pretty much gotten the profile and connection management working. The trick is figuring out which parts of the Native Wifi API are not supported on XP. Fortunately, the Managed Wifi API has connect/disconnect notification events that do work on XP (NetworkChange also gives similar change events).
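For the notification half mentioned above, the NetworkChange events are plain BCL and do work on XP. A minimal console sketch, assuming you only need to react to connectivity changes (the Managed Wifi API wrapper exposes its own event types for the profile and connection side, so check that library's samples for those):

using System;
using System.Net.NetworkInformation;

static class ConnectivityWatcher
{
    static void Main()
    {
        // Raised when any interface (including the wireless one) gains or loses connectivity.
        NetworkChange.NetworkAvailabilityChanged += (sender, e) =>
        {
            if (e.IsAvailable)
                Console.WriteLine("Network available - connection (re)established.");
            else
                Console.WriteLine("Network lost - trigger the reconnect/profile logic here.");
        };

        // Raised when an address changes (useful to re-check which profile is actually connected).
        NetworkChange.NetworkAddressChanged += (sender, e) =>
            Console.WriteLine("Network address changed.");

        Console.WriteLine("Watching for connectivity changes; press Enter to quit.");
        Console.ReadLine();
    }
}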
| Wifi Management on XP (SP2/SP3) | Wifi support on Vista is fine, but Native Wifi on XP is half baked. NDIS 802.11 Wireless LAN Miniport Drivers only gets you part of the way there (e.g. network scanning). From what I've read (and tried), the 802.11 NDIS drivers on XP will not allow you to configure a wireless connection. You have to use the Native Wifi API in order to do this. (Please, correct me if I'm wrong here.) Applications like InSSIDer have helped me to understand the APIs, but InSSIDer is just a scanner and is not designed to configure Wifi networks.
So, the question is: where can I find some code examples (C# or C++) that deal with the configuration of Wifi networks on XP -- e.g. profile creation and connection management?
I should note that this is a XP Embedded application on a closed system where we can't use the built-in Wireless Zero Configuration (WZC). We have to build all Wifi management functionality into our .NET application.
Yes, I've Googled myself blue. It seems that someone should have a solution to this problem, but I can't find it. That's why I'm asking here.
Thanks.
| [
"We use WZC on XP and Native WiFi on Vista, but here's the code which we use on Vista, FWIW.\nProfile creation:\n// open a handle to the service\nif ((dwError = WlanOpenHandle(\n WLAN_API_VERSION,\n NULL, // reserved\n &dwServiceVersion,\n &hClient\n )) != ERROR_SUCCESS)\n{\nhClient = NULL;\n}\nreturn dwError;\ndwError=WlanSetProfile(hClient, &guid, 0, profile, NULL, TRUE, NULL, &reason_code);\n\nMake a connection:\n WLAN_CONNECTION_PARAMETERS conn;\n\n conn.wlanConnectionMode=wlan_connection_mode_profile;\n conn.strProfile=name;\n conn.pDot11Ssid=NULL;\n conn.pDesiredBssidList=NULL;\n conn.dot11BssType=dot11_BSS_type_independent;\n conn.dwFlags=NULL;\n\n dwError = WlanConnect(hClient, &guid, &conn, NULL);\n\nCheck for connection:\n BOOL ret=FALSE;\n DWORD dwError;\n DWORD size;\n void *p=NULL;\n WLAN_INTERFACE_STATE *ps;\n\n dwError = WlanQueryInterface(hClient, &guid, wlan_intf_opcode_interface_state, NULL, &size, &p, NULL);\n ps=(WLAN_INTERFACE_STATE *)p;\n if(dwError!=0) \n ret=FALSE;\n else\n if(*ps==wlan_interface_state_connected) \n ret=TRUE;\n if(p!=NULL) WlanFreeMemory(p);\n return ret;\n\nTo keep connected to the network, just spawn a thread then keep checking for a connection, then re-connecting if need be.\nEDIT: Man this markup stuff is lame. Takes me like 3 edits to get the farking thing right.\n",
"Thanks for the feedback Nick. I've pretty much gotten the profile and connection management working. The trick is figuring out which parts of the Native Wifi API are not supported on XP. Fortunately, the Managed Wifi API has connect/disconnect notification events that do work on XP (NetworkChange also gives similar change events).\n"
] | [
1,
1
] | [] | [] | [
"networking",
"wifi",
"windows_xp",
"wireless"
] | stackoverflow_0000031673_networking_wifi_windows_xp_wireless.txt |
Q:
Email Delivery Question
This question comes on the heels of the question asked here.
The email that comes from our web server comes from an IP address that is different than that for the Exchange server. Is this okay if the SPF and Domain keys are setup properly?
A:
Short answer: Yes
A:
It should just fine. However some spam filters will do a reverse lookup on the originating IP address and see if it's assigned to the domain name the email claims to be from, and some may check to see if the IP is an actual MX for the domain.
So the downside is that some recipients may never get the email, and you may not know about it for a long time. I'd suggest routing your mail through an established MX rather than having a webserver do it directly (there are some security implications there too).
| Email Delivery Question | This question comes on the heels of the question asked here.
The email that comes from our web server comes from an IP address that is different than that for the Exchange server. Is this okay if the SPF and Domain keys are setup properly?
| [
"Short answer: Yes\n",
"It should just fine. However some spam filters will do a reverse lookup on the originating IP address and see if it's assigned to the domain name the email claims to be from, and some may check to see if the IP is an actual MX for the domain.\nSo the downside is that some recipients may never get the email, and you may not know about it for a long time. I'd suggest routing your mail through an established MX rather than having a webserver do it directly (there are some security implications there too).\n"
] | [
3,
3
] | [] | [] | [
"email",
"email_spam"
] | stackoverflow_0000032803_email_email_spam.txt |
Q:
Class design decision
I have a little dilemma that maybe you can help me sort out.
I've been working today in modifying ASP.NET's Membership to add a level of indirection. Basically, ASP.NET's Membership supports Users and Roles, leaving all authorization rules to be based on whether a user belongs to a Role or not.
What I need to do is add the concept of Function, where a user will belong to a role (or roles) and the role will have one or more functions associated with them, allowing us to authorize a specific action based on if the user belongs to a role which has a function assigned.
Having said that, my problem has nothing to do with it, it's a generic class design issue.
I want to provide an abstract method in my base RoleProvider class to create the function (and persist it), but I want to make it optional to save a description for that function, so I need to create my CreateFunction method with an overload, one signature accepting the name, and the other accepting the name and the description.
I can think of the following scenarios:
Create both signatures with the abstract modifier. This has the problem that the implementer may not respect the best practice that says that one overload should call the other one with the parameters normalized, and the logic should only be in the final one (the one with all the parameters). Besides, it's not nice to require both methods to be implemented by the developer.
Create the first like virtual, and the second like abstract. Call the second from the first, allow the implementer to override the behavior. It has the same problem, the implementer could make "bad decisions" when overriding it.
Same as before, but do not allow the first to be overridden (remove the virtual modifier). The problem here is that the implementer has to be aware that the method could be called with a null description and has to handle that situation.
I think the best option is the third one...
How is this scenario handled in general? When you design an abstract class and it contains overloaded methods. It isn't that uncommon I think...
A:
I feel the best combination of DRYness and forcing the contract is as follows (in pseudocode):
class Base {
public final constructor(name) {
constructor(name, null)
end
public abstract constructor(name, description);
}
or, alternatively:
class Base {
public abstract constructor(name);
public final constructor(name, description) {
constructor(name)
this.set_description(description)
}
private final set_description(description) {
...
}
}
There's a rule in Java that supports this decision: "never call non-final methods from a constructor."
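Translated to the CreateFunction case from the question, a minimal C# sketch of that same idea (a non-virtual convenience overload delegating to a single abstract overload) might look like this; the class and method names are illustrative, not the actual provider API:
public abstract class FunctionRoleProvider
{
    // Convenience overload: deliberately not virtual, so implementers
    // cannot bypass the delegation to the "full" overload.
    public void CreateFunction(string name)
    {
        CreateFunction(name, null);
    }

    // The single abstract member implementers must provide.
    // Implementations have to be prepared for a null description.
    public abstract void CreateFunction(string name, string description);
}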
A:
To answer the first part of your post, check out AzMan (Authorization Manager), which, incidentally, is built into windows. It has the capability to specify operations which can be recombined into roles or assigned directly to users.
Check out
To answer the second part of your question, I wouldn't use an Abstract class. Instead just provide the functionality in the constructor and be done with it. It appears you want the specified behavior, and you don't want it to change. Why force descendants to provide the implementation?
| Class design decision | I have a little dilemma that maybe you can help me sort out.
I've been working today in modifying ASP.NET's Membership to add a level of indirection. Basically, ASP.NET's Membership supports Users and Roles, leaving all authorization rules to be based on whether a user belongs to a Role or not.
What I need to do is add the concept of Function, where a user will belong to a role (or roles) and the role will have one or more functions associated with them, allowing us to authorize a specific action based on if the user belongs to a role which has a function assigned.
Having said that, my problem has nothing to do with it, it's a generic class design issue.
I want to provide an abstract method in my base RoleProvider class to create the function (and persist it), but I want to make it optional to save a description for that function, so I need to create my CreateFunction method with an overload, one signature accepting the name, and the other accepting the name and the description.
I can think of the following scenarios:
Create both signatures with the abstract modifier. This has the problem that the implementer may not respect the best practice that says that one overload should call the other one with the parameters normalized, and the logic should only be in the final one (the one with all the parameters). Besides, it's not nice to require both methods to be implemented by the developer.
Create the first like virtual, and the second like abstract. Call the second from the first, allow the implementer to override the behavior. It has the same problem, the implementer could make "bad decisions" when overriding it.
Same as before, but do not allow the first to be overriden (remove the virtual modifier). The problem here is that the implementer has to be aware that the method could be called with a null description and has to handle that situation.
I think the best option is the third one...
How is this scenario handled in general? When you design an abstract class and it contains overloaded methods. It isn't that uncommon I think...
| [
"I feel the best combination of DRYness and forcing the contract is as follows (in pseudocode):\nclass Base {\n public final constructor(name) {\n constructor(name, null)\n end\n\n public abstract constructor(name, description);\n}\n\nor, alternatively:\nclass Base {\n public abstract constructor(name);\n\n public final constructor(name, description) {\n constructor(name)\n this.set_description(description)\n }\n\n private final set_description(description) {\n ...\n }\n}\n\nThere's a rule in Java that supports this decision: \"never call non-final methods from a constructor.\"\n",
"To answer the first part of your post, check out AzMan (Authorization Manager), which, incidentally, is built into windows. It has the capability to specify operations which can be recombined into roles or assigned directly to users.\nCheck out\nTo answer the second part of your question, I wouldn't use an Abstract class. Instead just provide the functionality in the constructor and be done with it. It appeasr you want the specified behavior, and you don't want it to change. Why force descendents to provide the implementation.\n"
] | [
1,
0
] | [] | [] | [
"asp.net_membership",
"inheritance",
"oop"
] | stackoverflow_0000034806_asp.net_membership_inheritance_oop.txt |
Q:
How do I replicate content on a web farm
We have a Windows Server Web Edition 2003 Web Farm.
What can we use that handles replication across the servers for:
Content & IIS Configuration (App Pools, Virtual Directories, etc...)
We will be moving to Windows 2008 in the near future, so I guess what options are there on Windows 2008 as well.
A:
I'd look into Windows Distributed File System. It should be supported by both Windows Server 2003 & 2008.
A:
Distributed File System (DFS) is good for content, especially if each server (or a number of servers) host a replica synced up with File Replication Service (FRS). So if you've got two servers, each has a complete replica, so one going down doesn't mean the site goes down.
If all servers in your 'cluster' will host a replica, the home directory in IIS can be configured to go against the local drive (e.g., D:). If you have more servers than replicas, then you should use the DFS mount point (\\domainname\dfsmountpointname).
| How do I replicate content on a web farm | We have a Windows Server Web Edition 2003 Web Farm.
What can we use that handles replication across the servers for:
Content & IIS Configuration (App Pools, Virtual Directories, etc...)
We will be moving to Windows 2008 in the near future, so I guess what options are there on Windows 2008 as well.
| [
"I'd look into Windows Distributed File System. It should be supported by both Windows Server 2003 & 2008.\n",
"Distributed File System (DFS) is good for content, especially if each server (or a number of servers) host a replica synced up with File Replication Service (FRS). So if you've got two servers, each has a complete replica, so one going down doesn't mean the site goes down.\nIf all servers in your 'cluster' will host a replica, the home directory in IIS can be configured to go against the local drive (e.g., D:). If you have more servers than replicas, then you should use the DFS mount point (\\domainname\\dfsmountpointname).\n"
] | [
1,
1
] | [] | [] | [
"iis",
"replication",
"webserver"
] | stackoverflow_0000030379_iis_replication_webserver.txt |
Q:
Print out the keys and Data of a Hashtable in C# .NET 1.1
I need to debug some old code that uses a Hashtable to store responses from various threads.
I need a way to go through the entire Hashtable and print out both the keys and the data in the Hashtable.
How can this be done?
A:
foreach(string key in hashTable.Keys)
{
Console.WriteLine(String.Format("{0}: {1}", key, hashTable[key]));
}
A:
I like:
foreach(DictionaryEntry entry in hashtable)
{
Console.WriteLine(entry.Key + ":" + entry.Value);
}
A:
public static void PrintKeysAndValues( Hashtable myList ) {
IDictionaryEnumerator myEnumerator = myList.GetEnumerator();
Console.WriteLine( "\t-KEY-\t-VALUE-" );
while ( myEnumerator.MoveNext() )
Console.WriteLine("\t{0}:\t{1}", myEnumerator.Key, myEnumerator.Value);
Console.WriteLine();
}
from: http://msdn.microsoft.com/en-us/library/system.collections.hashtable(VS.71).aspx
A:
This should work for pretty much every version of the framework...
foreach (string HashKey in TargetHash.Keys)
{
Console.WriteLine("Key: " + HashKey + " Value: " + TargetHash[HashKey]);
}
The trick is that you can get a list/collection of the keys (or the values) of a given hash to iterate through.
EDIT: Wow, you try to pretty up your code a little and next thing ya know there's 5 answers... 8^D
A:
I also found that this will work too.
System.Collections.IDictionaryEnumerator enumerator = hashTable.GetEnumerator();
while (enumerator.MoveNext())
{
string key = enumerator.Key.ToString();
string value = enumerator.Value.ToString();
    Console.WriteLine("Key = '{0}'; Value = '{1}'", key, value);
}
Thanks for the help.
| Print out the keys and Data of a Hashtable in C# .NET 1.1 | I need debug some old code that uses a Hashtable to store response from various threads.
I need a way to go through the entire Hashtable and print out both keys and the data in the Hastable.
How can this be done?
| [
"foreach(string key in hashTable.Keys)\n{\n Console.WriteLine(String.Format(\"{0}: {1}\", key, hashTable[key]));\n}\n\n",
"I like:\nforeach(DictionaryEntry entry in hashtable)\n{\n Console.WriteLine(entry.Key + \":\" + entry.Value);\n}\n\n",
"\n public static void PrintKeysAndValues( Hashtable myList ) {\n IDictionaryEnumerator myEnumerator = myList.GetEnumerator();\n Console.WriteLine( \"\\t-KEY-\\t-VALUE-\" );\n while ( myEnumerator.MoveNext() )\n Console.WriteLine(\"\\t{0}:\\t{1}\", myEnumerator.Key, myEnumerator.Value);\n Console.WriteLine();\n }\n\nfrom: http://msdn.microsoft.com/en-us/library/system.collections.hashtable(VS.71).aspx\n",
"This should work for pretty much every version of the framework...\nforeach (string HashKey in TargetHash.Keys)\n{\n Console.WriteLine(\"Key: \" + HashKey + \" Value: \" + TargetHash[HashKey]);\n}\n\nThe trick is that you can get a list/collection of the keys (or the values) of a given hash to iterate through.\nEDIT: Wow, you try to pretty your code a little and next thing ya know there 5 answers... 8^D\n",
"I also found that this will work too.\nSystem.Collections.IDictionaryEnumerator enumerator = hashTable.GetEnumerator();\n\nwhile (enumerator.MoveNext())\n{\n string key = enumerator.Key.ToString();\n string value = enumerator.Value.ToString();\n\n Console.WriteLine((\"Key = '{0}'; Value = '{0}'\", key, value);\n}\n\nThanks for the help.\n"
] | [
22,
9,
3,
1,
1
] | [] | [] | [
".net_1.1",
"c#",
"hashtable"
] | stackoverflow_0000034879_.net_1.1_c#_hashtable.txt |
Q:
Finding your own number in a box
100 (or some even number 2N :-) ) prisoners are in a room A. They are numbered from 1 to 100.
One by one (from prisoner #1 to prisoner #100, in order), they will be let into a room B in which 100 boxes (numbered from 1 to 100) await them. Inside the (closed) boxes are numbers from 1 to 100 (the numbers inside the boxes are randomly permuted!).
Once inside room B, each prisoner gets to open 50 boxes (he chooses which one he opens). If he finds the number that was assigned to him in one of these 50 boxes, the prisoner gets to walk into a room C and all boxes are closed again before the next one walks into room B from room A. Otherwise, all prisoners (in rooms A, B and C) gets killed.
Before entering room B, the prisoners can agree on a strategy (algorithm). There is no way to communicate between rooms (and no message can be left in room B!).
Is there an algorithm that maximizes the probability that all prisoners survive? What probability does that algorithm achieve?
Notes:
Doing things randomly (what you call 'no strategy') indeed gives a probability of 1/2 for each prisoner, but then the probability of all of them surviving is 1/2^100 (which is quite low). One can do much better!
The prisoners are not allowed to reorder the boxes!
All prisoners are killed the first time a prisoner fails to find his number. And no communication is possible.
Hint: one can save more than 30 prisoners on average, which is much more than (50/100) * (50/99) * [...] * 1
A:
This puzzle is explained at http://www.math.princeton.edu/~wwong/blog/blog200608191813.shtml and that person does a much better job of explaining the problem.
The "all prisoners are killed" statement is wrong.
The "you can save 30+ on average" is also wrong; the article says that 30% of the time you can save 100% of the prisoners.
A:
I find a low tech solution to this type of problem is always the best way to go.
first we make some assumptions about the situation
The prisoners are not all programmers or mathematicians
They don't want to die
The guards are well armed
So with a 0.005% chance that they will see tomorrow, there is a very simple and low tech solution to this problem. RIOT
It's all about losses vs. potential gain. The chances are the prisoners far outnumber the guards, and by using each other as human shields (they are all dead men anyway if they don't), they can increase the chances that they will overpower a guard. Once they have his weapon their chances go up, helping them overpower more guards to get more firepower and further increase their survival rate. Once the guards realise what's happening, they will probably run for the hills and lock down the prison. This will give the media a heads-up and then it's a human rights issue.
A:
Implement a sorting algorithm and sort the boxes according to the numbers inside them.
First prisoner sorts 50 boxes, and the second prisoner sorts the other 50 and merges with the first one. (Note that the second prisoner can guess the values inside the first 50 boxes)
After the 2nd prisoner, all of the boxes will be in a sorted order !!!
Everybody else can open the boxes containing their numbers easily then.
A:
I don't know if this is allowed but the best approximation I can find is:
EDIT: Ok, I think this makes it. Of course I'm treating this as a computing problem; I don't think any prisoner will be able to perform this, although it's pretty straightforward even if you don't.
Find the first 50 primes; let's assume we hold them in an array called primes.
The first prisoner enters room B, opens the first box and finds the number m.
Wait primes[1]^m (that would be 3^m)
Open box 2 and read the number --> n
Wait (primes[2]^n - 1) * primes[1]^m, that would be (5^n - 1) * 3^m and the total time he has been waiting would be 3^m * 5^n
Repeat. After the first prisoner the total time for him would be:
3^m * 5^n * 7^p ... = X
Before the second prisoner enters the room, factorize X. You know beforehand the prime numbers that have been used, so the factorization is trivial. Doing so you obtain m, n, p, etc., so the second prisoner knows every box/number combination the previous prisoner used.
The probability of the first one getting everybody killed is 1/2; the second one will have 50 / (100 - n) (n being the number of attempts of the first one), the third one will have 50 / (100 - n - m) (if n + m = 100 then all positions are known) and so on.
Obviously the next prisoner must skip the already known boxes (except for the last choice if the box which contains his number is already known)
I don't know what the exact probability is, as it depends on how many choices they have to make, but I'd say it's pretty high.
EDIT: Rereading, if the prisoner does not have to stop when he obtains his number then the probability for the whole group is vastly improved, exactly 50%.
EDIT2: @OysterD see it this way. If the first prisoner can open 50 boxes then the second one knows if his number is in any of those boxes. If it is, then he can open the other 49 (and by doing so learn the box/number combination of all 100 boxes) and finally open his own. So if the first prisoner succeeds then everyone succeeds. Remember that each prisoner provides a way for the others to know exactly the box/number combination for every box he opens.
A:
Maybe I'm not reading it right, but the question seems to be badly constructed or missing information.
If he finds the number that was
assigned to him in one of these 50
boxes, the prisoner gets to walk into
a room C and all boxes are closed
again before the next one walks into
room B from room A. Otherwise, all
prisoners (in rooms A, B and C) gets
killed.
Does the last sentence there mean that all prisoners are killed the first time a prisoner fails to find their number? If not, what happens to prisoners that don't find their number?
If no communication is possible, and whenever a prisoner enters room B it is always in an identical state then there is no possibility for a strategy.
The prisoners could say before they leave room A which number box they are going to open. But without subsequent prisoners knowing whether they were successful or not (assuming failure for one isn't failure for all), when the next prisoner enters room B they still have the same odds of picking their number as the previous prisoner (always 1:100).
If failure for one is failure for all, then by knowing which box the previous prisoners opened, and by dint of the fact that they haven't all been killed, each successive prisoner could reduce the odds of picking the wrong box by one box. i.e. 1:100 for the first prisoner, 1:99 for the second, down to 1:1 for the last.
A:
The prisoners could agree that prisoner 1 open boxes 1-50.
If they're all still alive, they agree that the next prisoner opens boxes 2-51 (the 2 is arbitrary, but it makes the rule simple to remember). His odds of surviving given that P1 survived are now 50/99. You want to eliminate opening a box when you know that the previous guy found his.
I don't know if that's optimal, but it's a lot better than random.
The probability of surviving that looks like
50/100 * 50/99 * 50/98 *. . .50/51 * 1
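For what it's worth, a quick C# sketch (purely illustrative) that just evaluates that product as written shows how small the resulting probability is:
using System;

class SurvivalOdds
{
    static void Main()
    {
        // Evaluate 50/100 * 50/99 * ... * 50/51, the product given above.
        double survival = 1.0;
        for (int denominator = 100; denominator >= 51; denominator--)
        {
            survival *= 50.0 / denominator;
        }
        Console.WriteLine(survival); // prints a vanishingly small number
    }
}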
A:
I think since no communication is possible, the best strategy would involve
distributing the probability of each prisoner as evenly as possible
Am I on the right path or not?
Information available for each prisoner:
The number of surviving prisoners, so if you have an efficient box-picking system that utilizes the order in which prisoners enter room B, then a strategy is definitely possible
Which boxes the earlier prisoners picked. Of course, no communication is possible during the run and it wouldn't be possible to remember any permutation of 100 box picks. But you could use this information to work out a system before the run starts.
My take:
Draw a table of numbers with 2 columns; the first column contains the box number (from box #1 to box #100). Each prisoner then gets to pick 50 boxes, and whatever box they pick, they should put 1 mark on the corresponding row in the second column.
All prisoners are however required to never pick any box twice. And no box may be marked more than 50 times. Some prisoners may get fewer options than others since some boxes may get filled to 50 marks first.
When a prisoner is moved to room B he/she opens whatever boxes he has marked on.
A:
Same concept.
Another take:
Write down a list of the first 100 binary numbers which have fifty 1s and fifty 0s.
Sort them from lowest to highest.
Prisoner #1 gets the first number, prisoner #2 gets the second, prisoner #3 gets the third and so on...
Each prisoner remembers his/her binary number.
When any prisoner is moved to room B, he/she then match the binary digits of the number he remembered with each of the box, the highest bit is matched with the leftmost box, the next highest bit is matched with the second leftmost box ... the lowest bit is matched with the rightmost box.
He/she opens whatever boxes matched with 1 and leave closed boxes matched with 0.
This would minimize the probability because early prisoners will get digits that are different from the later prisoners, and prisoners whose numbers are close together would get digits close together. This doesn't guarantee survivability, but if the early prisoners do survive, chances are the later prisoners would have a higher probability of surviving as well.
I haven't thought out the exact figures and rationale though, but this is one quick solution I can think of at the moment.
A:
If all prisoners are killed when someone fails to find their number then you either save 100 or 0. There is no way to save 30 people.
A:
There aren't any time limits in the question so I suggest that prisoners should decide to take 1 hour per box and open them in the order presented. If the second prisoner is allowed into the room after 2 hours then he knows that the first prisoner found his own number in box 2. Therefore he knows to skip box 2 in his sequence and opens boxes 1, 3, 4...51
First prisoners odds on losing are 50/100
Give that the first prisoner survived then the second prisoners chance of winning are 50/99
So the answer appears to be ((50^51) * 49!) / 100!
which according to google makes 2.89*10^-9
which is pretty much nil
So even if the prisoners knew the boxes the previously lucky ones found their number in there'd be no hope
| Finding your own number in a box | 100 (or some even number 2N :-) ) prisoners are in a room A. They are numbered from 1 to 100.
One by one (from prisoner #1 to prisoner #100, in order), they will be let into a room B in which 100 boxes (numbered from 1 to 100) await them. Inside the (closed) boxes are numbers from 1 to 100 (the numbers inside the boxes are randomly permuted!).
Once inside room B, each prisoner gets to open 50 boxes (he chooses which one he opens). If he finds the number that was assigned to him in one of these 50 boxes, the prisoner gets to walk into a room C and all boxes are closed again before the next one walks into room B from room A. Otherwise, all prisoners (in rooms A, B and C) gets killed.
Before entering room B, the prisoners can agree on a strategy (algorithm). There is no way to communicate between rooms (and no message can be left in room B!).
Is there an algorithm that maximizes the probability that all prisoners survive? What probability does that algorithm achieve?
Notes:
Doing things randomly (what you call 'no strategy') indeed gives a probability of 1/2 for each prisoner, but then the probability of all of them surviving is 1/2^100 (which is quite low). One can do much better!
The prisoners are not allowed to reorder the boxes!
All prisoners are killed the first time a prisoner fails to find his number. And no communication is possible.
Hint: one can save more than 30 prisoners on average, which is much more that (50/100) * (50/99) * [...] * 1
| [
"This puzzle is explained at http://www.math.princeton.edu/~wwong/blog/blog200608191813.shtml and that person does a much better job of explaining the problem.\nThe \"all prisoners are killed\" statement is wrong.\nThe \"you can save 30+ on average\" is also wrong, the article says that 30% of the time you can save 100% of the prisoners.\n",
"I find a low tech solution to this type of problem is always the best way to go.\nfirst we make some assumptions about the situation\n\nThe prisoners are not all programmers or mathematicians\nThey don't want to die\nThe guards are well armed\n\nSo with a 0.005% chance that they will see tomorrow, there is a very simple and low tech solution to this problem. RIOT \nits all about losses v potential gain, the chances are the prisoners far out number the guards, and using each other as human shields, as they are all dead men anyway if they don't, they can increase the chances they will over power a guard, once they have his weapon there chance goes up, helping them over power more guards to get more fire power to further increase there survival rate. once the guards realise what's happening, they will probably run for the hills and lock down the prison, this will give the media a heads up and then its a human rights issue.\n",
"Implement a sorting algorithm and sort the boxes according to the numbers inside them.\nFirst prisoner sorts 50 boxes, and the second prisoner sorts the other 50 and merges with the first one. (Note that the second prisoner can guess the values inside the first 50 boxes)\nAfter the 2nd prisoner, all of the boxes will be in a sorted order !!!\nEverybody else can open the boxes containing their numbers easily then.\n",
"I don't know if this is allowed but the best approximation I can find is:\nEDIT: Ok, I think this makes it. Of course I'm treating this as a computing problem, I don't think any prisioner will be able to perform this, although is pretty straight forward if you don't.\nFind the first 50 primes, let's asume we hold them in an array called primes.\n\nThe first prissioner enters room B and opens the first box and finds the number m.\nWait primes[1]^m (that would be 3^m)\nOpen box 2 and read the number --> n\nWait (primes[2]^n - 1) * primes[1]^m, that would be (5^n - 1) * 3^m and the total time he has been waiting would be 3^n * 5^n\n\nRepeat. After the first prisioner the total time for him would be:\n3^m * 5^n * 7^p ... = X\nBefore the second prisioner enters the room factorize X. You know beforehand the prime numbers that have been used so the factorization is trivial. Doing so you obtain m, n, p, etc so the second prisioner knows every box/number combination the previous prisioner used.\nThe probability of the first one getting everybody killed is 1/2, the second one will have a 50 / (100 - n) (being n the numbers of attemps of the first one) the third one will have 50 / (100 - n - m) (if n + m = 100 then all positions are known) and so on.\nObviously the next prissioner must skip the already known boxes (except for the last choice if the box which contains his number is already known)\nI don't know what's the exact possibility as it dependes on how many choices they have to do but I'd say it's pretty high.\nEDIT: Rereading, if the prissioner does not have to stop when he obtains his number then the probability for the whole group is vastly improved, exactly 50%.\nEDIT2: @OysterD see it this way. If the first prisioner can open 50 boxes then the second one know if its number is in any of that boxes. If it is, then he can open other 49 (and by doing so learning the box/number comination of the 100 boxes) and finally open his one. So if the first prissioner succeds then everyone succeds. Remember that each prisioner provides a way for the other to know exactly the boxes/number combination for every box he opens.\n",
"Maybe I'm not reading it right, but the question seems to be badly constructed or missing information.\n\nIf he finds the number that was\n assigned to him in one of these 50\n boxes, the prisoner gets to walk into\n a room C and all boxes are closed\n again before the next one walks into\n room B from room A. Otherwise, all\n prisoners (in rooms A, B and C) gets\n killed.\n\nDoes the last sentence there mean that all prisoners are killed the first time a prisoner fails to find their number? If not, what happens to prisoners that don't find their number?\nIf no communication is possible, and whenever a prisoner enters room B it is always in an identical state then there is no possibility for a strategy.\nThe prisoners could could say before they leave room A which number box they are going to open. But without subsequent prisoners knowing whether they were successful or not (assuming failure for one isn't failure for all) when the next prisoner enters room B they still have the same odds of picking their number as the previous prisoner (always 1:100).\nIf failure for one is failure for all, then by knowing which box the previous prisoners opened, and by dint of the fact that they haven't all been killed, each successive prisoner could reduce the odds of picking the wrong box by one box. i.e. 1:100 for the first prisoner, 1:99 for the second, down to 1:1 for the last.\n",
"The prisoners could agree that prisoner 1 open boxes 1-50. \nIf they're all still alive, they agree that the next prisoner opens boxes 2-51. (the 2 is arbitrary, but simple to remember this rule) His odds of surviving given that P1 survived are now 50/99. You want to eliminate opening a box when you know that the previous guy found his. \nI don't know if that's optimal, but it's lot better than random. \nThe probability of surviving that looks like \n50/100 * 50/99 * 50/98 *. . .50/51 * 1\n",
"I think since no communication is possible, the best strategy would involve\n\ndistributing the probability of each prisoners as evenly as possible\n\nAm I on the right path or not?\nInformation available for each prisoner:\n\n\nThe number of survivied prisoners, so if you have an efficient box picking system that utilizes the order any prisoner enters room B, then a strategy is definitely possible\nWhich boxes the earlier prisoners picked. Of course, no communication is possible during the run and it wouldn't be possible to remember any 100s box picking permutation. But you could use this information to compute in a system before the run starts.\n\n\nMy take:\n\n\nDraw a table of numbers with 2 columns, the first column contains the box number (from box #1 to box#100). Each prisoner then gets to pick 50 boxes and whatever box they pick, they should put 1 mark on the corresponding row in the second column. \nAll prisoners are however required to never pick any box twice. And no box may be marked more than 50. Some prisoners may get less options than others since some box may get filled to 50 marks first.\nWhen a prisoner is moved to room B he/she opens whatever boxes he has marked on.\n\n\n",
"Same concept.\nAonther take:\n\n\nWrite down a list of the first 100 binary numbers which has fifty 1s and fifty 0s.\nSort them from lowest to highest.\nPrisoner #1 gets the first number, prisoner #2 gets the second, prisoner #3 gets the third and so on...\nEach prisoner remembers his/her binary number.\nWhen any prisoner is moved to room B, he/she then match the binary digits of the number he remembered with each of the box, the highest bit is matched with the leftmost box, the next highest bit is matched with the second leftmost box ... the lowest bit is matched with the rightmost box.\nHe/she opens whatever boxes matched with 1 and leave closed boxes matched with 0.\n\n\nThis would minimizes the probability because early prisoners will get digits that are different from the later prisoners and prisoners which has number close together would get digits close together. This doesn't guarantee survivability but if the early prisoners do survive, chances are the later prisoners would have a higher probability of surviving as well.\nI haven't thought out the exact figures and rationale though, but this is one quick solution I can think of at the moment.\n",
"If all prisoners are killed when someone fails to find their number then you either save 100 or 0. There is no way to save 30 people.\n",
"There aren't any time limits in the question so I suggest that prisoners should decide to take 1 hour per box and open them in the order presented. If the second prisoner is allowed into the room after 2 hours then he knows that the first prisoner found his own number in box 2. Therefore he knows to skip box 2 in his sequence and opens boxes 1, 3, 4...51\nFirst prisoners odds on losing are 50/100\nGive that the first prisoner survived then the second prisoners chance of winning are 50/99\nSo answer appears to be ((50 ^51)*49!)/100!\nwhich according to google makes 2.89*10^-9\nwhich is pretty much nil\nSo even if the prisoners knew the boxes the previously lucky ones found their number in there'd be no hope\n"
] | [
7,
3,
1,
1,
0,
0,
0,
0,
0,
0
] | [] | [] | [
"algorithm",
"puzzle"
] | stackoverflow_0000034345_algorithm_puzzle.txt |
Q:
How to convert complex filename into HTML link?
I have a directory with PDF files that I need to create an index for. It is a PHP page with a list of links:
<A HREF="path to file">filename</A>
The filenames can be complicated:
LVD 2-1133 - Ändring av dumpningslina (1984-11-20).pdf
What is the correct way to link to this file on a Linux/Apache server?
Is there a PHP function to do this conversion?
A:
You can use rawurlencode() to convert a string according to the RFC 1738 spec.
This function replaces all non-alphanumeric characters by their associated code.
The difference with urlencode() is that spaces are encoded as plus signs.
You'll probably want to use the last one.
This technique is called Percent or URL encoding. See Wikipedia for more details.
A:
The urlencode() function will convert spaces into plus signs (+), so it won't work. rawurlencode() does the trick. Thanks.
Be sure to convert each part of the path separately, otherwise path/file will be converted into path%2Ffile (which is what I missed).
A:
urlencode() should probably do what you want.
Edit: urlencode() works fine on Swedish characters.
<?php
echo urlencode("åäö");
?>
converts to:
%E5%E4%F6
A:
URL encoding. I think it's urlencode() in PHP.
A:
rawurlencode will encode "exotic" characters in a URL.
| How to convert complex filename into HTML link? | I have a directory with PDF files that I need to create an index for. It is a PHP page with a list of links:
<A HREF="path to file">filename</A>
The filenames can be complicated:
LVD 2-1133 - Ändring av dumpningslina (1984-11-20).pdf
What is the correct way to link to this file on a Linux/Apache server?
Is there a PHP function to do this conversion?
| [
"You can use rawurlencode() to convert a string according to the RFC 1738 spec.\nThis function replaces all non-alphanumeric characters by their associated code.\nThe difference with urlencode() is that spaces are encoded as plus signs.\nYou'll probably want to use the last one.\nThis technique is called Percent or URL encoding. See Wikipedia for more details.\n",
"The urlencode() function will convert spaces into plus signs (+), so it won't work. The rawurlencode does the trick. Thanks.\nBe sure to convert each part of the path separately, otherwise path/file will be converted into path%2Ffile. (which was what I missed)\n",
"urlencode() should probably do what you want.\nEdit: urlencode() works fine on swedish characters.\n\n <?php\n echo urlencode(\"åäö\"); \n ?>\n\nconverts to:\n\n %E5%E4%F6\n\n",
"URL encoding. I think it's urlencode() in PHP.\n",
"rawurlencode will encode \"exotic\" characters in a URL.\n"
] | [
3,
1,
0,
0,
0
] | [] | [] | [
"html",
"php"
] | stackoverflow_0000035037_html_php.txt |
Q:
Is there a built in way in .Net AJAX to manually serialize an object to a JSON string?
I've found ScriptingJsonSerializationSection but I'm not sure how to use it. I could write a function to convert the object to a JSON string manually, but since .Net can do it on the fly with the <System.Web.Services.WebMethod()> and <System.Web.Script.Services.ScriptMethod()> attributes, there must be a built-in way that I'm missing.
PS: using Asp.Net 2.0 and VB.Net - I put this in the tags but I think people missed it.
A:
This should do the trick
Dim jsonSerialiser As New System.Web.Script.Serialization.JavaScriptSerializer
Dim jsonString as String = jsonSerialiser.Serialize(yourObject)
A:
I think what you're looking for is this class:
System.ServiceModel.Web.DataContractJsonSerializer
Here's an example from Rick Strahl: DataContractJsonSerializer in .NET 3.5
A:
Since the JavaScriptSerializer class is technically being deprecated, I believe DataContractJsonSerializer is the preferable way to go if you're using 3.0+.
A:
Well, I am currently using the following extension methods to serialize and deserialize objects:
using System.Web.Script.Serialization;
public static string ToJSON(this object objectToSerialize)
{
JavaScriptSerializer jss = new JavaScriptSerializer();
return jss.Serialize(objectToSerialize);
}
/// <typeparam name="T">The type we are deserializing the JSON to.</typeparam>
public static T FromJSON<T>(this string json)
{
JavaScriptSerializer jss = new JavaScriptSerializer();
return jss.Deserialize<T>(json);
}
I use this quite a bit - be forewarned, this implementation is a bit naive (i.e. there are some potential problems with it, depending on what you are serializing and how you use it on the client, particularly with DateTimes).
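For illustration, a round-trip with those extension methods might look like this (the Person type is just a made-up example, and the methods are assumed to live in a static class, as extension methods require):
// Hypothetical DTO used only for this example.
public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public static class JsonExample
{
    public static void RoundTrip()
    {
        Person original = new Person { Name = "Alice", Age = 30 };
        string json = original.ToJSON();        // e.g. {"Name":"Alice","Age":30}
        Person copy = json.FromJSON<Person>();  // back to a Person instance
    }
}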
A:
In the System.Web.Extensions assembly, version 3.5.0.0, there's a JavaScriptSerializer class that should handle what you want.
A:
Try
System.Web.Script.Serialization.JavaScriptSerializer
or Check out JSON.org there is a whole list of libraries written to do exactly what you want.
| Is there a built in way in .Net AJAX to manually serialize an object to a JSON string? | I've found ScriptingJsonSerializationSection but I'm not sure how to use it. I could write a function to convert the object to a JSON string manually, but since .Net can do it on the fly with the <System.Web.Services.WebMethod()> and <System.Web.Script.Services.ScriptMethod()> attributes so there must be a built-in way that I'm missing.
PS: using Asp.Net 2.0 and VB.Net - I put this in the tags but I think people missed it.
| [
"This should do the trick\nDim jsonSerialiser As New System.Web.Script.Serialization.JavaScriptSerializer\nDim jsonString as String = jsonSerialiser.Serialize(yourObject)\n\n",
"I think what you're looking for is this class:\nSystem.ServiceModel.Web.DataContractJsonSerializer\nHere's an example from Rick Strahl: DataContractJsonSerializer in .NET 3.5\n",
"Since the JavaScriptSerializer class is technically being deprecated, I believe DataContractJsonSerializer is the preferable way to go if you're using 3.0+.\n",
"Well, I am currently using the following extension methods to serialize and deserialize objects:\nusing System.Web.Script.Serialization;\n\npublic static string ToJSON(this object objectToSerialize)\n{\n JavaScriptSerializer jss = new JavaScriptSerializer();\n return jss.Serialize(objectToSerialize);\n}\n\n/// <typeparam name=\"T\">The type we are deserializing the JSON to.</typeparam>\npublic static T FromJSON<T>(this string json)\n{\n JavaScriptSerializer jss = new JavaScriptSerializer();\n return jss.Deserialize<T>(json);\n}\n\nI use this quite a bit - be forewarned, this implementation is a bit naive (i.e. there are some potential problems with it, depending on what you are serializing and how you use it on the client, particularly with DateTimes).\n",
"In the System.Web.Extensions assembly, version 3.5.0.0, there's a JavaScriptSerializer class that should handle what you want.\n",
"Try\nSystem.Web.Script.Serialization.JavaScriptSerializer\n\nor Check out JSON.org there is a whole list of libraries written to do exactly what you want.\n"
] | [
11,
6,
4,
3,
2,
1
] | [] | [] | [
".net_2.0",
"asp.net",
"json",
"serialization",
"vb.net"
] | stackoverflow_0000035106_.net_2.0_asp.net_json_serialization_vb.net.txt |
Q:
What types of testing do you include in your build process?
I use TFS 2008. We run unit tests as part of our continuous integration build and integration tests nightly.
What other types of testing do you automate and include in your build process? what technologies do you use to do so?
I'm thinking about smoke tests, performance tests, load tests but don't know how realistic it is to integrate these with Team Build.
A:
First, we have check-in (smoke) tests that must run before code can be checked in. It's done automatically by running a job that runs the tests and then makes the check-in to source control upon successful test completion. Second, cruise control kicks off build and regression tests. The product is built then several sets of integration tests are run. The number of tests vary by where we are in the release cycle. More testing is added late in the cycle during ramp down. Cruise control takes all submissions within a certain time window (12 minutes) so your changes may be built and tested with a small number of others. Third, there's an automated nightly build and tests that are quite extensive. We have load or milestone points every 2 or 3 weeks. At a load point, all automated tests are run plus manual testing is done. Performance testing is also done for each milestone. Performance tests can be kicked off on request but the hardware available is limited so people have to queue up for performance tests. Usually people rely on the load performance tests unless they are making changes specifically to improve performance. Finally, stress tests are also done for each load. These tests are focussed on making sure the product has no memory leaks or anything else that prevents 24/7 running of the product as opposed to performance. All of this is done with ant, cruise control, and Python scripts.
A:
Integrating load testing into your build process is a bad idea; just do your normal unit testing to make sure that all your code works as expected. Load and performance testing should be done separately.
| What types of testing do you include in your build process? | I use TFS 2008. We run unit tests as part of our continuous integration build and integration tests nightly.
What other types of testing do you automate and include in your build process? what technologies do you use to do so?
I'm thinking about smoke tests, performance tests, load tests but don't know how realistic it is to integrate these with Team Build.
| [
"First, we have check-in (smoke) tests that must run before code can be checked in. It's done automatically by running a job that runs the tests and then makes the check-in to source control upon successful test completion. Second, cruise control kicks off build and regression tests. The product is built then several sets of integration tests are run. The number of tests vary by where we are in the release cycle. More testing is added late in the cycle during ramp down. Cruise control takes all submissions within a certain time window (12 minutes) so your changes may be built and tested with a small number of others. Third, there's an automated nightly build and tests that are quite extensive. We have load or milestone points every 2 or 3 weeks. At a load point, all automated tests are run plus manual testing is done. Performance testing is also done for each milestone. Performance tests can be kicked off on request but the hardware available is limited so people have to queue up for performance tests. Usually people rely on the load performance tests unless they are making changes specifically to improve performance. Finally, stress tests are also done for each load. These tests are focussed on making sure the product has no memory leaks or anything else that prevents 24/7 running of the product as opposed to performance. All of this is done with ant, cruise control, and Python scripts.\n",
"Integrating load testing during you build process is a bad idea, just do your normal unit testing to make sure that all your codes work as expected. Load and performance testing should be done separately.\n"
] | [
3,
1
] | [] | [] | [
"build_automation",
"testing"
] | stackoverflow_0000034304_build_automation_testing.txt |
Q:
How do I store information in my executable in .Net
I'd like to bind a configuration file to my executable. I'd like to do this by storing an MD5 hash of the file inside the executable. This should keep anyone but the executable from modifying the file.
Essentially if someone modifies this file outside of the program the program should fail to load it again.
EDIT: The program processes credit card information so being able to change the configuration in any way could be a potential security risk. This software will be distributed to a large number of clients. Ideally each client should have a configuration that is tied directly to the executable. This will hopefully keep a hacker from being able to get a fake configuration into place.
The configuration still needs to be editable though so compiling an individual copy for each customer is not an option.
It's important that this be dynamic. So that I can tie the hash to the configuration file as the configuration changes.
A:
A better solution is to store the MD5 in the configuration file. But instead of the MD5 being just of the configuration file, also include some secret "key" value, like a fixed guid, in the MD5.
write(MD5(SecretKey + ConfigFileText));
Then you simply remove that MD5 and rehash the file (including your secret key). If the MD5's are the same, then no-one modified it. This prevents someone from modifying it and re-applying the MD5 since they don't know your secret key.
Keep in mind this is a fairly weak solution (as is the one you are suggesting) as they could easily track into your program to find the key or where the MD5 is stored.
A better solution would be to use a public key system and sign the configuration file. Again that is weak since that would require the private key to be stored on their local machine. Pretty much anything that is contained on their local PC can be bypassed with enough effort.
If you REALLY want to store the information in your executable (which I would discourage) then you can just try appending it at the end of the EXE. That is usually safe. Modifying executable programs is virus like behavior and most operating system security will try to stop you too. If your program is in the Program Files directory, and your configuration file is in the Application Data directory, and the user is logged in as a non-administrator (in XP or Vista), then you will be unable to update the EXE.
Update: I don't care if you are using Asymmetric encryption, RSA or Quantum cryptography, if you are storing your keys on the user's computer (which you must do unless you route it all through a web service) then the user can find your keys, even if it means inspecting the registers on the CPU at run time! You are only buying yourself a moderate level of security, so stick with something that is simple. To prevent modification the solution I suggested is the best. To prevent reading then encrypt it, and if you are storing your key locally then use AES Rijndael.
Update: The FixedGUID / SecretKey could alternatively be generated at install time and stored somewhere "secret" in the registry. Or you could generate it every time you use it from hardware configuration. Then you are getting more complicated. How you want to do this to allow for moderate levels of hardware changes would be to take 6 different signatures, and hash your configuration file 6 times - once with each. Combine each one with a 2nd secret value, like the GUID mentioned above (either global or generated at install). Then when you check you verify each hash separately. As long as they have 3 out of 6 (or whatever your tolerance is) then you accept it. Next time you write it you hash it with the new hardware configuration. This allows them to slowly swap out hardware over time and get a whole new system. . . Maybe that is a weakness. It all comes down to your tolerance. There are variations based on tighter tolerances.
UPDATE: For a Credit Card system you might want to consider some real security. You should retain the services of a security and cryptography consultant. More information needs to be exchanged. They need to analyze your specific needs and risks.
Also, if you want security with .NET you need to first start with a really good .NET obfuscator (just Google it). A .NET assembly is way too easy to disassemble and get at the source code and read all your secrets.
Not to sound like a broken record, but anything that depends on the security of your user's system is fundamentally flawed from the beginning.
Out of pure curiosity, what's your reasoning for never wanting to load the file if it's been changed?
Why not just keep all of the configuration information compiled in the executable? Why bother with an external file at all?
Edit
I just read your edit about this being a credit card info program. That poses a very interesting challenge.
I would think, for that level of security, some sort of pretty major encryption would be necessary but I don't know anything about handling that sort of thing in such a way that the cryptographic secrets can't just be extracted from the executable.
Is authenticating against some sort of online source a possibility?
A:
I'd suggest you use Asymmetric Key Encryption to encrypt your configuration file, wherever it is stored, inside the executable or not.
If I remember correctly, RSA is one of the variants.
For the explanation of it, see Public-key cryptography on Wikipedia
Store the "reading" key in your executable and keep to yourself the "writing" key. So no one but you can modify the configuration.
This has the advantages of:
No-one can modify the configuration unless they have the "writing" key, because any modification will corrupt it entirely; even if they know the "reading" key it would take ages to compute the other key.
Modification guarantee.
It's not hard - there are plenty of libraries available these days. There're also a lot of key-generation programs that can generate really, really long keys.
Do some research on how to properly implement them, though.
| How do I store information in my executable in .Net | I'd like to bind a configuration file to my executable. I'd like to do this by storing an MD5 hash of the file inside the executable. This should keep anyone but the executable from modifying the file.
Essentially if someone modifies this file outside of the program the program should fail to load it again.
EDIT: The program processes credit card information so being able to change the configuration in any way could be a potential security risk. This software will be distributed to a large number of clients. Ideally client should have a configuration that is tied directly to the executable. This will hopefully keep a hacker from being able to get a fake configuration into place.
The configuration still needs to be editable though so compiling an individual copy for each customer is not an option.
It's important that this be dynamic. So that I can tie the hash to the configuration file as the configuration changes.
| [
"A better solution is to store the MD5 in the configuration file. But instead of the MD5 being just of the configuration file, also include some secret \"key\" value, like a fixed guid, in the MD5.\nwrite(MD5(SecretKey + ConfigFileText));\n\nThen you simply remove that MD5 and rehash the file (including your secret key). If the MD5's are the same, then no-one modified it. This prevents someone from modifying it and re-applying the MD5 since they don't know your secret key.\nKeep in mind this is a fairly weak solution (as is the one you are suggesting) as they could easily track into your program to find the key or where the MD5 is stored. \nA better solution would be to use a public key system and sign the configuration file. Again that is weak since that would require the private key to be stored on their local machine. Pretty much anything that is contained on their local PC can be bypassed with enough effort.\nIf you REALLY want to store the information in your executable (which I would discourage) then you can just try appending it at the end of the EXE. That is usually safe. Modifying executable programs is virus like behavior and most operating system security will try to stop you too. If your program is in the Program Files directory, and your configuration file is in the Application Data directory, and the user is logged in as a non-administrator (in XP or Vista), then you will be unable to update the EXE.\nUpdate: I don't care if you are using Asymmetric encryption, RSA or Quantum cryptography, if you are storing your keys on the user's computer (which you must do unless you route it all through a web service) then the user can find your keys, even if it means inspecting the registers on the CPU at run time! You are only buying yourself a moderate level of security, so stick with something that is simple. To prevent modification the solution I suggested is the best. To prevent reading then encrypt it, and if you are storing your key locally then use AES Rijndael.\nUpdate: The FixedGUID / SecretKey could alternatively be generated at install time and stored somewhere \"secret\" in the registry. Or you could generate it every time you use it from hardware configuration. Then you are getting more complicated. How you want to do this to allow for moderate levels of hardware changes would be to take 6 different signatures, and hash your configuration file 6 times - once with each. Combine each one with a 2nd secret value, like the GUID mentioned above (either global or generated at install). Then when you check you verify each hash separately. As long as they have 3 out of 6 (or whatever your tolerance is) then you accept it. Next time you write it you hash it with the new hardware configuration. This allows them to slowly swap out hardware over time and get a whole new system. . . Maybe that is a weakness. It all comes down to your tolerance. There are variations based on tighter tolerances.\nUPDATE: For a Credit Card system you might want to consider some real security. You should retain the services of a security and cryptography consultant. More information needs to be exchanged. They need to analyze your specific needs and risks. \nAlso, if you want security with .NET you need to first start with a really good .NET obfuscator (just Google it). A .NET assembly is way to easy to disassemble and get at the source code and read all your secrets. 
Not to sound a like a broken record, but anything that depends on the security of your user's system is fundamentally flawed from the beginning. \n",
"Out of pure curiosity, what's your reasoning for never wanting to load the file if it's been changed?\nWhy not just keep all of the configuration information compiled in the executable? Why bother with an external file at all?\nEdit\nI just read your edit about this being a credit card info program. That poses a very interesting challenge.\nI would think, for that level of security, some sort of pretty major encryption would be necessary but I don't know anything about handling that sort of thing in such a way that the cryptographic secrets can't just be extracted from the executable.\nIs authenticating against some sort of online source a possibility?\n",
"I'd suggest you use a Assymmetric Key Encryption to encrypt your configuration file, wherever they are stored, inside the executable or not.\nIf I remember correctly, RSA is one the variants.\nFor the explanation of it, see Public-key cryptography on Wikipedia\nStore the \"reading\" key in your executable and keep to yourself the \"writing\" key. So no one but you can modify the configuration.\nThis has the advantages of:\n\nNo-one can modify the configuration unless they have the \"writing\" key because any modification will corrupt it entirely, even if they know the \"reading\" key it would takes ages to compute the other key.\nModification guarantee.\nIt's not hard - there are plenty of libraries available these days. There're also a lot of key-generation programs that can generate really, really long keys.\n\nDo take some research on how to properly implement them though.\n"
] | [
11,
1,
1
] | [
"just make a const string that holds the md5 hash and compile it into your app ... your app can then just refer to this const string when validating the configuration file\n"
] | [
-2
] | [
".net",
"c#"
] | stackoverflow_0000035103_.net_c#.txt |
Q:
Tools for finding memory corruption in managed C++ code
I have a .NET application, which is using an open source C++ compression library for compressing images. We are accessing the C++ library via managed C++. I'm seeing heap corruption during compression. A call to _CrtIsValidHeapPointer is finding an error on a call to free() when cleaning up after compression.
Are there tools such as Purify to help diagnosis this problem and what is causing the heap corruption when working in a combination of managed and unmanaged code? I do have the exception caught in the debugger, but it would be nice to have other tools to help find the solution to the problem.
A:
On *nix, there's a tool called Valgrind that I use for dealing with memory issues, like memory leaks and memory corruption.
A:
In native code, if the corruption always occurs in the same place in memory, you can use a data breakpoint to break the debugger when that memory is changed. Unfortunately, you cannot set a data breakpoint in the managed C++ environment, presumably because the GC could move the object in memory.
Not sure if this helps, but hopefully it leads you off in the right direction.
A:
Rational Purify for Windows supports .NET, so I guess that could be used.
| Tools for finding memory corruption in managed C++ code | I have a .NET application, which is using an open source C++ compression library for compressing images. We are accessing the C++ library via managed C++. I'm seeing heap corruption during compression. A call to _CrtIsValidHeapPointer is finding an error on a call to free() when cleaning up after compression.
Are there tools such as Purify to help diagnosis this problem and what is causing the heap corruption when working in a combination of managed and unmanaged code? I do have the exception caught in the debugger, but it would be nice to have other tools to help find the solution to the problem.
| [
"On *nix, there's a tool called Valgrind that I use for dealing with memory issues, like memory leaks and memory corruption.\n",
"In native code, if the corruption always occurs in the same place in memory, you can use a data breakpoint to break the debugger when that memory is changed. Unfortunately, you cannot set a data breakpoint in the managed C++ environment, presumably because the GC could move the object in memory.\nNot sure if this helps, but hopefully it leads you off in the right direction.\n",
"Rational Purify for Windows supports .NET, so I guess that could be used.\n"
] | [
1,
1,
0
] | [] | [] | [
".net",
"managed_c++"
] | stackoverflow_0000034973_.net_managed_c++.txt |
Q:
requiredfield validator is preventing another form from submitting
I have a page with many forms in panels and usercontrols, and a requiredfield validator I just added to one form is preventing all of my other forms from submitting. what's the rule that I'm not following?
A:
Are you using ValidationGroups? Try assigning a validation group to each control as well as to the validator that you want to use. Something like:
<asp:TextBox ID="txt1" ValidationGroup="Group1" runat="server" />
<asp:RequiredFieldValidator ID="rfv1" ... ValidationGroup="Group1" />
Note, if a button doesn't specify a validation group it will validate all controls that aren't assigned to a validation group.
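The same wiring can be done from code-behind. Here is a minimal C# sketch (the control instances and the "Group1" name are placeholders, and it assumes a reference to System.Web):
using System.Web.UI.WebControls;

// Minimal sketch, not tied to the asker's page: a button only triggers
// validators that share its ValidationGroup, so the other forms on the
// page are left alone when this one submits.
public static class ValidationGroupSketch
{
    public static void Wire(TextBox textBox, RequiredFieldValidator validator, Button submitButton)
    {
        textBox.ValidationGroup = "Group1";
        validator.ValidationGroup = "Group1";
        submitButton.ValidationGroup = "Group1";
    }
}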
A:
You should be setting ValidationGroup property to a different value for each group of elements. Your validator's ValidationGroup must only be same with the control that submit its form.
| requiredfield validator is preventing another form from submitting | I have a page with many forms in panels and usercontrols, and a requiredfield validator I just added to one form is preventing all of my other forms from submitting. what's the rule that I'm not following?
| [
"Are you using ValidationGroups? Try assigning each control with a validation group as well as the validator that you want to use. Something like:\n<asp:TextBox ID=\"txt1\" ValidationGroup=\"Group1\" ruant=\"server\" />\n<asp:RequiredFieldValidator ID=\"rfv1\" ... ValidationGroup=\"Group1\" />\n\nNote, if a button doesn't specify a validation group it will validate all controls that aren't assigned to a validation group.\n",
"You should be setting ValidationGroup property to a different value for each group of elements. Your validator's ValidationGroup must only be same with the control that submit its form.\n"
] | [
5,
0
] | [] | [] | [
"asp.net"
] | stackoverflow_0000035208_asp.net.txt |
Q:
Working in Visual Studio (2005 or 2008) on a networked drive
Have you guys had any experiences (positive or negative) with placing your source code/solution on a network drive for Visual Studio 2005 or 2008? Please note I am not referring to placing your actual source control system on that drive, but rather your working folder.
Thanks
A:
It works just fine. I have worked with source code from my "home" folder on many different systems (NFS, Samba, AD) and never had any problems. The only drawback is that you might experience somewhat longer compile times if your network is slow or there is much traffic on the network. Under normal circumstances this is not an issue though, since source code files are usually small and will be cached by the operating system anyway.
A:
Some folks in our company do that with their external dependencies, and they get occasional build errors, usually because a library or header can't be retrieved. When they rebuild again it all works. Of course the speed and traffic-level of your network would have a major effect on this.
| Working in Visual Studio (2005 or 2008) on a networked drive | Have you guys had any experiences (positive or negative) with placing your source code/solution on a network drive for Visual Studio 2005 or 2008? Please note I am not referring to placing your actual source control system on that drive, but rather your working folder.
Thanks
| [
"It works just fine. I have worked with source code from my \"home\" folder on many different systems (NFS, Samba, AD) and never had any problems. The only drawback is that you might experience somewhat longer compile times if your network is slow or there is much traffic on the network. Under normal circumstances this is not an issue though, since source code files are usually small and will be cached by the operating system anyway.\n",
"Some folks in our company do that with their external dependencies, and they get occasional build errors, usually because a library or header can't be retrieved. When they rebuild again it all works. Of course the speed and traffic-level of your network would have a major effect on this.\n"
] | [
1,
1
] | [] | [] | [
"visual_studio_2005",
"visual_studio_2008"
] | stackoverflow_0000035194_visual_studio_2005_visual_studio_2008.txt |
Q:
Simple Frameworks for Displaying Bitmaps and Handling Button Presses
We have a set of applications that basically display a bunch of bitmaps and text, then allow user to press "buttons" (certain bitmaps) that cause actions to occur.
We currently have these implemented using DirectX and a bunch of code to place the bitmaps and handle the button-presses. But we'd like to have the following features:
portable to Linux
some sort of "editor" that would allow us to lay out screens without hard-coding locations of elements in code
animation
we need to be able to overlay video
not resource intensive (these terminals don't have a lot of memory or CPU)
we're currently using C++, so management would prefer that, but other languages would be considered
We'd prefer a free, open-source solution, but would be willing to buy something if it is not too expensive. (We have a couple dozen developers, and tens of thousands of terminals deployed.)
We don't like the common GUI toolkits or widgets. We want something that has more of the look of a game than of a dialog box.
Any suggestions for off-the-shelf stuff we could use?
A:
Maybe the way to go is something like Clutter or Allegro. If you check in this article at ArsTechnica what they are using Clutter for, you might get an idea how to use it. I don't know for sure if it works on Windows, but I'm pretty sure it does, considering it only depends on libraries that are supported under Windows.
A:
You could try wxWidgets (it has wxBitmapButton) or try to implement your own solution using SDL for all of the graphics.
A:
"We don't like the common GUI toolkits or widgets. We want something that has more of the look of a game than of a dialog box."
You realize that Trolltech's QT has a style sheet language for widgets? Take a look at their white paper, specifically page 60
http://trolltech.com/pdf/qt43-whitepaper-us.pdf
Going over your other requirements:
portable to Linux
Yes. Also supports Windows, Mac, and embedded environments.
some sort of "editor" that would allow us to lay out screens without hard-coding locations of elements in code
Qt's Designer is a very nice tool. I use it all the time.
animation
Qt supports this.
we need to be able to overlay video
Qt supports this.
not resource intensive (these terminals don't have a lot of memory or CPU)
This might be the fly in the ointment. You could check out Qt's embedded option. I've never used that myself.
we're currently using C++, so management would prefer that, but other languages would be considered
Qt is for C++ and works with all major compilers.
We'd prefer a free, open-source solution, but would be willing to buy something if it is not too expensive. (We have a couple dozen developers, and tens of thousands of terminals deployed.)
Qt has both open-source and closed source options.
| Simple Frameworks for Displaying Bitmaps and Handling Button Presses | We have a set of applications that basically display a bunch of bitmaps and text, then allow user to press "buttons" (certain bitmaps) that cause actions to occur.
We currently have these implemented using DirectX and a bunch of code to place the bitmaps and handle the button-presses. But we'd like to have the following features:
portable to Linux
some sort of "editor" that would allow us to lay out screens without hard-coding locations of elements in code
animation
we need to be able to overlay video
not resource intensive (these terminals don't have a lot of memory or CPU)
we're currently using C++, so management would prefer that, but other languages would be considered
We'd prefer a free, open-source solution, but would be willing to buy something if it is not too expensive. (We have a couple dozen developers, and tens of thousands of terminals deployed.)
We don't like the common GUI toolkits or widgets. We want something that has more of the look of a game than of a dialog box.
Any suggestions for off-the-shelf stuff we could use?
| [
"Maybe the way to go is something like Clutter or Allegro. If you check in this article at ArsTechnica what they are using Clutter for, you might get an idea how to use it. I don't know for sure if it works on Windows, but I'm pretty sure it does, considering it only depends on libraries that are supported under Windows.\n",
"You could try wxWidgets (it has wxBitmapButton) or try to implement your own solution using SDL for all of the graphics.\n",
"\"We don't like the common GUI toolkits or widgets. We want something that has more of the look of a game than of a dialog box.\"\nYou realize that Trolltech's QT has a style sheet language for widgets? Take a look at their white paper, specifically page 60\nhttp://trolltech.com/pdf/qt43-whitepaper-us.pdf\nGoing over your other requirements:\n\nportable to Linux\n\nYes. Also supports Windows, Mac, and embedded environments.\n\nsome sort of \"editor\" that would allow us to lay out screens without hard-coding locations of elements in code\n\nQt's Designer is a very nice tool. I use it all the time.\n\nanimation\n\nQt supports this.\n\nwe need to be able to overlay video\n\nQt supports this.\n\nnot resource intensive (these terminals don't have a lot of memory or CPU)\n\nThis might be the fly in the ointment. You could check out Qt's embedded option. I've never used that myself.\n\nwe're currently using C++, so management would prefer that, but other languages would be considered\n\nQt is for C++ and works with all major compilers.\n\nWe'd prefer a free, open-source solution, but would be willing to buy something if it is not too expensive. (We have a couple dozen developers, and tens of thousands of terminals deployed.)\n\nQt has both open-source and closed source options.\n"
] | [
1,
0,
0
] | [] | [] | [
"bitmap",
"c++",
"graphics",
"user_interface"
] | stackoverflow_0000024196_bitmap_c++_graphics_user_interface.txt |
Q:
Best way to unit test ASP.NET MVC action methods that use BindingHelperExtensions.UpdateFrom?
In handling a form post I have something like
public ActionResult Insert()
{
Order order = new Order();
BindingHelperExtensions.UpdateFrom(order, this.Request.Form);
this.orderService.Save(order);
return this.RedirectToAction("Details", new { id = order.ID });
}
I am not using explicit parameters in the method as I anticipate having to adapt to a variable number of fields etc., and a method with 20+ parameters is not appealing.
I suppose my only option here is to mock up the whole HttpRequest, equivalent to what Rob Conery has done. Is this a best practice? Hard to tell with a framework which is so new.
I've also seen solutions involving using an ActionFilter so that you can transform the above method signature to something like
[SomeFilter]
public Insert(Contact contact)
A:
I'm now using ModelBinder so that my action method can look (basically) like:
public ActionResult Insert(Contact contact)
{
if (this.ViewData.ModelState.IsValid)
{
this.contactService.SaveContact(contact);
return this.RedirectToAction("Details", new { id = contact.ID });
}
else
{
return this.RedirectToAction("Create");
}
}
A:
Wrap it in an interface and mock it.
A:
Use NameValueDeserializer from http://www.codeplex.com/MVCContrib instead of UpdateFrom.
| Best way to unit test ASP.NET MVC action methods that use BindingHelperExtensions.UpdateFrom? | In handling a form post I have something like
public ActionResult Insert()
{
Order order = new Order();
BindingHelperExtensions.UpdateFrom(order, this.Request.Form);
this.orderService.Save(order);
return this.RedirectToAction("Details", new { id = order.ID });
}
I am not using explicit parameters in the method as I anticipate having to adapt to a variable number of fields etc., and a method with 20+ parameters is not appealing.
I suppose my only option here is to mock up the whole HttpRequest, equivalent to what Rob Conery has done. Is this a best practice? Hard to tell with a framework which is so new.
I've also seen solutions involving using an ActionFilter so that you can transform the above method signature to something like
[SomeFilter]
public Insert(Contact contact)
| [
"I'm now using ModelBinder so that my action method can look (basically) like:\n public ActionResult Insert(Contact contact)\n {\n\n if (this.ViewData.ModelState.IsValid)\n {\n this.contactService.SaveContact(contact);\n\n return this.RedirectToAction(\"Details\", new { id = contact.ID });\n }\n else\n {\n return this.RedirectToAction(\"Create\");\n }\n }\n\n",
"Wrap it in an interface and mock it.\n",
"Use NameValueDeserializer from http://www.codeplex.com/MVCContrib instead of UpdateFrom.\n"
] | [
1,
0,
0
] | [] | [] | [
"asp.net_mvc",
"unit_testing"
] | stackoverflow_0000028723_asp.net_mvc_unit_testing.txt |
Q:
XmlSerializer changes in .NET 3.5 SP1
I've seen quite a few posts on changes in .NET 3.5 SP1, but stumbled into one that I've yet to see documentation for yesterday. I had code working just fine on my machine, from VS, msbuild command line, everything, but it failed on the build server (running .NET 3.5 RTM).
[XmlRoot("foo")]
public class Foo
{
static void Main()
{
XmlSerializer serializer = new XmlSerializer(typeof(Foo));
string xml = @"<foo name='ack' />";
using (StringReader sr = new StringReader(xml))
{
Foo foo = serializer.Deserialize(sr) as Foo;
}
}
[XmlAttribute("name")]
public string Name { get; set; }
public Foo Bar { get; private set; }
}
In SP1, the above code runs just fine. In RTM, you get an InvalidOperationException:
Unable to generate a temporary class (result=1).
error CS0200: Property or indexer 'ConsoleApplication2.Foo.Bar' cannot be assigned to -- it is read only
Of course, all that's needed to make it run under RTM is adding [XmlIgnore] to the Bar property.
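For reference, the RTM-friendly shape is just the class above with the attribute added (a minimal sketch, Main omitted):
using System.Xml.Serialization;

[XmlRoot("foo")]
public class Foo
{
    [XmlAttribute("name")]
    public string Name { get; set; }

    // Telling the serializer to skip the get-only property avoids the
    // CS0200 error in the generated temporary class on .NET 3.5 RTM.
    [XmlIgnore]
    public Foo Bar { get; private set; }
}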
My google fu is apparently not up to finding documentation of these kinds of changes. Is there a change list anywhere that lists this change (and similar under-the-hood changes that might jump up and shout "gotcha")? Is this a bug or a feature?
EDIT: In SP1, if I added a <Bar /> element, or set [XmlElement] for the Bar property, it won't get deserialized. It doesn't fail pre-SP1 when it tries to deserialize--it throws an exception when the XmlSerializer is constructed.
This makes me lean more toward it being a bug, especially if I set an [XmlElement] attribute for Foo.Bar. If it's unable to do what I ask it to do, it should be throwing an exception instead of silently ignoring Foo.Bar. Other invalid combinations/settings of XML serialization attributes result in an exception.
EDIT: Thank you, TonyB, I'd not known about setting the temp files location. For those that come across similar issues in the future, you do need an additional config flag:
<system.diagnostics>
<switches>
<add name="XmlSerialization.Compilation" value="1" />
</switches>
</system.diagnostics>
<system.xml.serialization>
<xmlSerializer tempFilesLocation="c:\\foo"/>
</system.xml.serialization>
Even with setting an [XmlElement] attribute on the Bar property, no mention was made of it in the generated serialization assembly--which fairly firmly puts this in the realm of a silently swallowed error (aka, a bug). Either that or the designers have decided [XmlIgnore] is no longer necessary for properties that can't be set--and you'd expect to see that in release notes, change lists, or the XmlIgnoreAttribute documentation.
A:
In SP1 does the foo.Bar property get properly deserialized?
In pre SP1 you wouldn't be able to deserialize the object because the set method of the Bar property is private so the XmlSerializer doesn't have a way to set that value. I'm not sure how SP1 is pulling it off.
You could try adding this to your web.config/app.config
<system.xml.serialization>
<xmlSerializer tempFilesLocation="c:\\foo"/>
</system.xml.serialization>
That will put the class generated by the XmlSerializer into c:\foo so you can see what it is doing in SP1 vs RTM
A:
I rather like this new (?) behavior because the XML document doesn't have any mention of Bar in it, so the deserializer should not even be attempting to set it.
| XmlSerializer changes in .NET 3.5 SP1 | I've seen quite a few posts on changes in .NET 3.5 SP1, but stumbled into one that I've yet to see documentation for yesterday. I had code working just fine on my machine, from VS, msbuild command line, everything, but it failed on the build server (running .NET 3.5 RTM).
[XmlRoot("foo")]
public class Foo
{
static void Main()
{
XmlSerializer serializer = new XmlSerializer(typeof(Foo));
string xml = @"<foo name='ack' />";
using (StringReader sr = new StringReader(xml))
{
Foo foo = serializer.Deserialize(sr) as Foo;
}
}
[XmlAttribute("name")]
public string Name { get; set; }
public Foo Bar { get; private set; }
}
In SP1, the above code runs just fine. In RTM, you get an InvalidOperationException:
Unable to generate a temporary class (result=1).
error CS0200: Property or indexer 'ConsoleApplication2.Foo.Bar' cannot be assigned to -- it is read only
Of course, all that's needed to make it run under RTM is adding [XmlIgnore] to the Bar property.
My google fu is apparently not up to finding documentation of these kinds of changes. Is there a change list anywhere that lists this change (and similar under-the-hood changes that might jump up and shout "gotcha")? Is this a bug or a feature?
EDIT: In SP1, if I added a <Bar /> element, or set [XmlElement] for the Bar property, it won't get deserialized. It doesn't fail pre-SP1 when it tries to deserialize--it throws an exception when the XmlSerializer is constructed.
This makes me lean more toward it being a bug, especially if I set an [XmlElement] attribute for Foo.Bar. If it's unable to do what I ask it to do, it should be throwing an exception instead of silently ignoring Foo.Bar. Other invalid combinations/settings of XML serialization attributes result in an exception.
EDIT: Thank you, TonyB, I'd not known about setting the temp files location. For those that come across similar issues in the future, you do need an additional config flag:
<system.diagnostics>
<switches>
<add name="XmlSerialization.Compilation" value="1" />
</switches>
</system.diagnostics>
<system.xml.serialization>
<xmlSerializer tempFilesLocation="c:\\foo"/>
</system.xml.serialization>
Even with setting an [XmlElement] attribute on the Bar property, no mention was made of it in the generated serialization assembly--which fairly firmly puts this in the realm of a silently swallowed error (aka, a bug). Either that or the designers have decided [XmlIgnore] is no longer necessary for properties that can't be set--and you'd expect to see that in release notes, change lists, or the XmlIgnoreAttribute documentation.
| [
"In SP1 does the foo.Bar property get properly deserialized?\nIn pre SP1 you wouldn't be able to deserialize the object because the set method of the Bar property is private so the XmlSerializer doesn't have a way to set that value. I'm not sure how SP1 is pulling it off.\nYou could try adding this to your web.config/app.config\n<system.xml.serialization> \n <xmlSerializer tempFilesLocation=\"c:\\\\foo\"/> \n</system.xml.serialization> \n\nThat will put the class generated by the XmlSerializer into c:\\foo so you can see what it is doing in SP1 vs RTM\n",
"I rather like this new (?) behavior because the XML document doesn't have any mention of Bar in it, so the deserializer should not even be attempting to set it.\n"
] | [
4,
0
] | [] | [] | [
".net_3.5",
"serialization",
"xml"
] | stackoverflow_0000034925_.net_3.5_serialization_xml.txt |
Q:
Managing multiple identical databases efficiently?
How, if you have a database per client of a web application instead of one database used by all clients, do you go about providing updates and enhancements to all databases efficiently?
How do you roll out changes to schema and code in such a scenario?
A:
It's kinda difficult for us. We have a custom program that writes a lot of the sql code for the different databases for us. Essentially it writes the code once and then copies it over and over again along with placing the change database commands etc. It also makes sure that the primary key identities etc are in sync when they need to be. Beyond that I would look at Red Gate's products. They have saved us more than once here. With them you can easily compare the dbs and see what is different. A must when dealing with multiple copies.
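If you do roll your own, the core of such a tool is little more than a loop over client connection strings. A rough C# sketch follows; the connection strings and script path are placeholders, and a real tool would split the script on GO batches and wrap each upgrade in a transaction:
using System;
using System.Data.SqlClient;
using System.IO;

// Rough sketch of a "run one upgrade script against every client database" tool.
class UpgradeRunner
{
    static void Main()
    {
        // Placeholder script path and connection strings.
        string script = File.ReadAllText(@"C:\upgrades\2008-09-01.sql");
        string[] clientDatabases =
        {
            "Server=db1;Database=ClientA;Integrated Security=true",
            "Server=db1;Database=ClientB;Integrated Security=true"
        };

        foreach (string connectionString in clientDatabases)
        {
            using (SqlConnection conn = new SqlConnection(connectionString))
            using (SqlCommand cmd = new SqlCommand(script, conn))
            {
                conn.Open();
                // A production tool would split on GO and use a transaction here.
                cmd.ExecuteNonQuery();
                Console.WriteLine("Upgraded: " + conn.Database);
            }
        }
    }
}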
A:
Use a code generator / scripting language to implement the original schema and updates to it over time.
A:
I've used Red Gate's SQL Packager for this in the past. The beauty of this tool is that it creates a C# project for you that actually does the work so if you need to you can extend the functionality of the default package to do other things like insert default values into new columns that have been added to the db etc. In the end you have a nice tool that you can hand to a technician and all they have to do to upgrade multiple DBs is point it to the database and click a button.
Red Gate also has a product called SQL multi-script that allows you to run scripts against multiple servers/dbs at the same time. I've never used this tool but I imagine if you're looking for something to use internally that doesn't need to be packaged up you'd want to look at that.
| Managing multiple identical databases efficiently? | How, if you have a database per client of a web application instead of one database used by all clients, do you go about providing updates and enhancements to all databases efficiently?
How do you roll out changes to schema and code in such a scenario?
| [
"It's kinda difficult for us. We have a custom program that writes a lot of the sql code for the different databases for us. Essentially it writes the code once and then copies it over and over again along with placing the change database commands etc. It also makes sure that the primary key identities etc are in sync when they need to be. Beyond that I would look at Red Gate's products. They have saved us more than once here. With them you can easily compare the dbs and see what is differnt. A must when dealing with multiple copies.\n",
"Use a code generator / scripting language to implement the original schema and updates to it over time.\n",
"I've used Red Gate's SQL Packager for this in the past. The beauty of this tool is that it creates a C# project for you that actually does the work so if you need to you can extend the functionality of the default package to do other things like insert default values into new columns that have been added to the db etc. In the end you have a nice tool that you can hand to a technician and all they have to do to upgrade multiple DBs is point it to the database and click a button. \nRed Gate also has a product called SQL multi-script that allows you to run scripts against multiple servers/dbs at the same time. I've never used this tool but I imagine if you're looking for something to use internally that doesn't need to be packaged up you'd want to look at that.\n"
] | [
1,
1,
1
] | [] | [] | [
"database"
] | stackoverflow_0000035256_database.txt |
Q:
How can you make a .net windows forms project look fresh?
I'm working on a visual studio 2005 vb.net windows forms project that's been around for several years. It's full of default textboxes, labels, dropdowns, datagrids, datetime pickers -- all the standard stuff. The end result is a very gray, old-looking project.
What would be the best approach to making this project look fresh and snazzy? I'd rather not rewrite the entire solution with all brand new forms objects, but would that be avoidable?
A:
I was actually just sprucing up a dialog today. A lot of it depends on what kind of application you have, and what OS it is running on. A couple of these tips will certainly go a long way to jazzing things up.
Ensure adequate spacing between controls — don't cram them all together. Space is appealing. You might also trying flowing the controls a little differently when you have more space.
Put in some new 3D and glossy images. You can put a big yellow exclamation mark on a custom warning dialog. Replace old toolbar buttons with new ones. Two libraries I have used and like are GlyFX and IconExperience. You can find free ones too. Ideally get a graphic artist to make some custom ones for the specific actions your application does to fill in between the common ones you use (make sure they all go together). That will go a long way to making it look fancy.
Try a different font. Tahoma is a good one. Often times the default font is MS Sans Serif. You can do better. Avoid Times New Roman and Comic Sans though. Also avoid large blocks of bold — use it sparingly. Generally you want all your fonts the same, and only use different fonts sparingly to set certain bits of text apart.
Add subdued colors to certain controls. This is a tricky one. You always want to use subdued colors, nothing bright or stark usually, but the colors should indicate something, or if you have a grid you can use it to show logical grouping. This is a slippery slope. Be aware that users might change their system colors, which will change how your colors look. Ideally give them a few color themes, or the ability to change colors.
Instead of thinking eye-candy, think usability. Make the most common course of action obvious. Mark Miller of DevExpress has a great talk on the Science of User Interface Design. I actually have a video of it and might be able to post it online with a little clean-up.
Invest in a few good quality 3rd party controls. Replacing all your controls could be a pain, but if you are using the default grids for example, you would really jazz it up with a good grid from DevExpress or some other component vendor. Be aware that different vendors have different philosophies for how their components are used, so swapping them out can be a bit of a pain. Start small to test the waters, and then try something really complicated before you commit to replacing all of them. The only thing worse then ugly grids is ugly grids mixed with pretty grids. Consistency is golden!
You also might look at replacing your old tool bars and menus with a Ribbon Control like Microsoft did in Office 2007. Then everyone will think you are really uptown! Again only replacing key components and UI elements without thinking you need to revamp the whole UI.
Of course pay attention to the basics like tab order, etc. Consistency, consistency, consistency.
Some apps lend themselves to full blown skinning, while others don't. Generally you don't want anything flashy that gets used a lot.
A:
One other thing to also check is that your controls have the FlatStyle property set to System instead of Standard.
What this will do is make sure that the app uses the system defaults for radio buttons, standard buttons and the like. This takes all your apps from the flat Win 2000 look and gives them the XP or Vista bling depending on the OS they are running.
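For example, a throwaway C# sketch (not the asker's VB.NET project) showing the two relevant pieces, Application.EnableVisualStyles plus the per-control FlatStyle setting:
using System;
using System.Windows.Forms;

static class ThemedSketch
{
    [STAThread]
    static void Main()
    {
        // Must run before any forms are created for themed (XP/Vista) rendering.
        Application.EnableVisualStyles();

        Form form = new Form();
        Button ok = new Button();
        ok.Text = "OK";
        ok.FlatStyle = FlatStyle.System;   // let the OS theme engine draw it
        form.Controls.Add(ok);

        Application.Run(form);
    }
}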
A:
This isn't so much an "answer" as an opinion.
I tried to jazz up a WinForms project I created back a few years ago by giving the forms a fancy blue gradient background etc, and it looked pretty good on XP. But then on Vista it looked out of place. Taking away any custom painting and reverting the form to "battleship gray" made it look much better IMHO.
I'm seeing a lot of applications (particularly from MS) coming out with custom window chrome etc, and all it does is detract from the nice sense of consistency that Windows gives.
I guess what I'm saying is that you don't need to worry too much about making your application look fashionable. If you keep your colours based on the SystemColors enumeration then Windows can do that for you.
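A small C# sketch of what leaning on SystemColors looks like (the class is made up for illustration):
using System.Drawing;
using System.Windows.Forms;

// Colours come from the user's Windows theme rather than hard-coded values,
// so the form still looks "native" if they switch themes or OS versions.
public class ThemedPanel : Panel
{
    public ThemedPanel()
    {
        BackColor = SystemColors.Control;      // standard dialog background
        ForeColor = SystemColors.ControlText;  // standard text colour
        // SystemColors.GrayText is the usual choice for de-emphasised hints.
    }
}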
A:
I recommend purchasing a good 3rd-party control library - Infragistics and DevExpress, are just a couple. Most of these libraries give you the ability to drop in new compatible controls on top of your existing ones - for example, you can replace the default EditBox with an enhanced version. They also give you access to some of the snazzy new UIs such as Ribbon, or the Outlook-style navigator people are always wanting.
The reason I specifically recommend using one of these libraries is that they were designed to be relatively easy to use in existing applications, you get support, a community, and all sorts of upgrade paths/options.
The downside: money.
A:
What would be the best approach to making this project look fresh and snazzy?
IMHO the best thing you can do is make sure the controls are logically ordered, and have ample spacing between them, and add groupboxes / labels / etc where appropriate.
If you try and change the 'sea of gray' that is the default color scheme, your app will just end up looking crap.
A:
This depends on how the existing "gray old looking" project is structured in terms of code. For example, is data access code separated from the UI in a Data Access Layer, is the business logic in a Business Logic Layer? If yes, then cleaning the UI for a snazzy look should be relatively simple.
If everything is all there in the "Button Click" event, then a rewrite is the only way in my humble opinion as otherwise it will just be too time consuming trying to work with the existing code base.
Cheers
A:
You can subclass all the default controls and override their appearance. Admittedly, you will have to go thru the entire project and change all references of TextBox to MyTextBox, but all of the default properties and methods will still work. The same cannot be guaranteed if you go with a 3rd party vendor. The other advantage of this approach is you can pick one control at a time and perform an incremental upgrade of the application.
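A minimal C# sketch of that approach (the class name and styling choices are arbitrary):
using System.Drawing;
using System.Windows.Forms;

// Drop-in replacement: MyTextBox keeps all of TextBox's properties and methods,
// so swapping references from TextBox to MyTextBox doesn't break existing code.
public class MyTextBox : TextBox
{
    public MyTextBox()
    {
        BorderStyle = BorderStyle.FixedSingle;   // flatter border
        Font = new Font("Tahoma", 8.25f);        // arbitrary "fresher" font choice
    }
}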
| How can you make a .net windows forms project look fresh? | I'm working on a visual studio 2005 vb.net windows forms project that's been around for several years. It's full of default textboxes, labels, dropdowns, datagrids, datetime pickers -- all the standard stuff. The end result is a very gray, old-looking project.
What would be the best approach to making this project look fresh and snazzy? I'd rather not rewrite the entire solution with all brand new forms objects, but would that be avoidable?
| [
"I was actually just sprucing up a dialog today. A lot of it depends on what kind of application you have, and what OS it is running on. A couple of these tips will certainly go a long way to jazzing things up.\n\nEnsure adequate spacing between controls — don't cram them all together. Space is appealing. You might also trying flowing the controls a little differently when you have more space.\nPut in some new 3D and glossy images. You can put a big yellow exclamation mark on a custom warning dialog. Replace old toolbar buttons with new ones. Two libraries I have used and like are GlyFX and IconExperience. You can find free ones too. Ideally get a graphic artist to make some custom ones for the specific actions your application does to fill in between the common ones you use (make sure they all go together). That will go a long way to making it look fancy.\nTry a different font. Tahoma is a good one. Often times the default font is MS Sans Serif. You can do better. Avoid Times New Roman and Comic Sans though. Also avoid large blocks of bold — use it sparingly. Generally you want all your fonts the same, and only use different fonts sparingly to set certain bits of text apart. \nAdd subdued colors to certain controls. This is a tricky one. You always want to use subdued colors, nothing bright or stark usually, but the colors should indicate something, or if you have a grid you can use it to show logical grouping. This is a slippery slope. Be aware that users might change their system colors, which will change how your colors look. Ideally give them a few color themes, or the ability to change colors.\nInstead of thinking eye-candy, think usability. Make the most common course of action obvious. Mark Miller of DevExpress has a great talk on the Science of User Interface Design. I actually have a video of it and might be able to post it online with a little clean-up.\nInvest in a few good quality 3rd party controls. Replacing all your controls could be a pain, but if you are using the default grids for example, you would really jazz it up with a good grid from DevExpress or some other component vendor. Be aware that different vendors have different philosophies for how their components are used, so swapping them out can be a bit of a pain. Start small to test the waters, and then try something really complicated before you commit to replacing all of them. The only thing worse then ugly grids is ugly grids mixed with pretty grids. Consistency is golden! \nYou also might look at replacing your old tool bars and menus with a Ribbon Control like Microsoft did in Office 2007. Then everyone will think you are really uptown! Again only replacing key components and UI elements without thinking you need to revamp the whole UI.\nOf course pay attention to the basics like tab order, etc. Consistency, consistency, consistency.\n\nSome apps lend themselves to full blown skinning, while others don't. Generally you don't want anything flashy that gets used a lot.\n",
"One other thing to also check is that your controls have the FlatStyle property set to System instead of Standard.\nWhat this will do is make sure that the app uses the system defaults for radio buttons, standard buttons and the like. This takes all your apps from the flat Win 2000 look and gives them the XP or Vista bling depending on the OS they are running.\n",
"This isn't so much an \"answer\" as an opinion.\nI tried to jazz up a WinForms project I created back a few years ago by giving the forms a fancy blue gradient background etc, and it looked pretty good on XP. But then on Vista it looked out of place. Taking away any custom painting and reverting the form to \"battleship gray\" made it look much better IMHO.\nI'm seeing a lot of applications (particularly from MS) coming out with custom window chrome etc, and all it does is detract from the nice sense of consistency that Windows gives.\nI guess what I'm saying is that you don't need to worry too much about making your application look fashionable. If you keep your colours based on the SystemColors enumeration then Windows can do that for you.\n",
"I recommend purchasing a good 3rd-party control library - Infragistics and DevExpress, are just a couple. Most of these libraries give you the ability to drop in new compatible controls on top of your existing ones - for example, you can replace the default EditBox with an enhanced version. They also give you access to some of the snazzy new UIs such as Ribbon, or the Outlook-style navigator people are always wanting.\nThe reason I specifically recommend using one of these libraries is that they were designed to be relatively easy to use in existing applications, you get support, a community, and all sorts of upgrade paths/options.\nThe downside: money.\n",
"\nWhat would be the best approach to making this project look fresh and snazzy?\n\nIMHO the best thing you can do is make sure the controls are logically ordered, and have ample spacing between them, and add groupboxes / labels / etc where appropriate. \nIf you try and change the 'sea of gray' that is the default color scheme, your app will just end up looking crap.\n",
"This depends on how the existing \"gray old looking\" project is structured in terms of code. For example, is data access code separated from the UI in a Data Access Layer, is the business logic in a Business Logic Layer? If yes, then cleaning the UI for a snazzy look should be relatively simple.\nIf everything is all there in the \"Button Click\" event, then a rewrite is the only way in my humble opinion as otherwise it will just be too time consuming trying to work with the existing code base.\nCheers\n",
"You can subclass all the default controls and override their appearance. Admittedly, you will have to go thru the entire project and change all references of TextBox to MyTextBox, but all of the default properties and methods will still work. The same cannot be guaranteed if you go with a 3rd party vendor. The other advantage of this approach is you can pick one control at a time and perform an incremental upgrade of the application.\n"
] | [
38,
9,
6,
6,
5,
3,
3
] | [] | [] | [
".net",
"user_interface",
"visual_studio",
"winforms"
] | stackoverflow_0000033703_.net_user_interface_visual_studio_winforms.txt |
Q:
Minimalistic Database Administration
I am a developer. An architect on good days. Somehow I find myself also being the DBA for my small company. My background is fair in the DB arts but I have never been a full fledged DBA. My question is what do I have to do to ensure a realiable and reasonably functional database environment with as little actual effort as possible?
I am sure that I need to make sure that backups are being performed and that is being done. That is an easy one. What else should I be doing on a consistant basis?
A:
I've been there. I used to have a job where I wrote code, did all the infrastructure stuff, wore the DBA hat, did user support, fixed the electric stapler when it jammed, and whatever else came up that might be remotely associated with IT. It was great! I learned a little about everything.
As far as the care and feeding of your database box, I'd recommend that you do the following:
Perform regular full backups.
Perform regular transaction log backups.
Monitor your backup jobs. There's a bunch of utilities out on the market that are relatively cheap that can automate this for you. In a small shop you're often too busy
to remember to check on them daily.
Test your backups. Do a drill. Restore an old copy of your most important databases. Prove to yourself that your backups are working and that you know how to restore them properly. You'd be surprised how many people only think about this during their first real disaster.
Store backups off-site. With all the online backup providers out there today, there's not much excuse for not having an offsite backup.
Limit sa access to your boxes.
If your database platform supports it, use only role based security. Resist the temptation to have one-off user specific security.
The basic idea here is that if you restrict who has access to the box, you'll have fewer problems. Secondly, if your backups are solid, there are few things that come up that you won't be able to deal with effectively.
A:
Who else is involved in the database? Are you the only person making schema changes (creating new objects, releasing new stored procedures, permissioning new users)?
Make sure that the number of users doing anything that could impact performance is reduced to as close to zero as possible, ideally including you.
Make sure that you're testing your backups - ideally run a DEV box that is recreating the production environment periodically, 1. a DEV box is a good idea, 2. a backup is only useful if you can restore from it.
Create groups for the various apps that connect to your database, so when a new user comes along you don't guess what permissions they need, just add them to the group, meanwhile permission the database objects to only the groups that need them
Use indices, primary keys, foreign keys, constraints, stats and whatever other tools your database supports. Normalise.
Optimise the most common code against your box - bad stored procedures/data access code will kill you.
A:
I would suggest:
A script to quickly restore the latest backup of a database, in case it gets corrupted
What kind of backups are you doing? Full backups each day, or incremental every hour, etc?
Some scripts to create new users and grant them basic access.
However, the number one suggestion is to limit as much as possible the power other users have, this will greatly reduce the chance of stuff getting badly messed up. Servers that have everyone as an sa tend to get screwed up quicker than servers that are locked down.
| Minimalistic Database Administration | I am a developer. An architect on good days. Somehow I find myself also being the DBA for my small company. My background is fair in the DB arts but I have never been a full-fledged DBA. My question is what do I have to do to ensure a reliable and reasonably functional database environment with as little actual effort as possible?
I am sure that I need to make sure that backups are being performed and that is being done. That is an easy one. What else should I be doing on a consistent basis?
| [
"I've been there. I used to have a job where I wrote code, did all the infrastructure stuff, wore the DBA hat, did user support, fixed the electric stapler when it jammed, and whatever else came up that might be remotely associated with IT. It was great! I learned a little about everything.\nAs far as the care and feeding of your database box, I'd recommend that you do the following:\n\nPerform regular full backups. \nPerform regular transaction log backups. \nMonitor your backup jobs. There's a bunch of utilities out on the market that are relatively cheap that can automate this for you. In a small shop you're often too busy\nto remember to check on them daily.\nTest your backups. Do a drill. Restore an old copy of your most important databases. Prove to yourself that your backups are working and that you know how to restore them properly. You'd be suprised how many people only think about this during their first real disaster. \nStore backups off-site. With all the online backup providers out there today, there's not much excuse for not having an offsite backup. \nLimit sa access to your boxes. \nIf your database platform supports it, use only role based security. Resist the temptation to have one-off user specific security.\n\nThe basic idea here is that if you restrict who has access to the box, you'll have fewer problems. Secondly, if your backups are solid, there are few things that come up that you won't be able to deal with effectively.\n",
"Who else is involved in the database? Are you the only person making schema changes (creating new objects, releasing new stored procedures, permissioning new users)? \n\nMake sure that the number of users doing anything that could impact performance is reduced to as close to zero as possible, ideally including you. \nMake sure that you're testing your backups - ideally run a DEV box that is recreating the production environment periodically, 1. a DEV box is a good idea, 2. a backup is only useful if you can restore from it. \nCreate groups for the various apps that connect to your database, so when a new user comes along you don't guess what permissions they need, just add them to the group, meanwhile permission the database objects to only the groups that need them \nUse indices, primary keys, foreign keys, constraints, stats and whatever other tools your database supports. Normalise. \nOptimise the most common code against your box - bad stored procedures/data access code will kill you. \n\n",
"I would suggest:\n\nA script to quickly restore the latest backup of a database, in case it gets corrupted\nWhat kind of backups are you doing? Full backups each day, or incremental every hour, etc?\nSome scripts to create new users and grant them basic access.\n\nHowever, the number one suggestion is to limit as much as possible the power other users have, this will greatly reduce the chance of stuff getting badly messed up. Servers that have everyone as an sa tend to get screwed up quicker than servers that are locked down.\n"
] | [
5,
5,
3
] | [] | [] | [
"administration",
"database",
"task"
] | stackoverflow_0000035357_administration_database_task.txt |
Q:
DOS filename escaping for use with *nix commands
I want to escape a DOS filename so I can use it with sed. I have a DOS batch file something like this:
set FILENAME=%~f1
sed 's/Some Pattern/%FILENAME%/' inputfile
(Note: %~f1 - expands %1 to a Fully qualified path name - C:\utils\MyFile.txt)
I found that the backslashes in %FILENAME% are just escaping the next letter.
How can I double them up so that they are escaped?
(I have cygwin installed so feel free to use any other *nix commands)
Solution
Combining Jeremy and Alexandru Nedelcu's suggestions, and using | for the delimiter in the sed command I have
set FILENAME=%~f1
cygpath "s|Some Pattern|%FILENAME%|" >sedcmd.tmp
sed -f sedcmd.tmp inputfile
del /q sedcmd.tmp
A:
This will work. It's messy because in BAT files you can't use set var=`cmd` like you can in unix.
The fact that echo doesn't understand quotes is also messy, and could lead to trouble if Some Pattern contains shell meta characters.
set FILENAME=%~f1
echo s/Some Pattern/%FILENAME%/ | sed -e "s/\\/\\\\/g" >sedcmd.tmp
sed -f sedcmd.tmp inputfile
del /q sedcmd.tmp
[Edited]: I am surprised that it didn't work for you. I just tested it, and it worked on my machine. I am using sed from http://sourceforge.net/projects/unxutils and using cmd.exe to run those commands in a bat file.
A:
You could try as alternative (from the command prompt) ...
> cygpath -m c:\some\path
c:/some/path
As you can guess, it converts backslashes to slashes.
A:
@Alexandru & Jeremy, Thanks for your help. You both get upvotes
@Jeremy
Using your method I got the following error:
sed: -e expression #1, char 8:
unterminated `s' command
If you can edit your answer to make it work I'd accept it. (pasting my solution doesn't count)
Update: Ok, I tried it with UnixUtils and it worked. (For reference, the UnixUtils I downloaded was dated March 1, 2007, and uses GNU sed version 3.02, my Cygwin install has GNU sed version 4.1.5)
| DOS filename escaping for use with *nix commands | I want to escape a DOS filename so I can use it with sed. I have a DOS batch file something like this:
set FILENAME=%~f1
sed 's/Some Pattern/%FILENAME%/' inputfile
(Note: %~f1 - expands %1 to a Fully qualified path name - C:\utils\MyFile.txt)
I found that the backslashes in %FILENAME% are just escaping the next letter.
How can I double them up so that they are escaped?
(I have cygwin installed so feel free to use any other *nix commands)
Solution
Combining Jeremy and Alexandru Nedelcu's suggestions, and using | for the delimiter in the sed command I have
set FILENAME=%~f1
cygpath "s|Some Pattern|%FILENAME%|" >sedcmd.tmp
sed -f sedcmd.tmp inputfile
del /q sedcmd.tmp
| [
"This will work. It's messy because in BAT files you can't use set var=`cmd` like you can in unix.\nThe fact that echo doesn't understand quotes is also messy, and could lead to trouble if Some Pattern contains shell meta characters.\nset FILENAME=%~f1\necho s/Some Pattern/%FILENAME%/ | sed -e \"s/\\\\/\\\\\\\\/g\" >sedcmd.tmp\nsed -f sedcmd.tmp inputfile\ndel /q sedcmd.tmp\n\n[Edited]: I am suprised that it didn't work for you. I just tested it, and it worked on my machine. I am using sed from http://sourceforge.net/projects/unxutils and using cmd.exe to run those commands in a bat file.\n",
"You could try as alternative (from the command prompt) ...\n> cygpath -m c:\\some\\path\nc:/some/path\n\nAs you can guess, it converts backslashes to slashes.\n",
"@Alexandru & Jeremy, Thanks for your help. You both get upvotes\n@Jeremy\nUsing your method I got the following error:\n\nsed: -e expression #1, char 8:\n unterminated `s' command\n\nIf you can edit your answer to make it work I'd accept it. (pasting my solution doesn't count)\nUpdate: Ok, I tried it with UnixUtils and it worked. (For reference, the UnixUtils I downloaded was dated March 1, 2007, and uses GNU sed version 3.02, my Cygwin install has GNU sed version 4.1.5)\n"
] | [
2,
2,
0
] | [] | [] | [
"dos",
"scripting",
"shell"
] | stackoverflow_0000035286_dos_scripting_shell.txt |
Q:
Encapsulate multiple properties at once using Resharper 4.0
When using Resharper to encapsulate a class's properties, is there a way to get it to do more than one property at a time?
A:
You might or might not already know this (R# does suffer from a lack of discoverability, unless you get the one-page key-shortcut page printed out), but ALT-INS opens a box which can at least mass-generate properties for fields.
Not sure if that's any use - it's not the same as a retrospective encapsulation.
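To make the before/after concrete, here is roughly what generating properties for two fields produces (a hand-written C# sketch, not ReSharper's literal output):
// Before: the class only has the two private fields.
// After Alt+Ins "Generate properties": each selected field gets a wrapper.
public class Customer
{
    private string name;
    private int age;

    public string Name
    {
        get { return name; }
        set { name = value; }
    }

    public int Age
    {
        get { return age; }
        set { age = value; }
    }
}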
| Encapsulate multiple properties at once using Resharper 4.0 | When using Resharper to encapsulate a class's properties, is there a way to get it to do more than one property at a time?
| [
"You might or might not already know this (R# does suffer from a lack of discoverability, unless you get the one-page key-shortcut page printed out), but ALT-INS opens a box which can at least mass-generate properties for fields.\nNot sure if that's any use - it's not the same as a retrospective encapsulation.\n"
] | [
9
] | [
"I don't think there such a feature out of the box.\nHowever, you could write a RS plugin that does this. But this would be another question...\n"
] | [
-1
] | [
"resharper",
"visual_studio"
] | stackoverflow_0000035402_resharper_visual_studio.txt |
Q:
What is the best solution for maintaining backup and revision control on live websites?
What is the best solution for maintaining backup and revision control on live websites?
As part of my job I work with several live websites. We need an efficient means of maintaining backups of the live folders over time. Additionally, updating these sites can be a pain, especially if a change happens to break in the live environment for whatever reason.
What would be ideal would be hassle-free source control. I implemented SVN for a while which was great as a semi-solution for backup as well as revision control (easy reversion of temporary or breaking changes) etc.
Unfortunately SVN places .SVN hidden directories everywhere which cause problems, especially when other developers make folder structure changes or copy/move website directories. I've heard the argument that this is a matter of education etc. but the approach taken by SVN is simply not a practical solution for us.
I am thinking that maybe an incremental backup solution may be better.
Other possibilities include:
SVK, which is command-line only which becomes a problem. Besides, I am unsure on how appropriate this would be.
Mercurial, perhaps with some triggers to hide the distributed component which is not required in this case and would be unnecessarily complicated for other developers.
I experimented briefly with Mercurial but couldn't find a nice way to have the repository separate and kept constantly in-sync with the live folder working copy. Maybe as a source control solution (making repository and live folder the same place) combined with another backup solution this could be the way to go.
One downside of Mercurial is that it doesn't place empty folders under source control which is problematic for websites which often have empty folders as placeholder locations for file uploads etc.
Rsync, which I haven't really investigated.
I'd really appreciate your advice on the best way to maintain backups of live websites, ideally with an easy means of retrieving past versions quickly.
Answer replies:
@Kibbee:
It's not so much about education as no familiarity with anything but VSS and a lack of time/effort to learn anything else.
The xcopy/7-zip approach sounds reasonable I guess but it would quickly take up a lot of room right?
As far as source control, I think I'd like the source control to just say that "this is the state of the folder now, I'll deal with that and if I can't match stuff up that's your fault, I'll just start new histories" rather than fail hard.
@Steve M:
Yeah that's a nicer way of doing it but would require a significant cultural change. Having said that I very much like this approach.
@mk:
Nice, I didn't think about using Rsync to deploy. Does this only upload the differences? Overwriting the entire live directory every time we make a change would be problematic due to site downtime.
I am still curious to see if there are any more traditional options
A:
You can still use SVN, but instead of doing a checkout on your live environment, do an export, that way no .svn directories will be created. The downside, of course, is that no code changes on your live environment can take place. This is a good thing.
As a general rule, code changes on production systems should never be allowed. The change should be made and tested in a development/test/UAT environment, then once confirmed as OK, you can tag that code in SVN with something like RELEASE-x-x-x. Then, on the live system, export the code with that tag.
A:
We use option 3. Rsync. I wrote a bash script to do this along with some extra checking, but here are the basics of what it does.
Make a tag for pushing to live.
Run svn export on that tag.
rsync to live.
So far it has been working out. We don't have to worry about user conflicts or have a separate user for running svn up on the production machine.
A:
Any source control solution you pick is going to have problems if people are moving, deleting, or adding files and not telling the source control system about it. I'm not aware of any source control item that could solve this problem.
In the case where you just can't educate the people working on the project[1], then you may just have to go with daily snapshots. Something as simple as batch file using xcopy to a network drive, and possibly 7-zip on the command line to compress it so it doesn't take up too much space would probably be the simplest solution.
[1] I would highly disbelieve this, probably just more a case of people being too stubborn and not willing to learn, or do "extra work". Nevermind how much time source control could save them when they have to go back to previous versions, or 2 people have edited the same file.
A:
rsync will only upload the differences. I haven't personally used it, but Mark Pilgrim wrote a long time ago about how it even handles binary diffs brilliantly.
svn+rsync sounds like a fantastic solution. I'll have to try that in the future.
| What is the best solution for maintaining backup and revision control on live websites? | What is the best solution for maintaining backup and revision control on live websites?
As part of my job I work with several live websites. We need an efficient means of maintaining backups of the live folders over time. Additionally, updating these sites can be a pain, especially if a change happens to break in the live environment for whatever reason.
What would be ideal would be hassle-free source control. I implemented SVN for a while which was great as a semi-solution for backup as well as revision control (easy reversion of temporary or breaking changes) etc.
Unfortunately SVN places .SVN hidden directories everywhere which cause problems, especially when other developers make folder structure changes or copy/move website directories. I've heard the argument that this is a matter of education etc. but the approach taken by SVN is simply not a practical solution for us.
I am thinking that maybe an incremental backup solution may be better.
Other possibilities include:
SVK, which is command-line only which becomes a problem. Besides, I am unsure on how appropriate this would be.
Mercurial, perhaps with some triggers to hide the distributed component which is not required in this case and would be unnecessarily complicated for other developers.
I experimented briefly with Mercurial but couldn't find a nice way to have the repository separate and kept constantly in-sync with the live folder working copy. Maybe as a source control solution (making repository and live folder the same place) combined with another backup solution this could be the way to go.
One downside of Mercurial is that it doesn't place empty folders under source control which is problematic for websites which often have empty folders as placeholder locations for file uploads etc.
Rsync, which I haven't really investigated.
I'd really appreciate your advice on the best way to maintain backups of live websites, ideally with an easy means of retrieving past versions quickly.
Answer replies:
@Kibbee:
It's not so much about education as no familiarity with anything but VSS and a lack of time/effort to learn anything else.
The xcopy/7-zip approach sounds reasonable I guess but it would quickly take up a lot of room right?
As far as source control, I think I'd like the source control to just say that "this is the state of the folder now, I'll deal with that and if I can't match stuff up that's your fault, I'll just start new histories" rather than fail hard.
@Steve M:
Yeah that's a nicer way of doing it but would require a significant cultural change. Having said that I very much like this approach.
@mk:
Nice, I didn't think about using Rsync to deploy. Does this only upload the differences? Overwriting the entire live directory every time we make a change would be problematic due to site downtime.
I am still curious to see if there are any more traditional options
| [
"You can still use SVN, but instead of doing a checkout on your live environment, do an export, that way no .svn directories will be created. The downside, of course, is that no code changes on your live environment can take place. This is a good thing.\nAs a general rule, code changes on production systems should never be allowed. The change should be made and tested in a development/test/UAT environment, then once confirmed as OK, you can tag that code in SVN with something like RELEASE-x-x-x. Then, on the live system, export the code with that tag.\n",
"We use option 3. Rsync. I wrote a bash script to do this along with some extra checking, but here are the basics of what it does.\n\nMake a tag for pushing to live. \nRun svn export on that tag.\nrsync to live.\n\nSo far it has been working out. We don't have to worry about user conflicts or have a separate user for running svn up on the production machine.\n",
"Any source control solution you pick is going to have problems if people are moving, deleting, or adding files and not telling the source control system about it. I'm not aware of any source control item that could solve this problem. \nIn the case where you just can't educate the people working on the project[1], then you may just have to go with daily snapshots. Something as simple as batch file using xcopy to a network drive, and possibly 7-zip on the command line to compress it so it doesn't take up too much space would probably be the simplest solution.\n[1] I would highly disbelieve this, probably just more a case of people being too stubborn and not willing to learn, or do \"extra work\". Nevermind how much time source control could save them when they have to go back to previous versions, or 2 people have edited the same file.\n",
"rsync will only upload the differences. I haven't personally used it, but Mark Pilgrim wrote a long time ago about how it even handles binary diffs brilliantly.\nsvn+rsync sounds like a fantastic solution. I'll have to try that in the future.\n"
] | [
4,
2,
1,
1
] | [] | [] | [
"backup",
"hosting",
"version_control"
] | stackoverflow_0000027292_backup_hosting_version_control.txt |
Q:
How can I point Visual Studio 2008 to a new path for projects?
I didn't see the option to point the workspace (or its VS equivalent; I'm still learning the terminology for Visual Studio, but it is called a workspace in Eclipse) to My Documents/Programming instead of -- well -- wherever it is now.
A:
What Craig said, plus if you do want to change the default it's in Tools -> Options -> Projects And Solutions.
I've never changed the default and never created a solution/project in the default location, which might tell you something about how relevant it is...
A:
Tools -> Options -> Projects & Solutions.
There is a Visual Studio Projects box.
A:
When you create the project you can specify whatever directory you want, you are not limited to the default.
| How can I point Visual Studio 2008 to a new path for projects? | I didn't see the option to point the workspace (or its VS equivalent; I'm still learning the terminology for Visual Studio, but it is called a workspace in Eclipse) to My Documents/Programming instead of -- well -- wherever it is now.
| [
"What Craig said, plus if you do want to change the default it's in Tools -> Options -> Projects And Solutions.\nI've never changed the default and never created a solution/project in the default location, which might tell you something about how relevant it is...\n",
"Tools -> Options -> Projects & Solutions.\nThere is a Visual Studio Projects box.\n",
"When you create the project you can specify whatever directory you want, you are not limited to the default.\n"
] | [
1,
1,
0
] | [] | [] | [
"visual_studio"
] | stackoverflow_0000035432_visual_studio.txt |
Q:
How do I get the assembler output from a C file in VS2005
I think the file that is produced is an .asm file, any idea how to produce this in Visual Studio when you do a build?
A:
Project->Properties->Configuration Properties->C/C++->Output Files
There you should see an option for Assembler Output.
John.
A:
Open the Properties page for a project
Select the Configuration Properties -> C/C++ -> Output Files branch
Change the Assembler Output option to something other than No Listing
Make sure ASM List Location is set to a valid path or sub-path
Build.
A:
Or if using the Visual Studio command line,
cl.exe /Fa[assembler code filename]
If you just want to view the assembler code in the IDE you can start the debugger and then select the Go To Disassembly choice on the right-click menu.
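To make the command-line option concrete, a typical invocation might be (file names are placeholders; /FAs interleaves the C source with the generated assembly):
rem Produce foo.asm next to the object file
cl /c /FAs /Fafoo.asm foo.c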
| How do I get the assembler output from a C file in VS2005 | I think the file that is produced is an .asm file, any idea how to produce this in Visual Studio when you do a build?
| [
"Project->Properties->Configuration Properties->C/C++->Output Files\nThere you should see an option for Assembler Output.\nJohn.\n",
"\nOpen the Properties page for a project \nSelect the Configuration Properties -> C/C++ -> Output Files branch \nChange the Assembler Output option to something other than No Listing \nMake sure ASM List Location is set to a valid path or sub-path\nBuild.\n\n",
"Or if using the Visual Studio command line, \ncl.exe /Fa[assembler code filename]\nIf you just want to view the assembler code in the IDE you can start the debugger and then select the Go To Dissassembly choice on the right click menu.\n"
] | [
5,
5,
2
] | [] | [] | [
"assembly",
"c",
"visual_studio"
] | stackoverflow_0000034635_assembly_c_visual_studio.txt |
Q:
How do you use XML::Parser with Style => 'Objects'
The manual page for XML::Parser::Style::Objects is horrible. A simple hello world style program would really be helpful.
I really wanted to do something like this: (not real code of course)
use XML::Parser;
my $p = XML::Parser->new(Style => 'Objects', Pkg => 'MyNode');
my $tree = $p->parsefile('foo.xml');
$tree->doSomething();
MyNode::doSomething() {
my $self = shift;
print "This is a normal node";
for $kid ($self->Kids)
{
$kid->doSomething();
}
}
MyNode::special::doSomething() {
my $self = shift;
print "This is a special node";
}
A:
Whenever I need to do something similar, I usually end up using XML::Parser::EasyTree; it has better documentation and is simpler to use.
I highly recommend it.
A:
In all cases here is actual code that runs ... doesn't mean much but produces output and hopefully can get you started ...
use XML::Parser;
package MyNode::inner;
sub doSomething {
my $self = shift;
print "This is an inner node containing : ";
print $self->{Kids}->[0]->{Text};
print "\n";
}
package MyNode::Characters;
sub doSomething {}
package MyNode::foo;
sub doSomething {
my $self = shift;
print "This is an external node\n";
for $kid (@ { $self->{Kids} }) {
$kid->doSomething();
}
}
package main;
my $p = XML::Parser->new(Style => 'Objects', Pkg => 'MyNode');
my $tree = $p->parsefile('foo.xml');
for (@$tree) {
$_->doSomething();
}
with foo.xml
<foo> <inner>some text</inner> <inner>something else</inner></foo>
which outputs
>perl -w "tree.pl"
This is an external node
This is an inner node containing : some text
This is an inner node containing : something else
Hope that helps.
| How do you use XML::Parser with Style => 'Objects' | The manual page for XML::Parser::Style::Objects is horrible. A simple hello world style program would really be helpful.
I really wanted to do something like this: (not real code of course)
use XML::Parser;
my $p = XML::Parser->new(Style => 'Objects', Pkg => 'MyNode');
my $tree = $p->parsefile('foo.xml');
$tree->doSomething();
MyNode::doSomething() {
my $self = shift;
print "This is a normal node";
for $kid ($self->Kids)
{
$kid->doSomething();
}
}
MyNode::special::doSomething() {
my $self = shift;
print "This is a special node";
}
| [
"When ever I need to do something similar, usually I end up using XML::Parser::EasyTree it has better documentation and is simpler to use.\nI highly recommend it.\n",
"In all cases here is actual code that runs ... doesn't mean much but produces output and hopefully can get you started ...\nuse XML::Parser;\n\npackage MyNode::inner;\n sub doSomething {\n my $self = shift;\n print \"This is an inner node containing : \";\n print $self->{Kids}->[0]->{Text};\n print \"\\n\";\n }\npackage MyNode::Characters;\n sub doSomething {}\npackage MyNode::foo;\n sub doSomething {\n my $self = shift;\n print \"This is an external node\\n\";\n for $kid (@ { $self->{Kids} }) {\n $kid->doSomething();\n }\n }\n\npackage main;\n\nmy $p = XML::Parser->new(Style => 'Objects', Pkg => 'MyNode');\nmy $tree = $p->parsefile('foo.xml');\nfor (@$tree) {\n $_->doSomething();\n}\n\nwith foo.xml\n <foo> <inner>some text</inner> <inner>something else</inner></foo>\n\nwhich outputs\n>perl -w \"tree.pl\" \nThis is an external node\nThis is an inner node containing : some text\nThis is an inner node containing : something else\n\nHope that helps.\n"
] | [
2,
1
] | [] | [] | [
"perl",
"xml"
] | stackoverflow_0000034914_perl_xml.txt |
Q:
Compiler Error C2143 when using a struct
I'm compiling a simple .c in visual c++ with Compile as C Code (/TC)
and i get this compiler error
error C2143: syntax error : missing ';' before 'type'
on a line that calls for a simple struct
struct foo test;
same goes for using the typedef of the struct.
error C2275: 'FOO' : illegal use of this type as an expression
A:
I forgot that in C you have to declare all your variables before any code.
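To illustrate the point: compiled as C (/TC, so C89 rules apply), a declaration that appears after a statement triggers exactly this error, while grouping the declarations at the top of the block compiles cleanly. A small self-contained sketch:
struct foo { int value; };

/* Fine when compiled as C: all declarations come before any statements. */
void ok(void)
{
    struct foo test;
    int x = 0;
    test.value = x;
}

/* Triggers error C2143 when compiled as C, because the declaration of
   'test' appears after a statement (C89 forbids mixed declarations). */
void broken(void)
{
    int x = 0;
    x = 1;
    struct foo test;    /* error C2143: missing ';' before 'type' */
    test.value = x;
}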
A:
Did you accidentally omit a semicolon on a previous line? If the previous line is an #include, you might have to look elsewhere for the missing semicolon.
Edit: If the rest of your code is valid C++, then there probably isn't enough information to determine what the problem is. Perhaps you could post your code to a pastebin so we can see the whole thing.
Ideally, in the process of making it smaller to post, it will suddenly start working and you'll then have discovered the problem!
A:
Because you've already made a typedef for the struct (because you used the 's1' version), you should write:
foo test;
rather than
struct foo test;
That will work in both C and C++
A:
How is your structure type defined? There are two ways to do it:
// This will define a typedef for S1, in both C and in C++
typedef struct {
int data;
int text;
} S1;
// This does NOT create a typedef: C++ accepts plain 'S2 s;', but in C you must write 'struct S2 s;'.
struct S2 {
int data;
int text;
};
A:
C2143 basically says that the compiler got a token that it thinks is illegal in the current context. One of the implications of this error is that the actual problem may exist before the line that triggers the compiler error. As Greg said I think we need to see more of your code to diagnose this problem.
I'm also not sure why you think the fact that this is valid C++ code is helpful when attempting to figure out why it doesn't compile as C? C++ is (largely) a superset of C so there's any number of reasons why valid C++ code might not be syntactically correct C code, not least that C++ treats structs as classes!
| Compiler Error C2143 when using a struct | I'm compiling a simple .c in visual c++ with Compile as C Code (/TC)
and i get this compiler error
error C2143: syntax error : missing ';' before 'type'
on a line that calls for a simple struct
struct foo test;
same goes for using the typedef of the struct.
error C2275: 'FOO' : illegal use of this type as an expression
| [
"I forgot that in C you have to declare all your variables before any code.\n",
"Did you accidentally omit a semicolon on a previous line? If the previous line is an #include, you might have to look elsewhere for the missing semicolon.\nEdit: If the rest of your code is valid C++, then there probably isn't enough information to determine what the problem is. Perhaps you could post your code to a pastebin so we can see the whole thing.\nIdeally, in the process of making it smaller to post, it will suddenly start working and you'll then have discovered the problem!\n",
"Because you've already made a typedef for the struct (because you used the 's1' version), you should write:\nfoo test;\n\nrather than \nstruct foo test;\n\nThat will work in both C and C++\n",
"How is your structure type defined? There are two ways to do it:\n// This will define a typedef for S1, in both C and in C++\ntypedef struct {\n int data;\n int text;\n} S1;\n\n// This will define a typedef for S2 ONLY in C++, will create error in C.\nstruct S2 {\n int data;\n int text; \n};\n\n",
"C2143 basically says that the compiler got a token that it thinks is illegal in the current context. One of the implications of this error is that the actual problem may exist before the line that triggers the compiler error. As Greg said I think we need to see more of your code to diagnose this problem.\nI'm also not sure why you think the fact that this is valid C++ code is helpful when attempting to figure out why it doesn't compile as C? C++ is (largely) a superset of C so there's any number of reasons why valid C++ code might not be syntactically correct C code, not least that C++ treats structs as classes!\n"
] | [
6,
1,
1,
0,
0
] | [] | [] | [
"c",
"visual_c++"
] | stackoverflow_0000035333_c_visual_c++.txt |
Q:
Google Talk's Graphics Toolkit?
What graphics toolkit is used for the Windows Google Talk application?
A:
There isn't much information on this out there but it seems to be their own customized controls plus an IE component (and not Qt like Google Earth). This forum thread has a little bit of information.
A:
I'm not positive but I believe it's QT.
| Google Talk's Graphics Toolkit? | What graphics toolkit is used for the Windows Google Talk application?
| [
"There isn't much information on this out there but it seems to be their own customized controls plus an IE component (and not Qt like Google Earth). This forum thread has a little bit of information.\n",
"I'm not positive but I believe it's QT. \n"
] | [
1,
0
] | [] | [] | [
"toolkit",
"user_interface",
"windows"
] | stackoverflow_0000034711_toolkit_user_interface_windows.txt |
Q:
Lightweight rich-text XML format?
I am writing a basic word processing application and am trying to settle on a native "internal" format, the one that my code parses in order to render to the screen. I'd like this to be XML so that I can, in the future, just write XSLT to convert it to ODF or XHTML or whatever.
When searching for existing standards to use, the only one that looks promising is ODF. But that looks like massive overkill for what I need. All I need is paragraph tags, font selection, font size & decoration...that's pretty much it. It would take me a long time to implement even a minimal ODF renderer, and I'm not sure it's worth the trouble.
Right now I'm thinking of making my own XML format, but that's not really good practice. Better to use a standard, especially since then I can probably find the XSLTs I might need in the future already written.
Or should I just bite the bullet and implement ODF?
EDIT: Regarding the Answer
I knew about XSL-FO before, but due to the weight of the spec hadn't really considered it. But you're right, a subset would give me everything I need to work with and room to grow. Thanks so much for the reminder.
Plus, by including a rendering library like FOP or RenderX, I get PDF generation for free. Not bad...
A:
As you are sure about needing to represent the presentational side of things, it may be worth looking at the XSL-FO W3C Recommendation. This is a full-blown page description language and the (deeply unfashionable) other half of the better-known XSLT.
Clearly the whole thing is anything but "lightweight", but if you just incorporated a
very limited subset - which could even just be (to match your spec of "paragraph tags, font selection, font size & decoration") fo:block and the common font properties, something like:
<yourcontainer xmlns:fo="http://www.w3.org/1999/XSL/Format">
<fo:block font-family="Arial, sans-serif" font-weight="bold"
font-size="16pt">Example Heading</fo:block>
<fo:block font-family="Times, serif"
font-size="12pt">Paragraph text here etc etc...</fo:block>
</yourcontainer>
This would perhaps have a few advantages over just rolling your own. There's an open specification to work from, and all that implies. It reuses CSS properties as XML attributes (in a similar manner to SVG), so many of the formatting details will seem somewhat familiar. You'd have an upgrade path if you later decided that, say, intelligent paging was a must-have feature - including more sections of the spec as they become relevant to your application.
There's one other thing you might get from investigating XSL-FO - seeing how even just-doing-paragraphs-and-fonts can be horrendously complicated. Trying to do text layout and line breaking 'The Right Way' for various different languages and use cases seems very daunting to me.
A:
If it's only for word processing, then perhaps DocBook might be a little lighter than ODF?
However, the wiki entry states:
DocBook is a semantic markup language for technical documentation. It was originally intended for writing technical documents related to computer hardware and software but it can be used for any other sort of documentation.
So it might not be so suitable for a general-purpose word-processor?
The advantage of using DocBook would be the fact that a number of DocBook -> other format converters should be available? Hope this helps.
A:
I like DocBook, but it doesn't really fit. It strives to be presentation-independent, the intention being that you would use XSLT to render it to a presentation format.
In a word processor, the user is editing presentation along with the content. For example, the user doesn't want to mark a "keyword", necessarily, they want to make some text bold.
A DocBook editor would be a very nice thing (I'm not sure a good one exists), but it's not really what I'm doing.
A:
Well, right... But since I need to be able to convert to XML anyway, why hold both my document tree and the DOM tree in memory, when there's nothing preventing me from working right off the DOM tree?
Particularly since one unique feature of my program is that everything is always saved as you type, and I don't want to run a whole conversion to XML every time I hit a key. Easier just to tie input and output directly to my in-memory DOM tree.
Edit:
Oh, and the only problem with XHTML is that I do want to support basic pagination. Though I guess there's nothing stopping me with using some additional tags for that...
| Lightweight rich-text XML format? | I am writing a basic word processing application and am trying to settle on a native "internal" format, the one that my code parses in order to render to the screen. I'd like this to be XML so that I can, in the future, just write XSLT to convert it to ODF or XHTML or whatever.
When searching for existing standards to use, the only one that looks promising is ODF. But that looks like massive overkill for what I need. All I need is paragraph tags, font selection, font size & decoration...that's pretty much it. It would take me a long time to implement even a minimal ODF renderer, and I'm not sure it's worth the trouble.
Right now I'm thinking of making my own XML format, but that's not really good practice. Better to use a standard, especially since then I can probably find the XSLTs I might need in the future already written.
Or should I just bite the bullet and implement ODF?
EDIT: Regarding the Answer
I knew about XSL-FO before, but due to the weight of the spec hadn't really considered it. But you're right, a subset would give me everything I need to work with and room to grow. Thanks so much for the reminder.
Plus, by including a rendering library like FOP or RenderX, I get PDF generation for free. Not bad...
| [
"As you are sure about needing to represent the presentational side of things, it may be worth looking at the XSL-FO W3C Recommendation. This is a full-blown page description language and the (deeply unfashionable) other half of the better-known XSLT.\nClearly the whole thing is anything but \"lightwight\", but if you just incorporated a\nvery limited subset - which could even just be (to match your spec of \"paragraph tags, font selection, font size & decoration\") fo:block and the common font properties, something like:\n<yourcontainer xmlns:fo=\"http://www.w3.org/1999/XSL/Format\">\n <fo:block font-family=\"Arial, sans-serif\" font-weight=\"bold\"\n font-size=\"16pt\">Example Heading</fo:block>\n <fo:block font-family=\"Times, serif\"\n font-size=\"12pt\">Paragraph text here etc etc...</fo:block>\n</yourcontainer>\n\nThis would perhaps have a few advantages over just rolling your own. There's an open specification to work from, and all that implies. It reuses CSS properties as XML attributes (in a similar manner to SVG), so many of the formatting details will seem somewhat familiar. You'd have an upgrade path if you later decided that, say, intelligent paging was a must-have feature - including more sections of the spec as they become relevant to your application.\nThere's one other thing you might get from investigating XSL-FO - seeing how even just-doing-paragraphs-and-fonts can be horrendously complicated. Trying to do text layout and line breaking 'The Right Way' for various different languages and use cases seems very daunting to me. \n",
"If its only for word processing, then perhaps DocBook might be a little lighter than ODF?\nHowever, the wiki entry states:\n\nDocBook is a semantic markup language for technical documentation. It was originally intended for writing technical documents related to computer hardware and software but it can be used for any other sort of documentation.\n\nSo it might not be so suitable for a general-purpose word-processor?\nThe advantage of using DocBook would be the fact that a number of DocBook -> other format converters should be available? Hope this helps.\n",
"I like DocBook, but it doesn't really fit. It strives to be presentation-independent, the intention being that you would use XSLT to render it to a presentation format.\nIn a word processor, the user is editing presentation along with the content. For example, the user doesn't want to mark a \"keyword\", necessarily, they want to make some text bold.\nA DocBook editor would be a very nice thing (I'm not sure a good one exists), but it's not really what I'm doing.\n",
"Well, right... But since I need to be able to convert to XML anyway, why hold both my document tree and the DOM tree in memory, when there's nothing preventing me from working right off the DOM tree?\nParticularly since one unique feature of my program is that everything is always saved as you type, and I don't want to run a whole conversion to XML every time I hit a key. Easier just to tie input and output directly to my in-memory DOM tree.\nEdit:\nOh, and the only problem with XHTML is that I do want to support basic pagination. Though I guess there's nothing stopping me with using some additional tags for that...\n"
] | [
5,
1,
1,
0
] | [
"XML is an external format, not internal.\nWhat's wrong with XHTML? It's simple and it's ubiquitous (at least HTML is). Your implementation would be easy to debug, and your users will be eternally greatful.\n"
] | [
-1
] | [
"standards",
"xml"
] | stackoverflow_0000031226_standards_xml.txt |
Q:
Bypass Forms Authentication auto redirect to login, How to?
I'm writing an app using asp.net-mvc deploying to iis6. I'm using forms authentication. Usually when a user tries to access a resource without proper authorization I want them to be redirected to a login page. FormsAuth does this for me easy enough.
Problem: Now I have an action being accessed by a console app. What's the quickest way to have this action respond w/ status 401 instead of redirecting the request to the login page?
I want the console app to be able to react to this 401 StatusCode instead of it being transparent. I'd also like to keep the default, redirect unauthorized requests to login page behavior.
Note: As a test I added this to my global.asax and it didn't bypass forms auth:
protected void Application_AuthenticateRequest(object sender, EventArgs e)
{
HttpContext.Current.SkipAuthorization = true;
}
@Dale and Andy
I'm using the AuthorizeAttributeFilter provided in MVC preview 4. This is returning an HttpUnauthorizedResult. This result is correctly setting the statusCode to 401. The problem, as I understand it, is that asp.net is intercepting the response (since it's tagged as a 401) and redirecting to the login page instead of just letting it go through. I want to bypass this interception for certain URLs.
A:
Ok, I worked around this. I made a custom ActionResult (HttpForbiddenResult) and custom ActionFilter (NoFallBackAuthorize).
To avoid redirection, HttpForbiddenResult marks responses with status code 403. FormsAuthentication doesn't catch responses with this code, so the login redirection is effectively skipped. The NoFallBackAuthorize filter checks whether the user is authorized, much like the included Authorize filter. It differs in that it returns HttpForbiddenResult when access is denied.
The HttpForbiddenResult is pretty trivial:
public class HttpForbiddenResult : ActionResult
{
public override void ExecuteResult(ControllerContext context)
{
if (context == null)
{
throw new ArgumentNullException("context");
}
context.HttpContext.Response.StatusCode = 0x193; // 403
}
}
It doesn't appear to be possible to skip the login page redirection in the FormsAuthenticationModule.
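For reference, a rough sketch of what the NoFallBackAuthorize filter could look like (a guess at its shape using the same Preview 4 FilterExecutingContext API shown in the next answer, not the original code; it sets the 403 directly rather than going through HttpForbiddenResult):
// Assumes System.Web.Mvc from MVC Preview 4
public class NoFallBackAuthorizeAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(FilterExecutingContext filterContext)
    {
        if (!filterContext.HttpContext.User.Identity.IsAuthenticated)
        {
            // 403 instead of 401, so FormsAuthentication leaves the response alone
            filterContext.HttpContext.Response.StatusCode = 0x193; // 403
            filterContext.Cancel = true;
        }
    }
}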
A:
Might be a kludge (and may not even work) but on your Login page see if Request.QueryString["ReturnUrl"] != null and if so set Response.StatusCode = 401.
Bear in mind that you'll still need to get your console app to authenticate somehow. You don't get HTTP basic auth for free: you have to roll your own, but there are plenty of implementations about.
A:
Did you write your own FormsAuth attribute for the action? If so, in the OnActionExecuting method, you get passed the FilterExecutingContext. You can use this to pass back the 401 code.
public class FormsAuth : ActionFilterAttribute
{
public override void OnActionExecuting(FilterExecutingContext filterContext)
{
filterContext.HttpContext.Response.StatusCode = 401;
filterContext.Cancel = true;
}
}
This should work. I am not sure if you wrote the FormsAuth attribute or if you got it from somewhere else.
A:
I haven't used the AuthorizeAttribute that comes in Preview 4 yet. I rolled my own, because I have been using the MVC framework since the first CTP. I took a quick look at the attribute in reflector and it is doing what I mentioned above internally, except they use the hex equivalent of 401. I will need to look further up the call, to see where the exception is caught, because more than likely that is where they are doing the redirect. This is the functionality you will need to override. I am not sure if you can do it yet, but I will post back when I find it and give you a work around, unless Haacked sees this and posts it himself.
| Bypass Forms Authentication auto redirect to login, How to? | I'm writing an app using asp.net-mvc deploying to iis6. I'm using forms authentication. Usually when a user tries to access a resource without proper authorization I want them to be redirected to a login page. FormsAuth does this for me easy enough.
Problem: Now I have an action being accessed by a console app. What's the quickest way to have this action respond w/ status 401 instead of redirecting the request to the login page?
I want the console app to be able to react to this 401 StatusCode instead of it being transparent. I'd also like to keep the default, redirect unauthorized requests to login page behavior.
Note: As a test I added this to my global.asax and it didn't bypass forms auth:
protected void Application_AuthenticateRequest(object sender, EventArgs e)
{
HttpContext.Current.SkipAuthorization = true;
}
@Dale and Andy
I'm using the AuthorizeAttributeFilter provided in MVC preview 4. This is returning an HttpUnauthorizedResult. This result is correctly setting the statusCode to 401. The problem, as I understand it, is that asp.net is intercepting the response (since it's tagged as a 401) and redirecting to the login page instead of just letting it go through. I want to bypass this interception for certain URLs.
| [
"Ok, I worked around this. I made a custom ActionResult (HttpForbiddenResult) and custom ActionFilter (NoFallBackAuthorize).\nTo avoid redirection, HttpForbiddenResult marks responses with status code 403. FormsAuthentication doesn't catch responses with this code so the login redirection is effectively skipped. The NoFallBackAuthorize filter checks to see if the user is authorized much like the, included, Authorize filter. It differs in that it returns HttpForbiddenResult when access is denied. \nThe HttpForbiddenResult is pretty trivial:\n\npublic class HttpForbiddenResult : ActionResult\n{\n public override void ExecuteResult(ControllerContext context)\n {\n if (context == null)\n {\n throw new ArgumentNullException(\"context\");\n }\n context.HttpContext.Response.StatusCode = 0x193; // 403\n }\n}\n\nIt doesn't appear to be possible to skip the login page redirection in the FormsAuthenticationModule.\n",
"Might be a kludge (and may not even work) but on your Login page see if Request.QueryString[\"ReturnUrl\"] != null and if so set Response.StatusCode = 401.\nBear in mind that you'll still need to get your console app to authenticate somehow. You don't get HTTP basic auth for free: you have to roll your own, but there are plenty of implementations about.\n",
"Did you write your own FormsAuth attribute for the action? If so, in the OnActionExecuting method, you get passed the FilterExecutingContext. You can use this to pass back the 401 code.\npublic class FormsAuth : ActionFilterAttribute\n{\n public override void OnActionExecuting(FilterExecutingContext filterContext)\n {\n filterContext.HttpContext.Response.StatusCode = 401;\n filterContext.Cancel = true;\n }\n}\n\nThis should work. I am not sure if you wrote the FormsAuth attribute or if you got it from somewhere else.\n",
"I haven't used the AuthorizeAttribute that comes in Preview 4 yet. I rolled my own, because I have been using the MVC framework since the first CTP. I took a quick look at the attribute in reflector and it is doing what I mentioned above internally, except they use the hex equivalent of 401. I will need to look further up the call, to see where the exception is caught, because more than likely that is where they are doing the redirect. This is the functionality you will need to override. I am not sure if you can do it yet, but I will post back when I find it and give you a work around, unless Haacked sees this and posts it himself.\n"
] | [
5,
1,
0,
0
] | [
"I did some googling and this is what I came up with:\n\n HttpContext.Current.Response.StatusCode = 401;\n\nNot sure if it works or not, I haven't tested it. Either way, it's worth a try, right? :)\n"
] | [
-2
] | [
"asp.net",
"asp.net_mvc",
"forms_authentication"
] | stackoverflow_0000035322_asp.net_asp.net_mvc_forms_authentication.txt |
Q:
nmake, visualstudio, and .mak files
I was given a C++ project that was compiled using MS Visual Studio .net 2003 C++ compiler, and a .mak file that was used to compile it. I am able to build it from the command line using nmake project.mak, but the compiler complains that afxres.h was not found. I did a little searching around and the afxres.h is in the Visual Studio directory in an includes file. Where am I supposed to specify to nmake where to look for this header file?
A:
There should be an icon in your Start menu under Programs that opens a cmd.exe instance with all the correct MSVS environment variables set up for command line building.
A:
Another option is running the appropriate vars batch file from a regular command prompt. The name and location varies from version to version. For VS2003, I believe it's
C:\Program Files\Microsoft Visual Studio .NET 2003\Common7\Tools\vsvars32.bat
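So from a plain command prompt the build would look something like this (using the VS2003 path above and the makefile from the question):
"C:\Program Files\Microsoft Visual Studio .NET 2003\Common7\Tools\vsvars32.bat"
nmake /f project.mak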
| nmake, visualstudio, and .mak files | I was given a C++ project that was compiled using MS Visual Studio .net 2003 C++ compiler, and a .mak file that was used to compile it. I am able to build it from the command line using nmake project.mak, but the compiler complains that afxres.h was not found. I did a little searching around and the afxres.h is in the Visual Studio directory in an includes file. Where am I supposed to specify to nmake where to look for this header file?
| [
"There should be an icon in your Start menu under Programs that opens a cmd.exe instance with all the correct MSVS environment variables set up for command line building.\n",
"Another option is running the appropriate vars batch file from a regular command prompt. The name and location varies from version to version. For VS2003, I believe it's\nC:\\Program Files\\Microsoft Visual Studio .NET 2003\\Common7\\Tools\\vsvars32.bat\n"
] | [
4,
2
] | [] | [] | [
"nmake",
"visual_studio"
] | stackoverflow_0000035429_nmake_visual_studio.txt |
Q:
looping and average in c++
Programming Student here...trying to work on a project but I'm stuck.
The project is trying to find the miles per gallon per trip, then at the end output the total miles and total gallons used and the average miles per gallon.
How do I loop back up to the first question after the first set of questions has been asked.
Also how will I average the trips...will I have to have a variable for each of the trips?
I'm stuck, any help would be great!
A:
You will have to tell us the type of data you are given.
As per your last question: remember that an average can be calculated in real time by either storing the sum and the number of data points (two numbers), or the current average and the number of data points (again, two numbers).
For instance:
class Averager {
double avg;
int n;
public:
Averager() : avg(0), n(0) {}
void addPoint(double v) {
avg = (n * avg + v) / (n + 1);
n++;
}
double average() const { return avg; }
};
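To tie this back to the looping part of the question, here is a small sketch of my own (not from the answer above) that drives the class from a loop and keeps running totals; it assumes the Averager class above is in the same file:
#include <iostream>

int main() {
    Averager mpg;                      // the Averager class defined above
    double totalMiles = 0, totalGallons = 0;
    double miles = 0, gallons = 0;

    std::cout << "Miles driven (0 to finish): ";
    while (std::cin >> miles && miles > 0) {
        std::cout << "Gallons used: ";
        std::cin >> gallons;
        double tripMpg = miles / gallons;
        std::cout << "MPG for this trip: " << tripMpg << "\n";

        totalMiles += miles;
        totalGallons += gallons;
        mpg.addPoint(tripMpg);

        std::cout << "Miles driven (0 to finish): ";
    }

    std::cout << "Total miles: " << totalMiles
              << "  Total gallons: " << totalGallons << "\n";
    if (totalGallons > 0) {
        std::cout << "Overall MPG: " << totalMiles / totalGallons
                  << "  Mean of per-trip MPG: " << mpg.average() << "\n";
    }
    return 0;
}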
| looping and average in c++ | Programming Student here...trying to work on a project but I'm stuck.
The project is trying to find the miles per gallon per trip, then at the end output the total miles and total gallons used and the average miles per gallon.
How do I loop back up to the first question after the first set of questions has been asked.
Also how will I average the trips...will I have to have a variable for each of the trips?
I'm stuck, any help would be great!
| [
"You will have to tell us the type of data you are given.\nAs per your last question: remember that an average can be calculated in real time by either storing the sum and the number of data points (two numbers), or the current average and the number of data points (again, two numbers).\nFor instance:\nclass Averager {\n double avg;\n int n;\npublic:\n Averager() : avg(0), n(0) {}\n void addPoint(double v) {\n avg = (n * avg + v) / (n + 1);\n n++;\n }\n double average() const { return avg; }\n};\n\n"
] | [
2
] | [] | [] | [
"c++"
] | stackoverflow_0000035522_c++.txt |
Q:
Trouble using JRun to Host Java Servlets
I am deploying new versions of java servlets with JRun as the host. I am having difficulty finding good sources for information about JRun and tutorials about how to configure and manage it.
After installing JRun and opening the launcher it can't start the admin server that it creates by default...so obviously I'm running into some issues just getting started.
edit for clarity: I'm not looking for an answer or help with a specific error; I'm just after good sources of information about JRun and how to work with it so I can quickly bring myself up to speed on it.
A:
Jrun development has pretty much stopped. You should look into running another application server. Jboss or Glassfish are good alternatives.
A:
This is probably going to be difficult to resolve unless you post either the error message from the log file or the list of steps that you took so far.
I have JRun 3.1 configured on my machine so maybe I can duplicate your issue if you give us more information.
A:
I didn't know JRun was even still in existence since 1999 or something like that. Anyway, Tomcat or Jetty would be my easy replacements. Tomcat for its scriptability from ANT etc and Jetty for its pure simplicity (start an instance in 5 lines of code!).
Glassfish is a huge system with many components, if you just want to host vanilla servlets and JSPs etc. then I would go for one of the above.
| Trouble using JRun to Host Java Servlets | I am deploying new versions of java servlets with JRun as the host. I am having difficulty finding good sources for information about JRun and tutorials about how to configure and manage it.
After installing JRun and opening the launcher it can't start the admin server that it creates by default...so obviously I'm running into some issues just getting started.
edit for clarity: I'm not looking for an answer or help with a specific error; I'm just after good sources of information about JRun and how to work with it so I can quickly bring myself up to speed on it.
| [
"Jrun development has pretty much stopped. You should look into running another application server. Jboss or Glassfish are good alternatives.\n",
"This is probably going to be difficult to resolve unless you post either the error message from the log file or the list of steps that you took so far.\nI have JRun 3.1 configured on my machine so maybe I can duplicate your issue if you give us more information.\n",
"I didn't know JRun was even still in existence since 1999 or something like that. Anyway, Tomcat or Jetty would be my easy replacements. Tomcat for its scriptability from ANT etc and Jetty for its pure simplicity (start an instance in 5 lines of code!).\nGlassfish is a huge system with many components, if you just want to host vanilla servlets and JSPs etc. then I would go for one of the above.\n"
] | [
2,
1,
1
] | [] | [] | [
"hosting",
"java",
"jrun",
"servlets"
] | stackoverflow_0000034726_hosting_java_jrun_servlets.txt |
Q:
Excluding Code Analysis rule in source
In a project I'm working on FxCop shows me lots of (and I mean more than 400) errors on the InitializeComponent() methods generated by the Windows Forms designer. Most of those errors are just the assignment of the Text property of labels.
I'd like to suppress those methods in source, so I copied the suppression code generated by FxCop into AssemblyInfo.cs, but it doesn't work.
This is the attribute that FxCop copied to the clipboard.
[module: SuppressMessage("Microsoft.Globalization",
"CA1303:DoNotPassLiteralsAsLocalizedParameters",
Scope = "member",
Target = "WindowsClient.MainForm.InitializeComponent():System.Void",
MessageId = "System.Windows.Forms.Control.set_Text(System.String)")]
Anyone knows the correct attribute to suppress this messages?
PS: I'm using Visual Studio 2005, C#, FxCop 1.36 beta.
A:
You've probably got the right code, but you also need to add CODE_ANALYSIS as a precompiler defined symbol in the project properties. I think those SuppressMessage attributes are only left in the compiled binaries if CODE_ANALYSIS is defined.
A:
In FxCop 1.36 there is actually a project option on the "Spelling & Analysis" tab that will suppress analysis for any generated code.
If you don't want to turn analysis off for all generated code, you need to make sure that you add a CODE_ANALYSIS symbol to the list of conditional compilation symbols (project properties, Build tab). Without this symbol defined, the SuppressMessage attributes will be removed from the compiled code so FxCop won't see them.
The other problem with your SuppressMessage attribute is that you are listing a "Target" of a specific method name (in this case WindowsClient.MainForm.InitializeComponent():System.Void) and listing a specific "Scope". You may want to try removing these; otherwise you should add this SuppressMessage to each instance of the method.
You should also upgrade to the RTM version of FxCop 1.36, the beta will not automatically detect the newer version.
A:
Module level suppression messages need to be pasted into the same file as the code that is raising the FxCop error before the namespace declaration or in assemblyinfo.cs. Additionally, you will need to have CODE_ANALYSIS defined as a conditional compiler symbols (Project > Properties > Build). Once that is in place, do a complete rebuild of project and the next time you run FxCop the error should be moved to the "Excluded in Source" tab.
Also, one small tip, but if you are dealing with a lot of FxCop exclusions it might be useful to wrap a region around them so you can get them out of the way.
| Excluding Code Analysis rule in source | In a project I'm working on FxCop shows me lots of (and I mean more than 400) errors on the InitializeComponent() methods generated by the Windows Forms designer. Most of those errors are just the assignment of the Text property of labels.
I'd like to suppress those methods in source, so I copied the suppression code generated by FxCop into AssemblyInfo.cs, but it doesn't work.
This is the attribute that FxCop copied to the clipboard.
[module: SuppressMessage("Microsoft.Globalization",
"CA1303:DoNotPassLiteralsAsLocalizedParameters",
Scope = "member",
Target = "WindowsClient.MainForm.InitializeComponent():System.Void",
MessageId = "System.Windows.Forms.Control.set_Text(System.String)")]
Anyone knows the correct attribute to suppress this messages?
PS: I'm using Visual Studio 2005, C#, FxCop 1.36 beta.
| [
"You've probably got the right code, but you also need to add CODE_ANALYSIS as a precompiler defined symbol in the project properties. I think those SuppressMessage attributes are only left in the compiled binaries if CODE_ANALYSIS is defined.\n",
"In FxCop 1.36 there is actually a project option on the \"Spelling & Analysis\" tab that will supress analysis for any generated code.\nIf you don't want to turn analysis off for all generated code, you need to make sure that you add a CODE_ANALYSIS symbol to the list of conditional compilation symbols (project properties, Build tab). Without this symbol defined, the SupressMessage attributes will be removed from the compiled code so FxCop won't see them.\nThe other problem with your SuppressMessage attribute is that you are listing a \"Target\" of a specific method name (in this case WindowsClient.MainForm.InitializeComponent():System.Void) and listing a specific \"Scope\". You may want to try removing these; otherwise you should add this SuppressMessage to each instance of the method.\nYou should also upgrade to the RTM version of FxCop 1.36, the beta will not automatically detect the newer version. \n",
"Module level suppression messages need to be pasted into the same file as the code that is raising the FxCop error before the namespace declaration or in assemblyinfo.cs. Additionally, you will need to have CODE_ANALYSIS defined as a conditional compiler symbols (Project > Properties > Build). Once that is in place, do a complete rebuild of project and the next time you run FxCop the error should be moved to the \"Excluded in Source\" tab.\nAlso, one small tip, but if you are dealing with a lot of FxCop exclusions it might be useful to wrap a region around them so you can get them out of the way.\n"
] | [
18,
4,
2
] | [] | [] | [
"code_analysis",
"fxcop",
"initializecomponent",
"suppression",
"visual_studio"
] | stackoverflow_0000035551_code_analysis_fxcop_initializecomponent_suppression_visual_studio.txt |
Q:
Why does Python's iter() on a mapping return iterkeys() instead of iteritems()?
It seems like if you want to get the keys of a mapping, you ask for them; otherwise, give me the whole mapping (constituted by a set of key-value pairs). Is there a historical reason for this?
A:
Check out this thread for a discussion on the reasons behind this behavior (including that Guido likes it, and it's not likely to change).
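As a quick illustration of the behavior being discussed (Python 2.x, where iterkeys()/iteritems() exist; dict ordering is arbitrary in those versions):
d = {'a': 1, 'b': 2}

print list(iter(d))        # keys, e.g. ['a', 'b'] -- same as list(d.iterkeys())
print [k for k in d]       # iterating the mapping also yields keys
print list(d.iteritems())  # [('a', 1), ('b', 2)] -- ask explicitly for the pairs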
| Why does Python's iter() on a mapping return iterkeys() instead of iteritems()? | It seems like if you want to get the keys of a mapping, you ask for them; otherwise, give me the whole mapping (constituted by a set of key-value pairs). Is there a historical reason for this?
| [
"Check out this thread for a discussion on the reasons behind this behavior (including that Guido likes it, and it's not likely to change).\n"
] | [
11
] | [] | [] | [
"iteration",
"mapping",
"python"
] | stackoverflow_0000035569_iteration_mapping_python.txt |
Q:
How far does SQL Server Express Edition scale?
Wikipedia says SQL Server Express Edition is limited to "one processor, 1 GB memory and 4 GB database files". Does anyone have practical experience with how well this scales?
A:
It's a regular sql server, it just has a limit. SharePoint by default uses the sql server express if that gives you any idea. We have our entire office (80+) people running on that instance.
A:
We have used SQL Server Express Edition in some of our smaller applications, maybe 5+ users, and smaller databases. The 4GB is very limiting in a high transaction environments, and in some cases we have had to migrate our customer to SQL Server Standard Edition.
A:
It really comes down to the nature of your database and application. What kind of application(s) are hitting SQL Server? In my experience, it only handles 5-10 users with a heavy read/write application.
A:
This question is far too vague to be useful to you or anyone else. Also, Wikipedia is your primary source of info on SQL Server, fail?
The first matrix of the MSDN page for Features Supported by the Editions of SQL Server 2008 is titled "Scalability." The only edition with any features marked "Yes" is Enterprise (you get Partitioning, Data compression, Resource governor, and Partition table parallelism.) And it goes down the line from there, Express does not support many of the features designed for "scale." If your main demand is space, how soon will you exceed 4GB? If your main demand is high availability and integrity, don't even bother with Express.
"Scalable" is quickly becoming a weasel-/buzz-word, alongside "robust." People use it when they haven't thought hard enough about what they mean.
| How far does SQL Server Express Edition scale? | Wikipedia says SQL Server Express Edition is limited to "one processor, 1 GB memory and 4 GB database files". Does anyone have practical experience with how well this scales?
| [
"It's a regular sql server, it just has a limit. SharePoint by default uses the sql server express if that gives you any idea. We have our entire office (80+) people running on that instance.\n",
"We have used SQL Server Express Edition in some of our smaller applications, maybe 5+ users, and smaller databases. The 4GB is very limiting in a high transaction environments, and in some cases we have had to migrate our customer to SQL Server Standard Edition.\n",
"It really comes down to the nature of your database and application. What kind of application(s) are hitting SQL Server? In my experience, it only handles 5-10 users with a heavy read/write application.\n",
"This question is far too vague to be useful to you or anyone else. Also, Wikipedia is your primary source of info on SQL Server, fail?\nThe first matrix of the MSDN page for Features Supported by the Editions of SQL Server 2008 is titled \"Scalability.\" The only edition with any features marked \"Yes\" is Enterprise (you get Partitioning, Data compression, Resource governor, and Partition table parallelism.) And it goes down the line from there, Express does not support many of the features designed for \"scale.\" If your main demand is space, how soon will you exceed 4GB? If your main demand is high availability and integrity, don't even bother with Express.\n\"Scalable\" is quickly becoming a weasel-/buzz-word, alongside \"robust.\" People use it when they haven't thought hard enough about what they mean.\n"
] | [
5,
2,
0,
0
] | [] | [] | [
"sql_server_express"
] | stackoverflow_0000035559_sql_server_express.txt |
Q:
How to benchmark a SQL Server Query?
I'd like to know the standard way to benchmark a SQL Server query; preferably I'd like to know about the tools that come with SQL Server rather than 3rd party tools.
A:
set showplan_text on
will show you the execution plan (to see it graphically use CTRL + K (SQL 2000) or CTRL + M (SQL 2005+))
set statistics IO on
will show you the reads
set statistics time on
will show you the elapsed time
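Putting those together, a benchmarking session in Query Analyzer / Management Studio might look like this (the query itself is just a placeholder; SET SHOWPLAN_TEXT ON has to be run in its own batch before the query):
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT COUNT(*) FROM Orders;   -- the query under test (table name is a placeholder)

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;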
A:
Use SQL Profiler.
For .NET applications, filter that Application name by '.NET%' and you'll omit other extraneous queries.
A:
+1 on the execution plan. From here you can see where all the time is being spent in your particular query. Eg. 85% of the time is spent table scanning a particular table, can you put an index on that table to improve it? etc etc.
| How to benchmark a SQL Server Query? | I'd like to know the standard way to benchmark a SQL Server query; preferably I'd like to know about the tools that come with SQL Server rather than 3rd party tools.
| [
"set showplan_text on \nwill show you the execution plan (to see it graphically use CTRL + K (sql 2000) or CTRL + M (sql 2005 +)\nset statistics IO on\nwill show you the reads\nset statistics time on\nwill show you the elapsed time\n",
"Use SQL Profiler.\nFor .NET applications, filter that Application name by '.NET%' and you'll omit other extraneous queries.\n",
"+1 on the execution plan. From here you can see where all the time is being spent in your particular query. Eg. 85% of the time is spent table scanning a particular table, can you put an index on that table to improve it? etc etc.\n"
] | [
11,
1,
0
] | [] | [] | [
"benchmarking",
"database",
"sql_server"
] | stackoverflow_0000034858_benchmarking_database_sql_server.txt |
Q:
Django ImageField core=False in newforms admin
In the transition to newforms admin I'm having difficulty figuring out how to specify core=False for ImageFields.
I get the following error:
TypeError: __init__() got an unexpected keyword argument 'core'
[Edit] However, by just removing the core argument I get a "This field is required." error in the admin interface on attempted submission. How does one accomplish what core=False is meant to do using newforms admin?
A:
To get rid of "This field is required," you need to make it not required, by using blank=True (and possibly null=True as well, if it's not a CharField).
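For example (model and field names are just illustrative):
from django.db import models

class Article(models.Model):
    # No core argument under newforms-admin; just make the field optional.
    image = models.ImageField(upload_to='articles', blank=True)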
A:
The core attribute isn't used anymore.
From Brian Rosner's Blog:
You can safely just remove any and all core arguments. They are no longer used. newforms-admin now provides a nice delete checkbox for existing instances in inlines.
A:
This is simple. I started getting this problem a few revisions ago. Basically, just remove the "core=True" parameter in the ImageField in the models, and then follow the instructions here to convert to what the newforms admin uses.
| Django ImageField core=False in newforms admin | In the transition to newforms admin I'm having difficulty figuring out how to specify core=False for ImageFields.
I get the following error:
TypeError: __init__() got an unexpected keyword argument 'core'
[Edit] However, by just removing the core argument I get a "This field is required." error in the admin interface on attempted submission. How does one accomplish what core=False is meant to do using newforms admin?
| [
"To get rid of \"This field is required,\" you need to make it not required, by using blank=True (and possibly null=True as well, if it's not a CharField).\n",
"The core attribute isn't used anymore.\nFrom Brian Rosner's Blog:\n\nYou can safely just remove any and all core arguments. They are no longer used. newforms-admin now provides a nice delete checkbox for exisiting instances in inlines.\n\n",
"This is simple. I started getting this problems a few revisions ago. Basically, just remove the \"core=True\" parameter in the ImageField in the models, and then follow the instructions here to convert to what the newforms admin uses.\n"
] | [
5,
4,
2
] | [] | [] | [
"django",
"django_models",
"python"
] | stackoverflow_0000034209_django_django_models_python.txt |
Q:
How to Format Numbers in WinForms 1.1 DataGrid?
Is there a simple way to format numbers in a Winforms 1.1 datagrid? The Format property of the DataGridTextBoxColumn seems to be completely ignored. I know there is a solution that involves subclassing a Column control, and it's fairly simple, but was hoping there might be some trick to making the Format property just work.
A:
My personal opinion is that a datagridcolumnstyle is the way to go. Without seeing the code that you have, I can't say for certain why your formatting isn't taking hold when no style is defined - but mixing in formatting with data calculations and other parts of the code can get very messy very quickly.
Creating a new column style class is very clean, and if you have to use the same formatting again in another datagrid, it's as easy as pie to reuse it.
Here's the Microsoft Documentation that may get you started in the right direction.
A:
I did subclass and it was easy and did work. I still don't like it so much. I was already subclassing column styles for other reasons. I'd rather handle all databinding myself, where I can more easily change it and test it. This whole mixing of the UI with the data is old school, and not in a good way.
Thanks very much for your answers, it's good to have second opinions.
Mike
| How to Format Numbers in WinForms 1.1 DataGrid? | Is there a simple way to format numbers in a Winforms 1.1 datagrid? The Format property of the DataGridTextBoxColumn seems to be completely ignored. I know there is a solution that involves subclassing a Column control, and it's fairly simple, but was hoping there might be some trick to making the Format property just work.
| [
"My personal opinion is that a datagridcolumnstyle is the way to go. Without seeing the code that you have, I can't say for certain why your formatting isn't taking hold when no style is defined - but mixing in formatting with data calculations and other parts of the code can get very messy very quickly.\nCreating a new column style class is very clean, and if you have to use the same formatting again in another datagrid, it's as easy as pie to reuse it.\nHere's the Microsoft Documentation that may get you started in the right direction.\n",
"I did subclass and it was easy and did work. I still don't like it so much. I was already subclassing column styles for other reasons. I'd rather handle all databinding myself, where I can more easily change it and test it. This whole mixing of the UI with the data is old school, and not in a good way.\nThanks very much for your answers, it's good to have second opinions.\nMike\n"
] | [
1,
0
] | [] | [] | [
"winforms"
] | stackoverflow_0000034428_winforms.txt |
Q:
Using Office to programmatically convert documents?
I'm interested in using Office 2007 to convert between the pre-2007 binary formats (.doc, .xls, .ppt) and the new Office Open XML formats (.docx, .xlsx, .pptx)
How would I do this? I'd like to write a simple command line app that takes in two filenames (input and output) and perhaps the source and/or destination types, and performs the conversion.
A:
Microsoft has a page which gives several examples of writing scripts to "drive" MS Word. One such example shows how to convert from a Word document to HTML. By changing the last parameter to any values listed here, you can get the output in different formats.
A:
The easiest way would be to use Automation through the Microsoft.Office.Interop libraries. You can create an instance of a Word application, for example. There are methods attached to the Application object that will allow you to open and close documents, plus pretty much anything else you can accomplish in VBA by recording a macro.
You could also just write the VBA code in your Office application to do roughly the same thing. Both approaches are equally valid, depending on your comfort in programming in C#, VB.NET or VBA.
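As a rough sketch of the Automation route for the .doc to .docx case (written with C# 4's simplified COM calls; older compilers need each optional argument passed as a ref object, and the WdSaveFormat value should be double-checked against the format list mentioned above):
using Word = Microsoft.Office.Interop.Word;

class DocConverter
{
    static void Main(string[] args)
    {
        string input = args[0];    // e.g. report.doc
        string output = args[1];   // e.g. report.docx

        var word = new Word.Application();
        try
        {
            Word.Document doc = word.Documents.Open(input);
            // wdFormatXMLDocument is the Word 2007 Open XML (.docx) format
            doc.SaveAs(output, Word.WdSaveFormat.wdFormatXMLDocument);
            doc.Close();
        }
        finally
        {
            word.Quit();   // make sure the Word process goes away
        }
    }
}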
| Using Office to programmatically convert documents? | I'm interested in using Office 2007 to convert between the pre-2007 binary formats (.doc, .xls, .ppt) and the new Office Open XML formats (.docx, .xlsx, .pptx)
How would I do this? I'd like to write a simple command line app that takes in two filenames (input and output) and perhaps the source and/or destination types, and performs the conversion.
| [
"Microsoft has a page which gives several examples of writing scripts to \"drive\" MS Word. One such example shows how to convert from a Word document to HTML. By changing the last parameter to any values listed here, you can get the output in different formats.\n",
"The easiest way would be to use Automation thru the Microsoft.Office.Interop. libraries. You can create an instance of a Word application, for example. There are methods attached to the Application object that will allow you to open and close documents, plus pretty much anything else you can accomplish in VBA by recording a macro.\nYou could also just write the VBA code in your Office application to do roughly the same thing. Both approaches are equally valid, depending on your comfort in programming in C#, VB.NET or VBA.\n"
] | [
2,
0
] | [] | [] | [
"ms_office"
] | stackoverflow_0000035639_ms_office.txt |
Q:
Frequent SystemExit in Ruby when making HTTP calls
I have a Ruby on Rails Website that makes HTTP calls to an external Web Service.
About once a day I get a SystemExit (stacktrace below) error email where a call to the service has failed. If I then try the exact same query on my site moments later it works fine.
It's been happening since the site went live and I've had no luck tracking down what causes it.
Ruby is version 1.8.6 and rails is version 1.2.6.
Anyone else have this problem?
This is the error and stacktrace.
A SystemExit occurred
/usr/local/lib/ruby/gems/1.8/gems/rails-1.2.6/lib/fcgi_handler.rb:116:in `exit'
/usr/local/lib/ruby/gems/1.8/gems/rails-1.2.6/lib/fcgi_handler.rb:116:in `exit_now_handler'
/usr/local/lib/ruby/gems/1.8/gems/activesupport-1.4.4/lib/active_support/inflector.rb:250:in `to_proc'
/usr/local/lib/ruby/1.8/net/protocol.rb:133:in `call'
/usr/local/lib/ruby/1.8/net/protocol.rb:133:in `sysread'
/usr/local/lib/ruby/1.8/net/protocol.rb:133:in `rbuf_fill'
/usr/local/lib/ruby/1.8/timeout.rb:56:in `timeout'
/usr/local/lib/ruby/1.8/timeout.rb:76:in `timeout'
/usr/local/lib/ruby/1.8/net/protocol.rb:132:in `rbuf_fill'
/usr/local/lib/ruby/1.8/net/protocol.rb:116:in `readuntil'
/usr/local/lib/ruby/1.8/net/protocol.rb:126:in `readline'
/usr/local/lib/ruby/1.8/net/http.rb:2017:in `read_status_line'
/usr/local/lib/ruby/1.8/net/http.rb:2006:in `read_new'
/usr/local/lib/ruby/1.8/net/http.rb:1047:in `request'
/usr/local/lib/ruby/1.8/net/http.rb:945:in `request_get'
/usr/local/lib/ruby/1.8/net/http.rb:380:in `get_response'
/usr/local/lib/ruby/1.8/net/http.rb:543:in `start'
/usr/local/lib/ruby/1.8/net/http.rb:379:in `get_response'
A:
Using fcgi with Ruby is known to be very buggy.
Practically everybody has moved to Mongrel for this reason, and I recommend you do the same.
A:
It's been a while since I used FCGI, but I think an FCGI process could throw a SystemExit if the thread was taking too long. This could be the web service not responding or even a slow DNS query. Some Google results show a similar error with Python and FCGI, so moving to Mongrel would be a good idea. This post is the reference I used to set up Mongrel and I still refer back to it.
A:
I used to get these all the time on Apache1/fastcgi. I think it's caused by fastcgi hanging up before Ruby is done.
Switching to mongrel is a good first step, but there's more to do. It's a bad idea to cull from web services on live pages, particularly from Rails. Rails is not thread-safe. The number of concurrent connections you can support equals the number of mongrels (or Passenger processes) in your cluster.
If you have one mongrel and someone accesses a page that calls a web service that takes 10 seconds to time out, every request to your website will time out during that time. Most of the load balancers just cycle through your mongrels blindly, so if you have two mongrels, every other request will time out.
Anything that can be unpredictably slow needs to happen in a job queue. The first hit to /slow/action adds the job to the queue, and /slow/action keeps on refreshing via page refreshes or queries via ajax until the job is finished, and then you get your results from the job queue. There are a few job queues for Rails nowadays, but the oldest and probably most widely used one is BackgroundRB.
Another alternative, depending on the nature of your app, is to cull the service every N minutes via cron, cache the data locally, and have your live page read from the cache.
A:
I would also take a look at Passenger. It's a lot easier to get going than the traditional solution of Apache/nginx + Mongrel.
| Frequent SystemExit in Ruby when making HTTP calls | I have a Ruby on Rails Website that makes HTTP calls to an external Web Service.
About once a day I get a SystemExit (stacktrace below) error email where a call to the service has failed. If I then try the exact same query on my site moments later it works fine.
It's been happening since the site went live and I've had no luck tracking down what causes it.
Ruby is version 1.8.6 and rails is version 1.2.6.
Anyone else have this problem?
This is the error and stacktrace.
A SystemExit occurred
/usr/local/lib/ruby/gems/1.8/gems/rails-1.2.6/lib/fcgi_handler.rb:116:in
exit'
/usr/local/lib/ruby/gems/1.8/gems/rails-1.2.6/lib/fcgi_handler.rb:116:in
exit_now_handler'
/usr/local/lib/ruby/gems/1.8/gems/activesupport-1.4.4/lib/active_support/inflector.rb:250:in
to_proc' /usr/local/lib/ruby/1.8/net/protocol.rb:133:in call'
/usr/local/lib/ruby/1.8/net/protocol.rb:133:in sysread'
/usr/local/lib/ruby/1.8/net/protocol.rb:133:in rbuf_fill'
/usr/local/lib/ruby/1.8/timeout.rb:56:in timeout'
/usr/local/lib/ruby/1.8/timeout.rb:76:in timeout'
/usr/local/lib/ruby/1.8/net/protocol.rb:132:in rbuf_fill'
/usr/local/lib/ruby/1.8/net/protocol.rb:116:in readuntil'
/usr/local/lib/ruby/1.8/net/protocol.rb:126:in readline'
/usr/local/lib/ruby/1.8/net/http.rb:2017:in read_status_line'
/usr/local/lib/ruby/1.8/net/http.rb:2006:in read_new'
/usr/local/lib/ruby/1.8/net/http.rb:1047:in request'
/usr/local/lib/ruby/1.8/net/http.rb:945:in request_get'
/usr/local/lib/ruby/1.8/net/http.rb:380:in get_response'
/usr/local/lib/ruby/1.8/net/http.rb:543:in start'
/usr/local/lib/ruby/1.8/net/http.rb:379:in get_response'
| [
"Using fcgi with Ruby is known to be very buggy. \nPractically everybody has moved to Mongrel for this reason, and I recommend you do the same.\n",
"It's been awhile since I used FCGI but I think a FCGI process could throw a SystemExit if the thread was taking too long. This could be the web service not responding or even a slow DNS query. Some google results show a similar error with Python and FCGI so moving to mongrel would be a good idea. This post is my reference I used to setup mongrel and I still refer back to it.\n",
"I used to get these all the time on Apache1/fastcgi. I think it's caused by fastcgi hanging up before Ruby is done. \nSwitching to mongrel is a good first step, but there's more to do. It's a bad idea to cull from web services on live pages, particularly from Rails. Rails is not thread-safe. The number of concurrent connections you can support equals the number of mongrels (or Passenger processes) in your cluster. \nIf you have one mongrel and someone accesses a page that calls a web service that takes 10 seconds to time out, every request to your website will timeout during that time. Most of the load balancers just cycle through your mongrels blindly, so if you have two mongrels, every other request will timeout.\nAnything that can be unpredictably slow needs to happen in a job queue. The first hit to /slow/action adds the job to the queue, and /slow/action keeps on refreshing via page refreshes or queries via ajax until the job is finished, and then you get your results from the job queue. There are a few job queues for Rails nowadays, but the oldest and probably most widely used one is BackgroundRB.\nAnother alternative, depending on the nature of your app, is to cull the service every N minutes via cron, cache the data locally, and have your live page read from the cache. \n",
"I would also take a look at Passenger. It's a lot easier to get going than the traditional solution of Apache/nginx + Mongrel.\n"
] | [
9,
8,
5,
1
] | [] | [] | [
"crash",
"ruby",
"ruby_on_rails"
] | stackoverflow_0000000514_crash_ruby_ruby_on_rails.txt |
Q:
Mac OS X: What is the best way to estimate the code & static data size of program?
I want to be able to get an estimate of how much code & static data is used by my C++ program?
Is there a way to find this out by looking at the executable or object files? Or perhaps something I can do at runtime?
Will otool help?
A:
"size" is the traditional tool and works on all unix flavors.
"otool" has a bit finer grain control and has a lot of options.
.
$ size python
__TEXT __DATA __OBJC others dec hex
860160 159744 0 2453504 3473408 350000
A:
I think otool can help. Specifically, "otool -s {segment} {section}" should print out the details. I'm not sure if you can get information about __DATA or __TEXT without specifying a section. Maybe those sizes are reported in the mach header: "otool -h"?
otool -s __DATA __data MyApp.bundle/Contents/MacOS/MyApp
otool -s __TEXT __text MyApp.bundle/Contents/MacOS/MyApp
Anyway, Apple documents what gets copied into each section per-segment here: Apple's mach-o format documentation
| Mac OS X: What is the best way to estimate the code & static data size of program? | I want to be able to get an estimate of how much code & static data is used by my C++ program?
Is there a way to find this out by looking at the executable or object files? Or perhaps something I can do at runtime?
Will otool help?
| [
"\n\"size\" is the traditional tool and works on all unix flavors. \n\"otool\" has a bit finer grain control and has a lot of options.\n\n.\n$ size python\n__TEXT __DATA __OBJC others dec hex\n860160 159744 0 2453504 3473408 350000\n\n",
"I think otool can help. Specifically, \"otool -s {segment} {section}\" should print out the details. I'm not sure if you can get information about __DATA or __TEXT without specifying a section. Maybe those sizes are reported in the mach header: \"otool -h\"?\notool -s __DATA __data MyApp.bundle/Contents/MacOS/MyApp\notool -s __TEXT __text MyApp.bundle/Contents/MacOS/MyApp\n\nAnyway, Apple documents what gets copied into each section per-segment here: Apple's mach-o format documentation\n"
] | [
8,
2
] | [] | [] | [
"mach_o",
"macos"
] | stackoverflow_0000035491_mach_o_macos.txt |
Q:
How do I change my Active Sound Card on the Fly?
I currently have speakers set up both in my office and in my living room, connected to my PC via two sound cards, and would like to switch the set of speakers I'm outputting to on the fly.
Anyone know an application or a windows API call that I can use to change the default sound output device? It is currently a bit of a pain to traverse the existing control panel system.
A:
That topic is covered in depth here Easily Change or Switch the Default Audio Sound Output in Vista or XP. Note that sound management was changed in Vista significantly.
On a side note, I believe SnapStream is/was working on an application to allow multi-channel sound cards to output to different rooms (sets of speakers) simultaneously.
| How do I change my Active Sound Card on the Fly? | I currently have speakers set up both in my office and in my living room, connected to my PC via two sound cards, and would like to switch the set of speakers I'm outputting to on the fly.
Anyone know an application or a windows API call that I can use to change the default sound output device? It is currently a bit of a pain to traverse the existing control panel system.
| [
"That topic is covered in depth here Easily Change or Switch the Default Audio Sound Output in Vista or XP. Note that sound management was changed in Vista significantly.\nOn a side note, I believe SnapStream is/was working on an application to allo multi-channel sound cards to output to different rooms (sets of speakers) simultaneously.\n"
] | [
7
] | [] | [] | [
"audio",
"hardware",
"windows"
] | stackoverflow_0000035709_audio_hardware_windows.txt |
Q:
Linux GUI development
I have a large GUI project that I'd like to port to Linux.
What is the most recommended framework to utilize for GUI programming in Linux? Are Frameworks such as KDE / Gnome usable for this objective Or is better to use something more generic other than X?
I feel like if I chose one of Gnome or KDE, I'm closing the market out for a chunk of the Linux market who have chosen one over the other. (Yes I know there is overlap)
Is there a better way? Or would I have to create 2 complete GUI apps to have near 100% coverage?
It's not necessary to have a cross-platform solution that will also work on Win32.
A:
Your best bet may be to port it to a cross-platform widget library such as wxWidgets, which would give you portability to any platform wxWidgets supports.
It's also important to make the distinction between Gnome libraries and GTK, and likewise KDE libraries and Qt. If you write the code to use GTK or Qt, it should work fine for users of any desktop environment, including less popular ones like XFCE. If you use other Gnome or KDE-specific libraries to do non-widget-related tasks, your app would be less portable between desktop environments.
A:
I recommend wxWidgets or Qt. They are both mature, well-structured and cross-platform, with decent documentation and sample source code.
A:
Gnome apps work on KDE desktops and vice versa; you won't be locking anyone out. As far as toolkits go, it's fairly subjective. All of the toolkits are fairly cross-platform. If you're not open source, then GTK+ would be the cheaper option, as Qt is only free for open source use, whereas GTK+ is LGPL.
A:
Have you thought of using Mono? Programs like Paint.NET work great under Linux & Windows.
| Linux GUI development | I have a large GUI project that I'd like to port to Linux.
What is the most recommended framework to utilize for GUI programming in Linux? Are Frameworks such as KDE / Gnome usable for this objective Or is better to use something more generic other than X?
I feel like if I chose one of Gnome or KDE, I'm closing the market out for a chunk of the Linux market who have chosen one over the other. (Yes I know there is overlap)
Is there a better way? Or would I have to create 2 complete GUI apps to have near 100% coverage?
It's not necessary to have a cross-platform solution that will also work on Win32.
| [
"Your best bet may be to port it to a cross-platform widget library such as wxWidgets, which would give you portability to any platform wxWidgets supports.\nIt's also important to make the distinction between Gnome libraries and GTK, and likewise KDE libraries and Qt. If you write the code to use GTK or Qt, it should work fine for users of any desktop environment, including less popular ones like XFCE. If you use other Gnome or KDE-specific libraries to do non-widget-related tasks, your app would be less portable between desktop environments.\n",
"I recommend wxWidgets or Qt. They are both mature, well-structured and cross-platform, with decent documentation and sample source code.\n",
"Gnome apps work on KDE desktops and vice versa; you won't be locking anyone out. As far as toolkits go, it's fairly subjective. All of the toolkits are fairly cross-platform. If you're not open source, then GTK+ would be the cheaper option, as Qt is only free for open source use, whereas GTK+ is LGPL.\n",
"Have you thought of using Mono? Programs like Paint.NET work great under Linux & Windows.\n"
] | [
15,
5,
4,
0
] | [] | [] | [
"c++",
"gnome",
"kde_plasma",
"linux",
"user_interface"
] | stackoverflow_0000035762_c++_gnome_kde_plasma_linux_user_interface.txt |
Q:
Programmatically editing Python source
This is something that I think would be very useful. Basically, I'd like there to be a way to edit Python source programmatically without requiring human intervention. There are a couple of things I would like to do with this:
Edit the configuration of Python apps that use source modules for configuration.
Set up a "template" so that I can customize a Python source file on the fly. This way, I can set up a "project" system on an open source app I'm working on and allow certain files to be customized.
I could probably write something that can do this myself, but I can see that opening up a lot of "devil's in the details" type issues. Are there any ways to do this currently, or am I just going to have to bite the bullet and implement it myself?
A:
Python's standard library provides pretty good facilities for working with Python source; note the tokenize and parser modules.
A:
I had the same issue and I simply opened the file and did some replace: then reload the file in the Python interpreter. This works fine and is easy to do.
Otherwise AFAIK you have to use some conf objects.
A:
Most of these kinds of things can be determined programmatically in Python, using modules like sys and os, and the special __file__ identifier, which tells you where you are in the filesystem path.
It's important to keep in mind that when a module is first imported it will execute everything in the file-scope, which is important for developing system-dependent behaviors. For example, the os module basically determines what operating system you're using on import and then adjusts its implementation accordingly (by importing another module corresponding to Linux, OSX, Windows, etc.).
There's a lot of power in this feature and something along these lines is probably what you're looking for. :)
[Edit] I've also used socket.gethostname() in some rare, hackish instances. ;)
| Programmatically editing Python source | This is something that I think would be very useful. Basically, I'd like there to be a way to edit Python source programmatically without requiring human intervention. There are a couple of things I would like to do with this:
Edit the configuration of Python apps that use source modules for configuration.
Set up a "template" so that I can customize a Python source file on the fly. This way, I can set up a "project" system on an open source app I'm working on and allow certain files to be customized.
I could probably write something that can do this myself, but I can see that opening up a lot of "devil's in the details" type issues. Are there any ways to do this currently, or am I just going to have to bite the bullet and implement it myself?
| [
"Python's standard library provides pretty good facilities for working with Python source; note the tokenize and parser modules.\n",
"I had the same issue and I simply opened the file and did some replace: then reload the file in the Python interpreter. This works fine and is easy to do. \nOtherwise AFAIK you have to use some conf objects.\n",
"Most of these kinds of things can be determined programatically in Python, using modules like sys, os, and the special _file_ identifier which tells you where you are in the filesystem path.\nIt's important to keep in mind that when a module is first imported it will execute everything in the file-scope, which is important for developing system-dependent behaviors. For example, the os module basically determines what operating system you're using on import and then adjusts its implementation accordingly (by importing another module corresponding to Linux, OSX, Windows, etc.).\nThere's a lot of power in this feature and something along these lines is probably what you're looking for. :)\n[Edit] I've also used socket.gethostname() in some rare, hackish instances. ;)\n"
] | [
6,
0,
0
] | [] | [] | [
"file_io",
"python"
] | stackoverflow_0000032385_file_io_python.txt |
Q:
Anyone soloing using fogbugz?
Is there anyone working solo and using fogbugz out there? I'm interested in personal experience/overhead versus paper.
I am involved in several projects and get pretty hammered with lots of details to keep track of... Any experience welcome.
(Yes I know Mr. Joel is on the stackoverflow team... I still want good answers :)
A:
I use it, especially since the hosted version of FogBugz is free for up to 2 people. I found it a lot nicer than paper as I'm working on multiple projects, and my paper tends to get rather messy once you start making annotations or if you want to re-organize and shuffle tasks around, mark them as complete only to see that they are not complete after all...
Plus, the Visual Studio integration is really neat, something paper just cannot compete with. Also, if you lay the project to rest for 6 months and come back, all your tasks and notes are still there, whereas with paper you may need to search all the old documents and notes again, if you did not discard it.
But that is just the point of view from someone who is not really good at staying organized :-) If you are a really tidy and organized person, paper may work better for you than it does for me.
Bonus suggestion: Run Fogbugz on a second PC (or a small Laptop like the eeePC) so that you always have it at your fingertips. The main problem with Task tracking programs - be it FogBugz, Outlook, Excel or just notepad - is that they take up screen space, and my two monitors are usually full with Visual Studio, e-Mail, Web Browsers, some Notepads etc.
A:
Go to http://www.fogbugz.com/ then at the bottom under "Try It", sign up.
under Settings => Your FogBugz Hosted Account, it should either already say "Payment Information: Using Student and Startup Edition." or there should be some option/link to turn on the Student and Startup Edition.
And yes, it's not only for Students and Startups, I asked their support :-)
Disclaimer: I'm not affiliated with FogCreek and Joel did not just deposit money in my account.
A:
When I was working for myself doing my consulting business I signed up for a hosted account and honestly I couldn't have done without it.
What I liked most about it was it took 30 seconds to sign up for an account and I was then able to integrate source control using sourcegear vault (which is an excellent source control product and free for single developers) set up projects, clients, releases and versions and monitor my progress constantly.
One thing that totally blew me away was that I ended up completely abandoning outlook for all work related correspondence. I could manage all my client interactions from within fogbugz and it all just worked amazingly well.
In terms of overhead, one of the nice things you could do was turn anything into a case. Anything that came up in your mind while you were coding, you simply created a new email, sent it to fogbugz and it was instantly added as an item for review later.
I would strongly recommend you get yourself one of the hosted accounts and give it a whirl
A:
In addition to the benefits already mentioned, another nice feature of using FogBugz is BugzScout, which you can use to report errors from your app and log them into FogBugz automatically. If you're a one person team, chances are there are some bugs in your code you've never seen during your own testing, so it's nice to have those bugs found "in the wild" automatically reported and logged for you.
A:
I use it as well and quite frankly wouldn't want to work without it.
I've always had some kind of issue tracker available for the projects I work on and thus am quite used to updating it. With FB6 the process is now even better.
Since FB also integrates with Subversion, the source control tool I use for my projects, the process is really good and I have two-way links between the two systems now. I can click on a case number in the Subversion logs and go to the case in FB, or see the revisions bound to a case inside FB.
A:
I think it's great that Joel et al. let people use FogBugs hosted for free on their own. It's a great business strategy, because the users become fans (it is great software after all), and then they recommend it to their businesses or customers.
A:
Yea FogBugz is great for process-light, quick and easy task management. It seems especially well suited for soloing, where you don't need or want a lot of complexity in that area.
By the way, if you want to keep track of what you're doing at the computer all day, check out TimeSprite, which integrates with FogBugz. It's a Windows app that logs your active window and then categorizes your activity based on the window title / activity type mappings you define as you go. (You can also just tell it what you're working on.) And if you're a FogBugz user, you can associate your work with a FogBugz case, and it will upload your time intervals for that case. This makes accurate recording of elapsed time pretty painless and about as accurate as you can get, which in turn improves FogBugz predictive powers in its evidence-based scheduling. Also, when soloing, I find that such specific logging of my time keeps me on task, in the way a meandering manager otherwise might. (I'm not affiliated with TimeSprite in any way.)
| Anyone soloing using fogbugz? | Is there anyone working solo and using fogbugz out there? I'm interested in personal experience/overhead versus paper.
I am involved in several projects and get pretty hammered with lots of details to keep track of... Any experience welcome.
(Yes I know Mr. Joel is on the stackoverflow team... I still want good answers :)
| [
"I use it, especially since the hosted Version of FugBugz is free for up to 2 people. I found it a lot nicer than paper as I'm working on multiple projects, and my paper tends to get rather messy once you start making annotations or if you want to re-organize and shuffle tasks around, mark them as complete only to see that they are not complete after all...\nPlus, the Visual Studio integration is really neat, something paper just cannot compete with. Also, if you lay the project to rest for 6 months and come back, all your tasks and notes are still there, whereas with paper you may need to search all the old documents and notes again, if you did not discard it.\nBut that is just the point of view from someone who is not really good at staying organized :-) If you are a really tidy and organized person, paper may work better for you than it does for me.\nBonus suggestion: Run Fogbugz on a second PC (or a small Laptop like the eeePC) so that you always have it at your fingertips. The main problem with Task tracking programs - be it FogBugz, Outlook, Excel or just notepad - is that they take up screen space, and my two monitors are usually full with Visual Studio, e-Mail, Web Browsers, some Notepads etc.\n",
"Go to http://www.fogbugz.com/ then at the bottom under \"Try It\", sign up.\nunder Settings => Your FogBugz Hosted Account, it should either already say \"Payment Information: Using Student and Startup Edition.\" or there should be some option/link to turn on the Student and Startup Edition.\nAnd yes, it's not only for Students and Startups, I asked their support :-)\nDisclaimer: I'm not affiliated with FogCreek and Joel did not just deposit money in my account.\n",
"When I was working for myself doing my consulting business I signed up for a hosted account and honestly I couldn't have done without it. \nWhat I liked most about it was it took 30 seconds to sign up for an account and I was then able to integrate source control using sourcegear vault (which is an excellent source control product and free for single developers) set up projects, clients, releases and versions and monitor my progress constantly.\nOne thing that totally blew me away was that I ended up completely abandoning outlook for all work related correspondence. I could manage all my client interactions from within fogbugz and it all just worked amazingly well.\nIn terms of overhead, one of the nice things you could do was turn anything into a case. Anything that came up in your mind while you were coding, you simply created a new email, sent it to fogbugz and it was instantly added as an item for review later.\nI would strongly recommend you get yourself one of the hosted accounts and give it a whirl\n",
"In addition to the benefits already mentioned, another nice feature of using FogBugz is BugzScout, which you can use to report errors from your app and log them into FogBugz automatically. If you're a one person team, chances are there are some bugs in your code you've never seen during your own testing, so it's nice to have those bugs found \"in the wild\" automatically reported and logged for you.\n",
"I use it as well and quite frankly wouldn't want to work without it.\nI've always had some kind of issue tracker available for the projects I work on and thus am quite used to updating it. With FB6 the process is now even better.\nSince FB also integrates with Subversion, the source control tool I use for my projects, the process is really good and I have two-way links between the two systems now. I can click on a case number in the Subversion logs and go to the case in FB, or see the revisions bound to a case inside FB.\n",
"I think it's great that Joel et al. let people use FogBugs hosted for free on their own. It's a great business strategy, because the users become fans (it is great software after all), and then they recommend it to their businesses or customers.\n",
"Yea FogBugz is great for process-light, quick and easy task management. It seems especially well suited for soloing, where you don't need or want a lot of complexity in that area. \nBy the way, if you want to keep track of what you're doing at the computer all day, check out TimeSprite, which integrates with FogBugz. It's a Windows app that logs your active window and then categorizes your activity based on the window title / activity type mappings you define as you go. (You can also just tell it what you're working on.) And if you're a FogBugz user, you can associate your work with a FogBugz case, and it will upload your time intervals for that case. This makes accurate recording of elapsed time pretty painless and about as accurate as you can get, which in turn improves FogBugz predictive powers in its evidence-based scheduling. Also, when soloing, I find that such specific logging of my time keeps me on task, in the way a meandering manager otherwise might. (I'm not affiliated with TimeSprite in any way.)\n"
] | [
35,
18,
14,
12,
7,
6,
1
] | [] | [] | [
"fogbugz"
] | stackoverflow_0000003180_fogbugz.txt |
Q:
FlexUnit component testing patterns: use addAsync or manually initialize?
We've been using Flex for about 6 months here at work, and I found that my first batches of FlexUnit tests involving custom components would tend to follow this sort of pattern:
import mx.core.Application;
import mx.events.FlexEvent;
import flexunit.framework.TestCase;
public class CustomComponentTest extends TestCase {
private var component:CustomComponent;
public function testSomeAspect() : void {
component = new CustomComponent();
// set some properties...
component.addEventListener(FlexEvent.CREATION_COMPLETE,
addAsync(verifySomeAspect, 5000));
component.height = 0;
component.width = 0;
Application.application.addChild(component);
}
public function verifySomeAspect(event:FlexEvent) : void {
// Assert some things about component...
}
override public function tearDown() : void {
try {
if (component) {
Application.application.removeChild(component);
component = null;
}
} catch (e:Error) {
// ok to ignore
}
}
Basically, you need to make sure the component has been fully initialized before you can reliably verify anything about it, and in Flex this happens asynchronously after it has been added to the display list. So you need to setup a callback (using FlexUnit's addAsync function) to be notified when that's happened.
Lately i've been just manually calling the methods that the runtime would call for you in the necessary places, so now my tests tend to look more like this:
import flexunit.framework.TestCase;
public class CustomComponentTest extends TestCase {
public function testSomeAspect() : void {
var component:CustomComponent = new CustomComponent();
component.initialize();
// set some properties...
component.validateProperties();
// Assert some things about component...
}
This is much easier to follow, but it kinda feels like I'm cheating a little either way. The first case is slamming it into the current Application (which would be the unit test runner shell app), and the latter isn't a "real" environment.
I was wondering how other people would handle this sort of situation?
A:
I see nothing wrong with using the async version. I can agree that the second version is shorter, but I'm not sure that I think it's easier to follow. The test does a lot of things that you wouldn't normally do, whereas the first example is more true to how you would use the component outside the test environment.
Also, in the second form you have to make sure that you do exactly what the framework would do, miss one step and your test isn't relevant, and each test must repeat this code. Seems to me it's better to test it in a situation that is as close to the real thing as possible.
You could have a look at dpUint's sequences, they made component testing a little more declarative:
public function testLogin():void {
var passThroughData:Object = new Object();
passThroughData.username = "myuser1";
passThroughData.password = "somepsswd";
var sequence:SequenceRunner = new SequenceRunner(this);
sequence.addStep(new SequenceSetter(form.usernameTI, {text:passThroughData.username}));
sequence.addStep(new SequenceWaiter(form.usernameTI, FlexEvent.VALUE_COMMIT, 100));
sequence.addStep(new SequenceSetter(form.passwordTI, {text:passThroughData.password}));
sequence.addStep(new SequenceWaiter(form.passwordTI, FlexEvent.VALUE_COMMIT, 100));
sequence.addStep(new SequenceEventDispatcher(form.loginBtn, new MouseEvent("click", true, false)));
sequence.addStep(new SequenceWaiter(form, "loginRequested", 100));
sequence.addAssertHandler(handleLoginEvent, passThroughData);
sequence.run();
}
(example from the dpUint wiki, see here for more info).
| FlexUnit component testing patterns: use addAsync or manually initialize? | We've been using Flex for about 6 months here at work, and I found that my first batches of FlexUnit tests involving custom components would tend to follow this sort of pattern:
import mx.core.Application;
import mx.events.FlexEvent;
import flexunit.framework.TestCase;
public class CustomComponentTest extends TestCase {
private var component:CustomComponent;
public function testSomeAspect() : void {
component = new CustomComponent();
// set some properties...
component.addEventListener(FlexEvent.CREATION_COMPLETE,
addAsync(verifySomeAspect, 5000));
component.height = 0;
component.width = 0;
Application.application.addChild(component);
}
public function verifySomeAspect(event:FlexEvent) : void {
// Assert some things about component...
}
override public function tearDown() : void {
try {
if (component) {
Application.application.removeChild(component);
component = null;
}
} catch (e:Error) {
// ok to ignore
}
}
Basically, you need to make sure the component has been fully initialized before you can reliably verify anything about it, and in Flex this happens asynchronously after it has been added to the display list. So you need to setup a callback (using FlexUnit's addAsync function) to be notified when that's happened.
Lately i've been just manually calling the methods that the runtime would call for you in the necessary places, so now my tests tend to look more like this:
import flexunit.framework.TestCase;
public class CustomComponentTest extends TestCase {
public function testSomeAspect() : void {
var component:CustomComponent = new CustomComponent();
component.initialize();
// set some properties...
component.validateProperties();
// Assert some things about component...
}
This is much easier to follow, but it kinda feels like I'm cheating a little either way. The first case is slamming it into the current Application (which would be the unit test runner shell app), and the latter isn't a "real" environment.
I was wondering how other people would handle this sort of situation?
| [
"I see nothing wrong with using the async version. I can agree that the second version is shorter, but I'm not sure that I think it's easier to follow. The test does a lot of things that you wouldn't normally do, whereas the first example is more true to how you would use the component outside the test environment.\nAlso, in the second form you have to make sure that you do exactly what the framework would do, miss one step and your test isn't relevant, and each test must repeat this code. Seems to me it's better to test it in a situation that is as close to the real thing as possible.\nYou could have a look at dpUint's sequences, they made component testing a little more declarative:\npublic function testLogin():void {\n var passThroughData:Object = new Object();\n\n passThroughData.username = \"myuser1\";\n passThroughData.password = \"somepsswd\";\n\n var sequence:SequenceRunner = new SequenceRunner(this);\n\n sequence.addStep(new SequenceSetter(form.usernameTI, {text:passThroughData.username}));\n sequence.addStep(new SequenceWaiter(form.usernameTI, FlexEvent.VALUE_COMMIT, 100));\n\n sequence.addStep(new SequenceSetter(form.passwordTI, {text:passThroughData.password}));\n sequence.addStep(new SequenceWaiter(form.passwordTI, FlexEvent.VALUE_COMMIT, 100));\n\n sequence.addStep(new SequenceEventDispatcher(form.loginBtn, new MouseEvent(\"click\", true, false)));\n sequence.addStep(new SequenceWaiter(form, \"loginRequested\", 100));\n\n sequence.addAssertHandler(handleLoginEvent, passThroughData);\n\n sequence.run();\n}\n\n(example from the dpUint wiki, see here for more info).\n"
] | [
1
] | [] | [] | [
"apache_flex",
"unit_testing"
] | stackoverflow_0000035304_apache_flex_unit_testing.txt |
Q:
Desktop search utility for pdf,chm and djvu files
I want to write a tool that helps me search pdf/chm/djvu files in linux. Any pointers on how to go about it?
The major problem is reading/importing data from all these files. Can this be done with C and shell scripting?
A:
Tracker ships with Ubuntu 8.04 -- it was a significant switch from Beagle which users believed was too resource (CPU) intensive and didn't yield good enough results. It indexes both pdf and chm and according to this bug report it also indexes djvu.
A:
Note that djvu is an image compression format (optimized to compress 'pictures of text', typically the results of scanning). As such, you won't be able to search for text, except in the metadata -this is what the link sent by cdleary refers to-, or if you first use OCR on the document to convert it into text.
The same is true for PDFs whose content is scanned articles/books.
A:
How about a plugin for Beagle ?
It already searches PDFs but you can add other file types.
Here is the relevant wikipedia page : http://en.wikipedia.org/wiki/Beagle_(software)
| Desktop search utility for pdf,chm and djvu files | I want to write a tool that helps me search pdf/chm/djvu files in linux. Any pointers on how to go about it?
The major problem is reading/importing data from all these files. Can this be done with C and shell scripting?
| [
"Tracker ships with Ubuntu 8.04 -- it was a significant switch from Beagle which users believed was too resource (CPU) intensive and didn't yield good enough results. It indexes both pdf and chm and according to this bug report it also indexes djvu.\n",
"Note that djvu is an image compression format (optimized to compress 'pictures of text', typically the results of scanning). As such, you won't be able to search for text, except in the metadata -this is what the link sent by cdleary refers to-, or if you first use OCR on the document to convert it into text.\nThe same is true for PDFs which content are scanned articles/books.\n",
"How about a plugin for Beagle ?\nIt already searches PDFs but you can add other file types.\nHere is the relevant wikipedia page : http://en.wikipedia.org/wiki/Beagle_(software)\n"
] | [
1,
1,
0
] | [] | [] | [
"desktop_search"
] | stackoverflow_0000035722_desktop_search.txt |
Q:
Can an audio object be embedded in an InfoPath form?
Is it possible to embed an audio object (mp3, wma, whatever) in a web-enabled InfoPath form ?
If it is, how do you do it ?
A:
@Martin
That works for local forms that open in InfoPath. Nathan was asking about web-enabled forms. ActiveX controls are disabled for web forms, as evidenced by the informational label at the bottom of the design controls when the form compatibility has been set to the web.
Now, I will admit that I know nothing about the HTML tags to play audio in a browser, but I have something else that might work. I had an InfoPath form that I needed to dynamically load an image into for a web-enabled form. Similar to the ActiveX issue, the Picture control was also disabled. What I did was put some managed code behind the form and execute the following when the form loaded.
public void FormEvents_Loading(object sender, LoadingEventArgs e)
{
string imgPath = "http://yoursite/yourimage.jpeg";
XPathNodeIterator xpni = MainDataSource.CreateNavigator().SelectSingleNode("/my:FormName/my:RichTextControlName", NamespaceManager).SelectChildren(XPathNodeType.All);
    xpni.Current.InnerXml = "<img xmlns=\"http://www.w3.org/1999/xhtml\" src=\"" + imgPath + "\" width=\"200px\" height=\"55px\" />";
}
I don't see why you couldn't take the same approach and load audio rather than an image.
A:
It looks like you can't embed <object> tags in a richtext field. I'm getting nothing when I do it.
A:
Edit: My apologies, I missed that the question was about Web forms - for which the below does not work. Must learn to read the question fully!
Go to menu View
Click on Design Tasks
Select Controls in the 'Design Tasks' Task pane
Click on the 'add or remove custom controls' button to install your custom
control
Click on the Add button and select ActiveX Control
Select the Windows Media Player control
Select the necessary properties for databinding and finish the wizard.
After you have added the control, you can drag and drop the control on your screen.
Right-Click on the control and select the 'Windows Media Player properties'
Fill in the URL to automatically embed the file to play.
A:
Have you tried manually modifying the XSL in order to generate HTML which embedds your audio file?
I don't think there is a way to do this using the InfoPath Designer, but if it ends up in the XSL; it may just get passed through to the web enabled form.
| Can an audio object be embedded in an InfoPath form? | Is it possible to embed an audio object (mp3, wma, whatever) in a web-enabled InfoPath form ?
If it is, how do you do it ?
| [
"@Martin\nThat works for local forms that open in InfoPath. Nathan was asking about web-enabled forms. ActiveX controls are disabled for web forms, as evidenced by the informational label at the bottom of the design controls when the form compatability has been set to the web.\nNow, I will admit that I know nothing about the HTML tags to play audio in a browser, but I have something else that might work. I had an InfoPath form that I needed to dynamically load an image into for a web-enabled form. Similar to the ActiveX issue, the Picture control was also disabled. What I did was put some managed code behind the form and execute the following when the form loaded.\npublic void FormEvents_Loading(object sender, LoadingEventArgs e)\n{\n string imgPath = \"http://yoursite/yourimage.jpeg\";\n\n XPathNodeIterator xpni = MainDataSource.CreateNavigator().SelectSingleNode(\"/my:FormName/my:RichTextControlName\", NamespaceManager).SelectChildren(XPathNodeType.All);\n xpni.Current.InnerXml = \"<img xmlns=\\\"http://www.w3.org/1999/xhtml\\\" src=\\\"\" + filePath + \"\\\" width=\\\"200px\\\" height=\\\"55px\\\" />\"; \n}\n\nI don't see why you couldn't take the same approach and load audio rather than an image.\n",
"It looks like you can't embed <object> tags in a richtext field. I'm getting nothing when I do it. \n",
"Edit: My apologies, I missed that the question was about Web forms - for which the below does not work. Must learn to read the question fully!\n\nGo to menu View\nClick on Design Tasks\nSelect Controls in the 'Design Tasks' Task pane\nClick on the 'add or remove custom controls' button to install your custom\ncontrol\nClick on the Add button and select ActiveX Control\nSelect the Windows Media Player control\nSelect the necessary properties for databinding and finish the wizard.\n\nAfter you have added the control, you can drag and drop the control on your screen.\nRight-Click on the control and select the 'Windows Media Player properties'\nFill in the URL to automatically embed the file to play.\n",
"Have you tried manually modifying the XSL in order to generate HTML which embedds your audio file?\nI don't think there is a way to do this using the InfoPath Designer, but if it ends up in the XSL; it may just get passed through to the web enabled form.\n"
] | [
1,
1,
0,
0
] | [] | [] | [
"audio",
"infopath",
"moss",
"sharepoint"
] | stackoverflow_0000034717_audio_infopath_moss_sharepoint.txt |
Q:
P/Invoke in Mono
What's the current status of Mono's Platform Invoke implementation on Linux and on Solaris?
A:
Working, usable and stable. It's well tested since quite a lot of mono's own low-level functionality has to be marshaled through it to the underlying operating system.
There are some P/Invoke extensions when compared to Microsoft .Net implementation (after all they deal with a single OS family and three architectures at most). Most notable of those would be that library mappings transform the library name to OS-specific variants (e.g. mylib.dll searches for mylib.so on Linux, mylib.dylib on OS X and so on) and take into account various other system specific conventions. There is also a DLLMap configuration extension which can be used if the default name translations are not enough. Usually it's convenient to have the same API of the binary lib exposed on different OSes, so that migrating between platforms only requires changes in the C code, not the .Net part.
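A minimal sketch of what that looks like in practice (the library name, entry point and mappings are invented for illustration; the dllmap element shown in the comment would live in the assembly's .config file, e.g. MyApp.exe.config, which Mono picks up next to the executable):

using System.Runtime.InteropServices;

static class NativeMethods
{
    // "mylib" is deliberately extension-less: Windows probes for mylib.dll,
    // while Mono translates the name per platform (libmylib.so, libmylib.dylib, ...).
    [DllImport("mylib", EntryPoint = "my_add")]
    public static extern int MyAdd(int a, int b);

    // If the default translation is not enough, redirect it with a dllmap:
    //
    //   <configuration>
    //     <dllmap dll="mylib" target="libmylib.so.1" os="linux" />
    //     <dllmap dll="mylib" target="libmylib.dylib" os="osx" />
    //   </configuration>
}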
| P/Invoke in Mono | What's the current status of Mono's Platform Invoke implementation on Linux and on Solaris?
| [
"Working, usable and stable. It's well tested since quite a lot of mono's own low-level functionality has to be marshaled through it to the underlying operating system.\nThere are some P/Invoke extensions when compared to Microsoft .Net implementation (after all they deal with a single OS family and three architectures at most). Most notable of those would be that library mappings transform the library name to OS-specific variants (e.g. mylib.dll searches for mylib.so on Linux, mylib.dylib on OS X and so on) and take into account various other system specific conventions. There is also a DLLMap configuration extension which can be used if the default name translations are not enough. Usually it's convenient to have the same API of the binary lib exposed on different OSes, so that migrating between platforms only requires changes in the C code, not the .Net part.\n"
] | [
29
] | [] | [] | [
".net",
"linux",
"mono",
"pinvoke",
"solaris"
] | stackoverflow_0000035853_.net_linux_mono_pinvoke_solaris.txt |
Q:
Is it possible to list named events in Windows?
I would like to create events for certain resources that are used across various processes and access these events by name. The problem seems to be that the names of the events must be known to all applications referring to them.
Is there maybe a way to get a list of names events in the system?
I am aware that I might use some standard names, but it seems rather inflexible with regard to future extensibility (all applications would require a recompile).
I'm afraid, I can't even consider ZwOpenDirectoryObject, because it is described as needing Windows XP or higher, so it is out of question. Thanks for the suggestion though.
I am a little unsure about shared memory, because I haven't tried it so far. Might do some reading in that area I guess. Configuration files and registry are a slight problem, because they do tend to fail with Vista due to access problems. I am a bit afraid, that shared memory will have the same problem.
The idea with ProcessExplorer sounds promising. Does anyone know an API that could be used for listing events for a process? And, does it work without administrative rights?
Thank you for the clarification.
There is not really a master process. It is more of a driver dll that is used from different processes and the events would be used to "lock" resources used by these processes.
I am thinking about setting up a central service that has sufficient access rights even under Vista. It will certainly complicate things, but it might be the only thing left facing the problems with security.
A:
No, there is not any facility to enumerate named events. You could enumerate all objects in the respective object manager directory using ZwOpenDirectoryObject and then filter for events. But this routine is undocumented and therefore should not be used without good reason.
Why not use a separate mechanism to share the event names? You could list them in a configuration file, a registry key or maybe even in shared memory.
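As a sketch of that idea from managed code (the event name and the way it gets distributed are assumptions; native code would do the same thing with CreateEvent/OpenEvent using the agreed-upon name):

using System;
using System.Threading;

class ResourceGuard
{
    static void Main()
    {
        // Name agreed on out of band (read from a config file, registry key, ...).
        // Prefix it with "Global\" only if it must be visible across sessions,
        // which needs the appropriate rights under Vista.
        string name = "MyDriver.ResourceX";

        bool createdNew;
        using (EventWaitHandle evt = new EventWaitHandle(
                   false, EventResetMode.ManualReset, name, out createdNew))
        {
            Console.WriteLine(createdNew
                ? "Created the named event."
                : "Opened an event some other process already created.");
        }
    }
}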
A:
ProcessExplorer is able to enumerate all the named events held by some specific process. You could go over the entire process list and do something similar, although I have no clue as to what API is used to get the list...
A:
Do not mix up the user mode ZwOpenDirectoryObject with the kernel mode ZwOpenDirectoryObject -- the kernel mode API (http://msdn.microsoft.com/en-us/library/ms800966.aspx) indeed seems to be available as of XP only, but the user mode version should be available at least since NT 4. Anyway, I would not recommend using ZwOpenDirectoryObject.
Why should configuration files and registry keys fail on Vista? Of course, you have to get the security settings right -- but you would have to do that for your named events as well -- so there should not be a big difference here. Maybe you should tell us some more details about the nature of your processes -- do they all run within the same logon session or do they run as different users even? And is there some master process or who creates the events in the first place?
Frankly, I tend to find the Process Explorer idea to be not a very good one. Despite the fact that you probably will not be able to accomplish that without using undocumented APIs and/or a device driver, I do not think that a process should be spelunking around in the handle table of another process just to find out the names of some kernel objects. And, of course, the same security issues apply again.
| Is it possible to list named events in Windows? | I would like to create events for certain resources that are used across various processes and access these events by name. The problem seems to be that the names of the events must be known to all applications referring to them.
Is there maybe a way to get a list of names events in the system?
I am aware that I might use some standard names, but it seems rather inflexible with regard to future extensibility (all application would require a recompile).
I'm afraid, I can't even consider ZwOpenDirectoryObject, because it is described as needing Windows XP or higher, so it is out of question. Thanks for the suggestion though.
I am a little unsure about shared memory, because I haven't tried it so far. Might do some reading in that area I guess. Configuration files and registry are a slight problem, because they do tend to fail with Vista due to access problems. I am a bit afraid, that shared memory will have the same problem.
The idea with ProcessExplorer sounds promising. Does anyone know an API that could be used for listing events for a process? And, does it work without administrative rights?
Thank you for the clarification.
There is not really a master process. It is more of a driver dll that is used from different processes and the events would be used to "lock" resources used by these processes.
I am thinking about setting up a central service that has sufficient access rights even under Vista. It will certainly complicate things, but it might be the only thing left facing the problems with security.
| [
"No, there is not any facility to enumerate named events. You could enumerate all objects in the respective object manager directory using ZwOpenDirectoryObject and then filter for events. But this routine is undocumented and therefore should not be used without good reason.\nWhy not use a separate mechanism to share the event names? You could list them in a configuration file, a registry key or maybe even in shared memory.\n",
"ProcessExplorer is able to enumerate all the named events held by some specific process. You could go over the entire process list and do something similar although I have now clue as to what API is used to get the list...\n",
"Do not mix up the user mode ZwOpenDirectoryObject with the kernel mode ZwOpenDirectoryObject -- the kernel mode API (http://msdn.microsoft.com/en-us/library/ms800966.aspx) indeed seems to available as of XP only, but the user mode version should be available at least since NT 4. Anyway, I would not recommend using ZwOpenDirectoryObject.\nWhy should configuration files and registry keys fail on Vista? Of course, you have to get the security settings right -- but you would have to do that for your named events as well -- so there should not be a big difference here. Maybe you should tell us some more details about the nature of your processes -- do they all run within the same logon session or do they run as different users even? And is there some master process or who creates the events in the first place?\nFrankly, I tend to find the Process Explorer idea to be not a very good one. Despite the fact that you probably will not be able to accomplish that without using undocumented APIs and/or a device driver, I do not think that a process should be spelunking around in the handle table of another process just to find out the names of some kernel objects. And, of course, the same security issues apply again.\n"
] | [
2,
1,
1
] | [] | [] | [
"events",
"windows"
] | stackoverflow_0000035748_events_windows.txt |
Q:
How do I autorun an application in a terminal in Ubuntu?
I've created a few autorun script files on various USB devices that run bash scripts when they mount. These scripts run "in the background", how do I get them to run in a terminal window? (Like the "Application in Terminal" gnome Launcher type.)
A:
Run them as a two-stage process, with your "autorun" script calling the second script in a new terminal, e.g.
gnome-terminal -e top --title Testing
Would run the program "top" in a new gnome terminal window with the title "Testing" You can add additional arguments like setting the geometry to determine the size and location of the window checkout the man page for gnome-terminal and the "X" man page for more details
A:
xterm -e shellscript.sh
or (if xterm isn't installed)
gnome-terminal -e shellscript.sh
or (if you're using kubuntu / kde)
konsole -e shellscript.sh
| How do I autorun an application in a terminal in Ubuntu? | I've created a few autorun script files on various USB devices that run bash scripts when they mount. These scripts run "in the background", how do I get them to run in a terminal window? (Like the "Application in Terminal" gnome Launcher type.)
| [
"Run them as a two stage process with your \"autorun\" script calling the second script in a new terminal eg\ngnome-terminal -e top --title Testing\n\nWould run the program \"top\" in a new gnome terminal window with the title \"Testing\" You can add additional arguments like setting the geometry to determine the size and location of the window checkout the man page for gnome-terminal and the \"X\" man page for more details\n",
"xterm -e shellscript.sh\n\nor (if xterm isn't installed)\ngnome-terminal -e shellscript.sh\n\nor (if you're using kubuntu / kde)\nkonsole -e shellscript.sh\n\n"
] | [
5,
1
] | [] | [] | [
"autorun",
"bash",
"gnome",
"ubuntu"
] | stackoverflow_0000035905_autorun_bash_gnome_ubuntu.txt |
Q:
Launch a file with command line arguments without knowing location of exe?
Here's the situation: I am trying to launch an application, but the location of the .exe isn't known to me. Now, if the file extension is registered (in Windows), I can do something like:
Process.Start("Sample.xls");
However, I need to pass some command line arguments as well. I couldn't get this to work
Process p = new Process();
p.StartInfo.FileName = "Sample.xls";
p.StartInfo.Arguments = "/r"; // open in read-only mode
p.Start();
Any suggestions on a mechanism to solve this?
Edit @ aku
My StackOverflow search skills are weak; I did not find that post. Though I generally dislike peering into the registry, that's a great solution. Thanks!
A:
Using my code from this answer you can get the command associated with the .xls extension. Then you can pass this command to the Process.Start method.
A:
If you query the registry, you can retrieve the data about the registered file type and then call the app directly passing the command line arguments. See Programmatically Checking and Setting File Types for an example of retrieving shell information for a file type.
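To make that concrete, here is a rough C# sketch of the registry approach. The ".xls" example, the shell\open\command layout and the /r switch are only illustrative (real associations can use other verbs, DDE, or per-user overrides), and error handling is omitted:
using System.Diagnostics;
using Microsoft.Win32;

class LaunchByAssociation
{
    static void Main()
    {
        // 1. Map the extension to its ProgID, e.g. ".xls" -> "Excel.Sheet.8".
        string progId;
        using (RegistryKey extKey = Registry.ClassesRoot.OpenSubKey(".xls"))
        {
            progId = (string)extKey.GetValue(null);
        }

        // 2. Read the registered open command for that ProgID (typically a quoted exe path plus "%1").
        string command;
        using (RegistryKey cmdKey = Registry.ClassesRoot.OpenSubKey(progId + @"\shell\open\command"))
        {
            command = (string)cmdKey.GetValue(null);
        }

        // 3. Pull the executable path out of the (usually quoted) command string.
        string exePath = command.StartsWith("\"")
            ? command.Substring(1, command.IndexOf('"', 1) - 1)
            : command.Split(' ')[0];

        // 4. Launch the executable directly, passing the document plus our own switch.
        Process.Start(exePath, "\"Sample.xls\" /r");
    }
}
The trade-off is that you are trusting whatever command string the registry hands back, so treat this as a sketch rather than production code.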
| Launch a file with command line arguments without knowing location of exe? | Here's the situation: I am trying to launch an application, but the location of the .exe isn't known to me. Now, if the file extension is registered (in Windows), I can do something like:
Process.Start("Sample.xls");
However, I need to pass some command line arguments as well. I couldn't get this to work
Process p = new Process();
p.StartInfo.FileName = "Sample.xls";
p.StartInfo.Arguments = "/r"; // open in read-only mode
p.Start();
Any suggestions on a mechanism to solve this?
Edit @ aku
My StackOverflow search skills are weak; I did not find that post. Though I generally dislike peering into the registry, that's a great solution. Thanks!
| [
"Using my code from this answer you can get command associated with xls extension. Then you can pass this command to Process.Start method.\n",
"If you query the registry, you can retrieve the data about the registered file type and then call the app directly passing the command line arguments. See Programmatically Checking and Setting File Types for an example of retrieving shell information for a file type.\n"
] | [
4,
2
] | [] | [] | [
".net",
"c#",
"vb.net"
] | stackoverflow_0000035914_.net_c#_vb.net.txt |
Q:
Decision making in distributed applications
With a distributed application, where you have lots of clients and one main server, should you:
Make the clients dumb and the server smart: clients are fast and non-invasive. Business rules are needed in only 1 place
Make the clients smart and the server dumb: take as much load as possible off of the server
Additional info:
Clients collect tons of data about the computer they are on. The server must analyze all of this info to determine the health of these computers
The owners of the client computers are temperamental and will shut down the clients if the client starts to consume too many resources (thus negating the purpose of the distributed app in helping diagnose problems)
A:
You should do as much client-side processing as possible. This will enable your application to scale better than doing processing server-side. To solve your temperamental user problem, you could look into making your client processes run at a very low priority so there's no noticeable decrease in performance on the part of the user.
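The question doesn't say what the clients are written in, but assuming a .NET client for the sake of illustration, dropping the priority is a one-liner at startup. A minimal sketch (the class name is made up, and BelowNormal is just an example level, not a recommendation for every workload):
using System.Diagnostics;

static class CollectorStartup
{
    static void Main()
    {
        // Ask the OS to schedule this client below normal interactive work,
        // so the data collection doesn't visibly compete with the user's programs.
        Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.BelowNormal;

        // ... start collecting and reporting machine health here ...
    }
}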
A:
In a client-server setting, if you care about security, you should always program on the assumption that the client may have been compromised. Even if it hasn't, there is always the risk of somebody using an old version of the client, using a competing or modified version of the client, or just of the net connection being a bit screwy.
So while you do as much work on the client as possible, processing and marshalling information into the right form, the server then needs to do a thorough sanity check on anything the client gives it.
So the answer I guess is "both".
A:
The server must analyze all of this
info to determine the health of these
computers
That is probably the biggest clue so far explaning what your application is kinda about. Are you able to provide a more elaborate briefing on what this application is seeking to achieve in this distributed environment? We do not even know if the client-side processing is disk I/O or processor intensive. How you design the solution is dependent on the nature of what needs to be done to help the users/business accomplish their jobs and objectives.
| Decision making in distributed applications | With a distributed application, where you have lots of clients and one main server, should you:
Make the clients dumb and the server smart: clients are fast and non-invasive. Business rules are needed in only 1 place
Make the clients smart and the server dumb: take as much load as possible off of the server
Additional info:
Clients collect tons of data about the computer they are on. The server must analyze all of this info to determine the health of these computers
The owners of the client computers are temperamental and will shut down the clients if the client starts to consume too many resources (thus negating the purpose of the distributed app in helping diagnose problems)
| [
"You should do as much client-side processing as possible. This will enable your application to scale better than doing processing server-side. To solve your temperamental user problem, you could look into making your client processes run at a very low priority so there's no noticeable decrease in performance on the part of the user.\n",
"In a client-server setting, if you care about security, you should always program on the assumption that the client may have been compromised. Even if it hasn't, there is always the risk of somebody using an old version of the client, using a competing or modified version of the client, or just of the net connection being a bit screwy.\nSo while you do as much work on the client as possible, processing and marshalling information into the right form, the server then needs to do a thorough sanity check on anything the client gives it.\nSo the answer I guess is \"both\".\n",
"\nThe server must analyze all of this\n info to determine the health of these\n computers\n\nThat is probably the biggest clue so far explaning what your application is kinda about. Are you able to provide a more elaborate briefing on what this application is seeking to achieve in this distributed environment? We do not even know if the client-side processing is disk I/O or processor intensive. How you design the solution is dependent on the nature of what needs to be done to help the users/business accomplish their jobs and objectives.\n"
] | [
3,
2,
0
] | [] | [] | [
"distributed"
] | stackoverflow_0000035707_distributed.txt |
Q:
Log files in massively distributed systems
I do a lot of work in the grid and HPC space and one of the biggest challenges we have with a system distributed across hundreds (or in some case thousands) of servers is analysing the log files.
Currently log files are written locally to the disk on each blade, but we could also consider publishing logging information using, for example, a UDP Appender and collecting it centrally.
Given that the objective is to be able to identify problems in as close to real time as possible, what should we do?
A:
First, synchronize all clocks in the system using NTP.
Second, if you are collecting the logs in a single location (like the UDP appender you mention) make sure the logs have enough information to actually help. I would include at least the server that generated the log, the time it happened, and the message. If there is any sort of transaction id, or job id type concept, include that also.
Since you mentioned a UDP Appender I am guessing you are using log4j (or one of its siblings). Log4j has an MDC class that allows extra information to be passed along through a processing thread. It can help collect some of the extra information and pass it along.
A:
Are you using Apache? If so, you could have a look at mod_log_spread, though you may have too big an infrastructure to make it maintainable. The other option is to look at "broadcasting" or "multicasting" your log messages and having dedicated logging servers subscribe to those feeds and collate them.
| Log files in massively distributed systems | I do a lot of work in the grid and HPC space and one of the biggest challenges we have with a system distributed across hundreds (or in some case thousands) of servers is analysing the log files.
Currently log files are written locally to the disk on each blade but we could also consider publishing logging information using for example a UDP Appender and collect it centally.
Given that the objective is to be able to identify problems in as close to real time as possible, what should we do?
| [
"First, synchronize all clocks in the system using NTP.\nSecond, if you are collecting the logs in a single location (like the UDP appender you mention) make sure the logs have enough information to actually help. I would include at least the server that generated the log, the time it happened, and the message. If there is any sort of transaction id, or job id type concept, include that also. \nSince you mentioned a UDP Appender I am guessing you are using log4j (or one of it's siblings). Log4j has an MDC class that allows extra information to be passed along through a processing thread. it can help collect some of the extra information and pass it along. \n",
"Are you using Apache? If so you could have a look at mod_log_spread Though you may have too big an infrastructure to make it maintainable. The other option is to look at \"broadcasting\" or \"multicasting\" your log messages and having dedicated logging servers subscribing to those feeds and collating them\n"
] | [
3,
0
] | [] | [] | [
"distributed_computing",
"hpc"
] | stackoverflow_0000035292_distributed_computing_hpc.txt |
Q:
Web server statics repository -or- ZFS vs. NTFS
My goal is to maintain a web file server separately from my main ASP.NET application server for better scalability. The web file server will store a lot of files downloaded by users.
So the question is: Is it worth adopting FreeBSD + Apache + ZFS, or will good old IIS be convenient enough?
A:
I understand you will serve only static files. In this case, lightweight HTTP servers will give you a higher performance for a given machine. The following are well known:
Lighttpd
Thttpd
Nginx
Many more are listed on Wikipedia. There's a more recent article on IBM DeveloperWorks.
A:
It all depends on your skill level and how much load you are getting on your servers.
If you have spare (physical) resources and have the technical skills and experience to maintain production machines running different operating systems, I'd recommend running lighttpd on either Linux or FreeBSD. A light OS install with a static-file-optimized server will perform faster than Apache or IIS on a heavy OS.
However, unless you are extremely comfortable with these solutions, just stick to IIS on Windows. Move the static files to their own machine if you have sufficient load. If you aren't currently thinking about multiple ASP.Net frontends, there's probably no need to spin off the static files yet unless we're talking multiple gigabytes of files.
A:
If you're serving files over the Internet, you might also consider Amazon's S3 service. I've found the rates and reliability to be better than anything I could do (or find) on my own.
| Web server statics repository -or- ZFS vs. NTFS | My goal is to maintain a web file server separately from my main ASP.NET application server for better scalability. The web file server will store a lot of files downloaded by users.
So the question is: Is it worth adopting FreeBSD + Apache + ZFS, or will good old IIS be convenient enough?
| [
"I understand you will serve only static files. In this case, lightweight HTTP servers will give you a higher performance for a given machine. The following are well known:\n\nLighttpd\nThttpd\nNginx\n\nMany more are listed on Wikipedia. There's a more recent article on IBM DeveloperWorks.\n",
"It all depends on your skill level and how much load you are getting on your servers.\nIf you have spare (physical) resources and have the technical skills and experience to maintain production machines running different operating systems, I'd recommend going running lighttpd on either Linux or FreeBSD. A light OS install with a static file optimized server will perform faster than Apache or IIS on a heavy OS.\nHowever, unless you are extremely comfortable with these solutions, just stick to IIS on Windows. Move the static files to their own machine if you have sufficient load. If you aren't currently thinking about multiple ASP.Net frontends, there's probably no need to spin off the static files yet unless we're talking multiple gigabytes of files.\n",
"If you're serving files over the Internet, you might also consider Amazon's S3 service. I've found the rates and reliability to be better than anything I could do (or find) on my own.\n"
] | [
2,
1,
0
] | [] | [] | [
"bsd",
"windows"
] | stackoverflow_0000027784_bsd_windows.txt |
Q:
I don't understand std::tr1::unordered_map
I need an associative container that lets me index a certain object through a string, but that also keeps the order of insertion, so I can look up a specific object by its name or just iterate over it and retrieve the objects in the same order I inserted them.
I think this hybrid of a linked list and a hash map should do the job. Before asking, I tried to use std::tr1::unordered_map, thinking that it worked the way I described, but it didn't. So could someone explain the meaning and behavior of unordered_map to me?
@wesc: I'm sure std::map is implemented by STL, while I'm sure std::hash_map is NOT in the STL (I think older version of Visual Studio put it in a namespace called stdext).
@cristopher: so, if I get it right, the difference is in the implementation (and thus performances), not in the way it behaves externally.
A:
You've asked for the canonical reason why Boost::MultiIndex was made: list insertion order with fast lookup by key. See the Boost MultiIndex tutorial on a list with fast lookup.
A:
You need to index an associative container two ways:
Insertion order
String comparison
Try Boost.MultiIndex or Boost.Intrusive. I haven't used it this way but I think it's possible.
A:
Boost documentation of unordered containers
The difference is in the method of how you generate the look up.
In the map/set containers the operator< is used to generate an ordered tree.
In the unordered containers, an operator( key ) => index is used.
See hashing for a description of how that works.
A:
Sorry, read your last comment wrong. Yes, hash_map is not in STL, map is. But unordered_map and hash_map are the same from what I've been reading.
map -> O(log n) insertion and retrieval; iteration is efficient (and ordered by key comparison)
hash_map/unordered_map -> constant-time insertion and retrieval; iteration time is not guaranteed to be efficient
Neither of these will work for you by themselves, since the map orders things based on the key content, and not the insertion sequence (unless your key contains info about the insertion sequence in it).
You'll have to do either what you described (list + hash_map), or create a key type that has the insertion sequence number plus an appropriate comparison function.
A:
I think that an unordered_map and hash_map are more or less the same thing. The difference is that the STL doesn't officially have a hash_map (what you're using is probably a compiler specific thing), so unordered_map is the fix for that omission.
unordered_map is just that... unordered. You can't depend on it preserving any ordering on iteration.
A:
Are you sure that std::hash_map exists in all STL implementations? SGI STL implements it; GNU g++, however, doesn't have it in std (it's located in the __gnu_cxx namespace), as of 4.3.1 anyway. As far as I know, hash_map has always been non-standard, and now tr1 is fixing that.
| I don't understand std::tr1::unordered_map | I need an associative container that makes me index a certain object through a string, but that also keeps the order of insertion, so I can look for a specific object by its name or just iterate on it and retrieve objects in the same order I inserted them.
I think this hybrid of linked list and hash map should do the job, but before I tried to use std::tr1::unordered_map thinking that it was working in that way I described, but it wasn't. So could someone explain me the meaning and behavior of unordered_map?
@wesc: I'm sure std::map is implemented by STL, while I'm sure std::hash_map is NOT in the STL (I think older version of Visual Studio put it in a namespace called stdext).
@cristopher: so, if I get it right, the difference is in the implementation (and thus performances), not in the way it behaves externally.
| [
"You've asked for the canonical reason why Boost::MultiIndex was made: list insertion order with fast lookup by key. Boost MultiIndex tutorial: list fast lookup\n",
"You need to index an associative container two ways:\n\nInsertion order\nString comparison\n\nTry Boost.MultiIndex or Boost.Intrusive. I haven't used it this way but I think it's possible.\n",
"Boost documentation of unordered containers\nThe difference is in the method of how you generate the look up.\nIn the map/set containers the operator< is used to generate an ordered tree.\nIn the unordered containers, an operator( key ) => index is used.\nSee hashing for a description of how that works.\n",
"Sorry, read your last comment wrong. Yes, hash_map is not in STL, map is. But unordered_map and hash_map are the same from what I've been reading.\nmap -> log (n) insertion, retrieval, iteration is efficient (and ordered by key comparison)\nhash_map/unordered_map -> constant time insertion and retrieval, iteration time is not guarantee to be efficient\nNeither of these will work for you by themselves, since the map orders things based on the key content, and not the insertion sequence (unless your key contains info about the insertion sequence in it).\nYou'll have to do either what you described (list + hash_map), or create a key type that has the insertion sequence number plus an appropriate comparison function.\n",
"I think that an unordered_map and hash_map are more or less the same thing. The difference is that the STL doesn't officially have a hash_map (what you're using is probably a compiler specific thing), so unordered_map is the fix for that omission.\nunordered_map is just that... unordered. You can't depend on it preserving any ordering on iteration.\n",
"You sure that std::hash_map exists in all STL implementations? SGI STL implements it, however GNU g++ doesn't have it (it's located in the __gnu_cxx namespace) as of 4.3.1 anyway. As far as I know, hash_map has always been non-standard, and now tr1 is fixing that.\n"
] | [
17,
7,
4,
2,
0,
0
] | [
"@wesc: STL has std::map... so what's the difference with unordered_map? I don't think STL would implement twice the same thing and call it differently.\n"
] | [
-3
] | [
"c++",
"tr1",
"unordered_map"
] | stackoverflow_0000035950_c++_tr1_unordered_map.txt |
Q:
How do you full-text search multiple criteria on left-joined tables in SQL Server?
I have a query that originally looks like this:
select c.Id, c.Name, c.CountryCode, c.CustomerNumber, cacc.AccountNumber, ca.Line1, ca.CityName, ca.PostalCode
from dbo.Customer as c
left join dbo.CustomerAddress as ca on ca.CustomerId = c.Id
left join dbo.CustomerAccount as cacc on cacc.CustomerId = c.Id
where c.CountryCode = 'XX' and (cacc.AccountNumber like '%C17%' or c.Name like '%op%'
or ca.Line1 like '%ae%' or ca.CityName like '%ab%' or ca.PostalCode like '%10%')
On a database with 90,000 records this query takes around 7 seconds to execute (obviously all the joins and likes are taxing).
I have been trying to find a way to bring the query execution time down with full-text search on the columns concerned. However, I haven't seen an example of a full-text search that has three table joins like this, especially since my join condition is not part of the search term.
Is there a way to do this in full-text search?
@David
Yep, there are indexes on the Ids.
I've tried adding indexes on the CustomerAddress stuff (CityName, PostalCode, etc.) and it brought down the query to 3 seconds, but I still find that too slow for something like this.
Note that all of the text fields (with the exception of the ids) are nvarchars, and Line1 is an nvarchar 1000, so that might affect the speed, but still.
A:
NOTE: This isn't really an answer, just an attempt to clarify what might actually be causing the performance problem(s).
90,000 records is really a fairly small data set and the query is relatively simple with just two joins. Do you have indexes on CustomerAddress.CustomerId and CustomerAccount.CustomerId? That seems more likely to be causing performance issues than the LIKE predicates in the where condition. Are you typically searching to find a match on all of those columns at the same time?
A:
I would echo David's suggestion. You'd probably want to examine how the RDBMS is executing your query (e.g., via table scans or using indexes).
One quick check would be to time just the part of the query involving the text search. Something like this:
SELECT ca.Line1, ca.CityName, ca.PostalCode
FROM CustomerAddress as ca
WHERE ca.CustomerId = <some id number>
AND (ca.Line1 LIKE '%ae%' OR ca.CityName LIKE '%ab%' OR ca.PostalCode LIKE '%10%');
If that takes a long time, then the LIKEs are the issue (remove one expression at a time from the ORed line to see if just one of those columns is causing the slowdown). If it's quick, then the joins are suspect.
You could write a similar query for the CustomerAccount table as well.
A:
Run it through the query analyzer and see what the query plan is. My guess would be that the double-root (i.e. %ae%) searches are causing it to do a table scan when looking for the matching rows. Double-root searches are inherently slow, as you usually can't use any kind of index to match them.
| How do you full-text search multiple criteria on left-joined tables in SQL Server? | I have a query that originally looks like this:
select c.Id, c.Name, c.CountryCode, c.CustomerNumber, cacc.AccountNumber, ca.Line1, ca.CityName, ca.PostalCode
from dbo.Customer as c
left join dbo.CustomerAddress as ca on ca.CustomerId = c.Id
left join dbo.CustomerAccount as cacc on cacc.CustomerId = c.Id
where c.CountryCode = 'XX' and (cacc.AccountNumber like '%C17%' or c.Name like '%op%'
or ca.Line1 like '%ae%' or ca.CityName like '%ab%' or ca.PostalCode like '%10%')
On a database with 90,000 records this query takes around 7 seconds to execute (obviously all the joins and likes are taxing).
I have been trying to find a way to bring the query execution time down with full-text search on the columns concerned. However, I haven't seen an example of a full-text search that has three table joins like this, especially since my join condition is not part of the search term.
Is there a way to do this in full-text search?
@David
Yep, there are indexes on the Ids.
I've tried adding indexes on the CustomerAddress stuff (CityName, PostalCode, etc.) and it brought down the query to 3 seconds, but I still find that too slow for something like this.
Note that all of the text fields (with the exception of the ids) are nvarchars, and Line1 is an nvarchar 1000, so that might affect the speed, but still.
| [
"NOTE: This isn't really an answer, just an attempt to clarify what might actually be causing the performance problem(s).\n90,000 records is really a fairly small data set and the query is relatively simple with just two join. Do you have indexes on CustomerAddress.CustomerId and CustomerAccount.CustomerId? That seems more likely to be causing performance issues than the where condition LIKE predicates. Are you typically searching to find a match on all of those columns at the same time?\n",
"I would echo David's suggestion. You'd probably want to examine how the RDBMS is executing your query (e.g., via table scans or using indexes).\nOne quick check would be to time just the part of the query involving the text search. Something like this:\nSELECT ca.Line1, ca.CityName, ca.PostalCode\nFROM CustomerAddress as ca\nWHERE ca.CustomerId = <some id number>\nAND (ca.Line1 LIKE '%ae%' OR ca.CityName LIKE '%ab%' OR ca.PostalCode LIKE '%10%');\n\nIf that takes a long time, then the LIKEs are the issue (remove one expression at a time from the ORed line to see if just one of those columns is causing the slowdown). If it's quick, then the joins are suspect.\nYou could write a similar query for the CustomerAccount table as well.\n",
"Run it through the query analyzer and see what the query plan is. My guess would be that the double root (ie. %ae%) searches are causing it do do a table scan when looking for the matching rows. Double root searches are inherently slow, as you can't use any kind of index to match them usually.\n"
] | [
1,
1,
1
] | [] | [] | [
"full_text_search",
"sql_server"
] | stackoverflow_0000035954_full_text_search_sql_server.txt |
Q:
Yaws uses old config file
I'm developing a web app on Yaws 1.65 (installed through apt) running on Debian etch on a VPS with UML. Whenever I do /etc/init.d/yaws restart or a stop/start, it initializes according to an old version of the config file (/etc/yaws/yaws.conf).
I know this because I changed the docroot from the default to another directory (call it A), then a few weeks later changed it to directory B, and the config file has stayed with B for the last several months. But then, after a restart, it switches back to A. If it switched back to the package default, that would be understandable, but it switches to an old customized version instead.
The funny thing is that if I leave it stopped for several minutes, when I start it again, everything switches back to normal (using directory B). But while it's stopped, if I run ps, I don't see any yaws-related processes (yaws, heart, etc). This problem has survived several reboots, so it's got to be an old cached copy of the config somewhere, but I have yet to find anything like that.
Any idea what could be going on?
Update:
@Gorgapor - I stopped yaws, renamed the config file and tried to start it again. It failed to start. However, I was able to restart a couple of times and this time it didn't switch back to the old version.
A:
I'm completely inexperienced with yaws, but I have a troubleshooting suggestion: What happens if you remove the config file completely? If it still starts yaws without a config file, that could be a clear sign that something is being cached.
For what it's worth, with a quick 5 minutes of googling, I found no mention of any caching behavior.
| Yaws uses old config file | I'm developing a web app on Yaws 1.65 (installed through apt) running on Debian etch on a VPS with UML. Whenever I do /etc/init.d/yaws restart or a stop/start, it initializes according to an old version of the config file (/etc/yaws/yaws.conf).
I know this because I changed the docroot from the default to another directory (call it A), then a few weeks later changed it to directory B, and the config file has stayed with B for the last several months. But then, after a restart, it switches back to A. If it switched back to the package default, that would be understandable, but it switches to an old customized version instead.
The funny thing is that if I leave it stopped for several minutes, when I start it again, everything switches back to normal (using directory B). But while it's stopped, if I run ps, I don't see any yaws-related processes (yaws, heart, etc). This problem has survived several reboots, so it's got to be an old cached copy of the config somewhere, but I have yet to find anything like that.
Any idea what could be going on?
Update:
@Gorgapor - I stopped yaws, renamed the config file and tried to start it again. It failed to start. However, I was able to restart a couple of times and this time it didn't switch back to the old version.
| [
"I'm completely inexperienced with yaws, but I have a troubleshooting suggestion: What happens if you remove the config file completely? If it still starts yaws without a config file, that could be a clear sign that something is being cached.\nFor what it's worth, with a quick 5 minutes of googling, I found no mention of any caching behavior.\n"
] | [
1
] | [] | [] | [
"yaws"
] | stackoverflow_0000036030_yaws.txt |
Q:
How do you test/change untested and untestable code?
Lately I had to change some code on older systems where not all of the code has unit tests.
Before making the changes I want to write tests, but each class created a lot of dependencies and other anti-patterns which made testing quite hard.
Obviously, I wanted to refactor the code to make it easier to test, write the tests and then change it.
Is this the way you'd do it? Or would you spend a lot of time writing the hard-to-write tests that would be mostly removed after the refactoring will be completed?
A:
First of all, here's a great article with tips on unit testing. Secondly, I found that a great way to avoid making tons of changes in old code is to just refactor it a little until you can test it. One easy way to do this is to make private members protected and then override the protected member in a test subclass.
For example, let's say you have a class that loads some stuff from the database during the constructor. You can't override the constructor itself, but you can extract the DB logic into a protected method and then override that method in the test.
public class MyClass {
public MyClass() {
// undesirable DB logic
}
}
becomes
public class MyClass {
public MyClass() {
loadFromDB();
}
protected void loadFromDB() {
// undesirable DB logic
}
}
and then your test looks something like this:
public class MyClassTest {
public void testSomething() {
MyClass myClass = new MyClassWrapper();
// test it
}
private static class MyClassWrapper extends MyClass {
@Override
protected void loadFromDB() {
// some mock logic
}
}
}
This is somewhat of a bad example, because you could use DBUnit in this case, but I actually did this in a similar case recently because I wanted to test some functionality totally unrelated to the data being loaded, so it was very effective. I've also found such exposing of members to be useful in other similar cases where I need to get rid of some dependency that has been in a class for a long time.
I would recommend against this solution if you are writing a framework though, unless you really don't mind exposing the members to users of your framework.
It's a bit of a hack, but I've found it quite useful.
A:
@valters
I disagree with your statement that tests shouldn't break the build. The tests should be an indication that the application doesn't have new bugs introduced for the functionality that is tested (and a found bug is an indication of a missing test).
If tests don't break the build, then you can easily run into the situation where new code breaks the build and it isn't known for a while, even though a test covered it. A failing test should be a red flag that either the test or the code has to be fixed.
Furthermore, allowing the tests to not break the build will cause the failure rate to slowly creep up, to the point where you no longer have a reliable set of regression tests.
If there is a problem with tests breaking too often, it may be an indication that the tests are being written in too fragile a manner (dependence on resources that could change, such as the database without using DB Unit properly, or an external web service that should be mocked), or it may be an indication that there are developers in the team that don't give the tests proper attention.
I firmly believe that a failing test should be fixed ASAP, just as you would fix code that fails to compile ASAP.
A:
I am not sure why you would say that unit tests are going to be removed once refactoring is completed. Actually your unit-test suite should run after the main build (you can create a separate "tests" build that just runs the unit tests after the main product is built). Then you will immediately see if changes in one piece break the tests in another subsystem. Note it's a bit different from running tests during the build (as some may advocate) - some limited testing is useful during the build, but usually it's unproductive to "crash" the build just because some unit test happens to fail.
If you are writing Java (chances are), check out http://www.easymock.org/ - may be useful for reducing coupling for the test purposes.
A:
I have read Working Effectively With Legacy Code, and I agree it is very useful for dealing with "untestable" code.
Some techniques only apply to compiled languages (I'm working on "old" PHP apps), but I would say most of the book is applicable to any language.
Refactoring books sometimes assume the code is in semi-ideal or "maintenance aware" state before refactoring, but the systems I work on are less than ideal and were developed as "learn as you go" apps, or as first apps for some technologies used (and I don't blame the initial developers for that, since I'm one of them), so there are no tests at all, and code is sometimes messy. This book addresses this kind of situation, whereas other refactoring books usually don't (well, not to this extent).
I should mention that I haven't received any money from the publisher or the author of this book ;), but I found it very interesting, since resources are lacking in the field of legacy code (and particularly in my language, French, but that's another story).
| How do you test/change untested and untestable code? | Lately I had to change some code on older systems where not all of the code has unit tests.
Before making the changes I want to write tests, but each class created a lot of dependencies and other anti-patterns which made testing quite hard.
Obviously, I wanted to refactor the code to make it easier to test, write the tests and then change it.
Is this the way you'd do it? Or would you spend a lot of time writing the hard-to-write tests that would be mostly removed after the refactoring will be completed?
| [
"First of all, here's a great article with tips on unit testing. Secondly, I found a great way to avoid making tons of changes in old code is to just refactor it a little until you can test it. One easy way to do this is to make private members protected, and then override the protected field.\nFor example, let's say you have a class that loads some stuff from the database during the constructor. In this case, you can't just override a protected method, but you can extract the DB logic to a protected field and then override it in the test.\npublic class MyClass {\n public MyClass() {\n // undesirable DB logic\n }\n}\n\nbecomes\npublic class MyClass {\n public MyClass() {\n loadFromDB();\n }\n\n protected void loadFromDB() {\n // undesirable DB logic\n }\n}\n\nand then your test looks something like this:\npublic class MyClassTest {\n public void testSomething() {\n MyClass myClass = new MyClassWrapper();\n // test it\n }\n\n private static class MyClassWrapper extends MyClass {\n @Override\n protected void loadFromDB() {\n // some mock logic\n }\n }\n}\n\nThis is somewhat of a bad example, because you could use DBUnit in this case, but I actually did this in a similar case recently because I wanted to test some functionality totally unrelated to the data being loaded, so it was very effective. I've also found such exposing of members to be useful in other similar cases where I need to get rid of some dependency that has been in a class for a long time.\nI would recommend against this solution if you are writing a framework though, unless you really don't mind exposing the members to users of your framework.\nIt's a bit of a hack, but I've found it quite useful.\n",
"@valters\nI disagree with your statement that tests shouldn't break the build. The tests should be an indication that the application doesn't have new bugs introduced for the functionality that is tested (and a found bug is an indication of a missing test).\nIf tests don't break the build, then you can easily run into the situation where new code breaks the build and it isn't known for a while, even though a test covered it. A failing test should be a red flag that either the test or the code has to be fixed.\nFurthermore, allowing the tests to not break the build will cause the failure rate to slowly creep up, to the point where you no longer have a reliable set of regression tests.\nIf there is a problem with tests breaking too often, it may be an indication that the tests are being written in too fragile a manner (dependence on resources that could change, such as the database without using DB Unit properly, or an external web service that should be mocked), or it may be an indication that there are developers in the team that don't give the tests proper attention.\nI firmly believe that a failing test should be fixed ASAP, just as you would fix code that fails to compile ASAP.\n",
"I am not sure why would you say that unit tests are going be removed once refactoring is completed. Actually your unit-test suite should run after main build (you can create a separate \"tests\" build, that just runs the unit tests after the main product is built). Then you will immediately see if changes in one piece break the tests in other subsystem. Note it's a bit different than running tests during build (as some may advocate) - some limited testing is useful during build, but usually it's unproductive to \"crash\" the build just because some unit test happens to fail.\nIf you are writing Java (chances are), check out http://www.easymock.org/ - may be useful for reducing coupling for the test purposes.\n",
"I have read Working Effectively With Legacy Code, and I agree it is very useful for dealing with \"untestable\" code.\nSome techniques only apply to compiled languages (I'm working on \"old\" PHP apps), but I would say most of the book is applicable to any language.\nRefactoring books sometimes assume the code is in semi-ideal or \"maintenance aware\" state before refactoring, but the systems I work on are less than ideal and were developed as \"learn as you go\" apps, or as first apps for some technologies used (and I don't blame the initial developers for that, since I'm one of them), so there are no tests at all, and code is sometimes messy. This book addresses this kind of situation, whereas other refactoring books usually don't (well, not to this extent).\nI should mention that I haven't received any money from the editor nor author of this book ;), but I found it very interesting, since resources are lacking in the field of legacy code (and particularly in my language, French, but that's another story).\n"
] | [
6,
3,
0,
0
] | [] | [] | [
"legacy",
"refactoring",
"unit_testing"
] | stackoverflow_0000007287_legacy_refactoring_unit_testing.txt |
Q:
Trouble with parallel make not always starting another job when one finishes
I'm working on a system with four logical CPS (two dual-core CPUs if it matters). I'm using make to parallelize twelve trivially parallelizable tasks and doing it from cron.
The invocation looks like:
make -k -j 4 -l 3.99 -C [dir] [12 targets]
The trouble I'm running into is that sometimes one job will finish but the next one won't start up, even though it shouldn't be stopped by the load average limiter. Each target takes about four hours to complete and I'm wondering if this might be part of the problem.
Edit: Sometimes a target does fail but I use the -k option to have the rest of the make still run. I haven't noticed any correlation with jobs failing and the next job not starting.
A:
I'd drop the '-l'
If all you plan to run on the system is this build, I think the -j 4 does what you want.
Based on my memory, if you have anything else running (crond?), that can push the load average over 4.
GNU make ref
A:
Does make think one of the targets is failing? If so, it will stop the make after the running jobs finish. You can use -k to tell it to continue even if an error occurs.
A:
@BCS
I'm 99.9% sure that the -l isn't causing the problem because I can watch the load average on the machine and it drops down to about three and sometimes as low as one (!) without starting the next job.
| Trouble with parallel make not always starting another job when one finishes | I'm working on a system with four logical CPS (two dual-core CPUs if it matters). I'm using make to parallelize twelve trivially parallelizable tasks and doing it from cron.
The invocation looks like:
make -k -j 4 -l 3.99 -C [dir] [12 targets]
The trouble I'm running into is that sometimes one job will finish but the next one won't startup even though it shouldn't be stopped by the load average limiter. Each target takes about four hours to complete and I'm wondering if this might be part of the problem.
Edit: Sometimes a target does fail but I use the -k option to have the rest of the make still run. I haven't noticed any correlation with jobs failing and the next job not starting.
| [
"I'd drop the '-l'\nIf all you plan to run the the system is this build I think the -j 4 does what you want.\nBased on my memory, if you have anything else running (crond?), that can push the load average over 4.\nGNU make ref\n",
"Does make think one of the targets is failing? If so, it will stop the make after the running jobs finish. You can use -k to tell it to continue even if an error occurs.\n",
"@BCS\nI'm 99.9% sure that the -l isn't causeing the problem because I can watch the load average on the machine and it drops down to about three and sometimes as low as one (!) without starting the next job.\n"
] | [
1,
0,
0
] | [] | [] | [
"makefile"
] | stackoverflow_0000035407_makefile.txt |
Q:
How to assign a method's output to a textbox value without code behind
How do I assign a method's output to a textbox value without code behind?
<%@ Page Language="VB" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<script runat="server">
Public TextFromString As String = "test text test text"
Public TextFromMethod As String = RepeatChar("S", 50) 'SubSonic.Sugar.Web.GenerateLoremIpsum(400, "w")
Public Function RepeatChar(ByVal Input As String, ByVal Count As Integer)
Return New String(Input, Count)
End Function
</script>
<html xmlns="http://www.w3.org/1999/xhtml">
<head id="Head1" runat="server">
<title>Test Page</title>
</head>
<body>
<form id="form1" runat="server">
<div>
<%=TextFromString%>
<br />
<asp:TextBox ID="TextBox1" runat="server" Text="<%# TextFromString %>"></asp:TextBox>
<br />
<%=TextFromMethod%>
<br />
<asp:TextBox ID="TextBox2" runat="server" Text="<%# TextFromMethod %>"></asp:TextBox>
</div>
</form>
</body>
</html>
it was mostly so the designer guys could use it in the aspx page. Seems like a simple thing to push a variable value into a textbox to me.
It's also confusing to me why
<asp:Label runat="server" ID="label1"><%=TextFromString%></asp:Label>
and
<asp:TextBox ID="TextBox3" runat="server">Hello</asp:TextBox>
works but
<asp:TextBox ID="TextBox4" runat="server"><%=TextFromString%></asp:TextBox>
causes a compilation error.
A:
There's a couple of different expression types in .ASPX files. There's:
<%= TextFromMethod %>
which simply reserves a literal control, and outputs the text at render time.
and then there's:
<%# TextFromMethod %>
which is a databinding expression, evaluated when the control is DataBound(). There's also expression builders, like:
<%$ ConnectionStrings:Database %>
but that's not really important here....
So, the <%= %> method won't work because it would try to insert a Literal into the .Text property...obviously, not what you want.
The <%# %> method doesn't work because the TextBox isn't DataBound, nor are any of its parents. If your TextBox were in a Repeater or GridView, then this method would work.
So - what to do? Just call TextBox.DataBind() at some point. Or, if you have more than 1 control, just call Page.DataBind() in your Page_Load.
Protected Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Load
    If Not IsPostBack Then
        Me.DataBind()
    End If
End Sub
A:
Have you tried using an HTML control instead of the server control? Does it also cause a compilation error?
<input type="text" id="TextBox4" runat="server" value="<%=TextFromString%>" />
| How to assign a method's output to a textbox value without code behind | How do I assign a method's output to a textbox value without code behind?
<%@ Page Language="VB" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<script runat="server">
Public TextFromString As String = "test text test text"
Public TextFromMethod As String = RepeatChar("S", 50) 'SubSonic.Sugar.Web.GenerateLoremIpsum(400, "w")
Public Function RepeatChar(ByVal Input As String, ByVal Count As Integer)
Return New String(Input, Count)
End Function
</script>
<html xmlns="http://www.w3.org/1999/xhtml">
<head id="Head1" runat="server">
<title>Test Page</title>
</head>
<body>
<form id="form1" runat="server">
<div>
<%=TextFromString%>
<br />
<asp:TextBox ID="TextBox1" runat="server" Text="<%# TextFromString %>"></asp:TextBox>
<br />
<%=TextFromMethod%>
<br />
<asp:TextBox ID="TextBox2" runat="server" Text="<%# TextFromMethod %>"></asp:TextBox>
</div>
</form>
</body>
</html>
it was mostly so the designer guys could use it in the aspx page. Seems like a simple thing to push a variable value into a textbox to me.
It's also confusing to me why
<asp:Label runat="server" ID="label1"><%=TextFromString%></asp:Label>
and
<asp:TextBox ID="TextBox3" runat="server">Hello</asp:TextBox>
works but
<asp:TextBox ID="TextBox4" runat="server"><%=TextFromString%></asp:TextBox>
causes a compilation error.
| [
"There's a couple of different expression types in .ASPX files. There's:\n<%= TextFromMethod %>\n\nwhich simply reserves a literal control, and outputs the text at render time.\nand then there's:\n<%# TextFromMethod %>\n\nwhich is a databinding expression, evaluated when the control is DataBound(). There's also expression builders, like:\n<%$ ConnectionStrings:Database %>\n\nbut that's not really important here....\nSo, the <%= %> method won't work because it would try to insert a Literal into the .Text property...obviously, not what you want.\nThe <%# %> method doesn't work because the TextBox isn't DataBound, nor are any of it's parents. If your TextBox was in a Repeater or GridView, then this method would work.\nSo - what to do? Just call TextBox.DataBind() at some point. Or, if you have more than 1 control, just call Page.DataBind() in your Page_Load.\nPrivate Function Page_Load(sender as Object, e as EventArgs)\n If Not IsPostback Then\n Me.DataBind()\n End If\nEnd Function\n\n",
"Have you tried using an HTML control instead of the server control? Does it also cause a compilation error?\n<input type=\"text\" id=\"TextBox4\" runat=\"server\" value=\"<%=TextFromString%>\" />\n\n"
] | [
2,
1
] | [] | [] | [
"asp.net",
"vb.net"
] | stackoverflow_0000036028_asp.net_vb.net.txt |
Q:
Is using PHP accelerators such as MMCache or Zend Accelerator making PHP faster?
Does anybody have experience working with PHP accelerators such as MMCache or Zend Accelerator? I'd like to know if using either of these makes PHP comparable to faster web-technologies. Also, are there trade offs for using these?
A:
Note that Zend Optimizer and MMCache (or similar applications) are totally different things. While Zend Optimizer tries to optimize the program opcode, MMCache will cache the scripts in memory and reuse the precompiled code.
I did some benchmarks some time ago and you can find the results in my blog (in German though). The basic results:
Zend Optimizer alone didn't help at all. Actually my scripts were slower than without the optimizer.
When it comes to caches (fastest first):
* eAccelerator
* XCache
* APC
And: You DO want to install an opcode cache!
For example:
[Benchmark chart: http://blogs.interdose.com/dominik/wp-content/uploads/2008/04/opcode_wordpress.png]
This is the duration it took to call the WordPress homepage 10,000 times.
Edit: BTW, eAccelerator contains an optimizer itself.
A:
MMCache has been deprecated. I recommend either http://pecl.php.net/package/APC or http://xcache.lighttpd.net/, both of which also give you variable storage (like Memcache).
A:
Both are interesting and will provide a speed boost, since they compile source code into a binary representation which is then executed by the PHP engine.
Any huge web site running with PHP (Facebook for example) is running some sort of opcode cache system like MMCache.
The problem is that they are not very easy to set up depending on your system.
A:
Depending on how much of your PHP code is actually executed and how long that execution takes, they can be a really big win. It certainly isn't going to hurt, but the gain you see will very much depend on where your time is currently spent.
btw mmcache has been rolled into a different project now, I forget the name but Google will tell you.
A:
I use APC on my production servers and it works pretty well out of the box. Compile it and add it to PHP and there isn't much tweaking left to do for it. I check it every once in a while just to review stats but since I use MVC a lot all of the main files (routers, controllers, etc) rarely change on a day-to-day basis so that code stays compiled and runs pretty efficiently.
A:
Currently we use APC; it's free and was just simple plug-and-play on our live servers. It provided a huge performance increase for our site, especially as the project size increased. I also have apc.stat disabled so it doesn't check whether the code has been updated, so whenever we need to update the code on the live site we restart Apache.
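For reference, that switch is a single directive in APC's ini file. A minimal example, assuming a typical /etc/php.d/apc.ini location (the path varies by distro, and the web server has to be restarted or reloaded after each deploy once stat checks are off):
; skip the per-request stat() check on source files
apc.stat = 0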
A:
I use APC, and can attest that it can dramatically reduce the CPU and I/O load on an app server if you maintain a high cache-hit rate. It not only saves you from having to compile, it can save you from having to read the PHP files from disk at all (i.e. the bytecodes are served directly from main memory, so it's super fast). It lowers the time to render a single page, and increases the requests per second your server can handle.
If you use RedHat or CentOS, installing APC is super simple:
yum install php-devel httpd-devel php-pear
pecl install apc
echo "extension=apc.so" > /etc/php.d/apc.ini
# if you're using SELinux:
chcon "system_u:object_r:textrel_shlib_t" /usr/lib/php/modules/apc.so
/etc/init.d/httpd restart
You asked about downsides. The only downside is that it requires some memory. The default on APC is 30MB, but it can be adjusted, and the cost of a little bit of memory more than pays for itself with the increased speed and response rate.
A:
BlaM's testing included all the DB calls made by WordPress. When you're making fewer DB calls, you'll see the performance gain of opcode caches be even more dramatic.
A:
Have you checked out Phalanger? It compiles PHP to .NET code. Here are some benchmarks which show that it can dramatically improve performance.
A:
I used Zend Accelerator a little back in the day (2004-ish). It certainly gave some significant performance wins on code it could work with, but unfortunately the system I was using was designed to quite often dynamically load code and then eval it, which Zend Accelerator couldn't do much with at the time (and I'd guess still can't).
On the down side, we certainly saw some caching issues (where the code would be changed, but the compiled version wouldn't sync with the change for one reason or another). I imagine those problems have likely been ironed out by now.
Anyway, I don't have any hard comparison numbers, and certainly didn't write the same system in different environments for comparison, but for the vast majority of systems, PHP isn't going to kill you performance wise.
| Is using PHP accelerators such as MMCache or Zend Accelerator making PHP faster? | Does anybody have experience working with PHP accelerators such as MMCache or Zend Accelerator? I'd like to know if using either of these makes PHP comparable to faster web-technologies. Also, are there trade offs for using these?
| [
"Note that Zend Optimizer and MMCache (or similar applications) are totally different things. While Zend Optimizer tries to optimize the program opcode MMCache will cache the scripts in memory and reuse the precompiled code.\nI did some benchmarks some time ago and you can find the results in my blog (in German though). The basic results:\nZend Optimizer alone didn't help at all. Actually my scripts were slower than without optimizer.\nWhen it comes to caches:\n* fastest: eAccelerator\n* XCache\n* APC\nAnd: You DO want to install a opcode cache!\nFor example:\nalt text http://blogs.interdose.com/dominik/wp-content/uploads/2008/04/opcode_wordpress.png\nThis is the duration it took to call the wordpress homepage 10.000 times.\nEdit: BTW, eAccelerator contains an optimizer itself.\n",
"MMCache has been deprecated. I recommend either http://pecl.php.net/package/APC or http://xcache.lighttpd.net/, both of which also give you variable storage (like Memcache).\n",
"Both are interesting and will provide speed boost since they compile source code into binary representation which is then executed by the PHP engine.\nAny huge web site running with PHP (Facebook for example) is running some sort of opcode cache system like MMCache.\nThe problem is that they are not very easy to set up depending on your system.\n",
"Depending on how much of your PHP code is actually executed and how long that execution takes they can be a really big win. It certainly isn't going to hurt, but the gain you see will very much depend on where your time is currently spent.\nbtw mmcache has been rolled into a different project now, I forget the name but Google will tell you.\n",
"I use APC on my production servers and it works pretty well out of the box. Compile it and add it to PHP and there isn't much tweaking left to do for it. I check it every once in a while just to review stats but since I use MVC a lot all of the main files (routers, controllers, etc) rarely change on a day-to-day basis so that code stays compiled and runs pretty efficiently. \n",
"currently we use apc, free and was just a simple plug and play on our live servers. Provided a huge performance increase for our site, especially as the project size increased. I also have the apc.stat disabled so it doesn't check if the code has been updated, so whenever we need to update the code on the live site we restart apache.\n",
"I use APC, and can attest that it can dramatically reduce the CPU and I/O load on an app server if you maintain a high cache-hit rate. It not only saves you from having to compile, it can save you from having to read the php files from disk at all. (i.e. the bytecodes are served directly from main memory, so it's super fast) It lowers the speed to render a single page, and increases the requests per second your server can handle.\nIf you use RedHat or CentOS, installing APC is super simple:\nyum install php-devel httpd-devel php-pear\npecl install apc \necho \"extension=apc.so\" > /etc/php.d/apc.ini\n# if you're using SELinux:\nchcon \"system_u:object_r:textrel_shlib_t\" /usr/lib/php/modules/apc.so\n/etc/init.d/httpd restart\n\nYou asked about downsides. The only downside is that it requires some memory. The default on APC is 30MB, but it can be adjusted, and the cost of a little bit of memory more than pays for itself with the increased speed and response rate.\n",
"BlaM's testing included all the DB calls made by WordPress. When you're making fewer DB calls, you'll see the performance gain of opcode caches be even more dramatic.\n",
"Have you checked out Phalanger? It compiles PHP to .NET code. Here are some benchmarks which show that it can dramatically improve performance.\n",
"I used Zend Accelerator a little back in the day (2004-ish). It certainly gave some significant performance wins on code it could work with, but unfortunately the system I was using was designed to quite often dynamically load code and then eval it, which Zend Accelerator couldn't do much with at the time (and I'd guess still can't).\nOn the down side, we certainly saw some caching issues (where the code would be changes, but the compiled version sync with the change for one reason or another). I imagine those problems have likely been ironed out by now.\nAnyway, I don't have any hard comparison numbers, and certainly didn't write the same system in different environments for comparison, but for the vast majority of systems, PHP isn't going to kill you performance wise.\n"
] | [
13,
5,
3,
2,
2,
1,
1,
1,
0,
0
] | [] | [] | [
"caching",
"php",
"zend_optimizer"
] | stackoverflow_0000012936_caching_php_zend_optimizer.txt |