| content (string, 86 to 88.9k chars) | title (string, 0 to 150 chars) | question (string, 1 to 35.8k chars) | answers (sequence) | answers_scores (sequence) | non_answers (sequence) | non_answers_scores (sequence) | tags (sequence) | name (string, 30 to 130 chars) |
---|---|---|---|---|---|---|---|---|
Q:
Which resolution to target for a Mobile App?
When designing UI for mobile apps in general, which resolution could be considered safe as a general rule of thumb? My interest lies specifically in web-based apps. The iPhone has a pretty high resolution for a handheld, and the Nokia E Series seems to be oriented differently. Is 240×320 still considered safe?
A:
Not enough information...
You say you're targeting a "Mobile App" but the reality is that mobile could mean anything from a cell phone with 128x128 resolution to a MID with 800x600 resolution.
There is no "safe" resolution for such a wide range, and if you're truly targeting all of them you need to design a custom interface for each major resolution. Add some scaling factors in and you might be able to cut it down to 5-8 different interface designs.
Further, the UI means "User Interface" and includes a lot more than just the resolution - you can't count on a touchscreen, full keyboard, or even software keys.
You need to either better define your target, or explain your target here so we can better help you.
Keep in mind that there are millions of phone users that don't have PDA resolutions, and you can really only count on 128x128 or better to cover the majority of technically inclined cell phone users (those that know there's a web browser in their phone, nevermind those that use it).
But if you're prepared to accept these losses, go ahead and hit for 320x240 and 240x320. That will give you most current PDA phones and up (older blackberries and palm devices had smaller square orientations). Plan on spending time later supporting lower resolution devices and above all...
Do not tie your app to a particular resolution.
Make sure your app is flexible enough that you can deploy new UI's without changing internal application logic - in other words separate the presentation from the core logic. You will find this very useful later - the mobile world changes daily. Once you gauge how your app is being used you can, for instance, easily deploy an iPhone specific version that is pixel perfect (and prettier than an upscaled 320x240) in order to engage more users. Being able to do this in a few hours (because you don't have to change the internals) is going to put you miles ahead of the competition if someone else makes a swipe at your market.
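For a web-based UI in particular, one cheap way to honor "do not tie your app to a particular resolution" is a fluid stylesheet, so the same markup adapts to whatever screen it lands on. A minimal sketch (the selectors and the breakpoint are illustrative only, not taken from the answer itself):
/* Size layout blocks relative to the screen instead of fixed pixels */
#content { width: 100%; }
#nav, #main { width: 100%; float: none; }
/* On wider handhelds, allow a simple two-column layout */
@media screen and (min-width: 320px) {
  #nav  { width: 30%; float: left; }
  #main { width: 70%; float: left; }
}
/* Keep images from forcing horizontal scrolling on small screens */
img { max-width: 100%; }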
-Adam
A:
Right now I believe it would make sense for me to target about 2 resolutions and later learn my customers' needs through feedback?
It's a chicken and egg problem.
Ideally before you develop the product you already know what your customers use/need.
Often not even the customers know what they need until they use something (and more often than not you find out what they don't need rather than what they need).
So in this case, yes, spend a little bit of time developing a prototype app that you can send out there to a few people and get feedback. They will have better feedback because they can try it out, and you will have a springboard to start from. The ability to quickly release UI updates without changing core logic will allow you to test several interfaces quickly without a huge time investment.
Further, to customers you will seem really responsive to their needs, which will be a big benefit to people whose jobs depend on reaction time.
-Adam
A:
You mentioned Web based apps. Any particular framework you have in mind?
In many cases, WALL seems to help to a large extent.
Here's one article, Adapting to User Devices Using Mobile Web Technology, which exploits WALL.
| Which resolution to target for a Mobile App? | When desinging UI for mobile apps in general which resolution could be considered safe as a general rule of thumb. My interest lies specifically in web based apps. The iPhone has a pretty high resolution for a hand held, and the Nokia E Series seem to oriented differently. Is 240×320 still considered safe?
| [
"Not enough information...\nYou say you're targeting a \"Mobile App\" but the reality is that mobile could mean anything from a cell phone with 128x128 resolution to a MID with 800x600 resolution.\nThere is no \"safe\" resolution for such a wide range, and if you're truly targeting all of them you need to design a custom interface for each major resolution. Add some scaling factors in and you might be able to cut it down to 5-8 different interface designs.\nFurther, the UI means \"User Interface\" and includes a lot more than just the resolution - you can't count on a touchscreen, full keyboard, or even software keys.\nYou need to either better define your target, or explain your target here so we can better help you.\nKeep in mind that there are millions of phone users that don't have PDA resolutions, and you can really only count on 128x128 or better to cover the majority of technically inclined cell phone users (those that know there's a web browser in their phone, nevermind those that use it).\nBut if you're prepared to accept these losses, go ahead and hit for 320x240 and 240x320. That will give you most current PDA phones and up (older blackberries and palm devices had smaller square orientations). Plan on spending time later supporting lower resolution devices and above all...\nDo not tie your app to a particular resolution.\nMake sure your app is flexible enough that you can deploy new UI's without changing internal application logic - in other words separate the presentation from the core logic. You will find this very useful later - the mobile world changes daily. Once you gauge how your app is being used you can, for instance, easily deploy an iPhone specific version that is pixel perfect (and prettier than an upscaled 320x240) in order to engage more users. Being able to do this in a few hours (because you don't have to change the internals) is going to put you miles ahead of the competition if someone else makes a swipe at your market.\n-Adam\n",
"\nRight now I believe it would make sense for me to target about 2 resolutions and latter learn my customers best needs through feedback?\n\nIt's a chicken and egg problem. \nIdeally before you develop the product you already know what your customers use/need.\nOften not even the customers know what they need until they use something (and more often than not you find out what they don't need rather than what they need).\nSo in this case, yes, spend a little bit of time developing a prototype app that you can send out there to a few people and get feedback. They will have better feedback because they can try it out, and you will have a springboard to start from. The ability to quickly release UI updates without changing core logic will allow you test several interfaces quickly without a huge time investment.\nFurther, to customers you will seem really responsive to their needs, which will be a big benefit to people who's jobs depend on reaction time.\n-Adam\n",
"You mentioned Web based apps. Any particular framework you have in mind?\nIn many cases, WALL seems to help to large extent.\nHere's one Article, Adapting to User Devices Using Mobile Web Technology exploiting WALL.\n"
] | [
3,
2,
1
] | [] | [] | [
"mobile",
"web_applications"
] | stackoverflow_0000027948_mobile_web_applications.txt |
Q:
Optimizing for low bandwidth
I am charged with designing a web application that displays very large geographical data. And one of the requirements is that it should be optimized so that PCs still on dial-up, which is common in the suburbs of my country, could use it as well.
Now I am permitted to use Flash and/or Silverlight if that will help with the limited development time and user experience.
The heavy part of the geographical data is chunked into tiles and loaded like map tiles in Google Maps, but that means I need a lot of HTTP requests.
Should I go with just javascript + HTML? Or would I end up with a faster application with Flash/Silverlight, since I can do some complex algorithms in those two technologies (like DeepZoom)? Deploying a desktop app, though, is out of the question since we don't have that much in maintenance funds.
It just needs to be fast... really fast..
p.s. faster is in the sense of "download faster"
A:
Is something like Gears acceptable? This will let you store data locally to limit re-requests.
I would also stay away from flash and Silverlight and go straight to javascript/AJAX. jQuery is a ton-O-fun.
A:
I would suggest you look into Silverlight and DeepZoom
A:
I don't think you'll find Flash or Silverlight is going to help too much for this application. Either way you're going to be utilizing tiled images and the images are going to be the same size in both scenarios. Using Flash or Silverlight may allow you to add some neat animations to the application but anything you gain here will be additional overhead for your clients on dialup connections. I'd stick with plain Javascript/HTML.
A:
You may also want to look at asynchronously downloading your tiles via one of the Ajax libraries available. Let's say your user can view 9 tiles at a time and scroll/zoom. Download those 9 tiles they can see plus whatever is needed to handle the zoom for those tiles on the first load; then you'll need to play around with caching strategies for prefetching other information asynchronously.
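A rough sketch of that idea in plain JavaScript (the tileUrl helper and the 2-second delay are illustrative assumptions, not part of the answer): fetch the visible 3x3 block immediately, then speculatively prefetch the surrounding ring a moment later so it is likely cached when the user pans.
// tileUrl(x, y, zoom) is an assumed helper that returns the URL of one tile image.
function loadVisibleThenPrefetch(centerX, centerY, zoom, tileUrl) {
  var cache = [];   // keep references so the browser finishes the downloads
  function fetch(x, y) {
    var img = new Image();
    img.src = tileUrl(x, y, zoom);   // Image downloads happen asynchronously
    cache.push(img);
  }
  // 1. the 3x3 block the user can actually see
  for (var dx = -1; dx <= 1; dx++)
    for (var dy = -1; dy <= 1; dy++)
      fetch(centerX + dx, centerY + dy);
  // 2. a little later, prefetch the surrounding ring; the delay keeps it
  //    from competing with the visible tiles for dial-up bandwidth
  setTimeout(function () {
    for (var dx = -2; dx <= 2; dx++)
      for (var dy = -2; dy <= 2; dy++)
        if (Math.abs(dx) === 2 || Math.abs(dy) === 2)
          fetch(centerX + dx, centerY + dy);
  }, 2000);
  return cache;
}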
At one place I worked a rules engine was taking a bit too long to return a result so they opted to present the user with a "confirm this" screen. The few seconds it took the users to review and click next was more than enough time to return the results. It made the app look lightning fast to the user when in reality it took a bit longer. You have to remember, user perception of performance is just as important in some cases as the actual performance.
A:
I believe Microsoft's Seadragon is your answer. However, I am not sure if that is available to developers.
It looks like some of it has found its way into Silverlight
| Optimizing for low bandwidth | I am charged with designing a web application that displays very large geographical data. And one of the requirements is that it should be optimized so the PC still on dial-ups common in the suburbs of my country could use it as well.
Now I am permitted to use Flash and/or Silverlight if that will help with the limited development time and user experience.
The heavy part of the geographical data are chunked into tiles and loaded like map tiles in Google Maps but that means I need a lot of HTTP requests.
Should I go with just javascript + HTML? Would I end up with a faster application regarding Flash/Silverlight? Since I can do some complex algorithm on those 2 tech (like DeepZoom). Deploying desktop app though, is out of the question since we don't have that much maintenance funds.
It just needs to be fast... really fast..
p.s. faster is in the sense of "download faster"
| [
"Is something like Gears acceptable? This will let you store data locally to limit re-requests.\nI would also stay away from flash and Silverlight and go straight to javascript/AJAX. jQuery is a ton-O-fun.\n",
"I would suggest you look into Silverlight and DeepZoom\n",
"I don't think you'll find Flash or Silverlight is going to help too much for this application. Either way you're going to be utilizing tiled images and the images are going to be the same size in both scenarios. Using Flash or Silverlight may allow you to add some neat animations to the application but anything you gain here will be additional overhead for your clients on dialup connections. I'd stick with plain Javascript/HTML.\n",
"You may also want to look at asynchronously downloading your tiles via one of the Ajax libraries available. Let's say your user can view 9 tiles at a time and scroll/zoom. Download those 9 tiles they can see plus whatever is needed to handle the zoom for those tiles on the first load; then you'll need to play around with caching strategies for prefetching other information asynchronously.\nAt one place I worked a rules engine was taking a bit too long to return a result so they opted to present the user with a \"confirm this\" screen. The few seconds it took the users to review and click next was more than enough time to return the results. It made the app look lightening fast to the user when in reality it took a bit longer. You have to remember, user perception of performance is just as important in some cases as the actual performance.\n",
"I believe Microsoft's Seadragon is your answer. However, I am not sure if that is available to developers.\nIt looks like some of it has found its way into Silverlight\n"
] | [
2,
2,
1,
1,
0
] | [] | [] | [
"networking",
"optimization",
"web_applications"
] | stackoverflow_0000028441_networking_optimization_web_applications.txt |
Q:
Impose a total ordering on all instances of *any* class in Java
I'm unsure whether the following code would ensure all conditions given in Comparator's Javadoc.
class TotalOrder<T> implements Comparator<T> {
public int compare(T o1, T o2) {
if (o1 == o2 || equal(o1, o2)) return 0;
int h1 = System.identityHashCode(o1);
int h2 = System.identityHashCode(o2);
if (h1 != h2) {
return h1 < h2 ? -1 : 1;
}
// equals returned false but identity hash code was same, assume o1 == o2
return 0;
}
boolean equal(Object o1, Object o2) {
return o1 == null ? o2 == null : o1.equals(o2);
}
}
Will the code above impose a total ordering on all instances of any class, even if that class does not implement Comparable?
A:
Hey, look at what I found!
http://gafter.blogspot.com/2007/03/compact-object-comparator.html
This is exactly what I was looking for.
A:
Hey, look at what I found!
http://gafter.blogspot.com/2007/03/compact-object-comparator.html
Oh yes, I forgot about the IdentityHashMap (Java 6 and above only). Just have to pay attention to releasing your comparator.
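The gist of the approach in that post, as a rough sketch (not the article's exact code): use identityHashCode as the primary key and, only when two distinct objects collide, hand out per-object sequence numbers from an IdentityHashMap. The map holds strong references, which is the "releasing your comparator" caveat mentioned above.
import java.util.Comparator;
import java.util.IdentityHashMap;

// Sketch only: assumes non-null arguments and that the comparator itself
// is discarded when no longer needed (it holds strong references).
class ArbitraryTotalOrder implements Comparator<Object> {
    private final IdentityHashMap<Object, Integer> collisionIds =
            new IdentityHashMap<Object, Integer>();
    private int nextId = 0;

    public int compare(Object o1, Object o2) {
        if (o1 == o2) return 0;
        int h1 = System.identityHashCode(o1);
        int h2 = System.identityHashCode(o2);
        if (h1 != h2) return h1 < h2 ? -1 : 1;
        // Rare case: distinct objects with the same identity hash code.
        // Break the tie consistently with assigned sequence numbers.
        return idFor(o1) < idFor(o2) ? -1 : 1;
    }

    private synchronized int idFor(Object o) {
        Integer id = collisionIds.get(o);
        if (id == null) {
            id = nextId++;
            collisionIds.put(o, id);
        }
        return id;
    }
}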
A:
You answered in your comment:
equals returned false but identity hash code was same, assume o1 == o2
Unfortunately you cannot assume that. Most of the time that is going to work, but in some exceptional cases, it won't. And you cannot know when. When such a case appears, it would lead to losing instances in TreeSets for example.
A:
I don't think it does since this clause is not met:
Finally, the implementer must ensure that x.compareTo(y)==0 implies that sgn(x.compareTo(z)) == sgn(y.compareTo(z)), for all z.
Since equal(o1, o2) depends on o1's implementation of equals, two objects that are logically equal (as determined by equals) still have two different identityHashCodes.
So when comparing them to a third object (z), they might end up yielding different values for compareTo.
Make sense?
A:
You should probably raise an exception if it gets to that last return 0 line -- when a hash collision happens. I do have a question though: you are doing a total ordering on the hashes, which I guess is fine, but shouldn't some function be passed to it to define a lexicographical order?
int h1 = System.identityHashCode(o1);
int h2 = System.identityHashCode(o2);
if (h1 != h2) {
return h1 < h2 ? -1 : 1;
}
I can imagine that you have the objects as a tuple of two integers that form a real number. But you won't get the proper ordering since you're only taking a hash of the object. This is all up to you if hashing is what you meant, but to me, it doesn't make much sense.
A:
I'm not really sure about the System.identityHashCode(Object). That's pretty much what the == is used for. You might rather want to use the Object.hashCode() - it's more in parallel with Object.equals(Object).
A:
I agree this is not ideal, hence the comment. Any suggestions?
I think there is no way you can solve that, because you cannot access the one and only thing that can distinguish two instances: their address in memory. So I have only one suggestion: reconsider your need of having a general total ordering process in Java :-)
| Impose a total ordering on all instances of *any* class in Java | I'm unsure whether the following code would ensure all conditions given in Comparator's Javadoc.
class TotalOrder<T> implements Comparator<T> {
public boolean compare(T o1, T o2) {
if (o1 == o2 || equal(o1, o2)) return 0;
int h1 = System.identityHashCode(o1);
int h2 = System.identityHashCode(o2);
if (h1 != h2) {
return h1 < h2 ? -1 : 1;
}
// equals returned false but identity hash code was same, assume o1 == o2
return 0;
}
boolean equal(Object o1, Object o2) {
return o1 == null ? o2 == null : o1.equals(o2);
}
}
Will the code above impose a total ordering on all instances of any class, even if that class does not implement Comparable?
| [
"Hey, look at what I found!\nhttp://gafter.blogspot.com/2007/03/compact-object-comparator.html\nThis is exactly what I was looking for.\n",
"\nHey, look at what I found!\nhttp://gafter.blogspot.com/2007/03/compact-object-comparator.html\n\nOh yes, I forgot about the IdentityHashMap (Java 6 and above only). Just have to pay attention at releasing your comparator. \n",
"You answered in your comment: \n\nequals returned false but identity hash code was same, assume o1 == o2\n\nUnfortunately you cannot assume that. Most of the time that is going to work, but in some exceptionnal cases, it won't. And you cannot know when. When such a case appear, it would lead to lose instances in TreeSets for example.\n",
"I don't think it does since this clause is not met:\n\nFinally, the implementer must ensure that x.compareTo(y)==0 implies that sgn(x.compareTo(z)) == sgn(y.compareTo(z)), for all z.\n\nSince equal(o1, o2) depends on o1's implementation of equals, two objects that are logically equal (as determined by equals) still have two differrent identityHashCodes.\nSo when comparing them to a third object (z), they might end up yielding different values for compareTo.\nMake sense?\n",
"You should probably raise an exception if it gets to that last return 0 line --when a hash collision happens. I do have a question though: you are doing a total ordering on the hash's, which I guess is fine, but shouldn't some function be passed to it to define a Lexicographical order?\n int h1 = System.identityHashCode(o1);\n int h2 = System.identityHashCode(o2);\n if (h1 != h2) {\n return h1 < h2 ? -1 : 1;\n }\n\nI can imagine that you have the objects as a tuple of two integers that form a real number. But you wont get the proper ordering since you're only taking a hash of the object. This is all up to you if hashing is what you meant, but to me, it doesn't make much sense. \n",
"I'm not really sure about the System.identityHashCode(Object). That's pretty much what the == is used for. You might rather want to use the Object.hashCode() - it's more in parallel with Object.equals(Object).\n",
"\nI agree this is not ideal, hence the comment. Any suggestions?\n\nI think there is now way you can solve that, because you cannot access the one and only one thing that can distinguish two instances: their address in memory. So I have only one suggestion: reconsider your need of having a general total ordering process in Java :-)\n"
] | [
2,
2,
1,
1,
1,
0,
0
] | [] | [] | [
"algorithm",
"java"
] | stackoverflow_0000028301_algorithm_java.txt |
Q:
Ruby / Rails pre-epoch dates on windows
Working with dates in Ruby and Rails on Windows, I'm having problems with pre-epoch dates (before 1970) throwing out-of-range exceptions. I tried using both Time and DateTime objects, but still have the same problems.
A:
If you only need dates (no times), the Date class in ruby should handle dates before 1970. But it has only a resolution of days. I don't know if there are solutions, if you also need times before 1970
(source)
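A quick illustration (the exact failure mode depends on the Ruby build; 32-bit Windows builds are where Time typically breaks):
require 'date'

d = Date.new(1955, 11, 5)              # Date has day resolution and handles pre-1970 fine
puts d.strftime('%Y-%m-%d')            # => 1955-11-05
puts((Date.new(1970, 1, 1) - d).to_i)  # days between the two dates

# Time is where the trouble is on 32-bit Windows builds; something like
#   Time.local(1955, 11, 5)
# may raise an "out of range" ArgumentError there.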
A:
You can also check out the section on dates on ruby-doc.org. I'm still learning Ruby but it sounds like you could use either the Civil or Commercial Date to go back before 1970.
A:
Ended up using the Date class; the problem then became making that work with the Rails select helper - which didn't happen, so I just generated the HTML myself.
| Ruby / Rails pre-epoch dates on windows | Working with dates in ruby and rails on windows, I'm having problems with pre-epoch dates (before 1970) throwing out of range exceptions. I tried using both Time and DateTime objects, but still have the same problems.
| [
"If you only need dates (no times), the Date class in ruby should handle dates before 1970. But it has only a resolution of days. I don't know if there are solutions, if you also need times before 1970\n(source)\n",
"You can also check out the section on dates on ruby-doc.org. I'm still learning Ruby but it sounds like you could use either the Civil or Commerical Date to go back before 1970.\n",
"Ended up using the Date class, problem then became making that work with the rails select helper - which didn't happen, just generated the html myself.\n"
] | [
1,
0,
0
] | [] | [] | [
"date",
"ruby",
"ruby_on_rails"
] | stackoverflow_0000028011_date_ruby_ruby_on_rails.txt |
Q:
SQL Server - testing the database
What tools are people using for testing SQL Server databases?
By this I mean all parts of the database:
configuration
tables
column type
stored procedures
constraints
Most likely, there is not one tool to do it all.
A:
How do you mean "Test the database"?
If you are testing foreign keys, a simple script to insert invalid data is all you should need.
Testing a database could imply a great number of issues. Does it have all the tables? Are the tables correct? Are the indexes in place? Did the latest updates get applied? Has the data been migrated? Is the data even valid? Are the foreign keys correct?
There is a lot to test in a database so you are unlikely to find a simple way to test it. I find that a combination of test stored procedures and some NUnit unit tests do most of the vetting of my databases.
A:
I personally use NHibernate with SqlCe, this provides a "throw-away" database that doesn't need any specialized tear down after the tests are run.
It also provides a good way to test your nhibernate mappings if applicable.
Here is a link to an article I wrote awhile ago on how to accomplish this: http://www.codeproject.com/KB/database/TDD_and_SqlCE.aspx?display=Print
| SQL Server - testing the database | What tools are people using for testing SQL Server databases?
By this I mean all parts of the database:
configuration
tables
column type
stored procedures
constraints
Most likely, there is not one tool to do it all.
| [
"How do you mean \"Test the database\"? \nIf you are testing foreign keys, a simply script to insert invalid data is all you should need.\nTesting a database could imply a great number of issues. Does it have all the tables? Are the tables correct? Are the indexes in place? Did the latest updates get applied? Has the data been migrated? Is the even valid? Are the foreign keys correct?\nThere is a lot to test in a database so you are unlikely to find a simple way to test it. I find that a combination of test stored procedures and some Nunit unit tests do most of the vetting of my databases.\n",
"I personally use NHibernate with SqlCe, this provides a \"throw-away\" database that doesn't need any specialized tear down after the tests are run.\nIt also provides a good way to test your nhibernate mappings if applicable.\nHere is a link to an article I wrote awhile ago on how to accomplish this: http://www.codeproject.com/KB/database/TDD_and_SqlCE.aspx?display=Print\n"
] | [
1,
0
] | [] | [] | [
"sql_server",
"unit_testing"
] | stackoverflow_0000027916_sql_server_unit_testing.txt |
Q:
Why is it bad practice to make multiple database connections in one request?
A discussion about Singletons in PHP has me thinking about this issue more and more. Most people instruct that you shouldn't make a bunch of DB connections in one request, and I'm just curious as to what your reasoning is. My first thought is the expense to your script of making that many requests to the DB, but then I counter myself with the question: wouldn't multiple connections make concurrent querying more efficient?
How about some answers (with evidence, folks) from some people in the know?
A:
Database connections are a limited resource. Some DBs have a very low connection limit, and wasting connections is a major problem. By consuming many connections, you may be blocking others for using the database.
Additionally, throwing a ton of extra connections at the DB doesn't help anything unless there are resources on the DB server sitting idle. If you've got 8 cores and only one is being used to satisfy a query, then sure, making another connection might help. More likely, though, you are already using all the available cores. You're also likely hitting the same harddrive for every DB request, and adding additional lock contention.
If your DB has anything resembling high utilization, adding extra connections won't help. That'd be like spawning extra threads in an application with the blind hope that the extra concurrency will make processing faster. It might in some certain circumstances, but in other cases it'll just slow you down as you thrash the hard drive, waste time task-switching, and introduce synchronization overhead.
A:
It is the cost of setting up the connection, transferring the data and then tearing it down. It will eat up your performance.
Evidence is harder to come by but consider the following...
Let's say it takes x microseconds to make a connection.
Now you want to make several requests and get data back and forth. Let's say that the difference in transport time is negligible between one connection and many (just for the sake of argument).
Now let's say it takes y microseconds to close the connection.
Opening one connection will take x+y microseconds of overhead. Opening many will take n * (x+y). That will delay your execution.
A:
Setting up a DB connection is usually quite heavy. A lot of things are going on backstage (DNS resolution/TCP connection/Handshake/Authentication/Actual Query).
I've had an issue once with some weird DNS configuration that made every TCP connection took a few seconds before going up. My login procedure (because of a complex architecture) took 3 different DB connections to complete. With that issue, it was taking forever to log-in. We then refactored the code to make it go through one connection only.
A:
We access Informix from .NET and use multiple connections. Unless we're starting a transaction on each connection, it often is handled in the connection pool. I know that's very brand-specific, but most(?) database systems' client access will pool connections to the best of its ability.
As an aside, we did have a problem with connection count because of cross-database connections. Informix supports synonyms, so we synonymed the common offenders and the multiple connections were handled server-side, saving a lot in transfer time, connection creation overhead, and (the real crux of our situation) license fees.
A:
I would assume that it is because your requests are not being sent asynchronously. Since your requests are done iteratively on the server, blocking each time, you have to pay the overhead of creating a connection each time, when you only have to do it once...
In Flex, all web service calls are automatically called asynchronously, so it is common to see multiple connections, or queued up requests on the same connection.
Asynchronous requests mitigate the connection cost through faster request / response time...because you cannot easily achieve this in PHP without some threading, the performance hit is greater than simply reusing the same connection.
that's my 2 cents...
| Why is it bad practice to make multiple database connections in one request? | A discussion about Singletons in PHP has me thinking about this issue more and more. Most people instruct that you shouldn't make a bunch of DB connections in one request, and I'm just curious as to what your reasoning is. My first thought is the expense to your script of making that many requests to the DB, but then I counter myself with the question: wouldn't multiple connections make concurrent querying more efficient?
How about some answers (with evidence, folks) from some people in the know?
| [
"Database connections are a limited resource. Some DBs have a very low connection limit, and wasting connections is a major problem. By consuming many connections, you may be blocking others for using the database.\nAdditionally, throwing a ton of extra connections at the DB doesn't help anything unless there are resources on the DB server sitting idle. If you've got 8 cores and only one is being used to satisfy a query, then sure, making another connection might help. More likely, though, you are already using all the available cores. You're also likely hitting the same harddrive for every DB request, and adding additional lock contention.\nIf your DB has anything resembling high utilization, adding extra connections won't help. That'd be like spawning extra threads in an application with the blind hope that the extra concurrency will make processing faster. It might in some certain circumstances, but in other cases it'll just slow you down as you thrash the hard drive, waste time task-switching, and introduce synchronization overhead.\n",
"It is the cost of setting up the connection, transferring the data and then tearing it down. It will eat up your performance.\nEvidence is harder to come by but consider the following...\nLet's say it takes x microseconds to make a connection. \nNow you want to make several requests and get data back and forth. Let's say that the difference in transport time is negligable between one connection and many (just ofr the sake of argument).\nNow let's say it takes y microseconds to close the connection.\nOpening one connection will take x+y microseconds of overhead. Opening many will take n * (x+y). That will delay your execution.\n",
"Setting up a DB connection is usually quite heavy. A lot of things are going on backstage (DNS resolution/TCP connection/Handshake/Authentication/Actual Query).\nI've had an issue once with some weird DNS configuration that made every TCP connection took a few seconds before going up. My login procedure (because of a complex architecture) took 3 different DB connections to complete. With that issue, it was taking forever to log-in. We then refactored the code to make it go through one connection only.\n",
"We access Informix from .NET and use multiple connections. Unless we're starting a transaction on each connection, it often is handled in the connection pool. I know that's very brand-specific, but most(?) database systems' cilent access will pool connections to the best of its ability.\nAs an aside, we did have a problem with connection count because of cross-database connections. Informix supports synonyms, so we synonymed the common offenders and the multiple connections were handled server-side, saving a lot in transfer time, connection creation overhead, and (the real crux of our situtation) license fees.\n",
"I would assume that it is because your requests are not being sent asynchronously, since your requests are done iteratively on the server, blocking each time, you have to pay for the overhead of creating a connection each time, when you only have to do it once...\nIn Flex, all web service calls are automatically called asynchronously, so you it is common to see multiple connections, or queued up requests on the same connection.\nAsynchronous requests mitigate the connection cost through faster request / response time...because you cannot easily achieve this in PHP without out some threading, then the performance hit is greater then simply reusing the same connection. \nthat's my 2 cents...\n"
] | [
10,
4,
2,
1,
0
] | [] | [] | [
"database",
"database_connection",
"resources"
] | stackoverflow_0000028590_database_database_connection_resources.txt |
Q:
Not showing Dialog when opening file in Acrobat Pro using Applescript
When opening Adobe Acrobat Pro, whether it be through Applescript or finder, the introductory dialog is shown. Is there a way to not show this dialog without already having checked the "Don't Show Again" option when opening a document using Applescript?
Photoshop and Illustrator Applescript libraries have ways of setting interaction levels and not showing dialogs, but I can't seem to find the option in Acrobat.
A:
Copy any applicable preferences files in ~/Library/Preferences from a machine that you have checked "Don't show again" on.
A:
If it's not in the dictionary, probably not.
| Not showing Dialog when opening file in Acrobat Pro using Applescript | When opening Adobe Acrobat Pro, whether it be through Applescript or finder, the introductory dialog is shown. Is there a way to not show this dialog without already having checked the "Don't Show Again" option when opening a document using Applescript?
Photoshop and Illustrator Applescript libraries have ways of setting interaction levels and not showing dialogs, but I can't seem to find the option in Acrobat.
| [
"Copy any applicable preferences files in ~/Library/Preferences from a machine that you have checked \"Don't show again\" on.\n",
"If it's not in the dictionary, probably not.\n"
] | [
1,
0
] | [] | [] | [
"acrobat",
"adobe",
"applescript",
"macos"
] | stackoverflow_0000008830_acrobat_adobe_applescript_macos.txt |
Q:
Abstract Factory Design Pattern
I'm working on an internal project for my company, and part of the project is to be able to parse various "Tasks" from an XML file into a collection of tasks to be run later.
Because each type of Task has a multitude of different associated fields, I decided it would be best to represent each type of Task with a separate class.
To do this, I constructed an abstract base class:
public abstract class Task
{
public enum TaskType
{
// Types of Tasks
}
public abstract TaskType Type
{
get;
}
public abstract void LoadFromXml(XmlElement task);
public abstract XmlElement CreateXml(XmlDocument currentDoc);
}
Each task inherited from this base class, and included the code necessary to create itself from the passed in XmlElement, as well as serialize itself back out to an XmlElement.
A basic example:
public class MergeTask : Task
{
public override TaskType Type
{
get { return TaskType.Merge; }
}
// Lots of Properties / Methods for this Task
public MergeTask (XmlElement elem)
{
this.LoadFromXml(elem);
}
public override void LoadFromXml(XmlElement task)
{
// Populates this Task from the Xml.
}
public override XmlElement CreateXml(XmlDocument currentDoc)
{
// Serializes this class back to xml.
}
}
The parser would then use code similar to this to create a task collection:
XmlNode taskNode = parent.SelectNode("tasks");
TaskFactory tf = new TaskFactory();
foreach (XmlNode task in taskNode.ChildNodes)
{
// Since XmlComments etc will show up
if (task is XmlElement)
{
tasks.Add(tf.CreateTask(task as XmlElement));
}
}
All of this works wonderfully, and allows me to pass tasks around using the base class, while retaining the structure of having individual classes for each task.
However, I am not happy with my code for TaskFactory.CreateTask. This method accepts an XmlElement, and then returns an instance of the appropriate Task class:
public Task CreateTask(XmlElement elem)
{
if (elem != null)
{
switch(elem.Name)
{
case "merge":
return new MergeTask(elem);
default:
throw new ArgumentException("Invalid Task");
}
}
}
Because I have to parse the XMLElement, I'm using a huge (10-15 cases in the real code) switch to pick which child class to instantiate. I'm hoping there is some sort of polymorphic trick I can do here to clean up this method.
Any advice?
A:
I use reflection to do this.
You can make a factory that basically expands without you having to add any extra code.
Make sure you have "using System.Reflection", then place the following code in your instantiation method.
public Task CreateTask(XmlElement elem)
{
if (elem != null)
{
try
{
Assembly a = typeof(Task).Assembly;
string type = string.Format("{0}.{1}Task",typeof(Task).Namespace,elem.Name);
//this is only here, so that if that type doesn't exist, this method
//throws an exception
Type t = a.GetType(type, true, true);
return a.CreateInstance(type, true) as Task;
}
catch(System.Exception)
{
throw new ArgumentException("Invalid Task");
}
}
}
Another observation is that you can make this method static and hang it off of the Task class, so that you don't have to new up the TaskFactory, and you also save yourself a moving piece to maintain.
A:
Create a "Prototype" instanace of each class and put them in a hashtable inside the factory , with the string you expect in the XML as the key.
so CreateTask just finds the right Prototype object,
by get() ing from the hashtable.
then call LoadFromXML on it.
you have to pre-load the classes into the hashtable,
If you want it more automatic...
You can make the classes "self-registering" by calling a static register method on the factory.
Put calls to register ( with constructors) in the static blocks on the Task subclasses.
Then all you need to do is "mention" the classes to get the static blocks run.
A static array of Task subclasses would then suffice to "mention" them.
Or use reflection to mention the classes.
A:
How do you feel about Dependency Injection? I use Ninject and the contextual binding support in it would be perfect for this situation. Look at this blog post on how you can use contextual binding when creating controllers with the IControllerFactory as they are requested. This should be a good resource on how to use it for your situation.
A:
@jholland
I don't think the Type enum is needed, because I can always do something like this:
Enum?
I admit that it feels hacky. Reflection feels dirty at first, but once you tame the beast you will enjoy what it allows you to do. (Remember recursion, it feels dirty, but it's good)
The trick is to realize, you are analyzing meta data, in this case a string provided from xml, and turning it into run-time behavior. That is what reflection is the best at.
BTW: the is operator is reflection too.
http://en.wikipedia.org/wiki/Reflection_(computer_science)#Uses
A:
@Tim, I ended up using a simplified version of your approach and ChanChan's. Here is the code:
public class TaskFactory
{
private Dictionary<String, Type> _taskTypes = new Dictionary<String, Type>();
public TaskFactory()
{
// Preload the Task Types into a dictionary so we can look them up later
foreach (Type type in typeof(TaskFactory).Assembly.GetTypes())
{
if (type.IsSubclassOf(typeof(CCTask)))
{
_taskTypes[type.Name.ToLower()] = type;
}
}
}
public CCTask CreateTask(XmlElement task)
{
if (task != null)
{
string taskName = task.Name;
taskName = taskName.ToLower() + "task";
// If the Type information is in our Dictionary, instantiate a new instance of that task
Type taskType;
if (_taskTypes.TryGetValue(taskName, out taskType))
{
return (CCTask)Activator.CreateInstance(taskType, task);
}
else
{
throw new ArgumentException("Unrecognized Task:" + task.Name);
}
}
else
{
return null;
}
}
}
A:
@ChanChan
I like the idea of reflection, yet at the same time I've always been shy to use reflection. It's always struck me as a "hack" to work around something that should be easier. I did consider that approach, and then figured a switch statement would be faster for the same amount of code smell.
You did get me thinking, I don't think the Type enum is needed, because I can always do something like this:
if (CurrentTask is MergeTask)
{
// Do Something Specific to MergeTask
}
Perhaps I should crack open my GoF Design Patterns book again, but I really thought there was a way to polymorphically instantiate the right class.
A:
Enum?
I was referring to the Type property and enum in my abstract class.
Reflection it is then! I'll mark your answer as accepted in about 30 minutes, just to give time for anyone else to weigh in. It's a fun topic.
A:
Thanks for leaving it open, I won't complain. It is a fun topic, I wish you could polymorphically instantiate.
Even ruby (and its superior meta-programming) has to use its reflection mechanism for this.
A:
@Dale
I have not inspected nInject closely, but from my high level understanding of dependency injection, I believe it would be accomplishing the same thing as ChanChans suggestion, only with more layers of cruft (er abstraction).
In a one-off situation where I just need it here, I think using some handrolled reflection code is a better approach than having an additional library to link against and only calling it in one place...
But maybe I don't understand the advantage nInject would give me here.
A:
Some frameworks may rely on reflection where needed, but most of the time you use a bootstrapper, if you will, to set up what to do when an instance of an object is needed. This is usually stored in a generic dictionary. I used my own up until recently, when I started using Ninject.
With Ninject, the main thing I liked about it, is that when it does need to use reflection, it doesn't. Instead it takes advantage of the code generation features of .NET which make it incredibly fast. If you feel reflection would be faster in the context you are using, it also allows you to set it up that way.
I know this maybe overkill for what you need at the moment, but I just wanted to point out dependency injection and give you some food for thought for the future. Visit the dojo for a lesson.
| Abstract Factory Design Pattern | I'm working on an internal project for my company, and part of the project is to be able to parse various "Tasks" from an XML file into a collection of tasks to be ran later.
Because each type of Task has a multitude of different associated fields, I decided it would be best to represent each type of Task with a seperate class.
To do this, I constructed an abstract base class:
public abstract class Task
{
public enum TaskType
{
// Types of Tasks
}
public abstract TaskType Type
{
get;
}
public abstract LoadFromXml(XmlElement task);
public abstract XmlElement CreateXml(XmlDocument currentDoc);
}
Each task inherited from this base class, and included the code necessary to create itself from the passed in XmlElement, as well as serialize itself back out to an XmlElement.
A basic example:
public class MergeTask : Task
{
public override TaskType Type
{
get { return TaskType.Merge; }
}
// Lots of Properties / Methods for this Task
public MergeTask (XmlElement elem)
{
this.LoadFromXml(elem);
}
public override LoadFromXml(XmlElement task)
{
// Populates this Task from the Xml.
}
public override XmlElement CreateXml(XmlDocument currentDoc)
{
// Serializes this class back to xml.
}
}
The parser would then use code similar to this to create a task collection:
XmlNode taskNode = parent.SelectNode("tasks");
TaskFactory tf = new TaskFactory();
foreach (XmlNode task in taskNode.ChildNodes)
{
// Since XmlComments etc will show up
if (task is XmlElement)
{
tasks.Add(tf.CreateTask(task as XmlElement));
}
}
All of this works wonderfully, and allows me to pass tasks around using the base class, while retaining the structure of having individual classes for each task.
However, I am not happy with my code for TaskFactory.CreateTask. This method accepts an XmlElement, and then returns an instance of the appropriate Task class:
public Task CreateTask(XmlElement elem)
{
if (elem != null)
{
switch(elem.Name)
{
case "merge":
return new MergeTask(elem);
default:
throw new ArgumentException("Invalid Task");
}
}
}
Because I have to parse the XMLElement, I'm using a huge (10-15 cases in the real code) switch to pick which child class to instantiate. I'm hoping there is some sort of polymorphic trick I can do here to clean up this method.
Any advice?
| [
"I use reflection to do this.\nYou can make a factory that basically expands without you having to add any extra code.\nmake sure you have \"using System.Reflection\", place the following code in your instantiation method.\npublic Task CreateTask(XmlElement elem)\n{\n if (elem != null)\n { \n try\n {\n Assembly a = typeof(Task).Assembly\n string type = string.Format(\"{0}.{1}Task\",typeof(Task).Namespace,elem.Name);\n\n //this is only here, so that if that type doesn't exist, this method\n //throws an exception\n Type t = a.GetType(type, true, true);\n\n return a.CreateInstance(type, true) as Task;\n }\n catch(System.Exception)\n {\n throw new ArgumentException(\"Invalid Task\");\n }\n }\n}\n\nAnother observation, is that you can make this method, a static and hang it off of the Task class, so that you don't have to new up the TaskFactory, and also you get to save yourself a moving piece to maintain.\n",
"Create a \"Prototype\" instanace of each class and put them in a hashtable inside the factory , with the string you expect in the XML as the key.\nso CreateTask just finds the right Prototype object,\nby get() ing from the hashtable.\nthen call LoadFromXML on it.\nyou have to pre-load the classes into the hashtable,\nIf you want it more automatic...\nYou can make the classes \"self-registering\" by calling a static register method on the factory.\nPut calls to register ( with constructors) in the static blocks on the Task subclasses.\nThen all you need to do is \"mention\" the classes to get the static blocks run.\nA static array of Task subclasses would then suffice to \"mention\" them.\nOr use reflection to mention the classes.\n",
"How do you feel about Dependency Injection? I use Ninject and the contextual binding support in it would be perfect for this situation. Look at this blog post on how you can use contextual binding with creating controllers with the IControllerFactory when they are requested. This should be a good resource on how to use it for your situation.\n",
"@jholland\n\nI don't think the Type enum is needed, because I can always do something like this:\n\nEnum?\nI admit that it feels hacky. Reflection feels dirty at first, but once you tame the beast you will enjoy what it allows you to do. (Remember recursion, it feels dirty, but its good)\nThe trick is to realize, you are analyzing meta data, in this case a string provided from xml, and turning it into run-time behavior. That is what reflection is the best at.\nBTW: the is operator, is reflection too.\nhttp://en.wikipedia.org/wiki/Reflection_(computer_science)#Uses \n",
"@Tim, I ended up using a simplified version of your approach and ChanChans, Here is the code:\npublic class TaskFactory\n {\n private Dictionary<String, Type> _taskTypes = new Dictionary<String, Type>();\n\n public TaskFactory()\n {\n // Preload the Task Types into a dictionary so we can look them up later\n foreach (Type type in typeof(TaskFactory).Assembly.GetTypes())\n {\n if (type.IsSubclassOf(typeof(CCTask)))\n {\n _taskTypes[type.Name.ToLower()] = type;\n }\n }\n }\n\n public CCTask CreateTask(XmlElement task)\n {\n if (task != null)\n {\n string taskName = task.Name;\n taskName = taskName.ToLower() + \"task\";\n\n // If the Type information is in our Dictionary, instantiate a new instance of that task\n Type taskType;\n if (_taskTypes.TryGetValue(taskName, out taskType))\n {\n return (CCTask)Activator.CreateInstance(taskType, task);\n }\n else\n {\n throw new ArgumentException(\"Unrecognized Task:\" + task.Name);\n } \n }\n else\n {\n return null;\n }\n }\n }\n\n",
"@ChanChan\nI like the idea of reflection, yet at the same time I've always been shy to use reflection. It's always struck me as a \"hack\" to work around something that should be easier. I did consider that approach, and then figured a switch statement would be faster for the same amount of code smell.\nYou did get me thinking, I don't think the Type enum is needed, because I can always do something like this:\nif (CurrentTask is MergeTask)\n{\n // Do Something Specific to MergeTask\n}\n\nPerhaps I should crack open my GoF Design Patterns book again, but I really thought there was a way to polymorphically instantiate the right class.\n",
"\nEnum?\n\nI was referring to the Type property and enum in my abstract class.\nReflection it is then! I'll mark you answer as accepted in about 30 minutes, just to give time for anyone else to weigh in. Its a fun topic.\n",
"Thanks for leaving it open, I won't complain. It is a fun topic, I wish you could polymorphicly instantiate.\nEven ruby (and its superior meta-programming) has to use its reflection mechanism for this.\n",
"@Dale\nI have not inspected nInject closely, but from my high level understanding of dependency injection, I believe it would be accomplishing the same thing as ChanChans suggestion, only with more layers of cruft (er abstraction).\nIn a one off situation where I just need it here, I think using some handrolled reflection code is a better approach than having an additional library to link against and only calling it one place...\nBut maybe I don't understand the advantage nInject would give me here.\n",
"Some frameworks may rely on reflection where needed, but most of the time you use a boot- strapper, if you will, to setup what to do when an instance of an object is needed. This is usually stored in a generic dictionary. I used my own up until recently, when I started using Ninject.\nWith Ninject, the main thing I liked about it, is that when it does need to use reflection, it doesn't. Instead it takes advantage of the code generation features of .NET which make it incredibly fast. If you feel reflection would be faster in the context you are using, it also allows you to set it up that way.\nI know this maybe overkill for what you need at the moment, but I just wanted to point out dependency injection and give you some food for thought for the future. Visit the dojo for a lesson.\n"
] | [
12,
6,
4,
2,
2,
1,
1,
1,
1,
1
] | [] | [] | [
"c#",
"design_patterns",
"factory"
] | stackoverflow_0000027294_c#_design_patterns_factory.txt |
Q:
NHibernate 1.2 to 2.0 migration
What kinds of considerations are there for migrating an application from NHibernate 1.2 to 2.0? What are breaking changes vs. recommended changes?
Are there mapping issues?
A:
Breaking changes in NHibernate 2.0
If you have good test coverage it's busywork.
Edit: We upgraded this morning. There is nothing major. You have to Flush() the session after you delete. The Expression namespace got renamed to Criterion. All these are covered in the link above. Mappings need no change. It's quite transparent. Oh, and transactions everywhere, but you were probably doing that already.
By the way, here's an interesting look at the changes: http://codebetter.com/blogs/patricksmacchia/archive/2008/08/26/nhibernate-2-0-changes-overview.aspx
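For illustration, the two code-level changes called out above (explicit Flush after a delete, and the Expression-to-Criterion rename) might look roughly like this; the session and entity names are placeholders, not from the answer:
using NHibernate;
using NHibernate.Criterion;   // in 1.2 this namespace was NHibernate.Expression

public static class MigrationExample
{
    // Placeholder method: deletes an entity under NH 2.0 conventions.
    public static void DeleteEntity(ISession session, object entity)
    {
        using (ITransaction tx = session.BeginTransaction())
        {
            session.Delete(entity);
            session.Flush();   // per the notes above, flush the session after the delete
            tx.Commit();
        }
    }
}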
A:
I found the answer here:
http://blog.domaindotnet.com/2008/08/24/nhibernate-20-gold-released-must-wait-for-linq-to-nhibernate/
gold release 2.0.0.GA
BREAKING CHANGES from NH1.2.1GA to NH2.0.0
Infrastructure
.NET 1.1 is no longer supported
Nullables.NHibernate is no longer supported (use nullable types of .NET 2.0)
Contrib moved. New Location
http://sourceforge.net/projects/nhcontrib
Compile time
NHibernate.Expression namespace was renamed to NHibernate.Criterion
IInterceptor have additional methods. (IsUnsaved was renamed IsTransient)
INamingStrategy
IType
IEntityPersister
IVersionType
IBatcher
IUserCollectionType
IEnhancedUserType
IPropertyAccessor
ValueTypeType renamed to PrimitiveType
Possible Breaking Changes for external frameworks
Various classes were moved between namespaces
Various classes have been renamed (to match Hibernate 3.2 names)
ISession interface have additional methods
ICacheProvider
ICriterion
CriteriaQueryTranslator
Initialization time
<nhibernate> section, in App.config, is no longer supported and will be ignored. Configuration schema for configuration file and App.config is now identical, and the App.config section name is: <hibernate-configuration>
<hibernate-configuration> has a different schema and all property names are checked
configuration properties are no longer prefixed by “hibernate.”, if before you would specify “hibernate.dialect”, now you specify just “dialect”
All named queries will be validated at initialization time, an exception will be thrown if any is not valid (can be disabled if needed)
Stricter checks for proxying classes (all public methods must be virtual)
Run time
SaveOrUpdateCopy() returns a new instance of the entity without changing the original
AutoFlush will not occur outside a transaction - database transactions are never optional; all communication with the database must occur inside a transaction, whether you read or write data.
NHibernate will return long for count(*) queries on SQL Server
<formula> must contain parenthesis when needed
These HQL function names may cause conflict in your HQL reserved names are:
substring
locate
trim
length
bit_length
coalesce
nullif
abs
mod
sqrt
upper
lower
cast
extract
concat
current_timestamp
sysdate
second
minute
hour
day
month
year
str
<any> when meta-type=”class” the persistent type is a string containing the Class.FullName
In order to set a parameter in a query you must use SetParameter(”paraName”, typeof(YourClass).FullName, NHibernateUtil.ClassMetaType)
Mapping
<any> : default meta-type is “string” (was “class”)
| NHibernate 1.2 to 2.0 migration | What kinds of considerations are there for migrating an application from NHibernate 1.2 to 2.0? What are breaking changes vs. recommended changes?
Are there mapping issues?
| [
"Breaking changes in NHibernate 2.0\nIf you have good test coverage it's busywork.\nEdit: We upgraded this morning. There is nothing major. You have to Flush() the session after you delete. The Expression namespace got renamed to Criterion. All these are covered in the link above. Mappings need no change. It's quite transparent. Oh, and transactions everywhere, but you were probably doing that already.\nBy the way, here's an interesting look at the changes: http://codebetter.com/blogs/patricksmacchia/archive/2008/08/26/nhibernate-2-0-changes-overview.aspx\n",
"I found the answer here:\nhttp://blog.domaindotnet.com/2008/08/24/nhibernate-20-gold-released-must-wait-for-linq-to-nhibernate/\ngold release 2.0.0.GA\nBREAKING CHANGES from NH1.2.1GA to NH2.0.0\n\n\nInfrastructure\n\n.NET 1.1 is no longer supported\nNullables.NHibernate is no longer supported (use nullable types of .NET 2.0)\nContrib moved. New Location\n\nhttp://sourceforge.net/projects/nhcontrib\n\n\n\n\n\nCompile time\n\nNHibernate.Expression namespace was renamed to NHibernate.Criterion\nIInterceptor have additional methods. (IsUnsaved was renamed IsTransient)\nINamingStrategy\nIType\nIEntityPersister\nIVersionType\nIBatcher\nIUserCollectionType\nIEnhancedUserType\nIPropertyAccessor\nValueTypeType renamed to PrimitiveType\n\n\n\nPossible Breaking Changes for external frameworks\n\n\n\nVarious classes were moved between namespaces\nVarious classes have been renamed (to match Hibernate 3.2 names)\nISession interface have additional methods\nICacheProvider\nICriterion\nCriteriaQueryTranslator\n\n\nInitialization time\n\n\n<nhibernate> section, in App.config, is no longer supported and will be ignored. Configuration schema for configuration file and App.config is now identical, and the App.config section name is: <hibernate-configuration>\n<hibernate-configuration> have a different schema and all properties names are cheked\nconfiguration properties are no longer prefixed by “hibernate.”, if before you would specify “hibernate.dialect”, now you specify just “dialect”\nAll named queries will be validated at initialization time, an exception will be thrown if any is not valid (can be disabled if needed)\nStricter checks for proxying classes (all public methods must be virtual)\n\n\nRun time\n\nSaveOrUpdateCopy() returns a new instance of the entity without changing the original\nAutoFlush will not occur outside a transaction - Database transactions are never optional, all communication with the database must occur inside a transaction, whatever you read or write data.\nNHibernate will return long for count(*) queries on SQL Server\n<formula> must contain parenthesis when needed\nThese HQL function names may cause conflict in your HQL reserved names are:\n\nsubstring\nlocate\ntrim\nlength\nbit_length\ncoalesce\nnullif\nabs\nmod\nsqrt\nupper\nlower\ncast\nextract\nconcat\ncurrent_timestamp\nsysdate\nsecond\nminute\nhour\nday\nmonth\nyear\nstr\n\n\n<any> when meta-type=”class” the persistent type is a string containing the Class.FullName\n\nIn order to set a parameter in a query you must use SetParameter(”paraName”, typeof(YourClass).FullName, NHibernateUtil.ClassMetaType)\n\n\n\n\n\nMapping\n\n<any> : default meta-type is “string” (was “class”)\n"
] | [
8,
2
] | [] | [] | [
"nhibernate"
] | stackoverflow_0000027243_nhibernate.txt |
Q:
Pointer to Pointer Managed C++
I have an old C library with a function that takes a void**:
oldFunction(void** pStuff);
I'm trying to call this function from managed C++ (m_pStuff is a member of the parent ref class of type void*):
oldFunction( static_cast<sqlite3**>( &m_pStuff ) );
This gives me the following error from Visual Studio:
error C2440: 'static_cast' : cannot convert from 'cli::interior_ptr' to 'void **'
I'm guessing the compiler is converting the void* member pointer to a cli::interior_ptr behind my back.
Any advice on how to do this?
A:
EDIT: Fixed answer, see below.
Really you need to know what oldFunction is going to be doing with pStuff. If pStuff is a pointer to some unmanaged data you can try wrapping the definition of m_pStuff with:
#pragma unmanaged
void* m_pStuff
#pragma managed
This will make the pointer unmanaged which can then be passed into unmanaged functions. Of course you will not be able to assign any managed objects to this pointer directly.
Fundamentally unmanaged and managed pointers are not the same and can't be converted without some sort of glue code that copies the underlying data. Basically managed pointers point to the managed heap and since this is garbage collected the actual memory address they point to can change over time. Unmanaged pointers do not change the memory address without you explicitly doing so.
Scratch that, you can't define unmanaged / managed inside a class definition. But this test code seems to work just fine:
// TestSol.cpp : main project file.
#include "stdafx.h"
using namespace System;
#pragma unmanaged
void oldFunction(void** pStuff)
{
return;
}
#pragma managed
ref class Test
{
public:
void* m_test;
};
int main(array<System::String ^> ^args)
{
Console::WriteLine(L"Hello World");
Test^ test = gcnew Test();
void* pStuff = test->m_test;
oldFunction(&pStuff);
test->m_test = pStuff;
return 0;
}
Here I copy the pointer out of the managed object first and then pass that in to oldFunction. Then I copy the result (probably updated by oldFunction) back into the managed object. Since the managed object is on the managed heap, the compiler won't let you pass a reference to the pointer contained in that object as it may move when the garbage collector runs.
A:
Thanks for the advice. The pointer is to a C-style abstract structure which I think, if I leave it exposed to the managed code, is going to cause further pain due to its lack of defined structure. So what I think I will do is wrap the C library in C++ and then wrap the C++ wrapper with managed C++, which will prevent exposing those C structures to managed code.
| Pointer to Pointer Managed C++ | I have an old C library with a function that takes a void**:
oldFunction(void** pStuff);
I'm trying to call this function from managed C++ (m_pStuff is a member of the parent ref class of type void*):
oldFunction( static_cast<sqlite3**>( &m_pStuff ) );
This gives me the following error from Visual Studio:
error C2440: 'static_cast' : cannot convert from 'cli::interior_ptr' to 'void **'
I'm guessing the compiler is converting the void* member pointer to a cli::interior_ptr behind my back.
Any advice on how to do this?
| [
"EDIT: Fixed answer, see below.\nReally you need to know what oldFunction is going to be doing with pStuff. If pStuff is a pointer to some unmanaged data you can try wrapping the definition of m_pStuff with:\n#pragma unmanaged\n\nvoid* m_pStuff\n\n#pragma managed\n\nThis will make the pointer unmanaged which can then be passed into unmanaged functions. Of course you will not be able to assign any managed objects to this pointer directly. \nFundamentally unmanaged and managed pointers are not the same and can't be converted without some sort of glue code that copies the underlying data. Basically managed pointers point to the managed heap and since this is garbage collected the actual memory address they point to can change over time. Unmanaged pointers do not change the memory address without you explicitly doing so. \nScratch that, you can't define unmanaged / managed inside a class definition. But this test code seems to work just fine:\n// TestSol.cpp : main project file.\n\n#include \"stdafx.h\"\n\nusing namespace System;\n\n#pragma unmanaged\n\nvoid oldFunction(void** pStuff)\n{\n return;\n}\n\n#pragma managed\n\nref class Test\n{\npublic:\n void* m_test;\n\n};\n\nint main(array<System::String ^> ^args)\n{\n Console::WriteLine(L\"Hello World\");\n\n Test^ test = gcnew Test();\n void* pStuff = test->m_test;\n oldFunction(&pStuff);\n test->m_test = pStuff;\n\n return 0;\n}\n\nHere I copy the pointer out of the managed object first and then pass that in by to the oldFunction. Then I copy the result (probably updated by oldFunction) back into the managed object. Since the managed object is on the managed heap, the compiler won't let you pass a reference to the pointer contained in that object as it may move when the garbage collector runs.\n",
"Thanks for the advice, the pointer is to an C style abstract structure which I think if I leave that structure exposed to the managed code is going to cause further pain due to its lack of defined structure. So what I think I will do is wrap the C library in C++ and then wrap the C++ wrapper with managed C++, which will prevent exposing those C structures to managed code.\n"
] | [
1,
0
] | [] | [] | [
"managed_c++",
"pointers"
] | stackoverflow_0000027071_managed_c++_pointers.txt |
Q:
How do I get the path where the user installed my Java application?
I want to bring up a file dialog in Java that defaults to the application installation directory.
What's the best way to get that information programmatically?
A:
System.getProperty("user.dir")
gets the directory the Java VM was started from.
A:
System.getProperty("user.dir");
The above method gets the user's working directory when the application was launched. This is fine if the application is launched by a script or shortcut that ensures that this is the case.
However, if the app is launched from somewhere else (entirely possible if the command line is used), then the return value will be wherever the user was when they launched the app.
A more reliable method is to work out the application install directory using ClassLoaders.
| How do I get the path where the user installed my Java application? | I want to bring up a file dialog in Java that defaults to the application installation directory.
What's the best way to get that information programmatically?
| [
"System.getProperty(\"user.dir\") \n\ngets the directory the Java VM was started from.\n",
"System.getProperty(\"user.dir\");\n\nThe above method gets the user's working directory when the application was launched. This is fine if the application is launched by a script or shortcut that ensures that this is the case.\nHowever, if the app is launched from somewhere else (entirely possible if the command line is used), then the return value will be wherever the user was when they launched the app.\nA more reliable method is to work out the application install directory using ClassLoaders.\n"
] | [
8,
4
] | [] | [] | [
"environment_variables",
"java"
] | stackoverflow_0000028428_environment_variables_java.txt |
Q:
What are some instances in which expression trees are useful?
I completely understand the concept of expression trees, but I am having a hard time trying to find situations in which they are useful. Is there a specific instance in which expression trees can be applied? Or is it only useful as a transport mechanism for code? I feel like I am missing something here. Thanks!
A:
Some unit test mocking frameworks make use of expression trees in order to set up strongly typed expectations/verifications. Ie:
myMock.Verify(m => m.SomeMethod(someObject)); // tells moq to verify that the method
// SomeMethod was called with
// someObject as the argument
Here, the expression is never actually executed, but the expression itself holds the interesting information. The alternative without expression trees would be
myMock.Verify("SomeMethod", someObject) // we've lost the strong typing
A:
Or is it only useful as a transport mechanism for code?
It's useful as an execution mechanism for code. Using the interpreter pattern, expression trees can directly be interpreted. This is useful because it's very easy and fast to implement. Such interpreters are ubiquitous and used even in cases that don't seem to “interpret” anything, e.g. for printing nested structures.
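As a rough illustration of that point, the sketch below (names invented for the example; only System.Linq.Expressions is assumed) builds the tree for x => x * 2 + 1 and then executes it two ways: by compiling it to a delegate, and by walking the nodes directly, interpreter-style.
using System;
using System.Linq.Expressions;
class ExpressionDemo
{
    static void Main()
    {
        // Build the tree by hand: x => x * 2 + 1
        ParameterExpression x = Expression.Parameter(typeof(int), "x");
        Expression body = Expression.Add(
            Expression.Multiply(x, Expression.Constant(2)),
            Expression.Constant(1));
        var lambda = Expression.Lambda<Func<int, int>>(body, x);
        // Execution mechanism 1: compile the tree into a delegate and call it.
        Func<int, int> compiled = lambda.Compile();
        Console.WriteLine(compiled(5));        // 11
        // Execution mechanism 2: interpret the tree directly.
        Console.WriteLine(Evaluate(body, 5));  // 11
    }
    // A tiny ad-hoc interpreter covering only the node types used above.
    static int Evaluate(Expression e, int arg)
    {
        switch (e.NodeType)
        {
            case ExpressionType.Constant:
                return (int)((ConstantExpression)e).Value;
            case ExpressionType.Parameter:
                return arg;
            case ExpressionType.Add:
                var add = (BinaryExpression)e;
                return Evaluate(add.Left, arg) + Evaluate(add.Right, arg);
            case ExpressionType.Multiply:
                var mul = (BinaryExpression)e;
                return Evaluate(mul.Left, arg) * Evaluate(mul.Right, arg);
            default:
                throw new NotSupportedException(e.NodeType.ToString());
        }
    }
}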
A:
Expression trees are useful when you need to access function logic in order to alter or reapply it in some way.
Linq to SQL is a good example:
//a linq to sql statement
var recs = (
from rec in LinqDataContext.Table
where rec.IntField > 5
select rec );
If we didn't have expression trees this statement would have to return all the records, and then apply the C# where logic to each.
With expression trees that where rec.IntField > 5 can be parsed into SQL:
--SQL statment executed
select *
from [table]
where [table].[IntField] > 5
| What are some instances in which expression trees are useful? | I completely understand the concept of expression trees, but I am having a hard time trying to find situations in which they are useful. Is there a specific instance in which expression trees can be applied? Or is it only useful as a transport mechanism for code? I feel like I am missing something here. Thanks!
| [
"Some unit test mocking frameworks make use of expression trees in order to set up strongly typed expectations/verifications. Ie:\nmyMock.Verify(m => m.SomeMethod(someObject)); // tells moq to verify that the method\n // SomeMethod was called with \n // someObject as the argument\n\nHere, the expression is never actually executed, but the expression itself holds the interesting information. The alternative without expression trees would be\nmyMock.Verify(\"SomeMethod\", someObject) // we've lost the strong typing\n\n",
"\nOr is it only useful as a transport mechanism for code?\n\nIt's useful as an execution mechanism for code. Using the interpreter pattern, expression trees can directly be interpreted. This is useful because it's very easy and fast to implement. Such interpreters are ubiquitous and used even in cases that don't seem to “interpret” anything, e.g. for printing nested structures.\n",
"Expression trees are useful when you need to access function logic in order to alter or reapply it in some way.\nLinq to SQL is a good example:\n//a linq to sql statement\nvar recs (\n from rec in LinqDataContext.Table\n where rec.IntField > 5\n select rec );\n\nIf we didn't have expression trees this statement would have to return all the records, and then apply the C# where logic to each.\nWith expression trees that where rec.IntField > 5 can be parsed into SQL:\n--SQL statment executed\nselect *\nfrom [table]\nwhere [table].[IntField] > 5\n\n"
] | [
7,
6,
4
] | [] | [] | [
"c#",
"expression"
] | stackoverflow_0000027726_c#_expression.txt |
Q:
Is it possible to use nHibernate with Paradox database?
Is it possible to configure nHibernate to connect to Paradox database (*.db files)?
A:
Yes, sort of.
There is no support included in the trunk, you need to write your own dialect. Or you can port the Paradox dialect created for Hibernate.
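A custom dialect is essentially a class derived from NHibernate.Dialect.Dialect whose constructor registers the column types the target database understands. Purely as an untested sketch (the Paradox type names on the right-hand side are guesses that would need checking against whatever ODBC/OLE DB driver you use), it might start out something like:
using System.Data;
using NHibernate.Dialect;
public class ParadoxDialect : Dialect
{
    public ParadoxDialect()
    {
        // Map ADO.NET DbTypes to (assumed) Paradox column types.
        RegisterColumnType(DbType.Boolean, "LOGICAL");
        RegisterColumnType(DbType.Int16, "SMALLINT");
        RegisterColumnType(DbType.Int32, "INTEGER");
        RegisterColumnType(DbType.Double, "NUMERIC");
        RegisterColumnType(DbType.String, "ALPHA(255)");
        RegisterColumnType(DbType.Date, "DATE");
    }
}
You would then point the dialect configuration property at this class and supply a driver/connection string that can actually open the .db files.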
| Is it possible to use nHibernate with Paradox database? | Is it possible to configure nHibernate to connect to Paradox database (*.db files)?
| [
"Yes, sort of. \nThere is no support included in the trunk, you need to write your own dialect. Or you can port the Paradox dialect created for Hibernate.\n"
] | [
1
] | [] | [] | [
"database",
"nhibernate",
"paradox"
] | stackoverflow_0000028560_database_nhibernate_paradox.txt |
Q:
Finding controls that use a certain interface in ASP.NET
Having a heckuva time with this one, though I feel I'm missing something obvious. I have a control that inherits from System.Web.UI.WebControls.Button, and then implements an interface that I have set up. So think...
public class Button : System.Web.UI.WebControls.Button, IMyButtonInterface { ... }
In the codebehind of a page, I'd like to find all instances of this button from the ASPX. Because I don't really know what the type is going to be, just the interface it implements, that's all I have to go on when looping through the control tree. Thing is, I've never had to determine if an object uses an interface versus just testing its type. How can I loop through the control tree and yank anything that implements IMyButtonInterface in a clean way (Linq would be fine)?
Again, know it's something obvious, but just now started using interfaces heavily and I can't seem to focus my Google results enough to figure it out :)
Edit: GetType() returns the actual class, but doesn't return the interface, so I can't test on that (e.g., it'd return "MyNamespace.Button" instead of "IMyButtonInterface"). In trying to use "as" or "is" in a recursive function, the type parameter doesn't even get recognized within the function! It's rather bizarre. So
if(ctrl.GetType() == typeToFind) //ok
if(ctrl is typeToFind) //typeToFind isn't recognized! eh?
Definitely scratching my head over this one.
A:
Longhorn213 almost has the right answer, but as Sean Chambers and bdukes say, you should use
ctrl is IInterfaceToFind
instead of
ctrl.GetType() == aTypeVariable
The reason why is that if you use .GetType() you will get the true type of an object, not necessarily what it can also be cast to in its inheritance/Interface implementation chain. Also, .GetType() will never return an abstract type/interface since you can't new up an abstract type or interface. GetType() returns concrete types only.
The reason this doesn't work
if(ctrl is typeToFind)
Is because the type of the variable typeToFind is actually System.RuntimeType, not the type you've set its value to. Example, if you set a string's value to "foo", its type is still string not "foo". I hope that makes sense. It's very easy to get confused when working with types. I'm chronically confused when working with them.
The most important thing to note about longhorn213's answer is that you have to use recursion or you may miss some of the controls on the page.
Although we have a working solution here, I too would love to see if there is a more succinct way to do this with LINQ.
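For what it's worth, one fairly succinct LINQ shape (a sketch, with the helper name invented for this example) is to flatten the control tree with an iterator and let OfType<T>() do the filtering:
using System.Collections.Generic;
using System.Linq;
using System.Web.UI;
public static class ControlTreeExtensions
{
    // Lazily walks every descendant of a control (helper name made up for the example).
    public static IEnumerable<Control> AllControls(this Control root)
    {
        foreach (Control child in root.Controls)
        {
            yield return child;
            foreach (Control descendant in child.AllControls())
                yield return descendant;
        }
    }
}
// In the code-behind:
// var myButtons = Page.AllControls().OfType<IMyButtonInterface>().ToList();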
A:
You can just search on the Interface. This also uses recursion if the control has child controls, i.e. the button is in a panel.
private List<Control> FindControlsByType(ControlCollection controls, Type typeToFind)
{
List<Control> foundList = new List<Control>();
    foreach (Control ctrl in controls)
{
if (ctrl.GetType() == typeToFind)
{
// Do whatever with interface
foundList.Add(ctrl);
}
// Check if the Control has Child Controls and use Recursion
// to keep checking them
if (ctrl.HasControls())
{
// Call Function to
List<Control> childList = FindControlsByType(ctrl.Controls, typeToFind);
foundList.AddRange(childList);
}
}
return foundList;
}
// Pass it this way
FindControlsByType(Page.Controls, typeof(IYourInterface));
A:
I'd make the following changes to Longhorn213's example to clean this up a bit:
private List<T> FindControlsByType<T>(ControlCollection controls) where T : class
{
List<T> foundList = new List<T>();
    foreach (Control ctrl in controls)
{
if (ctrl as T != null )
{
// Do whatever with interface
foundList.Add(ctrl as T);
}
// Check if the Control has Child Controls and use Recursion
// to keep checking them
if (ctrl.HasControls())
{
// Call Function to
List<T> childList = FindControlsByType<T>( ctrl.Controls );
foundList.AddRange( childList );
}
}
return foundList;
}
// Pass it this way
FindControlsByType<IYourInterface>( Page.Controls );
This way you get back a list of objects of the desired type that don't require another cast to use. I also made the required change to the "as" operator that the others pointed out.
A:
Interfaces are close enough to types that it should feel about the same. I'd use the as operator.
foreach (Control c in this.Page.Controls) {
IMyButtonInterface myButton = c as IMyButtonInterface;
if (myButton != null) {
// do something
}
}
You can also test using the is operator, depending on your need.
if (c is IMyButtonInterface) {
...
}
A:
Would the "is" operator work?
if (myControl is ISomeInterface)
{
// do something
}
A:
If you're going to do some work on it if it is of that type, then TryCast is what I'd use.
Dim c as IInterface = TryCast(obj, IInterface)
If c IsNot Nothing
'do work
End if
A:
you can always just use the as cast:
IMyButtonInterface myButton = c as IMyButtonInterface;
if (myButton != null)
{
    // myButton is an IMyButtonInterface
}
| Finding controls that use a certain interface in ASP.NET | Having a heckuva time with this one, though I feel I'm missing something obvious. I have a control that inherits from System.Web.UI.WebControls.Button, and then implements an interface that I have set up. So think...
public class Button : System.Web.UI.WebControls.Button, IMyButtonInterface { ... }
In the codebehind of a page, I'd like to find all instances of this button from the ASPX. Because I don't really know what the type is going to be, just the interface it implements, that's all I have to go on when looping through the control tree. Thing is, I've never had to determine if an object uses an interface versus just testing its type. How can I loop through the control tree and yank anything that implements IMyButtonInterface in a clean way (Linq would be fine)?
Again, know it's something obvious, but just now started using interfaces heavily and I can't seem to focus my Google results enough to figure it out :)
Edit: GetType() returns the actual class, but doesn't return the interface, so I can't test on that (e.g., it'd return "MyNamespace.Button" instead of "IMyButtonInterface"). In trying to use "as" or "is" in a recursive function, the type parameter doesn't even get recognized within the function! It's rather bizarre. So
if(ctrl.GetType() == typeToFind) //ok
if(ctrl is typeToFind) //typeToFind isn't recognized! eh?
Definitely scratching my head over this one.
| [
"Longhorn213 almost has the right answer, but as as Sean Chambers and bdukes say, you should use \nctrl is IInterfaceToFind\n\ninstead of \nctrl.GetType() == aTypeVariable \n\nThe reason why is that if you use .GetType() you will get the true type of an object, not necessarily what it can also be cast to in its inheritance/Interface implementation chain. Also, .GetType() will never return an abstract type/interface since you can't new up an abstract type or interface. GetType() returns concrete types only.\nThe reason this doesn't work\nif(ctrl is typeToFind) \n\nIs because the type of the variable typeToFind is actually System.RuntimeType, not the type you've set its value to. Example, if you set a string's value to \"foo\", its type is still string not \"foo\". I hope that makes sense. It's very easy to get confused when working with types. I'm chronically confused when working with them.\nThe most import thing to note about longhorn213's answer is that you have to use recursion or you may miss some of the controls on the page. \nAlthough we have a working solution here, I too would love to see if there is a more succinct way to do this with LINQ. \n",
"You can just search on the Interface. This also uses recursion if the control has child controls, i.e. the button is in a panel.\nprivate List<Control> FindControlsByType(ControlCollection controls, Type typeToFind)\n{\n List<Control> foundList = new List<Control>();\n\n foreach (Control ctrl in this.Page.Controls)\n {\n if (ctrl.GetType() == typeToFind)\n {\n // Do whatever with interface\n foundList.Add(ctrl);\n }\n\n // Check if the Control has Child Controls and use Recursion\n // to keep checking them\n if (ctrl.HasControls())\n {\n // Call Function to \n List<Control> childList = FindControlsByType(ctrl.Controls, typeToFind);\n\n foundList.AddRange(childList);\n }\n }\n\n return foundList;\n}\n\n// Pass it this way\nFindControlsByType(Page.Controls, typeof(IYourInterface));\n\n",
"I'd make the following changes to Longhorn213's example to clean this up a bit: \nprivate List<T> FindControlsByType<T>(ControlCollection controls )\n{\n List<T> foundList = new List<T>();\n\n foreach (Control ctrl in this.Page.Controls)\n {\n if (ctrl as T != null )\n {\n // Do whatever with interface\n foundList.Add(ctrl as T);\n }\n\n // Check if the Control has Child Controls and use Recursion\n // to keep checking them\n if (ctrl.HasControls())\n {\n // Call Function to \n List<T> childList = FindControlsByType<T>( ctrl.Controls );\n\n foundList.AddRange( childList );\n }\n }\n\n return foundList;\n}\n\n// Pass it this way\nFindControlsByType<IYourInterface>( Page.Controls );\n\nThis way you get back a list of objects of the desired type that don't require another cast to use. I also made the required change to the \"as\" operator that the others pointed out. \n",
"Interfaces are close enough to types that it should feel about the same. I'd use the as operator.\nforeach (Control c in this.Page.Controls) {\n IMyButtonInterface myButton = c as IMyButtonInterface;\n if (myButton != null) {\n // do something\n }\n}\n\nYou can also test using the is operator, depending on your need.\nif (c is IMyButtonInterface) {\n ...\n}\n\n",
"Would the \"is\" operator work?\nif (myControl is ISomeInterface)\n{\n // do something\n}\n\n",
"If you're going to do some work on it if it is of that type, then TryCast is what I'd use.\nDim c as IInterface = TryCast(obj, IInterface)\nIf c IsNot Nothing\n 'do work\nEnd if\n\n",
"you can always just use the as cast:\nc as IMyButtonInterface;\n\nif (c != null)\n{\n // c is an IMyButtonInterface\n}\n\n"
] | [
7,
5,
4,
1,
1,
0,
0
] | [] | [] | [
"asp.net",
"c#"
] | stackoverflow_0000028642_asp.net_c#.txt |
Q:
What exactly is Microsoft Expression Studio and how does it integrate with Visual Studio?
My university is part of MSDNAA, so I downloaded it a while back, but I just got around to installing it. I guess part of it replaces FrontPage for web editing, and there appears to be a video editor and a vector graphics editor, but I don't think I've even scratched the surface of what it is and what it can do. Could someone enlighten me, especially since I haven't found an "Expression Studio for Dummies" type website.
A:
Expression Studio is basically a design studio. It consists of a bunch of design software that Microsoft has bought for the most part. The audience is designers, not developers. The gist of the software is that Expression Blend enables designers and programmers to work seamlessly together in letting the designer create the graphical user interface.
In a normal workflow, the designer would deliver a mockup which the developer would have to implement. Using Expression Blend in combination with WPF, this is no longer necessary. The graphical UI made by the designer is functional. All the developer has to do is write the code for the function behind the design.
This in itself is great because developers invariably fail to implement the design as thought out by the designer. Technical limitations, lack of communication … whatever the reason. UIs never look like the mockups done up front.
Expression Design is basically a vector drawing program that can be used to design smaller components that are then used within Expression Blend as parts of the UI. For example, graphical buttons could be designed that way. It can also be used as a vanilla drawing program. I did the graphics in my thesis using Expression Design.
A:
The idea is that designers will work in Expression Design (to design vector artwork) and Expression Blend (to build and style XAML interactions, as well as to define timeline based animations and interactions).
Developers will work on the application in Visual Studio. Visual Studio includes very basic XAML editing capabilities, so developers would only be making minor edits and would mostly be focusing on the code-behind.
That's the theory / product strategy side of it. In reality, if you're performing both roles, you'll end up having your project open in both Expression Blend and Visual Studio, switching back and forth between them depending on whether you're doing "designer tasks" or "developer tasks". Fortunately, Expression Blend and Visual Studio use the same project files.
A:
From Wikipedia:
Microsoft Expression Studio is a suite of design and media applications from Microsoft aimed at developers and designers. It consists of:
Microsoft Expression Web (code-named Quartz) - WYSIWYG website designer and HTML editor.
Microsoft Expression Blend (code-named Sparkle) - Visual user interface builder for Windows Presentation Foundation and Silverlight applications.
Microsoft Expression Design (code-named Acrylic) - Raster and vector graphics editor.
Microsoft Expression Media - Digital asset and media manager.
Microsoft Expression Encoder - VC-1 content professional encoder.
For web development Expression Web is useful. For XAML development, Blend and Design are useful.
A:
EDIT: Okay, I type too slow so most of what I had to say was already mentioned, so I'll strip it out except for...
The BIG thing to take note of is that the WYSIWYG designer they used in Expression Web made its way into Visual Studio 2008, which is a VERY GOOD thing. There is now EXCELLENT support for CSS, a better editing interface, and you can even go into a split edit mode to see the code and the content while editing.
For the longest time I was using Expression Web to do all my initial layout and then loading that into Visual Studio 2005. With Visual Studio 2008, there is no need to do it.
A:
The Expression site is the first place to start. These are tools that bridge the developer/designer gap for building rich internet applications with Silverlight and WPF. They compete with Adobe Studio products.
Whilst Visual Studio is good for working with code, it has some weaknesses when it comes to dealing with XAML. In many cases a designer will build something visually different from a Windows application and Expression Blend allows them this freedom. It ties in Visual Studio for the C#/VB coding and debugging part of development.
A:
Expression Studio is targeted more at designers. It integrates with Visual Studio in that Expression Studio uses solution and project files, just like Visual Studio. Which makes collaborating with designer easier. The developer and the designer open up the same project. The developer sets up the initial page with all the binding and the designer takes that page and makes it look pretty.
A:
Please check out XAML .NET development; most of the tutorials make use of many Expression tools.
| What exactly is Microsoft Expression Studio and how does it integrate with Visual Studio? | My university is part of MSDNAA, so I downloaded it a while back, but I just got around to installing it. I guess part of it replaces FrontPage for web editing, and there appears to be a video editor and a vector graphics editor, but I don't think I've even scratched the surface of what it is and what it can do. Could someone enlighten me, especially since I haven't found an "Expression Studio for Dummies" type website.
| [
"Expression Studio is basically a design studio. It consists of a bunch of design software that Microsoft has bought for the most part. The audience is designers, not developers. The gist of the software is that Expression Blend enables designers and programmers to work seamlessly together in letting the designer create the graphical user interface.\nIn a normal workflow, the designer would deliver a mockup which the developer would have to implement. Using Expression Blend in combination with WPF, this is no longer necessary. The graphical UI made by the designer is functional. All the developer has to do is write the code for the function behind the design.\nThis in itself is great because developers invariably fail to implement the design as thought out by the designer. Technical limitations, lack of communication … whatever the reason. UIs never look like them mockup done up front.\nExpression Design is basically a vector drawing program that can be used to design smaller components that are then used within Expression Blend as parts of the UI. For example, graphical buttons could be designed that way. It can also be used as a vanilla drawing program. I did the graphics in my thesis using Expression Design.\n",
"The idea is that designers will work in Expression Design (to design vector artwork) and Expression Blend (to build and style XAML interactions, as well as to define timeline based animations and interactions).\nDevelopers will work on the application in Visual Studio. Visual Studio includes very basic XAML editing capabilities, so developers would only be making minor edits and would mostly be focusing on the code-behind.\nThat's the theory / product strategy side of it. In reality, if you're performing both roles, you'll end up having your project open in both Expression Blend and Visual Studio, switching back and forth between them depending on whether you're doing \"designer tasks\" or \"developer tasks\". Fortunately, Expression Blend and Visual Studio use the same project files.\n",
"From Wikipedia:\nMicrosoft Expression Studio is a suite of design and media applications from Microsoft aimed at developers and designers. It consists of:\n\nMicrosoft Expression Web (code-named Quartz) - WYSIWYG website designer and HTML editor.\nMicrosoft Expression Blend (code-named Sparkle) - Visual user interface builder for Windows Presentation Foundation and Silverlight applications.\nMicrosoft Expression Design (code-named Acrylic) - Raster and vector graphics editor.\nMicrosoft Expression Media - Digital asset and media manager.\nMicrosoft Expression Encoder - VC-1 content professional encoder.\n\nFor web development Expression Web is useful. For XAML development, Blend and Design are useful.\n",
"EDIT: Okay, I type too slow so most of what I had to say was already mentioned, so I'll strip it out except for...\nThe BIG thing to take note of is that the WSYWIG designer they used in Expression Web made it's way into Visual Studio 2008, which is a VERY GOOD thing. There is now EXCELLENT support for CSS, a better editing interface, and you can even go into a split edit mode to see the code and the content while editing. \nFor the longest time I was using Expression Web to do all my initial layout and then loading that into Visual Studio 2005. With Visual Studio 2008, there is no need to do it. \n",
"The Expression site is the first place to start. These are tools that bridge the developer/designer gap for building rich internet applications with Silverlight and WPF. They compete with Adobe Studio products.\nWhilst Visual Studio is good for working with code, it has some weaknesses when it comes to dealing with XAML. In many cases a designer will build something visually different from a Windows application and Expression Blend allows them this freedom. It ties in Visual Studio for the C#/VB coding and debugging part of development.\n",
"Expression Studio is targeted more at designers. It integrates with Visual Studio in that Expression Studio uses solution and project files, just like Visual Studio. Which makes collaborating with designer easier. The developer and the designer open up the same project. The developer sets up the initial page with all the binding and the designer takes that page and makes it look pretty.\n",
"Please check for XAML .NET development, most of the tutorials makes use of many Expression tools.\n"
] | [
18,
8,
1,
1,
0,
0,
0
] | [] | [] | [
"expression_studio",
"integration",
"visual_studio"
] | stackoverflow_0000028826_expression_studio_integration_visual_studio.txt |
Q:
Simple Object to Database Product
I've been taking a look at some different products for .NET which propose to speed up development time by providing a way for business objects to map seamlessly to an automatically generated database. I've never had a problem writing a data access layer, but I'm wondering if this type of product will really save the time it claims. I also worry that I will be giving up too much control over the database and make it harder to track down any data level problems. Do these type of products make it better or worse in the already tough case that the database and business object structure must change?
For example:
Object Relation Mapping from Dev Express
In essence, is it worth it? Will I save "THAT" much time, effort, and future bugs?
A:
I have used SubSonic and EntitySpaces. Once you get the hang of them, I believe they can save you time, but as complexity of your app and volume of data grow, you may outgrow these tools. You start to lose time trying to figure out if something like a performance issue is related to the ORM or to your code. So, to answer your question, I think it depends. I tend to agree with Eric on this, high volume enterprise apps are not a good place for general purpose ORMs, but in standard fare smaller CRUD type apps, you might see some saved time.
A:
I've found iBatis from the Apache group to be an excellent solution to this problem. My team is currently using iBatis to map all of our calls from Java to our MySQL backend. It's been a huge benefit as it's easy to manage all of our SQL queries and procedures because they're all located in XML files, not in our code. Separating SQL from your code, no matter what the language, is a great help.
Additionally, iBatis allows you to write your own data mappers to map data to and from your objects to the DB. We wanted this flexibility, as opposed to a Hibernate type solution that does everything for you, but also (IMO) limits your ability to perform complex queries.
There is a .NET version of iBatis as well.
A:
I've recently set up ActiveRecord from the Castle Project for an app. It was pretty easy to get going. After creating a new app with it, I even used MyGeneration to script out class files for a legacy app that ActiveRecord could use in a pretty short time. It uses NHibernate to interact with the database, but takes away all the xml mapping that comes with NHibernate. The nice thing is though, if necessary, you already have NHibernate in your project, you can use its full power if you have some special cases. I'd suggest taking a look at it.
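To give a feel for it, a bare-bones ActiveRecord-mapped class looks roughly like the sketch below (the Customer entity and its columns are invented for the example):
using Castle.ActiveRecord;
// Illustrative entity; table and column names are made up.
[ActiveRecord("Customers")]
public class Customer : ActiveRecordBase<Customer>
{
    [PrimaryKey]
    public int Id { get; set; }
    [Property]
    public string Name { get; set; }
    [Property]
    public string Email { get; set; }
}
// After ActiveRecordStarter.Initialize(...) has been called at startup:
// var c = new Customer { Name = "Acme" };
// c.Save();
// Customer[] all = Customer.FindAll();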
A:
There are lots of choices of ORMs. Linq to Sql, nHibernate. For pure object databases there is db4o.
It depends on the application, but for a high volume enterprise application, I would not go this route. You need more control of your data.
A:
I was discussing this with a friend over the weekend and it seems like the gains you make on ease of storage are lost if you need to be able to query the database outside of the application. My understanding is that these databases work by storing your object data in a de-normalized fashion. This makes it fast to retrieve entire sets of objects, but if you need to select data from a perspective that doesn't match your object model, the odbms might have a hard time getting at the particular data you want.
| Simple Object to Database Product | I've been taking a look at some different products for .NET which propose to speed up development time by providing a way for business objects to map seamlessly to an automatically generated database. I've never had a problem writing a data access layer, but I'm wondering if this type of product will really save the time it claims. I also worry that I will be giving up too much control over the database and make it harder to track down any data level problems. Do these type of products make it better or worse in the already tough case that the database and business object structure must change?
For example:
Object Relation Mapping from Dev Express
In essence, is it worth it? Will I save "THAT" much time, effort, and future bugs?
| [
"I have used SubSonic and EntitySpaces. Once you get the hang of them, I beleive they can save you time, but as complexity of your app and volume of data grow, you may outgrow these tools. You start to lose time trying to figure out if something like a performance issue is related to the ORM or to your code. So, to answer your question, I think it depends. I tend to agree with Eric on this, high volume enterprise apps are not a good place for general purpose ORMs, but in standard fare smaller CRUD type apps, you might see some saved time.\n",
"I've found iBatis from the Apache group to be an excellent solution to this problem. My team is currently using iBatis to map all of our calls from Java to our MySQL backend. It's been a huge benefit as it's easy to manage all of our SQL queries and procedures because they're all located in XML files, not in our code. Separating SQL from your code, no matter what the language, is a great help.\nAdditionally, iBatis allows you to write your own data mappers to map data to and from your objects to the DB. We wanted this flexibility, as opposed to a Hibernate type solution that does everything for you, but also (IMO) limits your ability to perform complex queries.\nThere is a .NET version of iBatis as well.\n",
"I've recently set up ActiveRecord from the Castle Project for an app. It was pretty easy to get going. After creating a new app with it, I even used MyGeneration to script out class files for a legacy app that ActiveRecord could use in a pretty short time. It uses NHibernate to interact with the database, but takes away all the xml mapping that comes with NHibernate. The nice thing is though, if necessary, you already have NHibernate in your project, you can use its full power if you have some special cases. I'd suggest taking a look at it.\n",
"There are lots of choices of ORMs. Linq to Sql, nHibernate. For pure object databases there is db4o.\nIt depends on the application, but for a high volume enterprise application, I would not go this route. You need more control of your data.\n",
"I was discussing this with a friend over the weekend and it seems like the gains you make on ease of storage are lost if you need to be able to query the database outside of the application. My understanding is that these databases work by storing your object data in a de-normalized fashion. This makes it fast to retrieve entire sets of objects, but if you need to select data from a perspective that doesn't match your object model, the odbms might have a hard time getting at the particular data you want. \n"
] | [
3,
1,
1,
0,
0
] | [] | [] | [
".net",
"c#",
"database",
"orm"
] | stackoverflow_0000028768_.net_c#_database_orm.txt |
Q:
C++ Memory management
I've learned in College that you always have to free your unused Objects but not how you actually do it. For example structuring your code right and so on.
Are there any general rules on how to handle pointers in C++?
I'm currently not allowed to use boost. I have to stick to pure c++ because the framework I'm using forbids any use of generics.
A:
I have worked with the embedded Symbian OS, which had an excellent system in place for this, based entirely on developer conventions.
Only one object will ever own a pointer. By default this is the creator.
Ownership can be passed on. To indicate passing of ownership, the object is passed as a pointer in the method signature (e.g. void Foo(Bar *zonk);).
The owner will decide when to delete the object.
To pass an object to a method just for use, the object is passed as a reference in the method signature (e.g. void Foo(Bat &zonk);).
Non-owner classes may store references (never pointers) to objects they are given only when they can be certain that the owner will not destroy it during use.
Basically, if a class simply uses something, it uses a reference. If a class owns something, it uses a pointer.
This worked beautifully and was a pleasure to use. Memory issues were very rare.
A:
Rules:
Wherever possible, use a
smart pointer. Boost has some
good ones.
If you
can't use a smart pointer, null out
your pointer after deleting it.
Never work anywhere that won't let you use rule 1.
If someone disallows rule 1, remember that if you grab someone else's code, change the variable names and delete the copyright notices, no-one will ever notice. Unless it's a school project, where they actually check for that kind of shenanigans with quite sophisticated tools. See also, this question.
A:
I would add another rule here:
Don't new/delete an object when an automatic object will do just fine.
We have found that programmers who are new to C++, or programmers coming over from languages like Java, seem to learn about new and then obsessively use it whenever they want to create any object, regardless of the context. This is especially pernicious when an object is created locally within a function purely to do something useful. Using new in this way can be detrimental to performance and can make it all too easy to introduce silly memory leaks when the corresponding delete is forgotten. Yes, smart pointers can help with the latter but it won't solve the performance issues (assuming that new/delete or an equivalent is used behind the scenes). Interestingly (well, maybe), we have found that delete often tends to be more expensive than new when using Visual C++.
Some of this confusion also comes from the fact that functions they call might take pointers, or even smart pointers, as arguments (when references would perhaps be better/clearer). This makes them think that they need to "create" a pointer (a lot of people seem to think that this is what new does) to be able to pass a pointer to a function. Clearly, this requires some rules about how APIs are written to make calling conventions as unambiguous as possible, which are reinforced with clear comments supplied with the function prototype.
A:
In the general case (resource management, where resource is not necessarily memory), you need to be familiar with the RAII pattern. This is one of the most important pieces of information for C++ developers.
A:
In general, avoid allocating from the heap unless you have to. If you have to, use reference counting for objects that are long-lived and need to be shared between diverse parts of your code.
Sometimes you need to allocate objects dynamically, but they will only be used within a certain span of time. For example, in a previous project I needed to create a complex in-memory representation of a database schema -- basically a complex cyclic graph of objects. However, the graph was only needed for the duration of a database connection, after which all the nodes could be freed in one shot. In this kind of scenario, a good pattern to use is something I call the "local GC idiom." I'm not sure if it has an "official" name, as it's something I've only seen in my own code, and in Cocoa (see NSAutoreleasePool in Apple's Cocoa reference).
In a nutshell, you create a "collector" object that keeps pointers to the temporary objects that you allocate using new. It is usually tied to some scope in your program, either a static scope (e.g. -- as a stack-allocated object that implements the RAII idiom) or a dynamic one (e.g. -- tied to the lifetime of a database connection, as in my previous project). When the "collector" object is freed, its destructor frees all of the objects that it points to.
Also, like DrPizza I think the restriction to not use templates is too harsh. However, having done a lot of development on ancient versions of Solaris, AIX, and HP-UX (just recently - yes, these platforms are still alive in the Fortune 50), I can tell you that if you really care about portability, you should use templates as little as possible. Using them for containers and smart pointers ought to be ok, though (it worked for me). Without templates the technique I described is more painful to implement. It would require that all objects managed by the "collector" derive from a common base class.
A:
G'day,
I'd suggest reading the relevant sections of "Effective C++" by Scott Meyers. Easy to read and he covers some interesting gotchas to trap the unwary.
I'm also intrigued by the lack of templates. So no STL or Boost. Wow.
BTW Getting people to agree on conventions is an excellent idea. As is getting everyone to agree on conventions for OOD. BTW The latest edition of Effective C++ doesn't have the excellent chapter about OOD conventions that the first edition had which is a pity, e.g. conventions such as public virtual inheritance always models an "isa" relationship.
Rob
A:
When you have to manage memory manually, make sure you call delete in the same scope/function/class/module, whichever applies first, e.g.:
Let the caller of a function allocate the memory that is filled by it,
do not return new'ed pointers.
Always call delete in the same exe/dll as you called new in, because otherwise you may have problems with heap corruptions (different incompatible runtime libraries).
A:
You could derive everything from some base class that implements smart-pointer-like functionality (using ref()/unref() methods and a counter).
All points highlighted by @Timbo are important when designing that base class.
| C++ Memory management | I've learned in College that you always have to free your unused Objects but not how you actually do it. For example structuring your code right and so on.
Are there any general rules on how to handle pointers in C++?
I'm currently not allowed to use boost. I have to stick to pure c++ because the framework I'm using forbids any use of generics.
| [
"I have worked with the embedded Symbian OS, which had an excellent system in place for this, based entirely on developer conventions.\n\nOnly one object will ever own a pointer. By default this is the creator.\nOwnership can be passed on. To indicate passing of ownership, the object is passed as a pointer in the method signature (e.g. void Foo(Bar *zonk);).\nThe owner will decide when to delete the object.\nTo pass an object to a method just for use, the object is passed as a reference in the method signature (e.g. void Foo(Bat &zonk);).\nNon-owner classes may store references (never pointers) to objects they are given only when they can be certain that the owner will not destroy it during use.\n\nBasically, if a class simply uses something, it uses a reference. If a class owns something, it uses a pointer.\nThis worked beautifully and was a pleasure to use. Memory issues were very rare.\n",
"Rules:\n\nWherever possible, use a\nsmart pointer. Boost has some\ngood ones. \nIf you\ncan't use a smart pointer, null out\nyour pointer after deleting it.\nNever work anywhere that won't let you use rule 1.\n\nIf someone disallows rule 1, remember that if you grab someone else's code, change the variable names and delete the copyright notices, no-one will ever notice. Unless it's a school project, where they actually check for that kind of shenanigans with quite sophisticated tools. See also, this question.\n",
"I would add another rule here:\n\nDon't new/delete an object when an automatic object will do just fine.\n\nWe have found that programmers who are new to C++, or programmers coming over from languages like Java, seem to learn about new and then obsessively use it whenever they want to create any object, regardless of the context. This is especially pernicious when an object is created locally within a function purely to do something useful. Using new in this way can be detrimental to performance and can make it all too easy to introduce silly memory leaks when the corresponding delete is forgotten. Yes, smart pointers can help with the latter but it won't solve the performance issues (assuming that new/delete or an equivalent is used behind the scenes). Interestingly (well, maybe), we have found that delete often tends to be more expensive than new when using Visual C++.\nSome of this confusion also comes from the fact that functions they call might take pointers, or even smart pointers, as arguments (when references would perhaps be better/clearer). This makes them think that they need to \"create\" a pointer (a lot of people seem to think that this is what new does) to be able to pass a pointer to a function. Clearly, this requires some rules about how APIs are written to make calling conventions as unambiguous as possible, which are reinforced with clear comments supplied with the function prototype.\n",
"In the general case (resource management, where resource is not necessarily memory), you need to be familiar with the RAII pattern. This is one of the most important pieces of information for C++ developers.\n",
"In general, avoid allocating from the heap unless you have to. If you have to, use reference counting for objects that are long-lived and need to be shared between diverse parts of your code.\nSometimes you need to allocate objects dynamically, but they will only be used within a certain span of time. For example, in a previous project I needed to create a complex in-memory representation of a database schema -- basically a complex cyclic graph of objects. However, the graph was only needed for the duration of a database connection, after which all the nodes could be freed in one shot. In this kind of scenario, a good pattern to use is something I call the \"local GC idiom.\" I'm not sure if it has an \"official\" name, as it's something I've only seen in my own code, and in Cocoa (see NSAutoreleasePool in Apple's Cocoa reference).\nIn a nutshell, you create a \"collector\" object that keeps pointers to the temporary objects that you allocate using new. It is usually tied to some scope in your program, either a static scope (e.g. -- as a stack-allocated object that implements the RAII idiom) or a dynamic one (e.g. -- tied to the lifetime of a database connection, as in my previous project). When the \"collector\" object is freed, its destructor frees all of the objects that it points to.\nAlso, like DrPizza I think the restriction to not use templates is too harsh. However, having done a lot of development on ancient versions of Solaris, AIX, and HP-UX (just recently - yes, these platforms are still alive in the Fortune 50), I can tell you that if you really care about portability, you should use templates as little as possible. Using them for containers and smart pointers ought to be ok, though (it worked for me). Without templates the technique I described is more painful to implement. It would require that all objects managed by the \"collector\" derive from a common base class.\n",
"G'day,\nI'd suggest reading the relevant sections of \"Effective C++\" by Scott Meyers. Easy to read and he covers some interesting gotchas to trap the unwary.\nI'm also intrigued by the lack of templates. So no STL or Boost. Wow.\nBTW Getting people to agree on conventions is an excellent idea. As is getting everyone to agree on conventions for OOD. BTW The latest edition of Effective C++ doesn't have the excellent chapter about OOD conventions that the first edition had which is a pity, e.g. conventions such as public virtual inheritance always models an \"isa\" relationship.\nRob\n",
"\nWhen you have to use manage memory\nmanually, make sure you call delete\nin the same\nscope/function/class/module, which\never applies first, e.g.:\nLet the caller of a function allocate the memory that is filled by it,\ndo not return new'ed pointers.\nAlways call delete in the same exe/dll as you called new in, because otherwise you may have problems with heap corruptions (different incompatible runtime libraries).\n\n",
"you could derive everything from some base class that implement smart pointer like functionality (using ref()/unref() methods and a counter.\nAll points highlighted by @Timbo are important when designing that base class.\n"
] | [
14,
5,
3,
2,
2,
1,
0,
0
] | [] | [] | [
"c++",
"memory",
"pointers"
] | stackoverflow_0000027492_c++_memory_pointers.txt |
Q:
SQLServer Get Results Where Value Is Null
I have an SQL server database that I am querying and I only want to get the information when a specific row is null. I used a where statement such as:
WHERE database.foobar = NULL
and it does not return anything. However, I know that there is at least one result because I created an instance in the database where 'foobar' is equal to null. If I take out the where statement it shows data so I know it is not the rest of the query.
Can anyone help me out?
A:
Correct syntax is WHERE database.foobar IS NULL. See http://msdn.microsoft.com/en-us/library/ms188795.aspx for more info
A:
Comparison to NULL will be false every time. You want to use IS NULL instead.
x = NULL -- always false
x <> NULL -- always false
x IS NULL -- these do what you want
x IS NOT NULL
A:
Read Testing for Null Values; you need IS NULL, not = NULL.
A:
Is it an SQL Server database?
If so, use IS NULL instead of making the comparison (MSDN).
| SQLServer Get Results Where Value Is Null | I have an SQL server database that I am querying and I only want to get the information when a specific row is null. I used a where statement such as:
WHERE database.foobar = NULL
and it does not return anything. However, I know that there is at least one result because I created an instance in the database where 'foobar' is equal to null. If I take out the where statement it shows data so I know it is not the rest of the query.
Can anyone help me out?
| [
"Correct syntax is WHERE database.foobar IS NULL. See http://msdn.microsoft.com/en-us/library/ms188795.aspx for more info\n",
"Comparison to NULL will be false every time. You want to use IS NULL instead.\nx = NULL -- always false\nx <> NULL -- always false\n\nx IS NULL -- these do what you want\nx IS NOT NULL\n\n",
"Read Testing for Null Values, you need IS NULL not = NULL\n",
"Is it an SQL Server database?\nIf so, use IS NULL instead of making the comparison (MSDN).\n"
] | [
6,
3,
2,
1
] | [] | [] | [
"oracle",
"sql",
"sql_server"
] | stackoverflow_0000028922_oracle_sql_sql_server.txt |
Q:
Best architecture for handling file system changes?
Here is the scenario:
I'm writing an app that will watch for any changes in a specific directory. This directory will be flooded with thousands of files a minute each with an "almost" unique GUID. The file format is this:
GUID.dat where GUID == xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
(the internal contents aren't relevant, but it's just text data)
My app will be a form that has one single text box that shows all the files that are being added and deleted in real time. Every time a new file comes in I have to update the textbox with this file, BUT I must first make sure that this semi-unique GUID is really unique, if it is, update the textbox with this new file.
When a file is removed from that directory, make sure it exists, then delete it, update textbox accordingly.
The problem is that I've been using the .NET filewatcher and it seems that there is an internal buffer that gets blown up every time the (buffersize + 1)-th file comes in. I also tried to keep an internal List in my app, and just add every single file that comes in, but do the unique-GUID check later, but no dice.
A:
A couple of things that I have in my head:
If the guid is not unique, would it not overwrite the file with the same name, or is the check based on a lookup which does some external action (e.g. check the archive)? (i.e. is this a YAGNI moment?)
I've used FileSystemWatcher before with pretty good success, can you give us some ideas as to how your actually doing things?
When you say "no dice" when working with your custom list, what was the problem? And how were you checking for file system changes without FileSystemWatcher?!
Sorry no answer as yet, just would like to know more about the problem :)
A:
I suggest you take a look at the SHChangeNotify API call, which can notify you of all kinds of shell events. To monitor file creation and deletion activity, you may want to pay special attention to the SHCNE_CREATE and SHCNE_DELETE arguments.
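If you stay with FileSystemWatcher instead, note that it exposes an InternalBufferSize property (8 KB by default) and raises an Error event when that buffer overflows. A common mitigation, sketched below with made-up handler bodies, is to enlarge the buffer, keep the handlers as cheap as possible, and push the paths onto a queue that a separate thread drains to do the GUID check and textbox update:
using System.Collections.Generic;
using System.IO;
class WatcherSetup
{
    static readonly Queue<string> pending = new Queue<string>();
    static FileSystemWatcher CreateWatcher(string path)
    {
        var watcher = new FileSystemWatcher(path, "*.dat");
        watcher.InternalBufferSize = 64 * 1024;        // default is 8 KB; keep it at or below 64 KB
        watcher.NotifyFilter = NotifyFilters.FileName; // only file names appearing/disappearing
        // Do almost nothing in the handlers: just queue the path and return.
        watcher.Created += (s, e) => { lock (pending) pending.Enqueue(e.FullPath); };
        watcher.Deleted += (s, e) => { lock (pending) pending.Enqueue(e.FullPath); };
        // Raised when the internal buffer overflowed and notifications were lost.
        watcher.Error += (s, e) => { /* placeholder: fall back to re-scanning the directory */ };
        watcher.EnableRaisingEvents = true;
        return watcher;
    }
}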
| Best architecture for handling file system changes? | Here is the scenario:
I'm writing an app that will watch for any changes in a specific directory. This directory will be flooded with thousands of files a minute each with an "almost" unique GUID. The file format is this:
GUID.dat where GUID == xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
(the internal contents aren't relevant, but it's just text data)
My app will be a form that has one single text box that shows all the files that are being added and deleted in real time. Every time a new file comes in I have to update the textbox with this file, BUT I must first make sure that this semi-unique GUID is really unique, if it is, update the textbox with this new file.
When a file is removed from that directory, make sure it exists, then delete it, update textbox accordingly.
The problem is that I've been using the .NET filewatcher and it seems that there is an internal buffer that gets blown up every time the (buffersize + 1)-th file comes in. I also tried to keep an internal List in my app, and just add every single file that comes in, but do the unique-GUID check later, but no dice.
| [
"A couple of things that I have in my head:\n\nIf the guid is not unique, would it not overwrite the file with the same name, or is the check based on a lookup which does some external action (e.g. check the archive)? (i.e. is this a YAGNI moment?)\nI've used FileSystemWatcher before with pretty good success, can you give us some ideas as to how your actually doing things?\nWhen you say \"no dice\" when working with your custom list, what was the problem? And how were you checking for file system changes without FileSystemWatcher?!\n\nSorry no answer as yet, just would like to know more about the problem :)\n",
"I suggest you take a look at the SHChangeNotify API call, which can notify you of all kinds of shell events. To monitor file creation and deletion activity, you may want to pay special attention to the SHCNE_CREATE and SHCNE_DELETE arguments.\n"
] | [
3,
2
] | [] | [] | [
".net",
"architecture",
"concurrency",
"filesystemwatcher"
] | stackoverflow_0000028941_.net_architecture_concurrency_filesystemwatcher.txt |
Q:
Why doesn't 'shell' work in VBscript in VS6?
In a macro for Visual Studio 6, I wanted to run an external program, so I typed:
shell("p4 open " + ActiveDocument.FullName)
Which gave me a type mismatch runtime error. What I ended up having to type was this:
Dim wshShell
Set wshShell = CreateObject("WScript.Shell")
strResult = wshShell.Run("p4 open " + ActiveDocument.FullName)
What is going on here? Is that nonsense really necessary or have I missed something?
A:
VBScript isn't Visual Basic.
A:
As lassevk pointed out, VBScript is not Visual Basic.
I believe the only built in object in VBScript is the WScript object.
WScript.Echo "Hello, World!"
From the docs
The WScript object is the root object of the Windows Script Host
object model hierarchy. It never needs to be instantiated before invoking its
properties and methods, and it is always available from any script file.
Everything else must be created via the CreateObject call. Some of those objects are listed here.
The Shell object is one of the other objects that you need to create if you want to call methods on it.
One caveat, is that RegExp is sort of built in, in that you can instantiate a RegExp object like so in VBScript:
Dim r as New RegExp
A:
Give this a try:
Shell "p4 open" & ActiveDocument.FullName
| Why doesn't 'shell' work in VBscript in VS6? | In a macro for Visual Studio 6, I wanted to run an external program, so I typed:
shell("p4 open " + ActiveDocument.FullName)
Which gave me a type mismatch runtime error. What I ended up having to type was this:
Dim wshShell
Set wshShell = CreateObject("WScript.Shell")
strResult = wshShell.Run("p4 open " + ActiveDocument.FullName)
What is going on here? Is that nonsense really necessary or have I missed something?
| [
"VBScript isn't Visual Basic.\n",
"As lassevk pointed out, VBScript is not Visual Basic.\nI believe the only built in object in VBScript is the WScript object.\nWScript.Echo \"Hello, World!\"\n\nFrom the docs\n\nThe WScript object is the root object of the Windows Script Host\n object model hierarchy. It never needs to be instantiated before invoking its\n properties and methods, and it is always available from any script file.\n\nEverything else must be created via the CreateObject call. Some of those objects are listed here.\nThe Shell object is one of the other objects that you need to create if you want to call methods on it.\nOne caveat, is that RegExp is sort of built in, in that you can instantiate a RegExp object like so in VBScript:\nDim r as New RegExp\n\n",
"Give this a try:\nShell \"p4 open\" & ActiveDocument.FullName\n\n"
] | [
1,
1,
0
] | [
"VB6 uses & to concatenate strings rather than +, and you'll want to make sure the file name is encased in quotes in case of spaces. Try it like this: \nShell \"p4 open \"\"\" & ActiveDocument.FullName & \"\"\"\"\n\n"
] | [
-1
] | [
"vbscript",
"visual_studio"
] | stackoverflow_0000020272_vbscript_visual_studio.txt |
Q:
How do you use ssh in a shell script?
When I try to use an ssh command in a shell script, the command just sits there. Do you have an example of how to use ssh in a shell script?
A:
Depends on what you want to do, and how you use it. If you just want to execute a command remotely and safely on another machine, just use
ssh user@host command
for example
ssh user@host ls
In order to do this safely you need to either ask the user for the password during runtime, or set up keys on the remote host.
A:
First, you need to make sure you've set up password-less (public key login). There are at least two flavors of ssh with slightly different configuration file formats. Check the ssh manpage on your system, consult your local sysadmin or head over to How do I setup Public-Key Authentication?.
To run ssh in batch mode (such as within a shell script), you need to pass a command you want to be run. The syntax is:
ssh host command
If you want to run more than one command at the same time, use quotes and semicolons:
ssh host "command1; command2"
The quotes are needed to protect the semicolons from the shell interpreter. If you left them out, only the first command would be run remotely and all the rest would be run on the local machine.
A:
You need to put your SSH public key into the ~/.ssh/authorized_keys file on the remote host. Then you'll be able to SSH to that host password-less.
Alternatively you can use ssh-agent. I would recommend against storing the password in the script.
A:
You can use expect command to populate the username/password info.
| How do you use ssh in a shell script? | When I try to use an ssh command in a shell script, the command just sits there. Do you have an example of how to use ssh in a shell script?
| [
"Depends on what you want to do, and how you use it. If you just want to execute a command remotely and safely on another machine, just use\nssh user@host command\n\nfor example\nssh user@host ls\n\nIn order to do this safely you need to either ask the user for the password during runtime, or set up keys on the remote host.\n",
"First, you need to make sure you've set up password-less (public key login). There are at least two flavors of ssh with slightly different configuration file formats. Check the ssh manpage on your system, consult you local sysadmin or head over to How do I setup Public-Key Authentication?. \nTo run ssh in batch mode (such as within a shell script), you need to pass a command you want to be run. The syntax is:\nssh host command\n\nIf you want to run more than one command at the same time, use quotes and semicolons:\nssh host \"command1; command2\"\n\nThe quotes are needed to protect the semicolons from the shell interpreter. If you left them out, only the first command would be run remotely and all the rest would be run on the local machine.\n",
"You need to put your SSH public key into the ~/.ssh/authorized_keys file on the remote host. Then you'll be able to SSH to that host password-less.\nAlternatively you can use ssh-agent. I would recommend against storing the password in the script.\n",
"You can use expect command to populate the username/password info.\n"
] | [
38,
22,
2,
0
] | [
"The easiest way is using a certificate for the user that runs the script.\nA more complex one implies adding to stdin the password when the shell command asks for it. Expect, perl libraries, show to the user the prompt asking the password (if is interactive, at least), there are a lot of choices.\n"
] | [
-3
] | [
"bash",
"ksh",
"rsh",
"shell",
"ssh"
] | stackoverflow_0000029061_bash_ksh_rsh_shell_ssh.txt |
Q:
What is the difference between a group and match in .NET's RegEx?
What is the difference between a Group and a Match in .NET's RegEx?
A:
A Match is an object that indicates a particular regular expression matched (a portion of) the target text. A Group indicates a portion of a match, if the original regular expression contained group markers (basically a pattern in parentheses). For example, with the following code:
string text = "One car red car blue car";
string pat = @"(\w+)\s+(car)";
Regex r = new Regex(pat);
Match m = r.Match(text);
m would be a Match object that contains two groups - group 1, from (\w+), which captured "One", and group 2 (from (car)) that matched, well, "car".
A:
A Match is a part of a string that matches the regular expression, and there could therefore be multiple matches within a string.
Inside a Match you can define groups, either anonymous or named, to make it easier to split up a match. A simple example is to create a regex to search for URLs, and then use groups inside to find the protocol (http), domain (www.web.com), path (/lol/cats.html) and arguments and what not.
// Example I made up on the spot, probably doesn't work very well
"(?<protocol>\w+)://(?<domain>[^/]+)(?<path>/[^?])"
A single pattern can be found multiple times inside a string, as I said, so if you use Regex.Matches(string text) you will get back multiple matches, each consisting of zero, one or more groups.
Those named groups can be found by either indexing by number, or with a string. The example above can be used like this:
Match match = pattern.Match(urls);
if (!match.Success)
continue;
string protocol = match.Groups["protocol"].Value;
string domain = match.Groups[1].Value;
To make things even more interesting, one group could be matched multiple times, but then I recommend start reading the documentation.
You can also use groups to generate back references, and to do partial search and replace, but read more of that on MSDN.
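To make the multiple-match case concrete, here is a small sketch reusing the "car" text from the first answer - the group names are just invented for the example:
using System;
using System.Text.RegularExpressions;

class GroupDemo
{
    static void Main()
    {
        string text = "One car red car blue car";
        Regex pattern = new Regex(@"(?<word>\w+)\s+(?<noun>car)");

        // Matches returns every non-overlapping match; each Match carries its own Groups.
        foreach (Match m in pattern.Matches(text))
        {
            Console.WriteLine("match: {0}, word: {1}, noun: {2}",
                m.Value, m.Groups["word"].Value, m.Groups["noun"].Value);
        }
        // Three matches here: "One car", "red car" and "blue car".
    }
}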
| What is the difference between a group and match in .NET's RegEx? | What is the difference between a Group and a Match in .NET's RegEx?
| [
"A Match is an object that indicates a particular regular expression matched (a portion of) the target text. A Group indicates a portion of a match, if the original regular expression contained group markers (basically a pattern in parentheses). For example, with the following code:\nstring text = \"One car red car blue car\";\nstring pat = @\"(\\w+)\\s+(car)\";\nMatch m = r.Match(text);\n\nm would be match object that contains two groups - group 1, from (\\w+), and that captured \"One\", and group 2 (from (car)) that matched, well, \"car\".\n",
"A Match is a part of a string that matches the regular expression, and there could therefore be multiple matches within a string.\nInside a Match you can define groups, either anonymous or named, to make it easier to split up a match. A simple example is to create a regex to search for URLs, and then use groups inside to find the protocol (http), domain (www.web.com), path (/lol/cats.html) and arguments and what not. \n// Example I made up on the spot, probably doesn't work very well\n\"(?<protocol>\\w+)://(?<domain>[^/]+)(?<path>/[^?])\"\n\nA single pattern can be found multiple times inside a string, as I said, so if you use Regex.Matches(string text) you will get back multiple matches, each consisting of zero, one or more groups.\nThose named groups can be found by either indexing by number, or with a string. The example above can be used like this:\nMatch match = pattern.Match(urls);\nif (!match.Success) \n continue;\nstring protocol = match.Groups[\"protocol\"].Value;\nstring domain = match.Groups[1].Value;\n\nTo make things even more interesting, one group could be matched multiple times, but then I recommend start reading the documentation.\nYou can also use groups to generate back references, and to do partial search and replace, but read more of that on MSDN.\n"
] | [
8,
2
] | [] | [] | [
".net",
"regex"
] | stackoverflow_0000029088_.net_regex.txt |
Q:
Best method of Textfile Parsing in C#?
I want to parse a config file sorta thing, like so:
[KEY:Value]
[SUBKEY:SubValue]
Now I started with a StreamReader, converting lines into character arrays, when I figured there's gotta be a better way. So I ask you, humble reader, to help me.
One restriction is that it has to work in a Linux/Mono environment (1.2.6 to be exact). I don't have the latest 2.0 release (of Mono), so try to restrict language features to C# 2.0 or C# 1.0.
A:
I considered it, but I'm not going to use XML. I am going to be writing this stuff by hand, and hand editing XML makes my brain hurt. :')
Have you looked at YAML?
You get the benefits of XML without all the pain and suffering. It's used extensively in the ruby community for things like config files, pre-prepared database data, etc
here's an example
customer:
name: Orion
age: 26
addresses:
- type: Work
number: 12
street: Bob Street
- type: Home
number: 15
street: Secret Road
There appears to be a C# library here, which I haven't used personally, but yaml is pretty simple, so "how hard can it be?" :-)
I'd say it's preferable to inventing your own ad-hoc format (and dealing with parser bugs)
A:
I was looking at almost this exact problem the other day: this article on string tokenizing is exactly what you need. You'll want to define your tokens as something like:
@"(?<level>\s) | " +
@"(?<term>[^:\s]) | " +
@"(?<separator>:)"
The article does a pretty good job of explaining it. From there you just start eating up tokens as you see fit.
Protip: For an LL(1) parser (read: easy), tokens cannot share a prefix. If you have abc as a token, you cannot have ace as a token
Note: The article's missing the | characters in its examples, just throw them in.
A:
Using a library is almost always preferable to rolling your own. Here's a quick list of "Oh I'll never need that/I didn't think about that" points which will end up coming to bite you later down the line:
Escaping characters. What if you want a : in the key or ] in the value?
Escaping the escape character.
Unicode
Mix of tabs and spaces (see the problems with Python's white space sensitive syntax)
Handling different return character formats
Handling syntax error reporting
Like others have suggested, YAML looks like your best bet.
A:
There is another YAML library for .NET which is under development. Right now it supports reading YAML streams and has been tested on Windows and Mono. Write support is currently being implemented.
A:
It looks to me that you would be better off using an XML based config file as there are already .NET classes which can read and store the information for you relatively easily. Is there a reason that this is not possible?
@Bernard: It is true that hand editing XML is tedious, but the structure that you are presenting already looks very similar to XML.
Then yes, has a good method there.
A:
You can also use a stack, and use a push/pop algorithm. This one matches open/closing tags.
public string check()
{
    ArrayList tags = getTags();

    int stackSize = tags.Count;
    Stack stack = new Stack(stackSize);

    // Opening tags go on the stack; each closing tag must match the most recent opening tag.
    foreach (string tag in tags)
    {
        if (!tag.Contains("/"))
        {
            stack.Push(tag);
        }
        else
        {
            if (stack.Count > 0)
            {
                string startTag = (string)stack.Pop();
                startTag = startTag.Substring(1, startTag.Length - 1);
                string endTag = tag.Substring(2, tag.Length - 2);
                if (!startTag.Equals(endTag))
                {
                    return "Error: no matching end tag";
                }
            }
            else
            {
                return "Error: no matching opening tag";
            }
        }
    }

    if (stack.Count > 0)
    {
        return "Error: no matching end tag";
    }
    return "Xml is valid";
}
You can probably adapt so you can read the contents of your file. Regular expressions are also a good idea.
A:
@Gishu
Actually, once I'd accounted for escaped characters my regex ran slightly slower than my hand-written top-down recursive parser, and that's without the nesting (linking sub-items to their parents) and error reporting the hand-written parser had.
The regex was slightly faster to write (though I do have a bit of experience with hand parsers) but that's without good error reporting. Once you add that it becomes slightly harder and longer to do.
I also find the hand-written parser easier to understand the intention of. For instance, here is a snippet of the code:
private static Node ParseNode(TextReader reader)
{
Node node = new Node();
int indentation = ParseWhitespace(reader);
Expect(reader, '[');
node.Key = ParseTerminatedString(reader, ':');
node.Value = ParseTerminatedString(reader, ']');
    return node;
}
| Best method of Textfile Parsing in C#? | I want to parse a config file sorta thing, like so:
[KEY:Value]
[SUBKEY:SubValue]
Now I started with a StreamReader, converting lines into character arrays, when I figured there's gotta be a better way. So I ask you, humble reader, to help me.
One restriction is that it has to work in a Linux/Mono environment (1.2.6 to be exact). I don't have the latest 2.0 release (of Mono), so try to restrict language features to C# 2.0 or C# 1.0.
| [
"\nI considered it, but I'm not going to use XML. I am going to be writing this stuff by hand, and hand editing XML makes my brain hurt. :')\n\nHave you looked at YAML?\nYou get the benefits of XML without all the pain and suffering. It's used extensively in the ruby community for things like config files, pre-prepared database data, etc\nhere's an example\ncustomer:\n name: Orion\n age: 26\n addresses:\n - type: Work\n number: 12\n street: Bob Street\n - type: Home\n number: 15\n street: Secret Road\n\nThere appears to be a C# library here, which I haven't used personally, but yaml is pretty simple, so \"how hard can it be?\" :-)\nI'd say it's preferable to inventing your own ad-hoc format (and dealing with parser bugs)\n",
"I was looking at almost this exact problem the other day: this article on string tokenizing is exactly what you need. You'll want to define your tokens as something like:\n@\"(?<level>\\s) | \" +\n@\"(?<term>[^:\\s]) | \" +\n@\"(?<separator>:)\"\n\nThe article does a pretty good job of explaining it. From there you just start eating up tokens as you see fit.\nProtip: For an LL(1) parser (read: easy), tokens cannot share a prefix. If you have abc as a token, you cannot have ace as a token\nNote: The article's missing the | characters in its examples, just throw them in.\n",
"Using a library is almost always preferably to rolling your own. Here's a quick list of \"Oh I'll never need that/I didn't think about that\" points which will end up coming to bite you later down the line:\n\nEscaping characters. What if you want a : in the key or ] in the value?\nEscaping the escape character.\nUnicode\nMix of tabs and spaces (see the problems with Python's white space sensitive syntax)\nHandling different return character formats\nHandling syntax error reporting\n\nLike others have suggested, YAML looks like your best bet.\n",
"There is another YAML library for .NET which is under development. Right now it supports reading YAML streams and has been tested on Windows and Mono. Write support is currently being implemented.\n",
"It looks to me that you would be better off using an XML based config file as there are already .NET classes which can read and store the information for you relatively easily. Is there a reason that this is not possible?\n@Bernard: It is true that hand editing XML is tedious, but the structure that you are presenting already looks very similar to XML.\nThen yes, has a good method there. \n",
"You can also use a stack, and use a push/pop algorithm. This one matches open/closing tags.\npublic string check()\n {\n ArrayList tags = getTags();\n\n\n int stackSize = tags.Count;\n\n Stack stack = new Stack(stackSize);\n\n foreach (string tag in tags)\n {\n if (!tag.Contains('/'))\n {\n stack.push(tag);\n }\n else\n {\n if (!stack.isEmpty())\n {\n string startTag = stack.pop();\n startTag = startTag.Substring(1, startTag.Length - 1);\n string endTag = tag.Substring(2, tag.Length - 2);\n if (!startTag.Equals(endTag))\n {\n return \"Fout: geen matchende eindtag\";\n }\n }\n else\n {\n return \"Fout: geen matchende openeningstag\";\n }\n }\n }\n\n if (!stack.isEmpty())\n {\n return \"Fout: geen matchende eindtag\";\n } \n return \"Xml is valid\";\n }\n\nYou can probably adapt so you can read the contents of your file. Regular expressions are also a good idea.\n",
"@Gishu\nActually once I'd accommodated for escaped characters my regex ran slightly slower than my hand written top down recursive parser and that's without the nesting (linking sub-items to their parents) and error reporting the hand written parser had.\nThe regex was a slightly faster to write (though I do have a bit of experience with hand parsers) but that's without good error reporting. Once you add that it becomes slightly harder and longer to do.\nI also find the hand written parser easier to understand the intention of. For instance, here is the a snippet of the code:\nprivate static Node ParseNode(TextReader reader)\n{\n Node node = new Node();\n int indentation = ParseWhitespace(reader);\n Expect(reader, '[');\n node.Key = ParseTerminatedString(reader, ':');\n node.Value = ParseTerminatedString(reader, ']');\n}\n\n"
] | [
12,
4,
1,
1,
0,
0,
0
] | [
"Regardless of the persisted format, using a Regex would be the fastest way of parsing.\nIn ruby it'd probably be a few lines of code.\n\\[KEY:(.*)\\] \n\\[SUBKEY:(.*)\\]\n\nThese two would get you the Value and SubValue in the first group. Check out MSDN on how to match a regex against a string.\nThis is something everyone should have in their kitty. Pre-Regex days would seem like the Ice Age.\n"
] | [
-1
] | [
"c#",
"fileparse"
] | stackoverflow_0000013963_c#_fileparse.txt |
Q:
Debugging Web Service with SOAP Packet
I have a web service that I created in C# and a test harness that was provided by my client. Unfortunately my web service doesn't seem to be parsing the objects created by the test harness. I believe the problem lies with serializing the soap packet.
Using TCPTrace I was able to get the soap packet passed to the web service but only on a remote machine so I can't debug it there. Is there a way of calling my local webservice with the soap packet generated rather than my current test harness where I manually create objects and call the web service through a web reference?
[edit] The machine that I got the soap packet from was a VM, so I can't link it to my machine. I suppose I'm looking for a tool that I can paste the soap packet into and have it in turn call my web service.
A:
A somewhat manual process would be to use the Poster add-in for Firefox. There is also a java utility called SoapUI that has some discovery based automated templates that you can then modify and run against your service.
A:
By default, .Net will not allow you to connect a packet analyzer like TCPTrace or Fiddler (which I prefer) to localhost or 127.0.0.1 connections (for reasons that I forget now..)
Best way would be to reference your web services via a full IP address or FQDN where possible. That will allow you to trace the calls in the tool of your choice.
A:
Same as palehorse, use soapUI or directly the specific component for that feature: TCPMon.
A:
Just did this the other day with TCPTrace on the local machine. I mapped the remote host in the hosts file to 127.0.0.1. Ran the local web server on 8080, TcpTrace on 80 pointing to 127.0.0.1:8080. Probably your issue is trying to run both at port 80 which won't work.
| Debugging Web Service with SOAP Packet | I have a web service that I created in C# and a test harness that was provided by my client. Unfortunately my web service doesn't seem to be parsing the objects created by the test harness. I believe the problem lies with serializing the soap packet.
Using TCPTrace I was able to get the soap packet passed to the web service but only on a remote machine so I can't debug it there. Is there a way of calling my local webservice with the soap packet generated rather than my current test harness where I manually create objects and call the web service through a web reference?
[edit] The machine that I got the soap packet was on a vm so I can't link it to my machine. I suppose I'm looking for a tool that I can paste the soap packet into and it will in turn call my web service
| [
"A somewhat manual process would be to use the Poster add-in for Firefox. There is also a java utility called SoapUI that has some discovery based automated templates that you can then modify and run against your service.\n",
"By default, .Net will not allow you to connect a packet analyzer like TCPTrace or Fiddler (which I prefer) to localhost or 127.0.0.1 connections (for reasons that I forget now..)\nBest way would be to reference your web services via a full IP address or FQDN where possible. That will allow you to trace the calls in the tool of your choice.\n",
"Same as palehorse, use soapUI or directly the specific component for that feature: TCPMon.\n",
"Just did this the other day with TCPTrace on the local machine. I mapped the remote host in the hosts file to 127.0.0.1. Ran the local web server on 8080, TcpTrace on 80 pointing to 127.0.0.1:8080. Probably your issue is trying to run both at port 80 which won't work.\n"
] | [
2,
1,
0,
0
] | [] | [] | [
"c#",
"soap",
"tcptrace_pocketsoap",
"web_services"
] | stackoverflow_0000028654_c#_soap_tcptrace_pocketsoap_web_services.txt |
Q:
Why does SQL Server work faster when you index a table after filling it?
I have a sproc that puts 750K records into a temp table through a query as one of its first actions. If I create indexes on the temp table before filling it, the item takes about twice as long to run compared to when I index after filling the table. (The index is an integer in a single column, the table being indexed is just two columns each a single integer.)
This seems a little off to me, but then I don't have the firmest understanding of what goes on under the hood. Does anyone have an answer for this?
A:
If you create a clustered index, it affects the way the data is physically ordered on the disk. It's better to add the index after the fact and let the database engine reorder the rows when it knows how the data is distributed.
For example, let's say you needed to build a brick wall with numbered bricks so that those with the highest number are at the bottom of the wall. It would be a difficult task if you were just handed the bricks in random order, one at a time - you wouldn't know which bricks were going to turn out to be the highest numbered, and you'd have to tear the wall down and rebuild it over and over. It would be a lot easier to handle that task if you had all the bricks lined up in front of you, and could organize your work.
That's how it is for the database engine - if you let it know about the whole job, it can be much more efficient than if you just feed it a row at a time.
A:
It's because the database server has to do calculations each and every time you insert a new row. Basically, you end up reindexing the table each time. It doesn't seem like a very expensive operation, and it's not, but when you do that many of them together, you start to see the impact. That's why you usually want to index after you've populated your rows, since it will just be a one-time cost.
A:
You should NEVER EVER create an index on an empty table if you are going to massively load it right afterwards.
Indexes have to be maintained as the data on the table changes, so imagine as if for every insert on the table the index was being recalculated (which is an expensive operation).
Load the table first and create the index after finishing with the load.
That's where the performance difference is going.
A:
Think of it this way.
Given
unorderedList = {5, 1,3}
orderedList = {1,3,5}
add 2 to both lists.
unorderedList = {5, 1,3,2}
orderedList = {1,2,3,5}
What list do you think is easier to add to?
Btw ordering your input before load will give you a boost.
A:
After performing large data manipulation operations, you frequently have to update the underlying indexes. You can do that by using the UPDATE STATISTICS [table] statement.
The other option is to drop and recreate the index which, if you are doing large data insertions, will likely perform the inserts much faster. You can even incorporate that into your stored procedure.
A:
This is because if the data you insert is not in the order of the index, SQL will have to split pages to make room for additional rows to keep them together logically.
A:
This is due to the fact that when SQL Server indexes a table with data it is able to produce exact statistics of the values in the indexed column. At some point SQL Server will recalculate the statistics, but when you perform massive inserts the distribution of values may change after the statistics were last calculated.
The fact that statistics are out of date can be discovered in Query Analyzer, when you see that on a certain table scan the number of rows expected differs too much from the actual number of rows processed.
You should use UPDATE STATISTICS to recalculate distribution of values after you insert all the data. After that no performance difference should be observed.
A:
If you have an index on a table, as you add data to the table SQL Server will have to re-order the table to make room in the appropriate place for the new records. If you're adding a lot of data, it will have to reorder it over and over again. By creating an index only after the data is loaded, the re-order only needs to happen once.
Of course, if you are importing the records in index order it shouldn't matter so much.
A:
In addition to the index overhead, running each query as a transaction is a bad idea for the same reason. If you run chunks of inserts (say 100) within 1 explicit transaction, you should also see a performance increase.
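As a rough illustration of that last point, here is a sketch of chunked inserts from C# - the connection string, temp table and column names are made up, and it assumes the #TempPairs table was already created on this connection:
using System;
using System.Data.SqlClient;

static class Loader
{
    public static void BulkLoad(string connectionString, int[] keys, int[] values)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();

            const int batchSize = 100;
            for (int start = 0; start < keys.Length; start += batchSize)
            {
                // One explicit transaction per chunk instead of one per row.
                using (SqlTransaction tx = conn.BeginTransaction())
                {
                    int end = Math.Min(start + batchSize, keys.Length);
                    for (int i = start; i < end; i++)
                    {
                        using (SqlCommand cmd = new SqlCommand(
                            "INSERT INTO #TempPairs (KeyCol, ValueCol) VALUES (@k, @v)", conn, tx))
                        {
                            cmd.Parameters.AddWithValue("@k", keys[i]);
                            cmd.Parameters.AddWithValue("@v", values[i]);
                            cmd.ExecuteNonQuery();
                        }
                    }
                    tx.Commit();
                }
            }

            // Create the index only after the load has finished, per the answers above.
            using (SqlCommand index = new SqlCommand(
                "CREATE INDEX IX_TempPairs_Key ON #TempPairs (KeyCol)", conn))
            {
                index.ExecuteNonQuery();
            }
        }
    }
}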
| Why does SQL Server work faster when you index a table after filling it? | I have a sproc that puts 750K records into a temp table through a query as one of its first actions. If I create indexes on the temp table before filling it, the item takes about twice as long to run compared to when I index after filling the table. (The index is an integer in a single column, the table being indexed is just two columns each a single integer.)
This seems a little off to me, but then I don't have the firmest understanding of what goes on under the hood. Does anyone have an answer for this?
| [
"If you create a clustered index, it affects the way the data is physically ordered on the disk. It's better to add the index after the fact and let the database engine reorder the rows when it knows how the data is distributed.\nFor example, let's say you needed to build a brick wall with numbered bricks so that those with the highest number are at the bottom of the wall. It would be a difficult task if you were just handed the bricks in random order, one at a time - you wouldn't know which bricks were going to turn out to be the highest numbered, and you'd have to tear the wall down and rebuild it over and over. It would be a lot easier to handle that task if you had all the bricks lined up in front of you, and could organize your work.\nThat's how it is for the database engine - if you let it know about the whole job, it can be much more efficient than if you just feed it a row at a time.\n",
"It's because the database server has to do calculations each and every time you insert a new row. Basically, you end up reindexing the table each time. It doesn't seem like a very expensive operation, and it's not, but when you do that many of them together, you start to see the impact. That's why you usually want to index after you've populated your rows, since it will just be a one-time cost.\n",
"You should NEVER EVER create an index on an empty table if you are going to massively load it right afterwards.\nIndexes have to be maintained as the data on the table changes, so imagine as if for every insert on the table the index was being recalculated (which is an expensive operation).\nLoad the table first and create the index after finishing with the load.\nThat's were the performance difference is going.\n",
"Think of it this way.\n\nGiven\nunorderedList = {5, 1,3}\norderedList = {1,3,5}\nadd 2 to both lists.\nunorderedList = {5, 1,3,2}\norderedList = {1,2,3,5}\n\nWhat list do you think is easier to add to?\nBtw ordering your input before load will give you a boost.\n",
"After performing large data manipulation operations, you frequently have to update the underlying indexes. You can do that by using the UPDATE STATISTICS [table] statement.\nThe other option is to drop and recreate the index which, if you are doing large data insertions, will likely perform the inserts much faster. You can even incorporate that into your stored procedure.\n",
"this is because if the data you insert is not in the order of the index, SQL will have to split pages to make room for additional rows to keep them together logically\n",
"This due to the fact that when SQL Server indexes table with data it is able to produce exact statistics of values in indexed column. At some moments SQL Server will recalculate statistics, but when you perform massive inserts the distribution of values may change after the statistics was calculated last time.\nThe fact that statistics is out of date can be discovered on Query Analyzer. When you see that on a certain table scan number of rows expected differs to much from actual numbers of rows processed.\nYou should use UPDATE STATISTICS to recalculate distribution of values after you insert all the data. After that no performance difference should be observed.\n",
"If you have an index on a table, as you add data to the table SQL Server will have to re-order the table to make room in the appropriate place for the new records. If you're adding a lot of data, it will have to reorder it over and over again. By creating an index only after the data is loaded, the re-order only needs to happen once.\nOf course, if you are importing the records in index order it shouldn't matter so much.\n",
"In addition to the index overhead, running each query as a transaction is a bad idea for the same reason. If you run chunks of inserts (say 100) within 1 explicit transaction, you should also see a performance increase.\n"
] | [
42,
6,
3,
3,
2,
1,
1,
1,
1
] | [] | [] | [
"indexing",
"performance",
"sql_server"
] | stackoverflow_0000028877_indexing_performance_sql_server.txt |
Q:
Java import/export dependencies
I'm trying to find a way to list the (static) dependency requirements of a jar file, in terms of which symbols are required at run time.
I can see that the methods exported by classes can be listed using "javap", but there doesn't seem to be an opposite facility to list the 'imports'. Is it possible to do this?
This would be similar to the dumpbin utility in Windows development which can be used to list the exports and imports of a DLL.
EDIT : Thanks for the responses; I checked out all of the suggestions; accepted DependencyFinder as it most closely meets what I was looking for.
A:
You could use the Outbound dependencies feature of DependencyFinder. You can do that entirely in the GUI, or in command line exporting XML.
A:
I think you can get that information using JDepend
A:
There's a tool called JarAnalyzer that will give you the dependencies between the jars in a directory. It'll also give you a list of dependencies that don't exist in the directory.
A:
If it's a public jar (as in, not yours) then it might be in the Maven Repository.
| Java import/export dependencies | I'm trying to find a way to list the (static) dependency requirements of a jar file, in terms of which symbols are required at run time.
I can see that the methods exported by classes can be listed using "javap", but there doesn't seem to be an opposite facility to list the 'imports'. Is it possible to do this?
This would be similar to the dumpbin utility in Windows development which can be used to list the exports and imports of a DLL.
EDIT : Thanks for the responses; I checked out all of the suggestions; accepted DependencyFinder as it most closely meets what I was looking for.
| [
"You could use the Outbound dependencies feature of DependencyFinder. You can do that entirely in the GUI, or in command line exporting XML.\n",
"I think you can get that information using JDepend\n",
"There's a tool called JarAnalyzer that will give you the dependencies between the jars in a directory. It'll also give you a list of dependencies that don't exist in the directory.\n",
"If it's a public jar (as in, not yours) then it might be in the Maven Repository.\n"
] | [
3,
2,
0,
0
] | [] | [] | [
"export",
"import",
"java"
] | stackoverflow_0000028538_export_import_java.txt |
Q:
Multiple form Delphi applications and dialogs
I have a Delphi 7 application that has two views of a document (e.g. a WYSIWYG HTML edit might have a WYSIWYG view and a source view - not my real application). They can be opened in separate windows, or docked into tabs in the main window.
If I open a modal dialog from one of the separate forms, the main form is brought to the front, and is shown as the selected window in the Windows taskbar. Say the main form is the WYSIWYG view, and the source view is popped out. You go to a particular point in the source view and insert an image tag. A dialog appears to allow you to select and enter the properties you want for the image. If the WYSIWYG view and the source view overlap, the WYSIWYG view will be brought to the front and the source view is hidden. Once the dialog is dismissed, the source view comes back into sight.
I've tried setting the owner and the ParentWindow properties to the form it is related to:
dialog := TDialogForm.Create( parentForm );
dialog.ParentWindow := parentForm.Handle;
How can I fix this problem? What else should I be trying?
Given that people seem to be stumbling on my example, perhaps I can try a better example: a text editor that allows you to have more than one file open at the same time. The files you have open are either in tabs (like in the Delphi IDE) or in their own windows. Suppose the user brings up the spell check dialog or the find dialog. What happens is that if the file is being edited in its own window, that window is sent below the main form in the z-order when the modal dialog is shown; once the dialog is closed, it is returned to its original z-order.
Note: If you are using Delphi 7 and looking for a solution to this problem, see my answer lower down on the page to see what I ended up doing.
A:
I'd use this code... (Basically what Lars said)
dialog := TDialogForm.Create( parentForm );
dialog.PopupParent := parentForm;
dialog.PopupMode := pmExplicit;
dialog.ShowModal();
A:
I ultimately ended up finding the answer using Google Groups. In a nutshell, all the modal dialogs need to have the following added to them:
procedure TDialogForm.CreateParams(var Params: TCreateParams);
begin
inherited;
Params.Style := Params.Style or WS_POPUP;
Params.WndParent := (Owner as TWinControl).Handle;
end;
I'm guessing this does the equivalent of Lars' and Marius' answers in Delphi 7.
A:
Is the dialog shown using ShowModal or just Show? You should probably set the PopupMode property correct of the your dialog. pmAuto would probably your best choice. Also see if you need to set the PopupParent property.
A:
First of all, I am not completely sure I follow, you might need to provide some additional details to help us understand what is happening and what the problem is. I guess I am not sure I understand exactly what you're trying to accomplish and what the problem is.
Second, you shouldn't need to set the dialog's parent since that is essentially what is happening with the call to Create (passing the parent). The dialogs you're describing sound like they could use some "re-thinking" a bit to be honest. Is this dialog to enter the properties of the image a child of the source window, or the WYSIWYG window?
A:
I'm not sure I quite understand what you are getting at, but here's a few things I can suggest you can try...
This behaviour changes between different versions of Delphi. I'd suggest that this is due to the hoops they jumped through to support Windows Vista in Delphi 2007.
If you are using Delphi 2007, try removing the line from the project source file that sets the Application.MainFormOnTaskBar boolean variable.
With this removed, you should be able to use the various Form's BringToFront / SendToBack methods to achieve the Z-ordering that you are after.
I suspect that what you've discovered has been discussed on this link
Of course, I may have just missed your point entirely, so apologies in advance!
| Multiple form Delphi applications and dialogs | I have a Delphi 7 application that has two views of a document (e.g. a WYSIWYG HTML edit might have a WYSIWYG view and a source view - not my real application). They can be opened in separate windows, or docked into tabs in the main window.
If I open a modal dialog from one of the separate forms, the main form is brought to the front, and is shown as the selected window in the windows taskbar. Say the main form is the WYSIWYG view, and the source view is poped out. You go to a particular point in the source view and insert an image tag. A dialog appears to allow you to select and enter the properties you want for the image. If the WYSIWYG view and the source view overlap, the WYSIWYG view will be brought to the front and the source view is hidden. Once the dialog is dismissed, the source view comes back into sight.
I've tried setting the owner and the ParentWindow properties to the form it is related to:
dialog := TDialogForm.Create( parentForm );
dialog.ParentWindow := parentForm.Handle;
How can I fix this problem? What else should I be trying?
Given that people seem to be stumbling on my example, perhaps I can try with a better example: a text editor that allows you to have more than one file open at the same time. The files you have open are either in tabs (like in the Delphi IDE) or in its own window. Suppose the user brings up the spell check dialog or the find dialog. What happens, is that if the file is being editing in its own window, that window is sent to below the main form in the z-order when the modal dialog is shown; once the dialog is closed, it is returned to its original z-order.
Note: If you are using Delphi 7 and looking for a solution to this problem, see my answer lower down on the page to see what I ended up doing.
| [
"I'd use this code... (Basically what Lars said)\ndialog := TDialogForm.Create( parentForm );\ndialog.PopupParent := parentForm;\ndialog.PopupMode := pmExplicit; \ndialog.ShowModal();\n\n",
"I ultimately ended up finding the answer using Google Groups. In a nutshell, all the modal dialogs need to have the following added to them:\n\nprocedure TDialogForm.CreateParams(var Params: TCreateParams);\nbegin\n inherited;\n Params.Style := Params.Style or WS_POPUP;\n Params.WndParent := (Owner as TWinControl).Handle;\nend;\n\nI'm guessing this does the equivalent of Lars' and Marius' answers in Delphi 7.\n",
"Is the dialog shown using ShowModal or just Show? You should probably set the PopupMode property correct of the your dialog. pmAuto would probably your best choice. Also see if you need to set the PopupParent property.\n",
"First of all, I am not completely sure I follow, you might need to provide some additional details to help us understand what is happening and what the problem is. I guess I am not sure I understand exactly what you're trying to accomplish and what the problem is.\nSecond, you shouldn't need to set the dialog's parent since that is essentially what is happening with the call to Create (passing the parent). The dialogs you're describing sound like they could use some \"re-thinking\" a bit to be honest. Is this dialog to enter the properties of the image a child of the source window, or the WYSIWYG window?\n",
"I'm not sure I quite understand what you are getting at, but here's a few things I can suggest you can try...\n\nThis behaviour changes between different versions of Delphi. I'd suggest that this is due to the hoops they jumped through to support Windows Vista in Delphi 2007.\nIf you are using Delphi 2007, try removing the line from the project source file that sets the Application.MainFormOnTaskBar boolean variable.\nWith this removed, you should be able to use the various Form's BringToFront / SendToBack methods to achieve the Z-ordering that you are after.\n\nI suspect that what you've discovered has been discussed on this link\nOf course, I may have just missed your point entirely, so apologies in advance! \n"
] | [
5,
3,
1,
0,
0
] | [] | [] | [
"delphi",
"delphi_7"
] | stackoverflow_0000021749_delphi_delphi_7.txt |
Q:
How do I create an xml document in python
Here is my sample code:
from xml.dom.minidom import *
def make_xml():
doc = Document()
node = doc.createElement('foo')
node.innerText = 'bar'
doc.appendChild(node)
return doc
if __name__ == '__main__':
make_xml().writexml(sys.stdout)
when I run the above code I get this:
<?xml version="1.0" ?>
<foo/>
I would like to get:
<?xml version="1.0" ?>
<foo>bar</foo>
I just guessed that there was an innerText property, it gives no compiler error, but does not seem to work... how do I go about creating a text node?
A:
@Daniel
Thanks for the reply, I also figured out how to do it with the minidom (I'm not sure of the difference between ElementTree and minidom)
from xml.dom.minidom import *
def make_xml():
doc = Document();
node = doc.createElement('foo')
node.appendChild(doc.createTextNode('bar'))
doc.appendChild(node)
return doc
if __name__ == '__main__':
make_xml().writexml(sys.stdout)
I swear I tried this before posting my question...
A:
Setting an attribute on an object won't give a compile-time or a run-time error, it will just do nothing useful if the object doesn't access it (i.e. "node.noSuchAttr = 'bar'" would also not give an error).
Unless you need a specific feature of minidom, I would look at ElementTree:
import sys
from xml.etree.cElementTree import Element, ElementTree
def make_xml():
node = Element('foo')
node.text = 'bar'
doc = ElementTree(node)
return doc
if __name__ == '__main__':
make_xml().write(sys.stdout)
| How do I create an xml document in python | Here is my sample code:
from xml.dom.minidom import *
def make_xml():
doc = Document()
node = doc.createElement('foo')
node.innerText = 'bar'
doc.appendChild(node)
return doc
if __name__ == '__main__':
make_xml().writexml(sys.stdout)
when I run the above code I get this:
<?xml version="1.0" ?>
<foo/>
I would like to get:
<?xml version="1.0" ?>
<foo>bar</foo>
I just guessed that there was an innerText property, it gives no compiler error, but does not seem to work... how do I go about creating a text node?
| [
"@Daniel\nThanks for the reply, I also figured out how to do it with the minidom (I'm not sure of the difference between the ElementTree vs the minidom)\n\n\nfrom xml.dom.minidom import *\ndef make_xml():\n doc = Document();\n node = doc.createElement('foo')\n node.appendChild(doc.createTextNode('bar'))\n doc.appendChild(node)\n return doc\nif __name__ == '__main__':\n make_xml().writexml(sys.stdout)\n\n\nI swear I tried this before posting my question...\n",
"Setting an attribute on an object won't give a compile-time or a run-time error, it will just do nothing useful if the object doesn't access it (i.e. \"node.noSuchAttr = 'bar'\" would also not give an error).\nUnless you need a specific feature of minidom, I would look at ElementTree:\nimport sys\nfrom xml.etree.cElementTree import Element, ElementTree\n\ndef make_xml():\n node = Element('foo')\n node.text = 'bar'\n doc = ElementTree(node)\n return doc\n\nif __name__ == '__main__':\n make_xml().write(sys.stdout)\n\n"
] | [
13,
9
] | [] | [] | [
"python",
"xml"
] | stackoverflow_0000029243_python_xml.txt |
Q:
CSharpCodeProvider Compilation Performance
Is CompileAssemblyFromDom faster than CompileAssemblyFromSource?
It should be as it presumably bypasses the compiler front-end.
A:
CompileAssemblyFromDom compiles to a .cs file which is then run through the normal C# compiler.
Example:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.CSharp;
using System.CodeDom;
using System.IO;
using System.CodeDom.Compiler;
using System.Reflection;
namespace CodeDomQuestion
{
class Program
{
private static void Main(string[] args)
{
Program p = new Program();
p.dotest("C:\\fs.exe");
}
public void dotest(string outputname)
{
CSharpCodeProvider cscProvider = new CSharpCodeProvider();
CompilerParameters cp = new CompilerParameters();
cp.MainClass = null;
cp.GenerateExecutable = true;
cp.OutputAssembly = outputname;
CodeNamespace ns = new CodeNamespace("StackOverflowd");
CodeTypeDeclaration type = new CodeTypeDeclaration();
type.IsClass = true;
type.Name = "MainClass";
type.TypeAttributes = TypeAttributes.Public;
ns.Types.Add(type);
CodeMemberMethod cmm = new CodeMemberMethod();
cmm.Attributes = MemberAttributes.Static;
cmm.Name = "Main";
cmm.Statements.Add(new CodeSnippetExpression("System.Console.WriteLine('f'zxcvv)"));
type.Members.Add(cmm);
CodeCompileUnit ccu = new CodeCompileUnit();
ccu.Namespaces.Add(ns);
CompilerResults results = cscProvider.CompileAssemblyFromDom(cp, ccu);
foreach (CompilerError err in results.Errors)
Console.WriteLine(err.ErrorText + " - " + err.FileName + ":" + err.Line);
Console.WriteLine();
}
}
}
which shows errors in a (now nonexistent) temp file:
) expected - c:\Documents and Settings\jacob\Local Settings\Temp\x59n9yb-.0.cs:17
; expected - c:\Documents and Settings\jacob\Local Settings\Temp\x59n9yb-.0.cs:17
Invalid expression term ')' - c:\Documents and Settings\jacob\Local Settings\Tem p\x59n9yb-.0.cs:17
So I guess the answer is "no"
A:
I tried to find the ultimate compiler call earlier and gave up. There are too many layers of interfaces and virtual classes for my patience.
I don't think the source reader part of the compiler ends up with a DOM tree, but intuitively I would agree with you. The work necessary to transform the DOM to IL should be much less than reading C# source code.
| CSharpCodeProvider Compilation Performance | Is CompileAssemblyFromDom faster than CompileAssemblyFromSource?
It should be as it presumably bypasses the compiler front-end.
| [
"CompileAssemblyFromDom compiles to a .cs file which is then run through the normal C# compiler.\nExample:\nusing System;\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Text;\nusing Microsoft.CSharp;\nusing System.CodeDom;\nusing System.IO;\nusing System.CodeDom.Compiler;\nusing System.Reflection;\n\nnamespace CodeDomQuestion\n{\n class Program\n {\n\n private static void Main(string[] args)\n {\n Program p = new Program();\n p.dotest(\"C:\\\\fs.exe\");\n }\n\n public void dotest(string outputname)\n {\n CSharpCodeProvider cscProvider = new CSharpCodeProvider();\n CompilerParameters cp = new CompilerParameters();\n cp.MainClass = null;\n cp.GenerateExecutable = true;\n cp.OutputAssembly = outputname;\n \n CodeNamespace ns = new CodeNamespace(\"StackOverflowd\");\n\n CodeTypeDeclaration type = new CodeTypeDeclaration();\n type.IsClass = true;\n type.Name = \"MainClass\";\n type.TypeAttributes = TypeAttributes.Public;\n \n ns.Types.Add(type);\n\n CodeMemberMethod cmm = new CodeMemberMethod();\n cmm.Attributes = MemberAttributes.Static;\n cmm.Name = \"Main\";\n cmm.Statements.Add(new CodeSnippetExpression(\"System.Console.WriteLine('f'zxcvv)\"));\n type.Members.Add(cmm);\n\n CodeCompileUnit ccu = new CodeCompileUnit();\n ccu.Namespaces.Add(ns);\n\n CompilerResults results = cscProvider.CompileAssemblyFromDom(cp, ccu);\n\n foreach (CompilerError err in results.Errors)\n Console.WriteLine(err.ErrorText + \" - \" + err.FileName + \":\" + err.Line);\n\n Console.WriteLine();\n }\n }\n}\n\nwhich shows errors in a (now nonexistent) temp file:\n\n) expected - c:\\Documents and Settings\\jacob\\Local Settings\\Temp\\x59n9yb-.0.cs:17\n; expected - c:\\Documents and Settings\\jacob\\Local Settings\\Temp\\x59n9yb-.0.cs:17\nInvalid expression term ')' - c:\\Documents and Settings\\jacob\\Local Settings\\Tem p\\x59n9yb-.0.cs:17\n\nSo I guess the answer is \"no\"\n",
"I've tried finding the ultimate compiler call earlier and I gave up. There's quite a number of layers of interfaces and virtual classes for my patience.\nI don't think the source reader part of the compiler ends up with a DOM tree, but intuitively I would agree with you. The work necessary to transform the DOM to IL should be much less than reading C# source code.\n"
] | [
9,
0
] | [] | [] | [
"c#",
"compiler_construction",
"performance"
] | stackoverflow_0000004612_c#_compiler_construction_performance.txt |
Q:
Attaching entities to data contexts
In LINQ to SQL, is it possible to check to see if an entity is already part of the data context before trying to attach it?
A little context if it helps...
I have this code in my global.asax as a helper method. Normally, between requests, this isn't a problem. But right after signing in, this is getting called more than once, and the second time I end up trying to attach the Member object in the same unit of work where it was created.
private void CheckCurrentUser()
{
if (!HttpContext.Current.User.Identity.IsAuthenticated)
{
AppHelper.CurrentMember = null;
return;
}
IUserService userService = new UserService();
if (AppHelper.CurrentMember != null)
userService.AttachExisting(AppHelper.CurrentMember);
else
AppHelper.CurrentMember = userService.GetMember(
HttpContext.Current.User.Identity.Name,
AppHelper.CurrentLocation);
}
A:
I believe there are two methods to do this.
DataContext.TableName.Contains(Item)
or we use the id field. If the item is inserted in the Database, then it will be assigned a row.
if(Item.id == 0)
DataContext.Insert(Item)
else
DataContext.Update(Item)
A:
Rather than attaching to a new data context, why not just requery the object in the new DataContext? I believe it is a more reliable and stateless strategy.
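For the literal "is it already attached?" check, here is one sketch - not tested against every tracking scenario, and it only covers entities the context has queried or attached (a pending insert in the same context won't show up this way):
using System.Data.Linq;

public static class DataContextExtensions
{
    // Attach the entity only if this DataContext is not already tracking it.
    public static void AttachIfNeeded<T>(this DataContext context, T entity) where T : class
    {
        Table<T> table = context.GetTable<T>();

        // GetOriginalEntityState returns null when the entity is not being
        // tracked by this context, so it doubles as an "is attached?" test.
        if (table.GetOriginalEntityState(entity) == null)
            table.Attach(entity);
    }
}
Your AttachExisting implementation could then call AttachIfNeeded instead of attaching unconditionally.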
| Attaching entities to data contexts | In LINQ to SQL, is it possible to check to see if an entity is already part of the data context before trying to attach it?
A little context if it helps...
I have this code in my global.asax as a helper method. Normally, between requests, this isn't a problem. But right after signing in, this is getting called more than once, and the second time I end up trying to attach the Member object in the same unit of work where it was created.
private void CheckCurrentUser()
{
if (!HttpContext.Current.User.Identity.IsAuthenticated)
{
AppHelper.CurrentMember = null;
return;
}
IUserService userService = new UserService();
if (AppHelper.CurrentMember != null)
userService.AttachExisting(AppHelper.CurrentMember);
else
AppHelper.CurrentMember = userService.GetMember(
HttpContext.Current.User.Identity.Name,
AppHelper.CurrentLocation);
}
| [
"I believe there are two methods to do this.\nDataContext.TableName.Contains(Item)\n\nor we use the id field. If the item is inserted in the Database, then it will be assigned a row.\nif(Item.id == 0)\n DataContext.Insert(Item)\nelse\n DataContext.Update(Item)\n\n",
"Rather than attaching to a new data context why not just requery the object in the new datacontext? It believe it is a more reliable and stateless strategy.\n"
] | [
0,
0
] | [] | [] | [
"c#",
"linq_to_sql"
] | stackoverflow_0000024556_c#_linq_to_sql.txt |
Q:
interfacing with stdbool.h C++
In a project I am interfacing between C++ and a C library that uses stdbool.h defined as such.
#ifndef _STDBOOL_H
#define _STDBOOL_H
/* C99 Boolean types for compilers without C99 support */
/* http://www.opengroup.org/onlinepubs/009695399/basedefs/stdbool.h.html */
#if !defined(__cplusplus)
#if !defined(__GNUC__)
/* _Bool builtin type is included in GCC */
typedef enum { _Bool_must_promote_to_int = -1, false = 0, true = 1 } _Bool;
#endif
#define bool _Bool
#define true 1
#define false 0
#define __bool_true_false_are_defined 1
#endif
#endif
Some structures have bool members. So if I have one of these structures defined as a local variable within a C++ function and pass it to a C function, the sizes are inconsistent between C++ and C, as bool is one byte in C++ and 4 in C.
Does anyone have any advice on how to overcome this without resorting to my current solution, which is
//#define bool _Bool
#define bool unsigned char
Which is against the C99 standard for stdbool.h
A:
I found the answer to my own question by finding a more compatible implementation of stdbool.h that is compliant with the C99 standard.
#ifndef _STDBOOL_H
#define _STDBOOL_H
#include <stdint.h>
/* C99 Boolean types for compilers without C99 support */
/* http://www.opengroup.org/onlinepubs/009695399/basedefs/stdbool.h.html */
#if !defined(__cplusplus)
#if !defined(__GNUC__)
/* _Bool builtin type is included in GCC */
/* ISO C Standard: 5.2.5 An object declared as
type _Bool is large enough to store
the values 0 and 1. */
/* We choose 8 bit to match C++ */
/* It must also promote to integer */
typedef int8_t _Bool;
#endif
/* ISO C Standard: 7.16 Boolean type */
#define bool _Bool
#define true 1
#define false 0
#define __bool_true_false_are_defined 1
#endif
#endif
This is taken from the Ada Class Library project.
A:
Size is not the only thing that will be inconsistent here. In C++ bool is a keyword, and C++ guarantees that a bool can hold a value of either 1 or 0 and nothing else. C doesn't give you this guarantee.
That said, if interoperability between C and C++ is important, you can emulate C's custom-made boolean by defining an identical one for C++ and using that instead of the built-in bool. That will be a tradeoff between a buggy boolean and identical behaviour between the C boolean and the C++ boolean.
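For illustration, one possible shape for such a shared boolean (the names are invented, and int32_t is chosen only to match the 4-byte enum in the header above):
/* Hypothetical shared header: give both languages a boolean of one known
   width so struct layouts agree, and convert at the API boundary. */
#include <stdint.h>

typedef int32_t shared_bool;   /* same size whether compiled as C or C++ */
#define SHARED_TRUE  1
#define SHARED_FALSE 0

struct options {
    shared_bool verbose;       /* layout now matches on both sides */
};
Code on each side converts to and from its native bool whenever it reads or writes the member.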
A:
Logically, you are not able to share source code between C and C++ with conflicting declarations for bool and have them link to each other.
The only way you can share code and link is via an intermediary datastructure. Unfortunately, from what I understand, you can't modify the code that defines the interface between your C++ program and C library. If you could, I'd suggest using something like:
union boolean {
bool value_cpp;
int value_c;
};
// padding may be necessary depending on endianness
The effect of which will be to make the datatype the same width in both languages; conversion to the native data type will need to be performed at both ends. Swap the use of bool for boolean in the library function definition, fiddle code in the library to convert, and you're done.
So, what you're going to have to do instead is create a shim between the C++ program and the C library.
You have:
extern "C" bool library_func_1(int i, char c, bool b);
And you need to create:
bool library_func_1_cpp(int i, char c, bool b)
{
int result = library_func_1(i, c, static_cast<int>(b));
return (result==true);
}
And now call library_func_1_cpp instead.
| interfacing with stdbool.h C++ | In a project I am interfacing between C++ and a C library that uses stdbool.h defined as such.
#ifndef _STDBOOL_H
#define _STDBOOL_H
/* C99 Boolean types for compilers without C99 support */
/* http://www.opengroup.org/onlinepubs/009695399/basedefs/stdbool.h.html */
#if !defined(__cplusplus)
#if !defined(__GNUC__)
/* _Bool builtin type is included in GCC */
typedef enum { _Bool_must_promote_to_int = -1, false = 0, true = 1 } _Bool;
#endif
#define bool _Bool
#define true 1
#define false 0
#define __bool_true_false_are_defined 1
#endif
#endif
Some structures have bool members. So if I have one of these structures defined as a local variable within a C++ function and pass it to a C function, the sizes are inconsistent between C++ and C, as bool is one byte in C++ and 4 bytes in C.
Does anyone have any advice on how to overcome this without resorting to my current solution, which is
//#define bool _Bool
#define bool unsigned char
Which is against the C99 standard for stdbool.h
| [
"I found the answer to my own question by finding a more compatible implementation of stdbool.h that is compliant with the C99 standard.\n#ifndef _STDBOOL_H\n#define _STDBOOL_H\n\n#include <stdint.h>\n\n/* C99 Boolean types for compilers without C99 support */\n/* http://www.opengroup.org/onlinepubs/009695399/basedefs/stdbool.h.html */\n#if !defined(__cplusplus)\n\n#if !defined(__GNUC__)\n/* _Bool builtin type is included in GCC */\n/* ISO C Standard: 5.2.5 An object declared as \ntype _Bool is large enough to store \nthe values 0 and 1. */\n/* We choose 8 bit to match C++ */\n/* It must also promote to integer */\ntypedef int8_t _Bool;\n#endif\n\n/* ISO C Standard: 7.16 Boolean type */\n#define bool _Bool\n#define true 1\n#define false 0\n#define __bool_true_false_are_defined 1\n\n#endif\n\n#endif\n\nThis is taken from the Ada Class Library project.\n",
"Size is not the only thing that will be inconsistent here. In C++ bool is a keyword, and C++ guarantees that a bool can hold a value of either 1 or 0 and nothing else. C doesn't give you this guarantee.\nThat said, if interoperability between C and C++ is important you can emulate C's custom-made boolean by defining an identical one for C++ and using that instead of the builtin bool. That will be a tradeoff between a buggy boolean and identical behaviour between the C boolean and the C++ boolean.\n",
"Logically, you are not able to share source code between C and C++ with conflicting declarations for bool and have them link to each other.\nThe only way you can share code and link is via an intermediary datastructure. Unfortunately, from what I understand, you can't modify the code that defines the interface between your C++ program and C library. If you could, I'd suggest using something like:\nunion boolean {\n bool value_cpp;\n int value_c;\n}; \n\n// padding may be necessary depending on endianness \nThe effect of which will be to make the datatype the same width in both languages; conversion to the native data type will need to be performed at both ends. Swap the use of bool for boolean in the library function definition, fiddle code in the library to convert, and you're done.\nSo, what you're going to have to do instead is create a shim between the C++ program and the C library.\nYou have:\nextern \"C\" bool library_func_1(int i, char c, bool b);\n\nAnd you need to create:\nbool library_func_1_cpp(int i, char c, bool b)\n{\n int result = library_func_1(i, c, static_cast<int>(b));\n return (result==true);\n}\n\nAnd now call library_func_1_cpp instead.\n"
] | [
11,
2,
0
] | [] | [] | [
"boolean",
"c",
"c++",
"standards"
] | stackoverflow_0000025461_boolean_c_c++_standards.txt |
Q:
Why doesn't **sort** sort the same on every machine?
Using the same sort command with the same input produces different results on different machines. How do I fix that?
A:
The man-page on OS X says:
******* WARNING ******* The locale specified by the environment affects sort order. Set LC_ALL=C to get
the traditional sort order that uses native byte values.
which might explain things.
If some of your systems have no locale support, they will default to that locale (C), so you won't have to set it on those. If you have some that support locales and you want the same behavior, set LC_ALL=C on those systems. That is the way to get as many systems as possible sorting the same way.
If you don't have any locale-less systems, just making sure they share locale would probably be enough.
For more canonical information, see The Single UNIX ® Specification, Version 2 description of locale, environment variables, setlocale() and the description of the sort(1) utility.
A:
This can be the result of locale differences:
$ echo 'CO2_
CO_' | env LC_ALL=C sort
CO2_
CO_
$ echo 'CO2_
CO_' | env LC_ALL=en_US sort
CO_
CO2_
Setting the LC_ALL environment variable to the same value should correct the problem.
A:
This is probably due to different settings of the locale environment variables. sort will use these settings to determine how to compare strings. By setting these environment variables the way you want before calling sort, you should be able to force it to behave in one specific way.
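For example, pinning the collation for a single command in a script (illustrative):
$ LC_ALL=C sort names.txt > names.sorted
Every machine running that line will then use plain byte-value ordering, regardless of its default locale.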
A:
For more than you ever wanted to know about sort, read the specification of sort in the Single Unix Specification v3. It states
Comparisons [...] shall be performed using the collating sequence of the current locale.
IOW, how sort sorts is dependent on the locale (language) settings of the environment that the script is running under.
| Why doesn't **sort** sort the same on every machine? | Using the same sort command with the same input produces different results on different machines. How do I fix that?
| [
"The man-page on OS X says:\n\n******* WARNING ******* The locale specified by the environment affects sort order. Set LC_ALL=C to get\nthe traditional sort order that uses native byte values.\n\nwhich might explain things.\nIf some of your systems have no locale support, they would default to that locale (C), so you wouldn't have to set it on those. If you have some that supports locales and want the same behavior, set LC_ALL=C on those systems. That would be the way to have as many systems as I know do it the same way.\nIf you don't have any locale-less systems, just making sure they share locale would probably be enough.\nFor more canonical information, see The Single UNIX ® Specification, Version 2 description of locale, environment variables, setlocale() and the description of the sort(1) utility.\n",
"This can be the result of locale differences:\n$ echo 'CO2_\nCO_' | env LC_ALL=C sort\nCO2_\nCO_\n\n\n$ echo 'CO2_\nCO_' | env LC_ALL=en_US sort\nCO_\nCO2_\n\nSetting the LC_ALL environment variable to the same value should correct the problem.\n",
"This is probably due to different settings of the locale environment variables. sort will use these settings to determine how to compare strings. By setting these environment variables the way you want before calling sort, you should be able to force it to behave in one specific way.\n",
"For more than you ever wanted to know about sort, read the specification of sort in the Single Unix Specification v3. It states\n\nComparisons [...] shall be performed using the collating sequence of the current locale.\n\nIOW, how sort sorts is dependent on the locale (language) settings of the environment that the script is running under.\n"
] | [
24,
5,
3,
3
] | [] | [] | [
"bash",
"ksh",
"sorting",
"unix"
] | stackoverflow_0000028881_bash_ksh_sorting_unix.txt |
Q:
Rails requires RubyGems >= 0.9.4. Please install RubyGems
I'm deploying to Ubuntu slice on slicehost, using Rails 2.1.0 (from gem)
If I try mongrel_rails start or script/server I get this error:
Rails requires RubyGems >= 0.9.4. Please install RubyGems
When I type gem -v I have version 1.2.0 installed. Any quick tips on what to look at to fix?
A:
Have you tried reinstalling RubyGems? I had a pretty similar error message until I reinstalled; for some reason it installed into a different directory, and then the problem went away.
A:
Just finally found this answer... I was missing a gem, and was thrown off by a bad error message from Rails...
| Rails requires RubyGems >= 0.9.4. Please install RubyGems | I'm deploying to Ubuntu slice on slicehost, using Rails 2.1.0 (from gem)
If I try mongrel_rails start or script/server I get this error:
Rails requires RubyGems >= 0.9.4. Please install RubyGems
When I type gem -v I have version 1.2.0 installed. Any quick tips on what to look at to fix?
| [
"Have you tried reinstalling RubyGems? I had a pretty similar error message until I reuninstalled and for some reason, it installed into a different directory and then the problem went away.\n",
"Just finally found this answer... I was missing a gem, and thrown off by bad error message from Rails...\n"
] | [
1,
0
] | [] | [] | [
"deployment",
"ruby_on_rails",
"rubygems"
] | stackoverflow_0000029382_deployment_ruby_on_rails_rubygems.txt |
Q:
Browser-based game - Which framework to choose?
I'm starting to develop a browser-based game (and by this I mean text-based, no Flash or similar stuff on it) and I'm struggling to decide on which development framework to use.
As far as requirements are concerned, the most important thing that I can think of right now is the ability to translate it to several languages. A good object-relational mapping and a way to generate forms from logical objects would also be very good, as I've noticed that I always spend lots of time solving the problems that come up when I change any of those things.
The programming language is kind of unimportant. I have some experience in PHP and C#, but I don't mind, and I would even like to use this as an excuse, learning some new thing like Python or Ruby. What I do want is something with a good and thriving community and lots of samples and tutorials online to help me.
A:
I would recommend sticking to what you know - PHP is more than capable.
That's true of course, but:
I don't mind, and I would even like to use this as an excuse, learning some new thing like Python or Ruby.
Then writing a browser game is an excellent opportunity to do this. Learning something new is never wrong, and learning an alternative to PHP can never hurt (eh, Jeff?). While neither Ruby on Rails nor Django is especially useful for writing games, they're still great. We had to write a small browser game in a matter of weeks for a project once, and Rails worked like a charm. On the other hand, all successful browser games have enormous workloads, and if you want to scale well you either have to get good hardware and load balancing or you need a non-interpreted framework (sorry, guys!).
A:
I'd definitely suggest PHP. I've developed browser based games (pbbgs) for about 10 years now. I've tried .Net, Perl and Java.
All of them worked, but by far PHP was the best because:
Speed with which you can develop (that might be due to experience)
Ease/Cost of finding a host for a game site
Flexibility to change/revamp on the fly (game programming seems to always have a different development cycle than normal projects)
Ruby is not too bad, but the last time I tried it I rapidly ran into scaling/performance issues. I have not tried Python yet...maybe it's time to give it a shot.
Just my two cents, but over the years PHP has saved me a ton of time.
A:
I would recommend sticking to what you know - PHP is more than capable.
I used to play a game called Hyperiums - a text-based browser game like yours - which is created using Java (its web-based equivalent is JSP?) and servlets. It works fairly well (it has had downtime issues, but those were more related to its running on a pretty crap server).
As for which framework to use - why not create your own? Spend a good amount of time pre-coding deciding how you're going to handle various things - such as language support: you could use a phrase system or separate language-specific templates. Third-party frameworks are probably better tested than one you make, but they're not created for a specific purpose, they're created for a wide range of purposes.
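To illustrate the phrase-system idea, it can be as small as one keyed array per language plus a lookup helper (a rough sketch; the file layout and names are invented):
<?php
// lang/en.php -- one file per supported language, keyed by phrase id
$phrases = array(
    'attack' => 'Attack',
    'defend' => 'Defend',
);

// included once per request, after loading the player's language file
function t($key) {
    global $phrases;
    return isset($phrases[$key]) ? $phrases[$key] : $key;
}

echo t('attack');   // prints "Attack", or the key itself if untranslated
?>
Templates then call t('...') everywhere instead of hard-coding strings, so adding a language is just adding a file.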
A:
Check out django-mmo!
| Browser-based game - Which framework to choose? | I'm starting to develop a browser-based game (and by this I mean text-based, no Flash or similar stuff on it) and I'm struggling to decide on which development framework to use.
As far as requirements are concerned, the most important thing that I can think of right now is the ability to translate it to several languages. A good object-relational mapping and a way to generate forms from logical objects would also be very good, as I've noticed that I always spend lots of time solving the problems that come up when I change any of those things.
The programming language is kind of unimportant. I have some experience in PHP and C#, but I don't mind, and I would even like to use this as an excuse, learning some new thing like Python or Ruby. What I do want is something with a good and thriving community and lots of samples and tutorials online to help me.
| [
"\nI would reccomend sticking to what you know - PHP is more than capable.\n\nThat's true of course, but:\n\nI don't mind, and I would even like to use this as an excuse, learning some new thing like Python or Ruby.\n\nThen writing a browser game is an excellent opportunity to do this. Learning something new is never wrong and learning an alternative to PHP can never hurt (eh, Jeff?). While neither Ruby on Rails nor Django are especially useful for writing games, they're still great. We had to write a small browser game in a matter of weeks for a project once and Rails worked charms. On the other hand, all successful browser games have enormous work loads and if you want to scale well you either have to get good hardware and load balancing or you need a non-interpreted framework (sorry, guys!).\n",
"I'd definitely suggest PHP. I've developed browser based games (pbbgs) for about 10 years now. I've tried .Net, Perl and Java.\nAll of them worked, but by far PHP was the best because:\n\nSpeed with which you can develop (that might be due to experience)\nEase/Cost of finding a host for a game site \nFlexibility to change/revamp on the fly (game programming seems to always have a different development cycle then normal projects)\n\nRuby is not to bad, but the last time I tried it I rapidly ran into scaling/performance issues. I have not tried Python yet...maybe it's time to give it a shot.\nJust my two cents, but over the years PHP has saved me a ton of time.\n",
"I would reccomend sticking to what you know - PHP is more than capable.\nI used to play a game called Hyperiums - a text based browser game like yours - which is created using Java (it's web-based quivalent is JSP?) and servlets. It works fairly well (it has had downtime issues but those were more related to it's running on a pretty crap server).\nAs for which framework to use - why not create your own? Spend a good amount of time pre-coding deciding how you're going to handle various things - such as langauge support: you could use a phrase system or seperate langauge-specific templates. Third party frameworks are probably better tested than one you make but they're not created for a specific purpose, they're created for a wide range of purposes.\n",
"Check out django-mmo!\n"
] | [
8,
3,
2,
2
] | [] | [] | [
"frameworks",
"language_agnostic"
] | stackoverflow_0000026041_frameworks_language_agnostic.txt |
Q:
Compact Framework - how do I dynamically create type with no default constructor?
I'm using the .NET CF 3.5. The type I want to create does not have a default constructor so I want to pass a string to an overloaded constructor. How do I do this?
Code:
Assembly a = Assembly.LoadFrom("my.dll");
Type t = a.GetType("type info here");
// All ok so far, assembly loads and I can get my type
string s = "Pass me to the constructor of Type t";
MyObj o = Activator.CreateInstance(t); // throws MissingMethodException
A:
MyObj o = null;
Assembly a = Assembly.LoadFrom("my.dll");
Type t = a.GetType("type info here");
ConstructorInfo ctor = t.GetConstructor(new Type[] { typeof(string) });
if(ctor != null)
o = ctor.Invoke(new object[] { s });
A:
Ok, here's a funky helper method to give you a flexible way to activate a type given an array of parameters:
static object GetInstanceFromParameters(Assembly a, string typeName, params object[] pars)
{
var t = a.GetType(typeName);
var c = t.GetConstructor(pars.Select(p => p.GetType()).ToArray());
if (c == null) return null;
return c.Invoke(pars);
}
And you call it like this:
Foo f = GetInstanceFromParameters(a, "SmartDeviceProject1.Foo", "hello", 17) as Foo;
So you pass the assembly and the name of the type as the first two parameters, and then all the constructor's parameters in order.
A:
See if this works for you (untested):
Type t = a.GetType("type info here");
var ctors = t.GetConstructors();
string s = "Pass me to the ctor of t";
MyObj o = ctors[0].Invoke(new[] { s }) as MyObj;
If the type has more than one constructor then you may have to do some fancy footwork to find the one that accepts your string parameter.
Edit: Just tested the code, and it works.
Edit2: Chris' answer shows the fancy footwork I was talking about! ;-)
| Compact Framework - how do I dynamically create type with no default constructor? | I'm using the .NET CF 3.5. The type I want to create does not have a default constructor so I want to pass a string to an overloaded constructor. How do I do this?
Code:
Assembly a = Assembly.LoadFrom("my.dll");
Type t = a.GetType("type info here");
// All ok so far, assembly loads and I can get my type
string s = "Pass me to the constructor of Type t";
MyObj o = Activator.CreateInstance(t); // throws MissingMethodException
| [
"MyObj o = null;\nAssembly a = Assembly.LoadFrom(\"my.dll\");\nType t = a.GetType(\"type info here\");\n\nConstructorInfo ctor = t.GetConstructor(new Type[] { typeof(string) });\nif(ctor != null)\n o = ctor.Invoke(new object[] { s });\n\n",
"Ok, here's a funky helper method to give you a flexible way to activate a type given an array of parameters:\nstatic object GetInstanceFromParameters(Assembly a, string typeName, params object[] pars) \n{\n var t = a.GetType(typeName);\n\n var c = t.GetConstructor(pars.Select(p => p.GetType()).ToArray());\n if (c == null) return null;\n\n return c.Invoke(pars);\n}\n\nAnd you call it like this:\nFoo f = GetInstanceFromParameters(a, \"SmartDeviceProject1.Foo\", \"hello\", 17) as Foo;\n\nSo you pass the assembly and the name of the type as the first two parameters, and then all the constructor's parameters in order.\n",
"See if this works for you (untested):\nType t = a.GetType(\"type info here\");\nvar ctors = t.GetConstructors();\nstring s = \"Pass me to the ctor of t\";\nMyObj o = ctors[0].Invoke(new[] { s }) as MyObj;\n\nIf the type has more than one constructor then you may have to do some fancy footwork to find the one that accepts your string parameter.\nEdit: Just tested the code, and it works.\nEdit2: Chris' answer shows the fancy footwork I was talking about! ;-)\n"
] | [
9,
1,
0
] | [] | [] | [
"c#",
"compact_framework",
"reflection"
] | stackoverflow_0000029436_c#_compact_framework_reflection.txt |
Q:
XML => HTML with Hpricot and Rails
I have never worked with web services and rails, and obviously this is something I need to learn.
I have chosen to use hpricot because it looks great.
Anyway, _why's been nice enough to provide the following example on the hpricot website:
#!ruby
require 'hpricot'
require 'open-uri'
# load the RedHanded home page
doc = Hpricot(open("http://redhanded.hobix.com/index.html"))
# change the CSS class on links
(doc/"span.entryPermalink").set("class", "newLinks")
# remove the sidebar
(doc/"#sidebar").remove
# print the altered HTML
puts doc
Which looks simple, elegant, and easy peasey.
Works great in Ruby, but my question is: How do I break this up in rails?
I experimented with adding this all to a single controller, but couldn't think of the best way to call it in a view.
So if you were parsing an XML file from a web API and printing it in nice clean HTML with Hpricot, how would you break up the activity over the models, views, and controllers, and what would you put where?
A:
Model, model, model, model, model. Skinny controllers, simple views.
The RedHandedHomePage model does the parsing on initialization; then, in the controller, define an action that sets the output to an instance variable, and print that in a view.
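A rough sketch of that split (the class, controller and action names are invented; it keeps the requires from the question):
# app/models/red_handed_home_page.rb -- a plain Ruby class, not ActiveRecord
require 'hpricot'
require 'open-uri'

class RedHandedHomePage
  attr_reader :doc

  def initialize(url)
    @doc = Hpricot(open(url))                  # parse on initialization
    (@doc/"span.entryPermalink").set("class", "newLinks")
    (@doc/"#sidebar").remove
  end
end

# app/controllers/pages_controller.rb
class PagesController < ApplicationController
  def show
    @output = RedHandedHomePage.new("http://redhanded.hobix.com/index.html").doc.to_s
  end
end

# app/views/pages/show.html.erb then simply prints <%= @output %>
The controller stays a couple of lines long, and all the Hpricot work lives in the model.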
A:
I'd probably go for a REST approach and have resources that represent the different entities within the XML file being consumed. Do you have a specific example of the XML that you can give?
| XML => HTML with Hpricot and Rails | I have never worked with web services and rails, and obviously this is something I need to learn.
I have chosen to use hpricot because it looks great.
Anyway, _why's been nice enough to provide the following example on the hpricot website:
#!ruby
require 'hpricot'
require 'open-uri'
# load the RedHanded home page
doc = Hpricot(open("http://redhanded.hobix.com/index.html"))
# change the CSS class on links
(doc/"span.entryPermalink").set("class", "newLinks")
# remove the sidebar
(doc/"#sidebar").remove
# print the altered HTML
puts doc
Which looks simple, elegant, and easy peasey.
Works great in Ruby, but my question is: How do I break this up in rails?
I experimented with adding this all to a single controller, but couldn't think of the best way to call it in a view.
So if you were parsing an XML file from a web API and printing it in nice clean HTML with Hpricot, how would you break up the activity over the models, views, and controllers, and what would you put where?
| [
"Model, model, model, model, model. Skinny controllers, simple views.\nThe RedHandedHomePage model does the parsing on initialization, then call 'def render' in the controller, set output to an instance variable, and print that in a view.\n",
"I'd probably go for a REST approach and have resources that represent the different entities within the XML file being consumed. Do you have a specific example of the XML that you can give?\n"
] | [
2,
0
] | [] | [] | [
"hpricot",
"open_uri",
"ruby",
"ruby_on_rails",
"xml"
] | stackoverflow_0000028823_hpricot_open_uri_ruby_ruby_on_rails_xml.txt |
Q:
How to shift an array of bytes by 12-bits
I want to shift the contents of an array of bytes by 12-bit to the left.
For example, starting with this array of type uint8_t shift[10]:
{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0A, 0xBC}
I'd like to shift it to the left by 12-bits resulting in:
{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xAB, 0xC0, 0x00}
A:
Hurray for pointers!
This code works by looking ahead 12 bits for each byte and copying the proper bits forward. 12 bits is the bottom half (nybble) of the next byte and the top half of 2 bytes away.
unsigned char length = 10;
unsigned char data[10] = {0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0A,0xBC};
unsigned char *shift = data;
while (shift < data+(length-2)) {
*shift = (*(shift+1)&0x0F)<<4 | (*(shift+2)&0xF0)>>4;
shift++;
}
*(data+length-2) = (*(data+length-1)&0x0F)<<4;
*(data+length-1) = 0x00;
Justin wrote:
@Mike, your solution works, but does not carry.
Well, I'd say a normal shift operation does just that (called overflow), and just lets the extra bits fall off the right or left. It's simple enough to carry if you wanted to - just save the 12 bits before you start to shift. Maybe you want a circular shift, to put the overflowed bits back at the bottom? Maybe you want to realloc the array and make it larger? Return the overflow to the caller? Return a boolean if non-zero data was overflowed? You'd have to define what carry means to you.
unsigned char overflow[2];
*overflow = (*data&0xF0)>>4;
*(overflow+1) = (*data&0x0F)<<4 | (*(data+1)&0xF0)>>4;
while (shift < data+(length-2)) {
/* normal shifting */
}
/* now would be the time to copy it back if you want to carry it somewhere */
*(data+length-2) = (*(data+length-1)&0x0F)<<4 | (*(overflow)&0x0F);
*(data+length-1) = *(overflow+1);
/* You could return a 16-bit carry int,
* but endian-ness makes that look weird
* if you care about the physical layout */
unsigned short carry = *(overflow+1)<<8 | *overflow;
A:
Here's my solution, but even more importantly my approach to solving the problem.
I approached the problem by
drawing the memory cells and drawing arrows from the destination to the source.
made a table showing the above drawing.
labeling each row in the table with the relative byte address.
This showed me the pattern:
let iL be the low nybble (half byte) of a[i]
let iH be the high nybble of a[i]
iH = (i+1)L
iL = (i+2)H
This pattern holds for all bytes.
Translating into C, this means:
a[i] = (iH << 4) OR iL
a[i] = ((a[i+1] & 0x0f) << 4) | ((a[i+2] & 0xf0) >> 4)
We now make three more observations:
since we carry out the assignments left to right, we don't need to store any values in temporary variables.
we will have a special case for the tail: all 12 bits at the end will be zero.
we must avoid reading undefined memory past the array. since we never read more than a[i+2], this only affects the last two bytes
So, we
handle the general case by looping for N-2 bytes and performing the general calculation above
handle the next to last byte by it by setting iH = (i+1)L
handle the last byte by setting it to 0
given a with length N, we get:
for (i = 0; i < N - 2; ++i) {
a[i] = ((a[i+1] & 0x0f) << 4) | ((a[i+2] & 0xf0) >> 4);
}
a[N-2] = (a[N-1] & 0x0f) << 4;
a[N-1] = 0;
And there you have it... the array is shifted left by 12 bits. It could easily be generalized to shifting N bits, noting that there will be M assignment statements where M = number of bits modulo 8, I believe.
The loop could be made more efficient on some machines by translating to pointers
for (p = a, p2=a+N-2; p != p2; ++p) {
    *p = ((*(p+1) & 0x0f) << 4) | ((*(p+2) & 0xf0) >> 4);
}
and by using the largest integer data type supported by the CPU.
(I've just typed this in, so now would be a good time for somebody to review the code, especially since bit twiddling is notoriously easy to get wrong.)
A:
Lets make it the best way to shift N bits in the array of 8 bit integers.
N - Total number of bits to shift
F = (N / 8) - Full 8 bit integers shifted
R = (N % 8) - Remaining bits that need to be shifted
I guess from here you would have to find the most optimal way to make use of this data to move around ints in an array. A generic algorithm would be to apply the full-integer shifts by starting from the right of the array and moving each integer F indexes. Zero-fill the newly empty spaces. Then finally perform an R-bit shift on all of the indexes, again starting from the right.
In the case of shifting a byte such as 0xAB left by R bits, you can calculate the overflow with a bitwise AND, and the shift with the bitshift operator:
// 0xAB shifted left 4 bits:
(0xAB & 0xF0) >> 4 // is the overflow (0x0A)
0xAB << 4 // is the shifted value (0xB0)
Keep in mind that the 4 bits is just a simple mask: 0xF0, or just 0b11110000. This is easy to calculate, dynamically build, or you can even use a simple static lookup table.
I hope that is generic enough. I'm not good with C/C++ at all so maybe someone can clean up my syntax or be more specific.
Bonus: If you're crafty with your C you might be able to fudge multiple array indexes into a single 16-, 32-, or even 64-bit integer and perform the shifts. But that is probably not very portable and I would recommend against this. Just a possible optimization.
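One way to code the F/R split described above (a sketch; it assumes index 0 is the most significant byte, as in the question, and simply discards the overflowed bits):
#include <stdint.h>
#include <string.h>

/* Shift an array of bytes left by n bits; bits falling off the front are lost. */
void shift_left(uint8_t *a, size_t len, unsigned n)
{
    size_t   full = n / 8;    /* F: whole-byte moves     */
    unsigned rem  = n % 8;    /* R: remaining bit shift  */

    if (full >= len) { memset(a, 0, len); return; }

    memmove(a, a + full, len - full);   /* move whole bytes toward index 0 */
    memset(a + len - full, 0, full);    /* zero-fill the vacated tail      */

    if (rem) {
        for (size_t i = 0; i + 1 < len; i++)
            a[i] = (uint8_t)((a[i] << rem) | (a[i + 1] >> (8 - rem)));
        a[len - 1] = (uint8_t)(a[len - 1] << rem);
    }
}
Calling shift_left(data, 10, 12) on the question's array gives {0x00, ..., 0xAB, 0xC0, 0x00}.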
A:
Here is a working solution, using temporary variables:
void shift_4bits_left(uint8_t* array, uint16_t size)
{
int i;
uint8_t shifted = 0x00;
    uint8_t overflow = 0x00;   /* zeros are shifted in from the right */
for (i = (size - 1); i >= 0; i--)
{
shifted = (array[i] << 4) | overflow;
overflow = (0xF0 & array[i]) >> 4;
array[i] = shifted;
}
}
Call this function 3 times for a 12-bit shift.
Mike's solution may be faster, due to the use of temporary variables.
A:
The 32 bit version... :-) Handles 1 <= count <= num_words
#include <stdio.h>
unsigned int array[] = {0x12345678,0x9abcdef0,0x12345678,0x9abcdef0,0x66666666};
int main(void) {
int count;
unsigned int *from, *to;
from = &array[0];
to = &array[0];
count = 5;
    while (count-- > 1) {
        *to++ = (*from << 12) | ((*(from + 1) >> 20) & 0xfff);
        from++;
    }
*to = (*from<<12);
printf("%x\n", array[0]);
printf("%x\n", array[1]);
printf("%x\n", array[2]);
printf("%x\n", array[3]);
printf("%x\n", array[4]);
return 0;
}
A:
@Joseph, notice that the variables are 8 bits wide, while the shift is 12 bits wide. Your solution works only for N <= variable size.
If you can assume your array's length is a multiple of 8 you can cast the array into an array of uint64_t and then work on that. If it isn't a multiple of 8, you can work in 64-bit chunks on as much as you can and handle the remainder one byte at a time.
This may be a bit more coding, but I think it's more elegant in the end.
A:
There are a couple of edge-cases which make this a neat problem:
the input array might be empty
the last and next-to-last bytes need to be treated specially, because they have zero bits shifted into them
Here's a simple solution which loops over the array copying the low-order nibble of the next byte into its high-order nibble, and the high-order nibble of the next-next (+2) byte into its low-order nibble. To save dereferencing the look-ahead pointer twice, it maintains a two-element buffer with the "last" and "next" bytes:
void shl12(uint8_t *v, size_t length) {
if (length == 0) {
return; // nothing to do
}
if (length > 1) {
uint8_t last_byte, next_byte;
next_byte = *(v + 1);
for (size_t i = 0; i + 2 < length; i++, v++) {
last_byte = next_byte;
next_byte = *(v + 2);
*v = ((last_byte & 0x0f) << 4) | (((next_byte) & 0xf0) >> 4);
}
// the next-to-last byte is half-empty
*(v++) = (next_byte & 0x0f) << 4;
}
// the last byte is always empty
*v = 0;
}
Consider the boundary cases, which activate successively more parts of the function:
When length is zero, we bail out without touching memory.
When length is one, we set the one and only element to zero.
When length is two, we set the high-order nibble of the first byte to the low-order nibble of the second byte (that is, bits 12-16), and the second byte to zero. We don't activate the loop.
When length is greater than two we hit the loop, shuffling the bytes across the two-element buffer.
If efficiency is your goal, the answer probably depends largely on your machine's architecture. Typically you should maintain the two-element buffer, but handle a machine word (32/64 bit unsigned integer) at a time. If you're shifting a lot of data it will be worthwhile treating the first few bytes as a special case so that you can get your machine word pointers word-aligned. Most CPUs access memory more efficiently if the accesses fall on machine word boundaries. Of course, the trailing bytes have to be handled specially too so you don't touch memory past the end of the array.
| How to shift an array of bytes by 12-bits | I want to shift the contents of an array of bytes by 12-bit to the left.
For example, starting with this array of type uint8_t shift[10]:
{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0A, 0xBC}
I'd like to shift it to the left by 12-bits resulting in:
{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xAB, 0xC0, 0x00}
| [
"Hurray for pointers! \nThis code works by looking ahead 12 bits for each byte and copying the proper bits forward. 12 bits is the bottom half (nybble) of the next byte and the top half of 2 bytes away.\nunsigned char length = 10;\nunsigned char data[10] = {0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0A,0xBC};\nunsigned char *shift = data;\nwhile (shift < data+(length-2)) {\n *shift = (*(shift+1)&0x0F)<<4 | (*(shift+2)&0xF0)>>4;\n shift++;\n}\n*(data+length-2) = (*(data+length-1)&0x0F)<<4;\n*(data+length-1) = 0x00;\n\n\nJustin wrote:\n @Mike, your solution works, but does not carry. \n\nWell, I'd say a normal shift operation does just that (called overflow), and just lets the extra bits fall off the right or left. It's simple enough to carry if you wanted to - just save the 12 bits before you start to shift. Maybe you want a circular shift, to put the overflowed bits back at the bottom? Maybe you want to realloc the array and make it larger? Return the overflow to the caller? Return a boolean if non-zero data was overflowed? You'd have to define what carry means to you.\nunsigned char overflow[2];\n*overflow = (*data&0xF0)>>4;\n*(overflow+1) = (*data&0x0F)<<4 | (*(data+1)&0xF0)>>4;\nwhile (shift < data+(length-2)) {\n /* normal shifting */\n} \n/* now would be the time to copy it back if you want to carry it somewhere */\n*(data+length-2) = (*(data+length-1)&0x0F)<<4 | (*(overflow)&0x0F);\n*(data+length-1) = *(overflow+1); \n\n/* You could return a 16-bit carry int, \n * but endian-ness makes that look weird \n * if you care about the physical layout */\nunsigned short carry = *(overflow+1)<<8 | *overflow;\n\n",
"Here's my solution, but even more importantly my approach to solving the problem.\nI approached the problem by\n\ndrawing the memory cells and drawing arrows from the destination to the source.\nmade a table showing the above drawing.\nlabeling each row in the table with the relative byte address.\n\nThis showed me the pattern:\n\nlet iL be the low nybble (half byte) of a[i]\nlet iH be the high nybble of a[i]\niH = (i+1)L\niL = (i+2)H\n\nThis pattern holds for all bytes.\nTranslating into C, this means:\na[i] = (iH << 4) OR iL\na[i] = ((a[i+1] & 0x0f) << 4) | ((a[i+2] & 0xf0) >> 4)\n\nWe now make three more observations:\n\nsince we carry out the assignments left to right, we don't need to store any values in temporary variables.\nwe will have a special case for the tail: all 12 bits at the end will be zero.\nwe must avoid reading undefined memory past the array. since we never read more than a[i+2], this only affects the last two bytes\n\nSo, we\n\nhandle the general case by looping for N-2 bytes and performing the general calculation above\nhandle the next to last byte by it by setting iH = (i+1)L\nhandle the last byte by setting it to 0\n\ngiven a with length N, we get:\nfor (i = 0; i < N - 2; ++i) {\n a[i] = ((a[i+1] & 0x0f) << 4) | ((a[i+2] & 0xf0) >> 4);\n}\na[N-2] = (a[N-1) & 0x0f) << 4;\na[N-1] = 0;\n\nAnd there you have it... the array is shifted left by 12 bits. It could easily be generalized to shifting N bits, noting that there will be M assignment statements where M = number of bits modulo 8, I believe.\nThe loop could be made more efficient on some machines by translating to pointers\nfor (p = a, p2=a+N-2; p != p2; ++p) {\n *p = ((*(p+1) & 0x0f) << 4) | (((*(p+2) & 0xf0) >> 4);\n}\n\nand by using the largest integer data type supported by the CPU.\n(I've just typed this in, so now would be a good time for somebody to review the code, especially since bit twiddling is notoriously easy to get wrong.)\n",
"Lets make it the best way to shift N bits in the array of 8 bit integers.\nN - Total number of bits to shift\nF = (N / 8) - Full 8 bit integers shifted\nR = (N % 8) - Remaining bits that need to be shifted\n\nI guess from here you would have to find the most optimal way to make use of this data to move around ints in an array. Generic algorithms would be to apply the full integer shifts by starting from the right of the array and moving each integer F indexes. Zero fill the newly empty spaces. Then finally perform an R bit shift on all of the indexes, again starting from the right.\nIn the case of shifting 0xBC by R bits you can calculate the overflow by doing a bitwise AND, and the shift using the bitshift operator:\n// 0xAB shifted 4 bits is:\n(0xAB & 0x0F) >> 4 // is the overflow (0x0A)\n0xAB << 4 // is the shifted value (0xB0)\n\nKeep in mind that the 4 bits is just a simple mask: 0x0F or just 0b00001111. This is easy to calculate, dynamically build, or you can even use a simple static lookup table.\nI hope that is generic enough. I'm not good with C/C++ at all so maybe someone can clean up my syntax or be more specific.\nBonus: If you're crafty with your C you might be able to fudge multiple array indexes into a single 16, 32, or even 64 bit integer and perform the shifts. But that is prabably not very portable and I would recommend against this. Just a possible optimization. \n",
"Here a working solution, using temporary variables:\nvoid shift_4bits_left(uint8_t* array, uint16_t size)\n{\n int i;\n uint8_t shifted = 0x00; \n uint8_t overflow = (0xF0 & array[0]) >> 4;\n\n for (i = (size - 1); i >= 0; i--)\n {\n shifted = (array[i] << 4) | overflow;\n overflow = (0xF0 & array[i]) >> 4;\n array[i] = shifted;\n }\n}\n\nCall this function 3 times for a 12-bit shift.\nMike's solution maybe faster, due to the use of temporary variables.\n",
"The 32 bit version... :-) Handles 1 <= count <= num_words\n#include <stdio.h>\n\nunsigned int array[] = {0x12345678,0x9abcdef0,0x12345678,0x9abcdef0,0x66666666};\n\nint main(void) {\n int count;\n unsigned int *from, *to;\n from = &array[0];\n to = &array[0];\n count = 5;\n\n while (count-- > 1) {\n *to++ = (*from<<12) | ((*++from>>20)&0xfff);\n };\n *to = (*from<<12);\n\n printf(\"%x\\n\", array[0]);\n printf(\"%x\\n\", array[1]);\n printf(\"%x\\n\", array[2]);\n printf(\"%x\\n\", array[3]);\n printf(\"%x\\n\", array[4]);\n\n return 0;\n}\n\n",
"@Joseph, notice that the variables are 8 bits wide, while the shift is 12 bits wide. Your solution works only for N <= variable size.\nIf you can assume your array is a multiple of 4 you can cast the array into an array of uint64_t and then work on that. If it isn't a multiple of 4, you can work in 64-bit chunks on as much as you can and work on the remainder one by one.\nThis may be a bit more coding, but I think it's more elegant in the end.\n",
"There are a couple of edge-cases which make this a neat problem:\n\nthe input array might be empty\nthe last and next-to-last bits need to be treated specially, because they have zero bits shifted into them\n\nHere's a simple solution which loops over the array copying the low-order nibble of the next byte into its high-order nibble, and the high-order nibble of the next-next (+2) byte into its low-order nibble. To save dereferencing the look-ahead pointer twice, it maintains a two-element buffer with the \"last\" and \"next\" bytes: \nvoid shl12(uint8_t *v, size_t length) {\n if (length == 0) {\n return; // nothing to do\n }\n\n if (length > 1) {\n uint8_t last_byte, next_byte;\n next_byte = *(v + 1);\n\n for (size_t i = 0; i + 2 < length; i++, v++) {\n last_byte = next_byte;\n next_byte = *(v + 2);\n *v = ((last_byte & 0x0f) << 4) | (((next_byte) & 0xf0) >> 4);\n }\n\n // the next-to-last byte is half-empty\n *(v++) = (next_byte & 0x0f) << 4;\n }\n\n // the last byte is always empty\n *v = 0;\n}\n\nConsider the boundary cases, which activate successively more parts of the function:\n\nWhen length is zero, we bail out without touching memory.\nWhen length is one, we set the one and only element to zero.\nWhen length is two, we set the high-order nibble of the first byte to low-order nibble of the second byte (that is, bits 12-16), and the second byte to zero. We don't activate the loop.\nWhen length is greater than two we hit the loop, shuffling the bytes across the two-element buffer.\n\nIf efficiency is your goal, the answer probably depends largely on your machine's architecture. Typically you should maintain the two-element buffer, but handle a machine word (32/64 bit unsigned integer) at a time. If you're shifting a lot of data it will be worthwhile treating the first few bytes as a special case so that you can get your machine word pointers word-aligned. Most CPUs access memory more efficiently if the accesses fall on machine word boundaries. Of course, the trailing bytes have to be handled specially too so you don't touch memory past the end of the array.\n"
] | [
9,
5,
3,
3,
1,
0,
0
] | [] | [] | [
"arrays",
"bit_shift",
"bitset",
"c"
] | stackoverflow_0000029437_arrays_bit_shift_bitset_c.txt |
Q:
How automated is too automated when it comes to deployment?
I have ci, so our staging environment builds itself.
Should I have a script that not only builds production but does all the branching for it as well?
When you have one code base on two different urls with skinning, should they be required to build at once?
A:
The only way to be too automated is if you are spending more time fighting with building or fixing automation scripts than you would just doing the job manually. As long as your automation scripts take less time and produce fewer errors than doing the job manually, then automation is great.
Scripts to build and branch for production are a great idea!
A:
In my opinion anything the computer is capable of doing automatically it should do, because it can do it faster, easier and without thought from you. Within reason of course, but stuff like that can be very trivial to automate, so I've always been a proponent of automating that whole process.
and plus it can be fun too!
A:
I like to separate the build and deploy steps into two separate steps. The output of the build step should be a package that is placed in a repository or staging area. This package should be independent of the target environments.
The deploy step is responsible for configuring the target environment and installing the package.
The reasons I prefer this approach are:
I have one package that can run in my development, test and production environments. That should cut down the arguments between QA and development.
There may be different elements that need to be configured during deployment. Application server settings, database schemas, data loads, etc. that might not be as easy to do from the automated build script.
A:
In my opinion it's only too automated if no one in your production support group can deploy an application manually in a pinch. Automated deployments really cut down on simple but common errors such as configuration mistakes. However, a manual deployment must always be an option.
| How automated is too automated when it comes to deployment? | I have ci, so our staging environment builds itself.
Should I have a script that not only builds production but does all the branching for it as well?
When you have one code base on two different urls with skinning, should they be required to build at once?
| [
"The only way to be too automated is if you are spending more time fighting with building or fixing automation scripts than you would just doing the job manually. As long as your automation scripts take less time and produce fewer errors than doing the job manually, then automation is great.\nScripts to build and branch for production are a great idea!\n",
"In my opinion anything the computer is capable of doing automatically it should do, because it can do it faster, easier and without thought from you. Within reason of course, but stuff like that can be very trivial to automate, so I've always been a proponent of automating that whole process.\nand plus it can be fun too!\n",
"I like to separate the build and deploy steps into two separate steps. The output of the build step should be a package that is placed in a repository or staging area. This package should be independent of the target environments. \nThe deploy step is responsible for configuring the target environment and installing the package.\nThe reasons I prefer this approach are:\n\nI have one package that can run in my development, test and production environments. That should cut down the arguments between QA and development.\nThere may be different elements that need to be configured during deployment. Application server settings, database schemas, data loads, etc. that might not be as easy to do from the automated build script.\n\n",
"In my opinion it's only too automated if no one in your production support group can deploy an application manually in a pinch. Automated deployments really cut down on simple but common errors such as configuration mistakes. However, a manual deployment must always be an option. \n"
] | [
8,
3,
2,
1
] | [] | [] | [
"build_automation",
"production"
] | stackoverflow_0000029423_build_automation_production.txt |
Q:
Drag and Drop to a hosted Browser control
I have a WinForms program written on .NET 2 which hosts a webbrowser control and renders asp.net pages from a known server.
I would like to be able to drag, say, a tree node from a treeview in my winforms app into a specific location in the hosted web page and have it trigger a javascript event there.
Currently, I can implement the IDocHostUIHandler interface and get drag\drop events on the browser control, then call Navigate("javascript:fire_event(...)") on the control to execute a script on the page. However, I want this to work only when I drop data on a specific part of the page.
One solution, I suppose, would be to bite the bullet and write a custom browser plugin in the form of an activex control, embed that in the location I want to drop to and let that implement the needed drag\drop interfaces.
Would that work?
Is there a cleaner approach? Can I take advantage of the fact that the browser control is hosted in my app and provide some further level of interaction?
A:
Take a look at the BrowserPlus project at Yahoo.
It looks like they have built a toolkit so that you don't have to do the gritty work of writing the browser plugin yourself.
A:
If you can find out the on screen position of the part of the page you are interested in, you could compare this with the position of the mouse when you receive the drop event. I'm not sure how practical this is if you can get the info out of the DOM or whatnot.
As an alternative could you implement the mouse events on the bit of the page using javascript?
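For example, something along these lines in the WebBrowser control's DragDrop handler might do it (a rough sketch; the element id and the script name are made up):
// Hypothetical: only fire the page script when the drop lands on one element.
private void webBrowser1_DragDrop(object sender, DragEventArgs e)
{
    Point client = webBrowser1.PointToClient(new Point(e.X, e.Y));
    HtmlElement hit = webBrowser1.Document.GetElementFromPoint(client);

    if (hit != null && hit.Id == "dropZone")
    {
        webBrowser1.Document.InvokeScript("fire_event",
            new object[] { "node dropped" });
    }
}
InvokeScript also saves you the Navigate("javascript:...") trick mentioned in the question.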
| Drag and Drop to a hosted Browser control | I have a WinForms program written on .NET 2 which hosts a webbrowser control and renders asp.net pages from a known server.
I would like to be able to drag, say, a tree node from a treeview in my winforms app into a specific location in the hosted web page and have it trigger a javascript event there.
Currently, I can implement the IDocHostUIHandler interface and get drag\drop events on the browser control, then call Navigate("javascript:fire_event(...)") on the control to execute a script on the page. However, I want this to work only when I drop data on a specific part of the page.
One solution, I suppose, would be to bite the bullet and write a custom browser plugin in the form of an activex control, embed that in the location I want to drop to and let that implement the needed drag\drop interfaces.
Would that work?
Is there a cleaner approach? Can I take advantage of the fact that the browser control is hosted in my app and provide some further level of interaction?
| [
"Take a look at the BrowserPlus project at Yahoo.\nIt looks like they have built a toolkit so that you don't have to do the gritty work of writing the browser plugin yourself.\n",
"If you can find out the on screen position of the part of the page you are interested in, you could compare this with the position of the mouse when you receive the drop event. I'm not sure how practical this is if you can get the info out of the DOM or whatnot.\nAs an alternative could you implement the mouse events on the bit of the page using javascript?\n"
] | [
3,
1
] | [] | [] | [
"browser",
"c#"
] | stackoverflow_0000004849_browser_c#.txt |
Q:
Creating a UserControl Programmatically within a repeater?
I have a repeater that is bound to some data.
I bind to the ItemDataBound event, and I am attempting to programmatically create a UserControl:
In a nutshell:
void rptrTaskList_ItemDataBound(object sender, RepeaterItemEventArgs e)
{
CCTask task = (CCTask)e.Item.DataItem;
if (task is ExecTask)
{
ExecTaskControl foo = new ExecTaskControl();
e.Item.Controls.Add(foo);
}
}
The problem is that while the binding works, the user control is not rendered to the main page.
A:
Eh, figured out one way to do it:
ExecTaskControl foo = (ExecTaskControl)LoadControl("tasks\\ExecTaskControl.ascx");
It seems silly to have a file dependency like that, but maybe that's how UserControls must be done.
A:
You could consider inverting the problem. That is, add the control to the repeater's definition and then remove it if it is not needed. Not knowing the details of your app, this might be a tremendous waste of time, but it might just work out in the end.
A:
I think that @Craig is on the right track depending on the details of the problem you are solving. Add it to the repeater and remove it or set Visible="false" to hide it where needed. Viewstate gets tricky with dynamically created controls/user controls, so google that or check here if you must add dynamically. The article referenced also shows an alternative way to load dynamically:
Control ctrl=this.LoadControl(Request.ApplicationPath +"/Controls/" +ControlName);
A:
If you are going to do it from a place where you don't have an instance of a page then you need to go one step further (e.g. from a webservice to return html or from a task rendering emails)
var myPage = new System.Web.UI.Page();
var myControl = (Controls.MemberRating)myPage.LoadControl("~/Controls/MemberRating.ascx");
I found this technique on Scott Guthrie's site, so I assume it's the legit way to do it in .NET
| Creating a UserControl Programmatically within a repeater? | I have a repeater that is bound to some data.
I bind to the ItemDataBound event, and I am attempting to programmatically create a UserControl:
In a nutshell:
void rptrTaskList_ItemDataBound(object sender, RepeaterItemEventArgs e)
{
CCTask task = (CCTask)e.Item.DataItem;
if (task is ExecTask)
{
ExecTaskControl foo = new ExecTaskControl();
e.Item.Controls.Add(foo);
}
}
The problem is that while the binding works, the user control is not rendered to the main page.
| [
"Eh, figured out one way to do it:\nExecTaskControl foo = (ExecTaskControl)LoadControl(\"tasks\\\\ExecTaskControl.ascx\");\n\nIt seems silly to have a file depedancy like that, but maybe thats how UserControls must be done.\n",
"You could consider inverting the problem. That is add the control to the repeaters definition and the remove it if it is not needed. Not knowing the details of your app this might be a tremendous waste of time but it might just work out in the end.\n",
"I think that @Craig is on the right track depending on the details of the problem you are solving. Add it to the repeater and remove it or set Visible=\"false\" to hide it where needed. Viewstate gets tricky with dynamically created controls/user controls, so google that or check here if you must add dynamically. The article referenced also shows an alternative way to load dynamically:\n\nControl ctrl=this.LoadControl(Request.ApplicationPath +\"/Controls/\" +ControlName);\n\n",
"If you are going to do it from a place where you don't have an instance of a page then you need to go one step further (e.g. from a webservice to return html or from a task rendering emails)\nvar myPage = new System.Web.UI.Page();\nvar myControl = (Controls.MemberRating)myPage.LoadControl(\"~/Controls/MemberRating.ascx\");\n\nI found this technique on Scott Guithrie's site so I assume it's the legit way to do it in .NET\n"
] | [
1,
1,
0,
0
] | [] | [] | [
"asp.net",
"user_controls",
"webforms"
] | stackoverflow_0000029067_asp.net_user_controls_webforms.txt |
Q:
Using .NET CodeDOM to declare and initialize a field in one statement
I want to use CodeDOM to both declare and initialize my static field in one statement. How can I do this?
// for example
public static int MyField = 5;
I can figure out how to declare a static field, and I can set its value later, but I can't seem to get the above effect.
@lomaxx,
Naw, I just want static. I don't want const. This value can change. I just wanted the simplicity of declaring and init'ing in one fell swoop. As if anything in the codedom world is simple. Every type name is 20+ characters long and you end up building these huge expression trees. Makes my eyes bug out. I'm only alive today thanks to resharper's reformatting.
A:
Once you create your CodeMemberField instance to represent the static field, you can assign the InitExpression property to the expression you want to use to populate the field.
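For example, something along these lines produces the field from the question (myClass stands in for whatever CodeTypeDeclaration you are building):
// public static int MyField = 5;
CodeMemberField field = new CodeMemberField(typeof(int), "MyField");
field.Attributes = MemberAttributes.Public | MemberAttributes.Static;
field.InitExpression = new CodePrimitiveExpression(5);

myClass.Members.Add(field);
When the code is generated, the declaration and the initializer come out as a single statement.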
A:
This post by Omer van Kloeten seems to do what you want. Notice that the output has the line:
private static Foo instance = new Foo();
A:
I think what you want is a const rather than static. I assume what you want is the effect of having a static readonly which is why you always want the value to be 5.
In c# consts are treated exactly the same as a readonly static.
From the c# docs:
Even though constants are considered
static members, a constant-
declaration neither requires nor
allows a static modifier.
| Using .NET CodeDOM to declare and initialize a field in one statement | I want to use CodeDOM to both declare and initialize my static field in one statement. How can I do this?
// for example
public static int MyField = 5;
I can figure out how to declare a static field, and I can set its value later, but I can't seem to get the above effect.
@lomaxx,
Naw, I just want static. I don't want const. This value can change. I just wanted the simplicity of declaring and init'ing in one fell swoop. As if anything in the codedom world is simple. Every type name is 20+ characters long and you end up building these huge expression trees. Makes my eyes bug out. I'm only alive today thanks to resharper's reformatting.
| [
"Once you create your CodeMemberField instance to represent the static field, you can assign the InitExpression property to the expression you want to use to populate the field.\n",
"This post by Omer van Kloeten seems to do what you want. Notice that the output has the line:\nprivate static Foo instance = new Foo();\n\n",
"I think what you want is a const rather than static. I assume what you want is the effect of having a static readonly which is why you always want the value to be 5.\nIn c# consts are treated exactly the same as a readonly static.\nFrom the c# docs:\n\nEven though constants are considered\n static members, a constant-\n declaration neither requires nor\n allows a static modifier.\n\n"
] | [
8,
1,
0
] | [] | [] | [
".net",
"codedom"
] | stackoverflow_0000029531_.net_codedom.txt |
Q:
Get current process CPU usage in C
On Windows I can do:
HANDLE hProcess = GetCurrentProcess();
FILETIME ftCreation, ftExit, ftKernel, ftUser;
GetProcessTimes(hProcess, &ftCreation, &ftExit, &ftKernel, &ftUser);
SYSTEMTIME stKernel;
FileTimeToSystemTime(&ftKernel, &stKernel);
SYSTEMTIME stUser;
FileTimeToSystemTime(&ftUser, &stUser);
printf("Time in kernel mode = %uh %um %us %ums", stKernel.wHour,
stKernel.wMinute, stKernel.wSecond, stKernel.wMilliseconds);
printf("Time in user mode = %uh %um %us %ums", stUser.wHour,
stUser.wMinute, stUser.wSecond, stUser.wMilliseconds);
How can I do the same thing on *nix?
A:
Check getrusage, I think that should solve your problem.
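For instance, something like this gives the same user/kernel split on POSIX systems:
#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(void)
{
    struct rusage usage;

    if (getrusage(RUSAGE_SELF, &usage) == 0) {
        printf("Time in user mode   = %lds %ldus\n",
               (long)usage.ru_utime.tv_sec, (long)usage.ru_utime.tv_usec);
        printf("Time in kernel mode = %lds %ldus\n",
               (long)usage.ru_stime.tv_sec, (long)usage.ru_stime.tv_usec);
    }
    return 0;
}
ru_utime and ru_stime are struct timeval values, so you get seconds and microseconds rather than the hour/minute breakdown in the Windows example.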
| Get current process CPU usage in C | On Windows I can do:
HANDLE hProcess = GetCurrentProcess();
FILETIME ftCreation, ftExit, ftKernel, ftUser;
GetProcessTimes(hProcess, &ftCreation, &ftExit, &ftKernel, &ftUser);
SYSTEMTIME stKernel;
FileTimeToSystemTime(&ftKernel, &stKernel);
SYSTEMTIME stUser;
FileTimeToSystemTime(&ftUser, &stUser);
printf("Time in kernel mode = %uh %um %us %ums", stKernel.wHour,
stKernel.wMinute, stKernel.wSecond, stKernel.wMilliseconds));
printf("Time in user mode = %uh %um %us %ums", stUser.wHour,
stUser.wMinute, stUser.wSecond, stUser.wMilliseconds));
How can I do the same thing on *nix?
| [
"Check getrusage, I think that should solve your problem.\n"
] | [
3
] | [] | [] | [
"c",
"cross_platform",
"process_management",
"unix"
] | stackoverflow_0000029615_c_cross_platform_process_management_unix.txt |
Q:
What does ServerVariables["APPL_MD_PATH"] retrieves the metabase path for the Application for the ISAPI DLL mean?
I've been trying to get an ASP.net (v2) app to work in the debugger and keep running into a problem because the value returned by the following code is an empty string:
HttpContext.Current.Request.ServerVariables["APPL_MD_PATH"].ToLower()
I have found out that this "Retrieves the metabase path for the Application for the ISAPI DLL". Can anybody shed some light on what this means and why it might be empty?
This code works in our live environment, but I want it to work on my PC and be able to step through source code so I can look at another problem...
A:
Are you running your application locally inside of IIS or inside of the development web server? If it's the latter, then that's probably why: Cassini (the development web server) doesn't do ISAPI, so this value will be empty.
 | What does ServerVariables["APPL_MD_PATH"] retrieves the metabase path for the Application for the ISAPI DLL mean? | I've been trying to get an ASP.net (v2) app to work in the debugger and keep running into a problem because the value returned by the following code is an empty string:
HttpContext.Current.Request.ServerVariables["APPL_MD_PATH"].ToLower()
I have found out that this "Retrieves the metabase path for the Application for the ISAPI DLL". Can anybody shed some light on what this means and why it might be empty?
This code works in our live environment, but I want it to work on my PC and be able to step through source code so I can look at another problem...
| [
"Are you running your application locally inside of IIS or inside of the development web server? If it's the latter, then that's probably why: Cassini (the development web server) doesn't do ISAPI, so this value will be empty.\n"
] | [
3
] | [] | [] | [
".net",
".net_2.0",
"asp.net",
"debugging",
"visual_studio"
] | stackoverflow_0000029593_.net_.net_2.0_asp.net_debugging_visual_studio.txt |
Q:
Can I maintain state between calls to a SQL Server UDF?
I have a SQL script that inserts data (via INSERT statements currently numbering in the thousands). One of the columns contains a unique identifier (though not an IDENTITY type, just a plain ol' int) that's actually unique across a few different tables.
I'd like to add a scalar function to my script that gets the next available ID (i.e. last used ID + 1) but I'm not sure this is possible because there doesn't seem to be a way to use a global or static variable from within a UDF, I can't use a temp table, and I can't update a permanent table from within a function.
Currently my script looks like this:
declare @v_baseID int
exec dbo.getNextID @v_baseID out --sproc to get the next available id
--Lots of these - where n is a hardcoded value
insert into tableOfStuff (someStuff, uniqueID) values ('stuff', @v_baseID + n )
exec dbo.UpdateNextID @v_baseID + lastUsedn --sproc to update the last used id
But I would like it to look like this:
--Lots of these
insert into tableOfStuff (someStuff, uniqueID) values ('stuff', getNextID() )
Hardcoding the offset is a pain in the arse, and is error prone. Packaging it up into a simple scalar function is very appealing, but I'm starting to think it can't be done that way since there doesn't seem to be a way to maintain the offset counter between calls. Is that right, or is there something I'm missing.
We're using SQL Server 2005 at the moment.
edits for clarification:
Two users hitting it won't happen. This is an upgrade script that will be run only once, and never concurrently.
The actual sproc isn't prefixed with sp_, fixed the example code.
In normal usage, we do use an id table and a sproc to get IDs as needed, I was just looking for a cleaner way to do it in this script, which essentially just dumps a bunch of data into the db.
A:
If you have 2 users hitting it at the same time they will get the same id. Why not use an id table with an identity column instead? Insert into that and use the generated value as the unique (guaranteed) id; this will also perform much faster.
sp_getNextID
Never, ever prefix procs with sp_. This has performance implications because the optimizer first checks the master DB to see if that proc exists there and then the local DB; also, if MS decides to create an sp_getNextID in a service pack, yours will never get executed.
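A minimal sketch of that id-table idea (the table and proc names are illustrative):
CREATE TABLE dbo.IdSequence (id INT IDENTITY(1,1) PRIMARY KEY, createdOn DATETIME DEFAULT GETDATE())
GO
CREATE PROCEDURE dbo.GetNextID @nextID INT OUTPUT
AS
BEGIN
    INSERT INTO dbo.IdSequence (createdOn) VALUES (GETDATE())
    SET @nextID = SCOPE_IDENTITY()
END
GO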
A:
I'm starting to think it can't be done that way since there doesn't seem to be a way to maintain the offset counter between calls. Is that right, or is there something I'm missing.
You aren't missing anything; SQL Server does not support global variables, and it doesn't support data modification within UDFs. And even if you wanted to do something as kludgy as using CONTEXT_INFO (see http://weblogs.sqlteam.com/mladenp/archive/2007/04/23/60185.aspx), you can't set that from within a UDF anyway.
Is there a way you can get around the "hardcoding" of the offset by making that a variable and looping over the iteration of it, doing the inserts within that loop?
A:
It would probably be more work than it's worth, but you can use static C#/VB variables in a SQL CLR UDF, so I think you'd be able to do what you want to do by simply incrementing this variable every time the UDF is called. The static variable would be lost whenever the appdomain unloaded, of course. So if you need continuity of your ID from one day to the next, you'd need a way, on first access of NextId, to poll all of the tables that use this ID, to find the highest value.
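A sketch of that CLR approach (names are illustrative; note that a writable static field means the assembly has to be catalogued with PERMISSION_SET = UNSAFE, and the counter resets whenever the AppDomain unloads):
using System.Data.SqlTypes;
using System.Threading;
using Microsoft.SqlServer.Server;

public static class IdGenerator
{
    private static int counter;   // would need seeding from the existing tables on first use

    [SqlFunction]
    public static SqlInt32 GetNextId()
    {
        return new SqlInt32(Interlocked.Increment(ref counter));
    }
}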
 | Can I maintain state between calls to a SQL Server UDF? | I have a SQL script that inserts data (via INSERT statements currently numbering in the thousands). One of the columns contains a unique identifier (though not an IDENTITY type, just a plain ol' int) that's actually unique across a few different tables.
I'd like to add a scalar function to my script that gets the next available ID (i.e. last used ID + 1) but I'm not sure this is possible because there doesn't seem to be a way to use a global or static variable from within a UDF, I can't use a temp table, and I can't update a permanent table from within a function.
Currently my script looks like this:
declare @v_baseID int
exec dbo.getNextID @v_baseID out --sproc to get the next available id
--Lots of these - where n is a hardcoded value
insert into tableOfStuff (someStuff, uniqueID) values ('stuff', @v_baseID + n )
exec dbo.UpdateNextID @v_baseID + lastUsedn --sproc to update the last used id
But I would like it to look like this:
--Lots of these
insert into tableOfStuff (someStuff, uniqueID) values ('stuff', getNextID() )
Hardcoding the offset is a pain in the arse, and is error prone. Packaging it up into a simple scalar function is very appealing, but I'm starting to think it can't be done that way since there doesn't seem to be a way to maintain the offset counter between calls. Is that right, or is there something I'm missing.
We're using SQL Server 2005 at the moment.
edits for clarification:
Two users hitting it won't happen. This is an upgrade script that will be run only once, and never concurrently.
The actual sproc isn't prefixed with sp_, fixed the example code.
In normal usage, we do use an id table and a sproc to get IDs as needed, I was just looking for a cleaner way to do it in this script, which essentially just dumps a bunch of data into the db.
| [
"If you have 2 users hitting it at the same time they will get the same id. Why didn't you use an id table with an identity instead, insert into that and use that as the unique (which is guaranteed) id, this will also perform much faster\n\n\nsp_getNextID \n\n\nnever ever prefix procs with sp_, this has performance implication because the optimizer first checks the master DB to see if that proc exists there and then th local DB, also if MS decide to create a sp_getNextID in a service pack yours will never get executed\n",
"\nI'm starting to think it can't be done that way since there doesn't seem to be a way to maintain the offset counter between calls. Is that right, or is there something I'm missing.\n\nYou aren't missing anything; SQL Server does not support global variables, and it doesn't support data modification within UDFs. And even if you wanted to do something as kludgy as using CONTEXT_INFO (see http://weblogs.sqlteam.com/mladenp/archive/2007/04/23/60185.aspx), you can't set that from within a UDF anyway.\nIs there a way you can get around the \"hardcoding\" of the offset by making that a variable and looping over the iteration of it, doing the inserts within that loop?\n",
"It would probably be more work than it's worth, but you can use static C#/VB variables in a SQL CLR UDF, so I think you'd be able to do what you want to do by simply incrementing this variable every time the UDF is called. The static variable would be lost whenever the appdomain unloaded, of course. So if you need continuity of your ID from one day to the next, you'd need a way, on first access of NextId, to poll all of tables that use this ID, to find the highest value.\n"
] | [
2,
2,
0
] | [] | [] | [
"sql",
"sql_server",
"sql_server_2005"
] | stackoverflow_0000028280_sql_sql_server_sql_server_2005.txt |
Q:
Set ASP.net executionTimeout in code / "refresh" request
I'll have an ASP.net page that creates some Excel Sheets and sends them to the user. The problem is, sometimes I get Http timeouts, presumably because the Request runs longer than executionTimeout (110 seconds per default).
I just wonder what my options are to prevent this, without wanting to generally increase the executionTimeout in web.config?
In PHP, set_time_limit exists which can be used in a function to extend its life, but I did not see anything like that in C#/ASP.net?
How do you handle long-running functions in ASP.net?
A:
If you want to increase the execution timeout for this one request you can set
HttpContext.Current.Server.ScriptTimeout
But you still may have the problem of the client timing out which you can't reliably solve directly from the server. To get around that you could implement a "processing" page (like Rob suggests) that posts back until the response is ready. Or you might want to look into AJAX to do something similar.
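For example (300 is just an illustrative value, in seconds):
// somewhere early in the request, e.g. at the top of the handler that builds the Excel sheets
HttpContext.Current.Server.ScriptTimeout = 300;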
A:
I've not really had to face this issue too much yet myself, so please keep that in mind.
Is there not any way you can run the process async and specify a callback method to occur once complete, and then keep the page in a "we are processing your request.." loop cycle? You could then open this up to add some nice UI enhancements as well.
Just kinda thinking out loud. That would probably be the sort of thing I would like to do :)
| Set ASP.net executionTimeout in code / "refresh" request | I'll have an ASP.net page that creates some Excel Sheets and sends them to the user. The problem is, sometimes I get Http timeouts, presumably because the Request runs longer than executionTimeout (110 seconds per default).
I just wonder what my options are to prevent this, without wanting to generally increase the executionTimeout in web.config?
In PHP, set_time_limit exists which can be used in a function to extend its life, but I did not see anything like that in C#/ASP.net?
How do you handle long-running functions in ASP.net?
| [
"If you want to increase the execution timeout for this one request you can set\nHttpContext.Current.Server.ScriptTimeout\nBut you still may have the problem of the client timing out which you can't reliably solve directly from the server. To get around that you could implement a \"processing\" page (like Rob suggests) that posts back until the response is ready. Or you might want to look into AJAX to do something similar.\n",
"I've not really had to face this issue too much yet myself, so please keep that in mind.\nIs there not anyway you can run the process async and specify a callback method to occur once complete, and then keep the page in a \"we are processing your request..\" loop cycle. You could then open this up to add some nice UI enhancements as well.\nJust kinda thinking out loud. That would probably be the sort of thing I would like to do :)\n"
] | [
16,
1
] | [] | [] | [
"asp.net",
"c#"
] | stackoverflow_0000029686_asp.net_c#.txt |
Q:
Visual Studio 2005 Project options
I have a solution in Visual Studio 2005 (Professional Edition) which in turn has 8 projects. I am facing a problem that even after I set the Command Arguments in the Project settings of the relevant project, it doesn't accept those command-line arguments and shows argc = 1, in spite of me giving more than one command argument. I tried making the settings of this solution similar to a working solution, but with no success.
Any pointers?
-Ajit.
A:
Hmm.. Are you sure the specified project is set as the start project (right click > set as startup project) ??
Oh, and obviously you need to be in the correct configuration mode ^_^
(Notice it can be changed to debug | build | all configurations )
A:
Are you sure you are setting the command arguments on the same configuration (Debug|Release) you are debugging? As far as I remember command arguments are per configuration.
 | Visual Studio 2005 Project options | I have a solution in Visual Studio 2005 (Professional Edition) which in turn has 8 projects. I am facing a problem that even after I set the Command Arguments in the Project settings of the relevant project, it doesn't accept those command-line arguments and shows argc = 1, in spite of me giving more than one command argument. I tried making the settings of this solution similar to a working solution, but with no success.
Any pointers?
-Ajit.
| [
"Hmm.. Are you sure the specified project is set as the start project (right click > set as startup project) ??\nOh, and obviously you need to be in the correct configuration mode ^_^\n(Notice it can be changed to debug | build | all configurations )\n",
"Are you sure you are setting the command arguments on the same configuration (Debug|Release) you are debugging? As far as I remember command arguments are per configuration.\n"
] | [
1,
0
] | [] | [] | [
"projects",
"visual_studio_2005"
] | stackoverflow_0000029777_projects_visual_studio_2005.txt |
Q:
VS.NET Application Diagrams
Have you used VS.NET Architect Edition's Application and System diagrams to start designing a solution?
If so, did you find it useful?
Did the "automatic implementation" feature work ok?
A:
I used to use it a lot. This designer worked good for stubbing out prototype projects, but ultimately I found myself wasting a lot of time moving the mouse around when I could be typing. It seemed like an awesome idea to be able to print out the class diagrams to show APIs to other developers while I was prototyping, but it proved quite limiting and it looks awful on a non-color printer.
Now I just use the text editor and some AutoHotkey macros to get everything done.
A:
Yes, and no, it's not very useful in my opinion. It's not very stable, it's easy to get out of sync, and the "look how fast I generate this" advantage is virtually nil when compared to more mundane things such as code snippets.
Then again, I am a total "Architect" luddite, so take this with a grain of salt.
A:
I agree with Stu, and I don't consider myself an Architect luddite :-). Kind of like a lot of MS frameworks over the years, you are tied to their particular way of thinking, which doesn't always gel with the ideas that come out of the rest of the architecture community at large. Generating stubs, in my opinion, doesn't really add that much value, and the round trip half of the equation has messed up some of my project files and made me have to re-write the things manually.
| VS.NET Application Diagrams | Have you used VS.NET Architect Edition's Application and System diagrams to start designing a solution?
If so, did you find it useful?
Did the "automatic implementation" feature work ok?
| [
"I used to use it a lot. This designer worked good for stubbing out prototype projects, but ultimately I found myself wasting a lot of time moving the mouse around when I could be typing. It seemed like an awesome idea to be able to print out the class diagrams to show APIs to other developers while I was prototyping, but it proved quite limiting and it looks awful on a non-color printer.\nNow I just use the text editor and some AutoHotkey macros to get everything done.\n",
"Yes, and no, it's not very useful in my opinion. It's not very stable, it's easy to get out of sync, and the \"look how fast I generate this\" advantage is virtually nil when compared to more mundane things such as code snippets.\nThen again, I am a total \"Architect\" luddite, so take this with a grain of salt.\n",
"I agree with Stu, and I don't consider myself an Architect luddite :-). Kind of like a lot of MS frameworks over the years, you are tied to their particular way of thinking, which doesn't always gel with the ideas that come out of the rest of the architecture community at large. Generating stubs, in my opinion, doesn't really add that much value, and the round trip half of the equation has messed up some of my project files and made me have to re-write the things manually.\n"
] | [
2,
0,
0
] | [] | [] | [
".net",
"architecture",
"c#",
"diagram",
"visual_studio"
] | stackoverflow_0000016556_.net_architecture_c#_diagram_visual_studio.txt |
Q:
Opcode cache impact on memory usage
Can anyone tell me what is the memory usage overhead associated with PHP opcode cache?
I've seen a lot of reviews of opcode cache but all of them only concentrate on the performance increase. I have a small entry level VPS and memory limits are a concern for me.
A:
Most of the memory overhead will come from the opcode cache size. Each opcode cacher has its own default (e.g. 30MB for APC) which you can change through the config file.
Other than the cache size, the actual memory overhead of the cacher itself is negligible.
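For example, a minimal php.ini fragment for APC (the values are illustrative):
extension=apc.so
apc.enabled=1
; shared-memory cache size; older APC releases read this as a number of MB
apc.shm_size=30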
A:
In today's world it's negligible. I think memory consumption was about 50 MB bigger with eAccelerator than it was without when I did my benchmarks.
If you really need the speed but do have headaches that your RAM might be not enough: grab $40 and buy another GIG of RAM for your server ;)
A:
You can set a limit to memory consumption for APC, but that potentially limits its effectiveness.
If you're just using it for silent opcode caching, then it should be fine. Once the memory allotment is full, no new files will be cached, but everything will work as expected. However, the user-space cache functions like apc_store() and apc_fetch() will fail silently and inexplicably if there is no memory available.
This can be tricky to catch and debug since no error is reported and no exception is thrown.
| Opcode cache impact on memory usage | Can anyone tell me what is the memory usage overhead associated with PHP opcode cache?
I've seen a lot of reviews of opcode cache but all of them only concentrate on the performance increase. I have a small entry level VPS and memory limits are a concern for me.
| [
"Most of the memory overhead will come from the opcode cache size. Each opcode cacher has their own default(e.g. 30MB for APC) which you can change through the config file.\nOther than the cache size, the actual memory overhead of the cacher itself is negligible.\n",
"In todays world: It's neglectible. I think memory consumption was about 50 MB bigger with eAccelerator then it was without when I did my benchmarks.\nIf you really need the speed but do have headaches that your RAM might be not enough: grab $40 and buy another GIG of RAM for your server ;)\n",
"You can set a limit to memory consumption for APC, but that potentially limits its effectiveness.\nIf you're just using it for silent opcode caching, then it should be fine. Once the memory allotment is full, no new files will be cached, but everything will work as expected. However, the user-space cache functions like apc_store() and apc_fetch() will fail silently and inexplicably if there is no memory available. \nThis can be tricky to catch and debug since no error is reported and no exception is thrown.\n"
] | [
5,
0,
0
] | [] | [] | [
"opcode_cache",
"php"
] | stackoverflow_0000029525_opcode_cache_php.txt |
Q:
Java return copy to hide future changes
In Java, say you have a class that wraps an ArrayList (or any collection) of objects.
How would you return one of those objects such that the caller will not see any future changes to the object made in the ArrayList?
i.e. you want to return a deep copy of the object, but you don't know if it is cloneable.
A:
Turn that into a spec:
-that objects need to implement an interface in order to be allowed into the collection
Something like ArrayList<ICloneable>()
Then you can be assured that you always do a deep copy - the interface should have a method that is guaranteed to return a deep copy.
I think that's the best you can do.
A:
One option is to use serialization. Here's a blog post explaining it:
http://weblogs.java.net/blog/emcmanus/archive/2007/04/cloning_java_ob.html
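In short, the technique round-trips the object through an in-memory stream; a minimal sketch (it requires the whole object graph to be Serializable):
import java.io.*;

public final class DeepCopy {
    @SuppressWarnings("unchecked")
    public static <T extends Serializable> T copy(T obj) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(obj);                       // serialize the full object graph
        out.close();
        ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()));
        return (T) in.readObject();                 // deserialize into a brand-new, independent graph
    }
}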
A:
I suppose it is an obvious answer:
Make it a requirement that the classes stored in the collection be cloneable. You could check that at insertion time or at retrieval time, whichever makes more sense, and throw an exception.
Or, if the item is not cloneable, just fall back to the return-by-reference option.
| Java return copy to hide future changes | In Java, say you have a class that wraps an ArrayList (or any collection) of objects.
How would you return one of those objects such that the caller will not see any future changes to the object made in the ArrayList?
i.e. you want to return a deep copy of the object, but you don't know if it is cloneable.
| [
"Turn that into a spec:\n-that objects need to implement an interface in order to be allowed into the collection\nSomething like ArrayList<ICloneable>()\nThen you can be assured that you always do a deep copy - the interface should have a method that is guaranteed to return a deep copy. \nI think that's the best you can do. \n",
"One option is to use serialization. Here's a blog post explaining it:\nhttp://weblogs.java.net/blog/emcmanus/archive/2007/04/cloning_java_ob.html\n",
"I suppose it is an ovbious answer:\nMake a requisite for the classes stored in the collection to be cloneable. You could check that at insertion time or at retrieval time, whatever makes more sense, and throw an exception.\nOr if the item is not cloneable, just fail back to the return by reference option.\n"
] | [
4,
2,
1
] | [] | [] | [
"cloning",
"deep_copy",
"java"
] | stackoverflow_0000029820_cloning_deep_copy_java.txt |
Q:
Windows XP Default Routes
I use my mobile phone for connecting my laptop to the internet. I also have a wired connection to a LAN which doesn't have internet connectivity; it just has our TFS server on it.
The problem is that I can't use the internet (from the phone) with the LAN cable plugged in. Is there a way to set the default route to my phone?
I'm running Windows XP.
A:
There's many OS specific ways to force routing over specific interfaces. What OS are you using? XP? Vista? *nix?
The simplest way is to configure your network card with a static IP and NO GATEWAY, the only gateway (ie. internet access) your laptop will find is then via the mobile.
The disadvantage of this method is that you'll need to access your TFS server by IP address (or netbios name) as all DNS requests will be going out over the internet and not through your private LAN.
EDIT: If you can't use the phone when the LAN is plugged in, that's because you've got it setup for DHCP and the DHCP server is advertising (incorrectly for you) that it will accept and route internet traffic. As previously mentioned, setup with a static IP and no gateway... if you insist on using DHCP you'll need to learn the ROUTE command in DOS, find the IP address of your phone (assuming it's acting as a router) set that as the default route, and remove whatever default route was assigned from the DHCP server.
EDIT2: @dan - you can't use the internet from your phone directly (eg. mobile browser), or you can't make your laptop use your phone for internet when the cable is plugged in? (ie. routing issues) ... if it's the former, then your phone is probably configuring a PAN with your phone and trying to route internet back over the LAN
EDIT @Jorge - IP routing is the responsibility of the network layer, not the application. Go review the OSI model ;)
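For reference, a sketch of the ROUTE commands involved (192.168.0.1 is a placeholder for whatever gateway address "route print" shows for the phone connection):
rem show current routes and note the phone connection's gateway
route print
rem remove the default route picked up from the LAN
route delete 0.0.0.0
rem point the default route at the phone's gateway
route add 0.0.0.0 mask 0.0.0.0 192.168.0.1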
A:
You can actually configure what you want to be the default gateway globally using the "routes" command as described here: Default Internet connection on Dual LAN Workstation
I admit though, on Windows it's finicky at best, as sometimes that setup will just disappear :(
 | Windows XP Default Routes | I use my mobile phone for connecting my laptop to the internet. I also have a wired connection to a LAN which doesn't have internet connectivity; it just has our TFS server on it.
The problem is that I can't use the internet (from the phone) with the LAN cable plugged in. Is there a way to set the default route to my phone?
I'm running Windows XP.
| [
"There's many OS specific ways to force routing over specific interfaces. What OS are you using? XP? Vista? *nix?\nThe simplest way is to configure your network card with a static IP and NO GATEWAY, the only gateway (ie. internet access) your laptop will find is then via the mobile.\nThe disadvantage of this method is that you'll need to access your TFS server by IP address (or netbios name) as all DNS requests will be going out over the internet and not through your private LAN. \nEDIT: If you can't use the phone when the LAN is plugged in, that's because you've got it setup for DHCP and the DHCP server is advertising (incorrectly for you) that it will accept and route internet traffic. As previously mentioned, setup with a static IP and no gateway... if you insist on using DHCP you'll need to learn the ROUTE command in DOS, find the IP address of your phone (assuming it's acting as a router) set that as the default route, and remove whatever default route was assigned from the DHCP server.\nEDIT2: @dan - you can't use the internet from your phone directly (eg. mobile browser), or you can't make your laptop use your phone for internet when the cable is plugged in? (ie. routing issues) ... if it's the former, then your phone is probably configuring a PAN with your phone and trying to route internet back over the LAN\nEDIT @Jorge - IP routing is the responsibility of the network layer, not the application. Go review the OSI model ;)\n",
"You can actually configure what you want to be the default gateway globally using the \"routes\" command as described here: Default Internet connection on Dual LAN Workstation\nI admit though, on windows it'd finicky at best as sometimes that setup will just disappear :(\n"
] | [
2,
1
] | [] | [] | [
"networking",
"tcp",
"windows"
] | stackoverflow_0000029782_networking_tcp_windows.txt |
Q:
Printing DOM Changes
What I am trying to do is change the background colour of a table cell <td>, but when a user goes to print the page, the changes are not showing.
I am currently using an unobtrusive script to run the following command on a range of cells:
element.style.backgroundColor = "#f00"
This works on screen in IE and FF, however, when you go to Print Preview, the background colours are lost.
Am I doing something wrong?
A:
Is it not recommended to do this with stylesheets? You can change the media type in the LINK statement in your HTML, so when the page is printed, it will revert to the different style?
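A minimal sketch of that idea (the file names and the "highlight" class are illustrative); the script adds a class instead of an inline style, and the print stylesheet decides how it looks on paper:
<link rel="stylesheet" href="screen.css" media="screen" />
<link rel="stylesheet" href="print.css" media="print" />
and in print.css:
td.highlight { background-color: #f00; }
(Most browsers still suppress background colours when printing unless the user enables that option.)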
A:
Have you tried hard-coding the values just to see if background-colors are showing on the print-preview at all? I think it is a setting in the Browser.
 | Printing DOM Changes | What I am trying to do is change the background colour of a table cell <td>, but when a user goes to print the page, the changes are not showing.
I am currently using an unobtrusive script to run the following command on a range of cells:
element.style.backgroundColor = "#f00"
This works on screen in IE and FF, however, when you go to Print Preview, the background colours are lost.
Am I doing something wrong?
| [
"Is it not recommended to do this with stylesheets? You can change the media type in the LINK statement in your HTML, so when the page is printed, it will revert to the different style?\n",
"Have you tried hard-coding the values just to see if background-colors are showing on the print-preview at all? I think it is a setting in the Browser.\n"
] | [
2,
0
] | [] | [] | [
"browser",
"dom",
"firefox",
"internet_explorer",
"printing"
] | stackoverflow_0000029883_browser_dom_firefox_internet_explorer_printing.txt |
Q:
How to bring in a web app
I run a game and the running is done by hand; I have a few scripts that help me, but essentially it's me doing the work. I am at the moment working on a web app that will allow the users to input some of their game actions directly and thus save me a lot of work.
The problem is that I'm one man working on a moderately sized (upwards of 20 tables) project, the workload isn't the issue, it's that bugs will have slipped in even though I test as I write. So my question is thus two-fold.
Beta testing: I love open betas, but would a closed beta be somehow more effective and give better results?
How should I bring in the app? Should I one turn drop it in and declare it's being used or should I use it alongside the normal construct of the game?
A:
This is my general approach to testing/launching.
How you test/launch depends mostly on:
What your application is.
Who your users are.
If you application is a technical application and is geared to the technically-minded, the word "beta" won't really scare them - but provide an opportunity to test the product before it goes 'live', and help to improve the system. This is the ideal circumstance in which to use either an open or closed beta. It's usually beneficial to start off 'closed' with a group of people you select and trust to bug-find quickly and reliably - after you're more confident that all the critical bugs are gone, open it up with an invite system (for example).
If, however, your application is 'trivial' from a technical standpoint (i.e. it's something like Twitter, or Facebook, or Flickr - nothing that is inherently geared towards technical usage), then you're going to have to be more careful in how you plan your testing. Closed testing is most definitely your first port of call, and this should last for longer than a closed beta on a more 'technical' product. The reason? Your 'average Joe' doesn't necessarily know what the word "beta" means, and others may well be scared by it, or judge your service prematurely (not understanding the concept of this 'public testing' phase). Many won't want to be used as guinea pigs.
A:
I don't understand what you mean by "bring in the app" and "one turn drop it". By "bring in the app" do you mean deploy? As for "One turn drop", I totally don't understand it.
As for open betas, that depends on your audience, really. Counterstrike, for example, apparently ran a few closed betas before doing open betas, so here's my suggestion:
Set up a forum in some free forumboard, or set up a topic in a popular gaming forum.
Look for people (whether or not they are in those forums) that you trust, and let them in in a closed beta. This will allow you to iron out serious kinks at first.
If your closed group isn't reporting as much bugs any more, release it to open beta, pointing out ways on how they could give feedback to you.
This is similar to the approach StackOverflow took, but this being a game setting it up on a gaming forum will give the dual benefit of advertising your game and getting some interested beta testers.
A:
I'll try to answer with the limited amount of details you've given.
1: Wether it's open or closed is really only an issue if you have great buzz, and a large group of users hammering down your door, trying toget in on the action.
If this is the case, I think you might get more loyalty and commitment from users in a closed beta.
2: You haven't given many (any) details as to what kind of game you are talking about, so it's pretty hard to answer this one.
/Jonas
 | How to bring in a web app | I run a game and the running is done by hand; I have a few scripts that help me, but essentially it's me doing the work. I am at the moment working on a web app that will allow the users to input some of their game actions directly and thus save me a lot of work.
The problem is that I'm one man working on a moderately sized (upwards of 20 tables) project, the workload isn't the issue, it's that bugs will have slipped in even though I test as I write. So my question is thus two-fold.
Beta testing: I love open betas, but would a closed beta be somehow more effective and give better results?
How should I bring in the app? Should I one turn drop it in and declare it's being used or should I use it alongside the normal construct of the game?
| [
"This is my general approach to testing/launching.\nHow you test/launch depends mostly on:\n\nWhat your application is.\nWho your users are.\n\nIf you application is a technical application and is geared to the technically-minded, the word \"beta\" won't really scare them - but provide an opportunity to test the product before it goes 'live', and help to improve the system. This is the ideal circumstance in which to use either an open or closed beta. It's usually beneficial to start off 'closed' with a group of people you select and trust to bug-find quickly and reliably - after you're more confident that all the critical bugs are gone, open it up with an invite system (for example).\nIf, however, your application is 'trivial' from a technical standpoint (i.e. it's something like Twitter, or Facebook, or Flickr - nothing that is inherently geared towards technical usage), then you're going to have to be more careful in how you plan your testing. Closed testing is most definitely your first port of call, and this should last for longer than a closed beta on a more 'technical' product. The reason? Your 'average Joe' doesn't necessarily know what the word \"beta\" means, and others may well be scared by it, or judge your service prematurely (not understanding the concept of this 'public testing' phase). Many won't want to be used as guinea pigs.\n",
"I don't understand what you mean by \"bring in the app\" and \"one turn drop it\". By \"bring in the app\" do you mean deploy? As for \"One turn drop\", I totally don't understand it.\nAs for open betas, that depends on your audience, really. Counterstrike, for example, apparently run a few closed betas before doing open betas, so here's my suggestion:\n\nSet up a forum in some free forumboard, or set up a topic in a popular gaming forum.\nLook for people (whether or not they are in those forums) that you trust, and let them in in a closed beta. This will allow you to iron out serious kinks at first.\nIf your closed group isn't reporting as much bugs any more, release it to open beta, pointing out ways on how they could give feedback to you.\n\nThis is similar to the approach StackOverflow took, but this being a game setting it up on a gaming forum will give the dual benefit of advertising your game and getting some interested beta testers.\n",
"I'll try to answer with the limited amount of details you've given.\n1: Wether it's open or closed is really only an issue if you have great buzz, and a large group of users hammering down your door, trying toget in on the action. \nIf this is the case, I think you might get more loyalty and commitment from users in a closed beta.\n2: You haven't given many (any) details as to what kind of game you are talking about, so it's pretty hard to answer this one.\n/Jonas\n"
] | [
2,
1,
1
] | [] | [] | [
"beta",
"launching",
"web_applications"
] | stackoverflow_0000029870_beta_launching_web_applications.txt |
Q:
How would you handle errors when using jQuery.ajax()?
When using jQuery's ajax method to submit form data, what is the best way to handle errors?
This is an example of what a call might look like:
$.ajax({
url: "userCreation.ashx",
data: { u:userName, p:password, e:email },
type: "POST",
beforeSend: function(){disableSubmitButton();},
complete: function(){enableSubmitButton();},
error: function(xhr, statusText, errorThrown){
// Work out what the error was and display the appropriate message
},
success: function(data){
displayUserCreatedMessage();
refreshUserList();
}
});
The request might fail for a number of reasons, such as duplicate user name, duplicate email address etc, and the ashx is written to throw an exception when this happens.
My problem seems to be that by throwing an exception the ashx causes the statusText and errorThrown to be undefined.
I can get to the XMLHttpRequest.responseText which contains the HTML that makes up the standard .net error page.
I am finding the page title in the responseText and using the title to work out which error was thrown. Although I have a suspicion that this will fall apart when I enable custom error handling pages.
Should I be throwing the errors in the ashx, or should I be returning a status code as part of the data returned by the call to userCreation.ashx, then using this to decide what action to take?
How do you handle these situations?
A:
For debugging, I usually just create an element (in the case below: <div id="error"></div>) on the page and write the XmlHttpRequest to it:
error: function (XMLHttpRequest, textStatus, errorThrown) {
$("#error").html(XMLHttpRequest.status + "\n<hr />" + XMLHttpRequest.responseText);
}
Then you can see the types of errors that are occurring and capture them correctly:
if (XMLHttpRequest.status === 404) // display some page not found error
if (XMLHttpRequest.status === 500) // display some server error
In your ashx, can you throw a new exception (e.g "Invalid User" etc.) and then just parse that out of the XMLHttpRequest.responseText? For me when I get an error the XMLHttpRequest.responseText isn't the standard Asp.Net error page, it's a JSON object containing the error like this:
{
"Message":"Index was out of range. Must be non-negative and less than the size of the collection.\r\n
Parameter name: index",
"StackTrace":" at System.ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument argument, ExceptionResource resource)\r\n
at etc...",
"ExceptionType":"System.ArgumentOutOfRangeException"
}
Edit: This could be because the function I'm calling is marked with these attributes:
<WebMethod()> _
<ScriptMethod()> _
A:
Should I be throwing the errors in the ashx, or should I be returning a status code as part of the data returned by the call to userCreation.ashx, then using this to decide what action to take? How do you handle these situations?
Personally, if possible, I would prefer to handle this on the server side and work up a message to the user there. This works very well in a scenario where you only want to display a message to the user telling them what happened (validation message, essentially).
However, if you want to perform an action based on what happened on the server, you may want to use a status code and write some javascript to perform various actions based on that status code.
A:
Now I have a problem as to which answer to accept.
Further thought on the problem brings me to the conclusion that I was incorrectly throwing exceptions. Duplicate user names, email addresses etc are expected issues during a sign up process and are therefore not exceptions, but simply errors. In which case I probably shouldn't be throwing exceptions, but returning error codes.
Which leads me to think that irobinson's approach should be the one to take in this case, especially since the form is only a small part of the UI being displayed. I have now implemented this solution and I am returning xml containing a status and an optional message that is to be displayed. I can then use jQuery to parse it and take the appropriate action: -
success: function(data){
var created = $("result", data).attr("success");
if (created == "OK"){
resetNewUserForm();
listUsers('');
} else {
var errorMessage = $("result", data).attr("message");
$("#newUserErrorMessage").text(errorMessage).show();
}
enableNewUserForm();
}
However travis' answer is very detailed and would be perfect during debugging or if I wanted to display an exception message to the user. I am definitely not receiving JSON back, so it is probably down to one of those attributes that travis has listed, as I don't have them in my code.
(I am going to accept irobinson's answer, but upvote travis' answer. It just feels strange to be accepting an answer that doesn't have the most votes.)
| How would you handle errors when using jQuery.ajax()? | When using jQuery's ajax method to submit form data, what is the best way to handle errors?
This is an example of what a call might look like:
$.ajax({
url: "userCreation.ashx",
data: { u:userName, p:password, e:email },
type: "POST",
beforeSend: function(){disableSubmitButton();},
complete: function(){enableSubmitButton();},
error: function(xhr, statusText, errorThrown){
// Work out what the error was and display the appropriate message
},
success: function(data){
displayUserCreatedMessage();
refreshUserList();
}
});
The request might fail for a number of reasons, such as duplicate user name, duplicate email address etc, and the ashx is written to throw an exception when this happens.
My problem seems to be that by throwing an exception the ashx causes the statusText and errorThrown to be undefined.
I can get to the XMLHttpRequest.responseText which contains the HTML that makes up the standard .net error page.
I am finding the page title in the responseText and using the title to work out which error was thrown. Although I have a suspicion that this will fall apart when I enable custom error handling pages.
Should I be throwing the errors in the ashx, or should I be returning a status code as part of the data returned by the call to userCreation.ashx, then using this to decide what action to take?
How do you handle these situations?
| [
"For debugging, I usually just create an element (in the case below: <div id=\"error\"></div>) on the page and write the XmlHttpRequest to it:\nerror: function (XMLHttpRequest, textStatus, errorThrown) {\n $(\"#error\").html(XMLHttpRequest.status + \"\\n<hr />\" + XMLHttpRequest.responseText);\n}\n\nThen you can see the types of errors that are occurring and capture them correctly:\nif (XMLHttpRequest.status === 404) // display some page not found error\nif (XMLHttpRequest.status === 500) // display some server error\n\nIn your ashx, can you throw a new exception (e.g \"Invalid User\" etc.) and then just parse that out of the XMLHttpRequest.responseText? For me when I get an error the XMLHttpRequest.responseText isn't the standard Asp.Net error page, it's a JSON object containing the error like this:\n{\n\"Message\":\"Index was out of range. Must be non-negative and less than the size of the collection.\\r\\n\nParameter name: index\",\n\"StackTrace\":\" at System.ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument argument, ExceptionResource resource)\\r\\n \nat etc...\",\n\"ExceptionType\":\"System.ArgumentOutOfRangeException\"\n}\n\nEdit: This could be because the function I'm calling is marked with these attributes:\n<WebMethod()> _\n<ScriptMethod()> _\n\n",
"\nShould I be throwing the errors in the\n ashx, or should I be returning a\n status code as part of the data\n returned by the call to\n userCreation.ashx, then using this to\n decide what action to take? How do you\n handle these situations?\n\nPersonally, if possible, I would prefer to handle this on the server side and work up a message to the user there. This works very well in a scenario where you only want to display a message to the user telling them what happened (validation message, essentially).\nHowever, if you want to perform an action based on what happened on the server, you may want to use a status code and write some javascript to perform various actions based on that status code.\n",
"Now I have a problem as to which answer to accept.\nFurther thought on the problem brings me to the conclusion that I was incorrectly throwing exceptions. Duplicate user names, email addresses etc are expected issues during a sign up process and are therefore not exceptions, but simply errors. In which case I probably shouldn't be throwing exceptions, but returning error codes.\nWhich leads me to think that irobinson's approach should be the one to take in this case, especially since the form is only a small part of the UI being displayed. I have now implemented this solution and I am returning xml containing a status and an optional message that is to be displayed. I can then use jQuery to parse it and take the appropriate action: -\nsuccess: function(data){\n var created = $(\"result\", data).attr(\"success\");\n if (created == \"OK\"){\n resetNewUserForm();\n listUsers('');\n } else {\n var errorMessage = $(\"result\", data).attr(\"message\");\n $(\"#newUserErrorMessage\").text(errorMessage).show();\n }\n enableNewUserForm();\n}\n\nHowever travis' answer is very detailed and would be perfect during debugging or if I wanted to display an exception message to the user. I am definitely not receiving JSON back, so it is probably down to one of those attributes that travis has listed, as I don't have them in my code.\n(I am going to accept irobinson's answer, but upvote travis' answer. It just feels strange to be accepting an answer that doesn't have the most votes.)\n"
] | [
20,
16,
4
] | [] | [] | [
"ajax",
"asp.net",
"jquery"
] | stackoverflow_0000028529_ajax_asp.net_jquery.txt |
Q:
Video Codec startcodes
Does anybody know (or know of a resource that contains) a list of frame start codes for common video formats (MPEG-1/2/4, .wmv, .mov etc.).
For example, an MPEG-1 video frame will (I think) always start with "00 00 01 00".
In essence I'd like to know these so that I could write a program that can automatically find the start of frames throughout a video for a number of different video formats.
A:
What do you want to achieve?
Is this a question about how video container types are structured?
See for example : http://www.daubnet.com/formats/AVI.html
That is a description how avi files are structured. Google may help you in finding other container file formats.
When you record a video, it is normally composed of individual frames, think of individual bitmap files in a directory.
To only have 1 file of a video, this stream of frames is put in a container, which has a header describing the contents and a certain layout in which the frames are stored sequentially in the file.
Simple example for my own container :
{
struct header
{
unsigned int frametype;
unsigned int framesize;
};
byte* readFrame( header* pHdr, int frameNum )
{
byte* pFirstFrame = ((byte*) pHdr) + sizeof( header );
        return pFirstFrame + frameNum * pHdr->framesize;
}
}
There are several other container types. AVI is only one of these container types.
To get to the individual frames you must interpret the header in the file and then based on that information calculate the position of the frame you want to parse.
I posted you a link to the definition of the avi file format. There are other places where you can get information on the mpeg/mkv/ogm file formats.
You need this information to get your program to work.
On a side note, compressed formats do not save all individual frames independently. They save an individual frame and then several intermediate frames, which only contain the information on how the current frame differs from the last complete frame. So you cannot extract complete frames at every frame number.
| Video Codec startcodes | Does anybody know (or know of a resource that contains) a list of frame start codes for common video formats (MPEG-1/2/4, .wmv, .mov etc.).
For example, an MPEG-1 video frame will (I think) always start with "00 00 01 00".
In essence I'd like to know these so that I could write a program that can automatically find the start of frames throughout a video for a number of different video formats.
| [
"What do you want to achieve?\nIs this a question how video container types are structured?\nSee for example : http://www.daubnet.com/formats/AVI.html\nThat is a description how avi files are structured. Google may help you in finding other container file formats.\nWhen you record a video, it is normally composed of individual frames, think of individual bitmap files in a directory.\nTo only have 1 file of a video, this stream of frames is put in a container, which has a header describing the contents and a certain layout in which the frames are stored sequentially in the file.\nSimple example for my own container :\n{\n struct header\n {\n unsigned int frametype;\n unsigned int framesize;\n };\n\n byte* readFrame( header* pHdr, int frameNum )\n {\n byte* pFirstFrame = ((byte*) pHdr) + sizeof( header );\n return pFristFrame + frameNum * pHdr->framesize;\n }\n}\n\nThere are several other container types. AVI is only one of these container types.\nTo get to the individual frames you must interpret the header in the file and then based on that information calculate the position of the frame you want to parse.\nI posted you a link to the definition of the avi file format. There are other places where you can get information on the mpeg/mkv/ogm file formats.\nYou need this information to get your program to work.\nOn a side note, compressed formats do not safe all individual frames independently. They safe an individual frame and then several intermediate frames, which only contain the information on how the current frame differs from the last complete frame. So you cannot extract complete frames at every frame number.\n"
] | [
1
] | [] | [] | [
"codec",
"video"
] | stackoverflow_0000029993_codec_video.txt |
Q:
How do I select an XML-node based on its content?
How can I use XPath to select an XML-node based on its content?
If I e.g. have the following xml and I want to select the <author>-node that contains Ritchie to get the author's full name:
<books>
<book isbn='0131103628'>
<title>The C Programming Language</title>
<authors>
<author>Ritchie, Dennis M.</author>
<author>Kernighan, Brian W.</author>
</authors>
</book>
<book isbn='1590593898'>
<title>Joel on Software</title>
<authors>
<author>Spolsky, Joel</author>
</authors>
</book>
</books>
A:
/books/book/authors/author[contains(., 'Ritchie')]
or
//author[contains(., 'Ritchie')]
A:
The XPath for this is:
/books/book/authors/author[contains(., 'Ritchie')]
In C# the following code would return "Ritchie, Dennis M.":
xmlDoc.SelectSingleNode("/books/book/authors/author[contains(., 'Ritchie')]").InnerText;
A:
//author[contains(text(), 'Ritchie')]
| How do I select an XML-node based on its content? | How can I use XPath to select an XML-node based on its content?
If I e.g. have the following xml and I want to select the <author>-node that contains Ritchie to get the author's full name:
<books>
<book isbn='0131103628'>
<title>The C Programming Language</title>
<authors>
<author>Ritchie, Dennis M.</author>
<author>Kernighan, Brian W.</author>
</authors>
</book>
<book isbn='1590593898'>
<title>Joel on Software</title>
<authors>
<author>Spolsky, Joel</author>
</authors>
</book>
</books>
| [
"/books/book/authors/author[contains(., 'Ritchie')]\n\nor\n//author[contains(., 'Ritchie')]\n\n",
"The XPath for this is: \n/books/book/authors/author[contains(., 'Ritchie')]\n\nIn C# the following code would return \"Ritchie, Dennis M.\":\nxmlDoc.SelectSingleNode(\"/books/book/authors/author[contains(., 'Ritchie')]\").InnerText;\n\n",
"//author[contains(text(), 'Ritchie')]\n\n"
] | [
23,
4,
4
] | [] | [] | [
"xml",
"xpath"
] | stackoverflow_0000030018_xml_xpath.txt |
Q:
Use QItemDelegate to show image thumbnails
What's the best way to use QT4's QItemDelegate to show thumbnails for images in a view?
Specifically, how do you stop the item delegate from blocking when generating pixmaps from very large image files (> 500MB)?
Can anyone link to some example code that achieves this? Then again, perhaps this isn't the place to look for Qt-specific code.
A:
You're doing it wrong if you are generating pixmaps inside any of the delegate methods (paint, draw...).
Try to generate the thumbnails only once (on worker thread or maybe not even at runtime, if possible) and have the delegate just display them for the appropriate role.
If you do it at runtime display a default picture until you have the thumbnail generated (like web browsers do with pictures that are not yet downloaded).
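A minimal sketch of the delegate side of that (it assumes the model exposes the finished thumbnail under Qt::DecorationRole once a worker thread has produced it; the placeholder resource path is illustrative):
#include <QItemDelegate>
#include <QModelIndex>
#include <QPainter>
#include <QPixmap>
#include <QStyleOptionViewItem>
#include <QVariant>

class ThumbnailDelegate : public QItemDelegate
{
public:
    void paint(QPainter *painter, const QStyleOptionViewItem &option,
               const QModelIndex &index) const
    {
        QPixmap thumb = qvariant_cast<QPixmap>(index.data(Qt::DecorationRole));
        if (thumb.isNull())
            thumb = QPixmap(":/images/placeholder.png");  // shown until the worker delivers the real thumbnail
        QPixmap scaled = thumb.scaled(option.rect.size(),
                                      Qt::KeepAspectRatio, Qt::SmoothTransformation);
        painter->drawPixmap(option.rect.topLeft(), scaled);
    }
};
The expensive decode of the 500MB source never happens inside paint(); the worker fills the model and the model signals dataChanged() when each thumbnail is ready.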
| Use QItemDelegate to show image thumbnails | What's the best way to use QT4's QItemDelegate to show thumbnails for images in a view?
Specifically, how do you stop the item delegate from blocking when generating pixmaps from very large image files (> 500MB)?
Can anyone link to some example code that achieves this? Then again, perhaps this isn't the place to look for Qt-specific code.
| [
"You're doing it wrong if you are generating pixmaps inside any of the delegate methods (paint, draw...).\nTry to generate the thumbnails only once (on worker thread or maybe not even at runtime, if possible) and have the delegate just display them for the appropriate role.\nIf you do it at runtime display a default picture until you have the thumbnail generated (like web browsers do with pictures that are not yet downloaded).\n"
] | [
4
] | [] | [] | [
"c++",
"image",
"qitemdelegate",
"qt",
"thumbnails"
] | stackoverflow_0000024212_c++_image_qitemdelegate_qt_thumbnails.txt |
Q:
How to compare an html entity with jQuery
I have the following html code:
<h3 id="headerid"><span onclick="expandCollapse('headerid')">⇑</span>Header title</h3>
I would like to toggle between up arrow and down arrow each time the user clicks the span tag.
function expandCollapse(id) {
var arrow = $("#"+id+" span").html(); // I have tried with .text() too
if(arrow == "⇓") {
$("#"+id+" span").html("⇑");
} else {
$("#"+id+" span").html("⇓");
}
}
My function is always going down the else path. If I do a javascript alert of the arrow variable I am getting the html entity represented as an arrow. How can I tell jQuery to interpret the arrow variable as a string and not as html?
A:
When the HTML is parsed, what JQuery sees in the DOM is a UPWARDS DOUBLE ARROW ("⇑"), not the entity reference. Thus, in your Javascript code you should test for "⇑" or "\u21d1". Also, you need to change what you're switching to:
function expandCollapse(id) {
var arrow = $("#"+id+" span").html();
if(arrow == "\u21d1") {
$("#"+id+" span").html("\u21d3");
} else {
$("#"+id+" span").html("\u21d1");
}
}
A:
If you do an alert of arrow what does it return? Does it return the exact string that you're matching against? If you are getting the actual characters '⇓' and '⇑' you may have to match it against "\u21D1" and "\u21D3".
Also, you may want to try the numeric references &#8657; and &#8659; since not all browsers support the named entities.
Update: here's a fully working example:
http://jsbin.com/edogop/3/edit#html,live
window.expandCollapse = function (id) {
var $arrowSpan = $("#" + id + " span"),
arrowCharCode = $arrowSpan.text().charCodeAt(0);
// 8659 is the unicode value of the html entity
if (arrowCharCode === 8659) {
$arrowSpan.html("⇑");
} else {
$arrowSpan.html("⇓");
}
// one liner:
//$("#" + id + " span").html( ($("#" + id + " span").text().charCodeAt(0) === 8659) ? "⇑" : "⇓" );
};
A:
Check out the .toggle() effect.
Here is something similar i was playing with earlier.
HTML:
<div id="inplace">
<div id="myStatic">Hello World!</div>
<div id="myEdit" style="display: none">
<input id="myNewTxt" type="text" />
<input id="myOk" type="button" value="OK" />
<input id="myX" type="button" value="X" />
</div></div>
SCRIPT:
$("#myStatic").bind("click", function(){
$("#myNewTxt").val($("#myStatic").text());
$("#myStatic,#myEdit").toggle();
});
$("#myOk").click(function(){
$("#myStatic").text($("#myNewTxt").val());
$("#myStatic,#myEdit").toggle();
});
$("#myX").click(function(){
$("#myStatic,#myEdit").toggle();
});
A:
Use a class to signal the current state of the span.
The html could look like this
<h3 id="headerId"><span class="upArrow">⇑</span>Header title</h3>
Then in the javascript you do
$( '.upArrow, .downArrow' ).click( function() {
  var span = $( this );
  if ( span.hasClass( 'upArrow' ) )
    span.text( "⇓" );
  else
    span.text( "⇑" );
  span.toggleClass( 'upArrow' );
  span.toggleClass( 'downArrow' );
} );
This may not be the best way, but it should work. Didn't test it, though.
A:
Maybe you're not getting an exact match because the browser is lower-casing the entity or something. Try using a caret (^) and a lower-case "v" just for testing.
Edited - My first theory was plain wrong.
| How to compare an html entity with jQuery | I have the following html code:
<h3 id="headerid"><span onclick="expandCollapse('headerid')">⇑</span>Header title</h3>
I would like to toggle between up arrow and down arrow each time the user clicks the span tag.
function expandCollapse(id) {
var arrow = $("#"+id+" span").html(); // I have tried with .text() too
if(arrow == "⇓") {
$("#"+id+" span").html("⇑");
} else {
$("#"+id+" span").html("⇓");
}
}
My function is going always the else path. If I make a javacript:alert of arrow variable I am getting the html entity represented as an arrow. How can I tell jQuery to interpret the arrow variable as a string and not as html.
| [
"When the HTML is parsed, what JQuery sees in the DOM is a UPWARDS DOUBLE ARROW (\"⇑\"), not the entity reference. Thus, in your Javascript code you should test for \"⇑\" or \"\\u21d1\". Also, you need to change what you're switching to:\nfunction expandCollapse(id) {\n var arrow = $(\"#\"+id+\" span\").html();\n if(arrow == \"\\u21d1\") { \n $(\"#\"+id+\" span\").html(\"\\u21d3\"); \n } else { \n $(\"#\"+id+\" span\").html(\"\\u21d1\"); \n }\n}\n\n",
"If you do an alert of arrow what does it return? Does it return the exact string that you're matching against? If you are getting the actual characters '⇓' and '⇑' you may have to match it against \"\\u21D1\" and \"\\u21D3\".\nAlso, you may want to try ⇑ and ⇓ since not all browsers support those entities.\nUpdate: here's a fully working example:\nhttp://jsbin.com/edogop/3/edit#html,live\nwindow.expandCollapse = function (id) { \n var $arrowSpan = $(\"#\" + id + \" span\"),\n arrowCharCode = $arrowSpan.text().charCodeAt(0);\n\n // 8659 is the unicode value of the html entity\n if (arrowCharCode === 8659) {\n $arrowSpan.html(\"⇑\"); \n } else { \n $arrowSpan.html(\"⇓\"); \n }\n\n // one liner:\n //$(\"#\" + id + \" span\").html( ($(\"#\" + id + \" span\").text().charCodeAt(0) === 8659) ? \"⇑\" : \"⇓\" );\n};\n\n",
"Check out the .toggle() effect.\nHere is something similar i was playing with earlier.\nHTML:\n<div id=\"inplace\">\n<div id=\"myStatic\">Hello World!</div>\n<div id=\"myEdit\" style=\"display: none\">\n<input id=\"myNewTxt\" type=\"text\" />\n<input id=\"myOk\" type=\"button\" value=\"OK\" />\n<input id=\"myX\" type=\"button\" value=\"X\" />\n</div></div>\n\nSCRIPT:\n $(\"#myStatic\").bind(\"click\", function(){\n $(\"#myNewTxt\").val($(\"#myStatic\").text());\n $(\"#myStatic,#myEdit\").toggle();\n });\n $(\"#myOk\").click(function(){\n $(\"#myStatic\").text($(\"#myNewTxt\").val());\n $(\"#myStatic,#myEdit\").toggle();\n });\n $(\"#myX\").click(function(){\n $(\"#myStatic,#myEdit\").toggle();\n });\n\n",
"Use a class to signal the current state of the span. \nThe html could look like this\n<h3 id=\"headerId\"><span class=\"upArrow\">⇑</span>Header title</h3>\n\nThen in the javascript you do\n$( '.upArrow, .downArrow' ).click( function( span ) {\n if ( span.hasClass( 'upArrow' ) )\n span.text( \"⇓\" );\n else\n span.text( \"⇑\" );\n span.toggleClass( 'upArrow' );\n span.toggleClass( 'downArrow' );\n} );\n\nThis may not be the best way, but it should work. Didnt test it tough\n",
"Maybe you're not getting an exact match because the browser is lower-casing the entity or something. Try using a carat (^) and lower-case \"v\" just for testing.\nEdited - My first theory was plain wrong.\n"
] | [
17,
3,
1,
1,
0
] | [] | [] | [
"html_entities",
"javascript",
"jquery"
] | stackoverflow_0000030003_html_entities_javascript_jquery.txt |
Q:
Simple programming practice (Fizz Buzz, Print Primes)
I want to practice my skills away from a keyboard (i.e. pen and paper) and I'm after simple practice questions like Fizz Buzz, Print the first N primes.
What are your favourite simple programming questions?
A:
I've been working on http://projecteuler.net/
A:
Problem:
Insert + or - sign anywhere between the digits 123456789 in such a way that the expression evaluates to 100. The condition is that the order of the digits must not be changed.
e.g.: 1 + 2 + 3 - 4 + 5 + 6 + 78 + 9 = 100
Programming Problem:
Write a program in your favorite language which outputs all possible solutions of the above problem.
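Since the problem leaves the language open ("your favorite language"), here is a minimal brute-force sketch in Python: it tries all 3^8 combinations of '+', '-', or digit concatenation in the eight gaps between the digits (the function and variable names are my own):
from itertools import product

def solutions(target=100):
    digits = "123456789"
    found = []
    # choose '+', '-', or '' (concatenate) for each of the 8 gaps between digits
    for ops in product(["+", "-", ""], repeat=8):
        expr = digits[0]
        for op, d in zip(ops, digits[1:]):
            expr += op + d
        # expr contains only digits, '+' and '-', so eval is safe here
        if eval(expr) == target:
            found.append(expr)
    return found

for expr in solutions():
    print(expr + " = 100")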
A:
If you want pen and paper kinds of exercises, I'd recommend more designing than coding.
Actually, coding on paper sucks and you learn almost nothing from it. The work environment does matter: typing on a computer, compiling, seeing what errors you've made, and refactoring here and there are things a piece of paper just can't give you. So while coding on paper is an interesting mental exercise, it is not practical and will not improve your coding skills much.
On the other hand, you can design the architecture of a medium or even complex application by hand on paper. In fact, I usually do. Engineering tools (such as Enterprise Architect) are not good enough to replace good old by-hand diagrams.
Good projects could be: How would you design a game engine? Classes, threads, storage, physics, the data structures which will hold everything, and so on. How would you start a search engine? How would you design a pattern recognition system?
I find those kinds of problems much more rewarding than any paper coding you can do.
A:
There are some good examples of simple-ish programming questions in Steve Yegge's article Five Essential Phone Screen Questions (under Area Number One: Coding). I find these are pretty good for doing on pen and paper. Also, the questions under OOP Design in the same article can be done on pen and paper (or even in your head) and are, I think, good exercises to do.
A:
Towers of Hanoi is great for practicing recursion.
I'd also do a search on sample programming interview questions.
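For the Towers of Hanoi suggestion, the recursive solution is short enough to write out by hand and then check at a keyboard; a sketch in Python (the pole labels are arbitrary):
def hanoi(n, source="A", target="C", spare="B"):
    # move n disks from source to target, using spare as the auxiliary pole
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)   # clear the way
    print("move disk %d from %s to %s" % (n, source, target))
    hanoi(n - 1, spare, target, source)   # stack the smaller tower back on top

hanoi(3)   # prints the 7 moves for three disks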
A:
Quite a few online sites for competitive programming are full of sample questions/challenges, sorted by 'difficulty'. Quite often, the simpler categories in the 'algorithms' questions would suit you I think.
For example, check out TopCoder (algorithms section)!
Apart from that, 2 samples:
You are given a list of N points in the plane by their coordinates (x_i, y_i), and a number R>0. Output the maximum number out of the N given points that can be simultaneously covered by a disk of radius R (for bonus points: complexity?).
You are given an array of N numbers a1 to aN, and you want to compute a1 * a2 * ... * aN / ai for all values of i (so the output is again an array of N elements) without using division. Provide a (non-naive) method (complexity should be in O(N) multiplications).
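For the second problem, the usual non-naive approach is two passes of running products (everything to the left of i, then everything to the right), which stays within O(N) multiplications and never divides. A sketch in Python (the names are mine):
def products_except_self(a):
    n = len(a)
    result = [1] * n
    left = 1
    for i in range(n):                 # result[i] = product of a[0..i-1]
        result[i] = left
        left *= a[i]
    right = 1
    for i in range(n - 1, -1, -1):     # multiply in the product of a[i+1..n-1]
        result[i] *= right
        right *= a[i]
    return result

print(products_except_self([1, 2, 3, 4]))   # [24, 12, 8, 6]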
A:
I also like Project Euler, but I would like to point out that the questions get really tricky really fast. After the first 20 questions or so, they start to be problems most people won't be able to figure out in half an hour. Another problem is that a lot of them deal with math involving really large numbers that don't fit into standard integer or even long variable types.
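As a rough illustration of how large "really large" gets, here is a quick check in Python, whose integers are arbitrary precision and so sidestep the fixed-width issue:
n = 2 ** 1000                        # the sort of number several Euler problems involve
print(len(str(n)))                   # 302 digits - far beyond a 64-bit integer
print(sum(int(d) for d in str(n)))   # summing its digits is a one-liner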
| Simple programming practice (Fizz Buzz, Print Primes) | I want to practice my skills away from a keyboard (i.e. pen and paper) and I'm after simple practice questions like Fizz Buzz, Print the first N primes.
What are your favourite simple programming questions?
| [
"I've been working on http://projecteuler.net/\n",
"Problem:\nInsert + or - sign anywhere between the digits 123456789 in such a way that the expression evaluates to 100. The condition is that the order of the digits must not be changed.\ne.g.: 1 + 2 + 3 - 4 + 5 + 6 + 78 + 9 = 100\nProgramming Problem:\nWrite a program in your favorite language which outputs all possible solutions of the above problem.\n",
"If you want a pen and paper kind of exercises I'd recommend more designing than coding.\nActually coding in paper sucks and it lets you learn almost nothing. Work environment does matter so typing on a computer, compiling, seeing what errors you've made, using refactor here and there, just doesn't compare to what you can do on a piece of paper and so, what you can do on a piece of paper, while being an interesting mental exercise is not practical, it will not improve your coding skills so much.\nOn the other hand, you can design the architecture of a medium or even complex application by hand in a paper. In fact, I usually do. Engineering tools (such as Enterprise Architect) are not good enough to replace the good all by-hand diagrams.\nGood projects could be, How would you design a game engine? Classes, Threads, Storage, Physics, the data structures which will hold everything and so on. How would you start a search engine? How would you design an pattern recognition system?\nI find that kind of problems much more rewarding than any paper coding you can do.\n",
"There are some good examples of simple-ish programming questions in Steve Yegge's article Five Essential Phone Screen Questions (under Area Number One: Coding). I find these are pretty good for doing on pen and paper. Also, the questions under OOP Design in the same article can be done on pen and paper (or even in your head) and are, I think, good exercises to do.\n",
"Towers of Hannoi is great for practice on recursion.\nI'd also do a search on sample programming interview questions. \n",
"Quite a few online sites for competitive programming are full of sample questions/challenges, sorted by 'difficulty'. Quite often, the simpler categories in the 'algorithms' questions would suit you I think.\nFor example, check out TopCoder (algorithms section)!\nApart from that, 2 samples:\n\nYou are given a list of N points in the plane by their coordinates (x_i, y_i), and a number R>0. Output the maximum number out of the N given points that can be simultaneously covered by a disk of radius R (for bonus points: complexity?).\nYou are given an array of N numbers a1 to aN, and you want to compute a1 * a2 * ... * aN / ai for all values of i (so the output is again an array of N elements) without using division. Provide a (non-naive) method (complexity should be in O(N) multiplications).\n\n",
"I also like project euler, but I would like to point out that the questions get really tricky really fast. After the first 20 questions or so, they start to be problems most people won't be able to figure out in 1/2 an hour. Another problem is that a lot of them deal with math with really large numbers, that don't fit into standard integer or even long variable types. \n"
] | [
12,
6,
5,
3,
1,
1,
1
] | [] | [] | [
"language_agnostic"
] | stackoverflow_0000029995_language_agnostic.txt |
Q:
What is the purpose of the designer files in Visual Studio 2008 Web application projects?
There is a conversion process that is needed when migrating Visual Studio 2005 web site to Visual Studio 2008 web application projects.
It looks like VS2008 is creating a .designer. file for every aspx when you right click on a file or the project itself in Solution Explorer and select 'Convert to Web Application.'
What is the purpose of these designer files? And these won't exist on a release build of the web application, they are just intermediate files used during development, hopefully?
A:
They hold all the form designer stuff that used to go in the #Region " Web Form Designer Generated Code " section of the code. Instead of putting it in the .aspx.vb file where people might edit it (mistakenly or not), it's been moved to a separate file, so that you don't ever have to look at it.
A:
What kibbee said.
For the part of your question about existing on a release build, it depends on what kind of web site you have. If you have a pre-compiled web site, then none of the code files (.vb, .cs, etc.) need to be deployed to the server. They are compiled into .dlls (assemblies) and deployed that way along with the .as*x files.
| What is the purpose of the designer files in Visual Studio 2008 Web application projects? | There is a conversion process that is needed when migrating Visual Studio 2005 web site to Visual Studio 2008 web application projects.
It looks like VS2008 is creating a .designer. file for every aspx when you right click on a file or the project itself in Solution Explorer and select 'Convert to Web Application.'
What is the purpose of these designer files? And these won't exist on a release build of the web application, they are just intermediate files used during development, hopefully?
| [
"They hold all the form designer stuff that used to go in the #Region \" Web Form Designer Generated Code \" section of the code. instead of putting it in the .aspx.vb file where people might edit it (mistakenly or not), it's been moved to a separate file, so that you don't have ever look at it.\n",
"What kibbee said. \nFor the part of your question about existing on a release build, it depends on what kind of web site you have. If you have a pre-compiled web site, then none of code files (.vb, .cs, etc) need to be deployed the server. They are compiled into .dlls (assemblies) and deployed that way along with the .as*x files.\n"
] | [
5,
1
] | [] | [] | [
"visual_studio_2008",
"web_applications"
] | stackoverflow_0000028481_visual_studio_2008_web_applications.txt |
Q:
DoDragDrop and MouseUp
Is there an easy way to ensure that after a drag-and-drop fails to complete, the MouseUp event isn't eaten up and ignored by the framework?
I have found a blog post describing one mechanism, but it involves a good deal of manual bookkeeping, including status flags, MouseMove events, manual "mouse leave" checking, etc. all of which I would rather not have to implement if it can be avoided.
A:
I was recently wanting to put Drag and Drop functionality in my project and I hadn't come across this issue, but I was intrigued and really wanted to see if I could come up with a better method than the one described in the page you linked to. I hope I clearly understood everything you wanted to do and overall I think I succeeded in solving the problem in a much more elegant and simple fashion.
On a quick side note, for problems like this it would be great if you provide some code so we can see exactly what it is you are trying to do. I say this only because I assumed a few things about your code in my solution...so hopefully it's pretty close.
Here's the code, which I will explain below:
this.LabelDrag.QueryContinueDrag += new System.Windows.Forms.QueryContinueDragEventHandler(this.LabelDrag_QueryContinueDrag);
this.LabelDrag.MouseDown += new System.Windows.Forms.MouseEventHandler(this.LabelDrag_MouseDown);
this.LabelDrag.MouseUp += new System.Windows.Forms.MouseEventHandler(this.LabelDrag_MouseUp);
this.LabelDrop.DragDrop += new System.Windows.Forms.DragEventHandler(this.LabelDrop_DragDrop);
this.LabelDrop.DragEnter += new System.Windows.Forms.DragEventHandler(this.LabelMain_DragEnter);
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
}
private void LabelDrop_DragDrop(object sender, DragEventArgs e)
{
LabelDrop.Text = e.Data.GetData(DataFormats.Text).ToString();
}
private void LabelMain_DragEnter(object sender, DragEventArgs e)
{
if (e.Data.GetDataPresent(DataFormats.Text))
e.Effect = DragDropEffects.Copy;
else
e.Effect = DragDropEffects.None;
}
private void LabelDrag_MouseDown(object sender, MouseEventArgs e)
{
//EXTREMELY IMPORTANT - MUST CALL LabelDrag's DoDragDrop method!!
//Calling the Form's DoDragDrop WILL NOT allow QueryContinueDrag to fire!
((Label)sender).DoDragDrop(TextMain.Text, DragDropEffects.Copy);
}
private void LabelDrag_MouseUp(object sender, MouseEventArgs e)
{
LabelDrop.Text = "LabelDrag_MouseUp";
}
private void LabelDrag_QueryContinueDrag(object sender, QueryContinueDragEventArgs e)
{
//Get rect of LabelDrop
Rectangle rect = new Rectangle(LabelDrop.Location, new Size(LabelDrop.Width, LabelDrop.Height));
//If the left mouse button is up and the mouse is not currently over LabelDrop
if (Control.MouseButtons != MouseButtons.Left && !rect.Contains(PointToClient(Control.MousePosition)))
{
//Cancel the DragDrop Action
e.Action = DragAction.Cancel;
//Manually fire the MouseUp event
LabelDrag_MouseUp(sender, new MouseEventArgs(Control.MouseButtons, 0, Control.MousePosition.X, Control.MousePosition.Y, 0));
}
}
}
I have left out most of the designer code, but included the event handler hook-up code so you can be sure what is linked to what. In my example, the drag/drop is occurring between the labels LabelDrag and LabelDrop.
The main piece of my solution is using the QueryContinueDrag event. This event fires when the keyboard or mouse state changes after DoDragDrop has been called on that control. You may already be doing this, but it is very important that you call the DoDragDrop method of the control that is your source and not the method associated with the form. Otherwise QueryContinueDrag will NOT fire!
One thing to note is that QueryContinueDrag will actually fire when you release the mouse on the drop control so we need to make sure we allow for that. This is handled by checking that the Mouse position (retrieved with the global Control.MousePosition property) is inside of the LabelDrop control rectangle. You must also be sure to convert MousePosition to a point relative to the Client Window with PointToClient as Control.MousePosition returns a screen relative position.
So by checking that the mouse is not over the drop control and that the mouse button is now up we have effectively captured a MouseUp event for the LabelDrag control! :) Now, you could just do whatever processing you want to do here, but if you already have code you are using in the MouseUp event handler, this is not efficient. So just call your MouseUp event from here, passing it the necessary parameters and the MouseUp handler won't ever know the difference.
Just a note though, as I call DoDragDrop from within the MouseDown event handler in my example, this code should never actually get a direct MouseUp event to fire. I just put that code in there to show that it is possible to do it.
Hope that helps!
| DoDragDrop and MouseUp | Is there an easy way to ensure that after a drag-and-drop fails to complete, the MouseUp event isn't eaten up and ignored by the framework?
I have found a blog post describing one mechanism, but it involves a good deal of manual bookkeeping, including status flags, MouseMove events, manual "mouse leave" checking, etc. all of which I would rather not have to implement if it can be avoided.
| [
"I was recently wanting to put Drag and Drop functionality in my project and I hadn't come across this issue, but I was intrigued and really wanted to see if I could come up with a better method than the one described in the page you linked to. I hope I clearly understood everything you wanted to do and overall I think I succeeded in solving the problem in a much more elegant and simple fashion.\nOn a quick side note, for problems like this it would be great if you provide some code so we can see exactly what it is you are trying to do. I say this only because I assumed a few things about your code in my solution...so hopefully it's pretty close.\nHere's the code, which I will explain below:\nthis.LabelDrag.QueryContinueDrag += new System.Windows.Forms.QueryContinueDragEventHandler(this.LabelDrag_QueryContinueDrag);\nthis.LabelDrag.MouseDown += new System.Windows.Forms.MouseEventHandler(this.LabelDrag_MouseDown);\nthis.LabelDrag.MouseUp += new System.Windows.Forms.MouseEventHandler(this.LabelDrag_MouseUp);\n\nthis.LabelDrop.DragDrop += new System.Windows.Forms.DragEventHandler(this.LabelDrop_DragDrop);\nthis.LabelDrop.DragEnter += new System.Windows.Forms.DragEventHandler(this.LabelMain_DragEnter);\n\npublic partial class Form1 : Form\n{\n public Form1()\n {\n InitializeComponent();\n }\n\n private void LabelDrop_DragDrop(object sender, DragEventArgs e)\n {\n LabelDrop.Text = e.Data.GetData(DataFormats.Text).ToString();\n }\n\n\n private void LabelMain_DragEnter(object sender, DragEventArgs e)\n {\n if (e.Data.GetDataPresent(DataFormats.Text))\n e.Effect = DragDropEffects.Copy;\n else\n e.Effect = DragDropEffects.None;\n\n }\n\n private void LabelDrag_MouseDown(object sender, MouseEventArgs e)\n {\n //EXTREMELY IMPORTANT - MUST CALL LabelDrag's DoDragDrop method!!\n //Calling the Form's DoDragDrop WILL NOT allow QueryContinueDrag to fire!\n ((Label)sender).DoDragDrop(TextMain.Text, DragDropEffects.Copy); \n }\n\n private void LabelDrag_MouseUp(object sender, MouseEventArgs e)\n {\n LabelDrop.Text = \"LabelDrag_MouseUp\";\n }\n\n private void LabelDrag_QueryContinueDrag(object sender, QueryContinueDragEventArgs e)\n {\n //Get rect of LabelDrop\n Rectangle rect = new Rectangle(LabelDrop.Location, new Size(LabelDrop.Width, LabelDrop.Height));\n\n //If the left mouse button is up and the mouse is not currently over LabelDrop\n if (Control.MouseButtons != MouseButtons.Left && !rect.Contains(PointToClient(Control.MousePosition)))\n {\n //Cancel the DragDrop Action\n e.Action = DragAction.Cancel;\n //Manually fire the MouseUp event\n LabelDrag_MouseUp(sender, new MouseEventArgs(Control.MouseButtons, 0, Control.MousePosition.X, Control.MousePosition.Y, 0));\n }\n }\n\n}\n\nI have left out most of the designer code, but included the Event Handler link up code so you can be sure what is linked to what. In my example, the drag/drop is occuring between the labels LabelDrag and LabelDrop.\nThe main piece of my solution is using the QueryContinueDrag event. This event fires when the keyboard or mouse state changes after DoDragDrop has been called on that control. You may already be doing this, but it is very important that you call the DoDragDrop method of the control that is your source and not the method associated with the form. Otherwise QueryContinueDrag will NOT fire!\nOne thing to note is that QueryContinueDrag will actually fire when you release the mouse on the drop control so we need to make sure we allow for that. 
This is handled by checking that the Mouse position (retrieved with the global Control.MousePosition property) is inside of the LabelDrop control rectangle. You must also be sure to convert MousePosition to a point relative to the Client Window with PointToClient as Control.MousePosition returns a screen relative position.\nSo by checking that the mouse is not over the drop control and that the mouse button is now up we have effectively captured a MouseUp event for the LabelDrag control! :) Now, you could just do whatever processing you want to do here, but if you already have code you are using in the MouseUp event handler, this is not efficient. So just call your MouseUp event from here, passing it the necessary parameters and the MouseUp handler won't ever know the difference.\nJust a note though, as I call DoDragDrop from within the MouseDown event handler in my example, this code should never actually get a direct MouseUp event to fire. I just put that code in there to show that it is possible to do it.\nHope that helps!\n"
] | [
25
] | [] | [] | [
".net",
"drag_and_drop",
"events",
"winforms"
] | stackoverflow_0000029177_.net_drag_and_drop_events_winforms.txt |
Q:
TFS Lifecycle Management for Build Environment
How would you manage the lifecycle and automated build process when some of the projects (C# .csproj projects) are part of the actual build system?
Example:
A .csproj is a project that uses MSBuild tasks that are implemented in BuildEnv.csproj.
Both projects are part of the same product (meaning, BuildEnv.csproj frequently changes as the product is being developed and not a 3rd party that is rarely updated)
A:
You must factor this out into two separate "projects", otherwise you'll spend ages chasing your tail trying to find out whether a broken build is due to changes in the build system or changes in the code being developed.
Previously we've factored the two systems out into separate projects in CVS.
You want to be able to vary one thing while keeping the other constant to limit what you would have to look at when performing forensic analysis.
Hope that helps.
| TFS Lifecycle Management for Build Environment | How would you manage the lifecycle and automated build process when some of the projects (C# .csproj projects) are part of the actual build system?
Example:
A .csproj is a project that uses MSBuild tasks that are implemented in BuildEnv.csproj.
Both projects are part of the same product (meaning, BuildEnv.csproj frequently changes as the product is being developed and not a 3rd party that is rarely updated)
| [
"You must factor this out into two separate \"projects\" otherwise you'll spend ages chasing your tail trying to find out if a broken build is due to changes in the build system or chages in the code being developed.\nPreviously we've factored the two systems out into separate projects in CVS.\nYou want to be able to vary one thing while keeping the other constant to limit what you would have to look at when performing forensic analysis.\nHope that helps.\n"
] | [
2
] | [] | [] | [
"msbuild",
"tfs"
] | stackoverflow_0000030209_msbuild_tfs.txt |
Q:
Ethernet MAC address as activation code for an appliance?
Let's suppose you deploy network-attached appliances (small form factor PCs) in the field. You want to allow these to call home after being powered on, then be identified and activated by end users.
Our current plan involves the user entering the MAC address into an activation page on our web site. Later our software (running on the box) will read the address from the interface and transmit this in a "call home" packet. If it matches, the server responds with customer information and the box is activated.
We like this approach because it's easy to access, and usually printed on external labels (FCC requirement?).
Any problems to watch out for? (The hardware in use is small form factor so all NICs, etc. are embedded and would be very hard to change. Customers don't normally have direct access to the OS in any way).
I know Microsoft does some crazy fuzzy-hashing function for Windows activation using PCI device IDs, memory size, etc. But that seems overkill for our needs.
--
@Neall Basically, calling into our server, for purposes of this discussion you could call us the manufacturer.
Neall is correct, we're just using the address as a constant. We will read it and transmit it within another packet (let's say HTTP POST), not depending on getting it somehow from Ethernet frames.
A:
I don't think that the well-known spoofability of MAC addresses is an issue in this case. I think tweakt is just wanting to use them for initial identification. The device can read its own MAC address, and the installer can (as long as it's printed on a label) read the same number and know, "OK - this is the box that I put at location A."
tweakt - would these boxes be calling into the manufacturer's server, or the server of the company/person using them (or are those the same thing in this case)?
A:
I don't think there's anything magic about what you're doing here - couldn't what you're doing be described as:
"At production we burn a unique number into each of our devices which is both readable by the end user (it's on the label) and accessible to the internal processor. Our users have to enter this number into our website along with their credit-card details, and the box subsequently contacts to the website for permission to operate"
"Coincidentally we also use this number as the MAC address for network packets as we have to uniquely assign that during production anyway, so it saved us duplicating this bit of work"
I would say the two obvious hazards are:
People hack around with your device and change this address to one which someone else has already activated. Whether this is likely to happen depends on some relationship between how hard it is and how expensive whatever they get to steal is. You might want to think about how easily they can take a firmware upgrade file and get the code out of it.
Someone uses a combination of firewall/router rules and a bit of custom software to generate a server which replicates the operation of your 'auth server' and grants permission to the device to proceed. You could make this harder with some combination of hashing/PKE as part of the protocol.
As ever, some tedious, expensive one-off hack is largely irrelevant, what you don't want is a class-break which can be distributed over the internet to every thieving dweep.
A:
The MAC address is as unique as a serial number printed on a manual/sticker.
Microsoft does hashing to prevent MAC address spoofing, and to allow a bit more privacy.
With the MAC-only approach, you can easily match a device to a customer just by being on the same subnet. The hash prevents that by being opaque about what criteria are used, with no way to reverse engineer the individual parts.
(see password hashing)
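If you do end up hardening the scheme along the lines suggested above (hashing rather than sending the raw MAC), one simple variant is a keyed hash that both the box and your server can compute. A rough sketch in Python; the secret, the normalization, and the choice of HMAC-SHA256 are my own illustrative assumptions, not anything from the question:
import hashlib
import hmac

PRODUCT_SECRET = b"secret-baked-into-the-firmware"   # hypothetical shared secret

def activation_token(mac_address):
    # normalize "00:1A:2B:3C:4D:5E" -> "001a2b3c4d5e" so the label and the code agree
    normalized = mac_address.replace(":", "").replace("-", "").lower()
    # the box sends this token home; the server recomputes it from the MAC the user
    # typed on the activation page and compares the two
    return hmac.new(PRODUCT_SECRET, normalized.encode(), hashlib.sha256).hexdigest()

print(activation_token("00:1A:2B:3C:4D:5E"))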
| Ethernet MAC address as activation code for an appliance? | Let's suppose you deploy a network-attached appliances (small form factor PCs) in the field. You want to allow these to call home after being powered on, then be identified and activated by end users.
Our current plan involves the user entering the MAC address into an activation page on our web site. Later our software (running on the box) will read the address from the interface and transmit this in a "call home" packet. If it matches, the server response with customer information and the box is activated.
We like this approach because it's easy to access, and usually printed on external labels (FCC requirement?).
Any problems to watch out for? (The hardware in use is small form factor so all NICs, etc are embedded and would be very hard to change. Customers don't normally have direct acccess to the OS in any way).
I know Microsoft does some crazy fuzzy-hashing function for Windows activation using PCI device IDs, memory size, etc. But that seems overkill for our needs.
--
@Neall Basically, calling into our server, for purposes of this discussion you could call us the manufacturer.
Neall is correct, we're just using the address as a constant. We will read it and transmit it within another packet (let's say HTTP POST), not depending on getting it somehow from Ethernet frames.
| [
"I don't think that the well-known spoofability of MAC addresses is an issue in this case. I think tweakt is just wanting to use them for initial identification. The device can read its own MAC address, and the installer can (as long as it's printed on a label) read the same number and know, \"OK - this is the box that I put at location A.\"\ntweakt - would these boxes be calling into the manufacturer's server, or the server of the company/person using them (or are those the same thing in this case)?\n",
"I don't think there's anything magic about what you're doing here - couldn't what you're doing be described as:\n\"At production we burn a unique number into each of our devices which is both readable by the end user (it's on the label) and accessible to the internal processor. Our users have to enter this number into our website along with their credit-card details, and the box subsequently contacts to the website for permission to operate\"\n\"Coincidentally we also use this number as the MAC address for network packets as we have to uniquely assign that during production anyway, so it saved us duplicating this bit of work\"\nI would say the two obvious hazards are:\n\nPeople hack around with your device and change this address to one which someone else has already activated. Whether this is likely to happen depends on some relationship between how hard it is and how expensive whatever they get to steal is. You might want to think about how easily they can take a firmware upgrade file and get the code out of it.\nSomeone uses a combination of firewall/router rules and a bit of custom software to generate a server which replicates the operation of your 'auth server' and grants permission to the device to proceed. You could make this harder with some combination of hashing/PKE as part of the protocol. \n\nAs ever, some tedious, expensive one-off hack is largely irrelevant, what you don't want is a class-break which can be distributed over the internet to every thieving dweep.\n",
"The MAC address is as unique as a serial number printed on a manual/sticker.\nMicrosoft does hashing to prevent MAC address spoofing, and to allow a bit more privacy. \nWith the only MAC approach, you can easily match a device to a customer by only being in the same subnet. The hash prevents that, by being opaque to what criteria are used and no way to reverse engineer individual parts.\n(see password hashing)\n"
] | [
3,
2,
1
] | [
"From a security perspective, I know that it is possible to spoof a MAC, though I am not entirely sure how difficult it is or what it entails.\nOtherwise, if the customers don't have easy access to the hardware or the OS, you should be fairly safe doing this... probably best to put a warning sticker on saying that messing with anything will disrupt communication to the server.\n"
] | [
-1
] | [
"activation",
"drm",
"ethernet",
"licensing"
] | stackoverflow_0000030145_activation_drm_ethernet_licensing.txt |
Q:
What would be a good, windows and iis (http) based distributed version control system
At my job we make & sell websites. Usually we install our .NET C# based site on a customer's server and maintain and support it remotely. However, every once in a while, for bigger development work and just to make things simpler (and faster!), we will copy the site to a local server.
This is great, but has one pain - moving the site back to the customer. Now, if nothing was changed on the customer's copy - no problem. However, it is the sad truth that sometimes (read: more often than I would like) some fixes needed to be applied on the production server, either because the customer needed it NOW or simply because it was a major bug.
I know that you can easily apply those bug fixes to the local copy as well, but this is an error prone process. So I'm setting my hopes on a distributed version control system to help synchronize the two copies.
Here is what I need:
Easy to install - nothing else needed except the installer and admin rights.
Can be integrated into an existing website as a virtual directory and work on port 80 - no hassle with new DNS required.
Excellent software
That's it. Any ideas?
Some comments on the answers
First, thanks! much appreciated.
I've looked at Mercurial and Bazaar and both look very good. The only caveat is the installation as a virtual directory on IIS. Mercurial, as far as I understand, uses a special (wire) protocol, and Bazaar needs the addition of Python extensions. Is there another system which is easier to integrate with IIS? I'm willing to take a performance hit for that.
A:
I'd look at either Mercurial or Bazaar. I'm told Git also works on windows, but I suspect the windows port is still a second class port at best.
You'll probably need to be able to run python scripts on your webserver to host either of them.
A:
Maybe not exactly what you requested, but check out DeltaCopy, which is a Windows version of rsync. You can also read about another rsync solution here
A:
I can also vouch for Mercurial. Simple to use and powerful to boot!
| What would be a good, windows and iis (http) based distributed version control system | At my job we make & sell websites. Usually we install our .NET C# based site on a customer's server and maintain and support it remotely. However, every once in a while, for bigger development works and just to make things simpler (and faster!), we will copy the site to a local server.
This is great, but has one pain - moving the site back to the customer. Now, If nothing was change on the customer's copy - no problem. However, it is the sad truth that sometime (read more often than I would like) some fixes were needed to be applied on the production server. Either because the customer needed it NOW or simply because it was major bug.
I know that you can easily apply those bug fixes to the local copy as well, but this is an error prone process. So I'm setting my hopes on a distributed version control to help synchronize the two copies.
Here is what I need:
Easy to install - nothing else needed except the installer and admin rights.
Can integrated in an existing website as a virtual directory and works on port 80 - no hassle with new DNS required.
Excellent software
That's it. Any ideas?
Some comments on the answers
First, thanks! much appreciated.
I've looked at Mercurial and Bazaar and both look very good. The only caveat is the installation as a virtual directory on IIS. Mercurial, as far as I understand, use a special protocol (wire) and Bazaar needs and addition of python extensions. Is there another system which is easier to integrate with IIS? I'm willing to take a performance hit for that.
| [
"I'd look at either Mercurial or Bazaar. I'm told Git also works on windows, but I suspect the windows port is still a second class port at best.\nYou'll probably need to be able to run python scripts on your webserver to host either of them.\n",
"Maybe not exactly what you request but checkout DeltaCopy which is a windows version of rsync. You can also read about another rsync solution here\n",
"I can also vouch for Mercurial. Simple to use and powerful to boot!\n"
] | [
2,
0,
0
] | [] | [] | [
"distributed",
"http",
"version_control"
] | stackoverflow_0000029882_distributed_http_version_control.txt |
Q:
Table Scan vs. Add Index - which is quicker?
I have a table with many millions of rows. I need to find all the rows with a specific column value. That column is not in an index, so a table scan results.
But would it be quicker to add an index with the column at the head (primary key following), do the query, then drop the index?
I can't add an index permanently as the user is nominating what column they're looking for.
A:
Two questions to think about:
How many columns could be nominated for the query?
Does the data change frequently? A lot of it?
If you have a small number of candidate columns, and the data doesn't change a lot, then you might want to consider adding a permanent index on any or even all candidate columns.
"Blasphemy!", I hear. Most sources tell you to "never" index every column of a table, but that advice is rooted in the generic assumption that tables are modified frequently.
You will pay a price in additional storage, as well as a performance hit when the data changes.
How small is small and how much is a lot, and is the tradeoff worth it?
There is no way to tell a priori because "too slow" is usually a subjective measurement.
You will have to try it, measure the size of your indexes and then the effect they have in the searches. You will have to balance the costs against the increase in satisfaction of your customers.
[Added] Oh, one more thing: temporary indexes are not only physically slower than a table scan, but they would destroy your concurrency. Re-indexing a table usually (always?) requires a full table lock, so in effect only one user search could be done at a time.
Good luck.
A:
I'm no DBA, but I would guess that building the index would require scanning the table anyway.
Unless there are going to be multiple queries on that column, I would recommend not creating the index.
Best to check the explain plans/execution times for both ways, though!
A:
As everyone else has said, it most certainly would not be faster to add an index than it would be to do a full scan of that column.
However, I would suggest tracking the query pattern to find out which column(s) are searched for the most, and adding indexes at least for them. You may find that 3-4 indexes speed up 90% of your queries.
A:
Adding an index requires a table scan, so if you can't add a permanent index it sounds like a single scan will be (slightly) faster.
A:
No, that would not be quicker. What would be quicker is to just add the index and leave it there!
Of course, it may not be practical to index every column, but then again it may. How is data added to the table?
A:
It wouldn't be. Creating an index is more complex than simply scanning the column, even if the computational complexity is the same.
That said - how many columns do you have? Are you sure you can't just create an index for each of them if the query time for a single find is too long?
A:
It depends on the complexity of your query. If you're retrieving the data once, then doing a table scan is faster. However, if you're going back to the table more than once for related information in the same query, then the index is faster.
Another related strategy is to do the table scan, and put all the data in a temporary table. Then index THAT and then you can do all your subsequent selects, groupings, and as many other queries on the subset of indexed data. The benefit being that looking up related information in related tables using the temp table is MUCH faster.
However, space is cheap these days, so you'd probably best be served by examining how your users actually USE your system and adding indexes on those frequent columns. I have yet to see users use ALL the search parameters ALL the time.
A:
Your solution will not scale unless you add a permanent index to each column, with all of the columns that are returned in the query in the list of included columns (a covering index). These indexes will be very large, and inserts and updates to that table will be a bit slower, but you don't have much of a choice if you are allowing a user to arbitrarily select a search column.
How many columns are there? How often does the data get updated? How fast do inserts and updates need to run? There are trade-offs involved, depending on the answers to those questions. Do plenty of experimentation and testing so you know for sure how things will perform.
But to your original question, adding and dropping an index for the purpose of a single query is only beneficial if you do more than one select during the query (for example, the select is in a sub-query that gets run for each row returned).
| Table Scan vs. Add Index - which is quicker? | I have a table with many millions of rows. I need to find all the rows with a specific column value. That column is not in an index, so a table scan results.
But would it be quicker to add an index with the column at the head (prime key following), do the query, then drop the index?
I can't add an index permanently as the user is nominating what column they're looking for.
| [
"Two questions to think about:\n\nHow many columns could be nominated for the query?\nDoes the data change frequently? A lot of it?\n\nIf you have a small number of candidate columns, and the data doesn't change a lot, then you might want to consider adding a permanent index on any or even all candidate column.\n\"Blasphemy!\", I hear. Most sources tell you to \"never\" index every column of a table, but that advised is rooted on the generic assumption that tables are modified frequently.\nYou will pay a price in additional storage, as well as a performance hit when the data changes.\nHow small is small and how much is a lot, and is the tradeoff worth it?\nThere is no way to tell a priory because \"too slow\" is usually a subjective measurement.\nYou will have to try it, measure the size of your indexes and then the effect they have in the searches. You will have to balance the costs against the increase in satisfaction of your customers.\n[Added] Oh, one more thing: temporary indexes are not only physically slower than a table scan, but they would destroy your concurrency. Re-indexing a table usually (always?) requires a full table lock, so in effect only one user search could be done at a time.\nGood luck.\n",
"I'm no DBA, but I would guess that building the index would require scanning the table anyway. \nUnless there are going to be multiple queries on that column, I would recommend not creating the index.\nBest to check the explain plans/execution times for both ways, though!\n",
"As everyone else has said, it most certainly would not be faster to add an index than it would be to do a full scan of that column.\nHowever, I would suggest tracking the query pattern and find out which column(s) are searched for the most, and add indexes at least for them. You may find out that 3-4 indexes speeds up 90% of your queries.\n",
"Adding an index requires a table scan, so if you can't add a permanent index it sounds like a single scan will be (slightly) faster.\n",
"No, that would not be quicker. What would be quicker is to just add the index and leave it there!\nOf course, it may not be practical to index every column, but then again it may. How is data added to the table?\n",
"It wouldn't be. Creating an index is more complex than simply scanning the column, even if the computational complexity is the same.\nThat said - how many columns do you have? Are you sure you can't just create an index for each of them if the query time for a single find is too long?\n",
"It depends on the complexity of your query. If you're retrieving the data once, then doing a table scan is faster. However, if you're going back to the table more than once for related information in the same query, then the index is faster.\nAnother related strategy is to do the table scan, and put all the data in a temporary table. Then index THAT and then you can do all your subsequent selects, groupings, and as many other queries on the subset of indexed data. The benefit being that looking up related information in related tables using the temp table is MUCH faster.\nHowever, space is cheap these days, so you'd probably best be served by examining how your users actually USE your system and adding indexes on those frequent columns. I have yet to see users use ALL the search parameters ALL the time.\n",
"Your solution will not scale unless you add a permanent index to each column, with all of the columns that are returned in the query in the list of included columns (a covering index). These indexes will be very large, and inserts and updates to that table will be a bit slower, but you don't have much of a choice if you are allowing a user to arbitrarily select a search column.\nHow many columns are there? How often does the data get updated? How fast do inserts and updates need to run? There are trade-offs involved, depending on the answers to those questions. Do plenty of experimentation and testing so you know for sure how things will perform.\nBut to your original question, adding and dropping an index for the purpose of a single query is only beneficial if you do more than one select during the query (for example, the select is in a sub-query that gets run for each row returned).\n"
] | [
9,
8,
3,
2,
2,
2,
2,
2
] | [] | [] | [
"database",
"indexing",
"optimization",
"sql"
] | stackoverflow_0000030094_database_indexing_optimization_sql.txt |
Q:
joining latest of various usermetadata tags to user rows
I have a postgres database with a user table (userid, firstname, lastname) and a usermetadata table (userid, code, content, created datetime). I store various information about each user in the usermetadata table by code and keep a full history. So, for example, a user (userid 15) has the following metadata:
15, 'QHS', '20', '2008-08-24 13:36:33.465567-04'
15, 'QHE', '8', '2008-08-24 12:07:08.660519-04'
15, 'QHS', '21', '2008-08-24 09:44:44.39354-04'
15, 'QHE', '10', '2008-08-24 08:47:57.672058-04'
I need to fetch a list of all my users and the most recent value of each of various usermetadata codes. I did this programmatically and it was, of course, godawfully slow. The best I could figure out to do it in SQL was to join sub-selects, which were also slow, and I had to do one for each code.
A:
This is actually not that hard to do in PostgreSQL because it has the "DISTINCT ON" clause in its SELECT syntax (DISTINCT ON isn't standard SQL).
SELECT DISTINCT ON (code) code, content, createtime
FROM metatable
WHERE userid = 15
ORDER BY code, createtime DESC;
That will limit the returned results to the first result per unique code, and if you sort the results by the create time descending, you'll get the newest of each.
A:
I suppose you're not willing to modify your schema, so I'm afraid my answer might not be of much help, but here goes...
One possible solution would be to have the time field empty until it was replaced by a newer value, when you insert the 'deprecation date' instead. Another way is to expand the table with an 'active' column, but that would introduce some redundancy.
The classic solution would be to have both 'Valid-From' and 'Valid-To' fields where the 'Valid-To' fields are blank until some other entry becomes valid. This can be handled easily by using triggers or similar. Using constraints to make sure there is only one item of each type that is valid will ensure data integrity.
Common to these is that there is a single way of determining the set of current fields. You'd simply select all entries with the active user and a NULL 'Valid-To' or 'deprecation date' or a true 'active'.
You might be interested in taking a look at the Wikipedia entry on temporal databases and the article A consensus glossary of temporal database concepts.
A:
A subselect is the standard way of doing this sort of thing. You just need a Unique Constraint on UserId, Code, and Date - and then you can run the following:
SELECT *
FROM Table
JOIN (
SELECT UserId, Code, MAX(Date) as LastDate
FROM Table
GROUP BY UserId, Code
) as Latest ON
Table.UserId = Latest.UserId
AND Table.Code = Latest.Code
AND Table.Date = Latest.Date
WHERE
UserId = @userId
| joining latest of various usermetadata tags to user rows | I have a postgres database with a user table (userid, firstname, lastname) and a usermetadata table (userid, code, content, created datetime). I store various information about each user in the usermetadata table by code and keep a full history. so for example, a user (userid 15) has the following metadata:
15, 'QHS', '20', '2008-08-24 13:36:33.465567-04'
15, 'QHE', '8', '2008-08-24 12:07:08.660519-04'
15, 'QHS', '21', '2008-08-24 09:44:44.39354-04'
15, 'QHE', '10', '2008-08-24 08:47:57.672058-04'
I need to fetch a list of all my users and the most recent value of each of various usermetadata codes. I did this programmatically and it was, of course godawful slow. The best I could figure out to do it in SQL was to join sub-selects, which were also slow and I had to do one for each code.
| [
"This is actually not that hard to do in PostgreSQL because it has the \"DISTINCT ON\" clause in its SELECT syntax (DISTINCT ON isn't standard SQL).\nSELECT DISTINCT ON (code) code, content, createtime\nFROM metatable\nWHERE userid = 15\nORDER BY code, createtime DESC;\n\nThat will limit the returned results to the first result per unique code, and if you sort the results by the create time descending, you'll get the newest of each.\n",
"I suppose you're not willing to modify your schema, so I'm afraid my answe might not be of much help, but here goes...\nOne possible solution would be to have the time field empty until it was replaced by a newer value, when you insert the 'deprecation date' instead. Another way is to expand the table with an 'active' column, but that would introduce some redundancy.\nThe classic solution would be to have both 'Valid-From' and 'Valid-To' fields where the 'Valid-To' fields are blank until some other entry becomes valid. This can be handled easily by using triggers or similar. Using constraints to make sure there is only one item of each type that is valid will ensure data integrity.\nCommon to these is that there is a single way of determining the set of current fields. You'd simply select all entries with the active user and a NULL 'Valid-To' or 'deprecation date' or a true 'active'.\nYou might be interested in taking a look at the Wikipedia entry on temporal databases and the article A consensus glossary of temporal database concepts.\n",
"A subselect is the standard way of doing this sort of thing. You just need a Unique Constraint on UserId, Code, and Date - and then you can run the following:\nSELECT * \nFROM Table\nJOIN (\n SELECT UserId, Code, MAX(Date) as LastDate\n FROM Table\n GROUP BY UserId, Code\n) as Latest ON\n Table.UserId = Latest.UserId\n AND Table.Code = Latest.Code\n AND Table.Date = Latest.Date\nWHERE\n UserId = @userId\n\n"
] | [
4,
1,
0
] | [] | [] | [
"database",
"postgresql"
] | stackoverflow_0000025224_database_postgresql.txt |
Q:
Why stateless session beans are single threaded?
As per my understanding, stateless session beans are used to code the business logic. They cannot store data in their instance variables because their instances are shared by multiple requests. So they seem to be more like Singleton classes. However, the difference is that the container creates (or reuses from a pool) a separate instance of the stateless session bean for every request.
After googling I could find the reasoning that the Java EE specification says they are supposed to be single threaded. But I can't find the reason why they are specified to be SINGLE THREADED.
A:
SLSBs are single threaded because the TX context and Principal are associated with a bean instance when it is called. These beans are pooled and, unless the max pool size is reached, calls are processed in separate threads (vendor dependent).
If SLSBs were designed to be thread safe, every call would have to look like a servlet doGet/Post, with the request info carrying the Tx context, security context, and so on. So at least the code looks clean (developer dependent).
A:
The primary reason stateless session beans are single threaded is to make them highly scalable for the container. The container can make a lot of simplifying assumptions about the runtime environment. A second reason is to make life easier for the developer because the developer doesn't have to worry about any synchronization or re-entrancy in his business logic because the bean will never be called in another thread context.
I remember the reasoning being discussed in the reviews of the original EJB 1.0 specification. I would look at the goals section of the specification. See http://java.sun.com/products/ejb/docs.html for the list of specifications.
| Why stateless session beans are single threaded? | As per my understanding stateless session beans are used to code the business logic. They can not store data in their instance variables because their instance is shared by multiple requests. So they seem to be more like Singleton classes. However the difference is contain creates (or reuses from pool) the separate instance of stateless session beans for every request.
After googling I could find the reasoning that the Java EE specification says they are suppose to be single threaded. But I can't get the reason why the are specified to be SINGLE THREADED?
| [
"The SLSBs are single threaded because of the TX Context, Principal is associated with a bean instance when it is called. These beans are pooled and unless the max pool size is reached are processed in separate threads ( Vendor dependent).\nIf SLSBs were designed thread safe every call would have looked like a servlet doGet/Post with request info containing Tx Context , Security Context info and etc. So at least the code looks clean (developer dependent). \n",
"The primary reason stateless session beans are single threaded is to make them highly scalable for the container. The container can make a lot of simplifying assumptions about the runtime environment. A second reason is to make life easier for the developer because the developer doesn't have to worry about any synchronization or re-entrancy in his business logic because the bean will never be called in another thread context.\nI remember the reasoning being discussed in the reviews of the original EJB 1.0 specification. I would look at the goals section of the specification. See http://java.sun.com/products/ejb/docs.html for the list of specifications.\n"
] | [
5,
4
] | [] | [] | [
"ejb",
"java"
] | stackoverflow_0000029734_ejb_java.txt |
Q:
How to store Application Messages for a .NET Website
I am looking for a method of storing Application Messages, such as
"You have logged in successfully"
"An error has occurred, please call the helpdesk on x100"
"You do not have the authority to reset all system passwords" etc
So that "when" the users decide they don't like the wording of the messages, I don't have to change the source code, recompile, then redeploy - instead I just change the message store.
I really like the way that I can easily access strings in the web.config using keys and values.
ConfigurationManager.AppSettings("LOGINSUCCESS");
However as I could have a large number of application messages I didn't want to use the web.config directly. I was going to add a 2nd web config file and use that but of course you can only have one per virtual directory.
Does anyone have any suggestions on how to do this without writing much custom code?
A:
In your Web.config, under appSettings, change it to:
<appSettings file="StringKeys.config">
Then, create your StringKeys.config file and have all your keys in it.
You can still use the AppSettings area in the main web.config for any real application related keys.
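For reference, the external file is just another appSettings fragment; a minimal sketch (key names invented for illustration) would be:
<appSettings>
  <add key="LOGINSUCCESS" value="You have logged in successfully" />
  <add key="HELPDESKERROR" value="An error has occurred, please call the helpdesk on x100" />
  <add key="NOAUTHORITY" value="You do not have the authority to reset all system passwords" />
</appSettings>

ConfigurationManager.AppSettings("LOGINSUCCESS") keeps working exactly as before, and the message wording can be edited without touching web.config itself.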
A:
Put the strings in an xml file and use a filewatcher to check for updates to the file
Put the strings in a database, cache them and set a reasonable expiration policy
A:
You can use ResourceManager class. See "ResourceManager and ASP.NET" article at http://msdn.microsoft.com/en-us/library/aa309419(VS.71).aspx
| How to store Application Messages for a .NET Website | I am looking for a method of storing Application Messages, such as
"You have logged in successfully"
"An error has occurred, please call the helpdesk on x100"
"You do not have the authority to reset all system passwords" etc
So that "when" the users decide they don't like the wording of messages I don't have to change the source code, recompile then redeploy - instead I just change the message store.
I really like the way that I can easily access strings in the web.config using keys and values.
ConfigurationManager.AppSettings("LOGINSUCCESS");
However as I could have a large number of application messages I didn't want to use the web.config directly. I was going to add a 2nd web config file and use that but of course you can only have one per virtual directory.
Does anyone have any suggestions on how to do this without writing much custom code?
| [
"In your Web.config, under appSettings, change it to:\n<appSettings file=\"StringKeys.config\">\n\nThen, create your StringKeys.config file and have all your keys in it.\nYou can still use the AppSettings area in the main web.config for any real application related keys.\n",
"\nPut the strings in an xml file and use a filewatcher to check for updates to the file\nPut the strings in a database, cache them and set a reasonable expiration policy \n\n",
"You can use ResourceManager class. See \"ResourceManager and ASP.NET\" article at http://msdn.microsoft.com/en-us/library/aa309419(VS.71).aspx\n"
] | [
6,
3,
0
] | [] | [] | [
"resources",
"vb.net"
] | stackoverflow_0000030321_resources_vb.net.txt |
Q:
NullReferenceException on User Control handle
I have an Asp.NET application (VS2008, Framework 2.0). When I try to set a property on one of the user controls like
myUserControl.SomeProperty = someValue;
I get a NullReferenceException. When I debug, I found out that myUserControl is null. How is it possible that a user control handle is null? How do I fix this or how do I find what causes this?
A:
Where are you trying to access the property? If you are in onInit, the control may not be loaded yet.
A:
Where exactly in the code are you attempting to do this? It is possible that you are attempting to access the control too early in the page lifecycle and it has not been instantiated yet.
A:
If you created the UserControl at runtime (through ControlCollection.Add), you need to re-create it on every postback too.
Another possibility is that your UserControl does not match its designer.cs file.
A:
I was trying to set the property from markup on an outside user control. When I took the property to OnLoad, it worked.
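For illustration, the fix boils down to deferring the assignment until the control tree exists; a minimal sketch using the names from the question (someValue stands in for whatever you are assigning):
protected void Page_Load(object sender, EventArgs e)
{
    // By the Load phase the child user control declared in markup has been
    // instantiated, so the reference is no longer null.
    myUserControl.SomeProperty = someValue;
}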
| NullReferenceException on User Control handle | I have an Asp.NET application (VS2008, Framework 2.0). When I try to set a property on one of the user controls like
myUserControl.SomeProperty = someValue;
I get a NullReferenceException. When I debug, I found out that myUserControl is null. How is it possible that a user control handle is null? How do I fix this or how do I find what causes this?
| [
"Where are you trying to access the property? If you are in onInit, the control may not be loaded yet.\n",
"Where exactly in the code are you attempting to do this? It is possible that you are attempting to access the control too early in the page lifecycle and it has not been instantiated yet.\n",
"If you created the UserControl during runtime (through ControlCollection.Add), you need to create it on postback too.\nAnother case can be your UserControl does not match the designer.cs page\n",
"I was trying to set the property from markup on an outside user control. When I took the property to OnLoad, it worked.\n"
] | [
5,
5,
0,
0
] | [] | [] | [
"asp.net",
"user_controls"
] | stackoverflow_0000030286_asp.net_user_controls.txt |
Q:
Using Interop with C#, Excel Save changing original. How to negate this?
The problem: loading an Excel spreadsheet template, using the Save command with a different filename and then quitting the interop object ends up saving the original template file. Not the desired result.
public void saveAndExit(string filename)
{
excelApplication.Save(filename);
excelApplication.Quit();
}
Original file opened is c:\testing\template.xls
The file name that is passed in is c:\testing\7777 (date).xls
Does anyone have an answer?
(The answer I chose was the most correct and thorough though the wbk.Close() requires parameters passed to it. Thanks.)
A:
Excel interop is pretty painful. I dug up an old project I had, did a little fiddling, and I think this is what you're looking for. The other commenters are right, but, at least in my experience, there's a lot more to calling SaveAs() than you'd expect if you've used the same objects (without the interop wrapper) in VBA.
Microsoft.Office.Interop.Excel.Workbook wbk = excelApplication.Workbooks[0]; //or some other way of obtaining this workbook reference, as Jason Z mentioned
wbk.SaveAs(filename, Type.Missing, Type.Missing, Type.Missing,
Type.Missing, Type.Missing, XlSaveAsAccessMode.xlNoChange,
Type.Missing, Type.Missing, Type.Missing, Type.Missing,
Type.Missing);
wbk.Close();
excelApplication.Quit();
Gotta love all those Type.Missings. But I think they're necessary.
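As the question notes, Workbook.Close wants its own parameters too; one reasonable (hedged) way to finish up is to tell it not to save again, since SaveAs has already written the new file:
wbk.Close(false, Type.Missing, Type.Missing);  // SaveChanges = false
excelApplication.Quit();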
A:
Rather than using an ExcelApplication, you can use the Workbook object and call the SaveAs() method. You can pass the updated file name in there.
A:
Have you tried the SaveAs from the Worksheet?
A:
Ditto on the SaveAs
Whenever I have to do Interop I create a separate VB.NET class library and write the logic in VB. It is just not worth the hassle doing it in C#
| Using Interop with C#, Excel Save changing original. How to negate this? | The problem: Loading an excel spreadsheet template. Using the Save command with a different filename and then quitting the interop object. This ends up saving the original template file. Not the result that is liked.
public void saveAndExit(string filename)
{
excelApplication.Save(filename);
excelApplication.Quit();
}
Original file opened is c:\testing\template.xls
The file name that is passed in is c:\testing\7777 (date).xls
Does anyone have an answer?
(The answer I chose was the most correct and thorough though the wbk.Close() requires parameters passed to it. Thanks.)
| [
"Excel interop is pretty painful. I dug up an old project I had, did a little fiddling, and I think this is what you're looking for. The other commenters are right, but, at least in my experience, there's a lot more to calling SaveAs() than you'd expect if you've used the same objects (without the interop wrapper) in VBA.\nMicrosoft.Office.Interop.Excel.Workbook wbk = excelApplication.Workbooks[0]; //or some other way of obtaining this workbook reference, as Jason Z mentioned\nwbk.SaveAs(filename, Type.Missing, Type.Missing, Type.Missing,\n Type.Missing, Type.Missing, XlSaveAsAccessMode.xlNoChange, \n Type.Missing, Type.Missing, Type.Missing, Type.Missing,\n Type.Missing);\nwbk.Close();\nexcelApplication.Quit();\n\nGotta love all those Type.Missings. But I think they're necessary.\n",
"Rather than using an ExcelApplication, you can use the Workbook object and call the SaveAs() method. You can pass the updated file name in there.\n",
"Have you tried the SaveAs from the Worksheet?\n",
"\nDitto on the SaveAs\nWhenever I have to do Interop I create a separate VB.NET class library and write the logic in VB. It is just not worth the hassle doing it in C#\n\n"
] | [
8,
1,
0,
0
] | [] | [] | [
"c#",
"excel"
] | stackoverflow_0000029141_c#_excel.txt |
Q:
MS hotfix delayed delivery
I just requested a hotfix from support.microsoft.com and put in my email address, but I haven't received the email yet. The splash page I got after I requested the hotfix said:
Hotfix Confirmation
We will send these hotfixes to the following e-mail address:
(my correct email address)
Usually, our hotfix e-mail is delivered to you within five minutes. However, sometimes unforeseen issues in e-mail delivery systems may cause delays.
We will send the e-mail from the “hotfix@microsoft.com” e-mail account. If you use an e-mail filter or a SPAM blocker, we recommend that you add “hotfix@microsoft.com” or the “microsoft.com” domain to your safe senders list. (The safe senders list is also known as a whitelist or an approved senders list.) This will help prevent our e-mail from going into your junk e-mail folder or being automatically deleted.
I'm sure that the email is not getting caught in a spam catcher.
How long does it normally take to get one of these hotfixes? Am I waiting for some human to approve it, or something? Should I just give up and try to get the file I need some other way?
(Update: Replaced "me@mycompany.com" with "(my correct email address)" to resolve Martín Marconcini's ambiguity.)
A:
It usually arrives within the first hour. But the fact that it reads me@mycompany.com could be either because you put it there to protect your privacy (in which case forget about this) or because the system didn't catch your email and sent it to me@mycompany.com.
If the email address was ok and you didn't get it, somehow it bounced or it won't arrive. I'd suggest you contact them again providing an alternate email (gmail or such) to make sure that you don't experience any problems.
Last time I received a hotfix it took them 10 minutes.
Good luck with that!
A:
Took about a day for me when I requested one so I suspect some sort of manual/semi-automated process has to complete before you get the e-mail.
Give it a day before you start bugging them ;)
| MS hotfix delayed delivery | I just requested a hotfix from support.microsoft.com and put in my email address, but I haven't received the email yet. The splash page I got after I requested the hotfix said:
Hotfix Confirmation
We will send these hotfixes to the following e-mail address:
(my correct email address)
Usually, our hotfix e-mail is delivered to you within five minutes. However, sometimes unforeseen issues in e-mail delivery systems may cause delays.
We will send the e-mail from the “hotfix@microsoft.com” e-mail account. If you use an e-mail filter or a SPAM blocker, we recommend that you add “hotfix@microsoft.com” or the “microsoft.com” domain to your safe senders list. (The safe senders list is also known as a whitelist or an approved senders list.) This will help prevent our e-mail from going into your junk e-mail folder or being automatically deleted.
I'm sure that the email is not getting caught in a spam catcher.
How long does it normally take to get one of these hotfixes? Am I waiting for some human to approve it, or something? Should I just give up and try to get the file I need some other way?
(Update: Replaced "me@mycompany.com" with "(my correct email address)" to resolve Martín Marconcini's ambiguity.)
| [
"It usually arrives within the first hour. BUt the fact that it reads me@mycompany.com could either because you put it there to protect your privacy (in which case forget about this) or that the system didn't catch your email and they sent it to me@mycompany.com.\nIf the email address was ok and you didn't get it, somehow it bounced or it won't arrive. I'd suggest you contact them again providing an alternate email (gmail or such) to make sure that you don't experience any problems. \nLast time I received a hotfix it took them 10 minutes. \nGood luck with that!\n",
"Took about a day for me when I requested one so I suspect some sort of manual/semi-automated process has to complete before you get the e-mail.\nGive it a day before you start bugging them ;)\n"
] | [
1,
1
] | [] | [] | [
"email",
"hotfix"
] | stackoverflow_0000030297_email_hotfix.txt |
Q:
Why would getcwd() return a different directory than a local pwd?
I'm doing some PHP stuff on an Ubuntu server.
The path I'm working in is /mnt/dev-windows-data/Staging/mbiek/test_list but the PHP call getcwd() is returning /mnt/dev-windows/Staging/mbiek/test_list (notice how it's dev-windows instead of dev-windows-data).
There aren't any symbolic links anywhere.
Are there any other causes for getcwd() returning a different path from a local pwd call?
Edit
I figured it out. The DOCUMENT_ROOT in PHP is set to /mnt/dev-windows which throws everything off.
A:
Which file are you calling the getcwd() in and is that file is included into the one you are running (e.g. running index.php, including startup.php which contains gwtcwd()).
Is the file you are running in /dev-windows/ or /dev-windows-data/? It works on the file you are actually running.
Here's an example of my current project:
index.php
<?php
require_once('./includes/construct.php');
//snip
?>
includes/construct.php
<?php
//snip
(!defined('DIR')) ? define('DIR', getcwd()) : NULL;
require_once(DIR . '/includes/functions.php');
//snip
?>
A:
@Ross
I thought that getcwd() was returning a filesystem path rather than a relative url path.
Either way, the fact remains that the path /mnt/dev-windows doesn't exist while /mnt/dev-windows-data does.
A:
@Mark
Well that's just plain weird! What's your include_path - that could be messing things around. I've personally ditched it in favour of constants as it's just so temperamental (or I've never learned how to do it justice).
A:
@Ross
I figured it out and updated the OP with the solution.
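For anyone who hits something similar, a quick way to see where the paths diverge (plain PHP, nothing specific to this setup):
<?php
echo getcwd() . "\n";                    // working directory of the PHP process
echo dirname(__FILE__) . "\n";           // directory the script actually lives in
echo $_SERVER['DOCUMENT_ROOT'] . "\n";   // what the server/PHP config claims
?>

If getcwd() disagrees with dirname(__FILE__), the difference usually comes from the web server or PHP configuration rather than the filesystem.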
| Why would getcwd() return a different directory than a local pwd? | I'm doing some PHP stuff on an Ubuntu server.
The path I'm working in is /mnt/dev-windows-data/Staging/mbiek/test_list but the PHP call getcwd() is returning /mnt/dev-windows/Staging/mbiek/test_list (notice how it's dev-windows instead of dev-windows-data).
There aren't any symbolic links anywhere.
Are there any other causes for getcwd() returning a different path from a local pwd call?
Edit
I figured it out. The DOCUMENT_ROOT in PHP is set to /mnt/dev-windows which throws everything off.
| [
"Which file are you calling the getcwd() in and is that file is included into the one you are running (e.g. running index.php, including startup.php which contains gwtcwd()).\nIs the file you are running in /dev-windows/ or /dev-windows-data/? It works on the file you are actually running.\n\nHere's an example of my current project:\nindex.php\n<?php\n require_once('./includes/construct.php');\n //snip\n?>\n\nincludes/construct.php\n<?php\n //snip\n (!defined('DIR')) ? define('DIR', getcwd()) : NULL;\n\n require_once(DIR . '/includes/functions.php');\n //snip\n?>\n\n",
"@Ross\nI thought that getcwd() was returning a filesystem path rather than a relative url path.\nEither way, the fact remains that the path /mnt/dev-windows doesn't exist while /mnt/dev-windows-data does.\n",
"@Mark\nWell that's just plain weird! What's your include_path - that could be messing thigns around. I've personally ditched it in favour of contants as it's just so temperamental (or I've never learned how to do it justice).\n",
"@Ross\nI figured it out and updated the OP with the solution.\n"
] | [
1,
0,
0,
0
] | [] | [] | [
"directory",
"php"
] | stackoverflow_0000030307_directory_php.txt |
Q:
Install-base of Java JRE?
Is there an online resource somewhere that maintains statistics on the install-base of Java including JRE version information? If not, is there any recent report that has some numbers?
I'm particularly interested in Windows users, but all other OS's are welcome too.
A:
I'm not aware of anyone who keeps track of this publicly on a regular basis (unlike Adobe, who pushes it every chance they get). The closest that I could come was this article from last November. Based upon his site, this data could be skewed a bit, but I think we'd see fairly similar numbers elsewhere.
A:
There is a very rough percentage of browsers with some JRE available at The Counter, though I wouldn't trust it. Sun has a few useful stats from 2007, but their stats from 2008 are much less detailed. They suggest that in 2007 "92%...of JRE installs...are now Java SE 6", but who knows what highly technical site they surveyed to get that number.
| Install-base of Java JRE? | Is there an online resource somewhere that maintains statistics on the install-base of Java including JRE version information? If not, is there any recent report that has some numbers?
I'm particularly interested in Windows users, but all other OS's are welcome too.
| [
"I'm not aware of anyone who keeps track of this publicly on a regular basis (unlike Adobe who pushes it every chance they get). The closest that I could come was this article from last November. Based upon his site, this data could be skewed a bit, but I think we fairly similar numbers as well.\n",
"There is a very rough percentage of browsers with some JRE available at The Counter, though I wouldn't trust it. Sun has a few useful stats from 2007, but their stats from 2008 are much less detailed. They suggest that in 2007 \"92%...of JRE installs...are now Java SE 6\", but who knows what highly technical site they surveyed to get that number.\n"
] | [
1,
1
] | [] | [] | [
"deployment",
"java"
] | stackoverflow_0000030337_deployment_java.txt |
Q:
Making a JavaScript regex equivalent to a PHP regex
After my web form is submitted, a regex will be applied to user input on the server side (via PHP). I'd like to have the identical regex running in real-time on the client side to show the user what the real input will be. This will be pretty much the same as the Preview section on the Ask Question pages on Stack Overflow except with PHP on the back-end instead of .NET.
What do I need to keep in mind in order to have my PHP and JavaScript regular expressions act exactly the same as each other?
A:
Hehe this was sort of asked moments ago and Jeff pointed out:
http://www.regular-expressions.info/refflavors.html.
There is a comparison of regular expression capabilities across tools and languages.
A:
If the regular expressions are simple then there should be no issue, as the basics of regular expressions are common across most implementations.
For particulars then it would be best to study both implementations:
http://www.regular-expressions.info/php.html
http://www.regular-expressions.info/javascript.html
JavaScript's implementation is probably the more basic, so if you are going for a lowest common denominator approach then aim for that one.
A:
I've found that different implementations of regular expressions often have subtle differences in what exactly they support. If you want to be entirely sure that the result will be the same in both frontend and backend, the safest choice would be to make an Ajax call to your PHP backend and use the same piece of PHP code for both regex evaluations.
A:
@LKM AJAX is the clear winner here. This will also allow you to follow the DRY principle. Why would you want to write your parsing code in Javascript and PHP?
A:
Both JavaScript's regex and PHP's preg_match are based on Perl, so there shouldn't be any porting problems. Do note, however, that Javascript only supports a subset of modifiers that Perl supports.
For more info for comparing the two:
Javascript Regular Expressions
PHP Regular Expressions
As for the delivery method, I'd suggest you use JSON, the slimmest data interchange format to date (AFAIK) and directly translatable to a JavaScript object through eval(). Just put that bad boy through an AJAX call and you should be set to go.
I hope this helps :)
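As a small illustration (the pattern itself is just an example), a typical validation can often be written once and used on both sides unchanged:
// PHP (server side)
$ok = preg_match('/^[a-z0-9_-]{3,16}$/i', $input);

// JavaScript (client side)
var ok = /^[a-z0-9_-]{3,16}$/i.test(input);

It only breaks down when you rely on PCRE features JavaScript lacks, such as lookbehind or the x (extended) modifier.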
| Making a JavaScript regex equivalent to a PHP regex | After my web form is submitted, a regex will be applied to user input on the server side (via PHP). I'd like to have the identical regex running in real-time on the client side to show the user what the real input will be. This will be pretty much the same as the Preview section on the Ask Question pages on Stack Overflow except with PHP on the back-end instead of .NET.
What do I need to keep in mind in order to have my PHP and JavaScript regular expressions act exactly the same as each other?
| [
"Hehe this was sort of asked moments ago and Jeff pointed out:\nhttp://www.regular-expressions.info/refflavors.html.\nThere is a comparison of regular expression capabilities across tools and languages.\n",
"If the regular expressions are simple then there should be no issue, as the basics of regular expressions are common across most implementations.\nFor particulars then it would be best to study both implementations:\nhttp://www.regular-expressions.info/php.html\nhttp://www.regular-expressions.info/javascript.html\nJavascripts implementation is probably the more basic, so if you are going for a lowest common denominator approach then aim for that one.\n",
"I've found that different implementations of regular expressions often have subtle differences in what exactly they support. If you want to be entirely sure that the result will be the same in both frontend and backend, the savest choice would be to make an Ajax call to your PHP backend and use the same piece of PHP code for both regex evaluations.\n",
"@LKM AJAX is the clear winner here. This will also allow you to follow the DRY principle. Why would you want to write your parsing code in Javascript and PHP?\n",
"Both JavaScript's regex and PHP's preg_match are based on Perl, so there shouldn't be any porting problems. Do note, however, that Javascript only supports a subset of modifiers that Perl supports.\nFor more info for comparing the two:\n\nJavascript Regular Expressions\nPHP Regular Expressions\n\nAs for delivery method, I'd suggest you'd use JSON, the slimmest data interchange format as of date (AFAIK) and directly translatable to a JavaScript object through eval(). Just put that bad boy through an AJAX session and you should be set to go.\nI hope this helps :)\n"
] | [
12,
3,
1,
1,
0
] | [] | [] | [
"javascript",
"php",
"regex"
] | stackoverflow_0000030121_javascript_php_regex.txt |
Q:
Why can't I connect to my CAS server with Perl's AuthCAS?
I'm attempting to use an existing CAS server to authenticate login for a Perl CGI web script and am using the AuthCAS Perl module (v 1.3.1). I can connect to the CAS server to get the service ticket but when I try to connect to validate the ticket my script returns with the following error from the IO::Socket::SSL module:
500 Can't connect to [CAS Server]:443 (Bad hostname '[CAS Server]')
([CAS Server] substituted for real server name)
Symptoms/Tests:
If I type the generated URL for the authentication into the web browser's location bar it returns just fine with the expected XML snippet. So it is not a bad host name.
If I generate a script without using the AuthCAS module but using the IO::Socket::SSL module directly to query the CAS server for validation on the generated service ticket the Perl script will run fine from the command line but not in the browser.
If I add the AuthCAS module into the script in item 2, the script no longer works on the command line and still doesn't work in the browser.
Here is the bare-bones script that produces the error:
#!/usr/bin/perl
use strict;
use warnings;
use CGI;
use AuthCAS;
use CGI::Carp qw( fatalsToBrowser );
my $id = $ENV{QUERY_STRING};
my $q = new CGI;
my $target = "http://localhost/cgi-bin/testCAS.cgi";
my $cas = new AuthCAS(casUrl => 'https://cas_server/cas');
if ($id eq ""){
my $login_url = $cas->getServerLoginURL($target);
printf "Location: $login_url\n\n";
exit 0;
} else {
print $q->header();
print "CAS TEST<br>\n";
## When coming back from the CAS server a ticket is provided in the QUERY_STRING
print "QUERY_STRING = " . $id . "</br>\n";
## $ST should contain the received Service Ticket
my $ST = $q->param('ticket');
my $user = $cas->validateST($target, $ST); #### This is what fails
printf "Error: %s\n", &AuthCAS::get_errors() unless (defined $user);
}
Any ideas on where the conflict might be?
The error is coming from the line directly above the snippet Cebjyre quoted namely
$ssl_socket = new IO::Socket::SSL(%ssl_options);
that is, the socket creation. All of the input parameters are correct. I had edited the module to put in debug statements and print out all the parameters just before that call and they are all fine. Looks like I'm going to have to dive deeper into the IO::Socket::SSL module.
A:
As usually happens when I post questions like this, I found the problem. It turns out the Crypt::SSLeay module was not installed or at least not up to date. Of course the error messages didn't give me any clues. Updating it and all the problems go away and things are working fine now.
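For anyone else chasing the same symptom, a quick sanity check (standard CPAN tooling assumed):
perl -MCrypt::SSLeay -e 'print "$Crypt::SSLeay::VERSION\n"'   # is it installed, and which version?
cpan Crypt::SSLeay                                            # install or update it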
| Why can't I connect to my CAS server with Perl's AuthCAS? | I'm attempting to use an existing CAS server to authenticate login for a Perl CGI web script and am using the AuthCAS Perl module (v 1.3.1). I can connect to the CAS server to get the service ticket but when I try to connect to validate the ticket my script returns with the following error from the IO::Socket::SSL module:
500 Can't connect to [CAS Server]:443 (Bad hostname '[CAS Server]')
([CAS Server] substituted for real server name)
Symptoms/Tests:
If I type the generated URL for the authentication into the web browser's location bar it returns just fine with the expected XML snippet. So it is not a bad host name.
If I generate a script without using the AuthCAS module but using the IO::Socket::SSL module directly to query the CAS server for validation on the generated service ticket the Perl script will run fine from the command line but not in the browser.
If I add the AuthCAS module into the script in item 2, the script no longer works on the command line and still doesn't work in the browser.
Here is the bare-bones script that produces the error:
#!/usr/bin/perl
use strict;
use warnings;
use CGI;
use AuthCAS;
use CGI::Carp qw( fatalsToBrowser );
my $id = $ENV{QUERY_STRING};
my $q = new CGI;
my $target = "http://localhost/cgi-bin/testCAS.cgi";
my $cas = new AuthCAS(casUrl => 'https://cas_server/cas');
if ($id eq ""){
my $login_url = $cas->getServerLoginURL($target);
printf "Location: $login_url\n\n";
exit 0;
} else {
print $q->header();
print "CAS TEST<br>\n";
## When coming back from the CAS server a ticket is provided in the QUERY_STRING
print "QUERY_STRING = " . $id . "</br>\n";
## $ST should contain the received Service Ticket
my $ST = $q->param('ticket');
my $user = $cas->validateST($target, $ST); #### This is what fails
printf "Error: %s\n", &AuthCAS::get_errors() unless (defined $user);
}
Any ideas on where the conflict might be?
The error is coming from the line directly above the snippet Cebjyre quoted namely
$ssl_socket = new IO::Socket::SSL(%ssl_options);
namely the socket creation. All of the input parameters are correct. I had edited the module to put in debug statements and print out all the parameters just before that call and they are all fine. Looks like I'm going to have to dive deeper into the IO::Socket::SSL module.
| [
"As usually happens when I post questions like this, I found the problem. It turns out the Crypt::SSLeay module was not installed or at least not up to date. Of course the error messages didn't give me any clues. Updating it and all the problems go away and things are working fine now.\n"
] | [
3
] | [
"Well, from the module source it looks like that IO::Socket error is coming from get_https2\n[...]\nunless ($ssl_socket) {\n $errors = sprintf \"error %s unable to connect https://%s:%s/\\n\",&IO::Socket::SSL::errstr,$host,$port;\n return undef;\n}\n[...]\n\nwhich is called by callCAS, which is called by validateST.\nOne option is to temporarily edit the module file to put some debug statements in if you can, but if I had to guess, I'd say the casUrl you are supplying isn't matching up to the _parse_url regex properly - maybe you have three slashes after the https?\n"
] | [
-1
] | [
"apache",
"authentication",
"cgi",
"perl",
"ssl"
] | stackoverflow_0000026842_apache_authentication_cgi_perl_ssl.txt |
Q:
Asp.Net Routing: How do I ignore multiple wildcard routes?
I'd like to ignore multiple wildcard routes. With asp.net mvc preview 4, they ship with:
RouteTable.Routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
I'd also like to add something like:
RouteTable.Routes.IgnoreRoute("Content/{*pathInfo}");
but that seems to break some of the helpers that generate urls in my program. Thoughts?
A:
There are two possible solutions here.
Add a constraint to the ignore route to make sure that only requests that should be ignored would match that route. Kinda kludgy, but it should work.
RouteTable.Routes.IgnoreRoute("{folder}/{*pathInfo}", new {folder="content"});
What is in your content directory? By default, Routing does not route files that exist on disk (actually checks the VirtualPathProvider). So if you are putting static content in the Content directory, you might not need the ignore route.
A:
This can be quite tricky.
When attempting to figure out how to map route data into a route, the system currently searches top-down until it finds something where all the required information is provided, and then stuffs everything else into query parameters.
Since the required information for the route "Content/{*pathInfo}" is entirely satisfied always (no required data at all in this route), and it's near the top of the route list, then all your attempts to map to unnamed routes will match this pattern, and all your URLs will be based on this ("Content?action=foo&controller=bar")
Unfortunately, there's no way around this with action routes. If you use named routes (e.g., choosing Html.RouteLink instead of Html.ActionLink), then you can specify the name of the route to match. It's less convenient, but more precise.
IMO, complex routes make the action-routing system basically fall over. In applications where I have something other than the default routes, I almost always end up reverting to named-route based URL generation to ensure I'm always getting the right route.
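For what it's worth, a named-route sketch looks roughly like this (the method names are from the later, released routing API, so treat them as an approximation if you're still on Preview 4, and the route itself is invented):
routes.MapRoute(
    "Product",                                            // route name
    "products/{id}",
    new { controller = "Products", action = "Details" });

// In the view, generate the URL against that route explicitly by name:
<%= Html.RouteLink("Details", "Product", new { id = 42 }) %>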
| Asp.Net Routing: How do I ignore multiple wildcard routes? | I'd like to ignore multiple wildcard routes. With asp.net mvc preview 4, they ship with:
RouteTable.Routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
I'd also like to add something like:
RouteTable.Routes.IgnoreRoute("Content/{*pathInfo}");
but that seems to break some of the helpers that generate urls in my program. Thoughts?
| [
"There are two possible solutions here.\n\nAdd a constraint to the ignore route to make sure that only requests that should be ignored would match that route. Kinda kludgy, but it should work.\nRouteTable.Routes.IgnoreRoute(\"{folder}/{*pathInfo}\", new {folder=\"content\"});\n\nWhat is in your content directory? By default, Routing does not route files that exist on disk (actually checks the VirtualPathProvider). So if you are putting static content in the Content directory, you might not need the ignore route.\n\n",
"This can be quite tricky.\nWhen attempting to figure out how to map route data into a route, the system currently searches top-down until it finds something where all the required information is provided, and then stuffs everything else into query parameters.\nSince the required information for the route \"Content/{*pathInfo}\" is entirely satisfied always (no required data at all in this route), and it's near the top of the route list, then all your attempts to map to unnamed routes will match this pattern, and all your URLs will be based on this (\"Content?action=foo&controller=bar\")\nUnfortunately, there's no way around this with action routes. If you use named routes (f.e., choosing Html.RouteLink instead of Html.ActionLink), then you can specify the name of the route to match. It's less convenient, but more precise.\nIMO, complex routes make the action-routing system basically fall over. In applications where I have something other than the default routes, I almost always end up reverting to named-route based URL generation to ensure I'm always getting the right route.\n"
] | [
15,
5
] | [] | [] | [
"asp.net",
"asp.net_mvc",
"c#",
"routing"
] | stackoverflow_0000030302_asp.net_asp.net_mvc_c#_routing.txt |
Q:
Using MVP - How to use Events Properly for Testing
I've just started using the MVP pattern in the large ASP.NET application that I'm building (re-building actually) and I am having a hard time figuring out how I should be using Events applied to the view.
Say I have 2 drop down lists in a User Control, where one is dependent on the other's value:
<%@ Control Language="vb" AutoEventWireup="false" CodeBehind="ucTestMVP.ascx.vb" Inherits=".ucTestMVP" %>
<asp:DropDownList ID="ddlCountry" runat="server" AutoPostBack="True" />
<asp:DropDownList ID="ddlCity" runat="server" />
How should the AutoPostBack Event be defined in the interface? Should it be an event that is handled by the User Control like this:
Public Partial Class ucTestMVP
Inherits System.Web.UI.UserControl
Implements ITestMVPView
Protected Sub PageLoad(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
If Not Page.IsPostBack Then
Dim presenter As New TestMVPPresenter(Me)
presenter.InitView()
End If
End Sub
Private Sub ddlCountrySelectedIndexChanged(ByVal sender As Object, ByVal e As System.EventArgs) Handles ddlCountry.SelectedIndexChanged
Dim presenter as New TestMVPPresenter(Me)
presenter.CountryDDLIndexChanged()
End Sub
End Class
Or should there be an event defined at the Interface? If this is the preferred pattern, how do I add events to be handled and used?
A:
I don't know if there's a universally preferred pattern. I tend to prefer adding the event to the view interface and having the presenter respond to the view. I described this pattern in more detail here.
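To sketch what that looks like with the control from the question (the member names here are invented, and this is only one way to wire it): the view interface exposes an event, the user control raises it from the DropDownList handler, and the presenter subscribes in its constructor.
Public Interface ITestMVPView
    Event CountrySelected As EventHandler
End Interface

' In ucTestMVP: forward the control event through the interface
Public Event CountrySelected As EventHandler Implements ITestMVPView.CountrySelected

Private Sub ddlCountry_SelectedIndexChanged(ByVal sender As Object, ByVal e As EventArgs) _
        Handles ddlCountry.SelectedIndexChanged
    RaiseEvent CountrySelected(Me, EventArgs.Empty)
End Sub

' In TestMVPPresenter's constructor:
AddHandler view.CountrySelected, AddressOf OnCountrySelected

The payoff for testing is that a fake ITestMVPView can raise CountrySelected directly, so the presenter can be exercised in a unit test without any ASP.NET page lifecycle.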
| Using MVP - How to use Events Properly for Testing | I've just started using the MVP pattern in the large ASP.NET application that I'm building (re-building actually) and I am having a hard time figuring out how I should be using Events applied to the view.
Say I have 2 drop down lists in a User Control, where one is dependent on the other's value:
<%@ Control Language="vb" AutoEventWireup="false" CodeBehind="ucTestMVP.ascx.vb" Inherits=".ucTestMVP" %>
<asp:DropDownList ID="ddlCountry" runat="server" AutoPostBack="True" />
<asp:DropDownList ID="ddlCity" runat="server" />
How should the AutoPostBack Event be defined in the interface? Should it be an event that is handled by the User Control like this:
Public Partial Class ucTestMVP
Inherits System.Web.UI.UserControl
Implements ITestMVPView
Protected Sub PageLoad(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
If Not Page.IsPostBack Then
Dim presenter As New TestMVPPresenter(Me)
presenter.InitView()
End If
End Sub
Private Sub ddlCountrySelectedIndexChanged(ByVal sender As Object, ByVal e As System.EventArgs) Handles ddlCountry.SelectedIndexChanged
Dim presenter as New TestMVPPresenter(Me)
presenter.CountryDDLIndexChanged()
End Sub
End Class
Or should there be an event defined at the Interface? If this is the preferred pattern, how do I add events to be handled and used?
| [
"I don't know if there's a universally preferred pattern. I tend to prefer adding the event to the view interface and having the presenter respond to the view. I described this pattern in more detail here.\n"
] | [
2
] | [] | [] | [
"asp.net",
"design_patterns",
"mvp"
] | stackoverflow_0000030541_asp.net_design_patterns_mvp.txt |
Q:
Parsing XML Elements & Attributes with Perl
So I wrote some perl that would parse results returned from the Amazon Web Services. I am using the XML::Simple package. For the most part, everything worked when I pulled out an element. However, the problem I ran into was when an element had an attribute as well. Then I get an error that the item is a Hash.
Here's what I did if I wanted to get the Running Time for a DVD: I just created an item to hold the specific info for this one-off item.
// XML
<ProductGroup>DVD</ProductGroup>
<RunningTime Units="minutes">90</RunningTime>
// Perl to parse XML
my $item = $xml->XMLin($content, KeyAttr => { Item => 'ASIN'}, ForceArray => ['ASIN']);
$ProductGroup = $item->{Items}->{Item}->{ItemAttributes}->{ProductGroup};
if(ref($item->{Items}->{Item}->{ItemAttributes}->{RunningTime}) eq 'HASH'){
$RunningTimeXML = $xml->XMLin($content, KeyAttr => { Item => 'ASIN'}, NoAttr => 1);
$RunningTime = $RunningTimeXML->{Items}->{Item}->{ItemAttributes}->{RunningTime};
}
Is there a way I can access both elements and attributes from one item?
A:
$item is a hashref that looks like this:
$item = {
'RunningTime' => {'content' => '90', 'Units' => 'minutes'},
'ProductGroup' => 'DVD'
};
Therefore you can get the running time like this:
$RunningTime = $item->{RunningTime}->{content}
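If the structure ever surprises you again, dumping it shows exactly how XML::Simple folded elements and attributes together:
use Data::Dumper;
print Dumper($item);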
| Parsing XML Elements & Attributes with Perl | So I wrote some perl that would parse results returned from the Amazon Web Services. I am using the XML::Simple package. For the most part, everything worked when I pulled out an element. However, the problem I ran into was when an element had an attribute as well. Then I get an error that the item is a Hash.
Here's what I did if I wanted to get the Running Time for a DVD: I just created an item to hold the specific info for this one-off item.
// XML
<ProductGroup>DVD</ProductGroup>
<RunningTime Units="minutes">90</RunningTime>
// Perl to parse XML
my $item = $xml->XMLin($content, KeyAttr => { Item => 'ASIN'}, ForceArray => ['ASIN']);
$ProductGroup = $item->{Items}->{Item}->{ItemAttributes}->{ProductGroup};
if(ref($item->{Items}->{Item}->{ItemAttributes}->{RunningTime}) eq 'HASH'){
$RunningTimeXML = $xml->XMLin($content, KeyAttr => { Item => 'ASIN'}, NoAttr => 1);
$RunningTime = $RunningTimeXML->{Items}->{Item}->{ItemAttributes}->{RunningTime};
}
Is there a way I can access both elements and attributes from one item?
| [
"$item is a hashref that looks like this:\n$item = {\n 'RunningTime' => {'content' => '90', 'Units' => 'minutes'},\n 'ProductGroup' => 'DVD'\n};\n\nTherefore you can get the running time like this:\n$RunningTime = $item->{RunningTime}->{content}\n\n"
] | [
6
] | [] | [] | [
"amazon_web_services",
"perl",
"xml"
] | stackoverflow_0000030454_amazon_web_services_perl_xml.txt |
Q:
Compare Version Identifiers
Here is my code, which takes two version identifiers in the form "1, 5, 0, 4" or "1.5.0.4" and determines which is the newer version.
Suggestions or improvements, please!
/// <summary>
/// Compares two specified version strings and returns an integer that
/// indicates their relationship to one another in the sort order.
/// </summary>
/// <param name="strA">the first version</param>
/// <param name="strB">the second version</param>
/// <returns>less than zero if strA is less than strB, equal to zero if
/// strA equals strB, and greater than zero if strA is greater than strB</returns>
public static int CompareVersions(string strA, string strB)
{
char[] splitTokens = new char[] {'.', ','};
string[] strAsplit = strA.Split(splitTokens, StringSplitOptions.RemoveEmptyEntries);
string[] strBsplit = strB.Split(splitTokens, StringSplitOptions.RemoveEmptyEntries);
int[] versionA = new int[4];
int[] versionB = new int[4];
for (int i = 0; i < 4; i++)
{
versionA[i] = Convert.ToInt32(strAsplit[i]);
versionB[i] = Convert.ToInt32(strBsplit[i]);
}
// now that we have parsed the input strings, compare them
return RecursiveCompareArrays(versionA, versionB, 0);
}
/// <summary>
/// Recursive function for comparing arrays, 0-index is highest priority
/// </summary>
private static int RecursiveCompareArrays(int[] versionA, int[] versionB, int idx)
{
if (versionA[idx] < versionB[idx])
return -1;
else if (versionA[idx] > versionB[idx])
return 1;
else
{
Debug.Assert(versionA[idx] == versionB[idx]);
if (idx == versionA.Length - 1)
return 0;
else
return RecursiveCompareArrays(versionA, versionB, idx + 1);
}
}
@ Darren Kopp:
The version class does not handle versions of the format 1.0.0.5.
A:
Use the Version class.
Version a = new Version("1.0.0.0");
Version b = new Version("2.0.0.0");
Console.WriteLine(string.Format("Newer: {0}", (a > b) ? "a" : "b"));
// prints b
A:
The System.Version class does not support versions with commas in it, so the solution presented by Darren Kopp is not sufficient.
Here is a version that is as simple as possible (but no simpler).
It uses System.Version but achieves compatibility with version numbers like "1, 2, 3, 4" by doing a search-replace before comparing.
/// <summary>
/// Compare versions of form "1,2,3,4" or "1.2.3.4". Throws FormatException
/// in case of invalid version.
/// </summary>
/// <param name="strA">the first version</param>
/// <param name="strB">the second version</param>
/// <returns>less than zero if strA is less than strB, equal to zero if
/// strA equals strB, and greater than zero if strA is greater than strB</returns>
public static int CompareVersions(String strA, String strB)
{
Version vA = new Version(strA.Replace(",", "."));
Version vB = new Version(strB.Replace(",", "."));
return vA.CompareTo(vB);
}
The code has been tested with:
static void Main(string[] args)
{
Test("1.0.0.0", "1.0.0.1", -1);
Test("1.0.0.1", "1.0.0.0", 1);
Test("1.0.0.0", "1.0.0.0", 0);
Test("1, 0.0.0", "1.0.0.0", 0);
Test("9, 5, 1, 44", "3.4.5.6", 1);
Test("1, 5, 1, 44", "3.4.5.6", -1);
Test("6,5,4,3", "6.5.4.3", 0);
try
{
CompareVersions("2, 3, 4 - 4", "1,2,3,4");
Console.WriteLine("Exception should have been thrown");
}
catch (FormatException e)
{
Console.WriteLine("Got exception as expected.");
}
Console.ReadLine();
}
private static void Test(string lhs, string rhs, int expected)
{
int result = CompareVersions(lhs, rhs);
Console.WriteLine("Test(\"" + lhs + "\", \"" + rhs + "\", " + expected +
(result.Equals(expected) ? " succeeded." : " failed."));
}
A:
Well, since you only have a four element array you may just want to unroll the recursion to save time. Passing arrays as arguments will eat up memory and leave a mess for the GC to clean up later.
A:
If you can assume that each place in the version string will only be one number (or at least the last 3), you can just remove the commas or periods and compare...which would be a lot faster...not as robust, but you don't always need that.
public static int CompareVersions(string strA, string strB)
{
    char[] splitTokens = new char[] {'.', ','};
    string[] strAsplit = strA.Split(splitTokens, StringSplitOptions.RemoveEmptyEntries);
    string[] strBsplit = strB.Split(splitTokens, StringSplitOptions.RemoveEmptyEntries);
    string vA = string.Empty;
    string vB = string.Empty;

    // Concatenate the pieces, then compare the resulting numbers.
    for (int i = 0; i < 4; i++)
    {
        vA += strAsplit[i];
        vB += strBsplit[i];
    }

    int versionA = Convert.ToInt32(vA);
    int versionB = Convert.ToInt32(vB);

    if (versionA > versionB)
        return 1;
    else if (versionA < versionB)
        return -1;
    else
        return 0; // they are equal
}
And yes, I'm also assuming 4 version places here...
| Compare Version Identifiers | Here is my code, which takes two version identifiers in the form "1, 5, 0, 4" or "1.5.0.4" and determines which is the newer version.
Suggestions or improvements, please!
/// <summary>
/// Compares two specified version strings and returns an integer that
/// indicates their relationship to one another in the sort order.
/// </summary>
/// <param name="strA">the first version</param>
/// <param name="strB">the second version</param>
/// <returns>less than zero if strA is less than strB, equal to zero if
/// strA equals strB, and greater than zero if strA is greater than strB</returns>
public static int CompareVersions(string strA, string strB)
{
char[] splitTokens = new char[] {'.', ','};
string[] strAsplit = strA.Split(splitTokens, StringSplitOptions.RemoveEmptyEntries);
string[] strBsplit = strB.Split(splitTokens, StringSplitOptions.RemoveEmptyEntries);
int[] versionA = new int[4];
int[] versionB = new int[4];
for (int i = 0; i < 4; i++)
{
versionA[i] = Convert.ToInt32(strAsplit[i]);
versionB[i] = Convert.ToInt32(strBsplit[i]);
}
// now that we have parsed the input strings, compare them
return RecursiveCompareArrays(versionA, versionB, 0);
}
/// <summary>
/// Recursive function for comparing arrays, 0-index is highest priority
/// </summary>
private static int RecursiveCompareArrays(int[] versionA, int[] versionB, int idx)
{
if (versionA[idx] < versionB[idx])
return -1;
else if (versionA[idx] > versionB[idx])
return 1;
else
{
Debug.Assert(versionA[idx] == versionB[idx]);
if (idx == versionA.Length - 1)
return 0;
else
return RecursiveCompareArrays(versionA, versionB, idx + 1);
}
}
@ Darren Kopp:
The version class does not handle versions of the format 1.0.0.5.
| [
"Use the Version class.\nVersion a = new Version(\"1.0.0.0\");\nVersion b = new Version(\"2.0.0.0\");\n\nConsole.WriteLine(string.Format(\"Newer: {0}\", (a > b) ? \"a\" : \"b\"));\n// prints b\n\n",
"The System.Version class does not support versions with commas in it, so the solution presented by Darren Kopp is not sufficient.\nHere is a version that is as simple as possible (but no simpler).\nIt uses System.Version but achieves compatibility with version numbers like \"1, 2, 3, 4\" by doing a search-replace before comparing.\n /// <summary>\n /// Compare versions of form \"1,2,3,4\" or \"1.2.3.4\". Throws FormatException\n /// in case of invalid version.\n /// </summary>\n /// <param name=\"strA\">the first version</param>\n /// <param name=\"strB\">the second version</param>\n /// <returns>less than zero if strA is less than strB, equal to zero if\n /// strA equals strB, and greater than zero if strA is greater than strB</returns>\n public static int CompareVersions(String strA, String strB)\n {\n Version vA = new Version(strA.Replace(\",\", \".\"));\n Version vB = new Version(strB.Replace(\",\", \".\"));\n\n return vA.CompareTo(vB);\n }\n\nThe code has been tested with:\n static void Main(string[] args)\n {\n Test(\"1.0.0.0\", \"1.0.0.1\", -1);\n Test(\"1.0.0.1\", \"1.0.0.0\", 1);\n Test(\"1.0.0.0\", \"1.0.0.0\", 0);\n Test(\"1, 0.0.0\", \"1.0.0.0\", 0);\n Test(\"9, 5, 1, 44\", \"3.4.5.6\", 1);\n Test(\"1, 5, 1, 44\", \"3.4.5.6\", -1);\n Test(\"6,5,4,3\", \"6.5.4.3\", 0);\n\n try\n {\n CompareVersions(\"2, 3, 4 - 4\", \"1,2,3,4\");\n Console.WriteLine(\"Exception should have been thrown\");\n }\n catch (FormatException e)\n {\n Console.WriteLine(\"Got exception as expected.\");\n }\n\n Console.ReadLine();\n }\n\n private static void Test(string lhs, string rhs, int expected)\n {\n int result = CompareVersions(lhs, rhs);\n Console.WriteLine(\"Test(\\\"\" + lhs + \"\\\", \\\"\" + rhs + \"\\\", \" + expected +\n (result.Equals(expected) ? \" succeeded.\" : \" failed.\"));\n }\n\n",
"Well, since you only have a four element array you may just want ot unroll the recursion to save time. Passing arrays as arguments will eat up memory and leave a mess for the GC to clean up later.\n",
"If you can assume that each place in the version string will only be one number (or at least the last 3, you can just remove the commas or periods and compare...which would be a lot faster...not as robust, but you don't always need that.\npublic static int CompareVersions(string strA, string strB)\n{\n char[] splitTokens = new char[] {'.', ','};\n string[] strAsplit = strA.Split(splitTokens, StringSplitOptions.RemoveEmptyEntries);\n string[] strBsplit = strB.Split(splitTokens, StringSplitOptions.RemoveEmptyEntries);\n int versionA = 0;\n int versionB = 0;\n string vA = string.Empty;\n string vB = string.Empty;\n\n for (int i = 0; i < 4; i++)\n {\n vA += strAsplit[i];\n vB += strBsplit[i];\n versionA[i] = Convert.ToInt32(strAsplit[i]);\n versionB[i] = Convert.ToInt32(strBsplit[i]);\n }\n\n versionA = Convert.ToInt32(vA);\n versionB = Convert.ToInt32(vB);\n\n if(vA > vB)\n return 1;\n else if(vA < vB)\n return -1;\n else\n return 0; //they are equal\n}\n\nAnd yes, I'm also assuming 4 version places here...\n"
] | [
40,
31,
1,
0
] | [] | [] | [
".net",
"c#",
"compare",
"versions"
] | stackoverflow_0000030494_.net_c#_compare_versions.txt |
Q:
Best way to enumerate all available video codecs on Windows?
I'm looking for a good way to enumerate all the Video codecs on a Windows XP/Vista machine.
I need to present the user with a set of video codecs, including the compressors and decompressors. The output would look something like
Available Decoders
DiVX Version 6.0
XVID
Motion JPEG
CompanyX's MPEG-2 Decoder
Windows Media Video
**Available Encoders**
DiVX Version 6.0
Windows Media Video
The problem that I am running into is that there is no reliable way to capture all of the decoders available to the system. For instance:
You can enumerate all the decompressors using DirectShow, but this tells you nothing about the compressors (encoders).
You can enumerate all the Video For Windows components, but you get no indication if these are encoders or decoders.
There are DirectShow filters that may do the job for you perfectly well (Motion JPEG filter for example), but there is no indication that a particular DirectShow filter is a "video decoder".
Has anyone found a generalized solution for this problem using any of the Windows APIs? Does the Windows Vista Media Foundation API solve any of these issues?
A:
This is best handled by DirectShow.
DirectShow is currently a part of the platform SDK.
HRESULT extractFriendlyName( IMoniker* pMk, std::wstring& str )
{
assert( pMk != 0 );
IPropertyBag* pBag = 0;
HRESULT hr = pMk->BindToStorage(0, 0, IID_IPropertyBag, (void **)&pBag );
if( FAILED( hr ) || pBag == 0 )
{
return hr;
}
VARIANT var;
var.vt = VT_BSTR;
hr = pBag->Read(L"FriendlyName", &var, NULL);
if( SUCCEEDED( hr ) && var.bstrVal != 0 )
{
str = reinterpret_cast<wchar_t*>( var.bstrVal );
SysFreeString(var.bstrVal);
}
pBag->Release();
return hr;
}
HRESULT enumerateDShowFilterList( const CLSID& category )
{
HRESULT rval = S_OK;
HRESULT hr;
ICreateDevEnum* pCreateDevEnum = 0; // volatile, will be destroyed at the end
hr = ::CoCreateInstance( CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER, IID_ICreateDevEnum, reinterpret_cast<void**>( &pCreateDevEnum ) );
assert( SUCCEEDED( hr ) && pCreateDevEnum != 0 );
if( FAILED( hr ) || pCreateDevEnum == 0 )
{
return hr;
}
IEnumMoniker* pEm = 0;
hr = pCreateDevEnum->CreateClassEnumerator( category, &pEm, 0 );
    // If hr == S_FALSE, no error has occurred. In this case pEm is NULL, because
    // a filter does not exist, e.g. no video capture devices are connected to
    // the computer or no codecs are installed.
assert( SUCCEEDED( hr ) && ((hr == S_OK && pEm != 0 ) || hr == S_FALSE) );
if( FAILED( hr ) )
{
pCreateDevEnum->Release();
return hr;
}
if( hr == S_OK && pEm != 0 ) // In this case pEm is != NULL
{
pEm->Reset();
ULONG cFetched;
IMoniker* pM = 0;
while( pEm->Next(1, &pM, &cFetched) == S_OK && pM != 0 )
{
std::wstring str;
            if( SUCCEEDED( extractFriendlyName( pM, str ) ) )
{
// str contains the friendly name of the filter
// pM->BindToObject creates the filter
std::wcout << str << std::endl;
}
pM->Release();
}
pEm->Release();
}
pCreateDevEnum->Release();
return rval;
}
The following call enumerates all video compressors to the console :
enumerateDShowFilterList( CLSID_VideoCompressorCategory );
The MSDN page Filter Categories lists all other 'official' categories.
I hope that is a good starting point for you.
A:
The answer above doesn't account for decompressors. There is no CLSID_VideoDecompressorCategory. Is there a way to ask a filter if it is a video decompressor?
Not that I know of.
Most filters in this list are codecs, so contain both a encoder and decoder.
The filters in the
CLSID_ActiveMovieCategories
are wrappers around the VfW filters installed.
(Some software companies create their own categories, so there may be 'non official' categories on some machines)
If you want to see all installed categories, use GraphEdit which is supplied with the DirectShow SDK.
GraphEdit itself is a great tool to see what DirectShow does under the hood. So maybe that may be a source of more information about the filters (and their interactions) on your system.
A:
Another point I forgot.
The Windows Media Foundation is a toolkit for using WMV/WMA. It does not provide all things that DirectShow supports. It is really only a SDK for Windows Media.
There are bindings in WMV/WMA to DirectShow, so that you can use WM* files/streams in DirectShow applications.
| Best way to enumerate all available video codecs on Windows? | I'm looking for a good way to enumerate all the Video codecs on a Windows XP/Vista machine.
I need present the user with a set of video codecs, including the compressors and decompressors. The output would look something like
Available Decoders
DiVX Version 6.0
XVID
Motion JPEG
CompanyX's MPEG-2 Decoder
Windows Media Video
**Available Encoders**
DiVX Version 6.0
Windows Media Video
The problem that I am running into is that there is no reliable way to to capture all of the decoders available to the system. For instance:
You can enumerate all the decompressors using DirectShow, but this tells you nothing about the compressors (encoders).
You can enumerate all the Video For Windows components, but you get no indication if these are encoders or decoders.
There are DirectShow filters that may do the job for you perfectly well (Motion JPEG filter for example), but there is no indication that a particular DirectShow filter is a "video decoder".
Has anyone found a generalizes solution for this problem using any of the Windows APIs? Does the Windows Vista Media Foundation API solve any of these issues?
| [
"This is best handled by DirectShow.\nDirectShow is currently a part of the platform SDK.\nHRESULT extractFriendlyName( IMoniker* pMk, std::wstring& str )\n{\n assert( pMk != 0 );\n IPropertyBag* pBag = 0;\n HRESULT hr = pMk->BindToStorage(0, 0, IID_IPropertyBag, (void **)&pBag );\n if( FAILED( hr ) || pBag == 0 )\n {\n return hr;\n }\n VARIANT var;\n var.vt = VT_BSTR;\n hr = pBag->Read(L\"FriendlyName\", &var, NULL);\n if( SUCCEEDED( hr ) && var.bstrVal != 0 )\n {\n str = reinterpret_cast<wchar_t*>( var.bstrVal );\n SysFreeString(var.bstrVal);\n }\n pBag->Release();\n return hr;\n}\n\n\nHRESULT enumerateDShowFilterList( const CLSID& category )\n{\n HRESULT rval = S_OK;\n HRESULT hr;\n ICreateDevEnum* pCreateDevEnum = 0; // volatile, will be destroyed at the end\n hr = ::CoCreateInstance( CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER, IID_ICreateDevEnum, reinterpret_cast<void**>( &pCreateDevEnum ) );\n\n assert( SUCCEEDED( hr ) && pCreateDevEnum != 0 );\n if( FAILED( hr ) || pCreateDevEnum == 0 )\n {\n return hr;\n }\n\n IEnumMoniker* pEm = 0;\n hr = pCreateDevEnum->CreateClassEnumerator( category, &pEm, 0 );\n\n // If hr == S_FALSE, no error is occured. In this case pEm is NULL, because\n // a filter does not exist e.g no video capture devives are connected to\n // the computer or no codecs are installed.\n assert( SUCCEEDED( hr ) && ((hr == S_OK && pEm != 0 ) || hr == S_FALSE) );\n if( FAILED( hr ) )\n {\n pCreateDevEnum->Release();\n return hr;\n }\n\n if( hr == S_OK && pEm != 0 ) // In this case pEm is != NULL\n {\n pEm->Reset();\n ULONG cFetched;\n IMoniker* pM = 0;\n while( pEm->Next(1, &pM, &cFetched) == S_OK && pM != 0 )\n {\n std::wstring str;\n\n if( SUCCEEDED( extractFriendlyName( pM, str ) )\n {\n // str contains the friendly name of the filter\n // pM->BindToObject creates the filter\n std::wcout << str << std::endl;\n }\n\n pM->Release();\n }\n pEm->Release();\n }\n pCreateDevEnum->Release();\n return rval;\n}\n\nThe following call enumerates all video compressors to the console :\nenumerateDShowFilterList( CLSID_VideoCompressorCategory );\n\nThe MSDN page Filter Categories lists all other 'official' categories.\nI hope that is a good starting point for you.\n",
"\nThe answer above doesn't account for decompressors. There is no CLSID_VideoDecompressorCategory. Is the are a way to ask a filter if it is a video decompressor?\n\nNot that I know of.\nMost filters in this list are codecs, so contain both a encoder and decoder.\nThe filters in the \nCLSID_ActiveMovieCategories\n\nare wrappers around the VfW filters installed.\n(Some software companies create their own categories, so there may be 'non official' categories on some machines)\nIf you want to see all installed categories, use GraphEdit which is supplied with the DirectShow SDK.\nGraphEdit itself is a great tool to see what DirectShow does under the hood. So maybe that may be a source of more information about the filters (and their interactions) on your system.\n",
"Another point I forgot.\nThe Windows Media Foundation is a toolkit for using WMV/WMA. It does not provide all things that DirectShow supports. It is really only a SDK for Windows Media.\nThere are bindings in WMV/WMA to DirectShow, so that you can use WM* files/streams in DirectShow applications.\n"
] | [
7,
2,
1
] | [] | [] | [
"codec",
"directshow",
"media",
"video"
] | stackoverflow_0000030539_codec_directshow_media_video.txt |
Q:
Trigger without a transaction?
Is it possible to create a trigger that will not be in a transaction?
I want to update data on a linked server with a trigger but due to firewall issues we can't create a distributed transaction between the two servers.
A:
What you probably want is a combination of a queue that contains updates for the linked server and a process that reads data from the queue and updates the remote server. The trigger will then insert a message into the queue as part of the normal transaction. This data will be read by the separate process and used to update the remote server. Logic will be needed in the process to handle errors (and possibly retries).
The queue can be implemented with one or more tables.
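To make the idea concrete, here is a rough C# sketch of what the relay process could look like (the table and column names, such as dbo.RemoteUpdateQueue, are invented for the example, and retry/error handling is only hinted at). The trigger only ever writes to the local queue table inside its own transaction; this process later applies the queued changes to the remote server over an ordinary connection, so no distributed transaction is needed.
// Hypothetical relay process: reads rows queued by the trigger and applies
// them to the remote server on a plain (non-distributed) connection.
using System;
using System.Data;
using System.Data.SqlClient;

class QueueRelay
{
    const string LocalConn = "Data Source=LOCAL;Initial Catalog=MyDb;Integrated Security=SSPI";
    const string RemoteConn = "Data Source=REMOTE;Initial Catalog=MyDb;Integrated Security=SSPI";

    static void Main()
    {
        using (SqlConnection local = new SqlConnection(LocalConn))
        using (SqlConnection remote = new SqlConnection(RemoteConn))
        {
            local.Open();
            remote.Open();

            // Pick up the work queued by the trigger (table name is an assumption).
            DataTable pending = new DataTable();
            using (SqlDataAdapter da = new SqlDataAdapter(
                "SELECT QueueId, CustomerId, NewValue FROM dbo.RemoteUpdateQueue ORDER BY QueueId", local))
            {
                da.Fill(pending);
            }

            foreach (DataRow row in pending.Rows)
            {
                // Apply the change on the remote server first...
                using (SqlCommand update = new SqlCommand(
                    "UPDATE dbo.Customer SET Value = @v WHERE CustomerId = @id", remote))
                {
                    update.Parameters.AddWithValue("@v", row["NewValue"]);
                    update.Parameters.AddWithValue("@id", row["CustomerId"]);
                    update.ExecuteNonQuery();
                }

                // ...and only dequeue once it succeeded, so a failed row simply
                // stays in the queue and is retried on the next run.
                using (SqlCommand dequeue = new SqlCommand(
                    "DELETE FROM dbo.RemoteUpdateQueue WHERE QueueId = @id", local))
                {
                    dequeue.Parameters.AddWithValue("@id", row["QueueId"]);
                    dequeue.ExecuteNonQuery();
                }
            }
        }
    }
}

Run it from a scheduled job (SQL Agent, Task Scheduler, a Windows service, etc.) as often as the data needs to be pushed across.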
A:
I know it's not helpful, so I'll probably get downvoted for this, but really, the solution is to fix the firewall problem.
I think if you use remote (not linked) servers (which are not the preferred option these days) then you can use SET REMOTE_PROC_TRANSACTIONS OFF to prevent the use of DTC for remote transactions, which might do the right thing here. But that probably doesn't help you with a linked server anyway.
| Trigger without a transaction? | Is it possible to create a trigger that will not be in a transaction?
I want to update data on a linked server with a trigger but due to firewall issues we can't create a distributed transaction between the two servers.
| [
"What you probably want is a combination of a queue that contains updates for the linked server and a process that reads data from the queue and updates the remote server. The trigger will then insert a message into the queue as part of the normal transaction. This data will be read by the separate process and used to update the remote server. Logic will needed in the process handle errors (and possibly retries).\nThe queue can be implemented with one or more tables.\n",
"I know it's not helpful, so I'll probably get downvoted for this, but really, the solution is to fix the firewall problem.\nI think if you use remote (not linked) servers (which are not the preferred option these days) then you can use SET REMOTE_PROC_TRANSACTIONS OFF to prevent the use of DTC for remote transactions, which might do the right thing here. But that probably doesn't help you with a linked server anyway.\n"
] | [
3,
2
] | [] | [] | [
"sql_server",
"triggers",
"tsql"
] | stackoverflow_0000019744_sql_server_triggers_tsql.txt |
Q:
How do I retrieve data sent to the web server in ASP.NET?
What are the ways to retrieve data submitted to the web server from a form in the client HTML in ASP.NET?
A:
You can also search through both the Form and QueryString collections at the same time so that the data will be found regardless of the request method.
value = Request("formElementID")
A:
In VB.NET
For POST requests:
value = Request.Form("formElementID")
For GET requests:
value = Request.QueryString("formElementID")
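For completeness, the C# equivalents of the VB.NET lines above look like this (assuming the code runs inside a page or handler where Request is in scope):
string posted = Request.Form["formElementID"];        // POST data
string queried = Request.QueryString["formElementID"]; // GET data
string either = Request["formElementID"];              // searches QueryString, Form, Cookies and ServerVariables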
| How do I retrieve data sent to the web server in ASP.NET? | What are the ways to retrieve data submitted to the web server from a form in the client HTML in ASP.NET?
| [
"You can also search through both the Form and QueryString collections at the same time so that the data will be found regardless of the the request method.\nvalue = Request(\"formElementID\")\n\n",
"In VB.NET\nFor POST requests:\nvalue = Request.Form(\"formElementID\")\n\nFor GET requests:\nvalue = Request.QueryString(\"formElementID\")\n\n"
] | [
4,
0
] | [] | [] | [
"asp.net",
"html"
] | stackoverflow_0000030781_asp.net_html.txt |
Q:
How do I create a draggable and resizable JavaScript popup window?
I want to create a draggable and resizable window in JavaScript for cross browser use, but I want to try and avoid using a framework if I can.
Has anyone got a link or some code that I can use?
A:
JQuery would be a good way to go. And with the Jquery UI plugins (such as draggable), it's a breeze.. (there's a demo here).
Not using a framework, to keep it 'pure', seems just a waste of time to me. There's good stuff, that will save you tremendous amounts of time, time better spent in making your application even better.
But you can always check out the source to get some 'inspiration', and adapt it without the overhead of the stuff you won't use. It's well done and easy to read, and you often discover some cross-browser hacks you didn't even think about..
edit: oh, if you REALLY don't want a framework EVER, just check out their source then.. sure you can use some of it for your application.
A:
JQuery is more focused on a lot of nice utility functions, and makes DOM manipulation a whole lot easier. Basically, I consider it to be Javascript as it should have been. It's a supremely helpful addition to the Javascript language itself.
ExtJS is a suite of GUI components with specific APIs... Use it if you want to easily create components that look like that, otherwise, go with a more flexible framework.
A:
Sometimes you can't choose your environment or architecture, so you're stuck working within constraints like not being able to use frameworks...
A:
Avoiding a framework altogether will leave you with lots of code and a bunch of tedious browser-testing.
If you would consider a framework I'd suggest jQuery with the jqDnR plugin. I think it will solve your problem or perhaps you could combine the functionality of the jQuery draggables with the jQuery resizables
A:
Just trying to avoid large framework downloads to the client for one very small thing, perhaps I am being daft.
I had looked at jQuery but also ExtJS, the documentation and UI 'look' seem far superior and professional in ExtJS ... are there particular reasons for you guys recommending jQuery?
| How do I create a draggable and resizable JavaScript popup window? | I want to create a draggable and resizable window in JavaScript for cross browser use, but I want to try and avoid using a framework if I can.
Has anyone got a link or some code that I can use?
| [
"JQuery would be a good way to go. And with the Jquery UI plugins (such as draggable), it's a breeze.. (there's a demo here).\nNot using a framework, to keep it 'pure', seems just a waste of time to me. There's good stuff, that will save you tremendous amounts of time, time better spent in making your application even better. \nBut you can always check out the source to get some 'inspiration', and adapt it without the overhead of the stuff you won't use. It's well done and easy to read, and you often discover some cross-browser hacks you didn't even think about..\nedit: oh, if you REALLY don't wan't no framework EVER, just check out their source then.. sure you can use some of it for your application. \n",
"JQuery is more focused on a lot of nice utility functions, and makes DOM manipulation a whole lot easier. Basically, I consider it to be Javascript as it should have been. It's a supremely helpful addition to the Javascript language itself.\nExtJS is a suite of GUI components with specific APIs... Use it if you want to easily create components that look like that, otherwise, go with a more flexible framework.\n",
"Sometimes you can't choose your environment or architecture, so you're stuck working within constraints like not being able to use frameworks...\n",
"Avoiding a framework altogether will leave you with lots of code and a bunch of tedious browser-testing.\nIf you would consider a framework I'd suggest jQuery with the jqDnR plugin. I think it will solve your problem or perhaps you could combine the functionality of the jQuery draggables with the jQuery resizables\n",
"Just trying to avoid large framework downloads to the client for one very small thing, perhaps I am being daft.\nI had looked at jQuery but also ExtJS, the documentation and UI 'look' seem far superior and professional in ExtJS ... are there particular reasons for you guys recommending jQuery?\n"
] | [
2,
2,
1,
0,
0
] | [] | [] | [
"dialog",
"javascript"
] | stackoverflow_0000030706_dialog_javascript.txt |
Q:
Understanding Interfaces
I have a class method that returns a list of employees that I can iterate through. What's the best way to return the list? Typically I just return an ArrayList. However, as I understand, interfaces are better suited for this type of action. Which would be the best interface to use? Also, why is it better to return an interface, rather than the implementation (say ArrayList object)? It just seems like a lot more work to me.
A:
Personally, I would use a List<Employee> for creating the list on the backend, and then use IList when you return. When you use interfaces, it gives you the flexibility to change the implementation without having to alter the code that's using yours. If you wanted to stick with an ArrayList, that'd be a non-generic IList.
A:
@ Jason
You may as well return IList<> because an array actually implements this interface.
A:
The best way to do something like this would be to return, as you say, a List, preferably using generics, so it would be List<Employee>.
Returning a List rather than an ArrayList means that if later you decide to use, say, a LinkedList, you don't have to change any of the code other than where you create the object to begin with (i.e., the call to "new ArrayList()").
A:
If all you are doing is iterating through the list, you can define a method that returns the list as IEnumerable (for .NET).
By returning the interface that provides just the functionality you need, if some new collection type comes along in the future that is better/faster/a better match for your application, as long as it still implements IEnumerable you can completely rewrite your method, using the new type inside it, without changing any of the code that calls it.
A:
Is there any reason the collection needs to be ordered? Why not simply return an IEnumerable<Employee>? This gives the bare minimum that is required - if you later wanted some other form of storage, like a Bag or Set or Tree or whatnot, your contract would remain intact.
A:
I disagree with the premise that it's better to return an interface. My reason is that you want to maximize the usefulness that a given block of code exposes.
With that in mind, an interface works for accepting an item as an argument. If a function parameter calls for an array or an ArrayList, that's the only thing you can pass to it. If a function parameter calls for an IEnumerable it will accept either, as well as a number of other objects. It's more useful.
The return value, however, works the opposite way. When you return an IEnumerable, the only thing you can do is enumerate it. If you have a List handy and return that, then code that calls your function can also easily do a number of other things, like get a count.
I stand united with those advising you to get away from the ArrayList, though. Generics are so much better.
A:
Return type for your method should be IList<Employee>.
That means that the caller of your method can use anything that IList offers but cannot use things specific to ArrayList. Then if you feel at some point that LinkedList or YourCustomSuperDuperList offers better performance or other advantages you can safely use it within your method and not screw callers of it.
That's roughly interfaces 101. ;-)
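A small C# sketch of what that looks like in practice (the Employee type and the repository name are made up for the example):
using System.Collections.Generic;

public class Employee
{
    public string Name;
}

public class EmployeeRepository
{
    // Callers only depend on the interface...
    public IList<Employee> GetEmployees()
    {
        // ...so the concrete collection (List<T> today, something else tomorrow)
        // can be swapped without touching any calling code.
        List<Employee> employees = new List<Employee>();
        employees.Add(new Employee());
        return employees;
    }
}

// Calling code:
// IList<Employee> staff = new EmployeeRepository().GetEmployees();
// foreach (Employee e in staff) { ... }

If the callers only ever iterate, the return type could be narrowed further to IEnumerable<Employee> without changing the method body.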
A:
An interface is a contract between the implementation and the user of the implementation.
By using an interface, you allow the implementation to change as much as it wants as long as it maintains the contract for the users.
It also allows multiple implementations to use the same interface so that users can reuse code that interacts with the interface.
A:
You don't say what language you're talking about, but in something .NETish, then it's no more work to return an IList than a List or even an ArrayList, though the mere mention of that obsolete class makes me think you're not talking about .NET.
A:
An interface is essentially a contract that a class has certain methods or attributes; programming to an interface rather than a direct implementation allows for more dynamic and manageable code, as you can completely swap out implementations as long as the "contract" is still held.
In the case you describe, passing an interface does not give you a particular advantage, if it were me, I would pass the ArrayList with the generic type, or pass the Array itself: list.toArray()
A:
Actually you shouldn't return a List if that's a framework, at least not without thinking about it; the recommended class to use is a Collection. The List class has some performance improvements at the cost of extensibility issues. It's in fact an FxCop rule.
You have the reasoning for that in this article
| Understanding Interfaces | I have class method that returns a list of employees that I can iterate through. What's the best way to return the list? Typically I just return an ArrayList. However, as I understand, interfaces are better suited for this type of action. Which would be the best interface to use? Also, why is it better to return an interface, rather than the implementation (say ArrayList object)? It just seems like a lot more work to me.
| [
"Personally, I would use a List<Employee> for creating the list on the backend, and then use IList when you return. When you use interfaces, it gives you the flexability to change the implementation without having to alter who's using your code. If you wanted to stick with an ArrayList, that'd be a non-generic IList.\n",
"@ Jason\nYou may as well return IList<> because an array actually implements this interface.\n",
"The best way to do something like this would be to return, as you say, a List, preferably using generics, so it would be List<Employee>.\nReturning a List rather than an ArrayList means that if later you decide to use, say, a LinkedList, you don't have to change any of the code other than where you create the object to begin with (i.e, the call to \"new ArrayList())\". \n",
"If all you are doing is iterating through the list, you can define a method that returns the list as IEnumerable (for .NET).\nBy returning the interface that provides just the functionality you need, if some new collection type comes along in the future that is better/faster/a better match for your application, as long as it still implements IEnumerable you can completely rewrite your method, using the new type inside it, without changing any of the code that calls it.\n",
"Is there any reason the collection needs to be ordered? Why not simply return an IEnumerable<Employee>? This gives the bare minimum that is required - if you later wanted some other form of storage, like a Bag or Set or Tree or whatnot, your contract would remain intact.\n",
"I disagree with the premise that it's better to return an interface. My reason is that you want to maximize the usefulness a given block of code exposes. \nWith that in mind, an interface works for accepting an item as an argument. If a function parameter calls for an array or an ArrayList, that's the only thing you can pass to it. If a function parameter calls for an IEnumerable it will accept either, as well as a number of other objects. It's more useful\nThe return value, however, works opposite. When you return an IEnumerable, the only thing you can do is enumerate it. If you have a List handy and return that then code that calls your function can also easily do a number of other things, like get a count.\nI stand united with those advising you to get away from the ArrayList, though. Generics are so much better.\n",
"Return type for your method should be IList<Employee>. \nThat means that the caller of your method can use anything that IList offers but cannot use things specific to ArrayList. Then if you feel at some point that LinkedList or YourCustomSuperDuperList offers better performance or other advantages you can safely use it within your method and not screw callers of it.\nThat's roughly interfaces 101. ;-)\n",
"An interface is a contract between the implementation and the user of the implementation.\nBy using an interface, you allow the implementation to change as much as it wants as long as it maintains the contract for the users.\nIt also allows multiple implementations to use the same interface so that users can reuse code that interacts with the interface.\n",
"You don't say what language you're talking about, but in something .NETish, then it's no more work to return an IList than a List or even an ArrayList, though the mere mention of that obsolete class makes me think you're not talking about .NET.\n",
"An interface is essentially a contract that a class has certain methods or attributes; programming to an interface rather then a direct implementation allows for more dynamic and manageable code, as you can completely swap out implementations as long as the \"contract\" is still held.\nIn the case you describe, passing an interface does not give you a particular advantage, if it were me, I would pass the ArrayList with the generic type, or pass the Array itself: list.toArray()\n",
"Actually you shouldn't return a List if thats a framework, at least not without thinking it, the recommended class to use is a Collection. The List class has some performance improvements at the cost of server extendability issues. It's in fact an FXCop rule.\nYou have the reasoning for that in this article\n"
] | [
4,
2,
1,
1,
1,
1,
0,
0,
0,
0,
0
] | [] | [] | [
"interface"
] | stackoverflow_0000030763_interface.txt |
Q:
SharePoint SPContext.List in a custom application page
I have a custom SharePoint application page deployed to the _layouts folder. It's a custom "new form" for a custom content type. During my interactions with this page, I will need to add an item to my list. When the page first loads, I can use SPContext.Current.List to see the current list I'm working with. But after I fill in my form and the form posts back onto itself and IsPostBack is true, then SPContext.Current.List is null so I can't find the list that I need to add my stuff into.
Is this expected?
How should I retain some info about my context list across the postback? Should I just populate some asp:hidden control with my list's guid and then just pull it back from that on the postback? That seems safe, I guess.
FWIW, this is the MOSS 2007 Standard version.
A:
Generally speaking I try and copy whatever approach the product group has taken when looking to add functionality of my own. In this case they add their own edit/view/add pages via the list definition itself.
I built a solution that also needed its own custom "New" form. It isn't open source, unfortunately, but if you are interested you can download it; it's called "Tagged Links" (Social Bookmarking for SharePoint) and you can find some links on my blog.
To give you a few hints and tips, the following should set you off in the right direction:
Created a new list definition.
Created a new Content Type. In the content type you can define your own "FormTemplates" that reference a Rendering Template, which determines what gets displayed in the "Middle" bit of those forms.
Copied the standard Rendering Template, but then made the changes to it that I needed.
Wrapped it all up in a solution, and deployed.
My Rendering Template actually included an overridden "Save" Button where I did a lot of the extra work I needed to do during the save.
Anyway, it is a little too much work in my opinion but, I think, it most closely matches the standard approach taken by the product developers. Let me know if you need more detail and I will see if I can put together a step-by-step blog post, but hopefully this gets you off on the right direction.
A:
I would be surprised if you could do something in a _Layouts file that you can't do in a forms template. You have pretty much the same technologies at your disposal.
Looking at the way SharePoint works with ListItems and Layouts pages (for example "Manage Permissions" on a list item), I can see that they pass some variables in via querystrings:
?obj={76113B3A-FABA-4389-BC85-4BB2CC5AB423},6,LISTITEM&List={76113B3A-FABA-4389-BC85-4BB2CC5AB423}
Perhaps they grab the context back each time programmatically using these values.
A:
I'm not using a custom "new form", so this might not apply. I added an event receiver to my custom content type and then do my custom code in the ItemAdded or ItemAdding events. This code fires when an item is added to a list. You can use the event receiver properties to get to the parent List, Web, and Site.
A:
I'd like to think my issue is "special" here, since I am using a custom form. I chose to use a custom form rather than a custom FormTemplate simply because I'm doing a lot of stuff that's not very SharePoint list-like (making ajax calls to get info from a third-party app then generating some dynamic form elements based on that ajax result, then subsequent processing of that data on postback). I thought it'd be a nightmare to try this within the usual custom rendering template mechanism.
I also don't think I can supply the custom form declarations in the list definition itself, because I have multiple content types associated with this list, and each content type has its own custom form (the other type is thankfully much simpler).
Actually, my simple way of keeping the list guid in my hidden field was a very low impact way to address this specific problem. My main concern is that I'm not sure why the SPContext just loses all its usefulness when I postback here, which makes me think I'm doing something wrong.
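For what it's worth, the hidden-field workaround can stay very small. Here is a rough sketch of the code-behind for such an application page (the control and class names are invented here, and error handling is omitted):
// Requires: using System; using System.Web.UI.WebControls;
// using Microsoft.SharePoint; using Microsoft.SharePoint.WebControls;
public class MyCustomNewFormPage : LayoutsPageBase
{
    // Declared as <asp:HiddenField runat="server" id="listIdField" /> in the markup.
    protected HiddenField listIdField;

    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        if (!IsPostBack)
        {
            // SPContext.Current.List is populated on the initial GET, so stash its ID now.
            listIdField.Value = SPContext.Current.List.ID.ToString();
        }
    }

    protected void SaveButton_Click(object sender, EventArgs e)
    {
        // On postback, re-open the list from the GUID kept in the hidden field.
        SPWeb web = SPContext.Current.Web;
        SPList list = web.Lists[new Guid(listIdField.Value)];

        SPListItem item = list.Items.Add();
        item["Title"] = "Created from the custom new form";
        item.Update();
    }
}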
| SharePoint SPContext.List in a custom application page | I have a custom SharePoint application page deployed to the _layouts folder. It's a custom "new form" for a custom content type. During my interactions with this page, I will need to add an item to my list. When the page first loads, I can use SPContext.Current.List to see the current list I'm working with. But after I fill in my form and the form posts back onto itself and IsPostBack is true, then SPContext.Current.List is null so I can't find the list that I need to add my stuff into.
Is this expected?
How should I retain some info about my context list across the postback? Should I just populate some asp:hidden control with my list's guid and then just pull it back from that on the postback? That seems safe, I guess.
FWIW, this is the MOSS 2007 Standard version.
| [
"Generally speaking I try and copy whatever approach the product group has taken when looking to add functionality of my own. In this case they add their own edit/view/add pages via the list definition itself.\nI built a solution that also needed its own custom \"New\" form, not open source unfortunately, though if you are interested you can download it, its called \"Tagged Links\" (Social Bookmarking for SharePoint) and you can find some links on my blog.\nTo give you a few hints and tips, the following should set you off in the right direction:\n\nCreated a new list definition.\nCreated a new Content Type In the content type you can define your own \"FormTemplates\" that references a Rendering Template which determine what gets displayed in the \"Middle\" bit of those forms.\nCopied the standard Rendering Template, but then made the changes to it that I\nneeded. \nWrapped it all up in a solution, and deployed.\n\nMy Rendering Template actually included an overridden \"Save\" Button where I did a lot of the extra work I needed to do during the save.\nAnyway, it is a little too much work in my opinion but, I think, it most closely matches the standard approach taken by the product developers. Let me know if you need more detail and I will see if I can put together a step-by-step blog post, but hopefully this gets you off on the right direction.\n",
"I would be surprised if you could do something in a _Layouts file that you can't do in a forms template. You have pretty much the same technologies at your disposal. \nLooking at the way SharePoint works with ListItems and Layouts pages (for example \"Manage Permissions\" on a list item), I can see that they pass some variables in via querystrings:\n?obj={76113B3A-FABA-4389-BC85-4BB2CC5AB423},6,LISTITEM&List={76113B3A-FABA-4389-BC85-4BB2CC5AB423}\nPerhaps they grab the context back each time programmatically using these values.\n",
"I'm not using a custom \"new form\", so this might not apply. I added an event receiver to my custom content type and then do my custom code in the ItemAdded or ItemAdding events. This code fires when the event is added to a list. You can use the event receiver properties to get to the parent List, Web, and Site.\n",
"I'd like to think my issue is \"special\" here, since I am using a custom form. I chose to use a custom form rather than a custom FormTemplate simply because I'm doing a lot of stuff that's not very SharePoint list-like (making ajax calls to get info from a third-party app then generating some dynamic form elements based on that ajax result, then subsequent processing of that data on postback). I thought it'd be a nightmare to try this within the usual custom rendering template mechanism.\nI also don't think I can supply the custom form declarations in the list definition itself, because I have multiple content types associated with this list, and each content type has its own custom form (the other type is thankfully much simpler).\nActually, my simple way of keeping the list guid in my hidden field was a very low impact way to address this specific problem. My main concern is that I'm not sure why the SPContext just loses all its usefulness when I postback here, which makes me think I'm doing something wrong.\n"
] | [
2,
2,
0,
0
] | [] | [] | [
"applicationpage",
"sharepoint",
"spcontext"
] | stackoverflow_0000029030_applicationpage_sharepoint_spcontext.txt |
Q:
Summary fields in Crystal Report VS2008
I need to have a summary field on each page of the report, and from page 2 onward the same summary has to appear at the top of the page. Anyone know how to do this?
Ex:
>
> Page 1
>
> Name Value
> a 1
> b 3
> Total 4
>
> Page 2
> Name Value
> Total Before 4
> c 5
> d 1
> Total 10
A:
Create a new Running Total Field called, for example "RTotal". In "Field to summarize" select "Value", in "Type of summary" select "sum", under "Evaluate" select "For each record". You can then drag this field into your report to use as the "Total" at the bottom of each page.
You cannot use this running total field in the page header too, however, because Crystal will add the value in the first row on the page to it first (so in your example it would show 9 rather than 4 at the top of page 2). To work around this, create a formula field which subtracts the current value of the Value field from the running total (e.g. {#RTotal}-{TableName.Value}), and put this formula field in your page header.
A:
I do not understand your question all the way.
If you need an overall summary that is repeated, you would need a sub-report that is shown in the report multiple times.
| Summary fields in Crystal Report VS2008 | I need to have a summary field in each page of the report and in page 2 and forward the same summary has to appear at the top of the page. Anyone know how to do this?
Ex:
>
> Page 1
>
> Name Value
> a 1
> b 3
> Total 4
>
> Page 2
> Name Value
> Total Before 4
> c 5
> d 1
> Total 10
| [
"Create a new Running Total Field called, for example \"RTotal\". In \"Field to summarize\" select \"Value\", in \"Type of summary\" select \"sum\", under \"Evaluate\" select \"For each record\". You can then drag this field into your report to use as the \"Total\" at the bottom of each page.\nYou cannot use this running total field in the page header too, however, because Crystal will add the value in the first row on the page to it first (so in your example it would show 9 rather than 4 at the top of page 2). To work around this, create a formula field which subtracts the current value of the Value field from the running total (e.g. {#RTotal}-{TableName.Value}), and put this formula field in your page header.\n",
"I do not understand your question all the way.\nIf you need an overall summary that is repeated, you would need a sub-report that have shown in the report multiple times.\n"
] | [
1,
0
] | [] | [] | [
"crystal_reports",
"visual_studio_2008"
] | stackoverflow_0000025938_crystal_reports_visual_studio_2008.txt |
Q:
How should I test a method that populates a list from a DataReader?
So I'm working on some legacy code that's heavy on the manual database operations. I'm trying to maintain some semblance of quality here, so I'm going TDD as much as possible.
The code I'm working on needs to populate, let's say a List<Foo> from a DataReader that returns all the fields required for a functioning Foo. However, if I want to verify that the code in fact returns one list item per one database row, I'm writing test code that looks something like this:
Expect.Call(reader.Read()).Return(true);
Expect.Call(reader["foo_id"]).Return((long) 1);
// ....
Expect.Call(reader.Read()).Return(true);
Expect.Call(reader["foo_id"]).Return((long) 2);
// ....
Expect.Call(reader.Read()).Return(false);
Which is rather tedious and rather easily broken, too.
How should I be approaching this issue so that the result won't be a huge mess of brittle tests?
Btw I'm currently using Rhino.Mocks for this, but I can change it if the result is convincing enough. Just as long as the alternative isn't TypeMock, because their EULA was a bit too scary for my tastes last I checked.
Edit: I'm also currently limited to C# 2.
A:
To make this less tedious, you will need to encapsulate/refactor the mapping between the DataReader and the Object you hold in the list. There are quite a few steps involved in encapsulating that logic. If that is the road you want to take, I can post code for you. I am just not sure how practical it would be to post the code here on StackOverflow, but I can give it a shot to keep it concise and to the point. Otherwise, you are stuck with the tedious task of repeating each expectation on the index accessor for the reader. The encapsulation process will also get rid of the strings and make those strings more reusable through your tests.
Also, I am not sure at this point how much you want to make the existing code more testable, since this is legacy code that wasn't built with testing in mind.
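To give a flavour of what that encapsulation might look like (a sketch only, kept to C# 2 since that is the constraint; the type and member names are invented), the per-row mapping can be pulled behind a small interface so the row-to-object code and the "loop until Read() returns false" code can be tested separately:
using System.Collections.Generic;
using System.Data;

public class Foo
{
    public long Id;
    // ... other fields ...
}

// Maps a single row to a Foo; this can be unit-tested against a stubbed
// IDataRecord without scripting a whole sequence of Read() calls.
public interface IFooMapper
{
    Foo Map(IDataRecord record);
}

public class FooMapper : IFooMapper
{
    public Foo Map(IDataRecord record)
    {
        Foo foo = new Foo();
        foo.Id = (long)record["foo_id"];
        // ... remaining fields ...
        return foo;
    }
}

// The reader loop itself becomes trivial and only needs one small test:
// "one list item per Read() call that returned true".
public class FooListBuilder
{
    private readonly IFooMapper mapper;

    public FooListBuilder(IFooMapper mapper)
    {
        this.mapper = mapper;
    }

    public List<Foo> Build(IDataReader reader)
    {
        List<Foo> result = new List<Foo>();
        while (reader.Read())
        {
            result.Add(mapper.Map(reader));
        }
        return result;
    }
}

(IDataReader derives from IDataRecord, so the same mapper works against both the real reader and a hand-rolled stub.)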
A:
I thought about posting some code and then I remembered about JP Boodhoo's Nothin But .NET course. He has a sample project that he is sharing that was created during one of his classes. The project is hosted on Google Code and it is a nice resource. I am sure it has some nice tips for you to use and give you ideas on how to refactor the mapping. The whole project was built with TDD.
A:
You can put the Foo instances in a list and compare the objects with what you read:
var arrFoos = new Foos[]{...}; // what you expect
var expectedFoos = new List<Foo>(arrFoos); // make a list from the hardcoded array of expected Foos
var readerResult = ReadEntireList(reader); // read everything from reader and put in List<Foo>
Expect.ContainSameFoos(expectedFoos, readerResult); // compare the two lists
A:
Kokos,
Couple of things wrong there. First, doing it that way means I have to construct the Foos first, then feed their values to the mock reader which does nothing to reduce the amount of code I'm writing. Second, if the values pass through the reader, the Foos won't be the same Foos (reference equality). They might be equal, but even that's assuming too much of the Foo class that I don't dare touch at this point.
A:
Just to clarify, do you want to be able to test that your call into SQL Server returned some data, or that if you had some data you could map it back into the model?
If you want to test that your call into SQL returned some data, check out my answer found here
A:
@Toran: What I'm testing is the programmatic mapping from data returned from the database to quote-unquote domain model. Hence I want to mock out the database connection. For the other kind of test, I'd go for all-out integration testing.
@Dale: I guess you nailed it pretty well there, and I was afraid that might be the case. If you've got pointers to any articles or suchlike where someone has done the dirty job and decomposed it into more easily digestible steps, I'd appreciate it. Code samples wouldn't hurt either. I do have a clue on how to approach that problem, but before I actually dare do that, I'm going to need to get other things done, and if testing that will require tedious mocking, then that's what I'll do.
| How should I test a method that populates a list from a DataReader? | So I'm working on some legacy code that's heavy on the manual database operations. I'm trying to maintain some semblance of quality here, so I'm going TDD as much as possible.
The code I'm working on needs to populate, let's say a List<Foo> from a DataReader that returns all the fields required for a functioning Foo. However, if I want to verify that the code in fact returns one list item per one database row, I'm writing test code that looks something like this:
Expect.Call(reader.Read()).Return(true);
Expect.Call(reader["foo_id"]).Return((long) 1);
// ....
Expect.Call(reader.Read()).Return(true);
Expect.Call(reader["foo_id"]).Return((long) 2);
// ....
Expect.Call(reader.Read()).Return(false);
Which is rather tedious and rather easily broken, too.
How should I be approaching this issue so that the result won't be a huge mess of brittle tests?
Btw I'm currently using Rhino.Mocks for this, but I can change it if the result is convincing enough. Just as long as the alternative isn't TypeMock, because their EULA was a bit too scary for my tastes last I checked.
Edit: I'm also currently limited to C# 2.
| [
"To make this less tedious, you will need to encapsulate/refactor the mapping between the DataReader and the Object you hold in the list. There is quite of few steps to encapsulate that logic out. If that is the road you want to take, I can post code for you. I am just not sure how practical it would be to post the code here on StackOverflow, but I can give it a shot to keep it concise and to the point. Otherwise, you are stuck with the tedious task of repeating each expectation on the index accessor for the reader. The encapsulation process will also get rid of the strings and make those strings more reusable through your tests.\nAlso, I am not sure at this point how much you want to make the existing code more testable. Since this is legacy code that wasn't built with testing in mind.\n",
"I thought about posting some code and then I remembered about JP Boodhoo's Nothin But .NET course. He has a sample project that he is sharing that was created during one of his classes. The project is hosted on Google Code and it is a nice resource. I am sure it has some nice tips for you to use and give you ideas on how to refactor the mapping. The whole project was built with TDD.\n",
"You can put the Foo instances in a list and compare the objects with what you read: \nvar arrFoos = new Foos[]{...}; // what you expect\nvar expectedFoos = new List<Foo>(arrFoos); // make a list from the hardcoded array of expected Foos\nvar readerResult = ReadEntireList(reader); // read everything from reader and put in List<Foo>\nExpect.ContainSameFoos(expectedFoos, readerResult); // compare the two lists\n\n",
"Kokos,\nCouple of things wrong there. First, doing it that way means I have to construct the Foos first, then feed their values to the mock reader which does nothing to reduce the amount of code I'm writing. Second, if the values pass through the reader, the Foos won't be the same Foos (reference equality). They might be equal, but even that's assuming too much of the Foo class that I don't dare touch at this point.\n",
"Just to clarify, you want to be able to test your call into SQL Server returned some data, or that if you had some data you could map it back into the model?\nIf you want to test your call into SQL returned some data checkout my answer found here\n",
"@Toran: What I'm testing is the programmatic mapping from data returned from the database to quote-unquote domain model. Hence I want to mock out the database connection. For the other kind of test, I'd go for all-out integration testing.\n@Dale: I guess you nailed it pretty well there, and I was afraid that might be the case. If you've got pointers to any articles or suchlike where someone has done the dirty job and decomposed it into more easily digestible steps, I'd appreciate it. Code samples wouldn't hurt either. I do have a clue on how to approach that problem, but before I actually dare do that, I'm going to need to get other things done, and if testing that will require tedious mocking, then that's what I'll do.\n"
] | [
1,
1,
0,
0,
0,
0
] | [] | [] | [
"c#",
"mocking",
"tdd",
"unit_testing"
] | stackoverflow_0000029980_c#_mocking_tdd_unit_testing.txt |
Q:
What is the proper virtual directory access permission level required for a SOAP web service?
When setting up a new virtual directory for hosting a SOAP web service in IIS 6.0 on a Server 2003 box I am required to set the access permissions for the virtual directory. The various permissions are to allow/disallow the following:
Read
Run scripts (such as ASP)
Execute (such as ISAPI or CGI)
Write
Browse
The SOAP web service is being published through the SOAP3.0 ISAPI server with the extensions set to "Allowed" in the Web Service Extensions pane of the IIS Manager.
Since I don't want to expose the contents of this directory to the web I know Browse is not desirable. But, I don't know if I need to have the Run scripts, Execute, and Write permissions enabled to properly publish this web service. The web service is being used to send and receive XML data sets between the server and remote clients. What is the correct level of access permission for my SOAP web service's virtual directory?
A:
Upon further examination, I've come to the conclusion that one assumption I had about needing Read permissions was incorrect.
SOAP web services only need the "Run scripts" permission enabled because the .wsdl apparently comes from the web service in the form of a script execution response. So the minimum required for a SOAP3.0 web service's directory is Run scripts.
| What is the proper virtual directory access permission level required for a SOAP web service? | When setting up a new virtual directory for hosting a SOAP web service in IIS 6.0 on a Server 2003 box I am required to set the access permissions for the virtual directory. The various permissions are to allow/disallow the following:
Read
Run scripts (such as ASP)
Execute (such as ISAPI or CGI)
Write
Browse
The SOAP web service is being published through the SOAP3.0 ISAPI server with the extensions set to "Allowed" in the Web Service Extensions pane of the IIS Manager.
Since I don't want to expose the contents of this directory to the web I know Browse is not desirable. But, I don't know if I need to have the Run scripts, Execute, and Write permissions enabled to properly publish this web service. The web service is being used to send and receive XML data sets between the server and remote clients. What is the correct level of access permission for my SOAP web service's virtual directory?
| [
"Upon further examination, I've come to the conclusion that one assumption I had about needing Read permissions was incorrect.\nSOAP web services only need the \"Run scripts\" permission enabled because the .wsdl apparently comes from the web service in the form of a script execution response. So the minimum required for a SOAP3.0 web service's directory is Run scripts.\n"
] | [
4
] | [] | [] | [
"file_permissions",
"soap",
"web_services"
] | stackoverflow_0000030712_file_permissions_soap_web_services.txt |
Q:
Windows Vista: Unable to load DLL 'x.dll': Invalid access to memory location. (DllNotFoundException)
I was testing on a customer's box this afternoon which has Windows Vista (He had home, but I am testing on a Business Edition with same results).
We make use of a .DLL that gets the Hardware ID of the computer. Its usage is very simple and the sample program I have created works. The Dll is This from AzSdk.
In fact, this works perfectly under Windows XP. However, for some strange reason, inside our project (way bigger), we get this exception:
Exception Type: System.DllNotFoundException
Exception Message: Unable to load DLL 'HardwareID.dll': Invalid access to memory location. (Exception from HRESULT: 0x800703E6)
Exception Target Site: GetHardwareID
I don't know what can be causing the problem, since I have full control over the folder. The project is a c#.net Windows Forms application and everything works fine, except the call for the external library.
I am declaring it like this: (note: it's not a COM library and it doesn't need to be registered).
[DllImport("HardwareID.dll")]
public static extern String GetHardwareID(bool HDD,
bool NIC, bool CPU, bool BIOS, string sRegistrationCode);
And then the calling code is quite simple:
private void button1_Click(object sender, EventArgs e)
{
textBox1.Text = GetHardwareID(cb_HDD.Checked,
cb_NIC.Checked,
cb_CPU.Checked,
cb_BIOS.Checked,
"*Registration Code*");
}
When you create a sample application, it works, but inside my project it doesn't. Under XP it works fine. Any ideas about what I should do in Vista to make this work?
As I've said, the folder and its sub-folders have Full Control for "Everybody".
UPDATE: I do not have Vista SP 1 installed.
UPDATE 2: I have installed Vista SP1 and now, with UAC disabled, not even the simple sample works!!! :( Damn Vista.
A:
Unable to load DLL 'HardwareID.dll':
Invalid access to memory location.
(Exception from HRESULT: 0x800703E6)
The name of DllNotFoundException is confusing you - this isn't a problem with finding or loading the DLL file, the problem is that when the DLL is loaded, it does an illegal memory access which causes the loading process to fail.
Like another poster here, I think this is a DEP problem, and that your UAC, etc, changes have finally allowed you to disable DEP for this application.
A:
@Martín
The reason you were not getting the UAC prompt is because UAC can only change how a process is started; once the process is running it must stay at the same elevation level. The UAC prompt will happen if:
Vista thinks it's an installer (lots of rules here, the simplest one is if it's called "setup.exe"),
If it's flagged as "Run as Administrator" (you can edit this by changing the properties of the shortcut or the exe), or
If the exe contains a manifest requesting admin privileges.
The first two options are workarounds for 'legacy' applications that were around before UAC, the correct way to do it for new applications is to embed a manifest resource asking for the privileges that you need.
Some program, such as Process Explorer appear to elevate a running process (when you choose "Show details for all process" in the file menu in this case) but what they really do is start a new instance, and it's that new instance that gets elevated - not the one that was originally running. This is the recommend way of doing it if only some parts of your application need elevation (e.g. a special 'admin options' dialog).
A:
Is the machine you have the code deployed on a 64-bit machine? You could also be running into a DEP issue.
Edit
This is a 1st gen Macbook Pro with a 1st gen Core Duo 2 Intel processor. Far from 64 bits.
I mentioned 64 bit, because at low levels structs from 32 bit to 64 bit do not get properly handled. Since the machines aren't 64bit, then more than likely disabling DEP would be a good logical next step. Vista did get more secure than XP SP2.
Well, I've just turned DEP globally off to no avail. Same error.
Well, I also read that people were getting this error after updating a machine to Vista SP1. Do these Vista installs have SP1 on them?
Turns out to be something completely different. Just for the sake of testing, I've disabled the UAC (note: I was not getting any prompt).
Great, I was actually going to suggest that, but I figured you probably tried it already.
A:
Have you made a support request to the vendor? Perhaps there's something about the MacBook Pro hardware that prevents the product from working.
A:
Given that the exception is a DllNotFoundException, you might want to try checking the HardwareID.dll with Dependency Walker BEFORE installing any dev tools on the Vista install to see if there is in fact a dependency missing.
A:
In addition to allowing full control to "Everyone", does the location also allow processes with a medium integrity level to write?
How do I check that? I am new to Vista, and I don't like it too much; it's too slow inside a VM for daily work, and for VStudio usage inside a Virtual Machine it doesn't bring anything new.
From a command prompt you can execute:
icacls C:\Folder
If you see a line such as "Mandatory Label\High Mandatory Level" then the folder is only accessible to a high integrity process. If there is no such line then medium integrity processes can access it provided there are no other ACLs denying access (based on user for example).
EDIT: Forgot to mention you can use the /setintegritylevel switch to actually change the required integrity level for accessing the object.
| Windows Vista: Unable to load DLL 'x.dll': Invalid access to memory location. (DllNotFoundException) | I was testing on a customer's box this afternoon which has Windows Vista (He had home, but I am testing on a Business Edition with same results).
We make use of a .DLL that gets the Hardware ID of the computer. It's usage is very simple and the sample program I have created works. The Dll is This from AzSdk.
In fact, this works perfectly under Windows XP. However, for some strange reason, inside our project (way bigger), we get this exception:
Exception Type: System.DllNotFoundException
Exception Message: Unable to load DLL 'HardwareID.dll': Invalid access to memory location. (Exception from HRESULT: 0x800703E6)
Exception Target Site: GetHardwareID
I don't know what can be causing the problem, since I have full control over the folder. The project is a c#.net Windows Forms application and everything works fine, except the call for the external library.
I am declaring it like this: (note: it's not a COM library and it doesn't need to be registered).
[DllImport("HardwareID.dll")]
public static extern String GetHardwareID(bool HDD,
bool NIC, bool CPU, bool BIOS, string sRegistrationCode);
And then the calling code is quite simple:
private void button1_Click(object sender, EventArgs e)
{
textBox1.Text = GetHardwareID(cb_HDD.Checked,
cb_NIC.Checked,
cb_CPU.Checked,
cb_BIOS.Checked,
"*Registration Code*");
}
When you create a sample application, it works, but inside my projectit doesn't. Under XP works fine. Any ideas about what should I do in Vista to make this work?
As I've said, the folder and its sub-folders have Full Control for "Everybody".
UPDATE: I do not have Vista SP 1 installed.
UPDATE 2: I have installed Vista SP1 and now, with UAC disabled, not even the simple sample works!!! :( Damn Vista.
| [
"\nUnable to load DLL 'HardwareID.dll':\n Invalid access to memory location.\n (Exception from HRESULT: 0x800703E6)\n\nThe name of DllNotFoundException is confusing you - this isn't a problem with finding or loading the DLL file, the problem is that when the DLL is loaded, it does an illegal memory access which causes the loading process to fail.\nLike another poster here, I think this is a DEP problem, and that your UAC, etc, changes have finally allowed you to disable DEP for this application.\n",
"@Martín\nThe reason you were not getting the UAC prompt is because UAC can only change how a process is started, once the process is running it must stay at the same elevation level. The UAC will prompt will happen if:\n\nVista thinks it's an installer (lots of rules here, the simplest one is if it's called \"setup.exe\"), \nIf it's flagged as \"Run as Administrator\" (you can edit this by changing the properties of the shortcut or the exe), or \nIf the exe contains a manifest requesting admin privileges.\n\nThe first two options are workarounds for 'legacy' applications that were around before UAC, the correct way to do it for new applications is to embed a manifest resource asking for the privileges that you need.\nSome program, such as Process Explorer appear to elevate a running process (when you choose \"Show details for all process\" in the file menu in this case) but what they really do is start a new instance, and it's that new instance that gets elevated - not the one that was originally running. This is the recommend way of doing it if only some parts of your application need elevation (e.g. a special 'admin options' dialog).\n",
"Is the machine you have the code deployed on a 64-bit machine? You could also be running into a DEP issue.\nEdit\n\nThis is a 1st gen Macbook Pro with a 1st gen Core Duo 2 Intel processor. Far from 64 bits.\n\nI mentioned 64 bit, because at low levels structs from 32 bit to 64 bit do not get properly handled. Since the machines aren't 64bit, then more than likely disabling DEP would be a good logical next step. Vista did get more secure than XP SP2.\n\nWell, I've just turned DEP globally off to no avail. Same error.\n\nWell, I also read that people were getting this error after updating a machine to Vista SP1. Do these Vista installs have SP1 on them? \n\nTurns out to be something completely different. Just for the sake of testing, I've disabled de UAC (note: I was not getting any prompt).\n\nGreat, I was actually going to suggest that, but I figured you probably tried it already.\n",
"Have you made a support request to the vendor? Perhaps there's something about the MacBook Pro hardware that prevents the product from working.\n",
"Given that the exception is a DllNotFoundException, you might want to try checking the HardwareID.dll with Dependency Walker BEFORE installing any dev tools on the Vista install to see if there is in fact a dependency missing.\n",
"\n\nIn addition to allowing full control to \"Everyone\" does the location also allow processes with a medium integrity level to write?\n\nHow do I check that ? I am new to Vista, I don't like it too much, it's too slow inside a VM for daily work and for VStudio usage inside a Virtual Machine, it doesn't bring anything new.\n\nFrom a command prompt to you can execute:\nicacls C:\\Folder\n\nIf you see a line such as \"Mandatory Label\\High Mandatory Level\" then the folder is only accessible to a high integrity process. If there is no such line then medium integrity processes can access it provided there are no other ACLs denying access (based on user for example).\nEDIT: Forgot to mention you can use the /setintegritylevel switch to actually change the required integrity level for accessing the object.\n"
] | [
4,
2,
1,
0,
0,
0
] | [] | [] | [
".net",
"c#",
"dllnotfoundexception",
"windows_vista"
] | stackoverflow_0000029284_.net_c#_dllnotfoundexception_windows_vista.txt |
Q:
MySQL Results to a File
How do I write the results from a mysql query to file? I just need something quick. Output can be CSV, XML, HTML, etc.
A:
SELECT a,b,a+b
FROM test_table
INTO OUTFILE '/tmp/result.txt'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(the docs show INTO OUTFILE up in the SELECT .. portion which may work as well, but I've never tried it that way)
http://dev.mysql.com/doc/refman/5.0/en/select.html
INTO OUTFILE creates a file on the server; if you are on a client and want it there, do:
mysql -u you -p -e "SELECT ..." > file_name
A:
if you have phpMyAdmin installed, it is a no-brainer: Run the query (haven't got a copy loaded, so I can't tell you the details, but it really is easy) and check near the bottom for export options. CSV will be listed, but I think you can also have SQL if you like :)
phpMyAdmin will give CSV in Excel's dialect, which is probably what you want...
A:
You can use MySQL Query Browser to run the query and then just go to File -> Export Resultset and choose the output format. The options are CSV, HTML, XML, Excel and PLIST.
| MySQL Results to a File | How do I write the results from a mysql query to file? I just need something quick. Output can be CSV, XML, HTML, etc.
| [
"SELECT a,b,a+b \n FROM test_table\n INTO OUTFILE '/tmp/result.txt'\n FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'\n LINES TERMINATED BY '\\n'\n\n(the docs show INTO OUTFILE up in the SELECT .. portion which may work as well, but I've never tried it that way)\nhttp://dev.mysql.com/doc/refman/5.0/en/select.html\nINTO OUTFILE creates a file on the server; if you are on a client and want it there, do:\nmysql -u you -p -e \"SELECT ...\" > file_name \n\n",
"if you have phpMyAdmin installed, it is a nobrainer: Run the query (haven't got a copy loaded, so I can't tell you the details, but it really is easy) and check neer bottom for export options. CSV will be listed, but I think you can also have SQL if you like :)\nphpMyAdmin will give CSV in Excels dialect, which is probably what you want...\n",
"You can use MySQL Query Browser to run the query and then just go to File -> Export Resultset and choose the output format. The options are CSV, HTML, XML, Excel and PLIST.\n"
] | [
20,
1,
1
] | [] | [] | [
"database",
"mysql"
] | stackoverflow_0000030856_database_mysql.txt |
Q:
Symantec Backup Exec 11d RALUS Communications Error
I'm trying to do a file system backup of a RedHat Enterprise Linux v4 server using Symantec Backup Exec 11d (Rev 7170). The backup server is Windows Server 2003.
I can browse the target server to create a selection list, and when I do a test run it completes successfully.
However, when I run a real backup, the job fails immediately during the "processing" phase with the error:
e000fe30 - A communications failure has occured.
I've tried opening ports (10000, 1025-9999), etc. But no joy. Any ideas?
A:
Sure sounds like firewall issues. Try stopping iptables, and running again. Also, RALUS can dump a log file - which may give some more to go on.
I use the older UNIX agent myself, which uses port 6101 IIRC - but I believe that the newer client uses tcp/10000 for control and 1024-65535 for transfer.
Last resort is to fire up a network sniffer. ;)
A:
To clarify the answer, the solution was to open up the tcp ports from 1024-65535.
The iptables looked liked this:
[root@MYSERVER ~]# service iptables status
Table: filter
Chain INPUT (policy ACCEPT)
target prot opt source destination
RH-Firewall-1-INPUT all -- 0.0.0.0/0 0.0.0.0/0
Chain FORWARD (policy ACCEPT)
target prot opt source destination
RH-Firewall-1-INPUT all -- 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain RH-Firewall-1-INPUT (2 references)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 255
ACCEPT esp -- 0.0.0.0/0 0.0.0.0/0
ACCEPT ah -- 0.0.0.0/0 0.0.0.0/0
ACCEPT udp -- 0.0.0.0/0 224.0.0.251 udp dpt:5353
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:631
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:80
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:443
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5801
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5802
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5804
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5901
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5902
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5904
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:9099
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:10000
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:1025
REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
I executed this command to add the new rule:
[root@MYSERVER ~]# iptables -I RH-Firewall-1-INPUT 14 -p tcp -m tcp --dport 1024:65535 -j ACCEPT
Then they looked like this:
[root@MYSERVER ~]# service iptables status
Table: filter
Chain INPUT (policy ACCEPT)
target prot opt source destination
RH-Firewall-1-INPUT all -- 0.0.0.0/0 0.0.0.0/0
Chain FORWARD (policy ACCEPT)
target prot opt source destination
RH-Firewall-1-INPUT all -- 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain RH-Firewall-1-INPUT (2 references)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 255
ACCEPT esp -- 0.0.0.0/0 0.0.0.0/0
ACCEPT ah -- 0.0.0.0/0 0.0.0.0/0
ACCEPT udp -- 0.0.0.0/0 224.0.0.251 udp dpt:5353
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:631
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:80
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:443
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5801
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5802
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5804
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpts:1025:65535
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5901
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5902
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5904
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:9099
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:10000
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:1025
REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
Save the iptables when you've verified that it works:
[root@MYSERVER ~]# service iptables save
| Symantec Backup Exec 11d RALUS Communications Error | I'm trying to do a file system backup of a RedHat Enterprise Linux v4 server using Symantec Backup Exec 11d (Rev 7170). The backup server is Windows Server 2003.
I can browse the target server to create a selection list, and when I do a test run it completes successfully.
However, when I run a real backup, the job fails immediately during the "processing" phase with the error:
e000fe30 - A communications failure has occured.
I've tried opening ports (10000, 1025-9999), etc. But no joy. Any ideas?
| [
"Sure sounds like firewall issues. Try stopping iptables, and running again. Also, RALUS can dump a log file - which may give some more to go on. \nI use the older UNIX agent myself, which uses port 6101 IIRC - but I believe that the newer client uses tcp/10000 for control and 1024-65535 for transfer.\nLast resort is to fire up a network sniffer. ;)\n",
"To clarify the answer, the solution was to open up the tcp ports from 1024-65535.\nThe iptables looked liked this:\n[root@MYSERVER ~]# service iptables status \nTable: filter \nChain INPUT (policy ACCEPT) \ntarget prot opt source destination \nRH-Firewall-1-INPUT all -- 0.0.0.0/0 0.0.0.0/0 \n\nChain FORWARD (policy ACCEPT) \ntarget prot opt source destination \nRH-Firewall-1-INPUT all -- 0.0.0.0/0 0.0.0.0/0 \n\nChain OUTPUT (policy ACCEPT)\ntarget prot opt source destination \n\nChain RH-Firewall-1-INPUT (2 references) \ntarget prot opt source destination \nACCEPT all -- 0.0.0.0/0 0.0.0.0/0 \nACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 255 \nACCEPT esp -- 0.0.0.0/0 0.0.0.0/0 \nACCEPT ah -- 0.0.0.0/0 0.0.0.0/0 \nACCEPT udp -- 0.0.0.0/0 224.0.0.251 udp dpt:5353 \nACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:631 \nACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED \nACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:80 \nACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:443 \nACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22 \nACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5801 \nACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5802 \nACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5804 \nACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5901 \nACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5902 \nACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5904 \nACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:9099 \nACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:10000 \nACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:1025 \nREJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited\n\nI executed this command to add the new rule: \n[root@MYSERVER ~]# iptables -I RH-Firewall-1-INPUT 14 -p tcp -m tcp --dport 1024:65535 -j ACCEPT\n\nThen they looked like this: \n[root@MYSERVER ~]# service iptables status \nTable: filter \nChain INPUT (policy ACCEPT) \ntarget prot opt source destination \nRH-Firewall-1-INPUT all -- 0.0.0.0/0 0.0.0.0/0 \n\nChain FORWARD (policy ACCEPT) \ntarget prot opt source destination \nRH-Firewall-1-INPUT all -- 0.0.0.0/0 0.0.0.0/0 \n\nChain OUTPUT (policy ACCEPT)\ntarget prot opt source destination \n\nChain RH-Firewall-1-INPUT (2 references) \ntarget prot opt source destination \nACCEPT all -- 0.0.0.0/0 0.0.0.0/0 \nACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 255 \nACCEPT esp -- 0.0.0.0/0 0.0.0.0/0 \nACCEPT ah -- 0.0.0.0/0 0.0.0.0/0 \nACCEPT udp -- 0.0.0.0/0 224.0.0.251 udp dpt:5353 \nACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:631 \nACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED \nACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:80 \nACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:443 \nACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22 \nACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5801 \nACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5802 \nACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5804 \nACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpts:1025:65535 \nACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5901 \nACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5902 \nACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5904 \nACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:9099 \nACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:10000 \nACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:1025 \nREJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited \n\nSave the iptables when you've verified that it works: \n[root@MYSERVER ~]# service iptables save\n\n"
] | [
1,
0
] | [] | [] | [
"backup",
"backupexec",
"redhat",
"symantec"
] | stackoverflow_0000020912_backup_backupexec_redhat_symantec.txt |
Q:
InputManager on OS X
I was wondering if InputManagers are still part of Leopard, and if there is a good example that would show me what they can be used for?
A:
InputManagers are still available on Leopard, except:
they need to be in the Local domain (/Library/InputManagers); you cannot install them per-user any more,
they need to have appropriate privileges,
in 64-bit, they won't get loaded at all.
The Leopard AppKit Release Notes have more specific details.
| InputManager on OS X | I was wondering if InputManagers are still part of Leopard and if there is a good example that would show me what are the use of it?
| [
"InputManagers are still available on Leopard, except:\n\nthey need to be in the Local domain (/Library/InputManagers); you cannot install them per-user any more,\nthey need to have appropriate privileges,\nin 64-bit, they won't get loaded at all.\n\nThe Leopard AppKit Release Notes have more specific details.\n"
] | [
4
] | [] | [] | [
"cocoa",
"macos"
] | stackoverflow_0000030966_cocoa_macos.txt |
Q:
Limiting traffic to SSL version of page only
We have an external service that is currently accessible via the http (port 80, non-SSL) and https (port 443, SSL) addresses.
What is the best way to limit connections to only the https address? Is it something we can do via IIS, or does it have to be done via code?
Additional info: Regular ASP.NET web service (.asmx) running on IIS on Windows 2003 server. Service is built with C# on .NET 3.5.
A:
Require SSL on the application
In a custom error page for the 403, redirect the browser to the incoming URL, changing http to https along the way.
Note: Keep port 80 open for this - or there won't be a server to listen for requests to redirect.
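For illustration only, here is a rough sketch of doing the same http-to-https bounce in code rather than via the custom 403 page. The Global.asax placement, class name and hard-coded HTTPS port are assumptions for the sake of the example, not part of the original answer.
// Hypothetical Global.asax.cs sketch: redirect any plain-HTTP request
// to the same URL over HTTPS.
using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        if (!Request.IsSecureConnection)
        {
            // Rebuild the incoming URL with the https scheme and bounce the client.
            UriBuilder secureUrl = new UriBuilder(Request.Url);
            secureUrl.Scheme = Uri.UriSchemeHttps;
            secureUrl.Port = 443;   // assumption: HTTPS on the standard port
            Response.Redirect(secureUrl.Uri.AbsoluteUri, true);
        }
    }
}
With something like this in place, port 80 only ever serves redirects, which matches the note above about keeping it open.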
A:
Just to clarify Greg's point 1. IIS Manager > Site properties > Directory Security > Secure Communications > Require Secure Channel (SSL)
A:
Is just not accepting any connections on port 80 an option? I'm a complete web server noob, so I don't know if the server can operate without an unsecured listen port, but if it can listen only on port 443 that would seem to be the simplest option.
Another option would be a redirect from the unsecured port to the secure one.
| Limiting traffic to SSL version of page only | We have an external service that is currently accessible via the http (port 80, non-SSL) and https (port 443, SSL) addresses.
What is the best way to limit connections to only the https address? Is it something we can do via IIS or does it have to be done via code.
Additional info: Regular ASP.NET web service (.asmx) running on IIS on Windows 2003 server. Service is built with C# on .NET 3.5.
| [
"\nRequire SSL on the application\nIn a custom error page for the 403 redirect the browser to the incoming URL, changing http to https along the way.\n\nNote: Keep port 80 open for this - or there won't be a server to listen for requests to redirect.\n",
"Just to clarify Greg's point 1. IIS Manager > Site properties > Directory Security > Secure Communications > Require Secure Channel (SSL)\n",
"Is just not accepting any connections on port 80 an option? I'm a complete web server noob so I don't know if the server can operate without an unsecured listen port but if the server can operate only listen on port 443 that would seem to be simplest option.\nAnother option would be a redirect from the unsecure port to the secure one\n"
] | [
5,
3,
0
] | [] | [] | [
"iis",
"ssl",
"web_services"
] | stackoverflow_0000030964_iis_ssl_web_services.txt |
Q:
What's the best way to allow a user to browse for a file in C#?
What's the best way to allow a user to browse for a file in C#?
A:
using (OpenFileDialog dlg = new OpenFileDialog())
{
dlg.Title = "Select a file";
if (dlg.ShowDialog()== DialogResult.OK)
{
//do something with dlg.FileName
}
}
A:
I would say use the standard "Open File" dialog box (OpenFileDialog); this makes it less intimidating for new users and helps with a consistent UI.
A:
Close, Ryan, but you never showed the dialog. It should be:
if (dlg.ShowDialog() == DialogResult.OK)
| What's the best way to allow a user to browse for a file in C#? | What's the best way to allow a user to browse for a file in C#?
| [
"using (OpenFileDialog dlg = new OpenFileDialog())\n{\n dlg.Title = \"Select a file\";\n if (dlg.ShowDialog()== DialogResult.OK)\n {\n //do something with dlg.FileName \n }\n}\n\n",
"I would say use the standard \"Open File\" dialog box (OpenFileDialog), this makes it less intimidating for new users and helps with a consistant UI.\n",
"Close, Ryan, but you never showed the dialog. it should be: \nif (dlg.ShowDialog() == DialogResult.OK)\n\n"
] | [
16,
1,
1
] | [] | [] | [
"c#"
] | stackoverflow_0000031031_c#.txt |
Q:
querying 2 tables with the same spec for the differences
I recently had to solve this problem, and I find I've needed this info many times in the past, so I thought I would post it. Assuming the following table def, how would you write a query to find all differences between the two?
table def:
CREATE TABLE feed_tbl
(
code varchar(15),
name varchar(40),
status char(1),
update char(1),
CONSTRAINT feed_tbl_PK PRIMARY KEY (code)
)
CREATE TABLE data_tbl
(
code varchar(15),
name varchar(40),
status char(1),
update char(1),
CONSTRAINT data_tbl_PK PRIMARY KEY (code)
)
Here is my solution, as a view using three queries joined by unions. The diff_type specified is how the record needs to be updated: deleted from _data (2), updated in _data (1), or added to _data (0).
CREATE VIEW delta_vw AS (
SELECT feed_tbl.code, feed_tbl.name, feed_tbl.status, feed_tbl.update, 0 as diff_type
FROM feed_tbl LEFT OUTER JOIN
data_tbl ON feed_tbl.code = data_tbl.code
WHERE (data_tbl.code IS NULL)
UNION
SELECT feed_tbl.code, feed_tbl.name, feed_tbl.status, feed_tbl.update, 1 as diff_type
FROM data_tbl RIGHT OUTER JOIN
feed_tbl ON data_tbl.code = feed_tbl.code
where (feed_tbl.name <> data_tbl.name) OR
(data_tbl.status <> feed_tbl.status) OR
(data_tbl.update <> feed_tbl.update)
UNION
SELECT data_tbl.code, data_tbl.name, data_tbl.status, data_tbl.update, 2 as diff_type
FROM feed_tbl RIGHT OUTER JOIN
data_tbl ON data_tbl.code = feed_tbl.code
WHERE (feed_tbl.code IS NULL)
)
A:
UNION will remove duplicates, so just UNION the two together, then search for anything with more than one entry. Given "code" as a primary key, you can say:
edit 0: modified to include differences in the PK field itself
edit 1: if you use this in real life, be sure to list the actual column names. Don't use dot-star, since the UNION operation requires result sets to have exactly matching columns. This example would break if you added / removed a column from one of the tables.
select dt.*
from
data_tbl dt
,(
select code
from
(
select * from feed_tbl
union
select * from data_tbl
) u  -- derived tables need an alias in most SQL dialects
group by code
having count(*) > 1
) diffs --"diffs" will return all differences *except* those in the primary key itself
where diffs.code = dt.code
union --plus the ones that are only in feed, but not in data
select * from feed_tbl ft where not exists(select code from data_tbl dt where dt.code = ft.code)
union --plus the ones that are only in data, but not in feed
select * from data_tbl dt where not exists(select code from feed_tbl ft where ft.code = dt.code)
A:
I would use a minor variation in the second union:
where (ISNULL(feed_tbl.name, 'NONAME') <> ISNULL(data_tbl.name, 'NONAME')) OR
(ISNULL(data_tbl.status, 'NOSTATUS') <> ISNULL(feed_tbl.status, 'NOSTATUS')) OR
(ISNULL(data_tbl.update, '12/31/2039') <> ISNULL(feed_tbl.update, '12/31/2039'))
For reasons I have never understood, NULL does not equal NULL (at least in SQL Server).
A:
You could also use a FULL OUTER JOIN and a CASE ... END statement on the diff_type column, along with the aforementioned WHERE clause.
That would probably achieve the same results, but in one query.
| querying 2 tables with the same spec for the differences | I recently had to solve this problem and find I've needed this info many times in the past so I thought I would post it. Assuming the following table def, how would you write a query to find all differences between the two?
table def:
CREATE TABLE feed_tbl
(
code varchar(15),
name varchar(40),
status char(1),
update char(1)
CONSTRAINT feed_tbl_PK PRIMARY KEY (code)
CREATE TABLE data_tbl
(
code varchar(15),
name varchar(40),
status char(1),
update char(1)
CONSTRAINT data_tbl_PK PRIMARY KEY (code)
Here is my solution, as a view using three queries joined by unions. The diff_type specified is how the record needs updated: deleted from _data(2), updated in _data(1), or added to _data(0)
CREATE VIEW delta_vw AS (
SELECT feed_tbl.code, feed_tbl.name, feed_tbl.status, feed_tbl.update, 0 as diff_type
FROM feed_tbl LEFT OUTER JOIN
data_tbl ON feed_tbl.code = data_tbl.code
WHERE (data_tbl.code IS NULL)
UNION
SELECT feed_tbl.code, feed_tbl.name, feed_tbl.status, feed_tbl.update, 1 as diff_type
FROM data_tbl RIGHT OUTER JOIN
feed_tbl ON data_tbl.code = feed_tbl.code
where (feed_tbl.name <> data_tbl.name) OR
(data_tbl.status <> feed_tbl.status) OR
(data_tbl.update <> feed_tbl.update)
UNION
SELECT data_tbl.code, data_tbl.name, data_tbl.status, data_tbl.update, 2 as diff_type
FROM feed_tbl LEFT OUTER JOIN
data_tbl ON data_tbl.code = feed_tbl.code
WHERE (feed_tbl.code IS NULL)
)
| [
"UNION will remove duplicates, so just UNION the two together, then search for anything with more than one entry. Given \"code\" as a primary key, you can say:\nedit 0: modified to include differences in the PK field itself\nedit 1: if you use this in real life, be sure to list the actual column names. Dont use dot-star, since the UNION operation requires result sets to have exactly matching columns. This example would break if you added / removed a column from one of the tables.\nselect dt.*\nfrom\n data_tbl dt\n ,( \n select code\n from\n ( \n select * from feed_tbl\n union\n select * from data_tbl \n )\n group by code\n having count(*) > 1 \n ) diffs --\"diffs\" will return all differences *except* those in the primary key itself \nwhere diffs.code = dt.code\nunion --plus the ones that are only in feed, but not in data\nselect * from feed_tbl ft where not exists(select code from data_tbl dt where dt.code = ft.code)\nunion --plus the ones that are only in data, but not in feed\nselect * from data_tbl dt where not exists(select code from feed_tbl ft where ft.code = dt.code)\n\n",
"I would use a minor variation in the second union:\nwhere (ISNULL(feed_tbl.name, 'NONAME') <> ISNULL(data_tbl.name, 'NONAME')) OR\n(ISNULL(data_tbl.status, 'NOSTATUS') <> ISNULL(feed_tbl.status, 'NOSTATUS')) OR\n(ISNULL(data_tbl.update, '12/31/2039') <> ISNULL(feed_tbl.update, '12/31/2039')) \n\nFor reasons I have never understood, NULL does not equal NULL (at least in SQL Server).\n",
"You could also use a FULL OUTER JOIN and a CASE ... END statement on the diff_type column along with the aforementioned where clause in querying 2 tables with the same spec for the differences \nThat would probably achieve the same results, but in one query.\n"
] | [
2,
0,
0
] | [] | [] | [
"sql"
] | stackoverflow_0000030985_sql.txt |
Q:
What does this javascript error mean? Permission denied to call method to Location.toString
This error just started popping up all over our site.
Permission denied to call method to Location.toString
I'm seeing Google posts that suggest that this is related to Flash and our crossdomain.xml. What caused this to occur, and how do you fix it?
A:
Are you using JavaScript to communicate between frames/iframes which point to different domains? This is not permitted by the JS "same origin/domain" security policy. I.e., if you have
<iframe name="foo" src="foo.com/script.js">
<iframe name="bar" src="bar.com/script.js">
and the script on bar.com tries to access window["foo"].Location.toString, you will get this (or a similar) exception. Please also note that the same-origin policy can also kick in if you have content from different subdomains. Here you can find a short and to-the-point explanation of it with examples.
A:
You may have come across this posting, but it appears that a flash security update changed the behaviour of the crossdomain.xml, requiring you to specify a security policy to allow arbitrary headers to be sent from a remote domain. The Adobe knowledge base article (also referenced in the original post) is here.
A:
This post suggests that there is one line that needs to be added to the crossdomain.xml file.
<allow-http-request-headers-from domain="*" headers="*"/>
A:
This is likely caused by a change made in the Flash Player version released in early April; I'm not too sure about the specifics, but I assume there were security concerns with this functionality.
What you need to do is indeed add that to your crossdomain.xml (which should be in your server's webroot).
You can read more here: http://www.adobe.com/devnet/flashplayer/articles/flash_player9_security_update.html
A typical example of a crossdomain.xml is twitters, more info about how the file works can be found here.
| What does this javascript error mean? Permission denied to call method to Location.toString | This error just started popping up all over our site.
Permission denied to call method to Location.toString
I'm seeing google posts that suggest that this is related to flash and our crossdomain.xml. What caused this to occur and how do you fix?
| [
"Are you using javascript to communicate between frames/iframes which point to different domains? This is not permitted by the JS \"same origin/domain\" security policy. Ie, if you have\n<iframe name=\"foo\" src=\"foo.com/script.js\">\n<iframe name=\"bar\" src=\"bar.com/script.js\">\n\nAnd the script on bar.com tries to access window[\"foo\"].Location.toString, you will get this (or similar) exceptions. Please also note that the same origin policy can also kick in if you have content from different subdomains. Here you can find a short and to the point explanation of it with examples.\n",
"You may have come across this posting, but it appears that a flash security update changed the behaviour of the crossdomain.xml, requiring you to specify a security policy to allow arbitrary headers to be sent from a remote domain. The Adobe knowledge base article (also referenced in the original post) is here.\n",
"This post suggests that there is one line that needs to be added to the crossdomain.xml file.\n<allow-http-request-headers-from domain=\"*\" headers=\"*\"/>\n\n",
"This likely causeed by a change made in the Flash Player version released in early April, I'm not too sure about the specifics, but I assume there were security concerns with this functionality.\nWhat you need to do is indeed add that to your crossdomain.xml (which should be in your servers webroot)\nYou can read more here: http://www.adobe.com/devnet/flashplayer/articles/flash_player9_security_update.html\nA typical example of a crossdomain.xml is twitters, more info about how the file works can be found here.\n"
] | [
9,
2,
0,
0
] | [] | [] | [
"flash",
"javascript"
] | stackoverflow_0000030540_flash_javascript.txt |
Q:
Querying like Linq when you don't have Linq
I have a project that I'm currently working on, but it only supports the .NET Framework 2.0. I love Linq, but because of the framework version I can't use it. What I want isn't so much the ORM side of things, but the "queryability" (is that even a word?) of Linq.
So far the closest is LLBLGen, but if there were something even lighter weight that could just do the querying for me, that would be even better.
I've also looked at NHibernate, which looks like it could come close to doing what I want, but it has a pretty steep learning curve and the mapping files don't get me overly excited.
If anyone is aware of something that will give me a similar query interface to Linq (or even better, how to get Linq to work on the .NET 2.0 framework), I'd really like to hear about it.
A:
Have a look at this:
http://www.albahari.com/nutshell/linqbridge.html
Linq is several different things, and I'm not 100% sure which bits you want, but the above might be useful in some way. If you don't already have a book on Linq (I guess you don't), then I found "Linq In Action" to be good.
A:
You might want to check out Subsonic. It is an ORM that uses an ActiveRecord pattern. I'm pretty sure most of its features work with the .NET Framework 2.0.
A:
To echo what Lance said - the SubSonic query language has a fluent interface which isn't as pretty as LINQ, but gives you some of the benefits (compile time checking, intellisense, etc.).
A:
LinqBridge works fine under .NET 2.0, and you get all the Linq extensions and query language. You need VS 2008 in order to use it, but you already knew that.
However, Linq is not an ORM. It's a query syntax. If you want to use Linq to query a database, you will need .NET 3.5. That's because 2.0 does not provide the mechanism needed to convert Linq code to your favorite database query language.
In other words, if an ORM is what you need, LinqBridge will not help you. You need to check out some of the other suggestions provided.
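To make the "query syntax, not an ORM" point concrete, here is a small hypothetical sketch of what LinqBridge does give you on .NET 2.0: plain LINQ to Objects over in-memory collections. It assumes the project references LinqBridge.dll and is compiled with the C# 3.0 compiler (VS 2008) while targeting the 2.0 runtime; the Customer class and sample data are invented for illustration.
// Hypothetical example only; nothing here touches a database.
using System;
using System.Collections.Generic;
using System.Linq;   // supplied by LinqBridge when targeting .NET 2.0

class Customer
{
    public string Name;
    public decimal Balance;
}

static class Demo
{
    static void Main()
    {
        List<Customer> customers = new List<Customer>();
        customers.Add(new Customer { Name = "Acme", Balance = 1200m });
        customers.Add(new Customer { Name = "Initech", Balance = -50m });

        // The query syntax compiles down to the extension methods
        // (Where, OrderBy, Select) that LinqBridge provides.
        IEnumerable<Customer> inCredit =
            from c in customers
            where c.Balance > 0
            orderby c.Name
            select c;

        foreach (Customer c in inCredit)
            Console.WriteLine("{0}: {1:C}", c.Name, c.Balance);
    }
}
For the ORM side (getting those objects out of a database in the first place) you would still need one of the other suggestions.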
A:
There's a way to reference LINQ in the .NET 2.0 Framework, but I have to warn you that it might be against the terms of use/EULA of the framework:
LINQ on the .NET 2.0 Runtime
A:
First of all: getting Linq itself to work on 2.0 is out of the question. It's possible, but really not something to do outside a testing environment.
The closest you can get in terms of the ORM/dynamic querying part of it is, imho, SubSonic, which I'd recommend for anyone stuck in C# 2.0.
A:
LinqBridge looks like a pretty nice place to start. Since I have VS2008, I just need to compile and deploy to a .NET 2.0 server.
I've looked at SubSonic and it's also an interesting alternative, but LinqBridge seems to provide a much closer fit, so I'm not going to have to go and learn a new ORM / query syntax.
| Querying like Linq when you don't have Linq | I have a project that I'm currently working on but it currently only supports the .net framework 2.0. I love linq, but because of the framework version I can't use it. What I want isn't so much the ORM side of things, but the "queryability" (is that even a word?) of Linq.
So far the closest is llblgen but if there was something even lighter weight that could just do the querying for me that would be even better.
I've also looked at NHibernate which looks like it could go close to doing what I want, but it has a pretty steep learning curve and the mapping files don't get me overly excited.
If anyone is aware of something that will give me a similar query interface to Linq (or even better, how to get Linq to work on the .net 2.0 framework) I'd really like to hear about it.
| [
"Have a look at this:\nhttp://www.albahari.com/nutshell/linqbridge.html\nLinq is several different things, and I'm not 100% sure which bits you want, but the above might be useful in some way. If you don't already have a book on Linq (I guess you don't), then I found \"Linq In Action\" to be be good.\n",
"You might want to check out Subsonic. It is an ORM that uses an ActiveRecord pattern. I'm pretty sure most of its features work with the .NET Framework 2.0.\n",
"To echo what Lance said - the SubSonic query language has a fluent interface which isn't as pretty as LINQ, but gives you some of the benefits (compile time checking, intellisense, etc.).\n",
"LinqBridge works fine under .NET 2.0, and you get all the Linq extensions and query language. You need VS 2008 in order to use it, but you already knew that.\nHowever, Linq it not an ORM. It's a query syntax. If you want to use Linq to query a database, you will need .NET 3.5. That's because 2.0 does not provide the mechanism needed to convert Linq code to your favorite database query language.\nIn other words, if an ORM is what you need, LinqBridge will not help you. You need to check out some of the other suggestions provided.\n",
"There's a way to reference LINQ in the .NET 2.0 Framework, but I have to warn you that it might be against the terms of use/EULA of the framework:\nLINQ on the .NET 2.0 Runtime\n",
"First of all. Getting linq itself to work on 2.0 is out of the question. Its possible, but really not something to do outside a testing environment.\n\nThe closest you can get in terms of the ORM/Dynamic Querying part of it, is imho SubSonic, which I'll recommend for anyone stuck in C# 2.0\n",
"LinqBridge looks like a pretty nice place to start since I have VS2008, I just need to compile and deploy to a .net 2.0 server.\nI've looked at SubSonic and it's also an interesting alternative, but linqbridge seems to provide a much closer fit so I'm not going to have to go and learn a new ORM / query syntax.\n"
] | [
5,
3,
1,
1,
0,
0,
0
] | [] | [] | [
"database",
"linq",
"orm"
] | stackoverflow_0000007652_database_linq_orm.txt |
Q:
Can you use LINQ tools such as SQLMetal with an access database?
I'm creating a small database application to teach myself the following concepts
C# programming
.Net 3.5 framework
WPF
LINQ ORM
I want to use Microsoft Access as the database, but I can't seem to find any mention of whether it's possible to use SQLMetal to generate the ORM code from a Microsoft Access database.
Does anyone know if this is possible?
If not, are there any small database or embedded databases I could use? I think SQL express would be overkill for me at this point.
A:
For an embedded database, you can use SQL Server Compact Edition. Unlike SQL Server Express, it is not compatible with the LINQ to SQL designer, but it is fully compatible with the command-line SQLMetal. It has a few advantages over SQL Express, like the ability to run embedded or from a file. Microsoft has a very handy chart outlining the differences between Express and Compact.
A:
I don't think SQL Express would be overkill if you want to learn real-world skills - quite the opposite in fact! That'd be my choice, and whatever I chose, I'd stay clear of Access.
Good luck
A:
AFAIK, Linq to SQL is MSSQL server provider specific. To be honest, SQL Express is pretty lightweight on today's machines.
BTW don't confuse LINQ with Linq to SQL. Linq is the underlying technology to provide "query"-like support to .NET (amongst other things), whereas L2S is effectively a Data Access technology built on top of Linq. Vanilla Linq will work with any ADO.NET provider, and of course Access has one.
Entity Framework will work with any compatible provider also but if SQLExpress is too heavy for you then I wouldn't recommend going down this path...
A:
Thanks for all the responses. I never expected to get an answer this quick. For my test application I think SQL Server Compact Edition would be the way to go. I'm basically creating a money management app similar to Microsoft Money, and although it is an exercise to learn skills, I would eventually want to use it to manage my finances (provided it's not too crap!)
This is why I thought a fully-blown database would be overkill.
| Can you use LINQ tools such as SQLMetal with an access database? | I'm creating a small database application to teach myself the following concepts
C# programming
.Net 3.5 framework
WPF
LINQ ORM
I want to use Microsoft Access as the database but I can't seem to find any mention of whether its possible to use SQLMetal to generate the ORM code from a Microsoft Access database.
Does anyone know if this is possible?
If not, are there any small database or embedded databases I could use? I think SQL express would be overkill for me at this point.
| [
"For an embedded database, you can use SQL Server Compact Edition. Unlike SQL Server Express, it is not compatible with the LINQ to SQL designer, but it is fully compatible with the command-line SQLMetal. It has a few advantages over SQL Express, like to ability to use embedded or run from a file. Microsoft has a very handy chart outlining the differences between Express and Compact.\n",
"I don't think SQL Express would be overkill if you want to learn real-world skills - quite the opposite in fact! That'd be my choice, and whatever I chose, I'd stay clear of Access.\nGood luck\n",
"AFAIK, Linq to SQL is MSSQL server provider specific. To be honest, SQL Express is pretty lightweight on todays machines.\nBTW don't confuse LINQ with Linq to SQL. Linq is the underlying technology to provide \"query\" like support to .NET (amongst other things), where as L2S is effectively a Data Access technology built on top of Linq. Vanilla Linq will work with any ADO.NET provider, which of course Access is one.\nEntity Framework will work with any compatible provider also but if SQLExpress is too heavy for you then I wouldn't recommend going down this path...\n",
"Thanks for all the responses. I never expected to get an answer this quick. For my test application I think SQL Server Compact Edition would be the way to go. I'm basically creating a money managment app similar to Microsoft Money and although it is an exercise to learn skills, I would eventually want to use it to manage my finances (provided its not too crap!) \nThis why I thought a fully blown database would be overkill.\n"
] | [
4,
1,
1,
0
] | [] | [] | [
"c#",
"linq_to_sql",
"ms_access"
] | stackoverflow_0000030004_c#_linq_to_sql_ms_access.txt |
Q:
Getting Started with Unit Testing
Unit testing is, roughly speaking, testing bits of your code in isolation with test code. The immediate advantages that come to mind are:
Running the tests becomes automate-able and repeatable
You can test at a much more granular level than point-and-click testing via a GUI
Rytmis
My question is, what are the current "best practices" in terms of tools as well as when and where to use unit testing as part of your daily coding?
Lets try to be somewhat language agnostic and cover all the bases.
A:
OK, here are some best practices from someone who doesn't unit test as much as he should... cough.
Make sure your tests test one
thing and one thing only.
Write unit tests as you go. Preferably before you write the code you are testing against.
Do not unit test the GUI.
Separate your concerns.
Minimise the dependencies of your tests.
Mock behaviour with mocks.
A:
You might want to look at TDD on Three Index Cards and Three Index Cards to Easily Remember the Essence of Test-Driven Development:
Card #1. Uncle Bob’s Three Laws
Write no production code except to pass a failing test.
Write only enough of a test to demonstrate a failure.
Write only enough production code to pass the test.
Card #2: FIRST Principles
Fast: Mind-numbingly fast, as in hundreds or thousands per second.
Isolated: The test isolates a fault clearly.
Repeatable: I can run it repeatedly and it will pass or fail the same way each time.
Self-verifying: The Test is unambiguously pass-fail.
Timely: Produced in lockstep with tiny code changes.
Card #3: Core of TDD
Red: test fails
Green: test passes
Refactor: clean code and tests
A:
The so-called xUnit framework is widely used. It was originally developed for Smalltalk as SUnit, evolved into JUnit for Java, and now has many other implementations such as NUnit for .Net. It's almost a de facto standard - if you say you're using unit tests, a majority of other developers will assume you mean xUnit or similar.
A:
A great resource for 'best practices' is the Google Testing Blog; for example, a recent post on Writing Testable Code is a fantastic resource. Specifically, their 'Testing on the Toilet' series of weekly posts is great for posting around your cube, or toilet, so you can always be thinking about testing.
A:
The xUnit family are the mainstay of unit testing. They are integrated into the likes of Netbeans, Eclipse and many other IDEs. They offer a simple, structured solution to unit testing.
One thing I always try to do when writing a test is to minimise external code usage. By that I mean: I try to minimise the setup and teardown code for the test as much as possible, and try to avoid using other modules/code blocks as much as possible. Well-written modular code shouldn't require too much external code in its setup and teardown.
A:
NUnit is a good tool for any of the .NET languages.
Unit tests can be used in a number of ways:
Test Logic
Increase separation of code units. If you can't fully test a function or section of code, then the parts that make it up are too interdependant.
Drive development: some people write tests before they write the code to be tested. This forces you to think about what you want the code to do, and then gives you a definite guideline on when you have achieved that.
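To give a flavour of the tool, here is a small hypothetical NUnit test; the Calculator class is invented purely for illustration, and the point is the arrange/act/assert shape of a fast, isolated test.
// Hypothetical example; requires a reference to nunit.framework.dll.
using NUnit.Framework;

public class Calculator
{
    public int Add(int a, int b) { return a + b; }
}

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_TwoPositiveNumbers_ReturnsTheirSum()
    {
        Calculator calc = new Calculator();   // arrange
        int result = calc.Add(2, 3);          // act
        Assert.AreEqual(5, result);           // assert
    }
}
A test this small also stays fast and isolated, in line with the FIRST principles mentioned earlier in the thread.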
A:
Don't forget refactoring support. ReSharper on .NET provides automatic refactoring and quick fixes for missing code. That means if you write a call to something that does not exist, ReSharper will ask if you want to create the missing piece.
| Getting Started with Unit Testing |
Unit testing is, roughly speaking, testing bits of your code in isolation with test code. The immediate advantages that come to mind are:
Running the tests becomes automate-able and repeatable
You can test at a much more granular level than point-and-click testing via a GUI
Rytmis
My question is, what are the current "best practices" in terms of tools as well as when and where to use unit testing as part of your daily coding?
Lets try to be somewhat language agnostic and cover all the bases.
| [
"Ok here's some best practices from some one who doesn't unit test as much as he should...cough.\n\nMake sure your tests test one\nthing and one thing only.\nWrite unit tests as you go. Preferably before you write the code you are testing against.\nDo not unit test the GUI. \nSeparate your concerns. \nMinimise the dependencies of your tests.\nMock behviour with mocks.\n\n",
"You might want to look at TDD on Three Index Cards and Three Index Cards to Easily Remember the Essence of Test-Driven Development:\nCard #1. Uncle Bob’s Three Laws\n\nWrite no production code except to pass a failing test.\nWrite only enough of a test to demonstrate a failure.\nWrite only enough production code to pass the test.\n\nCard #2: FIRST Principles\n\nFast: Mind-numbingly fast, as in hundreds or thousands per second.\nIsolated: The test isolates a fault clearly.\nRepeatable: I can run it repeatedly and it will pass or fail the same way each time.\nSelf-verifying: The Test is unambiguously pass-fail.\nTimely: Produced in lockstep with tiny code changes.\n\nCard #3: Core of TDD\n\nRed: test fails\nGreen: test passes\nRefactor: clean code and tests\n\n",
"The so-called xUnit framework is widely used. It was originally developed for Smalltalk as SUnit, evolved into JUnit for Java, and now has many other implementations such as NUnit for .Net. It's almost a de facto standard - if you say you're using unit tests, a majority of other developers will assume you mean xUnit or similar.\n",
"A great resource for 'best practices' is the Google Testing Blog, for example a recent post on Writing Testable Code is a fantastic resource. Specifically their 'Testing on the Toilet' series weekly posts are great for posting around your cube, or toilet, so you can always be thinking about testing. \n",
"The xUnit family are the mainstay of unit testing. They are integrated into the likes of Netbeans, Eclipse and many other IDEs. They offer a simple, structured solution to unit testing.\nOne thing I always try and do when writing a test is to minimise external code usage. By that I mean: I try to minimise the setup and teardown code for the test as much as possible and try to avoid using other modules/code blocks as much as possible. Well-written modular code shouldn't require too much external code in it's setup and teardown.\n",
"NUnit is a good tool for any of the .NET languages.\nUnit tests can be used in a number of ways: \n\nTest Logic\nIncrease separation of code units. If you can't fully test a function or section of code, then the parts that make it up are too interdependant.\nDrive development, some people write tests before they write the code to be tested. This forces you to think about what you want the code to do, and then gives you a definite guideline on when you have acheived that.\n\n",
"Don't forget refactoring support. ReSharper on .NET provides automatic refactoring and quick fixes for missing code. That means if you write a call to something that does not exist, ReSharper will ask if you want to create the missing piece.\n"
] | [
22,
14,
3,
3,
1,
0,
0
] | [] | [] | [
"unit_testing"
] | stackoverflow_0000016860_unit_testing.txt |
Q:
Passing around sets of data
A question that I have pondered for a while now. I am primarily a .NET developer who dabbles in Objective-C for iPhone and Mac.
How do you go about sending "datasets" between methods in Objective-C? For example, in C# you can populate a custom class with data and pass it around in a List of that custom class type. E.g., if you had a customer class you would just do something like:
List<Customer> customers = DataLayer.GetAllCustomers();
The only way I can see how this could be done in obj-c would be to populate an NSArray with custom objects? Is this an efficient way to do things? Any other recommendations? I am using sqlite as the database/data I want to return.
A:
You're on the right track.
Cocoa's collection classes — which all have mutable and immutable variants — are:
NSArray: ordered, can contain an object multiple times
NSDictionary: unordered, mapping from keys to values, keys are copied
NSSet: unordered, can contain an object only once
NSCountedSet: unordered, can contain an object multiple times
The immutable variants help a lot with efficiency. The standard pattern for accessors of classes that have mutable variants is to copy rather than retain. This is codified in the @property mechanism, by using the copy attribute on the property:
// Department.h
@interface Department : NSObject
@property (readwrite, copy) NSSet *employees;
@end
This means that if you pass a mutable array to something that takes an array, it will be copied, and if you pass that to something else, it will be copied again. The trick is though that "copying" an immutable object really just retains it, so you only take a hit for that first copy. You probably want to make a copy that first time anyway so you don't pass a mutable array to something else, then mutate it behind the back of whatever you passed it to.
For Cocoa on Mac OS X, I'd also strongly encourage you to take a look at Core Data. It's an alternative to the "data set" pattern you might be used to from .NET/ADO/etc. With Core Data, you don't "get all customers" and then pass that collection around. Instead you query for the customers you care about, and as you traverse relationships of the objects you've queried for, other objects will be pulled in for you automatically.
Core Data also gets you features like visual modeling of your entities, automatic generation of property getters & setters, fine-grained control over migration from one schema version to another, and so on.
| Passing around sets of data | A question that has pondered me for the last while. I am primarily a .net developer who dabbles in Objective-C for iPhone and Mac.
How do you go about sending "datasets" between methods in objective-c. For example in C# you can populate a custom class with data and pass it around in a List of type custom class. EG if you had a customer class you would just do something like:
List<Customer> customers = DataLayer.GetAllCustomers();
The only way I can see how this could be done in obj-c would be to populate an NSArray with custom objects? Is this an efficient way to do things? Any other recommendations? I am using sqlite as the database/data I want to return.
| [
"You're on the right track.\nCocoa's collection classes — which all have mutable an immutable variants — are:\n\nNSArray: ordered, can contain an object multiple times\nNSDictionary: unordered, mapping from keys to values, keys are copied\nNSSet: unordered, can contain an object only once\nNSCountedSet: unordered, can contain an object multiple times\n\nThe immutable variants help a lot with efficiency. The standard pattern for accessors of classes that have mutable variants is to copy rather than retain. This is codified in the @property mechanism, by using the copy attribute on the property:\n// Department.h\n@interface Department : NSObject\n@property (readwrite, copy) NSSet *employees;\n@end\n\nThis means that if you pass a mutable array to something that takes an array, it will be copied, and if you pass that to something else, it will be copied again. The trick is though that \"copying\" an immutable object really just retains it, so you only take a hit for that first copy. You probably want to make a copy that first time anyway so you don't pass a mutable array to something else, then mutate it behind the back of whatever you passed it to.\nFor Cocoa on Mac OS X, I'd also strongly encourage you to take a look at Core Data. It's an alternative to the \"data set\" pattern you might be used to from .NET/ADO/etc. With Core Data, you don't \"get all customers\" and then pass that collection around. Instead you query for the customers you care about, and as you traverse relationships of the objects you've queried for, other objects will be pulled in for you automatically.\nCore Data also gets you features like visual modeling of your entities, automatic generation of property getters & setters, fine-grained control over migration from one schema version to another, and so on.\n"
] | [
25
] | [] | [] | [
"cocoa",
"macos",
"objective_c",
"sqlite"
] | stackoverflow_0000031237_cocoa_macos_objective_c_sqlite.txt |
Q:
Do you need the .NET 1.0 framework to target the .NET 1.0 framework?
I have a bunch of .NET frameworks installed on my machine.
I know that with the Java JDK, I can use the 6.0 version to target 5.0 and earlier.
Can I do something similar with the .NET framework - target 1.0 and 2.0 with the 3.0 framework?
A:
Visual Studio 2008 was the first to support targeting older versions of .NET. Unfortunately, it supports only .NET 2 and up.
In other words, you'll need .NET framework SDK 1 or 1.1 to do this.
A:
We use Visual Studio 2008 to maintain a .NET 1.1 WebForms app using MSBee. It required a bit of initial *.csproj/msbuild file hackery, but works very well. Of course, you're limited to .NET 1.1 features (it uses the old 1.1 compilers), so no Generics or LINQ. But if you're wanting just one copy of Visual Studio installed it's the way to go.
A:
(Updated)
You need to compile with the 1.0 compilers. These are only available with the 1.0 release of the runtime/SDK.
The 2.0/3.5 compilers won't emit 1.0-compatible assemblies.
Visual Studio 2008 can generate 2.0 assemblies, but 1.0 was left off.
| Do you need the .NET 1.0 framework to target the .NET 1.0 framework? | I have a bunch of .NET frameworks installed on my machine.
I know that with the Java JDK, I can use the 6.0 version to target 5.0 and earlier.
Can I do something similar with the .NET framework - target 1.0 and 2.0 with the 3.0 framework?
| [
"Visual Studio 2008 was the first to support targeting older versions of .NET. Unfortunately, it supports only .NET 2 and up.\nIn other words, you'll need .NET framework SDK 1 or 1.1 to do this.\n",
"We use Visual Studio 2008 to maintain a .NET 1.1 WebForms app using MSBee. It required a bit of initial *.csproj/msbuild file hackery, but works very well. Of course, you're limited to .NET 1.1 features (it uses the old 1.1 compilers), so no Generics or LINQ. But if you're wanting just one copy of Visual Studio installed it's the way to go.\n",
"(Updated)\nYou need to compile with the 1.0 compilers. These are only available with the 1.0 release of the runtime/SDK.\nThe 2.0/3.5 compilers won't emit 1.0-compatible assemblies.\nVisual Studio 2008 can generate 2.0 assemblies, but 1.0 was left off.\n"
] | [
2,
2,
1
] | [] | [] | [
".net"
] | stackoverflow_0000031343_.net.txt |