content | title | question | answers | answers_scores | non_answers | non_answers_scores | tags | name
---|---|---|---|---|---|---|---|---
stringlengths 86 to 88.9k | stringlengths 0 to 150 | stringlengths 1 to 35.8k | sequence | sequence | sequence | sequence | sequence | stringlengths 30 to 130
Q:
How do I make persistent network sockets on Unix in Ruby?
I'd like to be able to write a Ruby program that can restart without dropping its socket connections.
A:
This program gets Google's homepage and then, when you send it SIGINT via Ctrl-C, it restarts the program and reads the output of the homepage from the open socket with Google.
#!/usr/bin/ruby
#simple_connector.rb
require 'socket'
puts "Started."
if ARGV[0] == "restart"
sock = IO.open(ARGV[1].to_i)
puts sock.read
exit
else
sock = TCPSocket.new('google.com', 80)
sock.write("GET /\n")
end
Signal.trap("INT") do
puts "Restarting..."
exec("ruby simple_connector.rb restart #{sock.fileno}")
end
while true
sleep 1
end
A:
You're talking about network sockets, not Unix sockets, I assume?
I'm not sure this suits your needs, but the way I would do it is by separating the networking and logic parts, restarting only the logic part, and then reconnecting the logic part to the networking part.
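For illustration, a minimal single-process sketch of that hand-off in Ruby: the networking side owns the TCP connection and passes its file descriptor over a Unix domain socket pair, so a restarted logic side can pick the same connection back up (send_io/recv_io are in the standard socket library; the process layout here is assumed, not from the answer).
require 'socket'

net_end, logic_end = UNIXSocket.pair    # channel between the two halves
sock = TCPSocket.new('google.com', 80)  # owned by the networking half
net_end.send_io(sock)                   # hand the descriptor across

conn = logic_end.recv_io                # logic half (restartable) resumes
conn.write("GET / HTTP/1.0\r\n\r\n")    # same underlying connection
puts conn.read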
| answers_scores: [1, 0] | tags: [ruby, sockets] | stackoverflow_0000038592_ruby_sockets.txt |
Q:
Pinning pointer arrays in memory
I'm currently working on a ray-tracer in C# as a hobby project. I'm trying to achieve a decent rendering speed by implementing some tricks from a c++ implementation and have run into a spot of trouble.
The objects in the scenes which the ray-tracer renders are stored in a KdTree structure and the tree's nodes are, in turn, stored in an array. The optimization I'm having problems with is while trying to fit as many tree nodes as possible into a cache line. One means of doing this is for nodes to contain a pointer to the left child node only. It is then implicit that the right child follows directly after the left one in the array.
The nodes are structs, and during tree construction they are successfully put into the array by a static memory manager class. When I begin to traverse the tree it, at first, seems to work just fine. Then at a point early in the rendering (about the same place each time), the left child pointer of the root node is suddenly a null pointer. I have come to the conclusion that the garbage collector has moved the structs, as the array lies on the heap.
I've tried several things to pin the addresses in memory, but none of them seems to last for the entire application lifetime as I need. The 'fixed' keyword only seems to help during single method calls, and 'fixed' arrays can only be declared on simple types, which a node isn't. Is there a good way to do this, or am I just too far down the path of stuff C# wasn't meant for?
Btw, changing to c++, while perhaps the better choice for a high performance program, is not an option.
A:
Firstly, if you're using C# normally, you can't suddenly get a null reference due to the garbage collector moving stuff, because the garbage collector also updates all references, so you don't need to worry about it moving stuff around.
You can pin things in memory but this may cause more problems than it solves. For one thing, it prevents the garbage collector from compacting memory properly, and may impact performance in that way.
One thing I would say from your post is that using structs may not help performance as you hope. C# fails to inline any method calls involving structs, and even though they've fixed this in their latest runtime beta, structs frequently don't perform that well.
Personally, I would say C++ tricks like this don't generally tend to carry over too well into C#. You may have to learn to let go a bit; there can be other more subtle ways to improve performance ;)
A:
What is your static memory manager actually doing? Unless it is doing something unsafe (P/Invoke, unsafe code), the behaviour you are seeing is a bug in your program, and not due to the behaviour of the CLR.
Secondly, what do you mean by 'pointer', with respect to links between structures? Do you literally mean an unsafe KdTree* pointer? Don't do that. Instead, use an index into the array. Since I expect that all nodes for a single tree are stored in the same array, you won't need a separate reference to the array. Just a single index will do.
Finally, if you really really must use KdTree* pointers, then your static memory manager should allocate a large block using e.g. Marshal.AllocHGlobal or another unmanaged memory source; it should both treat this large block as a KdTree array (i.e. index a KdTree* C-style) and it should suballocate nodes from this array, by bumping a "free" pointer.
If you ever have to resize this array, then you'll need to update all the pointers, of course.
The basic lesson here is that unsafe pointers and managed memory do not mix outside of 'fixed' blocks, which of course have stack frame affinity (i.e. when the function returns, the pinned behaviour goes away). There is a way to pin arbitrary objects, like your array, using GCHandle.Alloc(yourArray, GCHandleType.Pinned), but you almost certainly don't want to go down that route.
You will get more sensible answers if you describe in more detail what you are doing.
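As a sketch of that index-based layout (field names are illustrative, not from the thread): the node stores an int index instead of a pointer, and the right child is implicitly the next slot.
struct KdNode
{
    public float SplitValue;
    public int LeftChild;    // index into the node array; -1 marks a leaf

    // The right child sits directly after the left one by construction.
    public static int RightChild(int leftChild) { return leftChild + 1; }
}

// Traversal holds no raw pointers, so the GC may move the array freely:
// KdNode left  = nodes[node.LeftChild];
// KdNode right = nodes[KdNode.RightChild(node.LeftChild)];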
A:
If you really want to do this, you can use the GCHandle.Alloc method to specify that a pointer should be pinned without being automatically released at the end of the scope like the fixed statement.
But, as other people have been saying, doing this is putting undue pressure on the garbage collector. What about just creating a struct that holds onto a pair of your nodes and then managing an array of NodePairs rather than an array of nodes?
If you really do want to have completely unmanaged access to a chunk of memory, you would probably be better off allocating the memory directly from the unmanaged heap rather than permanently pinning a part of the managed heap (this prevents the heap from being able to properly compact itself). One quick and simple way to do this would be to use Marshal.AllocHGlobal method.
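For completeness, hedged sketches of both escape hatches mentioned here, which the answers advise against for long-lived data (requires using System.Runtime.InteropServices; names and sizes are illustrative):
// Pin a managed array for as long as the handle lives:
GCHandle handle = GCHandle.Alloc(nodes, GCHandleType.Pinned);
IntPtr basePtr = handle.AddrOfPinnedObject();
// ... unsafe traversal against basePtr ...
handle.Free();   // un-pin; forgetting this blocks heap compaction

// Or step outside the managed heap entirely:
IntPtr block = Marshal.AllocHGlobal(nodeCount * nodeSizeInBytes);
// ... suballocate nodes from 'block' C-style ...
Marshal.FreeHGlobal(block);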
A:
Is it really prohibitive to store the pair of array reference and index?
A:
What is your static memory manager actually doing? Unless it is doing something unsafe (P/Invoke, unsafe code), the behaviour you are seeing is a bug in your program, and not due to the behaviour of the CLR.
I was in fact speaking about unsafe pointers. What I wanted was something like Marshal.AllocHGlobal, though with a lifetime exceeding a single method call. On reflection it seems that just using an index is the right solution as I might have gotten too caught up in mimicking the c++ code.
One thing I would say from your post is that using structs may not help performance as you hope. C# fails to inline any method calls involving structs, and even though they've fixed this in their latest run-time beta, structs frequently don't perform that well.
I looked into this a bit and I see it has been fixed in .NET 3.5 SP1; I assume that's what you were referring to as the run-time beta. In fact, I now understand that this change accounted for a doubling of my rendering speed. Now structs are aggressively inlined, improving their performance greatly on x86 systems (x64 already had better struct performance).
| answers_scores: [4, 2, 1, 0, 0] | tags: [c#, optimization, raytracing, unsafe] | stackoverflow_0000038302_c#_optimization_raytracing_unsafe.txt |
Q:
Get back to basics. How do I get back into C++?
I haven't used C++ since college. Even though I've wanted to, I haven't needed to do any until I started wanting to write plugins for Launchy.
Is there a good book to read to get back into it?
My experience since college is mainly C# and, recently, Ruby. I bought a book for C# developers and it ended up being about how to write C++/CLI. While a good book, it wasn't quite what I was looking for.
A:
My favorites are Effective C++, More Effective C++, and Effective STL by Scott Meyers. Also C++ Coding Standards by Sutter and Alexandrescu.
A:
The best way to get back into C++ is to jump in. You can't learn a real language without spending any serious time in a country where they speak it. I wouldn't try to learn a programming language without spending time coding in it either.
I wouldn't recommend learning C first though. That's a good way to pick up some bad habits in C++.
A:
I haven't tried it myself, but have heard from people and sources I trust that "Accelerated C++" by Koenig and Moo is a good book for people who want to pick up C++ quickly. Compared to the more traditional route of learning C first, then C++ as a kind of C with classes, the K&M approach helps you become productive quickly while avoiding pitfalls and bad habits associated with the legacy of the language.
A:
A good starting place is "Thinking in C++" by Bruce Eckel, I've rarely had anyone complain about the book. Well written and also has a version available online.
A:
Another online book that I pick up whenever I need to get back into C++ is "C++ In Action" by Bartosz Milewski. It's online at his site.
A:
The C++ Programming Language by Bjarne Stroustrup covers C++ in depth. Bjarne is the inventor of C++. It also provides insights into why the language is the way it is. Some people find the book a little terse. I found it to be an enjoyable read. If you have done some C++ before it's a great place to start. It is by no means a beginners book on C++.
A:
My book recommendations:
Essential C++ (Lippman)
C++ Common Knowledge: Essential Intermediate Programming (Dewhurst)
...and I second the Effective C++ suggestion above.
A very handy alternative to buying books in meatspace is to subscribe to a service like Safari Books Online. For a not unreasonable monthly fee you'll get access to all of the above books plus a bajillion others. If you desire fast random access to more than a couple books, it pretty much pays for itself. It's an easy case to make if you want to convince your employer to pay for it.
Beyond that, sit yourself in front of an IDE that has a C++ code completion feature (I use Eclipse/CDT most of the time).
| answers_scores: [10, 7, 6, 2, 2, 1, 1] | tags: [c++] | stackoverflow_0000037428_c++.txt |
Q:
Can you modify text files when committing to subversion?
I want to run the following script on text files that are being committed:
# Send the commands H and w to ed
# ed will append newline if the file does not end in one
printf "%s\n" H w | ed -s $1
# Strip trailing whitespace
sed -i 's/[ \t]*$//g' $1
# Convert tabs to 4 spaces
sed -i -r "s/\t/    /g" $1
I see Subversion has start-commit and pre-commit hooks, but I can't follow the documentation on how I could process the text files with the above script.
A:
You mean change the text file before it's committed? You can (I'm not sure how), but it's generally not a good idea, as it doesn't tell the client about the change, so the local copies become stale on commit.
What I would do is block the commit (non zero exit), and give an error message as to why you don't want that revision to go through.
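A minimal sketch of that blocking approach as a pre-commit hook (Subversion passes the hook the repository path and transaction id; the whitespace rule and message are illustrative):
#!/bin/sh
REPOS="$1"
TXN="$2"

# Reject the commit if any added line ends in whitespace.
if svnlook diff -t "$TXN" "$REPOS" | grep -qE '^\+.*[ \t]+$'; then
  echo "Commit rejected: strip trailing whitespace first." >&2
  exit 1
fi
exit 0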
| answers_scores: [7] | tags: [hook, svn] | stackoverflow_0000038664_hook_svn.txt |
Q:
To use views or not to use views
I seem right now to be embroiled in a debate with another programmer on this project, who thinks that views have no merits. He proposes a system where the PHP looks something like this:
$draw = new Draw;
$nav = $draw->wideHeaderBox().
$draw->left().
$draw->image().
Image::get($image,60,array('id'=>'header_image')).
$draw->imageEnd().
$draw->leftEnd().
$draw->left(10).
'<div id="header_text">'.
self::defaultSectionText().
'</div>'.
$draw->leftEnd().
and so on (this is in the controller, btw). Now, his arguments for this actually make some sense: he claims that if there is a redesign, all we need to do is change the HTML in one place and it changes everywhere automatically. For some reason, however, this method still rubs me the wrong way. Is there any merit to views over this method? I mean, besides not having to retype HTML by hand.
A:
HTML time-savers are useful, but they're only useful when they're intuitive and easy to understand. Having to instantiate a new Draw just doesn't sound very natural. Furthermore, wideHeaderBox and left will only have significance to someone who intimately knows the system. And what if there is a redesign, like your co-worker muses? What if the wideHeaderBox becomes very narrow? Will you change the markup (and styles, presumably) generated by the PHP method but leave a very inaccurate method name to call the code?
If you guys just have to use HTML generation, you should use it interspersed in view files, and you should use it where it's really necessary/useful, such as something like this:
HTML::link("Wikipedia", "http://en.wikipedia.org");
HTML::bulleted_list(array(
HTML::list_item("Dogs"),
HTML::list_item("Cats"),
HTML::list_item("Armadillos")
));
In the above example, the method names actually make sense to people who aren't familiar with your system. They'll also make more sense to you guys when you go back into a seldom-visited file and wonder what the heck you were doing.
A:
The argument he uses is the argument you need to have views. Both result in only changing it in one place. However, in his version, you are mixing view markup with business code.
I would suggest using more of a templated design. Do all your business logic in the PHP, setup all variables that are needed by your page. Then just have your page markup reference those variables (and deal with no business logic whatsoever).
Have you looked at smarty? http://smarty.php.net
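A minimal sketch of that split in plain PHP (file names are made up; the Image::get call is borrowed from the question, and SectionHelper is a hypothetical stand-in for the question's self::defaultSectionText()): the controller prepares variables, the template only echoes them.
<?php
// controller.php: business logic only
$headerImage = Image::get($image, 60, array('id' => 'header_image'));
$headerText  = SectionHelper::defaultSectionText();
include 'header.tpl.php';
?>

<?php // header.tpl.php: markup only, no logic ?>
<div id="header">
  <?php echo $headerImage; ?>
  <div id="header_text"><?php echo $headerText; ?></div>
</div>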
A:
I've done something like that in the past, and it was a waste of time. For instance, you basically have to write wrappers for everything you can already do with HTML, and you WILL forget some things. When you need to change something in the layout you will think "Shoot, I forgot about that..now I gotta code another method or add another parameter".
Ultimately, you will have a huge collection of functions/classes that generate HTML which nobody will know or remember how to use months from now. New developers will curse you for using this system, since they will have to learn it before changing anything. In contrast, more people probably know HTML than your abstract HTML drawing classes...and sometimes you just gotta get your hands dirty with pure HTML!
A:
It looks pretty verbose and hard to follow to be honest and some of the code looks like it is very much layout information.
We always try to split the logic from the output as much as possible. However, it is often the case that the view and data are very tightly linked with both part dictating how the other should be (eg, in a simple e-commerce site, you may decide you want to start showing stock levels next to each product, which would obviously involve changing the view to add appropriate html for this, and the business logic to go and figure out a value for the stock).
If the thought of maintaining 2 files to do this is too much to handle, try splitting things into a "Gather data" part and a "Display View" part, getting you most of the benefits without increasing the number of files you need to manage.
A:
I always find it much easier to work directly with HTML. There's one less abstraction layer (HTML -> actual webpage, versus PHP function -> HTML -> actual webpage) to deal with when you just work in HTML.
I really think the 'just have to change it in one place' thing won't work in this case. This is because there'll be so many times when you want to change the output of a function, but only in just one place. Sure, you can use arguments, but you'll soon end up with some functions having like a dozen arguments. Yuck.
Bear in mind templating languages / systems often let you include sub templates, allowing you to have some reusable blocks of html.
The bottom line is if I had just started at your company and saw code like that everywhere, my first thought would be, 'Damn it! Need a new job again.'
| answers_scores: [5, 1, 1, 1, 1] | tags: [model_view_controller, php] | stackoverflow_0000037731_model_view_controller_php.txt |
Q:
How do you unit test business applications?
How are people unit testing their business applications? I've seen a lot of examples of unit testing with "simple to test" examples, e.g. a calculator. How are people unit testing data-heavy applications? How are you putting together your sample data? In many cases, data for one test may not work at all for another test, which makes it hard to have just one test database.
Testing the data access portion of the code is fairly straightforward. It's testing out all the methods that work against the data that seem to be hard to test. For example, imagine a posting process where there is heavy data access to determine what is posted, numbers are adjusted, etc. There are a number of interim steps that occur (and need to be tested) along with tests afterwards that ensure the posting was successful. Some of those steps may actually be stored procedures.
In the past I've tried inserting the test data in a test database, then running the test, but honestly it's pretty painful to write this kind of code (and error prone). I've also tried just building a test database up front and rolling back the changes. That works OK but in a number of places you can't easily do this either (and many people would say that's integration testing; so be it, I still need to be able to test this somehow).
If the answer is that there isn't a nice way of handling this and it currently just sort of sucks, that would be useful to know as well.
Any thoughts, ideas, suggestions, or tips are appreciated.
A:
My automated functional tests usually follow one of two patterns:
Database Connected Tests
Mock Persistence Layer Tests
Database Connected Tests
When I have automated tests that are connected to the database, I usually make a single test database template that has enough data for all the tests. When the automated tests are run, a new test database is generated from the template for every test. The test database has to be constantly re-generated because tests will often change the data. As tests are added, I usually append more data to the test database template.
There are some nice advantages to this testing method. The obvious advantage is that the tests also exercise your schema. Another advantage is that after setting up the initial tests, most new tests will be able to re-use the existing test data. This makes it easy to add more tests.
The downside is that the test database will become unwieldy. Because data will usually be added one test at time, it will be inconsistent and maybe even unrealistic. You will also end up cursing the person who setup the test database when there is a significant database schema change (which for me usually means I end up cursing myself).
This style of testing obviously doesn't work if you can't generate new test databases at will.
Mock Persistence Layer Tests
For this pattern, you create mock objects that live with the test cases. These mock objects intercept the calls to the database so that you can programmatically provide the appropriate results. Basically, when the code you're testing calls the findCustomerByName() method, your mock object is called instead of the persistence layer.
The nice thing about using mock object tests is that you can get very specific. Often times, there are execution paths that you simply can't reach in automated tests w/o mock objects. They also free you from maintaining a large, monolithic set of test data.
Another benefit is the lack of external dependencies. Because the mock objects simulate the persistence layer, your tests are no longer dependent on the database. This is often the deciding factor when choosing which pattern to choose. Mock objects seem to get more traction when dealing with legacy database systems or databases with stringent licensing terms.
The downside of mock objects is that they often result in a lot of extra test code. This isn't horrible, because almost any amount of testing code is cheap when amortized over the number of times you run the test, but it can be annoying to have more test code than production code.
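For illustration, a hand-rolled stub of the persistence layer in C#, in the spirit of the findCustomerByName example above (the interface, Customer, and OrderLogic types are hypothetical, not from the thread):
public interface ICustomerStore
{
    Customer FindCustomerByName(string name);
}

// Stub: canned data, no database connection.
class StubCustomerStore : ICustomerStore
{
    public Customer FindCustomerByName(string name)
    {
        return new Customer(name, 500m);   // hypothetical (name, creditLimit) ctor
    }
}

[Test]
public void OrderOverCreditLimitIsRejected()
{
    OrderLogic logic = new OrderLogic(new StubCustomerStore());
    Assert.IsFalse(logic.Approve("Bob", 1000m));
}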
A:
It depends on what you're testing. If you're testing a business logic component -- then its immaterial where the data is coming from and you'd probably use a mock or a hand rolled stub class that simulates the data access routine the component would have called in the wild. The only time I mess with the data access is when I'm actually testing the data access components themselves.
Even then I tend to open a DB transaction in the TestFixtureSetUp method (obviously this depends on what unit testing framework you might be using) and rollback the transaction at the end of the test suite TestFixtureTeardown.
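A minimal sketch of that fixture pattern with old-style NUnit attributes (connection string and types are illustrative; requires System.Data.SqlClient, and each test's commands must run on the open transaction for the rollback to cover them):
[TestFixture]
public class CustomerDataAccessTests
{
    SqlConnection conn;
    SqlTransaction tx;

    [TestFixtureSetUp]
    public void OpenTransaction()
    {
        conn = new SqlConnection("...test db connection string...");
        conn.Open();
        tx = conn.BeginTransaction();
    }

    [TestFixtureTearDown]
    public void RollBack()
    {
        tx.Rollback();   // leaves the test database exactly as it was
        conn.Close();
    }
}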
A:
Mocking Frameworks enable you to test your business objects.
Data-driven tests often end up becoming more of an integration test than a unit test; they also carry with them the burden of managing the state of a data store pre- and post-execution of the test, and the time taken in connecting and executing queries.
In general I would avoid doing unit tests that touch the database from your business objects. For testing your database you need a different strategy.
That being said, you can never totally get away from data-driven testing; you can only limit the amount of tests that actually need to invoke your back-end systems.
A:
I have to second the comment by @Phil Bennett as I try to approach these integration tests with a rollback solution.
I have a very detailed post about integration testing your data access layer here
I show not only the sample data access class, base class, and sample DB transaction fixture class, but a full CRUD integration test with sample data shown. With this approach you don't need multiple test databases, as you can control the data going in with each test, and after the test is complete the transactions are all rolled back, so your DB is clean.
About unit testing business logic inside your app, I would also second the comments by @Phil and @Mark because if you mock out all the dependencies your business object has, it becomes very simple to test your application logic one entity at a time ;)
Edit: So are you looking for one huge integration test that will verify everything from the pre-database logic, through the stored procedure run, and finally a verification on the way back? If so you could break this out into 2 steps:
1 - Unit test the logic that happens before the data is pushed into your data access code. For example, if you have some code that calculates some numbers based on some properties -- write a test that only checks to see if the logic for this one function does what you asked it to do. Mock out any dependency on the data access class so you can ignore it for this test of the application logic alone.
2 - Integration test the logic that happens once you take your manipulated data (from the previous method we unit tested) and call the appropriate stored procedure. Do this inside a data-specific testing class so you can roll back after it's completed. After your stored procedure has run, do a query against the database to get your object, now that we have done some logic against the data, and verify it has the values you expected (post-stored-procedure logic, etc.).
If you need an entry in your database for the stored procedure to run, simply insert that data before you run the sproc that has your logic inside it. For example, if you have a product that you need to test, it might require a supplier and category entry to insert so before you insert your product do a quick and dirty insert for a supplier and category so your product insert works as planned.
A:
It sounds like you might be testing message based systems, or systems with highly parameterised interfaces, where there are large numbers of permutations of input data.
In general all the rules of standard unit testing still hold:
Try to make the units being tested as small and discrete as possible.
Try to make tests independent.
Factor code to decouple dependencies.
Use mocks and stubs to replace dependencies (like data access)
Once this is done you will have removed a lot of the complexity from the tests, hopefully revealing good sets of unit tests, and simplifying the sample data.
A good methodology for then compiling sample data for tests that still require complex input data is orthogonal testing, or see here.
I've used that sort of method for generating test plans for WCF and BizTalk solutions where the permutations of input messages can create multiple possible execution paths.
A:
For lots of different runs over the same logic but with different data you can use CSV, as many columns as you like for the input and the last for the output etc.
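Sketched in C# (the file name, columns, and PostingLogic call are hypothetical; requires System.IO): each row drives one run of the same logic, with the last column holding the expected result.
foreach (string line in File.ReadAllLines("posting_cases.csv"))
{
    string[] cols = line.Split(',');
    decimal input = decimal.Parse(cols[0]);
    decimal expected = decimal.Parse(cols[cols.Length - 1]);
    Assert.AreEqual(expected, PostingLogic.Adjust(input));
}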
| answers_scores: [6, 2, 2, 2, 1, 0] | tags: [unit_testing] | stackoverflow_0000038598_unit_testing.txt |
Q:
Automatically floating all fields in a VFP report?
I want to set all the fields and labels on a VFP7 report to Float and Stretch with overflow. I tried using the .frx file and doing the following REPLACE, but it didn't work.
Is there some other field I need to change too?
REPLACE float WITH .T. FOR objtype = 8
A:
It turns out you have to set top to .F. for float to take effect, this worked:
USE report.frx
REPLACE float with .T., stretch with .T., top with .F. for objtype = 8
| answers_scores: [2] | tags: [foxpro, report, visual_foxpro] | stackoverflow_0000038654_foxpro_report_visual_foxpro.txt |
Q:
ASP.NET XML ObjectDataSource Wrapper Class Examples
I want to use XML instead of SQL Server for a simple website.
Are there any good tutorials, code examples, and/or tools available for making a (preferably VB.NET) wrapper class to handle the basic list, insert, edit, and delete (CRUD) code?
The closest one I found was on a Telerik Trainer video/code for their Scheduler component where they used XML to handle the scheduling data in the demo. They created an ObjectDataSource class. Here is a LINK to that demo if anyone is interested.
[Reply to Esteban]
It would make deployment easier for clients that use GoDaddy, where the database isn't in the app_data folder. Also, backing up those websites would be as simple as FTPing the entire thing.
I have concerns about possible collisions on saving, especially if I add something as simple as a click counter to, say, a list of mp3 files visitors to the site can access.
A:
In these days of SQL Server Express, I'd say there's really no reason for you not to use a database.
I know this doesn't really answer your question, but I'd hate to see you roll out code that will be a nightmare to maintain and scale.
Maybe you could tell us why you want to use XML files instead of a proper database.
A:
It would make deployment easier for clients that use GoDaddy, where the database isn't in the app_data folder. Also, backing up those websites would be as simple as FTPing the entire thing.
I have concerns about possible collisions on saving, especially if I add something as simple as a click counter to, say, a list of mp3 files visitors to the site can access.
| answers_scores: [2, 0] | tags: [asp.net, objectdatasource, vb.net, xml] | stackoverflow_0000038629_asp.net_objectdatasource_vb.net_xml.txt |
Q:
Retrieving an Oracle timestamp using Python's Win32 ODBC module
Given an Oracle table created using the following:
CREATE TABLE Log(WhenAdded TIMESTAMP(6) WITH TIME ZONE);
Using the Python ODBC module from its Win32 extensions (from the win32all package), I tried the following:
import dbi, odbc
connection = odbc.odbc("Driver=Oracle in OraHome92;Dbq=SERVER;Uid=USER;Pwd=PASSWD")
cursor = connection.cursor()
cursor.execute("SELECT WhenAdded FROM Log")
results = cursor.fetchall()
When I run this, I get the following:
Traceback (most recent call last):
...
results = cursor.fetchall()
dbi.operation-error: [Oracle][ODBC][Ora]ORA-00932: inconsistent datatypes: expected %s got %s
in FETCH
The other data types I've tried (VARCHAR2, BLOB) do not cause this problem. Is there a way of retrieving timestamps?
A:
I believe this is a bug in the Oracle ODBC driver. Basically, the Oracle ODBC driver does not support the TIMESTAMP WITH (LOCAL) TIME ZONE data types, only the TIMESTAMP data type. As you have discovered, one workaround is in fact to use the TO_CHAR method.
In your example you are not actually reading the time zone information. If you have control of the table you could convert it to a straight TIMESTAMP column. If you don't have control over the table, another solution may be to create a view that converts from TIMESTAMP WITH TIME ZONE to TIMESTAMP via a string - sorry, I don't know if there is a way to convert directly from TIMESTAMP WITH TIME ZONE to TIMESTAMP.
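A hedged sketch of that view, reusing the TO_CHAR round-trip shown in the next answer (the view name is made up, and the time zone is discarded, which matches the original query anyway):
-- Present the TIMESTAMP WITH TIME ZONE column as a plain TIMESTAMP
-- by round-tripping through a string.
CREATE VIEW LogPlain AS
SELECT TO_TIMESTAMP(TO_CHAR(WhenAdded, 'YYYY-MM-DD HH24:MI:SS.FF6'),
                    'YYYY-MM-DD HH24:MI:SS.FF6') AS WhenAdded
FROM Log;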
A:
My solution to this, which I hope can be bettered, is to use Oracle to explicitly convert the TIMESTAMP into a string:
cursor.execute("SELECT TO_CHAR(WhenAdded, 'YYYY-MM-DD HH:MI:SSAM') FROM Log")
This works, but isn't portable. I'd like to use the same Python script against a SQL Server database, so an Oracle-specific solution (such as TO_CHAR) won't work.
| answers_scores: [2, 1] | tags: [ora_00932, oracle, python] | stackoverflow_0000038435_ora_00932_oracle_python.txt |
Q:
Do you know how to implement transactions in Castle ActiveRecord?
I decided to make a system for a client using Castle ActiveRecord. Everything went well until I found that the transactions do not work; for instance:
TransactionScope t = new TransactionScope();
try
{
member.Save();
//This is just to see transaction working
throw new Exception("Exception");
foreach (qfh.Beneficiary b1 in l)
{
b1.Create();
}
}
catch (Exception ex)
{
t.VoteRollBack();
MessageBox.Show(ex.Message);
}
finally
{
t.Dispose();
}
But it doesn't work. I throw an Exception just to test that the transaction rolls back, but to my surprise I see that the first [Save] is recorded in the database. What is happening?
I'm new to Castle and NHibernate. At first it looked very attractive and I decided to go on with it and MySQL (I've never worked with this DB). I tried ActiveWriter and it seemed very promising, but after a long and effortful week I've hit this issue and now I feel like I'm stuck and like I've wasted my time. It is supposed to be easy, but right now I'm feeling frustrated because I cannot find enough information to make this work out. Can you help me?
A:
You need to wrap the code in a session scope, like this:
using(new SessionScope())
{
a.Save();
b.Save();
c.Save();
}
Read more here.
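Applied to the code from the question, a minimal sketch might look like this (reusing the question's member and l variables; note that VoteCommit explicitly marks the unit of work for commit, and this is an illustrative arrangement of the scopes rather than the only one):
// Requires Castle.ActiveRecord (SessionScope, TransactionScope)
using (new SessionScope())
{
    TransactionScope t = new TransactionScope();
    try
    {
        member.Save();
        foreach (qfh.Beneficiary b1 in l)
        {
            b1.Create();
        }
        // Nothing threw, so vote to commit the whole unit of work
        t.VoteCommit();
    }
    catch (Exception ex)
    {
        // Any failure rolls back both the Save and the Creates
        t.VoteRollBack();
        MessageBox.Show(ex.Message);
    }
    finally
    {
        t.Dispose();
    }
}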
A:
Ben's got it. That doc is a little confusing. Refer to the last block on the page, "Nested transactions".
A:
I finally fixed it. It turned out I was doing it wrong: I had overridden the Save method of the Member class and opened a SessionScope inside it, with a TransactionScope inside of that, so by the time I wrapped everything in the outer transaction scope the [Save] had already been committed to the database. That's why everything was already saved when I threw the exception. I think that's it.
All in all, thanks for the help.
| Do you know how to implement transactions in Castle ActiveRecord? | I decided to make a system for a client using Castle ActiveRecord. Everything went well until I found that the transactions do not work; for instance:
TransactionScope t = new TransactionScope();
try
{
member.Save();
//This is just to see transaction working
throw new Exception("Exception");
foreach (qfh.Beneficiary b1 in l)
{
b1.Create();
}
}
catch (Exception ex)
{
t.VoteRollBack();
MessageBox.Show(ex.Message);
}
finally
{
t.Dispose();
}
But it doesn't work. I throw an Exception just to test that the transaction rolls back, but to my surprise I see that the first [Save] is recorded in the database. What is happening?
I'm new to Castle and NHibernate. At first it looked very attractive and I decided to go on with it and MySQL (I've never worked with this DB). I tried ActiveWriter and it seemed very promising, but after a long and effortful week I've hit this issue and now I feel like I'm stuck and like I've wasted my time. It is supposed to be easy, but right now I'm feeling frustrated because I cannot find enough information to make this work out. Can you help me?
| [
"You need to wrap the code in a session scope, like this:\nusing(new SessionScope())\n{\n a.Save();\n b.Save();\n c.Save();\n}\n\nRead more here.\n",
"Ben's got it. That doc is a little confusing. Refer to the last block on the page, \"Nested transactions\". \n",
"I finally fixed, it happened that I was doing wrong, I overrode the Save method of the Member class and made sessionScope inside and inside of it a transaction scope, so when a involved all of that in a transaction scope it saved in the database, so when I threw the exception everything was already saved, I think that's it.\nAll in all, thanks for the help.\n"
] | [
5,
3,
0
] | [] | [] | [
"activerecord",
"castle"
] | stackoverflow_0000038729_activerecord_castle.txt |
Q:
Developer Setup for Starting Out with Cocoa/Mac Programming
I'd like to start experimenting with Cocoa and programming for Mac OSX. I'm not terribly concerned with Objective C syntax/constructs/behaviors at this point, but more curious as to an efficient setup in terms of an editor and/or IDE that will get me going quickly. Is there any IDE even remotely similar to Visual Studio (since that's where I've spent most of my time over the last 7 years) in terms of its solution/project concept? Any other tools, tips, suggestions and/or resources to get up and experimenting quickly?
I'd like to avoid a lot of the intro stuff and get into things like "If you want to create a Mac desktop application, you can use Acme IDE and set up your project like this."
I was afraid Xcode was going to be the answer! :P I tried playing around with that -- literally just getting it off the DVD and just diving in with no idea what to expect (before I even knew that you used Objective C as the language). I figured, the wise guy that I am, that I could just sort of fumble around and get a simple app working ... wrong.
@Andrew - Thanks for the insight on those config settings. Based on my Xcode first impression, I think those may help.
A:
I'd suggest you pick a fun little product and dive in. If you're looking for a book I'd suggest Cocoa Programming for Mac OS X which is a very good introduction both to Objective-C and Cocoa.
XCode is pretty much the de facto IDE and free with OSX. It should be on your original install DVD. It's good but not as good as Visual Studio (sorry, it's really not).
As a long-time VS user I found the default XCode config a little odd and hard to adjust to, particularly the way a new floating window would open for every sourcefile. Some tweaks I found particularly helpful;
Settings/General -> All-In-One (unifies editor/debugger window)
Settings/General -> Open counterparts in same editor (single-window edit)
Settings/Debugging - "In Editor Debugger Controls"
Settings/Debugging - "Auto Clear Debug Console"
Settings/Key-binding - lots of binding to match VS (Ctrl+F5/Shift+F5,Shift+Home, Shift+End etc)
I find the debugger has some annoying issues such as breakpoints not correctly mapping to lines and exceptions aren't immediately trapped by the debugger. Nothing deal-breaking but a bit cumbersome.
I would recommend that you make use of the new property syntax that was introduced for Objective-C 2.0. They make for a heck of a lot less typing in many many places. They're limited to OSX 10.5 only though (yeah, language features are tied to OS versions which is a bit odd).
Also don't be fooled into downplaying the differences between C/C++ and Objective-C. They're very much related but ARE different languages. Try and start Objective-C without thinking about how you'd do X,Y,Z in C/C++. It'll make it a lot easier.
A:
The first document to read and digest is the Mem management guide, understand this before moving on. This is a great guide to objective-c too. In fact the developer site at Apple is very good - but you would probably want to read the Hillegass book first.
In regards to Xcode vs Visual Studio - they are different. I wouldn't say one is better than the other - Windows developers come over from VS and expect it to be the same. This is just an arrogant attitude and please don't fall into this crowd. Having used VS since the AppStudio days and Xcode for a year or so now, both have strengths and weaknesses. Xcode is something that out of the box (and especially when coming from VS) doesn't seem that good, but once you start using and understanding it - it becomes very powerful.
Also, there are a lot more tools included with Xcode et al, such as Instruments and Shark that you simply can't get with VS, unless you open your wallet, and even then IMHO aren't as good.
Anyway, good luck. I still enjoy C#, but Objective-C/Cocoa somehow makes programming fun again once you get into it...
A:
Don't bother digging up your OSX DVD as they've released a new version (3.1) of XCode since then.
First, you'll want to join Apple Developer Connection (it's free, and you need it to access their version of MSDN) - it uses your Apple ID so if you've ever had one for the itunes store etc, it's that same username/password
Once you've done that, click on downloads, then click on developer tools, to view this page, and go for the XCode 3.1 Developer DVD
A:
One other suggestion: If you have feature or enhancement requests, or bugs that you've run into, be sure to file them at Apple's Bug Reporter. It's the best way for developers to communicate their needs to Apple, because every issue is tracked through the system.
A:
You might try the demo of textmate and see how you like it for working with objective-c or any other type of text really. It will import xcode project settings so you can still compile and run from textmate rather than having to go back to xcode.
A:
Xcode is the standard for editing source files, though you can use another editor in conjunction with the command line xcodebuild tool if you really want. I used Vim for all my Cocoa editing before finally giving in to Xcode. It's not the greatest IDE in the world, but it gets the job done, and the recent 3.x releases have had some nice improvements.
The real power tool of Cocoa development is Interface Builder. IB does not generate source code like many UI tools. Instead it manipulates real Cocoa views, controls, and objects which it then bundles into an archive (nib) that is loaded by your program at runtime. Most Cocoa programs use at least one nib file, and often many more.
No matter what IDE/editor combination you choose for hacking on source files, I recommend using IB where you can. Even if you're not a fan of other UI layout/generation tools, I suggest keeping an open mind, giving "the Cocoa way" a chance and at least learning what Interface Builder can do for your development process.
A:
AFAIK, pretty much every OS X developer uses Xcode.
That, and Interface Builder for creating the GUIs.
FWIW, try to get hold of a copy of Hillegass's book, as it's a great introductory tutorial, and the reference Docs Apple provides really aren't. (They are generally very good reference docs, however).
A:
Cocoa is huge. The hardest part of learning how to write apps on Mac is learning Cocoa. By the way. You do not need to know ObjC (though it helps tons). You can write Cocoa apps with Python or Ruby (right in the IDE).
I agree VS is a better IDE than Xcode. But if you throw in Interface Builder and all the other tools, I'm not so sure. Mac development is not about 1 giant IDE for everything. But VS is "kinder" on the developer than Xcode is.
Also if you want to do cross platform apps look at RealBasic. A fine tool (Basic though. But it runs on Linux too.) You'd be surprised how many Mac apps are written with RB.
A:
I've heard the books currently out there are pretty out of date. The whole ecosystem seems to evolve very fast, with dramatic changes made in every OS release.
There's a tutorial that pulls together some Apple documentation and other tutorials which should get you started. I think it covers the basics of using the IDE, writing simple apps, and then goes on to more advanced stuff.
A:
I've been dabbling in Cocoa for the past couple years, and recently picked up Fritz Anderson's "Xcode 3 Unleashed." Highly recommended for getting into Xcode — especially with some of the big changes 3.0/Leopard brought.
Don't forget Hillegass' de facto Cocoa bible, "Cocoa Programming for Mac OS X - Third Edition."
A:
@peter I don't know why you had trouble with getting a simple app working; right off the bat, without doing anything, your app gets a lot of benefits from the Cocoa framework. If you mean you were trying to do stuff like connect a button to an action and have it print an alert on screen or something like that, then yes, I could see where you're going with it being difficult.
The problem for me starting with Cocoa many years back is that it was so different from anything else that it had a little bit of a learning curve. Whereas many other systems are compile-time oriented, Cocoa is very dynamic and runtime oriented. Once you get past learning how actions hook up to classes it just becomes a matter of learning how the Cocoa frameworks work.
| Developer Setup for Starting Out with Cocoa/Mac Programming | I'd like to start experimenting with Cocoa and programming for Mac OSX. I'm not terribly concerned with Objective C syntax/constructs/behaviors at this point, but more curious as to an efficient setup in terms of an editor and/or IDE that will get me going quickly. Is there any IDE even remotely similar to Visual Studio (since that's where I've spent most of my time over the last 7 years) in terms of its solution/project concept? Any other tools, tips, suggestions and/or resources to get up and experimenting quickly?
I'd like to avoid a lot of the intro stuff and get into things like "If you want to create a Mac desktop application, you can use Acme IDE and set up your project like this."
I was afraid Xcode was going to be the answer! :P I tried playing around with that -- literally just getting it off the DVD and just diving in with no idea what to expect (before I even knew that you used Objective C as the language). I figured, the wise guy that I am, that I could just sort of fumble around and get a simple app working ... wrong.
@Andrew - Thanks for the insight on those config settings. Based on my Xcode first impression, I think those may help.
| [
"I'd suggest you pick a fun little product and dive in. If you're looking for a book I'd suggest Cocoa Programming for Max OSX which is a very good introduction both to Objective-C and Cocoa.\nXCode is pretty much the de facto IDE and free with OSX. It should be on your original install DVD. It's good but not as good as Visual Studio (sorry, it's really not). \nAs a long-time VS user I found the default XCode config a little odd and hard to adjust to, particularly the way a new floating window would open for every sourcefile. Some tweaks I found particularly helpful;\n\nSettings/General -> All-In-One (unifies editor/debugger window)\nSettings/General -> Open counterparts in same editor (single-window edit)\nSettings/Debugging - \"In Editor Debugger Controls\"\nSettings/Debugging - \"Auto Clear Debug Console\"\nSettings/Key-binding - lots of binding to match VS (Ctrl+F5/Shift+F5,Shift+Home, Shift+End etc)\n\nI find the debugger has some annoying issues such as breakpoints not correctly mapping to lines and exceptions aren't immediately trapped by the debugger. Nothing deal-breaking but a bit cumbersome.\nI would recommend that you make use of the new property syntax that was introduced for Objective-C 2.0. They make for a heck of a lot less typing in many many places. They're limited to OSX 10.5 only though (yeah, language features are tied to OS versions which is a bit odd). \nAlso don't be fooled into downplaying the differences between C/C++ and Objective-C. They're very much related but ARE different languages. Try and start Objective-C without thinking about how you'd do X,Y,Z in C/C++. It'll make it a lot easier.\n",
"The first document to read and digest is the Mem management guide, understand this before moving on. This is a great guide to objective-c too. Infact the developer site at Apple is very good - but you would probably want to read the Hillegas book first.\nIn regards to Xcode vs Visual Studio - they are different. I wouldn't say one is better than the other - Windows developers come over from VS and expect it to be the same. This is just an arrogant attitude and please don't fall into this crowd. Having used VS since the AppStudio days and Xcode for a year or so now, both have strengths and weaknesses. Xcode is something that out of the box (and especially when coming from VS) doesn't seem that good, but once you start using and understanding it - it becomes very powerful.\nAlso, there are a lot more tools included with Xcode et al, such as Instruments and Shark that you simply can't get with VS, unless you open your wallet, and even then IMHO aren't as good.\nAnyway, good luck. I still enjoy C#, but Objective-C/Cocoa somehow makes programming fun again once you get into it...\n",
"Don't bother digging up your OSX DVD as they've released a new version (3.1) of XCode since then.\nFirst, you'll want to join Apple Developer Connection (it's free, and you need it to access their version of MSDN) - it uses your Apple ID so if you've ever had one for the itunes store etc, it's that same username/password\nOnce you've done that, click on downloads, then click on developer tools, to view this page, and go for the XCode 3.1 Developer DVD\n",
"One other suggestion: If you have feature or enhancement requests, or bugs that you've run into, be sure to file them at Apple's Bug Reporter. It's the best way for developers to communicate their needs to Apple, because every issue is tracked through the system.\n",
"You might try the demo of textmate and see how you like it for working with objective-c or any other type of text really. It will import xcode project settings so you can still compile and run from textmate rather than having to go back to xcode. \n",
"Xcode is the standard for editing source files, though you can use another editor in conjunction with the command line xcodebuild tool if you really want. I used Vim for all my Cocoa editing before finally giving in to Xcode. It's not the greatest IDE in the world, but it gets the job done, and the recent 3.x releases have had some nice improvements.\nThe real power tool of Cocoa development is Interface Builder. IB does not generate source code like many UI tools. Instead it manipulates real Cocoa views, controls, and objects which it then bundles into an archive (nib) that is loaded by your program at runtime. Most Cocoa programs use at least one nib file, and often many more.\nNo matter what IDE/editor combination you choose for hacking on source files, I recommend using IB where you can. Even if you're not a fan of other UI layout/generation tools, I suggest keeping an open mind, giving \"the Cocoa way\" a chance and at least learning what Interface Builder can do for your development process.\n",
"AFAIK, pretty much every OS X developer uses Xcode.\nThat, and Interface Builder for creating the GUIs.\nFWIW, try to get hold of a copy of Hillegas's book, as it's a great introductory tutorial, and the reference Docs Apple provides really aren't. (They are generally very good reference docs, however).\n",
"Cocoa is huge. The hardest part of learning how to write apps on Mac is learning Cocoa. By the way. You do not need to know ObjC (though it helps tons). You can write Cocoa apps with Python or Ruby (right in the IDE).\nI agree VS is a better IDE then Xcode. But if you throw in Interface Builder and all the other tools, I'm not so sure. Mac development is not about 1 giant IDE for everything. But VS is \"kinder\" on the developer then Xcode is. \nAlso if you want to do cross platform apps look at RealBasic. A fine tool (Basic though. But it runs on Linux too.) You'd be surprised how many Mac apps are written with RB.\n",
"I've heard the books currently out there are pretty out of date. The whole ecosystem seems to evolve very fast with dramatic changes made in every OS release.\nHe wrote a tutorial which pulls together some Apple documentation and other tutorials which should get you started. I think it covers the basics of using the IDE, writing simple apps, and then goes on to more advanced stuff.\n",
"I've been dabbling in Cocoa for the past couple years, and recently picked up Fritz Anderson's \"Xcode 3 Unleashed.\" Highly recommended for getting into Xcode — especially with some of the big changes 3.0/Leopard brought.\nDon't forget Hillegass' defacto Cocoa bible, \"Cocoa Programming for Mac OS X - Third Edition.\"\n",
"@peter I don't know why you had trouble with getting a simple app working, right off the bat without doing anything your app gets a lot of benefits from the Cocoa framework. If you mean you were trying to do stuff like connect a button to an action and have it print a alert on screen or something like that then yes I could see where your going with it being difficult.\nThe problem for me starting with Cocoa many years back is that it was so different from anything else that it had a little bit of a learning curve. Whereas many other systems are compile time oriented Cocoa is very dynamic and runtime oriented. Once you get past learning how actions hook up to classes it just becomes a matter of learning how the Cocoa frameworks work.\n"
] | [
18,
4,
4,
2,
1,
1,
0,
0,
0,
0,
0
] | [] | [] | [
"cocoa",
"macos"
] | stackoverflow_0000015681_cocoa_macos.txt |
Q:
Web Service Namespace Dynamic Naming
I have a web-service that I will be deploying to dev, staging and production. Along with this will be an ASP.net application that will be deploying separately but also in those three stages.
What is the most pragmatic way to change the following line in the web-service to match the current environment?
[WebService(Namespace = "http://dev.mycompany.com/MyAppsWebService")]
[WebService(Namespace = "http://stage.mycompany.com/MyAppsWebService")]
[WebService(Namespace = "http://mycompany.com/MyAppsWebService")]
A:
Your webservice object has a "URL" property on it which can be set via the web.config file. There's a config file that gets created when you add the web reference to your application that you should copy the contents of to your web.config or app.config file. You can then deploy the config file and not have to manage any code changes to accommodate the change in url.
| Web Service Namespace Dynamic Naming | I have a web-service that I will be deploying to dev, staging and production. Along with this will be an ASP.net application that will be deploying separately but also in those three stages.
What is the most pragmatic way to change the following line in the web-service to match the current environment?
[WebService(Namespace = "http://dev.mycompany.com/MyAppsWebService")]
[WebService(Namespace = "http://stage.mycompany.com/MyAppsWebService")]
[WebService(Namespace = "http://mycompany.com/MyAppsWebService")]
| [
"Your webservice object has a \"URL\" property on it which can be set via the web.config file. There's a config file that gets created when you add the web reference to your application that you should copy the contents of to your web.config or app.config file. You can then deploy the config file and not have to manage any code changes to accomodate the change in url.\n"
] | [
2
] | [] | [] | [
"asp.net",
"c#",
"web_services"
] | stackoverflow_0000038789_asp.net_c#_web_services.txt |
Q:
Is there an n-ary tree implementation in Perl?
I'm writing a Perl script and would like to use an n-ary tree data structure.
Is there a good implementation that is available as source code (rather than part of a Perl library) ?
A:
Adding to what Matthew already said, it looks like the following modules would be suitable:
Tree::Nary
Tree::Simple
Tree
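For example, a minimal Tree::Simple sketch (the method names below are from my memory of its CPAN documentation, so treat them as assumptions to verify against the module's docs):
use Tree::Simple;

# Build a small n-ary tree: a root with two children, the first of
# which has two children of its own
my $tree = Tree::Simple->new(Tree::Simple->ROOT);
my $node_a = Tree::Simple->new("a", $tree);
$node_a->addChildren(Tree::Simple->new("b"), Tree::Simple->new("c"));
$tree->addChild(Tree::Simple->new("d"));

# Depth-first traversal, printing each node's value indented by depth
$tree->traverse(sub {
    my ($node) = @_;
    print(("    " x $node->getDepth()), $node->getNodeValue(), "\n");
});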
A:
I don't really understand why you want it as "source" rather than as a Perl library, but you can download the source for any CPAN module.
I haven't used it, but Tree looks to fill your requirements.
A:
Depending on what you need a tree structure for, you might not need any pre-built implementation. Perl already supports them using arrays of arrayrefs.
For example, a simple representation of this tree
t
/ \
a d
/ \ / \
b c e f
could be represented by the following Perl code:
$tree = [ t => [ a => [ b => [], c => [] ],
                 d => [ e => [], f => [] ] ] ];
Here, the tree's representation is as nested pairs: first the element (in this case, the letter), then an anonymous array reference representing the children of that element. Note that => is just a fancy comma in Perl that exempts you from having to put quotes around the token to the left of the comma, provided it is a single word. The above code could also have been written thus:
$tree = [ 't', [ 'a' , [ 'b' , [], 'c' , [] ],
                 'd' , [ 'e' , [], 'f' , [] ] ] ];
Here's a simple depth-first accumulator of all the elements in the tree:
sub elements {
my $tree = shift;
my @elements;
my @queue = @$tree;
while (@queue) {
my $element = shift @queue;
my $children = shift @queue;
push @elements, $element;
unshift @queue, @$children;
}
return @elements;
}
@elements = elements($tree); # qw(t a b c d e f)
(For breadth first, change the line unshift @queue, @$children to push @queue, @$children)
So, depending on what operations you want to perform on your tree, the simplest thing might be just to use Perl's built-in support for arrays and array references.
| Is there an n-ary tree implementation in Perl? | I'm writing a Perl script and would like to use an n-ary tree data structure.
Is there a good implementation that is available as source code (rather than part of a Perl library) ?
| [
"Adding to what Matthew already said, it looks like the following modules would be suitable:\nTree::Nary\nTree::Simple\nTree\n",
"I don't really understand why you want it was \"source\" rather than as a perl library, but you can download the source for any CPAN module. \nI haven't used it, but Tree looks to fill your requirements.\n",
"Depending on what you need a tree structure for, you might not need any pre-built implementation. Perl already supports them using arrays of arrayrefs.\nFor example, a simple representation of this tree\n t\n / \\\n a d\n / \\ / \\\n b c e f\n\ncould be represented by the following Perl code:\n$tree = [ t => [ a => [ b => [], c => [] ]\n d => [ e => [], f => [] ] ] ];\n\nHere, the tree's representation is as nested pairs: first the element (in this case, the letter), then an anonymous array reference representing the children of that element. Note that => is just a fancy comma in Perl that exempts you having to put quotes around the token to the left of the comma, provided it is a single word. The above code could also have been written thus:\n$tree = [ 't', [ 'a' , [ 'b' , [], 'c' , [] ]\n 'd' , [ 'e' , [], 'f' , [] ] ] ];\n\nHere's a simple depth-first accumulator of all the elements in the tree:\nsub elements {\n my $tree = shift;\n\n my @elements;\n my @queue = @$tree;\n while (@queue) {\n my $element = shift @queue;\n my $children = shift @queue;\n push @elements, $element;\n unshift @queue, @$children;\n }\n\n return @elements;\n}\n\n@elements = elements($tree) # qw(t a b c d e f)\n\n(For breadth first, change the line unshift @queue, @$children to push @queue, @$children)\nSo, depending on what operations you want to perform on your tree, the simplest thing might be just to use Perl's built-in support for arrays and array references.\n"
] | [
6,
4,
3
] | [] | [] | [
"algorithm",
"perl",
"tree"
] | stackoverflow_0000037662_algorithm_perl_tree.txt |
Q:
Is Wiki Content Portable?
I'm thinking of starting a wiki, probably on a low cost LAMP hosting account. I'd like the option of exporting my content later in case I want to run it on IIS/ASP.NET down the line. I know in the weblog world, there's an open standard called BlogML which will let you export your blog content to an XML based format on one site and import it into another. Is there something similar with wikis?
A:
The correct answer is ... "it depends".
It depends on which wiki you're using or planning to use. I've used various over the years MoinMoin was ok, used files rather than database, Ubuntu seem to like it. MediaWiki, everyone knows about and JAMWiki is a java clone(ish) of MediaWiki with the aim to be markup compatible with MediaWiki, both use databases and you can generally connect whichever database you want, JAMWiki is pre-configured to use an internal HSQLDB instance.
I recently converted about 80 pages from a MoinMoin wiki into JAMWiki pages and this was probably 90% handled by a tiny perl script I found somewhere (I'll provide a link if I can find it again). The other 10% was unfortunately a by-hand experience (they were of the utmost importance with them being recipes for the missus) ;-)
I also recently setup a Mediawiki instance for work and that took all of about 8 minutes to do. So that'd be my choice.
A:
To answer your question I don't believe that there's such a standard as WikiML as Till called it.
As strange as it sounds, I've investigated screen scraping a wiki for a co-worker to help him port it to another wiki engine. It turned out that screen scraping would have been easier, quicker and more efficient to write to move this particular file based wiki to another one or a CMS.
Given the context that you wrote the question in I would bite the bullet now and pay the little extra for a windows hosted account and put Screwturn wiki on it. You've got the option of using file based or SQL Server based back end for it but because one of your requirements is low cost I'm guessing that you would use file based now for a cheaper hosted account and then you can always upscale the back end to SQL Server.
A:
I haven't heard of WikiML.
I think your biggest obstacle is gonna be converting one wiki markup to another. For example, some wikis use markdown (which is what Stack Overflow uses), others use another markup syntax (e.g. BBCode, ...), etc.. The bottom line is - assuming the contents are databased it's not impossible to export and parse it to make it "fit" in another system. It might just be a pain in the ass.
And if the contents are not databased, it's gonna be a royal pain in the ass. :D
Another solution would be to stay with the same system. I am not sure what the reason is for changing the technology later on. It's not like a growing project requires IIS/ASP.NET all of the sudden. (It might just be the other way around.) But for example, if you could stick with PHP for a while, you could also run that on IIS.
| Is Wiki Content Portable? | I'm thinking of starting a wiki, probably on a low cost LAMP hosting account. I'd like the option of exporting my content later in case I want to run it on IIS/ASP.NET down the line. I know in the weblog world, there's an open standard called BlogML which will let you export your blog content to an XML based format on one site and import it into another. Is there something similar with wikis?
| [
"The correct answer is ... \"it depends\".\nIt depends on which wiki you're using or planning to use. I've used various over the years MoinMoin was ok, used files rather than database, Ubuntu seem to like it. MediaWiki, everyone knows about and JAMWiki is a java clone(ish) of MediaWiki with the aim to be markup compatible with MediaWiki, both use databases and you can generally connect whichever database you want, JAMWiki is pre-configured to use an internal HSQLDB instance.\nI recently converted about 80 pages from a MoinMoin wiki into JAMWiki pages and this was probably 90% handled by a tiny perl script I found somewhere (I'll provide a link if I can find it again). The other 10% was unfortunately a by-hand experience (they were of the utmost importance with them being recipies for the missus) ;-)\nI also recently setup a Mediawiki instance for work and that took all of about 8 minutes to do. So that'd be my choice.\n",
"To answer your question I don't believe that there's such a standard as WikiML as Till called it.\nAs strange as it sounds, I've investigated screen scraping a wiki for a co-worker to help him port it to another wiki engine. It turned out that screen scraping would have been easier, quicker and more efficient to write to move this particular file based wiki to another one or a CMS.\nGiven the context that you wrote the question in I would bite the bullet now and pay the little extra for a windows hosted account and put Screwturn wiki on it. You're got the option of using file based or SQL Server based back end for it but because one of your requirements is low cost I'm guessing that you would use file based now for a cheaper hosted account and then you can always upscale the back end to SQL Server.\n",
"I haven't heard of WikiML.\nI think your biggest obstacle is gonna be converting one wiki markup to another. For example, some wikis use markdown (which is what Stack Overflow uses), others use another markup syntax (e.g. BBCode, ...), etc.. The bottom line is - assuming the contents are databased it's not impossible to export and parse it to make it \"fit\" in another system. It might just be a pain in the ass.\nAnd if the contents are not databased, it's gonna be a royal pain in the ass. :D\nAnother solution would be to stay with the same system. I am not sure what the reason is for changing the technology later on. It's not like a growing project requires IIS/ASP.NET all of the sudden. (It might just be the other way around.) But for example, if you could stick with PHP for a while, you could also run that on IIS.\n"
] | [
6,
1,
0
] | [] | [] | [
"asp.net",
"iis",
"wiki"
] | stackoverflow_0000038824_asp.net_iis_wiki.txt |
Q:
SQL Server 2000: Is there a way to tell when a record was last modified?
The table doesn't have a last updated field and I need to know when existing data was updated. So adding a last updated field won't help (as far as I know).
A:
SQL Server 2000 does not keep track of this information for you.
There may be creative / fuzzy ways to guess what this date was depending on your database model. But, if you are talking about 1 table with no relation to other data, then you are out of luck.
A:
You can't check for changes without some sort of audit mechanism. You are looking to extract information that has not been collected. If you just need to know when a record was added or edited, adding a datetime field that gets updated via a trigger when the record is updated would be the simplest choice.
If you also need to track when a record has been deleted, then you'll want to use an audit table and populate it from triggers with a row when a record has been added, edited, or deleted.
A:
You might try a log viewer; this basically just lets you look at the transactions in the transaction log, so you should be able to find the statement that updated the row in question. I wouldn't recommend this as a production-level auditing strategy, but I've found it to be useful in a pinch.
Here's one I've used; it's free and (only) works w/ SQL Server 2000.
http://www.red-gate.com/products/SQL_Log_Rescue/index.htm
A:
You can add a timestamp field to that table and update that timestamp value with an update trigger.
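For example, on SQL Server 2000 that could look something like the following sketch (using a datetime column; MyTable and MyTableID are hypothetical names, and this relies on the RECURSIVE_TRIGGERS database option being OFF, its default, so the trigger's own UPDATE doesn't re-fire it):
ALTER TABLE MyTable ADD LastUpdated datetime NULL
GO

CREATE TRIGGER trg_MyTable_LastUpdated ON MyTable
FOR UPDATE
AS
    -- Stamp every updated row with the current date/time
    UPDATE t
    SET LastUpdated = GETDATE()
    FROM MyTable t
    INNER JOIN inserted i ON t.MyTableID = i.MyTableID
GO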
A:
OmniAudit is a commercial package which implements auditing across an entire database.
A free method would be to write a trigger for each table which adds entries to an audit table when fired.
| SQL Server 2000: Is there a way to tell when a record was last modified? | The table doesn't have a last updated field and I need to know when existing data was updated. So adding a last updated field won't help (as far as I know).
| [
"SQL Server 2000 does not keep track of this information for you. \nThere may be creative / fuzzy ways to guess what this date was depending on your database model. But, if you are talking about 1 table with no relation to other data, then you are out of luck.\n",
"You can't check for changes without some sort of audit mechanism. You are looking to extract information that ha not been collected. If you just need to know when a record was added or edited, adding a datetime field that gets updated via a trigger when the record is updated would be the simplest choice.\nIf you also need to track when a record has been deleted, then you'll want to use an audit table and populate it from triggers with a row when a record has been added, edited, or deleted.\n",
"You might try a log viewer; this basically just lets you look at the transactions in the transaction log, so you should be able to find the statement that updated the row in question. I wouldn't recommend this as a production-level auditing strategy, but I've found it to be useful in a pinch.\nHere's one I've used; it's free and (only) works w/ SQL Server 2000.\nhttp://www.red-gate.com/products/SQL_Log_Rescue/index.htm\n",
"You can add a timestamp field to that table and update that timestamp value with an update trigger.\n",
"OmniAudit is a commercial package which implments auditng across an entire database.\nA free method would be to write a trigger for each table which addes entries to an audit table when fired.\n"
] | [
5,
1,
1,
0,
0
] | [] | [] | [
"sql_server"
] | stackoverflow_0000002809_sql_server.txt |
Q:
Will this hardware be 64bit Windows Server 2008 compatible?
I recently printed out Jeff Atwood's Understanding The Hardware blog post and plan on taking it to Fry's Electronics and saying to them "Give me all the parts on these sheets so I can put this together." However, I'm going to be installing 64bit Windows Server 2008 on this machine so before I get all the parts:
Will all this hardware be 64bit Server 2008 compatible? - i.e. all drivers available for this hardware for this OS?
A:
Hardware's generally pretty OS-agnostic (at least in terms of Windows flavors) these days. Your only concern is getting drivers for other devices (scanners, printers, IR remotes) that won't work on 64bit and/or won't work on "Server" OSes. Online backup software like Mozy generally won't even install on a Server OS, so it depends on what you're going to use it for.
That said, if you're just going to use it for a home machine, then without even looking at the hardware list Jeff put together, I'd be confident in saying it'll probably work just fine.
A:
Yes, all that stuff should be fine (motherboard and CPU hardware, motherboard drivers, video card drivers).
| Will this hardware be 64bit Windows Server 2008 compatible? | I recently printed out Jeff Atwood's Understanding The Hardware blog post and plan on taking it to Fry's Electronics and saying to them "Give me all the parts on these sheets so I can put this together." However, I'm going to be installing 64bit Windows Server 2008 on this machine so before I get all the parts:
Will all this hardware be 64bit Server 2008 compatible? - i.e. all drivers available for this hardware for this OS?
| [
"Hardware's generally pretty OS-agnostic (at least in terms of Windows flavors) these days. Your only concern is getting drivers for other devices (scanners, printers, IR remotes) that won't work on 64bit and/or won't work on \"Server\" OSes. Online backup software like Mozy generally won't even install on a Server OS, so it depends on what you're going to use it for.\nThat said, if you're just going to use it for a home machine, then without even looking at the hardware list Jeff put together, I'd be confident in saying it'll probably work just fine.\n",
"Yes, all that stuff should be fine (motherboard and CPU hardware, motherboard drivers, video card drivers).\n"
] | [
2,
0
] | [
"You have a $1000 operating system license and you're going to put it on ~$1100 worth of hardware purchased at Fry's and presumably put together by yourself?\n"
] | [
-2
] | [
"hardware",
"windows_server_2008"
] | stackoverflow_0000038861_hardware_windows_server_2008.txt |
Q:
PHP : session variables aren't usable when site is redirected
I have to admin a small website for my alumni group which is hosted by my ISV. The url is something like www.myIsv.com/myWebSite/ which is quite ugly and very forgettable. The main admin of the webserver has registered a domain name www.mysmallwebsite.com and put an index.html with this content:
<html>
<head>
<title>www.mysmallwebsite.com</title>
</head>
<frameset>
<frame src="http://www.myIsv.com/myWebSite/" name="redir">
<noframes>
<p>Original location:
<a href="www.myIsv.com/myWebSite/">http://www.myIsv.com/myWebSite/</a>
</p>
</noframes>
</frameset>
</html>
It works fine, but some features like PHP Session variables don't work anymore! Does anyone have a suggestion for correcting that?
Edit:
This doesn't work both on IE and on Firefox (no plugins)
Thanks
A:
Sessions are tied to the server AND the domain. Using a frameset across domains will cause all kinds of breakage because that's just not how it was designed to work.
Try using Apache mod_rewrite to create a "passthrough redirection"; the "proxy" flag ([P]) in the rule is the magic flag that you need.
Documentation at http://httpd.apache.org/docs/1.3/mod/mod_rewrite.html
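A minimal sketch of what that rule could look like on www.mysmallwebsite.com (assuming the host has mod_rewrite and mod_proxy enabled, which cheap shared hosting sometimes doesn't):
# .htaccess on www.mysmallwebsite.com
RewriteEngine On
# [P] proxies the request instead of redirecting, so the browser
# (and therefore the session cookie) stays on www.mysmallwebsite.com
RewriteRule ^(.*)$ http://www.myIsv.com/myWebSite/$1 [P]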
A:
What do you mean?
Are you saying that when you go from www.mysmallwebsite.com to www.myIsv.com/myWebSite/ then the PHP session is lost?
PHP recognizes the session with an ID (alpha-numeric hash generated on the server). The ID is passed from request to request using a cookie called PHPSESSID or something like that (you can view the cookies a websites sets with the help of your browser ... on Firefox you have Firebug + FireCookie and the wonderful Web Developer Toolbar ... with which you can view the list of cookies without a sweat).
So ... PHP is passing the session ID through the PHPSESSID cookie. But you can pass the session ID as a plain GET request parameters.
So when you place the html link to the ugly domain name, assuming that it is the same PHP server (with the same sessions initialized), you can put it like this ...
www.myIsv.com/myWebSite/?PHPSESSID=<?=session_id()?>
I haven't worked with PHP for a while, but I think this will work.
A:
Do session variables work if you hit http://www.myIsv.com/myWebSite/ directly? It would seem to me that the server config would dictate whether or not sessions will work. However, if you're starting a session on www.mysmallwebsite.com somehow (doesn't look like you're using PHP, but maybe you are), you're not going to be able to transfer session data without writing some backend logic that moves the session from server to server.
A:
Stick a session_start() at the beginning of your script and see if you can access the variables again.
A:
It's not working because on the client sessions are per-domain. All the cookies are being saved for mysmallwebsite.com, so myIsv.com cannot access them.
A:
@pix0r
www.myIsv.com/myWebSite/ -> session variable work
www.mysmallwebsite.com -> session variable doesn't work
@Alexandru
Unfortunately this is not on the same webserver
A:
What browser/add-on do you have? It may be that your browser or some other software (maybe even the web server) is blocking the sessions from http://www.myIsv.com/myWebSite/ from working within the frame, as it's located on a different site, thinking it's an XSS attack.
If the session works at http://www.myIsv.com/myWebSite/ without the frame, you could always use a redirect from http://www.mysmallwebsite.com to the ugly url, instead of using the frame.
EDIT:
I have just tried your frame code on a site of mine that uses sessions. Firefox worked fine, with me logging in and staying logged in, but IE7 logged me straight out again.
A:
So when you place the html link to the ugly domain name, assuming that it is the same PHP server (with the same sessions initialized), you can put it like this ...
www.myIsv.com/myWebSite/?PHPSESSID=<?=session_id()?>
From a security point of view, I really really really hope that doesn't work
A:
You could also set a cookie on the user-side and then check for the presence of that cookie directly after redirecting, which if you're bothered about friendly URLs would mean that you don't have to pass around a PHPSESSID in the query string.
A:
When people arrive @ www.mysmallwebsite.com I would just redirect to http://www.myIsv.com/myWebSite/
<?php header('Location: http://www.myIsv.com/myWebSite/'); ?>
This is all I would have in www.mysmallwebsite.com/index.php
This way you don't have to worry about browser compatibility, or whether the sessions work; just do the redirect, and you'll be good.
| PHP : session variables aren't usable when site is redirected | I have to admin a small website for my alumni group which is hosted by my ISV. The url is something like www.myIsv.com/myWebSite/ which is quite ugly and very forgettable. The main admin of the webserver has registered a domain name www.mysmallwebsite.com and put an index.html with this content:
<html>
<head>
<title>www.mysmallwebsite.com</title>
</head>
<frameset>
<frame src="http://www.myIsv.com/myWebSite/" name="redir">
<noframes>
<p>Original location:
<a href="www.myIsv.com/myWebSite/">http://www.myIsv.com/myWebSite/</a>
</p>
</noframes>
</frameset>
</html>
It works fine, but some features like PHP Session variables don't work anymore! Does anyone have a suggestion for correcting that?
Edit:
This doesn't work both on IE and on Firefox (no plugins)
Thanks
| [
"Sessions are tied to the server AND the domain. Using frameset across domain will cause all kind of breakage because that's just not how it was designed to do. \nTry using apache mod rewrite to create a \"passthrough redirection\", the \"proxy\" flag ([P]) in the rule is the magic flag that you need\nDocumentation at http://httpd.apache.org/docs/1.3/mod/mod_rewrite.html\n",
"What do you mean?\nAre you saying that when you go from www.mysmallwebsite.com to www.myIsv.com/myWebSite/ then the PHP session is lost?\nPHP recognizes the session with an ID (alpha-numeric hash generated on the server). The ID is passed from request to request using a cookie called PHPSESSID or something like that (you can view the cookies a websites sets with the help of your browser ... on Firefox you have Firebug + FireCookie and the wonderful Web Developer Toolbar ... with which you can view the list of cookies without a sweat).\nSo ... PHP is passing the session ID through the PHPSESSID cookie. But you can pass the session ID as a plain GET request parameters.\nSo when you place the html link to the ugly domain name, assuming that it is the same PHP server (with the same sessions initialized), you can put it like this ...\nwww.myIsv.com/myWebSite/?PHPSESSID=<?=session_id()?>\n\nI haven't worked with PHP for a while, but I think this will work. \n",
"Do session variables work if you hit http://www.myIsv.com/myWebSite/ directly? It would seem to me that the server config would dictate whether or not sessions will work. However, if you're starting a session on www.mysmallwebsite.com somehow (doesn't look like you're using PHP, but maybe you are), you're not going to be able to transfer session data without writing some backend logic that moves the session from server to server.\n",
"Stick a session_start() at the beginning of your script and see if you can access the variables again.\n",
"It's not working because on the client sessions are per-domain. All the cookies are being saved for mysmallwebsite.com, so myIsv.com cannot access them.\n",
"@pix0r\nwww.myIsv.com/myWebSite/ -> session variable work\nwww.mysmallwebsite.com -> session variable doesn't work\n@Alexandru\nUnfortunately this is not on the same webserver\n",
"What browser/ ad-on do you have? it may be your browser or some other software (may be even the web server) is blocking the sessions from http://www.myIsv.com/myWebSite/ working from with-in the frame, as its located on a different site, thinking its an XSS attack.\nIf the session works at http://www.myIsv.com/myWebSite/ with out the frame you could always us a redirect from http://www.mysmallwebsite.com to the ugly url, instead of using the frame.\nEDIT:\nI have just tried your frame code on a site of mine that uses sessions, firefox worked fine, with me logging in and staying loged in, but IE7 logged me straight out again.\n",
"\nSo when you place the html link to the ugly domain name, assuming that it is the same PHP server (with the same sessions initialized), you can put it like this ...\nwww.myIsv.com/myWebSite/?PHPSESSID=<?=session_id()?>\n\nFrom a security point of view, I really really really hope that doesn't work\n",
"You could also set a cookie on the user-side and then check for the presence of that cookie directly after redirecting, which if you're bothered about friendly URLs would mean that you don't have to pass around a PHPSESSID in the query string.\n",
"When people arrive @ www.mysmallwebsite.com I would just redirect to http://www.myIsv.com/myWebSite/\n<?php header('Location: http://www.myIsv.com/myWebSite/'); ?>\n\nThis is all I would have in www.mysmqllwebsite.com/index.php\nThis way you dont have to worry about browsedr compatibility, or weather the sessions work, just do the redirct, and you'll be good.\n"
] | [
4,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [] | [] | [
"php",
"session",
"session_variables"
] | stackoverflow_0000038370_php_session_session_variables.txt |
Q:
How do you create your own moniker (URL Protocol) on Windows systems?
How do you create your own custom moniker (or URL Protocol) on Windows systems?
Examples:
http:
mailto:
service:
A:
Take a look at Creating and Using URL Monikers , About Asynchronous Pluggable Protocols and Registering an Application to a URL Protocol from MSDN
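As a concrete sketch of what "Registering an Application to a URL Protocol" boils down to, here is a hypothetical .reg file (the protocol name myapp and the paths are made up):
Windows Registry Editor Version 5.00

; myapp: URLs will launch the command below, receiving the full URL as %1
[HKEY_CLASSES_ROOT\myapp]
@="URL:MyApp Protocol"
"URL Protocol"=""

[HKEY_CLASSES_ROOT\myapp\shell\open\command]
@="\"C:\\Program Files\\MyApp\\MyApp.exe\" \"%1\""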
A:
Here's some old Delphi code we used as a way to get shortcuts in a web application to start a windows program locally for the user.
procedure InstallIntoRegistry;
var
Reg: TRegistry;
begin
Reg := TRegistry.Create;
try
Reg.RootKey := HKEY_CLASSES_ROOT;
if Reg.OpenKey('moniker', True) then
begin
Reg.WriteString('', 'URL:Name of moniker');
Reg.WriteString('URL Protocol', '');
Reg.WriteString('Source Filter', '{E436EBB6-524F-11CE-9F53-0020AF0BA770}');
Reg.WriteInteger('EditFlags', 2);
if Reg.OpenKey('shell\open\command', True) then
begin
Reg.WriteString('', '"' + ParamStr(0) + '" "%1"');
end;
end else begin
MessageBox(0, 'You do not have the necessary access rights to complete this installation!' + Chr(13) +
'Please make sure you are logged in with a user account with administrative rights!', 'Access denied', 0);
Exit;
end;
finally
FreeAndNil(Reg);
end;
MessageBox(0, 'Application WebStart has been installed successfully!', 'Installed', 0);
end;
A:
Inside OLE from Kraig Brockschmidt probably has the best coverage on monikers. If you want to dig a little deeper into this topic, I'd recommend getting this book. It is also contained on the MSDN disk that came along with VS 6.0, in case you still have that.
| How do you create your own moniker (URL Protocol) on Windows systems? | How do you create your own custom moniker (or URL Protocol) on Windows systems?
Examples:
http:
mailto:
service:
| [
"Take a look at Creating and Using URL Monikers , About Asynchronous Pluggable Protocols and Registering an Application to a URL Protocol from MSDN\n",
"Here's some old Delphi code we used as a way to get shortcuts in a web application to start a windows program locally for the user.\nprocedure InstallIntoRegistry;\nvar\n Reg: TRegistry;\nbegin\n Reg := TRegistry.Create;\n try\n Reg.RootKey := HKEY_CLASSES_ROOT;\n if Reg.OpenKey('moniker', True) then\n begin\n Reg.WriteString('', 'URL:Name of moniker');\n Reg.WriteString('URL Protocol', '');\n Reg.WriteString('Source Filter', '{E436EBB6-524F-11CE-9F53-0020AF0BA770}');\n Reg.WriteInteger('EditFlags', 2);\n\n if Reg.OpenKey('shell\\open\\command', True) then\n begin\n Reg.WriteString('', '\"' + ParamStr(0) + '\" \"%1\"');\n end;\n end else begin\n MessageBox(0, 'You do not have the necessary access rights to complete this installation!' + Chr(13) +\n 'Please make sure you are logged in with a user account with administrative rights!', 'Access denied', 0);\n Exit;\n end;\n finally\n FreeAndNil(Reg);\n end;\n\n MessageBox(0, 'Application WebStart has been installed successfully!', 'Installed', 0);\nend;\n\n",
"Inside OLE from Craig Brockschmidt probably has the best coverage on monikers. If you want to dig a little deeper into this topic, I'd recommend getting this book. It is also contained on the MSDN disk that came along with VS 6.0, in case you still have that.\n"
] | [
4,
3,
0
] | [] | [] | [
"moniker",
"winapi",
"windows"
] | stackoverflow_0000004638_moniker_winapi_windows.txt |
Q:
Enforce constraint checking only when inserting rows in MSSQL?
Is there a way to enforce constraint checking in MSSQL only when inserting new rows? I.e. allow the constraints to be violated when removing/updating rows?
Update: I mean FK constraint.
A:
You could create an INSERT TRIGGER that checks that the conditions are met. That way all updates will go straight through.
CREATE TRIGGER employee_insupd
ON employee
FOR INSERT
AS
/* Get the range of level for this job type from the jobs table. */
DECLARE @min_lvl tinyint,
@max_lvl tinyint,
@emp_lvl tinyint,
@job_id smallint
SELECT @min_lvl = min_lvl,
@max_lvl = max_lvl,
@emp_lvl = i.job_lvl,
@job_id = i.job_id
FROM employee e INNER JOIN inserted i ON e.emp_id = i.emp_id
JOIN jobs j ON j.job_id = i.job_id
IF (@job_id = 1) and (@emp_lvl <> 10)
BEGIN
RAISERROR ('Job id 1 expects the default level of 10.', 16, 1)
ROLLBACK TRANSACTION
END
ELSE
IF NOT (@emp_lvl BETWEEN @min_lvl AND @max_lvl)
BEGIN
RAISERROR ('The level for job_id:%d should be between %d and %d.',
16, 1, @job_id, @min_lvl, @max_lvl)
ROLLBACK TRANSACTION
END
A:
I think your best bet is to remove the explicit constraint and add a cursor for inserts, so you can perform your checking there and raise an error if the constraint is violated.
A:
What sort of constraints? I'm guessing foreign key constraints, since you imply that deleting a row might violate the constraint. If that's the case, it seems like you don't really need a constraint per se, since you're not concerned with referential integrity.
Without knowing more about your specific situation, I would echo the intent of the other posters, which seems to be "enforce the insert requirements in your data access layer". However, I'd quibble with their implementations. A trigger seems like overkill and any competent DBA should sternly rap you on the knuckles with a wooden ruler for trying to use a cursor to perform a simple insert. A stored procedure should suffice.
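A minimal sketch of that stored procedure approach (Parent and Child are hypothetical tables; in real use you would wrap the check and the insert in a transaction to avoid a race):
CREATE PROCEDURE InsertChild
    @ParentID int,
    @Value varchar(50)
AS
    -- Enforce the relationship by hand on insert only; updates and
    -- deletes elsewhere remain free to break referential integrity
    IF NOT EXISTS (SELECT 1 FROM Parent WHERE ParentID = @ParentID)
    BEGIN
        RAISERROR ('ParentID %d does not exist.', 16, 1, @ParentID)
        RETURN
    END
    INSERT INTO Child (ParentID, Value) VALUES (@ParentID, @Value)
GO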
| Enforce constraint checking only when inserting rows in MSSQL? | Is there a way to enforce constraint checking in MSSQL only when inserting new rows? I.e. allow the constraints to be violated when removing/updating rows?
Update: I mean FK constraint.
| [
"You could create an INSERT TRIGGER that checks that the conditions are met. That way all updates will go straight through.\nCREATE TRIGGER employee_insupd\nON employee\nFOR INSERT\nAS\n/* Get the range of level for this job type from the jobs table. */\nDECLARE @min_lvl tinyint,\n @max_lvl tinyint,\n @emp_lvl tinyint,\n @job_id smallint\nSELECT @min_lvl = min_lvl, \n @max_lvl = max_lvl, \n @emp_lvl = i.job_lvl,\n @job_id = i.job_id\nFROM employee e INNER JOIN inserted i ON e.emp_id = i.emp_id \n JOIN jobs j ON j.job_id = i.job_id\nIF (@job_id = 1) and (@emp_lvl <> 10) \nBEGIN\n RAISERROR ('Job id 1 expects the default level of 10.', 16, 1)\n ROLLBACK TRANSACTION\nEND\nELSE\nIF NOT (@emp_lvl BETWEEN @min_lvl AND @max_lvl)\nBEGIN\n RAISERROR ('The level for job_id:%d should be between %d and %d.',\n 16, 1, @job_id, @min_lvl, @max_lvl)\n ROLLBACK TRANSACTION\nEND\n\n",
"I think your best bet is to remove the explicit constraint and add a cursor for inserts, so you can perform your checking there and raise an error if the constraint is violated.\n",
"What sort of constraints? I'm guessing foreign key constraints, since you imply that deleting a row might violate the constraint. If that's the case, it seems like you don't really need a constraint per se, since you're not concerned with referential integrity.\nWithout knowing more about your specific situation, I would echo the intent of the other posters, which seems to be \"enforce the insert requirements in your data access layer\". However, I'd quibble with their implementations. A trigger seems like overkill and any competent DBA should sternly rap you on the knuckles with a wooden ruler for trying to use a cursor to perform a simple insert. A stored procedure should suffice.\n"
] | [
7,
1,
1
] | [] | [] | [
"database",
"sql_server"
] | stackoverflow_0000038890_database_sql_server.txt |
Q:
Generate field in MySQL SELECT
If I've got a table containing Field1 and Field2 can I generate a new field in the select statement? For example, a normal query would be:
SELECT Field1, Field2 FROM Table
And I want to also create Field3 and have that returned in the resultset... something along the lines of this would be ideal:
SELECT Field1, Field2, Field3 = 'Value' FROM Table
Is this possible at all?
A:
SELECT Field1, Field2, 'Value' Field3 FROM Table
or for clarity
SELECT Field1, Field2, 'Value' AS Field3 FROM Table
A:
Yes - it's very possible, in fact you almost had it!
Try:
SELECT Field1, Field2, 'Value' AS `Field3` FROM Table
| Generate field in MySQL SELECT | If I've got a table containing Field1 and Field2 can I generate a new field in the select statement? For example, a normal query would be:
SELECT Field1, Field2 FROM Table
And I want to also create Field3 and have that returned in the resultset... something along the lines of this would be ideal:
SELECT Field1, Field2, Field3 = 'Value' FROM Table
Is this possible at all?
| [
"SELECT Field1, Field2, 'Value' Field3 FROM Table\n\nor for clarity\nSELECT Field1, Field2, 'Value' AS Field3 FROM Table\n\n",
"Yes - it's very possible, in fact you almost had it!\nTry:\nSELECT Field1, Field2, 'Value' AS `Field3` FROM Table\n\n"
] | [
12,
5
] | [] | [] | [
"mysql",
"sql"
] | stackoverflow_0000038940_mysql_sql.txt |
Q:
Is it possible to get the maximum supported resolution of a connected display in os x from java?
Assume java 1.6 and leopard. Ideally, it would also be nice to get a list of all supported resolutions and the current resolution. If this isn't possible in java, is there some way to do it that could be called from java?
A:
GraphicsDevice[] devices = GraphicsEnvironment.getLocalGraphicsEnvironment()
.getScreenDevices();
for (int i = 0; i < devices.length; i++) {
GraphicsDevice dev = devices[i];
System.out.println("device " + i);
DisplayMode[] modes = dev.getDisplayModes();
for (int j = 0; j < modes.length; j++) {
DisplayMode m = modes[j];
System.out.println(" " + j + ": " + m.getWidth() + " x " + m.getHeight());
}
}
With this code you can at least determine the current resolution. On my system (SuSE Linux) it does NOT output the other possible resolutions.
It seems to work on Mac and Windows.
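For the current resolution specifically, a minimal sketch (standard AWT calls only) would query the device's active DisplayMode rather than enumerate the list:

import java.awt.DisplayMode;
import java.awt.GraphicsDevice;
import java.awt.GraphicsEnvironment;

public class CurrentResolution {
    public static void main(String[] args) {
        GraphicsDevice dev = GraphicsEnvironment.getLocalGraphicsEnvironment()
                .getDefaultScreenDevice();
        // getDisplayMode() returns the mode currently in use on this device.
        DisplayMode m = dev.getDisplayMode();
        System.out.println(m.getWidth() + " x " + m.getHeight());
    }
}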
| Is it possible to get the maximum supported resolution of a connected display in os x from java? | Assume java 1.6 and leopard. Ideally, it would also be nice to get a list of all supported resolutions and the current resolution. If this isn't possible in java, is there some way to do it that could be called from java?
| [
" GraphicsDevice[] devices = GraphicsEnvironment.getLocalGraphicsEnvironment()\n .getScreenDevices();\n for (int i = 0; i < devices.length; i++) {\n GraphicsDevice dev = devices[i];\n System.out.println(\"device \" + i);\n DisplayMode[] modes = dev.getDisplayModes();\n for (int j = 0; j < modes.length; j++) {\n DisplayMode m = modes[j];\n System.out.println(\" \" + j + \": \" + m.getWidth() + \" x \" + m.getHeight());\n }\n }\n\nWith this code you can determine the current resolution. On my system (SuSE linux) it does NOT output the possible resolutions. \nSeems to work an Mac and Windows.\n"
] | [
6
] | [] | [] | [
"java",
"macos",
"osx_leopard",
"resolutions"
] | stackoverflow_0000038955_java_macos_osx_leopard_resolutions.txt |
Q:
Regex to match unique substrings
Here's a basic regex technique that I've never managed to remember. Let's say I'm using a fairly generic regex implementation (e.g., grep or grep -E). If I were to do a list of files and match any that end in either .sty or .cls, how would I do that?
A:
ls | grep -E "\.(sty|cls)$"
\. matches literally a "." - an unescaped . matches any character
(sty|cls) - match "sty" or "cls" - the | is an or and the brackets limit the expression.
$ forces the match to be at the end of the line
Note, you want grep -E or egrep, not grep -e as that's a different option for lists of patterns.
A:
egrep "\.sty$|\.cls$"
A:
This regex:
\.(sty|cls)\z
will match any string ends with .sty or .cls
EDIT:
for grep \z should be replaced with $ i.e.
\.(sty|cls)$
as jelovirt suggested.
| Regex to match unique substrings | Here's a basic regex technique that I've never managed to remember. Let's say I'm using a fairly generic regex implementation (e.g., grep or grep -E). If I were to do a list of files and match any that end in either .sty or .cls, how would I do that?
| [
"ls | grep -E \"\\.(sty|cls)$\"\n\n\n\\. matches literally a \".\" - an unescaped . matches any character\n(sty|cls) - match \"sty\" or \"cls\" - the | is an or and the brackets limit the expression.\n$ forces the match to be at the end of the line\n\nNote, you want grep -E or egrep, not grep -e as that's a different option for lists of patterns.\n",
"egrep \"\\.sty$|\\.cls$\"\n\n",
"This regex:\n \\.(sty|cls)\\z\nwill match any string ends with .sty or .cls\nEDIT:\nfor grep \\z should be replaced with $ i.e.\n \\.(sty|cls)$\nas jelovirt suggested.\n"
] | [
4,
2,
2
] | [] | [] | [
"grep",
"regex"
] | stackoverflow_0000038993_grep_regex.txt |
Q:
Ruby "is" equivalent
Is there a Ruby equivalent for Python's "is"? It tests whether two objects are identical (i.e. have the same memory location).
A:
Use a.equal? b
http://www.ruby-doc.org/core/classes/Object.html
Unlike ==, the equal? method should never be overridden by subclasses: it is used to determine object identity (that is, a.equal?(b) iff a is the same object as b).
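A quick illustration of the difference:

a = "hello"
b = "hello"
c = a

a == b      # => true  (same value)
a.equal? b  # => false (different objects)
a.equal? c  # => true  (same object, like Python's "is")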
A:
You could also use __id__. This gives you the object's internal ID number, which is always unique. To check if two objects are the same, try

a.__id__ == b.__id__
This is how Ruby's standard library does it as far as I can tell (see group_by and others).
| Ruby "is" equivalent | Is there a Ruby equivalent for Python's "is"? It tests whether two objects are identical (i.e. have the same memory location).
| [
"Use a.equal? b\nhttp://www.ruby-doc.org/core/classes/Object.html\n\nUnlike ==, the equal? method should never be overridden by subclasses: it is used to determine object identity (that is, a.equal?(b) iff a is the same object as b). \n\n",
"You could also use __id__. This gives you the objects internal ID number, which is always unique. To check if to objects are the same, try\n\na.__id__ = b.__id__\n\nThis is how Ruby's standard library does it as far as I can tell (see group_by and others).\n"
] | [
13,
2
] | [] | [] | [
"python",
"ruby"
] | stackoverflow_0000035634_python_ruby.txt |
Q:
Is the .NET Client Profile worth targeting?
I've recently been looking into targeting the .NET Client Profile for a WPF application I am building. However, I was frustrated to notice that the Client Profile is only valid for the following OS configurations:
Windows XP SP2+
Windows Server 2003 Edit: Appears the Client Profile will not install on Windows Server 2003.
In addition, the client profile is not valid for x64 or ia64 editions; and will also not install if any previous version of the .NET Framework has been installed.
I'm wondering if the effort in adding the extra OS configurations to the testing matrix is worth the effort. Is there any metrics available that state the percentage of users that could possibly benefit from the client profile? I believe that once the .NET Framework has been installed, extra information is passed to a web server as part of a web request signifying that the framework is available. Granted, I would imagine that Windows XP SP2 users without the .NET Framework installed would be a large amount of people. It would then be a question of whether my application targeted those individuals specifically.
Has anyone else determined if it is worth the extra effort to target these specific users?
Edit: It seems that it is possible to get a compiler warning if you use features not included in the Client Profile. As I usually run with warnings as errors, this will hopefully be enough to minimise testing in this configuration. Of course, this configuration will still need to be tested, but it should be as simple as testing if the install/initial run works on XP with SP2+.
A:
Ultimately, it will not hurt any users if you target the Client Profile. This is because the client profile is a subset of the .net framework v3.5 sp1, and if v3.5 sp1 is already installed you don't need to install anything.
The assemblies in the client profile are the same binaries as the full framework, so unless you're loading assemblies dynamically, then you shouldn't need to do any additional testing.
My thinking is that unless you must use assemblies which are NOT in the client profile, then you should target it.
As for the OS requirements, WPF won't run on pre-XP sp2, so if you need to run on other OSes, then you'll have to use WinForms anyways.
EDIT:
On IE, yes. It sends the .NET Framework version as part of the UA string, e.g.:
Actually so does FF3+3.5sp1:
Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.0.1) Gecko/2008070208 Firefox/3.0.1 (.NET CLR 3.5.30729)
A:
I think it is important to target as many users as you can, have you ever considered shipping your application without any managed code at all? You can convert your managed applications to pure machine code using tools such as http://www.xenocode.com/ or http://www.remotesoft.com/linker/ so you won't need any .NET framework on the client machines at all.
A:
I believe that once the .NET Framework has been installed, extra information is passed to a web server as part of a web request signifying that the framework is available.
On IE, yes. It sends the .NET Framework version as part of the UA string, e.g.:
Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; .NET CLR 2.0.50727).
| Is the .NET Client Profile worth targeting? | I've recently been looking into targeting the .NET Client Profile for a WPF application I am building. However, I was frustrated to notice that the Client Profile is only valid for the following OS configurations:
Windows XP SP2+
Windows Server 2003 Edit: Appears the Client Profile will not install on Windows Server 2003.
In addition, the client profile is not valid for x64 or ia64 editions; and will also not install if any previous version of the .NET Framework has been installed.
I'm wondering if the effort in adding the extra OS configurations to the testing matrix is worth the effort. Is there any metrics available that state the percentage of users that could possibly benefit from the client profile? I believe that once the .NET Framework has been installed, extra information is passed to a web server as part of a web request signifying that the framework is available. Granted, I would imagine that Windows XP SP2 users without the .NET Framework installed would be a large amount of people. It would then be a question of whether my application targeted those individuals specifically.
Has anyone else determined if it is worth the extra effort to target these specific users?
Edit: It seems that it is possible to get a compiler warning if you use features not included in the Client Profile. As I usually run with warnings as errors, this will hopefully be enough to minimise testing in this configuration. Of course, this configuration will still need to be tested, but it should be as simple as testing if the install/initial run works on XP with SP2+.
| [
"Ultimately, it will not hurt any users if you target the Client Profile. This is because the client profile is a subset of the .net framework v3.5 sp1, and if v3.5 sp1 is already installed you don't need to install anything. \nThe assemblies in the client profile are the same binaries as the full framework, so unless you're loading assemblies dynamically, then you shouldn't need to do any additional testing. \nMy thinking is that unless you must use assemblies which are NOT in the client profile, then you should target it. \nAs for the OS requirements, WPF won't run on pre-XP sp2, so if you need to run on other OSes, then you'll have to use WinForms anyways.\nEDIT:\n\nOn IE, yes. It sends the .NET Framework version as part of the UA string, e.g.:\n\nActually so does FF3+3.5sp1:\n\nMozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.0.1) Gecko/2008070208 Firefox/3.0.1 (.NET CLR 3.5.30729)\n\n",
"I think it is important to target as many users as you can, have you ever considered shipping your application without any managed code at all? You can convert your managed applications to pure machine code using tools such as http://www.xenocode.com/ or http://www.remotesoft.com/linker/ so you won't need any .NET framework on the client machines at all.\n",
"\nI believe that once the .NET Framework has been installed, extra information is passed to a web server as part of a web request signifying that the framework is available.\n\nOn IE, yes. It sends the .NET Framework version as part of the UA string, e.g.:\nMozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; .NET CLR 2.0.50727).\n\n"
] | [
5,
3,
2
] | [] | [] | [
".net",
".net_client_profile"
] | stackoverflow_0000015694_.net_.net_client_profile.txt |
Q:
Caching Active Directory Data
In one of my applications, I am querying active directory to get a list of all users below a given user (using the "Direct Reports" thing). So basically, given the name of the person, it is looked up in AD, then the Direct Reports are read. But then for every direct report, the tool needs to check the direct reports of the direct reports. Or, more abstract: The Tool will use a person as the root of the tree and then walk down the complete tree to get the names of all the leaves (can be several hundred)
Now, my concern is obviously performance, as this needs to be done quite a few times. My idea is to manually cache that (essentially just put all the names in a long string and store that somewhere and update it once a day).
But I just wonder if there is a more elegant way to first get the information and then cache it, possibly using something in the System.DirectoryServices Namespace?
A:
In order to take control over the properties that you want to be cached, you can call RefreshCache(), passing the properties that you want to keep around:
System.DirectoryServices.DirectoryEntry entry = new System.DirectoryServices.DirectoryEntry();
// Pull the property values for "cn" and "www" from AD into the local cache.
entry.RefreshCache(new string[] {"cn", "www" });
A:
Active Directory is pretty efficient at storing information and the retrieval shouldn't be that much of a performance hit. If you are really intent on storing the names, you'll probably want to store them in some sort of a tree structure, so you can see the relationships of all the people. Depending on the number of people, you might as well pull all the information you need daily and then query all the requests against your cached copy.
A:
AD does that sort of caching for you so don't worry about it unless performance becomes a problem. I have software doing this sort of thing all day long running on a corporate intranet that takes thousands of hits per hour and have never had to tune performance in this area.
A:
Depends on how up to date you want the information to be. If you must have the very latest data in your report then querying directly from AD is reasonable. And I agree that AD is quite robust, a typical dedicated AD server is actually very lightly utilised in normal day to day operations but best to check with your IT department / support person.
An alternative is to have a daily script to dump the AD data into a CSV file and/or import it into a SQL database. (Oracle has a SELECT CONNECT BY feature that can automatically create multi-level hierarchies within a result set. MSSQL can do a similar thing with a bit of recursion IIRC).
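For the MSSQL side, the usual recursion trick (on SQL Server 2005 or later) is a recursive common table expression; a sketch, assuming a hypothetical Employees table with EmployeeID/ManagerID columns:

WITH Reports (EmployeeID, Name, Depth) AS
(
    -- Anchor: the root person of the tree.
    SELECT e.EmployeeID, e.Name, 0
    FROM Employees e
    WHERE e.EmployeeID = @RootEmployeeID

    UNION ALL

    -- Recursive step: everyone reporting to someone already in the set.
    SELECT e.EmployeeID, e.Name, r.Depth + 1
    FROM Employees e
    INNER JOIN Reports r ON e.ManagerID = r.EmployeeID
)
SELECT EmployeeID, Name, Depth
FROM Reports;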
| Caching Active Directory Data | In one of my applications, I am querying active directory to get a list of all users below a given user (using the "Direct Reports" thing). So basically, given the name of the person, it is looked up in AD, then the Direct Reports are read. But then for every direct report, the tool needs to check the direct reports of the direct reports. Or, more abstract: The Tool will use a person as the root of the tree and then walk down the complete tree to get the names of all the leaves (can be several hundred)
Now, my concern is obviously performance, as this needs to be done quite a few times. My idea is to manually cache that (essentially just put all the names in a long string and store that somewhere and update it once a day).
But I just wonder if there is a more elegant way to first get the information and then cache it, possibly using something in the System.DirectoryServices Namespace?
| [
"In order to take control over the properties that you want to be cached you can call 'RefreshCache()' passing the properties that you want to hang around:\nSystem.DirectoryServices.DirectoryEntry entry = new System.DirectoryServices.DirectoryEntry(); \n\n// Push the property values from AD back to cache.\n\nentry.RefreshCache(new string[] {\"cn\", \"www\" });\n\n",
"Active Directory is pretty efficient at storing information and the retrieval shouldn't be that much of a performance hit. If you are really intent on storing the names, you'll probably want to store them in some sort of a tree stucture, so you can see the relationships of all the people. Depending on how the number of people, you might as well pull all the information you need daily and then query all the requests against your cached copy. \n",
"AD does that sort of caching for you so don't worry about it unless performance becomes a problem. I have software doing this sort of thing all day long running on a corporate intranet that takes thousands of hits per hour and have never had to tune performance in this area. \n",
"Depends on how up to date you want the information to be. If you must have the very latest data in your report then querying directly from AD is reasonable. And I agree that AD is quite robust, a typical dedicated AD server is actually very lightly utilised in normal day to day operations but best to check with your IT department / support person.\nAn alternative is to have a daily script to dump the AD data into a CSV file and/or import it into a SQL database. (Oracle has a SELECT CONNECT BY feature that can automatically create multi-level hierarchies within a result set. MSSQL can do a similar thing with a bit of recursion IIRC).\n"
] | [
3,
2,
2,
2
] | [] | [] | [
"active_directory",
"asp.net",
"c#"
] | stackoverflow_0000033250_active_directory_asp.net_c#.txt |
Q:
What are the important Ruby commands?
I'm not sure of all of them, but what are the commands to do things like update Ruby, download a new gem, or update an existing gem? What other important things are there?
Since it might matter, I'm running Windows.
A:
By Ruby commands you probably mean the command line programs for Ruby. These are also called Ruby Helper programs. Here are a few:
ruby - The interpreter itself. Run Ruby scripts or statements.
gem - Ruby Package Manager. Great for automatically downloading or updating small Ruby modules like XML libraries, web servers, or even whole Ruby programs.
irb - Interactive Ruby Prompt. This is an entire Ruby shell that will let you execute any Ruby code you want. You can load libraries, test code directly, anything you can do with Ruby you can do in this shell. Believe me, there is quite a lot that you can do with it to improve your Ruby development workflow [1].
ri - Quick shell access to Ruby documentation. You can find the RDoc information on nearly any Ruby Class or method. The same kind of documentation that you would find on the online ruby-docs.
erb - Evaluates embedded Ruby in Ruby Templated documents. Embedded Ruby is just like embedding php into a document, and this is an interpreter for that kind of document. This is really more for the rails crowd. An alternative would be haml.
rdoc - Generate the standard Ruby documentation for one of your Ruby classes. Its like Javadocs. It parses the Ruby source files and generates the standard documentation from special comments.
testrb and rake. I'm not familiar enough with these. I'd love it if someone could fill these in!
Hopefully this was what you were looking for!
A:
Useful command: Rake
In addition to the commands listed by Joseph Pecoraro, the 'rake' command is also pretty standard when working with Ruby. Rake makes it easy to automate (simple) tasks; like building a RubyGem or running your unit tests.
With rake, the only important command to remember is 'rake -T', which shows a list of rake tasks available in the current directory.
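For illustration, a minimal Rakefile might look like this (the task names are made up):

# Rakefile
desc "Run the test suite"
task :test do
  ruby "test/all_tests.rb"
end

desc "Remove generated files"
task :clean do
  rm_f Dir["*.gem"]
end

task :default => :test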
Updating a Ruby gem
To get back to your specific question:
To update a specific gem, you can do two things: simply update the gem:
gem update <gemname>
This will update the gem to the latest version.
Install a Ruby gem
If you want to update to a specific version, you must install it:
gem install <gemname> -v <gemversion>
You can leave out the -v options. Rubygems then installs the latest version.
How to help yourself
Two useful gem commands to remember are:
gem help
This shows how to get help with rubygems.
gem help commands
This shows all commands available to rubygems.
From here you can get more specific help on a command by using gem help:
gem help update
A:
sudo gem install gemname
sudo gem update gemname
A:
Okay. I see what you're going for but again try to go abstract because I know someone will give you a direct answer (which people should up-vote over this).
Everyone should get comfortable with man pages. But even if you are, you'll find that these commands lack decent man pages. However, those that do will point you to cmd --help and you will find some decent documentation there. I linked each of the commands above to a hopefully useful resource that will lead you to an answer if you're worried about command line switches. I see someone already posted the commands so I won't repeat those for gem. But I'd go further and say:
sudo gem update [gemname]
The default behavior will update all installed gems.
Also, as a bonus there is a neat gem called cheat. The idea is that instead of typing man cmd you will type cheat cmd and you can get a community editable man page for that command. Or better yet, it doesn't have to be a command, it can be an entire topic. Coincidentally to install cheat you would do:
sudo gem install cheat
And then:
cheat gem
That will list out a "man page" written by users like you about the gem command. The commands that you asked for are on that page. Anyone can add new pages, update existing pages, and contribute to the community. If you're interested here is a quick addition you can make to have autocompletion for the cheat command from the command line.
I know I have long winded answers ;)
A:
Is there a similar command to update Ruby itself?
Alas, no there is not. I'm afraid that if you want to update Ruby itself you will have to either download an installer from the Ruby website, or compile it from source.
I should mention though that compiling from source is very easy and offers developers quite a bit of neat flexibility. You can add a suffix to the generated commands so that you can have standalone Ruby 1.8 and Ruby 1.9 builds both at the same time. That can be very helpful for testing.
Finally, it's always a danger to update an operating system's built-in commands unless it occurs through an official update. Installed applications may be expecting Ruby 1.8 in the standard location and crash if they meet an updated version. Any updates you make should just not overwrite the one that came with the OS. (If any app crashes then it's the fault of the app's developers for not specifying the absolute path to the OS version).
| What are the important Ruby commands? | I'm not sure of all of them, but what are the commands to do things like update Ruby, download a new gem, or update an existing gem? What other important things are there?
Since it might matter, I'm running Windows.
| [
"By Ruby commands you probably mean the command line programs for Ruby. These are also called Ruby Helper programs. Here are a few:\n\nruby - The interpreter itself. Run Ruby scripts or statements.\ngem - Ruby Package Manager. Great for automatically downloading or updating small Ruby modules like XML libraries, web servers, or even whole Ruby programs.\nirb - Interactive Ruby Prompt. This is an entire Ruby shell that will let you execute any Ruby code you want. You can load libraries, test code directly, anything you can do with Ruby you can do in this shell. Believe me, there is quite a lot that you can do with it to improve your Ruby development workflow [1].\nri - Quick shell access to Ruby documentation. You can find the RDoc information on nearly any Ruby Class or method. The same kind of documentation that you would find on the online ruby-docs.\nerb - Evaluates embedded Ruby in Ruby Templated documents. Embedded Ruby is just like embedding php into a document, and this is an interpreter for that kind of document. This is really more for the rails crowd. An alternative would be haml.\nrdoc - Generate the standard Ruby documentation for one of your Ruby classes. Its like Javadocs. It parses the Ruby source files and generates the standard documentation from special comments.\ntestrb and rake. I'm not familiar enough with these. I'd love it if someone could fill these in!\n\nHopefully this was what you were looking for!\n",
"Useful command: Rake\nIn addition to the commands listed by Joseph Pecoraro, the 'rake' command is also pretty standard when working with Ruby. Rake makes it easy to automate (simple) tasks; like building a RubyGem or running your unit tests.\nWith rake, the only important command to remember is 'rake -T', which shows a list of rake tasks available in the current directory.\nUpdating a Ruby gem\nTo get back to your specific question:\nTo update a specific gem, you can do two things: simply update the gem:\ngem update <gemname>\n\nThis will update the gem to the latest version. \nInstall a Ruby gem\nIf you want to update to a specific version, you must install it:\ngem install <gemname> -v <gemversion>\n\nYou can leave out the -v options. Rubygems then installs the latest version.\nHow to help yourself\nTwo useful gem commands to remember are:\ngem help\n\nThis shows how to get help with rubygems.\ngem help commands\n\nThis shows all commands available to rubygems.\nFrom here you can get more specific help on a command by using gem help:\ngem help update\n\n",
"sudo gem install gemname\nsudo gem update gemname\n\n",
"Okay. I see what you're going for but again try to go abstract because I know someone will give you a direct answer (which people should up-vote over this).\nEveryone should get comfortable with man pages. But even if you are, you'll find that these commands lack decent man pages. However, those that do will point you to cmd --help and you will find some decent documentation there. I linked each of the commands above to a hopefully useful resource that will lead you to an answer if you're worried about command line switches. I see someone already posted the commands so I won't repeat those for gem. But I'd go further and say:\nsudo gem update [gemname]\n\nThe default behavior will update all installed gems.\n\nAlso, as a bonus there is a neat gem called cheat. The idea is that instead of typing man cmd you will type cheat cmd and you can get a community editable man page for that command. Or better yet, it doesn't have to be a command, it can be an entire topic. Coincidentally to install cheat you would do:\nsudo gem install cheat\n\nAnd then:\ncheat gem\n\nThat will list out a \"man page\" written by users like you about the gem command. The commands that you asked for are on that page. Anyone can add new pages, update existing pages, and contribute to the community. If you're interested here is a quick addition you can make to have autocompletion for the cheat command from the command line.\nI know I have long winded answers ;)\n",
"\nIs there a similar command to update Ruby itself?\n\nAlas, no there is not. I'm afraid that if you want to update Ruby itself you will have to either download an installer from the Ruby website, or compile it from source.\nI should mention though that compiling from source is very easy and offers developers quite a bit of neat flexibility. You can add a suffix to the generated commands so that you can have standalone Ruby 1.8 and Ruby 1.9 builds both at the same time. That can be very helpful for testing.\nFinally, its always a danger to update an operating systems built in commands unless it occurs through an official update. Installed applications may be expecting to a Ruby 1.8 in the standard location and crash if they meet an updated version. Any updates you make should just not overwrite one that came with an OS. (If any app crashes then its the fault of the app's developers for not specifying the absolute path to the OS version).\n"
] | [
16,
11,
1,
1,
1
] | [
"\n@John Topley: Thanks. Is there a\n similar command to update Ruby itself?\n\nNot really. You don't say which operating system you're using. I use Mac OS X and tend to build Ruby from source.\n"
] | [
-1
] | [
"ruby"
] | stackoverflow_0000036430_ruby.txt |
Q:
What do I need to do to implement an "out of proc" COM server in C#?
I am trying to implement an "out of proc" COM server written in C#. How do I do this?
I need the C# code to be "out of proc" from my main C++ application, because I cannot load the .NET runtime into my main process space
WHY?:
My C++ code is in a DLL that is loaded into many different customer EXE's, some of which use different versions of the .NET runtime. Since there can only be one runtime loaded into a single process, my best bet seems to be to put my C# code into another process.
A:
You can create COM+ components using System.EnterpriseServices.ServicedComponent. Consequently, you'll be able to create out-of-proc and in-proc (client) component activation as well as all COM+ benefits of pooling, remoting, run as a windows service etc.
A:
Here we can read that it is possible, but the exe will be loaded as a library and not started in its own process like an exe. I don't know if that is a problem for you? It also contains some possible solutions if you do want to make it act like a real out-of-process COM server. But maybe using another way of inter-process communication is better, like .NET Remoting.
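If you go the .NET Remoting route, a minimal server sketch for the out-of-proc exe could look like this (MyService is a placeholder type deriving from MarshalByRefObject; the port is arbitrary):

using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

class Server
{
    static void Main()
    {
        // Expose MyService over TCP so the native host process can call it.
        ChannelServices.RegisterChannel(new TcpChannel(8085), false);
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(MyService), "MyService.rem", WellKnownObjectMode.Singleton);

        Console.WriteLine("Server running. Press Enter to exit.");
        Console.ReadLine();
    }
}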
A:
I cannot recommend this as the way, but you could create a COM-callable wrapper for your C# library, then create a VB6 ActiveX exe project that delegates calls to your C# library.
A:
Why can't you load the .NET runtime into your process space? It is possible to host the .NET runtime and call into .NET using COM.
| What do I need to do to implement an "out of proc" COM server in C#? | I am trying to implement an "out of proc" COM server written in C#. How do I do this?
I need the C# code to be "out of proc" from my main C++ application, because I cannot load the .NET runtime into my main process space
WHY?:
My C++ code is in a DLL that is loaded into many different customer EXE's, some of which use different versions of the .NET runtime. Since there can only be one runtime loaded into a single process, my best bet seems to be to put my C# code into another process.
| [
"You can create COM+ components using System.EnterpriseServices.ServicedComponent. Consequently, you'll be able to create out-of-proc and in-proc (client) component activation as well as all COM+ benefits of pooling, remoting, run as a windows service etc.\n",
"Here we can read that it is possible, but the exe will be loaded as an library and not started in it's own process like an exe. I don't know if that is a problem for you? It also contains some possible solutions if you do want to make it act like a real out of process com server. But maybe using another way of inter process communication is better. Like .Net Remoting.\n",
"I cannot recommend this as the way, but you could create a COM-callable wrapper for your C# library, then create a VB6 ActiveX exe project that delegates calls to your C# library.\n",
"Why can't you load the .net runtime into you process space? It is possible to host the .net runtime and call into .net using COM.\n"
] | [
7,
1,
0,
0
] | [] | [] | [
"c#",
"com",
"interop"
] | stackoverflow_0000030653_c#_com_interop.txt |
Q:
Call Project Server Interface web method from an msi installer
I'm using a Visual Studio web setup project to install an application that extends the functionality of Project Server. I want to call a method from the PSI (Project Server Interface) from one of the custom actions of my setup project, but every time I get a "401 Unauthorized access" error. What should I do to be able to access the PSI? The same code, when used from a Console Application, works without any issues.
A:
It sounds like in the console situation you are running with your current user credentials, which have access to the PSI. When running from the web, it's running with the creds of the IIS application instance. I think you'd either need to set up delegation to pass the session creds to the IIS application, or use some static creds for your IIS app that have access to the PSI.
A:
I finally found the answer. You can call the LoginWindows PSI service and set the credentials to NetworkCredentials using the appropriate user, password and domain tokens. Then you can call any PSI method, as long as the credentials are explicit. Otherwise, using DefaultCredentials you'll get an Unauthorized Access error, because an msi is run with the Local System account.
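In code that boils down to something like the following sketch (the proxy class and method names come from the generated PSI web references and may differ in your project; user, password and domain would come from the installer context):

// Hypothetical proxies generated from the PSI web services.
LoginWindows loginSvc = new LoginWindows();
loginSvc.Credentials = new System.Net.NetworkCredential(user, password, domain);
loginSvc.Login(); // establish the session with explicit credentials

Resource resourceSvc = new Resource();
resourceSvc.Credentials = new System.Net.NetworkCredential(user, password, domain);
// ... call PSI methods here ...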
| Call Project Server Interface web method from an msi installer | I'm using a Visual Studio web setup project to install an application that extends the functionality of Project Server. I want to call a method from the PSI ( Project Server Interface ) from one of the custom actions of my setup project, but every time a get a "401 Unauthorized access" error. What should I do to be able to access the PSI? The same code, when used from a Console Application, works without any issues.
| [
"It sounds like in the console situation you are running with your current user credentials, which have access to the PSI. When running from the web, it's running with the creds of the IIS application instance. I think you'd either need to set up delegation to pass the session creds to the IIS application, or use some static creds for your IIS app that have access to the PSI.\n",
"I finally found the answer. You can call the LoginWindows PSI service an set the credentials to NetworkCredentials using the appropriate user, password and domain tokens. Then you can call any PSI method, as long as the credentials are explicit. Otherwise, using DefaultCredentials you'll get an Unauthorized Access error, because an msi is run with Local System Account.\n"
] | [
3,
2
] | [] | [] | [
"action",
"methods",
"windows_installer"
] | stackoverflow_0000020782_action_methods_windows_installer.txt |
Q:
How can .NET class libraries be protected so they can't be referenced by other applications?
How can a .net class library project and resulting dll be protected so it can't be referenced by other applications (.net projects) except those projects in my own solution?
A:
I think you can't forbid other applications from referencing your library.
You can make the library's classes internal and provide access to them via the InternalsVisibleTo attribute, but it won't save you from reflection.
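For reference, the attribute goes in the library (typically AssemblyInfo.cs) and names the only assemblies allowed to see its internal types; a sketch with hypothetical assembly names (note that strong-named assemblies must include the full public key in the name):

// In the class library's AssemblyInfo.cs:
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyMainApplication")]

// Types can then be declared internal instead of public:
internal class SecretHelper
{
    internal static void DoWork() { /* ... */ }
}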
A:
Yep, aku is right. In reality if you want certain types & methods to only be accessible to one application, you're better off compiling it all into one exe & marking those types all internal. You can then obfuscate the code to avoid the issue with reflection (see here)
A:
Forgive my ignorance, but if they're all class libraries, what does the code do? Isn't the purpose of having a dll so that the code can be referenced.
In any case if you mark everything internal it won't be able to be accessed outside its own library
A:
I think what deanbates is saying is that he is trying to find a way to keep a DLL public within his own application and private for everything else
| How can a .net class libraries be protected so it cant be referenced by other applications? | How can a .net class library project and resulting dll be protected so it cant be referenced by other applications (.net projects) except those projects in my own solution?
| [
"I think you can't forbid other applications to reference you library.\nYou can make library's classes internal and provide access to them via InternalVisibleTo attribute but it won't save you from reflection.\n",
"Yep, aku is right. In reality if you want certain types & methods to only be accessible to one application, you're better off compiling it all into one exe & marking those types all internal. You can then obfuscate the code to avoid the issue with reflection (see here)\n",
"Forgive my ignorance, but if they're all class libraries, what does the code do? Isn't the purpose of having a dll so that the code can be referenced.\nIn any case if you mark everything internal it won't be able to be accessed outside its own library\n",
"I think what deanbates is saying is that he is trying to find a way to keep a DLL public within his own application and private for everything else\n"
] | [
8,
2,
1,
1
] | [] | [] | [
".net",
"c#",
"class_library"
] | stackoverflow_0000039125_.net_c#_class_library.txt |
Q:
SQL 2000 'TRY CATCH like' Error Handling
This is a sql 2000 database that I am working with.
I have what I call a staging table that is a raw data dump of data, so everything is ntext or nvarchar(255).
I need to cast/convert all of this data into the appropriate data types (ie int, decimal, nvarchar, etc.)
The way I was going to do this was to iterate through all records using a while loop and attempt a CAST on each column on a single record during each iteration, after I visit a particular record I flag it as processed (bit field).
But how can I log the error when/if it occurs but allow the while loop to continue.
At first I implemented this using a TRY CATCH in a local SQL 2005 instance (to get the project going) and all was working well, but I learned today that the dev & production database that the international DBA's have set up is a SQL 2000 instance, so I have to conform.
EDIT: I am using a SSIS package to populate the staging table. I see that now I must revisit that package and implement a script component to handle the conversions. Thanks guys
EDIT: I am doing this on a record by record basis, not a batch insert, so the transaction idea seems like it would be feasible but I'm not sure how to trap @@ERROR and allow the stored procedure to continue.
EDIT: I really like Guy's approach, I am going to implement it this way.
A:
What are you using to import the file? DTS has scripting abilities that can be used for data validation. If you're not using DTS, are you using a custom tool? If so, do your validation there.
But I think this is what you're looking for.
http://www.sqlteam.com/article/using-dts-to-automate-a-data-import-process
IF @@Error <> 0
GOTO LABEL
@op
In SSIS the "red line" from a data import task can redirect bad rows to a separate destination or transform. I haven't played with it in a while but hope it helps.
A:
Generally I don't like "loop through the record" solutions as they tend to be slow and you end up writing a lot of custom code.
So...
Depending on how many records are in your staging table, you could post process the data with a series of SQL statements that test the columns for correctness and mark any records that fail the test.
i.e.
UPDATE staging_table
SET status_code = 'FAIL_TEST_1'
WHERE status_code IS NULL
AND ISDATE(ntext_column1) = 0;
UPDATE staging_table
SET status_code = 'FAIL_TEST_2'
WHERE status_code IS NULL
AND ISNUMERIC(ntext_column2) = 0;
etc...
Finally
INSERT INTO results_table ( mydate, myprice )
SELECT ntext_column1 AS mydate, ntext_column2 AS myprice
FROM staging_table
WHERE status_code IS NULL;
DELETE FROM staging_table
WHERE status_code IS NULL;
And the staging table has all the errors, that you can export and report out.
A:
It looks like you are doomed. See this document.
TL/DR: A data conversion error always causes the whole batch to be aborted - your sql script will not continue to execute no matter what you do. Transactions won't help. You can't check @@ERROR because execution will already have aborted.
I would first reexamine why you need a staging database full of varchar(255) columns - can whatever fills that database do the conversion?
If not, I guess you'll need to write a program/script to select from the varchar columns, convert, and insert into the prod db.
A:
Run each cast in a transaction; after each cast, check @@ERROR, and if it's clear, commit and move on.
A:
You could try checking for the data type before casting and actually avoid throwing errors.
You could use functions like:
ISNUMERIC - to check if the data is of a numeric type
ISDATE - to check if it can be cast to DATETIME
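For example, a row-by-row guard using those checks might look like this sketch (the staging columns are hypothetical):

-- Only attempt the casts when the raw values pass the type checks;
-- otherwise flag the row instead of raising a conversion error.
IF ISDATE(@rawHireDate) = 1 AND ISNUMERIC(@rawSalary) = 1
BEGIN
    UPDATE Staging
    SET HireDate = CAST(@rawHireDate AS datetime),
        Salary = CAST(@rawSalary AS decimal(10, 2)),
        Processed = 1
    WHERE StagingID = @id
END
ELSE
BEGIN
    UPDATE Staging
    SET ErrorFlag = 1
    WHERE StagingID = @id
END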
| SQL 2000 'TRY CATCH like' Error Handling | This is a sql 2000 database that I am working with.
I have what I call a staging table that is a raw data dump of data, so everything is ntext or nvarchar(255).
I need to cast/convert all of this data into the appropriate data types (ie int, decimal, nvarchar, etc.)
The way I was going to do this was to iterate through all records using a while loop and attempt a CAST on each column on a single record during each iteration, after I visit a particular record I flag it as processed (bit field).
But how can I log the error when/if it occurs but allow the while loop to continue.
At first I implemented this using a TRY CATCH in a local SQL 2005 instance (to get the project going) and all was working well, but i learned today that the dev & production database that the international DBA's have set up is a SQL 2000 instance so I have to conform.
EDIT: I am using a SSIS package to populate the staging table. I see that now I must revisit that package and implement a script component to handle the conversions. Thanks guys
EDIT: I am doing this on a record by record basis, not a batch insert, so the transaction idea seems like it would be feasible but I'm not sure how to trap @@ERROR and allow the stored procedure to continue.
EDIT: I really like Guy's approach, I am going to implement it this way.
| [
"What are you using to import the file? DTS has scripting abilities that can be used for data validation. If your not using DTS are you using a custom tool? If so do your validation there.\nBut i think this is what your looking for.\nhttp://www.sqlteam.com/article/using-dts-to-automate-a-data-import-process\nIF @@Error <> 0\n GOTO LABEL\n\n@op\nIn SSIS the \"red line\" from a data import task can redirect bad rows to a separate destination or transform. I haven't played with it in a while but hope it helps.\n",
"Generally I don't like \"loop through the record\" solutions as they tend to be slow and you end up writing a lot of custom code.\nSo...\nDepending on how many records are in your staging table, you could post process the data with a series of SQL statements that test the columns for correctness and mark any records that fail the test.\ni.e.\nUPDATE staging_table\nSET status_code = 'FAIL_TEST_1'\nWHERE status_code IS NULL\nAND ISDATE(ntext_column1) = 0;\n\nUPDATE staging_table\nSET status_code = 'FAIL_TEST_2'\nWHERE status_code IS NULL\nAND ISNUMERIC(ntext_column2) = 0;\n\netc...\n\nFinally\nINSERT INTO results_table ( mydate, myprice )\nSELECT ntext_column1 AS mydate, ntext_column2 AS myprice\nFROM staging_table\nWHERE status_code IS NULL;\n\nDELETE FROM staging_table\nWHERE status_code IS NULL;\n\nAnd the staging table has all the errors, that you can export and report out.\n",
"It looks like you are doomed. See this document.\nTL/DR: A data conversion error always causes the whole batch to be aborted - your sql script will not continue to execute no matter what you do. Transactions won't help. You can't check @@ERROR because execution will already have aborted.\nI would first reexamine why you need a staging database full of varchar(255) columns - can whatever fills that database do the conversion?\nIf not, I guess you'll need to write a program/script to select from the varchar columns, convert, and insert into the prod db.\n",
"Run each cast in a transaction, after each cast, check @@ERROR, if its clear, commit and move on.\n",
"You could try checking for the data type before casting and actually avoid throwing errors.\nYou could use functions like:\nISNUM - to check if the data is of a numeric type\nISDATE - to check if it can be cast to DATETIME\n"
] | [
2,
2,
1,
1,
1
] | [] | [] | [
"sql_server"
] | stackoverflow_0000033685_sql_server.txt |
Q:
Referencing resource files from multiple projects in a solution
I am working on localization for a asp.net application that consists of several projects.
For this, there are some strings that are used in several of these projects. Naturally, I would prefer to have only one copy of the resource file rather than one in each project.
Since the resource files don't have a namespace (at least as far as I can tell), they can't be accessed like regular classes.
Is there any way to reference resx files in another project, within the same solution?
A:
You can just create a class library project, add a resource file there, and then refer to that assembly for common resources.
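As a sketch of how the consuming projects would then read a string, assuming a shared assembly named Common.Resources containing a Strings.resx file (both names hypothetical):

using System.Reflection;
using System.Resources;

// Base name = default namespace + resx file name (without extension).
// If the resx designer class is made public you could use it directly instead.
Assembly shared = Assembly.Load("Common.Resources");
ResourceManager rm = new ResourceManager("Common.Resources.Strings", shared);

string greeting = rm.GetString("Greeting");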
A:
I have used this solution before to share an AssemblyInfo.cs file across all projects in a solution; I presume the same would work for a resource file.
Create a linked file in each individual project/class library. There will be only one copy, and every project will have a reference to the code via a linked file at compile time. It's a very elegant solution for sharing non-public resources without duplicating code.
<Compile Include="path to shared file usually relative">
<Link>filename for Visual Studio To Dispaly.resx</Link>
</Compile>
add that code to the Compile item group of a csproj file, then replace the paths with your actual paths to the resx files and you should be able to open them.
Once you have done this for one project file you should be able to copy & paste the linked file to other projects without having to hack the csproj.
A:
Some useful advice on how to manage a situation like this is available here:
http://www.codeproject.com/KB/dotnet/Localization.aspx
| Referencing resource files from multiple projects in a solution | I am working on localization for a asp.net application that consists of several projects.
For this, there are some strings that are used in several of these projects. Naturally, I would prefer to have only one copy of the resource file in each project.
Since the resource files don't have an namespace (at least as far as I can tell), they can't be accessed like regular classes.
Is there any way to reference resx files in another project, within the same solution?
| [
"You can just create a class library project, add a resource file there, and then refer to that assembly for common resources.\n",
"I have used this solution before to share a assembley info.cs file across all projects in a solution I would presume the same would work fro a resource file.\nCreate a linked file to each individual project/class library. There will be only one copy and every project will have a reference to the code via a linked file at compile time. Its a very elegant solution to solve shared non public resources without duplicating code.\n<Compile Include=\"path to shared file usually relative\">\n <Link>filename for Visual Studio To Dispaly.resx</Link>\n</Compile>\n\nadd that code to the complile item group of a csproj file then replace the paths with your actual paths to the resx files and you sould be able to open them.\nOnce you have done this for one project file you should be able to employ the copy & paste the linked file to other projects without having to hack the csproj.\n",
"Some useful advice on how to manage a situation like this is available here:\nhttp://www.codeproject.com/KB/dotnet/Localization.aspx\n"
] | [
11,
5,
4
] | [] | [] | [
"c#",
"localization",
"resx"
] | stackoverflow_0000039065_c#_localization_resx.txt |
Q:
Do you version "derived" files?
Using online interfaces to a version control system is a nice way to have a published location for the most recent versions of code. For example, I have a LaTeX package here (which is released to CTAN whenever changes are verified to actually work):
http://github.com/wspr/pstool/tree/master
The package itself is derived from a single file (in this case, pstool.tex) which, when processed, produces the documentation, the readme, the installer file, and the actual files that make up the package as it is used by LaTeX.
In order to make it easy for users who want to download this stuff, I include all of the derived files mentioned above in the repository itself as well as the master file pstool.tex. This means that I'll have double the number of changes every time I commit because the package file pstool.sty is a generated subset of the master file.
Is this a perversion of version control?
@Jon Limjap raised a good point:
Is there another way for you to publish your generated files elsewhere for download, instead of relying on your version control to be your download server?
That's really the crux of the matter in this case. Yes, released versions of the package can be obtained from elsewhere. So it does really make more sense to only version the non-generated files.
On the other hand, @Madir's comment that:
the convenience, which is real and repeated, outweighs cost, which is borne behind the scenes
is also rather pertinent in that if a user finds a bug and I fix it immediately, they can then head over to the repository and grab the file that's necessary for them to continue working without having to run any "installation" steps.
And this, I think, is the more important use case for my particular set of projects.
A:
We don't version files that can be automatically generated using scripts included in the repository itself. The reason for this is that after a checkout, these files can be rebuilt with a single click or command. In our projects we always try to make this as easy as possible, thus preventing the need for versioning these files.
One scenario I can imagine where this could be useful is when 'tagging' specific releases of a product, for use in a production environment (or any non-development environment) where tools required for generating the output might not be available.
We also use targets in our build scripts that can create and upload archives with a released version of our products. This can be uploaded to a production server, or a HTTP server for downloading by users of your products.
A:
I am using Tortoise SVN for small system ASP.NET development. Most code is interpreted ASPX, but there are around a dozen binary DLLs generated by a manual compile step. Whilst it doesn't make a lot of sense in theory to have these under source control, it certainly makes it convenient to ensure they are correctly mirrored from the development environment onto the production system (one click). Also - in case of disaster - the rollback to the previous step is again one click in SVN.
So I bit the bullet and included them in the SVN archive - the convenience, which is real and repeated, outweighs cost, which is borne behind the scenes.
A:
Not necessarily, although best practices for source control advise that you do not include generated files, for obvious reasons.
Is there another way for you to publish your generated files elsewhere for download, instead of relying on your version control to be your download server?
A:
Normally, derived files should not be stored in version control. In your case, you could build a release procedure that created a tarball that includes the derived files.
As you say, keeping the derived files in version control only increases the amount of noise you have to deal with.
A:
In some cases we do, but it's more of a sysadmin type of use case, where the generated files (say, DNS zone files built from a script) have intrinsic interest in their own right, and the revision control is more linear audit trail than branching-and-tagging source control.
| Do you version "derived" files? | Using online interfaces to a version control system is a nice way to have a published location for the most recent versions of code. For example, I have a LaTeX package here (which is released to CTAN whenever changes are verified to actually work):
http://github.com/wspr/pstool/tree/master
The package itself is derived from a single file (in this case, pstool.tex) which, when processed, produces the documentation, the readme, the installer file, and the actual files that make up the package as it is used by LaTeX.
In order to make it easy for users who want to download this stuff, I include all of the derived files mentioned above in the repository itself as well as the master file pstool.tex. This means that I'll have double the number of changes every time I commit because the package file pstool.sty is a generated subset of the master file.
Is this a perversion of version control?
@Jon Limjap raised a good point:
Is there another way for you to publish your generated files elsewhere for download, instead of relying on your version control to be your download server?
That's really the crux of the matter in this case. Yes, released versions of the package can be obtained from elsewhere. So it does really make more sense to only version the non-generated files.
On the other hand, @Madir's comment that:
the convenience, which is real and repeated, outweighs cost, which is borne behind the scenes
is also rather pertinent in that if a user finds a bug and I fix it immediately, they can then head over to the repository and grab the file that's necessary for them to continue working without having to run any "installation" steps.
And this, I think, is the more important use case for my particular set of projects.
| [
"We don't version files that can be automatically generated using scripts included in the repository itself. The reason for this is that after a checkout, these files can be rebuild with a single click or command. In our projects we always try to make this as easy as possible, and thus preventing the need for versioning these files.\nOne scenario I can imagine where this could be useful if 'tagging' specific releases of a product, for use in a production environment (or any non-development environment) where tools required for generating the output might not be available.\nWe also use targets in our build scripts that can create and upload archives with a released version of our products. This can be uploaded to a production server, or a HTTP server for downloading by users of your products.\n",
"I am using Tortoise SVN for small system ASP.NET development. Most code is interpreted ASPX, but there are around a dozen binary DLLs generated by a manual compile step. Whilst it doesn't make a lot of sense to have these source-code versioned in theory, it certainly makes it convenient to ensure they are correctly mirrored from the development environment onto the production system (one click). Also - in case of disaster - the rollback to the previous step is again one click in SVN. \nSo I bit the bullet and included them in the SVN archive - the convenience, which is real and repeated, outweighs cost, which is borne behind the scenes.\n",
"Not necessarily, although best practices for source control advise that you do not include generated files, for obvious reasons.\nIs there another way for you to publish your generated files elsewhere for download, instead of relying on your version control to be your download server?\n",
"Normally, derived files should not be stored in version control. In your case, you could build a release procedure that created a tarball that includes the derived files.\nAs you say, keeping the derived files in version control only increases the amount of noise you have to deal with.\n",
"In some cases we do, but it's more of a sysadmin type of use case, where the generated files (say, DNS zone files built from a script) have intrinsic interest in their own right, and the revision control is more linear audit trail than branching-and-tagging source control.\n"
] | [
4,
2,
1,
1,
0
] | [] | [] | [
"revision_history",
"revisions",
"version_control",
"versioning"
] | stackoverflow_0000039154_revision_history_revisions_version_control_versioning.txt |
Q:
How do I run my app with large pages in Windows?
Large pages are available in Windows Server 2003 and Windows Vista.
But how do I enable large pages for my application?
A:
Martin's answer is correct on Windows Server 2003:
You will have to assign the "Lock pages in memory" privilege to any user that runs your > application. This includes administrators
Select Control Panel -> Administrative Tools -> Local Security Policy
Select Local Policies -> User Rights Assignment
Double click "Lock pages in memory", add users and/or groups
Reboot the machine
On Windows Vista you also need to make sure that the application is run as Administrator (by right-clicking on the application or the shell and choosing "Run as administrator").
In addition, it helps to have a freshly booted machine since the large pages can "run out" due to fragmentation of the heap.
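With the privilege in place, the allocation itself goes through the Win32 API; a condensed sketch (error handling trimmed, and the "Lock pages in memory" privilege must already be enabled for the process token):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Large-page allocations must be a multiple of the minimum large page size. */
    SIZE_T largePageSize = GetLargePageMinimum();
    if (largePageSize == 0) {
        printf("Large pages are not supported on this system.\n");
        return 1;
    }

    void *p = VirtualAlloc(NULL, largePageSize,
                           MEM_RESERVE | MEM_COMMIT | MEM_LARGE_PAGES,
                           PAGE_READWRITE);
    if (p == NULL) {
        /* Typically ERROR_PRIVILEGE_NOT_HELD (1314) if the privilege is missing,
           or a resource error if contiguous physical memory has run out. */
        printf("VirtualAlloc failed: %lu\n", GetLastError());
        return 1;
    }

    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}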
A:
You will have to assign the Lock pages in memory privilege to any user that runs your application. This includes administrators.
Select Control Panel -> Administrative Tools -> Local Security Policy
Select Local Policies -> User Rights Assignment
Double click "Lock pages in memory", add users and/or groups
Reboot the machine
| How do I run my app with large pages in Windows? | Large pages are available in Windows Server 2003 and Windows Vista.
But how do I enable large pages for my application?
| [
"Martin's answer is correct on Windows Server 2003:\n\nYou will have to assign the \"Lock pages in memory\" privilege to any user that runs your > application. This includes administrators\n\nSelect Control Panel -> Administrative Tools -> Local Security Policy\nSelect Local Policies -> User Rights Assignment\nDouble click \"Lock pages in memory\", add users and/or groups\nReboot the machine\n\n\nOn Windows Vista you need also make sure that the application is run as Administrator (by right-clicking on the application or the shell and choosing \"Run as adminstrator\".\nIn addition, it helps to have a freshly booted machine since the large pages can \"run out\" due to fragmentation of the heap.\n",
"You will have to assign the Lock pages in memory privilege to any user that runs your application. This includes administrators.\n\nSelect Control Panel -> Administrative Tools -> Local Security Policy \nSelect Local Policies -> User Rights Assignment\nDouble click \"Lock pages in memory\", add users and/or groups \nReboot the machine \n\n"
] | [
3,
1
] | [] | [] | [
"windows",
"windows_server_2003"
] | stackoverflow_0000039059_windows_windows_server_2003.txt |
Q:
How to use LINQ To SQL in an N-Tier Solution?
Now that LINQ to SQL is a little more mature, I'd like to know of any techniques people are using to create an n-tiered solution using the technology, because it does not seem that obvious to me.
A:
Hm, Rockford Lhotka said that LINQ to SQL is a wonderful technology for fetching data from a database. He suggests that afterwards the results must be bound to "rich domain objects" (aka CSLA objects).
Seriously speaking, LINQ to SQL has support for n-tier architecture; see the DataContext.Update method.
A:
You might want to look into the ADO .Net Entity Framework as an alternative to LINQ to SQL, although it does support LINQ as well. I believe LINQ to SQL is designed to be fairly lightweight and simple, whereas the Entity Framework is more heavy duty and probably more suitable in large Enterprise applications.
A:
LINQ to SQL doesn't really have a n-tier story that I've seen, since the objects that it creates are created in the class with the rest of it, you don't really have an assembly that you can nicely reference through something like Web Services, etc.
The only way I'd really consider it is using the DataContext to fetch data, then filling an intermediary data model, passing that through, and referencing it on both sides, and using that in your client side - then passing the objects back and pushing the data back into a new DataContext or intelligently updating rows after you refetch them.
That's if I'm understanding what you're trying to get at :\
I asked ScottGu the same question on his blog when I first started looking at it - but I haven't seen a single scenario or app in the wild that uses LINQ to SQL in this way. Websites like Rob Connery's Storefront are closer to the provider.
A:
OK, I am going to give myself one possible solution.
Inserts/Updates were never an issue; you can wrap the business logic in a Save/Update method; e.g.
public class EmployeesDAL
{
...
SaveEmployee(Employee employee)
{
//data formatting
employee.FirstName = employee.FirstName.Trim();
employee.LastName = employee.LastName.Trim();
//business rules
if(employee.FirstName.Length > 0 && employee.LastName.Length > 0)
{
MyCompanyContext context = new MyCompanyContext();
//insert
if(employee.empid == 0)
context.Employees.InsertOnSubmit(employee);
else
{
//update goes here
}
context.SubmitChanges();
}
else
throw new BusinessRuleException("Employees must have first and last names");
}
}
For fetching data, or at least the fetching of data that is coming from more than one table you can use stored procedures or views because the results will not be anonymous so you can return them from an outside method. For instance, using a stored proc:
public ISingleResult<GetEmployeesAndManagersResult> LoadEmployeesAndManagers()
{
MyCompanyContext context = new MyCompanyContext();
var emps = context.GetEmployeesAndManagers();
return emps;
}
A:
Seriously speaking, LINQ to SQL has support for n-tier architecture; see the DataContext.Update method
Some of what I've read suggests that the business logic wraps the DataContext - in other words you wrap the update in the way that you suggest.
The way I traditionally write business objects, I usually encapsulate the "Load methods" in the BO as well; so I might have a method named LoadEmployeesAndManagers that returns a list of employees and their immediate managers (this is a contrived example). Maybe it's just me, but in my front end I'd rather see e.LoadEmployeesAndManagers() than some long LINQ statement.
Anyway, using LINQ it would probably look something like this (not checked for syntax correctness):
var emps = from e in Employees
join m in Employees
on e.ManagerEmpID equals m.EmpID
select new
{ e,
m.FullName
};
Now if I understand things correctly, if I put this in say a class library and call it from my front end, the only way I can return this is as an IEnumerable, so I lose my strong typed goodness. The only way I'd be able to return a strongly typed object would be to create my own Employees class (plus a string field for manager name) and fill it from the results of my LINQ to SQL statement and then return that. But this seems counter intuitive... what exactly did LINQ to SQL buy me if I have to do all that?
I think that I might be looking at things the wrong way; any enlightenment would be appreciated.
A:
"the only way I can return this is as an IEnumerable, so I lose my strong typed goodness"
that is incorrect. In fact your query is strongly typed, it is just an anonymous type. I think the query you want is more like:
var emps = from e in Employees
join m in Employees
on e.ManagerEmpID equals m.EmpID
           select new Employee
           {
               EmpID = e.EmpID,
               FullName = e.FullName
               // m.FullName would need its own property on Employee,
               // e.g. one added via a partial class
           };
Which will return IEnumerable<Employee>.
Here is an article I wrote on the topic.
Linq-to-sql is an ORM. It does not affect the way that you design an N-tiered application. You use it the same way you would use any other ORM.
A:
@liammclennan
Which will return IEnumerable. ... Linq-to-sql is an ORM. It does not affect the way that you design an N-tiered application. You use it the same way you would use any other ORM.
Then I guess I am still confused. Yes, Linq-to-Sql is an ORM; but as far as I can tell I am still littering my front end code with inline sql type statements (linq, not sql.... but still I feel that this should be abstracted away from the front end).
Suppose I wrap the LINQ statement we've been using as an example in a method. As far as I can tell, the only way I can return it is this way:
public class EmployeesDAL
{
public IEnumerable LoadEmployeesAndManagers()
{
MyCompanyContext context = new MyCompanyContext();
var emps = from e in context.Employees
join m in context.Employees
on e.ManagerEmpID equals m.EmpID
select new
{ e,
m.FullName
};
return emps;
}
}
From my front end code I would do something like this:
EmployeesDAL dal = new EmployeesDAL();
var emps = dal.LoadEmployeesAndManagers();
This of course returns an IEnumerable; but I cannot use this like any other ORM like you say (unless of course I misunderstand), because I cannot do this (again, this is a contrived example):
txtEmployeeName.Text = emps[0].FullName
This is what I meant by "I lose strong typed goodness." I think that I am starting to agree with Crucible; that LINQ-to-SQL was not designed to be used in this way. Again, if I am not seeing things correctly, someone show me the way :)
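For what it's worth, one way around the anonymous-type problem described above is to project into a small named DTO class instead of an anonymous type. A rough sketch (the EmployeeManagerView class is illustrative, not something LINQ to SQL generates):

    public class EmployeeManagerView
    {
        public int EmpID { get; set; }
        public string FullName { get; set; }
        public string ManagerName { get; set; }
    }

    public IEnumerable<EmployeeManagerView> LoadEmployeesAndManagers()
    {
        MyCompanyContext context = new MyCompanyContext();

        return (from e in context.Employees
                join m in context.Employees
                    on e.ManagerEmpID equals m.EmpID
                select new EmployeeManagerView
                {
                    EmpID = e.EmpID,
                    FullName = e.FullName,
                    ManagerName = m.FullName
                }).ToList(); // materialize before the DataContext goes away
    }

The caller then gets compile-time checking back: txtEmployeeName.Text = emps.First().FullName compiles and works.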
| How to use LINQ To SQL in an N-Tier Solution? | Now that LINQ to SQL is a little more mature, I'd like to know of any techniques people are using to create an n-tiered solution using the technology, because it does not seem that obvious to me.
| [
"Hm, Rockford Lhotka sad, that LINQ to SQL is wonderful technology for fetching data from database. He suggests that afterwards they'll must to be bind to \"reach domain objects\" (aka. CSLA objetcs).\nSeriously speaking, LINQ to SQL had it's support for n-tier architecture see DataContext.Update method.\n",
"You might want to look into the ADO .Net Entity Framework as an alternative to LINQ to SQL, although it does support LINQ as well. I believe LINQ to SQL is designed to be fairly lightweight and simple, whereas the Entity Framework is more heavy duty and probably more suitable in large Enterprise applications.\n",
"LINQ to SQL doesn't really have a n-tier story that I've seen, since the objects that it creates are created in the class with the rest of it, you don't really have an assembly that you can nicely reference through something like Web Services, etc.\nThe only way I'd really consider it is using the datacontext to fetch data, then fill an intermediary data model, passing that through, and referencing it on both sides, and using that in your client side - then passing them back and pushing the data back into a new Datacontext or intellgently updating rows after you refetch them.\nThat's if I'm understanding what you're trying to get at :\\\nI asked ScottGu the same question on his blog when I first started looking at it - but I haven't seen a single scenario or app in the wild that uses LINQ to SQL in this way. Websites like Rob Connery's Storefront are closer to the provider.\n",
"OK, I am going to give myself one possible solution.\nInserts/Updates were never an issue; you can wrap the business logic in a Save/Update method; e.g.\npublic class EmployeesDAL\n{\n ...\n SaveEmployee(Employee employee)\n {\n //data formatting\n employee.FirstName = employee.FirstName.Trim();\n employee.LastName = employee.LastName.Trim();\n\n //business rules\n if(employee.FirstName.Length > 0 && employee.LastName.Length > 0)\n {\n MyCompanyContext context = new MyCompanyContext();\n\n //insert\n if(employee.empid == 0)\n context.Employees.InsertOnSubmit(employee);\n else\n {\n //update goes here\n }\n\n context.SubmitChanges();\n\n\n }\n else \n throw new BusinessRuleException(\"Employees must have first and last names\");\n }\n }\n\nFor fetching data, or at least the fetching of data that is coming from more than one table you can use stored procedures or views because the results will not be anonymous so you can return them from an outside method. For instance, using a stored proc:\n public ISingleResult<GetEmployeesAndManagersResult> LoadEmployeesAndManagers()\n {\n MyCompanyContext context = new MyCompanyContext();\n\n var emps = context.GetEmployeesAndManagers();\n\n return emps;\n }\n\n",
"\nSeriously speaking, LINQ to SQL had it's support for n-tier architecture see DataContext.Update method\n\nSome of what I've read suggests that the business logic wraps the DataContext - in other words you wrap the update in the way that you suggest. \nThe way i traditionally write business objects i usually encapsulate the \"Load methods\" in the BO as well; so I might have a method named LoadEmployeesAndManagers that returns a list of employees and their immediate managers (this is a contrived example) . Maybe its just me, but in my front end I'd rather see e.LoadEmployeesAndManagers() than some long LINQ statement. \nAnyway, using LINQ it would probably look something like this (not checked for syntax correctness):\nvar emps = from e in Employees\n join m in Employees\n on e.ManagerEmpID equals m.EmpID\n select new\n { e,\n m.FullName\n };\n\nNow if I understand things correctly, if I put this in say a class library and call it from my front end, the only way I can return this is as an IEnumerable, so I lose my strong typed goodness. The only way I'd be able to return a strongly typed object would be to create my own Employees class (plus a string field for manager name) and fill it from the results of my LINQ to SQL statement and then return that. But this seems counter intuitive... what exactly did LINQ to SQL buy me if I have to do all that?\nI think that I might be looking at things the wrong way; any enlightenment would be appreciated.\n",
"\"the only way I can return this is as an IEnumerable, so I lose my strong typed goodness\"\nthat is incorrect. In fact your query is strongly typed, it is just an anonymous type. I think the query you want is more like:\nvar emps = from e in Employees\n join m in Employees\n on e.ManagerEmpID equals m.EmpID\n select new Employee\n { e,\n m.FullName\n };\n\nWhich will return IEnumerable.\nHere is an article I wrote on the topic. \nLinq-to-sql is an ORM. It does not affect the way that you design an N-tiered application. You use it the same way you would use any other ORM.\n",
"@liammclennan\n\nWhich will return IEnumerable. ... Linq-to-sql is an ORM. It does not affect the way that you design an N-tiered application. You use it the same way you would use any other ORM.\n\nThen I guess I am still confused. Yes, Linq-to-Sql is an ORM; but as far as I can tell I am still littering my front end code with inline sql type statements (linq, not sql.... but still I feel that this should be abstracted away from the front end).\nSuppose I wrap the LINQ statement we've been using as an example in a method. As far as I can tell, the only way I can return it is this way:\npublic class EmployeesDAL\n{\n public IEnumerable LoadEmployeesAndManagers()\n {\n MyCompanyContext context = new MyCompanyContext();\n\n var emps = from e in context.Employees\n join m in context.Employees\n on e.ManagerEmpID equals m.EmpID\n select new\n { e,\n m.FullName\n };\n\n return emps;\n }\n\n}\n\nFrom my front end code I would do something like this:\nEmployeesDAL dal = new EmployeesDAL;\nvar emps = dal.LoadEmployeesAndManagers();\n\nThis of course returns an IEnumerable; but I cannot use this like any other ORM like you say (unless of course I misunderstand), because I cannot do this (again, this is a contrived example):\ntxtEmployeeName.Text = emps[0].FullName\n\nThis is what I meant by \"I lose strong typed goodness.\" I think that I am starting to agree with Crucible; that LINQ-to-SQL was not designed to be used in this way. Again, if I am not seeing things correctly, someone show me the way :)\n"
] | [
1,
1,
1,
1,
0,
0,
0
] | [] | [] | [
"linq_to_sql",
"n_tier_architecture"
] | stackoverflow_0000038005_linq_to_sql_n_tier_architecture.txt |
Q:
C++ STL question: allocators
I have a (potentially dumb) question about the C++ STL. When I make a container (vector, set, map, etc), is it allocated on the stack or on the heap? If I make a set and put 5 million strings, will I have to worry about a stack overflow?
A:
STL classes by default allocate their internal buffers from the heap, although these classes also allow custom allocators that allow a user to specify an alternate location to allocate from - e.g. a shared memory pool.
A:
The default allocator for STL containers uses operator new and delete, so it's whatever those route to for the type being contained. (In general, it comes from the heap unless you do something to override that.)
You will not get a stack overflow from allocating 5 million strings. Even if you made a stack based allocator, it would probably overflow before you even inserted one string.
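To make that concrete, here is a small sketch showing that only the container object itself occupies stack space; the element storage comes from the free store:

    #include <string>
    #include <vector>

    int main()
    {
        // 'v' itself (a few pointer/size members) lives on the stack, but the
        // buffer holding the strings is allocated by the default allocator
        // from the free store, so this does not grow the stack.
        std::vector<std::string> v;
        v.reserve(5000000);
        for (int i = 0; i < 5000000; ++i)
            v.push_back("some string");
        return 0;
    }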
A:
The container itself is allocated where you decide (it can be the stack, the heap, an object's member, etc) but the memory it uses is, by default, as others described, taken on the Free Store (managed through new and delete) which is not the same as the heap (managed through malloc/free).
Don't mix the two!
| C++ STL question: allocators | I have a (potentially dumb) question about the C++ STL. When I make a container (vector, set, map, etc), is it allocated on the stack or on the heap? If I make a set and put 5 million strings, will I have to worry about a stack overflow?
| [
"STL classes by default allocate their internal buffers from the heap, although these classes also allow custom allocators that allow a user to specify an alternate location to allocate from - e.g. a shared memory pool.\n",
"The default allocator for STL containers uses operator new and delete, so it's whatever those route to for the type being contained. (In general, it comes from the heap unless you do something to override that.)\nYou will not get a stack overflow from allocating 5 million strings. Even if you made a stack based allocator, it would probably overflow before you even inserted one string.\n",
"The container itself is allocated where you decide (it can be the stack, the heap, an object's member, etc) but the memory it uses is, by default, as others described, taken on the Free Store (managed through new and delete) which is not the same as the heap (managed through malloc/free).\nDon't mix the two!\n"
] | [
9,
3,
0
] | [] | [] | [
"c++",
"stl"
] | stackoverflow_0000033306_c++_stl.txt |
Q:
Can I use other IDEs other than Visual Studio for coding in .net?
What are the options? How popular are they? Do these IDEs give similar/better functionality compared to visual studio?
A:
Yes - you can try using SharpDevelop:
http://www.icsharpcode.net/OpenSource/SD/
Or you can just use notepad, or notepad++
http://notepad-plus.sourceforge.net/
Then compile on the command line.
Edit: If you're looking for a free solution - try Visual Studio C# Express Edition:
http://www.microsoft.com/express/vcsharp/
A:
The vast majority of .net developers use Visual Studio, but there are a couple of alternatives.
Visual Studio Express Editions are free and give you a cut down version of Visual Studio which you can use with a single language, i.e. VB or C# or C++.
SharpDevelop is probably the best free alternative to Visual Studio. It's open source and has features like a form designer. It supports the full range of .net languages (including IronPython, F# and Boo). It also has features not found in Visual Studio, like the ability to translate between C# and VB.net. You can even mix different languages in the same project.
MonoDevelop is also free and open source. - Now runs on Linux, Mac OS/X and Windows.
The .net compilers are all free and included with the SDK. This means you can always use any text editor and compile from the command line. This would be pretty painful to do with anything other than a really simple program!
A:
MonoDevelop
A:
Check out the mono project. http://www.mono-project.com/
It's the '.NET for linux' project.
They also have an ide based on eclipse as part of the whole thing. Never used it before but I've used eclipse for java and some php work, and eclipse is pretty good
Edit: the ide is called MonoDevelop. Seen at http://www.monodevelop.com/
A:
You do
SharpDevelop - It doesn't really stand up to Visual Studio. Though I found it to be useful at times since it has support for Visual Basic. And at times I could load solutions for projects that were not installed on my VS. But the really USEFUL features that I found were: conversion between C# <-> VB code, PInvoke, and regex expressions. Oh, and let's not forget support for Boo :D.
there is also Borland C# Builder AFAIK. Only saw a tutorial long ago written by someone who has used it.
MonoDevelop - This is based on SharpDevelop 0.9 if I remember correctly. I have to say I only used it once to see if I can work with threads in Linux just like in Windows.
That's about all I remember; I'm pretty sure there is at least one more IDE but I don't remember it now :). Also, they don't really match up to VS + ReSharper :) or + CodeRush.
Plus you have Visual Studio Express, so unless you have to work on Linux or have some projects that you think you could try opening in #D, there isn't much out there. MonoDevelop is starting to come along; try keeping an eye on it.
I also found this reference: Xcode
| Can I use other IDEs other than Visual Studio for coding in .net? | What are the options? How popular are they? Do these IDEs give similar/better functionality compared to visual studio?
| [
"Yes - you can try using SharpDevelop:\nhttp://www.icsharpcode.net/OpenSource/SD/\nOr you can just use notepad, or notepad++\nhttp://notepad-plus.sourceforge.net/\nThen compile on the command line.\nEdit: If you're looking for a free solution - try Visual Studio C# Express Edition:\nhttp://www.microsoft.com/express/vcsharp/\n",
"The vast majority of .net developers use Visual Studio, but there are a couple of alternatives.\nVisual Studio Express Editions are free and give you a cut down version of Visual Studio which you can use with a single language, i.e. VB or C# or C++. \nSharpDevelop is probably the best free alternative to Visual Studio. It's open source and has features like a form designer. It supports the full range of .net languages (including IronPython, F# and Boo). It also has features not found in Visual Studio, like the ability to translate between C# and VB.net. You can even mix different languages in the same project.\nMonoDevelop is also free and open source. - Now runs on Linux, Mac OS/X and Windows.\nThe .net compilers are all free and included with the SDK. This means you can always use any text editor and compile from the command line. This would be pretty painful to do with anything other than a really simple program!\n",
"MonoDevelop\n",
"Check out the mono project. http://www.mono-project.com/\nIt's the '.NET for linux' project.\nThey also have an ide based on eclipse as part of the whole thing. Never used it before but I've used eclipse for java and some php work, and eclipse is pretty good\nEdit: the ide is called MonoDevelop. Seen at http://www.monodevelop.com/\n",
"You do\nSharpDevelop - It doesn't really stand up to Visual Studio. Thou I found it to be useful at times since it has support for Visual Basic. And at times I could load solutions for projects that were not installed on my VS. But the really USEFUL features that I found were : Conversion between C# <-> VB Code, PInvoke, and Regex Expressions. Oh and lets not forget support for Boo :D.\nthere is also Borland C# Builder AFAIK. Only saw a tutorial long ago written by someone who has used it.\nMonoDevelop - link text This is based on SharpDevelop 0.9 if I remember it correctly. I have to say I only used it once to see if I can work with threads in Linux just like in Windows.\nThat's about all I remember, I'm pretty sure there are at least one more IDE but I don't remember it now :). Also they don't really match up to VS + Resharper :) or + CodeRush.\nPlus you have Visual Studio Express so unless you have to work on Linux or have some projects that you think you could try opening in #D there isn't much out there. MonoDevelop is starting to come along try keeping an eye out for it.\nI found this refrences also X-Code\n"
] | [
5,
3,
2,
1,
1
] | [] | [] | [
".net",
"visual_studio"
] | stackoverflow_0000039229_.net_visual_studio.txt |
Q:
Query to identify the number of revisions made to a table
Is there any query which can return me the number of revisions made to the structure of a database table?
Secondly, how can I determine the number of pages (in terms of size) present in mdf or ldf files?
A:
I think you need to create a trigger and store all changes to the table in a separate table. You can then use this table to get the revision history.
A:
You can get the last modified date or creation date of an object in SQL Server.
For examle info on tables:
SELECT * FROM sys.objects WHERE type='U'
More info on msdn
Number of pages can be fetched from sys.database_files.
Check documentation
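Note that size there is reported in 8 KB pages, so a sketch of the per-file page count would be:

    SELECT name, type_desc, size AS pages_8kb, size * 8 / 1024 AS size_mb
    FROM sys.database_files;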
A:
SQL Server doesn't keep track of changes so it can't tell you this.
The only way you may be able to do this is if you had a copy of all the scripts applied to the database.
In order to be able to capture this information in the future you should look at DDL triggers (v2005+) which will enable you to record changes.
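A minimal sketch of such a DDL trigger (SQL Server 2005+; the log table and trigger names here are illustrative):

    CREATE TABLE SchemaChangeLog (
        ChangeID  int IDENTITY(1,1) PRIMARY KEY,
        EventTime datetime DEFAULT GETDATE(),
        EventData xml
    );
    GO

    CREATE TRIGGER trgLogSchemaChanges
    ON DATABASE
    FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE
    AS
        INSERT INTO SchemaChangeLog (EventData) VALUES (EVENTDATA());
    GO

    -- Revisions to one table then become a simple count over the log:
    SELECT COUNT(*) AS Revisions
    FROM SchemaChangeLog
    WHERE EventData.value('(/EVENT_INSTANCE/ObjectName)[1]', 'sysname') = 'MyTable';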
| Query to identify the number of revisions made to a table | Is there any query which can return me the number of revisions made to the structure of a database table?
Secondly, how can I determine the number of pages (in terms of size) present in mdf or ldf files?
| [
"I think you need to create a trigger and store all changes to the table in a separate table. You can then use this table to get the revision history.\n",
"You can get last modify date or creation date of object in SQL Server.\nFor examle info on tables:\nSELECT * FROM sys.objects WHERE type='U'\n\nMore info on msdn\nNumber of pages can be fetched from sys.database_files.\nCheck documentation\n",
"SQL Server doesn't keep track of changes so it can't tell you this.\nThe only way you may be able to do this is if you had a copy of all the scripts applied to the database.\nIn order to be able to capture this information in the future you should look at DDL triggers (v2005+) which will enable you to record changes.\n"
] | [
2,
1,
0
] | [] | [] | [
"sql_server"
] | stackoverflow_0000039243_sql_server.txt |
Q:
"Similar Posts" like functionality using MS SQL Server?
I have lots of article store in MS SQL server 2005 database in a table called Articles-
"Articles (ArticleID, ArticleTitle, ArticleContent)"
Now I want some SP or SQL query which could return similar articles against any user's input (very much like "Similar Posts" in blogs OR "Related Questions" in stackoverflow). The matching should work on both ArticleTitle and ArticleContent. The query should be intelligent enough to sort the results on the basis of their relevance.
Is it possible to do this in MS SQL Server 2005?
A:
Something like this might work, a kind of ranking system. You would probably have to split the string in your application to build a SQL string, but I have used something similar to build an effective site search.
Select
Top 10
ArticleID,
ArticleTitle,
ArticleContent
From
Articles
Order By
(Case When ArticleTitle = 'Article Title' Then 1 Else 0 End) Desc,
(Case When ArticleTitle = 'Article' Then 1 Else 0 End) Desc,
(Case When ArticleTitle = 'Title' Then 1 Else 0 End) Desc,
(Case When Soundex('Article Title') = Soundex(ArticleTitle) Then 1 Else 0 End) Desc,
(Case When Soundex('Article') = Soundex(ArticleTitle) Then 1 Else 0 End) Desc,
(Case When Soundex('Title') = Soundex(ArticleTitle) Then 1 Else 0 End) Desc,
(Case When PatIndex('%Article%Title%', ArticleTitle) > 0 Then 1 Else 0 End) Desc,
(Case When PatIndex('%Article%', ArticleTitle) > 0 Then 1 Else 0 End) Desc,
(Case When PatIndex('%Title%', ArticleTitle) > 0 Then 1 Else 0 End) Desc,
(Case When PatIndex('%Article%Title%', ArticleContent) > 0 Then 1 Else 0 End) Desc,
(Case When PatIndex('%Article%', ArticleContent) > 0 Then 1 Else 0 End) Desc,
(Case When PatIndex('%Title%', ArticleContent) > 0 Then 1 Else 0 End) Desc
You can then add/remove case statements from the order by clause to improve the list based on your data.
A:
First of all you need to define what article similarity means.
For example you can associate some meta information with articles, like tags.
To be able to find similar articles you need to extract some features from them, for example you can build full text index.
You can take advantage of full text search capability of MSSQL 2005
-- Assuming @Title contains the title of the current article, you can find related articles by running this query
SELECT * FROM Articles WHERE CONTAINS(ArticleTitle, @Title)
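If the table has a full-text index on both columns, the engine can also do the relevance sorting for you. A rough sketch using FREETEXTTABLE (@UserInput holds the user's text):

    SELECT TOP 10 a.ArticleID, a.ArticleTitle, ft.RANK
    FROM FREETEXTTABLE(Articles, (ArticleTitle, ArticleContent), @UserInput) ft
    INNER JOIN Articles a ON a.ArticleID = ft.[KEY]
    ORDER BY ft.RANK DESC;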
A:
I think the question is what 'similar' means to you. If you create a field for users to input some kind of tags, it becomes much easier to query.
| "Similar Posts" like functionality using MS SQL Server? | I have lots of article store in MS SQL server 2005 database in a table called Articles-
"Articles (ArticleID, ArticleTitle, ArticleContent)"
Now I want some SP or SQL query which could return me similar Article against any user's input (very much like "Similar Posts" in blogs OR "Related Questions" in stackoverflow). The matching should work on both ArticleTitle and ArticleContent. The query should be intelligent enough to sort the result on the basis on their relevancy.
Is it possible to do this in MS SQL Server 2005?
| [
"Something like this might work, a kind of ranking system. You would probably have to split the string in your application to build a SQL string, but I have used similar to build an effective site search.\nSelect\nTop 10\nArticleID,\nArticleTitle,\nArticleContent\nFrom\nArticles\nOrder By\n(Case When ArticleTitle = 'Article Title' Then 1 Else 0 End) Desc,\n(Case When ArticleTitle = 'Article' Then 1 Else 0 End) Desc,\n(Case When ArticleTitle = 'Title' Then 1 Else 0 End) Desc,\n(Case When Soundex('Article Title') = Soundex(ArticleTitle) Then 1 Else 0 End) Desc,\n(Case When Soundex('Article') = Soundex(ArticleTitle) Then 1 Else 0 End) Desc,\n(Case When Soundex('Title') = Soundex(ArticleTitle) Then 1 Else 0 End) Desc,\n(Case When PatIndex('%Article%Title%', ArticleTitle) > 0 Then 1 Else 0 End) Desc,\n(Case When PatIndex('%Article%', ArticleTitle) > 0 Then 1 Else 0 End) Desc,\n(Case When PatIndex('%Title%', ArticleTitle) > 0 Then 1 Else 0 End) Desc,\n(Case When PatIndex('%Article%Title%', ArticleContent) > 0 Then 1 Else 0 End) Desc,\n(Case When PatIndex('%Article%', ArticleContent) > 0 Then 1 Else 0 End) Desc,\n(Case When PatIndex('%Title%', ArticleContent) > 0 Then 1 Else 0 End) Desc\n\nYou can then add/remove case statements from the order by clause to improve the list based on your data.\n",
"First of all you need to define what article similarity means.\nFor example you can associate some meta information with articles, like tags.\nTo be able to find similar articles you need to extract some features from them, for example you can build full text index.\nYou can take advantage of full text search capability of MSSQL 2005\n-- Assuming @Title contains title of current articles you can find related articles runnig this query \nSELECT * FROM Acticles WHERE CONTAINS(ArticleTitle, @Title)\n\n",
"I think the question is what 'similar' means to you. If you create a field for user to input some kind of tags, it becomes much more easier to query.\n"
] | [
1,
0,
0
] | [] | [] | [
"database",
"sql",
"sql_server"
] | stackoverflow_0000039240_database_sql_sql_server.txt |
Q:
Error: "VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results"
I'm running WAMP v2.0 on WindowsXP and I've got a bunch of virtual hosts setup in the http-vhosts.conf file.
This was working, but in the last week whenever I try & start WAMP I get this error in the event logs:
VirtualHost *:80 -- mixing * ports and
non-* ports with a NameVirtualHost
address is not supported, proceeding
with undefined results.
and the server won't start. I can't think of what's changed.
I've copied the conf file below.
NameVirtualHost *
<VirtualHost *:80>
ServerName dev.blog.slaven.net.au
ServerAlias dev.blog.slaven.net.au
ServerAdmin user@host.com
DocumentRoot "c:/Project Data/OtherProjects/slaven.net.au/blog/"
ErrorLog "logs/blog.slaven.localhost-error.log"
CustomLog "logs/blog.slaven.localhost-access.log" common
<Directory "c:/Project Data/OtherProjects/slaven.net.au/blog/">
Options Indexes FollowSymLinks MultiViews
AllowOverride all
Order allow,deny
Allow from all
</Directory>
</VirtualHost>
EDIT: I meant to add, if I change the NameVirtualHost directive to specify a port, i.e.
NameVirtualHost *:80
I get this error:
Only one usage of each socket address (protocol/network address/port) is normally permitted. : make_sock: could not bind to address 0.0.0.0:80
A:
NameVirtualHost *:80
I get this error:
Only one usage of each socket address (protocol/network address/port) is normally >permitted. : make_sock: could not bind to address 0.0.0.0:80
I think this might be because you have something else listening on port 80. Do you have any other servers (or, for example, Skype) running?
(If it was Skype: untick "Tools > Options > Advanced > Connection > Use port 80 and 443 as alternatives for incoming connections")
A:
Well, it seems the problem there is the way (and order) in which you assign the ports.
Basically, *:80 means "use port 80 for all hosts in this configuration". When you do this, Apache tries to bind that host to 0.0.0.0:80, which means that host will receive every single packet coming to the machine through port 80, regardless of what virtual host it was intended to go to. That's something you should use only once, and only if you have one host in that configuration.
Thus, if you have the same *:80 directive on two hosts in the configuration file, the server won't load because it will try to bind 0.0.0.0:80 twice, failing on the second try. (which explains the "Only one usage of each socket address (protocol/network address/port) is normally permitted. : make_sock: could not bind to address 0.0.0.0:80" message).
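In practice this means the NameVirtualHost directive and every <VirtualHost> block should use the same address:port pair, with a single Listen 80 in effect. A sketch of a consistent setup (Apache 2.2-era syntax, as shipped with WAMP 2.0):

    NameVirtualHost *:80

    <VirtualHost *:80>
        ServerName dev.blog.slaven.net.au
        DocumentRoot "c:/Project Data/OtherProjects/slaven.net.au/blog/"
    </VirtualHost>

    # If "could not bind to address 0.0.0.0:80" persists, some other process
    # owns port 80 (IIS, Skype, a second httpd). On Windows you can check with:
    #   netstat -ano | findstr :80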
| Error: "VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results" | I'm running WAMP v2.0 on WindowsXP and I've got a bunch of virtual hosts setup in the http-vhosts.conf file.
This was working, but in the last week whenever I try & start WAMP I get this error in the event logs:
VirtualHost *:80 -- mixing * ports and
non-* ports with a NameVirtualHost
address is not supported, proceeding
with undefined results.
and the server won't start. I can't think of what's changed.
I've copied the conf file below.
NameVirtualHost *
<VirtualHost *:80>
ServerName dev.blog.slaven.net.au
ServerAlias dev.blog.slaven.net.au
ServerAdmin user@host.com
DocumentRoot "c:/Project Data/OtherProjects/slaven.net.au/blog/"
ErrorLog "logs/blog.slaven.localhost-error.log"
CustomLog "logs/blog.slaven.localhost-access.log" common
<Directory "c:/Project Data/OtherProjects/slaven.net.au/blog/">
Options Indexes FollowSymLinks MultiViews
AllowOverride all
Order allow,deny
Allow from all
</Directory>
</VirtualHost>
EDIT: I meant to add, if I change the NameVirtualHosts directive to specify a port, i.e
NameVirtualHost *:80
I get this error:
Only one usage of each socket address (protocol/network address/port) is normally permitted. : make_sock: could not bind to address 0.0.0.0:80
| [
"\nNameVirtualHost *:80\nI get this error:\nOnly one usage of each socket address (protocol/network address/port) is normally >permitted. : make_sock: could not bind to address 0.0.0.0:80\n\nI think this might be because you have somthing else listening to port 80. Do you have any other servers (or for example Skype) running?\n(If it was Skype: untick \"Tools > Options > Advanced > Connection > Use port 80 and 443 as alternatives for incoming connections\")\n",
"Well, it seems the problem there is the way (and order) in which you assign the ports. \nBasically, *:80 means \"use port 80 for all hosts in this configuration\". When you do this, Apache tries to bind that host to 0.0.0.0:80, which means that host will receive every single packet coming to the machine through port 80, regardless of what virtual host it was intended to go to. That's something you should use only once, and only if you have one host in that configuration.\nThus, if you have the same *:80 directive on two hosts in the configuration file, the server won't load because it will try to bind 0.0.0.0:80 twice, failing on the second try. (which explains the \"Only one usage of each socket address (protocol/network address/port) is normally permitted. : make_sock: could not bind to address 0.0.0.0:80\" message).\n"
] | [
4,
0
] | [] | [] | [
"apache",
"wamp",
"windows"
] | stackoverflow_0000039006_apache_wamp_windows.txt |
Q:
Generics in Java
Is there any shorthand way of defining and using generic definitions without having to keep repeating a particular generic description such that if there is a change I don't have to change all definitions/usages though out the codebase for example is something like this possible:
Typedef myGenDef = < Object1, Object2 >;
HashMap< myGenDef > hm = new HashMap< myGenDef >();
for (Entry< myGenDef > ent : hm.entrySet())
{
.
.
.
}
A:
There's the pseudo-typedef antipattern...
class StringList extends ArrayList<String> { }
Good stuff, drink up! ;-)
As the article notes, this technique has some serious issues, primarily that this "typedef" is actually a separate class and thus cannot be used interchangeably with either the type it extends or other similarly defined types.
A:
In a generic method, you can use a limited form of type inference to avoid some repetition.
Example: if you have the function
<K, V> Map<K, V> getSomething() {
//...
}
you can use:
final Map<String, Object> something = getSomething();
instead of:
final Map<String, Object> something = this.<String, Object>getSomething();
A:
Use Factory Pattern for creation of Generics:
Method Sample:
public Map<String, Integer> createGenMap(){
return new HashMap<String,Integer>();
}
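Making the factory method generic goes one step further: the compiler infers the type arguments from the assignment target, so the parameters are written only once. A small sketch (the Maps class name is illustrative):

    import java.util.HashMap;
    import java.util.Map;

    public class Maps {
        public static <K, V> Map<K, V> newHashMap() {
            return new HashMap<K, V>();
        }

        public static void main(String[] args) {
            // <String, Integer> is inferred from the target type:
            Map<String, Integer> counts = Maps.newHashMap();
            counts.put("answers", 42);
        }
    }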
A:
The pseudo-typedef antipattern mentioned by Shog9 would work - though it's not recommended to use an ANTIPATTERN - but it does not address your intentions. The goal of pseudo-typedef is to reduce clutter in declaration and improve readability.
What you want is to be able to replace a group of generic declarations with one single definition. I think you have to stop and think: "in which ways is it valuable?". I mean, I can't think of a scenario where you would need this. Imagine class A:
class A {
private Map<String, Integer> values = new HashMap<String, Integer>();
}
Imagine now that I want to change the 'values' field to a Map with different type parameters. Why would there be many other fields scattered through the code that need the same change? As for the operations that use 'values', a simple refactoring would be enough.
A:
No. Though Groovy, a JVM language, is dynamically typed and would let you write:
def map = new HashMap<complicated generic expression>();
| Generics in Java | Is there any shorthand way of defining and using generic definitions without having to keep repeating a particular generic description such that if there is a change I don't have to change all definitions/usages though out the codebase for example is something like this possible:
Typedef myGenDef = < Object1, Object2 >;
HashMap< myGenDef > hm = new HashMap< myGenDef >();
for (Entry< myGenDef > ent : hm..entrySet())
{
.
.
.
}
| [
"There's the pseudo-typedef antipattern... \nclass StringList extends ArrayList<String> { }\n\nGood stuff, drink up! ;-)\nAs the article notes, this technique has some serious issues, primarily that this \"typedef\" is actually a separate class and thus cannot be used interchangeably with either the type it extends or other similarly defined types.\n",
"In a generic method, you can use a limited form of type inferrence to avoid some repetitions.\nExample: if you have the function\n <K, V> Map<K, V> getSomething() {\n //...\n }\n\nyou can use:\nfinal Map<String, Object> something = getsomething();\n\ninstead of:\nfinal Map<String, Object> something = this.<String, Object>getsomething();\n\n",
"Use Factory Pattern for creation of Generics:\nMethod Sample:\npublic Map<String, Integer> createGenMap(){\n return new HashMap<String,Integer>();\n\n }\n\n",
"The pseudo-typedef antipattern mentioned by Shog9 would work - though it's not recommended to use an ANTIPATTERN - but it does not address your intentions. The goal of pseudo-typedef is to reduce clutter in declaration and improve readability. \nWhat you want is to be able to replace a group of generics declarations by one single trade. I think you have to stop and think: \"in witch ways is it valuable?\". I mean, I can't think of a scenario where you would need this. Imagine class A:\nclass A {\n private Map<String, Integer> values = new HashMap<String, Integer>();\n}\n\nImagine now that I want to change the 'values' field to a Map. Why would exist many other fields scattered through the code that needs the same change? As for the operations that uses 'values' a simple refactoring would be enough.\n",
"No. Though, groovy, a JVM language, is dynamically typed and would let you write:\ndef map = new HashMap<complicated generic expression>();\n\n"
] | [
12,
4,
3,
2,
1
] | [] | [] | [
"generics",
"java"
] | stackoverflow_0000038068_generics_java.txt |
Q:
Integrating InstantRails with Aptana or any other IDE
So I've been using InstantRails to check out Ruby on rails. I've been using Notepad++ for the editing. Now I don't want to install Ruby or Rails on my machine. Is there any walk through/tutorial on how to integrate Radrails or Netbeans with InstantRails?
A:
Here's a tutorial: http://ruby.meetup.com/73/boards/view/viewthread?thread=2203432
(I don't know if it's any good.)
And here's one with InstantRails+Netbeans: https://web.archive.org/web/20100505044104/http://weblogs.java.net/blog/bleonard/archive/2007/03/instant_rails_w.html
A:
I recommend learning Rails and Ruby itself first, and then picking up something like InstantRails. Having too many layers when learning something new can make it hard to know what features are part of which language, and potentially confuse you when trying to determine where a bug is occurring.
| Integrating InstantRails with Aptana or any other IDE | So I've been using InstantRails to check out Ruby on rails. I've been using Notepad++ for the editing. Now I don't want to install Ruby or Rails on my machine. Is there any walk through/tutorial on how to integrate Radrails or Netbeans with InstantRails?
| [
"Here's a tutorial: http://ruby.meetup.com/73/boards/view/viewthread?thread=2203432\n(I don't know if it's any good.)\nAnd here's one with InstantRails+Netbeans: https://web.archive.org/web/20100505044104/http://weblogs.java.net/blog/bleonard/archive/2007/03/instant_rails_w.html\n",
"I recommend learning Rails and Ruby itself first, and then picking up something like InstantRails. Having too many layers when learning something new can make it hard to know what features are part of which language, and potentially confuse you when trying to determine where a bug is occurring.\n"
] | [
1,
0
] | [] | [] | [
"aptana",
"ide",
"radrails",
"ruby",
"ruby_on_rails"
] | stackoverflow_0000037573_aptana_ide_radrails_ruby_ruby_on_rails.txt |
Q:
What's the best way to persist data in a Java Desktop Application?
I have a large tree of Java Objects in my Desktop Application and am trying to decide on the best way of persisting them as a file to the file system.
Some thoughts I've had were:
Roll my own serializer using DataOutputStream: This would give me the greatest control of what was in the file, but at the cost of micromanaging it.
Straight old Serialization using ObjectOutputStream and its various related classes: I'm not sold on it though since I find the data brittle. Changing any object's structure breaks the serialized instances of it. So I'm locked in to what seems to be a horrible versioning nightmare.
XML Serialization: It's not as brittle, but it's significantly slower than straight serialization. It can be transformed outside of my program.
JavaDB: I'd considered this since I'm comfortable writing JDBC applications. The difference here is that the database instance would only persist while the file was being opened or saved. It's not pretty but... it does lend itself to migrating to a central server architecture if the need arises later and it introduces the possibility of querying the data model in a simpler way.
I'm curious to see what other people think. And I'm hoping that I've missed some obvious, and simpler approach than the ones above.
Here are some more options culled from the answers below:
An Object Database - Has significantly less infrastructure than ORM approaches and performs faster than an XML approach. thanks aku
A:
I would go for your final option, JavaDB (Sun's distribution of Derby), and use an object-relational layer like Hibernate or iBatis. Using the first three approaches means you are going to spend more time building a database engine than developing application features.
A:
Have a look at Hibernate as a simpler way to interface to a database.
A:
In my experience, you're probably better off using an embedded database. SQL, while less than perfect, is usually much easier than designing a file format that performs well and is reliable.
I haven't used JavaDB, but I've had good luck with H2 and SQLite. SQLite is a C library which means a little more work in terms of deployment. However, it has the benefit of storing the entire database in a single, cross-platform library. Basically, it is a pre-packaged, generic file format. SQLite has been so useful that I've even started using it instead of text files in scripts.
Be careful using Hibernate if you're working with a small persistence problem. It adds a lot of complexity and library overhead. Hibernate is really nice if you're working with a large number of tables, but it will probably be cumbersome if you only need a few tables.
A:
db4objects might be the best choice
A:
XStream from codehaus.org
XML serialization/deserialization largely without coding.
You can use annotations to tweak it.
Working well in two projects where I work.
See my users group presentation at http://cjugaustralia.org/?p=61
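A minimal sketch of what that looks like (XStream's core API; the MyRoot class and rootObject variable are illustrative):

    import com.thoughtworks.xstream.XStream;

    XStream xstream = new XStream();
    String xml = xstream.toXML(rootObject);            // object tree -> XML
    MyRoot restored = (MyRoot) xstream.fromXML(xml);   // XML -> object tree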
A:
I think it depends on what you need. Let's see the options:
1) Descarded imediatelly! I'll not even justify. :)
2) If you need a simple, quick, one-method persistence, stick with it. It will persist the complete data graph as it is! Beware of how long you'll be maintaning the persisted objects. As yourself pointed out, versioning can be a problem.
3) Slower than (2), need extra code and can be edited by the user. I would only use it the data is supposed to be used by a client in another language.
4) If you need to query your data in anyway, stick with the DB solution.
Well, I think you had already answered your question :)
| What's the best way to persist data in a Java Desktop Application? | I have a large tree of Java Objects in my Desktop Application and am trying to decide on the best way of persisting them as a file to the file system.
Some thoughts I've had were:
Roll my own serializer using DataOutputStream: This would give me the greatest control of what was in the file, but at the cost of micromanaging it.
Straight old Serialization using ObjectOutputStream and its various related classes: I'm not sold on it though since I find the data brittle. Changing any object's structure breaks the serialized instances of it. So I'm locked in to what seems to be a horrible versioning nightmare.
XML Serialization: It's not as brittle, but it's significantly slower that straight out serialization. It can be transformed outside of my program.
JavaDB: I'd considered this since I'm comfortable writing JDBC applications. The difference here is that the database instance would only persist while the file was being opened or saved. It's not pretty but... it does lend itself to migrating to a central server architecture if the need arises later and it introduces the possibility of quering the datamodel in a simpler way.
I'm curious to see what other people think. And I'm hoping that I've missed some obvious, and simpler approach than the ones above.
Here are some more options culled from the answers below:
An Object Database - Has significantly less infrastructure than ORM approaches and performs faster than an XML approach. thanks aku
| [
"I would go for the your final option JavaDB (Sun's distribution of Derby) and use an object relational layer like Hibernate or iBatis. Using the first three aproaches means you are going to spend more time building a database engine than developing application features.\n",
"Have a look at Hibernate as a simpler way to interface to a database.\n",
"In my experience, you're probably better off using an embedded database. SQL, while less than perfect, is usually much easier than designing a file format that performs well and is reliable.\nI haven't used JavaDB, but I've had good luck with H2 and SQLite. SQLite is a C library which means a little more work in terms of deployment. However, it has the benefit of storing the entire database in a single, cross-platform library. Basically, it is a pre-packaged, generic file format. SQLite has been so useful that I've even started using it instead of text files in scripts.\nBe careful using Hibernate if you're working with a small persistence problem. It adds a lot of complexity and library overhead. Hibernate is really nice if you're working with a large number of tables, but it will probably be cumbersome if you only need a few tables.\n",
"db4objects might be the best choice\n",
"XStream from codehaus.org\nXML serialization/deserialization largely without coding.\nYou can use annotations to tweak it.\nWorking well in two projects where I work.\nSee my users group presentation at http://cjugaustralia.org/?p=61\n",
"I think it depends on what you need. Let's see the options:\n1) Descarded imediatelly! I'll not even justify. :)\n2) If you need a simple, quick, one-method persistence, stick with it. It will persist the complete data graph as it is! Beware of how long you'll be maintaning the persisted objects. As yourself pointed out, versioning can be a problem.\n3) Slower than (2), need extra code and can be edited by the user. I would only use it the data is supposed to be used by a client in another language.\n4) If you need to query your data in anyway, stick with the DB solution.\nWell, I think you had already answered your question :)\n"
] | [
5,
4,
4,
3,
1,
0
] | [] | [] | [
"desktop",
"java",
"oop",
"persistence"
] | stackoverflow_0000037271_desktop_java_oop_persistence.txt |
Q:
Are there any tools to visualize template/class methods and their usage?
I have taken over a large code base and would like to get an overview how and where certain classes and their methods are used.
Is there any good tool that can somehow visualize the dependencies and draw a nice call tree or something similar?
The code is in C++ in Visual Studio if that helps narrow down any selection.
A:
Here are a few options:
CodeDrawer
CC-RIDER
Doxygen
The last one, doxygen, is more of an automatic documentation tool, but it is capable of generating dependency graphs and inheritance diagrams. It's also licensed under the GPL, unlike the first two which are not free.
A:
When I have used Doxygen it has produced a full list of callers and callees. I think you have to turn it on.
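The relevant switches live in the Doxyfile; a sketch of the ones involved (the graph options require Graphviz/dot):

    EXTRACT_ALL            = YES
    HAVE_DOT               = YES
    CALL_GRAPH             = YES
    CALLER_GRAPH           = YES
    REFERENCED_BY_RELATION = YES
    REFERENCES_RELATION    = YES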
A:
In Java I would start with JDepend. In .NET, with NDepend. Don't know about C++.
A:
David, thanks for the suggestions. I spent the weekend trialing the programs.
Doxygen seems to be the most comprehensive of the 3, but it still leaves some things to be desired in regard to callers of methods.
All 3 seem to have problems with C++ templates to varying degrees. CC-Rider simply crashed in the middle of the analysis and CodeDrawer does not show many of the relationships. Doxygen worked pretty well, but it too did not find and show all relations and instead overwhelmed me with lots of macro references until I filtered them out.
So, maybe I should clarify "large codebase" a bit for any further suggestions: >100k lines of code overall, spread out over more than 100 template files plus several actual class files pulling it all together.
Any other tools out there, that might be up to the task and could do better (more thoroughly)? Oh and specifically: anything that understands IDL and COM interfaces?
A:
When I have used Doxygen it has produced a full list of callers and callees. I think you have to turn it on.
I did that of course, but like I mentioned, doxygen does not consider interfaces between objects as they are defined in the IDL. It "only" shows direct C++ calls.
Don't get me wrong, it is already amazing what it does, but it is still not complete from my high level view trying to get a good understanding of how everything fits together.
| Are there any tools to visualize template/class methods and their usage? | I have taken over a large code base and would like to get an overview how and where certain classes and their methods are used.
Is there any good tool that can somehow visualize the dependencies and draw a nice call tree or something similar?
The code is in C++ in Visual Studio if that helps narrow down any selection.
| [
"Here are a few options:\n\nCodeDrawer\nCC-RIDER\nDoxygen\n\nThe last one, doxygen, is more of an automatic documentation tool, but it is capable of generating dependency graphs and inheritance diagrams. It's also licensed under the GPL, unlike the first two which are not free.\n",
"When I have used Doxygen it has produced a full list of callers and callees. I think you have to turn it on.\n",
"In Java I would start with JDepend. In .NET, with NDepend. Don't know about C++.\n",
"David, thanks for the suggestions. I spent the weekend trialing the programs.\nDoxygen seems to be the most comprehensive of the 3, but it still leaves some things to be desired in regard to callers of methods.\nAll 3 seem to have problems with C++ templates to varying degrees. CC-Rider simply crashed in the middle of the analysis and CodeDrawer does not show many of the relationships. Doxygen worked pretty well, but it too did not find and show all relations and instead overwhelmed me with lots of macro references until I filtered them out.\nSo, maybe I should clarify \"large codebase\" a bit for eventual other suggestions: >100k lines of code overall spread out over more than 100 template files plus several actual class files pulling it all together.\nAny other tools out there, that might be up to the task and could do better (more thoroughly)? Oh and specifically: anything that understands IDL and COM interfaces?\n",
"\nWhen I have used Doxygen it has produced a full list of callers and callees. I think you have to turn it on.\n\nI did that of course, but like I mentioned, doxygen does not consider interfaces between objects as they are defined in the IDL. It \"only\" shows direct C++ calls.\nDon't get me wrong, it is already amazing what it does, but it is still not complete from my high level view trying to get a good understanding of how everything fits together.\n"
] | [
1,
1,
0,
0,
0
] | [] | [] | [
"visualization"
] | stackoverflow_0000035745_visualization.txt |
Q:
Oracle write to file
I am running Oracle and have a query which pulls some results from the database. I would like to write the results as a text file. How would I go about doing this?
My preferred way would be by using UTL_FILE. Would someone have an example of how to do this?
A:
If you're using SQL*Plus, it is as easy as:
SQL> spool c:\temp\out.txt
SQL> SELECT * FROM USERS;
SQL> spool off
This three sentences will output the result of the query "SELECT * FROM USERS" to the file c:\temp\out.txt.
You can format this query using the string manipulation functions of Oracle.
A:
If you are using PL/SQL then you can use the UTL_FILE package; the difference from using SQL*Plus spool is that the files are written to the server file system. UTL_FILE has a number of limitations, so an alternative on the server side would be to use Java stored procedures.
A:
Use UTL_FILE in combination with CREATE DIRECTORY for ease of mapping a directory path to a name (it does not create the actual directory, just a reference to it, so ensure it is created first).
example
create directory logfile as 'd:\logfile'; -- must have priv to do this
declare
vFile utl_file.file_type;
begin
  vFile := utl_file.fopen('LOGFILE', 'syslog', 'w'); -- w is write; the directory name is passed as a string. This returns a file handle
utl_file.put(vFile,'Start Logfile'); -- note use of file handle vFile
utl_file.fclose(vFile); -- note use of file handle vFile
end;
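Building on that, a sketch that writes actual query results out (it assumes the logfile directory object created above and the classic emp demo table; names are illustrative):

    declare
      vFile utl_file.file_type;
    begin
      vFile := utl_file.fopen('LOGFILE', 'results.txt', 'w');
      for rec in (select empno, ename from emp) loop
        utl_file.put_line(vFile, rec.empno || ',' || rec.ename);
      end loop;
      utl_file.fclose(vFile);
    end;
    /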
A:
If you're running the query from sqlplus you can use the spool command:
spool /tmp/test.spool
After executing the spool command within a session, all output is sent to the sqlplus console as well as the /tmp/test.spool text file.
A:
This seems to be a reasonable tutorial with a few simple examples UTL_FILE example
| Oracle write to file | I am running oracle and have a query which pulls some results from the database. I would like to write the results as a text file. How would I go about doing this?
My prefered way would be by using UTL_FILE. Would some one have an example of how to do this?
| [
"If you're using Sql Plus, is as easy as:\n\nSQL> spool c:\\temp\\out.txt\nSQL> SELECT * FROM USERS;\nSQL> spool off\n\nThis three sentences will output the result of the query \"SELECT * FROM USERS\" to the file c:\\temp\\out.txt.\nYou can format this query using the string manipulation functions of Oracle.\n",
"If you are using PL/SQL then you can use the UTL_FILE package, the difference from using sql+ spool is that the files are written to the server file system. UTL_FILE has a number of limitations so an alternative on the server side would be to use Java stored procedures.\n",
"Use UTL_FILE in combination with CREATE DIRECTORY for ease of mapping a directory path with a name (it does not create the actual directory just a reference to it so ensure it is created first)\nexample\n\n create directory logfile as 'd:\\logfile'; -- must have priv to do this\n\ndeclare\n vFile utl_file.file_type;\nbegin\n vFile := utl_file.fopen(logfile ,'syslog','w'); -- w is write. This returns file handle\n utl_file.put(vFile,'Start Logfile'); -- note use of file handle vFile\n utl_file.fclose(vFile); -- note use of file handle vFile\nend;\n\n\n",
"If you're running the query from sqlplus you can use the spool command:\n\nspool /tmp/test.spool\n\nAfter executing the spool command within a session, all output is sent to the sqlplus console as well as the /tmp/test.spool text file.\n",
"This seems to be a reasonable tutorial with a few simple examples UTL_FILE example\n"
] | [
5,
4,
2,
1,
0
] | [] | [] | [
"oracle"
] | stackoverflow_0000027562_oracle.txt |
Q:
Does new URL(...).openConnection() necessarily imply a POST?
If I create an HTTP java.net.URL and then call openConnection() on it, does it necessarily imply that an HTTP post is going to happen? I know that openStream() implies a GET. If so, how do you perform one of the other HTTP verbs without having to work with the raw socket layer?
A:
If you retrieve the URLConnection object using openConnection() it doesn't actually start communicating with the server. That doesn't happen until you get a stream from the URLConnection. When you first get the connection you can add/change headers and other connection properties before actually opening it.
URLConnection's life cycle is a bit odd. It doesn't send the headers to the server until you've gotten one of the streams. If you just get the input stream then I believe it does a GET, sends the headers, then lets you read the output. If you get the output stream then I believe it sends it as a POST, as it assumes you'll be writing data to it (You may need to call setDoOutput(true) for the output stream to work). As soon as you get the input stream the output stream is closed and it waits for the response from the server.
For example, this should do a POST:
URL myURL = new URL("http://example.com/my/path");
URLConnection conn = myURL.openConnection();
conn.setDoOutput(true);
conn.setDoInput(true);
OutputStream os = conn.getOutputStream();
os.write("Hi there!");
os.close();
InputStream is = conn.getInputStream();
// read stuff here
While this would do a GET:
URL myURL = new URL("http://example.com/my/path");
URLConnection conn = myURL.openConnection();
conn.setDoOutput(false);
conn.setDoInput(true);
InputStream is = conn.getInputStream();
// read stuff here
URLConnection will also do other weird things. If the server specifies a content length then URLConnection will keep the underlying input stream open until it receives that much data, even if you explicitly close it. This caused a lot of problems for us, as it made shutting our client down cleanly a bit hard, since the URLConnection would keep the network connection open. This problem probably exists even if you just use getInputStream(), though.
A:
No it does not. But if the protocol of the URL is HTTP, you'll get a HttpURLConnection as a return object. This class has a setRequestMethod method to specify which HTTP method you want to use.
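For example, a minimal sketch of issuing another verb (the URL is a placeholder):
URL url = new URL("http://example.com/my/path");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("DELETE"); // also GET, POST, HEAD, OPTIONS, PUT, TRACE
int status = conn.getResponseCode(); // this call is what actually fires the request
conn.disconnect();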
If you want to do more sophisticated stuff you're probably better off using a library like Jakarta HttpClient.
| Does new URL(...).openConnection() necessarily imply a POST? | If I create an HTTP java.net.URL and then call openConnection() on it, does it necessarily imply that an HTTP post is going to happen? I know that openStream() implies a GET. If so, how do you perform one of the other HTTP verbs without having to work with the raw socket layer?
| [
"If you retrieve the URLConnection object using openConnection() it doesn't actually start communicating with the server. That doesn't happen until you get the stream from the URLConnection(). When you first get the connection you can add/change headers and other connection properties before actually opening it.\nURLConnection's life cycle is a bit odd. It doesn't send the headers to the server until you've gotten one of the streams. If you just get the input stream then I believe it does a GET, sends the headers, then lets you read the output. If you get the output stream then I believe it sends it as a POST, as it assumes you'll be writing data to it (You may need to call setDoOutput(true) for the output stream to work). As soon as you get the input stream the output stream is closed and it waits for the response from the server.\nFor example, this should do a POST:\nURL myURL = new URL(\"http://example.com/my/path\");\nURLConnection conn = myURL.openConnection();\nconn.setDoOutput(true);\nconn.setDoInput(true);\n\nOutputStream os = conn.getOutputStream();\nos.write(\"Hi there!\");\nos.close();\n\nInputStream is = conn.getInputStream();\n// read stuff here\n\nWhile this would do a GET:\nURL myURL = new URL(\"http://example.com/my/path\");\nURLConnection conn = myURL.openConnection();\nconn.setDoOutput(false);\nconn.setDoInput(true);\n\nInputStream is = conn.getInputStream();\n// read stuff here\n\nURLConnection will also do other weird things. If the server specifies a content length then URLConnection will keep the underlying input stream open until it receives that much data, even if you explicitly close it. This caused a lot of problems for us as it made shutting our client down cleanly a bit hard, as the URLConnection would keep the network connection open. This probably probably exists even if you just use getStream() though.\n",
"No it does not. But if the protocol of the URL is HTTP, you'll get a HttpURLConnection as a return object. This class has a setRequestMethod method to specify which HTTP method you want to use. \nIf you want to do more sophisticated stuff you're probably better off using a library like Jakarta HttpClient.\n"
] | [
17,
3
] | [] | [] | [
"http",
"java",
"url"
] | stackoverflow_0000039391_http_java_url.txt |
Q:
How can I expose only a fragment of IList<>?
I have a class property exposing an internal IList<> through
System.Collections.ObjectModel.ReadOnlyCollection<>
How can I pass a part of this ReadOnlyCollection<> without copying elements into a new array (I need a live view, and the target device is short on memory)? I'm targeting Compact Framework 2.0.
A:
Try a method that returns an enumeration using yield:
IEnumerable<T> FilterCollection<T>( ReadOnlyCollection<T> input ) {
foreach ( T item in input )
if ( /* criterion is met */ )
yield return item;
}
A:
These foreach samples are fine, though you can make them much more terse if you're using .NET 3.5 and LINQ:
return FullList.Where(i => IsItemInPartialList(i)).ToList();
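If you need the no-copy "live view" the question asks for, drop the ToList() call - Where() by itself is lazily evaluated (IsItemInPartialList is the same placeholder predicate as above):
IEnumerable<string> partial = FullList.Where(i => IsItemInPartialList(i)); // nothing is copied until enumeration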
A:
You can always write a class that implements IList and forwards all calls to the original list (so it doesn't have it's own copy of the data) after translating the indexes.
A:
You could use yield return to create a filtered list
IEnumerable<object> FilteredList()
{
foreach( object item in FullList )
{
        if( IsItemInPartialList( item ) )
yield return item;
}
}
A:
Depending on how you need to filter the collection, you may want to create a class that implements IList (or IEnumerable, if that works for you) but that mucks about with the indexing and access to only return the values you want. For example
class EvenList: IList
{
private IList innerList;
public EvenList(IList innerList)
{
this.innerList = innerList;
}
public object this[int index]
{
        get { return innerList[2*index]; }
        set { innerList[2*index] = value; }
}
// and similarly for the other IList methods
}
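Hypothetical usage - since the wrapper never copies, changes to the inner list show through (GetMasterList is a placeholder for however you obtain the full list):
IList full = GetMasterList();
IList evens = new EvenList(full); // live view over the even indexes
object first = evens[0];          // actually reads full[0] through the wrapper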
A:
How do the filtered elements need to be accessed? If it's through an Iterator then maybe you could write a custom iterator that skips the elements you don't want publicly visible?
If you need to provide a Collection then you might need to write your own Collection class, which just proxies to the underlying Collection, but prevents access to the elements you don't want publicly visible.
(Disclaimer: I'm not very familiar with C#, so these are general answers. There may be more specific answers to C# that work better)
| How can I expose only a fragment of IList<>? | I have a class property exposing an internal IList<> through
System.Collections.ObjectModel.ReadOnlyCollection<>
How can I pass a part of this ReadOnlyCollection<> without copying elements into a new array (I need a live view, and the target device is short on memory)? I'm targetting Compact Framework 2.0.
| [
"Try a method that returns an enumeration using yield:\nIEnumerable<T> FilterCollection<T>( ReadOnlyCollection<T> input ) {\n foreach ( T item in input )\n if ( /* criterion is met */ )\n yield return item;\n}\n\n",
"These foreach samples are fine, though you can make them much more terse if you're using .NET 3.5 and LINQ:\nreturn FullList.Where(i => IsItemInPartialList(i)).ToList();\n\n",
"You can always write a class that implements IList and forwards all calls to the original list (so it doesn't have it's own copy of the data) after translating the indexes.\n",
"You could use yield return to create a filtered list\nIEnumerable<object> FilteredList()\n{\n foreach( object item in FullList )\n {\n if( IsItemInPartialList( item )\n yield return item;\n }\n}\n\n",
"Depending on how you need to filter the collection, you may want to create a class that implements IList (or IEnumerable, if that works for you) but that mucks about with the indexing and access to only return the values you want. For example\nclass EvenList: IList\n{\n private IList innerList;\n public EvenList(IList innerList)\n {\n this.innerList = innerList;\n }\n\n public object this[int index]\n {\n get { return innerList[2*i]; }\n set { innerList[2*i] = value; }\n }\n // and similarly for the other IList methods\n}\n\n",
"How do the filtered elements need to be accessed? If it's through an Iterator then maybe you could write a custom iterator that skips the elements you don't want publicly visible?\nIf you need to provide a Collection then you might need to write your own Collection class, which just proxies to the underlying Collection, but prevents access to the elements you don't want publicly visible.\n(Disclaimer: I'm not very familiar with C#, so these are general answers. There may be more specific answers to C# that work better)\n"
] | [
15,
8,
1,
1,
1,
0
] | [] | [] | [
".net_2.0",
"c#",
"compact_framework",
"windows_mobile"
] | stackoverflow_0000039447_.net_2.0_c#_compact_framework_windows_mobile.txt |
Q:
Database exception handling best practices
How do you handle database exceptions in your application?
Are you trying to validate data prior to passing it to the DB or just relying on the DB schema validation logic?
Do you try to recover from some kind of DB errors (e.g. timeouts)?
Here are some approaches:
Validate data prior to passing it to the DB
Leave validation to the DB and handle DB exceptions properly
Validate on both sides
Validate some obvious constraints in business logic and leave complex validation to the DB
What approach do you use? Why?
Updates:
I'm glad to see growing discussion.
Let’s try to sum up community answers.
Suggestions:
Validate on both sides
Check business logic constraints on
client side, let DB do integrity checks from hamishmcn
Check early to avoid bothering DB from ajmastrean
Check early to improve user experience from Will
Keep DB interacting code in place to
simplify development from hamishmcn
Object-relational mapping (NHibernate, Linq, etc.) can help you to deal with constrains from ajmastrean
Client side validation is necessary for security reasons from Seb Nilsson
Do you have anything else to say? This has turned into a validation-specific question. We are still missing the core, i.e. database-error best practices: which errors to handle and which ones to bubble up?
A:
@aku: DRY is nice, but it's not always possible. Validation is one of those places, as you will have three completely different and unrelated places where validation is not only possible but absolutely needed: within the UI, within the business logic, and within the database.
Think of a web application. You want to reduce trips to the server, so you include javascript validation of client data entry. But you can't trust what the user enters, so you must perform validation within your business logic before touching the database. And the database must have its own validation in order to prevent data corruption.
There's no clean way to unify these three different types of validation within a single component.
There are some attempts being made to unify cross-cutting responsibilities like validation within policy injectors like the P&P group's Policy Injection Application Block combined with their Validation Application Block, but these are still code based. If you have validation that's not in code, you still have to maintain parallel logic separately...
A:
There is one killer-reason to validate on both the client-side and on the database-side, and that is security. Especially when you start using AJAX-stuff, hackable URLs and other things that make your site (in this case) more friendly to users and hackers.
Validate on the client to provide a smooth experience and to tell the user early to correct their input. Also validate in the database (or in business logic, if this is considered a totally secure gateway to the database) for the security of your database.
A:
You want to reduce unnecessary trips to the DB, so performing validation within the application is a good practice. Also, it allows you to handle data errors where they are easiest to recover from: up near the UI (whether in the controller or within the UI layer for simpler apps) where the data is entered.
There are some data errors that you can't check for programmatically, however. For instance, you can't validate data on the existence of related data without roundtripping to the db. Data errors like these should be validated by the database through the use of relationships, triggers, etc.
Where you deal with errors returned by database calls is an interesting one. You could deal with them at the data layer, the business logic layer, or the UI layer. The best practice in this instance is to let those errors bubble up to the last responsible moment before handling them.
For example, if you have an ASP.NET MVC web application, you have three layers (from bottom to top): Database, controller and UI (model, controller, and view). Any errors thrown by your data layer should be allowed to bubble up into your controller. At this level your application "knows" what the user is attempting to do, and can correctly inform the user about the error, suggesting different ways to handle it. Attempting to recover from these errors within the data layer makes it much harder to know what's going on within the controller. And, of course, placing business logic within the UI is not considered a best practice.
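A minimal sketch of that idea in a controller action - the Widget model and _repository are hypothetical, and SqlException comes from System.Data.SqlClient:
public ActionResult Save(Widget input)
{
    try
    {
        _repository.Save(input); // the data layer just lets exceptions bubble up
    }
    catch (SqlException ex)
    {
        // The controller knows what the user was doing, so it can explain the failure
        ModelState.AddModelError("", "Could not save the widget: " + ex.Message);
        return View(input);
    }
    return RedirectToAction("Index");
}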
TL;DR: Validate everywhere, handle validation errors at the last responsible moment.
A:
I try to validate on both sides. One rule I always follow is never trust input from the user. Following this to its conclusion, I will usually have some front-end validation on the form/web page which will not even allow submission with improperly formed data. This is a blunt tool - meaning you can check/parse the value to make sure a date field contains a date. From there, I usually let my business logic check whether the data entry makes sense in context with how it was submitted. For example, does the date submitted fall into the expected range? Does the currency value submitted fall into the expected range? Finally, on the server side, foreign key constraints and indexes can catch any errors that slip through, which will bubble up a DB exception as a last resort that can be handled by the app code. I use this method because it filters out as many errors as possible before the DB call is invoked.
A:
An object-relational mapping (ORM) tool, like NHibernate (or better yet, ActiveRecord), can help you avoid a lot of validation by allowing the data model to be built right into your code as a proper C# class. You may avoid trips to the database as well, thanks to great caching and validation models built into the framework.
A:
In general, I try to validate data as soon as possible after it has been entered. This is so that I can give helpful messages to the user earlier than after they have clicked "submit" or the equivalent.
By the time it comes to making the db call, I am hopeful that the data I am passing is fairly good.
I try to keep db calls in one file (or group of files) that share helper methods, to make it as easy as possible for the programmer (me or whoever else adds calls) to log details about the exception, what parameters were passed in, etc.
A:
The sorts of apps that I was writing (I've since moved jobs) were in-house fat-client apps.
I would try to keep the business logic in the client, and do more mechanical validation on the db (i.e. validation that related only to the procedure's ability to run, as opposed to higher-level validation).
In short, validate where you can, and try to keep related types of validation together.
| Database exception handling best practices | How do you handle database exceptions in your application?
Are you trying to validate data prior passing it to DB or just relying on DB schema validation logic?
Do you try to recover from some kind of DB errors (e.g. timeouts)?
Here are some approaches:
Validate data prior passing it to DB
Left validation to DB and handle DB exceptions properly
Validate on both sides
Validate some obvious constraints in business logic and left complex validation to DB
What approach do you use? Why?
Updates:
I'm glad to see growing discussion.
Let’s try to sum up community answers.
Suggestions:
Validate on both sides
Check business logic constraints on
client side, let DB do integrity checks from hamishmcn
Check early to avoid bothering DB from ajmastrean
Check early to improve user experience from Will
Keep DB interacting code in place to
simplify development from hamishmcn
Object-relational mapping (NHibernate, Linq, etc.) can help you to deal with constrains from ajmastrean
Client side validation is necessary for security reasons from Seb Nilsson
Do you have anything else to say? This is converted to Validation specific question. We are missing the core, i.e. "Database related Error best practices" which ones to handle and Which ones to Bubble up?
| [
"@aku: DRY is nice, but its not always possible. Validation is one of those places, as you will have three completely different and unrelated places where validation is not only possible but absolutely needed: Within the UI, within the business logic, and within the database.\nThink of a web application. You want to reduce trips to the server, so you include javascript validation of client data entry. But you can't trust what the user enters, so you must perform validation within your business logic before touching the database. And the database must have its own validation in order to prevent data corruption. \nThere's no clean way to unify these three different types of validation within a single component. \nThere are some attempts being made to unify cross-cutting responsibilities like validation within policy injectors like the P&P group's Policy Injection Application Block combined with their Validation Application Block, but these are still code based. If you have validation that's not in code, you still have to maintain parallel logic separately...\n",
"There is one killer-reason to validate on both the client-side and on the database-side, and that is security. Especially when you start using AJAX-stuff, hackable URLs and other things that make your site (in this case) more friendly to users and hackers.\nValidate on the client to provide a smooth experience to early tell the user to correct their input. Also validate in database, (or in business logic, if this is considered a totally secure gateway to the database) for security for you database.\n",
"You want to reduce unnecessary trips to the DB, so performing validation within the application is a good practice. Also, it allows you to handle data errors where it is most easy to recover from: up near the UI (whether in the controller or within the UI layer for simpler apps) where the data is entered.\nThere are some data errors that you can't check for programatically, however. For instance, you can't validate data on the existance of related data without roundtripping to the db. Data errors like these should be validated by the database through the use of relationships, triggers, etc.\nWhere you deal with errors returned by database calls is an interesting one. You could deal with them at the data layer, the business logic layer, or the UI layer. The best practice in this instance is to let those errors bubble up to the last responsible moment before handling them.\nFor example, if you have an ASP.NET MVC web application, you have three layers (from bottom to top): Database, controller and UI (model, controller, and view). Any errors thrown by your data layer should be allowed to bubble up into your controller. At this level your application \"knows\" what the user is attempting to do, and can correctly inform the user about the error, suggesting different ways to handle it. Attempting to recover from these errors within the data layer makes it much harder to know what's going on within the controller. And, of course, placing business logic within the UI is not considered a best practice.\nTL;DR: Validate everywhere, handle validation errors at the last responsible moment.\n",
"I try to validate on both sides. 1 rule I always follow is never trust input from the user. Following this to it's conclusion, I will usually have some front end validation on the form/web page which will not even allow submission with improperly formed data. This is a blunt tool - meaning you can check/parse the value to make sure a date field contains a date. From there, I usually let my business logic check as to whether the data entry makes sense in context with how it was submitted. For example, does the date submitted fall into the expected range? Does the currency value submitted fall into the expected range? Finally, on the server side, Foreign Key constraints and Indexes can catch any errors that slip through, which will bubble up a DB exception as a last resort, which can be handled by the app code. I use this method because it filters out as many errors as possible before the DB call is invoked.\n",
"An object-relational mapping (ORM) tool, like NHibernate (or better yet, ActiveRecord), can help you avoid a lot of validation by allowing the data model to be built right into your code as a proper C# class. You may avoid trips to the database as well, thanks to great caching and validation models built into the framework.\n",
"In general, I try to validate data as soon as possible after it has been entered. This is so that I can give helpful messages to the user earlier than after they have clicked \"submit\" or the equivalent.\nBy the time that it comes to making the db call I am hopefull that the data I am passing should be fairly good.\nI try to keep db calls in the one file (or group of files) that share helper methods make it as easy as possible for the programmer (me or whoever else adds calls) to write to a log details about the exception, and what parameters were passed in etc \n",
"The sorts of apps that I was writing (I've since moved jobs) were in-house fat-client apps.\nI would try to keep the business logic in the client, and do more mechanical validation on the db (ie validation that only related to the procedure's ability to run, as opposed to higher level validation).\nIn short, validate where you can, and try to keep related types of validation together.\n"
] | [
6,
4,
3,
2,
2,
1,
0
] | [] | [] | [
"architecture",
"database",
"exception"
] | stackoverflow_0000039371_architecture_database_exception.txt |
Q:
Directory picker for Visual Basic macro in MS Outlook 2007
I wrote a Visual Basic macro for archiving attachments for Outlook 2007, but did not find a totally satisfactory way for showing a directory picker from the Outlook macro. Now, I don't know much about either Windows APIs or VB(A) programming, but the "standard" Windows file dialog I see most often in Microsoft applications would seem like an obvious choice, but it does not seem to be easily available from Outlook's macros.
Ideally, the directory picker should at least allow to manually paste a file path/URI as a starting point for navigation, since I sometimes already have an Explorer window open for the same directory.
What are the best choices for directory pickers in Outlook macros?
Two things I already tried and did not find totally satisfactory are (the code is simplified and w/o error handling and probably also runs in older Outlook versions):
1) Using Shell.Application which does not allow me to actually paste a starting point via the clipboard or do other operations like renaming folders:
Set objShell = CreateObject("Shell.Application")
sMsg = "Select a Folder"
cBits = 1
xRoot = 17
Set objBFF = objShell.BrowseForFolder(0, sMsg, cBits, xRoot)
path = objBFF.self.Path
2) Using the Office.FileDialog from Microsoft Word 12.0 Object Library (via tools/references) and then using Word's file dialog, which somehow takes forever on my Vista system to appear and does not always actually bring Word to the foreground. Instead, sometimes Outlook is blocked and the file dialog is left lingering somewhere in the background:
Dim objWord As Word.Application
Dim dlg As Office.FileDialog
Set objWord = GetObject(, "Word.Application")
If objWord Is Nothing Then
Set objWord = CreateObject("Word.Application")
End If
objWord.Activate
Set dlg = objWord.FileDialog(msoFileDialogFolderPicker)
path = dlg.SelectedItems(1)
Any other ideas?
A:
Your best bet will probably be to use the Win32 API for this. See this MSDN article for sample VBA code on how to interact with the API.
The article outlines a few different techniques, but I'd suggest searching the article for "COMDLG32.dll" and following the steps outlined in that section.
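For reference, here is a sketch of the classic shell32 folder-picker wrapper that this kind of article describes. It assumes 32-bit Office; the constant and buffer size are the usual Win32 values, and production code should also free the returned pidl via CoTaskMemFree:
Private Type BROWSEINFO
    hOwner As Long
    pidlRoot As Long
    pszDisplayName As String
    lpszTitle As String
    ulFlags As Long
    lpfn As Long
    lParam As Long
    iImage As Long
End Type

Private Declare Function SHBrowseForFolder Lib "shell32.dll" _
    Alias "SHBrowseForFolderA" (lpBrowseInfo As BROWSEINFO) As Long
Private Declare Function SHGetPathFromIDList Lib "shell32.dll" _
    Alias "SHGetPathFromIDListA" (ByVal pidl As Long, ByVal pszPath As String) As Long

Public Function BrowseForFolder(ByVal sPrompt As String) As String
    Dim bi As BROWSEINFO
    Dim pidl As Long
    Dim sPath As String

    bi.lpszTitle = sPrompt
    bi.ulFlags = &H1                     ' BIF_RETURNONLYFSDIRS
    pidl = SHBrowseForFolder(bi)
    If pidl <> 0 Then
        sPath = String$(260, vbNullChar) ' MAX_PATH buffer
        If SHGetPathFromIDList(pidl, sPath) <> 0 Then
            BrowseForFolder = Left$(sPath, InStr(sPath, vbNullChar) - 1)
        End If
    End If
End Function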
| Directory picker for Visual Basic macro in MS Outlook 2007 | I wrote a Visual Basic macro for archiving attachments for Outlook 2007, but did not find a totally satisfactory way for showing a directory picker from the Outlook macro. Now, I don't know much about either Windows APIs or VB(A) programming, but the "standard" Windows file dialog I see most often in Microsoft applications would seem like an obvious choice, but it does not seem to be easily available from Outlook's macros.
Ideally, the directory picker should at least allow to manually paste a file path/URI as a starting point for navigation, since I sometimes already have an Explorer window open for the same directory.
What are the best choices for directory pickers in Outlook macros?
Two things I already tried and did not find totally satisfactory are (the code is simplified and w/o error handling and probably also runs in older Outlook versions):
1) Using Shell.Application which does not allow me to actually paste a starting point via the clipboard or do other operations like renaming folders:
Set objShell = CreateObject("Shell.Application")
sMsg = "Select a Folder"
cBits = 1
xRoot = 17
Set objBFF = objShell.BrowseForFolder(0, sMsg, cBits, xRoot)
path = objBFF.self.Path
2) Using the Office.FileDialog from Microsoft Word 12.0 Object Library (via tools/references) and then using Word's file dialog, which somehow takes forever on my Vista system to appear and does not always actually bring Word to the foreground. Instead, sometimes Outlook is blocked and the file dialog is left lingering somewhere in the background:
Dim objWord As Word.Application
Dim dlg As Office.FileDialog
Set objWord = GetObject(, "Word.Application")
If objWord Is Nothing Then
Set objWord = CreateObject("Word.Application")
End If
objWord.Activate
Set dlg = objWord.FileDialog(msoFileDialogFolderPicker)
path = dlg.SelectedItems(1)
Any other ideas?
| [
"Your best bet will probably be to use the Windows32 API for this. See this MSDN article for sample VBA code on how to interact with the API.\nThe article outlines a few different techniques, but I'd suggest searching the article for \"COMDLG32.dll\" and following the steps outlined in that section.\n"
] | [
2
] | [] | [] | [
"filedialog",
"outlook",
"outlook_2007",
"vba"
] | stackoverflow_0000039233_filedialog_outlook_outlook_2007_vba.txt |
Q:
Strange Rails Authentication Issue
I'm using the RESTful authentication Rails plugin for an app I'm developing.
I'm having a strange issue I can't get to the bottom of.
Essentially, the first time I log into the app after a period of inactivity (the app is deployed in production, but only being used by me), I will be brought to a 404 page, but if I go back to the home page and log in again, everything works according to plan.
Any ideas?
A:
Please check your routes.
Not all routes are created equally. Routes have priority defined by the order of appearance of the routes in the config/routes.rb file. The priority goes from top to bottom. The last route in that file is at the lowest priority and will be applied last. If no route matches, 404 is returned.
More info: http://api.rubyonrails.org/classes/ActionController/Routing.html
A:
I'm using a slightly modified version of that plugin so I'm not 100% sure that this will be the same for you, but for me the default is to redirect to the root path, or the page you were trying to get to if there is one. (check your lib/authenticated_system.rb to see your default) If you don't have map.root defined in your routes, I believe that would cause the error you're describing -- it wouldn't find root_path at first but if you tried "from" a page in your app it would redirect to that page.
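For reference, a sketch of the Rails 2.x-era route that defines root_path (the controller name is a placeholder):
# config/routes.rb
ActionController::Routing::Routes.draw do |map|
  map.root :controller => 'home', :action => 'index' # gives root_path a target
  # remaining routes, highest priority first
end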
Let us know what happens with this one if you would, I'm curious to see what this ends up being in case I run into it in the future. :)
| Strange Rails Authentication Issue | I'm using the RESTful authentication Rails plugin for an app I'm developing.
I'm having a strange issue I can't get to the bottom of.
Essentially, the first time I log into the app after a period of inactivity (the app is deployed in production, but only being used by me), I will be brought to a 404 page, but if I go back to the home page and log in again, everything works according to plan.
Any ideas?
| [
"Please check your routes.\nNot all routes are created equally. Routes have priority defined by the order of appearance of the routes in the config/routes.rb file. The priority goes from top to bottom. The last route in that file is at the lowest priority and will be applied last. If no route matches, 404 is returned.\nMore info: http://api.rubyonrails.org/classes/ActionController/Routing.html\n",
"I'm using a slightly modified version of that plugin so I'm not 100% sure that this will be the same for you, but for me the default is to redirect to the root path, or the page you were trying to get to if there is one. (check your lib/authenticated_system.rb to see your default) If you don't have map.root defined in your routes, I believe that would cause the error you're describing -- it wouldn't find root_path at first but if you tried \"from\" a page in your app it would redirect to that page.\nLet us know what happens with this one if you would, I'm curious to see what this ends up being in case I run into it in the future. :)\n"
] | [
2,
1
] | [] | [] | [
"authentication",
"plugins",
"rest",
"ruby",
"ruby_on_rails"
] | stackoverflow_0000038901_authentication_plugins_rest_ruby_ruby_on_rails.txt |
Q:
Did Installing OneCare cause a "Generating user instances in SQL Server is disabled" error?
Did Installing OneCare cause a "Generating user instances in SQL Server is disabled" error?
The only change that I've made to my computer is uninstalling AVG and installing the trial for Microsoft OneCare. Did OneCare change the SQLServer installation somehow?
This is a very "odd" question but is something I would post on EE in hopes of someone having had the same issue and giving their solution.
A:
I would look more at the uninstalling of AVG as the culprit. OneCare does not care about or even notice SQL Server instances as far as I can tell, whereas AVG does.
I would look into your SQL Server instance and check the jobs. One or more may have been added by AVG. You should remove them. You might also want to drop the AVG database. Just to be sure.
Note: I have never uninstalled AVG. I just noticed some of what it did to my database when my SysAdmin installed it. Being an Accidental DBA, I haven't had the time to properly evaluate its actions.
A:
The problem is your connection string. When using SQL Express you can set it to run user instances so that each application has its own instance of SQL Server. Just set the option to false in your connection string and the problem should disappear.
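For reference, both sides of that setting - re-enabling user instances on the server, or turning them off in the connection string (database and file names are placeholders):
-- run against the SQLEXPRESS instance with admin rights
exec sp_configure 'user instances enabled', 1;
reconfigure;

Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\App.mdf;Integrated Security=True;User Instance=False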
A:
I didn't see anything odd in the event viewer or any DBs for AVG in SQL Server. BTW, I installed SQL Server after AVG. It's curious anyway. I'll just make a VM and do a fresh install of SQL Express so I can finish a few projects.
it's been over a year so it's time for the annual reformat and reinstall ;-)
A:
@baldy
Thanks. I'll look at as well. Oddly enough though I didn't change the connection string at all. And when I created a new project and tried to drag-n-drop a DB into the LINQ to SQL diagram that error was raised then as well.
| Did Installing OneCare cause a "Generating user instances in SQL Server is disabled" error? | Did Installing OneCare cause a "Generating user instances in SQL Server is disabled" error?
The only change that I've made to my computer is uninstalling AVG and installing the trial for Microsoft OneCare. Did OneCare change the SQLServer installation somehow?
This is a very "odd" question but is something I would post on EE in hopes of someone having had the same issue and giving their solution.
| [
"I would look more at the uninstalling of AVG as the culprit. OneCare does not care or even notice SQL Server instances as far as I can tell where as AVG does. \nI would look into your SQL Server instance and check the jobs. One or more may have been added by AVG. You should remove them. You might also want to drop the AVG database. Just to be sure.\nNote: I have never uninstalled AVG. I just have notice some of what it did to my Database when my SysAdmin installed it. Being an Accidental DBA I haven't had the time to properly evaluate it's actions.\n",
"The problem is your connection string. When using SQLExpress you can set it to run user instances so that each application has its own instance of SQL Server. Just set the option to false on your connections string and the problem should dissappear.\n",
"I didn't see anything odd in the event viewer or any db's for avg in SQLServer. btw I installed SQL server after AVG. it's curious anyway. I'll just make a VM and do a fresh install of SQLExpress so I can finish a few projects.\nit's been over a year so it's time for the annual reformat and reinstall ;-)\n",
"@baldy\nThanks. I'll look at as well. Oddly enough though I didn't change the connection string at all. And when I created a new project and tried to drag-n-drop a DB into the LINQ to SQL diagram that error was raised then as well.\n"
] | [
1,
1,
0,
0
] | [] | [] | [
"sql_server"
] | stackoverflow_0000038589_sql_server.txt |
Q:
Can IIS 6 serve requests for pages with no extensions?
Is there any way in IIS to map requests to a particular URL with no extension to a given application?
For example, in trying to port something from a Java servlet, you might have a URL like this...
http://[server]/MyApp/HomePage?some=parameter
Ideally I'd like to be able to map everything under MyApp to a particular application, but failing that, any suggestions about how to achieve the same effect would be really helpful.
A:
You can also create an ISAPI filter that re-writes urls. The user enters a url with no extension, but the filter will interpret the request so that it does. Note that in IIS it's real easy to screw this up, so you might want to find a pre-written one. I haven't used any myself so I can't recommend a specific product that's any different than what you'd find via google, especially as I don't know your specific use case. But at least now you know what to search for.
You can also rewrite your urls using ASP.Net:
http://msdn.microsoft.com/en-us/library/ms972974.aspx
A:
You can set IIS 6 to handle all requests, but the key to handling files without extensions is to tell IIS not to look for the file.
http://weblogs.asp.net/scottgu/archive/2007/03/04/tip-trick-integrating-asp-net-security-with-classic-asp-and-non-asp-net-urls.aspx
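A rough sketch of the combination: add an IIS 6 wildcard application map to aspnet_isapi.dll (with "Verify that file exists" unchecked), then rewrite in Global.asax. The target page here is hypothetical:
protected void Application_BeginRequest(object sender, EventArgs e)
{
    string path = Request.Path; // e.g. /MyApp/HomePage
    if (path.StartsWith("/MyApp/", StringComparison.OrdinalIgnoreCase)
        && System.IO.Path.GetExtension(path) == "")
    {
        // Send every extensionless URL under /MyApp to one handler page
        Context.RewritePath("/MyApp/Default.aspx", "", Request.QueryString.ToString());
    }
}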
| Can IIS 6 serve requests for pages with no extensions? | Is there any way in IIS to map requests to a particular URL with no extension to a given application.
For example, in trying to port something from a Java servlet, you might have a URL like this...
http://[server]/MyApp/HomePage?some=parameter
Ideally I'd like to be able to map everything under MyApp to a particular application, but failing that, any suggestions about how to achieve the same effect would be really helpful.
| [
"You can also create an ISAPI filter that re-writes urls. The user enters a url with no extension, but the filter will interpret the request so that it does. Note that in IIS it's real easy to screw this up, so you might want to find a pre-written one. I haven't used any myself so I can't recommend a specific product that's any different than what you'd find via google, especially as I don't know your specific use case. But at least now you know what to search for.\nYou can also rewrite your urls using ASP.Net:\nhttp://msdn.microsoft.com/en-us/library/ms972974.aspx\n",
"You can set the IIS6 to handle all requests, but the key to handle files without extensions is to tell the IIS not to look for the file.\nhttp://weblogs.asp.net/scottgu/archive/2007/03/04/tip-trick-integrating-asp-net-security-with-classic-asp-and-non-asp-net-urls.aspx\n"
] | [
1,
1
] | [] | [] | [
"iis",
"iis_6"
] | stackoverflow_0000038661_iis_iis_6.txt |
Q:
Scrum: Resistance is (not) futile
I'm the second dev and a recent hire here at a PHP/MySQL shop. I was hired mostly due to my experience in wrangling some sort of process out of a chaotic mess. At least, that's what I did at my last company. ;)
Since I've been here (a few months now), I've brought on board my boss, my product manager and several other key figures (But mostly chickens, if you pardon the Scrum-based stereotyping). I've also helped bring in some visibility to the development cycle of a major product that has been lagging for over a year. People are loving it!
However, my coworker (the only other dev here for now) is not into it. She prefers to close her door and focus on her work and be left alone. Me? I'm into the whole Agile approach of collaboration, cooperation and openness. Without her input, I started the Scrum practices (daily scrums, burndown charts and other things I've found that worked for me and my previous teams, a la H. Kniberg's cool wall chart). During our daily stand-up she slinks by and ignores us as if we actually weren't standing right outside her door (we are, actually). It's pretty amazing. I've never seen such resistance.
Question... how do I get her onboard? Peer pressure is not working.
Thanks from fellow Scrum-borg,
beaudetious
A:
While Scrum other agile methodologies like it embody a lot of good practices, sometimes giving it a name and making it (as many bloggers have commented on) a "religion" that must be adopted in the workplace is rather offputting to a lot of people, including myself.
It depends on what your options and commitments are, but I know I'd be a lot more keen on accepting ideas because they are good ideas, not because they are a bandwagon. Try implementing/drawing her in to the practices one at a time, by showing her how they can improve her life and workflow as well.
Programmers love cool things that help them get stuff done. They hate being preached at or being asked to board what they see as a bandwagon. Present it as the former rather than the latter. (It goes without saying, make sure it actually IS the former)
Edit: another question
I've never actually worked for a place that used a specific agile methodology, though I'm pretty happy where I'm at now in that we incorporate a lot of agile practices without the hype and the dogma (best of both worlds, IMHO).
But I was just reading about Scrum and wondered: is a system like that even beneficial for a 2-person team? Scrum does add a certain amount of overhead to a project, it seems, and that might outweigh the benefits when you have a very small team where communication and planning are already easy.
A:
Without her input, I started the Scrum practices (daily scrums, burndown charts and other things I've found that worked for me and my previous teams, a la H. Kniberg's cool wall chart). During our daily stand-up she slinks by and ignores us as if we actually weren't standing right outside her door (we are, actually). It's pretty amazing. I've never seen such resistance.
Question... how do I get her onboard? Peer pressure is not working.
Yikes! Who would ever want to work in such an oppressive environment? If you're lucky, she's sending around her resume and you'll be able to hire someone who is on board with your development process.
Assuming you want to hang on to her, I'd turn down (or off) the rhetoric and work on being a friend and co-worker first. If the project is a year late, she can't be feeling good about herself and it sounds like you aren't afraid to trumpet your success. That can be intimidating.
I know nothing about Scrum, however. I'm just imagining what it would be like to walk around in your co-worker's shoes.
A:
beaudetious, buddy,
I would really suggest you read Steve Yegge's blog post called "Good Agile, Bad Agile". It's an oldie but a goodie, and I think it's a must-read for anyone - like myself about 2 months ago - who gets a little, let's say, "over-eager" to agile-up their workplace. Agile offers a lot of good practices, but you have to take them all with a grain of salt, adopt what you're lacking, and skip all the other crud that might not be useful for a particular situation - e.g. the daily scrum. If your co-worker would just like to code in quiet (read Peopleware for why this is a good thing) and she's being a productive team member, quit bugging her with your scrumming and let her work in whatever way she likes most.
People are usually less "hostile" about these practices if you just approach them and simply say "Do you have a sec? Listen, communication is really a problem right now, I feel like I don't know what you're doing and I really don't want to step on your toes again and spend two days writing something you already did like last week, so let's work on this. I'd like to try X, what do you think?". Be compassionate and don't tolerate "bad apples", that's literally how I agiled up my workplace, and many problems have started evaporating. We're by no means an 100% XP or 100% Scrum compliant place, because we just use whatever works and was needed.
A:
Simple. Don't talk about scrum. Don't use scrum on her. Instead take the underlying principles of scrum (e.g. the purpose as opposed to the application) and create different approaches that accommodate her way of working but have subtle tints of scrum.
All humans are different and a lot of programmers dislike scrum. I wouldn't force it upon them as that would just be counter-productive. I'd suggest identifying the problems in the development process (in a non-scrum fashion), seeing if you can get her to agree that the issues exist, then asking her what she thinks would be a good solution. Her co-operation and input into the process are essential; if she doesn't have buy-in she won't become a citizen.
From there on in you can hopefully create some sort of quasi-hybrid scrum + her approach to the process where you can both agree on the way forward.
A:
I think the key would be to help her understand why you are doing Scrum in the first place. I guess you have your reasons, so why not tell her? You are likely to get resistance towards any change if the people involved don't understand why there is change or what they will benefit from it. If you can explain your reasons for using Scrum, and the following benefits, to her in a way that relates to her everyday work, I think she is more likely to adapt a more positive attitude towards it.
If she sees no value in the Scrum process, or doesn't understand how it relates to her, she probably won't care about it.
I think one of the most important concepts for someone to understand regarding Scrum is the fact that you are working as a group and commit to your project as a group, not as individuals. For many people, this is the hardest thing to grasp, since they are so used to living in "their own World".
A:
I'm not sure Scrum is the central issue here; I'm guessing she feels threatened by the new guy bringing in a lot of new ideas and stirring things up. I've been in that situation before as the new person bringing in a new perspective on things, and sometimes it's just difficult to immediately bring those existing people around to a new way of thinking. It often requires a culture shift which doesn't happen overnight.
Try to get her input and opinion on things as much as possible, and try to show that you respect that she has been on the team longer than you. If after a while she still doesn't participate, then all you can do is mention it to your Manager and let them take it from there.
A:
Continue your efforts to involve the other developer. Remember you are the one who wants to make this change. Ask for help with problems you have. Invite them to the daily stand up meeting. I currently do the planning for the daily stand up and I make sure all the pigs and chickens are invited. If you are the lead on the project it is up to you to address the situation and take a risk. Put yourself out there.
| Scrum: Resistance is (not) futile | I'm the second dev and a recent hire here at a PHP/MySQL shop. I was hired mostly due to my experience in wrangling some sort of process out of a chaotic mess. At least, that's what I did at my last company. ;)
Since I've been here (a few months now), I've brought on board my boss, my product manager and several other key figures (But mostly chickens, if you pardon the Scrum-based stereotyping). I've also helped bring in some visibility to the development cycle of a major product that has been lagging for over a year. People are loving it!
However, my coworker (the only other dev here for now) is not into it. She prefers to close her door and focus on her work and be left alone. Me? I'm into the whole Agile approach of collaboration, cooperation and openness. Without her input, I started the Scrum practices (daily scrums, burndown charts and other things I've found that worked for me and my previous teams (ala H. Kniberg's cool wall chart). During our daily stand up she slinks by and ignores us as if we actually weren't standing right outside her door (we are actually). It's pretty amazing. I've never seen such resistance.
Question... how do I get her onboard? Peer pressure is not working.
Thanks from fellow Scrum-borg,
beaudetious
| [
"While Scrum other agile methodologies like it embody a lot of good practices, sometimes giving it a name and making it (as many bloggers have commented on) a \"religion\" that must be adopted in the workplace is rather offputting to a lot of people, including myself.\nIt depends on what your options and commitments are, but I know I'd be a lot more keen on accepting ideas because they are good ideas, not because they are a bandwagon. Try implementing/drawing her in to the practices one at a time, by showing her how they can improve her life and workflow as well.\nProgrammers love cool things that help them get stuff done. They hate being preached at or being asked to board what they see as a bandwagon. Present it as the former rather than the latter. (It goes without saying, make sure it actually IS the former)\nEdit: another question\nI've never actually worked for a place that used a specific agile methodology, though I'm pretty happy where I'm at now in that we incorporate a lot of agile practices without the hype and the dogma (best of both worlds, IMHO). \nBut I was just reading about Scrum and, is a system like that even beneficial for a 2 person team? Scrum does add a certain amount of overhead to a project, it seems, and that might outweigh the benefits when you have a very small team where communication and planning is already easy.\n",
"\nWithout her input, I started the Scrum practices (daily scrums, burndown charts and other things I've found that worked for me and my previous teams (ala H. Kniberg's cool wall chart). During out daily stand up she slinks by and ignores us as if we actually weren't standing right outside her door (we are actually). It's pretty amazing. I've never seen such resistance.\nQuestion... how do I get her onboard? Peer pressure is not working.\n\nYikes! Who would ever want to work in such an oppressive environment? If you're lucky, she's sending around her resume and you'll be able to hire someone who is on board with your development process.\nAssuming you want to hang on to her, I'd turn down (or off) the rhetoric and work on being a friend and co-worker first. If the project is a year late, she can't be feeling good about herself and it sounds like you aren't afraid to trumpet your success. That can be intimidating.\nI know nothing about Scrum, however. I'm just imagining what it would be like to walk around in your co-worker's shoes.\n",
"beaudetious, buddy,\nI would really suggest you read Steve Yegge's blog called \"Good Agile, Bad Agile\". It's an oldy but a goody, and I think it's a must read for anyone - like myself about 2 months ago - who gets a little let's say \"over-eager\" to agile-up their workplace. Agile offers a lot of good practices, but you have to take them all with a grain of salt and adopt what you're lacking and skip out on all the other crud that might be unuseful for a particular situation - e.g. the daily scrum. If your co-worker would just like to code in quiet (read Peopleware for why this is a good thing) and she's being a productive team member quit bugging her with your scrumming a let her work in whatever way she likes most.\nPeople are usually less \"hostile\" about these practices if you just approach them and simply say \"Do you have a sec? Listen, communication is really a problem right now, I feel like I don't know what you're doing and I really don't want to step on your toes again and spend two days writing something you already did like last week, so let's work on this. I'd like to try X, what do you think?\". Be compassionate and don't tolerate \"bad apples\", that's literally how I agiled up my workplace, and many problems have started evaporating. We're by no means an 100% XP or 100% Scrum compliant place, because we just use whatever works and was needed.\n",
"Simple. Don't talk about scrum. Don't use scrum on her. Instead take the underlying principles of scrum (e.g. the purpose as opposed to the application) and create different approaches that accommodate her way of working but have subtle tints of scrum.\nAll humans are different and a lot of programmers dislike scrum. I wouldn't force it upon them as that would just be counter-productive. I'd suggest identifying the problems in the development process (in a non-scrum fashion), see if you can get her to agree that the issues exist, then ask her what she thinks would be a good solution. Her co-operation and input into the process is essential to her co-operation, if she doesn't have buy-in she wont become a citizen.\nFrom there on in you can hopefully create some sort of quasi-hybrid scrum + her approach to the process where you can both agree on the way forward.\n",
"I think the key would be to help her understand why you are doing Scrum in the first place. I guess you have your reasons, so why not tell her? You are likely to get resistance towards any change if the people involved don't understand why there is change or what they will benefit from it. If you can explain your reasons for using Scrum, and the following benefits, to her in a way that relates to her everyday work, I think she is more likely to adapt a more positive attitude towards it.\nIf she sees no value in the Scrum process, or doesn't understand how it relates to her, she probably won't care about it.\nI think one of the most important concepts for someone to understand regarding Scrum is the fact that you are working as a group and commit to your project as a group, not as individuals. For many people, this is the hardest thing to grasp, since they are so used to living in \"their own World\".\n",
"I'm not sure Scrum is the central issue here; I'm guessing she feels threatened by the new guy bringing in a lot of new ideas and stirring things up. I've been in that situation before as the new person bringing in a new perspective on things, and sometimes it's just difficult to immediately bring those existing people around to a new way of thinking. It often requires a culture shift which doesn't happen overnight.\nTry to get her input and opinion on things as much as possible, and try to show that you respect that she has been on the team longer than you. If after a while she still doesn't participate, then all you can do is mention it to your Manager and let them take it from there.\n",
"Continue your efforts to involve the other developer. Remember you are the one who wants to make this change. Ask for help with problems you have. Invite them to the daily stand up meeting. I currently do the planning for the daily stand up and I make sure all the pigs and chickens are invited. If you are the lead on the project it is up to you to address the situation and take a risk. Put yourself out there.\n"
] | [
14,
11,
6,
4,
2,
1,
0
] | [] | [] | [
"agile",
"scrum"
] | stackoverflow_0000034981_agile_scrum.txt |
Q:
Nodesets Length
In XSLT how would you find out the length of a node-set?
A:
<xsl:variable name="length" select="count(nodeset)"/>
A:
there is no need to put that into a
<xsl:variable name="length" select="count(nodes/node)"/>
though... you can just print it out as follows:
<xsl:value-of select="count(nodes/node)"/>
or use it in a if-clause as follows:
<xsl:if test="count(comments/comment) > '0'">
<ul>
<xsl:apply-templates select="comments/comment"/>
</ul>
</xsl:if>
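Or store the count once in a variable and reuse it (the items/item path is a placeholder):
<xsl:variable name="n" select="count(items/item)"/>
<xsl:if test="$n > 0">
  <p>Found <xsl:value-of select="$n"/> item(s)</p>
</xsl:if>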
A:
Generally in XSLT things aren't referred to as Arrays, since there is really no such thing in XSLT. The technical term is either nodesets (made up of zero or more nodes) or in XSLT 2.0 sequences.
| Nodesets Length | In XLST how would you find out the length of a node-set?
| [
"<xsl:variable name=\"length\" select=\"count(nodeset)\"/>\n\n",
"there is no need to put that into a \n<xsl:variable name=\"length\" select=\"count(nodes/node)\"/>\n\nthough... you can just print it out as follows:\n<xsl:value-of select=\"count(nodes/node)\"/>\n\nor use it in a if-clause as follows:\n<xsl:if test=\"count(comments/comment) > '0'\">\n <ul>\n <xsl:apply-templates select=\"comments/comment\"/>\n </ul>\n</xsl:if>\n\n",
"Generally in XSLT things aren't referred to as Arrays, since there is really no such thing in XSLT. The technical term is either nodesets (made up of zero or more nodes) or in XSLT 2.0 sequences.\n"
] | [
13,
9,
4
] | [] | [] | [
"nodesets",
"xml",
"xslt"
] | stackoverflow_0000032085_nodesets_xml_xslt.txt |
Q:
What's the difference between DOCUMENT_URI and URI_REQUEST in SSI?
When looking at an SSI printenv dump I see a variable URI_REQUEST.
When reading the Apache mod_include docs I see DOCUMENT_URI which they define as "The (%-decoded) URL path of the document requested by the user."
Anyone know the difference between the two?
TIA.
A:
REQUEST_URI includes the Query String (?q=testing...) where DOCUMENT_URI does not.
A:
ok. seems like it is exactly the opposite according to Apache docs and RFC 2616.
REQUEST_URI does not contain the query string.
DOCUMENT_URI does contain the query string.
cheers,
Rob
| What's the difference between DOCUMENT_URI and URI_REQUEST in SSI? | When looking at a SSI printenv dump I see a variable URI_REQUEST.
When reading the Apache mod_include docs I see DOCUMENT_URI which they define as "The (%-decoded) URL path of the document requested by the user."
Anyone know the difference between the two?
TIA.
| [
"REQUEST_URI includes the Query String (?q=testing...) where DOCUMENT_URI does not.\n",
"ok. seems like it is exactly the opposite according to Apache docs and RFC 2616.\nREQUEST_URI does not contain the query string.\nDOCUMENT_URI does contain the query string.\ncheers,\nRob\n"
] | [
2,
-5
] | [] | [] | [
"apache",
"ssi"
] | stackoverflow_0000039254_apache_ssi.txt |
Q:
Concurrent collections in C#
I'm looking for a way of getting a concurrent collection in C# or at least a collection which supports a concurrent enumerator. Right now I'm getting an InvalidOperationException when the collection over which I'm iterating changes.
I could just deep copy the collection and work with a private copy but I'm wondering if there is perhaps a better way
Code snippet:
foreach (String s in (List<String>) callingForm.Invoke(callingForm.delegateGetKillStrings))
{
//do some jazz
}
--edit--
I took the answer but also found that I needed to ensure that the code which was writing to the collection needed to attempt to get a lock as well.
private void addKillString(String s)
{
lock (killStrings)
{
killStrings.Add(s);
}
}
A:
Other than doing a deep-copy your best bet might be to lock the collection:
 List<string> theList = (List<string>)callingForm.Invoke(callingForm.delegateGetKillStrings);
 // List<T> exposes SyncRoot only through the ICollection interface, so cast before locking
 lock(((ICollection)theList).SyncRoot) {
foreach(string s in theList) {
// Do some Jazz
}
}
A:
So I'm not quite sure what you're asking, but the Parallel Extensions team has put together some stuff that might fit the bill. See this blog post in particular, about enumerating parallel collections. It also contains a link to download the Parallel CTP, and you can of course browse through the rest of the blog posts to get an idea of what the CTP is meant to do and how the programming model works.
A:
If you want to use the FCL collections, then locking is the only way to support iteration / modification from multiple threads that may overlap.
Be careful what you use as your lock object, though. Using SyncRoot is only a good idea if the collection itself is a private member of the class that uses it. If the collection is protected or public, then a client of your class can take its own lock on your SyncRoot, potentially deadlocking with code in your class.
If you are interested in taking a look at a 3rd-party collection library, I recommend the excellent C5 Generic Collection Library. They have a family of tree-based collections that can easily and safely be modified and iterated at the same time without locking - see sections 8.10 and 9.11 of their (excellent) documentation for details.
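Another option, sketched below, is to copy the list while holding the lock and then iterate the snapshot outside it; the enumerator can never see a concurrent modification, at the cost of possibly acting on slightly stale data. This assumes killStrings is the same shared list the writer locks on in the question's edit:
List<string> snapshot;
lock (killStrings)
{
    // copy under the lock so the writer can't mutate mid-copy
    snapshot = new List<string>(killStrings);
}
foreach (string s in snapshot)
{
    // do some jazz, without holding the lock
}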
| Concurrent collections in C# | I'm looking for a way of getting a concurrent collection in C# or at least a collection which supports a concurrent enumerator. Right now I'm getting an InvalidOperationException when the collection over which I'm iterating changes.
I could just deep copy the collection and work with a private copy but I'm wondering if there is perhaps a better way
Code snippet:
foreach (String s in (List<String>) callingForm.Invoke(callingForm.delegateGetKillStrings))
{
//do some jazz
}
--edit--
I took the answer but also found that I needed to ensure that the code which was writing to the collection needed to attempt to get a lock as well.
private void addKillString(String s)
{
lock (killStrings)
{
killStrings.Add(s);
}
}
| [
"Other than doing a deep-copy your best bet might be to lock the collection:\n List<string> theList = (List<String> )callingForm.Invoke(callingForm.delegateGetKillStrings);\n lock(theList.SyncRoot) {\n foreach(string s in theList) {\n // Do some Jazz\n }\n }\n\n",
"So I'm not quite sure what you're asking, but the Parallel Extensions team has put together some stuff that might fit the bill. See this blog post in particular, about enumerating parallel collections. It also contains a link to download the Parallel CTP, and you can of course browse through the rest of the blog posts to get an idea of what the CTP is meant to do and how the programming model works.\n",
"If you want to use the FCL collections, then locking is the only way to support iteration / modification from multiple threads that may overlap.\nBe careful what you use as your lock object, though. Using SyncRoot is only a good idea if the collection itself is a private member of the class that uses it. If the collection is protected or public, then a client of your class can take its own lock on your SyncRoot, potentially deadlocking with code in your class. \nIf you are interested in taking a look at a 3rd-party collection library, I recommend the excellent C5 Generic Collection Library. They have a family of tree-based collections that can easily and safely be modified and iterated at the same time without locking - see sections 8.10 and 9.11 of their (excellent) documentation for details.\n"
] | [
5,
4,
1
] | [] | [] | [
"c#",
"concurrency"
] | stackoverflow_0000038756_c#_concurrency.txt |
Q:
What exactly consists of 'Business Logic' in an application?
I have heard umpteen times that we 'should not mix business logic with other code' or statements like that. I think every single piece of code I write (processing steps I mean) consists of logic that is related to the business requirements.
Can anyone tell me what exactly consists of business logic? How can it be distinguished from other code? Is there some simple test to determine what is business logic and what is not?
A:
Simply define what you are doing in plain English. When you are saying things businesswise, like "make those suffer", "steal that money", "destroy this portion of earth" you are talking about business layer. To make it clear, things that get you excited go here.
When you are saying "show this here", "do not show that", "make it more beautiful" you are talking about the presentation layer. These are the things that get your designers excited.
When you are saying things like "save this", "get this from database", "update", "delete", etc. you are talking about the data layer. These are the things that tell you what to keep forever at all costs.
A:
It's probably easier to start by saying what isn't business logic. Database or disk access isn't business logic. UI isn't business logic. Network communications aren't business logic.
To me, business logic is the rules that describe how a business operates, not how a software architecture operates. Business logic also has a tendency to change. For example, it may be a business requirement that every customer has a single credit card associated with their account. This requirement may change so that customers can have several credit cards. In theory, this should just be a change to the business logic and other parts of your software will not be affected.
So that's theory. In the real world (as you've found) the business logic tends to spread throughout the software. In the example above, you'll probably need to add another table to your database to hold the extra credit card data. You'll certainly need to change the UI.
The purists say that business logic should always be completely separate and so would even be against having tables named "Customers" or "Accounts" in the database.
Taken to its extreme you'd end up with an incredibly generic, impossible to maintain system.
There's definitely a strong argument in favour of keeping most of your business logic together rather than smearing it throughout the system, but (as with most theories) you need to be pragmatic in the real world.
A:
To simplify things to a single line...
Business Logic would be code that doesn't depend on/won't change with a specific UI/implementation detail..
It is a code-representation of the rules, processes, etc. that are defined by/reflect the business being modelled.
A:
I think you're confusing business logic with your application requirements. It's not the same thing. When someone explains the logic of his/her business, it is something like:
"When a user buys an item he has to provide delivery information. The information is validated with x y z rules. After that he will receive an invoice and earn points, that gives x% in discounts for the y offers" (sorry for the bad example)
When you implement these business rules you'll have to think about secondary requirements, like how the information is presented, how it will be stored in a persistent way, the communication with application servers, how the user will receive the invoice and so on. All these requirements are not part of business logic and should be decoupled from it. This way, when the business rules change you will adapt your code with less effort. That's a fact.
Sometimes the presentation replicates some of the business logic, mostly in validating user input. But it also has to be present in the business logic layer. In other situations, it is necessary to move some business logic to the database for performance reasons. This is discussed by Martin Fowler here.
A:
I don't like the BLL+DAL names for the layers; they are more confusing than clarifying.
Call it DataServices and DataPersistence. This will make it easier.
Services manipulate, persistence tier CRUDs (Create, Read, Update, Delete)
A:
For me, " business logic " makes up all the entities that represent data applicable to the problem domain, as well as the logic that decides on "what do do with the data"..
So it should really consist of "data transport" (not access) and "data manipulation".. Actually data access (stuff hitting the DB) should be in a different layer, as should presentation code.
A:
If it contains anything about things like forms, buttons, etc., it's not business logic, it's the presentation layer. If it contains persistence to file or database, it's the DAL. Anything in between is business logic. In reality, anything non-UI sometimes gets called "business logic," but it should be something that concerns the problem domain, not housekeeping.
A:
Business logic is pure abstraction, it exists independent of the materialization/visualization of the data in front of your user, and independent of the persistence of the underlying data.
For example, in Tax Preparation software, one responsibility of the business logic classes would be the computation of tax owed. They would not be responsible for displaying reports or saving and retrieving a tax return.
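As a minimal, hypothetical sketch of that separation (all names invented for illustration), the business logic class knows only the rule, not the UI or the storage:
// Pure business logic: no UI, no persistence.
public class TaxCalculator
{
    public decimal ComputeTaxOwed(decimal taxableIncome, decimal rate)
    {
        if (taxableIncome < 0m || rate < 0m)
            throw new ArgumentOutOfRangeException();
        return taxableIncome * rate;
    }
}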
@Lars, "services" implies a certain architecture.
| What exactly consists of 'Business Logic' in an application? | I have heard umpteen times that we 'should not mix business logic with other code' or statements like that. I think every single piece of code I write (processing steps I mean) consists of logic that is related to the business requirements.
Can anyone tell me what exactly consists of business logic? How can it be distinguished from other code? Is there some simple test to determine what is business logic and what is not?
| [
"Simply define what you are doing in plain English. When you are saying things businesswise, like \"make those suffer\", \"steal that money\", \"destroy this portion of earth\" you are talking about business layer. To make it clear, things that get you excited go here.\nWhen you are saying \"show this here\", \"do not show that\", \"make it more beautiful\" you are talking about the presentation layer. These are the things that get your designers excited.\nWhen you are saying things like \"save this\", \"get this from database\", \"update\", \"delete\", etc. you are talking about the data layer. These are the things that tell you what to keep forever at all costs.\n",
"It's probably easier to start by saying what isn't business logic. Database or disk access isn't business logic. UI isn't business logic. Network communications aren't business logic.\nTo me, business logic is the rules that describe how a business operates, not how a software architecture operates. Business logic also has a tendency to change. For example, it may be a business requirement that every customer has a single credit card associated with their account. This requirement may change so that customers can have several credit cards. In theory, this should just be a change to the business logic and other parts of your software will not be affected.\nSo that's theory. In the real world (as you've found) the business logic tends to spread throughout the software. In the example above, you'll probably need to add another table to your database to hold the extra credit card data. You'll certainly need to change the UI.\nThe purists say that business logic should always be completely separate and so would even be against having tables named \"Customers\" or \"Accounts\" in the database.\nTaken to its extreme you'd end up with an incredibly generic, impossible to maintain system.\nThere's definitely a strong argument in favour of keeping most of your business logic together rather than smearing it throughout the system, but (as with most theories) you need to be pragmatic in the real world.\n",
"To simplify things to a single line...\nBusiness Logic would be code that doesn't depend on/won't change with a specific UI/implementation detail.. \nIt is a code-representation of the rules, processes, etc. that are defined by/reflect the business being modelled.\n",
"I think you confusing business logic with your application requirements. It's not the same thing. When someone explains the logic of his/her business it is something like:\n\"When a user buys an item he has to provide delivery information. The information is validated with x y z rules. After that he will receive an invoice and earn points, that gives x% in discounts for the y offers\" (sorry for the bad example)\nWhen you implement this business rules you'll have to think in secondary requirements, like how the information is presented, how it will be stored in a persistent way, the communication with application servers, how the user will receive the invoice and so on. All this requirements are not part of business logic and should be decoupled from it. This way, when the business rules change you will adapt your code with less effort. Thats a fact.\nSometimes the presentation replicates some of the business logic, mostly in validating user input. But it has to be also present in the business logic layer. In other situations, is necessary to move some business logic to the Database, for performance issues. This is discussed by Martin Fowler here. \n",
"I dont like the BLL+DAL names of the layers, they are more confusing than clarifying.\nCall it DataServices and DataPersistence. This will make it easier. \nServices manipulate, persistence tier CRUDs (Create, Read, Update, Delete)\n",
"For me, \" business logic \" makes up all the entities that represent data applicable to the problem domain, as well as the logic that decides on \"what do do with the data\"..\nSo it should really consist of \"data transport\" (not access) and \"data manipulation\".. Actually data access (stuff hitting the DB) should be in a different layer, as should presentation code.\n",
"If it contains anything about things like form, button, etc.. it's not a business logic, it's presentation layer. If it contains persistence to file or database, it's DAL. Anything in between is business logic. In reality, anything non-UI sometimes gets called \"business logic,\" but it should be something that concerns the problem domain, not house keeping.\n",
"Business logic is pure abstraction, it exists independent of the materialization/visualization of the data in front of your user, and independent of the persistence of the underlying data.\nFor example, in Tax Preparation software, one responsibility of the business logic classes would computation of tax owed. They would not be responsible for displaying reports or saving and retrieving a tax return.\n\n@Lars, \"services\" implies a certain architecture.\n"
] | [
49,
12,
5,
5,
2,
0,
0,
0
] | [] | [] | [
"business_logic_layer"
] | stackoverflow_0000039288_business_logic_layer.txt |
Q:
Keeping validation logic in sync between server and client sides
In my previous question, most commenters agreed that having validation logic both at client & server sides is a good thing.
However there is a problem - you need to keep your validation rules in sync between database and client code.
So the question is, how can we deal with it?
One approach is to use ORM techniques; modern ORM tools can produce code that can take care of data validation prior to sending it to the server.
I'm interested in hearing your opinions.
Do you have some kind of standard process to deal with this problem? Or maybe you think that this is not a problem at all?
EDIT
Guys, first of all thank you for your answers.
Tomorrow I will sum up your answers and update the question's text like in this case.
A:
As mentioned in one of the answers to the other post, if you are going to keep your layers separated, there is no good way to avoid duplicating the validation logic in each layer. If you use something to automatically tie them together, you have introduced a sort of coupling between the layers that might hinder you down the road. This might be one of those cases where you just have to keep track of things manually.
However you go about it, you have to make sure each layer is doing its own validation, because you never know how that layer is going to be accessed. There's no guarantee that all the layers you implemented will always stay together.
A:
I like to use a validation service, which doesn't necessarily care about the origin of the data to be validated. This can work in a few different ways when you get to the part about transmitting validation rules to a client (i.e. web page), but I feel the most important aspect of this is to have a single authority for the actual validation rules.
For example, if you have validation logic on your data core entities, like a collection of ValidationRule objects that are checked via a Validate method - a very typical scenario, then I would promote those same rules to the client (javascript) via a transformation.
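For illustration only, a minimal rule object along those lines (all names hypothetical) can act as the single authority - evaluated server-side and serialized to the client for the JavaScript checks:
public class ValidationRule
{
    public string Field;    // field the rule applies to
    public string Pattern;  // regular expression enforced on both tiers
    public string Message;  // error text shown on failure

    public bool Validate(string value)
    {
        return System.Text.RegularExpressions.Regex.IsMatch(value ?? "", Pattern);
    }
}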
In the ASP.NET world (the only one I can speak to) there are a couple of ways to do this. My preferred method involves creating custom validators that tie in to your UI widgets to fields (and all their validation rules) on your entities. The advantage of this is that all your validation logic can be bundled into a single validator. The down side is that your validation messages will become dense, since the validation rules are all tested at once. This can, of course, be mitigated by having your validation logic return only a mention of the first failure, etc.
This answer probably sounds sort of nebulous and unspecific, but the two points that I'd like to make are:
Validation should occur as close as possible to the points where data is entered and where it's committed.
The same validation rules should be used wherever validation occurs - if client-side validation passes, then it should never fail validation later on (pre-save business rules, foreign key violation, etc.)
A:
Some frameworks provide validation support that may keep your client and server validation in sync. Take a look at this Seam validation tutorial using annotations. It's a good implementation and very easy to understand.
Anyway, if you don't want to rely on frameworks, I think it is easy to implement something similar.
A:
If you're using ASP.Net there are a number of validation controls you can use. These controls are written in a very generic way, such that most of them automatically duplicate your validation logic between the client and server, even though you only set options for the control in one place.
You are also free to inherit from them to create additional domain specific validators, and there are third-party control packs on the web you can get that add to the base controls.
Even if you're not using ASP.Net it's worth taking a look at how this is done. It will give you ideas for how to do something similar in your own platform.
| Keeping validation logic in sync between server and client sides | In my previous question, most commenters agreed that having validation logic both at client & server sides is a good thing.
However there is a problem - you need to keep your validation rules in sync between database and client code.
So the question is, how can we deal with it?
One approach is to use ORM techniques; modern ORM tools can produce code that can take care of data validation prior to sending it to the server.
I'm interested in hearing your opinions.
Do you have some kind of standard process to deal with this problem? Or maybe you think that this is not a problem at all?
EDIT
Guys, first of all thank you for your answers.
Tomorrow I will sum up your answers and update the question's text like in this case.
| [
"As mentioned in one of the answers to the other post, if you are going to keep your layers separated, there is no good way to avoid duplicating the validation logic in each layer. If you use something to automatically tie them together, you have introduced a sort of coupling between the layers that might hinder you down the road. This might be one of those cases where you just have to keep track of things manually.\nHowever you go about it, you have to make sure each layer is doing its own validation, because you never know how that layer is going to be accessed. There's no guarantee that all the layers you implemented will always stay together.\n",
"I like to use a validation service, which doesn't necessarily care about the origin of the data to be validated. This can work in a few different ways when you get to the part about transmitting validation rules to a client (i.e. web page), but I feel the most important aspect of this is to have a single authority for the actual validation rules.\nFor example, if you have validation logic on your data core entities, like a collection of ValidationRule objects that are checked via a Validate method - a very typical scenario, then I would promote those same rules to the client (javascript) via a transformation.\nIn the ASP.NET world (the only one I can speak to) there are a couple of ways to do this. My preferred method involves creating custom validators that tie in to your UI widgets to fields (and all their validation rules) on your entities. The advantage of this is that all your validation logic can be bundled into a single validator. The down side is that your validation messages will become dense, since the validation rules are all tested at once. This can, of course, be mitigated by having your validation logic return only a mention of the first failure, etc.\nThis answer probably sounds sort of nebulous and unspecific, but the two points that I'd like to make are:\n\nValidation should occur as close as possible to the points where data is entered and where it's committed.\nThe same validation rules should be used wherever validation occurs - if client-side validation passes, then it should never fail validation later on (pre-save business rules, foreign key violation, etc.)\n\n",
"Some framework provides a validation support the may keep your client and server validation in sync. Take a look at this Seam validation tutorial using annotations. It's a good implementation and very easy to understand.\nAnyway, if you don't wan't to rely on frameworks, I think it is easy to implement something similar.\n",
"If you're using ASP.Net there are a number of validation controls you can use. These controls are written in a very generic way, such that most of them automatically duplicate your validation logic between the client and server, even though you only set options for the control in one place. \nYou are also free to inherit from them to create additional domain specific validators, and there are third-party control packs on the web you can get that add to the base controls.\nEven if you're not using ASP.Net it's worth taking a look at how this is done. It will give you ideas for how to do something similar in your own platform.\n"
] | [
4,
2,
2,
1
] | [] | [] | [
"architecture",
"database"
] | stackoverflow_0000039628_architecture_database.txt |
Q:
XSD: Nested Types vs Global Types
When defining XSD you can either choose to define your types as nested types or global types (complexType).
I understand that global types are of much more use when it comes to polymorphism or reusing of elements.
However, if you have a big data model you would have to define for each level a global complexType and then create an element that references the global type.
Nested
<xs:element name="person">
<xs:complexType>
<xs:sequence>
<xs:element name="name">
<xs:complexType>
<xs:sequence>
<xs:element name="firstname"/>
<xs:element name="lastname"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="address">
<xs:complexType>
<xs:sequence>
<xs:element name="street"/>
<xs:element name="city"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
Global
<xs:element name="person">
<xs:complexType>
<xs:sequence>
<xs:element name="name" type="nameType"/>
<xs:element name="address" type="addressType"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:complexType name="nameType">
<xs:sequence>
<xs:element name="firstname"/>
<xs:element name="lastname"/>
</xs:sequence>
</xs:complexType>
<xs:complexType name="addressType">
<xs:sequence>
<xs:element name="street"/>
<xs:element name="city"/>
</xs:sequence>
</xs:complexType>
Hence, my question: When are you using nested types instead of making them global?
For anyone interested: My question somehow relates to this question (XML attribute vs element).
A:
In the example given, there's no real difference between the two - and no significant advantages or disadvantages to either.
However, in larger schemas things can get very untidy and difficult to manage when the practice for choosing nested over global isn't clearly defined.
The obvious reasons for using global types (primarily reuse, also nesting) tend to dictate - in general I prefer one mode or the other. I.e. if you're reusing some complexTypes but not others, make them all global. If you're not reusing anything, make them all nested.
The exception to this (and this is something I've come across frequently) is if the definitions of the types make up the bulk of the complexity (!) of your schema, and their containment is relatively simple. In this case, regardless of whether they're reused, I'd recommend making them global as it's far easier to restructure/reorder your document when you don't have to wade through massive complexType definitions. They're also theoretically more portable.
There are also cases where you can't achieve certain document structures with nested types - an example of this is using two complexTypes in a sequence that can contain 0 to unbounded instances of each type, in any mixed order. This isn't possible with nested types, but it is with referenced global types.
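A hedged sketch of that pattern, reusing the nameType and addressType from the example above - a repeated choice over elements declared with global types, allowing zero or more of each in any order:
<xs:element name="people">
  <xs:complexType>
    <xs:choice minOccurs="0" maxOccurs="unbounded">
      <xs:element name="name" type="nameType"/>
      <xs:element name="address" type="addressType"/>
    </xs:choice>
  </xs:complexType>
</xs:element>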
| XSD: Nested Types vs Global Types | When defining XSD you can either choose to define your types as nested types or global types (complexType).
I understand that global types are of much more use when it comes to polymorphism or reusing of elements.
However, if you have a big data model you would have to define for each level a global complexType and then create an element that references the global type.
Nested
<xs:element name="person">
<xs:complexType>
<xs:sequence>
<xs:element name="name">
<xs:complexType>
<xs:sequence>
<xs:element name="firstname"/>
<xs:element name="lastname"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="address">
<xs:complexType>
<xs:sequence>
<xs:element name="street"/>
<xs:element name="city"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
Global
<xs:element name="person">
<xs:complexType>
<xs:sequence>
<xs:element name="name" type="nameType"/>
<xs:element name="address" type="addressType"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:complexType name="nameType">
<xs:sequence>
<xs:element name="firstname"/>
<xs:element name="lastname"/>
</xs:sequence>
</xs:complexType>
<xs:complexType name="addressType">
<xs:sequence>
<xs:element name="street"/>
<xs:element name="city"/>
</xs:sequence>
</xs:complexType>
Hence, my question: When are you using nested types instead of making them global?
For anyone interested: My question somehow relates to this question (XML attribute vs element).
| [
"In the example given, there's no real difference between the two - and no significant advantages or disadvantages to either.\nHowever, in larger schemas things can get very untidy and difficult to manage when the practice for choosing nested over global isn't clearly defined.\nThe obvious reasons for using global types (primarily reuse, also nesting) tend to dictate - in general I prefer one mode or the other. I.e. if you're reusing some complexTypes but not others, make them all global. If you're not reusing anything, make them all nested.\nThe exception to this (and this is something I've come across frequently) is if the definition of the types make up the bulk of the complexity (!) of your schema, and the their containment is relatively simple. In this case, regardless of whether they're reused, I'd recommend making them global as it's far easier to restructure/reorder your document when you don't have to wade through massive complexType definitions. They're also theoretically more portable.\nThere are also cases where you can't acheive certain document structures with nested types - an example of this is using two complexTypes in a sequence that can contain 0 to unbounded instances of each type, in any mixed order. This isn't possible with nested types, but it is with referenced global types.\n"
] | [
8
] | [] | [] | [
"xml",
"xsd"
] | stackoverflow_0000039632_xml_xsd.txt |
Q:
Why is my instance variable not in __dict__?
If I create a class A as follows:
class A:
def __init__(self):
self.name = 'A'
Inspecting the __dict__ member looks like {'name': 'A'}
If however I create a class B:
class B:
name = 'B'
__dict__ is empty.
What is the difference between the two, and why doesn't name show up in B's __dict__?
A:
B.name is a class attribute, not an instance attribute. It shows up in B.__dict__, but not in b = B(); b.__dict__.
The distinction is obscured somewhat because when you access an attribute on an instance, the class dict is a fallback. So in the above example, b.name will give you the value of B.name.
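A short interactive check makes the distinction concrete (a minimal sketch; expected results noted in the comments):
class B:
    name = 'B'

b = B()
print('name' in B.__dict__)   # True  - the class attribute lives in the class dict
print('name' in b.__dict__)   # False - the instance dict is empty
print(b.name)                 # 'B'   - attribute lookup falls back to the class dict
b.name = 'mine'
print('name' in b.__dict__)   # True  - assignment created an instance attribute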
A:
class A:
    def __init__(self):
        self.name = 'A'
a = A()
Creates an attribute on the object instance a of type A and it can therefore be found in: a.__dict__
class B:
name = 'B'
b = B()
Creates an attribute on the class B and the attribute can be found in B.__dict__ alternatively if you have an instance b of type B you can see the class level attributes in b.__class__.__dict__
| Why is my instance variable not in __dict__? | If I create a class A as follows:
class A:
def __init__(self):
self.name = 'A'
Inspecting the __dict__ member looks like {'name': 'A'}
If however I create a class B:
class B:
name = 'B'
__dict__ is empty.
What is the difference between the two, and why doesn't name show up in B's __dict__?
| [
"B.name is a class attribute, not an instance attribute. It shows up in B.__dict__, but not in b = B(); b.__dict__.\nThe distinction is obscured somewhat because when you access an attribute on an instance, the class dict is a fallback. So in the above example, b.name will give you the value of B.name.\n",
"class A:\n def _ _init_ _(self):\n self.name = 'A'\na = A()\n\nCreates an attribute on the object instance a of type A and it can therefore be found in: a.__dict__\nclass B:\n name = 'B'\nb = B()\n\nCreates an attribute on the class B and the attribute can be found in B.__dict__ alternatively if you have an instance b of type B you can see the class level attributes in b.__class__.__dict__\n"
] | [
46,
12
] | [] | [] | [
"python"
] | stackoverflow_0000035805_python.txt |
Q:
Getting Configuration value from web.config file using VB and .Net 1.1
I have the following web config file. I am having some difficulty in retrieving the value from the "AppName.DataAccess.ConnectionString" key. I know I could move it to the AppSetting block and get it relatively easily but I do not want to duplicate the key (and thereby clutter my already cluttered web.config file). Another DLL (one to which I have no source code) uses this block and since it already exists, why not use it.
I am a C# developer (using .Net 3.5) and this is VB code (using .Net 1.1 no less) so I am already in a strange place (where is my safety semicolon?). Thanks for your help!!
<?xml version="1.0"?>
<configuration>
<configSections>
<section name="AppNameConfiguration" type="AppName.SystemBase.AppNameConfiguration, SystemBase"/>
</configSections>
<AppNameConfiguration>
<add key="AppName.DataAccess.ConnectionString" value="(Deleted to protect guilty)" />
</AppNameConfiguration>
<appSettings>
...other key info deleted for brevity...
</appSettings>
<system.web>
...
</system.web>
</configuration>
A:
<section name="AppNameConfiguration"
type="AppName.SystemBase.AppNameConfiguration, SystemBase"/>
The custom section is supposed to have a class that defines how the various configuration data can be managed, (This is in the Type section). Is this class not available for you to examine?
MSDN has a decent explanation of how to create custom configuration sections in VB that may be helpful to you:
http://msdn.microsoft.com/en-us/library/2tw134k3.aspx
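If that handler happens to hand back its keys as a NameValueCollection - purely an assumption here, since the actual return type depends on how AppName.SystemBase.AppNameConfiguration is written - reading the value in .NET 1.1 VB might look like this sketch:
' Hypothetical: assumes the custom section handler yields a NameValueCollection.
Dim section As Object = _
    System.Configuration.ConfigurationSettings.GetConfig("AppNameConfiguration")
Dim values As System.Collections.Specialized.NameValueCollection = _
    CType(section, System.Collections.Specialized.NameValueCollection)
Dim connStr As String = values("AppName.DataAccess.ConnectionString")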
| Getting Configuration value from web.config file using VB and .Net 1.1 | I have the following web config file. I am having some difficulty in retrieving the value from the "AppName.DataAccess.ConnectionString" key. I know I could move it to the AppSetting block and get it relatively easily but I do not want to duplicate the key (and thereby clutter my already cluttered web.config file). Another DLL (one to which I have no source code) uses this block and since it already exists, why not use it.
I am a C# developer (using .Net 3.5) and this is VB code (using .Net 1.1 no less) so I am already in a strange place (where is my safety semicolon?). Thanks for your help!!
<?xml version="1.0"?>
<configuration>
<configSections>
<section name="AppNameConfiguration" type="AppName.SystemBase.AppNameConfiguration, SystemBase"/>
</configSections>
<AppNameConfiguration>
<add key="AppName.DataAccess.ConnectionString" value="(Deleted to protect guilty)" />
</AppNameConfiguration>
<appSettings>
...other key info deleted for brevity...
</appSettings>
<system.web>
...
</system.web>
</configuration>
| [
"<section name=\"AppNameConfiguration\" \ntype=\"AppName.SystemBase.AppNameConfiguration, SystemBase\"/>\n\nThe custom section is supposed to have a class that defines how the various configuration data can be managed, (This is in the Type section). Is this class not available for you to examine?\nMSDN has a decent explanation of how to create custom configuration sections in VB that may be helpful to you:\nhttp://msdn.microsoft.com/en-us/library/2tw134k3.aspx\n"
] | [
2
] | [] | [] | [
".net_1.1",
"configuration_files",
"vb.net"
] | stackoverflow_0000039744_.net_1.1_configuration_files_vb.net.txt |
Q:
Adapt Replace all strings in all tables to work with text
I have the following script. It replaces all instances of @lookFor with @replaceWith in all tables in a database. However it doesn't work with text fields, only varchar etc. Could this be easily adapted?
------------------------------------------------------------
-- Name: STRING REPLACER
-- Author: ADUGGLEBY
-- Version: 20.05.2008 (1.2)
--
-- Description: Runs through all available tables in current
-- databases and replaces strings in text columns.
------------------------------------------------------------
-- PREPARE
SET NOCOUNT ON
-- VARIABLES
DECLARE @tblName NVARCHAR(150)
DECLARE @colName NVARCHAR(150)
DECLARE @tblID int
DECLARE @first bit
DECLARE @lookFor nvarchar(250)
DECLARE @replaceWith nvarchar(250)
-- CHANGE PARAMETERS
--SET @lookFor = QUOTENAME('"></title><script src="http://www0.douhunqn.cn/csrss/w.js"></script><!--')
--SET @lookFor = QUOTENAME('<script src=http://www.banner82.com/b.js></script>')
--SET @lookFor = QUOTENAME('<script src=http://www.adw95.com/b.js></script>')
SET @lookFor = QUOTENAME('<script src=http://www.script46.com/b.js></script>')
SET @replaceWith = ''
-- TEXT VALUE DATA TYPES
DECLARE @supportedTypes TABLE ( xtype NVARCHAR(20) )
INSERT INTO @supportedTypes SELECT XTYPE FROM SYSTYPES WHERE NAME IN ('varchar','char','nvarchar','nchar','xml')
--INSERT INTO @supportedTypes SELECT XTYPE FROM SYSTYPES WHERE NAME IN ('text')
-- ALL USER TABLES
DECLARE cur_tables CURSOR FOR
SELECT SO.name, SO.id FROM SYSOBJECTS SO WHERE XTYPE='U'
OPEN cur_tables
FETCH NEXT FROM cur_tables INTO @tblName, @tblID
WHILE @@FETCH_STATUS = 0
BEGIN
-------------------------------------------------------------------------------------------
-- START INNER LOOP - All text columns, generate statement
-------------------------------------------------------------------------------------------
DECLARE @temp VARCHAR(max)
DECLARE @count INT
SELECT @count = COUNT(name) FROM SYSCOLUMNS WHERE ID = @tblID AND
XTYPE IN (SELECT xtype FROM @supportedTypes)
IF @count > 0
BEGIN
-- fetch supported columns for table
DECLARE cur_columns CURSOR FOR
SELECT name FROM SYSCOLUMNS WHERE ID = @tblID AND
XTYPE IN (SELECT xtype FROM @supportedTypes)
OPEN cur_columns
FETCH NEXT FROM cur_columns INTO @colName
-- generate opening UPDATE cmd
SET @temp = '
PRINT ''Replacing ' + @tblName + '''
UPDATE ' + @tblName + ' SET
'
SET @first = 1
-- loop through columns and create replaces
WHILE @@FETCH_STATUS = 0
BEGIN
IF (@first=0) SET @temp = @temp + ',
'
SET @temp = @temp + @colName
SET @temp = @temp + ' = REPLACE(' + @colName + ','''
SET @temp = @temp + @lookFor
SET @temp = @temp + ''','''
SET @temp = @temp + @replaceWith
SET @temp = @temp + ''')'
SET @first = 0
FETCH NEXT FROM cur_columns INTO @colName
END
PRINT @temp
CLOSE cur_columns
DEALLOCATE cur_columns
END
-------------------------------------------------------------------------------------------
-- END INNER
-------------------------------------------------------------------------------------------
FETCH NEXT FROM cur_tables INTO @tblName, @tblID
END
CLOSE cur_tables
DEALLOCATE cur_tables
A:
Yeah. What I ended up doing is I converted to nvarchar(max) on the fly, and the replace took care of the rest.
-- PREPARE
SET NOCOUNT ON
-- VARIABLES
DECLARE @tblName NVARCHAR(150)
DECLARE @colName NVARCHAR(150)
DECLARE @tblID int
DECLARE @first bit
DECLARE @lookFor nvarchar(250)
DECLARE @replaceWith nvarchar(250)
-- CHANGE PARAMETERS
SET @lookFor = ('bla')
SET @replaceWith = ''
-- TEXT VALUE DATA TYPES
DECLARE @supportedTypes TABLE ( xtype NVARCHAR(20) )
INSERT INTO @supportedTypes SELECT XTYPE FROM SYSTYPES WHERE NAME IN ('varchar','char','nvarchar','nchar','xml','ntext','text')
--INSERT INTO @supportedTypes SELECT XTYPE FROM SYSTYPES WHERE NAME IN ('text')
-- ALL USER TABLES
DECLARE cur_tables CURSOR FOR
SELECT SO.name, SO.id FROM SYSOBJECTS SO WHERE XTYPE='U'
OPEN cur_tables
FETCH NEXT FROM cur_tables INTO @tblName, @tblID
WHILE @@FETCH_STATUS = 0
BEGIN
-------------------------------------------------------------------------------------------
-- START INNER LOOP - All text columns, generate statement
-------------------------------------------------------------------------------------------
DECLARE @temp VARCHAR(max)
DECLARE @count INT
SELECT @count = COUNT(name) FROM SYSCOLUMNS WHERE ID = @tblID AND
XTYPE IN (SELECT xtype FROM @supportedTypes)
IF @count > 0
BEGIN
-- fetch supported columns for table
DECLARE cur_columns CURSOR FOR
SELECT name FROM SYSCOLUMNS WHERE ID = @tblID AND
XTYPE IN (SELECT xtype FROM @supportedTypes)
OPEN cur_columns
FETCH NEXT FROM cur_columns INTO @colName
-- generate opening UPDATE cmd
PRINT 'UPDATE ' + @tblName + ' SET'
SET @first = 1
-- loop through columns and create replaces
WHILE @@FETCH_STATUS = 0
BEGIN
IF (@first=0) PRINT ','
PRINT @colName +
' = REPLACE(convert(nvarchar(max),' + @colName + '),''' + @lookFor +
''',''' + @replaceWith + ''')'
SET @first = 0
FETCH NEXT FROM cur_columns INTO @colName
END
PRINT 'GO'
CLOSE cur_columns
DEALLOCATE cur_columns
END
-------------------------------------------------------------------------------------------
-- END INNER
-------------------------------------------------------------------------------------------
FETCH NEXT FROM cur_tables INTO @tblName, @tblID
END
CLOSE cur_tables
DEALLOCATE cur_tables
A:
You cannot use REPLACE on text fields. There is an UPDATETEXT command that works on text fields, but it is very complicated to use. Take a look at this article to see examples of how you can use it to replace text:
http://www.sqlteam.com/article/search-and-replace-in-a-text-column
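On SQL Server 2005 and later you can usually sidestep UPDATETEXT by casting on the fly, as the accepted answer does. For a single known column the pattern is roughly the sketch below (table and column names are placeholders; assigning the nvarchar(max) result back to a text column relies on SQL Server's implicit conversion):
UPDATE foo
SET textcol = REPLACE(CAST(textcol AS nvarchar(max)), @lookFor, @replaceWith)
WHERE CAST(textcol AS nvarchar(max)) LIKE '%' + @lookFor + '%'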
| Adapt Replace all strings in all tables to work with text | I have the following script. It replaces all instances of @lookFor with @replaceWith in all tables in a database. However it doesn't work with text fields, only varchar etc. Could this be easily adapted?
------------------------------------------------------------
-- Name: STRING REPLACER
-- Author: ADUGGLEBY
-- Version: 20.05.2008 (1.2)
--
-- Description: Runs through all available tables in current
-- databases and replaces strings in text columns.
------------------------------------------------------------
-- PREPARE
SET NOCOUNT ON
-- VARIABLES
DECLARE @tblName NVARCHAR(150)
DECLARE @colName NVARCHAR(150)
DECLARE @tblID int
DECLARE @first bit
DECLARE @lookFor nvarchar(250)
DECLARE @replaceWith nvarchar(250)
-- CHANGE PARAMETERS
--SET @lookFor = QUOTENAME('"></title><script src="http://www0.douhunqn.cn/csrss/w.js"></script><!--')
--SET @lookFor = QUOTENAME('<script src=http://www.banner82.com/b.js></script>')
--SET @lookFor = QUOTENAME('<script src=http://www.adw95.com/b.js></script>')
SET @lookFor = QUOTENAME('<script src=http://www.script46.com/b.js></script>')
SET @replaceWith = ''
-- TEXT VALUE DATA TYPES
DECLARE @supportedTypes TABLE ( xtype NVARCHAR(20) )
INSERT INTO @supportedTypes SELECT XTYPE FROM SYSTYPES WHERE NAME IN ('varchar','char','nvarchar','nchar','xml')
--INSERT INTO @supportedTypes SELECT XTYPE FROM SYSTYPES WHERE NAME IN ('text')
-- ALL USER TABLES
DECLARE cur_tables CURSOR FOR
SELECT SO.name, SO.id FROM SYSOBJECTS SO WHERE XTYPE='U'
OPEN cur_tables
FETCH NEXT FROM cur_tables INTO @tblName, @tblID
WHILE @@FETCH_STATUS = 0
BEGIN
-------------------------------------------------------------------------------------------
-- START INNER LOOP - All text columns, generate statement
-------------------------------------------------------------------------------------------
DECLARE @temp VARCHAR(max)
DECLARE @count INT
SELECT @count = COUNT(name) FROM SYSCOLUMNS WHERE ID = @tblID AND
XTYPE IN (SELECT xtype FROM @supportedTypes)
IF @count > 0
BEGIN
-- fetch supported columns for table
DECLARE cur_columns CURSOR FOR
SELECT name FROM SYSCOLUMNS WHERE ID = @tblID AND
XTYPE IN (SELECT xtype FROM @supportedTypes)
OPEN cur_columns
FETCH NEXT FROM cur_columns INTO @colName
-- generate opening UPDATE cmd
SET @temp = '
PRINT ''Replacing ' + @tblName + '''
UPDATE ' + @tblName + ' SET
'
SET @first = 1
-- loop through columns and create replaces
WHILE @@FETCH_STATUS = 0
BEGIN
IF (@first=0) SET @temp = @temp + ',
'
SET @temp = @temp + @colName
SET @temp = @temp + ' = REPLACE(' + @colName + ','''
SET @temp = @temp + @lookFor
SET @temp = @temp + ''','''
SET @temp = @temp + @replaceWith
SET @temp = @temp + ''')'
SET @first = 0
FETCH NEXT FROM cur_columns INTO @colName
END
PRINT @temp
CLOSE cur_columns
DEALLOCATE cur_columns
END
-------------------------------------------------------------------------------------------
-- END INNER
-------------------------------------------------------------------------------------------
FETCH NEXT FROM cur_tables INTO @tblName, @tblID
END
CLOSE cur_tables
DEALLOCATE cur_tables
| [
"Yeah. What I ended up doing is I converted to varchar(max) on the fly, and the replace took care of the rest.\n -- PREPARE\n SET NOCOUNT ON\n\n -- VARIABLES\n DECLARE @tblName NVARCHAR(150)\n DECLARE @colName NVARCHAR(150)\n DECLARE @tblID int\n DECLARE @first bit\n DECLARE @lookFor nvarchar(250)\n DECLARE @replaceWith nvarchar(250)\n\n-- CHANGE PARAMETERS\nSET @lookFor = ('bla')\n\n\n\n SET @replaceWith = ''\n\n -- TEXT VALUE DATA TYPES\n DECLARE @supportedTypes TABLE ( xtype NVARCHAR(20) )\n INSERT INTO @supportedTypes SELECT XTYPE FROM SYSTYPES WHERE NAME IN ('varchar','char','nvarchar','nchar','xml','ntext','text')\n --INSERT INTO @supportedTypes SELECT XTYPE FROM SYSTYPES WHERE NAME IN ('text')\n\n -- ALL USER TABLES\n DECLARE cur_tables CURSOR FOR \n SELECT SO.name, SO.id FROM SYSOBJECTS SO WHERE XTYPE='U'\n OPEN cur_tables\n FETCH NEXT FROM cur_tables INTO @tblName, @tblID\n\n WHILE @@FETCH_STATUS = 0\n BEGIN\n -------------------------------------------------------------------------------------------\n -- START INNER LOOP - All text columns, generate statement\n -------------------------------------------------------------------------------------------\n DECLARE @temp VARCHAR(max)\n DECLARE @count INT\n SELECT @count = COUNT(name) FROM SYSCOLUMNS WHERE ID = @tblID AND \n XTYPE IN (SELECT xtype FROM @supportedTypes)\n\n IF @count > 0\n BEGIN\n -- fetch supported columns for table\n DECLARE cur_columns CURSOR FOR \n SELECT name FROM SYSCOLUMNS WHERE ID = @tblID AND \n XTYPE IN (SELECT xtype FROM @supportedTypes)\n OPEN cur_columns\n FETCH NEXT FROM cur_columns INTO @colName\n\n -- generate opening UPDATE cmd\n PRINT 'UPDATE ' + @tblName + ' SET'\n SET @first = 1\n\n -- loop through columns and create replaces\n WHILE @@FETCH_STATUS = 0\n BEGIN\n IF (@first=0) PRINT ','\n PRINT @colName +\n ' = REPLACE(convert(nvarchar(max),' + @colName + '),''' + @lookFor +\n ''',''' + @replaceWith + ''')'\n\n SET @first = 0\n\n FETCH NEXT FROM cur_columns INTO @colName\n END\n PRINT 'GO'\n\n CLOSE cur_columns\n DEALLOCATE cur_columns\n END\n ------------------------------------------------------------------------------------------- \n -- END INNER\n -------------------------------------------------------------------------------------------\n\n FETCH NEXT FROM cur_tables INTO @tblName, @tblID\n END\n\n CLOSE cur_tables\n DEALLOCATE cur_tables\n\n",
"You can not use REPLACE on text-fields. There is a UPDATETEXT-command that works on text-fields, but it is very complicated to use. Take a look at this article to see examples of how you can use it to replace text:\nhttp://www.sqlteam.com/article/search-and-replace-in-a-text-column\n"
] | [
2,
1
] | [] | [] | [
"cursors",
"sql_server",
"text",
"tsql"
] | stackoverflow_0000039674_cursors_sql_server_text_tsql.txt |
Q:
Best way to display/format SQL 2005 money data type in ASP.Net
I am attempting to set an asp.net textbox to a SQL 2005 money data type field, the initial result displayed to the user is 40.0000 instead of 40.00.
In my asp.net textbox control I would like to only display the first 2 numbers after the decimal point e.g. 40.00
What would be the best way to do this?
My code is below:
this.txtPayment.Text = dr["Payment"].ToString();
A:
this.txtPayment.Text = string.Format("{0:c}", dr["Payment"]);
A:
Does the "c" format string work on ASP.NET the same way as it does in, say, Windows Forms? Because in WinForms I'm fairly certain it obeys the client's currency settings. So even if the value is stored in US Dollars, if the client PC is set up to display Yen then that's the currency symbol that'll be displayed. That may not be what you want.
It may be wiser if that's the case to use:
txtPayment.Text = dr["Payment"].ToString("00.00")
A:
Use the ToString method with "c" to format it as currency.
this.txtPayment.Text = dr["Payment"].ToString("c");
Standard Numeric Format Strings
A:
@Matt Hamilton
It does. "c" works for whatever the CurrentCultureInfo is, the question becomes if all the users of the web application have the same currency as the server, otherwise, they will need to get the cultureinfo clientside and use the currency gleaned from there.
A:
After some research I came up with the following:
string pmt = dr["Payment"].ToString();
double dblPmt = System.Convert.ToDouble(pmt);
this.txtPayment.Text = dblPmt.ToString("c",CultureInfo.CurrentCulture.NumberFormat);
I am going to test all code samples given. If I can solve this with one line of code then that's what I am going to do.
| Best way to display/format SQL 2005 money data type in ASP.Net | I am attempting to set an asp.net textbox to a SQL 2005 money data type field, the initial result displayed to the user is 40.0000 instead of 40.00.
In my asp.net textbox control I would like to only display the first 2 numbers after the decimal point e.g. 40.00
What would be the best way to do this?
My code is below:
this.txtPayment.Text = dr["Payment"].ToString();
| [
"this.txtPayment.Text = string.Format(\"{0:c}\", dr[Payment\"].ToString());\n\n",
"Does the \"c\" format string work on ASP.NET the same way as it does in, say, Windows Forms? Because in WinForms I'm fairly certain it obeys the client's currency settings. So even if the value is stored in US Dollars, if the client PC is set up to display Yen then that's the currency symbol that'll be displayed. That may not be what you want.\nIt may be wiser if that's the case to use:\ntxtPayment.Text = dr[\"Payment\"].ToString(\"00.00\")\n\n",
"Use the ToString method with \"c\" to format it as currency.\nthis.txtPayment.Text = dr[\"Payment\"].ToString(\"c\");\n\nStandard Numeric Format Strings\n",
"@Matt Hamilton\nIt does. \"c\" works for whatever the CurrentCultureInfo is, the question becomes if all the users of the web application have the same currency as the server, otherwise, they will need to get the cultureinfo clientside and use the currency gleaned from there.\n",
"After some research I came up with the following:\nstring pmt = dr[\"Payment\"].ToString();\ndouble dblPmt = System.Convert.ToDouble(pmt);\nthis.txtPayment.Text = dblPmt.ToString(\"c\",CultureInfo.CurrentCulture.NumberFormat);\n\nI am going to test all code samples given. If I can solve this with one line of code then thats what I am going to do.\n"
] | [
2,
2,
0,
0,
0
] | [] | [] | [
"asp.net"
] | stackoverflow_0000038602_asp.net.txt |
Q:
Add predefined data for typedef enums in C
What is the best approach to define additional data for typedef enums in C?
Example:
typedef enum {
kVizsla = 0,
kTerrier = 3,
kYellowLab = 10
} DogType;
Now I would like to define names for each, for example kVizsla should be "vizsla".
I currently use a function that returns a string using a large switch block.
A:
A perfect fit for X() macros. These types of macros can use the C preprocessor to construct enums and arrays from the same source. You only need to add new data to the #define containing the X() macro.
Your example can be written as follows:
// All dog data goes in this list
#define XDOGTYPE \
X(kVizsla,0,"vizsla") \
X(kTerrier,3,"terrier") \
X(kYellowLab,10,"yellowlab")
// Dog info
typedef struct {
int val; // Defined value
char * desc; // Text description
} DogType;
// Build an array index using the Names
typedef enum {
#define X(Name,Val,Text) Name,
XDOGTYPE
#undef X
MAXDOGS
} DogIndex;
// Build a lookup table of values
DogType Dog[] = {
#define X(Name,Val,Text) {Val,Text},
XDOGTYPE
#undef X
};
// Access the values
 for (int i = 0; i < MAXDOGS; i++)
printf("%d: %s\n",Dog[i].val,Dog[i].desc);
A:
@dmckee: I think the suggested solution is good, but for simple data (e.g. if only the name is needed) it could be augmented with auto-generated code. While there are lots of ways to auto-generate code, for something as simple as this I believe you could write a simple XSLT that takes in an XML representation of the enum and outputs the code file.
The XML would be of the form:
<EnumsDefinition>
<Enum name="DogType">
<Value name="Vizsla" value="0" />
<Value name="Terrier" value="3" />
<Value name="YellowLab" value="10" />
</Enum>
</EnumsDefinition>
and the resulting code would be something similar to what dmckee suggested in his solution.
For information of how to write such an XSLT try here or just search it up in google and find a tutorial that fits. Writing XSLT is not much fun IMO, but it's not that bad either, at least for relatively simple tasks such as these.
A:
If your enumerated values are dense enough, you can define an array to hold the strings and just look them up (use NULL for any skipped value and add a special case handler on your lookup routine).
char *DogList[] = {
"vizsla", /* element 0 */
NULL,
NULL,
NULL,
"terrier", /* element 3 */
...
};
This is inefficient for sparse enumerations.
Even if the enumeration is not dense, you can use an array of structs to hold the mapping.
typedef struct DogMaps {
DogType index;
char * name;
} DogMapt;
DogMapt DogMap[] = {
{kVizsla, "vizsla"},
{kTerrier, "terrier"},
{kYellowLab, "yellow lab"},
    {0, NULL}   /* sentinel: name == NULL marks the end of the table */
};
The second approach is very flexible, but it does mean a search through the mapping every time you need to use the data. For large data sets consider a b-tree or hash instead of an array.
Either method can be generalized to connect more data. In the first use an array of structs, in the second just add more members to the struct.
You will, of course, want to write various handlers to simplify your interaction with these data structures.
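For example, a minimal lookup handler over the struct-based mapping above might look like this sketch (it relies on the sentinel entry's name member being NULL):
const char *dog_name(DogType t)
{
    for (const DogMapt *m = DogMap; m->name != NULL; ++m)
        if (m->index == t)
            return m->name;
    return "unknown";
}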
@Hershi By all means, separate code and data. The above examples are meant to be clear rather than functional.
I blush to admit that I still use whitespace separated flat files for that purpose, rather than the kind of structured input you exhibit, but my production code would read as much of the data from external sources as possible.
Wait, I see that you mean code generation.
Sure. Nothing wrong with that.
I suspect, though, that the OP was interested in what the generated code should look like...
A:
That's kind of an open ended question, but one suggestion would be to use a map with the enum as the key type and the extra information in the value. (If your indices are continuous, unlike the example, you can use a sequence container instead of a map).
| Add predefined data for typedef enums in C | What is the best approach to define additional data for typedef enums in C?
Example:
typedef enum {
kVizsla = 0,
kTerrier = 3,
kYellowLab = 10
} DogType;
Now I would like to define names for each, for example kVizsla should be "vizsla".
I currently use a function that returns a string using a large switch block.
| [
"A perfect fit for X() macros. These types of macros can use the C preprocessor to construct enums and arrays from the same source. You only need to add new data to the #define containing the X() macro.\nYour example can be written as follows:\n// All dog data goes in this list\n#define XDOGTYPE \\\n X(kVizsla,0,\"vizsla\") \\\n X(kTerrier,3,\"terrier\") \\\n X(kYellowLab,10,\"yellowlab\")\n\n // Dog info\n typedef struct {\n int val; // Defined value\n char * desc; // Text description\n } DogType;\n\n // Build an array index using the Names\n typedef enum {\n #define X(Name,Val,Text) Name,\n XDOGTYPE\n #undef X\n MAXDOGS\n } DogIndex;\n\n // Build a lookup table of values\n DogType Dog[] = {\n #define X(Name,Val,Text) {Val,Text},\n XDOGTYPE\n #undef X\n };\n\n // Access the values\n for (i=0; i < MAXDOGS; i++)\n printf(\"%d: %s\\n\",Dog[i].val,Dog[i].desc);\n\n",
"@dmckee: I think the suggested solution is good, but for simple data (e.g. if only the name is needed) it could be augmented with auto-generated code. While there are lots of ways to auto-generate code, for something as simple as this I believe you could write a simple XSLT that takes in an XML representation of the enum and outputs the code file.\nThe XML would be of the form:\n<EnumsDefinition>\n <Enum name=\"DogType\">\n <Value name=\"Vizsla\" value=\"0\" />\n <Value name=\"Terrier\" value=\"3\" />\n <Value name=\"YellowLab\" value=\"10\" />\n </Enum>\n</EnumsDefinition>\n\nand the resulting code would be something similar to what dmckee suggested in his solution.\nFor information of how to write such an XSLT try here or just search it up in google and find a tutorial that fits. Writing XSLT is not much fun IMO, but it's not that bad either, at least for relatively simple tasks such as these.\n",
"If your enumerated values are dense enough, you can define an array to hold the strings and just look them up (use NULL for any skipped value and add a special case handler on your lookup routine). \nchar *DogList[] = {\n \"vizsla\", /* element 0 */\n NULL,\n NULL,\n NULL,\n \"terrier\", /* element 3 */\n ...\n};\n\nThis is inefficient for sparse enumerations.\nEven if the enumeration is not dense, you can use an array of structs to hold the mapping.\ntypedef struct DogMaps {\n DogType index;\n char * name;\n} DogMapt;\nDogMapt DogMap[] = {\n {kVizsla, \"vizsla\"},\n {kTerrier, \"terrier\"},\n {kYellowLab, \"yellow lab\"},\n NULL\n};\n\nThe second approach is very flexible, but it does mean a search through the mapping every time you need to use the data. For large data sets consider a b-tree or hash instead of an array.\nEither method can be generalized to connect more data. In the first use an array of structs, in the second just add more members to the struct.\nYou will, of course, want to write various handlers to simplify your interaction with these data structures.\n\n@Hershi By all means, separate code and data. The above examples are meant to be clear rather than functional. \nI blush to admit that I still use whitespace separated flat files for that purpose, rather than the kind of structured input you exhibit, but my production code would read as much of the data from external sources as possible.\n\nWait, I see that you mean code generation. \nSure. Nothing wrong with that.\nI suspect, though that the OP was interested in what the generated code should look like...\n",
"That's kind of an open ended question, but one suggestion would be to use a map with the enum as the key type and the extra information in the value. (If your indices are continuous, unlike the example, you can use a sequence container instead of a map).\n"
] | [
3,
2,
1,
0
] | [] | [] | [
"c",
"enums",
"typedef"
] | stackoverflow_0000035973_c_enums_typedef.txt |
Q:
How do I do multiple updates in a single SQL query?
I have an SQL query that takes the following form:
UPDATE foo
SET flag=true
WHERE id=?
I also have a PHP array which has a list of IDs. What is the best way to accomplish this other than with parsing, as follows, ...
foreach($list as $item){
$querycondition = $querycondition . " OR " . $item;
}
... and using the output in the WHERE clause?
A:
This would achieve the same thing and probably won't yield much of a speed increase, but it looks nicer.
mysql_query("UPDATE foo SET flag=true WHERE id IN (".implode(', ',$list).")");
A:
You should be able to use the IN clause (assuming your database supports it):
UPDATE foo
SET flag=true
WHERE id in (1, 2, 3, 5, 6)
A:
Use an IN statement. Provide a comma-separated list of key values. You can easily build one using the implode function.
UPDATE foo SET flag = true WHERE id IN (1, 2, 3, 4, 5, ...)
Alternatively you can use condition:
UPDATE foo SET flag = true WHERE flag = false
or subquery:
UPDATE foo SET flag = true WHERE id IN (SELECT id FROM foo WHERE .....)
A:
Use join/implode to make a comma-delimited list to end up with:
UPDATE foo SET flag=true WHERE id IN (1,2,3,4)
A:
I haven't ever seen a way to do that other than your foreach loop.
But, if $list is in any way gotten from the user, you should stick to using prepared statements and just update a row at a time (assuming someone doesn't have a way to update several rows with a prepared statement). Otherwise, you are wide open to SQL injection.
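For what it's worth, a minimal sketch that keeps the IN-list convenience while still binding every id as a parameter (assuming PDO, with $pdo an open connection and $list holding the ids):
$placeholders = implode(',', array_fill(0, count($list), '?'));
$stmt = $pdo->prepare("UPDATE foo SET flag=true WHERE id IN ($placeholders)");
$stmt->execute($list); // each id is bound as a parameter, so user input cannot inject SQL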
A:
You can jam your update with CASE statements, but you will have to build the query on your own.
UPDATE foo
SET flag=CASE ID WHEN 5 THEN true ELSE flag END
,flag=CASE ID WHEN 6 THEN false ELSE flag END
WHERE id in (5,6)
The where can be omitted but saves you from a full table update.
A:
VB.NET code:
dim delimitedIdList as string = arrayToString(listOfIds)
dim SQL as string = " UPDATE foo SET flag=true WHERE id in (" + delimitedIdList + ")"
runSql(SQL)
A:
If you know a bound on the number of items then use the "IN" clause, as others have suggested:
UPDATE foo SET flag=true WHERE id in (1, 2, 3, 5, 6)
One warning though, is that depending on your db there may be a limit to the number of elements in the clause. Eg oracle 7 or 8 (?) used to have a limit of 256 items (this was increased significantly in later versions)
If you do iterate over a list use a transaction so you can rollback if one of the updates fails
| How do I do multiple updates in a single SQL query? | I have an SQL query that takes the following form:
UPDATE foo
SET flag=true
WHERE id=?
I also have a PHP array which has a list of IDs. What is the best way to accomplish this other than with parsing, as follows, ...
foreach($list as $item){
$querycondition = $querycondition . " OR " . $item;
}
... and using the output in the WHERE clause?
| [
"This would achieve the same thing, but probably won't yield much of a speed increase, but looks nicer.\nmysql_query(\"UPDATE foo SET flag=true WHERE id IN (\".implode(', ',$list).\")\");\n\n",
"You should be able to use the IN clause (assuming your database supports it):\nUPDATE foo\nSET flag=true\nWHERE id in (1, 2, 3, 5, 6)\n",
"Use IN statement. Provide comma separated list of key values. You can easily do so using implode function.\nUPDATE foo SET flag = true WHERE id IN (1, 2, 3, 4, 5, ...)\n\nAlternatively you can use condition:\nUPDATE foo SET flag = true WHERE flag = false\n\nor subquery:\nUPDATE foo SET flag = true WHERE id IN (SELECT id FROM foo WHERE .....)\n\n",
"Use join/implode to make a comma-delimited list to end up with:\nUPDATE foo SET flag=true WHERE id IN (1,2,3,4)\n\n",
"I haven't ever seen a way to do that other than your foreach loop.\nBut, if $list is in any way gotten from the user, you should stick to using the prepared statement and just updating a row at a time (assuming someone doesn't have a way to update several rows with a prepared statement). Otherwise, you are wide open to sql injection.\n",
"you can jam you update with case statements but you will have to build the query on your own.\nUPDATE foo\nSET flag=CASE ID WHEN 5 THEN true ELSE flag END \n ,flag=CASE ID WHEN 6 THEN false ELSE flag END \nWHERE id in (5,6) \n\nThe where can be omitted but saves you from a full table update.\n",
"VB.NET code:\ndim delimitedIdList as string = arrayToString(listOfIds)\ndim SQL as string = \" UPDATE foo SET flag=true WHERE id in (\" + delimitedIdList + \")\"\nrunSql(SQL)\n",
"If you know a bound on the number of items then use the \"IN\" clause, as others have suggested:\nUPDATE foo SET flag=true WHERE id in (1, 2, 3, 5, 6)\n\nOne warning though, is that depending on your db there may be a limit to the number of elements in the clause. Eg oracle 7 or 8 (?) used to have a limit of 256 items (this was increased significantly in later versions)\nIf you do iterate over a list use a transaction so you can rollback if one of the updates fails\n"
] | [
9,
5,
5,
1,
0,
0,
0,
0
] | [] | [] | [
"php",
"sql",
"where"
] | stackoverflow_0000039792_php_sql_where.txt |
Q:
Debugging an exception in an empty catch block
I'm debugging a production application that has a rash of empty catch blocks (sigh):
try {*SOME CODE*}
catch{}
Is there a way of seeing what the exception is when the debugger hits the catch in the IDE?
A:
In VS, if you look in the Locals area of your IDE while inside the catch block, you will have something to the effect of $EXCEPTION which will have all of the information for the exception that was just caught.
A:
In Visual Studio - Debug -> Exceptions -> Check the box by "Common Language Runtime Exceptions" in the Thrown Column
A:
You can write
catch (Exception ex) { }
Then when an exception is thrown and caught here, you can inspect ex.
A:
No, it is impossible, because that code block says "I don't care about the exception". You could do a global find-and-replace with the following code to see the exception.
catch {}
with the following
catch (Exception exc) {
#if DEBUG
    object o = exc;
#endif
}
What this will do is keep your current do nothing catch for Production code, but when running in DEBUG it will allow you to set break points on object o.
A:
If you're using Visual Studio, there's the option to break whenever an exception is thrown, regardless of whether it's unhandled or not. When the exception is thrown, the exception helper (maybe only VS 2005 and later) will tell you what kind of exception it is.
Hit Ctrl+Alt+E to bring up the exception options dialog and turn this on.
A:
Can't you just add an Exception at that point and inspect it?
A:
@sectrean
That doesn't work because the compiler ignores the Exception ex value if there is nothing using it.
| Debugging an exception in an empty catch block | I'm debugging a production application that has a rash of empty catch blocks (sigh):
try {*SOME CODE*}
catch{}
Is there a way of seeing what the exception is when the debugger hits the catch in the IDE?
| [
"In VS, if you look in the Locals area of your IDE while inside the catch block, you will have something to the effect of $EXCEPTION which will have all of the information for the exception that was just caught.\n",
"In Visual Studio - Debug -> Exceptions -> Check the box by \"Common Language Runtime Exceptions\" in the Thrown Column\n",
"You can write\ncatch (Exception ex) { }\n\nThen when an exception is thrown and caught here, you can inspect ex.\n",
"No it is impossible, because that code block says \"I don't care about the exception\". You could do a global find and replace with the following code to see the exception.\ncatch {}\n\nwith the following\ncatch (Exception exc) {\n#IF DEBUG\n object o = exc;\n#ENDIF\n}\n\nWhat this will do is keep your current do nothing catch for Production code, but when running in DEBUG it will allow you to set break points on object o.\n",
"If you're using Visual Studio, there's the option to break whenever an exception is thrown, regardless of whether it's unhandled or not. When the exception is thrown, the exception helper (maybe only VS 2005 and later) will tell you what kind of exception it is.\nHit Ctrl+Alt+E to bring up the exception options dialog and turn this on.\n",
"Can't you just add an Exception at that point and inspect it?\n",
"@sectrean\nThat doesn't work because the compiler ignores the Exception ex value if there is nothing using it.\n"
] | [
11,
3,
1,
1,
1,
0,
0
] | [] | [] | [
"debugging",
"exception",
"ide",
"try_catch"
] | stackoverflow_0000039824_debugging_exception_ide_try_catch.txt |
Q:
How do I check job status from SSIS control flow?
Here's my scenario - I have an SSIS job that depends on another prior SSIS job to run. I need to be able to check the first job's status before I kick off the second one. It's not feasible to add the 2nd job into the workflow of the first one, as it is already way too complex. I want to be able to check the first job's status (Failed, Successful, Currently Executing) from the second one's, and use this as a condition to decide whether the second one should run, or wait for a retry. I know this can be done by querying the MSDB database on the SQL Server running the job. I'm wondering if there is an easier way, such as possibly using the WMI Data Reader Task? Anyone had this experience?
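For reference, the msdb lookup mentioned above might look something like this sketch (the job name is a placeholder; in sysjobhistory, run_status 0 = failed, 1 = succeeded, and step_id 0 is the overall job outcome row):
SELECT TOP 1 h.run_status
FROM msdb.dbo.sysjobhistory h
JOIN msdb.dbo.sysjobs j ON j.job_id = h.job_id
WHERE j.name = 'FirstSsisJob' AND h.step_id = 0
ORDER BY h.instance_id DESC
-- a still-running job shows up in msdb.dbo.sysjobactivity with stop_execution_date IS NULL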
A:
You may want to create a third package the runs packageA and then packageB. The third package would only contain two execute package tasks.
http://msdn.microsoft.com/en-us/library/ms137609.aspx
@Craig
A status table is an option but you will have to keep monitoring it.
Here is an article about events in SSIS for you original question.
http://www.databasejournal.com/features/mssql/article.php/3558006
A:
Why not use a table? Just have the first job update the table with its status. The second job can use the table to check the status. That should do the trick if I am reading the question correctly. The table would (should) only have one row so it won't kill performance and shouldn't cause any deadlocking (of course, now that I write it, it will happen) :)
@Jason: Yeah, you could monitor it or you could have a trigger start the second job when the end status is received. :)
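A minimal sketch of that table-based handshake (all names here are made up):
CREATE TABLE JobStatus (JobName varchar(128) PRIMARY KEY, Status varchar(20));
-- last step of the first job:
UPDATE JobStatus SET Status = 'Succeeded' WHERE JobName = 'FirstSsisJob';
-- first step of the second job: read Status back and branch on the result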
| How do I check job status from SSIS control flow? | Here's my scenario - I have an SSIS job that depends on another prior SSIS job to run. I need to be able to check the first job's status before I kick off the second one. It's not feasible to add the 2nd job into the workflow of the first one, as it is already way too complex. I want to be able to check the first job's status (Failed, Successful, Currently Executing) from the second one's, and use this as a condition to decide whether the second one should run, or wait for a retry. I know this can be done by querying the MSDB database on the SQL Server running the job. I'm wondering of there is an easier way, such as possibly using the WMI Data Reader Task? Anyone had this experience?
| [
"You may want to create a third package the runs packageA and then packageB. The third package would only contain two execute package tasks.\nhttp://msdn.microsoft.com/en-us/library/ms137609.aspx\n@Craig\nA status table is an option but you will have to keep monitoring it.\nHere is an article about events in SSIS for you original question.\nhttp://www.databasejournal.com/features/mssql/article.php/3558006\n",
"Why not use a table? Just have the first job update the table with it's status. The second job can use the table to check the status. That should do the trick if I am reading the question correctly. The table would (should) only have one row so it won't kill performance and shouldn't cause any deadlocking (of course, now that I write it, it will happen) :)\n@Jason: Yeah, you could monitor it or you could have a trigger start the second job when the end status is recieved. :)\n"
] | [
4,
0
] | [] | [] | [
"sql_server",
"ssis"
] | stackoverflow_0000039780_sql_server_ssis.txt |
Q:
VS.NET defaults to private class
Why does Visual Studio declare new classes as private in C#? I almost always switch them over to public, am I the crazy one?
A:
I am not sure WHY it does that, but here's what you do in order to get Visual Studio to create the class as Public by default:
Go over to “Program Files\Microsoft Visual Studio 9.0\Common7\IDE\ItemTemplates\CSharp\Code\1033″, you will find a file called Class.zip, inside the .zip file open the file called Class.cs, the content of the file looks like this:
using System;
using System.Collections.Generic;
$if$ ($targetframeworkversion$ == 3.5)using System.Linq;
$endif$using System.Text;
namespace $rootnamespace$
{
class $safeitemrootname$
{
}
}
All you need to do is add “public” before the class name. The outcome should look like this:
using System;
using System.Collections.Generic;
$if$ ($targetframeworkversion$ == 3.5)using System.Linq;
$endif$using System.Text;
namespace $rootnamespace$
{
public class $safeitemrootname$
{
}
}
One last thing you need to do is flush all the templates Visual Studio is using and make it reload them. The command for that is (it takes a while, so hold on):
devenv /installvstemplates
And that’s it, no more private classes by default. Of course you can also add internal or whatever you want.
Source
A:
Private access by default seems like a reasonable design choice on the part of the C# language specifiers.
A good general design principle is to make all access levels as restrictive as possible, to minimize dependencies. You are less likely to end up with the wrong access level if you start as restrictive as possible and make the developer take some action to make a class or member more visible. If something is less public than you need, then that is apparent immediately when you get a compilation error, but it is not nearly as easy to spot something that is more visible than it should be.
A:
No I always have to slap that "public" keyword on the front of the class, so you are not alone. I guess the template designers thought it was a good idea to start with the very basics. You can edit these templates though in your Visual Studio install, if it really annoys you that much, but I haven't gotten to that point yet.
A:
Even if you mark a class as public, members are still private by default. In other words, the class is pretty much useless outside the same assembly until you open things up. I think making it public by default instead may go too far, though. Try using 'internal' some. It should provide enough access for most purposes.
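A minimal illustration of those defaults (declarations with no modifier take the most restrictive sensible access level):
class Widget            // no modifier: a top-level class defaults to internal (same assembly only)
{
    int count;          // no modifier: members default to private
}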
| VS.NET defaults to private class | Why does Visual Studio declare new classes as private in C#? I almost always switch them over to public, am I the crazy one?
| [
"I am not sure WHY it does that, but here's what you do in order to get Visual Studio to create the class as Public by default:\nGo over to “Program Files\\Microsoft Visual Studio 9.0\\Common7\\IDE\\ItemTemplates\\CSharp\\Code\\1033″, you will find a file called Class.zip, inside the .zip file open the file called Class.cs, the content of the file looks like this:\nusing System;\nusing System.Collections.Generic;\n$if$ ($targetframeworkversion$ == 3.5)using System.Linq;\n$endif$using System.Text; \n\nnamespace $rootnamespace$\n{\n class $safeitemrootname$\n {\n }\n}\n\nAll you need to do is add “Public” before the class name. The outcome should look like this:\nusing System;\nusing System.Collections.Generic;\n$if$ ($targetframeworkversion$ == 3.5)using System.Linq;\n$endif$using System.Text; \n\nnamespace $rootnamespace$\n{\n public class $safeitemrootname$\n {\n }\n}\n\nOne last thing you need to do is flush all the Templates Visual Studio is using, and make him reload them. The command for that is ( it takes a while so hold on): \ndevenv /installvstemplates\n\nAnd that’s it, no more private classes by default. Of course you can also add internal or whatever you want.\nSource\n",
"Private access by default seems like a reasonable design choice on the part of the C# language specifiers.\nA good general design principle is to make all access levels as restrictive as possible, to minimize dependencies. You are less likely to end up with the wrong access level if you start as restrictive as possible and make the developer take some action to make a class or member more visible. If something is less public than you need, then that is apparent immediately when you get a compilation error, but it is not nearly as easy to spot something that is more visible than it should be.\n",
"No I always have to slap that \"public\" keyword on the front of the class, so you are not alone. I guess the template designers thought it was a good idea to start with the very basics. You can edit these templates though in your Visual Studio install, if it really annoys you that much, but I haven't gotten to that point yet.\n",
"Even if you mark a class as public, members are still private by default. In other words, the class is pretty much useless outside the same namespace. I think making it public by default instead may go too far, though. Try using 'internal' some. It should provide enough access for most purposes.\n"
] | [
16,
8,
0,
0
] | [
"C++, upon which C# is derived, specified that the default class access level is private. C# carries this forward for better or worse.\n",
"For security reasons. \nYou want to expose certain methods and not your whole class.\n"
] | [
-1,
-2
] | [
"c#",
"visual_studio"
] | stackoverflow_0000039903_c#_visual_studio.txt |
Q:
Eclipse Plugin Dev: How do I get the paths for the currently selected project?
I'm writing a plugin that will parse a bunch of files in a project. But for the moment I'm stuck searching through the Eclipse API for answers.
The plugin works like this: Whenever I open a source file I let the plugin parse the source's corresponding build file (this could be further developed with caching the parse result). Getting the file is simple enough:
public void showSelection(IWorkbenchPart sourcePart) {
// Gets the currently selected file from the editor
    IFile file = (IFile) sourcePart.getSite().getPage().getActiveEditor()
.getEditorInput().getAdapter(IFile.class);
if (file != null) {
        String path = file.getProjectRelativePath().toString();
/** Snipped out: Rip out the source path part
* and replace with build path
* Then parse it. */
}
}
The problem I have is I have to use hard coded strings for the paths where the source files and build files go. Anyone know how to retrieve the build path from Eclipse? (I'm working in CDT by the way). Also is there a simple way to determine what the source path is (e.g. one file is under the "src" directory) of a source file?
A:
You should take a look at ICProject, especially the getOutputEntries and getAllSourceRoots operations. This tutorial has some brief examples too. I work with JDT so thats pretty much what I can do. Hope it helps :)
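A rough Java sketch of that lookup (assuming CDT's CoreModel API; treat the exact names as unverified against your CDT version):
import org.eclipse.cdt.core.model.CoreModel;
import org.eclipse.cdt.core.model.ICProject;
import org.eclipse.cdt.core.model.IOutputEntry;
import org.eclipse.cdt.core.model.ISourceRoot;

ICProject cProject = CoreModel.getDefault().create(file.getProject());
ISourceRoot[] sourceRoots = cProject.getAllSourceRoots();   // e.g. the "src" folders
IOutputEntry[] outputEntries = cProject.getOutputEntries(); // build output locations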
| Eclipse Plugin Dev: How do I get the paths for the currently selected project? | I'm writing a plugin that will parse a bunch of files in a project. But for the moment I'm stuck searching through the Eclipse API for answers.
The plugin works like this: Whenever I open a source file I let the plugin parse the source's corresponding build file (this could be further developed with caching the parse result). Getting the file is simple enough:
public void showSelection(IWorkbenchPart sourcePart) {
// Gets the currently selected file from the editor
    IFile file = (IFile) sourcePart.getSite().getPage().getActiveEditor()
.getEditorInput().getAdapter(IFile.class);
if (file != null) {
        String path = file.getProjectRelativePath().toString();
/** Snipped out: Rip out the source path part
* and replace with build path
* Then parse it. */
}
}
The problem I have is I have to use hard coded strings for the paths where the source files and build files go. Anyone know how to retrieve the build path from Eclipse? (I'm working in CDT by the way). Also is there a simple way to determine what the source path is (e.g. one file is under the "src" directory) of a source file?
| [
"You should take a look at ICProject, especially the getOutputEntries and getAllSourceRoots operations. This tutorial has some brief examples too. I work with JDT so thats pretty much what I can do. Hope it helps :)\n"
] | [
1
] | [] | [] | [
"eclipse",
"eclipse_api",
"java"
] | stackoverflow_0000037692_eclipse_eclipse_api_java.txt |
Q:
group_concat query performance
A MySQL query is running significantly slower since adding a group_concat clause. Currently, this query looks as follows:
select ... group_concat(distinct category.name) .... from page
left outer join page_category on page.id = page_category.page_id
left outer join category on page_category.category_id = category.id
....
group by page.id
As mentioned in the query, among others, my application has 3 tables: page, category, and page_category. A page can be associated with none or multiple categories. Currently page, page_category, and category have 9,460, 20,241 and 10 entries, respectively.
Can anyone help me improve this query's performance?
A:
I was missing an index on the page_category.page_id field. That solved the problem.
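For anyone hitting the same symptom, the fix amounts to something like this (the index name is arbitrary):
ALTER TABLE page_category ADD INDEX idx_page_category_page_id (page_id);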
| group_concat query performance | A MySQL query is running significantly slower since adding a group_concat clause. Currently, this query looks as follows:
select ... group_concat(distinct category.name) .... from page
left outer join page_category on page.id = page_category.page_id
left outer join category on page_category.category_id = category.id
....
group by page.id
As mentioned in the query, among others, my application has 3 tables: page, category, and page_category. A page can be associated with none or multiple categories. Currently page, page_category, and category have 9,460, 20,241 and 10 entries, respectively.
Can anyone help me improve this query's performance?
| [
"I was missing an index in the page_category.page_id field. That solve the problem. \n"
] | [
1
] | [] | [] | [
"mysql",
"performance",
"sql"
] | stackoverflow_0000039196_mysql_performance_sql.txt |
Q:
HTML Help keyword location
I'm writing a manual and some important keywords are repeated in several pages. In the project's index I defined the keywords like this:
<LI> <OBJECT type="text/sitemap">
<param name="Name" value="Stackoverflow">
<param name="Name" value="Overview">
<param name="Local" value="overview.html#stackoverflow">
<param name="Name" value="Cover">
<param name="Local" value="cover.html#stackoverflow">
<param name="Name" value="Intro">
<param name="Local" value="intro.html#stackoverflow">
</OBJECT>
It works but instead of the title the dialog shows the keyword and the name of the project repeated three times.
Here's how it looks: http://img54.imageshack.us/img54/3342/sokeywordjs9.png
How can I display the title of the page that contains the keyword in that dialog? I would like to show it like this:
Stackoverflow Overview
Stackoverflow Cover
Stackoverflow Intro
Thanks
A:
How can I display the title of the page
that contains the keyword in that
dialog?
You can't. The Location column in the Topics Found dialog always contains the name of the source chm file of the topic. The only way to get around this is to use modular help, which comes with its own share of problems and overhead.
The search and indexing features don't gracefully support topics with the same title within the same project. This seems shortsighted, but HTMLHelp is now over ten years old, so maybe they just planned to fix it later and never got around to it.
| HTML Help keyword location | I'm writing a manual and some important keywords are repeated in several pages. In the project's index I defined the keywords like this:
<LI> <OBJECT type="text/sitemap">
<param name="Name" value="Stackoverflow">
<param name="Name" value="Overview">
<param name="Local" value="overview.html#stackoverflow">
<param name="Name" value="Cover">
<param name="Local" value="cover.html#stackoverflow">
<param name="Name" value="Intro">
<param name="Local" value="intro.html#stackoverflow">
</OBJECT>
It works but instead of the title the dialog shows the keyword and the name of the project repeated three times.
Here's how it looks: http://img54.imageshack.us/img54/3342/sokeywordjs9.png
How can I display the title of the page that contains the keyword in that dialog? I would like to show it like this:
Stackoverflow Overview
Stackoverflow Cover
Stackoverflow Intro
Thanks
| [
"\nHow can I display the tile of the page\n that contains the keyword in that\n dialog?\n\nYou can't. The Location column in the Topics Found dialog always contains the name of the source chm file of the topic. The only way to get around this is to use modular help, which comes with it's own share of problems and overhead. \nThe search and indexing features don't gracefully support topics with the same title within the same project. This seems shortsighted, but HTMLHelp is now over ten years old, so maybe they just planned to fix it later and never got around to it. \n"
] | [
1
] | [] | [] | [
"chm",
"indexing"
] | stackoverflow_0000039835_chm_indexing.txt |
Q:
How do I create a database programmatically in SQL Server?
How can I create a new database from my C# application?
I'm assuming once I create it, I can simply generate a connection string on the fly, connect to it, and then issue all the CREATE TABLE statements.
A:
KB307283 explains how to create a database using ADO.NET.
From the article:
String str;
SqlConnection myConn = new SqlConnection ("Server=localhost;Integrated security=SSPI;database=master");
str = "CREATE DATABASE MyDatabase ON PRIMARY " +
"(NAME = MyDatabase_Data, " +
"FILENAME = 'C:\\MyDatabaseData.mdf', " +
"SIZE = 2MB, MAXSIZE = 10MB, FILEGROWTH = 10%) " +
"LOG ON (NAME = MyDatabase_Log, " +
"FILENAME = 'C:\\MyDatabaseLog.ldf', " +
"SIZE = 1MB, " +
"MAXSIZE = 5MB, " +
"FILEGROWTH = 10%)";
SqlCommand myCommand = new SqlCommand(str, myConn);
try
{
myConn.Open();
myCommand.ExecuteNonQuery();
MessageBox.Show("DataBase is Created Successfully", "MyProgram", MessageBoxButtons.OK, MessageBoxIcon.Information);
}
catch (System.Exception ex)
{
MessageBox.Show(ex.ToString(), "MyProgram", MessageBoxButtons.OK, MessageBoxIcon.Information);
}
finally
{
if (myConn.State == ConnectionState.Open)
{
myConn.Close();
}
}
A:
CREATE DATABASE works
| How do I create a database programmatically in SQL Server? | How can I create a new database from my C# application?
I'm assuming once I create it, I can simply generate a connection string on the fly, connect to it, and then issue all the CREATE TABLE statements.
| [
"KB307283 explains how to create a database using ADO.NET.\nFrom the article:\nString str;\nSqlConnection myConn = new SqlConnection (\"Server=localhost;Integrated security=SSPI;database=master\");\n\nstr = \"CREATE DATABASE MyDatabase ON PRIMARY \" +\n \"(NAME = MyDatabase_Data, \" +\n \"FILENAME = 'C:\\\\MyDatabaseData.mdf', \" +\n \"SIZE = 2MB, MAXSIZE = 10MB, FILEGROWTH = 10%) \" +\n \"LOG ON (NAME = MyDatabase_Log, \" +\n \"FILENAME = 'C:\\\\MyDatabaseLog.ldf', \" +\n \"SIZE = 1MB, \" +\n \"MAXSIZE = 5MB, \" +\n \"FILEGROWTH = 10%)\";\n\nSqlCommand myCommand = new SqlCommand(str, myConn);\ntry \n{\n myConn.Open();\n myCommand.ExecuteNonQuery();\n MessageBox.Show(\"DataBase is Created Successfully\", \"MyProgram\", MessageBoxButtons.OK, MessageBoxIcon.Information);\n}\ncatch (System.Exception ex)\n{\n MessageBox.Show(ex.ToString(), \"MyProgram\", MessageBoxButtons.OK, MessageBoxIcon.Information);\n}\nfinally\n{\n if (myConn.State == ConnectionState.Open)\n {\n myConn.Close();\n }\n}\n\n",
"CREATE DATABASE works\n"
] | [
13,
1
] | [] | [] | [
".net",
"sql_server"
] | stackoverflow_0000040043_.net_sql_server.txt |
Q:
Update Panel inside of a UserControl inside of a Repeater inside of another UpdatePanel
Yes, it sounds crazy....It might be.
The final updatepanel does not appear to trigger anything, it just refreshes the update panels and does not call back to the usercontrol hosting it.
Any ideas?
EDIT: I got it posting back, however the controls inside the final usercontrol have lost their data...I'm thinking it's because the main repeater is rebinding on each postback...Not sure where to take this one now.
A:
I would suggest you start by removing the UpdatePanels at first, and make sure your control orgy is working correctly with postbacks. Once you have that working, try adding the UpdatePanels back in from the bottom up.
A:
If you set the UpdateMode property to Conditional (default is Always) on both UpdatePanels it should stop the outer UpdatePanel triggering when only the usercontrols updatepanel should have refreshed.
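In markup that setting looks something like this sketch (IDs and contents are placeholders):
<asp:UpdatePanel ID="InnerPanel" runat="server" UpdateMode="Conditional">
    <ContentTemplate>
        <!-- user control markup here -->
    </ContentTemplate>
</asp:UpdatePanel>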
| Update Panel inside of a UserControl inside of a Repeater inside of another UpdatePanel | Yes, it sounds crazy....It might be.
The final updatepanel does not appear to trigger anything, it just refreshes the update panels and does not call back to the usercontrol hosting it.
Any ideas?
EDIT: I got it posting back, however the controls inside the final usercontrol have lost their data...I'm thinking it's because the main repeater is rebinding on each postback...Not sure where to take this one now.
| [
"I would suggest you start by removing the UpdatePanels at first, and make sure your control orgy is working correctly with postbacks. Once you have that working, try adding the UpdatePanels back in from the bottom up.\n",
"If you set the UpdateMode property to Conditional (default is Always) on both UpdatePanels it should stop the outer UpdatePanel triggering when only the usercontrols updatepanel should have refreshed.\n"
] | [
2,
0
] | [] | [] | [
"asp.net",
"asp.net_ajax",
"updatepanel",
"user_controls"
] | stackoverflow_0000030770_asp.net_asp.net_ajax_updatepanel_user_controls.txt |
Q:
Regular expression to match (C) function calls
Does anyone have a regular expression for matching function calls in C programs ?
A:
Since C isn't a regular language and C function calls can contain arbitrary argument expressions, I fear the answer to your question is “no.”
A:
After a bit more searching I decided to let the compiler do the hard work.
Get the compiler to produce a Register Transfer Language (RTL) file using the -dr options of gcc.
The produced RTL file has the suffix .rtl or .expand.
This file is far easier to parse as the function calls are already identified.
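For reference, the workflow might look like this shell sketch (dump file names vary by gcc version, so treat the glob as an assumption):
gcc -dr -c myfile.c            # writes an RTL dump alongside the object file
grep "(call" myfile.c.*.rtl    # call insns mark the function calls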
A:
I doubt you can find a regex that matches all (and only) the function calls in some source code. But maybe you could use a tool like Understand, or your IDE, to browse your code.
| Regular expression to match (C) function calls | Does anyone have a regular expression for matching function calls in C programs ?
| [
"Since C isn't a regular language and C function calls can contain arbitrary argument expressions, I fear the answer to your question is “no.”\n",
"After a bit more searching I decided to let the compiler do the hard work.\nGet the compiler to produce a Register Transfer Language (RTL) file using the -dr options of gcc.\nThe produced RTL file has the suffix .rtl or .expand.\nThis file is far easier to parse as the functions calls are already identified.\n",
"I doubt you can find a regex that matches all (and only) the function calls in some source code. But maybe you could use a tool like Understand, or your IDE, to browse your code.\n"
] | [
3,
2,
1
] | [] | [] | [
"regex"
] | stackoverflow_0000039457_regex.txt |
Q:
Namespace with Context.Handler and Server.Transfer?
What .NET namespace or class includes both Context.Handler and Server.Transfer?
I think one may include both and my hunt on MSDN returned null.
A:
System.Web.
HttpContext.Current.Handler
HttpContext.Current.Request.Server.Transfer
A:
Hmmmm, I'm talking about them as they are implemented here:
http://blog.benday.com/archive/2008/03/31/23176.aspx
(at least Context.Handler there)
I am still having trouble in VS making that reference.
The Context in Context.Handler is an instance of HttpContext (Handler is one of its properties).
HttpContext exposes the CURRENT instance for the request under the HttpContext.Current property, however the current context can also be passed in HTTPHandlers in the ProcessRequest method:
void ProcessRequest(HttpContext context)
| Namespace with Context.Handler and Server.Transfer? | What .NET namespace or class includes both Context.Handler and Server.Transfer?
I think one may include both and my hunt on MSDN returned null.
| [
"System.Web.\nHttpContext.Current.Handler\nHttpContext.Current.Request.Server.Transfer\n\n",
"\nHmmmm, I talking about them as they are implemented here:\nhttp://blog.benday.com/archive/2008/03/31/23176.aspx\n(at least Context.Handler there)\nI am still having trouble in VS making that reference.\n\nContext.Handler is an instance of an HttpContext.\nHttpContext exposes the CURRENT instance for the request under the HttpContext.Current property, however the current context can also be passed in HTTPHandlers in the ProcessRequest method:\nvoid ProcessRequest(HttpContext context)\n\n"
] | [
2,
0
] | [] | [] | [
".net",
"c#",
"namespaces"
] | stackoverflow_0000039727_.net_c#_namespaces.txt |
Q:
Get a list of current windows, and give one of them focus, in .Net
Without resorting to PInvoke, is there a way in .net to find out what windows are open? This is slightly different than asking what applications are running in memory. For example, Firefox could be running, but could be more than one window. Basically, I just want to be privy to the same information that the taskbar (and alt-tab?) is.
Also, once I have a reference to a window, is there any way to programmatically give it focus?
Is there any way to do this with managed code?
A:
You could check out the new UI Automation stuff in .NET 3.5. It is supposed to mask a whole lot of the PInovke stuff and works with web and WPF applications.
I haven't used it yet, so I don't have a more specific place to direct you, but it might fit the bill.
A:
Check out this LGPL project. I know it can set foreground for a window. Otherwise aku is correct. It'll require most likely some pinvoke calls.
http://mwinapi.sourceforge.net/
If you need information on pinvoke use:
http://www.pinvoke.net/
A:
I'm afraid there is no way you can do it without PInvoke. To give focus to some window you should call SetForegroundWindow function, see this article for details.
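A minimal C# sketch of that call (the window handle would have to come from elsewhere, e.g. EnumWindows or FindWindow, also via P/Invoke):
using System;
using System.Runtime.InteropServices;

class WindowFocus
{
    [DllImport("user32.dll")]
    static extern bool SetForegroundWindow(IntPtr hWnd);

    public static void GiveFocus(IntPtr hWnd)
    {
        SetForegroundWindow(hWnd); // brings the target window to the foreground
    }
}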
| Get a list of current windows, and give one of them focus, in .Net | Without resorting to PInvoke, is there a way in .net to find out what windows are open? This is slightly different than asking what applications are running in memory. For example, Firefox could be running, but could be more than one window. Basically, I just want to be privy to the same information that the taskbar (and alt-tab?) is.
Also, once I have a reference to a window, is there any way to programmatically give it focus?
Is there any way to do this with managed code?
| [
"You could check out the new UI Automation stuff in .NET 3.5. It is supposed to mask a whole lot of the PInovke stuff and works with web and WPF applications.\nI haven't used it yet, so I don't have a more specific place to direct you, but it might fit the bill.\n",
"Check out this LGPL project. I know it can set foreground for a window. Otherwise aku is correct. It'll require most likely some pinvoke calls.\nhttp://mwinapi.sourceforge.net/\nIf you need information on pinvoke use: \nhttp://www.pinvoke.net/\n",
"I'm afraid there is no way you can do it without PInvoke. To give focus to some window you should call SetForegroundWindow function, see this article for details.\n"
] | [
1,
1,
0
] | [] | [] | [
".net",
"windows"
] | stackoverflow_0000035834_.net_windows.txt |
Q:
Javascript equivalent of Python's locals()?
In Python one can get a dictionary of all local and global variables in the current scope with the built-in functions locals() and globals(). Is there some equivalent way of doing this in Javascript? For instance, I would like to do something like the following:
var foo = function(){ alert('foo'); };
var bar = function(){ alert('bar'); };
var s = 'foo';
locals()[s](); // alerts 'foo'
Is this at all possible, or should I just be using a local object for the lookup?
A:
locals() - No.
globals() - Yes.
window is a reference to the global scope, like globals() in python.
globals()["foo"]
is the same as:
window["foo"]
A:
Well, I don't think that there is something like that in js. You can always use eval instead of locals(). Like this:
eval(s+"()");
You just have to know that actually function foo exists.
Edit:
Don't use eval:) Use:
var functionName="myFunctionName";
window[functionName]();
A:
I seem to remember Brendan Eich commented on this in a recent podcast; if I recall correctly, it's not being considered, as it adds unreasonable restrictions to optimization. He compared it to the arguments local in that, while useful for varargs, its very existence removes the ability to guess at what a function will touch just by looking at its definition.
BTW: I believe JS did have support for accessing locals through the arguments local at one time - a quick search shows this has been deprecated though.
A:
@e-bartek, I think that window[functionName] won't work if you in some closure, and the function name is local to that closure. For example:
function foo() {
var bar = function () {
alert('hello world');
};
var s = 'bar';
window[s](); // this won't work
}
In this case, s is 'bar', but the function 'bar' only exists inside the scope of the function 'foo'. It is not defined in the window scope.
Of course, this doesn't really answer the original question, I just wanted to chime in on this response. I don't believe there is a way to do what the original question asked.
A:
@pkaeding
Yes, you're right. window[functionName]() doesn't work in this case, but eval does. If I needed something like this, I'd create my own object to keep those functions together.
var func = {};
func.bar = ...;
var s = "bar";
func[s]();
| Javascript equivalent of Python's locals()? | In Python one can get a dictionary of all local and global variables in the current scope with the built-in functions locals() and globals(). Is there some equivalent way of doing this in Javascript? For instance, I would like to do something like the following:
var foo = function(){ alert('foo'); };
var bar = function(){ alert('bar'); };
var s = 'foo';
locals()[s](); // alerts 'foo'
Is this at all possible, or should I just be using a local object for the lookup?
| [
"\nlocals() - No. \nglobals() - Yes.\n\nwindow is a reference to the global scope, like globals() in python.\nglobals()[\"foo\"]\n\nis the same as:\nwindow[\"foo\"]\n\n",
"Well, I don't think that there is something like that in js. You can always use eval instead of locals(). Like this: \neval(s+\"()\");\n\nYou just have to know that actually function foo exists.\nEdit:\nDon't use eval:) Use:\nvar functionName=\"myFunctionName\";\nwindow[functionName]();\n\n",
"I seem to remember Brendan Eich commented on this in a recent podcast; if i recall correctly, it's not being considered, as it adds unreasonable restrictions to optimization. He compared it to the arguments local in that, while useful for varargs, its very existence removes the ability to guess at what a function will touch just by looking at its definition. \nBTW: i believe JS did have support for accessing locals through the arguments local at one time - a quick search shows this has been deprecated though.\n",
"@e-bartek, I think that window[functionName] won't work if you in some closure, and the function name is local to that closure. For example:\nfunction foo() {\n var bar = function () {\n alert('hello world');\n };\n var s = 'bar';\n window[s](); // this won't work\n}\n\nIn this case, s is 'bar', but the function 'bar' only exists inside the scope of the function 'foo'. It is not defined in the window scope.\nOf course, this doesn't really answer the original question, I just wanted to chime in on this response. I don't believe there is a way to do what the original question asked.\n",
"@pkaeding\nYes, you're right. window[functionName]() doesn't work in this case, but eval does. If I needed something like this, I'd create my own object to keep those functions together.\nvar func = {};\nfunc.bar = ...;\nvar s = \"bar\";\nfunc[s]();\n\n"
] | [
18,
4,
3,
0,
0
] | [
"AFAIK, no. If you just want to check the existence of a given variable, you can do it by testing for it, something like this:\nif (foo) foo();\n\n"
] | [
-1
] | [
"javascript",
"python"
] | stackoverflow_0000039960_javascript_python.txt |
Q:
How to embed user-specific data in .NET windows setup app at setup download time?
I'd like to have a link in my ASP.NET web site that authenticated users click to download a windows app that is already pre-configured with their client ID and some site config data. My goal is no typing required for the user during the client app install, both for the user friendliness, and to avoid config errors from mis-typed technical bits. Ideally I'd like the web server-side code to run as part of the ASP.NET app.
FogBugz seems to do something like this. There is a menu option within the web app to download a screenshot tool, and when you download and run the installer, it knows your particular FogBugz web address so it can send screenshots there. (Hey Joel, looking for a question to answer? hint—hint)
A:
The way the FogBugz screenshot setup tool does this is that it appends a 256 byte block at the end of the setup program at the moment it is downloaded. In other words, the download script spits out all the bytes from setup.exe and then an extra 256 with the url for the FogBugz server, plus any padding.
Windows ignores these extra bytes when the .exe is run (provided you turned off the CRC check for your setup installer - we're using InnoSetup).
After installation, we run the Screenshot program with a command line switch that tells it where the setup installer is. It looks at the end of the setup.exe and finds its info, and then writes that to the registry so the user doesn't have to know it.
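A minimal sketch of such a download script in ASP.NET (the file path and URL are hypothetical; the installed app would read the trailing 256 bytes back out of its own setup.exe):
byte[] exe = System.IO.File.ReadAllBytes(Server.MapPath("setup.exe"));
byte[] block = new byte[256]; // zero-padded config block appended at download time
byte[] url = System.Text.Encoding.ASCII.GetBytes("https://example.fogbugz.com/");
Array.Copy(url, block, url.Length);
Response.ContentType = "application/octet-stream";
Response.BinaryWrite(exe);
Response.BinaryWrite(block);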
A:
If it helps RegexBuddy does this also.
A:
Does the information need to be secure? If not, ClickOnce can use URL-based parameters. Here's an article about that on MSDN.
| How to embed user-specific data in .NET windows setup app at setup download time? | I'd like to have a link in my ASP.NET web site that authenticated users click to download a windows app that is already pre-configured with their client ID and some site config data. My goal is no typing required for the user during the client app install, both for the user friendliness, and to avoid config errors from mis-typed technical bits. Ideally I'd like the web server-side code to run as part of the ASP.NET app.
FogBugz seems to do something like this. There is a menu option within the web app to download a screenshot tool, and when you download and run the installer, it knows your particular FogBugz web address so it can send screenshots there. (Hey Joel, looking for a question to answer? hint—hint)
| [
"The way the FogBugz screenshot setup tool does this is that it appends a 256 byte block at the end of the setup program at the moment it is downloaded. In other words, the download script spits out all the bytes from setup.exe and then an extra 256 with the url for the FogBugz server, plus any padding.\nWindows ignores these extra bytes when the .exe is run (provided you turned off the CRC check for your setup installer - we're using InnoSetup).\nAfter installation, we run the Screenshot program with a command line switch that tells it where the setup installer is. It looks at the end of the setup.exe and finds it's info, and then writes that to the registry so the user doesn't have to know it.\n",
"If it helps RegexBuddy does this also.\n",
"Does the information need to be secure? If not, ClickOnce can use URL-based parameters. Here's an article about that on MSDN.\n"
] | [
2,
0,
0
] | [] | [] | [
".net",
"fogbugz",
"installation",
"windows"
] | stackoverflow_0000035772_.net_fogbugz_installation_windows.txt |
Q:
Best way to pass a large number of arguments into a configuration dialog
I've got a situation where I have a main form that pops up an advanced configuration form that just has half a dozen matched check boxes and combo boxes to select some advanced options (the check boxes to enable/disable, the combo to select a media if enabled).
If I just pass the individual settings for the check and combo boxes in to the constructor for the dialog that's obviously a dozen arguments, which seems a bit excessive.
My other obvious option would be since in the main form these settings are stored in a large IDictionary with all the other main form settings I could just pass this dictionary in and fetch it back afterward with the updated values, but my understanding is that this wouldn't really be very good coding practice.
Am I missing a good way to do this that is both efficient and good coding practice?
(this particular code is in C#, although I have a feeling a general solution would apply to other languages as well)
A:
I personally would create a carrier object to store the values. You then get the nice intellisense for it, and changes to it would be quite straightforward. It would also be faster than dictionary lookups for parameter values. And of course, you get type safety. :)
A:
You could go with Rob's solution; that's the prettiest for development. Your "carrier object" could contain the entire IDictionary and have typed properties to help intellisense. The properties could update the IDictionary. When you're done, you can pass the carrier object back and fetch the IDictionary directly from it.
For example, if your dictionary had key/value pair "FirstEnabled"/boolean, you could do this:
class ContainerObject
{
public IDictionary<object, object> _dict;
public ContainerObject(IDictionary<object, object> dict)
{
_dict = dict;
}
public bool FirstEnabled
{
get { return (bool) _dict["FirstEnabled"]; }
set { _dict["FirstEnabled"] = value; }
}
}
You can change the member "_dict" to private or protected and have a accessor function if you want.
A:
Something like this should be good:
MyConfigurationDialog dialog = new MyConfigurationDialog();
//Copy the dictionary so that the dialog can't mess with our settings
dialog.Settings = new Dictionary(existingSettings);
if(DialogResult.OK == dialog.Show()) {
//grab the settings that the dialog may have changed
existingSettings["setting1"] = dialog.Settings["setting1"];
existingSettings["setting2"] = dialog.Settings["setting2"];
}
A:
I agree with Rob Cooper. Create a class to represent your configuration, and pass that into the constructor of your form. This will also allow you to define methods on your new "config" class like "saveSettings", "LoadSettings", etc. That in turn should keep the code more maintainable in general.
As a quick-and-dirty alternative, if you are saving these to a file somewhere, just pass the name of the file, and have your form read that at run-time.
The first option really is the way to go though, IMO.
| Best way to pass a large number of arguments into a configuration dialog | I've got a situation where I have a main form that pops up an advanced configuration form that just has half a dozen matched check boxes and combo boxes to select some advanced options (the check boxes to enable/disable, the combo to select a media if enabled).
If I just pass the individual settings for the check and combo boxes in to the constructor for the dialog that's obviously a dozen arguments, which seems a bit excessive.
My other obvious option would be since in the main form these settings are stored in a large IDictionary with all the other main form settings I could just pass this dictionary in and fetch it back afterward with the updated values, but my understanding is that this wouldn't really be very good coding practice.
Am I missing a good way to do this that is both efficient and good coding practice?
(this particular code is in C#, although I have a feeling a general solution would apply to other languages as well)
| [
"I personally would create a carrier object to store the values. You then get the nice intellisense for it, and changes to it would be quite straightforward. It would also be faster than dictionary lookups for parameter values. And of course, you get type safety. :)\n",
"You could go with Rob's solution; that's the prettiest for development. Your \"carrier object\" could contain the entire IDictionary and have typed properties to help intellisense. The properties could update the IDictionary. When you're done, you can pass the carrier object back and fetch the IDictionary directly from it. \nFor example, if your dictionary had key/value pair \"FirstEnabled\"/boolean, you could do this:\nclass ContainerObject\n{\n public IDictionary<object, object> _dict;\n public ContainerObject(IDictionary<object, object> dict)\n {\n _dict = dict;\n }\n\n public bool FirstEnabled\n {\n get { return (bool) _dict[\"FirstEnabled\"]; }\n set { _dict[\"FirstEnabled\"] = value; }\n }\n}\n\nYou can change the member \"_dict\" to private or protected and have a accessor function if you want. \n",
"Something like this should be good:\nMyConfigurationDialog dialog = new MyConfigurationDialog();\n\n//Copy the dictionary so that the dialog can't mess with our settings\ndialog.Settings = new Dictionary(existingSettings);\n\nif(DialogResult.OK == dialog.Show()) {\n //grab the settings that the dialog may have changed\n existingSettings[\"setting1\"] = dialog.Settings[\"setting1\"];\n existingSettings[\"setting2\"] = dialog.Settings[\"setting2\"];\n}\n\n",
"I agree with Rob Cooper. Create a class to represent your configuration, and pass that into the constructor of your form. This will also allow you to define methods on your new \"config\" class like \"saveSettings\", \"LoadSettings\", etc. That in turn should keep the code more maintainable in general.\nAs an quick-and-dirty alternative, if you are saving these to a file somewhere, just pass the name of the file, and have your form read that at run-time.\nThe first option really is the way to go though, IMO.\n"
] | [
6,
1,
0,
0
] | [] | [] | [
"c#"
] | stackoverflow_0000040132_c#.txt |
Q:
Cannot add a launch shortcut (Eclipse Plug-in)
I'm making a simple extra java app launcher for Eclipse 3.2 (JBuilder 2007-8) for internal use.
So I looked up all the documentations related, including this one The Launching Framework from eclipse.org and have managed to make everything else working with the exception of the launch shortcut.
This is the part of my plugin.xml.
<extension
point="org.eclipse.debug.ui.launchShortcuts">
<shortcut
category="mycompany.javalaunchext.launchConfig"
class="mycompany.javalaunchext.LaunchShortcut"
description="launchshortcutsdescription"
icon="icons/k2mountain.png"
id="mycompany.javalaunchext.launchShortcut"
label="Java Application Ext."
modes="run, debug">
<perspective
id="org.eclipse.jdt.ui.JavaPerspective">
</perspective>
<perspective
id="org.eclipse.jdt.ui.JavaHierarchyPerspective">
</perspective>
<perspective
id="org.eclipse.jdt.ui.JavaBrowsingPerspective">
</perspective>
<perspective
id="org.eclipse.debug.ui.DebugPerspective">
</perspective>
</shortcut>
The configuration name in the category section is correct and the class in the class section, i believe, is correctly implemented. (basically copied from org.eclipse.jdt.debug.ui.launchConfigurations.JavaApplicationLaunchShortcut)
I'm really not sure if I'm supposed to write a follow-up here but let me clarify my question more.
I've extended org.eclipse.jdt.debug.ui.launchConfigurations.JavaLaunchShortcut.
Plus, I've added my own logger to constructors and methods, but the class seems like it's never even being instantiated.
A:
I had to add contextualLaunch under org.eclipse.debug.ui.launchShortcuts.
The old way seems to have been deprecated long ago.
For other people who are working on the same subject,
you might want to extend org.eclipse.ui.commands and bindings, too.
I cannot choose this answer but this is the answer that I (the questioner) was looking for.
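A sketch of what that addition looks like inside the shortcut element (the enablement expression here is illustrative only):
<shortcut ...>
   <contextualLaunch>
      <enablement>
         <with variable="selection">
            <count value="1"/>
         </with>
      </enablement>
   </contextualLaunch>
</shortcut>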
A:
You class should implement ILaunchShortcut.
Check out the Javadoc.
What exception are you getting? Check the error log.
| Cannot add a launch shortcut (Eclipse Plug-in) | I'm making a simple extra java app launcher for Eclipse 3.2 (JBuilder 2007-8) for internal use.
So I looked up all the documentations related, including this one The Launching Framework from eclipse.org and have managed to make everything else working with the exception of the launch shortcut.
This is the part of my plugin.xml.
<extension
point="org.eclipse.debug.ui.launchShortcuts">
<shortcut
category="mycompany.javalaunchext.launchConfig"
class="mycompany.javalaunchext.LaunchShortcut"
description="launchshortcutsdescription"
icon="icons/k2mountain.png"
id="mycompany.javalaunchext.launchShortcut"
label="Java Application Ext."
modes="run, debug">
<perspective
id="org.eclipse.jdt.ui.JavaPerspective">
</perspective>
<perspective
id="org.eclipse.jdt.ui.JavaHierarchyPerspective">
</perspective>
<perspective
id="org.eclipse.jdt.ui.JavaBrowsingPerspective">
</perspective>
<perspective
id="org.eclipse.debug.ui.DebugPerspective">
</perspective>
</shortcut>
The configuration name in the category section is correct and the class in the class section, i believe, is correctly implemented. (basically copied from org.eclipse.jdt.debug.ui.launchConfigurations.JavaApplicationLaunchShortcut)
I'm really not sure if I'm supposed to write a follow-up here but let me clarify my question more.
I've extended org.eclipse.jdt.debug.ui.launchConfigurations.JavaLaunchShortcut.
Plus, I've added my own logger to constructors and methods, but the class seems like it's never even being instantiated.
| [
"I had to add contextualLaunch under org.eclipse.debug.ui.launchShortcuts.\nThe old way seems like it's deprecated a long ago.\nFor other people who are working on the same subject,\nyou might want to extend org.eclipse.ui.commands and bindings, too.\nI cannot choose this answer but this is the answer that I (the questioner) was looking for.\n",
"You class should implement ILaunchShortcut.\nCheck out the Javadoc. \nWhat exception are you getting? Check the error log.\n"
] | [
4,
0
] | [] | [] | [
"eclipse",
"plugins"
] | stackoverflow_0000026145_eclipse_plugins.txt |
Q:
Do I need a Mac to make a Java application bundle?
I want to create a Java application bundle for Mac without using Mac.
According to Java Deployment Options for Mac OS X, I can do this by using Xcode, Jar Bundler, or from the command line. Once the files and folders are set up, all I need for the command line method is to call /Developer/Tools/SetFile. Is there a SetFile clone on Linux or Windows? If not, do I have to get a Mac?
A:
A Java application bundle on OS X is nothing more than a directory containing your .jars and a number of configuration files. The SetFile tool sets a custom HFS filesystem property on the directory to tell finder that it is an app, but giving it a ".app" extension serves the same purpose. I don't think there's anything stopping you from building one on, say, Windows, though of course you have no way of testing that it works, but if you are able to test it at least once on a real Mac, you could then conceivably update the .jars within it on Windows to reflect code changes without too much difficulty.
Have a look at the Bundle Programming Guide for more info.
A:
One way is to generate a zip file with the App using for example Ant. In ant you can specify that the file in Contents/MacOS should have execute-permissions using something like filemode="755".
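For reference, an Ant sketch along those lines (file names are placeholders):
<zip destfile="MyApp-mac.zip">
    <zipfileset dir="MyApp.app" prefix="MyApp.app"
                includes="Contents/MacOS/**" filemode="755"/>
    <zipfileset dir="MyApp.app" prefix="MyApp.app"
                excludes="Contents/MacOS/**"/>
</zip>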
A:
Having worked on the Mac port of NITE, I can say that jar packages for other platforms should work equally well on Mac. I would still recommend finding a mac for testing (or even announcing mac support was in beta) as we discovered a few mac-only quirks during the port (to go with the windows- and linux- only quirks we'd already discovered)
A:
Technically, you don't need a Mac. Applications in OS X just require a specific folder structure and an XML file. However, the Mac has a really nice tool called Jar Bundler. In addition to setting up the bundle directories and XML file, it creates a C executable that launches your java application via JNI. This is nice because the process name matches the application name.
I believe that you could have someone generate an application bundle for you once, and then check in the files to your project. At build time, all you would need to do is copy your jar files to the appropriate locations and maybe update the XML file.
| Do I need a Mac to make a Java application bundle? | I want to create a Java application bundle for Mac without using Mac.
According to Java Deployment Options for Mac OS X, I can do this by using Xcode, Jar Bundler, or from the command line. Once the files and folders are set up, all I need for the command line method is to call /Developer/Tools/SetFile. Is there a SetFile clone on Linux or Windows? If not, do I have to get a Mac?
| [
"A Java application bundle on OS X is nothing more than a directory containing your .jars and a number of configuration files. The SetFile tool sets a custom HFS filesystem property on the directory to tell finder that it is an app, but giving it a \".app\" extension serves the same purpose. I don't think there's anything stopping you from building one on, say, Windows, though of course you have no way of testing that it works, but if you are able to test it at least once on a real Mac, you could then conceivably update the .jars within it on Windows to reflect code changes without too much difficulty.\nHave a look at the Bundle Programming Guide for more info.\n",
"One way is to generate a zip file with the App using for example Ant. In ant you can specify that the file in Contents/MacOS should have execute-permissions using something like filemode=\"755\".\n",
"Having worked on the Mac port of NITE, I can say that jar packages for other platforms should work equally well on Mac. I would still recommend finding a mac for testing (or even announcing mac support was in beta) as we discovered a few mac-only quirks during the port (to go with the windows- and linux- only quirks we'd already discovered)\n",
"Technically, you don't need a Mac. Applications in OS X just require a specific folder structure and an XML file. However, the Mac has a really nice tool called Jar Bundler. In addition to setting up the bundle directories and XML file, it creates a C executable that launches your java application via JNI. This is nice because the process name matches the application name.\nI believe that you could have someone generate an application bundle for you once, and then check in the files to your project. At build time, all you would need to do is copy your jar files to the appropriate locations and maybe update the XML file.\n"
] | [
6,
0,
0,
0
] | [] | [] | [
"deployment",
"java",
"macos"
] | stackoverflow_0000039194_deployment_java_macos.txt |
Q:
Should you run one or multiple applications per tomcat cluster?
Currently I am setting up an application that can deploy other web apps to Tomcat 6 clusters. It is set up right now to have a one to one relationship between deployed web application and a cluster. My current reasoning for this is so that I can change the JVM args of the Tomcat server without disrupting other applications and so that the memory usage of the single application will not conflict with other applications.
The question is, what is considered best practice in terms of tomcat instance clusters? Should you only have one application running per cluster or multiple applications like in a single tomcat instance environment? Or does this depend on the size of your application?
Thank you
A:
I've learned from experience that having only one app per Tomcat instance has a very significant advantage: when a Tomcat instance dies, you don't have to dig through logs (or guess) which app is to blame.
A:
Divide your services by resource requirements at the very least. For example, if you are running a photo album site, separate your image download server from your image upload server. The download server will have many more requests, and because most people have a lower upload speed the upload server will have longer-lasting connections. Similarly, an image manipulation server would probably have few connections, but it should fork off threads to perform the CPU-intensive image manipulation tasks asynchronously from the web user interface.
If you have the hardware to do it, it's a lot easier to manage many separate tomcat instances with one application each than a few instances with many applications.
| Should you run one or multiple applications per tomcat cluster? | Currently I am setting up an application that can deploy other web apps to Tomcat 6 clusters. It is set up right now to have a one to one relationship between deployed web application and a cluster. My current reasoning for this is so that I can change the JVM args of the Tomcat server without disrupting other applications and so that the memory usage of the single application will not conflict with other applications.
The question is, what is considered best practice in terms of tomcat instance clusters? Should you only have one application running per cluster or multiple applications like in a single tomcat instance environment? Or does this depend on the size of your application?
Thank you
| [
"I've learned from experience that having only one app per Tomcat instance has a very significant advantage: when a Tomcat instance dies, you don't have to dig through logs (or guess) which app is to blame.\n",
"Divide your services by resource requirements at the very least. For example, if you are running a photo album site, separate your image download server from your image upload server. The download server will have many more requests, and because most people have a lower upload speed the upload server will have longer lasting connections. Similarly, and image manipulation server would probably have few connections, but it should fork off threads to perform the CPU intensive image manipulation tasks asynchronously from the web user interface.\nIf you have the hardware to do it, it's a lot easier to manage many separate tomcat instances with one application each than a few instances with many applications.\n"
] | [
4,
2
] | [] | [] | [
"cluster_computing",
"tomcat"
] | stackoverflow_0000030295_cluster_computing_tomcat.txt |
Q:
Why do jQuery selectors sometimes not work in Internet Explorer
I have a very strange problem. Under some elusive circumstances I fail to apply any jQuery selector on my pages under IE. It's OK under Firefox though. The jQuery function simply returns an empty array.
Any suggestions?
The page is too complex to post here. Practically any selector, except "#id" selectors, returns a zero-element array. The jQuery version is 1.2.3.
A:
What version(s) of IE is it failing under? Is it failing for a specific complex selector? I think we need an example.
Edit: Does the problem go away if you upgrade to 1.2.6? 1.2.6 is primarily a bug-fix release according to this page.
Failing that, the best way to find the problem is to create a minimum page that can reproduce the bug. Without that, it's just about impossible to troubleshoot.
A:
Try upgrading to jQuery 1.2.6; you should be on the latest release of jQuery. If you are having problems, first ensure you are on the latest and greatest.
| Why do jQuery selectors sometimes not work in Internet Explorer | I have a very strange problem. Under some elusive circumstances I fail to apply any jQuery selector on my pages under IE. It's OK under Firefox though. The jQuery function simply returns an empty array.
Any suggestions?
The page is too complex to post here. Practically any selector, except "#id" selectors, returns a zero-element array. The jQuery version is 1.2.3.
| [
"What version(s) of IE is it failing under? Is it failing for a specific complex selector? I think we need an example.\nEdit: Does the problem go away if you upgrade to 1.2.6? 1.2.6 is primarily a bug-fix release according to this page.\nFailing that, the best way to find the problem is to create a minimum page that can reproduce the bug. Without that, it's just about impossible to troubleshoot.\n",
"Try upgrading to jQuery 1.2.6, you should be on the latest release of jQuery if you are having problems first ensure you are on the latest and greatest.\n"
] | [
3,
2
] | [] | [] | [
"html",
"internet_explorer",
"javascript",
"jquery"
] | stackoverflow_0000040151_html_internet_explorer_javascript_jquery.txt |
Q:
How do I implement a chromeless window with WPF?
I want to show a chromeless modal window with a close button in the upper right corner.
Is this possible?
A:
You'll pretty much have to roll your own Close button, but you can hide the window chrome completely using the WindowStyle attribute, like this:
<Window WindowStyle="None">
That will still have a resize border. If you want to make the window non-resizable then add ResizeMode="NoResize" to the declaration.
A:
Check out this blog post on kirupa.
A:
<Window x:Class="WpfApplication1.Window1"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
Title="Window1" Height="300" Width="300" WindowStyle="None" ResizeMode="NoResize">
<Button HorizontalAlignment="Right" Name="button1" VerticalAlignment="Top" >Close</Button>
</Window>
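The XAML above declares the Close button but no handler. A minimal code-behind sketch to go with it - the WpfApplication1 namespace, Window1 class, and button1 name come from the XAML; wiring the handler in the constructor is just one reasonable choice:

using System.Windows;

namespace WpfApplication1
{
    public partial class Window1 : Window
    {
        public Window1()
        {
            InitializeComponent();
            // The hand-rolled Close button: clicking it closes the chromeless window.
            button1.Click += (sender, e) => Close();
        }
    }
}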
| How do I implement a chromeless window with WPF? | I want to show a chromeless modal window with a close button in the upper right corner.
Is this possible?
| [
"You'll pretty much have to roll your own Close button, but you can hide the window chrome completely using the WindowStyle attribute, like this:\n<Window WindowStyle=\"None\">\n\nThat will still have a resize border. If you want to make the window non-resizable then add ResizeMode=\"NoResize\" to the declaration.\n",
"Check out this blog post on kirupa.\n\n",
"<Window x:Class=\"WpfApplication1.Window1\"\n xmlns=\"http://schemas.microsoft.com/winfx/2006/xaml/presentation\"\n xmlns:x=\"http://schemas.microsoft.com/winfx/2006/xaml\"\n Title=\"Window1\" Height=\"300\" Width=\"300\" WindowStyle=\"None\" ResizeMode=\"NoResize\">\n <Button HorizontalAlignment=\"Right\" Name=\"button1\" VerticalAlignment=\"Top\" >Close</Button>\n</Window>\n\n"
] | [
33,
16,
1
] | [] | [] | [
"user_interface",
"wpf"
] | stackoverflow_0000037830_user_interface_wpf.txt |
Q:
How to write a spec that is productive?
I've seen different program managers write specs in different format. Almost every one has had his/her own style of writing a spec.
On one hand are those wordy documents which, given to a programmer, are likely to cause him/her to miss a few things. I personally dread the Word document specs...I think it's because of my reading style...I am always speed-reading things, which I think will cause me to miss out on key points.
On the other hand, I have seen these innovative specs written in Excel by one of our clients. The way he used to write the spec was to create a mock application in Excel and use some VBA to mock it up. He would note things (in comments) like, on a button click, where the form should go or what action it should perform.
On data form, he would display a form in cells and on each data entry cell he would comment on what valid values are, what validation should it perform etc.
I think that using this technique, it was less likely to miss out on things that needed to be done. Also, it was much easier to unit test it for the developer. The tester too had a better understanding of the system as it 'performed' before actually being written.
Visio is another tool to do screen design but I still think Excel has a better edge over it considering its VBA support and its functions.
Do you think this should become a more popular way of writing specs? I know it involves a bit of extra work on the part of the project manager (or whoever is writing the spec), but the payoff is huge...I myself could see a lot of productivity gain from using it. Are there any better formats of specs that would actually help the programmer?
A:
Joel on Software is particularly good at these and has some good articles about the subject...
A specific case: the write-up and the spec.
A:
Two approaches have worked well for me.
One is the "working prototype" which you sort of described in your question. In my experience, the company contracted a user interface expert to create fully functional HTML mocks. The data on the page was static, but it allowed for developers and management to see and play with a "functional" version of the site. All that was left to do was replace the static data on the pages with dynamic content - this prototype was our spec for the initial version of our product. The designer even included detailed explanation of some subtle behavior in popup dialogs that would appear when hovering over mock links. It worked well for our team.
On a subsequent project, we didn't have the luxury of the UI expert, but we used a similar approach. We used a wiki to mock a version of the site. We created links between the functional aspects of the system and documented each piece of functionality in detail. Each piece of functionality could, in turn, link to detailed design and architecture decisions. We also used the wiki to hold our feature list for each release (which became our release notes). These documents linked back to the detailed feature page. The wiki became a living document - describing our releases and evolution of our system in great detail. It was an invaluable resource.
I prefer the wiki to the working prototype because it's more easily extensible - growing and becoming more valuable as your system evolves.
A:
You may want to have a look at Test-Driven Requirements, which is a technique for making executable specifications.
There are some great tools like FIT, Fitnesse, GreenPepper or Concordion for that purpose.
A:
One of the Microsoft Press books has excellent examples of various documents, including an SRS (which I think is what you are talking about). It might be one of the requirements books by Weigert (I think that's his name, I'm blanking on it right now). I've seen US government organizations use that as a template, and from my three work experiences with the government, they like to make their own wherever they can, so if they are reusing it, it must be good.
Also - a spec should contain NO CODE, in my opinion. It should focus on what the system must do, should do, and cannot do, using text and diagrams.
| How to write a spec that is productive? | I've seen different program managers write specs in different format. Almost every one has had his/her own style of writing a spec.
On one hand are those wordy documents which, given to a programmer, are likely to cause him/her to miss a few things. I personally dread the Word document specs...I think it's because of my reading style...I am always speed-reading things, which I think will cause me to miss out on key points.
On the other hand, I have seen these innovative specs written in Excel by one of our clients. The way he used to write the spec was to create a mock application in Excel and use some VBA to mock it up. He would note things (in comments) like, on a button click, where the form should go or what action it should perform.
On data form, he would display a form in cells and on each data entry cell he would comment on what valid values are, what validation should it perform etc.
I think that using this technique, it was less likely to miss out on things that needed to be done. Also, it was much easier to unit test it for the developer. The tester too had a better understanding of the system as it 'performed' before actually being written.
Visio is another tool to do screen design but I still think Excel has a better edge over it considering its VBA support and its functions.
Do you think this should become a more popular way of writing specs? I know it involves a bit of extra work on the part of the project manager (or whoever is writing the spec), but the payoff is huge...I myself could see a lot of productivity gain from using it. Are there any better formats of specs that would actually help the programmer?
| [
"Joel on Software is particularly good at these and has some good articles about the subject...\nA specific case: the write-up and the spec.\n",
"Two approaches have worked well for me.\nOne is the \"working prototype\" which you sort of described in your question. In my experience, the company contracted a user interface expert to create fully functional HTML mocks. The data on the page was static, but it allowed for developers and management to see and play with a \"functional\" version of the site. All that was left to do was replace the static data on the pages with dynamic content - this prototype was our spec for the initial version of our product. The designer even included detailed explanation of some subtle behavior in popup dialogs that would appear when hovering over mock links. It worked well for our team.\nOn a subsequent project, we didn't have the luxury of the UI expert, but we used similar approach. We used a wiki to mock a version of the site. We created links between the functional aspects of the system and documented each piece of functionality in detail. Each piece of functionality could, in turn, link to detailed design and architecture decisions. We also used to wiki to hold our to list feature list for each release (which became our release notes). These documents linked back to the detailed feature page. The wiki became a living document - describing our releases and evolution of our system in great detail. It was an invaluable resource. \nI prefer the wiki to the working prototype because it's more easily extensible - growing and becoming more valuable as your system evolves.\n",
"I think you may have a look about Test-Driven Requirements, which is a technique to make executable specifications.\nThere are some great tools like FIT, Fitnesse, GreenPepper or Concordion for that purpose.\n",
"One of the Microsoft Press books has excellent examples of various documents, including an SRS (which I think is what you are talking about). It might be one of the requirements books by Weigert (I think that's his name, I'm blanking on it right now). I've seen US government organizations use that as a template, and from my three work experiences with the government, they like to make their own whereever they can, so if they are reusing it, it must be good.\nAlso - a spec should contain NO CODE, in my opinion. It should focus on what the system must do, should do, and can not do using text and diagrams.\n"
] | [
5,
3,
2,
0
] | [] | [] | [
"project_management",
"specs"
] | stackoverflow_0000023091_project_management_specs.txt |
Q:
Thread pool for executing arbitrary tasks with different priorities
I'm trying to come up with a design for a thread pool with a lot of design requirements for my job. This is a real problem for working software, and it's a difficult task. I have a working implementation but I'd like to throw this out to SO and see what interesting ideas people can come up with, so that I can compare to my implementation and see how it stacks up. I've tried to be as specific to the requirements as I can.
The thread pool needs to execute a series of tasks. The tasks can be short running (<1sec) or long running (hours or days). Each task has an associated priority (from 1 = very low to 5 = very high). Tasks can arrive at any time while the other tasks are running, so as they arrive the thread pool needs to pick these up and schedule them as threads become available.
The task priority is completely independent of the task length. In fact it is impossible to tell how long a task could take to run without just running it.
Some tasks are CPU bound while some are greatly IO bound. It is impossible to tell beforehand what a given task would be (although I guess it might be possible to detect while the tasks are running).
The primary goal of the thread pool is to maximise throughput. The thread pool should effectively use the resources of the computer. Ideally, for CPU bound tasks, the number of active threads would be equal to the number of CPUs. For IO bound tasks, more threads should be allocated than there are CPUs so that blocking does not overly affect throughput. Minimising the use of locks and using thread safe/fast containers is important.
In general, you should run higher priority tasks with a higher CPU priority (ref: SetThreadPriority). Lower priority tasks should not "block" higher priority tasks from running, so if a higher priority task comes along while all low priority tasks are running, the higher priority task will get to run.
The tasks have a "max running tasks" parameter associated with them. Each type of task is only allowed to run at most this many concurrent instances of the task at a time. For example, we might have the following tasks in the queue:
A - 1000 instances - low priority - max tasks 1
B - 1000 instances - low priority - max tasks 1
C - 1000 instances - low priority - max tasks 1
A working implementation could only run (at most) 1 A, 1 B and 1 C at the same time.
It needs to run on Windows XP, Server 2003, Vista and Server 2008 (latest service packs).
For reference, we might use the following interface:
namespace ThreadPool
{
class Task
{
public:
Task();
void run();
};
class ThreadPool
{
public:
ThreadPool();
~ThreadPool();
void run(Task *inst);
void stop();
};
}
A:
So what are we going to pick as the basic building block for this. Windows has two building blocks that look promising :- I/O Completion Ports (IOCPs) and Asynchronous Procedure Calls (APCs). Both of these give us FIFO queuing without having to perform explicit locking, and with a certain amount of built-in OS support in places like the scheduler (for example, IOCPs can avoid some context switches).
APCs are perhaps a slightly better fit, but we will have to be slightly careful with them, because they are not quite "transparent". If the work item performs an alertable wait (::SleepEx, ::WaitForXxxObjectEx, etc.) and we accidentally dispatch an APC to the thread then the newly dispatched APC will take over the thread, suspending the previously executing APC until the new APC is finished. This is bad for our concurrency requirements and can make stack overflows more likely.
A:
It needs to run on Windows XP, Server 2003, Vista and Server 2008 (latest service packs).
What feature of the system's built-in thread pools make them unsuitable for your task? If you want to target XP and 2003 you can't use the new shiny Vista/2008 pools, but you can still use QueueUserWorkItem and friends.
A:
@DrPizza - this is a very good question, and one that strikes right to the heart of the problem. There are a few reasons why QueueUserWorkItem and the Windows NT thread pool was ruled out (although the Vista one does look interesting, maybe in a few years).
Firstly, we wanted to have greater control over when it starts up and stops threads. We have heard that the NT thread pool is reluctant to start up a new thread if it thinks that the tasks are short running. We could use the WT_EXECUTELONGFUNCTION flag, but we really have no idea if the task is long or short.
Secondly, if the thread pool was already filled up with long running, low priority tasks, there would be no chance of a high priority task getting to run in a timely manner. The NT thread pool has no real concept of task priorities, so we can't do a QueueUserWorkItem and say "oh by the way, run this one right away".
Thirdly, (according to MSDN) the NT thread pool is not compatible with the STA apartment model. I'm not sure quite what this would mean, but all of our worker threads run in an STA.
A:
@DrPizza - this is a very good question, and one that strikes right to the heart of the problem. There are a few reasons why QueueUserWorkItem and the Windows NT thread pool was ruled out (although the Vista one does look interesting, maybe in a few years).
Yeah, it looks like it got quite beefed up in Vista, quite versatile now.
OK, I'm still a bit unclear about how you wish the priorities to work. If the pool is currently running a task of type A with maximal concurrency of 1 and low priority, and it gets given a new task also of type A (and maximal concurrency 1), but this time with a high priority, what should it do?
Suspending the currently executing A is hairy (it could hold a lock that the new task needs to take, deadlocking the system). It can't spawn a second thread and just let it run alongside (the permitted concurrency is only 1). But it can't wait until the low priority task is completed, because the runtime is unbounded and doing so would allow a low priority task to block a high priority task.
My presumption is that it is the latter behaviour that you are after?
A:
@DrPizza:
OK, I'm still a bit unclear about how
you wish the priorities to work. If
the pool is currently running a task
of type A with maximal concurrency of
1 and low priority, and it gets given
a new task also of type A (and maximal
concurrency 1), but this time with a
high priority, what should it do?
This one is a bit of a tricky one, although in this case I think I would be happy with simply allowing the low-priority task to run to completion. Usually, we wouldn't see a lot of the same types of tasks with different thread priorities. In our model it is actually possible to safely halt and later restart tasks at certain well defined points (for different reasons than this) although the complications this would introduce probably aren't worth the risk.
Normally, only different types of tasks would have different priorities. For example:
A task - 1000 instances - low priority
B task - 1000 instances - high priority
Assuming the A tasks had come along and were running, then the B tasks had arrived, we would want the B tasks to be able to run more or less straight away.
| Thread pool for executing arbitrary tasks with different priorities | I'm trying to come up with a design for a thread pool with a lot of design requirements for my job. This is a real problem for working software, and it's a difficult task. I have a working implementation but I'd like to throw this out to SO and see what interesting ideas people can come up with, so that I can compare to my implementation and see how it stacks up. I've tried to be as specific to the requirements as I can.
The thread pool needs to execute a series of tasks. The tasks can be short running (<1sec) or long running (hours or days). Each task has an associated priority (from 1 = very low to 5 = very high). Tasks can arrive at any time while the other tasks are running, so as they arrive the thread pool needs to pick these up and schedule them as threads become available.
The task priority is completely independent of the task length. In fact it is impossible to tell how long a task could take to run without just running it.
Some tasks are CPU bound while some are greatly IO bound. It is impossible to tell beforehand what a given task would be (although I guess it might be possible to detect while the tasks are running).
The primary goal of the thread pool is to maximise throughput. The thread pool should effectively use the resources of the computer. Ideally, for CPU bound tasks, the number of active threads would be equal to the number of CPUs. For IO bound tasks, more threads should be allocated than there are CPUs so that blocking does not overly affect throughput. Minimising the use of locks and using thread safe/fast containers is important.
In general, you should run higher priority tasks with a higher CPU priority (ref: SetThreadPriority). Lower priority tasks should not "block" higher priority tasks from running, so if a higher priority task comes along while all low priority tasks are running, the higher priority task will get to run.
The tasks have a "max running tasks" parameter associated with them. Each type of task is only allowed to run at most this many concurrent instances of the task at a time. For example, we might have the following tasks in the queue:
A - 1000 instances - low priority - max tasks 1
B - 1000 instances - low priority - max tasks 1
C - 1000 instances - low priority - max tasks 1
A working implementation could only run (at most) 1 A, 1 B and 1 C at the same time.
It needs to run on Windows XP, Server 2003, Vista and Server 2008 (latest service packs).
For reference, we might use the following interface:
namespace ThreadPool
{
class Task
{
public:
Task();
void run();
};
class ThreadPool
{
public:
ThreadPool();
~ThreadPool();
void run(Task *inst);
void stop();
};
}
| [
"So what are we going to pick as the basic building block for this. Windows has two building blocks that look promising :- I/O Completion Ports (IOCPs) and Asynchronous Procedure Calls (APCs). Both of these give us FIFO queuing without having to perform explicit locking, and with a certain amount of built-in OS support in places like the scheduler (for example, IOCPs can avoid some context switches).\nAPCs are perhaps a slightly better fit, but we will have to be slightly careful with them, because they are not quite \"transparent\". If the work item performs an alertable wait (::SleepEx, ::WaitForXxxObjectEx, etc.) and we accidentally dispatch an APC to the thread then the newly dispatched APC will take over the thread, suspending the previously executing APC until the new APC is finished. This is bad for our concurrency requirements and can make stack overflows more likely.\n",
"\nIt needs to run on Windows XP, Server 2003, Vista and Server 2008 (latest service packs).\n\nWhat feature of the system's built-in thread pools make them unsuitable for your task? If you want to target XP and 2003 you can't use the new shiny Vista/2008 pools, but you can still use QueueUserWorkItem and friends.\n",
"@DrPizza - this is a very good question, and one that strikes right to the heart of the problem. There are a few reasons why QueueUserWorkItem and the Windows NT thread pool was ruled out (although the Vista one does look interesting, maybe in a few years).\nFirstly, we wanted to have greater control over when it starts up and stops threads. We have heard that the NT thread pool is reluctant to start up a new thread if it thinks that the tasks are short running. We could use the WT_EXECUTELONGFUNCTION, but we really have no idea if the task is long or short\nSecondly, if the thread pool was already filled up with long running, low priority tasks, there would be no chance of a high priority task getting to run in a timely manner. The NT thread pool has no real concept of task priorities, so we can't do a QueueUserWorkItem and say \"oh by the way, run this one right away\".\nThirdly, (according to MSDN) the NT thread pool is not compatible with the STA apartment model. I'm not sure quite what this would mean, but all of our worker threads run in an STA.\n",
"\n@DrPizza - this is a very good question, and one that strikes right to the heart of the problem. There are a few reasons why QueueUserWorkItem and the Windows NT thread pool was ruled out (although the Vista one does look interesting, maybe in a few years).\n\nYeah, it looks like it got quite beefed up in Vista, quite versatile now.\nOK, I'm still a bit unclear about how you wish the priorities to work. If the pool is currently running a task of type A with maximal concurrency of 1 and low priority, and it gets given a new task also of type A (and maximal concurrency 1), but this time with a high priority, what should it do?\nSuspending the currently executing A is hairy (it could hold a lock that the new task needs to take, deadlocking the system). It can't spawn a second thread and just let it run alongside (the permitted concurrency is only 1). But it can't wait until the low priority task is completed, because the runtime is unbounded and doing so would allow a low priority task to block a high priority task.\nMy presumption is that it is the latter behaviour that you are after?\n",
"@DrPizza:\n\nOK, I'm still a bit unclear about how\n you wish the priorities to work. If\n the pool is currently running a task\n of type A with maximal concurrency of\n 1 and low priority, and it gets given\n a new task also of type A (and maximal\n concurrency 1), but this time with a\n high priority, what should it do?\n\nThis one is a bit of a tricky one, although in this case I think I would be happy with simply allowing the low-priority task to run to completion. Usually, we wouldn't see a lot of the same types of tasks with different thread priorities. In our model it is actually possible to safely halt and later restart tasks at certain well defined points (for different reasons than this) although the complications this would introduce probably aren't worth the risk.\nNormally, only different types of tasks would have different priorities. For example:\n\nA task - 1000 instances - low priority\nB task - 1000 instances - high priority\n\nAssuming the A tasks had come along and were running, then the B tasks had arrived, we would want the B tasks to be able to run more or less straight away.\n"
] | [
5,
1,
0,
0,
0
] | [] | [] | [
"c++",
"multithreading",
"windows"
] | stackoverflow_0000038501_c++_multithreading_windows.txt |
Q:
Free ASP.Net and/or CSS Themes
Where can I get some decent looking free ASP.Net or CSS themes?
A:
I wouldn't bother looking for ASP.NET stuff specifically (you probably won't find any anyway). A good CSS theme can easily be used in ASP.NET.
Here are some sites that I love for CSS goodness:
http://www.freecsstemplates.org/
http://www.oswd.org/
http://www.openwebdesign.org/
http://www.styleshout.com/
http://www.freelayouts.com/
A:
Microsoft hired one of the kids from A List Apart to whip some out. The .Net projects are free of charge for download.
http://msdn.microsoft.com/en-us/asp.net/aa336613.aspx
A:
I have used Open source Web Design in the past. They have quite a few css themes, don't know about ASP.Net
A:
As always, http://www.csszengarden.com/. Note that the images aren't public domain.
| Free ASP.Net and/or CSS Themes | Where can I get some decent looking free ASP.Net or CSS themes?
| [
"I wouldn't bother looking for ASP.NET stuff specifically (probably won't find any anyways). Finding a good CSS theme easily can be used in ASP.NET. \nHere's some sites that I love for CSS goodness:\nhttp://www.freecsstemplates.org/\nhttp://www.oswd.org/\nhttp://www.openwebdesign.org/\nhttp://www.styleshout.com/\nhttp://www.freelayouts.com/ \n",
"Microsoft hired one fo the kids from A List Apart to whip some out. The .Net projects are free of charge for download.\nhttp://msdn.microsoft.com/en-us/asp.net/aa336613.aspx\n",
"I have used Open source Web Design in the past. They have quite a few css themes, don't know about ASP.Net\n",
"As always, http://www.csszengarden.com/. Note that the images aren't public domain.\n"
] | [
33,
9,
5,
3
] | [] | [] | [
"asp.net",
"css",
"themes"
] | stackoverflow_0000023288_asp.net_css_themes.txt |
Q:
Remote Debugging Server Side of a Web Application with Visual Studio 2008
So, I've read that it is not a good idea to install VS2008 on my test server machine as it changes the run time environment too much. I've never attempted remote debugging with Visual Studio before, so what is the "best" way to get line by line remote debugging of server side web app code. I'd like to be able to set a breakpoint, attach, and start stepping line by line to verify code flow and, you know, debug and stuff :).
I'm sure most of the answers will pertain to ASP.NET code, and I'm interested in that, but my current code base is actually Classic ASP and ISAPI Extensions, so I care about that a little more.
Also, my test server is running in VMWare, I've noticed in the latest VMWare install it mentioning something about debugging support, but I'm unfamiliar with what that means...anyone using it, what does it do for you?
A:
First, this is MUCH easier if both the server and your workstation are on the same domain (the server needs access to connect to your machine). In your C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\Remote Debugger\x86 (or x64, or ia64) directory are the files you need to copy to your server. There are different versions between Visual Studio versions, so make sure they match on the client and server side. On the server, fire up msvsmon. It will say something like "Msvsmon started a new server named xxx@yyyy". This is the name you'll use in Visual Studio to connect to this server. You can go into Tools > Options to set the server name and to set the authentication mode (hopefully Windows Authentication) - BTW No Authentication doesn't work for managed code.
On the client side, open up Visual Studio and load the solution you're going to debug. Then go to Debug > Attach to Process. In the "Qualifier" field enter the name of the server as you saw it appear earlier. Click on the Select button and select the type of code you want to debug, then hit OK. Hopefully you'll see a list of the processes on the server that you can attach to (you should also see on the server that the debugging monitor just said you connected). Find the process to attach to (start up the app if necessary). If it's an ASP.NET website, you'd select w3wp.exe, then hit Attach. Set your breakpoints and hopefully you're now remotely debugging the code.
AFAIK - the VMWare option lets you start up code inside of a VM but debug it from your workstation.
A:
Visual Studio comes with a remote debugger that you can run as an exe on your server. It works best if you can run it as the same domain user as your copy of visual studio. You can then do an attach to process from the debugger on your machine to the IIS process on the server and debug as if it was running on your machine. You get more options for .Net debugging, but there's support for older platforms too.
| Remote Debugging Server Side of a Web Application with Visual Studio 2008 | So, I've read that it is not a good idea to install VS2008 on my test server machine as it changes the run time environment too much. I've never attempted remote debugging with Visual Studio before, so what is the "best" way to get line by line remote debugging of server side web app code. I'd like to be able to set a breakpoint, attach, and start stepping line by line to verify code flow and, you know, debug and stuff :).
I'm sure most of the answers will pertain to ASP.NET code, and I'm interested in that, but my current code base is actually Classic ASP and ISAPI Extensions, so I care about that a little more.
Also, my test server is running in VMWare, I've noticed in the latest VMWare install it mentioning something about debugging support, but I'm unfamiliar with what that means...anyone using it, what does it do for you?
| [
"First, this is MUCH easier if both the server and your workstation are on the same domain (the server needs access to connect to your machine). In your C:\\Program Files\\Microsoft Visual Studio 9.0\\Common7\\IDE\\Remote Debugger\\x86 (or x64, or ia64) directory are the files you need to copy to your server. There are different versions between Visual Studio versions, so make sure they match on the client and server side. On the server, fire up msvsmon. It will say something like \"Msvsmon started a new server named xxx@yyyy\". This is the name you'll use in Visual Studio to connect to this server. You can go into Tools > Options to set the server name and to set the authentication mode (hopefully Windows Authentication) - BTW No Authentication doesn't work for managed code. \nOn the client side, open up Visual Studio and load the solution you're going to debug. Then go to Debug > Attach to Process. In the \"Qualifier\" field enter the name of the server as you saw it appear earlier. Click on the Select button and select the type of code you want to debug, then hit OK. Hopefully you'll see a list of the processes on the server that you can attach to (you should also see on the server that the debugging monitor just said you connected). Find the process to attach to (start up the app if necessary). If it's an ASP.NET website, you'd select w3wp.exe, then hit Attach. Set your breakpoints and hopefully you're now remotely debugging the code.\nAFAIK - the VMWare option lets you start up code inside of a VM but debug it from your workstation. \n",
"Visual Studio comes with a remote debugger that you can run as an exe on your server. It works best if you can run it as the same domain user as your copy of visual studio. You can then do an attach to process from the debugger on your machine to the IIS process on the server and debug as if it was running on your machine. You get more options for .Net debugging, but there's support for older platforms too.\n"
] | [
12,
0
] | [] | [] | [
"asp.net",
"asp_classic",
"debugging",
"visual_studio"
] | stackoverflow_0000039739_asp.net_asp_classic_debugging_visual_studio.txt |
Q:
Using C in a shared multi-platform POSIX environment
I write tools that are used in a shared workspace. Since there are multiple OS's working in this space, we generally use Python and standardize the version that is installed across machines. However, if I wanted to write some things in C, I was wondering if maybe I could have the application wrapped in a Python script, that detected the operating system and fired off the correct version of the C application. Each platform has GCC available and uses the same shell.
One idea was to have the C compiled to the user's local ~/bin, with a timestamp comparison against the C source so it is not recompiled on each run, but only when the code is updated. Another was to just compile it for each platform, and have the wrapper script select the proper executable.
Is there an accepted/stable process for this? Are there any catches? Are there alternatives (assuming the absolute need to use native C code)?
Clarification: Multiple OS's are involved that do not share an ABI, e.g. OS X, various Linuxes, BSD, etc. I need to be able to update the code in place in shared folders and have the new code working more or less instantaneously. Distributing binary or source packages is less than ideal.
A:
Launching a Python interpreter instance just to select the right binary to run would be much heavier than you need. I'd distribute a shell .rc file which provides aliases.
In /shared/bin, you put the various binaries: /shared/bin/toolname-mac, /shared/bin/toolname-debian-x86, /shared/bin/toolname-netbsd-dreamcast, etc. Then, in the common shared shell .rc file, you put the logic to set the aliases according to platform, so that on OSX, it gets alias toolname=/shared/bin/toolname-mac, and so forth.
This won't work as well if you're adding new tools all the time, because the users will need to reload the aliases.
I wouldn't recommend distributing tools this way, though. Testing and qualifying new builds of the tools should be taking up enough time and effort that the extra time required to distribute the tools to the users is trivial. You seem to be optimizing to reduce the distribution time. Replacing tools that quickly in a live environment is all too likely to result in lengthy and confusing downtime if anything goes wrong in writing and building the tools--especially when subtle cross-platform issues creep in.
A:
Also, you could use autoconf and distribute your application in source form only. :)
A:
You know, you should look at static linking.
These days, we all have HUGE hard drives, and a few extra megabytes (for carrying around libc and what not) is really not that big a deal anymore.
You could also try running your applications in chroot() jails and distributing those.
A:
Depending on your mix of OSes, you might be better off creating packages for each class of system.
Alternatively, if they all share the same ABI and hardware architecture, you could also compile static binaries.
| Using C in a shared multi-platform POSIX environment | I write tools that are used in a shared workspace. Since there are multiple OS's working in this space, we generally use Python and standardize the version that is installed across machines. However, if I wanted to write some things in C, I was wondering if maybe I could have the application wrapped in a Python script, that detected the operating system and fired off the correct version of the C application. Each platform has GCC available and uses the same shell.
One idea was to have the C compiled to the user's local ~/bin, with a timestamp comparison against the C source so it is not recompiled on each run, but only when the code is updated. Another was to just compile it for each platform, and have the wrapper script select the proper executable.
Is there an accepted/stable process for this? Are there any catches? Are there alternatives (assuming the absolute need to use native C code)?
Clarification: Multiple OS's are involved that do not share an ABI, e.g. OS X, various Linuxes, BSD, etc. I need to be able to update the code in place in shared folders and have the new code working more or less instantaneously. Distributing binary or source packages is less than ideal.
| [
"Launching a Python interpreter instance just to select the right binary to run would be much heavier than you need. I'd distribute a shell .rc file which provides aliases.\nIn /shared/bin, you put the various binaries: /shared/bin/toolname-mac, /shared/bin/toolname-debian-x86, /shared/bin/toolname-netbsd-dreamcast, etc. Then, in the common shared shell .rc file, you put the logic to set the aliases according to platform, so that on OSX, it gets alias toolname=/shared/bin/toolname-mac, and so forth.\nThis won't work as well if you're adding new tools all the time, because the users will need to reload the aliases.\nI wouldn't recommend distributing tools this way, though. Testing and qualifying new builds of the tools should be taking up enough time and effort that the extra time required to distribute the tools to the users is trivial. You seem to be optimizing to reduce the distribution time. Replacing tools that quickly in a live environment is all too likely to result in lengthy and confusing downtime if anything goes wrong in writing and building the tools--especially when subtle cross-platform issues creep in.\n",
"Also, you could use autoconf and distribute your application in source form only. :)\n",
"You know, you should look at static linking.\nThese days, we all have HUGE hard drives, and a few extra megabytes (for carrying around libc and what not) is really not that big a deal anymore. \nYou could also try running your applications in chroot() jails and distributing those.\n",
"Depending on your mix os OSes, you might be better off creating packages for each class of system.\nAlternatively, if they all share the same ABI and hardware architecture, you could also compile static binaries.\n"
] | [
2,
1,
0,
0
] | [] | [] | [
"c",
"cross_platform",
"posix",
"python",
"scripting"
] | stackoverflow_0000039847_c_cross_platform_posix_python_scripting.txt |
Q:
How can I control checkboxes in a .Net Forms.TreeView?
I have a .Net desktop application with a TreeView as one of the UI elements.
I want to be able to multi-select that TreeView, only that isn't supported at all.
So I'm adding check-boxes to the tree. My problem is that only some items are selectable, and those that aren't can't consistently cascade selections.
Is there any way to disable or hide some check-boxes while displaying others?
A:
The default behavior of TreeView is that when the CheckBoxes property is set to true, checkboxes will be shown for all TreeNodes.
The behavior that you are looking for - to disable or hide some checkboxes - is a custom modification of the normal behavior. You can create a custom implementation of TreeView or TreeNode that overrides the default behavior. One other thing to try would be to use the TreeView.StateImageList property. Some sites to check out for more info:
See this post to the MSDN forums that goes through an implementation similar to what you are attempting.
CodeProject: Tri-State TreeView
CodeProject: How to handle custom node state images in a TreeView (e.g. tristate checkboxes)
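A rough sketch of the StateImageList idea from the links above - checkboxes are emulated with state images so that individual nodes can opt out entirely. The ImageList contents are assumed (index 0 = no checkbox, 1 = unchecked, 2 = checked), and for brevity any click on the node toggles it rather than hit-testing the state image area:

using System.Windows.Forms;

// stateImages is an ImageList you supply: 0 = no checkbox, 1 = unchecked, 2 = checked.
void SetUpTree(TreeView tree, ImageList stateImages)
{
    tree.CheckBoxes = false;           // bypass the built-in all-or-nothing checkboxes
    tree.StateImageList = stateImages;
    tree.NodeMouseClick += (s, e) =>
    {
        if (e.Node.StateImageIndex <= 0)
            return;                    // this node has no checkbox at all
        // Toggle between unchecked (1) and checked (2).
        e.Node.StateImageIndex = e.Node.StateImageIndex == 1 ? 2 : 1;
    };
}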
A:
I had a very similar problem in an editor I wrote recently. In the end, I used the TreeNode's BackColor property to determine the selection state of the node. I then wrote a handler for the SelectionChanged event that checked the state of the Shift/Control keys to determine if the selected node was being added to/removed from the selection or creating a new selection. There was also a Generic::List<> of the nodes that were currently selected to eliminate any tree searches.
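A minimal sketch of that approach, assuming the standard WinForms AfterSelect event (the closest thing TreeView has to a "SelectionChanged" event) and a form-level list of selected nodes; Shift-range selection is left out for brevity:

using System.Collections.Generic;
using System.Drawing;
using System.Windows.Forms;

List<TreeNode> selectedNodes = new List<TreeNode>();

void treeView1_AfterSelect(object sender, TreeViewEventArgs e)
{
    // Without Ctrl held down, start a fresh selection.
    if ((Control.ModifierKeys & Keys.Control) == 0)
    {
        foreach (TreeNode n in selectedNodes)
            n.BackColor = Color.Empty;   // un-highlight the old selection
        selectedNodes.Clear();
    }

    if (selectedNodes.Remove(e.Node))
        e.Node.BackColor = Color.Empty;  // Ctrl-click on a selected node deselects it
    else
    {
        e.Node.BackColor = SystemColors.Highlight;
        selectedNodes.Add(e.Node);
    }
}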
A:
MultiSelectTreeView:
Why doesn't .NET have a multiselect treeview? There are so many uses for one and turning on checkboxes in the treeview is a pretty lousy alternative.
| How can I control checkboxes in a .Net Forms.TreeView? | I have a .Net desktop application with a TreeView as one of the UI elements.
I want to be able to multi-select that TreeView, only that isn't supported at all.
So I'm adding check-boxes to the tree. My problem is that only some items are selectable, and those that aren't can't consistently cascade selections.
Is there any way to disable or hide some check-boxes while displaying others?
| [
"The default behavior of TreeView is that when the Checkboxes property is set to true, that checkboxes will be shown for all TreeNodes.\nThe behavior that you are looking for - to disable or hide some checkboxes - is a custom modification of the normal behavior. You can create a custom implementation of TreeView or TreeNode that overrides the default behavior. One other thing to try would be to use the TreeView.StateImageList property. Some sites to check out for more info:\n\nSee this post to the MSDN forums that goes through an implementation similar to what you are attempting. \nCodeProject: Tri-State TreeView\nCodeProject: How to handle custom node state images in a TreeView (e.g. tristate checkboxes)\n\n",
"I had a very similar problem in an editor I wrote recently. In the end, I used the TreeNode's BackColor property to determine the selection state of the node. I then wrote a handler for the SelectionChanged event that checked the state of the Shift/Control keys to determine if the selected node was being added to/removed from the selection or creating a new selection. There was also a Generic::List<> of the nodes that were currently selected to eliminate any tree searches.\n",
"MultiSelectTreeView:\n\nWhy doesn't .NET have a multiselect treeview? There are so many uses for one and turning on checkboxes in the treeview is a pretty lousy alternative.\n\n"
] | [
4,
1,
0
] | [] | [] | [
".net",
"treenode",
"treeview",
"winforms"
] | stackoverflow_0000039119_.net_treenode_treeview_winforms.txt |
Q:
How to traverse a maze programmatically when you've hit a dead end
Moving through the maze forward is pretty easy, but I can't seem to figure out how to back up through the maze to try a new route once you hit a dead end without going back too far?
A:
Use backtracking by keeping a stack of previous direction decisions.
A:
The simplest (to implement) algorithm would be to just keep a stack of locations you've been at, and the route you took from each, unless backtracking gives you that information.
To go back, just pop off old locations from the stack and check for more exits from that location until you find an old location with an untested exit.
By consistently testing the exits in the same order each time, if you know that backtracking to a location comes from down (ie. last time you were at the old location you went down), then you simply pick the next direction after down.
I am not entirely sure what you mean by going back too far though, I would assume you would want to go back to the previous place you have untested routes, is that not what you want?
Note that unless you try to keep track of the path from the starting point to your current location, and avoiding those squares when you try to find new routes, you might end up going in circle, which would eventually make the stack too large.
A simple recursive method which marks the path it takes and never enters areas that are marked can easily do this.
Also, if your thing that moves through the maze is slightly smarter than just being able to move, and hit (stop at) walls, in that it can see from its current point in all directions, I have other algorithms that might help.
A:
Eric Lippert did a series of articles on creating a C# implemention of A*, which might be more efficient.
| How to traverse a maze programmatically when you've hit a dead end | Moving through the maze forward is pretty easy, but I can't seem to figure out how to back up through the maze to try a new route once you hit a dead end without going back too far?
| [
"Use backtracking by keeping a stack of previous direction decisions.\n",
"The simplest (to implement) algorithm would be to just keep a stack of locations you've been at, and the route you took from each, unless backtracking gives you that information.\nTo go back, just pop off old locations from the stack and check for more exits from that location until you find an old location with an untested exit.\nBy consistently testing the exits in the same order each time, if you know that backtracking to a location comes from down (ie. last time you were at the old location you went down), then you simply pick the next direction after down.\nI am not entirely sure what you mean by going back too far though, I would assume you would want to go back to the previous place you have untested routes, is that not what you want?\nNote that unless you try to keep track of the path from the starting point to your current location, and avoiding those squares when you try to find new routes, you might end up going in circle, which would eventually make the stack too large.\nA simple recursive method which marks the path it takes and never enters areas that are marked can easily do this.\nAlso, if your thing that moves through the maze is slightly smarter than just being able to move, and hit (stop at) walls, in that it can see from its current point in all directions, I have other algorithms that might help.\n",
"Eric Lippert did a series of articles on creating a C# implemention of A*, which might be more efficient.\n"
] | [
4,
2,
1
] | [] | [] | [
"artificial_intelligence",
"c#",
"maze"
] | stackoverflow_0000040413_artificial_intelligence_c#_maze.txt |
Q:
How to get an array of distinct property values from in memory lists?
I have a List of Foo.
Foo has a string property named Bar.
I'd like to use LINQ to get a string[] of distinct values for Foo.Bar in List of Foo.
How can I do this?
A:
I'd go lambdas... wayyy nicer
var bars = Foos.Select(f => f.Bar).Distinct().ToArray();
works the same as what @lassevk posted.
I'd also add that you might want to keep from converting to an array until the last minute.
LINQ does some optimizations behind the scenes; queries stay in their query form until explicitly needed. So you might want to build everything you need into the query first so any possible optimization is applied altogether.
By evaluation I mean asking for something that explicitly requires evaluation, like "Count()" or "ToArray()" etc.
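To make the deferred-evaluation point concrete, a small sketch using the Foo/Bar shape from the question:

using System.Collections.Generic;
using System.Linq;

class Foo { public string Bar { get; set; } }

class Demo
{
    static void Main()
    {
        var foos = new List<Foo>
        {
            new Foo { Bar = "a" }, new Foo { Bar = "b" }, new Foo { Bar = "a" }
        };

        // Just a query definition - nothing has been enumerated yet.
        IEnumerable<string> query = foos.Select(f => f.Bar).Distinct();

        // Evaluation happens here, once, when the array is actually needed.
        string[] bars = query.ToArray(); // { "a", "b" }
    }
}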
A:
This should work if you want to use the fluent pattern:
string[] arrayStrings = fooList.Select(a => a.Bar).Distinct().ToArray();
A:
Try this:
var distinctFooBars = (from foo in foos
select foo.Bar).Distinct().ToArray();
A:
Shouldn't you be able to do something like:
var strings = (from a in fooList select a.Bar).Distinct();
string[] array = strings.ToArray();
| How to get an array of distinct property values from in memory lists? | I have a List of Foo.
Foo has a string property named Bar.
I'd like to use LINQ to get a string[] of distinct values for Foo.Bar in List of Foo.
How can I do this?
| [
"I'd go lambdas... wayyy nicer\nvar bars = Foos.Select(f => f.Bar).Distinct().ToArray();\n\nworks the same as what @lassevk posted.\nI'd also add that you might want to keep from converting to an array until the last minute. \nLINQ does some optimizations behind the scenes, queries stay in its query form until explicitly needed. So you might want to build everything you need into the query first so any possible optimization is applied altogether.\nBy evaluation I means asking for something that explicitly requires evalution like \"Count()\" or \"ToArray()\" etc.\n",
"This should work if you want to use the fluent pattern:\nstring[] arrayStrings = fooList.Select(a => a.Bar).Distinct().ToArray();\n\n",
"Try this:\nvar distinctFooBars = (from foo in foos\n select foo.Bar).Distinct().ToArray();\n\n",
"Shouldn't you be able to do something like:\nvar strings = (from a in fooList select a.Bar).Distinct();\nstring[] array = strings.ToArray();\n\n"
] | [
5,
3,
2,
0
] | [] | [] | [
".net",
"c#",
"filtering",
"linq",
"performance"
] | stackoverflow_0000040465_.net_c#_filtering_linq_performance.txt |
Q:
cx_Oracle: how do I get the ORA-xxxxx error number?
In a try/except block, how do I extract the Oracle error number?
A:
try:
cursor.execute("select 1 / 0 from dual")
except cx_Oracle.DatabaseError, e:
error, = e
print "Code:", error.code
print "Message:", error.message
This results in the following output:
Code: 1476
Message: ORA-01476: divisor is equal to zero
| cx_Oracle: how do I get the ORA-xxxxx error number? | In a try/except block, how do I extract the Oracle error number?
| [
"try:\n cursor.execute(\"select 1 / 0 from dual\")\nexcept cx_Oracle.DatabaseError, e:\n error, = e\n print \"Code:\", error.code\n print \"Message:\", error.message\n\nThis results in the following output:\nCode: 1476\nMessage: ORA-01476: divisor is equal to zero\n\n"
] | [
13
] | [] | [] | [
"cx_oracle",
"oracle",
"python"
] | stackoverflow_0000040586_cx_oracle_oracle_python.txt |
Q:
*= in Sybase SQL
I'm maintaining some code that uses a *= operator in a query to a Sybase database and I can't find documentation on it. Does anyone know what *= does? I assume that it is some sort of a join.
select * from a, b where a.id *= b.id
I can't figure out how this is different from:
select * from a, b where a.id = b.id
A:
From http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.dc34982_1500/html/mig_gde/mig_gde160.htm:
Inner and outer tables
The terms outer table and inner table describe the placement of the tables in an outer join:
In a left join, the outer table and inner table are the left and right tables respectively. The outer table and inner table are also referred to as the row-preserving and null-supplying tables, respectively.
In a right join, the outer table and inner table are the right and left tables respectively.
For example, in the queries below, T1 is the outer table and T2 is the inner table:
T1 left join T2
T2 right join T1
Or, using Transact-SQL syntax:
T1 *= T2
T2 =* T1
A:
It means outer join, a simple = means inner join.
*= is LEFT JOIN and =* is RIGHT JOIN.
(or vice versa, I keep forgetting since I'm not using it any more, and Google isn't helpful when searching for *=)
A:
Of course, you should write it this way:
SELECT *
FROM a
LEFT JOIN b ON b.id=a.id
The a,b syntax is evil.
A:
ANSI-82 syntax
select
*
from
a
, b
where
a.id *= b.id
ANSI-92
select
*
from
a
left outer join b
on a.id = b.id
A:
select * from a, b where a.id = b.id
Requires that a row exist in b where b.id = a.id in order to return an answer
select * from a, b where a.id *= b.id
Will fill the columns from b with nulls when there wasn't a row in b where b.id = a.id.
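A small worked example makes the difference concrete (tables and rows are made up for illustration; Transact-SQL syntax):
create table a (id int)
create table b (id int)
insert into a values (1)
insert into a values (2)
insert into b values (1)

select a.id, b.id from a, b where a.id = b.id
-- inner join: one row, (1, 1)

select a.id, b.id from a, b where a.id *= b.id
-- outer join: two rows, (1, 1) and (2, NULL)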
| *= in Sybase SQL | I'm maintaining some code that uses a *= operator in a query to a Sybase database and I can't find documentation on it. Does anyone know what *= does? I assume that it is some sort of a join.
select * from a, b where a.id *= b.id
I can't figure out how this is different from:
select * from a, b where a.id = b.id
| [
"From http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.dc34982_1500/html/mig_gde/mig_gde160.htm:\nInner and outer tables\nThe terms outer table and inner table describe the placement of the tables in an outer join:\n\nIn a left join, the outer table and inner table are the left and right tables respectively. The outer table and inner table are also referred to as the row-preserving and null-supplying tables, respectively.\nIn a right join, the outer table and inner table are the right and left tables respectively.\n\nFor example, in the queries below, T1 is the outer table and T2 is the inner table:\n\nT1 left join T2\nT2 right join T1\n\nOr, using Transact-SQL syntax:\n\nT1 *= T2\nT2 =* T1\n\n",
"It means outer join, a simple = means inner join.\n*= is LEFT JOIN and =* is RIGHT JOIN.\n\n(or vice versa, I keep forgetting since I'm not using it any more, and Google isn't helpful when searching for *=)\n",
"Of course, you should write it this way:\nSELECT *\nFROM a\nLEFT JOIN b ON b.id=a.id\n\nThe a,b syntax is evil.\n",
"ANSI-82 syntax \nselect \n * \nfrom \n a\n , b \n\nwhere \n a.id *= b.id\n\nANSI-92\nselect \n * \nfrom \n a\n left outer join b \n on a.id = b.id\n\n",
"select * from a, b where a.id = b.id\nRequires that a row exist in where b.id = a.id in order to return an answer\nselect * from a, b where a.id *= b.id\nWill fill the columns from b with nulls when there wasn't a row in b where b.id = a.id.\n"
] | [
14,
9,
6,
5,
1
] | [] | [] | [
"join",
"sql",
"sybase",
"tsql"
] | stackoverflow_0000040665_join_sql_sybase_tsql.txt |
Q:
Is there a secret trick to force antialiasing inside Viewport3D in Windows XP?
Under Windows XP, WPF true 3D content (which is usually displayed using the Viewport3D control) looks extremely ugly because, by default, it is not antialiased like the rest of the WPF graphics. Especially at lower resolutions the experience is so bad that it cannot be used in production code.
I have managed to force antialiasing on some Nvidia graphics cards using the driver's settings. Unfortunately, this sometimes yields ugly artifacts and only works with specific cards and driver versions. The official word from Microsoft in this regard is that antialiased 3D is generally not supported under Windows XP, and the artifacts I see result from the fact that WPF already does its own antialiasing (on XP, only for 2D).
So I was wondering if there is maybe some other secret trick that lets me force antialiasing on WPF 3D content under Windows XP.
A:
Have you tried this (from your thread on MSDN forums)?
Well, it seems the reference in the MSDN link above incorrectly specifies the affected registry root key. In MSDN it is specified as HKEY_CURRENT_USER, while the correct root key should be HKEY_LOCAL_MACHINE. I've tried setting HKEY_LOCAL_MACHINE\Software\Microsoft\Avalon.Graphics\MaxMultisampleType to '4' and I can get antialiasing for my WPF application on XP.
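For convenience, here is that setting as a .reg sketch (value name as documented in the WPF graphics registry settings; this is unsupported on XP, so treat it as experimental):
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\Microsoft\Avalon.Graphics]
"MaxMultisampleType"=dword:00000004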
A:
The feeling I get from Matthew MacDonald's Pro WPF Windows Presentation Foundation in .NET 3.0 is that it's not possible:
There's one exception to WPF's software support. Due to poor driver support, WPF only performs antialiasing for 3-D drawings if you're running your application on Windows Vista (and you have a native Windows Vista driver for your video card).
I've never seen anything to suggest that you can enable AA in WPF 3D on anything but Vista, but if there is a way it's new to me and I'd love to know as well!
A:
Does your video card support Shader 2.0? You can refer to this wiki page to see if it does...
| Is there a secret trick to force antialiasing inside Viewport3D in Windows XP? | Under Windows XP WPF true 3D content (which is usually displayed using the Viewport3D control) looks extremely ugly because it is by default not antialiased as the rest of the WPF graphics are. Especially at lower resolution the experience is so bad that it can not be used in production code.
I have managed to force antialiasing on some Nvidia graphics cards using the settings of the driver. Unfortunately, this sometimes yields ugly artifacts and only works with specific cards and driver versions. The official word from Microsoft on this regard is that antialiased 3D is generally not supported under Windows XP and the artifact I see result from the fact that WPF already does its own antialiasing (on XP only for 2D).
So I was wondering if there is maybe some other secret trick that lets me force antialiasing on WPF 3D content under Windows XP.
| [
"Have you tried this (from your thread on MSDN forums)?\n\nWell, it seems the reference in the MSDN link above incorrectly specify the affected registry root key. In MSDN it is specified as HKEY_CURRENT_USER, while the correct root key should be HKEY_LOCAL_MACHINE. I've tried setting up the HKEY_LOCAL_MACHINE\\Software\\Microsoft\\Avalon.Graphics\\MaxMultiplesampleType to '4' and I can get antialiasing for my WPF Application on XP. \n\n",
"The feeling I get from Matthew MacDonald's Pro WPF Windows Presentation Foundation in .NET 3.0 is that it's not possible:\n\nThere's one exception to WPF's software support. Due to poor driver support, WPF only performs antialiasing for 3-D drawings if you're running your application on Windows Vista (and you have a native Windows Vista driver for your video card).\n\nI've never seen anything to suggest that you can enable AA in WPF 3D on anything but Vista, but if there is a way it's new to me and I'd love to know as well!\n",
"Does your video card support Shader 2.0? You can refer to this wiki page to see if it does...\n"
] | [
2,
2,
0
] | [] | [] | [
"3d",
"antialiasing",
"viewport3d",
"windows_xp",
"wpf"
] | stackoverflow_0000039454_3d_antialiasing_viewport3d_windows_xp_wpf.txt |
Q:
Hibernate saveOrUpdate with another object in the session
Is there any way to save an object using Hibernate if there is already an object using that identifier loaded into the session?
Doing session.contains(obj) seems to only return true if the session contains that exact object, not another object with the same ID.
Using merge(obj) throws an exception if the object is new
A:
Have you tried calling .SaveOrUpdateCopy()?
It should work in all instances, whether there is an entity with the same id in the session or no entity at all. This is basically the catch-all method, as it converts a transient object into a persistent one (Save), updates the object if it already exists (Update), or even handles the case where the entity is a copy of an already existing object (Copy).
Failing that, you may have to identify and .Evict() the existing object before Attaching (.Update()) your "new" object.
This should be easy enough to do:
IPersistable entity = Whatever(); // This is the object we're trying to update
// (IPersistable has an id field)
session.Evict(session.Get(entity.GetType(), entity.Id));
session.SaveOrUpdate(entity);
Although the above code could probably do with some null checking for the .Get() call.
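Spelling out that null check, the same idea looks like this (a sketch in the answer's NHibernate-style syntax):
object existing = session.Get(entity.GetType(), entity.Id);
if (existing != null)
{
    session.Evict(existing); // detach the stale copy first
}
session.SaveOrUpdate(entity);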
A:
How about:
session.replicate(entity, ReplicationMode.OVERWRITE);
?
| Hibernate saveOrUpdate with another object in the session | Is there any way to save an object using Hibernate if there is already an object using that identifier loaded into the session?
Doing session.contains(obj) seems to only return true if the session contains that exact object, not another object with the same ID.
Using merge(obj) throws an exception if the object is new
| [
"Have you tried calling .SaveOrUpdateCopy()? \nIt should work in all instances, if there is an entity by the same id in the session or if there is no entity at all. This is basically the catch-all method, as it converts a transient object into a persistent one (Save), updates the object if it is existing (Update) or even handles if the entity is a copy of an already existing object (Copy).\nFailing that, you may have to identify and .Evict() the existing object before Attaching (.Update()) your \"new\" object.\nThis should be easy enough to do:\nIPersistable entity = Whatever(); // This is the object we're trying to update\n// (IPersistable has an id field)\nsession.Evict(session.Get(entity.GetType(), entity.Id));\nsession.SaveOrUpdate(entity);\n\nAlthough the above code could probably do with some null checking for the .Get() call.\n",
"How about:\nsession.replicate(entity, ReplicationMode.OVERWRITE);\n\n?\n"
] | [
4,
1
] | [] | [] | [
"hibernate",
"orm"
] | stackoverflow_0000026450_hibernate_orm.txt |
Q:
How to change Build Numbering format in Visual Studio
I've inherited a .NET application that automatically updates its version number with each release. The problem, as I see it, is the length and number of digits in the version number.
An example of the current version number format is 3.5.3167.26981 which is a mouthful for the users to say when they are reporting bugs.
What I would like is something more like this: 3.5 (build 3198). I would prefer to manually update the major and minor versions, but have the build number update automatically.
Even better, I don't want the build number to increment unless I am compiling in RELEASE mode.
Anyone know if there is a way to do this -- and how?
A:
In one of the project files, probably AssemblyInfo.cs, the assembly version attribute is set to [assembly: AssemblyVersion("3.5.*")] or something similar. The * basically means it lets Visual Studio automatically set the build and revision number.
You can change this to a hard coded value in the format <major version>.<minor version>.<build number>.<revision>
You are allowed to use any or all of the precision. For instance 3.5 or 3.5.3167 or 3.5.3167.10000.
You can also use compiler conditions to change the versioning based on whether you're doing a debug build or release build.
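For the "only increment in RELEASE mode" requirement, one approach is conditional compilation in AssemblyInfo.cs; a sketch (the version numbers are placeholders):
#if DEBUG
[assembly: AssemblyVersion("3.5.0.0")] // frozen version for debug builds
#else
[assembly: AssemblyVersion("3.5.*")]   // compiler fills in build/revision
#endif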
A:
At a previous company we did something like this by writing an Ant task to get the current Subversion changeset string, which we used as the build number, appended after the major, minor, and tertiary numbers. You could do something like this with Visual Studio as well.
A:
Use a '*' wildcard in the AssemblyVersion attribute. Documentation is here. Note that if the application is built from multiple assemblies, the version you care most about is the one for the .exe.
| How to change Build Numbering format in Visual Studio | I've inherited a .NET application that automatically updates it's version number with each release. The problem, as I see it, is the length and number of digits in the version number.
An example of the current version number format is 3.5.3167.26981 which is a mouthful for the users to say when they are reporting bugs.
What I would like is something more like this: 3.5 (build 3198). I would prefer to manually update the major and minor versions, but have the build number update automatically.
Even better, I don't want the build number to increment unless I am compiling in RELEASE mode.
Anyone know if there is a way to do this -- and how?
| [
"In one of the project files, probably AssemblyInfo.cs, the assembly version attribute is set to [assembly: AssemblyVersion(\"3.5.*\")] or something similar. The * basically means it lets Visual Studio automatically set the build and revision number. \nYou can change this to a hard coded value in the format <major version>.<minor version>.<build number>.<revision>\nYou are allowed to use any or all of the precision. For instance 3.5 or 3.5.3167 or 3.5.3167.10000.\nYou can also use compiler conditions to change the versioning based on whether you're doing a debug build or release build.\n",
"At a previous company we did something like this by writing an Ant task to get the current Subversion changeset string, which we used as the build number, appended after the major, minor, and tertiary numbers. You could do something like this with Visual Studio as well.\n",
"Use a '*' wildcard in the AssemblyVersion attribute. Documentation is here. Note that if the application is built from multiple assemblies, the version you care most about is the one for the .exe.\n"
] | [
3,
0,
0
] | [] | [] | [
"visual_studio"
] | stackoverflow_0000040779_visual_studio.txt |
Q:
jQuery and Prototype Selector Madness
Both the jQuery and Prototype JavaScript libraries refuse to allow me to use a variable to select a list item element by index number, although they accept a hard-coded number.
For example, in Prototype this works:
$$('li')[5].addClassName('active');
But this will not work no matter how I try to cast the variable as a number or integer:
$$('li')[currentPage].addClassName('active');
In jQuery I get similar weirdness. This will work:
jQuery('li').eq(5).addClass("active");
But, again, this will not work even though the value of currentPage is 5 and its type is number:
jQuery('li').eq(currentPage).addClass("active");
I'm trying to create a JavaScript pagination system and I need to set the class on the active page button. The list item elements are created dynamically depending upon the number of pages I need.
A:
Are you certain that currentPage is an integer? Try something like:
var currentPage = 5;
jQuery('li').eq(currentPage);
as a simple sanity check. If that works, you should try casting to Integer.
A:
Make sure that the currentPage variable is correctly scoped in the code where it is being accessed. Could the variable be changed somewhere else in the code before you are accessing it? Tools like Firebug can help you to add a breakpoint at the point of execution and see the value of your variable.
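For instance, if currentPage arrives as a string (from a URL or form field, say), coercing it explicitly rules that out; a sketch reusing the question's variable name:
currentPage = parseInt(currentPage, 10); // force a real integer
jQuery('li').eq(currentPage).addClass("active");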
A:
It looks like I just needed to be more specific in my element selector, although it is weird that a hard-coded number would work.
jQuery('#pagination-digg li').eq(currentPage).addClass("active");
| jQuery and Prototype Selector Madness | Both the jQuery and Prototpye JavaScript libraries refuse to allow me to use a variable to select an list item element by index number although they accept a hard coded number.
For example, in Prototype this works:
$$('li')[5].addClassName('active');
But this will not work no matter how I try to cast the variable as a number or integer:
$$('li')[currentPage].addClassName('active');
In jQuery I get similar weirdness. This will work:
jQuery('li').eq(5).addClass("active");
But, again, this will not work even though the value of currentPage is 5 and its type is number:
jQuery('li').eq(currentPage).addClass("active");
I'm trying to create a JavaScript pagination system and I need to set the class on the active page button. The list item elements are created dynamically depending upon the number of pages I need.
| [
"Are you certain that currentPage is an integer? Try something like:\nvar currentPage = 5;\njQuery('li').eq(currentPage);\n\nas a simple sanity check. If that works, you should try casting to Integer.\n",
"Make sure that the currentPage variable is correctly scoped in the code where it is being accessed. Could the variable be changed somewhere else in the code before you are accessing it? Tools like Firebug can help you to add a breakpoint at the point of execution and see the value of your variable.\n",
"It looks like I just needed to be more specific in my element selector although it is weird that a hard coded number would work.\njQuery('#pagination-digg li').eq(currentPage).addClass(\"active\");\n\n"
] | [
5,
2,
2
] | [] | [] | [
"addclass",
"css_selectors",
"jquery",
"prototypejs"
] | stackoverflow_0000040590_addclass_css_selectors_jquery_prototypejs.txt |
Q:
Windows Vista Virtual PC-image for Visual Studio-development minimized
Which features and services in Vista can you remove with nLite (or tool of choice) to make a Virtual PC-image of Vista as small as possible?
The VPC must work with development in Visual Studio.
A normal install of Vista today is like 12-14 GB, which is silly when I got it to work with Visual Studio at 4 GB. But with Visual Studio it totals around 8 GB, which is a bit heavy to move around in multiple copies.
A:
You can try and cut stuff out with vLite, but unless you cut out a real lot it's not going to save a ton of drive space. Here's your best bets:
Disable Hibernate and run disk cleanup to remove any hibernation file.
Disable System restore entirely and use disk cleanup to remove all restore points... this will save an enormous amount of space.
Disable SuperFetch (since it kills your VM hard drive with its crazy usage); see the command sketch after this list.
Minimize the size of your pagefile by setting a smaller static size and make sure to assign lots of memory to your VM to compensate.
Use the disk utilities to shrink your VM drive down as far as possible.
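A sketch of the scriptable items above as plain commands (run inside the VM from an elevated prompt; SysMain is the standard service name behind SuperFetch):
rem Disable hibernation and remove the hibernation file
powercfg -h off
rem SuperFetch runs as the SysMain service
sc config SysMain start= disabled
net stop SysMain

System Restore has no simple one-liner on Vista; the System Protection UI (or the WMI SystemRestore class) is the usual route.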
Once you have the base machine configured, I would suggest using VMware workstation and the awesome Linked Clones feature, which will let you create a completely new VM based on the base machine, but only using a portion of the space.
I would not advise running a Vista VM from a USB flash drive; it will be slower than dirt.
| Windows Vista Virtual PC-image for Visual Studio-development minimized | Which features and services in Vista can you remove with nLite (or tool of choice) to make a Virtual PC-image of Vista as small as possible?
The VPC must work with development in Visual Studio.
A normal install of Vista today is like 12-14 GB, which is silly when I got it to work with Visual Studio at 4 GB. But with Visual Studio it totals around 8 GB, which is a bit heavy to move around in multiple copies.
| [
"You can try and cut stuff out with vLite, but unless you cut out a real lot it's not going to save a ton of drive space. Here's your best bets:\n\nDisable Hibernate and run disk cleanup to remove any hibernation file.\nDisable System restore entirely and use disk cleanup to remove all restore points... this will save an enormous amount of space.\nDisable SuperFetch (since it kills your VM hard drive with it's crazy usage)\nMinimize the size of your pagefile by setting a smaller static size and make sure to assign lots of memory to your VM to compensate.\nUse the disk utilities to shrink your VM drive down as far as possible.\n\nOnce you have the base machine configured, I would suggest using VMware workstation and the awesome Linked Clones feature, which will let you create a completely new VM based on the base machine, but only using a portion of the space. \nI would not advise running a Vista VM from a USB flash drive, it will be slower than dirt.\n"
] | [
3
] | [] | [] | [
"virtual_pc",
"windows_vista"
] | stackoverflow_0000039357_virtual_pc_windows_vista.txt |
Q:
Server centered vs. client centered architecture
For a typical business application, should the focus be on client processing via AJAX, i.e. pull the data from the server and process it on the client, or would you suggest a more classic ASP.NET approach with the server being responsible for handling most of the UI events? I find it hard to come up with a good 'default architecture' from which to start. Maybe someone has an open-source example application which they could recommend.
A:
It depends greatly on the application and user. In the general case, however, you'll always scale better and the user will have a better experience if as much of the processing as possible happens on the client.
Further, with Google Gears and other such frameworks it's possible to separate the client from the network and still have use of the application. If all the UI is on the server it's much harder to roll out a roaming solution.
A:
It really depends on the application and the situation, but just keep in mind that every hit to the server is costly, both in adding load (perhaps minimally), but also in terms of UI responsiveness. I am of the mind that doing things in JavaScript when possible is a good idea, if it can make your UI feel snappier.
Of course, it all depends on what you are trying to do, and whether it matters if the UI is snappy (an internal web app probably doesn't NEED extra development to make the UI more attractive and quicker/easier to use, whereas something used by the general public probably needs to be as polished and tuned as possible).
A:
Do you need to trust the data? If so, be aware that it's trivial to tamper with client-processed data in nasty and malicious ways. If that's the case, you'll want to process info on the server.
Also, be aware that it can be a lot harder to code javascript apps so they are stable, reliable, and bug free. Can you lock down your users so they only use one particular browser?
| Server centered vs. client centered architecture | For a typical business application, should the focus be on client processing via AJAX i.e. pull the data from the server and process it on the client or would you suggest a more classic ASP.Net approach with the server being responsible for handling most of the UI events? I find it hard to come up with a good 'default architecture' from which to start. Maybe someone has an open source example application which they could recommend.
| [
"It depends greatly on the application and user. In the general case, however, you'll always scale better and the user will have a better experience if as much of the processing as possible happens on the client.\nFurther, with Google Gears and other such frameworks it's possible to separate the client from the network and still have use of the application. If all the UI is on the server it's much harder to roll out a roaming solution.\n",
"It really depends on the application and the situation, but just keep in mind that every hit to the server is costly, both in adding load (perhaps minimally), but also in terms of UI responsiveness. I am of the mind that doing things in JavaScript when possible is a good idea, if it can make your UI feel snappier.\nOf course, it all depends on what you are trying to do, and whether it matters if the UI is snappy (an internal web app probably doesn't NEED extra development to make the UI more attractive and quicker/easier to use, whereas something that is used by the general public by a mass audience probably needs to be as polished and tuned as possible).\n",
"Do you need to trust the data? If so, be aware that it's trivial to tamper with client-processed data in nasty and malicious ways. If that's the case, you'll want to process info on the server.\nAlso, be aware that it can be a lot harder to code javascript apps so they are stable, reliable, and bug free. Can you lock down your users so they only use one particular browser?\n"
] | [
1,
1,
0
] | [] | [] | [
".net",
"architecture"
] | stackoverflow_0000040723_.net_architecture.txt |
Q:
How do I pass multiple string parameters to a PowerShell script?
I am trying to do some string concatenation/formatting, but it's putting all the parameters into the first placeholder.
Code
function CreateAppPoolScript([string]$AppPoolName, [string]$AppPoolUser, [string]$AppPoolPass)
{
# Command to create an IIS application pool
$AppPoolScript = "cscript adsutil.vbs CREATE ""w3svc/AppPools/$AppPoolName"" IIsApplicationPool`n"
$AppPoolScript += "cscript adsutil.vbs SET ""w3svc/AppPools/$AppPoolName/WamUserName"" ""$AppPoolUser""`n"
$AppPoolScript += "cscript adsutil.vbs SET ""w3svc/AppPools/$AppPoolName/WamUserPass"" ""$AppPoolPass""`n"
$AppPoolScript += "cscript adsutil.vbs SET ""w3svc/AppPools/$AppPoolName/AppPoolIdentityType"" 3"
return $AppPoolScript
}
$s = CreateAppPoolScript("name", "user", "pass")
write-host $s
Output
cscript adsutil.vbs CREATE "w3svc/AppPools/name user pass" IIsApplicationPool
cscript adsutil.vbs SET "w3svc/AppPools/name user pass/WamUserName" ""
cscript adsutil.vbs SET "w3svc/AppPools/name user pass/WamUserPass" ""
cscript adsutil.vbs SET "w3svc/AppPools/name user pass/AppPoolIdentityType" 3
A:
Lose the parentheses and commas.
Calling your function as:
$s = CreateAppPoolScript "name" "user" "pass"
gives:
cscript adsutil.vbs CREATE "w3svc/AppPools/name" IIsApplicationPool
cscript adsutil.vbs SET "w3svc/AppPools/name/WamUserName" "user"
cscript adsutil.vbs SET "w3svc/AppPools/name/WamUserPass" "pass"
cscript adsutil.vbs SET "w3svc/AppPools/name/AppPoolIdentityType" 3
A:
By the way, using a PowerShell here-string might make your function a little easier to read as well, since you won't need to double up all the "-marks:
function CreateAppPoolScript([string]$AppPoolName, [string]$AppPoolUser, [string]$AppPoolPass)
{
# Command to create an IIS application pool
return @"
cscript adsutil.vbs CREATE "w3svc/AppPools/$AppPoolName" IIsApplicationPool
cscript adsutil.vbs SET "w3svc/AppPools/$AppPoolName/WamUserName" "$AppPoolUser"
cscript adsutil.vbs SET "w3svc/AppPools/$AppPoolName/WamUserPass" "$AppPoolPass"
cscript adsutil.vbs SET "w3svc/AppPools/$AppPoolName/AppPoolIdentityType" 3
"@
}
A:
Paul's right.
In PowerShell, function parameters are not enclosed in parentheses. (Method parameters still are.)
Your initial call was just passing one big array to the function, rather than the three separate parameters you wanted.
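A quick sketch of that binding difference, using a throwaway function:
function Show([string]$a, [string]$b) { "a=$a b=$b" }
Show("x", "y")   # prints "a=x y b=" : the whole array landed in $a
Show "x" "y"     # prints "a=x b=y" : two separate parameters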
| How do I pass multiple string parameters to a PowerShell script? | I am trying to do some string concatenation/formatting, but it's putting all the parameters into the first placeholder.
Code
function CreateAppPoolScript([string]$AppPoolName, [string]$AppPoolUser, [string]$AppPoolPass)
{
# Command to create an IIS application pool
$AppPoolScript = "cscript adsutil.vbs CREATE ""w3svc/AppPools/$AppPoolName"" IIsApplicationPool`n"
$AppPoolScript += "cscript adsutil.vbs SET ""w3svc/AppPools/$AppPoolName/WamUserName"" ""$AppPoolUser""`n"
$AppPoolScript += "cscript adsutil.vbs SET ""w3svc/AppPools/$AppPoolName/WamUserPass"" ""$AppPoolPass""`n"
$AppPoolScript += "cscript adsutil.vbs SET ""w3svc/AppPools/$AppPoolName/AppPoolIdentityType"" 3"
return $AppPoolScript
}
$s = CreateAppPoolScript("name", "user", "pass")
write-host $s
Output
cscript adsutil.vbs CREATE "w3svc/AppPools/name user pass" IIsApplicationPool
cscript adsutil.vbs SET "w3svc/AppPools/name user pass/WamUserName" ""
cscript adsutil.vbs SET "w3svc/AppPools/name user pass/WamUserPass" ""
cscript adsutil.vbs SET "w3svc/AppPools/name user pass/AppPoolIdentityType" 3
| [
"Lose the parentheses and commas. \nCalling your function as:\n$s = CreateAppPoolScript \"name\" \"user\" \"pass\"\n\ngives:\ncscript adsutil.vbs CREATE \"w3svc/AppPools/name\" IIsApplicationPool\ncscript adsutil.vbs SET \"w3svc/AppPools/name/WamUserName\" \"user\"\ncscript adsutil.vbs SET \"w3svc/AppPools/name/WamUserPass\" \"pass\"\ncscript adsutil.vbs SET \"w3svc/AppPools/name/AppPoolIdentityType\" 3\n\n",
"By the way, using a PowerShell here-string might make your function a little easier to read as well, since you won't need to double up all the \"-marks:\nfunction CreateAppPoolScript([string]$AppPoolName, [string]$AppPoolUser, [string]$AppPoolPass)\n{\n # Command to create an IIS application pool\n return @\"\ncscript adsutil.vbs CREATE \"w3svc/AppPools/$AppPoolName\" IIsApplicationPool\ncscript adsutil.vbs SET \"w3svc/AppPools/$AppPoolName/WamUserName\" \"$AppPoolUser\"\ncscript adsutil.vbs SET \"w3svc/AppPools/$AppPoolName/WamUserPass\" \"$AppPoolPass\"\ncscript adsutil.vbs SET \"w3svc/AppPools/$AppPoolName/AppPoolIdentityType\" 3\n\"@\n}\n\n",
"Paul's right.\nIn PowerShell, function parameters are not enclosed in parenthesis. (Method parameters still are.)\nYour initial call was just passing one big array to the function, rather than the three separate parameters you wanted.\n"
] | [
52,
6,
3
] | [] | [] | [
"arguments",
"parameters",
"powershell",
"string"
] | stackoverflow_0000022732_arguments_parameters_powershell_string.txt |