Q:
How do I increment a value in a textfile using the regular Windows command-line?
I'd like to keep a "compile-counter" for one of my projects. I figured a quick and dirty way to do this would be to keep a text file with a plain number in it, and then simply call upon a small script to increment this each time I compile.
How would I go about doing this using the regular Windows command line?
I don't really feel like installing some extra shell to do this but if you have any other super simple suggestions that would accomplish just this, they're naturally appreciated as well.
A:
You can try a plain old batch file.
@echo off
for /f "delims==" %%i in (counter.txt) do set /A temp_counter=%%i+1
echo %temp_counter% > counter.txt
This assumes count.bat and counter.txt are located in the same directory.
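If counter.txt might not exist yet (for example, on the very first build), a slightly more defensive sketch of the same idea; the file name and the seed value are assumptions:
@echo off
rem Seed the counter on first use, then increment it.
if not exist counter.txt (>counter.txt echo 0)
for /f "delims==" %%i in (counter.txt) do set /a temp_counter=%%i+1
rem Putting the redirection first avoids the cmd gotcha where a single-digit
rem value before > (e.g. echo 1> file) is parsed as a file-handle redirect.
>counter.txt echo %temp_counter%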
A:
It would be a new shell (but I think it is worth it), but from PowerShell it would be
[int](get-content counter.txt) + 1 | out-file counter.txt
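One caveat, offered as an aside: in Windows PowerShell, Out-File writes UTF-16 by default, which can surprise non-PowerShell tools that later read the file. A sketch of the same one-liner using Set-Content, which writes in the system's default ANSI encoding instead:
Set-Content counter.txt ([int](Get-Content counter.txt) + 1)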
A:
I'd suggest just appending the current datetime of the build to a log file.
date >> builddates.txt
That way you get a build count via the # of lines, and you may also get some interesting statistics if you can be bothered analysing the dates and times later on.
The extra size & time to count the number of lines in the file will be insignificant unless you are doing seriously fast project iterations!
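One Windows-specific hedge: at a stock cmd prompt, plain date stops and asks for a new date, so date /t (or echoing %DATE% %TIME%) is the non-interactive form. Turning the log back into a count is also a one-liner; a sketch using only built-in tools:
>> builddates.txt echo %DATE% %TIME%
find /c /v "" < builddates.txt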
A:
If you don't mind running a Microsoft Windows-based script, then this JScript will work OK. Just save it as a .js file and run it from the command line with "wscript c:\script.js".
var fso, f, fileCount;
var ForReading = 1, ForWriting = 2;
var filename = "c:\\testfile.txt";
fso = new ActiveXObject("Scripting.FileSystemObject");
//create the file if it's not found
if (! fso.FileExists(filename))
{
f = fso.OpenTextFile(filename, ForWriting, true);
f.Write("0");
f.Close();
}
f = fso.OpenTextFile(filename, ForReading);
fileCount = parseInt(f.ReadAll());
//make sure the input is a whole number
if (isNaN(fileCount))
{
fileCount = 0;
}
fileCount = fileCount + 1;
f = fso.OpenTextFile(filename, ForWriting, true);
f.Write(fileCount);
f.Close();
Q:
How can I update in Linq an entity that is disconnected from database?
The code below does not run correctly and throws an InvalidOperationException.
public void Foo()
{
DataContext context = new DataContext();
LinqEntity item = new LinqEntity(){ Id = 1, Name = "John", Surname = "Doe"} ;
context.LinqEntities.Attach(item, true);
}
A:
By default, the entities will use all fields for checking concurrency when making edits. That's what's throwing the InvalidOperationException.
This can be avoided by setting the Update Check property to Never. This must be done on all fields to attach the entity as modified. If this is done, an additional call to context.SubmitChanges() will save the data.
Alternatively, if you know the original values, you can attach and then make the updates, but all values that are being checked must match the original values.
LinqEntity item = new LinqEntity(){ Id = 1, Name = "OldName", Surname = "OldSurname"};
context.LinqEntities.Attach(item);
item.Name = "John";
item.Surname = "Doe";
context.SubmitChanges();
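If you have both snapshots in hand, there is also an Attach overload that takes the modified and original entities together so LINQ to SQL can compute the differences itself. A sketch, reusing the entity shape from the question:
LinqEntity original = new LinqEntity(){ Id = 1, Name = "OldName", Surname = "OldSurname"};
LinqEntity modified = new LinqEntity(){ Id = 1, Name = "John", Surname = "Doe"};
context.LinqEntities.Attach(modified, original);
context.SubmitChanges();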
A:
I'm not sure what you mean by disconnected from the database.
It appears that you are trying to insert a new row into the LinqEntities table -- is that correct?
If that is the case you'll want to do
context.LinqEntities.InsertOnSubmit(item);
context.SubmitChanges();
A:
OK, if you're trying to update a row with ID = 1, you'll do it like this:
DataContext context = new DataContext();
LinqEntity item = (from le in context.LinqEntities
where le.ID == 1
select le).Single();
item.Name = "John";
item.Surname = "Doe";
context.SubmitChanges();
You could also replace the Linq expression with a more concise lambda:
LinqEntity item = context.LinqEntities.Single(le => le.ID == 1);
The most important thing the DataContext does is track any changes you make, so that when you call the SubmitChanges method it will autogenerate the SQL statements for the things you've changed.
A:
When using an ORM you typically select an object before updating it.
You can use DataContext.ExecuteCommand(...) to bypass the ORM if you do not want to do a select.
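A sketch of that ExecuteCommand route; the table and column names are assumptions based on the question's entity. The {0}-style placeholders are turned into SQL parameters, not raw string substitutions:
context.ExecuteCommand(
    "UPDATE LinqEntities SET Name = {0}, Surname = {1} WHERE Id = {2}",
    "John", "Doe", 1);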
Q:
Do C++ logging frameworks sacrifice reusability?
In C++, there isn't a de-facto standard logging tool. In my experience, shops roll their own. This creates a bit of a problem, however, when trying to create reusable software components. If everything in your system depends on the logging component, this makes the software less reusable, basically forcing any downstream projects to take your logging framework along with the components they really want.
IOC (dependency injection) doesn't really help with the problem since your components would need to depend on a logging abstraction. Logging components themselves can add dependencies on file I/O, triggering mechanisms, and other possibly unwanted dependencies.
Does adding a dependency to your proprietary logging framework sacrifice the reusability of the component?
A:
Yes. But dependency injection will help in this case.
You can create an abstract logging base class and create implementations for the logging frameworks you want to use. Your components are just dependent on the abstract base class. And you inject the implementations along with all their dependencies as needed.
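A minimal sketch of that shape in C++; the names are illustrative, not from any particular framework:
#include <string>

class ILogger {
public:
    virtual ~ILogger() {}
    virtual void Log(const std::string& message) = 0;
};

// The reusable component depends only on the abstraction...
class Widget {
public:
    explicit Widget(ILogger& logger) : logger_(logger) {}
    void DoWork() { logger_.Log("Widget::DoWork called"); }
private:
    ILogger& logger_;
};
// ...while the application constructs a concrete logger and injects it.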
A:
Yes, Mendelt is right. We do exactly this in our products. Everything depends on the ILogger abstract interface, but it does not depend on anything else. Typically an executable or a high-level DLL will be the one to construct an actual implemented Logger interface and inject it.
A:
If you are looking to build libraries which won't be recompiled, but want to provide a logging interface, then perhaps a good way is to allow the user (of the library) to provide a callback, sketched below.
On initialising logging with your library, they would need to specify the callback, and then the glue code is up to them to make it play well with whatever they have.
If you can make the signature of the callback look like a standard function they might always have available to them, it provides them an easy default option if they don't actually have a logger.
Additionally, the caller might have instantiated components from the library multiple times, and for resource contention or threading issues, want to provide a different logger callback for each one.
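A sketch of that callback wiring; all names are assumptions, and the signature is kept close to something callers may already have:
#include <cstdio>

typedef void (*LogCallback)(const char* message);

class LibraryComponent {
public:
    explicit LibraryComponent(LogCallback cb) : cb_(cb) {}
    void Log(const char* msg) { if (cb_) cb_(msg); }  // silently no-ops without a logger
private:
    LogCallback cb_;
};

// Caller-side glue: a default that needs nothing but the C runtime.
void StderrLogger(const char* msg) { std::fprintf(stderr, "%s\n", msg); }
Since the callback is stored per instance, each component the caller creates can be handed a different logger, which covers the contention/threading point above.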
Q:
ruby idioms for using command-line options
I'm trying to pick up ruby by porting a medium-sized (non-OO) perl program. One of my personal idioms is to set options like this:
use Getopt::Std;
our $opt_v; # be verbose
getopts('v');
# and later ...
$opt_v && print "something interesting\n";
In perl, I kind of grit my teeth and let $opt_v be (effectively) a global.
In ruby, the more-or-less exact equivalent would be
require 'optparse'
opts = OptionParser.new
opts.on("-v", "--[no-]verbose", TrueClass, "Run verbosely") { |v|
  $opt_verbose = v
}
opts.parse!
where $opt_verbose is a global that classes could access. Having classes know about global flags like that seems ... er ... wrong. What's the OO-idiomatic way of doing this?
Let the main routine take care of all option-related stuff and have the classes just return things to it that it decides how to deal with?
Have classes implement optional behaviour (e.g., know how to be verbose) and set a mode via an attr_writer sort of thing?
updated: Thanks for the answers suggesting optparse, but I should have been clearer that it's not how to process command-line options I'm asking about, but more the relationship between command-line options that effectively set a global program state and classes that should ideally be independent of that sort of thing.
A:
A while back I ran across this blog post (by Todd Werth) which presented a rather lengthy skeleton for command-line scripts in Ruby. His skeleton uses a hybrid approach in which the application code is encapsulated in an application class which is instantiated, then executed by calling a "run" method on the application object. This allowed the options to be stored in a class-wide instance variable so that all methods in the application object can access them without exposing them to any other objects that might be used in the script.
I would lean toward using this technique, where the options are contained in one object and use either attr_writers or option parameters on method calls to pass relevant options to any additional objects. This way, any code contained in external classes can be isolated from the options themselves -- no need to worry about the naming of the variables in the main routine from within the thingy class if your options are set with a thingy.verbose=true attr_writer or thingy.process(true) call.
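A rough sketch of that pattern; the class and option names are illustrative:
require 'optparse'

class App
  def initialize(argv)
    @options = { :verbose => false }
    OptionParser.new do |opts|
      opts.on("-v", "--[no-]verbose") { |v| @options[:verbose] = v }
    end.parse!(argv)
  end

  def run
    thingy = Thingy.new
    thingy.verbose = @options[:verbose]  # attr_writer; Thingy never sees ARGV
    thingy.process
  end
end

App.new(ARGV).run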
A:
The optparse library is part of the standard distribution, so you'll be able to use it without requiring any third party stuff.
I haven't used it personally, but rails seems to use it extensively and so does rspec, which I guess is a pretty solid vote of confidence
This example from rails' script/console seems to show how to use it pretty easily and nicely
A:
The first hit on google for "processing command line options in ruby" is an article about Trollop which seems to be a good tool for this job.
Q:
Best practices for debugging linking errors
When building projects in C++, I've found debugging linking errors to be tricky, especially when picking up other people's code. What strategies do people use for debugging and fixing linking errors?
A:
Not sure what your level of expertise is, but here are the basics.
Below is a linker error from VS 2005 - yes, it's a giant mess if you're not familiar with it.
ByteComparator.obj : error LNK2019: unresolved external symbol "int __cdecl does_not_exist(void)" (?does_not_exist@@YAHXZ) referenced in function "void __cdecl TextScan(struct FileTextStats &,char const *,char const *,bool,bool,__int64)" (?TextScan@@YAXAAUFileTextStats@@PBD1_N2_J@Z)
There are a couple of points to focus on:
"ByteComparator.obj" - Look for a ByteComparator.cpp file, this is the source of the linker problem
"int __cdecl does_not_exist(void)" - This is the symbol it couldn't find, in this case a function named does_not_exist()
At this point, in many cases the fastest way to resolution is to search the code base for this function and find where the implementation is. Once you know where the function is implemented you just have to make sure the two places get linked together.
If you're using VS2005, you would use the "Project Dependencies..." right-click menu. If you're using gcc, you would look in your makefiles for the executable generation step (gcc called with a bunch of .o files) and add the missing .o file.
In a second scenario, you may be missing an "external" dependency, which you don't have code for. The Win32 APIs are often exposed through import libraries that you have to link against. In this case, go to MSDN or "Microsoft Google" and search for the API. At the bottom of the API description the library name is given. Add this to your project properties "Configuration Properties->Linker->Input->Additional Dependencies" list. For example, the function timeGetTime()'s page on MSDN tells you to use Winmm.lib at the bottom of the page.
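As a small aside, Visual C++ also lets you pull the library in from source rather than from project settings, which keeps the dependency next to the code that needs it:
#pragma comment(lib, "Winmm.lib")  // MSVC-specific; equivalent to listing it under Linker->Input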
A:
One of the common linking errors I've run into is when a function is used differently from how it's defined. If you see such an error you should make sure that every function you use is properly declared in some .h file.
You should also make sure that all the relevant source files are compiled into the same lib file. An error I've run into is when I have two sets of files compiled into two separate libraries, and I cross-call between libraries.
Is there a failure you have in mind?
A:
The C runtime libraries are often the biggest culprit. Make sure all your projects have the same settings with respect to single- vs. multi-threaded and static vs. DLL runtimes.
The MSDN documentation is good for pointing out which lib a particular Win32 API call requires if it comes up as missing.
Other than that it usually comes down to turning on the verbose flag and wading through the output looking for clues.
Q:
Creating Infopath 2007 addins that manipulate the design-time form
I'm experimenting with creating an add-in for Infopath 2007. The documentation is very skimpy. What I'm trying to determine is what kind of actions an add-in can take while designing a form. Most of the discussion and samples are for when the user is filling out the form. Can I, for example, add a new field to the form in the designer? Add a new item to the schema? Move a form field on the design surface? It doesn't appear so, but I can't find anything definitive.
A:
There is no Object Model for the InfoPath designer.
I believe the closest that you can get is the exposed API for the Visual Studio hosting that InfoPath supports; but I don't believe that this will give you the programmatic control of the designer that you'd like.
http://msdn.microsoft.com/en-us/library/aa813327.aspx#office2007infopathVSTO_InfoPathDesignerAPIIntegratingInfoPath2007VisualStudio
Sorry Kevin.
A:
Unfortunately, Bryan is probably right.
And I have tried to make a VS plugin for use with InfoPath development. It is very restrictive and hard to use. Not very effective for quick scripting work.
I have found AutoHotKey to be the best ad hoc scripting tool for use with InfoPath. It doesn't integrate directly with InfoPath, but I have found key+mouse automation to accomplish most of what I have needed.
Q:
Should I use window.onload or script block?
I have a javascript function that manipulates the DOM when it is called (adds CSS classes, etc). This is invoked when the user changes some values in a form. When the document is first loading, I want to invoke this function to prepare the initial state (which is simpler in this case than setting up the DOM from the server side to the correct initial state).
Is it better to use window.onload to do this functionality or have a script block after the DOM elements I need to modify? For either case, why is it better?
For example:
function updateDOM(id) {
// updates the id element based on form state
}
should I invoke it via:
window.onload = function() { updateDOM("myElement"); };
or:
<div id="myElement">...</div>
<script language="javascript">
updateDOM("myElement");
</script>
The former seems to be the standard way to do it, but the latter seems to be just as good, perhaps better since it will update the element as soon as the script is hit, and as long as it is placed after the element, I don't see a problem with it.
Any thoughts? Is one version really better than the other?
A:
The onload event is considered the proper way to do it, but if you don't mind using a javascript library, jQuery's $(document).ready() is even better.
$(document).ready(function(){
// manipulate the DOM all you want here
});
The advantages are:
Call $(document).ready() as many times as you want to register additional code to run - you can only set window.onload once.
$(document).ready() actions happen as soon as the DOM is complete - window.onload has to wait for images and such.
I hope I'm not becoming The Guy Who Suggests jQuery On Every JavaScript Question, but it really is great.
A:
I've written lots of Javascript and window.onload is a terrible way to do it. It is brittle and waits until every asset of the page has loaded. So if one image takes forever or a resource doesn't timeout until 30 seconds, your code will not run before the user can see/manipulate the page.
Also, if another piece of Javascript decides to use window.onload = function() {}, your code will be blown away.
The proper way to run your code when the page is ready is to wait until the element you need to change is ready/available. Many JS libraries have this as built-in functionality.
Check out:
http://docs.jquery.com/Events/ready#fn
http://developer.yahoo.com/yui/event/#onavailable
A:
Definitely use onload. Keep your scripts separate from your page, or you'll go mad trying to disentangle them later.
A:
Some JavaScript frameworks, such as mootools, give you access to a special event named "domready":
Contains the window Event 'domready', which will execute when the DOM has loaded. To ensure that DOM elements exist when the code attempting to access them is executed, they should be placed within the 'domready' event.
window.addEvent('domready', function() {
alert("The DOM is ready.");
});
A:
window.onload on IE waits for the binary information to load also. It isn't a strict definition of "when the DOM is loaded". So there can be significant lag between when the page is perceived to be loaded and when your script gets fired. Because of this I would recommend looking into one of the plentiful JS frameworks (prototype/jQuery) to handle the heavy lifting for you.
A:
While I agree with the others about using window.onload if possible for clean code, I'm pretty sure that window.onload will be called again when a user hits the back button in IE, but doesn't get called again in Firefox. (Unless they changed it recently).
Edit: I could have that backwards.
In some cases, it's necessary to use inline script when you want your script to be evaluated when the user hits the back button from another page, back to your page.
Any corrections or additions to this answer are welcome... I'm not a javascript expert.
A:
@The Geek
I'm pretty sure that window.onload will be called again when a user hits the back button in IE, but doesn't get called again in Firefox. (Unless they changed it recently).
In Firefox, onload is called when the DOM has finished loading regardless of how you navigated to a page.
A:
My take is the former, because you can only have one window.onload function, while you can have any number of inline script blocks.
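If you do stick with window.onload, the usual workaround for the single-handler limit is to chain rather than overwrite. A common sketch, applied to the question's updateDOM:
function addLoadEvent(fn) {
  var oldHandler = window.onload;
  window.onload = function () {
    if (oldHandler) { oldHandler(); }
    fn();
  };
}
addLoadEvent(function () { updateDOM("myElement"); });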
A:
onload, because it is far easier to tell what code runs when the page loads up than having to read down through scads of HTML looking for script tags that might execute.
Q:
Byte level length description
I have a protocol that requires a length field up to 32-bits, and it must be
generated at runtime to describe how many bytes are in a given packet.
The code below is kind of ugly but I am wondering if this can be refactored to
be slightly more efficient or easily understandable. The problem is that the
code will only generate enough bytes to describe the length of the packet, so
less than 255 bytes = 1 byte of length, less than 65535 = 2 bytes of length
etc...
{
extern char byte_stream[];
int bytes = offset_in_packet;
int n = length_of_packet; /* Under 4 billion, so this can be represented in 32 bits. */
int t; /* 32-bit number used for temporary storage. */
int write_zeros = 0; /* set once the first non-zero byte has been written */
/* These are the bytes we will break up n into. */
unsigned char first, second, third, fourth;
t = n & 0xFF000000;
/* We have used AND to "mask out" the first byte of the number. */
/* The only bits which can be on in t are the first 8 bits. */
first = t >> 24;
if (t) {
printf("byte 1: 0x%02x\n",first );
byte_stream[bytes] = first; bytes++;
write_zeros = 1;
}
/* Now we shift t so that it is between 0 and 255. This is the first, highest byte of n. */
t = n & 0x00FF0000;
second = t >> 16;
if (t || write_zeros) {
printf("byte 2: 0x%02x\n", second );
byte_stream[bytes] = second; bytes++;
write_zeros = 1;
}
t = n & 0x0000FF00;
third = t >> 8;
if ( t || write_zeros) {
printf("byte 3: 0x%02x\n", third );
byte_stream[bytes] = third; bytes++;
write_zeros = 1;
}
t = n & 0x000000FF;
fourth = t;
if (t || write_zeros) {
printf("byte 4: 0x%02x\n", fourth);
byte_stream[bytes] = fourth; bytes++;
}
}
A:
You should really use a fixed-width field for your length.
When the program on the receiving end has to read the length field of your packet, how does it know where the length stops?
If the length of a packet can potentially reach 4 GB, does a 1-3 byte overhead really matter?
Do you see how complex your code has already become?
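For comparison, a fixed four-byte big-endian length makes both the writer and the reader almost trivial; a sketch using the question's variables:
/* Always emit exactly 4 bytes, most significant first (network order). */
byte_stream[bytes++] = (n >> 24) & 0xFF;
byte_stream[bytes++] = (n >> 16) & 0xFF;
byte_stream[bytes++] = (n >> 8) & 0xFF;
byte_stream[bytes++] = n & 0xFF;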
A:
Try this loop:
{
extern char byte_stream[];
int bytes = offset_in_packet;
int n = length_of_packet; /* Under 4 billion, so this can be represented in 32 bits. */
int t; /* 32-bit number used for temporary storage. */
int i;
int write_zeros = 0;
unsigned char curByte;
for (i = 0; i < 4; i++) {
t = n & (0xFF000000 >> (i * 8)); /* mask out byte i, highest byte first */
curByte = t >> (24 - (i * 8));
if (t || write_zeros) {
printf("byte %d: 0x%02x\n", i, curByte );
byte_stream[bytes] = curByte;
bytes++;
write_zeros = 1;
}
}
}
A:
I'm not sure I understand your question. What exactly are you trying to count? If I understand correctly you're trying to find the Most Significant non-zero byte.
You're probably better off using a loop like this:
int i;
int write_zeros = 0;
for (i = 3; i >=0 ; --i) {
t = (n >> (8 * i)) & 0xff;
if (t || write_zeros) {
write_zeros = 1;
printf ("byte %d : 0x%02x\n", 4-i, t);
byte_stream[bytes++] = t;
}
}
A:
Really you're only doing four calculations, so readability seems way more important here than efficiency. My approach to make something like this more readable is to
Extract common code to a function
Put similar calculations together to make the patterns more obvious
Get rid of the intermediate variable write_zeros and be explicit about the cases in which you output bytes even if they're zero (i.e., the preceding byte was non-zero)
I've changed the random code block into a function and changed a few variables (underscores are giving me trouble in the markdown preview screen). I've also assumed that bytes is being passed in, and that whoever is passing it in will pass us a pointer so we can modify it.
Here's the code:
/* append byte b to stream, increment index */
/* really needs to check length of stream before appending */
void output( int i, unsigned char b, char stream[], unsigned int *index )
{
printf("byte %d: 0x%02x\n", i, b);
stream[(*index)++] = b;
}
void answer( char bytestream[], unsigned int *bytes, unsigned int n)
{
/* mask out four bytes from word n */
unsigned char first = (n & 0xFF000000) >> 24;
unsigned char second = (n & 0x00FF0000) >> 16;
unsigned char third = (n & 0x0000FF00) >> 8;
unsigned char fourth = (n & 0x000000FF) >> 0;
/* conditionally output each byte starting with the */
/* first non-zero byte */
if (first)
output( 1, first, bytestream, bytes);
if (first || second)
output( 2, second, bytestream, bytes);
if (first || second || third)
output( 3, third, bytestream, bytes);
if (first || second || third || fourth)
output( 4, fourth, bytestream, bytes);
}
Ever so slightly more efficient, and maybe easier to understand would be this modification to the last four if statements:
if (n>0x00FFFFFF)
output( 1, first, bytestream, bytes);
if (n>0x0000FFFF)
output( 2, second, bytestream, bytes);
if (n>0x000000FF)
output( 3, third, bytestream, bytes);
if (1)
output( 4, fourth, bytestream, bytes);
I agree, however, that compressing this field makes the receiving state machine overly complicated. But if you can't change the protocol, this code is much easier to read.
Q:
FXRuby FXFileDialog box default directory
In FXRuby, how do I set the FXFileDialog to be at the home directory when it opens?
A:
Here's an exceedingly lazy way to do it:
#!/usr/bin/ruby
require 'rubygems'
require 'fox16'
include Fox
theApp = FXApp.new
theMainWindow = FXMainWindow.new(theApp, "Hello")
theButton = FXButton.new(theMainWindow, "Hello, World!")
theButton.tipText = "Push Me!"
iconFile = File.open("icon.jpg", "rb")
theButton.icon = FXJPGIcon.new(theApp, iconFile.read)
theButton.iconPosition = ICON_ABOVE_TEXT
iconFile.close
theButton.connect(SEL_COMMAND) {
fileToOpen = FXFileDialog.getOpenFilename(theMainWindow, "window name goes here", `echo $HOME`.chomp + "/")
}
FXToolTip.new(theApp)
theApp.create
theMainWindow.show
theApp.run
This relies on you being on a *nix box (or having the $HOME environment variable set). The lines that specifically answer your question are:
theButton.connect(SEL_COMMAND) {
fileToOpen = FXFileDialog.getOpenFilename(theMainWindow, "window name goes here", `echo $HOME`.chomp + "/")
}
Here, the first argument is the window that owns the dialog box, the second is the title of the window, and the third is the default path to start at (you need the "/" at the end otherwise it'll start a directory higher with the user's home folder selected). Check out this link for more info on FXFileDialog.
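If you would rather not shell out, the home directory can usually be read straight from the environment; a sketch (the USERPROFILE fallback for Windows is an assumption, not something FXRuby requires):
home = ENV['HOME'] || ENV['USERPROFILE']
fileToOpen = FXFileDialog.getOpenFilename(theMainWindow, "window name goes here", home + "/")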
Q:
.net solution subversion best practices?
There are so many examples of how to set up your dotnet projects but none seemed to fit our situation.
We have one solution with multiple applications, multiple dependencies. We're on SourceSafe currently and are planning to move to subversion but are finding it difficult to organize our source the right way.
Example solution
App1
App2
BizObjects
DataAccess
CustomControls
Dependencies
BizObjects->DataAccess
App1->CustomControls
App1->BizObjects
App1->DataAccess
App2->CustomControls
App2->BizObjects
We also have a configuration management system which deploys (via copy from the database) depending on which workload the operator is working on. We mark an application "release" with a version and to that release, we add multiple file dependencies. Bear in mind the solution we have in place now is an attempt to band-aid the old (Windows 3.1-era) solution to work with the .NET file/dependency structure.
In the case of App1, we have App1.exe, BizObjects.dll, DataAccess.dll, and CustomControls.dll.
We have the same set of dependencies for App2 due to BizObjects referencing DataAccess -- but this is defined manually. We don't have a system in place to identify the dependency tree.
Each of the dependencies for a "release" is a file and version id. And the same application could contain different versions of each file for a different workload.
Where in the world have we gone wrong? Did we go wrong?
How can we structure an svn source tree to accommodate the deployment requirements?
or
how can we restructure the code the better support a deployment strategy which makes sense for our setup?
We have an old and over-engineered solution to (it would seem) a relatively simple problem. Can anyone steer me/us in the right direction?
edit: I read this question and remembered we also have the same dev/test/prod areas which the code must move through.
A:
Sounds like you're trying to do configuration control with a source code control system.
Subversion may not be the right choice, since it's really for source code (ASCII files) and build dependencies, not executable files (binary) and run-time dependencies.
My guess is you really need an installer:
http://en.wikipedia.org/wiki/List_of_installation_software
Or maybe just a script to launch the correct configuration from a network drive.
Q:
Handle signals in the Java Virtual Machine
Is it possible to handle POSIX signals within the Java Virtual Machine?
At least SIGINT and SIGKILL should be quite platform independent.
A:
The JVM responds to signals on its own. Some will cause the JVM to shutdown gracefully, which includes running shutdown hooks. Other signals will cause the JVM to abort without running shutdown hooks.
Shutdown hooks are added using Runtime.addShutdownHook(Thread).
I don't think the JDK provides an official way to handle signals within your Java application. However, I did find this IBM article, which describes using some undocumented sun.misc.Signal class to do exactly that. The article dates from 2002 and uses JDK 1.3.1, but I've confirmed that the sun.misc.Signal class still exists in JDK 1.6.0.
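For reference, a sketch of what that undocumented API looks like in use. Being in sun.misc it can change or disappear between JDKs, and note that SIGKILL can never be caught by any process, in Java or otherwise, so signals like SIGINT, SIGTERM, or SIGHUP are the realistic candidates:
import sun.misc.Signal;
import sun.misc.SignalHandler;

public class SignalDemo {
    public static void main(String[] args) throws InterruptedException {
        // "INT" corresponds to SIGINT (Ctrl+C); the "SIG" prefix is omitted.
        Signal.handle(new Signal("INT"), new SignalHandler() {
            public void handle(Signal sig) {
                System.out.println("Caught SIG" + sig.getName());
            }
        });
        Thread.sleep(60000); // keep the JVM alive long enough to send a signal
    }
}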
A:
Perhaps Runtime#addShutdownHook ?
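For the common "clean up before exit" case that is enough, and unlike sun.misc.Signal it is a supported API; a sketch:
Runtime.getRuntime().addShutdownHook(new Thread() {
    public void run() {
        System.out.println("JVM shutting down (signal or normal exit)");
    }
});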
Q:
.NET Mass Downloader with VS.NET 2005?
After downloading all .NET framework symbols and sources using NetMassDownloader, is it possible to setup the VS.NET 2005 for debugging into .NET 2.0 source files?
A:
It looks like you can download the symbols, though they're not available for browsing.
Q:
vb.net object persisted in database
How can I go about storing a vb.net user defined object in an SQL database? I am not trying to replicate the properties with columns. I mean something along the lines of converting or encoding my object to a byte array and then storing that in a field in the db. Like when you store an instance of an object in session, but I need the info to persist past the current session.
@Orion Edwards
It's not a matter of stances. It's because one day, you will change your code. Then you will try de-serialize the old object, and YOUR PROGRAM WILL CRASH.
My Program will not "CRASH", it will throw an exception. Lucky for me .net has a whole set of classes dedicated for such an occasion. At which time I will refresh my stale data and put it back in the db. That is the point of this one field (or stance, as the case may be).
A:
You can use serialization - it allows you to store your object at least in 3 forms: binary (suitable for BLOBs), XML (take advantage of MSSQL's XML data type) or just plain text (store in varchar or text column)
A:
Before you head down this road towards your own eventual insanity, you should take a look at this (or one day repeat it):
http://thedailywtf.com/Articles/The-Mythical-Business-Layer.aspx
Persisting objects in a database is not a good idea. It kills all the good things that a database is designed to do.
A:
You could use the BinaryFormatter class to serialize your object to a binary format, then save the resulting string in your database.
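As a rough sketch of that approach (not from the original answer), assuming your class is marked <Serializable()> and the database column is a binary type such as varbinary or image:
Imports System.IO
Imports System.Runtime.Serialization.Formatters.Binary

Public Module ObjectBlobHelper
    ' Turn any <Serializable()> object into bytes for a BLOB column.
    Public Function ToBytes(ByVal obj As Object) As Byte()
        Dim formatter As New BinaryFormatter()
        Using stream As New MemoryStream()
            formatter.Serialize(stream, obj)
            Return stream.ToArray()
        End Using
    End Function

    ' Rebuild the object from the bytes read back out of the database.
    Public Function FromBytes(ByVal data As Byte()) As Object
        Dim formatter As New BinaryFormatter()
        Using stream As New MemoryStream(data)
            Return formatter.Deserialize(stream)
        End Using
    End Function
End Module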
A:
The XmlSerializer or the DataContractSerializer in .net 3.x will do the job for you.
A:
@aku, lomaxx and bdukes - your solutions are what I was looking for.
@1800 INFORMATION - while I appreciate your stance on the matter, this is a special case of data that I get from a webservice that gets refreshed only about once a month. I don't need the data persisted in db form because that's what the webservice is for. Below is the code I finally got to work.
Serialize
#'res is my object to serialize
Dim xml_serializer As System.Xml.Serialization.XmlSerializer
Dim string_writer As New System.IO.StringWriter()
xml_serializer = New System.Xml.Serialization.XmlSerializer(res.GetType)
xml_serializer.Serialize(string_writer, res)
Deserialize
#'string_writer and xml_serializer from above
Dim serialization As String = string_writer.ToString
Dim string_reader As System.IO.StringReader
string_reader = New System.IO.StringReader(serialization)
Dim res2 As testsedie.EligibilityResponse
res2 = xml_serializer.Deserialize(string_reader)
A:
What you want to do is called "Serializing" your object, and .Net has a few different ways to go about it. One is the XmlSerializer class in the System.Xml.Serialization namespace.
Another is in the System.Runtime.Serialization namespace. This has support for a SOAP formatter, a binary formatter, and a base class you can inherit from that all implement a common interface.
For what you are talking about, the BinaryFormatter suggested earlier will probably have the best performance.
A:
I'm backing @1800 Information on this one.
Serializing objects for long-term storage is never a good idea
while i appreciate your stance on the matter, this is a special case of data that I get from a webservice that gets refreshed only about once a month.
It's not a matter of stances. It's because one day, you will change your code. Then you will try de-serialize the old object, and YOUR PROGRAM WILL CRASH.
A:
If it crashes (or throws an exception) all you are left with is a bunch of binary data to try and sift through to recreate your objects.
If you are only persisting binary why not just save straight to disk. You also might want to look at using something like xml as, as has been mentioned, if you alter your object definition you may not be able to unserialise it without some hard work.
| vb.net object persisted in database | How can I go about storing a vb.net user defined object in an SQL database? I am not trying to replicate the properties with columns. I mean something along the lines of converting or encoding my object to a byte array and then storing that in a field in the db. Like when you store an instance of an object in session, but I need the info to persist past the current session.
@Orion Edwards
It's not a matter of stances. It's because one day, you will change your code. Then you will try de-serialize the old object, and YOUR PROGRAM WILL CRASH.
My Program will not "CRASH", it will throw an exception. Lucky for me .net has a whole set of classes dedicated for such an occasion. At which time I will refresh my stale data and put it back in the db. That is the point of this one field (or stance, as the case may be).
| [
"You can use serialization - it allows you to store your object at least in 3 forms: binary (suitable for BLOBs), XML (take advantage of MSSQL's XML data type) or just plain text (store in varchar or text column) \n",
"Before you head down this road towards your own eventual insanity, you should take a look at this (or one day repeat it):\nhttp://thedailywtf.com/Articles/The-Mythical-Business-Layer.aspx\nPersisting objects in a database is not a good idea. It kills all the good things that a database is designed to do.\n",
"You could use the BinaryFormatter class to serialize your object to a binary format, then save the resulting string in your database.\n",
"The XmlSerializer or the DataContractSerializer in .net 3.x will do the job for you.\n",
"@aku, lomaxx and bdukes - your solutions are what I was looking for. \n@1800 INFORMATION - while i appreciate your stance on the matter, this is a special case of data that I get from a webservice that gets refreshed only about once a month. I dont need the data persisted in db form because thats what the webservice is for. Below is the code I finally got to work. \nSerialize\n #'res is my object to serialize\n Dim xml_serializer As System.Xml.Serialization.XmlSerializer\n Dim string_writer As New System.IO.StringWriter()\n xml_serializer = New System.Xml.Serialization.XmlSerializer(res.GetType)\n xml_serializer.Serialize(string_writer, res)\n\nDeserialize\n #'string_writer and xml_serializer from above\n Dim serialization As String = string_writer.ToString\n Dim string_reader As System.IO.StringReader\n string_reader = New System.IO.StringReader(serialization)\n Dim res2 As testsedie.EligibilityResponse\n res2 = xml_serializer.Deserialize(string_reader)\n\n",
"What you want to do is called \"Serializing\" your object, and .Net has a few different ways to go about it. One is the XmlSerializer class in the System.Xml.Serialization namespace.\nAnother is in the System.Runtime.Serialization namespace. This has support for a SOAP formatter, a binary formatter, and a base class you can inherit from that all implement a common interface.\nFor what you are talking about, the BinaryFormatter suggested earlier will probably have the best performance.\n",
"I'm backing @1800 Information on this one.\nSerializing objects for long-term storage is never a good idea\n\nwhile i appreciate your stance on the matter, this is a special case of data that I get from a webservice that gets refreshed only about once a month.\n\nIt's not a matter of stances. It's because one day, you will change your code. Then you will try de-serialize the old object, and YOUR PROGRAM WILL CRASH.\n",
"If it crashes (or throws an exception) all you are left with is a bunch of binary data to try and sift through to recreate your objects. \nIf you are only persisting binary why not just save straight to disk. You also might want to look at using something like xml as, as has been mentioned, if you alter your object definition you may not be able to unserialise it without some hard work.\n"
] | [
5,
3,
2,
1,
0,
0,
0,
0
] | [] | [] | [
"serialization",
"sql",
"vb.net"
] | stackoverflow_0000040884_serialization_sql_vb.net.txt |
Q:
How do I write Firefox add-on that automatically enters proxy passwords?
Suppose someone worked for a company that put up an HTTP proxy preventing internet access without password authentication (NTLM, I think). Also suppose that this password rotated on a daily basis, which added very little security, but mostly served to annoy the employees. How would one get started writing a Firefox add-on that automatically entered these rotating passwords?
To clarify: This add-on would not just submit the password; the add-on would programmatically generate it with some knowledge of the password rotation scheme.
A:
This is built into Firefox. Open up about:config, search for 'ntlm'
The setting you're looking for is called network.automatic-ntlm-auth.trusted-uris and accepts a comma-space delimited list of your proxy server uris.
This will make FireFox automatically send hashed copies of your windows password to the proxy, which is disabled by default for obvious reasons. IE can do this automatically because it can use security zones to figure out whether a proxy server is trusted or not.
Blog post discussing this
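If you would rather not click through about:config on every machine, the same preference can go in a user.js file in your Firefox profile directory. The proxy host below is a placeholder, not a real value:
// user.js - read on every Firefox start; replace the URI with your actual proxy
user_pref("network.automatic-ntlm-auth.trusted-uris", "http://proxy.example.com");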
A:
It's your lucky day - no need for an add-on!
How to configure Firefox for automatic NTLM authentication
In Firefox, type about:config into the address bar and hit enter. You should see a huge list of configuration properties.
Find the setting named network.negotiate-auth.delegation-uris (the easiest way to do this is to type that into the filter box at top).
Double-click this line, and enter the names of all servers for which network authentication is desired, separated by commas. Then press ‘OK’ to confirm.
Find the setting network.negotiate-auth.trusted-uris, and set it to the same value used in #3.
Find the setting network.ntlm.send-lm-response, and set it to true.
Skip steps 7 and 8 if you aren't using a proxy.
Open the options dialog (Tools->Options menu), and on the Advanced page, Network tab, press the Connection Settings button to get the proxy configuration dialog:
Make sure the correct proxy server is configured, and that the same list of servers is listed in the No Proxy for: entryfield as were set in step #3.
Done.
| How do I write Firefox add-on that automatically enters proxy passwords? | Suppose someone worked for a company that put up an HTTP proxy preventing internet access without password authentication (NTLM, I think). Also suppose that this password rotated on a daily basis, which added very little security, but mostly served to annoy the employees. How would one get started writing a Firefox add-on that automatically entered these rotating passwords?
To clarify: This add-on would not just submit the password; the add-on would programmatically generate it with some knowledge of the password rotation scheme.
| [
"This is built into Firefox. Open up about:config, search for 'ntlm'\nThe setting you're looking for is called network.automatic-ntlm-auth.trusted-uris and accepts a comma-space delimited list of your proxy server uris.\nThis will make FireFox automatically send hashed copies of your windows password to the proxy, which is disabled by default for obvious reasons. IE can do this automatically because it can use security zones to figure out whether a proxy server is trusted or not.\nBlog post discussing this\n",
"It's your lucky day - no need for an add-on!\nHow to configure Firefox for automatic NTLM authentication\n\nIn Firefox, type about:config into the address bar and hit enter. You should see a huge list of configuration properties. \nFind the setting named network.negotiate-auth.delegation-uris (the easiest way to do this is to type that into the filter box at top). \nDouble-click this line, and enter the names of all servers for which network authentication is desired, separated by commas. Then press ‘OK’ to confirm. \nFind the setting network.negotiate-auth.trusted-uris, and set it to the same value used in #3. \nFind the setting network.ntlm.send-lm-response, and set it to true. \nSkip steps 7 and 8 if you aren't using a proxy.\nOpen the options dialog (Tools->Options menu), and on the Advanced page, Network tab, press the Connection Settings button to get the proxy configuration dialog: \nMake sure the correct proxy server is configured, and that the same list of servers is listed in the No Proxy for: entryfield as were set in step #3. \nDone. \n\n"
] | [
6,
3
] | [] | [] | [
"authentication",
"firefox",
"passwords",
"plugins",
"proxy"
] | stackoverflow_0000041169_authentication_firefox_passwords_plugins_proxy.txt |
Q:
Should you register new extensions with Apple?
Do I need to register new extension types with Apple before I release an application that would create them on OS X?
A:
No, there's no need to register extensions.
A:
As a follow up, there is a little more information in the FAQs at the Apple Developer Connection (ADC) website:
http://developer.apple.com/faq/datatype.html
| Should you register new extensions with Apple? | Do I need to register new extension types with Apple before I release an application that would create them on OS X?
| [
"No, there's no need to register extensions.\n",
"As a follow up, there is a little more information in the FAQs at the Apple Developer Connection (ADC) website:\nhttp://developer.apple.com/faq/datatype.html\n"
] | [
4,
2
] | [] | [] | [
"apple_developer",
"macos"
] | stackoverflow_0000040719_apple_developer_macos.txt |
Q:
How do I loop through result objects in Flex?
I am having problems manually looping through XML data that is received via an HTTPService call. The XML looks something like this:
<DataTable>
<Row>
<text>foo</text>
</Row>
<Row>
<text>bar</text>
</Row>
</DataTable>
When the webservice result event is fired I do something like this:
for(var i:int=0;i<event.result.DataTable.Row.length;i++)
{
if(event.result.DataTable.Row[i].text == "foo")
mx.controls.Alert.show('foo found!');
}
This code works when there is more than one "Row" node returned. However, it seems that if there is only one "Row" node then the event.result.DataTable.Row object is not an array and the code subsequently breaks.
What is the proper way to loop through the HTTPService result object? Do I need to convert it to some type of XMLList collection or an ArrayCollection? I have tried setting the resultFormat to e4x and that has yet to fix the problem...
Thanks.
A:
The problem lies in this statement
event.result.DataTable.Row.length
length is not a property of XMLList, but a method:
event.result.DataTable.Row.length()
it's confusing, but that's the way it is.
Addition: actually, the safest thing to do is to always use a for each loop when iterating over XMLLists, that way you never make the mistake, it's less code, and easier to read:
for each ( var node : XML in event.result.DataTable.Row )
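Spelled out against the XML from the question, that loop might look like the following sketch. It assumes resultFormat="e4x" so that event.result really is XML; if your service wraps the root element differently, adjust the path accordingly.
// "DataTable" and "Row" follow the sample document in the question.
for each (var row:XML in event.result.DataTable.Row)
{
    if (row.text == "foo") // e4x access to the <text> child element
        mx.controls.Alert.show("foo found!");
}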
A:
Row isn't an array unless there are multiple Row elements. It is annoying. You have to do something like this, but I haven't written AS3 in a while so I forget if there's an exists function.
if (exists(event.result.DataTable) && exists(event.result.DataTable.Row)){
if (exists(event.result.DataTable.Row.length)) {
for(var i:int=0;i<event.result.DataTable.Row.length;i++)
{
if (exists(event.result.DataTable.Row[i].text)
&& "foo" == event.result.DataTable.Row[i].text)
mx.controls.Alert.show('foo found!');
}
}
if (exists(event.result.DataTable.Row.text)
&& "foo" == event.result.DataTable.Row.text)
mx.controls.Alert.show('foo found!');
}
A:
I would store it in an XML object and then use its methods to search for the node value you need.
var returnedXml:XML = new XML(event.result.toString());
| How do I loop through result objects in Flex? | I am having problems manually looping through XML data that is received via an HTTPService call. The XML looks something like this:
<DataTable>
<Row>
<text>foo</text>
</Row>
<Row>
<text>bar</text>
</Row>
</DataTable>
When the webservice result event is fired I do something like this:
for(var i:int=0;i<event.result.DataTable.Row.length;i++)
{
if(event.result.DataTable.Row[i].text == "foo")
mx.controls.Alert.show('foo found!');
}
This code works when there is more than one "Row" node returned. However, it seems that if there is only one "Row" node then the event.result.DataTable.Row object is not an array and the code subsequently breaks.
What is the proper way to loop through the HTTPService result object? Do I need to convert it to some type of XMLList collection or an ArrayCollection? I have tried setting the resultFormat to e4x and that has yet to fix the problem...
Thanks.
| [
"The problem lies in this statement\nevent.result.DataTable.Row.length\n\nlength is not a property of XMLList, but a method:\nevent.result.DataTable.Row.length()\n\nit's confusing, but that's the way it is.\nAddition: actually, the safest thing to do is to always use a for each loop when iterating over XMLLists, that way you never make the mistake, it's less code, and easier to read:\nfor each ( var node : XML in event.result.DataTable.Row )\n\n",
"Row isn't an array unless there are multiple Row elements. It is annoying. You have to do something like this, but I haven't written AS3 in a while so I forget if there's an exists function.\nif (exists(event.result.DataTable) && exists(event.result.DataTable.Row)){\n if (exists(event.result.DataTable.Row.length)) {\n for(var i:int=0;i<event.result.DataTable.Row.length;i++)\n {\n if (exists(event.result.DataTable.Row[i].text)\n && \"foo\" == event.result.DataTable.Row[i].text)\n mx.controls.Alert.show('foo found!');\n }\n }\n if (exists(event.result.DataTable.Row.text)\n && \"foo\" == event.result.DataTable.Row.text)\n mx.controls.Alert.show('foo found!');\n}\n\n",
"I would store it in an Xml object and then use its methods to search for the node value you need.\nvar returnedXml:Xml = new Xml(event.result.toString());\n\n"
] | [
4,
2,
1
] | [] | [] | [
"actionscript",
"actionscript_3",
"apache_flex"
] | stackoverflow_0000040913_actionscript_actionscript_3_apache_flex.txt |
Q:
Keeping development databases in multiple environments in sync
I'm early in development on a web application built in VS2008. I have both a desktop PC (where most of the work gets done) and a laptop (for occasional portability) on which I use AnkhSVN to keep the project code synced. What's the best way to keep my development database (SQL Server Express) synced up as well?
I have a VS database project in SVN containing create scripts which I re-generate when the schema changes. The original idea was to recreate the DB whenever something changed, but it's quickly becoming a pain. Also, I'd lose all the sample rows I entered to make sure data is being displayed properly.
I'm considering putting the .MDF and .LDF files under source control, but I doubt SQL Server Express will handle it gracefully if I do an SVN Update and the files get yanked out from under it, replaced with newer copies. Sticking a couple big binary files into source control doesn't seem like an elegant solution either, even if it is just a throwaway development database. Any suggestions?
A:
There are obviously a number of ways to approach this, so I am going to list a number of links that should provide a better foundation to build on. These are the links that I've referenced in the past when trying to get others on the bandwagon.
Database Projects in Visual Studio .NET
Data Schema - How Changes are to be Implemented
Is Your Database Under Version Control?
Get Your Database Under Version Control
Also look for MSDN Webcast: Visual Studio 2005 Team Edition for Database Professionals (Part 4 of 4): Schema Source and Version Control
However, with all of that said, if you don't think that you are committed enough to implement some type of version control (either manual or semi-automated), then I HIGHLY recommend you check out the following:
Red Gate SQL Compare
Red Gate SQL Data Compare
Holy cow! Talk about making life easy! I had a project get away from me and had multiple people in making schema changes and had to keep multiple environments in sync. It was trivial to point the Red Gate products at two databases and see the differences and then sync them up.
A:
You can store a backup (.bak file) of your database rather than the .MDF & .LDF files.
You can restore your db easily using following script:
use master
go
if exists (select * from master.dbo.sysdatabases where name = 'your_db')
begin
alter database your_db set SINGLE_USER with rollback IMMEDIATE
drop database your_db
end
restore database your_db
from disk = 'path\to\your\bak\file'
with move 'Name of dat file' to 'path\to\mdf\file',
move 'Name of log file' to 'path\to\ldf\file'
go
You can put above mentioned script in text file restore.sql and call it from batch file using following command:
osql -E -i restore.sql
That way you can create a script file to automate the whole process (a rough sketch follows the list):
Get latest db backup from SVN
repository or any suitable storage
Restore current db using bak file
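A rough sketch of such a batch file, where the repository URL and file names are placeholders for your own setup:
rem update_db.bat - illustrative only
svn export https://svn.example.com/db-backups/your_db.bak your_db.bak --force
osql -E -i restore.sql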
A:
In addition to your database CREATE script, why don't you maintain a default data or sample data script as well?
This is an approach that we've taken for incremental versions of an application we have been maintaining for more than 2 years now, and it works very well. Having a default data script also allows your QA testers to recreate bugs using the same data that you have.
You might also want to take a look at a question I posted some time ago:
Best tool for auto-generating SQL change scripts
A:
We use a combination of taking backups from higher environments down, as well as using ApexSQL to handle the initial setup of the schema.
Recently we have been using SubSonic migrations as a coded, source-controlled, run-through-CI way to get change scripts in; there is also the "tarantino" project developed by Headspring out of Texas.
Most of these approaches, especially the latter, are safe to use on top of most test data. I particularly like the automated last two because I can make a change, and the next time someone gets latest, they just run the "updater" and they are ushered to the latest version.
| Keeping development databases in multiple environments in sync | I'm early in development on a web application built in VS2008. I have both a desktop PC (where most of the work gets done) and a laptop (for occasional portability) on which I use AnkhSVN to keep the project code synced. What's the best way to keep my development database (SQL Server Express) synced up as well?
I have a VS database project in SVN containing create scripts which I re-generate when the schema changes. The original idea was to recreate the DB whenever something changed, but it's quickly becoming a pain. Also, I'd lose all the sample rows I entered to make sure data is being displayed properly.
I'm considering putting the .MDF and .LDF files under source control, but I doubt SQL Server Express will handle it gracefully if I do an SVN Update and the files get yanked out from under it, replaced with newer copies. Sticking a couple big binary files into source control doesn't seem like an elegant solution either, even if it is just a throwaway development database. Any suggestions?
| [
"There are obviously a number of ways to approach this, so I am going to list a number of links that should provide a better foundation to build on. These are the links that I've referenced in the past when trying to get others on the bandwagon.\n\nDatabase Projects in Visual Studio .NET\nData Schema - How Changes are to be Implemented\nIs Your Database Under Version Control?\nGet Your Database Under Version Control\nAlso look for MSDN Webcast: Visual Studio 2005 Team Edition for Database Professionals (Part 4 of 4): Schema Source and Version Control\n\nHowever, with all of that said, if you don't think that you are committed enough to implement some type of version control (either manual or semi-automated), then I HIGHLY recommend you check out the following:\n\nRed Gate SQL Compare\nRed Gate SQL Data Compare\n\nHoly cow! Talk about making life easy! I had a project get away from me and had multiple people in making schema changes and had to keep multiple environments in sync. It was trivial to point the Red Gate products at two databases and see the differences and then sync them up.\n",
"You can store backup (.bak file) of you database rather than .MDF & .LDF files.\nYou can restore your db easily using following script: \nuse master\ngo\n\nif exists (select * from master.dbo.sysdatabases where name = 'your_db')\nbegin\n alter database your_db set SINGLE_USER with rollback IMMEDIATE\n drop database your_db\nend\n\nrestore database your_db\nfrom disk = 'path\\to\\your\\bak\\file'\nwith move 'Name of dat file' to 'path\\to\\mdf\\file',\n move 'Name of log file' to 'path\\to\\ldf\\file'\ngo\n\nYou can put above mentioned script in text file restore.sql and call it from batch file using following command: \nosql -E -i restore.sql\n\nThat way you can create script file to automate whole process: \n\nGet latest db backup from SVN\nrepository or any suitable storage\nRestore current db using bak file\n\n",
"In addition to your database CREATE script, why don't you maintain a default data or sample data script as well? \nThis is an approach that we've taken for incremental versions of an application we have been maintaining for more than 2 years now, and it works very well. Having a default data script also allows your QA testers to be able to recreate bugs using the data that you also have?\nYou might also want to take a look at a question I posted some time ago:\nBest tool for auto-generating SQL change scripts\n",
"We use a combo of, taking backups from higher environments down.\nAs well as using ApexSql to handle initial setup of schema.\nRecently been using Subsonic migrations, as a coded, source controlled, run through CI way to get change scripts in, there is also \"tarantino\" project developed by headspring out of texas.\nMost of these approaches especially the latter, are safe to use on top of most test data. I particularly like the automated last 2 because I can make a change, and next time someone gets latest, they just run the \"updater\" and they are ushered to latest.\n"
] | [
7,
3,
3,
1
] | [] | [] | [
"database",
"svn",
"version_control",
"visual_studio_2008"
] | stackoverflow_0000040957_database_svn_version_control_visual_studio_2008.txt |
Q:
How to find a normal vector pointing directly from virtual world to screen in Java3D?
I think it can be done by applying the transformation matrix of the scenegraph to z-normal (0, 0, 1), but it doesn't work. My code goes like this:
Vector3f toScreenVector = new Vector3f(0, 0, 1);
Transform3D t3d = new Transform3D();
tg.getTransform(t3d); //tg is Transform Group of all objects in a scene
t3d.transform(toScreenVector);
Then I tried something like this too:
Point3d eyePos = new Point3d();
Point3d mousePos = new Point3d();
canvas.getCenterEyeInImagePlate(eyePos);
canvas.getPixelLocationInImagePlate(new Point2d(Main.WIDTH/2, Main.HEIGHT/2), mousePos); //Main is the class for main window.
Transform3D motion = new Transform3D();
canvas.getImagePlateToVworld(motion);
motion.transform(eyePos);
motion.transform(mousePos);
Vector3d toScreenVector = new Vector3f(eyePos);
toScreenVector.sub(mousePos);
toScreenVector.normalize();
But still this doesn't work correctly. I think there must be an easy way to create such vector. Do you know what's wrong with my code or better way to do so?
A:
If I get this right, you want a vector that is normal to the screen plane, but in world coordinates?
In that case you want to INVERT the transformation from World -> Screen and do Screen -> World of (0,0,-1) or (0,0,1) depending on which axis the screen points down.
Since the ModelView matrix is just a rotation matrix (ignoring the homogeneous transformation part), you can simply pull this out by taking the transpose of the rotational part, or by simply reading the bottom row - as this transposes onto the Z coordinate column under transposition.
A:
Yes, you got my question right. Sorry that I was a little bit confused yesterday. Now I have corrected the code by following your suggestion and mixing two pieces of code in the question together:
Vector3f toScreenVector = new Vector3f(0, 0, 1);
Transform3D t3d = new Transform3D();
canvas.getImagePlateToVworld(t3d);
t3d.transform(toScreenVector);
tg.getTransform(t3d); //tg is Transform Group of all objects in a scene
t3d.transform(toScreenVector);
Thank you.
| How to find a normal vector pointing directly from virtual world to screen in Java3D? | I think it can be done by applying the transformation matrix of the scenegraph to z-normal (0, 0, 1), but it doesn't work. My code goes like this:
Vector3f toScreenVector = new Vector3f(0, 0, 1);
Transform3D t3d = new Transform3D();
tg.getTransform(t3d); //tg is Transform Group of all objects in a scene
t3d.transform(toScreenVector);
Then I tried something like this too:
Point3d eyePos = new Point3d();
Point3d mousePos = new Point3d();
canvas.getCenterEyeInImagePlate(eyePos);
canvas.getPixelLocationInImagePlate(new Point2d(Main.WIDTH/2, Main.HEIGHT/2), mousePos); //Main is the class for main window.
Transform3D motion = new Transform3D();
canvas.getImagePlateToVworld(motion);
motion.transform(eyePos);
motion.transform(mousePos);
Vector3d toScreenVector = new Vector3f(eyePos);
toScreenVector.sub(mousePos);
toScreenVector.normalize();
But still this doesn't work correctly. I think there must be an easy way to create such vector. Do you know what's wrong with my code or better way to do so?
| [
"If I get this right, you want a vector that is normal to the screen plane, but in world coordinates?\nIn that case you want to INVERT the transformation from World -> Screen and do Screen -> World of (0,0,-1) or (0,0,1) depending on which axis the screen points down.\nSince the ModelView matrix is just a rotation matrix (ignoring the homogeneous transformation part), you can simply pull this out by taking the transpose of the rotational part, or simple reading in the bottom row - as this transposes onto the Z coordinate column under transposition.\n",
"Yes, you got my question right. Sorry that I was a little bit confused yesterday. Now I have corrected the code by following your suggestion and mixing two pieces of code in the question together:\nVector3f toScreenVector = new Vector3f(0, 0, 1);\n\nTransform3D t3d = new Transform3D();\ncanvas.getImagePlateToVworld(t3d);\nt3d.transform(toScreenVector);\n\ntg.getTransform(t3d); //tg is Transform Group of all objects in a scene\nt3d.transform(toScreenVector);\n\nThank you.\n"
] | [
2,
0
] | [] | [] | [
"graphics",
"java",
"java_3d"
] | stackoverflow_0000040028_graphics_java_java_3d.txt |
Q:
What are the benefits of using partitions with the Enterprise edition of SQL 2005
I'm comparing between two techniques to create partitioned tables in SQL 2005.
Use partitioned views with a standard version of SQL 2005 (described here)
Use the built in partition in the Enterprise edition of SQL 2005 (described here)
Given that the enterprise edition is much more expensive, I would like to know what the main benefits of the newer enterprise built-in implementation are. Is it just a time saver for the implementation itself, or will I gain real performance on large DBs?
I know I can adjust the constraints in the first option to keep a sliding window into the partitions. Can I do it with the built-in version?
A:
searchdotnet rulz! check this out:
http://www.eggheadcafe.com/forumarchives/SQLServerdatawarehouse/Dec2005/post25052042.asp
Updated: that link is dead. So here's a better one
http://msdn.microsoft.com/en-us/library/ms345146(SQL.90).aspx#sql2k5parti_topic6
From above:
Some of the performance and manageability benefits (of partitioned tables) are
Simplify the design and
implementation of large tables that
need to be partitioned for
performance or manageability
purposes.
Load data into a new partition of an
existing partitioned table with
minimal disruption in data access in
the remaining partitions.
Load data into a new partition of an
existing partitioned table with
performance equal to loading the same
data into a new, empty table.
Archive and/or remove a portion of a
partitioned table while minimally
impacting access to the remainder of
the table.
Allow partitions to be maintained by switching partitions in and out of the partitioned table.
Allow better scaling and parallelism for extremely large operations over multiple related tables.
Improve performance over all partitions.
Improve query optimization time because each partition does not need to be optimized separately.
A:
When using the partitioned tables you can more easily move data from partition to partition. You can also partition the indexes as well.
You can also move data from one partition to another table as needed with a single ALTER TABLE command.
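As a sketch of that last point (table and partition names are made up), archiving a partition is a single, nearly instant metadata operation:
-- dbo.OrdersArchive must be an empty table with a matching schema,
-- on the same filegroup as partition 1 of dbo.Orders.
ALTER TABLE dbo.Orders
SWITCH PARTITION 1 TO dbo.OrdersArchive;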
| What are the benefits of using partitions with the Enterprise edition of SQL 2005 | I'm comparing between two techniques to create partitioned tables in SQL 2005.
Use partitioned views with a standard version of SQL 2005 (described here)
Use the built in partition in the Enterprise edition of SQL 2005 (described here)
Given that the enterprise edition is much more expensive, I would like to know what the main benefits of the newer enterprise built-in implementation are. Is it just a time saver for the implementation itself, or will I gain real performance on large DBs?
I know I can adjust the constraints in the first option to keep a sliding window into the partitions. Can I do it with the built-in version?
| [
"searchdotnet rulz! check this out:\nhttp://www.eggheadcafe.com/forumarchives/SQLServerdatawarehouse/Dec2005/post25052042.asp\nUpdated: that link is dead. So here's a better one\nhttp://msdn.microsoft.com/en-us/library/ms345146(SQL.90).aspx#sql2k5parti_topic6\nFrom above:\nSome of the performance and manageability benefits (of partioned tables) are \n\nSimplify the design and\nimplementation of large tables that\nneed to be partitioned for\nperformance or manageability\npurposes.\nLoad data into a new partition of an\nexisting partitioned table with\nminimal disruption in data access in\nthe remaining partitions.\nLoad data into a new partition of an\nexisting partitioned table with\nperformance equal to loading the same\ndata into a new, empty table.\nArchive and/or remove a portion of a\npartitioned table while minimally\nimpacting access to the remainder of\nthe table.\nAllow partitions to be maintained by switching partitions in and out of the partitioned table.\nAllow better scaling and parallelism for extremely large operations over multiple related tables.\nImprove performance over all partitions.\nImprove query optimization time because each partition does not need to be optimized separately. \n\n",
"When using the partitioned tables you can more easily move data from partition to partition. You can also partition the indexes as well.\nYou can also move data from one partition to another table as needed with a single ALTER TABLE command.\n"
] | [
0,
0
] | [] | [] | [
"partitioning",
"sql",
"sql_server_2005"
] | stackoverflow_0000027206_partitioning_sql_sql_server_2005.txt |
Q:
Apache serving files that should not be served
Today I discovered that my fresh installation of Apache HTTP Server is able to serve files from my C:\uploads\ directory.
I have two folders in C:\uploads:
C:\uploads\templates
C:\uploads\sites
Both folders contain testimage.jpg.
I found that Apache will serve the files from the templates folder if I request:
http://localhost/templates/testimage.jpg
However, http://localhost/sites/testimage.jpg 404's!
OMG - firstly, why does Apache serve the templates folder in the first place? Is it special?
Secondly, by what arbitrary set of rules does Apache disallow access to other folders, such as the sites folder?
I'm so confused. Perhaps I've taken a wrong turn somewhere during the installation.
A:
Did you look through your httpd.conf file to see what rules are in place for what is being served? Alternatively, are there .htaccess files that may be changing what is being served? You might have templates exposed in one or the other, but not sites... that's the first thing that comes to mind.
I would suggest going through these configuration files with a fine toothed comb to see what may cause the behavior you see.
| Apache serving files that should not be served | Today I discovered that my fresh installation of Apache HTTP Server is able to serve files from my C:\uploads\ directory.
I have two folders in C:\uploads:
C:\uploads\templates
C:\uploads\sites
Both folders contain testimage.jpg.
I found that Apache will serve the files from the templates folder if I request:
http://localhost/templates/testimage.jpg
However, http://localhost/sites/testimage.jpg 404's!
OMG - firstly, why does Apache serve the templates folder in the first place? Is it special?
Secondly, by what arbitrary set of rules does Apache disallow access to other folders, such as the sites folder?
I'm so confused. Perhaps I've taken a wrong turn somewhere during the installation.
| [
"Did you look through your httpd.conf file to see what rules are in place for what is being served? Alternatively, are there .htaccess files that may be changing what is being served? You might have templates exposed in one or the other, but not sites... that's the first thing that comes to mind.\nI would suggest going through these configuration files with a fine toothed comb to see what may cause the behavior you see.\n"
] | [
1
] | [] | [] | [
"apache",
"hosting",
"self_hosting",
"web_hosting",
"webserver"
] | stackoverflow_0000041234_apache_hosting_self_hosting_web_hosting_webserver.txt |
Q:
How to work around unsupported unsigned integer field types in MS SQL?
Trying to make a MySQL-based application support MS SQL, I ran into the following issue:
I keep MySQL's auto_increment as unsigned integer fields (of various sizes) in order to make use of the full range, as I know there will never be negative values. MS SQL does not support the unsigned attribute on all integer types, so I have to choose between ditching half the value range or creating some workaround.
One very naive approach would be to put some code in the database abstraction code or in a stored procedure that converts between negative values on the db side and values from the larger portion of the unsigned range. This would mess up sorting of course, and also it would not work with the auto-id feature (or would it some way?).
I can't think of a good workaround right now, is there any? Or am I just being fanatic and should simply forget about half the range?
Edit:
@Mike Woodhouse: Yeah, I guess you're right. There's still a voice in my head saying that maybe I could reduce the field's size if I optimize its utilization. But if there's no easy way to do this, it's probably not worth worrying about it.
A:
When is the problem likely to become a real issue?
Given current growth rates, how soon do you expect signed integer overflow to happen in the MS SQL version?
Be pessimistic.
How long do you expect the application to live?
Do you still think the factor of 2 difference is something you should worry about?
(I have no idea what the answers are, but I think we should be sure that we really have a problem before searching any harder for a solution)
A:
I would recommend using the BIGINT data type as this goes up to 9,223,372,036,854,775,807.
SQL Server's integer types are signed only; there is no unsigned variant.
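As a sketch of that suggestion (object names are illustrative), the MySQL unsigned INT auto_increment column becomes:
CREATE TABLE dbo.Example
(
    -- Signed, but with a far larger range than MySQL's unsigned INT.
    Id BIGINT IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    Name NVARCHAR(100) NOT NULL
);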
A:
I would say this.. "How do we normally deal with differences between components?"
Encapsulate what varies..
You need to create an abstraction layer within your data access layer to get it to the point where it doesn't care whether or not the database is MySQL or MS SQL..
| How to work around unsupported unsigned integer field types in MS SQL? | Trying to make a MySQL-based application support MS SQL, I ran into the following issue:
I keep MySQL's auto_increment as unsigned integer fields (of various sizes) in order to make use of the full range, as I know there will never be negative values. MS SQL does not support the unsigned attribute on all integer types, so I have to choose between ditching half the value range or creating some workaround.
One very naive approach would be to put some code in the database abstraction code or in a stored procedure that converts between negative values on the db side and values from the larger portion of the unsigned range. This would mess up sorting of course, and also it would not work with the auto-id feature (or would it some way?).
I can't think of a good workaround right now, is there any? Or am I just being fanatic and should simply forget about half the range?
Edit:
@Mike Woodhouse: Yeah, I guess you're right. There's still a voice in my head saying that maybe I could reduce the field's size if I optimize its utilization. But if there's no easy way to do this, it's probably not worth worrying about it.
| [
"When is the problem likely to become a real issue?\nGiven current growth rates, how soon do you expect signed integer overflow to happen in the MS SQL version?\nBe pessimistic.\nHow long do you expect the application to live?\nDo you still think the factor of 2 difference is something you should worry about?\n(I have no idea what the answers are, but I think we should be sure that we really have a problem before searching any harder for a solution)\n",
"I would recommend using the BIGINT data type as this goes up to 9,223,372,036,854,775,807.\nSQL Server does not support signed and unsigned values.\n",
"I would say this.. \"How do we normally deal with differences between components?\"\nEncapsulate what varies..\nYou need to create an abstraction layer within you data access layer to get it to the point where it doesn't care whether or not the database is MySQL or MS SQL..\n"
] | [
1,
1,
0
] | [] | [] | [
"database",
"interop",
"mysql",
"sql_server"
] | stackoverflow_0000029694_database_interop_mysql_sql_server.txt |
Q:
How to implement Type-safe COM enumerations?
How could I implement type-safe enumerations in Delphi in a COM scenario? Basically, I'd like to replace a set of primitive constants of an enumeration with a set of static final object references encapsulated in a class.
In Java, we can do something like:
public final class Enum
{
public static final Enum ENUMITEM1 = new Enum ();
public static final Enum ENUMITEM2 = new Enum ();
//...
private Enum () {}
}
and make comparisons using the customized enumeration type:
if (anObject != Enum.ENUMITEM1) ...
I am currently using the old Delphi 5 and I would like to declare some enum parameters on the interfaces, not allowing client objects to pass integer (or long) values in place of the required enumeration type.
Do you have a better way of implementing enums other than using the native delphi enums ?
A:
Native Delphi enumerations are already type-safe. Java enumerations were an innovation for that language, because before it didn't have enumerations at all. However, perhaps you mean a different feature - enumeration values prefixed by their type name.
Upcoming Delphi 2009, and the last version of the Delphi for .NET product, support a new directive called scoped enums. It looks like this:
{$APPTYPE CONSOLE}
{$SCOPEDENUMS ON}
type
TFoo = (One, Two, Three);
{$SCOPEDENUMS OFF}
var
x: TFoo;
begin
x := TFoo.One;
if not (x in [TFoo.Two, TFoo.Three]) then
Writeln('OK');
end.
A:
What is wrong with native Delphi enums? They are type safe.
type
TMyEnum = (Item1, Item2, Item3);
if MyEnum <> Item1 then...
Since Delphi 2005 you can have consts in a class, but Delphi 5 can not.
type
TMyEnum = sealed class
public
const Item1 = 0;
const Item2 = 1;
const Item3 = 2;
end;
A:
Now that you have provided us with some more clues about the nature of your question, namely mentioning COM, I think I understand what you mean. COM can marshal only a subset of the types Delphi knows between a COM server and client. You can define enums in the TLB editor, but these are all of the type TOleEnum, which basically is an integer type (LongWord). You can give a variable of the type TOleEnum any integer value you want, and assign values of different enum types to each other. Not really type safe.
I can not think of a reason why Delphi's COM can't use the type safe enums instead, but it doesn't. I am afraid nothing much can be done about that. Maybe the changes in the TLB editor in the upcoming Delphi 2009 version might change that.
For the record: when the TLB editor is not used, Delphi is perfectly able to have interfaces with methods that take type-safe enums as parameters.
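To illustrate that last point, a hand-written (non-TLB) Delphi interface can declare the native enum directly and the compiler will enforce it. The GUID below is only a placeholder; generate your own with Ctrl+Shift+G in the IDE.
type
  TMyEnum = (Item1, Item2, Item3);

  IMyService = interface
    ['{00000000-0000-0000-0000-000000000001}'] // placeholder GUID
    procedure DoSomething(Value: TMyEnum); // callers cannot pass a raw Integer here
  end;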
A:
I think I know why Borland chose not to use type-safe enums in the TLB editor. Enums in COM can have arbitrary values, while Delphi has only supported that since Delphi 6 (I think).
type
TSomeEnum = (Enum1 = 1, Enum2 = 6, Enum3 = 80); // Only since Delphi 6
| How to implement Type-safe COM enumerations? | How could I implement type-safe enumerations in Delphi in a COM scenario? Basically, I'd like to replace a set of primitive constants of an enumeration with a set of static final object references encapsulated in a class.
In Java, we can do something like:
public final class Enum
{
public static final Enum ENUMITEM1 = new Enum ();
public static final Enum ENUMITEM2 = new Enum ();
//...
private Enum () {}
}
and make comparisons using the customized enumeration type:
if (anObject != Enum.ENUMITEM1) ...
I am currently using the old Delphi 5 and I would like to declare some enum parameters on the interfaces, not allowing client objects to pass integer (or long) values in place of the required enumeration type.
Do you have a better way of implementing enums other than using the native delphi enums ?
| [
"Native Delphi enumerations are already type-safe. Java enumerations were an innovation for that language, because before it didn't have enumerations at all. However, perhaps you mean a different feature - enumeration values prefixed by their type name.\nUpcoming Delphi 2009, and the last version of the Delphi for .NET product, support a new directive called scoped enums. It looks like this:\n{$APPTYPE CONSOLE}\n{$SCOPEDENUMS ON}\ntype\n TFoo = (One, Two, Three);\n{$SCOPEDENUMS OFF}\n\nvar\n x: TFoo;\nbegin\n x := TFoo.One;\n if not (x in [TFoo.Two, TFoo.Three]) then\n Writeln('OK');\nend.\n\n",
"What is wrong with native Delphi enums? They are type safe.\ntype\n TMyEnum = (Item1, Item2, Item3);\n\nif MyEnum <> Item1 then...\n\nSince Delphi 2005 you can have consts in a class, but Delphi 5 can not.\ntype\n TMyEnum = sealed class\n public\n const Item1 = 0;\n const Item2 = 1;\n const Item3 = 2;\n end;\n\n",
"Now you have provided us with some more clues about the nature of your question, namely mentioning COM, I think I understand what you mean. COM can marshal only a subset of the types Delphi knows between a COM server and client. You can define enums in the TLB editor, but these are all of the type TOleEnum which basically is an integer type (LongWord). You can have a variable of the type TOleEnum any integer value you want and assign values of different enum types to each other. Not really type safe.\nI can not think of a reason why Delphi's COM can't use the type safe enums instead, but it doesn't. I am afraid nothing much can be done about that. Maybe the changes in the TLB editor in the upcoming Delphi 2009 version might change that.\nFor the record: When the TLB editor is not used, Delphi is perfectly able to have interface with methods who have type safe enums as parameters.\n",
"I think I know why Borland choose not to use type safe enums in the TLB editor. Enums in COM can be different values while Delphi only since Delphi 6 (I think) can do that. \ntype\n TSomeEnum = (Enum1 = 1, Enum2 = 6, Enum3 = 80); // Only since Delphi 6\n\n"
] | [
4,
3,
1,
1
] | [] | [] | [
"com",
"delphi",
"delphi_5"
] | stackoverflow_0000030529_com_delphi_delphi_5.txt |
Q:
Vista Console App?
I'm doing a fair bit of work in Ruby recently, and using
ruby script/console
Is absolutely critical. However, I'm really disappointed with the default Windows console in Vista, especially in that there's a really annoying bug where moving the cursor back when at the bottom of the screen irregularly causes it to jump back. Anyone have a decent console app they use in Windows?
A:
I use Console2.
I like the tabbed interface and that copy works properly if text breaks at the end of a line.
A:
Are you resizing the console window? I've found that the ruby scripts (irb, etc) that use the readline library don't work correctly with resized console windows (in XP or Vista).
Effectively I believe that the readline library expects the console window to be 80 characters wide; anything else and it goes berserk. So far I haven't found a way to fix it on Windows without giving up other nice features.
A:
I have had some pleasant experiences with rxvt (comes with cygwin, does not need an x server running). Putty is also often mentioned as a good alternative.
You could also try to get xterm working :)
A:
Powershell
Windows PowerShell is Microsoft's task automation framework, consisting of a command-line shell and associated scripting language built on top of, and integrated with the .NET Framework. PowerShell provides full access to COM and WMI, enabling administrators to perform administrative tasks on both local and remote Windows systems.
| Vista Console App? | I'm doing a fair bit of work in Ruby recently, and using
ruby script/console
Is absolutely critical. However, I'm really disappointed with the default Windows console in Vista, especially in that there's a really annoying bug where moving the cursor back when at the bottom of the screen irregularly causes it to jump back. Anyone have a decent console app they use in Windows?
| [
"I use Console2.\nI like the tabbed interface and that copy works properly if text breaks at the end of a line.\n",
"Are you resizing the console window? I've found that the ruby scripts (irb, etc) that use the readline library don't work correctly with resized console windows (in XP or Vista).\nEffectively I believe that the readline library expects the console window to be 80 characters wide, anything else and it goes bezerk. So far I haven't found a way to fix it on windows without giving up other nice features.\n",
"I have had some pleasant experiences with rxvt (comes with cygwin, does not need an x server running). Putty is also often mentioned as a good alternative.\nYou could also try to get xterm working :)\n",
"Powershell\n\nWindows PowerShell is Microsoft's task automation framework, consisting of a command-line shell and associated scripting language built on top of, and integrated with the .NET Framework. PowerShell provides full access to COM and WMI, enabling administrators to perform administrative tasks on both local and remote Windows systems.\n\n"
] | [
7,
3,
2,
0
] | [] | [] | [
"console",
"ruby",
"windows",
"windows_vista"
] | stackoverflow_0000041185_console_ruby_windows_windows_vista.txt |
Q:
Encrypt data from users in web applications
Some web applications, like Google Docs, store data generated by the users. Data that can only be read by its owner. Or maybe not?
As far as I know, this data is stored as is in a remote database. So, anybody with enough privileges in the remote system (a sysadmin, for instance) could snoop on my data, and my privacy would be compromised.
What could be the best solution to store this data encrypted in a remote database and that only the data's owner could decrypt it? How to make this process transparent to the user? (You can't use the user's password as the key to encrypt his data, because you shouldn't know his password).
A:
If encryption/decryption is performed on the server, there is no way you can make sure that the cleartext is not dumped somewhere in some log file or the like.
You need to do the encryption/decryption inside the browser using JavaScript/Java/ActiveX or whatever. As a user, you need to trust the client-side of the web service not to send back the info unencrypted to the server.
Carl
A:
I think Carl nailed it on the head, but I wanted to say that with any website, if you are providing it any confidential/personal/privileged information then you have to have a certain level of trust, and it is the responsibility of the service provider to establish this trust. This is one of those questions that has been asked many times across the internet since its inception, and it will only continue to grow until we all have our own SSL certs encoded on our fingerprint, and even then we will have to ask the question 'How do I know that the finger is still attached to the user?'.
A:
Well, I'd consider a process similar to Amazons AWS. You authenticate with a private password that is not saved remotely. Just a hash is used to validate the user. Then you generate a certificate with one of the main and long-tested algorithms and provide this from a secure page. Then a public/private key algorithm can be used to encrypt things for the users.
But the main problem remains the same: if someone with enough privileges can access the data (say, they hacked your server), you're lost. Given enough time and power, everything can be broken. It's just a matter of time.
But I think algorithms and applications like GPG/PGP and similar are very well known and can be implemented in a way that secures web applications - and keeps usability at a level that the average user can handle.
edit I want to catch up with @Carl and Unkwntech and add their statement: If you don't trust the site itself, don't give private data away. That's even before someone hacks their servers... ;-)
A:
Auron asked: How do you generate a key for the client to encrypt/decrypt the data? Where do you store this key?
Well, the key is usually derived from some password the user has chosen. You don't store it; you trust the user to remember it. What you can store is maybe some salt value associated with that user, to increase security against rainbow-table attacks, for instance.
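As a rough sketch of that derivation (Java purely for illustration; the PBKDF2 algorithm name, iteration count, and key size are assumptions, not recommendations):
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class KeyFromPassword {
    // Derive an AES key from the user's password plus a stored per-user salt.
    public static SecretKeySpec derive(char[] password, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, 10000, 128);
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        byte[] keyBytes = factory.generateSecret(spec).getEncoded();
        return new SecretKeySpec(keyBytes, "AES");
    }
}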
Crypto is hard to get right ;-) I would recommend looking at the source code for AxCrypt and for Xecrets' off-line client.
Carl
| Encrypt data from users in web applications | Some web applications, like Google Docs, store data generated by the users. Data that can only be read by its owner. Or maybe not?
As far as I know, this data is stored as is in a remote database. So, anybody with enough privileges in the remote system (a sysadmin, for instance) could snoop on my data, and my privacy would be compromised.
What could be the best solution to store this data encrypted in a remote database and that only the data's owner could decrypt it? How to make this process transparent to the user? (You can't use the user's password as the key to encrypt his data, because you shouldn't know his password).
| [
"If encryption/decryption is performed on the server, there is no way you can make sure that the cleartext is not dumped somewhere in some log file or the like.\nYou need to do the encryption/decryption inside the browser using JavaScript/Java/ActiveX or whatever. As a user, you need to trust the client-side of the web service not to send back the info unencrypted to the server.\nCarl\n",
"I think Carl, nailed it on the head, but I wanted to say that with any website, if you are providing it any confidential/personal/privileged information then you have to have a certain level of trust, and it is the responsibility of the service provider to establish this trust. This is one of those questions that has been asked many times, across the internet since it's inception, and it will only continue to grow until we all have our own SSL certs encoded on our fingerprint, and even then we will have to ask the question 'How do I know that the finger is still attached to the user?'.\n",
"Well, I'd consider a process similar to Amazons AWS. You authenticate with a private password that is not saved remotely. Just a hash is used to validate the user. Then you generate a certificate with one of the main and long-tested algorithms and provide this from a secure page. Then a public/private key algorithm can be used to encrypt things for the users.\nBut the main problem remains the same: If someone with enough privileges can access the data (say: hacked your server), you're lost. Given enough time and power, everything could be breaked. It's just a matter of time.\nBut I think algorithms and applications like GPG/PGP and similar are very well known and can be implemented in a way that secure web applications - and keep the usability at a score that the average user can handle.\nedit I want to catch up with @Carl and Unkwntech and add their statement: If you don't trust the site itself, don't give private data away. That's even before someone hacks their servers... ;-)\n",
"\nAuron asked: How do you generate a key for the client to encrypt/decrypt the data? Where do you store this key?\n\nWell, the key is usually derived from some password the user has chosen. You don't store it, you trust the user to remember it. What you can store is maybe some salt value associated to that user, to increase security against rainbow-table attacks for instance. \nCrypto is hard to get right ;-) I would recommend to look at the source code for AxCrypt and for Xecrets' off-line client.\nCarl\n"
] | [
6,
4,
1,
1
] | [
"No, you can't use passwords, but you could use password hashes. However, Google Docs are all about sharing, so such a method would require storing a copy of the document for each user.\n"
] | [
-1
] | [
"encryption",
"privacy",
"web_applications"
] | stackoverflow_0000039772_encryption_privacy_web_applications.txt |
Q:
Working on a Visual Studio Project with multiple users?
I just wonder what the best approach is to have multiple users work on a Project in Visual Studio 2005 Professional.
We got a Solution with multiple Class Libraries, but when everyone opens the solution, we keep getting the "X was modified, Reload/Discard?" prompt all the time. Just opening one project is an obvious alternative, but I find it harder to use as you can't just see some of the other classes in other projects that way.
Are there any Guidelines for Team Development with VS2005 Pro?
Edit: Thanks. The current environment is a bit limited in the sense there is only 1 PC with RDP Connection, but that will change in the future. Marking the first answer as Accepted, but they are all good :)
A:
What you need is source control.
You should definitely not open the same files over the network on multiple machines. For one thing, Visual Studio has safeguards in place to prevent you from modifying certain files during a build, but it has nothing that will prevent others from modifying the same files over the network.
By setting up source control, each developer will have a separate copy of the files locally on his or her developer machine, and periodically communicate with the source control system to check in/commit changes. After that, other developers can ask for the latest updates when they're ready to retrieve them.
A:
Use source control to keep a central repository of all your code. Then each user checks out their own copy of the source code and works locally. Then submits only the code that changed.
https://en.wikipedia.org/wiki/Version_control
A:
A number of people have recommended using source control and I totally agree. However, you also need to do the following.
Exclude your personal options files from the repository (eg your .suo files)
Exclude your App.config files from the repository. - Not entirely, but you need to have a Template.App.config. You commit that instead, and only copy your App.config into the Template.App.config when you make structural changes. That way each user has their own individual config for testing.
There are probably some other files worth excluding (obj directories and so forth) but that's all I can think of right now.
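With Subversion, for instance, those exclusions might be set up roughly like this (file patterns are illustrative; note that propset replaces any existing ignore list):
svn propset svn:ignore "*.suo
*.user
App.config
bin
obj" .
svn commit -m "Ignore per-user settings and build output"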
Peter
A:
This might sound snide, but if you're opening up the solution from a shared location then you're doing something wrong. If that's the case then you should start using source control (something like Subversion) and have everyone check out a copy of the project to work on.
However if you're already using source control, then it might be a symptom of having the wrong things checked in. I find that you only need the sln, and the vcproj under source control.
Otherwise I don't know...
A:
You should definitely, definitely be working with source control!
This will help stop the collisions that are occurring. Also, if you are making changes to the shared projects often enough that it is a problem, then also ensure that all code is tested before getting checked in (otherwise they may bust someone else's build), but make sure they check in often (or the time gained from not dealing with prompts will be lost in merge conflicts) :)
| Working on a Visual Studio Project with multiple users? | I just wonder what the best approach is to have multiple users work on a Project in Visual Studio 2005 Professional.
We got a Solution with multiple Class Libraries, but when everyone opens the solution, we keep getting the "X was modified, Reload/Discard?" prompt all the time. Just opening one project is an obvious alternative, but I find it harder to use as you can't just see some of the other classes in other projects that way.
Are there any Guidelines for Team Development with VS2005 Pro?
Edit: Thanks. The current environment is a bit limited in the sense there is only 1 PC with RDP Connection, but that will change in the future. Marking the first answer as Accepted, but they are all good :)
| [
"What you need is source control.\nYou should definitely not open the same files over the network on multiple machines. For one thing, Visual Studio has safeguards in place to prevent you from modifying certain files during a build, but it has none of that that will prevent others from modifying the same files over the network.\nBy setting up source control, each developer will have a separate copy of the files locally on his or her developer machine, and periodically communicate with the source control system to check in/commit changes. After that, other developers can ask for the latest updates when they're ready to retrieve them.\n",
"Use source control to keep a central repository of all your code. Then each user checks out their own copy of the source code and works locally. Then submits only the code that changed.\nhttps://en.wikipedia.org/wiki/Version_control\n",
"A number of people have recommended using source control and I totally agree. However you also need do the following.\n\nExclude your personal options files from the repository (eg your .suo files)\nExclude your App.config files from the repository. - Not entirely but you need to have a Template.App.config. You commit that instead, and only copy your App.config into the Template.App.config when you make structural changes. That was each user has their own individual config for testing.\n\nThere are probably some other files worth excluding (obj directories and so forth) but thats all I can think of right now.\nPeter\n",
"This might sound snide, but if you're opening up the solution from a shared location then you're doing something wrong. If that's the case then you should start using source control (something like Subversion) and have everyone check out a copy of the project to work on. \nHowever if you're already using source control, then it might be a symptom of having the wrong things checked in. I find that you only need the sln, and the vcproj under source control.\nOtherwise I don't know...\n",
"You should definitely, definitely be working with source control!\nThis will help stop the collisions that are occurring. Also, if you are making changes to the shared projects this often that it is a problem, then also ensure that all code is tested before getting checked in (otherwise they may bust someone else's build), but make sure they check in often (or time gained from not dealing with prompts will be lost in merging conflicts) :)\n"
] | [
8,
6,
3,
1,
1
] | [] | [] | [
"visual_studio"
] | stackoverflow_0000041320_visual_studio.txt |
Q:
Business Application UI Design
Basically I'm going to go a bit broad here and ask a few questions to get a bit of a picture of how people are handling UI these days.
Lately I've found it pretty easy to do some fancy things with UI design and with WPF specifically we're finding new ways to do layouts that are better looking and more functional for the user, but in contrast one of the business focused guys at our local .NET User Group wouldn't even think of using WPF until it had a datagrid that he could use to make Excel like input forms.
So basically, have you rethought the design of your business apps as you move to Web/WPF/Silverlight designs, because for us at least - in winforms we kept things fairly functional and uniform, or are you trying to keep that "known" UI?
Would a dedicated design guy (for larger teams), or a dev with more design chops rank higher when looking at hiring these days? (Check out what a designer did for Scott Hanselman's BabySmash and Microsoft's Prism demo)
Are there any design hints/tips/guidelines you use for your UI - especially for WPF?
What sites would you recommend for design?
A:
I recommend that you read Steve Krug's Don't Make Me Think first. The book has a great checklist of things that you have to take into consideration when designing your UIs. While it's focused on web usability, a lot of the lessons therein are valuable even to desktop application designers.
That being said, whether you use Windows Forms or WPF or Flash or whatever new and shiny thing comes around, it is of utmost importance to hire either a) a real designer, or b) a development guy with a lot of UI design experience, either of whom can provide a serious URL for their design portfolio. It will help a lot not only in improving the design of your application but also in unburdening your developers from thinking about UI design, allowing them to focus on the back-end code.
As for "business focused" guys -- it would be really great if you would get the opinion of actual customers and stake holders, and have them do some usability testing for your application. It's their opinion that would matter most.
I think it would not be difficult to get a good designer up to speed on Microsoft Expression Blend to whip up some good XAML designs that your team could use to come up with a really good product.
A:
Here's a great screen cast where Billy Hollis goes into many of these issues:
http://www.dnrtv.com/default.aspx?showNum=115
A:
I think WPF can greatly improve user experience.
However, there are not many business-oriented controls out there, which means you need to do a lot yourself.
As for designers, I think it's really hard to find a WPF designer nowadays; it would still be a dedicated programmer rather than a design-only guy.
I hope that this situation will change in the near future.
I think it's worth at least starting to experiment with WPF to be able to compete with upcoming solutions.
A:
@aku "I think WPF can greatly improve user experience."
I believe that WPF has amazing potential as a tool to make UIs more creative and better suited to the actual data that is being displayed, BUT..............
Just the mere act of using WPF isn't going to make great UIs appear out of nowhere.
A great carpenter may use the best woodworking tools, but that doesn't mean that if you picked up his tools you'd all of a sudden be popping out fine furniture.
Using WPF over HTML/Flash/WinForms/etc. just increases your potential.
Whether that's potential for ugliness or potential for beauty is up to you.
A:
The whole concept of re-thinking a UI of an existing application is dependent on the target audience. For a boring business application, like accounting or budgeting, it may even be counter-productive. For one, users of those kinds of apps may have used a similar looking and feeling UI for years and years, and second, looking too "cute" and colorful can even bring a perception of toy-ishness (is that a word?) with it.
We have done several new projects with the latest & greatest UI gadgets, and for the most part for new applications it seems to be a good chance to get some feedback from a live audience. Then it gets easier to translate that feedback into existing applications.
We also have some apps which are still actively developed (and used, obviously), where the UI looks almost like in Windows 3.1. They're awful, gray, clunky, and our only real designer is always trying to get permission to bring it into the current century - but the biggest customer actively refuses this. They say it's just fine, people know how to use it, and it works even on their oldest computers.
A:
@David H Aust That's part of the reason for asking the question - with these newer tools like WPF that lend themselves to providing newer, more intricate, and at the same time simpler for the user, interfaces that we might need to adapt to new ways of doing things.
And trying to find out who else is adapting/interested and what they are doing, and where they get some inspiration, knowledge or help :)
IE: This is me being proactive about change in possibly the slackest manner ever, short of actively googling :)
^ That was a joke, to make it clear, I'm actually pretty active about learning new stuff, I'm just finding some of the crowdsourcing stackoverflow vs googling pretty interesting :)
A:
Microsoft is building a DataGrid for WPF. A CTP can be found here.
A:
@Lars Truijens - Thanks, but I think for 99% of cases that's a horrible idea, and sure, there are uses - but I've found that with WPF there's typically a much better way to do it.
Plus you can use textboxes, and use an Enter as Tab override to move through them easily and swiftly.
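For the Enter-as-Tab part, a hedged WPF sketch (wire this up as the KeyDown handler on the textboxes; it uses the System.Windows.Input namespace):
private void TextBox_KeyDown(object sender, KeyEventArgs e)
{
    if (e.Key == Key.Enter)
    {
        // Move focus to the next control in the tab order, as if Tab had been pressed
        ((UIElement)sender).MoveFocus(new TraversalRequest(FocusNavigationDirection.Next));
        e.Handled = true;
    }
}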
| Business Application UI Design | Basically I'm going to go a bit broad here and ask a few questions to get a bit of a picture of how people are handling UI these days.
Lately I've found it pretty easy to do some fancy things with UI design and with WPF specifically we're finding new ways to do layouts that are better looking and more functional for the user, but in contrast one of the business focused guys at our local .NET User Group wouldn't even think of using WPF until it had a datagrid that he could use to make Excel like input forms.
So basically, have you rethought the design of your business apps as you move to Web/WPF/Silverlight designs, because for us at least - in winforms we kept things fairly functional and uniform, or are you trying to keep that "known" UI?
Would a dedicated design guy (for larger teams), or a dev with more design chops rank higher when looking at hiring these days? (Check out what a designer did for Scott Hanselman's BabySmash and Microsoft's Prism demo)
Are there any design hints/tips/guidelines you use for your UI - especially for WPF?
What sites would you recommend for design?
| [
"I recommend that you read Steve Krug's Don't Make Me Think first. The book has a great checklist of things that you have to take into consideration when designing your UIs. While it's focused on web usability, a lot of the lessons therein are valuable even to desktop application designers.\nThat being said, whether you use Windows forms or WPF or Flash or whatever new and shiny thing that comes around is, it is of utmost importance to hire either a) a real designer, or b) a development guy with a lot of UI design experience, either of which who can provide you a serious URL for their design portfolio. It will help a lot not only in improving the design of your application but also unburdening your developers from thinking about UI design, and allow them to focus on the back-end code.\nAs for \"business focused\" guys -- it would be really great if you would get the opinion of actual customers and stake holders, and have them do some usability testing for your application. It's their opinion that would matter most.\nI think it would not be difficult to get a good designer up to speed on Microsoft Expression Blend to whip up some good XAML designs that your team could use to come up with a really good product.\n",
"Here's a great screen cast where Billy Hollis goes into many of these issues:\nhttp://www.dnrtv.com/default.aspx?showNum=115\n",
"I think WPF can greatly improve user experience. \nHowever there are not much business oriented controls out there which means you need to do a lot by yourself. \nAs for designers I think it's really hard to find WPF designer now days, it still would be a dedicated programmer rather then design-only guy. \nI hope that this situation will change in nearest feature. \nI think it's worth at least start experimenting with WPF to be able to compete with upcoming solutions.\n",
"@aku \"I think WPF can greatly improve user experience.\" \nI believe that WPF has amazing potential as a tool to make UIs more creative and better suited to the actual data that is being displayed, BUT.............. \nJust the mere act of using WPF isn't going to make great UIs appear out of nowhere. \nA great carpenter may use the best wood working tools, but that doesn't mean that if you picked up his tools you'd all of a sudden be popping out fine furniture. \nUsing WPF over HTML/Flash/WinForms/etc just increases your potential .\nIf that's potential for ugliness or potential for beauty is up to you.\n",
"The whole concept of re-thinking a UI of an existing application is dependent on the target audience. For a boring business application, like accounting or budgeting, it may even be counter-productive. For one, users of those kinds of apps may have used a similar looking and feeling UI for years and years, and second, looking too \"cute\" and colorful can even bring a perception of toy-ishness (is that a word?) with it.\nWe have done several new projects with the latest & greatest UI gadgets, and for the most part for new applications it seems to be a good chance to get some feedback from a live audience. Then it gets easier to translate that feedback into existing applications.\nWe also have some apps which are still actively developed (and used obviously), where the UI looks almost like in Windows 3.1. They're awful, gray, clunky, and our only real designer is always trying to get a permission to bring it to the current centrury - but the biggest customer actively refuses this. They say it's just fine, people know how to use it, and it works even in their oldest computers.\n",
"@David H Aust That's part of the reason for asking the question - with these newer tools like WPF that lend themselves to providing newer, more intricate, and at the same time simpler for the user, interfaces that we might need to adapt to new ways of doing things.\nAnd trying to find out who else is adapting/interested and what they are doing, and where they get some inspiration, knowledge or help :)\nIE: This is me being proactive about change in possibly the slackest manner ever, short of actively googling :) \n^ That was a joke, to make it clear, I'm actually pretty active about learning new stuff, I'm just finding some of the crowdsourcing stackoverflow vs googling pretty interesting :)\n",
"Microsoft is building a DataGrid for WPF. A CTP can be found here.\n",
"@Lars Truijens - Thanks, but I think for 99% of cases that's a horrible idea, and sure, there are uses - but I've found that with WPF there's typically a much better way to do it.\nPlus you can use textboxes, and use an Enter as Tab override to move through them easily and swiftly.\n"
] | [
12,
7,
5,
3,
3,
0,
0,
0
] | [] | [] | [
"user_interface",
"wpf"
] | stackoverflow_0000040863_user_interface_wpf.txt |
Q:
Can Mac OS X's Spotlight be configured to ignore certain file types?
I've got bunches of auxiliary files that are generated by code and LaTeX documents that I dearly wish would not be suggested by SpotLight as potential search candidates. I'm not looking for example.log, I'm looking for example.tex!
So can Spotlight be configured to ignore, say, all .log files?
(I know, I know; I should just use QuickSilver instead…)
@diciu That's an interesting answer. The problem in my case is this:
Figure out which importer handles your type of file
I'm not sure if my type of file is handled by any single importer? Since they've all got weird extensions (.aux, .glo, .out, whatever) I think it's improbable that there's an importer that's trying to index them. But because they're plain text they're being picked up as generic files. (Admittedly, I don't know much about Spotlight's indexing, so I might be completely wrong on this.)
@diciu again: TextImporterDontImportList sounds very promising; I'll head off and see if anything comes of it.
Like you say, it does seem like the whole UTI system doesn't really allow not searching for something.
@Raynet Making the files invisible is a good idea actually, albeit relatively tedious for me to set up in the general sense. If worst comes to worst, I might give that a shot (but probably after exhausting other options such as QuickSilver). (Oh, and SetFile requires the Developer Tools, but I'm guessing everyone here has them installed anyway :) )
A:
@Will - these things that define types are called uniform type identifiers.
The problem is they are a combination of extensions (like .txt) and generic types (i.e. public.plain-text matches a txt file without the txt extension based purely on content) so it's not as simple as looking for an extension.
RichText.mdimporter is probably the importer that imports your text file.
This should be easily verified by running mdimport in debug mode on one of the files you don't want indexed:
cristi:~ diciu$ echo "All work and no play makes Jack a dull boy" > ~/input.txt
cristi:~ diciu$ mdimport -d 4 -n ~/input.txt 2>&1 | grep Imported
kMD2008-09-03 12:05:06.342 mdimport[1230:10b] Imported '/Users/diciu/input.txt' of type 'public.plain-text' with plugIn /System/Library/Spotlight/RichText.mdimporter.
The type that matches in my example is public.plain-text.
I've no idea how you actually write an extension-based exception for a UTI (like public.plain-text except anything ending in .log).
Later edit: I've also looked through the RichText mdimporter binary and found a promising string but I can't figure out if it's actually being used (as a preference name or whatever):
cristi:FoodBrowser diciu$ strings /System/Library/Spotlight/RichText.mdimporter/Contents/MacOS/RichText |grep Text
TextImporterDontImportList
A:
Not sure how to do it on a file type level, but you can do it on a folder level:
Source: http://lists.apple.com/archives/spotlight-dev/2008/Jul/msg00007.html
Make spotlight ignore a folder
If you absolutely can't rename the folder because other software depends on it, another technique is to go ahead and rename the directory to end in ".noindex", but then create a symlink in the same location pointing to the real location using the original name.
Most software is happy to use the symlink with the original name, but Spotlight ignores symlinks and will note the "real" name ends in *.noindex and will ignore that location.
Perhaps something like:
mv OriginalName OriginalName.noindex
ln -s OriginalName.noindex OriginalName
ls -l
lrwxr-xr-x   1 andy admin  24 Jan  9  2008 OriginalName -> OriginalName.noindex
drwxr-xr-x  11 andy admin 374 Jul 11 07:03 Original.noindex
A:
Here's how it might work.
Note: this is not a very good solution, as a system update will overwrite the changes you make.
Get a list of all importers
cristi:~ diciu$ mdimport -L
2008-09-03 10:42:27.144 mdimport[727:10b] Paths: id(501) (
"/System/Library/Spotlight/Audio.mdimporter",
"/System/Library/Spotlight/Chat.mdimporter",
"/Developer/Applications/Xcode.app/Contents/Library/Spotlight/SourceCode.mdimporter",
Figure out which importer handles your type of file (example for the Audio importer):
cristi:~ diciu$ cat /System/Library/Spotlight/Audio.mdimporter/Contents/Info.plist
[..]
        <key>CFBundleTypeRole</key>
        <string>MDImporter</string>
        <key>LSItemContentTypes</key>
        <array>
            <string>public.mp3</string>
            <string>public.aifc-audio</string>
            <string>public.aiff-audio</string>
        </array>
Alter the importer's plist to delete the type you want to ignore.
Reimport the importer's types so the system picks up the change:
mdimport -r /System/Library/Spotlight/Chat.mdimporter
A:
The only option probably is to have them not indexed by Spotlight, as for some reason you cannot do negative searches. You can search for files with a specific file extension, but you cannot search for ones that don't match.
You could try making those files invisible for Finder, Spotlight won't index invisible files. Command for setting the kIsInvisible flag on files is:
SetFile -a V [filename(s)]
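Applied to the auxiliary LaTeX files mentioned in the question, that might look like this (the uppercase V sets the invisible attribute; a lowercase v would clear it):
SetFile -a V *.aux *.glo *.out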
| Can Mac OS X's Spotlight be configured to ignore certain file types? | I've got bunches of auxiliary files that are generated by code and LaTeX documents that I dearly wish would not be suggested by SpotLight as potential search candidates. I'm not looking for example.log, I'm looking for example.tex!
So can Spotlight be configured to ignore, say, all .log files?
(I know, I know; I should just use QuickSilver instead…)
@diciu That's an interesting answer. The problem in my case is this:
Figure out which importer handles your type of file
I'm not sure if my type of file is handled by any single importer? Since they've all got weird extensions (.aux, .glo, .out, whatever) I think it's improbable that there's an importer that's trying to index them. But because they're plain text they're being picked up as generic files. (Admittedly, I don't know much about Spotlight's indexing, so I might be completely wrong on this.)
@diciu again: TextImporterDontImportList sounds very promising; I'll head off and see if anything comes of it.
Like you say, it does seem like the whole UTI system doesn't really allow not searching for something.
@Raynet Making the files invisible is a good idea actually, albeit relatively tedious for me to set up in the general sense. If worst comes to worst, I might give that a shot (but probably after exhausting other options such as QuickSilver). (Oh, and SetFile requires the Developer Tools, but I'm guessing everyone here has them installed anyway :) )
| [
"@Will - these things that define types are called uniform type identifiers.\nThe problem is they are a combination of extensions (like .txt) and generic types (i.e. public.plain-text matches a txt file without the txt extension based purely on content) so it's not as simple as looking for an extension.\nRichText.mdimporter is probably the importer that imports your text file.\nThis should be easily verified by running mdimport in debug mode on one of the files you don't want indexed:\n\ncristi:~ diciu$ echo \"All work and no play makes Jack a dull boy\" > ~/input.txt\ncristi:~ diciu$ mdimport -d 4 -n ~/input.txt 2>&1 | grep Imported\n kMD2008-09-03 12:05:06.342 mdimport[1230:10b] Imported '/Users/diciu/input.txt' of type 'public.plain-text' with plugIn /System/Library/Spotlight/RichText.mdimporter.\n\n\nThe type that matches in my example is public.plain-text.\nI've no idea how you actually write an extension-based exception for an UTI (like public.plain-text except anything ending in .log).\nLater edit: I've also looked though the RichText mdimporter binary and found a promising string but I can't figure out if it's actually being used (as a preference name or whatever):\n\ncristi:FoodBrowser diciu$ strings /System/Library/Spotlight/RichText.mdimporter/Contents/MacOS/RichText |grep Text\n\nTextImporterDontImportList\n\n\n",
"Not sure how to do it on a file type level, but you can do it on a folder level:\nSource: http://lists.apple.com/archives/spotlight-dev/2008/Jul/msg00007.html\nMake spotlight ignore a folder\nIf you absolutely can't rename the folder because other software depends on it another technique is to go ahead and rename the directory to end in \".noindex\", but then create a symlink in the same location pointing to the real location using the original name.\nMost software is happy to use the symlink with the original name, but Spotlight ignores symlinks and will note the \"real\" name ends in *.noindex and will ignore that location.\nPerhaps something like:\n\nmv OriginalName OriginalName.noindex\n ln -s OriginalName.noindex\n OriginalName\nls -l\nlrwxr-xr-x 1 andy admin 24 Jan 9 2008\n OriginalName -> OriginalName.noindex\n drwxr-xr-x 11 andy admin 374 Jul 11\n 07:03 Original.noindex\n\n",
"Here's how it might work.\nNote: this is not a very good solution as a system update will overwrite changes you will perform.\nGet a list of all importers\n\ncristi:~ diciu$ mdimport -L\n2008-09-03 10:42:27.144 mdimport[727:10b] Paths: id(501) (\n \"/System/Library/Spotlight/Audio.mdimporter\",\n \"/System/Library/Spotlight/Chat.mdimporter\",\n \"/Developer/Applications/Xcode.app/Contents/Library/Spotlight/SourceCode.mdimporter\",\n\nFigure out which importer handles your type of file (example for the Audio importer):\n\ncristi:~ diciu$ cat /System/Library/Spotlight/Audio.mdimporter/Contents/Info.plist \n\n\n\n\n[..]\n CFBundleTypeRole\n MDImporter\n LSItemContentTypes\n \n public.mp3\n public.aifc-audio\n public.aiff-audio\n\n\nAlter the importer's plist to delete the type you want to ignore.\nReimport the importer's types so the system picks up the change:\n\nmdimport -r /System/Library/Spotlight/Chat.mdimporter\n\n",
"The only option probably is to have them not indexed by spotlight as from some reason you cannot do negative searches. You can search for files with specifix file extension, but you cannot not search for ones that don't match.\nYou could try making those files invisible for Finder, Spotlight won't index invisible files. Command for setting the kIsInvisible flag on files is:\nSetFile -a v [filename(s)]\n"
] | [
3,
2,
2,
1
] | [] | [] | [
"macos",
"spotlight"
] | stackoverflow_0000041279_macos_spotlight.txt |
Q:
Setting viewstate on postback
I am trying to set a ViewState-variable when a button is pressed, but it only works the second time I click the button. Here is the code-behind:
protected void Page_Load(object sender, EventArgs e)
{
if (Page.IsPostBack)
{
lblInfo.InnerText = String.Format("Hello {0} at {1}!", YourName, DateTime.Now.ToLongTimeString());
}
}
private string YourName
{
get { return (string)ViewState["YourName"]; }
set { ViewState["YourName"] = value; }
}
protected void btnSubmit_Click(object sender, EventArgs e)
{
YourName = txtName.Text;
}
Is there something I am missing? Here is the form-part of the design-file, very basic just as a POC:
<form id="form1" runat="server">
<div>
Enter your name: <asp:TextBox runat="server" ID="txtName"></asp:TextBox>
<asp:Button runat="server" ID="btnSubmit" Text="OK" onclick="btnSubmit_Click" />
<hr />
<label id="lblInfo" runat="server"></label>
</div>
</form>
PS: The sample is very simplified, "use txtName.Text instead of ViewState" is not the correct answer, I need the info to be in ViewState.
A:
Page_Load fires before btnSubmit_Click.
If you want to do something after your postback events have fired use Page_PreRender.
//this will work because YourName has now been set by the click event
protected void Page_PreRender(object sender, EventArgs e)
{
if (Page.IsPostBack)
lblInfo.InnerText = String.Format("Hello {0} at {1}!", YourName, DateTime.Now.ToLongTimeString());
}
The basic order goes:
Page init fires (init cannot access ViewState)
ViewState is read
Page load fires
Any events fire
PreRender fires
Page renders
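Given that order, an alternative sketch — using only the identifiers from the question — is to update the label directly in the click handler, after the ViewState write:
protected void btnSubmit_Click(object sender, EventArgs e)
{
    YourName = txtName.Text; // the ViewState write happens first
    lblInfo.InnerText = String.Format("Hello {0} at {1}!",
        YourName, DateTime.Now.ToLongTimeString());
}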
| Setting viewstate on postback | I am trying to set a ViewState-variable when a button is pressed, but it only works the second time I click the button. Here is the code-behind:
protected void Page_Load(object sender, EventArgs e)
{
if (Page.IsPostBack)
{
lblInfo.InnerText = String.Format("Hello {0} at {1}!", YourName, DateTime.Now.ToLongTimeString());
}
}
private string YourName
{
get { return (string)ViewState["YourName"]; }
set { ViewState["YourName"] = value; }
}
protected void btnSubmit_Click(object sender, EventArgs e)
{
YourName = txtName.Text;
}
Is there something I am missing? Here is the form-part of the design-file, very basic just as a POC:
<form id="form1" runat="server">
<div>
Enter your name: <asp:TextBox runat="server" ID="txtName"></asp:TextBox>
<asp:Button runat="server" ID="btnSubmit" Text="OK" onclick="btnSubmit_Click" />
<hr />
<label id="lblInfo" runat="server"></label>
</div>
</form>
PS: The sample is very simplified, "use txtName.Text instead of ViewState" is not the correct answer, I need the info to be in ViewState.
| [
"Page_Load fires before btnSubmit_Click.\nIf you want to do something after your postback events have fired use Page_PreRender.\n//this will work because YourName has now been set by the click event\nprotected void Page_PreRender(object sender, EventArgs e)\n{\n if (Page.IsPostBack)\n lblInfo.InnerText = String.Format(\"Hello {0} at {1}!\", YourName, DateTime.Now.ToLongTimeString());\n}\n\nThe basic order goes:\n\nPage init fires (init cannot access ViewState)\nViewState is read\nPage load fires\nAny events fire\nPreRender fires\nPage renders\n\n"
] | [
12
] | [] | [] | [
"asp.net",
"postback",
"viewstate"
] | stackoverflow_0000041429_asp.net_postback_viewstate.txt |
Q:
Where do search engines start crawling?
What do search engine bots use as a starting point? Is it DNS look-up or do they start with some fixed list of well-known sites? Any guesses or suggestions?
A:
Your question can be interpreted in two ways:
Are you asking where search engines start their crawl from in general, or where they start to crawl a particular site?
I don't know how the big players work; but if you were to make your own search engine you'd probably seed it with popular portal sites. DMOZ.org seems to be a popular starting point. Since the big players have so much more data than we do they probably start their crawls from a variety of places.
If you're asking where a SE starts to crawl your particular site, it probably has a lot to do with which of your pages are the most popular. I imagine that if you have one super popular page that lots of other sites link to, then that would be the page that SEs will enter from, because there are so many more entry points from other sites.
Note that I am not in SEO or anything; I just studied bot and SE traffic for a while for a project I was working on.
A:
You can submit your site to search engines using their site submission forms - this will get you into their system. When you actually get crawled after that is impossible to say - from experience it's usually about a week or so for an initial crawl (homepage, couple of other pages 1-link deep from there). You can increase how many of your pages get crawled and indexed using clear semantic link structure and submitting a sitemap - these allow you to list all of your pages, and weight them relative to one another, which helps the search engines understand how important you view each part of site relative to the others.
If your site is linked from other crawled websites, then your site will also be crawled, starting with the page linked, and eventually spreading to the rest of your site. This can take a long time, and depends on the crawl frequency of the linking sites, so the url submission is the quickest way to let google know about you!
One tool I can't recommend highly enough is the Google Webmaster Tool. It allows you to see how often you've been crawled, any errors the googlebot has stumbled across (broken links, etc) and has a host of other useful tools in there.
A:
In principle they start with nothing. Only when somebody explicitly tells them to include their website can they start crawling that site and use the links on it to find more.
However, in practice the creator(s) of a search engine will put in some arbitrary sites they can think of. For example, their own blogs or the sites they have in their bookmarks.
In theory one could also just pick some random addresses and see if there is a website there. I doubt anyone does this though; the above method will work just fine and does not require extra coding just to bootstrap the search engine.
| Where do search engines start crawling? | What do search engine bots use as a starting point? Is it DNS look-up or do they start with some fixed list of well-known sites? Any guesses or suggestions?
| [
"Your question can be interpreted in two ways:\nAre you asking where search engines start their crawl from in general, or where they start to crawl a particular site?\nI don't know how the big players work; but if you were to make your own search engine you'd probably seed it with popular portal sites. DMOZ.org seems to be a popular starting point. Since the big players have so much more data than we do they probably start their crawls from a variety of places.\nIf you're asking where a SE starts to crawl your particular site, it probably has a lot to do with which of your pages are the most popular. I imagine that if you have one super popular page that lots of other sites link to, then that would be the page that SEs starts will enter from because there are so many more entry points from other sites.\nNote that I am not in SEO or anything; I just studied bot and SE traffic for a while for a project I was working on.\n",
"You can submit your site to search engines using their site submission forms - this will get you into their system. When you actually get crawled after that is impossible to say - from experience it's usually about a week or so for an initial crawl (homepage, couple of other pages 1-link deep from there). You can increase how many of your pages get crawled and indexed using clear semantic link structure and submitting a sitemap - these allow you to list all of your pages, and weight them relative to one another, which helps the search engines understand how important you view each part of site relative to the others.\nIf your site is linked from other crawled websites, then your site will also be crawled, starting with the page linked, and eventually spreading to the rest of your site. This can take a long time, and depends on the crawl frequency of the linking sites, so the url submission is the quickest way to let google know about you!\nOne tool I can't recommend highly enough is the Google Webmaster Tool. It allows you to see how often you've been crawled, any errors the googlebot has stumbled across (broken links, etc) and has a host of other useful tools in there. \n",
"In principle they start with nothing. Only when somebody explicitly tells them to include their website they can start crawling this site and use the links on that site to search more.\nHowever, in practice the creator(s) of a search engine will put in some arbitrary sites they can think of. For example, their own blogs or the sites they have in their bookmarks.\nIn theory one could also just pick some random adresses and see if there is a website there. I doubt anyone does this though; the above method will work just fine and does not require extra coding just to bootstrap the search engine.\n"
] | [
8,
4,
2
] | [] | [] | [
"search_engine"
] | stackoverflow_0000041419_search_engine.txt |
Q:
New Added Types in .NET Framework 2.0 Service Pack 1
I assumed there were only bug fixes (no new types) in .NET 2.0 SP1 until I came across a few posts mentioning the DateTimeOffset structure, which was added in .NET 2.0 SP1.
Is there a full listing of the newly added types in .NET 2.0 SP1?
A:
Here's what you're looking for:
Full Article: http://www.hanselman.com/blog/CatchingRedBitsDifferencesInNET20AndNET20SP1.aspx
This may also be helpful:
Full Article: http://www.hanselman.com/blog/ChangesInTheNETBCLBetween20And35.aspx
A:
There were new interfaces added, like INotifyPropertyChanging, so there were new types added. The question is valid.
A:
DateTimeOffset was added to 2.0 SP1 - I'm not aware of any other new types.
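A quick sketch of what the new type adds (output formatting depends on locale):
DateTimeOffset now = DateTimeOffset.Now;
Console.WriteLine(now);             // local time plus its UTC offset, e.g. "09/03/08 14:17:31 +02:00"
Console.WriteLine(now.UtcDateTime); // the same instant expressed in UTC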
Given the coincidental timing, it's perhaps worth reminding people that 2.0 SP1 shipped with 3.5 RTM (i.e November 2007) and 2.0 SP2 shipped with 3.5 SP1.
A:
Based on what D2VIANT referenced
Full Article: http://www.hanselman.com/blog/CatchingRedBitsDifferencesInNET20AndNET20SP1.aspx
I was able to find additional resources which list the changes in .NET 2.0 SP1; some of the types added/affected are listed below:
System.DateTimeOffset
System.GCCollectionMode
System.Runtime.GCLatencyMode
System.Configuration.OverrideMode
System.Data.SqlClient.SortOrder
System.Data.Design.TypedDataSetSchemaImporterExtensionFx35
System.Data.TypedDataSetGenerator.GenerateOption
System.UriIdnScope
System.ComponentModel.INotifyPropertyChanging
System.ComponentModel.PropertyChangingEventArgs
System.ComponentModel.PropertyChangingEventHandler
System.ComponentModel.Design.Serialization.IDesignerLoaderHost2
System.Configuration.IdnElement
System.Configuration.IriParsingElement
System.Configuration.UriSection
System.Net.Sockets.SendPacketsElement
and many more... see the API Changes from original 2.0 to 2.0 SP1 and the New Methods and Types lists
| New Added Types in .NET Framework 2.0 Service Pack 1 | I assumed there were only bug fixes (no new types) in .NET 2.0 SP1 until I came across a few posts mentioning the DateTimeOffset structure, which was added in .NET 2.0 SP1.
Is there a full listing of the newly added types in .NET 2.0 SP1?
| [
"Here's what you're looking for:\n\nFull Article: http://www.hanselman.com/blog/CatchingRedBitsDifferencesInNET20AndNET20SP1.aspx\nThis may also be helpful:\n\nFull Article: http://www.hanselman.com/blog/ChangesInTheNETBCLBetween20And35.aspx\n",
"There were new interfaces added, like INotifyPropertyChanging, so there were new types added. The question is valid.\n",
"DateTimeOffset was added to 2.0 SP1 - I'm not aware of any other new types.\nGiven the coincidental timing, it's perhaps worth reminding people that 2.0 SP1 shipped with 3.5 RTM (i.e November 2007) and 2.0 SP2 shipped with 3.5 SP1.\n",
"Based on what D2VIANT referenced\n\nFull Article: http://www.hanselman.com/blog/CatchingRedBitsDifferencesInNET20AndNET20SP1.aspx\n\nI was able to find additional resources which list the changes in .NET SP1 some of the types added/affected are listed below\n\nSystem.DateTimeOffset\nSystem.GCCollectionMode\nSystem.Runtime.GCLatencyMode\nSystem.Configuration.OverrideMode\nSystem.Data.SqlClient.SortOrder\nSystem.Data.Design.TypedDataSetSchemaImporterExtensionFx35\nSystem.Data.TypedDataSetGenerator.GenerateOption\nSystem.UriIdnScope\nSystem.ComponentModel.INotifyPropertyChanging\nSystem.ComponentModel.PropertyChangingEventArgs\nSystem.ComponentModel.PropertyChangingEventHandler\nSystem.ComponentModel.Design.Serialization.IDesignerLoaderHost2\nSystem.Configuration.IdnElement\nSystem.Configuration.IriParsingElement\nSystem.Configuration.UriSection\nSystem.Net.Sockets.SendPacketsElement\nand Many More... API Changes from org2.0 to 2.0 and New Methods and Types\n\n"
] | [
6,
1,
0,
0
] | [] | [] | [
".net",
".net_2.0"
] | stackoverflow_0000041256_.net_.net_2.0.txt |
Q:
Do the vi and emacs implementations for Windows behave like their Unix counterparts?
If not, what are the significant differences?
Edit: Daren Thomas asks:
which ones?
I use gvim on Windows and MacVim on the mac. Seem similar enough to be the same to me...
By which ones, I'm guessing that you mean a specific implementation of vi and emacs for Windows. I'm not sure as I thought there were only one or two. I'm looking for the ones that are closest to the Unix counterparts.
A:
I use GNU emacs built for Windows, and have found very few, if any, differences. There's the option to load your .emacs file from _emacs or .emacs (although .emacs works fine on XP and above). You can configure it to use Windows-style or Unix-style line endings by default (which I suppose you could do on a Unix system too...).
You may want to tweak such settings as Emacs's startup directory and home directory. To do the former, modify the shortcut that starts emacs. To do the latter, add a HOME environment variable - this will control where your .emacs is loaded from. For more information, check the always-excellent EmacsWiki's MsWindowsInstallation page.
A:
which ones?
I use gvim on Windows and MacVim on the mac. Seem similar enough to be the same to me...
A:
GNU Emacs has long been working natively on Windows as part of the main source, and can be compiled with Visual Studio (you can also find some pre-compiled binaries). As far as I know, there are no significant differences.
A:
There are quite a few vi clones (e.g. vim) and also various Emacs implementations (Gnu Emacs vs. XEmacs spring to mind).
These clones differ on Unix themselves and will thus also differ on Windows.
One thing I found with vim is that the directory structure for plugins etc. is very different on Windows - ~/.vimrc translates to %HOME%\_vimrc (or similar, depends on stuff I don't understand), and vim tends to save stuff like plugins under C:\Program Files\vim\... instead of ~/.vim/...
A:
The Windows versions typically use the same base source code as the "regular", Unix-based versions. There may be sections of the code that are specific to Windows, just as there are sections specific to certain flavours of Unix. In general, though, the Windows versions of these packages will behave identically to the Unix ones, except where this is not possible (for example, gvim in Windows will use Windows GUI elements, of course).
| Do the vi and emacs implementations for Windows behave like their Unix counterparts? | If not, what are the significant differences?
Edit: Daren Thomas asks:
which ones?
I use gvim on Windows and MacVim on the mac. Seem similar enough to be the same to me...
By which ones, I'm guessing that you mean a specific implementation of vi and emacs for Windows. I'm not sure as I thought there were only one or two. I'm looking for the ones that are closest to the Unix counterparts.
| [
"I use GNU emacs built for Windows, and have found very few, if any, differences. There's the option to load your .emacs file from _emacs or .emacs (although .emacs works fine on XP and above). You can configure it to use Windows-style or Unix-style line endings by default (which I suppose you could do on a Unix system too...).\nYou may want to tweak such settings as Emacs's startup directory and home directory. To do the former, modify the shortcut that starts emacs. To do the latter, add a HOME environment variable - this will control where your .emacs is loaded from. For more information, check the always-excellent EmacsWiki's MsWindowsInstallation page.\n",
"which ones?\nI use gvim on Windows and MacVim on the mac. Seem similar enough to be the same to me...\n",
"GNU Emacs has long been working natively on Windows as part of the main source, and can be compiled with Visual Studio (you can also find some pre-compiled binaries). As far as I know, there are no significant differences.\n",
"There are quite a few vi clones (e.g. vim) and also various Emacs implementations (Gnu Emacs vs. XEmacs spring to mind).\nThese clones differ on Unix themselves and will thus also differ on Windows.\nOne thing I found with vim is that the directory structure for plugins etc. is very different on Windows - ~/vim.rc translates to %HOME%\\vim_rc (or similar, depends on stuff I don't understand), vim tends to save stuff like plugins under C:\\Program Files\\vim\\... instead of ~/.vim/...\n",
"The Windows versions typically use the same base source code as the \"regular\", Unix-based versions. There may be sections of the code that are specific to Windows, just as there are sections specific to certain flavours of Unix. In general, though, the Windows versions of these packages will behave identically to the Unix ones, except where this is not possible (for example, gvim in Windows will use Windows GUI elements, of course).\n"
] | [
2,
0,
0,
0,
0
] | [] | [] | [
"emacs",
"vi",
"windows"
] | stackoverflow_0000041525_emacs_vi_windows.txt |
Q:
Parsing a log file with regular expressions
I'm currently working on a parser for our internal log files (generated by log4php, log4net and log4j). So far I have a nice regular expression to parse the logs, except for one annoying bit: Some log messages span multiple lines, which I can't get to match properly. The regex I have now is this:
(?<date>\d{2}/\d{2}/\d{2})\s(?<time>\d{2}:\d{2}:\d{2},\d{3})\s(?<message>.+)
The log format (which I use for testing the parser) is this:
07/23/08 14:17:31,321 log
message
spanning
multiple
lines
07/23/08 14:17:31,321 log message on one line
When I run the parser right now, I get only the line the log starts on. If I change it to span multiple lines, I get only one result (the whole log file).
@samjudson:
You need to pass the RegexOptions.Singleline flag into the regular expression, so that "." matches all characters, not just all characters except new lines (which is the default).
I tried that, but then it matches the whole file. I also tried to set the message-group to .+? (non-greedy), but then it matches a single character (which isn't what I'm looking for either).
The problem is that the pattern for the message matches on the date-group as well, so when it doesn't break on a new-line it just goes on and on and on.
I use this regex for the message group now. It works, unless there's a pattern IN the log message which is the same as the start of the log message.
(?<message>(.(?!\d{2}/\d{2}/\d{2}\s\d{2}:\d{2}:\d{2},\d{3}\s\[\d{4}\]))+)
A:
This will only work if the log message doesn't contain a date at the beginning of the line, but you could try adding a negative look-ahead assertion for a date in the "message" group:
(?<date>\d{2}/\d{2}/\d{2})\s(?<time>\d{2}:\d{2}:\d{2},\d{3})\s(?<message>(.(?!^\d{2}/\d{2}/\d{2}))+)
Note that this requires the use of the RegexOptions.MultiLine flag.
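For what it's worth, a minimal C# sketch of applying that pattern (logText is assumed to hold the whole file; Singleline is added so that "." can cross line breaks and messages can span lines):
using System;
using System.Text.RegularExpressions;

static void DumpEntries(string logText)
{
    const string pattern =
        @"(?<date>\d{2}/\d{2}/\d{2})\s(?<time>\d{2}:\d{2}:\d{2},\d{3})\s(?<message>(.(?!^\d{2}/\d{2}/\d{2}))+)";
    foreach (Match m in Regex.Matches(logText, pattern,
             RegexOptions.Multiline | RegexOptions.Singleline))
    {
        Console.WriteLine("{0} {1}: {2}",
            m.Groups["date"].Value, m.Groups["time"].Value, m.Groups["message"].Value);
    }
}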
A:
You obviously need "message lines" to be distinguishable from "log lines"; if you allow the message part to start with a date/time after a new line, then there is simply no way to determine what is part of a message and what is not. So, instead of using the dot, you need an expression that allows anything that does not include a newline followed by a date and time.
Personally, however, I would not use a regular expression to parse the whole log entry. I prefer using my own loop to iterate over each line and use one simple regular expression to determine whether a line is the start of a new entry or not. Also, from a readability point of view, this would be my preference.
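A rough C# sketch of that loop-based approach (names are illustrative):
using System;
using System.Collections.Generic;
using System.IO;
using System.Text.RegularExpressions;

static List<string> ReadEntries(string path)
{
    // A line starts a new entry iff it begins with the date/time stamp.
    Regex entryStart = new Regex(@"^\d{2}/\d{2}/\d{2}\s\d{2}:\d{2}:\d{2},\d{3}\s");
    List<string> entries = new List<string>();
    foreach (string line in File.ReadAllLines(path))
    {
        if (entryStart.IsMatch(line) || entries.Count == 0)
            entries.Add(line);                          // start of a new entry
        else
            entries[entries.Count - 1] += "\n" + line;  // continuation of the previous one
    }
    return entries;
}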
A:
The problem you have is that you need to terminate the regex pattern so it knows when one message ends and the next starts.
When you were running in default mode the newline was working as an implicit terminator.
The problem is that if you go into multiline mode there's no terminator, so the pattern will gobble up the whole file. Non-greedy matches as few characters as possible, which will be just one.
Now, if you use the date of the next message as the terminator, I think your parser will only get every other line.
Is there something else in the file you could use to terminate the pattern?
A:
You need to pass the RegexOptions.Singleline flag into the regular expression, so that "." matches all characters, not just all characters except new lines (which is the default).
A:
You might find it a lot easier to parse the file with a proper parser generator - ANTLR can generate one in C#... Context Free parsers only seem hard until you "get" them - after that, they are much simpler and friendlier to use than Regular Expressions...
| Parsing a log file with regular expressions | I'm currently working on a parser for our internal log files (generated by log4php, log4net and log4j). So far I have a nice regular expression to parse the logs, except for one annoying bit: Some log messages span multiple lines, which I can't get to match properly. The regex I have now is this:
(?<date>\d{2}/\d{2}/\d{2})\s(?<time>\d{2}:\d{2}:\d{2},\d{3})\s(?<message>.+)
The log format (which I use for testing the parser) is this:
07/23/08 14:17:31,321 log
message
spanning
multiple
lines
07/23/08 14:17:31,321 log message on one line
When I run the parser right now, I get only the line the log starts on. If I change it to span multiple lines, I get only one result (the whole log file).
@samjudson:
You need to pass the RegexOptions.Singleline flag into the regular expression, so that "." matches all characters, not just all characters except new lines (which is the default).
I tried that, but then it matches the whole file. I also tried to set the message-group to .+? (non-greedy), but then it matches a single character (which isn't what I'm looking for either).
The problem is that the pattern for the message matches on the date-group as well, so when it doesn't break on a new-line it just goes on and on and on.
I use this regex for the message group now. It works, unless there's a pattern IN the log message which is the same as the start of the log message.
(?<message>(.(?!\d{2}/\d{2}/\d{2}\s\d{2}:\d{2}:\d{2},\d{3}\s\[\d{4}\]))+)
| [
"This will only work if the log message doesn't contain a date at the beginning of the line, but you could try adding a negative look-ahead assertion for a date in the \"message\" group:\n(?<date>\\d{2}/\\d{2}/\\d{2})\\s(?<time>\\d{2}:\\d{2}:\\d{2},\\d{3})\\s(?<message>(.(?!^\\d{2}/\\d{2}/\n\\d{2}))+)\n\nNote that this requires the use of the RegexOptions.MultiLine flag.\n",
"You obviously need that \"messages lines\" can be distinguished from \"log lines\"; if you allow the message part to start with date/time after a new line, then there is simply no way to determine what is part of a message and what not. So, instead of using the dot, you need an expression that allows anything that does not include a newline followed by a date and time.\nPersonally, however, I would not use a regular expression to parse the whole log entry. I prefer using my own loop to iterate over each line and use one simple regular expression to determine whether a line is the start of a new entry or not. Also from the point of readability this would have my preference.\n",
"The problem you have is that you need to terminate the RegEx pattern so it knows when one message ends and then next starts.\nWhen you were running in default mode the newline was working as an implicit terminator.\nThe problem is if you go into multiline mode there's no terminator so the pattern will gobble up the whole file. Non-greedy matches a few characters as possible which will be just one.\nNow, if use the date for the next message as the terminator I think your parser will only get every other line.\nIs there something else in the file you could to terminate the pattern?\n",
"You need to pass the RegexOptions. Singleline flag in to the regular expression, so that \".\" matches all characters, not just all characters except new lines (which is the default).\n",
"You might find it a lot easier to parse the file with a proper parser generator - ANTLR can generate one in C#... Context Free parsers only seem hard until you \"get\" them - after that, they are much simpler and friendlier to use than Regular Expressions...\n"
] | [
3,
2,
1,
0,
0
] | [] | [] | [
"c#",
"regex"
] | stackoverflow_0000041407_c#_regex.txt |
Q:
Working with Common/Utility Libraries
At the company I work for we have a "Utility" project that is referenced by pretty much every application we build. It's got lots of things like NullHelpers, ConfigSettingHelpers, Common ExtensionMethods etc.
The way we work is that when we want to make a new project, we get the latest version of the project from source control, add it to the solution, and then reference the project from any new projects that get added to the solution.
This has worked ok, however there have been a couple of instances where people have made "breaking changes" to the common project, which works for them, but doesn't work for others.
I've been thinking that rather than adding the common library as a project reference perhaps we should start developing the common library as a standalone dll and publish different versions and target a particular version for a particular project so that changes can be made without any risk to other projects using the common library.
Having said all that I'm interested to see how others reference or use their common libraries.
A:
That's exactly what we're doing. We have a Utility project which has some non-project-specific useful functions. We increase the version manually (minor), build the project in Release configuration, sign it and put it in a shared location.
People then use the specific version of the library.
If some useful methods implemented in specific projects could find their way into the main Utility project, we put them into a special helper class in the project, and mark them as possible Utility candidates (simple //TODO). At the end of the project, we review the candidates and if they stick, we move them to the main library.
Breaking changes are a no-no and we mark methods and classes as [Obsolete] if needed.
But, it doesn't really matter because we increase the version on every publish.
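To illustrate the [Obsolete] marking — a hedged sketch with hypothetical member names (NullHelpers is borrowed from the question):
public static class NullHelpers
{
    [Obsolete("Use IsNullOrDbNull instead; this overload goes away in the next major version.")]
    public static bool IsNull(object value)
    {
        return value == null;
    }
}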
Hope this helps.
A:
We use branching in source control; everyone uses the head branch until they make a release. When they branch the release, they'll branch the common utilities project as well.
Additionally, our utilities project has its own unit tests. That way, other teams can know if they would break the build for other teams.
Of course, we still have problems like you mention occasionally. But when one team checks in a change that breaks another team's build, it usually means the contract for that method/object has been broken somewhere. We look at these as opportunities to improve the design of the common utilities project... or at least to write more unit tests :/
A:
I've had the EXACT same issue!
I used to use project references, but it all seems to go bad when, as you say, you have many projects referencing it.
I now compile to a DLL, and set the CopyLocal property for the DLL reference to false after the first build (otherwise I find it can override sub-projects and just become a mess).
I guess in theory it should probably be GAC'ed, but if it's a problem that changes a lot (as mine does) this can become problematic.
| Working with Common/Utility Libraries | At the company I work for we have a "Utility" project that is referenced by pretty much ever application we build. It's got lots of things like NullHelpers, ConfigSettingHelpers, Common ExtensionMethods etc.
The way we work is that when we want to make a new project, we get the latest version of the project from source control add it to the solution and then reference the project from any new projects that get added to the solution.
This has worked ok, however there have been a couple of instances where people have made "breaking changes" to the common project, which works for them, but doesn't work for others.
I've been thinking that rather than adding the common library as a project reference perhaps we should start developing the common library as a standalone dll and publish different versions and target a particular version for a particular project so that changes can be made without any risk to other projects using the common library.
Having said all that I'm interested to see how others reference or use their common libraries.
| [
"That's exactly what we're doing. We have a Utility project which has some non project specific useful functions. We increase the version manually (minor), build the project in Release version, sign it and put it to a shared location.\nPeople then use the specific version of the library.\nIf some useful methods are implemented in some specific projects which could find their way into main Utility project, we put the to a special helper class in the project, and mark them as a possible Utility candidate (simple //TODO). At the end of the project, we review the candidates and if they stick, we move them to the main library.\nBreaking changes are a no-no and we mark methods and classes as [Obsolete] if needed. \nBut, it doesn't really matter because we increase the version on every publish.\nHope this helps.\n",
"We use branching in source control; everyone uses the head branch until they make a release. When they branch the release, they'll branch the common utilities project as well. \nAdditionally, our utilities project has its own unit tests. That way, other teams can know if they would break the build for other teams. \nOf course, we still have problems like you mention occasionally. But when one team checks in a change that breaks another team's build, it usually means the contract for that method/object has been broken somewhere. We look at these as opportunities to improve the design of the common utilities project... or at least to write more unit tests :/\n",
"I've had the EXACT same issue!\nI used to use project references, but it all seems to go bad, when as you say, you have many projects referencing it.\nI now compile to a DLL, and set the CopyLocal property for the DLL reference to false after the first build (otherwise I find it can override sub projects and just become a mess).\nI guess in theory it should probably be GAC'ed, but if its a problem that is changing a lot (as mine is) this can become problematic..\n"
] | [
5,
3,
1
] | [] | [] | [
"api",
"c#",
"visual_studio"
] | stackoverflow_0000041405_api_c#_visual_studio.txt |
Q:
Recommended Source Control Directory Structure?
I am going to be using Subversion for source control on a new J2EE web application. What directory structure will you recommend for organizing code, tests and documentation?
A:
I usually have
Project Directory
src - actual source
doc - documentation
lib - libraries referenced from source
dep - installation files for dependencies that don't fit in lib
db - database installation script
I work with Visual Studio, so I'm not sure if this works the same in the Java world, but I usually put stuff in different project folders in src. For each source project there's a separate test project. Build files go in the main project directory. I usually put a README there too, documenting how to set up the project if it needs more than just checking out.
EDIT: This is the structure for a single working checkout of the project. It will be duplicated for each branch/tag in your revision control system (remember, in most SVN setups copies are cheap). The above example under Subversion would look like:
/project
/trunk
/src
/doc
/...
/branches
/feature1
/src
/doc
/...
/feature2
/src
/doc
/...
A:
I found some old questions here on SO that might be interesting for you:
Whats a good standard code layout for a php application
Contains a link to an article on Scalable and Flexible Directory Structure for Web Applications (focus on PHP, though)
How to structure a java application, in other words: where do I put my classes?
Structure of Projects in Version Control
A:
To expand on what Mendelt Siebenga suggested, I would also add a web directory (for JSP files, WEB-INF, web.xml, etc).
Tests should go in a folder named test that is a sibling of the main src folder - this way your unit test classes can have the same package name as the source code being tested (to ease with situations where you want to test protected methods or classes, for example... see the JUnit FAQ for this, and this question also on Where should I put my test files?).
I haven't had much use for it myself, but a Maven project will also create a resources folder alongside the src folder for non-source files that you want to package/deploy along with the main source code - things such as properties files, resource bundles, etc. Your mileage may vary on this one.
A:
I use Eclipse for creating J2EE web applications and this will create the following project structure:
WebAppName\
\lib
\src
\tests
etc...
I would then create an SVN folder on our trunk called WebAppNameProject. Within this folder I would create folders called WebAppNameSource, Documentation etc. Within the WebAppNameSource folder I would place the project source generated by Eclipse. Thus I would have the following folder structure in SVN:
\svn\trunk\WebAppNameProject
\WebAppNameSource
\lib
\src
\tests
etc...
\Documentation
Hope this helps.
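For completeness, the skeleton itself is usually created once with svn mkdir; a sketch with a placeholder repository URL:

svn mkdir -m "Add project folder" http://svn.example.com/repo/trunk/WebAppNameProject
svn mkdir -m "Add source folder" http://svn.example.com/repo/trunk/WebAppNameProject/WebAppNameSource
svn mkdir -m "Add docs folder" http://svn.example.com/repo/trunk/WebAppNameProject/Documentation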
| Recommended Source Control Directory Structure? | I am going to be using Subversion for source control on a new J2EE web application. What directory structure will you recommend for organizing code, tests and documentation?
| [
"I usually have\n\nProject Directory\n src - actual source\n doc - documentation\n lib - libraries referenced from source\n dep - installation files for dependencies that don't fit in lib\n db - database installation script\n\nIn work with Visual Studio, I'm not sure if this works the same in the java world. But i usually put stuff in different project folders in src. For each source project there's a separate test project. Build files go in the main project directory. I usually put a README there too documenting how to setup the project if it needs more than just checking out.\nEDIT: This is the structure for a single working checkout of the project. It will be duplicated for each branch/tag in your revision control system (remember, in most SVN system, copies are cheap). The above example under Subversion would look like:\n/project\n /trunk\n /src\n /doc\n /...\n /branches\n /feature1\n /src\n /doc\n /...\n /feature2\n /src\n /doc\n /...\n\n",
"I found some old questions here on SO that might be interesting for you:\n\nWhats a good standard code layout for a php application\n\nContains a link to an article on Scalable and Flexible Directory Structure for Web Applications (focus on PHP, though)\n\nHow to structure a java application, in other words: where do I put my classes?\nStructure of Projects in Version Control\n\n",
"To expand on what Mendelt Siebenga suggested, I would also add a web directory (for JSP files, WEB-INF, web.xml, etc). \nTests should go in a folder named test that is a sibling of the main src folder - this way your unit test classes can have the same package name as the source code being tested (to ease with situations where you want to test protected methods or classes, for example... see the JUnit FAQ for this, and this question also on Where should I put my test files?).\nI haven't had much use for it myself, but a Maven project will also create a resources folder alongside the src folder for non-source code that you want to package/deploy along with the main source code - things such as properties files, resources bundles, etc. Your mileage may vary on this one.\n",
"I use Eclipse for creating J2EE web applications and this will create the following project structure:\nWebAppName\\\n \\lib\n \\src\n \\tests\n etc...\n\nI would then create an SVN folder on our trunk called WebAppNameProject. Within this folder I would create folders called WebAppNameSource, Documentation etc. Within the WebAppNameSource folder I would place the project source generated by Eclipse. Thus I would have the following folder structure in SVN:\n\\svn\\trunk\\WebAppNameProject\n \\WebAppNameSource\n \\lib\n \\src\n \\tests\n etc...\n \\Documentation \n\nHope this helps.\n"
] | [
14,
3,
2,
0
] | [] | [] | [
"code_organization",
"jakarta_ee",
"java",
"svn",
"version_control"
] | stackoverflow_0000041513_code_organization_jakarta_ee_java_svn_version_control.txt |
Q:
Whats the best way to unit test from multiple threads?
This kind of follows on from another question of mine.
Basically, once I have the code to access the file (will review the answers there in a minute) what would be the best way to test it?
I am thinking of creating a method which just spawns lots of BackgroundWorkers or something and tells them all to load/save the file, testing with varying file/object sizes. Then, get a response back from the threads to see if it failed/succeeded/made the world implode, etc.
Can you guys offer any suggestions on the best way to approach this? As I said before, this is all kinda new to me :)
Edit
Following ajmastrean's post:
I am using a console app to test with Debug.Asserts :)
Update
I originally went with BackgroundWorker to deal with the threading (since I am used to that from Windows dev), but I soon realised that for tests where multiple ops (threads) needed to complete before continuing, it was going to be a bit of a hack to get it to do this.
I then followed up on ajmastrean's post and realised I should really be using the Thread class for working with concurrent operations. I will now refactor using this method (albeit a different approach).
A:
In .NET, you can't easily wait for ThreadPool threads to finish without setting up ManualResetEvents or AutoResetEvents. I find these overkill for a quick test method (not to mention kind of complicated to create, set, and manage). BackgroundWorker is also a bit complex with the callbacks and such.
Something I have found that works is
Create an array of threads.
Set up the ThreadStart method of each thread.
Start each thread.
Join on all threads (blocks the current thread until all other threads complete or abort)
public static void MultiThreadedTest()
{
    const int count = 10; // pick how many worker threads to spawn

    Thread[] threads = new Thread[count];

    for (int i = 0; i < threads.Length; i++)
    {
        // pass the method group, not the result of calling it
        threads[i] = new Thread(DoSomeWork);
    }

    foreach (Thread thread in threads)
    {
        thread.Start();
    }

    foreach (Thread thread in threads)
    {
        thread.Join();
    }
}
A:
@ajmastrean, since unit test results must be predictable, we need to synchronize the threads somehow. I can't see a simple way to do it without using events.
I found that ThreadPool.QueueUserWorkItem gives me an easy way to test such use cases
// The original snippet assumes these events already exist:
AutoResetEvent event1 = new AutoResetEvent(false);
AutoResetEvent event2 = new AutoResetEvent(false);

ThreadPool.QueueUserWorkItem(x => {
File.Open(fileName, FileMode.Open);
    event1.Set();  // Start 2nd thread;
event2.WaitOne(); // Blocking the file;
});
ThreadPool.QueueUserWorkItem(x => {
try
{
event1.WaitOne(); // Waiting until 1st thread open file
File.Delete(fileName); // Simulating conflict
}
catch (IOException e)
{
Debug.Write("File access denied");
}
});
| Whats the best way to unit test from multiple threads? | this kind of follows on from another question of mine.
Basically, once I have the code to access the file (will review the answers there in a minute) what would be the best way to test it?
I am thinking of creating a method which just spawns lots of BackgroundWorker's or something and tells them all load/save the file, and test with varying file/object sizes. Then, get a response back from the threads to see if it failed/succeeded/made the world implode etc.
Can you guys offer any suggestions on the best way to approach this? As I said before, this is all kinda new to me :)
Edit
Following ajmastrean's post:
I am using a console app to test with Debug.Asserts :)
Update
I originally rolled with using BackgroundWorker to deal with the threading (since I am used to that from Windows dev) I soon realised that when I was performing tests where multiple ops (threads) needed to complete before continuing, I realised it was going to be a bit of a hack to get it to do this.
I then followed up on ajmastrean's post and realised I should really be using the Thread class for working with concurrent operations. I will now refactor using this method (albeit a different approach).
| [
"In .NET, ThreadPool threads won't return without setting up ManualResetEvents or AutoResetEvents. I find these overkill for a quick test method (not to mention kind of complicated to create, set, and manage). Background worker is a also a bit complex with the callbacks and such.\nSomething I have found that works is \n\nCreate an array of threads.\nSetup the ThreadStart method of each thread.\nStart each thread.\nJoin on all threads (blocks the current thread until all other threads complete or abort) \n\npublic static void MultiThreadedTest()\n{\n Thread[] threads = new Thread[count];\n\n for (int i = 0; i < threads.Length; i++)\n {\n threads[i] = new Thread(DoSomeWork());\n }\n\n foreach(Thread thread in threads)\n {\n thread.Start();\n }\n\n foreach(Thread thread in threads)\n {\n thread.Join();\n }\n}\n\n",
"@ajmastrean, since unit test result must be predictable we need to synchronize threads somehow. I can't see a simple way to do it without using events.\nI found that ThreadPool.QueueUserWorkItem gives me an easy way to test such use cases\n ThreadPool.QueueUserWorkItem(x => { \n File.Open(fileName, FileMode.Open);\n event1.Set(); // Start 2nd tread;\n event2.WaitOne(); // Blocking the file;\n});\nThreadPool.QueueUserWorkItem(x => { \n try\n {\n event1.WaitOne(); // Waiting until 1st thread open file\n File.Delete(fileName); // Simulating conflict\n }\n catch (IOException e)\n {\n Debug.Write(\"File access denied\");\n }\n});\n\n"
] | [
19,
1
] | [
"Your idea should work fine. Basically you just want to spawn a bunch of threads, and make sure the ones writing the file take long enough to do it to actually make the readers wait. If all of your threads return without error, and without blocking forever, then the test succeeds.\n"
] | [
-2
] | [
"c#",
"multithreading",
"testing",
"unit_testing"
] | stackoverflow_0000041568_c#_multithreading_testing_unit_testing.txt |
Q:
PHP Include function outputting unknown char
When using the PHP include function, the include is successfully executed, but it also outputs a char before the include's output; the char has hex value 3F and I have no idea where it is coming from, although it seems to happen with every include.
At first I thought it was file encoding, but this doesn't seem to be the problem. I have created a test case to demonstrate it: (link no longer working) http://driveefficiently.com/testinclude.php. This file consists of only:
<? include("include.inc"); ?>
and include.inc consists of only:
<? echo ("hello, world"); ?>
and yet, the output is: "?hello, world" where the ? is a char with a random value. It is this value that I do not know the origins of and it is sometimes screwing up my sites a bit.
Any ideas of where this could be coming from? At first I thought it might be something to do with file encoding, but I don't think it's the problem.
A:
What you are seeing is a UTF-8 Byte Order Mark:
The UTF-8 representation of the BOM is the byte sequence EF BB BF, which appears as the ISO-8859-1 characters  in most text editors and web browsers not prepared to handle UTF-8.
Byte Order Mark on Wikipedia
PHP does not understand that these characters should be "hidden" and sends these to the browser as if they were normal characters. To get rid of them you will need to open the file using a "proper" text editor that will allow you to save the file as UTF-8 without the leading BOM.
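If you would rather fix it in code than hunt for an editor setting, a few lines of PHP can strip the BOM from a file (a one-off sketch; the file name here is just an example):

<?php
// Remove a leading UTF-8 BOM (EF BB BF) from a file, if present
$filename = 'include.inc';
$contents = file_get_contents($filename);
if (substr($contents, 0, 3) === "\xEF\xBB\xBF") {
    file_put_contents($filename, substr($contents, 3));
}
?>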
You can read more about this problem here
A:
Your web server (or your text editor) apparently includes a BOM in the document. I don't see the rogue character in my browser except when I set the site's encoding explicitly to Latin-1. Then I see two (!) UTF-8 BOMs.
/EDIT: From the fact that there are two BOMs I conclude that the BOM is actually included by your editor at the beginning of the file. What editor do you use? If you use Visual Studio, you've got to say “Save As …” in the File menu and then choose the button “Save with encoding …”. There, choose “UTF-8 without BOM” or something similar.
A:
It doesn't show up on the rendered page in Firefox or IE but you can see the funny character when you View Source in IE
Is this on a Linux machine? Could you do find & replace with vim or sed to see if you can get rid of the 3F that way?
If it's on Windows, try opening include.inc with Notepad to see if the funny char is visible & can be deleted.
I'd also be curious to see what happens if you copy the code out of the include and just run it by itself.
A:
I see hello, world on the page you linked to. No problems that I can see...
I'm using Firefox 3.0.1 and Windows XP. What browser/OS are you running? Perhaps that might be the problem.
A:
Character 3F actually is the question mark; it isn't just displaying as one.
I get the same results as Thomas, no question mark showing up.
In theory it could be some problem with a web proxy but I am inclined to suspect a stray question mark in your PHP markup...which perhaps you have fixed by now so we don't see the problem.
A:
I'd also be curious to see what
happens if you copy the code out of
the include and just run it by itself.
Mark: this is on a shared hosting solution, so I cannot get shell access to the file. However, as you can see here, there are no characters that shouldn't be there, and running the same file as a script does not produce this char. (The shared hosting company has been of no help, continually telling me it is a browser issue.)
| PHP Include function outputting unknown char | When using the php include function the include is succesfully executed, but it is also outputting a char before the output of the include is outputted, the char is of hex value 3F and I have no idea where it is coming from, although it seems to happen with every include.
At first I thbought it was file encoding, but this doesn't seem to be a problem. I have created a test case to demonstrate it: (link no longer working) http://driveefficiently.com/testinclude.php this file consists of only:
<? include("include.inc"); ?>
and include.inc consists of only:
<? echo ("hello, world"); ?>
and yet, the output is: "?hello, world" where the ? is a char with a random value. It is this value that I do not know the origins of and it is sometimes screwing up my sites a bit.
Any ideas of where this could be coming from? At first I thought it might be something to do with file encoding, but I don't think its a problem.
| [
"What you are seeing is a UTF-8 Byte Order Mark:\n\nThe UTF-8 representation of the BOM is the byte sequence EF BB BF, which appears as the ISO-8859-1 characters  in most text editors and web browsers not prepared to handle UTF-8.\nByte Order Mark on Wikipedia\n\nPHP does not understand that these characters should be \"hidden\" and sends these to the browser as if they were normal characters. To get rid of them you will need to open the file using a \"proper\" text editor that will allow you to save the file as UTF-8 without the leading BOM.\nYou can read more about this problem here\n",
"Your web server (or your text editor) apparently includes a BOM into the document. I don't see the rogue character in my browser except when I set the site's encoding explicitly to Latin-1. Then, I see two (!) UTF-8 BOMs.\n/EDIT: From the fact that there are two BOMs I conclude that the BOM is actually included by your editor at the beginning of the file. What editor do you use? If you use Visual Studio, you've got to say “Save As …” in the File menu and then choose the button “Save with encoding …”. There, choose “UTF-8 without BOM” or something similar.\n",
"It doesn't show up on the rendered page in Firefox or IE but you can see the funny character when you View Source in IE\n\nIs this on a Linux machine? Could you do find & replace with vim or sed to see if you can get rid of the 3F that way? \nIf it's on Windows, try opening include.inc with Notepad to see if the funny char is visible & can be deleted.\nI'd also be curious to see what happens if you copy the code out of the include and just run it by itself.\n",
"I see hello, world on the page you linked to. No problems that I can see...\nI'm using Firefox 3.0.1 and Windows XP. What browser/OS are you running? Perhaps that might be the problem.\n",
"Character 3F actually is the question mark, it isn't just displaying as one.\nI get the same results as Thomas, no question mark showing up.\nIn theory it could be some problem with a web proxy but I am inclined to suspect a stray question mark in your PHP markup...which perhaps you have fixed by now so we don't see the problem.\n",
"\nI'd also be curious to see what\n happens if you copy the code out of\n the include and just run it by itself.\n\nMark: this is on a shared hosting solution, so I can not get shell access to the file. However, as you can see here, there are no characters that shouldn't be there, and running the same file as a script does not produce this char. (The shared hosting company have been of 0 help, continually telling me it is a browser issue).\n"
] | [
11,
3,
1,
0,
0,
0
] | [] | [] | [
"byte_order_mark",
"encoding",
"php"
] | stackoverflow_0000041647_byte_order_mark_encoding_php.txt |
Q:
jQuery tablesorter plugin column width incorrect in IE7
I am using the tablesorter plugin (http://tablesorter.com) and am having a problem with column widths in IE7. It looks fine in Firefox and sometimes in IE7.
Here's a screenshot of the problem:
IE7 View
and here's how it's supposed to look:
Firefox view
A:
This is a common layout problem in IE. If you are using CSS to style the column widths, also add the widths to the td tags. Set the first column to a percentage that will try to suck up most of the space, like 50% or something. The first column will take everything that it can and the other columns will abide by their static widths.
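For example, the markup might end up looking like this (the widths here are placeholders to adjust for your table):

<!-- first column soaks up the slack; the others keep static widths -->
<tr>
    <td style="width: 50%;">Name</td>
    <td style="width: 100px;">Date</td>
    <td style="width: 100px;">Status</td>
</tr>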
| jQuery tablesorter plugin column width incorrect in IE7 | I am using the tablesorter plugin (http://tablesorter.com) and am having a problem with column widths in IE7. It looks fine in Firefox and sometimes in IE7.
Here's a screenshot of the problem:
IE7 View
and here's how it's supposed to look:
Firefox view
| [
"This is a common layout problem in IE. If you are using CSS to style the columns width, also add the column widths to the td tags. Set the first column to a percentage that will try to suck up most of the space, like 50% or something. The first column width will take everything that it can and the other columns will abide by their static width.\n"
] | [
6
] | [] | [] | [
"html",
"internet_explorer_7",
"javascript",
"jquery",
"tablesorter"
] | stackoverflow_0000041692_html_internet_explorer_7_javascript_jquery_tablesorter.txt |
Q:
Unit testing in Xcode 3.1
I read the question on 'The best way to unit test Objective-C' and followed the instructions, but no matter what I do, the unit tests do not run. Actually, the entire program does not run; I get the following message.
dyld: Library not loaded: @rpath/SenTestingKit.framework/Versions/A/SenTestingKit Referenced from /Users/garethlewis/work/objc/UnitTesting/build/Debug/UnitTesting
Reason: image not found
I have set the DYLD_FALLBACK_FRAMEWORK_PATH variable, and also the XCInjectBundle as well as the DYLD_INSERT_LIBRARIES and added the variable -SenTest All.
Surely I don't have the only installation of Xcode 3.1 on which unit testing fails.
Can someone who has managed to get unit testing working on Xcode 3.1 give some details on what needs to be done? It would help so much with what I am trying to do.
A:
You don't need to do this stuff to just run your tests.
If you're writing tests for an application, you should just need to set the Test Host and Bundle Loader build settings for your unit test bundle target and they will be run as part of your build. If you're writing tests for a framework you don't even need to do that, just make sure your test bundle links against your framework.
I assume you're actually talking about debugging your tests, not just running them. If so, it's important to give us the following information:
what kind of tests — application or framework — you're trying to debug
what environment variables you set, and what values you set them to
what arguments you set (-SenTest All should be an argument, not environment variable)
what the full error shown in your debug console is, not just the specific failure
That will help diagnose what's going on.
At first glance, it looks like you might have a typo in your DYLD_FALLBACK_FRAMEWORK_PATH because that determines where dyld will look for the SenTestingKit.framework binary if @rpath cannot be resolved. Knowing what it's set to will probably help.
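For what it's worth, when debugging application tests in Xcode 3.x the settings usually end up looking something like this (the bundle name is a placeholder and the paths depend on your install; check where SenTestingKit actually lives on your machine):

DYLD_FALLBACK_FRAMEWORK_PATH = $(DEVELOPER_LIBRARY_DIR)/Frameworks
XCInjectBundle = $(BUILT_PRODUCTS_DIR)/MyTests.octest
Arguments: -SenTest All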
(PS - It's Xcode.)
| Unit testing in Xcode 3.1 | I read the question on 'The best way to unit test Objective-C and followed the instructions, but no matter what I do, the Unit tests do not run. Actually the entire program does not run, I get the following message.
dyld: Library not loaded: @rpath/SenTestingKit.framework/Versions/A/SenTestingKit Referenced from /Users/garethlewis/work/objc/UnitTesting/build/Debug/UnitTesting
Reason: image not found
I have set the DYLD_FALLBACK_FRAMEWORK_PATH variable, and also the XCInjectBundle as well as the DYLD_INSERT_LIBRARIES and added the variable -SenTest All.
I can't have the only installation of Xcode 3.1 that Unit testing fails on.
Can someone who has managed to get Unit Testing on Xcode 3.1 working give some details on what needs to be done. It would help so much, with what I am trying to do.
| [
"You don't need to do this stuff to just run your tests.\nIf you're writing tests for an application, you should just need to set the Test Host and Bundle Loader build settings for your unit test bundle target and they will be run as part of your build. If you're writing tests for a framework you don't even need to do that, just make sure your test bundle links against your framework.\nI assume you're actually talking about debugging your tests, not just running them. If so, it's important to give us the following information:\n\nwhat kind of tests — application or framework — you're trying to debug\nwhat environment variables you set, and what values you set them to\nwhat arguments you set (-SenTest All should be an argument, not environment variable)\nwhat the full error shown in your debug console is, not just the specific failure\n\nThat will help diagnose what's going on.\nAt first glance, it looks like you might have a typo in your DYLD_FALLBACK_FRAMEWORK_PATH because that determines where dyld will look for the SenTestingKit.framework binary if @rpath cannot be resolved. Knowing what it's set to will probably help.\n(PS - It's Xcode.)\n"
] | [
4
] | [] | [] | [
"unit_testing",
"xcode",
"xcode3.1"
] | stackoverflow_0000041337_unit_testing_xcode_xcode3.1.txt |
Q:
Is there a way to access web.xml properties from a Java Bean?
Is there any way in the Servlet API to access properties specified in web.xml (such as initialization parameters) from within a Bean or Factory class that is not associated at all with the web container?
For example, I'm writing a Factory class, and I'd like to include some logic within the Factory to check a hierarchy of files and configuration locations to see which if any are available to determine which implementation class to instantiate - for example,
a properties file in the classpath,
a web.xml parameter,
a system property, or
some default logic if nothing else is available.
I'd like to be able to do this without injecting any reference to ServletConfig or anything similar into my Factory - the code should be able to run OK outside of a Servlet Container.
This might sound a little uncommon, but I'd like this component to be packaged with one of our webapps and also be versatile enough to be packaged with some of our command-line tools, without requiring a new properties file just for my component - so I was hoping to piggyback on top of other configuration files such as web.xml.
If I recall correctly, .NET has something like Request.GetCurrentRequest() to get a reference to the currently executing Request - but since this is a Java app I'm looking for something similar that could be used to gain access to ServletConfig.
A:
One way you could do this is:
public class FactoryInitialisingServletContextListener implements ServletContextListener {
public void contextDestroyed(ServletContextEvent event) {
}
public void contextInitialized(ServletContextEvent event) {
Properties properties = new Properties();
ServletContext servletContext = event.getServletContext();
Enumeration<?> keys = servletContext.getInitParameterNames();
while (keys.hasMoreElements()) {
String key = (String) keys.nextElement();
String value = servletContext.getInitParameter(key);
properties.setProperty(key, value);
}
Factory.setServletContextProperties(properties);
}
}
public class Factory {
static Properties _servletContextProperties = new Properties();
public static void setServletContextProperties(Properties servletContextProperties) {
_servletContextProperties = servletContextProperties;
}
}
And then have the following in your web.xml
<listener>
  <listener-class>com.acme.FactoryInitialisingServletContextListener</listener-class>
</listener>
If your application is running in a web container, the listener will be invoked by the container once the context has been created, in which case _servletContextProperties will be replaced with any context-params specified in the web.xml.
If your application is running outside a web container, then _servletContextProperties will be empty.
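Inside Factory, the lookup hierarchy from the question could then be resolved roughly like this (the properties file name and the method itself are invented for illustration):

public static String getSetting(String key, String defaultValue) {
    // 1. A properties file on the classpath (name is an assumption)
    try {
        java.io.InputStream in = Factory.class.getResourceAsStream("/factory.properties");
        if (in != null) {
            java.util.Properties p = new java.util.Properties();
            p.load(in);
            in.close();
            String fromFile = p.getProperty(key);
            if (fromFile != null) return fromFile;
        }
    } catch (java.io.IOException ignored) {
        // fall through to the next source
    }
    // 2. web.xml context-params captured by the listener (empty outside a container)
    String fromContext = _servletContextProperties.getProperty(key);
    if (fromContext != null) return fromContext;
    // 3. A system property, otherwise 4. the supplied default
    return System.getProperty(key, defaultValue);
}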
A:
Have you considered using the Spring framework for this? That way, your beans don't get any extra cruft, and Spring handles the configuration setup for you.
A:
I think that you will have to add an associated bootstrap class which takes a reference to a ServletConfig (or ServletContext) and transcribes those values to the Factory class. At least this way you can package it separately.
@toolkit : Excellent, most humbled - This is something that I have been trying to do for a while
| Is there a way to access web.xml properties from a Java Bean? | Is there any way in the Servlet API to access properties specified in web.xml (such as initialization parameters) from within a Bean or Factory class that is not associated at all with the web container?
For example, I'm writing a Factory class, and I'd like to include some logic within the Factory to check a hierarchy of files and configuration locations to see which if any are available to determine which implementation class to instantiate - for example,
a properties file in the classpath,
a web.xml parameter,
a system property, or
some default logic if nothing else is available.
I'd like to be able to do this without injecting any reference to ServletConfig or anything similiar to my Factory - the code should be able to run ok outside of a Servlet Container.
This might sound a little bit uncommon, but I'd like for this component I'm working on to be able to be packaged with one of our webapps, and also be versatile enough to be packaged with some of our command-line tools without requiring a new properties file just for my component - so I was hoping to piggyback on top of other configuration files such as web.xml.
If I recall correctly, .NET has something like Request.GetCurrentRequest() to get a reference to the currently executing Request - but since this is a Java app I'm looking for something simliar that could be used to gain access to ServletConfig.
| [
"One way you could do this is:\npublic class FactoryInitialisingServletContextListener implements ServletContextListener {\n\n public void contextDestroyed(ServletContextEvent event) {\n }\n\n public void contextInitialized(ServletContextEvent event) {\n Properties properties = new Properties();\n ServletContext servletContext = event.getServletContext();\n Enumeration<?> keys = servletContext.getInitParameterNames();\n while (keys.hasMoreElements()) {\n String key = (String) keys.nextElement();\n String value = servletContext.getInitParameter(key);\n properties.setProperty(key, value);\n }\n Factory.setServletContextProperties(properties);\n }\n}\n\npublic class Factory {\n\n static Properties _servletContextProperties = new Properties();\n\n public static void setServletContextProperties(Properties servletContextProperties) {\n _servletContextProperties = servletContextProperties;\n }\n}\n\nAnd then have the following in your web.xml\n<listener>\n <listener-class>com.acme.FactoryInitialisingServletContextListener<listener-class>\n</listener>\n\nIf your application is running in a web container, then the listener will be invoked by the container once the context has been created. In which case, the _servletContextProperties will be replaced with any context-params specified in the web.xml.\nIf your application is running outside a web container, then _servletContextProperties will be empty.\n",
"Have you considered using the Spring framework for this? That way, your beans don't get any extra cruft, and spring handles the configuration setup for you.\n",
"I think that you will have to add an associated bootstrap class which takes a reference to a ServletConfig (or ServletContext) and transcribes those values to the Factory class. At least this way you can package it separately.\n@toolkit : Excellent, most humbled - This is something that I have been trying to do for a while\n"
] | [
5,
1,
0
] | [] | [] | [
"java",
"servlets"
] | stackoverflow_0000041659_java_servlets.txt |
Q:
Is there a python module for regex matching in zip files
I have over a million text files compressed into 40 zip files. I also have a list of about 500 model names of phones. I want to find out the number of times a particular model was mentioned in the text files.
Is there any Python module which can do a regex match on the files without unzipping them? Is there a simple way to solve this problem without unzipping?
A:
There's nothing that will automatically do what you want.
However, there is a python zipfile module that will make this easy to do. Here's how to iterate over the lines in the file.
#!/usr/bin/python
import zipfile
f = zipfile.ZipFile('myfile.zip')
for subfile in f.namelist():
print subfile
data = f.read(subfile)
for line in data.split('\n'):
print line
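To tie that back to the question, a rough counting sketch (the model names and archive names below are placeholders):

#!/usr/bin/python
import re
import zipfile

models = ['Nokia N95', 'iPhone 3G']  # your ~500 model names
counts = dict((m, 0) for m in models)
pattern = re.compile('|'.join(re.escape(m) for m in models))

for zipname in ['archive1.zip', 'archive2.zip']:  # your 40 zip files
    f = zipfile.ZipFile(zipname)
    for subfile in f.namelist():
        for match in pattern.findall(f.read(subfile)):
            counts[match] += 1

print counts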
A:
You could loop through the zip files, reading individual files using the zipfile module and running your regex on those, eliminating the need to unzip all the files at once.
I'm fairly certain that you can't run a regex over the zipped data, at least not meaningfully.
A:
To access the contents of a zip file you have to unzip it, although the zipfile package makes this fairly easy, as you can unzip each file within an archive individually.
Python zipfile module
A:
Isn't it (at least theoretically) possible to read in the ZIP's Huffman coding and then translate the regexp into the Huffman code? Might this be more efficient than first decompressing the data, then running the regexp?
(Note: I know it wouldn't be quite that simple: you'd also have to deal with other aspects of the ZIP coding—file layout, block structures, back-references—but one imagines this could be fairly lightweight.)
EDIT: Also note that it's probably much more sensible to just use the zipfile solution.
| Is there a python module for regex matching in zip files | I have over a million text files compressed into 40 zip files. I also have a list of about 500 model names of phones. I want to find out the number of times a particular model was mentioned in the text files.
Is there any python module which can do a regex match on the files without unzipping it. Is there a simple way to solve this problem without unzipping?
| [
"There's nothing that will automatically do what you want.\nHowever, there is a python zipfile module that will make this easy to do. Here's how to iterate over the lines in the file.\n#!/usr/bin/python\n\nimport zipfile\nf = zipfile.ZipFile('myfile.zip')\n\nfor subfile in f.namelist():\n print subfile\n data = f.read(subfile)\n for line in data.split('\\n'):\n print line\n\n",
"You could loop through the zip files, reading individual files using the zipfile module and running your regex on those, eliminating to unzip all the files at once. \nI'm fairly certain that you can't run a regex over the zipped data, at least not meaningfully.\n",
"To access the contents of a zip file you have to unzip it, although the zipfile package makes this fairly easy, as you can unzip each file within an archive individually.\nPython zipfile module\n",
"Isn't it (at least theoretically) possible, to read in the ZIP's Huffman coding and then translate the regexp into the Huffman code? Might this be more efficient than first de-compressing the data, then running the regexp?\n(Note: I know it wouldn't be quite that simple: you'd also have to deal with other aspects of the ZIP coding—file layout, block structures, back-references—but one imagines this could be fairly lightweight.)\nEDIT: Also note that it's probably much more sensible to just use the zipfile solution.\n"
] | [
10,
0,
0,
0
] | [] | [] | [
"python",
"regex",
"text_processing",
"zip"
] | stackoverflow_0000014281_python_regex_text_processing_zip.txt |
Q:
HTTP POST - I'm stuck
I have to POST some parameters to a URL outside my network, and the developers on the other side asked me not to use HTTP parameters: instead I have to post my key-values in HTTP headers.
The fact is that I don't really understand what they mean: I tried to use an AJAX-like POST, with XmlHttp objects, and I also tried to write to the header with something like
Request.Headers.Add(key,value);
but I cannot (exception from the framework); I tried the other way around, using the Response object like
Response.AppendHeader("key", "value");
and then redirect to the page... but this doesn't work either.
It's evident, I think, that I'm stuck there, any help?
EDIT I forgot to tell you that my environment is .NET 2.0, C#, on Windows Server 2003.
The exception I got is
System.PlatformNotSupportedException was unhandled by user code
Message="Operation is not supported on this platform."
Source="System.Web"
This looks like it's caused by my attempt to call Request.Headers.Add; a year ago MS published some security fixes that don't permit this.
A:
Have you tried the WebClient class? An example might look like:
WebClient client = new WebClient();
NameValueCollection data = new NameValueCollection();
data["var1"] = "var1";
client.UploadValues("http://somewhere.com/api", "POST", data);
A:
Take a look at HttpWebRequest. You should be able to construct a request to the URL in question using HttpWebRequest.Method = "POST".
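Since the other side wants the key/values in headers rather than in the body, a minimal sketch might look like this (the URL and header name are placeholders; requires System.Net):

HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://example.com/api");
request.Method = "POST";
request.Headers.Add("X-Key", "value");  // custom key/value pairs go in the headers
request.ContentLength = 0;              // empty body

using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
{
    Console.WriteLine(response.StatusCode);
}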
A:
Like @lassevk said, a redirect won't work.
You should use the WebRequest class to do an HTTP POST from your page or application. There's an example here.
A:
You should post more information.
For instance, is this C#? It looks like it, but I might be wrong.
Also, you say you get an exception, what is the exception type and message?
In any case, you can't redirect to a page for a POST; you need to submit it from the browser, not from a server redirect. So if you want to automate this, I would guess you need to generate an HTML page with a form tag and some hidden input fields, and then submit it with JavaScript.
A:
I think they mean they don't want you to use URL parameters (GET). If you use http headers, it's not really querying through POST any more.
A:
What language/framework?
Using Python and httplib2, you should be able to do something like:
import httplib2
import urllib

http = httplib2.Http()
http.request(url, 'POST', headers={'key': 'value'}, body=urllib.urlencode({}))
A:
I believe that the Request object would only accept a certain set of predefined headers.
There's an enumeration that lists all the supported HTTP Headers too.
But I can't remember it at the moment... I'll look it up in a sec...
A:
I tested your scenario using 2 sample pages and the XmlHttpRequest option.
Custom headers are available in the aspx page posted to, using XmlHttpRequest.
Create the following 2 pages. Make sure the aspx page is in a solution, so that you can run them in the debugger, set breakpoints and inspect the Request.Headers collection.
<html>
<head>
<script language="javascript">
function SendRequest()
{
    var r = new XMLHttpRequest();
    r.open('get', 'http://localhost/TestSite/CheckHeader.aspx');
    r.setRequestHeader('X-Test', 'one');
    r.setRequestHeader('X-Test', 'two');
    r.send(null);
}
</script>
</head>
<body>
<form>
<input type="button" value="Click Me" OnClick="SendRequest();" />
</form>
</body>
</html>
CheckHeader.aspx
using System;
using System.Web;
using System.Web.UI;
public partial class CheckHeader : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
string value = string.Empty;
foreach (string key in Request.Headers)
value = Request.Headers[key].ToString();
}
}
A:
The exception I was facing yesterday was caused by my misguided attempt to write to the headers of the already-built page.
When I started creating my Request following one of the methods indicated here, I could write my headers.
Now I'm using the WebRequest object, as in the sample indicated by @sectrean, here.
Thanks a lot to everybody. StackOverflow rocks :-)
| HTTP POST - I'm stuck | I have to POST some parameters to a URL outside my network, and the developers on the other side asked me to not use HTTP Parameters: instead I have to post my key-values in HTTP Headers.
The fact is that I don't really understand what they mean: I tried to use a ajax-like post, with XmlHttp objects, and also I tried to write in the header with something like
Request.Headers.Add(key,value);
but I cannot (exception from the framework); I tried the other way around, using the Response object like
Response.AppendHeader("key", "value");
and then redirect to the page... but this doesn't work, as well.
It's evident, I think, that I'm stuck there, any help?
EDIT I forgot to tell you that my environment is .Net 2.0, c#, on Win server 2003.
The exception I got is
System.PlatformNotSupportedException was unhandled by user code
Message="Operation is not supported on this platform."
Source="System.Web"
This looks like it's caused by my tentative to Request.Add, MS an year ago published some security fixes that don't permit this.
| [
"Have you tried the WebClient class? An example might look like:\n WebClient client = new WebClient();\n NameValueCollection data = new NameValueCollection();\n data[\"var1\"] = \"var1\";\n client.UploadValues(\"http://somewhere.com/api\", \"POST\", data);\n\n",
"Take a look at HttpWebRequest. You should be able to construct a request to the URL in question using HttpWebRequest.Method = \"POST\".\n",
"Like @lassevk said, a redirect won't work.\nYou should use the WebRequest class to do an HTTP POST from your page or application. There's an example here.\n",
"You should post more information.\nFor instance, is this C#? It looks like it, but I might be wrong.\nAlso, you say you get an exception, what is the exception type and message?\nIn any case, you can't redirect to a page for POST, you need to submit it from the browser, not from the server redirect, so if you want to automate this, I would guess you would need to generate a html page with a form tag, with some hidden input fields, and then submit it with javascript.\n",
"I think they mean they don't want you to use URL parameters (GET). If you use http headers, it's not really querying through POST any more.\n",
"What language/framework?\nUsing Python and httplib2, you should be able to do something like:\nhttp = httplib2.Http()\nhttp.request(url, 'POST', headers={'key': 'value'}, body=urllib.urlencode(''))\n\n",
"I believe that the Request object would only accept a certain set of predefined headers.\nThere's an enumeration that lists all the supported HTTP Headers too.\nBut I can't remember it at the moment... I'll look it up in a sec...\n",
"I tested your scenario using 2 sample pages using XmlHttpRequest option.\nCustom headers are available in the aspx page posted to, using XmlHttpRequest.\nCreate the following 2 pages. Make sure the aspx page is in a solution , so that you can run the in the debugger, set break point and inspect the Request.Header collection.\n<html>\n<head>\n< script language=\"javascript\">\n\nfunction SendRequest()\n{\n var r = new XMLHttpRequest();\n r.open('get', 'http://localhost/TestSite/CheckHeader.aspx');\n r.setRequestHeader('X-Test', 'one');\n r.setRequestHeader('X-Test', 'two');\n r.send(null);\n\n}\n< script / >\n\n</head>\n<body>\n<form>\n<input type=\"button\" value=\"Click Me\" OnClick=\"SendRequest();\" />\n</form>\n</body>\n</html>\n\nCheckHeader.aspx\nusing System;\nusing System.Web;\nusing System.Web.UI;\npublic partial class CheckHeader : System.Web.UI.Page\n{\nprotected void Page_Load(object sender, EventArgs e)\n{\n string value = string.Empty;\n foreach (string key in Request.Headers)\n value = Request.Headers[key].ToString();\n}\n\n}\nMan.. This html editor sucks.. or i do not know how to use it...\n",
"The exception I was facing yesterday was caused by my stupid try to write on the headers of the already built page.\nWhen I started creating my Request following one of the mothods indicated here, I could write my headers. \nNow I'm using the WebRequest object, as in the sample indicated by @sectrean, here.\nThanks a lot to everybody. StackOverflow rocks :-)\n"
] | [
3,
1,
1,
0,
0,
0,
0,
0,
0
] | [] | [] | [
"asp.net",
"forms",
"http",
"post"
] | stackoverflow_0000040452_asp.net_forms_http_post.txt |
Q:
Maintain the correct version for a COM dll referenced in a .NET project
I want to reference a COM DLL in a .NET project, but I also want to make sure that the interop DLL created will have the correct version (so that patches will know when the DLL must be changed).
If I use TlbImp I can specify the required version with the /asmversion flag but when I add it directly from Visual Studio it gets a version that has nothing to do with the original COM DLL's version.
I tried changing the version in the .vcproj file
<ItemGroup>
<COMReference Include="MYDLLLib">
<Guid>{459F8813-D74D-DEAD-BEEF-00CAFEBABEA5}</Guid>
<!-- I changed this -->
<VersionMajor>1</VersionMajor>
<!-- This too -->
<VersionMinor>0</VersionMinor>
<Lcid>0</Lcid>
<WrapperTool>tlbimp</WrapperTool>
<Isolated>False</Isolated>
</COMReference>
</ItemGroup>
But then the project failed to build with the following error:
error CS0246: The type or namespace name 'MYDLLLib' could not be found (are you missing a using directive or an assembly reference?)
Is there any way to get this done without creating all my COM references with TlbImp in advance?
If the answer is yes is there a way to specify a build number in addition to the major and minor versions? (e.g 1.2.42.0)
A:
The Guid refers to the Guid for the TypeLib, not the DLL directly. The version numbers refer to the TypeLib's version, not the DLL's.
The version number will come from your IDL file, and I believe it only supports a major and minor version, not a build version. Is this version changing when you modify the typelib?
The version numbers will appear in the registry under:
HKEY_CLASSES_ROOT\Typelib\{typelib uuid}\Major.Minor
If the minor version is set to 0 then I believe it will import the 'latest' version that matches the major version, but the major version must be set to something.
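If you do need a full four-part version (e.g. 1.2.42.0), pre-generating the interop assembly with TlbImp is still the most reliable route; something along these lines as a pre-build step (the file names are placeholders):

tlbimp MyDllLib.dll /out:Interop.MYDLLLib.dll /asmversion:1.2.42.0

Then reference the generated Interop.MYDLLLib.dll directly instead of adding the COM reference in Visual Studio.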
| Maintain the correct version for a COM dll referenced in a .NET project | I want to reference a COM DLL in a .NET project, but I also want to make sure that the interop DLL created will have the correct version (so that patches will know when the DLL must be changed).
If I use TlbImp I can specify the required version with the /asmversion flag but when I add it directly from Visual Studio it gets a version that has nothing to do with the original COM DLL's version.
I tried changing the version in the .vcproj file
<ItemGroup>
<COMReference Include="MYDLLLib">
<Guid>{459F8813-D74D-DEAD-BEEF-00CAFEBABEA5}</Guid>
<!-- I changed this -->
<VersionMajor>1</VersionMajor>
<!-- This too -->
<VersionMinor>0</VersionMinor>
<Lcid>0</Lcid>
<WrapperTool>tlbimp</WrapperTool>
<Isolated>False</Isolated>
</COMReference>
</ItemGroup>
But then the project failed to build with the following error:
error CS0246: The type or namespace name 'MYDLLLib' could not be found (are you missing a using directive or an assembly reference?)
Is there any way to get this done without creating all my COM references with TlbImp in advance?
If the answer is yes is there a way to specify a build number in addition to the major and minor versions? (e.g 1.2.42.0)
| [
"The Guid refers to the Guid for the TypeLib not the DLL directly. The version numbers refer to the TypeLib's version not the DLLs. \nThe version number will come from your idl file, and I believe it only supports a major and minor version and not a build version. Is this version changing when you modify the typelib?\nThe version numbers will appear in the registry under:\nHKEY_CLASSES_ROOT\\Typelib\\{typelib uuid}\\Major.Minor\n\nIf the minor version is set to 0 then I believe it will import the 'latest' version that matches the major version, but the major version must be set to something.\n"
] | [
2
] | [] | [] | [
".net",
"c++",
"com"
] | stackoverflow_0000041832_.net_c++_com.txt |
Q:
What is a DSL and where should I use it?
I'm hearing more and more about domain specific languages being thrown about and how they change the way you treat business logic, and I've seen Ayende's blog posts and things, but I've never really gotten exactly why I would take my business logic away from the methods and situations I'm using in my provider.
If you've got some background using these things, any chance you could put it in real laymans terms:
What exactly does building a DSL mean?
What languages are you using?
Where does using a DSL make sense?
What is the benefit of using DSLs?
A:
DSLs are good in situations where you need to hand control of some aspect of the system over to someone else. I've used them in rules engines, where you create a simple language that is easier for less-technical folks to use to express themselves, particularly in workflows.
In other words, instead of making them learn Java:
DocumentDAO myDocumentDAO = ServiceLocator.getDocumentDAO();
for (int id : documentIDS) {
    Document myDoc = myDocumentDAO.loadDoc(id);
    if (myDoc.getDocumentStatus().equals(DocumentStatus.UNREAD)) {
        ReminderService.sendUnreadReminder(myDoc);
    }
}
I can write a DSL that lets me say:
for (document : documents) {
    if (document is unread) {
        document.sendReminder
    }
}
There are other situations, but basically, anywhere you might want to use a macro language, script a workflow, or allow after-market customization, these are all candidates for DSLs.
A:
DSL stands for Domain-Specific Language, i.e. a language designed specifically for solving problems in a given area.
For example, Markdown (the markup language used to edit posts on SO) can be considered a DSL.
Personally I find a place for a DSL in almost every large project I'm working on. Most often I need some kind of SQL-like query language. Another common usage is rule-based systems, where you need some kind of language to specify rules/conditions.
A DSL makes sense in contexts where it's difficult to describe or solve the problem by traditional means.
A:
If you use Microsoft Visual Studio, you are already using multiple DSLs -- the design surface for web forms, winforms, etc. is a DSL. The Class Designer is another example.
A DSL is just a set of tools that (at least in theory) make development in a specific "domain" (i.e. visual layout) easier, more intuitive, and more productive.
As far as building a DSL goes, some of the stuff people like Ayende have written about relates to "text parsing" DSLs, letting developers (or end users) enter "natural text" into an application, which parses the text and generates some sort of code or output based on it.
You could use any language to build your own DSL. Microsoft Visual Studio has a lot of extensibility points, and the patterns & practices "Guidance Automation Toolkit" and Visual Studio SDK can assist you in adding DSL functionality to Visual Studio.
A:
DSL is just a fancy name and can mean different things:
Rails (the Ruby thing) is sometimes called a DSL because it adds special methods (and overwrites some built-in ones too) for talking about web applications
ANT, Makefile syntax etc. are also DSLs, but have their own syntax. This is what I would call a DSL.
One important aspect of this hype: It does make sense to think of your application in terms of a language. What do you want to talk about in your app? These should then be your classes and methods:
Define a "language" (either a real syntax as proposed by others on this page or a class hierarchy for your favorite language) that is capable of expressing your problem.
Solve your problem in terms of that language.
A:
Building a DSL basically means writing a compiler for a custom language. A good free and open tool to develop them is available at ANTLR. Recently, I've been looking at this DSL for a state machine language to use on a new project. I agree with Tim Howland above that they can be a good way to let someone else customize your application.
A:
FYI, a book on DSLs is in the pipeline as part of Martin Fowler's signature series.
If it's of the same standard as the other books in the series, it should be a good read.
More information here
A:
A DSL is basically creating your own small sublanguage to solve a specific domain problem. This is solved using method chaining. Languages where dots and parentheses are optional help make these expressions seem more natural. It can also be similar to a builder pattern.
DSLs of this kind aren't languages themselves, but rather a pattern that you apply to your API to make the calls more self-explanatory.
One example is Guice; the Guice Users Guide (http://docs.google.com/View?docid=dd2fhx4z_5df5hw8) has some description further down of how interfaces are bound to implementations, and in what contexts.
Another common example is for query languages. For example:
NewsDAO.writtenBy("someUser").before("someDate").updateStatus("Deleted")
In the implementation, imagine each method returning either a new Query object or just this, updating itself internally. At any point you can terminate the chain by using, for example, rows() to get all the rows, or updateSomeField as I have done above. Both will return a result object.
I would recommend taking a look at the Guice example above as well, as each call there returns a new type with new options on it. A good IDE will offer completion, making it clear which options you have at each point.
Edit: it seems many consider DSLs to be new, simple, single-purpose languages with their own parsers. I always associate a DSL with using method chaining as a convention to express operations.
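To make the pattern concrete, a bare-bones sketch of that query-style chaining in Java (every name here is invented):

import java.util.ArrayList;
import java.util.List;

public class NewsQuery {
    private final List<String> clauses = new ArrayList<String>();

    public NewsQuery writtenBy(String user) { clauses.add("author = " + user); return this; }
    public NewsQuery before(String date) { clauses.add("date < " + date); return this; }

    // Terminates the chain: builds and executes the update from the collected clauses
    public int updateStatus(String status) {
        System.out.println("UPDATE news SET status = " + status + " WHERE " + clauses);
        return 0; // rows affected
    }
}

// usage: new NewsQuery().writtenBy("someUser").before("someDate").updateStatus("Deleted")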
| What is a DSL and where should I use it? | I'm hearing more and more about domain specific languages being thrown about and how they change the way you treat business logic, and I've seen Ayende's blog posts and things, but I've never really gotten exactly why I would take my business logic away from the methods and situations I'm using in my provider.
If you've got some background using these things, any chance you could put it in real laymans terms:
What exactly building DSLs means?
What languages are you using?
Where using a DSL makes sense?
What is the benefit of using DSLs?
| [
"DSL's are good in situations where you need to give some aspect of the system's control over to someone else. I've used them in Rules Engines, where you create a simple language that is easier for less-technical folks to use to express themselves- particularly in workflows.\nIn other words, instead of making them learn java:\nDocumentDAO myDocumentDAO = ServiceLocator.getDocumentDAO();\nfor (int id : documentIDS) {\nDocument myDoc = MyDocumentDAO.loadDoc(id);\nif (myDoc.getDocumentStatus().equals(DocumentStatus.UNREAD)) {\n ReminderService.sendUnreadReminder(myDoc)\n}\n\nI can write a DSL that lets me say:\nfor (document : documents) {\nif (document is unread) {\n document.sendReminder\n}\n\nThere are other situations, but basically, anywhere you might want to use a macro language, script a workflow, or allow after-market customization- these are all candidates for DSL's.\n",
"DSL stands for Domain Specific Language i.e. language designed specifically for solving problems in given area.\nFor example, Markdown (markup language used to edit posts on SO) can be considered as a DSL.\nPersonally I find a place for DSL almost in every large project I'm working on. Most often I need some kind of SQL-like query language. Another common usage is rule-based systems, you need some kind of language to specify rules\\conditions.\nDSL makes sense in context where it's difficult to describe\\solve problem by traditional means.\n",
"If you use Microsoft Visual Studio, you are already using multiple DSLs -- the design surface for web forms, winforms, etc. is a DSL. The Class Designer is another example.\nA DSL is just a set of tools that (at least in theory) make development in a specific \"domain\" (i.e. visual layout) easier, more intuitive, and more productive.\nAs far as building a DSL, some of the stuff people like Ayende have written about is related to \"text parsing\" dsls, letting developers (or end users) enter \"natural text\" into an application, which parses the text and generates some sort of code or output based on it.\nYou could use any language to build your own DSL. Microsoft Visual Studio has a lot of extensibility points, and the patterns & practices \"Guidance Automation Toolkit\" and Visual Studio SDK can assist you in adding DSL functionality to Visual Studio.\n",
"DSL is just a fancy name and can mean different things:\n\nRails (the Ruby thing) is sometimes called a DSL because it adds special methods (and overwrites some built-in ones too) for talking about web applications\nANT, Makefile syntax etc. are also DSLs, but have their own syntax. This is what I would call a DSL.\n\nOne important aspect of this hype: It does make sense to think of your application in terms of a language. What do you want to talk about in your app? These should then be your classes and methods:\n\nDefine a \"language\" (either a real syntax as proposed by others on this page or a class hierarchy for your favorite language) that is capable of expressing your problem. \nSolve your problem in terms of that language.\n\n",
"DSL are basic compilers for custom languages. A good 'free and open' tool to develop them is available at ANTLR. Recently, I've been looking at this DSL for a state machine language use on a new project . I agree with Tim Howland above, that they can be a good way to let someone else customize your application. \n",
"FYI, a book on DSLs is in the pipeline as part of Martin Fowler's signature series.\nIf its of the same standard as the other books in the series, it should be a good read.\nMore information here\n",
"DSL is basically creating your own small sublanguage to solve a specific domain problem. This is solved using method chaining. Languages where dots and parentheses are optional help make these expression seem more natural. It can also be similar to a builder pattern.\nDSL aren't languages themselves, but rather a pattern that you apply to your API to make the calls be more self explanatory.\nOne example is Guice, Guice Users Guide http://docs.google.com/View?docid=dd2fhx4z_5df5hw8 has some description further down of how interfaces are bound to implementations, and in what contexts.\nAnother common example is for query languages. For example:\nNewsDAO.writtenBy(\"someUser\").before(\"someDate\").updateStatus(\"Deleted\")\n\nIn the implementation, imagine each method returning either a new Query object, or just this updating itself internally. At any point you can terminate the chain by using for example rows() to get all the rows, or updateSomeField as I have done above here. Both will return a result object.\nI would recommend taking a look at the Guice example above as well, as each call there returns a new type with new options on them. A good IDE will allow you to complete, making it clear which options you have at each point. \nEdit: seems many consider DSLs as new, simple, single purpose languages with their own parsers. I always associate DSL as using method chaining as a convention to express operations.\n"
] | [
18,
11,
5,
3,
3,
3,
1
] | [] | [] | [
"dsl",
"theory"
] | stackoverflow_0000041724_dsl_theory.txt |
Q:
Cost of Inserts vs Update in SQL Server
I have a table with more than a million rows. This table is used to index TIFF images. Each image has fields like date, number, etc. I have users that index these images in batches of 500. I need to know if it is better to first insert 500 rows and then perform 500 updates or, when the user finishes indexing, to do the 500 inserts with all the data. A very important thing is that if I do the 500 inserts at first, this time is free for me because I can do it the night before.
So the question is: is it better to do inserts or inserts and updates, and why? I have defined an id value for each image, and I also have other indices on the fields.
A:
Updates in SQL Server result in ghosted rows - i.e. SQL Server crosses one row out and puts a new one in. The crossed-out row is deleted later.
Both inserts and updates can cause page splits in this way; they both effectively 'add' data, it's just that updates flag the old stuff out first.
On top of this, updates need to look up the row first, which for lots of data can take longer than the update itself.
Inserts will just about always be quicker, especially if they are either in order or if the underlying table doesn't have a clustered index.
When inserting larger amounts of data into a table look at the current indexes - they can take a while to change and build. Adding values in the middle of an index is always slower.
You can think of it like appending to an address book: Mr Z can just be added to the last page, while you'll have to find space in the middle for Mr M.
A:
Doing the inserts first and then the updates does seem to be a better idea for several reasons. You will be inserting at a time of low transaction volume, and since the inserts carry most of the data, that is the right time to do them.
Since you are using an id value (which is presumably indexed) for updates, the overhead of updates will be very low. You would also have less data during your updates.
You could also wrap each batch of 500 inserts/updates in a single transaction rather than using one per record, thus reducing some overhead.
Finally, test this out to see the actual performance on your server before making a final decision.
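A minimal T-SQL sketch of the two-phase approach (the table and column names are hypothetical, just to show the shape of it):
-- Night before: insert the 500 stub rows while load is low.
INSERT INTO ImageIndex (ImageId, CreatedAt)
SELECT ImageId, GETDATE()
FROM PendingImages;

-- Next day, as each image is indexed: a narrow update by key.
UPDATE ImageIndex
SET DocDate = @DocDate,
    DocNumber = @DocNumber
WHERE ImageId = @ImageId;

Since ImageId is the key, each update is a cheap single-row seek.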
A:
This isn't a cut-and-dried question. Krishna's and Galegian's points are spot on.
For updates, the impact will be lessened if the updates affect fixed-length fields. If you are updating varchar or blob fields, you may add the cost of page splits during the update when the new value exceeds the length of the old one.
A:
I think inserts will run faster. They do not require a lookup (when you do an update you are basically doing the equivalent of a select with the where clause). And also, an insert won't lock the rows the way an update will, so it won't interfere with any selects that are happening against the table at the same time.
A:
The execution plan for each query will tell you which one should be more expensive. The real limiting factor will be the writes to disk, so you may need to run some tests while running perfmon to see which query causes more writes and causes the disk queue to get the longest (longer is bad).
A:
I'm not a database guy, but I imagine doing the inserts in one shot would be faster because the updates require a lookup whereas the inserts do not.
| Cost of Inserts vs Update in SQL Server | I have a table with more than a millon rows. This table is used to index tiff images. Each image has fields like date, number, etc. I have users that index these images in batches of 500. I need to know if it is better to first insert 500 rows and then perform 500 updates or, when the user finishes indexing, to do the 500 inserts with all the data. A very important thing is that if I do the 500 inserts at first, this time is free for me because I can do it the night before.
So the question is: is it better to do inserts or inserts and updates, and why? I have defined a id value for each image, and I also have other indices on the fields.
| [
"Updates in Sql server result in ghosted rows - i.e. Sql crosses one row out and puts a new one in. The crossed out row is deleted later.\nBoth inserts and updates can cause page-splits in this way, they both effectively 'add' data, it's just that updates flag the old stuff out first.\nOn top of this updates need to look up the row first, which for lots of data can take longer than the update.\nInserts will just about always be quicker, especially if they are either in order or if the underlying table doesn't have a clustered index.\nWhen inserting larger amounts of data into a table look at the current indexes - they can take a while to change and build. Adding values in the middle of an index is always slower.\nYou can think of it like appending to an address book: Mr Z can just be added to the last page, while you'll have to find space in the middle for Mr M.\n",
"Doing the inserts first and then the updates does seem to be a better idea for several reasons. You will be inserting at a time of low transaction volume. Since inserts have more data, this is a better time to do it.\nSince you are using an id value (which is presumably indexed) for updates, the overhead of updates will be very low. You would also have less data during your updates.\nYou could also turn off transactions at the batch (500 inserts/updates) level and use it for each individual record, thus reducing some overhead.\nFinally, test this out to see the actual performance on your server before making a final decision.\n",
"This isn't a cut and dry question. Krishna's and Galegian's points are spot on.\nFor updates, the impact will be lessened if the updates are affecting fixed-length fields. If updating varchar or blob fields, you may add a cost of page splits during update when the new value surpasses the length of the old value.\n",
"I think inserts will run faster. They do not require a lookup (when you do an update you are basically doing the equivalent of a select with the where clause). And also, an insert won't lock the rows the way an update will, so it won't interfere with any selects that are happening against the table at the same time.\n",
"The execution plan for each query will tell you which one should be more expensive. The real limiting factor will be the writes to disk, so you may need to run some tests while running perfmon to see which query causes more writes and causes the disk queue to get the longest (longer is bad). \n",
"I'm not a database guy, but I imagine doing the inserts in one shot would be faster because the updates require a lookup whereas the inserts do not.\n"
] | [
38,
2,
2,
2,
1,
0
] | [] | [] | [
"performance",
"sql_server"
] | stackoverflow_0000041837_performance_sql_server.txt |
Q:
How do I do monkeypatching in python?
I've had to do some introspection in python and it wasn't pretty:
name = sys._getframe(1).f_code
name = "%s:%d %s()" %(os.path.split(name.co_filename)[1],name.co_firstlineno,name.co_name)
To get something like
foo.py:22 bar() blah blah
In our debugging output.
I'd ideally like to prepend anything to stderr with this sort of information --
Is it possible to change the behaviour of print globally within python?
A:
A print statement does its IO through "sys.stdout.write" so you can override sys.stdout if you want to manipulate the print stream.
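A minimal sketch of that idea, wrapping sys.stdout so each printed line is prefixed with the caller's file, line, and function (a quick CPython hack rather than a hardened solution; the same wrapper works for sys.stderr):
import os
import sys

class AnnotatedStream(object):
    """Wraps a stream, prefixing each line with caller information."""
    def __init__(self, stream):
        self.stream = stream

    def write(self, text):
        if text.strip():  # print also calls write('\n'); skip those
            frame = sys._getframe(1)  # the frame that did the print
            code = frame.f_code
            self.stream.write("%s:%d %s() " % (
                os.path.split(code.co_filename)[1],
                frame.f_lineno, code.co_name))
        self.stream.write(text)

    def __getattr__(self, name):  # delegate flush() and friends
        return getattr(self.stream, name)

sys.stdout = AnnotatedStream(sys.stdout)
print "something happened"   # -> foo.py:42 bar() something happened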
A:
The python inspect module makes this a lot easier and cleaner.
| How do I do monkeypatching in python? | I've had to do some introspection in python and it wasn't pretty:
name = sys._getframe(1).f_code
name = "%s:%d %s()" %(os.path.split(name.co_filename)[1],name.co_firstlineno,name.co_name)
To get something like
foo.py:22 bar() blah blah
In our debugging output.
I'd ideally like to prepend anything to stderr with this sort of information --
Is it possible to change the behaviour of print globally within python?
| [
"A print statement does its IO through \"sys.stdout.write\" so you can override sys.stdout if you want to manipulate the print stream.\n",
"The python inspect module makes this a lot easier and cleaner. \n"
] | [
3,
1
] | [] | [] | [
"monkeypatching",
"python"
] | stackoverflow_0000041562_monkeypatching_python.txt |
Q:
Finding all messages with a given PR_SEARCH_KEY
I need to query an Exchange server to find all messages having a certain value in PR_SEARCH_KEY. Do I have to open every mailbox and iterate through it or is there a faster solution?
Edit: This is for a program that needs to prepend something to the subject line of all copies of a message I got through a journal mailbox.
A:
You haven't gotten any answers yet so I figured I would try a sub-optimal solution.
I'm not sure that you will be able to do what you need to do with the tool I'm going to propose (and, perhaps you are beyond this possible solution), but have you tried to find the messages of interest using ExMerge?
I've found that ExMerge can track down specific messages and get them for me across multiple mailboxes. It doesn't look like you can get directly to the PR_SEARCH_KEY value, but maybe there is another way to skin this cat.
You can download ExMerge at Microsoft Download for ExMerge .
Also, there are some good high-level details on ExMerge at the Microsoft Exchange Team Blog .
| Finding all messages with a given PR_SEARCH_KEY | I need to query an Exchange server to find all messages having a certain value in PR_SEARCH_KEY. Do I have to open every mailbox and iterate through it or is there a faster solution?
Edit: This is for a program that needs to prepend something to the subject line of all copies of a message I got through a journal mailbox.
| [
"You haven't gotten any answers yet so I figured I would try a sub-optimal solution.\nI'm not sure that you will be able to do what you need to do with the tool I'm going to propose (and, perhaps you are beyond this possible solution), but have you tried to find the messages of interest using ExMerge?\nI've found that ExMerge can track down specific messages and get them for me across multiple mailboxes. It doesn't look like you can get directly to the PR_SEARCH_KEY value, but maybe there is another way to skin this cat.\nYou can download ExMerge at Microsoft Download for ExMerge .\nAlso, there are some good high-level details on ExMerge at the Microsoft Exchange Team Blog .\n"
] | [
2
] | [] | [] | [
"email",
"exchange_server",
"mapi",
"performance"
] | stackoverflow_0000038348_email_exchange_server_mapi_performance.txt |
Q:
Instance constructor sets a static member, is it thread safe?
I am re-factoring some code and am wondering about the use of a lock in the instance constructor.
public class MyClass {
private static Int32 counter = 0;
private Int32 myCount;
public MyClass() {
lock(this) {
counter++;
myCount = counter;
}
}
}
Please confirm
Instance constructors are thread-safe.
The lock statement prevents access to that code block, not to the static 'counter' member.
If the intent of the original programmer were to have each instance know its 'count', how would I synchronize access to the 'counter' member to ensure that another thread isn't new'ing a MyClass and changing the count before this one sets its count?
FYI - This class is not a singleton. Instances must simply be aware of their number.
A:
If you are only incrementing a number, there is a special class (Interlocked) for just that...
http://msdn.microsoft.com/en-us/library/system.threading.interlocked.increment.aspx
Interlocked.Increment Method
Increments a specified variable and stores the result, as an atomic operation.
System.Threading.Interlocked.Increment(myField);
More information about threading best practices...
http://msdn.microsoft.com/en-us/library/1c9txz50.aspx
A:
I'm guessing this is for a singleton pattern or something like it. What you want to do is not lock your object, but lock the counter while you are modifying it.
private static int counter = 0;
private static object counterLock = new Object();
lock(counterLock) {
counter++;
myCounter = counter;
}
Your current code is redundant in the wrong way: lock(this) locks the instance being constructed, which no other thread can even reference yet, so it does nothing to protect the shared static counter.
From the little I can tell from your code, you are trying to give the object the current count at the time it is created. With the code above, the counter is locked while it is updated and copied locally, so all other constructors have to wait for the counter lock to be released.
A:
You can use another static object to lock on it.
private static Object lockObj = new Object();
and lock this object in the constructor.
lock(lockObj){}
However, I'm not sure if there are situations that need special handling because of compiler optimizations in .NET, as there are in Java.
A:
@ajmastrean
I am not saying you should use the singleton pattern itself, but adopt its method of encapsulating the instantiation process.
i.e.
Make the constructor private.
Create a static instance method that returns the type.
In the static instance method, use the lock keyword before instantiating.
Instantiate a new instance of the type.
Increment the count.
Unlock and return the new instance.
EDIT
One problem that has occurred to me: how would you know when the count has gone down? ;)
EDIT AGAIN
Thinking about it, you could add code to the destructor that calls another static method to decrement the counter :D
A:
The most efficient way to do this would be to use the Interlocked increment operation. It will increment the counter and return the newly set value of the static counter all at once (atomically)
class MyClass {
static int _LastInstanceId = 0;
private readonly int instanceId;
public MyClass() {
this.instanceId = Interlocked.Increment(ref _LastInstanceId);
}
}
In your original example, the lock(this) statement will not have the desired effect because each individual instance will have a different "this" reference, and multiple instances could thus be updating the static member at the same time.
In a sense, constructors can be considered to be thread safe because the reference to the object being constructed is not visible until the constructor has completed, but this doesn't do any good for protecting a static variable.
(Mike Schall had the interlocked bit first)
A:
I think if you modify the Singleton Pattern to include a count (obviously using the thread-safe method), you will be fine :)
Edit
Crap I accidentally deleted!
I am not sure whether instance constructors ARE thread safe. I remember reading about this in a design patterns book: you need to ensure that locks are in place during the instantiation process, purely because of this.
A:
@Rob
FYI, this class may not be a singleton; I need access to different instances. They must simply maintain a count. What part of the singleton pattern would you change to perform 'counter' incrementing?
Or are you suggesting that I expose a static method for construction, blocking access to the code that increments and reads the counter with a lock?
public class MyClass {
private static Int32 counter = 0;
public static MyClass GetAnInstance() {
lock(typeof(MyClass)) {
counter++;
return new MyClass();
}
}
private Int32 myCount;
private MyClass() {
myCount = counter;
}
}
| Instance constructor sets a static member, is it thread safe? | I am re-factoring some code and am wondering about the use of a lock in the instance constructor.
public class MyClass {
private static Int32 counter = 0;
private Int32 myCount;
public MyClass() {
lock(this) {
counter++;
myCount = counter;
}
}
}
Please confirm
Instance constructors are thread-safe.
The lock statement prevents access to that code block, not to the static 'counter' member.
If the intent of the original programmer were to have each instance know its 'count', how would I synchronize access to the 'counter' member to ensure that another thread isn't new'ing a MyClass and changing the count before this one sets its count?
FYI - This class is not a singleton. Instances must simply be aware of their number.
| [
"If you are only incrementing a number, there is a special class (Interlocked) for just that...\nhttp://msdn.microsoft.com/en-us/library/system.threading.interlocked.increment.aspx\n\nInterlocked.Increment Method\nIncrements a specified variable and stores the result, as an atomic operation.\n\nSystem.Threading.Interlocked.Increment(myField);\n\nMore information about threading best practices...\nhttp://msdn.microsoft.com/en-us/library/1c9txz50.aspx\n",
"I'm guessing this is for a singleton pattern or something like it. What you want to do is not lock your object, but lock the counter while your are modifying it. \nprivate static int counter = 0;\nprivate static object counterLock = new Object();\n\nlock(counterLock) {\n counter++;\n myCounter = counter;\n}\n\nBecause your current code is sort of redundant. Especially being in the constructor where there is only one thread that can call a constructor, unlike with methods where it could be shared across threads and be accessed from any thread that is shared.\nFrom the little I can tell from you code, you are trying to give the object the current count at the time of it being created. So with the above code the counter will be locked while the counter is updated and set locally. So all other constructors will have to wait for the counter to be released.\n",
"You can use another static object to lock on it. \nprivate static Object lockObj = new Object();\n\nand lock this object in the constructor.\nlock(lockObj){}\n\nHowever, I'm not sure if there are situations that should be handled because of compiler optimization in .NET like in the case of java\n",
"@ajmastrean\nI am not saying you should use the singleton pattern itself, but adopt its method of encapsulating the instantiation process.\ni.e.\n\nMake the constructor private.\nCreate a static instance method that returns the type.\nIn the static instance method, use the lock keyword before instantiating.\nInstantiate a new instance of the type.\nIncrement the count.\nUnlock and return the new instance.\n\nEDIT\nOne problem that has occurred to me, if how would you know when the count has gone down? ;)\nEDIT AGAIN\nThinking about it, you could add code to the destructor that calls another static method to decrement the counter :D\n",
"The most efficient way to do this would be to use the Interlocked increment operation. It will increment the counter and return the newly set value of the static counter all at once (atomically)\nclass MyClass {\n\n static int _LastInstanceId = 0;\n private readonly int instanceId; \n\n public MyClass() { \n this.instanceId = Interlocked.Increment(ref _LastInstanceId); \n }\n}\n\nIn your original example, the lock(this) statement will not have the desired effect because each individual instance will have a different \"this\" reference, and multiple instances could thus be updating the static member at the same time.\nIn a sense, constructors can be considered to be thread safe because the reference to the object being constructed is not visible until the constructor has completed, but this doesn't do any good for protecting a static variable.\n(Mike Schall had the interlocked bit first)\n",
"I think if you modify the Singleton Pattern to include a count (obviously using the thread-safe method), you will be fine :)\nEdit\nCrap I accidentally deleted!\nI am not sure if instance constructors ARE thread safe, I remember reading about this in a design patterns book, you need to ensure that locks are in place during the instantiation process, purely because of this..\n",
"@Rob\nFYI, This class may not be a singleton, I need access to different instances. They must simply maintain a count. What part of the singleton pattern would you change to perform 'counter' incrementing?\nOr are you suggesting that I expose a static method for construction blocking access to the code that increments and reads the counter with a lock.\npublic MyClass {\n\n private static Int32 counter = 0;\n public static MyClass GetAnInstance() {\n\n lock(MyClass) {\n counter++;\n return new MyClass();\n }\n }\n\n private Int32 myCount;\n private MyClass() {\n myCount = counter;\n }\n}\n\n"
] | [
12,
4,
3,
3,
2,
0,
0
] | [] | [] | [
".net",
"c#",
"multithreading",
"thread_safety"
] | stackoverflow_0000041792_.net_c#_multithreading_thread_safety.txt |
Q:
How to display line breaks in SharePoint comment history field
We have a SharePoint list setup with history enabled so the Comments field keeps all the past values. When it displays, the Comments field is void of all line breaks. However, when SharePoint e-mails the change to us, the line breaks are in there. The Description field also shows the line breaks.
So, something must be stripping out the line breaks in the read-only view of the Comments field.
Any idea on how to customize that so it retains the formatting in the detail view of the SharePoint list item?
Update: the stock pages with this behavior are
/EditForm.aspx
/DispForm.aspx
A:
What are you using to view the list? A default SharePoint view (AllItems.aspx?), or a DataFormWebPart? Something else?
If you can customize the page that displays this list in SharePoint Designer, make sure this field is set to display "Rich Text" and not just "Plain Text".
If this is the cause of your problem, then another symptom you might see is that certain symbols are displayed as their HTML codes (e.g. "&" as "&amp;").
| How to display line breaks in SharePoint comment history field | We have a SharePoint list setup with history enabled so the Comments field keeps all the past values. When it displays, the Comments field is void of all line breaks. However, when SharePoint e-mails the change to us, the line breaks are in there. The Description field also shows the line breaks.
So, something must be stripping out the line breaks in the read-only view of the Comments field.
Any idea on how to customize that so it retains the formatting in the detail view of the SharePoint list item?
Update: the stock pages with this behavior are
/EditForm.aspx
/DispForm.aspx
| [
"What are you using to view the list? A default SharePoint view (AllItems.aspx?), or a DataFormWebPart? Something else?\nIf you can customize the page that displays this list in SharePoint Designer, make sure this field is set to display \"Rich Text\" and not just \"Plain Text\".\nIf this is the cause of your problem, then another symptom you might see is that certain symbols are displayed as their HTML codes (e.g. \"&\" as \"&\").\n"
] | [
1
] | [] | [] | [
"sharepoint"
] | stackoverflow_0000041996_sharepoint.txt |
Q:
Standard way to open a folder window in linux?
I want to open a folder window, in the appropriate file manager, from within a cross-platform (windows/mac/linux) Python application.
On OSX, I can open a window in the finder with
os.system('open "%s"' % foldername)
and on Windows with
os.startfile(foldername)
What about unix/linux? Is there a standard way to do this or do I have to special case gnome/kde/etc and manually run the appropriate application (nautilus/konqueror/etc)?
This looks like something that could be specified by the freedesktop.org folks (a python module, similar to webbrowser, would also be nice!).
A:
os.system('xdg-open "%s"' % foldername)
xdg-open can be used for files/urls also
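Putting the three platforms together, a minimal sketch might look like this (it assumes xdg-open is installed, which holds on most freedesktop.org-compliant systems):
import os
import subprocess
import sys

def open_folder(path):
    """Open path in the platform's default file manager."""
    if sys.platform == 'darwin':
        subprocess.call(['open', path])
    elif sys.platform == 'win32':
        os.startfile(path)
    else:  # assume a freedesktop.org-compliant unix
        subprocess.call(['xdg-open', path])

Using subprocess.call instead of os.system also sidesteps quoting problems with paths that contain spaces.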
A:
This would probably have to be done manually, or made a config item, since there are many file managers that users may want to use. You would want to provide a way to pass command options as well.
There might be a function that launches the defaults for KDE or GNOME in their respective toolkits, but I haven't had reason to look for them.
A:
You're going to have to do this based on the running window manager. OSX and Windows have a (de facto) standard way because there is only one choice.
You shouldn't need to specify the exact filemanager application, though, this should be possible to do through the wm. I know Gnome does, and it's important to do this in KDE since there are two possible file managers (Konqueror/Dolphin) that may be in use.
I agree that this would be a good thing for freedesktop.org to standardize, although I doubt it will happen unless someone steps up and volunteers to do it.
EDIT: I wasn't aware of xdg-open. Good to know!
| Standard way to open a folder window in linux? | I want to open a folder window, in the appropriate file manager, from within a cross-platform (windows/mac/linux) Python application.
On OSX, I can open a window in the finder with
os.system('open "%s"' % foldername)
and on Windows with
os.startfile(foldername)
What about unix/linux? Is there a standard way to do this or do I have to special case gnome/kde/etc and manually run the appropriate application (nautilus/konqueror/etc)?
This looks like something that could be specified by the freedesktop.org folks (a python module, similar to webbrowser, would also be nice!).
| [
"os.system('xdg-open \"%s\"' % foldername)\n\nxdg-open can be used for files/urls also\n",
"this would probably have to be done manually, or have as a config item since there are many file managers that users may want to use. Providing a way for command options as well.\nThere might be an function that launches the defaults for kde or gnome in their respective toolkits but I haven't had reason to look for them.\n",
"You're going to have to do this based on the running window manager. OSX and Windows have a (defacto) standard way because there is only one choice.\nYou shouldn't need to specify the exact filemanager application, though, this should be possible to do through the wm. I know Gnome does, and it's important to do this in KDE since there are two possible file managers (Konqueror/Dolphin) that may be in use.\nI agree that this would be a good thing for freedesktop.org to standardize, although I doubt it will happen unless someone steps up and volunteers to do it.\n\nEDIT: I wasn't aware of xdg-open. Good to know!\n"
] | [
15,
0,
0
] | [] | [] | [
"cross_platform",
"desktop",
"linux",
"python"
] | stackoverflow_0000041969_cross_platform_desktop_linux_python.txt |
Q:
Levenshtein distance based methods Vs Soundex
As per this comment in a related thread, I'd like to know why Levenshtein distance based methods are better than Soundex.
A:
Soundex is rather primitive - it was originally developed to be hand calculated. It results in a key that can be compared.
Soundex works well with western names, as it was originally developed for US census data. It's intended for phonetic comparison.
Levenshtein distance looks at two values and produces a value based on their similarity. It's looking for missing or substituted letters.
Basically Soundex is better for finding that "Schmidt" and "Smith" might be the same surname.
Levenshtein distance is better for spotting that the user has mistyped "Levnshtein" ;-)
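For reference, a minimal (unoptimized) sketch of the Levenshtein distance in Python - the number of single-character insertions, deletions, and substitutions needed to turn one string into the other:
def levenshtein(a, b):
    m, n = len(a), len(b)
    # d[i][j] = distance between a[:i] and b[:j]
    d = [[0] * (n + 1) for i in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i              # i deletions
    for j in range(n + 1):
        d[0][j] = j              # j insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

print levenshtein("Levenshtein", "Levnshtein")  # -> 1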
A:
I would suggest using Metaphone, not Soundex. As noted, Soundex was developed in the early 20th century for American names. Metaphone will give you some results when checking the work of poor spellers who are "sounding it out", spelling phonetically.
Edit distance is good at catching typos such as repeated letters, transposed letters, or hitting the wrong key.
Consider the application to decide which will fit your users best—or use both together, with Metaphone complementing the suggestions produced by Levenshtein.
With regard to the original question, I've used n-grams successfully in information retrieval applications.
A:
I agree with you on Daitch-Mokotoff, Soundex is biased because the original US census takers wanted 'Americanized' names.
Maybe an example on the difference would help:
Soundex puts additional weight on the start of a word - in fact it only considers the first four phonetic sounds. So while "Schmidt" and "Smith" will match, "Smith" and "Wmith" won't.
Levenshtein's algorithm would be better for finding typos - one or two missing or replaced letters produces a high correlation, while the phonetic impact of those missing letters is less important.
I don't think either is better, and I'd consider both a distance algorithm and a phonetic one for helping users correct typed input.
A:
@Keith:
As I posted on the other question, Daitch-Mokotoff is better for us Europeans (and I'd argue the US).
I've also read the Wiki on Levenshtein. But I don't see why (in real life) it's better for the user than Soundex.
| Levenshtein distance based methods Vs Soundex | As per this comment in a related thread, I'd like to know why Levenshtein distance based methods are better than Soundex.
| [
"Soundex is rather primitive - it was originally developed to be hand calculated. It results in a key that can be compared.\nSoundex works well with western names, as it was originally developed for US census data. It's intended for phonetic comparison.\nLevenshtein distance looks at two values and produces a value based on their similarity. It's looking for missing or substituted letters.\nBasically Soundex is better for finding that \"Schmidt\" and \"Smith\" might be the same surname.\nLevenshtein distance is better for spotting that the user has mistyped \"Levnshtein\" ;-)\n",
"I would suggest using Metaphone, not Soundex. As noted, Soundex was developed in the 19th century for American names. Metaphone will give you some results when checking the work of poor spellers who are \"sounding it out\", and spelling phonetically.\nEdit distance is good at catching typos such as repeated letters, transposed letters, or hitting the wrong key.\nConsider the application to decide which will fit your users best—or use both together, with Metaphone complementing the suggestions produced by Levenshtein.\nWith regard to the original question, I've used n-grams successfully in information retrieval applications.\n",
"I agree with you on Daitch-Mokotoff, Soundex is biased because the original US census takers wanted 'Americanized' names.\nMaybe an example on the difference would help:\nSoundex puts addition value in the start of a word - in fact it only considers the first 4 phonetic sounds. So while \"Schmidt\" and \"Smith\" will match \"Smith\" and \"Wmith\" won't.\nLevenshtein's algorithm would be better for finding typos - one or two missing or replaced letters produces a high correlation, while the phonetic impact of those missing letters is less important.\nI don't think either is better, and I'd consider both a distance algorithm and a phonetic one for helping users correct typed input.\n",
"@Keith:\nAs I posted on the other question, Daitch-Mokotoff is better for us Europeans (and I'd argue the US).\nI've also read the Wiki on Levenshtein. But I don't see why (in real life) it's better for the user than Soundex. \n"
] | [
17,
9,
2,
0
] | [] | [] | [
"algorithm",
"fuzzy_search",
"soundex"
] | stackoverflow_0000042013_algorithm_fuzzy_search_soundex.txt |
Q:
Non-Clustered Index on a Clustered Index column improves performance?
In SQL Server 2005, the query analyzer has told me many times to create a non-clustered index on a primary ID column of a table which already has a clustered index. After following this recommendation, the query execution plan reports that the query should be faster.
Why would a Non-Clustered index on the same column (with the same sort order) be faster than a Clustered index?
A:
A clustered index contains all the data for the table, while a non-clustered index only has the indexed column plus the location of the clustered index key (or of the row, if the table is a heap, i.e. a table without a clustered index). So if you do a count(column) and that column is covered by a non-clustered index, SQL Server only has to scan the non-clustered index, which is faster than scanning the clustered index because more entries fit on each 8K page.
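A quick illustration (the table and index names are hypothetical):
-- The clustered index carries every column of every row.
CREATE TABLE Orders (
    OrderId INT NOT NULL,
    CustomerName VARCHAR(200),
    Notes VARCHAR(2000),
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderId)
);

-- A narrow index holding only OrderId: far more entries per 8K page.
CREATE NONCLUSTERED INDEX IX_Orders_OrderId ON Orders (OrderId);

-- The optimizer can satisfy this by scanning the small index alone.
SELECT COUNT(OrderId) FROM Orders;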
A:
I'd guess it would be faster in cases where you don't need the full row data, for example if you're just checking whether a row with a given ID exists. The clustered index would be rather huge, while a small "one column" index would be much slimmer.
| Non-Clustered Index on a Clustered Index column improves performance? | In SQL Server 2005, the query analyzer has told me many times to create a non-clustered index on a primary ID column of a table which already has a clustered index. After following this recommendation, the query execution plan reports that the query should be faster.
Why would a Non-Clustered index on the same column (with the same sort order) be faster than a Clustered index?
| [
"A clustered index has all the data for the table while a non clustered index only has the column + the location of the clustered index or the row if it is on a heap (a table without a clustered index). So if you do a count(column) and that column is indexed with a non clustered index SQL server only has to scan the non clustered index which is faster than the clustered index because more will fit on 8K pages\n",
"I'd guess it would be faster in cases where you don't need the full row data, for example if you're just checking if a row with a given ID does exist. Then a clustered index would be rather huge while a small \"one column\" index would be much slimmer.\n"
] | [
16,
2
] | [
"A clustered index will generally be faster, but you can only have 1 clustered index. So if the table already has a clustered index on a different column, then a non-clustered index is the best you can do.\n"
] | [
-1
] | [
"indexing",
"sql_server",
"sql_server_2005"
] | stackoverflow_0000042015_indexing_sql_server_sql_server_2005.txt |
Q:
Why does the Bourne shell printf iterate over a %s argument?
What's going on here?
printf.sh:
#! /bin/sh
NAME="George W. Bush"
printf "Hello, %s\n" $NAME
Command line session:
$ ./printf.sh
Hello, George
Hello, W.
Hello, Bush
UPDATE: printf "Hello, %s\n" "$NAME" works. For why I'm not using echo, consider
echo.sh:
#! /bin/sh
FILE="C:\tmp"
echo "Filename: $FILE"
Command-line:
$ ./echo.sh
Filename: C: mp
The POSIX spec for echo says, "New applications are encouraged to use printf instead of echo" (for this and other reasons).
A:
Your NAME variable is being substituted like this:
printf "Hello, %s\n" George W. Bush
Use this:
#! /bin/sh
NAME="George W. Bush"
printf "Hello, %s\n" "$NAME"
A:
Is there a specific reason you are using printf, or would echo work for you as well?
NAME="George W. Bush"
echo "Hello, "$NAME
results in
Hello, George W. Bush
edit:
The reason it is iterating over "George W. Bush" is that the Bourne shell splits unquoted variable expansions on whitespace. To keep using printf you have to put $NAME in double quotes
printf "Hello, %s\n" "$NAME"
A:
The way I interpret the man page, it considers the string you pass to be arguments; if your string has spaces, it thinks you are passing multiple arguments. I believe ColinYounger is correct: surrounding the variable with quotes forces the shell to treat the string as a single argument.
An alternative might be to let printf expand the variable:
printf "Hello, $NAME."
The links are for bash, but I am pretty sure the same holds for sh.
A:
If you want all of those words to be printed out on their own, use print instead of printf
printf takes the formatting specification and applies it to each argument that you pass in. Since you have three arguments {George, W., Bush}, it outputs the string three times using the different arguments.
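You can see printf reusing the format directly:
$ printf "Hello, %s\n" one two three
Hello, one
Hello, two
Hello, three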
| Why does the Bourne shell printf iterate over a %s argument? | What's going on here?
printf.sh:
#! /bin/sh
NAME="George W. Bush"
printf "Hello, %s\n" $NAME
Command line session:
$ ./printf.sh
Hello, George
Hello, W.
Hello, Bush
UPDATE: printf "Hello, %s\n" "$NAME" works. For why I'm not using echo, consider
echo.sh:
#! /bin/sh
FILE="C:\tmp"
echo "Filename: $FILE"
Command-line:
$ ./echo.sh
Filename: C: mp
The POSIX spec for echo says, "New applications are encouraged to use printf instead of echo" (for this and other reasons).
| [
"Your NAME variable is being substituted like this:\nprintf \"Hello, %s\\n\" George W. Bush\n\nUse this:\n#! /bin/sh\nNAME=\"George W. Bush\"\nprintf \"Hello, %s\\n\" \"$NAME\"\n\n",
"is there a specific reason you are using printf or would echo work for you as well?\nNAME=\"George W. Bush\"\necho \"Hello, \"$NAME\n\nresults in \nHello, George W. Bush\n\nedit:\nThe reason it is iterating over \"George W. Bush\" is because the bourne shell is space delimitted. To keep using printf you have to put $NAME in double quotes\nprintf \"Hello, %s\\n\" \"$NAME\"\n\n",
"The way I interpret the man page is it considers the string you pass it to be an argument; if your string has spaces it thinks you are passing multiple arguments. I believe ColinYounger is correct by surrounding the variable with quotes, which forces the shell to interpret the string as a single argument.\nAn alternative might be to let printf expand the variable:\nprintf \"Hello, $NAME.\"\n\nThe links are for bash, but I am pretty sure the same holds for sh.\n",
"If you want all of those words to be printed out on their own, use print instead of printf \nprintf takes the formatting specification and applies it to each argument that you pass in. Since you have three arguments {George, W., Bush}, it outputs the string three times using the different arguments.\n"
] | [
7,
1,
1,
0
] | [] | [] | [
"shell",
"unix"
] | stackoverflow_0000042070_shell_unix.txt |
Q:
Parsing an XML file in C#
What is the fastest method of parsing an XML file in C#? I'm using .Net 2.0
A:
If you're using .Net 2 then the XmlReader and XmlDocument are about it.
If you can use .Net 3.5 then the new Linq to Xml methods are a big improvement.
A:
I haven't benchmarked it myself, but when I've asked about it in the past I've been told that XmlDocument is supposed to be faster. I have my doubts, though, since XmlDocument needs to build a DOM while XmlReader does not.
A:
If you use an XmlTextReader class it will technically be faster than using an XmlDocument, which parses the entire file and builds a DOM for you. But you must also consider the fact that with an XmlTextReader you are just reading one node at a time, so there is the additional overhead of making sense of the data as you read it. If you are going to end up storing everything yourself anyway, using XmlDocument might end up being more efficient.
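A minimal sketch of the streaming approach under .Net 2.0 (the file and element names are hypothetical):
using System;
using System.Xml;

class Program
{
    static void Main()
    {
        // Reads one node at a time, so memory use stays flat
        // no matter how large the document is.
        using (XmlTextReader reader = new XmlTextReader("data.xml"))
        {
            while (reader.Read())
            {
                if (reader.NodeType == XmlNodeType.Element
                    && reader.Name == "item")
                {
                    Console.WriteLine(reader.GetAttribute("id"));
                }
            }
        }
    }
}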
| Parsing an XML file in C# | What is the fastest method of parsing an XML file in C#? I'm using .Net 2.0
| [
"If you're using .Net 2 then the XmlReader and XmlDocument are about it.\nIf you can use .Net 3.5 then the new Linq to Xml methods are a big improvement.\n",
"I haven't benched-marked it myself, but when I've asked about it in the past I've been told that XmlDocument is supposed to be faster. I have my doubts, though, since XmlDocument would need to create a DOM while XmlReader does not.\n",
"If you use an XmlTextReader class it will technically be faster than using an XmlDocument, which parses the entire file and builds a DOM for you. But you must also consider that fact that with an XmlTextReader, you are just reading one node at a time, so there is the additional overhead of making sense of the data as you read it. If you are going to end up storing everything yourself anyway, using XmlDocument might end up being more efficient.\n"
] | [
13,
1,
1
] | [] | [] | [
"c#",
"xml"
] | stackoverflow_0000041994_c#_xml.txt |
Q:
Logging in a PHP webapp
I want to keep logs of some things that people do in my app, in some cases so that it can be undone if needed.
Is it best to store such logs in a file or a database? I'm completely at a loss as to what the pros and cons are except that it's another table to setup.
Is there a third (or fourth etc) option that I'm not aware of that I should look into and learn about?
A:
There is at least one definite reason to go for storing in the database. You can use INSERT DELAYED in MySQL (or similar constructs in other databases), which returns immediately. You won't get any return data from the database with these kinds of queries, and they are not guaranteed to be applied.
By using INSERT DELAYED, you won't slow down your app to much because of the logging. The database is free to write the INSERTs to disk at any time, so it can bundle a bunch of inserts together.
You need to watch out for MySQL's built-in timestamp functions (like CURRENT_TIMESTAMP or CURDATE()), because they are evaluated whenever the query is actually executed, which with INSERT DELAYED can be later than you expect. So you should make sure that any time data is generated in your programming language, and not by the database. (This paragraph might be MySQL-specific)
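A minimal PHP sketch of that pattern (the table and column names are hypothetical, and note that INSERT DELAYED only applies to MyISAM-style tables):
<?php
// audit_log: (user_id INT, action VARCHAR(255), logged_at DATETIME)
function log_action($link, $userId, $action) {
    $when = date('Y-m-d H:i:s');  // timestamp from PHP, not the database
    $sql = sprintf(
        "INSERT DELAYED INTO audit_log (user_id, action, logged_at)" .
        " VALUES (%d, '%s', '%s')",
        $userId,
        mysql_real_escape_string($action, $link),
        $when);
    mysql_query($sql, $link);     // returns immediately; no result data
}
?>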
A:
You will almost certainly want to use a database for flexible, record-based access and to take advantage of the database's ability to handle concurrent data access. If you need to track information that may need to be undone, having it in a structured format is a benefit, as is having the ability to update a row indicating when and by whom a given transaction has been undone.
You likely only want to write to a file if very high performance is an issue, or if you have very unstructured or large amounts of data per record that might be unwieldy to store in a database. Note that unless your application has a very large number of transactions, database speed is unlikely to be an issue. Also note that if you are working with a file you'll need to handle concurrent access (read / write / locking) very carefully, which is likely not something you want to have to deal with.
A:
I'm a big fan of log4php. It gives you a standard interface for logging actions. It's based on log4j. The library loads a central config file, so you never need to change your code to change logging. It also offers several log targets, like files, syslog, databases, etc.
A:
I'd use a database simply for maintainability - also, multiple edits on a file may cause some entries to be missed.
A:
I will second both of the above suggestions and add that file locking on a flat file log may cause issues when there are a lot of users.
| Logging in a PHP webapp | I want to keep logs of some things that people do in my app, in some cases so that it can be undone if needed.
Is it best to store such logs in a file or a database? I'm completely at a loss as to what the pros and cons are except that it's another table to setup.
Is there a third (or fourth etc) option that I'm not aware of that I should look into and learn about?
| [
"There is at least one definite reason to go for storing in the database. You can use INSERT DELAYED in MySQL (or similar constructs in other databases), which returns immediately. You won't get any return data from the database with these kinds of queries, and they are not guaranteed to be applied.\nBy using INSERT DELAYED, you won't slow down your app to much because of the logging. The database is free to write the INSERTs to disk at any time, so it can bundle a bunch of inserts together.\nYou need to watch out for using MySQL's built in timestamp function (like CURRENT_TIMESTAMP or CUR_DATE()), because they will be called whenever the query is actually executed. So you should make sure that any time data is generated in your programming language, and not by the database. (This paragraph might be MySQL-specific)\n",
"You will almost certainly want to use a database for flexible, record based access and to take advantage of the database's ability to handle concurrent data access. If you need to track information that may need to be undone, having it in a structured format is a benefit, as is having the ability to update a row indicating when and by whom a given transaction has been undone. \nYou likely only want to write to a file if very high performance is an issue, or if you have very unstructured or large amounts of data per record that might be unweidly to store in a database. Note that Unless your application has a very large number of transactions database speed is unlikely to be an issue. Also note that if you are working with a file you'll need to handle concurrent access (read / write / locking) very carefully which is likely not something you want to have to deal with.\n",
"I'm a big fan of log4php. It gives you a standard interface for logging actions. It's based on log4j. The library loads a central config file, so you never need to change your code to change logging. It also offers several log targets, like files, syslog, databases, etc. \n",
"I'd use a database simply for maintainability - also multiple edits on a file may cause some getting missed out.\n",
"I will second both of the above suggestions and add that file locking on a flat file log may cause issues when there are a lot of users.\n"
] | [
9,
5,
3,
1,
1
] | [] | [] | [
"logging",
"php"
] | stackoverflow_0000035943_logging_php.txt |
Q:
How does Google Chrome control/contain multiple processes?
How does Google Chrome command and control multiple cross platform processes and provide a shared window / rendering area?
Any insights?
A:
There is a document called Multi-process Architecture on the Chromium developer site. It might be a good starting place.
A:
The source code is online here ...
| How does Google Chrome control/contain multiple processes? | How does Google Chrome command and control multiple cross platform processes and provide a shared window / rendering area?
Any insights?
| [
"There is a document called Multi-process Architecture on the Chromium developer site. It might be a good starting place.\n",
"The source code is online here ...\n"
] | [
7,
2
] | [] | [] | [
"google_chrome",
"process"
] | stackoverflow_0000042143_google_chrome_process.txt |
Q:
Pure Python library to generate Identicons?
Does anyone know of a FOSS Python lib for generating Identicons? I've looked, but so far I haven't had much luck.
A:
I've found two implementations:
http://coderepos.org/share/browser/lang/python/misc/identicon.py
http://code.google.com/p/visicon/
| Pure Python library to generate Identicons? | Does anyone know of a FOSS Python lib for generating Identicons? I've looked, but so far I haven't had much luck.
| [
"I've found two implementations:\nhttp://coderepos.org/share/browser/lang/python/misc/identicon.py\nhttp://code.google.com/p/visicon/\n"
] | [
12
] | [] | [] | [
"identicon",
"python"
] | stackoverflow_0000042093_identicon_python.txt |
Q:
Java: JApplet, How do you embed it in a webpage?
I searched for this subject on Google and got some website about an experts exchange...so I figured I should just ask here instead.
How do you embed a JApplet in HTML on a webpage?
A:
Here is an example from sun's website:
<applet code="TumbleItem.class"
codebase="examples/"
archive="tumbleClasses.jar, tumbleImages.jar"
width="600" height="95">
<param name="maxwidth" value="120">
<param name="nimgs" value="17">
<param name="offset" value="-57">
<param name="img" value="images/tumble">
Your browser is completely ignoring the <APPLET> tag!
</applet>
A:
Although you didn't say so, just in case you were using JSPs, you also have the option of the jsp:plugin tag.
A:
Use the <applet> tag. For more info: http://java.sun.com/docs/books/tutorial/deployment/applet/html.html
| Java: JApplet, How do you embed it in a webpage? | I searched for this subject on Google and got some website about an experts exchange...so I figured I should just ask here instead.
How do you embed a JApplet in HTML on a webpage?
| [
"Here is an example from sun's website:\n<applet code=\"TumbleItem.class\" \n codebase=\"examples/\"\n archive=\"tumbleClasses.jar, tumbleImages.jar\"\n width=\"600\" height=\"95\">\n <param name=\"maxwidth\" value=\"120\">\n <param name=\"nimgs\" value=\"17\">\n <param name=\"offset\" value=\"-57\">\n <param name=\"img\" value=\"images/tumble\">\n\nYour browser is completely ignoring the <APPLET> tag!\n</applet>\n\n",
"Although you didn't say so, just in case you were using JSPs, you also have the option of the jsp:plugin tag?\n",
"Use the <applet> tag. For more info: http://java.sun.com/docs/books/tutorial/deployment/applet/html.html\n"
] | [
6,
2,
1
] | [] | [] | [
"html",
"java",
"web_applications"
] | stackoverflow_0000042153_html_java_web_applications.txt |
Q:
What are the best ways to determine what port an application is using?
This is an adapted version of a question from someone in my office. She's trying to determine how to tell what ports MSDE is running on for an application we have in the field.
Answers to that narrower question would be greatly appreciated. I'm also interested in a broader answer that could be applied to any networked applications.
A:
netstat -b
from the command line will display the application name, process owner, address, and port number used for all running applications.
A:
I've always liked the sysinternals app TCPView, which can now be found here. Good luck.
A:
netstat -b is a great answer, you may need to use the -a option as well.
Without -a netstat shows active connections, with -a it shows listening ports with no active clients as well.
A:
Download currports from here.
It will show you which ports are open and which processes are associated with each port.
Scroll down to:
Download CurrPorts
| What are the best ways to determine what port an application is using? | This is an adapted version of a question from someone in my office. She's trying to determine how to tell what ports MSDE is running on for an application we have in the field.
Answers to that narrower question would be greatly appreciated. I'm also interested in a broader answer that could be applied to any networked applications.
| [
"netstat -b\n\nfrom the command line will display the application name, process owner, address, and port number used for all running applications.\n",
"I've always liked the sysinternals app TCPView, which can now be found here. Good luck. \n",
"netstat -b is a great answer, you may need to use the -a option as well.\nWithout -a netstat shows active connections, with -a it shows listening ports with no active clients as well.\n",
"Download currports from here.\nIt will show you which ports are open and which processes are associated with each port.\nScroll down to:\nDownload CurrPorts\n"
] | [
13,
8,
6,
2
] | [] | [] | [
"msde",
"port",
"sql_server"
] | stackoverflow_0000042146_msde_port_sql_server.txt |
Q:
Where did these hex named folders come from?
First off, I am using Windows XP. I have multiple hard drives and it looks like something decided to make some folders on the second one ( which is just a data drive, no os ). These folders all have names like "e69f29f1b1f166d3d30b8c9f7156ba" and "bd92c24cc278614082cd88e7a64b". They contain folders named update, whose "access is denied", so my best guess would be they are Windows updates. So I probably can't get rid of them but could someone at least explain what they are and why they are on the wrong drive?
A:
Windows will always use the hard drive with the most free space to download Windows updates. This is what happened to you.
http://computershopper.com/forums/showthread.php?t=265
| Where did these hex named folders come from? | First off, I am using Windows XP. I have multiple hard drives and it looks like something decided to make some folders on the second one ( which is just a data drive, no os ). These folders all have names like "e69f29f1b1f166d3d30b8c9f7156ba" and "bd92c24cc278614082cd88e7a64b". They contain folders named update, whose "access is denied", so my best guess would be they are Windows updates. So I probably can't get rid of them but could someone at least explain what they are and why they are on the wrong drive?
| [
"Windows will always use the hard drive with the most space to download windows updates. This is what happened to you.\nhttp://computershopper.com/forums/showthread.php?t=265\n"
] | [
11
] | [] | [] | [
"directory",
"hex",
"windows"
] | stackoverflow_0000042204_directory_hex_windows.txt |
Q:
SQL Number Formatting
I'm looking to use SQL to format a number with commas in the thousands, but no decimal (so can't use Money) - any suggestions?
I'm using SQL Server 2005, but feel free to answer for others as well (like MySQL)
A:
With T-SQL you could cast to money and convert; that will add the .00, but you can then use replace or substring to remove it.
replace(convert(varchar, cast(column as money), 1), '.00', '')
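For example, with a concrete value:
SELECT replace(convert(varchar, cast(1234567 as money), 1), '.00', '')
-- returns 1,234,567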
In SQL 2005 you could use a CLR function as well
[Microsoft.SqlServer.Server.SqlFunction]
public static SqlString FormatNumber(SqlInt32 number)
{
return number.Value.ToString("N0");
}
and call it as any other user-defined function
SELECT dbo.FormatNumber(value)
A:
In MySQL, the FORMAT() function will do the trick.
A:
In Oracle you can specify a format parameter to the to_char function:
TO_CHAR(1234, '9,999') --> 1,234
A:
Any specific reason you want this done on the server side? Seems like it is a task better suited for the client/report.
Otherwise, you are storing a number as a string just so you can keep the formatting how you want it -- but you just lost the ability to do even basic arithmetic on it without having to reconvert it to a number.
If you're really determined to do it in SQL and have a justifiable reason for it, I guess my vote is on Scott's method: Value --> Money --> Varchar --> Trim off the decimal portion
-- Kevin Fairchild
A:
For SQL Server, you could format the number as money and then delete the right-most three characters.
replace(convert (varchar, convert (money, 109999), 1), '.00','')
| SQL Number Formatting | I'm looking to use SQL to format a number with commas in the thousands, but no decimal (so can't use Money) - any suggestions?
I'm using SQL Server 2005, but feel free to answer for others as well (like MySQL)
| [
"With TSQL you could cast to money and convert it will add the .00, but you could use replace or substring to remove.\nreplace(convert(varchar, cast(column as money), 1), '.00', '')\n\nIn SQL 2005 you could use a CLR function as well\n[Microsoft.SqlServer.Server.SqlFunction]\npublic static SqlString FormatNumber(SqlInt32 number)\n{\n return number.Value.ToString(\"N0\");\n}\n\nand call it as any other user-defined function\nSELECT dbo.FormatNumber(value)\n\n",
"In MySQL, the FORMAT() function will do the trick.\n",
"In Oracle you can specify a format parameter to the to_char function:\nTO_CHAR(1234, '9,999') --> 1,234\n",
"Any specific reason you want this done on the server side? Seems like it is a task better suited for the client/report.\nOtherwise, you are storing a number as a string just so you can keep the formatting how you want it -- but you just lost the ability to do even basic arithmetic on it without having to reconvert it to a number.\nIf you're really determined to do it in SQL and have a justifiable reason for it, I guess my vote is on Scott's method: Value --> Money --> Varchar --> Trim off the decimal portion\n-- Kevin Fairchild\n",
"For SQL Server, you could format the number as money and then delete the right-most three characters.\nreplace(convert (varchar, convert (money, 109999), 1), '.00','')\n\n"
] | [
4,
2,
2,
2,
0
] | [] | [] | [
"mysql",
"number_formatting",
"sql",
"sql_server"
] | stackoverflow_0000042203_mysql_number_formatting_sql_sql_server.txt |
Q:
Simple effects in Flex
I would like to show some hidden text in a Flex application and have it fade out in a couple of seconds...
I have looked into Delay and Pause effects in Flex, but have yet to see an example of how to do this realistically easy effect...
anyone know how to do it or have a good resource?
Thanks.
A:
If I understand you correctly, you want to have the text automatically fade out a few seconds after it is shown?
I would probably do something like this: (Haven't tested the code, so there are probably typos.)
<mx:Script>
import flash.utils.*;
var fadeTimer:Timer = new Timer(2000); // 2 seconds
fadeTimer.addEventListener("timer", fadeTimerTickHandler);
// Call this to show the hidden text.
function showTheText():void{
theTextField.visible = true;
fadeTimer.start();
}
// This gets called every time the timer "ticks" (2 seconds)
function fadeTimerTickHandler(eventArgs:TimerEvent){
fadeTimer.stop();
fadeTimer.reset();
theTextField.visible = false;
}
</mx:Script>
<mx:Fade id="hideEffectFade" alphaFrom="1.0" alphaTo="0.0" duration="900"/>
<mx:Text id="theTextField" text="The Text" hideEffect="{hideEffectFade}"/>
Also, you need to be sure to embed your fonts or the effect won't work on your text. See Simeon's post for more info.
| Simple effects in Flex | I would like to show some hidden text in a Flex application and have it fade out in a couple of seconds...
I have looked into Delay and Pause effects in Flex, but have yet to see an example of how to do this realistically easy effect...
anyone know how to do it or have a good resource?
Thanks.
| [
"If I understand you correctly, you want to have the text automatically fade out a few seconds after it is shown?\nI would probably do something like this: (Haven't tested the code, so there are probably typos.)\n<mx:Script>\n import flash.utils.*;\n\n var fadeTimer:Timer = new Timer(2000); // 2 seconds\n fadeTimer.addEventListener(\"timer\", fadeTimerTickHandler);\n\n // Call this to show the hidden text.\n function showTheText():void{\n theTextField.visible = true;\n fadeTimer.start();\n }\n\n // This gets called every time the timer \"ticks\" (2 seconds)\n function fadeTimerTickHandler(eventArgs:TimerEvent){\n fadeTimer.stop();\n fadeTimer.reset();\n theTextField.visible = false;\n }\n</mx:Script>\n\n<mx:Fade id=\"hideEffectFade\" alphaFrom=\"1.0\" alphaTo=\"0.0\" duration=\"900\"/>\n\n<mx:Text id=\"theTextField\" text=\"The Text\" hideEffect=\"{hideEffectFade}\"/>\n\nAlso, you need to be sure to embed your fonts or the effect won't work on your text. See Simeon's post for more info.\n"
] | [
2
] | [] | [] | [
"actionscript_3",
"apache_flex"
] | stackoverflow_0000042234_actionscript_3_apache_flex.txt |
Q:
What's the best way to read the contents of a text file to a string in .NET?
It seems like there should be something shorter than this:
private string LoadFromFile(string path)
{
try
{
string fileContents;
using(StreamReader rdr = File.OpenText(path))
{
fileContents = rdr.ReadToEnd();
}
return fileContents;
}
catch
{
throw;
}
}
A:
First of all, the title asks for "how to write the contents of a string to a text file"
but your code example is for "how to read the contents of a text file to a string".
Answer to both questions:
using System.IO;
...
string filename = "C:/example.txt";
string content = File.ReadAllText(filename);
File.WriteAllText(filename, content);
See also ReadAllLines/WriteAllLines and ReadAllBytes/WriteAllBytes if instead of a string you want a string array or byte array.
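For illustration, the variants look like this (the paths are just placeholders):
using System.IO;

string[] lines = File.ReadAllLines(@"c:\example.txt"); // one array element per line
byte[] bytes = File.ReadAllBytes(@"c:\example.bin");   // raw binary contents

File.WriteAllLines(@"c:\copy.txt", lines);
File.WriteAllBytes(@"c:\copy.bin", bytes);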
A:
string text = File.ReadAllText(@"c:\file1.txt");
File.WriteAllText(@"c:\file2.txt", text);
Also check out ReadAllLines/WriteAllLines and ReadAllBytes/WriteAllBytes
A:
There's no point in that exception handler. It does nothing. This is just a shorterned version of your code, it's fine:
private string LoadFromFile(string path)
{
using(StreamReader rdr = File.OpenText(path))
return rdr.ReadToEnd();
}
A:
File.ReadAllText() maybe?
ms-help://MS.VSCC.v90/MS.MSDNQTR.v90.en/fxref_mscorlib/html/4803f846-3d8a-de8a-18eb-32cfcd038f76.htm if you have VS2008's help installed.
| What's the best way to read the contents of a text file to a string in .NET? | It seems like there should be something shorter than this:
private string LoadFromFile(string path)
{
try
{
string fileContents;
using(StreamReader rdr = File.OpenText(path))
{
fileContents = rdr.ReadToEnd();
}
return fileContents;
}
catch
{
throw;
}
}
| [
"First of all, the title asks for \"how to write the contents of strnig to a text file\"\nbut your code example is for \"how to read the contents of a text file to a string.\nAnswer to both questions:\nusing System.IO;\n...\nstring filename = \"C:/example.txt\";\nstring content = File.ReadAllText(filename);\nFile.WriteAllText(filename, content);\n\nSee also ReadAllLines/WriteAllLines and ReadAllBytes/WriteAllBytes if instead of a string you want a string array or byte array.\n",
"string text = File.ReadAllText(\"c:\\file1.txt\");\nFile.WriteAllText(\"c:\\file2.txt\", text);\n\nAlso check out ReadAllLines/WriteAllLines and ReadAllBytes/WriteAllBytes\n",
"There's no point in that exception handler. It does nothing. This is just a shorterned version of your code, it's fine:\n private string LoadFromFile(string path)\n {\n using(StreamReader rdr = File.OpenText(path))\n return rdr.ReadToEnd();\n }\n\n",
"File.ReadAllText() maybe?\nms-help://MS.VSCC.v90/MS.MSDNQTR.v90.en/fxref_mscorlib/html/4803f846-3d8a-de8a-18eb-32cfcd038f76.htm if you have VS2008's help installed.\n"
] | [
17,
5,
4,
3
] | [] | [] | [
".net",
"string",
"text_files"
] | stackoverflow_0000042286_.net_string_text_files.txt |
Q:
C++ Compiler Error C2371 - Redefinition of WCHAR
I am getting C++ Compiler error C2371 when I include a header file that itself includes odbcss.h. My project is set to MBCS.
C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\odbcss.h(430) :
error C2371: 'WCHAR' : redefinition; different basic types 1>
C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\winnt.h(289) :
see declaration of 'WCHAR'
I don't see any defines in odbcss.h that I could set to avoid this. Has anyone else seen this?
A:
This is a known bug - see the Microsoft Connect website:
http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=98699
The error doesn't occur if you compile your app as Unicode instead of MBCS.
A:
There are a half-dozen posts on various forums around the web about this - it seems to potentially be an issue when odbcss.h is used in the presence of MFC. Most of the answers involve changing the order of included headers (voodoo debugging). The header that includes odbcss.h compiles fine in its native project, but when it is included in a different project, it gives this error. We even put it in the latter project's stdafx.h, right after the base include for MFC, and still no joy. We finally worked around it by moving it into a cpp file in the original project, which does not use MFC (which should have been done anyway - but it wasn't our code). So we've got a work-around, but no real solution.
A:
This error happens when you redeclare a variable of the same name as a variable that has already been declared. Have you looked to see if odbcss.h has declared a variable you already have?
A:
does this help?
http://bytes.com/forum/thread602063.html
Content from the thread:
Bruno van Dooren [MVP VC++] but i know the solution of this problem.
it solves by changing project setting of "Treat wchar_t as Built-in
Type" value "No (/Zc:wchar_t-)". But I am using "Xtreme Toolkit
Professional Edition" for making good look & Feel of an application,
when i fix the above problem by changing project settings a new
linking errors come from Xtreme Toolkit Library. So what i do to fix
this problem, in project setting "Treat wchar_t as Built-in Type"
value "yes" and i wrote following statements where i included wab.h
header file. You can change that setting on a per-codefile basis so
that only specific files are compiled with that particular setting. If
you can solve your problems that way it would be the cleanest
solution.
#define WIN16
#include "wab.h"
#undef WIN16
and after that my project is working fine and all the things related to WAB is also working fine. any one guide me, is that the right way
to solve this problem??? and, will this have any effect on the rest of
project?? I wouldn't worry about it. whatever the definition, it is a
16 bit variable in both cases. I agree that it isn't the best looking
solution, but it should work IF WIN16 has no other impact inside the
wab.h file.
--
Kind regards, Bruno van Dooren bruno_nos_pam_van_dooren@hotmail.com
Remove only "_nos_pam"
| C++ Compiler Error C2371 - Redefinition of WCHAR | I am getting C++ Compiler error C2371 when I include a header file that itself includes odbcss.h. My project is set to MBCS.
C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\odbcss.h(430) :
error C2371: 'WCHAR' : redefinition; different basic types 1>
C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\winnt.h(289) :
see declaration of 'WCHAR'
I don't see any defines in odbcss.h that I could set to avoid this. Has anyone else seen this?
| [
"This is a known bug - see the Microsoft Connect website:\nhttp://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=98699\nThe error doesn't occur if you compile your app as Unicode instead of MBCS.\n",
"There are a half-dozen posts on various forums around the web about this - it seems to potentially be an issue when odbcss.h is used in the presence of MFC. Most of the answers involve changing the order of included headers (voodoo debugging). The header that includes odbcss.h compiles fine in it's native project, but when it is included in a different project, it gives this error. We even put it in the latter project's stdafx.h, right after the base include for MFC, and still no joy. We finally worked around it by moving it into a cpp file in the original project, which does not use MFC (which should have been done anyway - but it wasn't our code). So we've got a work-around, but no real solution.\n",
"This error happens when you redeclare a variable of the same name as a variable that has already been declared. Have you looked to see if odbcss.h has declared a variable you already have?\n",
"does this help?\nhttp://bytes.com/forum/thread602063.html\nContent from the thread:\n\nBruno van Dooren [MVP VC++] but i know the solution of this problem.\n it solves by changing project setting of \"Treat wchar_t as Built-in\n Type\" value \"No (/Zc:wchar_t-)\". But I am using \"Xtreme Toolkit\n Professional Edition\" for making good look & Feel of an application,\n when i fix the above problem by changing project settings a new\n linking errors come from Xtreme Toolkit Library. So what i do to fix\n this problem, in project setting \"Treat wchar_t as Built-in Type\"\n value \"yes\" and i wrote following statements where i included wab.h\n header file. You can change that setting on a per-codefile basis so\n that only specific files are compiled with that particular setting. If\n you can solve your problems that way it would be the cleanest\n solution.\n#define WIN16\n#include \"wab.h\"\n#undef WIN16\nand after that my project is working fine and all the things related to WAB is also working fine. any one guide me, is that the right way\n to solve this problem??? and, will this have any effect on the rest of\n project?? I wouldn't worry about it. whatever the definition, it is a\n 16 bit variable in both cases. I agree that it isn't the best looking\n solution, but it should work IF WIN16 has no other impact inside the\n wab.h file.\n--\nKind regards, Bruno van Dooren bruno_nos_pam_van_dooren@hotmail.com\n Remove only \"_nos_pam\"\n\n"
] | [
2,
1,
0,
0
] | [] | [] | [
"c++",
"visual_studio"
] | stackoverflow_0000042126_c++_visual_studio.txt |
Q:
Are There Adapters for CF Type II to MicroSD?
I honestly have only started recently researching this so my knowledge is limited. I was approached about adapting some Pocket PC software to operate on the Windows 6 platform. After considering how I would go about doing that in the Compact Framework I received more details.
It seems there is a desire to utilize (re-use) CF Type II devices on a mobile phone platform (using more modern miniSD or microSD slots). While there exist plenty of microSD to CF adapters, there seems to be none going the other direction (even though I realize that would be an awkward looking adapter in physical design). Is this true and what prevents this technically?
A:
There is nothing that does this currently. Likely because you can't exactly fit a CF card in a MicroSD slot; it would have to have some weird cable coming off of it, which would likely cause it to no longer fit in the slot. Also, CF is a Parallel interface while SD uses a Serial interface.
| Are There Adapters for CF Type II to MicroSD? | I honestly have only started recently researching this so my knowledge is limited. I was approached about adapting some Pocket PC software to operate on the Windows 6 platform. After considering how I would go about doing that in the Compact Framework I received more details.
It seems there is a desire to utilize (re-use) CF Type II devices on a mobile phone platform (using more modern miniSD or microSD slots). While there exist plenty of microSD to CF adapters, there seems to be none going the other direction (even though I realize that would be an awkward looking adapter in physical design). Is this true and what prevents this technically?
| [
"There is nothing that does this currently. Likely because you can't exactly fit a CF card in a MicroSD card....it would have to have some weird cable coming off of it, which would likely cause it to no longer fit in the slot. Also, CF is a Parallel interface while SD uses a Serial interface.\n"
] | [
1
] | [] | [] | [
"hardware",
"mobile",
"pocketpc"
] | stackoverflow_0000042312_hardware_mobile_pocketpc.txt |
Q:
IE6 rendering UL's incorrectly
Sometimes IE6 will render the text of a <ul> list the same color as the background color. If you select it, they show back up, or if you scroll the page up and back down.
It is obviously a rendering bug, but I was wondering if anyone knows of a workaround to make it reliable?
A:
try giving it hasLayout with
zoom: 1
A:
Have you tried explicitly setting a line-height? For some reason this seems to be the solution to a great many IE6 rendering bugs!
e.g.
.mylist {
line-height: 1.6em;
}
| IE6 rendering UL's incorrectly | Sometimes IE6 will render the text of a <ul> list the same color as the background color. If you select it, they show back up, or if you scroll the page up and back down.
It is obviously a rendering bug, but I was wondering if anyone knows of a workaround to make it reliable?
| [
"try giving it hasLayout with\nzoom: 1\n\n",
"Have you tried explicitly setting a line-height? For some reason this seems to be the solution to a great many IE6 rendering bugs!\ne.g.\n.mylist {\n line-height: 1.6em;\n}\n\n"
] | [
1,
0
] | [] | [] | [
"css",
"html",
"internet_explorer_6",
"rendering"
] | stackoverflow_0000042342_css_html_internet_explorer_6_rendering.txt |
Q:
What is the C# equivalent of the Oracle PL/SQL COALESCE function?
Is there a one statement or one line way to accomplish something like this, where the string s is declared AND assigned the first non-null value in the expression?
//pseudo-codeish
string s = Coalesce(string1, string2, string3);
or, more generally,
object obj = Coalesce(obj1, obj2, obj3, ...objx);
A:
As Darren Kopp said.
Your statement
object obj = Coalesce(obj1, obj2, obj3, ...objx);
Can be written like this:
object obj = obj1 ?? obj2 ?? obj3 ?? ... objx;
to put it in other words:
var a = b ?? c;
is equivalent to
var a = b != null ? b : c;
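If you specifically want the n-ary Coalesce(...) call shape from the question rather than chained ?? operators, a small generic helper will do it. This is just a sketch, not a framework method:
static T Coalesce<T>(params T[] values) where T : class
{
    // Return the first non-null argument, or null if there is none.
    foreach (T value in values)
        if (value != null)
            return value;
    return null;
}

// usage, mirroring the pseudo-code in the question:
string s = Coalesce(string1, string2, string3);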
A:
the ?? operator.
string a = nullstring ?? "empty!";
| What is the C# equivalent of the Oracle PL/SQL COALESCE function? | Is there a one statement or one line way to accomplish something like this, where the string s is declared AND assigned the first non-null value in the expression?
//pseudo-codeish
string s = Coalesce(string1, string2, string3);
or, more generally,
object obj = Coalesce(obj1, obj2, obj3, ...objx);
| [
"As Darren Kopp said.\nYour statement\nobject obj = Coalesce(obj1, obj2, obj3, ...objx);\n\nCan be written like this:\nobject obj = obj1 ?? obj2 ?? obj3 ?? ... objx;\n\nto put it in other words:\nvar a = b ?? c;\n\nis equivalent to\nvar a = b != null ? b : c;\n\n",
"the ?? operator.\nstring a = nullstring ?? \"empty!\";\n\n"
] | [
14,
2
] | [] | [] | [
"c#",
"coalesce",
"oracle"
] | stackoverflow_0000042386_c#_coalesce_oracle.txt |
Q:
How do I implement OpenID in my web application?
Does Stackoverflow create a new OpenID when a user registers with an email address (i.e. does not provide an existing OpenID)? How do you do that? Do you have code examples in C#? Java? Python?
A:
You can find OpenID implementations here. If you just want more information, I would check out the OpenID site.
A:
The Plaxo OpenID recipe (from the OpenID site) was one of the better howtos I've seen.
A:
Scott Hanselman posted a while back about setting up OpenID in .net.
| How do I implement OpenID in my web application? | Does Stackoverflow create a new OpenID when a user registers with an email address (i.e. does not provide an existing OpenID)? How do you do that? Do you have code examples in C#? Java? Python?
| [
"You can find OpenID implementations here. If you just want more information, I would check out the OpenID site.\n",
"The Plaxo OpenID recipe (from the OpenID site) was one of the better howtos I've seen.\n",
"Scott Hanselman posted a while back about setting up OpenID in .net.\n"
] | [
13,
7,
1
] | [
"I think you are mis-understanding OpenID, the process of registering and OpenID is the responsibility of the user, you'll note that there is no place to signup here without an OpenID.\n"
] | [
-1
] | [
"openid",
"web_applications"
] | stackoverflow_0000042407_openid_web_applications.txt |
Q:
Why is my image coming out garbled?
I've got some Java code using a servlet and Apache Commons FileUpload to upload a file to a set directory. It's working fine for character data (e.g. text files) but image files are coming out garbled. I can open them but the image doesn't look like it should. Here's my code:
Servlet
protected void doPost(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
try {
String customerPath = "\\leetest\\";
// Check that we have a file upload request
boolean isMultipart = ServletFileUpload.isMultipartContent(request);
if (isMultipart) {
// Create a new file upload handler
ServletFileUpload upload = new ServletFileUpload();
// Parse the request
FileItemIterator iter = upload.getItemIterator(request);
while (iter.hasNext()) {
FileItemStream item = iter.next();
String name = item.getFieldName();
if (item.isFormField()) {
// Form field. Ignore for now
} else {
BufferedInputStream stream = new BufferedInputStream(item
.openStream());
if (stream == null) {
LOGGER
.error("Something went wrong with fetching the stream for field "
+ name);
}
byte[] bytes = StreamUtils.getBytes(stream);
FileManager.createFile(customerPath, item.getName(), bytes);
stream.close();
}
}
}
} catch (Exception e) {
throw new UploadException("An error occured during upload: "
+ e.getMessage());
}
}
StreamUtils.getBytes(stream) looks like:
public static byte[] getBytes(InputStream src, int buffsize)
throws IOException {
ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
byte[] buff = new byte[buffsize];
while (true) {
int nBytesRead = src.read(buff);
if (nBytesRead < 0) {
break;
}
byteStream.write(buff);
}
byte[] result = byteStream.toByteArray();
byteStream.close();
return result;
}
And finally FileManager.createFile looks like:
public static void createFile(String customerPath, String filename,
byte[] fileData) throws IOException {
customerPath = getFullPath(customerPath + filename);
File newFile = new File(customerPath);
if (!newFile.getParentFile().exists()) {
newFile.getParentFile().mkdirs();
}
FileOutputStream outputStream = new FileOutputStream(newFile);
outputStream.write(fileData);
outputStream.close();
}
Can anyone spot what I'm doing wrong?
Cheers,
Lee
A:
One thing I don't like is here in this block from StreamUtils.getBytes():
1 while (true) {
2 int nBytesRead = src.read(buff);
3 if (nBytesRead < 0) {
4 break;
5 }
6 byteStream.write(buff);
7 }
At line 6, it writes the entire buffer, no matter how many bytes are read in. I am not convinced this will always be the case. It would be more correct like this:
1 while (true) {
2 int nBytesRead = src.read(buff);
3 if (nBytesRead < 0) {
4 break;
5 } else {
6 byteStream.write(buff, 0, nBytesRead);
7 }
8 }
Note the 'else' on line 5, along with the two additional parameters (array index start position and length to copy) on line 6.
I could imagine that for larger files, like images, the buffer returns before it is filled (maybe it is waiting for more). That means you'd be unintentionally writing old data that was remaining in the tail end of the buffer. This is almost certainly happening most of the time at EoF, assuming a buffer > 1 byte, but extra data at EoF is probably not the cause of your corruption...it is just not desirable.
A:
I'd just use commons io. Then you could just do an IOUtils.copy(InputStream, OutputStream);
It's got lots of other useful utility methods.
A:
Are you sure that the image isn't coming through garbled or that you aren't dropping some packets on the way in.
A:
I don't know what difference it makes, but there seems to be a mismatch of method signatures. The getBytes() method called in your doPost() method has only one argument:
byte[] bytes = StreamUtils.getBytes(stream);
while the method source you included has two arguments:
public static byte[] getBytes(InputStream src, int buffsize)
Hope that helps.
A:
Can you perform a checksum on your original file and the uploaded file and see if there are any immediate differences?
If there are then you can look at performing a diff, to determine the exact part(s) of the file that are missing or changed.
Things that pop to mind are the beginning or end of stream, or endianness.
| Why is my image coming out garbled? | I've got some Java code using a servlet and Apache Commons FileUpload to upload a file to a set directory. It's working fine for character data (e.g. text files) but image files are coming out garbled. I can open them but the image doesn't look like it should. Here's my code:
Servlet
protected void doPost(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
try {
String customerPath = "\\leetest\\";
// Check that we have a file upload request
boolean isMultipart = ServletFileUpload.isMultipartContent(request);
if (isMultipart) {
// Create a new file upload handler
ServletFileUpload upload = new ServletFileUpload();
// Parse the request
FileItemIterator iter = upload.getItemIterator(request);
while (iter.hasNext()) {
FileItemStream item = iter.next();
String name = item.getFieldName();
if (item.isFormField()) {
// Form field. Ignore for now
} else {
BufferedInputStream stream = new BufferedInputStream(item
.openStream());
if (stream == null) {
LOGGER
.error("Something went wrong with fetching the stream for field "
+ name);
}
byte[] bytes = StreamUtils.getBytes(stream);
FileManager.createFile(customerPath, item.getName(), bytes);
stream.close();
}
}
}
} catch (Exception e) {
throw new UploadException("An error occured during upload: "
+ e.getMessage());
}
}
StreamUtils.getBytes(stream) looks like:
public static byte[] getBytes(InputStream src, int buffsize)
throws IOException {
ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
byte[] buff = new byte[buffsize];
while (true) {
int nBytesRead = src.read(buff);
if (nBytesRead < 0) {
break;
}
byteStream.write(buff);
}
byte[] result = byteStream.toByteArray();
byteStream.close();
return result;
}
And finally FileManager.createFile looks like:
public static void createFile(String customerPath, String filename,
byte[] fileData) throws IOException {
customerPath = getFullPath(customerPath + filename);
File newFile = new File(customerPath);
if (!newFile.getParentFile().exists()) {
newFile.getParentFile().mkdirs();
}
FileOutputStream outputStream = new FileOutputStream(newFile);
outputStream.write(fileData);
outputStream.close();
}
Can anyone spot what I'm doing wrong?
Cheers,
Lee
| [
"One thing I don't like is here in this block from StreamUtils.getBytes():\n 1 while (true) {\n 2 int nBytesRead = src.read(buff);\n 3 if (nBytesRead < 0) {\n 4 break;\n 5 }\n 6 byteStream.write(buff);\n 7 }\n\nAt line 6, it writes the entire buffer, no matter how many bytes are read in. I am not convinced this will always be the case. It would be more correct like this:\n 1 while (true) {\n 2 int nBytesRead = src.read(buff);\n 3 if (nBytesRead < 0) {\n 4 break;\n 5 } else {\n 6 byteStream.write(buff, 0, nBytesRead);\n 7 }\n 8 }\n\nNote the 'else' on line 5, along with the two additional parameters (array index start position and length to copy) on line 6.\nI could imagine that for larger files, like images, the buffer returns before it is filled (maybe it is waiting for more). That means you'd be unintentionally writing old data that was remaining in the tail end of the buffer. This is almost certainly happening most of the time at EoF, assuming a buffer > 1 byte, but extra data at EoF is probably not the cause of your corruption...it is just not desirable. \n",
"I'd just use commons io Then you could just do an IOUtils.copy(InputStream, OutputStream);\nIt's got lots of other useful utility methods. \n",
"Are you sure that the image isn't coming through garbled or that you aren't dropping some packets on the way in. \n",
"I don't know what difference it makes, but there seems to be a mismatch of method signatures. The getBytes() method called in your doPost() method has only one argument:\nbyte[] bytes = StreamUtils.getBytes(stream);\n\nwhile the method source you included has two arguments:\npublic static byte[] getBytes(InputStream src, int buffsize)\n\nHope that helps.\n",
"Can you perform a checksum on your original file, and the uploaded file and see if there is any immediate differences?\nIf there are then you can look at performing a diff, to determine the exact part(s) of the file that are missing changed.\nThings that pop to mind is beginning or end of stream, or endianness.\n"
] | [
4,
1,
0,
0,
0
] | [] | [] | [
"apache_commons_fileupload",
"file_io",
"java"
] | stackoverflow_0000041686_apache_commons_fileupload_file_io_java.txt |
Q:
How do I measure bytes in/out of an IP port used for .NET remoting?
I am using .NET remoting to retrieve periodic status updates from a Windows service into a 'controller' application which is used to display some live stats about what the service is doing.
The resulting network traffic is huge - many times the size of the data for the updates - so clearly I have implemented the remoting code incorrectly in a very inefficient way. As a first step towards fixing it, I need to monitor the traffic on the IP port the service is using to talk to the controller, so that I can establish a baseline and then verify a fix.
Can anyone recommend a utility and/or coding technique that I can use to get the traffic stats? A "bytes sent" count for the port would suffice.
A:
Wireshark is one of the best tools for capturing and analyzing IP traffic.
[Edit] Sort of lame that you answered first and didn't get the check mark. I didn't mean to snake you. +1 as a consolation.
A:
I highly recommend Wireshark for traffic analysis.
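If you would rather get a raw byte count from code than from a sniffer, one low-tech option is a tiny relay that sits between the controller and the service and counts the bytes flowing through it. The sketch below assumes the remoting service listens on port 9000 and the controller can be repointed at 9001; both port numbers are made up, and error handling and cleanup are omitted:
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class ByteCountingProxy
{
    static void Main()
    {
        // Accept the controller here; forward to the real remoting port.
        TcpListener listener = new TcpListener(IPAddress.Loopback, 9001);
        listener.Start();
        while (true)
        {
            TcpClient client = listener.AcceptTcpClient();
            TcpClient service = new TcpClient("localhost", 9000);
            Stream c = client.GetStream();
            Stream s = service.GetStream();
            new Thread(() => Pump(c, s, "controller -> service")).Start();
            new Thread(() => Pump(s, c, "service -> controller")).Start();
        }
    }

    // Copies one direction of the conversation, counting bytes as it goes.
    static void Pump(Stream from, Stream to, string label)
    {
        byte[] buffer = new byte[8192];
        long total = 0;
        int n;
        while ((n = from.Read(buffer, 0, buffer.Length)) > 0)
        {
            to.Write(buffer, 0, n);
            total += n;
            Console.WriteLine("{0}: {1} bytes", label, total);
        }
    }
}
Running the controller through the relay for a fixed period gives the "bytes sent" baseline; repeat after the fix to verify the improvement.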
| How do I measure bytes in/out of an IP port used for .NET remoting? | I am using .NET remoting to retrieve periodic status updates from a Windows service into a 'controller' application which is used to display some live stats about what the service is doing.
The resulting network traffic is huge - many times the size of the data for the updates - so clearly I have implemented the remoting code incorrectly in a very inefficient way. As a first step towards fixing it, I need to monitor the traffic on the IP port the service is using to talk to the controller, so that I can establish a baseline and then verify a fix.
Can anyone recommend a utility and/or coding technique that I can use to get the traffic stats? A "bytes sent" count for the port would suffice.
| [
"Wireshark is one of the best tools for capturing and analyzing IP traffic.\n[Edit] Sort of lame that you answered first and didn't get the check mark. I didn't mean to snake you. +1 as a consolation.\n",
"I highly recommend Wireshark for traffic analysis.\n"
] | [
6,
6
] | [] | [] | [
".net",
"networking",
"remoting",
"windows"
] | stackoverflow_0000042468_.net_networking_remoting_windows.txt |
Q:
Are semicolons needed after an object literal assignment in JavaScript?
The following code illustrates an object literal being assigned, but with no semicolon afterwards:
var literal = {
say: function(msg) { alert(msg); }
}
literal.say("hello world!");
This appears to be legal, and doesn't issue a warning (at least in Firefox 3). Is this completely legal, or is there a strict version of JavaScript where this is not allowed?
I'm wondering in particular for future compatibility issues... I would like to be writing "correct" JavaScript, so if technically I need to use the semicolon, I would like to be using it.
A:
Not technically, JavaScript has semicolons as optional in many situations.
But, as a general rule, use them at the end of any statement. Why? Because if you ever want to compress the script, it will save you from countless hours of frustration.
Automatic semicolon insertion is performed by the interpreter, so you can leave them out if you so choose. In the comments, someone claimed that
Semicolons are not optional with statements like break/continue/throw
but this is incorrect. They are optional; what is really happening is that line terminators affect the automatic semicolon insertion; it is a subtle difference.
Here is the rest of the standard on semicolon insertion:
For convenience, however, such semicolons may be omitted from the source text in certain situations. These situations are described by saying that semicolons are automatically inserted into the source code token stream in those situations.
A:
The YUI Compressor and dojo shrinksafe should work perfectly fine without semicolons since they're based on a full JavaScript parser. But Packer and JSMin won't.
The other reason to always use semi-colons at the end of statements is that occasionally you can accidentally combine two statements to create something very different. For example, if you follow the statement with the common technique to create a scope using a closure:
var literal = {
say: function(msg) { alert(msg); }
}
(function() {
// ....
})();
The parser might interpret the brackets as a function call, here causing a type error, but in other circumstances it could cause a subtle bug that's tricky to trace. Another interesting mishap is if the next statement starts with a regular expression, the parser might think the first forward slash is a division symbol.
A:
JavaScript interpreters do something called "semicolon insertion", so if a line without a semicolon is valid, a semicolon will quietly be added to the end of the statement and no error will occur.
var foo = 'bar'
// Valid, foo now contains 'bar'
var bas =
{ prop: 'yay!' }
// Valid, bas now contains object with property 'prop' containing 'yay!'
var zeb =
switch (zeb) {
...
// Invalid, because the lines following 'var zeb =' aren't an assignable value
Not too complicated and at least an error gets thrown when something is clearly not right. But there are cases where an error is not thrown, but the statements are not executed as intended due to semicolon insertion. Consider a function that is supposed to return an object:
return {
prop: 'yay!'
}
// The object literal gets returned as expected and all is well
return
{
prop: 'nay!'
}
// Oops! return by itself is a perfectly valid statement, so a semicolon
// is inserted and undefined is unexpectedly returned, rather than the object
// literal. Note that no error occurred.
Bugs like this can be maddeningly difficult to hunt down and while you can't ensure this never happens (since there's no way I know of to turn off semicolon insertion), these sorts of bugs are easier to identify when you make your intentions clear by consistently using semicolons. That and explicitly adding semicolons is generally considered good style.
I was first made aware of this insidious little possibility when reading Douglas Crockford's superb and succinct book "JavaScript: The Good Parts". I highly recommend it.
A:
In this case there is no need for a semicolon at the end of the statement. The conclusion is the same but the reasoning is way off.
JavaScript does not have semicolons as "optional". Rather, it has strict rules around automatic semicolon insertion. Semicolons are not optional with statements like break, continue, or throw. Refer to the ECMA Language Specification for more details; specifically 11.9.1, rules of automatic semicolon insertion.
A:
Use JSLint to keep your JavaScript clean and tidy
JSLint says:
Error:
Implied global: alert 2
Problem at line 3 character 2: Missing
semicolon.
}
A:
The semi-colon is not necessary. Some people choose to follow the convention of always terminating with a semi-colon instead of allowing JavaScript to do so automatically at linebreaks, but I'm sure you'll find groups advocating either direction.
If you are looking at writing "correct" JavaScript, I would suggest testing things in Firefox with javascript.options.strict (accessed via about:config) set to true. It might not catch everything, but it should help you ensure your JavaScript code is more compliant.
| Are semicolons needed after an object literal assignment in JavaScript? | The following code illustrates an object literal being assigned, but with no semicolon afterwards:
var literal = {
say: function(msg) { alert(msg); }
}
literal.say("hello world!");
This appears to be legal, and doesn't issue a warning (at least in Firefox 3). Is this completely legal, or is there a strict version of JavaScript where this is not allowed?
I'm wondering in particular for future compatibility issues... I would like to be writing "correct" JavaScript, so if technically I need to use the semicolon, I would like to be using it.
| [
"Not technically, JavaScript has semicolons as optional in many situations. \nBut, as a general rule, use them at the end of any statement. Why? Because if you ever want to compress the script, it will save you from countless hours of frustration.\nAutomatic semicolon insertion is performed by the interpreter, so you can leave them out if you so choose. In the comments, someone claimed that \n\nSemicolons are not optional with statements like break/continue/throw\n\nbut this is incorrect. They are optional; what is really happening is that line terminators affect the automatic semicolon insertion; it is a subtle difference. \nHere is the rest of the standard on semicolon insertion:\n\nFor convenience, however, such semicolons may be omitted from the source text in certain situations. These situations are described by saying that semicolons are automatically inserted into the source code token stream in those situations.\n\n",
"The YUI Compressor and dojo shrinksafe should work perfectly fine without semicolons since they're based on a full JavaScript parser. But Packer and JSMin won't.\nThe other reason to always use semi-colons at the end of statements is that occasionally you can accidentally combine two statements to create something very different. For example, if you follow the statement with the common technique to create a scope using a closure:\nvar literal = {\n say: function(msg) { alert(msg); }\n}\n(function() {\n // ....\n})();\n\nThe parser might interpret the brackets as a function call, here causing a type error, but in other circumstances it could cause a subtle bug that's tricky to trace. Another interesting mishap is if the next statement starts with a regular expression, the parser might think the first forward slash is a division symbol.\n",
"JavaScript interpreters do something called \"semicolon insertion\", so if a line without a semicolon is valid, a semicolon will quietly be added to the end of the statement and no error will occur.\nvar foo = 'bar'\n// Valid, foo now contains 'bar'\nvar bas =\n { prop: 'yay!' }\n// Valid, bas now contains object with property 'prop' containing 'yay!'\nvar zeb =\nswitch (zeb) {\n ...\n// Invalid, because the lines following 'var zeb =' aren't an assignable value\n\nNot too complicated and at least an error gets thrown when something is clearly not right. But there are cases where an error is not thrown, but the statements are not executed as intended due to semicolon insertion. Consider a function that is supposed to return an object:\nreturn {\n prop: 'yay!'\n}\n// The object literal gets returned as expected and all is well\nreturn\n{\n prop: 'nay!'\n}\n// Oops! return by itself is a perfectly valid statement, so a semicolon\n// is inserted and undefined is unexpectedly returned, rather than the object\n// literal. Note that no error occurred.\n\nBugs like this can be maddeningly difficult to hunt down and while you can't ensure this never happens (since there's no way I know of to turn off semicolon insertion), these sorts of bugs are easier to identify when you make your intentions clear by consistently using semicolons. That and explicitly adding semicolons is generally considered good style.\nI was first made aware of this insidious little possibility when reading Douglas Crockford's superb and succinct book \"JavaScript: The Good Parts\". I highly recommend it.\n",
"In this case there is no need for a semicolon at the end of the statement. The conclusion is the same but the reasoning is way off.\nJavaScript does not have semicolons as \"optional\". Rather, it has strict rules around automatic semicolon insertion. Semicolons are not optional with statements like break, continue, or throw. Refer to the ECMA Language Specification for more details; specifically 11.9.1, rules of automatic semicolon insertion.\n",
"Use JSLint to keep your JavaScript clean and tidy\nJSLint says: \n\nError:\nImplied global: alert 2\nProblem at line 3 character 2: Missing\n semicolon.\n}\n\n",
"The semi-colon is not necessary. Some people choose to follow the convention of always terminating with a semi-colon instead of allowing JavaScript to do so automatically at linebreaks, but I'm sure you'll find groups advocating either direction.\nIf you are looking at writing \"correct\" JavaScript, I would suggest testing things in Firefox with javascript.options.strict (accessed via about:config) set to true. It might not catch everything, but it should help you ensure your JavaScript code is more compliant.\n"
] | [
40,
16,
8,
3,
0,
0
] | [
"This is not valid (see clarification below) JavaScript code, since the assignment is just a regular statement, no different from\nvar foo = \"bar\";\n\nThe semicolon can be left out since JavaScript interpreters attempt to add a semicolon to fix syntax errors, but this is an extra and unnecessary step. I don't know of any strict mode, but I do know that automated parsers or compressors / obfuscators need that semicolon.\nIf you want to be writing correct JavaScript code, write the semicolon :-)\nAccording to the ECMAscript spec, http://www.ecma-international.org/publications/standards/Ecma-262.htm, the semicolons are automatically inserted if missing. This makes them not required for the script author, but it implies they are required for the interpreter. This means the answer to the original question is 'No', they are not required when writing a script, but, as is pointed out by others, it is recommended for various reasons.\n"
] | [
-1
] | [
"javascript"
] | stackoverflow_0000042247_javascript.txt |
Q:
MVC - where to implement form validation (server-side)?
In coding a traditional MVC application, what is the best practice for coding server-side form validations? Does the code belong in the controller, or the model layer? And why?
A:
From Wikipedia:
Model-view-controller (MVC) is an architectural pattern used in software engineering. Successful use of the pattern isolates business logic from user interface considerations, resulting in an application where it is easier to modify either the visual appearance of the application or the underlying business rules without affecting the other. In MVC, the model represents the information (the data) of the application and the business rules used to manipulate the data; the view corresponds to elements of the user interface such as text, checkbox items, and so forth; and the controller manages details involving the communication to the model of user actions such as keystrokes and mouse movements.
Thus, model - it holds the application and the business rules.
A:
I completely agree with Josh. However, you may create a kind of validation layer between the Controller and Model so that most of the syntactic validations can be carried out on the data before it reaches the model.
For example,
The validation layer would validate the date format, amount format, mandatory fields, etc...
That way the model can concentrate purely on business validations, like x amount should be greater than y amount.
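To make the split concrete, here is a rough C# sketch (the class and the rule are invented for illustration). The validation layer has already ensured Amount parsed as a number; the model owns the business rule:
public class Transfer
{
    public decimal Amount { get; set; }
    public decimal AccountBalance { get; set; }

    // Business rule: lives with the data it governs.
    public bool IsValid(out string error)
    {
        if (Amount > AccountBalance)
        {
            error = "Amount must not exceed the account balance.";
            return false;
        }
        error = null;
        return true;
    }
}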
A:
My experience with MVC thus far consists of entirely rails.
Rails does its validation 100% in the Model.
For the most part this works very well. I'd say 9 out of 10 times it's all you need.
There are some areas however where what you're submitting from a form doesn't match up with your model properly. There may be some additional filtering/rearranging and so on.
The best way to solve these situations I've found is to create faux-model objects, which basically act like Model objects but map 1-to-1 with the form data. These faux-model objects don't actually save anything, they're just a bucket for the data with validations attached.
An example of such a thing (in rails) is ActiveForm
Once the data gets into those (and is valid) it's usually a pretty simple step to transfer it directly across to your actual models.
A:
The basic syntax check should be in the control as it translates the user input for the model. The model needs to do the real data validation.
| MVC - where to implement form validation (server-side)? | In coding a traditional MVC application, what is the best practice for coding server-side form validations? Does the code belong in the controller, or the model layer? And why?
| [
"From Wikipedia:\n\nModel-view-controller (MVC) is an architectural pattern used in software engineering. Successful use of the pattern isolates business logic from user interface considerations, resulting in an application where it is easier to modify either the visual appearance of the application or the underlying business rules without affecting the other. In MVC, the model represents the information (the data) of the application and the business rules used to manipulate the data; the view corresponds to elements of the user interface such as text, checkbox items, and so forth; and the controller manages details involving the communication to the model of user actions such as keystrokes and mouse movements.\n\nThus, model - it holds the application and the business rules.\n",
"I completely agree with Josh. However you may create a kind of validation layer between Controller and Model so that most of syntactical validations can be carried out on data before it reaches to model.\nFor example,\nThe validation layer would validate the date format, amount format, mandatory fields, etc... \nSo that model would purely concentrate on business validations like x amount should be greater than y amount.\n",
"My experience with MVC thus far consists of entirely rails.\nRails does it's validation 100% in the Model.\nFor the most part this works very well. I'd say 9 out of 10 times it's all you need.\nThere are some areas however where what you're submitting from a form doesn't match up with your model properly. There may be some additional filtering/rearranging or so on.\nThe best way to solve these situations I've found is to create faux-model objects, which basically act like Model objects but map 1-to-1 with the form data. These faux-model objects don't actually save anything, they're just a bucket for the data with validations attached.\nAn example of such a thing (in rails) is ActiveForm\nOnce the data gets into those (and is valid) it's usually a pretty simple step to transfer it directly across to your actual models.\n",
"The basic syntax check should be in the control as it translates the user input for the model. The model needs to do the real data validation.\n"
] | [
4,
4,
0,
0
] | [] | [] | [
"forms",
"model_view_controller",
"validation"
] | stackoverflow_0000025675_forms_model_view_controller_validation.txt |
Q:
TFS - Branching for experimental development: Solution fails to load
Disclaimer: I'm stuck on TFS and I hate it.
My source control structure looks like this:
/dev
/releases
/branches
/experimental-upgrade
I branched from dev to experimental-upgrade and didn't touch it. I then did some more work in dev and merged to experimental-upgrade. Somehow TFS complained that I had changes in both source and target and I had to resolve them. I chose to "Copy item from source branch" for all 5 items.
I check out the experimental-upgrade to a local folder and try to open the main solution file in there. TFS prompts me:
"Projects have recently been added to this solution. Would you like to get them from source control?
If I say yes it does some stuff but ultimately comes back failing to load a handful of the projects. If I say no I get the same result.
Comparing my sln in both branches tells me that they are equal.
Can anyone let me know what I'm doing wrong? This should be a straightforward branch/merge operation...
TIA.
UPDATE:
I noticed that if I click "yes" on the above dialog, the projects are downloaded to the $/ root of source control... (i.e. out of the dev & branches folders)
If I open up the solution in the branch, remove the dead projects, and try to re-add them (by right-clicking the sln, choosing Add Existing Project, and selecting the project located in the branch folder), it gives me the error...
Cannot load the project c:\sandbox\my_solution\proj1\proj1.csproj, the file has been removed or deleted. The project path I was trying to add is this: c:\sandbox\my_solution\branches\experimental-upgrade\proj1\proj1.csproj
What in the world is pointing these projects outside of their local root? The solution file is identical to the one in the dev branch, and those projects load just fine. I also looked at the vspscc and vssscc files but didn't find anything.
Ideas?
A:
@Ben
You can actually do a full delete in TFS, but it is highly not recommended unless you know what you are doing. You have to do it from the command line with the command tf destroy
tf destroy [/keephistory] itemspec1 [;versionspec]
[itemspec2...itemspecN] [/stopat:versionspec] [/preview]
[/startcleanup] [/noprompt]
Versionspec:
Date/Time Dmm/dd/yyyy
or any .Net Framework-supported format
or any of the date formats of the local machine
Changeset number Cnnnnnn
Label Llabelname
Latest version T
Workspace Wworkspacename;workspaceowner
Just before you do this make sure you try it out with the /preview. Also everybody has their own methodology for branching. Mine is to branch releases, and do all development in the development or root folder. Also it sounded like branching worked fine for you, just the solution file was screwed up, which may be because of a binding issue and the vssscc file.
A:
@Nick: No changes have been made to this just yet. I may have to delete it and re-branch (however you really can't fully delete in TFS)
And I have to disagree... branching is absolutely a good practice for experimental changes. Shelving is just temporary storage that will get backed up if I don't want to check in yet. But this needs to be developed while we develop real features.
A:
Without knowing more about your solution setup I can't be sure. But, if you have any project references that could explain it. Because you have the "experimental-upgrade" subfolder under "branches" your relative paths have changed.
This means when VS used to look for your referenced projects in ..\..\project\whatever it now has to look in ..\..\..\project\whatever. Note the extra ..\
To fix this you have to re-add your project references. I haven't found a better way. You can either remove them and re-add them, or go to the properties window and change the path to them, then reload them. Either way, you'll have to redo your references to them from any projects.
Also, check your working folders to make sure that it didn't download any of your projects into the wrong folders. This can happen sometimes...
A:
A couple of things. Are the folder structures the same? Can you delete and readd the project references successfully?
If you create a solution and then manually add all of the projects, does that work? (That may not be feasible - we have solutions with over a hundred projects).
One other thing (and it may be silly) - after you did the branch, did you commit it? I'm wondering if you branched and didn't check it in, and then merged, and then when you tried to check-in then, TFS was mighty confused.
A:
@Kevin:
This means when VS used to look for your referenced projects in ..\..\project\whatever it now has to look in ..\..\..\project\whatever. Note the extra ..\
You may be on to something here, however it doesn't explain why some projects load and others do not. I haven't found a correlation between them yet.
I think I'll try to re-add the projects and see if that works.
A:
@Cory:
I think that's what I'm going to try... I have about 20 projects and 8 or so aren't loading. The folder structures are identical from root... ie: there aren't any references outside of DEV.
| TFS - Branching for experimental development: Solution fails to load | Disclaimer: I'm stuck on TFS and I hate it.
My source control structure looks like this:
/dev
/releases
/branches
/experimental-upgrade
I branched from dev to experimental-upgrade and didn't touch it. I then did some more work in dev and merged to experimental-upgrade. Somehow TFS complained that I had changes in both source and target and I had to resolve them. I chose to "Copy item from source branch" for all 5 items.
I check out the experimental-upgrade to a local folder and try to open the main solution file in there. TFS prompts me:
"Projects have recently been added to this solution. Would you like to get them from source control?
If I say yes it does some stuff but ultimately comes back failing to load a handful of the projects. If I say no I get the same result.
Comparing my sln in both branches tells me that they are equal.
Can anyone let me know what I'm doing wrong? This should be a straightforward branch/merge operation...
TIA.
UPDATE:
I noticed that if I click "yes" on the above dialog, the projects are downloaded to the $/ root of source control... (i.e. out of the dev & branches folders)
If I open up the solution in the branch, remove the dead projects, and try to re-add them (by right-clicking the sln, choosing Add Existing Project, and selecting the project located in the branch folder), it gives me the error...
Cannot load the project c:\sandbox\my_solution\proj1\proj1.csproj, the file has been removed or deleted. The project path I was trying to add is this: c:\sandbox\my_solution\branches\experimental-upgrade\proj1\proj1.csproj
What in the world is pointing these projects outside of their local root? The solution file is identical to the one in the dev branch, and those projects load just fine. I also looked at the vspscc and vssscc files but didn't find anything.
Ideas?
| [
"@Ben\nYou can actually do a full delete in TFS, but it is highly not recommended unless you know what you are doing. You have to do it from the command line with the command tf destroy\ntf destroy [/keephistory] itemspec1 [;versionspec]\n [itemspec2...itemspecN] [/stopat:versionspec] [/preview]\n [/startcleanup] [/noprompt]\n\nVersionspec:\n Date/Time Dmm/dd/yyyy\n or any .Net Framework-supported format\n or any of the date formats of the local machine\n Changeset number Cnnnnnn\n Label Llabelname\n Latest version T\n Workspace Wworkspacename;workspaceowner\n\nJust before you do this make sure you try it out with the /preview. Also everybody has their own methodology for branching. Mine is to branch releases, and do all development in the development or root folder. Also it sounded like branching worked fine for you, just the solution file was screwed up, which may be because of a binding issue and the vssss file.\n",
"@Nick: No changes have been made to this just yet. I may have to delete it and re-branch (however you really can't fully delete in TFS)\nAnd I have to disagree... branching is absolutely a good practice for experimental changes. Shelving is just temporary storage that will get backed up if I don't want to check in yet. But this needs to be developed while we develop real features.\n",
"Without knowing more about your solution setup I can't be sure. But, if you have any project references that could explain it. Because you have the \"experimental-upgrade\" subfolder under \"branches\" your relative paths have changed.\nThis means when VS used to look for your referenced projects in ..\\..\\project\\whatever it now has to look in ..\\..\\..\\project\\whatever. Note the extra ..\\\nTo fix this you have to re-add your project references. I haven't found a better way. You can either remove them and re-add them, or go to the properties window and change the path to them, then reload them. Either way, you'll have to redo your references to them from any projects.\nAlso, check your working folders to make sure that it didn't download any of your projects into the wrong folders. This can happen sometimes...\n",
"A couple of things. Are the folder structures the same? Can you delete and readd the project references successfully? \nIf you create a solution and then manually add all of the projects, does that work. (That may not be feasable - we have solutions with over a hundred projects).\nOne other thing (and it may be silly) - after you did the branch, did you commit it? I'm wondering if you branched and didn't check it in, and then merged, and then when you tried to check-in then, TFS was mighty confused.\n",
"@Kevin:\n\nThis means when VS used to look for your referenced projects in ....\\project\\whatever it now has to look in ......\\project\\whatever. Note the extra ..\\\n\nYou may be on to something here, however it doesn't explain why some projects load and others do not. I haven't found a correlation between them yet.\nI think I'll try to re-add the projects and see if that works.\n",
"@Cory:\nI think that's what I'm going to try... I have about 20 projects and 8 or so aren't loading. The folder structures are identical from root... ie: there aren't any references outside of DEV.\n"
] | [
2,
0,
0,
0,
0,
0
] | [] | [] | [
"tfs",
"version_control"
] | stackoverflow_0000042490_tfs_version_control.txt |
Q:
How to evaluate an IP?
How can I determine if a string is an IP address? Either IPv4 or IPv6?
What is the minimum and maximum number of characters?
I assume this would be a regex answer.
A:
In .NET there's an IPAddress type which has a handy method TryParse.
Example:
System.Net.IPAddress validatedIPAddress;
if (System.Net.IPAddress.TryParse(possibleIPAddress, out validatedIPAddress)) {
    //validatedIPAddress is good
}

// or more simply:
bool IsValidIPAddress(string possibleIP) {
    System.Net.IPAddress dummy;
    return System.Net.IPAddress.TryParse(possibleIP, out dummy);
}
A:
I've done this before, but I like Raymond Chen's post at:
http://blogs.msdn.com/oldnewthing/archive/2006/05/22/603788.aspx
Where he basically advocates using regexes for what they're good at: parsing out the tokens. Then evaluate the results. His example:
function isDottedIPv4(s)
{
var match = s.match(/^(\d+)\.(\d+)\.(\d+)\.(\d+)$/);
return match != null &&
match[1] <= 255 && match[2] <= 255 &&
match[3] <= 255 && match[4] <= 255;
}
It's much easier to look at that and grok what it's supposed to be doing.
A:
For IPv4 you can use this regular expression.
\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b
It looks quite complex but it works by limiting each quad to the numbers 0-255.
A:
Since half of that regex handles the fact that the last segment doesn't have a period at the end, you could cut it in half if you tack a '.' to the end of your possible IP address.
Something like this:
bool IsValidIPAddress(string possibleIP) {
    string crazyRegex = @"^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){4}$";
    return Regex.IsMatch(possibleIP + ".", crazyRegex);
}
A:
@unsliced that is correct, however it will of course depend on the implementation; if you are parsing an IP from a user visiting your site then you are fine to use a regex, as it SHOULD be in x.x.x.x format.
For IPv6 you could use this
[A-F0-9]{0,4}:[A-F0-9]{0,4}:[A-F0-9]{0,4}:[A-F0-9]{0,4}:[A-F0-9]{0,4}:[A-F0-9]{0,4}:[A-F0-9]{0,4}:[A-F0-9]{0,4}
however it does not catch everything, because IPv6 is much more complicated. According to Wikipedia, all of the following examples are technically correct, but the regex above will only catch the ones marked with a *:
2001:0db8:0000:0000:0000:0000:1428:57ab*
2001:0db8:0000:0000:0000::1428:57ab*
2001:0db8:0:0:0:0:1428:57ab*
2001:0db8:0:0::1428:57ab
2001:0db8::1428:57ab
2001:db8::1428:57ab
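In .NET you can also sidestep the regex for IPv6 entirely — IPAddress.TryParse accepts the compressed forms above, and the address family tells you which kind of address you got. A minimal sketch:

using System.Net;
using System.Net.Sockets;

static bool IsIPv6(string candidate)
{
    IPAddress address;
    // TryParse handles both dotted IPv4 and all the IPv6 notations
    return IPAddress.TryParse(candidate, out address)
        && address.AddressFamily == AddressFamily.InterNetworkV6;
}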
| How to evaluate an IP? | How can I determine if a string is an IP address? Either IPv4 or IPv6?
What is the least and most number of characters?
I assume this would be a regex answer.
| [
"In .NET there's an IPAddress type which has a handy method TryParse.\nExample: \nif(System.Net.IPAddress.TryParse(PossibleIPAddress, validatedIPAddress)){\n //validatedIPAddress is good\n}\n\n// or more simply:\nbool IsValidIPAddress(string possibleIP){\n return System.Net.IPAddress.TryParse(PossibleIPAddress, null)\n}\n\n",
"I've done this before, but I like Raymond Chen's post at:\nhttp://blogs.msdn.com/oldnewthing/archive/2006/05/22/603788.aspx\nWhere he basically advocates using regexes for what they're good at: parsing out the tokens. Then evaluate the results. His example:\nfunction isDottedIPv4(s)\n{\n var match = s.match(/^(\\d+)\\.(\\d+)\\.(\\d+)\\.(\\d+)$/);\n return match != null &&\n match[1] <= 255 && match[2] <= 255 &&\n match[3] <= 255 && match[4] <= 255;\n}\n\nIt's much easier to look at that and grok what it's supposed to be doing.\n",
"For IPv4 you can use this regular expression.\n\\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\b\n\nIt looks quite complex but it works by limiting each quad to the numbers 0-255.\n",
"Since half of that regex handles the fact that the last segment doesn't have a period at the end, you could cut it in half if you tack a '.' to the end of your possible IP address.\nSomething like this:\nbool IsValidIPAddress(string possibleIP){\n CrazyRegex = \\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){4}\\b\n return Regex.Match(possibleIP+'.', CrazyRegex)\n}\n\n",
"@unsliced that is correct however it will of course depend on implementation, if you are parsing an IP from a user visiting your site then your are fine to use regex as it SHOULD be in x.x.x.x format.\nFor IPv6 you could use this\n[A-F0-9]{0,4}:[A-F0-9]{0,4}:[A-F0-9]{0,4}:[A-F0-9]{0,4}:[A-F0-9]{0,4}:[A-F0-9]{0,4}:[A-F0-9]{0,4}:[A-F0-9]{0,4}\n\nhowever it does not catch everything because with IPv6 it is much more complicated, acording to wikipedia all of the following examples are technicaly correct however the regex above will only catch the ones with a *\n2001:0db8:0000:0000:0000:0000:1428:57ab*\n2001:0db8:0000:0000:0000::1428:57ab*\n2001:0db8:0:0:0:0:1428:57ab*\n2001:0db8:0:0::1428:57ab\n2001:0db8::1428:57ab\n2001:db8::1428:57ab\n\n"
] | [
6,
6,
4,
0,
0
] | [
"IPv4 becomes: /\\d\\d?\\d?.\\d\\d?\\d?.\\d\\d?\\d?.\\d\\d?\\d?/\nI'm not sure about the IPv6 rules.\n"
] | [
-1
] | [
"ip_address",
"language_agnostic",
"regex",
"validation"
] | stackoverflow_0000042345_ip_address_language_agnostic_regex_validation.txt |
Q:
Can't get my event to fire
When loading a page for the first time (!IsPostback), I am creating a button in code and adding it to my page, then adding an event handler to the click event.
However, when clicking the button, after the page reloads, my event handler does not fire.
Can anyone explain why?
A:
@Brad: Your answer isn't complete; he's most likely doing it too late in the page lifecycle, during the Page_Load event.
Okay, here's what you're missing.
ASP.NET is stateless. That means, after your page is rendered and sent to the browser, the page object and everything on it is destroyed. There is no link that remains on the server between that page and what is on the user's browser.
When the user clicks a button, that event is sent back to the server, along with other information, like the hidden viewstate field.
On the server side, ASP.NET determines what page handles the request, and rebuilds the page from scratch. New instances of server controls are created and linked together according to the .aspx page. Once it is reassembled, the postback data is evaluated. The viewstate is used to populate controls, and events are fired.
This all happens in a specific order, called the Page Lifecycle. In order to do more complex things in ASP.NET, such as creating dynamic controls and adding them to the web page at runtime, you MUST understand the page lifecycle.
With your issue, you must create that button every single time that page loads. In addition, you must create that button BEFORE events are fired on the page. Control events fire between Page_Load and Page_LoadComplete.
You want your controls loaded before ViewState information is parsed and added to controls, and before control events fire, so you need to handle the PreInit event and add your button at that point. Again, you must do this EVERY TIME the page is loaded.
One last note; page event handling is a bit odd in ASP.NET because the events are autowired up. Note the Load event handler is called Page_Load...
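Concretely, a rough sketch of what that looks like in the code-behind (the control ID and handler names here are illustrative, not from the original question):

// Runs on EVERY request, postback or not, so the button exists
// before ViewState is loaded and before control events are raised.
protected void Page_PreInit(object sender, EventArgs e)
{
    Button myButton = new Button();
    myButton.ID = "myDynamicButton"; // stable ID so postback data matches up
    myButton.Text = "Click me";
    myButton.Click += MyButton_Click;
    Form.Controls.Add(myButton); // assumes a plain page; master pages complicate this
}

private void MyButton_Click(object sender, EventArgs e)
{
    // This fires now, because the button was recreated in time.
}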
A:
You need to add the button always not just for non-postbacks.
A:
If you are not reattaching the event handler on every postback, then the event will not exist for the button. You need to make sure the event handler is attached every time the page is refreshed. So, here is the order of events for your page:
Page is created with button and event handler is attached
Button is clicked, causing a postback
On postback, the page_load event skips the attaching of the event handler because of your !IsPostback statement
At this point, there is no event handler for the button, so clicking it will not fire your event
A:
That is because the event binding that happens needs to be translated into HTML. The postback handling is bound to the page between OnInit and OnLoad. So if you want the button to bind its events correctly, make sure you do the work in OnInit.
See the Page Life Cycle explanation.
http://msdn.microsoft.com/en-us/library/ms178472.aspx
| Can't get my event to fire | When loading a page for the first time (!IsPostback), I am creating a button in code and adding it to my page, then adding an event handler to the click event.
However, when clicking the button, after the page reloads, my event handler does not fire.
Can anyone explain why?
| [
"@Brad: Your answer isn't complete; he's most likely doing it too late in the page lifecycle, during the Page_Load event.\nOkay, here's what you're missing.\nASP.NET is stateless. That means, after your page is rendered and sent to the browser, the page object and everything on it is destroyed. There is no link that remains on the server between that page and what is on the user's browser.\nWhen the user clicks a button, that event is sent back to the server, along with other information, like the hidden viewstate field. \nOn the server side, ASP.NET determines what page handles the request, and rebuilds the page from scratch. New instances of server controls are created and linked together according to the .aspx page. Once it is reassembled, the postback data is evaluated. The viewstate is used to populate controls, and events are fired.\nThis all happens in a specific order, called the Page Lifecycle. In order to do more complex things in ASP.NET, such as creating dynamic controls and adding them to the web page at runtime, you MUST understand the page lifecycle. \nWith your issue, you must create that button every single time that page loads. In addition, you must create that button BEFORE events are fired on the page. Control events fire between Page_Load and Page_LoadComplete.\nYou want your controls loaded before ViewState information is parsed and added to controls, and before control events fire, so you need to handle the PreInit event and add your button at that point. Again, you must do this EVERY TIME the page is loaded.\nOne last note; page event handling is a bit odd in ASP.NET because the events are autowired up. Note the Load event handler is called Page_Load...\n",
"You need to add the button always not just for non-postbacks.\n",
"If you are not reattaching the event handler on every postback, then the event will not exist for the button. You need top make sure the event handler is attached every time the page is refreshed. So, here is the order of events for your page:\n\nPage is created with button and event handler is attached\nButton is clicked, causing a postback\nOn postback, the page_load event skips the attaching of the event handler becaue of your !IsPostback statement\nAt this point, there is no event handler for the button, so clicking it will not fire your event\n\n",
"That is because the event binding that happens needs to be translated in to HTML. This postback that happens if bound to the page between OnInit and OnLoad. So if you want the button to bind events correclty make sure you do the work in OnInit.\nSee the Page Life Cycle explaination.\nhttp://msdn.microsoft.com/en-us/library/ms178472.aspx\n"
] | [
4,
2,
1,
0
] | [] | [] | [
"asp.net"
] | stackoverflow_0000042554_asp.net.txt |
Q:
Thread-safe use of a singleton's members
I have a C# singleton class that multiple classes use. Is access through Instance to the Toggle() method thread-safe? If yes, by what assumptions, rules, etc. If no, why and how can I fix it?
public class MyClass
{
private static readonly MyClass instance = new MyClass();
public static MyClass Instance
{
get { return instance; }
}
private int value = 0;
public int Toggle()
{
if(value == 0)
{
value = 1;
}
else if(value == 1)
{
value = 0;
}
return value;
}
}
A:
Is access through 'Instance' to the 'Toggle()' class threadsafe? If yes, by what assumptions, rules, etc. If no, why and how can I fix it?
No, it's not threadsafe.
Basically, both threads can run the Toggle function at the same time, so this could happen
// thread 1 is running this code
if(value == 0)
{
value = 1;
// RIGHT NOW, thread 2 steps in.
// It sees value as 1, so runs the other branch, and changes it to 0
// This causes your method to return 0 even though you actually want 1
}
else if(value == 1)
{
value = 0;
}
return value;
You need to operate with the following assumption.
If 2 threads are running, they can and will interleave and interact with each other randomly at any point. You can be halfway through writing or reading a 64-bit integer or float (on a 32-bit CPU) and another thread can jump in and change it out from underneath you.
If the 2 threads never access anything in common, it doesn't matter, but as soon as they do, you need to prevent them from stepping on each other's toes. The way to do this in .NET is with locks.
You can decide what and where to lock by thinking about things like this:
For a given block of code, if the value of something got changed out from underneath me, would it matter? If it would, you need to lock that something for the duration of the code where it would matter.
Looking at your example again
// we read value here
if(value == 0)
{
value = 1;
}
else if(value == 1)
{
value = 0;
}
// and we return it here
return value;
In order for this to return what we expect it to, we assume that value won't get changed between the read and the return. In order for this assumption to actually be correct, you need to lock value for the duration of that code block.
So you'd do this:
lock( value )
{
if(value == 0)
... // all your code here
return value;
}
HOWEVER
In .NET you can only lock Reference Types. Int32 is a Value Type, so we can't lock it.
We solve this by introducing a 'dummy' object, and locking that wherever we'd want to lock 'value'.
This is what Ben Scheirman is referring to.
A:
The original implementation is not thread-safe, as Ben points out.
A simple way to make it thread safe is to introduce a lock statement. Eg. like this:
public class MyClass
{
private Object thisLock = new Object();
private static readonly MyClass instance = new MyClass();
public static MyClass Instance
{
get { return instance; }
}
private Int32 value = 0;
public Int32 Toggle()
{
lock(thisLock)
{
if(value == 0)
{
value = 1;
}
else if(value == 1)
{
value = 0;
}
return value;
}
}
}
A:
Your thread could stop in the middle of that method and transfer control to a different thread. You need a critical section around that code...
private static object _lockDummy = new object();
...
lock(_lockDummy)
{
//do stuff
}
A:
I'd also add a protected constructor to MyClass to prevent the compiler from generating a public default constructor.
A:
That is what I thought. But I'm looking for the details... 'Toggle()' is not a static method, but it is a member of a static property (when using 'Instance'). Is that what makes it shared among threads?
If your application is multi-threaded and you can foresee that multiple threads will access that method, that makes it shared among threads. Because your class is a Singleton, you know that the different threads will access the SAME object, so be cautious about the thread-safety of your methods.
And how does this apply to singletons in general? Would I have to address this in every method on my class?
As I said above, because it's a singleton you know different threads will access the same object, possibly at the same time. This does not mean you have to make every method obtain a lock. If you notice that a simultaneous invocation can lead to a corrupted state of the class, then you should apply the method mentioned by @Thomas
A:
Can I assume that the singleton pattern exposes my otherwise lovely thread-safe class to all the thread problems of regular static members?
No. Your class is simply not threadsafe. The singleton has nothing to do with it.
(I'm getting my head around the fact that instance members called on a static object cause threading problems)
It's nothing to do with that either.
You have to think like this: Is it possible in my program for 2 (or more) threads to access this piece of data at the same time?
The fact that you obtain the data via a singleton, or static variable, or passing in an object as a method parameter doesn't matter. At the end of the day it's all just some bits and bytes in your PC's RAM, and all that matters is whether multiple threads can see the same bits.
A:
I was thinking that if I dump the singleton pattern and force everyone to get a new instance of the class it would ease some problems... but that doesn't stop anyone else from initializing a static object of that type and passing that around... or from spinning off multiple threads, all accessing 'Toggle()' from the same instance.
Bingo :-)
I get it now. It's a tough world. I wish I weren't refactoring legacy code :(
Unfortunately, multithreading is hard and you have to be very paranoid about things :-)
The simplest solution in this case is to stick with the singleton, and add a lock around the value, like in the examples.
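For what it's worth, since the state here is a single int, a lock-free variant is also possible with Interlocked — this is a sketch of an alternative, not something from the answers above:

using System.Threading;

public int Toggle()
{
    int original, toggled;
    do
    {
        original = value;                  // snapshot the current state
        toggled = (original == 0) ? 1 : 0; // compute the flipped state
        // Commit only if 'value' is still 'original'; otherwise another
        // thread won the race, so loop and retry with the fresh value.
    } while (Interlocked.CompareExchange(ref value, toggled, original) != original);
    return toggled;
}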
A:
Quote:
if(value == 0) { value = 1; }
if(value == 1) { value = 0; }
return value;
value will always be 0...
| Thread-safe use of a singleton's members | I have a C# singleton class that multiple classes use. Is access through Instance to the Toggle() method thread-safe? If yes, by what assumptions, rules, etc. If no, why and how can I fix it?
public class MyClass
{
private static readonly MyClass instance = new MyClass();
public static MyClass Instance
{
get { return instance; }
}
private int value = 0;
public int Toggle()
{
if(value == 0)
{
value = 1;
}
else if(value == 1)
{
value = 0;
}
return value;
}
}
| [
"\nIs access through 'Instance' to the 'Toggle()' class threadsafe? If yes, by what assumptions, rules, etc. If no, why and how can I fix it?\n\nNo, it's not threadsafe.\nBasically, both threads can run the Toggle function at the same time, so this could happen\n // thread 1 is running this code\n if(value == 0) \n {\n value = 1; \n // RIGHT NOW, thread 2 steps in.\n // It sees value as 1, so runs the other branch, and changes it to 0\n // This causes your method to return 0 even though you actually want 1\n }\n else if(value == 1) \n { \n value = 0; \n }\n return value;\n\nYou need to operate with the following assumption.\nIf 2 threads are running, they can and will interleave and interact with eachother randomly at any point. You can be half way through writing or reading a 64 bit integer or float (on a 32 bit CPU) and another thread can jump in and change it out from underneath you.\nIf the 2 threads never access anything in common, it doesn't matter, but as soon as they do, you need to prevent them from stepping on each others toes. The way to do this in .NET is with locks.\nYou can decide what and where to lock by thinking about things like this:\nFor a given block of code, if the value of something got changed out from underneath me, would it matter? If it would, you need to lock that something for the duration of the code where it would matter.\nLooking at your example again\n // we read value here\n if(value == 0) \n {\n value = 1; \n }\n else if(value == 1) \n { \n value = 0; \n }\n // and we return it here\n return value;\n\nIn order for this to return what we expect it to, we assume that value won't get changed between the read and the return. In order for this assumption to actually be correct, you need to lock value for the duration of that code block.\nSo you'd do this:\nlock( value )\n{\n if(value == 0) \n ... // all your code here\n return value;\n}\n\nHOWEVER\nIn .NET you can only lock Reference Types. Int32 is a Value Type, so we can't lock it.\nWe solve this by introducing a 'dummy' object, and locking that wherever we'd want to lock 'value'.\nThis is what Ben Scheirman is referring to.\n",
"The original impplementation is not thread safe, as Ben points out\nA simple way to make it thread safe is to introduce a lock statement. Eg. like this:\npublic class MyClass\n{\n private Object thisLock = new Object();\n private static readonly MyClass instance = new MyClass();\n public static MyClass Instance\n {\n get { return instance; }\n }\n private Int32 value = 0;\n public Int32 Toggle()\n {\n lock(thisLock)\n {\n if(value == 0) \n {\n value = 1; \n }\n else if(value == 1) \n { \n value = 0; \n }\n return value;\n }\n }\n}\n\n",
"Your thread could stop in the middle of that method and transfer control to a different thread. You need a critical section around that code...\nprivate static object _lockDummy = new object();\n\n\n...\n\nlock(_lockDummy)\n{\n //do stuff\n}\n\n",
"I'd also add a protected constructor to MyClass to prevent the compiler from generating a public default constructor.\n",
"\nThat is what I thought. But, I I'm\n looking for the details... 'Toggle()'\n is not a static method, but it is a\n member of a static property (when\n using 'Instance'). Is that what makes\n it shared among threads?\n\nIf your application is multi-threaded and you can forsee that multiple thread will access that method, that makes it shared among threads. Because your class is a Singleton you know that the diferent thread will access the SAME object, so be cautioned about the thread-safety of your methods.\n\nAnd how does this apply to singletons\n in general. Would I have to address\n this in every method on my class?\n\nAs I said above, because its a singleton you know diferent thread will acess the same object, possibly at the same time. This does not mean you have to make every method obtain a lock. If you notice that a simultaneos invocation can lead to corrupted state of the class, then you should apply the method mentioned by @Thomas\n",
"\nCan I assume that the singleton pattern exposes my otherwise lovely thread-safe class to all the thread problems of regular static members?\n\nNo. Your class is simply not threadsafe. The singleton has nothing to do with it.\n\n(I'm getting my head around the fact that instance members called on a static object cause threading problems)\n\nIt's nothing to do with that either.\nYou have to think like this: Is it possible in my program for 2 (or more) threads to access this piece of data at the same time?\nThe fact that you obtain the data via a singleton, or static variable, or passing in an object as a method parameter doesn't matter. At the end of the day it's all just some bits and bytes in your PC's RAM, and all that matters is whether multiple threads can see the same bits.\n",
"\nI was thinking that if I dump the singleton pattern and force everyone to get a new instance of the class it would ease some problems... but that doesn't stop anyone else from initializing a static object of that type and passing that around... or from spinning off multiple threads, all accessing 'Toggle()' from the same instance.\n\nBingo :-)\n\nI get it now. It's a tough world. I wish I weren't refactoring legacy code :(\n\nUnfortunately, multithreading is hard and you have to be very paranoid about things :-)\nThe simplest solution in this case is to stick with the singleton, and add a lock around the value, like in the examples.\n",
"Quote:\nif(value == 0) { value = 1; }\nif(value == 1) { value = 0; }\nreturn value;\n\nvalue will always be 0...\n"
] | [
30,
8,
2,
2,
2,
2,
1,
0
] | [
"Well, I actually don't know C# that well... but I am ok at Java, so I will give the answer for that, and hopefully the two are similar enough that it will be useful. If not, I apologize.\nThe answer is, no, it's not safe. One thread could call Toggle() at the same time as the other, and it is possible, although unlikely with this code, that Thread1 could set value in between the times that Thread2 checks it and when it sets it.\nTo fix, simply make Toggle() synchronized. It doesn't block on anything or call anything that might spawn another thread which could call Toggle(), so that's all you have to do save it.\n"
] | [
-1
] | [
".net",
"c#",
"multithreading",
"singleton",
"thread_safety"
] | stackoverflow_0000042505_.net_c#_multithreading_singleton_thread_safety.txt |
Q:
.Net [Windows] TreeView TreeNode does not retain color change after drag and drop
I have a form with 2 tree views; the user can drag and drop a node from one to another. After a node has been dragged and dropped, I change the color [highlight] of the source node in the DragDrop event handler. The color of the node changes fine.
But when the user hovers the mouse over the source tree view after that, it flickers and the highlighting I had done disappears, reverting to the original color.
I'm not handling any other event, I don't reload the treeview and I'm not changing the color.
From my understanding of the MSDN documentation, I don't see any Refresh or Repaint type events.
A:
Simply call the TreeView.Invalidate() method to force the tree view to repaint.
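For example, at the end of the drag-and-drop handler — a minimal sketch in which targetTreeView, sourceTreeView and draggedNode stand in for whatever names your handlers actually use:

private void targetTreeView_DragDrop(object sender, DragEventArgs e)
{
    // ... move the node and apply the highlight color ...
    draggedNode.BackColor = Color.LightYellow;

    // Force the source tree view to repaint so the highlight survives
    // the redraw that happens when the mouse later hovers over it.
    sourceTreeView.Invalidate();
}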
| .Net [Windows] TreeView TreeNode does not retain color change after drag and drop | I have a form with 2 tree views, the user can drag and drop a node from one to another. After a node has been dragged and dropped, I change the color[highlight] of the source node in the Drag-Drop event handles.The color of the node changes fine.
But when the users hovers the mouse over the source tree view after that, it flickers and the highlighting I had done disappears, reverting to the original color.
I'm not handling any other event, I don't reload the treeview and I'm not changing the color.
From my understanding of the MSDN documentation, I don't see any Refresh or Repaint type events.
| [
"Simply call TreeView.Invalidate() method to force tree view to repaint.\n"
] | [
1
] | [] | [] | [
"c#",
"treeview",
"windows",
"winforms"
] | stackoverflow_0000042272_c#_treeview_windows_winforms.txt |
Q:
Setting up Continuous Integration with SVN
What tools would you recommend for setting up CI for build and deployment of multiple websites built on DotNetNuke using SVN for source control?
We are currently looking at configuring Cruise Control to work with NAnt, NUnit, NCover and Trac as a test case. What other combinations would be worth investigating?
We have full control of our development environment so using some form of CI is certain here but I would also like to convince our production services team that they can reliably deploy to the system test, uat and even production environments using these tools.
A:
Take a look at Hudson. It's highly customizable, and, IMHO, easier than CruiseControl.
A:
We use CruiseControl with NUnit, NCover, FxCop, SVN and some custom tools we wrote ourselves to produce the reports. In my opinion it has proven (over the last few years) to be an excellent combination.
It's frustrating that MS restricts all of its integration tools to VSTS. Its test framework is as good as NUnit, but you can't use its code coverage tools or anything else.
I'd check out xUnit - it's looking pretty promising (but currently lacking UI).
We automate nightly builds, and you could automate UAT and manual test builds, but I'm not sure that we'd ever want to automate the release to our production servers. Even if we did, any change would be important enough that someone would have to watch over it anyway.
A:
I would have a look at Team City http://www.jetbrains.com/teamcity/index.html
I know some people who are looking into this and they say good things about it.
My company's build process is done in FinalBuilder, so I'm going to be looking at their server soon.
CC is quite good in that you can have one CC server monitor another CC server, so you could set up stuff like: when a build completes on your build server, your test server wakes up, boots up a virtual machine and deploys your application. Stuff like that.
A:
Microsoft loosened its constraint on the Testing Platform by including it in Visual Studio 2008 Professional and allowing the tests to be run from the command line with Framework 3.5 installed. We did a crossover for a client recently and so far they have been able to run all the tests without the need for NUnit.
A:
We use CruiseControl.NET running msbuild scripts. Msbuild is responsible for updating from SVN on every commit, compiling, and running FxCop and NCover/NUnit.
A:
I would recommend you take a look at NAnt + NUnit ( + NCover) + TeamCity with SVN for your build system. There is actually a very nice article describing this configuration at Pete W's idea book (Sorry, this link doesn't exist anymore!)
| Setting up Continuous Integration with SVN | What tools would you recommend for setting up CI for build and deployment of multiple websites built on DotNetNuke using SVN for source control?
We are currently looking at configuring Cruise Control to work with NAnt, NUnit, NCover and Trac as a test case. What other combinations would worth investigating?
We have full control of our development environment so using some form of CI is certain here but I would also like to convince our production services team that they can reliably deploy to the system test, uat and even production environments using these tools.
| [
"Take a look at Hudson. It's highly customizable, and, IMHO, easier than CruiseControl.\n",
"We use CruiseControl with NUnit, NCover, FxCop, SVN and some custom tools we wrote ourselves to produce the reports. In my opinion it has proven (over the last few years) to be an excellent combination.\nIt's frustrating that MS restricts all of its integration tools to VSTS. Its test framework is as good as NUnit, but you can't use its code coverage tools or anything else.\nI'd check out XNuit - it's looking pretty promising (but currently lacking UI).\nWe automate nightly builds, and you could automate UAT and manual test builds, but I'm not sure that we'd ever want to automate the release to our production servers. Even if it were any change would be important enough that someone would have to watch over it anyway.\n",
"I would have a look at Team City http://www.jetbrains.com/teamcity/index.html\nI know some people who are looking in to this and they say good things about it.\nMy companies build process is done in FinalBuilder so I'm going to be looking at their server soon.\nCC is quite good in that you can have one CC server monitor another CC server so you could set up stuff like - when a build completes on your build server, your test server would wake up, boot up a virtual machine and deploy your application. Stuff like that.\n",
"Microsoft loosened it's constraint on the Testing Platform by including it in Visual Studio 2008 Professional and allowing for the tests to be run from the command line with Framework 3.5 installed. We did a crossover for a client recently and so far they have been able to run all the tests without the need for NUnit.\n",
"We use CruiseControl.NET running msbuild scripts. Msbuild is responsible for updating from SVN on every commit, compiling, and running FxCop and NCover/NUnit.\n",
"I would recommend you take a look at NAnt + NUnit ( + NCover) + TeamCity with SVN for your build system. There is actually a very nice article describing this configuration at Pete W's idea book (Sorry, this link doesn't exist anymore!)\n"
] | [
5,
3,
1,
0,
0,
0
] | [] | [] | [
"continuous_integration",
"svn"
] | stackoverflow_0000007190_continuous_integration_svn.txt |
Q:
How to do a simple mail merge in OpenOffice
I need to do a simple mail merge in OpenOffice using C++, VBScript, VB.Net or C# via OLE or native API. Are there any good examples available?
A:
I haven't come up with a solution I'm really happy with but here are some notes:
Q. What is the OO API for mail merge?
A. http://api.openoffice.org/docs/common/ref/com/sun/star/text/MailMerge.html
Q. What support groups?
A. http://user.services.openoffice.org/en/forum/viewforum.php?f=20
Q. Sample code?
A. http://user.services.openoffice.org/en/forum/viewtopic.php?f=20&t=946&p=3778&hilit=mail+merge#p3778
http://user.services.openoffice.org/en/forum/viewtopic.php?f=20&t=8088&p=38017&hilit=mail+merge#p38017
Q. Any more examples?
A. file:///C:/Program%20Files/OpenOffice.org_2.4_SDK/examples/examples.html (comes with the SDK)
http://www.oooforum.org/forum/viewtopic.phtml?p=94970
Q. How do I build the examples?
A. e.g., for WriterDemo (C:\Program Files\OpenOffice.org_2.4_SDK\examples\CLI\VB.NET\WriterDemo)
Add references to everything in here: C:\Program Files\OpenOffice.org 2.4\program\assembly
That is cli_basetypes, cli_cppuhelper, cli_types, cli_ure
Q. Does OO use the same separate data/document file for mail merge?
A. It allows for a range of data sources including csv files
Q. Does OO allow you to merge to all the different types (fax, email, new document printer)?
A. You can merge to a new document, print and email
Q. Can you add custom fields?
A. Yes
Q. How do you create a new document in VB.Net?
A.
Dim xContext As XComponentContext
xContext = Bootstrap.bootstrap()
Dim xFactory As XMultiServiceFactory
xFactory = DirectCast(xContext.getServiceManager(), _
XMultiServiceFactory)
'Create the Desktop
Dim xDesktop As unoidl.com.sun.star.frame.XDesktop
xDesktop = DirectCast(xFactory.createInstance("com.sun.star.frame.Desktop"), _
unoidl.com.sun.star.frame.XDesktop)
'Open a new empty writer document
Dim xComponentLoader As unoidl.com.sun.star.frame.XComponentLoader
xComponentLoader = DirectCast(xDesktop, unoidl.com.sun.star.frame.XComponentLoader)
Dim arProps() As unoidl.com.sun.star.beans.PropertyValue = _
New unoidl.com.sun.star.beans.PropertyValue() {}
Dim xComponent As unoidl.com.sun.star.lang.XComponent
xComponent = xComponentLoader.loadComponentFromURL( _
"private:factory/swriter", "_blank", 0, arProps)
Dim xTextDocument As unoidl.com.sun.star.text.XTextDocument
xTextDocument = DirectCast(xComponent, unoidl.com.sun.star.text.XTextDocument)
Q. How do you save the document?
A.
Dim storer As unoidl.com.sun.star.frame.XStorable = DirectCast(xTextDocument, unoidl.com.sun.star.frame.XStorable)
arProps = New unoidl.com.sun.star.beans.PropertyValue() {}
storer.storeToURL("file:///C:/Users/me/Desktop/OpenOffice Investigation/saved doc.odt", arProps)
Q. How do you Open the document?
A.
Dim xComponent As unoidl.com.sun.star.lang.XComponent
xComponent = xComponentLoader.loadComponentFromURL( _
"file:///C:/Users/me/Desktop/OpenOffice Investigation/saved doc.odt", "_blank", 0, arProps)
Q. How do you initiate a mail merge in VB.Net?
A.
Don't know. This functionality is in the API reference but is missing from the IDL. We may be slightly screwed. Assuming the API was working, it looks like running a merge is fairly simple.
In VBScript:
Set objServiceManager = WScript.CreateObject("com.sun.star.ServiceManager")
'Now set up a new MailMerge using the settings extracted from that doc
Set oMailMerge = objServiceManager.createInstance("com.sun.star.text.MailMerge")
oMailMerge.DocumentURL = "file:///C:/Users/me/Desktop/OpenOffice Investigation/mail merged.odt"
oMailMerge.DataSourceName = "adds"
oMailMerge.CommandType = 0 ' http://api.openoffice.org/docs/common/ref/com/sun/star/text/MailMerge.html#CommandType
oMailMerge.Command = "adds"
oMailMerge.OutputType = 2 ' http://api.openoffice.org/docs/common/ref/com/sun/star/text/MailMerge.html#OutputType
oMailMerge.execute(Array())
In VB.Net (Option Strict Off)
Dim t_OOo As Type
t_OOo = Type.GetTypeFromProgID("com.sun.star.ServiceManager")
Dim objServiceManager As Object
objServiceManager = System.Activator.CreateInstance(t_OOo)
Dim oMailMerge As Object
oMailMerge = t_OOo.InvokeMember("createInstance", _
BindingFlags.InvokeMethod, Nothing, _
objServiceManager, New [Object]() {"com.sun.star.text.MailMerge"})
'Now set up a new MailMerge using the settings extracted from that doc
oMailMerge.DocumentURL = "file:///C:/Users/me/Desktop/OpenOffice Investigation/mail merged.odt"
oMailMerge.DataSourceName = "adds"
oMailMerge.CommandType = 0 ' http://api.openoffice.org/docs/common/ref/com/sun/star/text/MailMerge.html#CommandType
oMailMerge.Command = "adds"
oMailMerge.OutputType = 2 ' http://api.openoffice.org/docs/common/ref/com/sun/star/text/MailMerge.html#OutputType
oMailMerge.execute(New [Object]() {})
The same thing but with Option Strict On (doesn't work)
Dim t_OOo As Type
t_OOo = Type.GetTypeFromProgID("com.sun.star.ServiceManager")
Dim objServiceManager As Object
objServiceManager = System.Activator.CreateInstance(t_OOo)
Dim oMailMerge As Object
oMailMerge = t_OOo.InvokeMember("createInstance", _
BindingFlags.InvokeMethod, Nothing, _
objServiceManager, New [Object]() {"com.sun.star.text.MailMerge"})
'Now set up a new MailMerge using the settings extracted from that doc
oMailMerge.GetType().InvokeMember("DocumentURL", BindingFlags.SetProperty, Nothing, oMailMerge, New [Object]() {"file:///C:/Users/me/Desktop/OpenOffice Investigation/mail merged.odt"})
oMailMerge.GetType().InvokeMember("DataSourceName", BindingFlags.SetProperty, Nothing, oMailMerge, New [Object]() {"adds"})
oMailMerge.GetType().InvokeMember("CommandType", BindingFlags.SetProperty, Nothing, oMailMerge, New [Object]() {0})
oMailMerge.GetType().InvokeMember("Command", BindingFlags.SetProperty, Nothing, oMailMerge, New [Object]() {"adds"})
oMailMerge.GetType().InvokeMember("OutputType", BindingFlags.SetProperty, Nothing, oMailMerge, New [Object]() {2})
oMailMerge.GetType().InvokeMember("Execute", BindingFlags.InvokeMethod Or BindingFlags.IgnoreReturn, Nothing, oMailMerge, New [Object]() {}) ' this line fails with a type mismatch error
A:
You should take a look at the Apache OpenOffice API, a project for creating an API for OpenOffice. A few of the languages it is said to support are: C++, Java, Python, CLI, StarBasic, JavaScript and OLE.
Java Example of a mailmerge in OpenOffice.
| How to do a simple mail merge in OpenOffice | I need to do a simple mail merge in OpenOffice using C++, VBScript, VB.Net or C# via OLE or native API. Are there any good examples available?
| [
"I haven't come up with a solution I'm really happy with but here are some notes:\n\nQ. What is the OO API for mail merge?\nA. http://api.openoffice.org/docs/common/ref/com/sun/star/text/MailMerge.html\nQ. What support groups?\nA. http://user.services.openoffice.org/en/forum/viewforum.php?f=20\nQ. Sample code?\nA. http://user.services.openoffice.org/en/forum/viewtopic.php?f=20&t=946&p=3778&hilit=mail+merge#p3778\nhttp://user.services.openoffice.org/en/forum/viewtopic.php?f=20&t=8088&p=38017&hilit=mail+merge#p38017\nQ. Any more examples?\nA. file:///C:/Program%20Files/OpenOffice.org_2.4_SDK/examples/examples.html (comes with the SDK)\nhttp://www.oooforum.org/forum/viewtopic.phtml?p=94970\nQ. How do I build the examples?\nA. e.g., for WriterDemo (C:\\Program Files\\OpenOffice.org_2.4_SDK\\examples\\CLI\\VB.NET\\WriterDemo)\n\nAdd references to everything in here: C:\\Program Files\\OpenOffice.org 2.4\\program\\assembly\nThat is cli_basetypes, cli_cppuhelper, cli_types, cli_ure\n\nQ. Does OO use the same separate data/document file for mail merge?\nA. It allows for a range of data sources including csv files\nQ. Does OO allow you to merge to all the different types (fax, email, new document printer)?\nA. You can merge to a new document, print and email\nQ. Can you add custom fields?\nA. Yes\nQ. How do you create a new document in VB.Net?\nA.\n Dim xContext As XComponentContext\n\n xContext = Bootstrap.bootstrap()\n\n Dim xFactory As XMultiServiceFactory\n xFactory = DirectCast(xContext.getServiceManager(), _\n XMultiServiceFactory)\n\n 'Create the Desktop\n Dim xDesktop As unoidl.com.sun.star.frame.XDesktop\n xDesktop = DirectCast(xFactory.createInstance(\"com.sun.star.frame.Desktop\"), _\n unoidl.com.sun.star.frame.XDesktop)\n\n 'Open a new empty writer document\n Dim xComponentLoader As unoidl.com.sun.star.frame.XComponentLoader\n xComponentLoader = DirectCast(xDesktop, unoidl.com.sun.star.frame.XComponentLoader)\n Dim arProps() As unoidl.com.sun.star.beans.PropertyValue = _\n New unoidl.com.sun.star.beans.PropertyValue() {}\n Dim xComponent As unoidl.com.sun.star.lang.XComponent\n xComponent = xComponentLoader.loadComponentFromURL( _\n \"private:factory/swriter\", \"_blank\", 0, arProps)\n Dim xTextDocument As unoidl.com.sun.star.text.XTextDocument\n xTextDocument = DirectCast(xComponent, unoidl.com.sun.star.text.XTextDocument)\n\nQ. How do you save the document?\nA.\n Dim storer As unoidl.com.sun.star.frame.XStorable = DirectCast(xTextDocument, unoidl.com.sun.star.frame.XStorable)\n arProps = New unoidl.com.sun.star.beans.PropertyValue() {}\n storer.storeToURL(\"file:///C:/Users/me/Desktop/OpenOffice Investigation/saved doc.odt\", arProps)\n\nQ. How do you Open the document?\nA.\n Dim xComponent As unoidl.com.sun.star.lang.XComponent\n xComponent = xComponentLoader.loadComponentFromURL( _\n \"file:///C:/Users/me/Desktop/OpenOffice Investigation/saved doc.odt\", \"_blank\", 0, arProps)\n\nQ. How do you initiate a mail merge in VB.Net?\nA.\n\nDon't know. This functionality is in the API reference but is missing from the IDL. We may be slightly screwed. 
Assuming the API was working, it looks like running a merge is fairly simple.\nIn VBScript:\nSet objServiceManager = WScript.CreateObject(\"com.sun.star.ServiceManager\")\n'Now set up a new MailMerge using the settings extracted from that doc\nSet oMailMerge = objServiceManager.createInstance(\"com.sun.star.text.MailMerge\")\noMailMerge.DocumentURL = \"file:///C:/Users/me/Desktop/OpenOffice Investigation/mail merged.odt\"\noMailMerge.DataSourceName = \"adds\"\noMailMerge.CommandType = 0 ' http://api.openoffice.org/docs/common/ref/com/sun/star/text/MailMerge.html#CommandType\noMailMerge.Command = \"adds\"\noMailMerge.OutputType = 2 ' http://api.openoffice.org/docs/common/ref/com/sun/star/text/MailMerge.html#OutputType\noMailMerge.execute(Array())\nIn VB.Net (Option Strict Off)\n Dim t_OOo As Type\n t_OOo = Type.GetTypeFromProgID(\"com.sun.star.ServiceManager\")\n Dim objServiceManager As Object\n objServiceManager = System.Activator.CreateInstance(t_OOo)\n\n Dim oMailMerge As Object\n oMailMerge = t_OOo.InvokeMember(\"createInstance\", _\n BindingFlags.InvokeMethod, Nothing, _\n objServiceManager, New [Object]() {\"com.sun.star.text.MailMerge\"})\n\n 'Now set up a new MailMerge using the settings extracted from that doc\n oMailMerge.DocumentURL = \"file:///C:/Users/me/Desktop/OpenOffice Investigation/mail merged.odt\"\n oMailMerge.DataSourceName = \"adds\"\n oMailMerge.CommandType = 0 ' http://api.openoffice.org/docs/common/ref/com/sun/star/text/MailMerge.html#CommandType\n oMailMerge.Command = \"adds\"\n oMailMerge.OutputType = 2 ' http://api.openoffice.org/docs/common/ref/com/sun/star/text/MailMerge.html#OutputType\n oMailMerge.execute(New [Object]() {})\n\nThe same thing but with Option Strict On (doesn't work)\n Dim t_OOo As Type\n t_OOo = Type.GetTypeFromProgID(\"com.sun.star.ServiceManager\")\n Dim objServiceManager As Object\n objServiceManager = System.Activator.CreateInstance(t_OOo)\n\n Dim oMailMerge As Object\n oMailMerge = t_OOo.InvokeMember(\"createInstance\", _\n BindingFlags.InvokeMethod, Nothing, _\n objServiceManager, New [Object]() {\"com.sun.star.text.MailMerge\"})\n\n 'Now set up a new MailMerge using the settings extracted from that doc\n oMailMerge.GetType().InvokeMember(\"DocumentURL\", BindingFlags.SetProperty, Nothing, oMailMerge, New [Object]() {\"file:///C:/Users/me/Desktop/OpenOffice Investigation/mail merged.odt\"})\n oMailMerge.GetType().InvokeMember(\"DataSourceName\", BindingFlags.SetProperty, Nothing, oMailMerge, New [Object]() {\"adds\"})\n oMailMerge.GetType().InvokeMember(\"CommandType\", BindingFlags.SetProperty, Nothing, oMailMerge, New [Object]() {0})\n oMailMerge.GetType().InvokeMember(\"Command\", BindingFlags.SetProperty, Nothing, oMailMerge, New [Object]() {\"adds\"})\n oMailMerge.GetType().InvokeMember(\"OutputType\", BindingFlags.SetProperty, Nothing, oMailMerge, New [Object]() {2})\n oMailMerge.GetType().InvokeMember(\"Execute\", BindingFlags.InvokeMethod Or BindingFlags.IgnoreReturn, Nothing, oMailMerge, New [Object]() {}) ' this line fails with a type mismatch error\n\n\n\n",
"You should take a look at Apache OpenOffice API. A project for creating an API for Open Office. A few languages they said to support are: C++, Java, Python, CLI, StarBasic, JavaScript and OLE.\nJava Example of a mailmerge in OpenOffice.\n"
] | [
9,
2
] | [] | [] | [
"c#",
"c++",
"com",
"ole",
"openoffice.org"
] | stackoverflow_0000038842_c#_c++_com_ole_openoffice.org.txt |
Q:
How can I get markdown to format this code properly?
Here is some code I could not get to format properly in markdown, this is straight C code, pasted into the text box with the '4 spaces' format to denote code:
#define PRINT(x, format, ...) \
if ( x ) { \
if ( debug_fd != NULL ) { \
fprintf(debug_fd, format, ##__VA_ARGS__); \
} \
else { \
fprintf(stdout, format, ##__VA_ARGS__); \
} \
}
It seems as though the trailing '\' causes the newlines to be ignored. OK fine, I am used to this in bash, but if I put two backslashes ('\\'), the second one doesn't show up. As if the second one is absorbed. Am I missing something?
A:
Add at least four spaces or a hard tab before each line of the code. Like this:
#define PRINT(x, format, ...) \
if ( x ) { \
    if ( debug_fd != NULL ) { \
        fprintf(debug_fd, format, ##__VA_ARGS__); \
    } \
    else { \
        fprintf(stdout, format, ##__VA_ARGS__); \
    } \
}
A:
You can also use the HTML tags <pre><code> in succession. I find this easier for pasting code into the window.
#define PRINT(x, format, ...) \
if ( x ) \
{ \
    if ( debug_fd != NULL ) \
    { \
        fprintf(debug_fd, format, ##__VA_ARGS__); \
    } \
    else \
    { \
        fprintf(stdout, format, ##__VA_ARGS__); \
    } \
}
| How can I get markdown to format this code properly? | Here is some code I could not get to format properly in markdown, this is straight C code, pasted into the text box with the '4 spaces' format to denote code:
#define PRINT(x, format, ...) \
if ( x ) { \
if ( debug_fd != NULL ) { \
fprintf(debug_fd, format, ##__VA_ARGS__); \
} \
else { \
fprintf(stdout, format, ##__VA_ARGS__); \
} \
}
It seems as though the '\' causes the newlines to be ignored. Ok fine, I am used to this in bash, but if I put '\' the second one doesn't show up. As if the second one is absorbed. Am I missing something?
| [
"Add at least four spaces or a hard tab before each line of the code. Like this:\n#define PRINT(x, format, ...) \\\nif ( x ) { \\\n if ( debug_fd != NULL ) { \\\n fprintf(debug_fd, format, ##VA_ARGS); \\\n} \\\nelse { \\\n fprintf(stdout, format, ##VA_ARGS); \\\n} \\\n}\n\n",
"You can also use the HTML tags <pre><code> in succession. I find this easier for pasting code into the window.\n#define PRINT(x, format, ...)\nif ( x ) \n{\n if ( debug_fd != NULL ) \n { \n fprintf(debug_fd, format, ##VA_ARGS); \n } \n else \n { \n fprintf(stdout, format, ##VA_ARGS); \n } \n}\n"
] | [
2,
2
] | [
"#define PRINT(x, format, ...)\nif ( x ) \n{\n if ( debug_fd != NULL ) \n { \n fprintf(debug_fd, format, ##VA_ARGS); \n } \n else \n { \n fprintf(stdout, format, ##VA_ARGS); \n } \n}\n\n"
] | [
-1
] | [
"c",
"formatting",
"markdown"
] | stackoverflow_0000042762_c_formatting_markdown.txt |
Q:
Fixed Legend in Google Maps Mashup
I have a page with a Google Maps mashup that has pushpins that are color-coded by day (Monday, Tuesday, etc.) The IFrame containing the map is dynamically sized, so it gets resized when the browser window is resized.
I'd like to put a legend in the corner of the map window that tells the user what each color means. The Google Maps API includes a GScreenOverlay class that has the behavior that I want, but it only lets you specify an image to use as an overlay, and I'd prefer to use a DIV with text in it. What's the easiest way to position a DIV over the map window in (for example) the lower left corner that'll automatically stay in the same place relative to the corner when the browser window is resized?
A:
You can add your own Custom Control and use it as a legend.
This code will add a box 150px wide by 100px high (gray border with white background) containing the words "Hello World". You can swap out the text for any HTML you would like in the legend. This will stay anchored to the top right of the map (G_ANCHOR_TOP_RIGHT), 10px down and 50px over.
function MyPane() {}
MyPane.prototype = new GControl;
MyPane.prototype.initialize = function(map) {
var me = this;
me.panel = document.createElement("div");
me.panel.style.width = "150px";
me.panel.style.height = "100px";
me.panel.style.border = "1px solid gray";
me.panel.style.background = "white";
me.panel.innerHTML = "Hello World!";
map.getContainer().appendChild(me.panel);
return me.panel;
};
MyPane.prototype.getDefaultPosition = function() {
  return new GControlPosition(
      G_ANCHOR_TOP_RIGHT, new GSize(10, 50));
};

MyPane.prototype.getPanel = function() {
  return this.panel; // 'me' is out of scope here; 'this' is the control
}
map.addControl(new MyPane());
A:
I would use HTML like the following:
<div id="wrapper">
<div id="map" style="width:400px;height:400px;"></div>
<div id="legend"> ... marker descriptions in here ... </div>
</div>
You can then style this to keep the legend in the bottom right:
div#wrapper { position: relative; }
div#legend { position: absolute; bottom: 0px; right: 0px; }
position: relative will cause any contained elements to be positioned relative to the #wrapper container, and position: absolute will cause the #legend div to be "pulled" out of the flow and sit above the map, keeping its bottom right edge at the bottom of the #wrapper and stretching as required to contain the marker descriptions.
| Fixed Legend in Google Maps Mashup | I have a page with a Google Maps mashup that has pushpins that are color-coded by day (Monday, Tuesday, etc.) The IFrame containing the map is dynamically sized, so it gets resized when the browser window is resized.
I'd like to put a legend in the corner of the map window that tells the user what each color means. The Google Maps API includes a GScreenOverlay class that has the behavior that I want, but it only lets you specify an image to use as an overlay, and I'd prefer to use a DIV with text in it. What's the easiest way to position a DIV over the map window in (for example) the lower left corner that'll automatically stay in the same place relative to the corner when the browser window is resized?
| [
"You can add your own Custom Control and use it as a legend.\nThis code will add a box 150w x 100h (Gray Border/ with White Background) and the words \"Hello World\" inside of it. You swap out the text for any HTML you would like in the legend. This will stay Anchored to the Top Right (G_ANCHOR_TOP_RIGHT) 10px down and 50px over of the map.\nfunction MyPane() {}\nMyPane.prototype = new GControl;\nMyPane.prototype.initialize = function(map) {\n var me = this;\n me.panel = document.createElement(\"div\");\n me.panel.style.width = \"150px\";\n me.panel.style.height = \"100px\";\n me.panel.style.border = \"1px solid gray\";\n me.panel.style.background = \"white\";\n me.panel.innerHTML = \"Hello World!\";\n map.getContainer().appendChild(me.panel);\n return me.panel;\n};\n\nMyPane.prototype.getDefaultPosition = function() {\n return new GControlPosition(\n G_ANCHOR_TOP_RIGHT, new GSize(10, 50));\n //Should be _ and not _\n};\n\nMyPane.prototype.getPanel = function() {\n return me.panel;\n}\nmap.addControl(new MyPane());\n\n",
"I would use HTML like the following:\n<div id=\"wrapper\">\n <div id=\"map\" style=\"width:400px;height:400px;\"></div>\n <div id=\"legend\"> ... marker descriptions in here ... </div>\n</div>\n\nYou can then style this to keep the legend in the bottom right:\ndiv#wrapper { position: relative; }\ndiv#legend { position: absolute; bottom: 0px; right: 0px; }\n\nposition: relative will cause any contained elements to be positioned relative to the #wrapper container, and position: absolute will cause the #legend div to be \"pulled\" out of the flow and sit above the map, keeping it's bottom right edge at the bottom of the #wrapper and stretching as required to contain the marker descriptions.\n"
] | [
10,
3
] | [] | [] | [
"google_maps",
"google_maps_api_2",
"html",
"javascript"
] | stackoverflow_0000036515_google_maps_google_maps_api_2_html_javascript.txt |
Q:
Writing/Using C++ Libraries
I am looking for basic examples/tutorials on:
How to write/compile libraries in C++ (.so files for Linux, .dll files for Windows).
How to import and use those libraries in other code.
A:
The code
r.cc :
#include "t.h"
int main()
{
f();
return 0;
}
t.h :
void f();
t.cc :
#include<iostream>
#include "t.h"
void f()
{
std::cout << "OH HAI. I'M F." << std::endl;
}
But how, how, how?!
~$ g++ -fpic -c t.cc # get t.o
~$ g++ -shared -o t.so t.o # get t.so
~$ export LD_LIBRARY_PATH="." # make sure t.so is found when dynamically linked
~$ g++ r.cc t.so # get an executable
The export step is not needed if you install the shared library somewhere along the global library path.
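For the Windows .dll half of the question, a rough MinGW equivalent of the above (assuming the same t.cc and r.cc; with MSVC you would use cl /LD and __declspec(dllexport) instead):

~$ g++ -c t.cc                # get t.o (-fpic is unnecessary on Windows)
~$ g++ -shared -o t.dll t.o   # get t.dll
~$ g++ r.cc t.dll             # link the executable directly against the DLL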
| Writing/Using C++ Libraries | I am looking for basic examples/tutorials on:
How to write/compile libraries in C++ (.so files for Linux, .dll files for Windows).
How to import and use those libraries in other code.
| [
"The code\nr.cc :\n#include \"t.h\"\n\nint main()\n{\n f();\n return 0;\n}\n\nt.h :\nvoid f();\n\nt.cc :\n#include<iostream>\n#include \"t.h\" \n\nvoid f()\n{\n std::cout << \"OH HAI. I'M F.\" << std::endl;\n}\n\nBut how, how, how?!\n~$ g++ -fpic -c t.cc # get t.o\n~$ g++ -shared -o t.so t.o # get t.so\n~$ export LD_LIBRARY_PATH=\".\" # make sure t.so is found when dynamically linked\n~$ g++ r.cc t.so # get an executable\n\nThe export step is not needed if you install the shared library somewhere along the global library path.\n"
] | [
18
] | [] | [] | [
"c++"
] | stackoverflow_0000042770_c++.txt |
Q:
Java object allocation overhead
I am writing an immutable DOM tree in Java, to simplify access from multiple threads.*
However, it does need to support inserts and updates as fast as possible. And since it is immutable, if I make a change to a node on the N'th level of the tree, I need to allocate at least N new nodes in order to return the new tree.
My question is, would it be dramatically faster to pre-allocate nodes rather than create new ones every time the tree is modified? It would be fairly easy to do - keep a pool of several hundred unused nodes, and pull one out of the pool rather than create one whenever it was required for a modify operation. I can replenish the node pool when there's nothing else going on. (in case it isn't obvious, execution time is going to be much more at a premium in this application than heap space is)
Is it worthwhile to do this? Any other tips on speeding it up?
Alternatively, does anyone know of an immutable DOM library that already exists? I searched, but couldn't find anything.
*Note: For those of you who aren't familiar with the concept of immutability, it basically means that on any operation to an object that changes it, the method returns a copy of the object with the changes in place, rather than the changed object. Thus, if another thread is still reading the object it will continue to happily operate on the "old" version, unaware that changes have been made, rather than crashing horribly. See http://www.javapractices.com/topic/TopicAction.do?Id=29
A:
These days, object creation is pretty dang fast, and the concept of object pooling is kind of obsolete (at least in general; connection pooling is of course still valid).
Avoid premature optimization. Create your nodes when you need them when doing your copies, and then see if that becomes prohibitively slow. If so, then look into some techniques to speed it up. But unless you already know that what you've got isn't fast enough, I wouldn't go introducing all the complexity you're going to need to get pooling going.
A:
I hate to give a non-answer, but I think the only definitive way to answer a performance question like this might be for you to code both approaches, benchmark the two, and compare the results.
A:
I'm not sure if you can avoid explicitly synchronizing certain methods in order to make sure everything is thread-safe.
One specific case: you need to synchronize one side or the other of making a newly created node available to other threads, as otherwise you risk the VM/CPU re-ordering the writes of the node's fields past the write of the reference to the shared node, exposing a partly constructed object.
Try to think in a higher level. You have an IMMUTABLE tree (that is basically a set of nodes pointing to its children). You want to insert a node in it. Then, there's no way out: you have to create a new WHOLE tree.
If you choose to implement the tree as a set of nodes pointing to their children, then you would have to create new nodes along the path from the changed node to the root. The others have the same value as before, and normally are shared. So you need to create a partially new tree, which usually means creating (depth of edited node) new parent nodes.
If you can cope with a less direct implementation, you should be able to get away with only creating parts of nodes, using techniques similar to those described in Purely Functional Data Structures to either reduce the average cost of the creation, or you can by-pass it using semi-functional approaches (such as creating an iterator which wraps an existing iterator, but returns the new node instead of the old, together with a mechanism to repair such patches in the structure as time goes on). An XPath style api might be better than a DOM api in that case - it might you decouple the nodes from the tree a bit more, and treat the mutated tree more intelligently.
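A minimal Java sketch of the plain path-copying approach described above (all names are illustrative, not from an existing library):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

final class Node {
    final String name;
    final List<Node> children;

    Node(String name, List<Node> children) {
        this.name = name;
        this.children = Collections.unmodifiableList(children);
    }

    // Returns a NEW node with one child replaced; this node, the old
    // child, and every untouched sibling/subtree are shared as-is.
    Node withChild(int index, Node newChild) {
        List<Node> copy = new ArrayList<Node>(children);
        copy.set(index, newChild);
        return new Node(name, copy);
    }
}

Editing a node at depth N then means calling withChild N times up the ancestor chain — exactly the "(depth of edited node) parent nodes" cost described above.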
A:
I'm a little confused about what you're trying to do in the first place. You want all of the nodes to be immutable AND you want to pool them? Aren't these 2 ideas mutually exclusive? When you pull an object out of the pool, won't you have to invoke a setter to link up the children?
I think that using immutable nodes is probably not going to give you the kind of thread-safety you need in the first place. What happens if 1 thread is iterating over the nodes (a search or something), while another thread is adding/removing nodes? Won't the results of the search be invalid? I'm not sure if you can avoid explicitly synchronizing certain methods in order to make sure everything is thread-safe.
A:
@Outlaw Programmer
When you pull an object out of the
pool, won't you have to invoke a
setter to link up the children?
Each node needn't be immutable internally to the package, only to the outward-facing interface. node.addChild() would be an immutable function with public visibility and return a Document, whereas node.addChildInternal() would be a normal, mutable function with package visibility. But since it is internal to the package, it can only be called as a descendant of addChild() and the structure as a whole is guaranteed to be thread safe (provided I synchronize access to the object pool). Do you see a flaw in this...? If so, please tell me!
I think that using immutable nodes is probably not going to give you the kind of thread-safety you need in the first place. What happens if 1 thread is iterating over the nodes (a search or something), while another thread is adding/removing nodes?
The tree as a whole will be immutable. Say I have Thread1 and Thread2, and tree dom1. Thread1 starts a read operation on dom1, while, concurrently, Thread2 starts a write operation on dom1. However, all the changes Thread2 makes will actually be made to a new object, dom2, and dom1 will be immutable. It is true that the values read by Thread1 will be (a few microseconds) out of date, but it won't crash on an IndexOutOfBounds or NullPointer exception or something like it would if it was reading a mutable object that was being written to. Then, Thread2 can fire an event containing dom2 to Thread1 so that it can do its read again and update its results, if necessary.
Edit: clarified
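A minimal sketch of that publish/snapshot hand-off (the Doc class here is a made-up stand-in for an immutable document, not the poster's actual API):

import java.util.concurrent.atomic.AtomicReference;

public class ImmutablePublish {
    // Trivial stand-in for an immutable DOM: just a child count.
    static final class Doc {
        final int children;
        Doc(int children) { this.children = children; }
        Doc addChild() { return new Doc(children + 1); } // returns a copy
    }

    // The only mutable state the threads share is this one reference.
    static final AtomicReference<Doc> root = new AtomicReference<Doc>(new Doc(0));

    public static void main(String[] args) {
        Doc dom1 = root.get();      // Thread1's stable snapshot
        Doc dom2 = dom1.addChild(); // Thread2 builds a new version...
        root.set(dom2);             // ...and publishes it atomically
        // dom1 is untouched, so a reader mid-traversal cannot crash.
        System.out.println(dom1.children + " vs " + dom2.children);
    }
}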
A:
I think @Outlaw has a point. The structure of the DOM tree resides in the nodes itself, having a node pointing to its children. To modify the structure of a tree you have to modify the node, so you can't have it pooled, you have to create a new one.
Try to think in a higher level. You have an IMMUTABLE tree (that is basically a set of nodes pointing to its children). You want to insert a node in it. Then, there's no way out: you have to create a new WHOLE tree.
Yes, the immutable tree is thread-safe, but it will impact performance. Object creation may be fast, but not faster than NO object creation. :)
| Java object allocation overhead | I am writing an immutable DOM tree in Java, to simplify access from multiple threads.*
However, it does need to support inserts and updates as fast as possible. And since it is immutable, if I make a change to a node on the N'th level of the tree, I need to allocate at least N new nodes in order to return the new tree.
My question is, would it be dramatically faster to pre-allocate nodes rather than create new ones every time the tree is modified? It would be fairly easy to do - keep a pool of several hundred unused nodes, and pull one out of the pool rather than create one whenever it was required for a modify operation. I can replenish the node pool when there's nothing else going on. (in case it isn't obvious, execution time is going to be much more at a premium in this application than heap space is)
Is it worthwhile to do this? Any other tips on speeding it up?
Alternatively, does anyone know of an existing immutable DOM library? I searched, but couldn't find anything.
*Note: For those of you who aren't familiar with the concept of immutability, it basically means that on any operation to an object that changes it, the method returns a copy of the object with the changes in place, rather than the changed object. Thus, if another thread is still reading the object it will continue to happily operate on the "old" version, unaware that changes have been made, rather than crashing horribly. See http://www.javapractices.com/topic/TopicAction.do?Id=29
| [
"These days, object creation is pretty dang fast, and the concept of object pooling is kind of obsolete (at least in general; connection pooling is of course still valid).\nAvoid premature optimization. Create your nodes when you need them when doing your copies, and then see if that becomes prohibitively slow. If so, then look into some techniques to speed it up. But unless you already know that what you've got isn't fast enough, I wouldn't go introducing all the complexity you're going to need to get pooling going.\n",
"I hate to give a non-answer, but I think the only definitive way to answer a performance question like this might be for you to code both approaches, benchmark the two, and compare the results.\n",
"\nI'm not sure if you can avoid explicitly synchronizing certain methods in order to make sure everything is thread-safe.\n\nOne specific case you need to synchronize one side or the other of making a newly created node available to other threads as otherwise you risk the VM/CPU re-ordering the writes of the fields past the write of the reference to the shared node, exposing a party constructed object.\n\nTry to think in a higher level. You have an IMMUTABLE tree (that is basically a set of nodes pointing to its children). You want to insert a node in it. Then, there's no way out: you have to create a new WHOLE tree.\n\nIf you choose to implement the tree as a set of nodes pointing to the children, then you would have to create new nodes along the path of the changed node to the root. The others have the same value as before, and normally are shared. So you need to create a partial new tree, which usually would mean (depth of edited node) parent nodes. \nIf you can cope with a less direct implementation, you should be able to get away with only creating parts of nodes, using techniques similar to those described in Purely Functional Data Structures to either reduce the average cost of the creation, or you can by-pass it using semi-functional approaches (such as creating an iterator which wraps an existing iterator, but returns the new node instead of the old, together with a mechanism to repair such patches in the structure as time goes on). An XPath style api might be better than a DOM api in that case - it might you decouple the nodes from the tree a bit more, and treat the mutated tree more intelligently. \n",
"I'm a little confused about what you're trying to do in the first place. You want all of the nodes to be immutable AND you want to pool them? Aren't these 2 ideas mutually exclusive? When you pull an object out of the pool, won't you have to invoke a setter to link up the children?\nI think that using immutable nodes is probably not going to give you the kind of thread-safety you need in the first place. What happens if 1 thread is iterating over the nodes (a search or something), while another thread is adding/removing nodes? Won't the results of the search be invalid? I'm not sure if you can avoid explicitly synchronizing certain methods in order to make sure everything is thread-safe.\n",
"@Outlaw Programmer\n\nWhen you pull an object out of the\n pool, won't you have to invoke a\n setter to link up the children?\n\nEach node needn't be immutable internally to the package, only to the outward-facing interface. node.addChild() would be an immutable function with public visibility and return a Document, wheras node.addChildInternal() would be be a normal, mutable function with package visibility. But since it is internal to the package, it can only be called as a descendent of addChild() and the structure as a whole is guarenteed to be thread safe (provided I synchronize access to the object pool). Do you see a flaw in this...? If so, please tell me!\n\nI think that using immutable nodes is probably not going to give you the kind of thread-safety you need in the first place. What happens if 1 thread is iterating over the nodes (a search or something), while another thread is adding/removing nodes?\n\nThe tree as a whole will be immutable. Say I have Thread1 and Thread2, and tree dom1. Thread1 starts a read operation on dom1, while, concurrently, Thread2 starts a write operation on dom1. However, all the changes Thread2 makes will actually be made to a new object, dom2, and dom1 will be immutable. It is true that the values read by Thread1 will be (a few microseconds) out of date, but it won't crash on an IndexOutOfBounds or NullPointer exception or something like it would if it was reading a mutable object that was being written to. Then, Thread2 can fire an event containing dom2 to Thread1 so that it can do its read again and update its results, if necessary.\nEdit: clarified\n",
"I think @Outlaw has a point. The structure of the DOM tree resides in the nodes itself, having a node pointing to its children. To modify the structure of a tree you have to modify the node, so you can't have it pooled, you have to create a new one.\nTry to think in a higher level. You have an IMMUTABLE tree (that is basically a set of nodes pointing to its children). You want to insert a node in it. Then, there's no way out: you have to create a new WHOLE tree. \nYes, the immutable tree is thread-safe, but it will impact performance. Object creation may be fast, but not faster then NO object creation. :)\n"
] | [
12,
3,
1,
0,
0,
0
] | [] | [] | [
"concurrency",
"dom",
"java",
"xml"
] | stackoverflow_0000042383_concurrency_dom_java_xml.txt |
Q:
Chrome tabs and processes
I was reading googlebooks on chrome, where they talk about why they decided to spin up a process to host each browser tab every time you created a new tab.
So
2 tabs = 2 chrome processes
3 tabs = 3 chrome processes and so on .. right??
But I opened up some 20 or so tabs, but in task manager, I could only find 3 chrome processes..
What is going on??
I was taught that creating a process is an expensive proposition in terms of resources needed, and there are other lightweight options available (like app domains in .net for ex)..
So is chrome taking some hybrid approach?? Create few processes and then start hosting additional tabs inside those limited set of processes??
A:
it's being hosted in the first process. open up chrome. you'll see 2 processes (manager and initial tab). then open 10 more tabs, you'll notice the second process's memory jump a lot. then type in google.com or something into the first tab, and you'll see a new process get spawned.
also notice, if you do shift+esc and bring up the task manager in chrome, all those tabs will be grouped together, one w/ memory, the others without.
A:
Don't forget that if two sites share a session, they share a process. So following a link from one site that opens a new page will be in the same session (and thus the same process).
For each tab created with Ctrl+T, you should get a new process.
A:
I've also noticed that tabs browsing the same domain are grouped in the same process. So if you have 3 tabs browsing stackoverflow.com, those three tabs will appear as one process.
A:
Process creation is relatively expensive, certainly compared to thread creation. But the frequency of process creation in Chrome is very low, so the real issue is the amount of resource overhead vs other techniques.
The Google team figured that the benefits of a separate process model justified the resource costs. Given the current resources on desktop machines this trade off makes a lot of sense.
| Chrome tabs and processes | I was reading googlebooks on chrome, where they talk about why they decided to spin up a process to host each browser tab every time you created a new tab.
So
2 tabs = 2 chrome processes
3 tabs = 3 chrome processes and so on .. right??
But I opened up some 20 or so tabs, but in task manager, I could only find 3 chrome processes..
What is going on??
I was taught that creating a process is an expensive proposition in terms of resources needed, and there are other lightweight options available (like app domains in .net for ex)..
So is chrome taking some hybrid approach?? Create few processes and then start hosting additional tabs inside those limited set of processes??
| [
"it's being hosted in the first process. open up chrome. you'll see 2 processes (manager and initial tab). then open 10 more tabs, you'll notice the second process's memory jump a lot. then type in google.com or something into the first tab, and you'll see a new process get spawned.\nalso notice, if you do shift+esc and brink up the task manager in chrome, all those tabs will be grouped together, one w/ memory, the others without.\n",
"Don't forget that if two sites share a session, they share a process. So following a link from one site that opens a new page will be in the same session (and thus the same process).\nFor each tab created with Ctrl+T, you should get a new process.\n",
"I've also noticed that tabs browsing the same domain ar grouped in the same process. So if you have 3 tab browsing stackoverflow.com, those three tabs will appread as one process\n",
"Process creation is relatively expensive, certainly compared to thread creation. But the frequency of process creation in Chrome is very slow, so the real issue is the amount of resource overhead vs other techniques.\nThe Google team figured that the benefits of a separate process model justified the resource costs. Given the current resources on desktop machines this trade off makes a lot of sense.\n"
] | [
5,
3,
2,
0
] | [] | [] | [
"google_chrome",
"resources"
] | stackoverflow_0000042804_google_chrome_resources.txt |
Q:
PHP, Arrays, and References
Why does the following code not work as I was expecting?
<?php
$data = array(
array('Area1', null, null),
array(null, 'Section1', null),
array(null, null, 'Location1'),
array('Area2', null, null),
array(null, 'Section2', null),
array(null, null, 'Location2')
);
$root = array();
foreach ($data as $row) {
if ($row[0]) {
$area = array();
$root[$row[0]] =& $area;
} elseif ($row[1]) {
$section = array();
$area[$row[1]] =& $section;
} elseif ($row[2]) {
$section[] = $row[2];
}
}
print_r($root);
Expected result:
Array(
[Area1] => Array(
[Section1] => Array(
[0] => Location1
)
)
[Area2] => Array(
[Section2] => Array(
[0] => Location2
)
)
)
Actual result:
Array(
[Area1] => Array(
[Section2] => Array(
[0] => Location2
)
)
[Area2] => Array(
[Section2] => Array(
[0] => Location2
)
)
)
A:
If you modify your code on two lines as follows:
$area = array();
$section = array();
to this:
unset($area);
$area = array();
unset($section);
$section = array();
it will work as expected.
In the first version, $area and $section are acting as "pointers" to the value inside the $root array. If you reset the values first, those variables can then be used to create brand new arrays instead of overwriting the previous arrays.
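Putting those resets back into the loop from the question, the corrected version looks like this:

<?php
$data = array(
    array('Area1', null, null),
    array(null, 'Section1', null),
    array(null, null, 'Location1'),
    array('Area2', null, null),
    array(null, 'Section2', null),
    array(null, null, 'Location2')
);

$root = array();
foreach ($data as $row) {
    if ($row[0]) {
        unset($area);            // drop the old reference before reusing the name
        $area = array();
        $root[$row[0]] =& $area;
    } elseif ($row[1]) {
        unset($section);         // likewise for the inner level
        $section = array();
        $area[$row[1]] =& $section;
    } elseif ($row[2]) {
        $section[] = $row[2];
    }
}
print_r($root);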
A:
This also works:
$root[$row[0]] = array();
$area =& $root[$row[0]];

Here the array is created inside $root first and $area is then bound to that slot with =&, which re-binds the variable rather than writing through the old reference, so the previous area's data stays intact.
| PHP, Arrays, and References | Why does the following code not work as I was expecting?
<?php
$data = array(
array('Area1', null, null),
array(null, 'Section1', null),
array(null, null, 'Location1'),
array('Area2', null, null),
array(null, 'Section2', null),
array(null, null, 'Location2')
);
$root = array();
foreach ($data as $row) {
if ($row[0]) {
$area = array();
$root[$row[0]] =& $area;
} elseif ($row[1]) {
$section = array();
$area[$row[1]] =& $section;
} elseif ($row[2]) {
$section[] = $row[2];
}
}
print_r($root);
Expected result:
Array(
[Area1] => Array(
[Section1] => Array(
[0] => Location1
)
)
[Area2] => Array(
[Section2] => Array(
[0] => Location2
)
)
)
Actual result:
Array(
[Area1] => Array(
[Section2] => Array(
[0] => Location2
)
)
[Area2] => Array(
[Section2] => Array(
[0] => Location2
)
)
)
| [
"If you modify your code on two lines as follows:\n$area = array();\n\n$section = array();\n\nto this:\nunset($area);\n$area = array();\n\nunset($section);\n$section = array();\n\nit will work as expected.\nIn the first version, $area and $section are acting as \"pointers\" to the value inside the $root array. If you reset the values first, those variables can then be used to create brand new arrays instead of overwriting the previous arrays.\n",
"This will also works:\n$root[$row[0]] = array();\n$area =& $root[$row[0]];\n\n"
] | [
3,
1
] | [] | [] | [
"arrays",
"php",
"reference"
] | stackoverflow_0000042876_arrays_php_reference.txt |
Q:
Reasons for SQL differences
Why are SQL distributions so non-standard despite an ANSI standard existing for SQL? Are there really that many meaningful differences in the way SQL databases work or is it just the two databases with which I have been working: MS-SQL and PostgreSQL? Why do these differences arise?
A:
The ANSI standard specifies only a limited set of commands and data types. Once you go beyond those, the implementors are on their own. And some very important concepts aren't specified at all, such as auto-incrementing columns. SQLite just picks the first non-null integer, MySQL requires AUTO INCREMENT, PostgreSQL uses sequences, etc. It's a mess, and that's only among the OSS databases! Try getting Oracle, Microsoft, and IBM to collectively decide on a tricky bit of functionality.
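As a quick illustration, the auto-increment case alone reads differently in each dialect (rough sketches; exact options vary by product version):

-- MySQL
CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(50));

-- PostgreSQL (SERIAL is shorthand for a sequence plus a column default)
CREATE TABLE t (id SERIAL PRIMARY KEY, name VARCHAR(50));

-- SQL Server
CREATE TABLE t (id INT IDENTITY(1,1) PRIMARY KEY, name VARCHAR(50));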
A:
It's a form of "Stealth lock-in". Joel goes into great detail here:
http://www.joelonsoftware.com/articles/fog0000000056.html
http://www.joelonsoftware.com/articles/fog0000000052.html
Companies end up tying their business functionality to non-standard or weird unsupported functionality in their implementation, this restricts their ability to move away from their vendor to a competitor.
On the other hand, it's pretty short-sighted because anyone with half a brain will tend to abstract away the proprietary pieces, or avoid the lock-in altogether, if it gets too egregious.
A:
First, I don't find databases to be as incompatible as, say, browsers or operating systems. Anyone with a few hours of training can start doing selects, inserts, deletes and updates on any SQL database. Meanwhile, it's difficult to write HTML that renders identically on every browser or write system code for more than one OS. Generally, differences in SQL are related to performance or fairly esoteric features. The major exception seems to be date formats and functions.
Second, database developers generally are motivated to add features that differentiate their product from everyone else. Products like Oracle, MS SQL Server and MySQL are vast ecosystems that rarely cross-pollinate in practice. At my workplace, we use Oracle and MySQL, but we could probably switch over to 100% Oracle in about a day if needed or desired. So I care a lot about the shiny toys Oracle gives us with each release, but I don't even know what version of MySQL we are using. IBM, Microsoft, PostgreSQL and the rest might as well not exist as far as we are concerned. Having the features to get and keep customers and users is far more important than compatibility in the database world. (That's the positive spin on the "lock-in" answer, I suppose.)
Third, there are legitimate reasons for different companies to implement SQL differently. For instance, Oracle has a multi-versioning system that allows very fast and scalable consistent reads. Other databases lack that feature, but usually are faster inserting rows and rolling back transactions. This is a fundamental difference in these systems. It doesn't make one better than the other (at least in the general case), just different. One should not be surprised if the SQL on top of a database engine takes advantage of its strengths and attempts to minimize its weaknesses. In fact, it would be irresponsible of the developers to not do this.
A:
John: The standard actually covers lots of subjects, including identity columns, sequences, triggers, routines, upsert, etc. But of course, many of these standards-components may have been brought in place later than the first implementations; and this could be a reason why SQL standards compliance is somewhat low, generally.
Neall: There are actually areas where the SQL standard is ahead of the implementations. For example, it would be nice to have CREATE ASSERTION, but as far as I know, no DBMS implements assertions yet.
Personally, I believe that the closed nature of some ISO standards (like the SQL standard) is part of the problem: When a standard is not readily available online, it's less likely to be known by implementors/planners, and too few customers ask for compliance because they don't know what to ask for.
A:
It's certainly effective lock-in, as 1800 says. But in fairness to the database vendors, the SQL standard is always playing catch-up to current databases' feature sets. Most databases we have today are of pretty ancient lineages. If you trace Microsoft SQL Server back to its roots, I think you'll find Ingres - one of the very first relational databases written in the '70s. And Postgres was originally written by some of the same people in the '80s as a successor to Ingres. Oracle goes way back, and I'm not sure where MySQL came in.
Database non-portability does suck, but it could be a lot worse.
| Reasons for SQL differences | Why are SQL distributions so non-standard despite an ANSI standard existing for SQL? Are there really that many meaningful differences in the way SQL databases work or is it just the two databases with which I have been working: MS-SQL and PostgreSQL? Why do these differences arise?
| [
"The ANSI standard specifies only a limited set of commands and data types. Once you go beyond those, the implementors are on their own. And some very important concepts aren't specified at all, such as auto-incrementing columns. SQLite just picks the first non-null integer, MySQL requires AUTO INCREMENT, PostgreSQL uses sequences, etc. It's a mess, and that's only among the OSS databases! Try getting Oracle, Microsoft, and IBM to collectively decide on a tricky bit of functionality.\n",
"It's a form of \"Stealth lock-in\". Joel goes into great detail here:\n\nhttp://www.joelonsoftware.com/articles/fog0000000056.html\nhttp://www.joelonsoftware.com/articles/fog0000000052.html\n\nCompanies end up tying their business functionality to non-standard or weird unsupported functionality in their implementation, this restricts their ability to move away from their vendor to a competitor.\nOn the other hand, it's pretty short-sighted because anyone with half a brain will tend to abstract away the proprietary pieces, or avoid the lock-in altogether, if it gets too egregious.\n",
"First, I don't find databases to be as, say, browsers or operating systems in terms of incompatibility. Anyone with a few hours of training can start doing selects, inserts, deletes and updates on any SQL database. Meanwhile, it's difficult to write HTML that renders identically on every browser or write system code for more than one OS. Generally, differences in SQL are related to performance or fairly esoteric features. The major exception seems to be date formats and functions.\nSecond, database developers generally are motivated to add features that differentiate their product from everyone else. Products like Oracle, MS SQL Server and MySQL are vast ecosystems that rarely cross-pollinate in practice. At my workplace, we use Oracle and MySQL, but we could probably switch over to 100% Oracle in about a day if needed or desired. So I care a lot about the shiny toys Oracle gives us with each release, but I don't even know what version of MySQL we are using. IBM, Microsoft, PostgreSQL and the rest might as well not exist as far as we are concerned. Having the features to get and keep customers and users is far more important than compatibility in the database world. (That's the positive spin on the \"lock-in\" answer, I suppose.)\nThird, there are legitimate reasons for different companies to implement SQL differently. For instance, Oracle has a multi-versioning system that allows very fast and scalable consistent reads. Other databases lack that feature, but usually are faster inserting rows and rolling back transactions. This is a fundamental difference in these systems. It doesn't make one better than the other (at least in the general case), just different. One should not be surprised if the SQL ontop of a database engine takes advantage of its strengths and attempts to minimize its weaknesses. In fact, it would be irresponsible of the developers to not do this.\n",
"John: The standard actually covers lots of subjects, including identity columns, sequences, triggers, routines, upsert, etc. But of course, many of these standards-components may have been brought in place later than the first implementations; and this could be a reason why SQL standards compliance is somewhat low, generally.\nNeall: There are actually areas where the SQL standard is ahead of the implementations. For example, it would be nice to have CREATE ASSERTION, but as far as I know, no DBMS implements assertions yet.\nPersonally, I believe that the closed nature of some ISO standards (like the SQL standard) is part of the problem: When a standard is not readily available online, it's less likely to be known by implementors/planners, and too few customers ask for compliance because they don't know what to ask for.\n",
"It's certainly effective lock-in, as 1800 says. But in fairness to the database vendors, the SQL standard is always playing catch-up to current databases' feature sets. Most databases we have today are of pretty ancient lineages. If you trace Microsoft SQL Server back to its roots, I think you'll find Ingres - one of the very first relational databases written in the '70s. And Postgres was originally written by some of the same people in the '80s as a successor to Ingres. Oracle goes way back, and I'm not sure where MySQL came in.\nDatabase non-portability does suck, but it could be a lot worse.\n"
] | [
8,
5,
5,
4,
2
] | [] | [] | [
"postgresql",
"sql",
"sql_server"
] | stackoverflow_0000037441_postgresql_sql_sql_server.txt |
Q:
How do I convert an IntPtr to a Stream?
class Foo
{
static bool Bar(Stream^ stream);
};
class FooWrapper
{
bool Bar(LPCWSTR szUnicodeString)
{
return Foo::Bar(??);
}
};
MemoryStream will take a byte[] but I'd like to do this without copying the data if possible.
A:
You can avoid the copy if you use an UnmanagedMemoryStream() instead (class exists in .NET FCL 2.0 and later). Like MemoryStream, it is a subclass of IO.Stream, and has all the usual stream operations.
Microsoft's description of the class is:
Provides access to unmanaged blocks of memory from managed code.
which pretty much tells you what you need to know. Note that UnmanagedMemoryStream() is not CLS-compliant.
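A rough sketch of the wrapper built on it (untested; note that the stream does not own the buffer, so szUnicodeString must stay alive for as long as the stream is used):

static Stream^ UnicodeStringToStream(LPCWSTR szUnicodeString)
{
    if (szUnicodeString == NULL)
    {
        return nullptr;
    }

    size_t lengthInBytes = wcslen(szUnicodeString) * sizeof(wchar_t);

    //wrap the existing buffer directly - no copy is made
    return gcnew UnmanagedMemoryStream(
        (unsigned char*)(void*)szUnicodeString, (long long)lengthInBytes);
}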
A:
If I had to copy the memory, I think the following would work:
static Stream^ UnicodeStringToStream(LPCWSTR szUnicodeString)
{
//validate the input parameter
if (szUnicodeString == NULL)
{
return nullptr;
}
//get the length of the string
size_t lengthInWChars = wcslen(szUnicodeString);
size_t lengthInBytes = lengthInWChars * sizeof(wchar_t);
//allocate the .Net byte array
array<Byte>^ byteArray = gcnew array<Byte>(lengthInBytes);
//copy the unmanaged memory into the byte array
Marshal::Copy((IntPtr)(void*)szUnicodeString, byteArray, 0, (int)lengthInBytes);
//create a memory stream from the byte array
return gcnew MemoryStream(byteArray);
}
| How do I convert an IntPtr to a Stream? | class Foo
{
static bool Bar(Stream^ stream);
};
class FooWrapper
{
bool Bar(LPCWSTR szUnicodeString)
{
return Foo::Bar(??);
}
};
MemoryStream will take a byte[] but I'd like to do this without copying the data if possible.
| [
"You can avoid the copy if you use an UnmanagedMemoryStream() instead (class exists in .NET FCL 2.0 and later). Like MemoryStream, it is a subclass of IO.Stream, and has all the usual stream operations.\nMicrosoft's description of the class is:\n\nProvides access to unmanaged blocks of memory from managed code.\n\nwhich pretty much tells you what you need to know. Note that UnmanagedMemoryStream() is not CLS-compliant.\n",
"If I had to copy the memory, I think the following would work:\n\nstatic Stream^ UnicodeStringToStream(LPCWSTR szUnicodeString)\n{\n //validate the input parameter\n if (szUnicodeString == NULL)\n {\n return nullptr;\n }\n\n //get the length of the string\n size_t lengthInWChars = wcslen(szUnicodeString); \n size_t lengthInBytes = lengthInWChars * sizeof(wchar_t);\n\n //allocate the .Net byte array\n array^ byteArray = gcnew array(lengthInBytes);\n\n //copy the unmanaged memory into the byte array\n Marshal::Copy((IntPtr)(void*)szUnicodeString, byteArray, 0, lengthInBytes);\n\n //create a memory stream from the byte array\n return gcnew MemoryStream(byteArray);\n}\n"
] | [
8,
0
] | [] | [] | [
".net",
"c++",
"interop",
"managed_c++"
] | stackoverflow_0000042446_.net_c++_interop_managed_c++.txt |
Q:
Backward Converting SQL Databases
Does anyone know of any free tools that can assist in converting an SQL2005 database back to SQL2000 format? I know that you can script all the objects and then do a dump of the data, but this is a lot of work to do manually.
A:
Reviewing some other related questions I just found Microsoft's Database Publishing Wizard. It does most of what I need, although I have used nVarChar(max) in a couple of places and it simply fails to handle those cases and bombs out without generating anything.
A:
Have you considered using DTS to transfer the data across? It should be independent of the version.
| Backward Converting SQL Databases | Does anyone know of any free tools that can assist in converting an SQL2005 database back to SQL2000 format? I know that you can script all the objects and then do a dump of the data, but this is a lot of work to do manually.
| [
"Reviewing some other related questions I just found Microsoft's Database Publishing Wizard. It does most of what I need, although I have used nVarChar(max) in a couple of places and it simply fails to handle those cases and bombs out without generating anything.\n",
"Have you considered using DTS to transfer the data across? It should be independant of the version.\n"
] | [
1,
0
] | [] | [] | [
"database",
"sql_server",
"sql_server_2005"
] | stackoverflow_0000042954_database_sql_server_sql_server_2005.txt |
Q:
What's the purpose (if any) of "javascript:" in event handler tags?
I've been making a concerted effort to improve my javascript skills lately by reading as much javascript code as I can. In doing this I've sometimes seen the javascript: prefix appended to the front of event handler attributes in HTML element tags. What's the purpose of this prefix? Basically, is there any appreciable difference between:
onchange="javascript: myFunction(this)"
and
onchange="myFunction(this)"
?
A:
Probably nothing in your example. My understanding is that javascript: is for anchor tags (in place of an actual href). You'd use it so that your script can execute when the user clicks the link, but without initiating a navigation back to the page (which a blank href coupled with an onclick will do).
For example:
<a href="javascript:someFunction();">Blah</a>
Rather than:
<a href="" onclick="someFunction();">Blah</a>
A:
It should not be used in event handlers (though most browsers work defensively, and will not punish you). I would also argue that it should not be used in the href attribute of an anchor. If a browser supports javascript, it will use the properly defined event handler. If a browser does not, a javascript: link will appear broken. IMO, it is better to point them to a page explaining that they need to enable javascript to use that functionality, or better yet a non-javascript required version of the functionality. So, something like:
<a href="non-ajax.html" onclick="niftyAjax(); return false;">Ajax me</a>
Edit: Thought of a good reason to use javascript:. Bookmarklets. For instance, this one sends you to google reader to view the rss feeds for a page:
var b=document.body;
if(b&&!document.xmlVersion){
void(z=document.createElement('script'));
void(z.src='http://www.google.com/reader/ui/subscribe-bookmarklet.js');
void(b.appendChild(z));
}else{
location='http://www.google.com/reader/view/feed/'+encodeURIComponent(location.href)
}
To have a user easily add this Bookmarklet, you would format it like so:
<a href="javascript:var%20b=document.body;if(b&&!document.xmlVersion){void(z=document.createElement('script'));void(z.src='http://www.google.com/reader/ui/subscribe-bookmarklet.js');void(b.appendChild(z));}else{location='http://www.google.com/reader/view/feed/'+encodeURIComponent(location.href)}">Drag this to your bookmarks, or right click and bookmark it!</a>
A:
It should only be used in the href tag.
That's ridiculous.
The accepted way is this:
<a href="/non-js-version/" onclick="someFunction(); return false">Blah</a>
But to answer the OP, there is generally no reason to use javascript: anymore. In fact, you should attach the javascript event from your script, and not inline in the markup. But, that's a purist thing I think :-D
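For example, the same link wired up from script instead of inline (the id is made up for the sketch):

<a id="ajaxLink" href="/non-js-version/">Blah</a>
<script type="text/javascript">
document.getElementById('ajaxLink').onclick = function () {
    someFunction();  // assumed to be defined elsewhere
    return false;    // cancel the normal navigation
};
</script>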
A:
The origin of javascript: in an event handler is actually just an IE-specific thing so that you can specify the language in addition to the handler. This is because vbscript is also a supported client side scripting language in IE. Here's an example of "vbscript:".
In other browsers (as has been said by Shadow2531) javascript: is just a label and is basically ignored.
href="javascript:..." can be used in links to execute javascript code as DannySmurf points out.
A:
I am no authority in JavaScript, and perhaps more of a dunce than the asker, but AFAIK, the difference is that the javascript: prefix is preferred/required in URI-contexts, where the argument may be as well a traditional HTTP URL as a JavaScript trigger.
So, my intuitive answer would be that, since onChange expects JavaScript, the javascript: prefix is redundant (if not downright erroneous). You can, however, write javascript:myFunction(this) in your address bar, and that function is run. Without the javascript:, your browser would try to interpret myFunction(this) as a URL and tries to fetch the DNS info, browse to that server, etc...
A:
javascript: in JS code (like in an onclick attribute) is just a label for use with continue/goto label statements that may or may not be supported by the browser (probably not anywhere). It could be zipzambam: instead. Even if the label can't be used, browsers still accept it so it doesn't cause an error.
This means that if someone's throwing a useless label in an onclick attribute, they probably don't know what they're doing and are just copying and pasting or doing it out of habit from doing the below.
javascript: in the href attribute signifies a Javascript URI.
Example:
javascript:(function()%7Balert(%22test%22)%3B%7D)()%3B
A:
I don't know if the javascript: prefix means anything within the onevent attributes but I know they are annoying in anchor tags when trying to open the link in a new tab. The href should be used as a fall back and never to attach javascript to links.
A:
@mercutio
That's ridiculous.
No, it's not ridiculous, javascript: is a pseudo protocol that can indeed only be used as the subject of a link, so he's quite right. Your suggestion is indeed better, but the best way of all is to use unobtrusive javascript techniques to iterate over HTML elements and add behaviour programmatically, as used in libraries like jQuery.
A:
Basically, is there any appreciable difference between: onchange="javascript: myFunction(this)" and onchange="myFunction(this)" ?
Assuming you meant href="javascript: myFunction(this)", yes there is, especially when loading content using the javascript. Using the javascript: pseudo protocol makes the content inaccessible to some humans and all search engines, whereas using a real href and then changing the behaviour of the link using javascript makes the content accessible if javascript is turned off or not available in the particular client.
A:
Flubba:
Use of javascript: in HREF breaks "Open in New Window" and "Open in New Tab" in Firefox and other browsers.
It isn't "wrong", but if you want to make your site hard to navigate...
| What's the purpose (if any) of "javascript:" in event handler tags? | I've been making a concerted effort to improve my javascript skills lately by reading as much javascript code as I can. In doing this I've sometimes seen the javascript: prefix appended to the front of event handler attributes in HTML element tags. What's the purpose of this prefix? Basically, is there any appreciable difference between:
onchange="javascript: myFunction(this)"
and
onchange="myFunction(this)"
?
| [
"Probably nothing in your example. My understanding is that javascript: is for anchor tags (in place of an actual href). You'd use it so that your script can execute when the user clicks the link, but without initiating a navigation back to the page (which a blank href coupled with an onclick will do).\nFor example:\n<a href=\"javascript:someFunction();\">Blah</a>\n\nRather than:\n<a href=\"\" onclick=\"someFunction();\">Blah</a>\n\n",
"It should not be used in event handlers (though most browsers work defensively, and will not punish you). I would also argue that it should not be used in the href attribute of an anchor. If a browser supports javascript, it will use the properly defined event handler. If a browser does not, a javascript: link will appear broken. IMO, it is better to point them to a page explaining that they need to enable javascript to use that functionality, or better yet a non-javascript required version of the functionality. So, something like:\n<a href=\"non-ajax.html\" onclick=\"niftyAjax(); return false;\">Ajax me</a>\n\nEdit: Thought of a good reason to use javascript:. Bookmarklets. For instance, this one sends you to google reader to view the rss feeds for a page:\nvar b=document.body;\nif(b&&!document.xmlVersion){\n void(z=document.createElement('script'));\n void(z.src='http://www.google.com/reader/ui/subscribe-bookmarklet.js');\n void(b.appendChild(z));\n}else{\n location='http://www.google.com/reader/view/feed/'+encodeURIComponent(location.href)\n}\n\nTo have a user easily add this Bookmarklet, you would format it like so:\n<a href=\"javascript:var%20b=document.body;if(b&&!document.xmlVersion){void(z=document.createElement('script'));void(z.src='http://www.google.com/reader/ui/subscribe-bookmarklet.js');void(b.appendChild(z));}else{location='http://www.google.com/reader/view/feed/'+encodeURIComponent(location.href)}\">Drag this to your bookmarks, or right click and bookmark it!</a>\n\n",
"\nIt should only be used in the href tag.\n\nThat's ridiculous.\nThe accepted way is this:\n<a href=\"/non-js-version/\" onclick=\"someFunction(); return false\">Blah</a>\n\nBut to answer the OP, there is generally no reason to use javascript: anymore. In fact, you should attach the javascript event from your script, and not inline in the markup. But, that's a purist thing I think :-D\n",
"The origins of javascript: in an event handler is actually just an IE specific thing so that you can specify the language in addition to the handler. This is because vbscript is also a supported client side scripting language in IE. Here's an example of \"vbscript:\".\nIn other browsers (as has been said by Shadow2531) javascript: is just a label and is basically ignored.\nhref=\"javascript:...\" can be used in links to execute javascript code as DannySmurf points out.\n",
"I am no authority in JavaScript, and perhaps more of a dunce than the asker, but AFAIK, the difference is that the javascript: prefix is preferred/required in URI-contexts, where the argument may be as well a traditional HTTP URL as a JavaScript trigger.\nSo, my intuitive answer would be that, since onChange expects JavaScript, the javascript: prefix is redundant (if not downright erroneous). You can, however, write javascript:myFunction(this) in your address bar, and that function is run. Without the javascript:, your browser would try to interpret myFunction(this) as a URL and tries to fetch the DNS info, browse to that server, etc...\n",
"javascript: in JS code (like in an onclick attribute) is just a label for use with continue/goto label statements that may or may not be supported by the browser (probably not anywhere). It could be zipzambam: instead. Even if the label can't be used, browsers still accept it so it doesn't cause an error.\nThis means that if someone's throwing a useless label in an onclick attribute, they probably don't know what they're doing and are just copying and pasting or doing it out of habit from doing the below.\njavascript: in the href attribute signifies a Javascript URI.\nExample:\njavascript:(function()%7Balert(%22test%22)%3B%7D)()%3B\n\n",
"I don't know if the javascript: prefix means anything within the onevent attributes but I know they are annoying in anchor tags when trying to open the link in a new tab. The href should be used as a fall back and never to attach javascript to links.\n",
"@mercutio\n\n\nThat's ridiculous.\n\n\nNo, it's not ridiculous, javascript: is a pseudo protocol that can indeed only be used as the subject of a link, so he's quite right. Your suggestion is indeed better, but the best way of all is to use unobtrusive javascript techniques to iterate over HTML elements and add behaviour programmatically, as used in libraries like jQuery.\n",
"\n\nBasically, is there any appreciable difference between: onchange=\"javascript: myFunction(this)\" and onchange=\"myFunction(this)\" ?\n\n\nAssuming you meant href=\"javascript: myFunction(this)\", yes there is, especially when loading content using the javascript. Using the javascript: pseudo protocol makes the content inaccessible to some humans and all search engines, whereas using a real href and then changing the behaviour of the link using javascript makes the content accessible if javascript is turned off or not available in the particular client.\n",
"Flubba:\nUse of javascript: in HREF breaks \"Open in New Window\" and \"Open in New Tab\" in a Firefox and other browsers.\nIt isn't \"wrong\", but if you want to make your site hard to navigate...\n"
] | [
19,
13,
5,
2,
1,
1,
0,
0,
0,
0
] | [] | [] | [
"javascript"
] | stackoverflow_0000023217_javascript.txt |
Q:
Regex to match against something that is not a specific substring
I am looking for a regex that will match a string that starts with one substring and does not end with a certain substring.
Example:
// Updated to be correct, thanks @Apocalisp
^foo.*(?<!bar)$
Should match anything that starts with "foo" and doesn't end with "bar". I know about the [^...] syntax, but I can't find anything that will do that for a string instead of single characters.
I am specifically trying to do this for Java's regex, but I've run into this before so answers for other regex engines would be great too.
Thanks to @Kibbee for verifying that this works in C# as well.
A:
I think in this case you want negative lookbehind, like so:
foo.*(?<!bar)
A:
I'm not familiar with Java regex but documentation for the Pattern Class would suggest you could use (?!X) for a non-capturing zero-width negative lookahead (it looks for something that is not X at that postision, without capturing it as a backreference). So you could do:
foo.*(?!bar) // not correct
Update: Apocalisp's right, you want negative lookbehind. (you're checking that what the .* matches doesn't end with bar)
A:
Verified @Apocalisp's answer using:
import java.util.regex.Pattern;
public class Test {
public static void main(String[] args) {
Pattern p = Pattern.compile("^foo.*(?<!bar)$");
System.out.println(p.matcher("foobar").matches());
System.out.println(p.matcher("fooBLAHbar").matches());
System.out.println(p.matcher("1foo").matches());
System.out.println(p.matcher("fooBLAH-ar").matches());
System.out.println(p.matcher("foo").matches());
System.out.println(p.matcher("foobaz").matches());
}
}
This outputs the right answers:
false
false
false
true
true
true
A:
As other commenters said, you need a negative lookahead. In Java you can use this pattern:
"^first_string(?!.?second_string)\\z"
^ - ensures that string starts with
first_string
\z - ensures that string ends with second_string
(?!.?second_string) - means that first_string can't be followed by second_string
| Regex to match against something that is not a specific substring | I am looking for a regex that will match a string that starts with one substring and does not end with a certain substring.
Example:
// Updated to be correct, thanks @Apocalisp
^foo.*(?<!bar)$
Should match anything that starts with "foo" and doesn't end with "bar". I know about the [^...] syntax, but I can't find anything that will do that for a string instead of single characters.
I am specifically trying to do this for Java's regex, but I've run into this before so answers for other regex engines would be great too.
Thanks to @Kibbee for verifying that this works in C# as well.
| [
"I think in this case you want negative lookbehind, like so:\nfoo.*(?<!bar)\n\n",
"I'm not familiar with Java regex but documentation for the Pattern Class would suggest you could use (?!X) for a non-capturing zero-width negative lookahead (it looks for something that is not X at that postision, without capturing it as a backreference). So you could do:\nfoo.*(?!bar) // not correct\n\nUpdate: Apocalisp's right, you want negative lookbehind. (you're checking that what the .* matches doesn't end with bar)\n",
"Verified @Apocalisp's answer using:\nimport java.util.regex.Pattern;\npublic class Test {\n public static void main(String[] args) {\n Pattern p = Pattern.compile(\"^foo.*(?<!bar)$\");\n System.out.println(p.matcher(\"foobar\").matches());\n System.out.println(p.matcher(\"fooBLAHbar\").matches());\n System.out.println(p.matcher(\"1foo\").matches());\n System.out.println(p.matcher(\"fooBLAH-ar\").matches());\n System.out.println(p.matcher(\"foo\").matches());\n System.out.println(p.matcher(\"foobaz\").matches());\n }\n}\n\nThis output the the right answers:\nfalse\nfalse\nfalse\ntrue\ntrue\ntrue\n\n",
"As other commenters said, you need a negative lookahead. In Java you can use this pattern:\n\"^first_string(?!.?second_string)\\\\z\"\n\n\n^ - ensures that string starts with\nfirst_string\n\\z - ensures that string ends with second_string\n(?!.?second_string) - means that first_string can't be followed by second_string\n\n"
] | [
9,
1,
1,
0
] | [] | [] | [
"c#",
"java",
"regex",
"regex_negation"
] | stackoverflow_0000042990_c#_java_regex_regex_negation.txt |
Q:
Incrementing from 0 to 100 in assembly language
This is kinda oddball, but I was poking around with the GNU assembler today (I want to be able to at least read the syntax), and was trying to get this little contrived example of mine to work. Namely I just want to go from 0 to 100, printing out numbers all the while. So a few minutes later I come up with this:
# count.s: print the numbers from 0 to 100.
.text
string: .asciz "%d\n"
.globl _main
_main:
movl $0, %eax # The starting point/current value.
movl $100, %ebx # The ending point.
_loop:
# Display the current value.
pushl %eax
pushl $string
call _printf
addl $8, %esp
# Check against the ending value.
cmpl %eax, %ebx
je _end
# Increment the current value.
incl %eax
jmp _loop
_end:
All I get from this is 3 printed over and over again. Like I said, just a little contrived example, so don't worry too much about it, it's not a life or death problem.
(The formatting's a little messed up, but nothing major).
A:
You can't trust what any called procedure does to any of the registers.
Either push the registers onto the stack and pop them back off after calling printf or have the increment and end point values held in memory and read/written into registers as you need them.
I hope the following works. I'm assuming that pushl has an equivalent popl and you can push an extra couple of numbers onto the stack.
# count.s: print the numbers from 0 to 100.
.text
string: .asciz "%d\n"
.globl _main
_main:
movl $0, %eax # The starting point/current value.
movl $100, %ebx # The ending point.
_loop:
# Remember your registers.
pushl %eax
pushl %ebx
# Display the current value.
pushl %eax
pushl $string
call _printf
addl $8, %esp
# reinstate registers.
popl %ebx
popl %eax
# Check against the ending value.
cmpl %eax, %ebx
je _end
# Increment the current value.
incl %eax
jmp _loop
_end:
A:
I'm not too familiar with _printf, but could it be that it modifies eax? Printf should return the number of chars printed, which in this case is two: '0' and '\n'. I think it returns this in eax, and when you increment it, you get 3, which is what you proceed to print.
You might be better off using a different register for the counter.
A:
You can safely use registers that are "callee-saved" without having to save them yourself. On x86 these are edi, esi, and ebx; other architectures have more.
These are documented in the ABI references: http://math-atlas.sourceforge.net/devel/assembly/
A:
Well written functions will usually push all the registers onto the stack and then pop them when they're done so that they remain unchanged during the function. The exception would be eax that contains the return value. Library functions like printf are most likely written this way, so I wouldn't do as Wedge suggests:
You'll need to do the same for any other variable you have. Using registers to store local variables is pretty much reserved to architectures with enough registers to support it (e.g. EPIC, amd64, etc.)
In fact, from what I know, compilers usually compile functions that way to deal exactly with this issue.
@seanyboy, your solution is overkill. All that's needed is to replace eax with some other register like ecx.
A:
Nathan is on the right track. You can't assume that register values will be unmodified after calling a subroutine. In fact, it's best to assume they will be modified, else the subroutine wouldn't be able to do its work (at least for low register count architectures like x86). If you want to preserve a value you should store it in memory (e.g. push it onto the stack and keep track of its location).
You'll need to do the same for any other variable you have. Using registers to store local variables is pretty much reserved to architectures with enough registers to support it (e.g. EPIC, amd64, etc.)
| Incrementing from 0 to 100 in assembly language | This is kinda oddball, but I was poking around with the GNU assembler today (I want to be able to at least read the syntax), and was trying to get this little contrived example of mine to work. Namely I just want to go from 0 to 100, printing out numbers all the while. So a few minutes later I come up with this:
# count.s: print the numbers from 0 to 100.
.text
string: .asciz "%d\n"
.globl _main
_main:
movl $0, %eax # The starting point/current value.
movl $100, %ebx # The ending point.
_loop:
# Display the current value.
pushl %eax
pushl $string
call _printf
addl $8, %esp
# Check against the ending value.
cmpl %eax, %ebx
je _end
# Increment the current value.
incl %eax
jmp _loop
_end:
All I get from this is 3 printed over and over again. Like I said, just a little contrived example, so don't worry too much about it, it's not a life or death problem.
(The formatting's a little messed up, but nothing major).
| [
"You can't trust what any called procedure does to any of the registers. \nEither push the registers onto the stack and pop them back off after calling printf or have the increment and end point values held in memory and read/written into registers as you need them. \nI hope the following works. I'm assuming that pushl has an equivalant popl and you can push an extra couple of numbers onto the stack. \n# count.s: print the numbers from 0 to 100. \n .text\nstring: .asciz \"%d\\n\"\n .globl _main\n\n_main:\n movl $0, %eax # The starting point/current value.\n movl $100, %ebx # The ending point.\n\n_loop:\n # Remember your registers.\n pushl %eax\n pushl %ebx\n\n # Display the current value.\n pushl %eax\n pushl $string\n call _printf\n addl $8, %esp\n\n # reinstate registers.\n popl %ebx\n popl %eax\n\n # Check against the ending value.\n cmpl %eax, %ebx\n je _end\n\n # Increment the current value.\n incl %eax\n jmp _loop \n\n_end:\n\n",
"I'm not too familiar with _printf, but could it be that it modifies eax? Printf should return the number of chars printed, which in this case is two: '0' and '\\n'. I think it returns this in eax, and when you increment it, you get 3, which is what you proceed to print.\nYou might be better off using a different register for the counter.\n",
"You can safely use registers that are \"callee-saved\" without having to save them yourself. On x86 these are edi, esi, and ebx; other architectures have more.\nThese are documented in the ABI references: http://math-atlas.sourceforge.net/devel/assembly/\n",
"Well written functions will usually push all the registers onto the stack and then pop them when they're done so that they remain unchanged during the function. The exception would be eax that contains the return value. Library functions like printf are most likely written this way, so I wouldn't do as Wedge suggests:\n\nYou'll need to do the same for any other variable you have. Using registers to store local variables is pretty much reserved to architectures with enough registers to support it (e.g. EPIC, amd64, etc.)\n\nIn fact, from what I know, compilers usually compile functions that way to deal exactly with this issue.\n@seanyboy, your solution is overkill. All that's needed is to replace eax with some other register like ecx.\n",
"Nathan is on the right track. You can't assume that register values will be unmodified after calling a subroutine. In fact, it's best to assume they will be modified, else the subroutine wouldn't be able to do it's work (at least for low register count architectures like x86). If you want to preserve a value you should store it in memory (e.g. push it onto the stack and keep track of it's location).\nYou'll need to do the same for any other variable you have. Using registers to store local variables is pretty much reserved to architectures with enough registers to support it (e.g. EPIC, amd64, etc.)\n"
] | [
12,
6,
5,
3,
1
] | [
"You could rewrite it so that you use registers that aren't suppose to change, for example %ebp. Just make sure you push them onto the stack at the beginning, and pop them off at the end of your routine.\n# count.s: print the numbers from 0 to 100. \n .text\nstring: .asciz \"%d\\n\"\n .globl _main\n\n_main:\n push %ecx\n push %ebp\n movl $0, %ecx # The starting point/current value.\n movl $100, %ebp # The ending point.\n\n_loop:\n # Display the current value.\n pushl %ecx\n pushl $string\n call _printf\n addl $8, %esp\n\n # Check against the ending value.\n cmpl %ecx, %ebp\n je _end\n\n # Increment the current value.\n incl %ecx\n jmp _loop \n\n_end:\n pop %ebp\n pop %ecx\n\n"
] | [
-1
] | [
"assembly",
"gnu_assembler"
] | stackoverflow_0000019409_assembly_gnu_assembler.txt |
Q:
Is reusing a variable in VB6 a good idea?
Essentially I want to know whether, in VB.NET 2005, using a SqlCommand and then reusing the variable with New is wrong. Will it cause a memory leak?
EG:
try
dim mySQL as new sqlcommand(sSQL, cnInput)
// do a sql execute and read the data
mySQL = new sqlcommand(sSQLdifferent, cnInput)
// do sql execute and read the data
catch ...
finally
if mysql isnot nothing then
mysql.dispose
mysql = nothing
end if
EDIT: put try catch in to avoid the comments about not using them
A:
Just to extend what Longhorn213 said, here's the code for it:
Using mysql as SqlCommand = new SqlCommand(sSql, cnInput)
' do stuff'
End Using
Using mysql as SqlCommand = new SqlCommand(otherSql, cnInput)
' do other stuff'
End Using
(edit) Just as an FYI, using automatically wraps the block of code around a try/finally that calls the Dispose method on the variable it is created with. Thus, it's an easy way to ensure your resource is released. http://msdn.microsoft.com/en-us/library/htd05whh(VS.80).aspx
A:
Garbage collection will gather up the first new when it is run.
Only the second one you purposely dispose in the Finally block. The first one will be disposed of the next time the garbage collection is run.
I do not think this is a good idea. If the first command is not closed correctly it is possible you would have an open connection to the database and it will not be disposed.
A better way would be to dispose the first command after you are done using it, and then to reuse it.
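In code, that ordering would look something like this (a sketch based on the question's snippet):

Dim mySQL As New SqlCommand(sSQL, cnInput)
' ... execute and read ...
mySQL.Dispose()   ' release the first command before reusing the variable

mySQL = New SqlCommand(sSQLdifferent, cnInput)
' ... execute and read ...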
A:
Uh, to all those people saying "it's OK, don't worry about it, the GC will handle it..." the whole point of the Dispose pattern is to handle those resources the GC can't dispose of. So if an object has a Dispose method, you'd better call it when you're done with it!
In summary, Longhorn213 is correct, listen to him.
A:
One thing I never worked out - If I have a class implementing IDisposable, but I never actually dispose it myself, I just leave it hanging around for the GC, will the GC actually call Dispose for me?
A:
No, the garbage collector will find the old version of mySql and deallocate it in due course.
The garbage collector should pick up anything that's been dereferenced as long as it hasn't been moved into the Large Object Heap.
A:
Whilst garbage collection will clean up after you eventually, the dispose pattern is there to help the system release any resources associated with the object sooner. So you should call Dispose once you are done with the object, before re-assigning to it.
A:
Be careful. If you have to do a lot of these in a loop it can be slow. It's much better to just update the .CommandText property of the same command, like this (also, you can clean up the syntax a little):
Using mysql as New SqlCommand(sSql, cnInput)
' do stuff'
mySql.CommandText = otherSql
'do other stuff'
End Using
Of course, that only works if the first command is no longer active. If you're still in the middle of going through a datareader then you better create a new command.
] | Is reusing a variable in VB6 a good idea? | Essentially I want to know if, in VB.NET 2005, using a SqlCommand and then reusing it by using New is wrong. Will it cause a memory leak?
EG:
try
dim mySQL as new sqlcommand(sSQL, cnInput)
// do a sql execute and read the data
mySQL = new sqlcommand(sSQLdifferent, cnInput)
// do sql execute and read the data
catch ...
finally
if mysql isnot nothing then
mysql.dispose
mysql = nothing
end if
EDIT: put try catch in to avoid the comments about not using them
| [
"Just to extend what Longhorn213 said, here's the code for it:\nUsing mysql as SqlCommand = new SqlCommand(sSql, cnInput)\n ' do stuff'\nEnd Using\n\nUsing mysql as SqlCommand = new SqlCommand(otherSql, cnInput)\n ' do other stuff'\nEnd Using\n\n(edit) Just as an FYI, using automatically wraps the block of code around a try/finally that calls the Dispose method on the variable it is created with. Thus, it's an easy way to ensure your resource is released. http://msdn.microsoft.com/en-us/library/htd05whh(VS.80).aspx\n",
"Garbage collection will gather up the first new when it is run.\nOnly the second one you purposely dispose in the Finally block. The first one will be disposed of the next time the garbage collection is run.\nI do not think this is a good idea. If the first command is not closed correctly it is possible you would have an open connection to the database and it will not be disposed.\nA better way would be to dispose the first command after you are done using it, and then to reuse it.\n",
"Uh, to all those people saying \"it's OK, don't worry about it, the GC will handle it...\" the whole point of the Dispose pattern is to handle those resources the GC can't dispose of. So if an object has a Dispose method, you'd better call it when you're done with it!\nIn summary, Longhorn213 is correct, listen to him.\n",
"One thing I never worked out - If I have a class implementing IDisposable, but I never actually dispose it myself, I just leave it hanging around for the GC, will the GC actually call Dispose for me?\n",
"No, the garbage collector will find the old version of mySql and deallocate it in due course.\nThe garbage collector should pick up anything that's been dereferenced as long as it hasn't been moved into the Large Object Heap.\n",
"Whilst garbage collection will clean up after you eventually the dispose pattern is there to help the system release any resources associated with the object sooner, So you should call dispose once you are done with the object before re-assigning to it.\n",
"Be careful. If you have to do a lot of these in a loop it can be slow. It's much better to just update the .CommandText property of the same command, like this (also, you can clean up the syntax a little):\nUsing mysql as New SqlCommand(sSql, cnInput)\n ' do stuff'\n\n mySql.CommandText = otherSql\n\n 'do other stuff'\nEnd Using\n\nOf course, that only works if the first command is no longer active. If you're still in the middle of going through a datareader then you better create a new command.\n"
] | [
6,
2,
1,
1,
0,
0,
0
] | [] | [] | [
".net",
"garbage_collection",
"memory_leaks",
"vb.net"
] | stackoverflow_0000042422_.net_garbage_collection_memory_leaks_vb.net.txt |
Q:
Can I create a value for a missing tag in XPath?
I have an application which extracts data from an XML file using XPath. If a node in that XML source file is missing I want to return the value "N/A" (much like the Oracle NVL function). The trick is that the application doesn't support XSLT; I'd like to do this using XPath and XPath alone.
Is that possible?
A:
It can be done but only if the return value when the node does exist is the string value of the node, not the node itself. The XPath
substring(concat("N/A", /foo/baz), 4 * number(boolean(/foo/baz)))
will return the string value of the baz element if it exists, otherwise the string "N/A".
To generalize the approach:
substring(concat($null-value, $node),
(string-length($null-value) + 1) * number(boolean($node)))
where $null-value is the null value string and $node the expression to select the node. Note that if $node evaluates to a node-set that contains more than one node, the string value of the first node is used.
A:
Short answer: no. Such a function was considered and explicitly rejected for version 2 of the XPath spec (see the non-normative Illustrative User-written Functions section).
A:
For empty nodes, you need
boolean(string-length($node))
(You can omit the call to number() as the cast from boolean to number is implicit here.)
A:
It can be done with XPath 1.0. Say you have
<foo>
<bar/>
</foo>
If you want to test if foo has a baz child,
substring("N/A", 4 * number(boolean(/foo/baz)))
will return "N/A" if the expression /foo/baz returns an empty node-set, otherwise it returns an empty string.
A:
@jelovirt
So if I understand this correctly, we concatenate the default answer and the value of the node, and then take the correct subset of the resulting string by testing for the existence of the node to set the offset to either zero or the position right after my default string. That is the most perverse twisting of a language I've ever seen. (I love it!)
To clarify what you said, this approach works when the node is missing, not when the node is empty. But by replacing "number(boolean($node))" with "string-length($node)" it will work on empty nodes instead.
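Putting the two refinements together, a sketch (worth verifying against your XPath 1.0 processor) that returns "N/A" when baz is missing or empty, and its string value otherwise:
substring(concat("N/A", /foo/baz), 4 * number(boolean(string-length(/foo/baz))))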
| Can I create a value for a missing tag in XPath? | I have an application which extracts data from an XML file using XPath. If a node in that XML source file is missing I want to return the value "N/A" (much like the Oracle NVL function). The trick is that the application doesn't support XSLT; I'd like to do this using XPath and XPath alone.
Is that possible?
| [
"It can be done but only if the return value when the node does exist is the string value of the node, not the node itself. The XPath\nsubstring(concat(\"N/A\", /foo/baz), 4 * number(boolean(/foo/baz)))\n\nwill return the string value of the baz element if it exists, otherwise the string \"N/A\".\nTo generalize the approach:\nsubstring(concat($null-value, $node),\n (string-length($null-value) + 1) * number(boolean($node)))\n\nwhere $null-value is the null value string and $node the expression to select the node. Note that if $node evaluates to a node-set that contains more than one node, the string value of the first node is used.\n",
"Short answer: no. Such a function was considered and explicitly rejected for version 2 of the XPath spec (see the non-normative Illustrative User-written Functions section).\n",
"For empty nodes, you need\nboolean(string-length($node))\n\n(You can omit the call to number() as the cast from boolean to number is implicit here.)\n",
"It can be done with XPath 1.0. Say you have\n<foo>\n <bar/>\n</foo>\n\nIf you want to test if foo has a baz child,\nsubstring(\"N/A\", 4 * number(boolean(/foo/baz)))\n\nwill return \"N/A\" if the expression /foo/baz returns an empty node-set, otherwise it returns an empty string.\n",
"@jelovirt\nSo if I understand this correctly, we concatenate the default answer and the value of the node, and then take the correct subset of the resulting string by testing for the existence of the node to set the offset to either zero or the position right after my default string. That is the most perverse twisting of a language I've ever seen. (I love it!)\nTo clarify what you said, this approach works when the the node is missing, not when the node is empty. But by replacing \"number(boolean($node))\" with \"string-length($node)\" it will work on empty nodes instead.\n"
] | [
5,
3,
2,
1,
1
] | [] | [] | [
"xml",
"xpath",
"xslt"
] | stackoverflow_0000040361_xml_xpath_xslt.txt |
Q:
Redirecting ".local" subdomain to unicast DNS
I regularly access Windows domains that have been set up to use a domain under the .local top level name. This conflicts with Bonjour/Zeroconf, which reserves .local for its own use. A number of platforms support Bonjour out of the box (including Mac OS, iPhone, and Ubuntu) and there are numerous name resolution issues when this conflict occurs.
I have a manual (per workstation) workaround in place for Mac OS by creating an /etc/resolver/ntdomain.local as per resolver(5) which works well. Unfortunately this requires manual changes on every workstation and does not work on the iPhone.
What I'm looking for is a way to redirect requests for *.ntdomain.local coming in via mDNS to a specific unicast DNS server. I don't mind writing some code if required. I can deploy on either preferably Debian or alternatively Windows 2003. It looks like Avahi may be the library I'm looking for.
Can this be done without registering every address in the subdomain or is it possible to register a single NS record of ntdomain.local that points to the Windows DNS server?
A:
You can "merge" the unicast and multicast .local namespaces (with unicast taking precedence) as explained on Avahi and Unicast .local. Apple has instructions for doing the same on Mac OS X.
Another option is to add domain-name=.localnet to /etc/avahi/avahi-daemon.conf to have it use .localnet instead of .local for the multicast DNS namespace.
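For illustration, a minimal sketch of that second option; the [server] section name is an assumption based on common Avahi configurations, so check your installed avahi-daemon.conf:
# /etc/avahi/avahi-daemon.conf
[server]
domain-name=.localnet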
| Redirecting ".local" subdomain to unicast DNS | I regularly access Windows domains that have been set up to use a domain under the .local top level name. This conflicts with Bonjour/Zeroconf which reserves .local for it's own use. A number of platforms support Bonjour out of the box (including Mac OS, iPhone, and Ubuntu) and there's numerous name resolution issues when this confict occurs.
I have a manual (per workstation) workaround in place for Mac OS by creating an /etc/resolver/ntdomain.local as per resolver(5) which works well. Unfortunately this requires manual changes on every workstation and does not work on the iPhone.
What I'm looking for is a way to redirect requests for *.ntdomain.local coming in via mDNS to a specific unicast DNS server. I don't mind writing some code if required. I can deploy on either preferably Debian or alternatively Windows 2003. It looks like Avahi may be the library I'm looking for.
Can this be done without registering every address in the subdomain or is it possible to register a single NS record of ntdomain.local that points to the Windows DNS server?
| [
"You can \"merge\" the unicast and multicast .local namespaces (with unicast taking precedence) as explained on Avahi and Unicast .local. Apple has instructions for doing the same on Mac OS X.\nAnother option is to add domain-name=.localnet to /etc/avahi/avahi-daemon.conf to have it use .localnet instead of .local for the multicast DNS namespace.\n"
] | [
4
] | [] | [] | [
"bonjour",
"dns",
"mdns",
"zeroconf"
] | stackoverflow_0000040295_bonjour_dns_mdns_zeroconf.txt |
Q:
What does VS 2008's "Convert to Website" mean?
I have upgraded a MS Visual Studio Application from VS 2003 to VS 2008 (Targeting .NET 2.0). As part of the conversion process the wizard said I needed to take the additional step of Converting my Project to a Website by Right-Clicking and blah blah blah...
I didn't follow directions and the web application seems to be working fine.
My question is, should I be concerned about pushing this to a production system? What exactly is going on here?
A:
There are two types of web applications in ASP.NET: The Web Site and Web Application Project. The difference between the two are discussed here:
Difference between web site and web applications in Visual Studio 2005
Convert to Website allows you to convert a Web Application Project to a Web Site.
Visual Studio 2003 used the Web Application Project style, but initially VS2005 only supported web sites. VS2005 SP1 brought back Web Applications.
If you don't want to convert your project to a web site, apply SP1 if you're using VS2005. VS2008 can support either.
A:
Convert to Website moves all of your control declarations from the main page class to a secondary file (yourpage.aspx.designer.cs).
It does this by using a partial class. That is, the same class for your page, but split into two separate files.
This allows the VS2k5 (and VS2k8) designer to generate code for your pages without dumping generated code spaghetti into the main class file.
You don't need to do this step to build the project, but if you continue to maintain the project you will want to.
EDIT:
Hey look, MSDN backs me up:
To convert the code to use the partial-class model
Make sure the code compiles without errors.
In Solution Explorer, right-click the project name and click Convert to Web Application. This command iterates through each page and user control in the project. It moves all control declarations to a .designer.cs or designer.vb file. It also adds event handler declarations to the server-control markup in the .aspx and .ascx files.
When the process has finished, check the Task List window to see whether any conversion errors are reported.
If the Task List displays errors, right-click the relevant page in Solution Explorer and select View Code and View Code Gen File to examine the code and fix problems.
Recompile the project to make sure that it compiles without errors.
A:
There are two types of web applications in ASP.NET: The Web Site and Web Application Project.
Convert to Website allows you to convert a Web Application Project to a Web Site.
As far as I can recall, Convert to a Website does not do this; the Web Application project is a regular application structure with your typical \bin etc.
The WebSite project instead is based upon the concept of an App_Code directory for classes, and an App_Data directory for data, with your regular ASPX files going anywhere. The idea is to avoid having to precompile into DLLs before deployment, which can be easier in some shared hosting situations.
I am not aware of any wizard that will restructure the project between these types, but I may be wrong.
A:
The only thing you might have missed was whether or not you wanted to make a backup of the 2003 project (just in case). It's no big deal.
Check out:
Converting a Visual Studio .NET 2003 Web Project to a Visual Studio Web Application Project
Visual Studio Conversion Wizard
A:
Convert to Website moves all of your control declarations from the main page class to a secondary file (yourpage.aspx.designer.cs).
Why would I want to do this? It's bad enough that there is a .js .css .vb .aspx file for each page. Do I really need to split up the .vb into two more files just so I can hide the declarations ? page.designer.aspx.vb.h anyone?
| What does VS 2008's "Convert to Website" mean? | I have upgraded a MS Visual Studio Application from VS 2003 to VS 2008 (Targeting .NET 2.0). As part of the conversion process the wizard said I needed to take the additional step of Converting my Project to a Website by Right-Clicking and blah blah blah...
I didn't follow directions and the web application seems to be working fine.
My question is, should I be concerned about pushing this to a production system? What exactly is going on here?
| [
"There are two types of web applications in ASP.NET: The Web Site and Web Application Project. The difference between the two are discussed here:\nDifference between web site and web applications in Visual Studio 2005\nConvert to Website allows you to convert a Web Application Project to a Web Site.\nVisual Studio 2003 used the Web Application Project style, but initially VS2005 only supported web sites. VS2005 SP1 brought back Web Applications.\nIf you don't want to convert your project to a web site, apply SP1 if you're using VS2005. VS2008 can support either.\n",
"Convert to Website moves all of your control declarations from the main page class to a secondary file (yourpage.aspx.designer.cs).\nIt does this by using a partial class. That is, the same class for your page, but split into two seperate files.\nThis allows the VS2k5 (and VS2k8) designer to generate code for your pages without dumping generated code spaghetti into the main class file.\nYou don't need to do this step to build the project, but if you continue to maintain the project you will want too.\nEDIT:\nHey look, MSDN backs me up:\nTo convert the code to use the partial-class model\n\nMake sure the code compiles without errors.\nIn Solution Explorer, right-click the project name and click Convert to Web Application. This command iterates through each page and user control in the project. It moves all control declarations to a .designer.cs or designer.vb file. It also adds event handler declarations to the server-control markup in the .aspx and .ascx files.\nWhen the process has finished, check the Task List window to see whether any conversion errors are reported.\nIf the Task List displays errors, right-click the relevant page in Solution Explorer and select View Code and View Code Gen File to examine the code and fix problems.\nRecompile the project to make sure that it compiles without errors. \n\n",
"\nThere are two types of web applications in ASP.NET: The Web Site and Web Application Project.\n Convert to Website allows you to convert a Web Application Project to a Web Site.\n\nAs far as I can recall, Convert to a Website does not do this, the Web Application project is a regular application structure with your typical \\bin etc.\nThe WebSite project instead is based upon the concept of an App_Code directory for classes, and an App_Date directory for data, with your regular ASPX files going anywhere. The idea is to avoid having to precompile into DLL's before deployment, which can be easier in some shared hosting situations.\nI am not aware of any wizard that will restructure the project between these types, but I may be wrong.\n",
"The only thing you might have missed was whether or not you wanted to make a backup of the 2003 project (just in case). It's no big deal.\nCheck out:\nConverting a Visual Studio .NET 2003 Web Project to a Visual Studio Web Application Project\nVisual Studio Conversion Wizard\n",
"\nConvert to Website moves all of your control declarations from the main page class to a secondary file (yourpage.aspx.designer.cs).\n\nWhy would I want to do this? It's bad enough that there is a .js .css .vb .aspx file for each page. Do I really need to split up the .vb into two more files just so I can hide the declarations ? page.designer.aspx.vb.h anyone?\n"
] | [
8,
3,
1,
1,
1
] | [] | [] | [
"visual_studio_2008",
"web_applications"
] | stackoverflow_0000043019_visual_studio_2008_web_applications.txt |
Q:
How can I get a commit message from a bzr post-commit hook?
I'm trying to write a bzr post-commit hook for my private bugtracker, but I'm stuck at the function signature of
post_commit(local, master, old_revno, old_revid, new_revno, new_revid)
How can I extract the commit message for the branch from this with bzrlib in Python?
A:
And the answer is like so:
def check_commit_msg(local, master, old_revno, old_revid, new_revno, new_revid):
branch = local or master
revision = branch.repository.get_revision(new_revid)
print revision.message
local and master are Branch objects, so once you have a revision, it's easy to extract the message.
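For completeness, a rough sketch of wiring this up from a plugin. The install_named_hook call exists in newer bzrlib releases (older ones used install_hook), so treat the registration API as an assumption:
from bzrlib import branch

def check_commit_msg(local, master, old_revno, old_revid, new_revno, new_revid):
    b = local or master
    # The revision message is what gets fed to the bugtracker.
    print b.repository.get_revision(new_revid).message

# The third argument is just a label used in error and progress reporting.
branch.Branch.hooks.install_named_hook('post_commit', check_commit_msg,
                                       'bugtracker post-commit hook')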
| How can I get a commit message from a bzr post-commit hook? | I'm trying to write a bzr post-commit hook for my private bugtracker, but I'm stuck at the function signature of
post_commit(local, master, old_revno, old_revid, new_revno, new_revid)
How can I extract the commit message for the branch from this with bzrlib in Python?
| [
"And the answer is like so:\ndef check_commit_msg(local, master, old_revno, old_revid, new_revno, new_revid):\n branch = local or master\n revision = branch.repository.get_revision(new_revid)\n print revision.message\n\nlocal and master are Branch objects, so once you have a revision, it's easy to extract the message.\n"
] | [
5
] | [] | [] | [
"bazaar",
"dvcs",
"python"
] | stackoverflow_0000043099_bazaar_dvcs_python.txt |
Q:
Best way to multi-thread?
What is the best way to multi-thread in the C language? I want something that is very efficient and not a CPU hog. Thanks.
A:
The correct (standard) way to do this in C on Windows is with _beginthreadex.
This is usually preferred to calling CreateThread directly as CreateThread doesn't init C runtime support for the thread. So if you create a thread using CreateThread, and call a CRT function, bad stuff can/will happen.
Note that _beginthreadex calls CreateThread internally, but performs some other work behind the scenes.
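A minimal sketch of that in practice (error handling omitted; an illustration, not the only correct pattern):
#include <windows.h>
#include <process.h>
#include <stdio.h>

/* Thread routines passed to _beginthreadex must be __stdcall and return unsigned. */
static unsigned __stdcall worker(void *arg)
{
    (void)arg;
    printf("hello from thread %u\n", (unsigned)GetCurrentThreadId());
    return 0;
}

int main(void)
{
    /* _beginthreadex returns a uintptr_t that is really a thread HANDLE. */
    HANDLE h = (HANDLE)_beginthreadex(NULL, 0, worker, NULL, 0, NULL);
    WaitForSingleObject(h, INFINITE); /* blocks without burning CPU */
    CloseHandle(h);
    return 0;
}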
A:
If you're on a UNIX-based platform (Linux or Mac OS X) your best option is POSIX threads. They're the standard cross-platform way to multithread in a POSIX environment. They can also be used in Windows, but there are probably better (more native) solutions for that platform.
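To make that concrete, a minimal pthreads sketch (error checks omitted for brevity; on Linux, compile with gcc demo.c -lpthread):
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    int id = *(int *)arg;
    printf("hello from thread %d\n", id);
    return NULL;
}

int main(void)
{
    pthread_t threads[4];
    int ids[4];
    int i;

    for (i = 0; i < 4; i++) {
        ids[i] = i;
        pthread_create(&threads[i], NULL, worker, &ids[i]);
    }
    /* pthread_join sleeps rather than spins, so waiting is not a CPU hog. */
    for (i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);
    return 0;
}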
A:
Your question is a bit general to answer effectively. You might look into such things as:
CreateThread in the windows SDK
boost::thread
| Best way to multi-thread? | What is the best way to multi-thread in the C language? I want something that is very efficient and not a CPU hog. Thanks.
| [
"The correct (standard) way to do this on C and Windows is with __beginthreadex.\nThis is usually preferred to calling CreateThread directly as CreateThread doesn't init C runtime support for the thread. So if you create a thread using CreateThread, and call a CRT function, bad stuff can/will happen.\nNote that __beginthreadex calls CreateThread internally, but performs some other work behind the scenes.\n",
"If you're on a UNIX-based platform (Linux or Mac OS X) your best option is POSIX threads. They're the standard cross-platform way to multithread in a POSIX environment. They can also be used in Windows, but there are probably better (more native) solutions for that platform.\n",
"Your question is a bit general to answer effectively. You might look into such things as:\nCreateThread in the windows SDK\nboost::thread\n"
] | [
3,
2,
0
] | [] | [] | [
"c",
"multithreading"
] | stackoverflow_0000043086_c_multithreading.txt |
Q:
Where does "Change Management" end and "Project Failure" begin?
I got into a mini-argument with my boss recently regarding "project failure." After three years, our project to migrate a codebase to a new platform (a project I was on for 1.5 years, but my team lead was on for only a few months) went live. He, along with senior management of both my company and the client (I'm one of those god-awful consultants you hear so much about. My engagement is an "Application Outsourcing") declared the project to be a success. I disagreed, stating that old presentations I had found showed that compared to the original schedule, the delay in deployment was best measured in months and could potentially be measured in years. I explained what I know of project failure, and the studies and statistics behind failure rates. He responded that that was all academia, and that no project he led had failed, thanks to the wonders of change/risk management - what seems to come down to explaining delays and re-evaluating the schedule based on new data.
Maybe consulting like this differs from other projects, but it seems like this is just failure wrapped up in a prettier name to avoid the stigma of having failed to deliver on time, on budget, or with full functionality. The fact that he explained that my company gave away hours of work for free in order to finish the project within the maxed out budget says a lot.
So I ask you this:
What is change management, and how does it apply to a project?
Where does "change management" end, and "project failure" begin?
@shog9:
I wasn't asking about a blame game with the consultants, especially since in this case I represent the consultants. I was looking for views on when a project should be considered "failed" regardless of if the needed functionality was finally implemented.
I'm looking for the difference between "this is actually a little more complex than we thought, and it's going to be another week" which I'd expect is somewhat typical, and "project failure" - however you want to define failure. Is there even a difference? Does this minor level of schedule slippage constitute statistical "project failure?"
A:
I think, most of the time, we developers forget this we all do is, after all, about bussiness.
From that point of view a project is not a failure while the client is willing to pay for it. It all depends on the client, some clients have more patience and understand better the risks of software development, other just won't pay if there's a substantial delay.
Anyway, about your question. Whenever you evolve a project there are risks involved, maybe you schedule the end of the project in a certain date but it will take like six month longer than you expected. In that case you have to balance what you have already spent and what you have to gain against the risks you're taking. There's actually an entire science called "decision making" that studies it at software level, so your boss is not wrong at all.
Let's look at some questions, Is the client willing to wait for the project? Is he willing to assume certain overcosts? Even if he doesn't, Is worth completing the project assuming the extra costs instead of throwing away all the already done work? Can the company assume what's already lost?
The real answer to your problem lies behind that questions. You can't establish a point and say, here, if the project isn't done by this time then it's a failure. As for your specific situation, who knows? Your boss has probably more information that you have so your work is to tell him how is the project going, how much it will take and how much it will cost (in terms hours/man if you wish)
A:
Unless the goals were clearly stated at the beginning of the project, there are no clear lines between "success" and "failure." Often, a project will have varying degrees of success/failure.
For some, just getting some concepts into code would be a success, while others may measure success as recovering all investments and making a profit.
Two well-known modes of failure are schedule slip and quality deterioration, but in the real world, people do not seem to care much about them.
Simple ways to slip the schedule are to let the managers make requests whenever they want (feature creep) and let the programmers code whatever they feel is right (cowboy coding). Change management processes such as the sprint planning of Scrum and the planning game of XP are some examples. These are attempts by the management and the developers to ship reliable products on time. If either party is not interested in shipping reliably or on time, then change management would not be useful.
A:
I suppose how successful the project is depends on who the client is. If the client were the company directors and they are happy, then the project was successful regardless of the failures along the way.
A:
Andy Rutledge has written a pretty interesting article on success. Though the title is Pre-bid Discussions, the article defines having a successful project, which for Andy entails:
Will I or my team be allowed to bring our best work to the final result?
Is the client prepared to engage in the project appropriately?
Is the client prepared to begin this project?
Is the client prepared to invest trust in my or my team’s ideas?
Am I or is my team prepared to fulfill or exceed the project requirements?
This article was pointed out by Obie Fernandez, a successful consultant, in his Do the Hustle conference about consulting.
A:
What is change management, and how does it apply to a project?
Change management is about approving and communicating changes to a project before they happen. If someone on your project (user, sponsor, team member.. whoever) wants to add a feature, the change needs to be documented and analysed for the effect. Any resulting changes to scope, budget and schedule must then be approved before the change is undertaken. These changes are typically approved by your sponsor, your steering committee or your client.
Once the changes have been approved and accepted that is your new plan. It doesn't matter what the original budget or schedule was.
Change Management on projects is all about the principle of "No Surprises". The right people (your Change Control Board) need to approve any changes to Scope, Schedule and Budget before they are acted upon.
One thing to remember is that there may be certain explicit or implicit constraints and tolerances for change. You may have to deliver your project by a certain date to meet government regulatory requirements. Or your organisation may have a threshold that once a project budget is 30% over the original budget it must go to a "C" level or the project is killed. Investigating and explicitly stating these thresholds and tolerances up front is a good way of having more successful projects.
Where does "change management" end, and "project failure" begin?
If a project delivers on the approved scope, schedule and budget then it is successful.
However it may still be viewed as a failure. Post Implementation Reviews are a good tool to qualify this with your stakeholders (not just your boss). Also, Benefit Realisation would be worth looking into to see outside the black box of the project and the impact on the business as a whole.
| Where does "Change Management" end and "Project Failure" begin? | I got into a mini-argument with my boss recently regarding "project failure." After three years, our project to migrate a codebase to a new platform (a project I was on for 1.5 years, but my team lead was on for only a few months) went live. He, along with senior management of both my company and the client (I'm one of those god-awful consultants you hear so much about. My engagement is an "Application Outsourcing") declared the project to be a success. I disagreed, stating that old presentations I had found showed that compared to the original schedule, the delay in deployment was best measured in months and could potentially be measured in years. I explained what I know of project failure, and the studies and statistics behind failure rates. He responded that that was all academia, and that no project he led had failed, thanks to the wonders of change/risk management - what seems to come down to explaining delays and re-evaluating the schedule based on new data.
Maybe consulting like this differs from other projects, but it seems like this is just failure wrapped up in a prettier name to avoid the stigma of having failed to deliver on time, on budget, or with full functionality. The fact that he explained that my company gave away hours of work for free in order to finish the project within the maxed out budget says a lot.
So I ask you this:
What is change management, and how does it apply to a project?
Where does "change management" end, and "project failure" begin?
@shog9:
I wasn't asking about a blame game with the consultants, especially since in this case I represent the consultants. I was looking for views on when a project should be considered "failed" regardless of if the needed functionality was finally implemented.
I'm looking for the difference between "this is actually a little more complex than we thought, and it's going to be another week" which I'd expect is somewhat typical, and "project failure" - however you want to define failure. Is there even a difference? Does this minor level of schedule slippage constitute statistical "project failure?"
| [
"I think, most of the time, we developers forget this we all do is, after all, about bussiness.\nFrom that point of view a project is not a failure while the client is willing to pay for it. It all depends on the client, some clients have more patience and understand better the risks of software development, other just won't pay if there's a substantial delay.\nAnyway, about your question. Whenever you evolve a project there are risks involved, maybe you schedule the end of the project in a certain date but it will take like six month longer than you expected. In that case you have to balance what you have already spent and what you have to gain against the risks you're taking. There's actually an entire science called \"decision making\" that studies it at software level, so your boss is not wrong at all.\nLet's look at some questions, Is the client willing to wait for the project? Is he willing to assume certain overcosts? Even if he doesn't, Is worth completing the project assuming the extra costs instead of throwing away all the already done work? Can the company assume what's already lost?\nThe real answer to your problem lies behind that questions. You can't establish a point and say, here, if the project isn't done by this time then it's a failure. As for your specific situation, who knows? Your boss has probably more information that you have so your work is to tell him how is the project going, how much it will take and how much it will cost (in terms hours/man if you wish)\n",
"Unless the goals were clearly stated in the beginning of the project, there are no clear lines between \"success\" and \"failure.\" Often, a project would have varying degree of success/failure.\nFor some, just getting some concepts in code would be a success, while other may measure success as recovering all investments and making profit.\nTwo well-known modes of failures are schedule slip and quality deterioration, but in real-world, people do not seem to care much about them. \nSimple ways to slip the schedule are to let the managers make request whenever they want (features creep) and let the programmers code whatever they feel is right (cowboy coding). Change management process such as sprint planning of scrum and planning game of XP are some of the examples. Theses are some of the attempts for the management and the developers to ship reliable products on time. If either party is not interested in reliable or on-time, then change management would not be useful.\n",
"I suppose how successful the project is depends on who the client is. If the client were the company directors and they are happy, then the project was successful regardless of the failures along the way.\n",
"Andy Rutledge has written a pretty interesting article on success. Though the title is Pre-bid Discussions, the article defines having a successful project, which for Andy entails:\n\nWill I or my team be allowed to bring our best work to the final result?\nIs the client prepared to engage in the project appropriately?\nIs the client prepared to begin this project?\nIs the client prepared to invest trust in my or my team’s ideas?\nAm I or is my team prepared to fulfill or exceed the project requirements?\n\nThis article was pointed out by Obie Fernandez, a successful consultant, in his Do the Hustle conference about consulting.\n",
"What is change management, and how does it apply to a project?\nChange management is about approving and communicating changes to a project before they happen. If someone on your project (user, sponsor, team member.. whoever) wants to add a feature, the change needs to be documented and analysed for the effect. Any resulting changes to scope, budget and schedule must then be approved before the change is undertaken. These changes are typically approved by your sponsor, your steering committee or your client.\nOnce the changes have been approved and accepted that is your new plan. It doesn't matter what the original budget or schedule was.\nChange Management on projects is all about the principle of \"No Surprises\". The right people (your Change Control Board) need to approve any changes to Scope, Schedule and Budget before they are acted upon. \nOne thing to remember is that there may be certain explicit or implicit constraints and tolerances for change. You may be have to deliver your project by a certain date to meet government regulatory requirements. Or your organisation may have a threshold that once a project budget is 30% over the original budget it must go to a \"C\" level or the project is killed. Investigating and explicitly stating these thresholds and tolerances up front are a good way of having better successful projects.\nWhere does \"change management\" end, and \"project failure\" begin?\nIf a project delivers on the approved scope, schedule and budget then it is successful. \nHowever it may be still viewed as a failure. Post Implementation Reviews are a good tool to qualify this with your stakeholders (not just your boss). Also Benefit Realisation would be worthwhile looking into to see outside the blackbox of the project and the impact on the business as a whole.\n"
] | [
5,
1,
0,
0,
0
] | [] | [] | [
"change_management",
"project_management"
] | stackoverflow_0000037263_change_management_project_management.txt |
Q:
IList.Cast() returns error, syntax looks ok
public static IList<T> LoadObjectListAll<T>()
{
ISession session = CheckForExistingSession();
var cfg = new NHibernate.Cfg.Configuration().Configure();
var returnList = session.CreateCriteria(typeof(T));
var list = returnList.List();
var castList = list.Cast<typeof(T)>();
return castList;
}
So, I'm getting a build error where I am casting the "list" element to a generic IList .... can anyone see a glaring error here?
A:
T is neither a type nor a System.Type. T is a type parameter. typeof(T) returns the type of T. The typeof operator does not act on an object, it returns the Type object of a type. http://msdn.microsoft.com/en-us/library/58918ffs.aspx
@John is correct in answering your direct question. But the NHibernate code there is a little off. You shouldn't be configuring the ISessionFactory after getting the ISession, for example.
public static T[] LoadObjectListAll<T>()
{
var session = GetNewSession();
var criteria = session.CreateCriteria(typeof(T));
var results = criteria.List<T>();
return results.ToArray();
}
A:
I think
var castList = list.Cast<typeof(T)>();
should be
var castList = list.Cast<T>();
@Jon Limjap The most glaring error I can see is
that an IList is definitely different from an IList<T>. An IList is non-generic
(e.g., ArrayList).
The original question was already using an IList<T>. It was removed when someone edited the formatting. Probably a problem with Markdown.
Fixed now.
A:
T is already a type parameter, you don't need to call typeof on it. TypeOf takes a type and returns its type parameter.
A:
The IList is an IList<T>, it just got fubared by markdown when she posted it. I tried to format it, but I missed escaping the <T>..Fixing that now.
A:
CLI only supports generic arguments for covariance and contravariance when using delegates, but when using generics there are some limitations, for example, you can cast a string to an object so most people will assume that you can do the same with List<string> to a List<object> but you can't do that because there is no covariance between generic parameters however you can simulate covariance as you can see in this article. So it depends on the runtime type that it is generated by the abstract factory.
That reads like a markov chain... Bravo.
A:
"The original question was already using an IList<T>. It was removed when someone edited the formatting. Probably a problem with Markdown."
That's what I saw, but it was edited by someone, and that's the reason why I put my explanation about covariance; but for some reason I was marked down to -1.
A:
@Jonathan Holland
T is already a type parameter, you don't need to call typeof on it. TypeOf takes a type and returns its type parameter.
typeof "takes" a type and returns its System.Type
A:
You have too many temporary variables which are confusing
instead of
var returnList = session.CreateCriteria(typeof(T));
var list = returnList.List();
var castList = list.Cast<typeof(T)>();
return castList;
Just do
return session.CreateCriteria(typeof(T)).List().Cast<T>();
A:
@Jon and @Jonathan are correct, but you have to change the return type to
IList<T>
also. Unless that is just a markdown bug.
@Jonathan, figured that was the case.
I am not sure what version of nHibernate you are using. I haven't tried the gold release of 2.0 yet, but you could clean the method up some, by removing some lines:
public static IList<T> LoadObjectListAll<T>()
{
ISession session = CheckForExistingSession();
// Not sure if you can configure a session after retrieving it. CheckForExistingSession should have this logic.
// var cfg = new NHibernate.Cfg.Configuration().Configure();
var criteria = session.CreateCriteria(typeof(T));
return criteria.List<T>();
}
A:
CLI only supports generic arguments for covariance and contravariance when using delegates, but when using generics there are some limitations, for example, you can cast a string to an object so most people will assume that you can do the same with List<string> to a List<object> but you can't do that because there is no covariance between generic parameters however you can simulate covariance as you can see in this article. So it depends on the runtime type that it is generated by the abstract factory.
| IList.Cast() returns error, syntax looks ok | public static IList<T> LoadObjectListAll<T>()
{
ISession session = CheckForExistingSession();
var cfg = new NHibernate.Cfg.Configuration().Configure();
var returnList = session.CreateCriteria(typeof(T));
var list = returnList.List();
var castList = list.Cast<typeof(T)>();
return castList;
}
So, I'm getting a build error where I am casting the "list" element to a generic IList .... can anyone see a glaring error here?
| [
"T is not a type nor a System.Type. T is a type parameter. typeof(T) returns the type of T. The typeof operator does not act on an object, it returns the Type object of a type. http://msdn.microsoft.com/en-us/library/58918ffs.aspx\n@John is correct in answering your direct question. But the NHibernate code there is a little off. You shouldn't be configuring the ISessionFactory after getting the ISession, for example.\npublic static T[] LoadObjectListAll()\n{\n var session = GetNewSession();\n var criteria = session.CreateCriteria(typeof(T));\n var results = criteria.List<T>();\n return results.ToArray(); \n}\n\n",
"I think \nvar castList = list.Cast<typeof(T)>();\n\nshould be \nvar castList = list.Cast<T>();\n\n\n@Jon Limjap The most glaring error I can see is\n that an IList is definitely different from an IList<T>. An IList is non-generic\n (e.g., ArrayList).\n\nThe original question was already using an IList<T>. It was removed when someone edited the formatting. Probably a problem with Markdown.\nFixed now.\n",
"T is already a type parameter, you don't need to call typeof on it. TypeOf takes a type and returns its type parameter.\n",
"The IList is an IList<T>, it just got fubared by markdown when she posted it. I tried to format it, but I missed escaping the <T>..Fixing that now.\n",
"\nCLI only supports generic arguments for covariance and contravariance when using delegates, but when using generics there are some limitations, for example, you can cast a string to an object so most people will assume that you can do the same with List to a List but you can't do that because there is no covariance between generic parameters however you can simulate covariance as you can see in this article. So it depends on the runtime type that it is generated by the abstract factory.\n\nThat reads like a markov chain... Bravo.\n",
"\"The original question was already using an IList<T>. It was removed when someone edited the formatting. Probably a problem with Markdown.\"\nThats what i saw but it was edited by someone and that's the reason why I put my explanation about covariance but for some reason i was marked down to -1.\n",
"@Jonathan Holland\n\nT is already a type parameter, you don't need to call typeof on it. TypeOf takes a type and returns its type parameter.\n\ntypeof \"takes\" a type and returns its System.Type\n",
"You have too many temporary variables which are confusing\ninstead of\nvar returnList = session.CreateCriteria(typeof(T));\nvar list = returnList.List();\nvar castList = list.Cast<typeof(T)>();\nreturn castList;\n\nJust do\nreturn session.CreateCriteria(typeof(T)).List().Cast<T>();\n\n",
"@Jon and @Jonathan is correct, but you also have to change the return type to\nIList<T>\n\nalso. Unless that is just a markdown bug.\n@Jonathan, figured that was the case.\nI am not sure what version of nHibernate you are using. I haven't tried the gold release of 2.0 yet, but you could clean the method up some, by removing some lines:\npublic static IList<T> LoadObjectListAll()\n{\n ISession session = CheckForExistingSession();\n // Not sure if you can configure a session after retrieving it. CheckForExistingSession should have this logic.\n // var cfg = new NHibernate.Cfg.Configuration().Configure();\n var criteria = session.CreateCriteria(typeof(T));\n return criteria.List<T>();\n}\n\n",
"CLI only supports generic arguments for covariance and contravariance when using delegates, but when using generics there are some limitations, for example, you can cast a string to an object so most people will assume that you can do the same with List<string> to a List<object> but you can't do that because there is no covariance between generic parameters however you can simulate covariance as you can see in this article. So it depends on the runtime type that it is generated by the abstract factory.\n"
] | [
7,
5,
1,
1,
1,
1,
1,
1,
0,
0
] | [
"The most glaring error I can see is that an IList is definitely different from an IList<T>. An IList is non-generic (e.g., ArrayList).\nSo your method signature should be:\npublic static IList<T> LoadObjectListAll()\n\n"
] | [
-1
] | [
".net",
".net_3.5",
"c#",
"nhibernate",
"syntax"
] | stackoverflow_0000043126_.net_.net_3.5_c#_nhibernate_syntax.txt |
Q:
Insert current date in Excel template at creation
I'm building an excel template (*.xlt) for a user here, and one of the things I want to do is have it insert the current date when a new document is created (ie, when they double-click the file in windows explorer). How do I do this?
Update: I should have added that I would prefer not to use any vba (macro). If that's the only option, then so be it, but I'd really like to avoid forcing my user to remember to click some 'allow macro content' button.
A:
You could use the worksheet function =TODAY(), but obviously this would be updated to the current date whenever the workbook is recalculated.
The only other method I can think of is, as 1729 said, to code the Workbook_Open event:
Private Sub Workbook_Open()
ThisWorkbook.Worksheets("Sheet1").Range("A1").Value = Date
End Sub
You can reduce the problem of needing the user to accept macros each time by digitally signing the template (in the VBA IDE, Tools | Digital Signature...) and selecting a digital certificate; however, you will need to get a certificate from a commercial certification authority (see http://msdn.microsoft.com/en-us/library/ms995347.aspx). The user will need to select to always trust this certificate the first time they run the template, but thereafter, they will not be prompted again.
A:
You can edit the default template for excel -
There is a file called Book.xlt in the XLSTART directory, normally located at C:\Program Files\Microsoft Office\Office\XLStart\
You should be able to add a macro called Workbook_Open
Private Sub Workbook_Open()
If ActiveWorkBook.Sheets(1).Range("A1") = "" Then
ActiveWorkBook.Sheets(1).Range("A1") = Now
End If
End Sub
My VBA is a little rusty, but you might find something like this works.
A:
To avoid VBA, and if you think your users might follow instructions, you could ask them to copy the date and then paste special->values to set the date so that it won't change in future.
| Insert current date in Excel template at creation | I'm building an excel template (*.xlt) for a user here, and one of the things I want to do is have it insert the current date when a new document is created (ie, when they double-click the file in windows explorer). How do I do this?
Update: I should have added that I would prefer not to use any vba (macro). If that's the only option, then so be it, but I'd really like to avoid forcing my user to remember to click some 'allow macro content' button.
| [
"You could use the worksheet function =TODAY(), but obviously this would be updated to the current date whenever the workbook is recalculated.\nThe only other method I can think of is, as 1729 said, to code the Workbook_Open event:\nPrivate Sub Workbook_Open()\n ThisWorkbook.Worksheets(\"Sheet1\").Range(\"A1\").Value = Date\nEnd Sub\n\nYou can reduce the problem of needing the user to accept macros each time by digitaly signing the template (in VBA IDE Tools | Digital Signature...) and select a digital certificate, however, you will need to get a certificate from a commercial certification authority (see http://msdn.microsoft.com/en-us/library/ms995347.aspx). The user will need to select to always trust this certificate the first time they run the template, but thereafter, they will not be prompted again.\n",
"You can edit the default template for excel - \nThere is a file called Book.xlt in the XLSTART directory, normally located at C:\\Program Files\\Microsoft Office\\Office\\XLStart\\\nYou should be able to add a macro called Workbook_Open\nPrivate Sub Workbook_Open()\n If ActiveWorkBook.Sheets(1).Range(\"A1\") = \"\" Then\n ActiveWorkBook.Sheets(1).Range(\"A1\") = Now\n End If\nEnd Sub\n\nMy VBA is a little rusty, but you might find something like this works.\n",
"To avoid VBA, and if you think your users might follow instructions, you could ask them to copy the date and then paste special->values to set the date so that it won't change in future.\n"
] | [
4,
2,
0
] | [] | [] | [
"excel",
"templates"
] | stackoverflow_0000040637_excel_templates.txt |
Q:
Setting Variable Types in PHP
I know that I can do something like
$int = (int)99; //(int) has a maximum of 99
To set the variable $int to an integer and give it a value of 99.
Is there a way to set the type to something like LongBlob in MySQL for LARGE Integers in PHP?
A:
No. PHP does what is called automatic type conversion.
In your example
$int = (int)123;
the "(int)" just assures that at that exact moment 123 will be handled as an int.
I think your best bet would be to use a class to provide some sort of type safety.
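A minimal sketch of that class idea (TypedInt and its methods are illustrative assumptions, not a standard PHP API):
<?php
// Wrapper that enforces an integer value at assignment time.
class TypedInt
{
    private $value;

    public function __construct($value)
    {
        $this->set($value);
    }

    public function set($value)
    {
        if (!is_int($value)) {
            throw new InvalidArgumentException('Expected an integer');
        }
        $this->value = $value;
    }

    public function get()
    {
        return $this->value;
    }
}

$int = new TypedInt(99);
$int->set(100);       // fine
// $int->set("abc");  // would throw InvalidArgumentException
?>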
A:
No, the type LongBlob is specific to MySQL. In PHP it is seen as binary data (usually characters), if you tried to convert it to an int it would take the first 32 bits of data (platform dependent) and push that into the variable.
| Setting Variable Types in PHP | I know that I can do something like
$int = (int)99; //(int) has a maximum of 99
To set the variable $int to an integer and give it a value of 99.
Is there a way to set the type to something like LongBlob in MySQL for LARGE Integers in PHP?
| [
"No. PHP does what is called automatic type conversion.\nIn your example\n$int = (int)123;\n\nthe \"(int)\" just assures that at that exact moment 123 will be handled as an int.\nI think your best bet would be to use a class to provide some sort of type safety.\n",
"No, the type LongBlob is specific to MySQL. In PHP it is seen as binary data (usually characters), if you tried to convert it to an int it would take the first 32 bits of data (platform dependent) and push that into the variable.\n"
] | [
5,
0
] | [] | [] | [
"php",
"variable_types"
] | stackoverflow_0000043291_php_variable_types.txt |