# How does Ricci flow make room to find Einstein metrics?

I am studying a lecture note entitled "Topics in Riemannian Geometry" by Jeff Viaclovsky. See the following phrase in Lecture 12: "In order to find Einstein metrics, one would first think of looking at the gradient Ricci flow on the space of Riemannian metrics. This is $${{\partial g}\over {\partial t}}=-2Ric_g, \quad g_0=g(0).$$" I want to know the idea behind this flow, and how the flow makes room to obtain an Einstein metric on the underlying manifold.

I'll try to be somewhat informal and intuitive rather than formal and technical. Note that if $(M, g_{0})$ is a Riemannian manifold, we say that $g_{0}$ is an Einstein metric if it satisfies the condition $Ric_{g_{0}} = \rho g_{0}$ for $\rho$ a constant. In this case, the metrics $g(t) = (1 - 2 \rho t) g_{0}$ form a solution to the Ricci flow (take the $\frac{\partial}{\partial t}$ derivative and compare both sides).

More generally, a Ricci soliton is a Riemannian manifold such that there exist a constant $\rho$ and a vector field $\mathcal{Y}$ satisfying $$Ric_{g_{0}} + \frac{1}{2} \mathcal{L}_{\mathcal{Y}} (g_{0}) = \rho g_{0}.$$ Note that if $\mathcal{Y}$ is zero, then we recover the expression for Einstein metrics. (Here is where most of the informality takes place.) In the first expression used to describe Einstein metrics, one may feel tempted to replace the Riemannian metric on the right-hand side by $\phi_{t}^{*} (g_{0})$, an expression that involves a one-parameter family of diffeomorphisms (of a certain kind and satisfying certain properties), thus obtaining a generalization of Einstein metrics. These kinds of solutions (Ricci solitons) contain rich information related to the geometry of the manifold under the evolution equation, and thus it is sufficient to study these self-similar solutions (self-similar because, given an initial metric, the solution is just the pullback of the same metric under a family of diffeomorphisms) to obtain a detailed picture of what's going on in the geometry of the underlying manifold.

Finally, there is an interesting result which conceptualizes your question in a good sense: "any expanding or steady compact Ricci soliton is necessarily Einstein" (Cao, Huai-Dong: "Recent progress on Ricci solitons", Recent Advances in Geometric Analysis, 1-38, Adv. Lect. Math. (ALM) 11, Int. Press, Somerville, MA (2010), Proposition 1.1). I hope you find something I said here somehow useful.
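As an editorial aside, the "take the $\frac{\partial}{\partial t}$" step above can be made explicit in one line, using only the scale invariance of the Ricci tensor ($Ric_{cg} = Ric_{g}$ for a constant $c > 0$):

```latex
% Verification that g(t) = (1 - 2\rho t) g_0 solves the Ricci flow
% when Ric_{g_0} = \rho g_0:
\[
\frac{\partial g}{\partial t}
  = \frac{\partial}{\partial t}\bigl[(1 - 2\rho t)\, g_0\bigr]
  = -2\rho\, g_0,
\qquad
-2\,Ric_{g(t)} = -2\,Ric_{g_0} = -2\rho\, g_0,
\]
% so both sides agree for as long as 1 - 2\rho t > 0.
```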
# Introductory tutorial

The following tutorial gives a basic introduction to the data formats used by NeuroDecodeR (NDR) and shows how to run a simple decoding analysis. The tutorial is based on a dataset collected by Ying Zhang in Bob Desimone's lab at MIT (some of the code below uses just a small subset of this data, but a larger data set can be downloaded from the NeuroDecodeR GitHub site). The NeuroDecodeR package is based on the MATLAB Neural Decoding Toolbox, which you can learn more about at www.readout.info.

## Overview of the NDR

Neural decoding is a process in which a pattern classifier learns the relationship between neural activity and experimental conditions using a training set of data. The reliability of the relationship between the neural activity and experimental conditions is evaluated by having the classifier predict what experimental conditions were present on a second test set of data.

The NDR is built around 5 different object classes that allow users to apply neural decoding in a flexible and robust way. The five types of objects are:

1. Datasources (DS), which generate training and test splits of the data.
2. Feature preprocessors (FP), which apply preprocessing to the training and test splits.
3. Classifiers (CL), which learn the relationship between experimental conditions and data on the training set, and then predict experimental conditions on the test data.
4. Result metrics (RM), which take the output predictions of a classifier and summarize the prediction accuracy.
5. Cross-validators (CV), which take the DS, FP and CL objects and run a cross-validation decoding procedure.

The NDR comes with a few implementations of each of these objects, and defines interfaces that allow one to create new objects that extend the basic functionality of the five object classes. The following tutorial explains the data formats used by the NDR and how to run a decoding experiment using the basic versions of these five object classes.

## About the data used in this tutorial

The data used in this tutorial was collected by Ying Zhang in Bob Desimone's lab at MIT and was used in the supplemental figures in the paper Object decoding with attention in inferior temporal cortex, PNAS, 2011. The data consists of single unit recordings from 132 neurons in inferior temporal cortex (IT). The recordings were made while a monkey viewed 7 different objects that were presented at three different locations (the monkey was also shown images that consisted of three objects shown simultaneously and performed an attention task; however, for the purposes of this tutorial, we are only going to analyze data from trials when single objects were shown). Each object was presented approximately 20 times at each of the three locations.

To start, let us load some libraries we will use in this tutorial.

library(NeuroDecodeR)
library(ggplot2)
library(dplyr)
library(tidyr)

## Data formats

In order to use the NDR, the neural data must be in a usable format. Typically, this involves putting the data in raster format and then converting it to binned format using the create_binned_data() function.

### Raster format

To run a decoding analysis using the NDR, you first need to have your data in a usable format. In this tutorial we will use data collected by Ying Zhang in Bob Desimone's lab at MIT. The directory extdata/Zhang_Desimone_7objects_raster_data_rda/ contains data in raster format. Each file in this directory contains data from one neuron. To start, let us load one of these files and examine its contents.
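The loading code did not survive extraction here; a minimal sketch of this step (assuming the small raster dataset that ships with the package, and the read_raster_data() reader) would look like:

```r
# Load one neuron's raster-format file and peek at its contents.
# Directory and file names follow the example used below.
raster_dir_name <- file.path(system.file("extdata", package = "NeuroDecodeR"),
                             "Zhang_Desimone_7object_raster_data_small_rda")
file_name <- "bp1001spk_01A_raster_data.rda"
raster_data <- read_raster_data(file.path(raster_dir_name, file_name))

# Raster data is a data frame with one row per trial: label columns
# (e.g., labels.stimulus_ID) followed by one spike column per millisecond.
head(raster_data[, 1:6])
```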
We can also use the test_valid_raster_format() method to verify that the data is in valid raster format. If you start analyzing your own data, test_valid_raster_format() can be a useful function to make sure you have your raster data in the correct format.

raster_dir_name <- file.path(system.file("extdata", package = "NeuroDecodeR"), "Zhang_Desimone_7object_raster_data_small_rda")
file_name <- "bp1001spk_01A_raster_data.rda"
test_valid_raster_format(file.path(raster_dir_name, file_name))

Below, we visualize the spiking pattern for this one neuron by using the raster data's plot() function.

plot(raster_data)

Here, the x-axis represents time in milliseconds, and the y-axis represents different trials. Each black tick mark represents a point in time when a neuron emitted an action potential.

## Binning the data

The NDR decoding objects operate on data in binned format. To convert data from raster format to binned format, we can use the function create_binned_data(), which calculates the average firing rate of neurons over specified intervals sampled with a specified frequency (i.e., a boxcar filter is used). create_binned_data() takes four arguments:

1. The name of the directory where the raster-format data is stored.
2. The name (potentially including a directory path) that the binned data should be saved as.
3. A bin size that specifies how much time the firing rates should be calculated over.
4. A sampling interval that specifies how frequently to calculate these firing rates.

To calculate the average firing rates in 150 ms bins sampled every 50 ms, we can use the code below. (Note: in the code below we have set the optional argument num_parallel_cores to specify the number of parallel cores we would like to use. Using more cores will speed up the time it takes for this function to run. By default, half the cores on the computer will be used, which should be a good setting for most computers. We have chosen 2 cores here since this is the maximum number of cores allowed for examples that are on CRAN.)

library(NeuroDecodeR)
save_dir_name <- tempdir()
binned_file_name <- create_binned_data(raster_dir_name, file.path(save_dir_name, "ZD"), 150, 50, num_parallel_cores = 2)

## Determining how many times each condition was repeated

Before beginning the decoding analysis, it is useful to know how many times each experimental condition (e.g., stimulus) was presented to each site (e.g., neuron). In particular, it is useful to know how many times the condition that has the fewest repetitions was presented. To do this, we will use the function get_num_label_repetitions(), which uses data in binned format and calculates on how many trials each label level was presented. Below, we use the plot function on the results to see how many times the labels were repeated.

binned_file_name <- system.file("extdata/ZD_150bins_50sampled.Rda", package="NeuroDecodeR")
label_rep_info <- get_num_label_repetitions(binned_file_name, "stimulus_ID")
plot(label_rep_info)

Here, we see that all 132 neurons have at least 59 repetitions of all the labels, and that for 6 neurons the flower label was only presented 59 times. Thus, if we want to use all the neurons in the decoding analysis, the maximum number of cross-validation splits we could use is 59. Alternatively, we could use 60 cross-validation splits along with the 125 neurons that have 60 repetitions.

## Performing a decoding analysis

Performing a decoding analysis involves several steps:

1. Creating a datasource (DS) object that generates training and test splits of the data.
2. Optionally creating feature-preprocessor (FP) objects that learn parameters from the training data, and preprocess the training and test data.
3. Creating a classifier (CL) object that learns the relationship between the training data and training labels, and then evaluates the strength of this relationship on the test data.
4. Creating result metric (RM) objects that aggregate the predictions to create result summaries.
5. Running a cross-validator object that uses the datasource (DS), the feature-preprocessor (FP) and the classifier (CL) objects to run a cross-validation procedure that estimates the decoding accuracy.

Below, we describe how to create and run these objects on the Zhang-Desimone dataset.

## Creating a datasource (DS)

A datasource object is used by the cross-validator to generate training and test splits of the data. Below we create a ds_basic() object that takes binned-format data, the name of the label variable to be decoded, and a scalar that specifies how many cross-validation splits to use. The default behavior of this datasource is to create test splits that have one example of each object in them and num_cv_splits - 1 examples of each object in the training set.

As calculated above, all 132 neurons have 59 repetitions of each stimulus, and 125 neurons have 60 repetitions of each stimulus. Thus, we can use up to 59 cross-validation splits using all neurons, or we could set the datasource to use only a subset of neurons and use 60 cross-validation splits. For the purpose of this tutorial, we will use all the neurons and only 20 cross-validation splits so the code runs a little faster. The ds_basic() datasource object also has many more properties that can be set, including specifying certain label levels or neurons to use.

binned_file_name <- system.file(file.path("extdata", "ZD_150bins_50sampled.Rda"), package="NeuroDecodeR")
variable_to_decode <- "stimulus_ID"
num_cv_splits <- 20
ds <- ds_basic(binned_file_name, variable_to_decode, num_cv_splits)

## Automatically selecting sites_IDs_to_use. Since num_cv_splits = 20 and num_label_repeats_per_cv_split = 1, all sites that have 20 repetitions have been selected. This yields 132 sites that will be used for decoding (out of 132 total).

## Creating a feature-preprocessor (FP)

Feature preprocessors use the training set to learn particular parameters about the data, and then apply preprocessing to the training and test sets using these parameters. Below, we create a fp_zscore() preprocessor that z-score normalizes the data so that each neuron's activity has approximately zero mean and a standard deviation of 1 over all trials. This feature preprocessor is useful so that neurons with high firing rates do not end up contributing more to the decoding results than neurons with lower firing rates when a cl_max_correlation() classifier is used.

# note that the FP objects are stored in a list
# which allows multiple FP objects to be used in one analysis
fps <- list(fp_zscore())

## Creating a classifier (CL)

Classifiers take a training set of data and learn the relationship between the neural responses and the experimental conditions (label levels) that were present on particular trials. The classifier is then used to make predictions about what experimental conditions are present on trials from a different test set of neural data. Below, we create a cl_max_correlation() classifier, which learns prototypes of each class k that consist of the mean of all training data from class k.
The predicted class for a new test point x is the class that has the maximum correlation coefficient value between x and each class prototype.

cl <- cl_max_correlation()

## Creating result metrics (RM)

Result metrics take the predictions made by a classifier, as well as the ground truth (i.e., the actual label level values for what happened on each trial), and aggregate these predictions to give a measure of the classifier's performance. Below, we create two result metrics. The first result metric returns basic measures of decoding accuracy, such as the proportion of predictions that were correct (zero-one loss). The second result metric creates a confusion matrix showing the pattern of prediction mistakes that were made. Result metrics must also be put into a list so multiple result metrics can be used in an analysis.

rms <- list(rm_main_results(), rm_confusion_matrix())

## Creating a cross-validator (CV)

Cross-validator objects take a datasource, a classifier, result metrics and optionally feature-preprocessor objects, and run a decoding procedure by generating training and test data from the datasource, preprocessing this data with the feature-preprocessors, training and testing the classifier on the resulting data, and aggregating the results with the result metrics. This procedure is run in two nested loops. The inner 'cross-validation' loop runs a cross-validation procedure where the classifier is trained and tested on different divisions of the data. The outer 'resample' loop generates new splits (and potentially pseudo-populations) of data, which are then run in a cross-validation procedure by the inner loop. The number of resample runs is a parameter for this analysis as well, which we have set to 2 to make the procedure run quicker, although in general more resample runs will yield smoother results (the default value is 50). Below, we create a cv_standard() object to run this decoding procedure.

cv <- cv_standard(datasource = ds, classifier = cl, feature_preprocessors = fps, result_metrics = rms, num_resample_runs = 2)

## Running the decoding analysis

To run the decoding procedure, we call the cross-validator's run_decoding() method, and the results are stored in an object DECODING_RESULTS.

DECODING_RESULTS <- run_decoding(cv)

## | | | 0% | |=================================== | 50% | |======================================================================| 100%

## Plotting the results

The DECODING_RESULTS object created is a list that contains our result metrics, calculated by aggregating the results over all the cross-validation splits. We can now use the result metrics' plot functions to visualize these aggregated results.

## Plotting the main results

The rm_main_results() plot function allows one to plot temporal cross-decoding results, where we train the classifier at one time and test the classifier at a second time. This can be displayed by running the code below:

plot(DECODING_RESULTS$rm_main_results)

We can also create simpler line plots by setting type = 'line'. Additionally, we can plot all three types of results that the rm_main_results object saves using the results_to_show = 'all' argument. Below, we see the results of setting both these arguments.

plot(DECODING_RESULTS$rm_main_results, results_to_show = 'all', type = 'line')

## Plotting confusion matrices

We can also plot the confusion matrices aggregated from the rm_confusion_matrix object, which shows the pattern of classification mistakes at different points in time.
plot(DECODING_RESULTS$rm_confusion_matrix)

The rm_confusion_matrix() object also has a plot_MI() function, which calculates mutual information from the confusion matrix and plots this as a function of time or as a TCT plot.

plot(DECODING_RESULTS$rm_confusion_matrix, results_to_show = "mutual_information")

## Saving the results

Finally, the NDR has a "log" function that helps you save and manage your results. Below, we show how to use the log_save_results() function, which takes a DECODING_RESULTS object and the name of a directory. This function saves the results to the specified directory and logs the parameters used in the analysis so that they can later be retrieved. For more information, see the tutorial on saving and managing results.

results_dir_name <- file.path(tempdir(), "results", "")
dir.create(results_dir_name)
log_save_results(DECODING_RESULTS, results_dir_name)

## Warning in log_save_results(DECODING_RESULTS, results_dir_name): The manifest file does not exist.
## Assuming this is the first result that is saved and creating manifest file

## Running an analysis using the pipe (|>) operator

It is also possible to run a decoding analysis by stringing together NDR objects using the pipe operator. One can do this with the following steps:

1. Start by piping data in binned format into a datasource.
2. Pipe the datasource into a sequence that contains a classifier, and optionally feature preprocessors and result metrics.
3. Pipe this sequence into a cross-validator and then call the run_decoding() method.

The code below gives an example of how this can be done.

basedir_file_name <- system.file(file.path("extdata", "ZD_500bins_500sampled.Rda"), package="NeuroDecodeR")

DECODING_RESULTS <- basedir_file_name |>
  ds_basic('stimulus_ID', 6, num_label_repeats_per_cv_split = 3) |>
  cl_max_correlation() |>
  fp_zscore() |>
  rm_main_results() |>
  rm_confusion_matrix() |>
  cv_standard(num_resample_runs = 2) |>
  run_decoding()

## Automatically selecting sites_IDs_to_use. Since num_cv_splits = 6 and num_label_repeats_per_cv_split = 3, all sites that have 18 repetitions have been selected. This yields 132 sites that will be used for decoding (out of 132 total).
## | | | 0% | |=================================== | 50% | |======================================================================| 100%

plot(DECODING_RESULTS$rm_confusion_matrix)
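As a final aside, the maximum-correlation rule used by cl_max_correlation() above is simple enough to sketch standalone. The helper below is purely illustrative (it is not NDR's internal code, and the function name predict_max_cor is made up for this example):

```r
# Illustrative sketch of a maximum-correlation classifier:
# prototypes are class means; a test point is assigned to the class
# whose prototype it correlates with most strongly.
predict_max_cor <- function(train_data, train_labels, test_point) {
  # class prototypes: one mean firing-rate vector per label level
  prototypes <- sapply(split(as.data.frame(train_data), train_labels), colMeans)
  # correlation of the test point with each prototype
  cors <- apply(prototypes, 2, function(p) cor(p, test_point))
  names(which.max(cors))  # predicted label
}
```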
## Section: Research Program

### High performance Fast Multipole Method for N-body problems

Participants: Emmanuel Agullo, Olivier Coulaud, Pierre Esterie, Guillaume Sylvand.

In most scientific computing applications considered nowadays as computational challenges (like biological and material systems, astrophysics or electromagnetism), the introduction of hierarchical methods based on an octree structure has dramatically reduced the amount of computation needed to simulate those systems for a given accuracy. For instance, in the N-body problem arising from these application fields, we must compute all pairwise interactions among N objects (particles, lines, ...) at every timestep. Among these methods, the Fast Multipole Method (FMM), developed for gravitational potentials in astrophysics and for electrostatic (Coulombic) potentials in molecular simulations, solves this N-body problem for any given precision with $O(N)$ runtime complexity against $O(N^2)$ for the direct computation. The potential field is decomposed into a near-field part, computed directly, and a far-field part approximated thanks to multipole and local expansions.

We introduced a matrix formulation of the FMM that exploits the cache hierarchy on a processor through the Basic Linear Algebra Subprograms (BLAS). Moreover, we developed a parallel adaptive version of the FMM algorithm for heterogeneous particle distributions, which is very efficient on parallel clusters of SMP nodes. Finally, on such computers, we developed the first hybrid MPI-thread algorithm, which enables us to reach better parallel efficiency and better memory scalability. We plan to work on the following points in HiePACS.

#### Improvement of calculation efficiency

Nowadays, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. GPUs (Graphics Processing Units) and the Cell processor have thus already been used in astrophysics and in molecular dynamics. The Fast Multipole Method has also been implemented on GPU. We intend to examine the potential of using these forthcoming processors as a building block for high-end parallel computing in N-body calculations. More precisely, we want to take advantage of our specific underlying BLAS routines to obtain an efficient and easily portable FMM for these new architectures. Algorithmic issues such as dynamic load balancing among heterogeneous cores will also have to be solved in order to gather all the available computation power. This research action will be conducted in close connection with the activity described in Section 3.2.

#### Non-uniform distributions

In many applications arising from material physics or astrophysics, the distribution of the data is highly non-uniform and the data can grow between two time steps. As mentioned previously, we have proposed a hybrid MPI-thread algorithm to exploit the data locality within each node.
We plan to further improve the load balancing for highly non-uniform particle distributions with small computation grain, thanks to dynamic load balancing at the thread level and to a load balancing correction over several simulation time steps at the process level.

#### Fast multipole method for dislocation operators

The engine that we develop will be extended to new potentials arising from material physics, such as those used in dislocation simulations. The interaction between dislocations is long-ranged ($O(1/r)$) and anisotropic, leading to severe computational challenges for large-scale simulations. Several approaches based on the FMM or on spatial decomposition in boxes have been proposed to speed up the computation. In dislocation codes, the calculation of the interaction forces between dislocations is still the most CPU time consuming part. This computation has to be improved to obtain faster and more accurate simulations. Moreover, in such simulations, the number of dislocations grows while the phenomenon occurs, and these dislocations are not uniformly distributed in the domain. This means that strategies to dynamically balance the computational load are crucial to achieve high performance.

#### Fast multipole method for boundary element methods

The boundary element method (BEM) is a well-known approach to boundary value problems appearing in various fields of physics. With this approach, we only have to solve an integral equation on the boundary. This implies an interaction that decreases in space, but results in the solution of a dense linear system, whose direct solution has $O(N^3)$ complexity. The FMM calculation that performs the matrix-vector product enables the use of Krylov subspace methods. Based on the parallel data distribution of the underlying octree implemented to perform the FMM, parallel preconditioners can be designed that exploit the local interaction matrices computed at the finest level of the octree. This research action will be conducted in close connection with the activity described in Section 3.3. Following our earlier experience, we plan to first consider approximate inverse preconditioners that can efficiently exploit these data structures.
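To illustrate the last point: a matrix-free Krylov solve only needs the FMM as a black-box matrix-vector product. A minimal sketch in Python (the fmm_matvec placeholder below is hypothetical; a real implementation would evaluate near-field interactions directly and the far field via expansions):

```python
# Matrix-free GMRES: the dense BEM matrix is never assembled;
# only a fast matrix-vector product (here a placeholder) is exposed.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

N = 1000

def fmm_matvec(x):
    # Placeholder standing in for an O(N) FMM evaluation of A @ x.
    return 2.0 * x  # a trivial, well-conditioned stand-in operator

A = LinearOperator((N, N), matvec=fmm_matvec)
b = np.ones(N)
x, info = gmres(A, b)  # info == 0 on successful convergence
```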
### wrg0ababd's blog

By wrg0ababd, history, 4 weeks ago, translation

Update: editorial for G is out

» Another solution for F: first shift the permutation so that 1 becomes the leftmost element. Now the problem can be modeled like this: don't consider the first element (1). We want to split the new array into two parts (maybe one of them empty) so as to minimize the maximum depth of the trees of the two parts. So first build an RMQ on the array (we are not considering element 1). Now we have a recursive O(n) algorithm to find the depth of a subarray:

int get(int tl, int tr){
    if (tr<=tl) return 0;
    if (tr-tl==1) return 1;
    int mid=RMQ(tl, tr);
    return max(get(tl, mid), get(mid+1, tr))+1;
}

Let P[i] be the depth of the prefix ending at position i, and S[i] the depth of the suffix beginning at position i. We know that P[1]<=P[2]<=P[3]<=...<=P[n-1] and S[1]>=S[2]>=S[3]>=...>=S[n-1]. We can use binary search to find the rightmost index i with P[i]<=S[i+1], and there exists an optimal answer that splits the array into [1, i][i+1, n-1] or [1, i+1][i+2, n-1], so check both of them and find the answer :) My solution: 60808916

• » » You can use $-brackets to show your formulae better owo
• » » » O(n): 60814844
• » » » » Yes, you're right, but you posted in the wrong position lol.
• » » » Like this? $\geqslant w \leqslant$
• » » » » $⩾w⩽$

» I think the complexity can be $O(n)$ in Problem D because you can find the lowest bit in $O(1)$ by bit operations. Just use the lowbit function (it seems only Chinese call it lowbit?). This is my submission.
• » » I know there is an additional log factor because of the map, but you can use unordered_map instead of it.
• » » » You can use something like $2^{i} \bmod 67$ to avoid any maps here.
• » » » » Amazing trick... why is it correct?
• » » » » Because the multiplicative inverse is unique?
• » » » » » Yes. $2^x = 2^y \implies 2^{x - y} = 1 \implies x = y$ ($2^{-y}$ is unique and exists because $67$ is a prime).
• » » » » » No, rather because $2$ is a primitive root modulo $67$. As a consequence, all powers of two less than $10^{18}$ give distinct remainders modulo $67$, so there are no collisions.

» Consider the k-th bit. An edge connects only vertices with different k-th bit, so the partition is clear. Can you explain more, wrg0ababd?
• » » Me too, I didn't get this proof. I hope someone elaborates.
• » » » What I get is that, assuming initially B contained only odd edges, the vertices would be pairs of (odd, even); hence even if you multiply all the elements of B by 2^k (equivalent to multiplying vertices by 2^k), the k-th bit of the vertices, which was initially the 0th bit (since multiplying by 2^k is equivalent to shifting), will be opposite (odd has 0th bit 1 while even has 0th bit 0). However, I do not get the cyclic part. Can anybody help me with that?

» Hi! Do check my editorial for $D$.
This was actually written as the editorial was not published before and a lot of people (including me) were facing problems with $D$. Link

» Can someone explain problem B to me please :"(
• » » Let's denote the first three numbers in the array as a, b, c. Then the multiplication table has the following form: row 1: 0 ab ac ...; row 2: ab 0 bc ...; row 3: ac bc 0 ... So we do have the values of ab, ac, and bc given in the input. Then we can simply solve a system of equations with three unknowns and find the value of a (namely $a = \sqrt{ab \cdot ac / bc}$). Then if we know a, we can find all the other ones by simply going through the first row of the table and dividing all the entries by a. Hope it makes sense.

» Can anyone explain E better? What is the dp solution?
• » » See Ashishgup's solution for the problem (60806580).

» Can somebody explain problem E more clearly?

» In fact, problem F can be solved in linear time. Consider a new sequence b of length 2n: $a_1, a_2, a_3, ..., a_n, a_1, a_2, ..., a_n$, and build the Cartesian tree of it. Let T be the Cartesian tree of b.

Observation 1: $a_{k+1}, a_{k+2}, ..., a_{n}, a_1, ..., a_k$ = $b_{k+1}, b_{k+2}, ..., b_{k+n}$. It can be seen as a subsegment of length n of b.

Observation 2: The Cartesian tree of $b_{k+1}, b_{k+2}, ..., b_{k+n}$ is a connected component of T. Let $b_i$ be the root of this connected component, which is the minimum of $b_{k+1}, b_{k+2}, ..., b_{k+n}$. In order to get the maximum depth of the Cartesian tree of each subsegment, all we need to do is compute $d_1$, the depth of $b_i$, and $d_2$, the maximum depth over $b_{k+1}, b_{k+2}, ..., b_{k+n}$ in T, for each $1 \leq k \leq n$. According to Observation 2, the answer for that k is $d_2 - d_1$. Applying a monotonic queue instead of RMQ, we can calculate these two things in linear time. Just take the minimum over all k to find the answer.

• » » Wait... why is Observation 2 correct? By the way, I think the T you build for the sample is: 1(0,5) 2(0,3) 3(0,4) 4(0,0) 5(2,6) 6(0,7) 7(0,0), but not every Cartesian tree of $b_{k+1}...b_{k+n}$ is a connected component of it. I must have misunderstood your solution; can you help me? Thanks a lot. :D

» Can anyone explain problem B?

» I didn't understand exactly what problem C wanted. Anyone there to help me with this?

» How can one prove there are only $O(\log n)$ such points in problem G?
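As an editorial aside to the thread above, the lowbit and mod-67 tricks can be sketched in a few lines of self-contained C++ (my own illustration, not any commenter's submission):

```cpp
#include <cstdio>
#include <cstdint>

int main() {
    // lowbit: isolate the lowest set bit in O(1) via two's complement.
    uint64_t x = 0b101100;
    uint64_t low = x & (~x + 1);  // == 0b100

    // Since 2 is a primitive root mod 67, the powers 2^0 .. 2^62 all have
    // distinct remainders mod 67, so a 67-entry array replaces a hash map
    // from a power of two to its exponent.
    int exp_of[67] = {0};
    for (int i = 0; i < 63; ++i) exp_of[(1ULL << i) % 67] = i;

    printf("lowbit = %llu, exponent = %d\n",
           (unsigned long long)low, exp_of[low % 67]);
    return 0;
}
```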
# POSAND - Editorial

Author: john_smith_3
Editorialist: Srikkanth

# DIFFICULTY:

Easy

# PREREQUISITES:

Bitwise operations

# PROBLEM:

Find a permutation of the first n integers that has bitwise AND of every consecutive pair of integers greater than 0.

# EXPLANATION:

If n is a power of two and n > 1, then n will be present in the permutation and will have at least one neighbour. None of the numbers 1, 2, ..., n-1 have bitwise AND with n greater than 0, so the answer is -1 in this case.

For the first 7 numbers we can brute force and find valid solutions if any. In all other cases we can find a pattern that satisfies the properties. One such construction is to group all the numbers by their highest set bit. Consider only numbers greater than 7. Starting from the highest number in a group, list all numbers in descending order but swap the last two. E.g. for the 3rd bit, (15, 14, ..., 8, 9); for the 4th bit, (31, 30, ..., 16, 17).

Within each group, the AND is clearly greater than 0 (the highest bit is set in all of them). Let's append the groups in descending order of the highest set bit. At the border between consecutive groups, the bitwise AND is also greater than 0, because the last element of each group is odd and the first element of each group is odd as well. At the end, we can append the 7 numbers {7, 4, 5, 6, 2, 3, 1} to complete the permutation.

E.g. for n = 17, we have {[16, 17], [15, 14, ..., 8, 9], [7, 4, 5, 6, 2, 3, 1]} (groupings are shown within []).

# TIME COMPLEXITY:

TIME: $\mathcal{O}(n)$
SPACE: $\mathcal{O}(1)$

# SOLUTIONS:

Setter's Solution

//
// posand
//
#include <bits/stdc++.h>
#include <numeric>
using namespace std;

string binary( uint32_t x ) {
    string a = "";
    for (uint32_t i=32; i-->0; ) {
        a += (char) ('0'+ ((x>>i)&1));
        a += " ";
    }
    return a;
}

list<uint32_t> solve( uint32_t N ) {
    list<uint32_t> v;
    if (N==1) { v.insert(end(v),1); return v; }
    uint32_t r=1;
    while (r < N) r<<=1;
    if (r==N) return v;
    r>>=1;
    v.insert(end(v), r);
    for (uint32_t j=r+2; j<=N; j++) { v.insert(end(v),j); }
    v.insert(end(v), r+1);
    for (uint32_t i=1; i<r; i++) { v.insert( v.end(), (i^(i>>1)) ); }
    return v;
}

int main( int argc, char ** argv ) {
    ios_base::sync_with_stdio(false);
    uint32_t T;
    cin >> T;
    while (T--) {
        uint32_t N;
        cin >> N;
        auto v = solve(N);
        if (v.size() == 0) {
            cout << -1 << endl;
        } else {
            for (auto x : v) cout << x << " ";
            cout << endl;
        }
    }
    return 0;
}

Tester's Solution

#include <bits/stdc++.h>
#define endl '\n'
#define SZ(x) ((int)x.size())
#define ALL(V) V.begin(), V.end()
#define L_B lower_bound
#define U_B upper_bound
#define pb push_back
using namespace std;

template<class T, class T1> int chkmin(T &x, const T1 &y) { return x > y ? x = y, 1 : 0; }
template<class T, class T1> int chkmax(T &x, const T1 &y) { return x < y ?
x = y, 1 : 0; }

const int MAXN = (1 << 20);

int n;

void solve() {
    cin >> n;
    if(n == 1) {
        cout << 1 << endl;
        return;
    }
    if((n & -n) == n) {
        cout << -1 << endl;
    } else {
        cout << 2 << " " << 3 << " " << 1 << " ";
        for(int l = 2; (1 << l) <= n; l++) {
            cout << ((1 << l) ^ 1) << " " << (1 << l) << " ";
            for(int x = (1 << l); x <= min(n, (2 << l) - 1); x++) {
                if(x > 1 + (1 << l)) cout << x << " ";
            }
        }
        cout << endl;
    }
}

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
    int T;
    cin >> T;
    while(T--) {
        solve();
    }
    return 0;
}

Editorialist's Solution

#include<bits/stdc++.h>
using namespace std;
#define LL long long int
#define FASTIO ios_base::sync_with_stdio(false); cin.tie(NULL); cout.tie(NULL);
const int oo = 1e9 + 5;
const LL ooll = (LL)1e18 + 5;
const int MOD = 1e9 + 7;
// const int MOD = 998244353;
#define rand(l, r) uniform_int_distribution<int>(l, r)(rng)
clock_t start = clock();
const int N = 1e5 + 5;

void solve() {
    int n;
    cin >> n;
    if (__builtin_popcount(n) == 1) {
        if (n == 1) cout << "1\n";
        else cout << "-1\n";
        return;
    }
    vector<int> first_seven = {0, 1, 3, 2, 6, 5, 4, 7};
    if (n == 5) {
        cout << "2 3 1 5 4\n";
        return;
    }
    int pre, cur = 7;
    for (int i = 1; i <= min(n, 7); ++i) cout << first_seven[i] << " ";
    for (int pw = 3; (1 << pw) <= n; ++pw) {
        int st = (1 << pw), en = min(n + 1, (st << 1));
        pre = cur; cur = st + 1;
        cout << cur << " ";
        pre = cur; cur = st;
        cout << cur << " ";
        for (int i = st + 2; i < en; ++i) {
            pre = cur; cur = i;
            cout << cur << " ";
        }
    }
    cout << '\n';
}

int main() {
    int T = 1;
    cin >> T;
    for (int t = 1; t <= T; ++t) {
        solve();
    }
    return 0;
}

2 Likes

Positive And: Full Explanation - Link
this is posand Editorial

3 Likes

positive AND video solution

I can't understand what is the problem in my solution:

#include<bits/stdc++.h>
using namespace std;
#define ll long long

void swap(ll *a, ll *b) { ll temp = *a; *a = *b; *b = temp; }

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
#ifndef ONLINE_JUDGE
    freopen("input.txt", "r", stdin);
    freopen("output.txt", "w", stdout);
#endif
    ll t;
    cin >> t;
    while(t--) {
        ll n;
        cin >> n;
        ll arr[n+1] = {0};
        for(ll i = 1; i <= n; i++) arr[i] = i;
        ll temp = (int)log2(n);
        if(pow(2, temp) == n) {
            cout << -1 << "\n";
            continue;
        } else {
            ll j = 1;
            for(ll i = 2; i <= n; i = pow(2, j)) {
                if(i == 2) {
                    swap(&arr[i], &arr[i+1]);
                    swap(&arr[i-1], &arr[i+1]);
                } else if((arr[i] & arr[i-1]) == 0 && (arr[i] & arr[i+1]) > 0) {
                    swap(&arr[i], &arr[i+1]);
                }
                j++;
            }
            for(ll i = 1; i <= n; i++) cout << arr[i] << " ";
            cout << "\n";
        }
    }
    return 0;
}

The problem POSAND has contradictory constraints. First it says 1<=i<N, which clearly implies N>1, and it makes sense as well. But at the bottom it takes the constraints as 1<=N<=10^5. I mean, how can you take N=1? It doesn't even make sense and it violates the constraints above. If you are taking N=1, the answer should be -1, not 1, because it just doesn't satisfy the condition of the problem. This has cost me 50 points, 3k ranks, and an obvious rating loss. Either correct the solution or give it a bonus. The tester's solution is also wrong, as the tester takes the answer for n=1 as 1. Please check this.

11 Likes

Another solution. Let k be the maximum integer such that $2^k \le n$.
First construct the Gray code sequence from 1 to $2^k-1$ (in addition, the first integer of the Gray code sequence should be 1). Then, for $2^k+1 \le i \le n$, insert i before $i-2^k$. At last, place $2^k$ at the beginning.

1 Like

Why is the answer 1 for N = 1?

2 Likes

The answer for N = 1 is 1 for some unknown reasons.

1 Like

Am I understanding it wrong, or should the answer for N=1 be -1 instead of 1?

1 Like

#include <bits/stdc++.h>
#define fast1 ios_base::sync_with_stdio(false)
#define fast2 cin.tie(NULL)
#define ll long long
#define FOR(i,a,n) for(int i=a;i<n;i++)
#define SIZE 100001
#define pb push_back
using namespace std;

ll int Pow(ll int n);

int main() {
    fast1; fast2;
    ll int t, n;
    cin >> t;
    while(t--) {
        cin >> n;
        if(Pow(n)) cout << -1;
        else if(n == 3LL) {
            cout << 1 << " " << 3 << " " << 2;
        } else if(n > 3) {
            cout << 2 << " " << 3 << " " << 1 << " ";
            for(ll int i = 4LL; i <= n; i++) {
                if(Pow(i)) {
                    cout << i+1LL << " " << i << " ";
                    i++;
                }
                else cout << i << " ";
            }
        }
        cout << "\n";
    }
}

ll int Pow(ll int n) {
    if ((n & (n-1LL)) == 0) return 1LL;
    return 0LL;
}

One way to do it was to print all numbers in sequence, and when a number that is a power of 2 comes, print its next number (i+1) first and then print i. Given pseudocode for n>=4:

print(2, 3, 1)
i = 4
while i <= n:
    if i is a power of 2:
        print(i+1, i)
        i += 2
    else:
        print(i)
        i += 1

1 Like

If I am not wrong, the condition was that there should be no adjacent pair (a_i, a_{i+1}) such that a_i \& a_{i+1} == 0. If n = 1 there are no pairs, hence the condition is satisfied.

1 Like

The first statement specifies the required condition, not the constraints. "p_i \& p_{i+1} is greater than 0 for every 1 <= i < N" is a formal way of stating that every element should have bitwise AND greater than 0 with its neighbors. This is not related to the constraints at all. In the case of just 1, there are no pairs, so the condition is already satisfied. There is nothing wrong with having this confusion, but instead of trying to find the reason, you are just blaming the question writers and testers, which is not a good way of dealing with such problems. Unfortunately, the latter has become more common on platforms like Codechef in recent times.

In depth analysis and elaborate thought process to reach solution ideas and correct code implementation: https://tinyurl.com/y5msjfc3

In depth analysis and reaching key observations, correct code implementation: https://www.codechef.com/viewsolution/38729676

What is wrong with this solution? It is partially accepted.

Your code would print -1 in the case when n = 1, while the correct answer is 1. It is a valid permutation, as there are no pairs for the condition to fail.

I have uploaded a detailed solution on my YouTube channel. You can watch it.
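For completeness, here is a compact sketch of the editorial's construction (my own illustration; it assumes n >= 8 and n is not a power of two, with smaller n handled by the hard-coded cases discussed above):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Editorial construction: groups by highest set bit in descending order,
// each group listed in descending order with its last two entries swapped,
// then the base block {7, 4, 5, 6, 2, 3, 1}.
vector<int> build(int n) {
    vector<int> out;
    for (int b = 30; b >= 3; --b) {
        int lo = 1 << b;
        if (lo > n) continue;
        int hi = min(n, (lo << 1) - 1);
        for (int x = hi; x >= lo + 2; --x) out.push_back(x);
        out.push_back(lo);      // swapped pair: ..., lo, lo+1
        out.push_back(lo + 1);  // so each group ends on an odd number
    }
    vector<int> base = {7, 4, 5, 6, 2, 3, 1};
    out.insert(out.end(), base.begin(), base.end());
    return out;
}

int main() {
    for (int x : build(17)) cout << x << ' ';  // 16 17 15 14 ... 8 9 7 4 5 6 2 3 1
    cout << '\n';
}
```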
# Week 9 Lecture 1: Higher Order ODEs and Dynamical Systems

## Rahman Notes:

Let's do the examples we did in the theory lecture. Consider the simple pendulum, which we showed in the theory lecture is modeled by $\ddot\theta = -\sin\theta$, written as the first-order system $\dot\theta = \omega$, $\dot\omega = -\sin\theta$.

If we were to use forward Euler, we would get the following scheme:

omega = 0; theta = 1;
dt = 0.1;
for i = 1:100
    theta = theta + dt*omega;
    omega = omega + dt*(-sin(theta));
end
format long
theta

theta = -0.997033921796586

Now let's try it with ode45:

omega = 0; theta = 1;
[T,Y] = ode45(@(t, y) SimplePendulum(t,y),[0 10],[theta omega]);
format long
Y(end,1)

ans = -0.997381590229234

We can use this to plot our phase plane. First we need a bunch of initial points, and we let them run.

omega = [-2:0.5:2];
theta = [-3*pi:6*pi/5:3*pi];
for i = 1:length(omega)
    for j = 1:length(theta)
        [T,Y] = ode45(@(t, y) SimplePendulum(t,y),[0 10],[theta(j) omega(i)]);
        plot(Y(:, 1), Y(:, 2))
        hold on
    end
end
hold off
axis([-3*pi 3*pi -3 3])
xlabel('\theta')
ylabel('\omega')

We can even simulate it as a video. Try it for different initial points.

omega = 0; theta = 1;
[T,Y] = ode45(@(t, y) SimplePendulum(t,y),[0 10],[theta omega]);
y = -cos(Y(:,1));
x = sin(Y(:,1));
for t = 1:length(y)
    h = plot([0 x(t)],[0 y(t)],'k',x(t),y(t),'.',0,0,'.',x(1:t),y(1:t));
    set(h(1),'linewidth',2);
    set(h(2),'MarkerSize',50);
    set(h(3),'MarkerSize',20);
    axis([-1.5 1.5 -1.5 1.5])
    pause(0.1)
end

function dy = SimplePendulum(t,y) %%% This usually goes at the end of the code.
    dy = zeros(2,1);
    dy(1) = y(2);
    dy(2) = -sin(y(1));
end
### (Deleted) Create Symmetric Mesh from Polygon

Hey, I am computing boundary points, then creating a polygon and from that a mesh, which I'll later use as a domain. Is there a way to make the mesh symmetric if the polygon is (let's say with D6 symmetry, like a snowflake)?

vertices = [...]
polygon = Polygon(vertices)
mesh = generate_mesh(polygon, RES)

Community: FEniCS Project
# Infinite alternating series with repeated logarithms

1. Apr 8, 2009

### LeifEricson

1. The problem statement, all variables and given/known data

Calculate the sum of the following series:
$$\sum_{i=2}^{\infty}(-1)^i \cdot \lg^{(i)} n$$
where $(i)$ as a superscript signifies the number of times lg is iterated, i.e. $\lg^{(3)} n = \lg(\lg(\lg n))$, and n is a natural number.

2. Relevant equations

3. The attempt at a solution

I proved by the Leibniz test that this series converges. But I don't know how to find the number to which it converges. Thanks.

2. Apr 8, 2009

### futurebird

I really don't know how to help you... but since you know it converges, you can use a numerical estimate to see if it looks like any famous number? I hope someone helps; I'm really interested in this!

3. Apr 8, 2009

### Dick

If you try doing a numerical estimate it will blow up. Which is informative. Eventually $\lg^{(k)}(n) < 1$. Then $\lg^{(k+1)}(n) < 0$. Now what's $\lg^{(k+2)}(n)$?? The series isn't even well defined, much less convergent.

4. Apr 8, 2009

### LeifEricson

You are right. The series isn't well defined, so I cancel this question. I apologize.

5. Apr 8, 2009

### futurebird

Oh, that makes me feel a little better. I could not even see how you had it converging.

6. Apr 8, 2009

### Dick

No need for apologies!! It's a perfectly valid question. It makes a good 'think about it' exercise. It does look at first glance like it ought to be a good candidate for an alternating series test.

Last edited: Apr 8, 2009
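A concrete instance of Dick's point (my own worked example, taking lg to be the base-10 logarithm):

```latex
% For n = 10^{10}, iterating lg gives
\[
\lg n = 10, \qquad
\lg^{(2)} n = 1, \qquad
\lg^{(3)} n = 0, \qquad
\lg^{(4)} n = \lg 0 \;\text{(undefined)},
\]
% so the terms of the series stop being defined after finitely many steps.
```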
# Article

Keywords: compact spaces; $G_\delta$-sets; resolvability

Summary: It is well known that compacta (i.e. compact Hausdorff spaces) are maximally resolvable, that is, every compactum $X$ contains $\Delta(X)$ many pairwise disjoint dense subsets, where $\Delta(X)$ denotes the minimum size of a non-empty open set in $X$. The aim of this note is to prove the following analogous result: Every compactum $X$ contains $\Delta_\delta(X)$ many pairwise disjoint $G_\delta$-dense subsets, where $\Delta_\delta(X)$ denotes the minimum size of a non-empty $G_\delta$ set in $X$.
# Today I Learned

Some of the things I've learned every day since Oct 10, 2016

## 123: Skip Lists (Intro)

A skip list is a data structure, similar to a linked list, which stores an ordered sequence of elements and is built specifically for fast search within the sequence. It is commonly implemented as a hierarchy of linked lists, with the first list containing all the elements of the skip list, and each successive linked list containing a sparser subset of the elements in the previous one.

To search for an element within the skip list, one starts with the sparsest linked list, getting as 'close' to the desired element as possible, then moves directly down to the same position in the next linked list and continues searching until the item is found (or not). A sketch of this search in code follows below.

This animation depicts the insertion of the element 80 into a skip list of positive integers (insertion being equivalent to searching). Credit: Artyom Kalinin

In the average case, the skip list uses $O(n)$ space and takes just $O(\log{} n)$ time for searching operations.
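Here is a minimal sketch of the search procedure (illustrative only; level generation for insertion is omitted, and MAX_LEVEL is an arbitrary cap):

```python
# Minimal skip-list search: each node keeps one forward pointer per level.
class Node:
    def __init__(self, value, level):
        self.value = value
        self.forward = [None] * level  # forward[k] = next node on level k

class SkipList:
    MAX_LEVEL = 16  # arbitrary cap on the hierarchy height

    def __init__(self):
        self.head = Node(None, self.MAX_LEVEL)  # sentinel head node

    def search(self, target):
        node = self.head
        # Start at the sparsest level; move right while the next value is
        # still smaller than the target, then drop down one level.
        for lvl in range(self.MAX_LEVEL - 1, -1, -1):
            while node.forward[lvl] is not None and node.forward[lvl].value < target:
                node = node.forward[lvl]
        candidate = node.forward[0]  # level 0 holds every element
        return candidate is not None and candidate.value == target
```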
# Tag Info

42

Control the Precision and Accuracy of Numerical Results. This is an excellent question. Of course everyone could claim highest accuracy for their product. To deal with this situation there exist benchmarks to test for accuracy. One such benchmark is from NIST. This specific benchmark deals with the accuracy of statistical software, for instance. The NIST ...

26

The only reason I am attempting to answer this is to perhaps get a Reversal badge. There you go... We will go slowly and this answer is the basis for what comes next. Let's start with two dimensions. You'll see why. We create a rectangular region: Needs["NDSolve`FEM`"] mesh = ToElementMesh[FullRegion[2], {{0, 5}, {0, 1}}, "MeshOrder" -> 1, ...

24

Solving 1D and 2D complex Schroedinger wave equations with NDSolve. I do not agree with you when you write: I know the NDSolve is not magic... My opinion is that NDSolve is one of the most complex functionalities I've met so far in the Mathematica environment; with its millions of options and special functions it is a really complex thing, and it is hard ...

23

There is an (undocumented?) feature of NDSolve which is handy for exactly this purpose: You can add more than just the start and end of the integration interval and enforce that these points will be met. The result is like you would run NDSolve on each of the corresponding intervals with the starting conditions given by the end point of the previous ...

22

Time-dependent case: in the time-dependent case, $[H(t),H(t')]\neq0$ in general and we need to time-order, i.e., the operator taking a state from $t=0$ to $t=\tau$ is $U(0,\tau)=\mathcal{T}\exp(-i\int_0^\tau dt\, H(t))$ with $\mathcal{T}$ the time-ordering operator. In practice we just split the time interval into lots of small pieces (basically using the ...

20

Edit of July 10, 2014: As of V10, this equation can now be solved with a single, simple call to NDSolve: y = NDSolveValue[{ r D[y[r, z], z, z] + D[y[r, z], r] + r D[y[r, z], r, r] == r y[r, z], y[1, z] == 1, y[r, 1] == 1 }, y, {r, 0, 1}, {z, 0, 1}]; ContourPlot[y[r, z], {r, 0, 1}, {z, 0, 1}, ColorFunction -> "TemperatureMap", ...

19

Let me show how to roll your own numerical solution to a non-linear integral equation using a collocation method. It's fun! This will involve two approximations. First, we will approximate the function B[x] by its values at n particular points in the range {x, 0, 1}. The integral over x will be replaced by a weighted sum over n, i.e., a quadrature rule. ...

17

You can always separate your inner integrals, convert them to functions and use them in NIntegrate: i1[z_?NumericQ] := i1[z] = NIntegrate[-y, {y, 0, z}] i2[x_?NumericQ] := i2[x] = NIntegrate[Exp[i1[z]], {z, -∞, x}] NIntegrate[x i2[x], {x, -5., 5}] (* 30.0795 *)

17

You can use the EventLocator method of NDSolve. Needs["DifferentialEquations`InterpolatingFunctionAnatomy`"]; eqns = {Derivative[1][a][t] == -a[t] - 0.2 a[t]^2 + 2.1 b[t], Derivative[1][b][t] == a[t] + 0.1 a[t]^2 - 1.1 b[t], a[0] == 0.5, b[0] == 0.5}; sol = First@ NDSolve[eqns, {a, b}, {t, 0, 1000}, Method -> {"EventLocator", ...

17

My variant of Szabolcs' code. It doesn't need an extra package: sol = First[ NDSolve[eqns, {a, b}, {t, 0, 1000}, Method -> {"EventLocator", "Event" -> Abs[a'[t]] + Abs[b'[t]] < 10^-5, "EventAction" :> Throw[end = t, "StopIntegration"]}]]; Plot[Evaluate[{a[t], b[t]} /. sol], {t, 0, end}] As you can see it makes use of the ...

17

Generally speaking, you can recognize a list because it'll have List as its Head. For example: Head[{1,2,3}] will return List.
For your example conditional where you want to change what you do based on the Head of the resulting expression, you can use Switch, such as in: Switch[result, _List, what you want to do with a list, _, what you ... 16 You can modify the global system variable $Assumptions, to get the effect you want:$Assumptions = aa[t] > 0 Then Integrate[D[yy[x, t], t]^2, {x, 0, 18}] 10.1601 Derivative[1][aa][t]^2 This may, however, be somewhat error-prone. Here is how I'd do this with local environments. This is a generator for a local environment: ... 16 After a lengthy study (I'm using version 8) I conclude that there is a bug in Mathematica in the Integrate function when applied to a Sqrt integrand. Ok. let's go (some patience is required because of the long text) Let us define the functions corresponding to your integrals. Remark: because of the relation $1 + cos(2x) = 2 cos^2(x)$ the two forms of ... 15 NIntegrate performs a certain symbolic processing of the integrand to detect discontinuities, singularities, to determine the method to choose and so on. If you know the integrand pretty well, the way to reduce the overhead is to set the method explicitly, set its SymbolicProcessing suboption to 0 (to allow to time spent on the preprocessing), and to add ... 15 There is the function NFourierTransform[] (as well as NInverseFourierTransform[]) implemented in the package FourierSeries. The function, as with the related kernel functions, takes a FourierParameters option so you can adjust computations to your preferred normalization as needed. For your specific normalization, you apparently want the setting ... 15 Some frames from my version of the animation: Here's the code I used: orbit[posStart_?VectorQ, derStart_?VectorQ] := Block[{c = -Rationalize[6.672*^-11*7*^17], x, y, z, t}, {x, y, z} /. First @ NDSolve[ Join[Thread[{x''[t], y''[t], z''[t]} == c {x[t], y[t], z[t]}/Norm[{x[t], y[t], z[t]}]^3], ... 15 According to the Mathematica documentation on this page: Here is how to define a 5(4) pair of Dormand and Prince coefficients [DP80]. This is currently the method used by ode45 in MATLAB. DOPRIamat = { {1/5}, {3/40, 9/40}, {44/45, -56/15, 32/9}, {19372/6561, -25360/2187, 64448/6561, -212/729}, {9017/3168, -355/33, 46732/5247, 49/176, ... 15 I think it's worth pointing out that the problem can be solved "straightforwardly" (i.e., really using only NDSolve) once you know the options that Stefan used in ProcessEquations (which I upvoted because those options are the main ingredient): Below I show the original problem of a Gaussian wave packet with no initial momentum, and then a modified case ... 15 Here's my attempt. To get the matrix representing the Laplacian I use LaplacianFilter on an array of symbols and CoefficientArrays to extract the coefficients. n = 200; shape = ArrayPad[ConstantArray[0, {n/2, n/2}], {{0, n/2}, {0, n/2}}, 1]; shapeVector = Flatten @ Position[Flatten @ shape, 1]; symbolArray = Array[x, {n, n}]; symbolLaplacian = ... 13 I think you intended to use {li, 200, 800} instead of {li, 800, 200}. If you do so, then you could visualize the result : ListLinePlot@dnFpoints Moreover I would rather define daF in the following form : daF[l_]:= 500 * 0.28 Exp[-((l - 500)/90)^2] c = 3 10^8; Edit Instead of using Table of dnFpoints I add an alternative method for calculation of ... 13 This question is somewhat subjective, but here's my take on it: The reason the precise methods are mentioned in papers is to make results reproducible. One has to draw a line when it comes to describing methods. 
Will you mention what method you used to add or multiply numbers on a computer? What if the numbers are huge and you used FFT-accelerated ...

13

This is fixed in version 9. This came up on MathGroup before. Since it hadn't been fixed for so long, I wasn't sure if it was really a bug, so I did some spelunking (and some speculation) today to find out what's happening. To jump to the end: I think it's a bug. First, let's see what arguments LogLinearPlot really passes to the function: ...

13

As it turns out, the designers of NDSolve[] have precisely anticipated this sort of use; this is where you can use the NDSolve`StateData framework. To use acl's example: (* prepare PDE *) state = First[NDSolve`ProcessEquations[{D[u[t, x], t] == D[u[t, x], x, x], u[0, x] == 0, u[t, 0] == Sin[t], u[t, 5] == 0}, u, t, {x, 0, 5}]]; (* go up to t = 2 *) ...

13

NDSolve has a slew of options that allow you to control the method. You can find the standard reference here. There, we learn how to access Euler's method using NDSolve: Clear[x]; x = x /. First[ NDSolve[{x'[t] == 0.5*x[t] - 0.04*(x[t])^2, x[0] == 1}, x, {t, 0, 10}, StartingStepSize -> 1, Method -> {"FixedStep", Method -> "ExplicitEuler"}] ]; ...

13

You can get the curve in polynomial implicit form as below. poly = GroebnerBasis[{x^2 - ct, y^2 - st, ct^2 + st^2 - 1}, {x, y}, {ct, st}][[1]] (* Out[290]= -1 + x^4 + y^4 *) To get the area, integrate the characteristic function for the interior of the region. That's where the polynomial is nonpositive (just notice that it is negative at the ...

12

If you know the equation defining your ellipsoid you could use Boole[] to constrain the integration domain: myF[x_,y_] = Abs[x+y] NIntegrate[Boole[(x/3)^2 + (y/2)^2 <= 1] myF[x,y], {x, -5, 5}, {y, -5, 5}] Note that this will actually prevent myF[x, y] from being evaluated outside the domain specified by Boole. This feature of NIntegrate is described ...

12

This approach finds equilibrium by checking that all derivatives up to the order of the differential equation are below a threshold. Following the template (defined below) suggested by the OP, here is an example for a damped harmonic oscillator: Needs["DifferentialEquations`InterpolatingFunctionAnatomy`"]; eqns1 = {a''[t] == Pi^2/2500 - (Pi^2*a[t])/2500 - ...
# Circuits with Medium Fan-In

Tuesday, April 15, 2014 - 4:15pm to 5:15pm

Refreshments: 3:45pm in 32-G449 (Patil/Kiva)

Location: 32-G449 (Patil/Kiva)

Speaker: Anup Rao, University of Washington

Abstract: We consider boolean circuits in which every gate may compute an arbitrary boolean function of k other gates, for a parameter k. We give an explicit function f: {0,1}^n -> {0,1} that requires at least Omega(log^2 n) non-input gates when k = 2n/3. When the circuit is restricted to being layered and of depth 2, we prove a lower bound of n^{Omega(1)} on the number of non-input gates. When the circuit is a formula with gates of fan-in k, we give a lower bound of Omega(n^2/(k log n)) on the total number of gates.

Our model is connected to some well-known approaches to proving lower bounds in complexity theory. Optimal lower bounds for the Number-On-Forehead model in communication complexity, or for bounded-depth circuits in AC^0, or extractors for varieties over small fields would imply strong lower bounds in our model. On the other hand, new lower bounds for our model would prove new time-space tradeoffs for branching programs and impossibility results for (fan-in 2) circuits with linear size and logarithmic depth. In particular, our lower bound gives a different proof for a known time-space tradeoff for oblivious branching programs.

Joint work with Pavel Hrubes.
# pylops.Smoothing1D

pylops.Smoothing1D(nsmooth, dims, dir=0, dtype='float64')

1D Smoothing.

Apply smoothing to model (and data) along a specific direction of a multi-dimensional array depending on the choice of dir.

Parameters:

nsmooth : int
    Length of smoothing operator (must be odd)
dims : list or int
    Number of samples for each dimension
dir : int, optional
    Direction along which smoothing is applied
dtype : str, optional
    Type of elements in input array.

Notes

The Smoothing1D operator is a special type of convolutional operator that convolves the input model (or data) with a constant filter of size $n_\text{smooth}$:

$\mathbf{f} = [ 1/n_\text{smooth}, 1/n_\text{smooth}, ..., 1/n_\text{smooth} ]$

When applied to the first direction:

$y[i,j,k] = \frac{1}{n_\text{smooth}} \sum_{l=-(n_\text{smooth}-1)/2}^{(n_\text{smooth}-1)/2} x[i-l,j,k]$

Similarly when applied to the second direction:

$y[i,j,k] = \frac{1}{n_\text{smooth}} \sum_{l=-(n_\text{smooth}-1)/2}^{(n_\text{smooth}-1)/2} x[i,j-l,k]$

and the third direction:

$y[i,j,k] = \frac{1}{n_\text{smooth}} \sum_{l=-(n_\text{smooth}-1)/2}^{(n_\text{smooth}-1)/2} x[i,j,k-l]$

Note that since the filter is symmetrical, the Smoothing1D operator is self-adjoint.

Attributes:

shape : tuple
    Operator shape
explicit : bool
    Operator contains a matrix that can be solved explicitly (True) or not (False)
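A quick usage sketch (assuming a pylops version with the signature shown above; the expected output values follow from the length-5 box filter):

```python
import numpy as np
import pylops

n, nsmooth = 100, 5
x = np.zeros(n)
x[n // 2] = 1.0                     # unit spike as the model

Sop = pylops.Smoothing1D(nsmooth=nsmooth, dims=n)
y = Sop * x                         # forward: smooth the spike

# The spike is spread over nsmooth samples with weight 1/nsmooth each.
print(y[n // 2 - 2 : n // 2 + 3])   # approximately [0.2 0.2 0.2 0.2 0.2]
```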
Linear Preservers of Perimeters of Nonnegative Real Matrices

• Journal title: Kyungpook Mathematical Journal
• Volume 48, Issue 3, 2008, pp. 465-472
• Publisher: Department of Mathematics, Kyungpook National University
• DOI: 10.5666/KMJ.2008.48.3.465

Authors: Song, Seok-Zun; Kang, Kyung-Tae

Abstract: For a nonnegative real matrix A of rank 1, A can be factored as $ab^t$ for some vectors a and b. The perimeter of A is the number of nonzero entries in both a and b. If B is a matrix of rank k, then B is a sum of k matrices of rank 1, and the perimeter of B is the minimum of the sums of the perimeters of those k rank-1 matrices, where the minimum is taken over all possible rank-1 decompositions of B. In this paper, we obtain characterizations of the linear operators which preserve perimeters 2 and k for some $k \geq 4$. That is, a linear operator T preserves perimeters 2 and $k(\geq 4)$ if and only if it has the form T(A) = UAV or T(A) = $UA^tV$ for some invertible matrices U and V.

Keywords: rank; perimeter; linear operator; (U, V)-operator

Language: English
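To make the rank-1 case of the definition concrete, here is a toy numerical illustration (mine, not the paper's), in Python with NumPy:

```python
import numpy as np

# A rank-1 nonnegative real matrix A = a b^T; its perimeter is the number
# of nonzero entries of a plus the number of nonzero entries of b.
a = np.array([1.0, 0.0, 2.0])   # two nonzero entries
b = np.array([3.0, 4.0])        # two nonzero entries
A = np.outer(a, b)              # rank-1 and nonnegative

perimeter = np.count_nonzero(a) + np.count_nonzero(b)
print(A)          # [[3. 4.] [0. 0.] [6. 8.]]
print(perimeter)  # 4
```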
Energy Excursions

# Cement and Casing

"Take an energy excursion to understand the importance of cement and casing to maintaining wellbore integrity as part of the construction of a well."

## Summary

This lesson discusses how engineers construct a well using steel pipe, called casing. To secure the casing and maintain the integrity of the well, engineers must cement different lengths of casing within the wellbore. The motivation is to recognize that the details of cement and casing are critical to maintaining health, safety, and the environment during drilling operations. The lesson on “Cement and Casing” provides foundational knowledge and skills needed to complete the “Design Challenge” lesson in this course.

## Learning Outcomes

• Gain a thorough understanding of what casing is and its primary purposes:
  • Stabilizes and seals parts of the wellbore using cement
  • Protects shallow formations from high pressures found in deeper zones
  • Seals zones containing corrosive water
  • Seals and protects freshwater-bearing zones
  • Isolates productive formations from other zones
• Gain an understanding of the steps involved when casing begins during construction of a well
• Learn about the different types of tests performed on the casing to ensure that well integrity has been established:
  • Cement Bond Logs
  • Casing Pressure Test
  • Casing Shoe Test
# Probability questions from Tanton

Confession: I still haven’t figured out how to use twitter. (Feel free to follow me @mrchasemath, though!) I always feel like I’m drinking from a fire hose when I get on the site–I can’t keep up with the twitter feed, so I don’t even try. But when I do, I love seeing what people are posting.

Here’s a great math problem from James Tanton. He always has such interesting problems! Feel free to work it out yourself. It’s a fun problem!

Here are my tweets that answer the question (can you follow my work?): It’s hard to do math with 140 characters! :-)

Here’s his follow-up question, which has still gone unanswered. My approach to the first problem won’t work here, and I want to avoid brute-forcing it. (Reminds me of my last post!) Any ideas? Let us know in the comments…or tweet @jamestanton!

# Four ways to compute a probability

I have a guest blog post that appears on the White Group Mathematics blog here. (My first guest post!) Here’s a taste:

One thing I love about math, and particularly combinatorics and probability, is the fact that many methods exist for solving the same problem. Each method may have its advantages. The advantage might be conceptual (as in “this makes most sense to me”) or the advantage might be computational (as in “this is the fastest way to do it”). Discussing the merits of different methods is exactly what math class is for!

For example, check out this typical probability question that could appear in a Precalculus course:

The Texas Rangers pitching staff has 5 right-handers and 8 left-handers. If 2 pitchers are selected at random to warm up, what is the probability that at least one of them is a right-hander?

In fact, it’s one I use in my own Precalculus course, and it generated a great class discussion. In teaching it this past year, I ended up showing students four ways to do the problem! Here they are…

For the epic conclusion of this post, visit White Group Mathematics. :-)

# MAA Distinguished Lecture Series

If you live in the DC area and you like math, you have no excuse! Come to the MAA Distinguished Lecture Series. These are one-hour talks, complete with refreshments, all for free due to the generous sponsorship of the NSA. The talks are at the Carriage House, at the MAA headquarters near Dupont Circle.

Here are some of the great talks that are on the schedule in the next few months (I’m especially excited to hear Francis Su on May 14th). I’ve been to many of these lectures and always enjoyed them. Robert Ghrist‘s lecture was out of this world (here’s the recap, but no video, audio, or slides yet) and was so very accessible and entertaining, despite the abstract nature of his expertise–algebraic topology. And that’s the wonderful thing about all these talks: even though these are very bright mathematicians, they go out of their way to give lectures that engage a broad audience.

Here’s another great one from William Dunham, who spoke about Newton (Dunham is probably the world’s leading expert on Newton’s letters). Recap here, and a short youtube clip here: (full talk also available)

So, if you’re a DC mathophile, stop by sometime. I’ll see you there!

# Math on Quora

I may not have been very active on my blog recently (sorry for the three-month hiatus), but it’s not because I haven’t been actively doing math. And in fact, I’ve also found other outlets to share about math. Have you used Quora yet? Quora, at least in principle, is a grown-up version of Yahoo Answers.
It’s like stackoverflow, but more philosophical and less technical. You’ll (usually) find thoughtful questions and thoughtful answers. Like most question-answer sites, you can ‘up-vote’ an answer, so the best answers generally appear at the top of the feed.

The best part about Quora is that it somehow attracts really high-quality respondents, including: Ashton Kutcher, Jimmy Wales, Jeremy Lin, and even Barack Obama. Many other mayors, famous athletes, CEOs, and the like seem to darken the halls of Quora. For a list of famous folks on Quora, check out this Quora question (how meta!).

Also contributing quality answers is none other than me. It’s still a new space for me, but I’ve made my foray into Quora in a few small ways. Check out the following questions for which I’ve contributed answers, and give me some up-votes, or start a comment battle with me or something :-).

And here are a few posts where my comments appear:

# USA Science and Engineering Festival

If you’re local, you should go check out the USA Science and Engineering Festival this weekend. It’s on the Mall in DC and everything is free. They will have tons of booths, free stuff, demonstrations, presentations, and performances. Go check it out! For my report on the fest from two years ago, see this post. The USA Science and Engineering Festival is also responsible for bringing to our school, free of charge, the amazing James Tanton!

# I ♥ Icosahedra

Do you love icosahedra? I do. On Sunday, I talked with a friend about an icosahedron for over an hour. Icosahedra, along with other polyhedra, are a wonderfully accessible entry point into math–and not just simple math, but deep math that gets you pretty far into geometry and topology, too! (Just see my previous post about Matthew Wright’s guest lecture.)

A regular icosahedron is one of the five regular surfaces (“Platonic Solids”). It has twenty sides, all congruent, equilateral triangles. Here are three icosahedra:

Here’s a question which is easy to ask but hard to answer: How many ways can you color an icosahedron with one of n colors per face?

If you think the answer is $n^{20}$, that’s a good start–there are $n$ choices of color for 20 faces, so you just multiply, right?–but that’s not correct. Here we’re talking about an unoriented icosahedron that is free to rotate in space. For example, do the three icosahedra above have the same coloring? It’s hard to tell, right?

Solving this problem requires taking the symmetry of the icosahedron into account. In particular, it requires a result known as Burnside’s Lemma. For the full solution to this problem, I’ll refer you to my article, authored together with friends Matthew Wright and Brian Bargh, which appears in this month’s issue of MAA’s Math Horizons Magazine here (JSTOR access required). I’m very excited that I’m a published author!

# Matthew Wright visits RM

Dr. Matthew Wright paid our students a visit this past Friday and gave them a gentle introduction to topology and the Euler Characteristic. This is a topic given little to no treatment inside the traditional K-12 math curriculum, so our students welcomed the opportunity to learn some ‘college math.’ He had our students counting vertices, edges, and faces of various surfaces in order to compute the Euler Characteristic. Students discovered that the Euler Characteristic is a topological invariant. In his talk he also walked the students through a proof that there are only five regular surfaces, using the Euler Characteristic.
This is more difficult than the typical proof, but elegant because the proof doesn’t appeal to geometry. That is, the proof doesn’t ever require the assumption that the faces, angles, or edges are congruent. In this sense, it is a topological proof.* Very cool indeed!

Bio: Matthew Wright went to Messiah College and then went on to receive his MS and PhD from the University of Pennsylvania, where his thesis was in applied and computational topology. He was a professor at Huntington College for two years but is now at the Institute for Mathematics and its Applications at the University of Minnesota for a postdoctoral research fellowship. His hobbies include photography and juggling. On a personal note, Matthew was my roommate in college, and I had the privilege of being the best man at his wedding, as well! For more about Dr. Wright, visit his website at http://mrwright.org/.

* This proof also appears in the book Euler’s Gem by Dave Richeson.
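A computational postscript to the last two posts (my own sketch, not from the original blog). The first half checks the classroom computation V - E + F = 2 for the five Platonic solids; the second half applies Burnside's Lemma to the icosahedron-coloring question above, assuming the standard cycle structure of the 60-element rotation group on the 20 faces (identity; 24 order-5 rotations, each with four 5-cycles; 20 order-3 rotations, each fixing two faces with six 3-cycles; 15 order-2 rotations, each with ten 2-cycles):

```python
# Euler characteristic check: (V, E, F) for the five Platonic solids.
solids = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}
for name, (V, E, F) in solids.items():
    assert V - E + F == 2, name

# Burnside count of icosahedron face colorings with n colors:
# average, over the rotation group, the number of colorings each fixes.
def icosahedron_face_colorings(n):
    return (n**20 + 24 * n**4 + 20 * n**8 + 15 * n**10) // 60

assert icosahedron_face_colorings(1) == 1
print(icosahedron_face_colorings(2))  # 17824, far fewer than 2**20 = 1048576
```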
# williamyangcn/lcthw-cn (forked from zedz/lcthw-cn)

\chapter{Exercise 3: Formatted Printing}

Keep that \file{Makefile} around since it'll help you spot errors and we'll be adding to it when we need to automate more things.

Many programming languages use the C way of formatting output, so let's try it:

\begin{code}{ex3.c}
<< d['code/ex3.c|pyg|l'] >>
\end{code}

Once you have that, do the usual \shell{make ex3} to build it and run it. Make sure you \emph{fix all warnings}.

This exercise has a whole lot going on in a small amount of code so let's break it down:

\begin{enumerate}
\item First you're including another "header file" called \file{stdio.h}. This tells the compiler that you're going to use the "standard Input/Output functions". One of those is \ident{printf}.
\item Then you're using a variable named \ident{age} and setting it to 10.
\item Next you're using a variable \ident{height} and setting it to 72.
\item Then you use the \ident{printf} function to print the age and height of the tallest 10 year old on the planet.
\item In the \ident{printf} you'll notice you're passing in a string, and it's a format string like in many other languages.
\item After this format string, you put the variables that should be "replaced" into the format string by \ident{printf}.
\end{enumerate}

The result of doing this is you are handing \ident{printf} some variables and it is constructing a new string then printing that new string to the terminal.

\section{What You Should See}

When you do the whole build you should see something like this:

\begin{Terminal}{Building and running ex3.c}
\begin{lstlisting}
<< d['code/ex3.out'] >>
\end{lstlisting}
\end{Terminal}

Pretty soon I'm going to stop telling you to run \file{make} and what the build looks like, so please make sure you're getting this right and that it's working.

\section{External Research}

In the \emph{Extra Credit} section of each exercise I may have you go find information on your own and figure things out. This is an important part of being a self-sufficient programmer. If you constantly run to ask someone a question before trying to figure it out first then you never learn to solve problems independently. This leads to you never building confidence in your skills and always needing someone else around to do your work. The way you break this habit is to \emph{force} yourself to try to answer your own questions first, and to confirm that your answer is right. You do this by trying to break things, experimenting with your possible answer, and doing your own research.

For this exercise I want you to go online and find out \emph{all} of the \ident{printf} escape codes and format sequences. Escape codes are \verb|\n| or \verb|\t| that let you print a newline or tab (respectively). Format sequences are the \verb|%s| or \verb|%d| that let you print a string or an integer. Find all of the ones available, how you can modify them, and what kind of "precisions" and widths you can do. From now on, these kinds of tasks will be in the Extra Credit and you should do them.

\section{How To Break It}

Try a few of these ways to break this program, which may or may not cause it to crash on your computer:

\begin{enumerate}
\item Take the \ident{age} variable out of the first \ident{printf} call then recompile. You should get a couple of warnings.
\item Run this new program and it will either crash, or print out a really crazy age.
\item Put the \ident{printf} back the way it was, and then don't set \ident{age} to an initial value by changing that line to \verb|int age;| then rebuild and run again.
\end{enumerate}

\begin{Terminal}{Breaking ex3.c}
\begin{lstlisting}
<< d['code/ex3.bad.out'] >>
\end{lstlisting}
\end{Terminal}

\section{Extra Credit}

\begin{enumerate}
\item Find as many other ways to break \file{ex3.c} as you can.
\item Run \verb|man 3 printf| and read about the other '\%' format characters you can use. These should look familiar if you used them in other languages (\ident{printf} is where they come from).
\item Add \file{ex3} to your \file{Makefile}'s \ident{all} list. Use this to \verb|make clean all| and build all your exercises so far.
\item Add \file{ex3} to your \file{Makefile}'s \ident{clean} list as well. Now \verb|make clean| will remove it when you need to.
\end{enumerate}
# WWH - Rendering Convex N-Gons

By Paul Nettle | Published Aug 23 1999 03:48 PM in Graphics Programming and Theory

The purpose of a WWH is to expand one's knowledge on a topic they already understand, but need a reference, a refresher course, or to simply extend what they already know about the topic. WWH is the quick tutor. Just the [W]hat, [W]hy and [H]ow.

What

Rendering convex n-gons (shapes that have more than 2 vertices) to the frame buffer can be a challenge, especially if you wish to accomplish this with any amount of speed. I'll outline a very simple and fast algorithm.

Why

Some polygon renderers can benefit greatly by rendering n-gons rather than triangular patches. In a doom-style environment, this would (on average) drop the number of polygons to render by 50%. This does not mean that you'll render less screen-space, but it does cut down on the time spent in the overhead of triangular scan-conversion setup. In my past experience, with a generic dataset (including indoor and outdoor scene data with objects in the environment), the average reduction from triangles to n-gons was typically 50%.

Clipping (2D or 3D) usually results in n-gons that must be triangulated for rendering. You can avoid this step by simply rendering the clipped n-gons themselves. As you'll see, the routines outlined here will perform very well for triangular polygons as well (comparable to a dedicated triangular patch renderer.)

How

Define

Let's start off by defining our basic model of an n-gon (the input to our renderer). Since an n-gon has an unlimited number of vertices (which may be limited for memory or speed purposes) we'll need a way to store our n-gon. For this example, we'll use a linked list rather than a static array of vertices for each polygon. This linked list MUST contain vertices that appear in a specific order -- we'll assume clockwise for this explanation, but it really doesn't matter. It also shouldn't matter which vertex is stored in the list first, as long as the last vertex in the list connects to the first, closing the polygon.

Also, for the purpose of this example, we'll assume that the polygon is non-textured, Lambert shaded (i.e. a constant shade across the entire polygon, often referred to as "flat shaded"), and rendered on screen, from top to bottom.

For this example, our vertex list is defined as:

typedef struct s_vert
{
    int x, y;
    struct s_vert *next;
} sVert;

And our polygon is defined as:

typedef struct s_poly
{
    sVert *verts;
    char color;
} sPoly;

For simplicity, we'll assume our X and Y values are fixed-point values (stored in 32-bit integers) and there will be no sub-pixel correction applied.

One final note before I go on... none of the code in this WWH was compiled or even tested. It is simply here as an example to aid the text in explaining the processes involved.
Setup

Since we'll be rendering from top-to-bottom, we'll need to find our top vertex, so scan through the list of vertices and locate that top vertex (this assumes we know our polygon is stored in "poly"):

sVert *pTop = poly.verts;
sVert *pVerts = poly.verts->next;
for( ; pVerts; pVerts = pVerts->next)
{
    if (pVerts->y < pTop->y) pTop = pVerts;
}

You may find a slight speed increase by storing your top vertex inside the polygon when you perform your clipping or any of your other pre-render steps that cycle through your screen-space vertices. Also, you can ignore the fact that this polygon may have a flat top (either the previous or next vertex has the same height as the top vertex) since these vertices will automatically be skipped.

Later, we'll need to discern the left edge from the right edge, so make two copies of this top vertex. These will represent the top of the left edge and the top of the right edge.

sVert lTemp = *pTop; // copy
sVert rTemp = *pTop; // copy
sVert *lTop = &lTemp;
sVert *rTop = &rTemp;
int currentY = pTop->y;

The reason we've made two copies of the vertex itself is because we'll be modifying the actual vertex as we step along the edges of our polygon. Simply pointing lTop and rTop at the top vertex wouldn't work because they would both be modifying this top vertex simultaneously and the polygon might come out looking as if it got a bad hit of something. :> If this is confusing, keep reading, and it'll soon be clear.

Starting at the top vertex, calculate the edge deltas for the edges to the left and to the right. Since your polygon's vertices are oriented in clockwise order in every case (unless you decide you want to render back-facing polygons) the left edge will always be defined as the current vertex (top) and the previous vertex. The right edge will always be defined as the current vertex (top) and the next vertex. Keep in mind that since the vertices are stored in a list, you may have to wrap to the beginning or end of the list to get the previous or next vertex (getPrev() and getNext() account for this, and should manually wrap to the beginning or end of the list).

These are the bottoms of the left and right edges, with their deltas:

sVert *lBot = getPrev(pTop);
sVert *rBot = getNext(pTop);
float lDelta = calcDelta(&lTop, &lBot);
float rDelta = calcDelta(&rTop, &rBot);
int lHeight = lBot->y - lTop->y; // Height of the left edge
int rHeight = rBot->y - rTop->y; // Height of the right edge

Our scan conversion loop starts here:

for(;;)
{

Of the two edges, determine which edge spans the fewest scanlines (shortest from top-to-bottom).

    int height = MIN(lBot->y - lTop->y, rBot->y - rTop->y);

Since your vertices are always oriented the same way, a negative height always means you've finished rendering the polygon. To get a negative, you would have reached the bottom of the polygon and would have started to go up the other side.

    if (height < 0) break;

Render the spans. Step along the edges as you go...

    for(int i = 0; i < height; i++)
    {
        renderSpan(lTop->x, rTop->x, currentY);
        currentY++;
        lTop->x += lDelta; lHeight--;
        rTop->x += rDelta; rHeight--;
    }

Finally, we need to step to the next edge. To do this we compare each edge's height with 0 (remember that they were decremented in the loop). The ones that decremented all the way to 0 are done and can be stepped.
    if (!lHeight)
    {
        lTop = lBot;
        lBot = getPrev(lBot);
        lDelta = calcDelta(&lTop, &lBot);
        lHeight = lBot->y - lTop->y; // Height of the left edge
    }

    if (!rHeight)
    {
        rTop = rBot;
        rBot = getNext(rBot);
        rDelta = calcDelta(&rTop, &rBot);
        rHeight = rBot->y - rTop->y; // Height of the right edge
    }
}

And that's all there is to it (other than a few notes.)

Notes

You may notice that we've been stepping the left and right edges using the actual vertices that define the tops of those edges, not spare copies of them (we only copied the TOP vertex). If you share vertices between polygons, you'll have destroyed those vertices for the next polygon to be rendered. You'll need to create a separate copy of each vertex for stepping.

Triangles are simpler to deal with than n-gons. One major reason is because they're always planar. If you somehow get an n-gon that is not planar, you may find yourself rendering a concave n-gon (whether you plan to or not.)

Gouraud shaded n-gons can be very tricky because as they rotate in screen space, the intensities across the surface of the n-gon change direction. Consider the following example: a 4-sided n-gon with alternate vertices having intensities of dark, bright, dark, bright. As that n-gon rotates on screen and the two dark vertices are across from each other, you'll have a dark line connecting them horizontally (gouraud interpolation will see little or no change between them, so the entire scanline will be pretty much the same shade). However, if you rotate that n-gon 90 degrees on-screen, you'll find that with the two bright vertices across from each other horizontally, the center scanline will be bright. If the gouraud were to remain constant, then the dark line would have become vertical. Instead, the intensity across the n-gon has changed. Larger n-gons tend to show this more often than small n-gons.

Unlike triangles, the deltas for U and V (texture) across the surface of an n-gon horizontally may not be constant from scanline to scanline. If the polygon was planar mapped (a texture plane was projected to get U/V coordinates for all vertices of an n-gon) then it is safe to assume that these deltas are constant from scanline to scanline. Otherwise, they're not. However, Z and W (depth) across the surface of the n-gon do not suffer from the same potential problems as do the U and V. The delta depth across each scanline in the n-gon is constant from scanline to scanline.

I would like to extend thanks to Konstantin Putnik for his input and for finding bugs that made their way into this document as I attempted to simplify the code for the sake of explanation.
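To complement the article, here is a small self-contained sketch in Python (my own, not the author's). Rather than walking left and right edge lists as above, it intersects each scanline with every edge, which yields the same filled spans for convex polygons; the names fill_convex and set_pixel are made up for the illustration.

```python
def fill_convex(verts, set_pixel):
    """Scanline-fill a convex polygon given as a list of (x, y) vertices."""
    ys = [y for _, y in verts]
    n = len(verts)
    for y in range(min(ys), max(ys)):
        xs = []
        for i in range(n):
            (x0, y0), (x1, y1) = verts[i], verts[(i + 1) % n]
            if y0 == y1:
                continue  # horizontal edges contribute no crossing
            # half-open rule [min, max) avoids double-counting shared vertices
            if min(y0, y1) <= y < max(y0, y1):
                t = (y - y0) / (y1 - y0)
                xs.append(x0 + t * (x1 - x0))
        if len(xs) >= 2:  # a convex polygon yields exactly two crossings
            for x in range(round(min(xs)), round(max(xs))):
                set_pixel(x, y)

# Tiny demo: collect the filled pixels of a triangle.
pixels = []
fill_convex([(0, 0), (8, 0), (4, 6)], lambda x, y: pixels.append((x, y)))
print(len(pixels), "pixels filled")
```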
# Mysql ERROR at line 1153: Unknown command '\'

I am trying to import a mysqldump file via the command line, but continue to get an error. I dumped the file from my other server using:

mysqldump -u XXX -p database_name > database.sql

Then I try to import the file with:

mysql -u XXX -p database_name < database.sql

It loads a small portion and then gets stuck. The error I receive is:

ERROR at line 1153: Unknown command '\''.

I checked that line in the file with:

awk '{ if (NR==1153) print $0 }' database.sql >> line1153.sql

and it happens to be over 1MB in size, just for that line. Any ideas what might be going on here?

> This worked! Thanks so much. – Chris May 8 '11 at 23:02
> i have the same problem but --hex-blob is not working. any other solution? thx – Jon Jun 4 '12 at 14:36

You know what's going on - you have an extra single quote in your SQL! If you have 'awk', you probably have 'vi', which will open your line1153.sql file with ease and allow you to find the value in your database that is causing the problem.

Or... the line is probably large because it contains multiple rows. You could also use the --skip-extended-insert option to mysqldump so that each row gets a separate insert statement. Good luck.

I had the same problem because I had Chinese characters in my database. Below is what I found on some Chinese forum and it worked for me.

mysql -u[USERNAME] -p[PASSWORD] --default-character-set=latin1 [DATABASE_NAME] < [BACKUP_SQL_FILE.sql]

If all else fails, use MySQLWorkbench to do the import. This solved the same problem for me.

I think you need to use path/to/file.sql instead of path\to\file.sql. Also, database < path/to/file.sql didn't work for me for some reason - I had to use use database; and source path/to/file.sql;.

I recently had a similar problem where I had done an SQL dump on a Windows machine and tried to install it on a Linux machine. I had a fairly large SQL file and my error was happening at line 3455360. I used the following command to copy all text up to the point where I was getting an error:

sed -n '1, 3455359p' < sourcefile.sql > destinationfile.sql

This copied all the good code into a destination file. I looked at the last few lines of the destination file and saw that it was a complete SQL command (the last line ended with a ';') so I imported the good code and didn't get any errors.

I then looked at the rest of the file, which was about 20 lines. It turns out that the export might not have completed b/c I saw the following php code at the end of the code:

Array ( [type] => 1 [message] => Maximum execution time of 300 seconds exceeded
# Digital Typography News

A blog exclusively devoted to digital typography

## More on Math with XeLaTeX

Try the following simple code:

\documentclass[a4paper]{article}
\usepackage[math-style=iso]{unicode-math}
%%
\begin{document}
\setmathfont{Asana Math}
$\cos^{2}x+\sin^{2}x=1$
\end{document}

You will discover that XeLaTeX produces a PDF file that does not show the sine and cosine symbols! Will Robertson, the author of the unicode-math package, explained to me that if one appends the following line

\def\operator@font{\um@mathup}

to file unicode-math.sty, this fixes the problem. So go on and modify your local copy of unicode-math.sty.

Apostolos Syropoulos
December 24th, 2007 at 6:02 pm
Posted in Uncategorized

## Math with XeLaTeX

The unicode-math package by Will Robertson is a piece of software that was designed to facilitate the typesetting of mathematical text (e.g., equations, formulae, etc.) with XeLaTeX. The following shows a sample file:

\documentclass[a4paper]{article}
\usepackage[math-style=iso]{unicode-math}
%%
\begin{document}
\setmathfont{Asana Math} %% set font for mathematical content
\begin{displaymath}
\mathfrak{A}+\mathcal{B}
\end{displaymath}
\end{document}

All LaTeX commands produce the expected results. In addition, \mathup produces upright math letters, \mathit produces italic letters (the default is to use italic letters), \mathbb produces blackboard letters (think of the R that denotes the set of real numbers), \mathscr and \mathfrak should be used to get script-style or gothic letters, respectively, and \mathsf and \mathtt should be used to get sans serif and mono-spaced letters. Also, the commands \mathsfit, \mathbfit, \mathbffrak, \mathbfscr, \mathbfsf, and \mathbfsfit should be used to get the boldface versions of the previous commands.

Apostolos Syropoulos
December 15th, 2007 at 5:57 pm
Posted in Uncategorized

## Correct hyphenation of Greek documents

When TeX, and for that matter XeTeX, encounters a word that consists of n letters, then unless n is greater than or equal to max(1,\lefthyphenmin)+max(1,\righthyphenmin), where \lefthyphenmin and \righthyphenmin correspond to the smallest fragment at the beginning/end of a word, the word will not be hyphenated. Note that if one does not set these parameters, TeX assumes they have been set to zero. When typesetting Greek documents, I recommend setting both these variables to 2, that is,

\lefthyphenmin=2
\righthyphenmin=2

In other words, I recommend the hyphenation of words that have at least four letters. It is also recommended to make this setting permanent (e.g., by including these assignments in file hyphen.cfg).

Apostolos Syropoulos
# Is there a good (co)homology theory for manifolds with corners?

Recall that a (smooth) manifold with corners is a Hausdorff space that can be covered by open sets homeomorphic to $\mathbb R^{n-m} \times \mathbb R_{\geq 0}^m$ for some (fixed) $n$ (but $m$ can vary), and such that all transition maps extend to smooth maps on open neighborhoods of $\mathbb R^n$.

I feel like I know what a "differential form" on a manifold with corners should be. Namely, near a corner $\mathbb R^{n-m} \times \mathbb R_{\geq 0}^m$, a differential form should extend to some open neighborhood $\mathbb R^{n-m} \times \mathbb R_{> -\epsilon}^m$. So we can set up the usual words like "closed" and "exact", but then Stokes' theorem is a little weird: for example, the integral of an exact $n$-form over the whole manifold need not vanish.

In any case, I read in D. Thurston, "Integral Expressions for the Vassiliev Knot Invariants", 1999, that "there is not yet any sensible homology theory with general manifolds with corners". So, what are all the ways naive attempts go wrong, and have they been fixed in the last decade?

As always, please retag as you see fit.

Edit: It's been pointed out in the comments that (1) I'm not really asking about general (co)homology, as much as about the theory of De Rham differential forms on manifolds with corners, and (2) there is already a question about that. Really I was just reading the D. Thurston paper, and was surprised by his comment, and thought I'd ask about it. But, anyway, since there is another question, I'm closing this one as exact duplicate. I'll re-open if you feel like you have a good answer, though. -Theo

Edit 2: Or rather, apparently OP can't just unilaterally close their own question?

• Not sure what "good/sensible (co)homology theory" is supposed to mean. Singular (co)homology ain't bad... – Kevin H. Lin Feb 1 '10 at 2:25
• Also have you seen this question? mathoverflow.net/questions/12920/… Some of the references there might be relevant. – Kevin H. Lin Feb 1 '10 at 2:36
• What properties do you want that singular (co)homology lacks? For most applications manifolds with corners are just manifolds with boundary with the boundary stratification modified slightly. – Ryan Budney Feb 1 '10 at 3:04
• Also, presumably you mean "(co)homology theory" in some unspecified but non-standard sense? The homotopy type of a manifold with corners is the same as its smoothing. Perhaps you want something multi-relative for the full inclusion of its stratifications, or something? – Ryan Budney Feb 1 '10 at 3:06
• I think what D. Thurston is getting at with his comment is that in order to show certain integral constructions are knot invariants you have to get some control of them at infinity, and when you compactify your space that amounts to understanding how the form behaves on the boundary strata. When working with explicit differential forms this is a lot of fussy work, but for example the point of view of Robin Koytcheff is another way to deal with this situation. – Ryan Budney Feb 1 '10 at 3:51

I suppose you are talking about de Rham cohomology. Then it would be wise to take a look at the work of Richard Melrose, e.g. his book The Atiyah-Patodi-Singer Index Theorem. On page 65 he discusses de Rham cohomology for manifolds with boundary (which can be easily generalized to the corner case, as was also done by him!). On manifolds with corners something interesting happens: there are different versions of reasonable vector fields (and, by duality, differential forms), e.g.

1. extendible vector fields (like you mentioned)
2. tangent vector fields (tangent to any boundary hypersurface)
3. "zero" vector fields (vanishing on all boundary hypersurfaces)

(It can be shown that $d$ preserves the classes 1.-3., giving a de Rham complex whose cohomology can be computed.)

Melrose points out (compact with boundary case) that in cases 1 and 3 the de Rham cohomology is canonically isomorphic to the singular cohomology of the underlying topological space. For the 2nd case also the cohomology of the boundary enters (via a degree shift).

I should also point out that there is also a working Morse theory on manifolds with corners, see for example

M. Shida, Fundamental Theorems of Morse Theory for Optimization on Manifolds with Corners, Journal of Optimization Theory and Applications 106 (2000) pp 683-688, doi:10.1023/A:1004669815654

and [this broken link EDIT perhaps someone else can extract a result -DR]

Furthermore it is easy to construct "invariants" of the manifold with corners by also taking into account its corners (but be careful with respect to which transformations this is an invariant)!

• Based on _coverDate=11%2F30%2F2002 in the URL, I conjecture that the broken link should point to: doi.org/10.1016/S0166-8641(02)00036-6 "Generalized billiard paths and Morse theory for manifolds with corners" David G.C. Handron, in Topology and its Applications, Volume 126, Issues 1–2, 30 November 2002, Pages 83-118. – j.c. Jul 17 '19 at 19:35

A manifold with corners is a diffeological space modeled on orthants, and as such it has a very well defined de Rham cohomology.

Edit: With Serap Gürer, we just wrote a paper (to appear in Indag. Math.) about differential forms on manifolds with corners. Here: http://math.huji.ac.il/~piz/documents/DFOMWBAC.pdf

• (sorry if this comment comes many years later) - And how does the "diffeological" de Rham cohomology compare with the three options of @Orbicular's answer? – Qfwfq Nov 2 '18 at 9:11
• @Qfwfq I anticipate it should be the same as (1) and in particular give singular cohomology. – Mike Miller Nov 2 '18 at 14:57
• @Qfwfq and @MikeMiller Actually we just published 2 papers on zero-perverse forms on stratified spaces and diffeology: math.huji.ac.il/~piz/documents/DFOSS.pdf and math.huji.ac.il/~piz/documents/DFOSS-A.pdf We also have a paper on manifolds with corners and diffeology, not yet published: math.huji.ac.il/~piz/documents/OMWBAC.pdf – Patrick I-Z Nov 3 '18 at 14:40
• Thanks for the reference! I should have said "guess" instead of "anticipate" - clearly much work must be done. – Mike Miller Nov 3 '18 at 15:30
# How do you find the inverse of y-3x=0?

Feb 5, 2016

$f^{-1}(x) = \frac{x}{3}$

#### Explanation:

1. Rearrange the equation in the form y = mx + b.
   $y - 3x = 0$
   $y = 3x$
2. Switch the positions of x and y.
   $x = 3y$
3. Isolate for y.
   $y = \frac{x}{3}$
4. Rewrite the equation using inverse function notation.
   $\textcolor{green}{f^{-1}(x) = \frac{x}{3}}$
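A quick check (my addition, not part of the original answer): composing the function with the proposed inverse returns the identity in both orders, which confirms the result.

```latex
f(x) = 3x, \qquad f^{-1}(x) = \frac{x}{3}
f\bigl(f^{-1}(x)\bigr) = 3 \cdot \frac{x}{3} = x,
\qquad
f^{-1}\bigl(f(x)\bigr) = \frac{3x}{3} = x
```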
# SQLite3 import from CSV

#1 Does someone here have experience with importing data from a CSV file into SQLite with node-red-node-sqlite? I tried so far:

COPY syslog FROM 'E:\Path\to\syslog.csv' DELIMITER ',' CSV HEADER;

and

IMPORT 'E:\Path\to\syslog.csv' syslog

Both COPY and IMPORT throw a syntax error. Importing the CSV file with the SQLite Browser works without any issues.

#2 The node-red node is designed to work with data in node-red. You would need to import the data from the file into node-red using a file-in node and then use the sqlite node to add it to the sqlite db.

#3 If you are just doing it once, a quick google search finds this: Import a CSV File Into an SQLite Table

#4 If you want to do it in node-red you could use an exec node to run the sqlite commands to do it.

#5 Hey, thx for the quick reply! My setup looks like this: I get a txt file via http, clean it up and save it to csv. I was just thinking about getting the payload before writing it to file and connecting it to the SQLite node. Now I have to figure out how to do that. And yes, I found this tutorial on how to do it through the terminal - but I need something automatic.

#6 To be honest, it looks like you really don't need the CSV file? Do you want to just push the http data into SQLite? Because you can do that directly(ish) in Node-RED. There is a node that uses ALAsql that ought to be able to do this for you as well but unfortunately, although the underlying ALAsql module supports SQLite, I don't think the contrib-alasql author has included that feature.

#7 Hey, thank you - alaSQL does look interesting. Documentation seems a bit sparse - I will have to play with it a bit to see if I can make it work. I already have it reading the CSV file and creating/reading tables. It might be the solution I was looking for. And yes, you are right - I would not need the CSV for SQLite. I need the option to back up to CSV, and since I already had that file, I hoped that I could use it.

#8 Yes, the docs for that have never been brilliant. For what you want to do, you would be much better off splitting the flow so that one branch writes the CSV and the other writes to the db. If you need to extract a table from the HTML, the html node will do that for you (or if it is too complex, try one of the cheerio nodes). The csv node followed by a file out node should create the CSV for you.
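Since the thread leaves the exec-node route open, here is one hedged way to automate the import outside of Node-RED: a small Python script that an exec node (or a cron job) could call. The file path, database name, table name, and the assumption that the first CSV row holds column names are all made up for the illustration, not taken from the thread.

```python
import csv
import sqlite3

def import_csv(csv_path, db_path, table="syslog"):
    """Load a headered CSV file into an SQLite table, creating it if needed."""
    con = sqlite3.connect(db_path)
    with open(csv_path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        header = next(reader)                      # first row = column names
        cols = ", ".join(f'"{c}"' for c in header)
        marks = ", ".join("?" for _ in header)
        con.execute(f'CREATE TABLE IF NOT EXISTS "{table}" ({cols})')
        con.executemany(
            f'INSERT INTO "{table}" ({cols}) VALUES ({marks})', reader
        )
    con.commit()
    con.close()

if __name__ == "__main__":
    import_csv("E:/Path/to/syslog.csv", "syslog.db")
```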
# Count the number of ways numbers 1,2,…,n can be divided into two sets of equal sum

Count the number of ways the numbers 1,2,…,n can be divided into two sets of equal sum. This is my recursive algorithm; what is wrong here?

int f(int sum,int i){//sum is the current sum, i is the current indx (<=n)
    if(i>n){
        return ((2*sum)==(n*(n+1)/2));//ie sum==totalsum/2
    }
    int cnt=f(sum,i+1)+f(sum+i,i+1);//move to i+1 or add i to sum and then move to i+1
    return cnt;
}

For example, if n=7, there are four solutions:

{1,3,4,6} and {2,5,7}
{1,2,5,6} and {3,4,7}
{1,2,4,7} and {3,5,6}
{1,6,7} and {2,3,4,5}

The answer is f(0,1) for any n (here n=7; n is globally defined). Thanks!

You are counting the number of ordered pairs of sets $(A,B)$ such that the elements of $A$ (and of $B$) sum to $n(n+1)/4$. Simply divide the result by $2$ to account for symmetries. Besides, your algorithm has complexity $\Omega(2^n)$ so I doubt you'll meet the time constraint in the link you provided.

You can get an $O(n^3)$-time dynamic-programming algorithm as follows: Let $OPT[n, x]$ be the number of ways the numbers in $\{1, \dots, n\}$ can be partitioned into an ordered pair of sets $(A,B)$ such that the elements of $A$ sum to $x$ and those of $B$ sum to $n(n+1)/2 - x$. Then, $OPT[0, 0] = 1$, $OPT[n, x] = 0$ if $n<0$ or $x<0$, and

$$OPT[n,x] = OPT[n-1, x-n] + OPT[n-1, x]$$

Now, assuming that $n(n+1)/2$ is even, the solution you are looking for is: $\frac{1}{2}OPT[n, n(n+1)/4]$.
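For concreteness, here is a sketch of the dynamic program from the answer above in Python, using a one-dimensional rolling table instead of the full $OPT[n, x]$ array (this is my rendering, not the answerer's code):

```python
def equal_sum_partitions(n):
    """Count unordered partitions of {1..n} into two sets of equal sum."""
    total = n * (n + 1) // 2
    if total % 2:                 # odd total: no equal-sum split exists
        return 0
    half = total // 2
    ways = [0] * (half + 1)       # ways[x] plays the role of OPT[k, x]
    ways[0] = 1
    for k in range(1, n + 1):     # add the numbers 1..n one at a time
        for x in range(half, k - 1, -1):  # descend so k is used at most once
            ways[x] += ways[x - k]
    return ways[half] // 2        # halve: ordered pairs (A, B) -> unordered

assert equal_sum_partitions(7) == 4   # matches the four solutions listed above
```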
# The Setup

I've made a few games (more like animations) using the Object Oriented method with base classes for objects that extend them, and objects that extend those, and found I couldn't wrap my head around expanding that system to larger game ideas. So I did some research and discovered the Entity-Component system of designing games. I really like the idea, and thoroughly understood the usefulness of it after reading Byte54's perfect answer here: What is the role of "systems" in a component-based entity architecture?

With that said, I have decided to create my current game idea using the described Entity-Component system. Having basic knowledge of C++ and SFML, I would like to implement the backbone of this entity component system using an unordered_multimap without classes for the entities themselves.

# Here's the idea:

An unordered_multimap stores entity IDs as the lookup term, while the value is an inherited Component object. Example:

____________________________
|ID        |Component      |
----------------------------
|0         |Movable        |
|0         |Accelable      |
|0         |Renderable     |
|1         |Movable        |
|1         |Renderable     |
|2         |Renderable     |
----------------------------

So, according to this map of objects, the entity with ID 0 has three components: Movable, Accelable, and Renderable. These component objects store the entity-specific data, such as the location, the acceleration, and render flags. The entity is simply an ID, with the components attached to that ID describing its attributes.

# Problem

I want to store the component objects within the map, allowing the map to have full ownership of the components. The problem I'm having is that I don't quite understand enough about pointers, shared pointers, and references in order to get that set up. How can I go about initializing these components, with their various member variables, within the unordered_multimap? Can the base component class take on the member variables of its child classes, when defining the map as unordered_multimap<int, Component>?

# Requirements

I need a system to be able to grab an entity, with all of its attached components, and access members from the components in order to do the necessary calculations and reassignments for position, velocity, etc.

• Frankly, for memory fragmentation reasons, I would find an implementation of an open-addressing hash map on the web (or do my own), because the STL one is closed addressing. And I would use a pool of components (not a multimap, but a map of pools), which has enough space for a dozen components by default (uninitialized), and then would chain-list other pools on demand. Also you can consider using the entity ID as a direct index into an array, and use a free-slots list when deletion happens. – v.oddou Jul 25 '14 at 2:14
• @v.oddou I don't understand all of what you're saying, and to me, it seems more complicated than I would like. Thanks for your information though! I have lots to learn... – natebot13 Jul 25 '14 at 16:09

map<Key, Value> (and its mates) holds Value, well, by value. If you try to give it an instance of a derived class to hold as value, most likely "slicing" will occur. More about slicing: https://stackoverflow.com/q/274626/1125702

std::unique_ptr<Component> seems exactly what you want. It manages the lifetime and memory of the owned object. By convention, only std::unique_ptr may "own" its objects, i.e. anything else may only temporarily refer to a given object (either by raw pointer or by references). It is the programmer's task to ensure that the std::unique_ptr will outlive any other references to the "owned" object.
Usage is quite simple.

// type aliases for convenience
using ComponentPtr = std::unique_ptr<Component>;
using ComponentContainer = std::unordered_map<std::string, ComponentPtr>;

// instance of container
ComponentContainer cc;

// insert component (quick way)
cc["SomeComponent"] = std::make_unique<SomeComponent>(arg1, arg2, ...);

// 'get' returns a raw pointer to the owned object; since the container
// stores Component pointers, a cast is needed to reach the derived type
auto * sc_ptr = static_cast<SomeComponent *>(cc["SomeComponent"].get());
sc_ptr->hello();

// reference, if '->' annoys you
SomeComponent & sc_ref = *static_cast<SomeComponent *>(cc["SomeComponent"].get());
sc_ref.hello();

• Ok, that makes sense... So, I could create the multimap as std::unordered_multimap<int, std::unique_ptr<Component>>? And how would I go about adding elements into my map? I've tried mymap.emplace(id, Movable()); but the compiler doesn't like the fact that Movable() isn't a Component class... – natebot13 Jul 25 '14 at 16:16
• @natebot13 Because you are passing an instance of the derived class, but your multimap wants a unique_ptr to Component. Try inserting std::make_unique<Movable>(). – Shadows In Rain Jul 27 '14 at 16:04
• The standard library doesn't have make_unique. – natebot13 Jul 30 '14 at 6:49
• @natebot13 make_unique is in C++14, but may be implemented trivially in C++11: stackoverflow.com/a/12580468/1125702 – Shadows In Rain Jul 30 '14 at 10:11

You might actually be better off using std::map for this one, depending on your exact use-case. I did some testing on our little rendering framework where I thought std::unordered_map would be good for maintaining a map of graphics resources (we are using std::map there now). I found that it's actually slower to use std::unordered_map since the hash function (e.g. std::hash()) slows things down significantly. Of course it might depend on the exact usage, but as always: profile profile profile. And of course, like mentioned above, make sure you don't slice your class and use pointers such as std::unique_ptr.
Lectures on Hilbert Cube Manifolds
A co-publication of the AMS and CBMS

Available Formats: Electronic ISBN: 978-1-4704-2388-9, Product Code: CBMS/28.E
List Price: $25.00, Individual Price: $20.00

• Book Details

CBMS Regional Conference Series in Mathematics
Volume: 28; 1976; 131 pp
MSC: Primary 57

The goal of these lectures is to present an introduction to the geometric topology of the Hilbert cube Q and separable metric manifolds modeled on Q, which are called here Hilbert cube manifolds or Q-manifolds. In the past ten years there has been a great deal of research on Q and Q-manifolds which is scattered throughout several papers in the literature. The author presents here a self-contained treatment of only a few of these results in the hope that it will stimulate further interest in this area. No new material is presented here and no attempt has been made to be complete. For example, the author has omitted the important theorem of Schori-West stating that the hyperspace of closed subsets of $[0,1]$ is homeomorphic to Q. In an appendix (prepared independently by R. D. Anderson, D. W. Curtis, R. Schori and G. Kozlowski) there is a list of problems which are of current interest. This includes problems on Q-manifolds as well as manifolds modeled on various linear spaces. The reader is referred to this for a much broader perspective of the field.

In the first four chapters, the basic tools which are needed in all of the remaining chapters are presented. Beyond this there seem to be at least two possible courses of action. The reader who is interested only in the triangulation and classification of Q-manifolds should read straight through (avoiding only Chapter VI). In particular the topological invariance of Whitehead torsion appears in Section 38. The reader who is interested in R. D. Edwards' recent proof that every ANR is a Q-manifold factor should read the first four chapters and then (with the single exception of 26.1) skip over to Chapters XIII and XIV.

• Chapters
• I. Preliminaries
• II. Z-Sets in Q
• III. Stability of Q-Manifolds
• IV. Z-Sets in Q-Manifolds
• V. Q-Manifolds of the Form M x [0,1)
• VI. Shapes of Z-Sets in Q
• VII. Near Homeomorphisms and the Sum Theorem
• VIII. Applications of the Sum Theorem
• IX. The Splitting Theorem
• X. The Handle Straightening Theorem
• XI. The Triangulation Theorem
• XII. The Classification Theorem
• XIII. Cell-Like Mappings
• XIV. The ANR Theorem

• Reviews
• "This is an important contribution, since it compiles known results from a variety of papers into one well-written source." – Mathematical Reviews

• Requests
Review Copy – for reviewers who would like to review an AMS book
Accessibility – to request an alternate format of an AMS title
# Most of the whole blood donated for transfusion is type O

Most of the whole blood donated for transfusion is type O, which is compatible with all blood types. Type O is especially important in emergencies which don't allow time for typing the victim's blood, but for this reason it is usually in short supply.

Given the above, which of the following must be true?

A) Most of the population has type O blood.
B) Transfusions of any but type O blood necessitate prior typing of the recipient's blood.
C) Only type O blood requires too much time for typing in emergencies.
D) Type O blood is especially useful because it is the same type as most people's blood.
E) Supplies of type O blood are always too low to meet emergencies.

dreambeliever replied (quoting zuberahmed's question):

A) Though the argument says most of the blood donated is type O, it does not have to be the most common blood type. There could be other reasons for it to be the most commonly donated type; maybe it is the most sought-after blood type by organizations like the Red Cross.
B) The argument says type O is compatible with all other blood types, so blood typing is not necessary when using type O. This clearly follows from the argument.
C) There is no evidence in the argument for this.
D) There is no evidence in the argument for this.
E) The argument does say that it is usually low in supply because it is used in emergencies. However, it does not say that there is not enough supply for emergencies.

zuberahmed: Thanks dreambeliever for the explanation. Nice one though ... kudos

Clearly B is the winner.

B for me.

+1 B

Another breakdown:

A) Most of the population has type O blood. Cannot assume this.
B) Transfusions of any but type O blood necessitate prior typing of the recipient's blood. In an emergency, time is limited. Hence, demand would be high for a blood type which can be given very quickly.
C) Only type O blood requires too much time for typing in emergencies. Contradictory.
D) Type O blood is especially useful because it is the same type as most people's blood. Assumption out of scope.
E) Supplies of type O blood are always too low to meet emergencies. Assumption out of scope.

So, the choice is B. (Mustafa Golam)

Izvos: I would have clicked option D. Since I believe income group and the magnitude of the health should not be a reason behind bad health service..

B

Reply to Izvos: D says "Type O blood is especially useful because it is the same type as most people's blood", but this information is not given in the passage... it only says O is compatible with other groups... hence the best answer is B. I'm curious to know why you think the answer is D (maybe that's a line of thought I haven't focused on). Can you please elucidate? It will help all of us in improving our chances of getting CRs right.

Clear B for me. Izvos: I would also like to know your line of thought for selecting D here.
# A question on Okounkov bodies

Let $$X$$ be an irreducible $$n$$-dimensional projective variety, and $$Y_n\subset Y_{n-1}\subset\dots\subset Y_1\subset X$$ a flag of irreducible subvarieties such that $$Y_i$$ has codimension $$i$$ in $$X$$ and each $$Y_i$$ is smooth at the point $$Y_n$$ for $$i = 1,\dots,n-1$$. Let $$D$$ be a Cartier divisor on $$X$$. Given a non-zero section $$s\in H^0(X,D)$$, define $$\nu_1 = \nu_1(s) = \mathrm{ord}_{Y_1}(s)$$. Now, if $$\{t=0\}$$ is a local equation for $$Y_1$$ in $$X$$, the section $$s$$ determines a section $$\widetilde{s} = st^{-\nu_1}\in H^0(X,D-\nu_1 Y_1)$$ that is not identically zero. Consider $$\widetilde{s}_{|Y_1}$$ and set $$\nu_2 = \nu_2(s) = \mathrm{ord}_{Y_2}(\widetilde{s}_{|Y_1})$$. Proceeding in this way we get a valuation $$\nu:H^0(X,D)\rightarrow \mathbb{Z}^n\cup\{\infty\}$$ given by $$\nu(s) = (\nu_1(s),\nu_2(s),\dots,\nu_n(s))$$. Consider the semi-group $$\Gamma(D) = \{(k,\nu(s))\:|\: 0\neq s\in H^0(X,kD), k\in\mathbb{Z}_{\geq 0}\}\subset\mathbb{Z}^{n+1}_{\geq 0}$$ and let $$\Sigma(D)\subset\mathbb{R}^{n+1}$$ be the closed convex cone generated by $$\Gamma(D)$$. The Okounkov body associated to $$D$$ with respect to the fixed flag is $$\Delta(D) = \Sigma(D)\cap (\mathbb{R}^n\times \{1\})$$

How can one compute Okounkov bodies, at least in simple examples? If $$D$$ is ample, is it enough to consider the values of the valuation on a basis of $$H^0(X,D)$$? For instance, if $$X = \mathbb{P}^2$$, the flag is given by a point in a line, and $$D$$ is the hyperplane section, what is $$\Delta(D)$$? Since $$\mathbb{P}^2$$ is toric it should be just a triangle.

• I'll try to return to answer the example you gave, but just a brief note: you won't be able to get away just by working with global sections of some ample $D$, since such a divisor need not have any sections. You can probably take some sufficiently ample multiple of $D$ and then divide. Dec 8 '19 at 23:44
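• For the $$\mathbb{P}^2$$ example, here is a sketch of the standard toric computation (coordinates and flag chosen for concreteness). Take homogeneous coordinates $$[x_0:x_1:x_2]$$, let $$Y_1 = \{x_0=0\}$$, $$Y_2 = [0:0:1]\in Y_1$$, and let $$D$$ be a line. Then $$H^0(X,kD)$$ has the monomial basis $$x_0^a x_1^b x_2^c$$ with $$a+b+c=k$$, and for a monomial $$\nu_1 = a$$ (order of vanishing along $$Y_1$$), while after dividing by $$x_0^a$$ and restricting to $$Y_1\cong\mathbb{P}^1$$ one finds $$\nu_2 = b$$ (order of vanishing at $$Y_2$$). Distinct degree-$$k$$ monomials have distinct values $$(a,b)$$, so the valuation of a general section is the minimum over its monomials, and
$$\nu\bigl(H^0(X,kD)\setminus\{0\}\bigr) = \{(a,b)\in\mathbb{Z}^2_{\geq 0} : a+b\leq k\},
\qquad
\Delta(D) = \{(t_1,t_2)\in\mathbb{R}^2_{\geq 0} : t_1+t_2\leq 1\},$$
the standard triangle, whose area $$\tfrac{1}{2}$$ equals $$(D^2)/2!$$ as the general theory predicts. Note that in general the body is defined through all multiples $$kD$$, not a basis of $$H^0(X,D)$$ alone, although in this toric example the $$k=1$$ values already span the correct simplex.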
# Picard MarkDuplicates: identifying which reads are duplicates of each other

Member: Hello, I've been using the Picard MarkDuplicates tool. I'd like to identify which reads are duplicates of each other (i.e., if read.1234 is a duplicate of read.5678, I want to be able to retrieve this relationship). Does the MarkDuplicates output indicate this in any way?

While I could group reads together if they share the same start coordinate listed in the BAM file, this gets a little tricky if the reads align to the minus strand, or if there are mismatches in the first couple of nucleotides of a read. I think the MarkDuplicates program must be collecting this information behind the scenes when it is finding duplicates.

Thank you very much for your help.
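One possible approach (not from an official reply in this thread; please verify against your Picard version's documentation): recent Picard releases expose a MarkDuplicates option, `TAG_DUPLICATE_SET_MEMBERS=true`, which tags every read in a duplicate set with a DI (duplicate set index) and DS (duplicate set size) tag, so the read.1234/read.5678 relationship can be read back directly. A minimal sketch of recovering the groups with pysam (the BAM file name is a placeholder):

```python
# Sketch: recover "which reads are duplicates of each other" from a BAM
# produced by Picard MarkDuplicates with TAG_DUPLICATE_SET_MEMBERS=true.
# Reads sharing a DI (duplicate set index) tag value belong to one set.
from collections import defaultdict
import pysam

dup_sets = defaultdict(set)
with pysam.AlignmentFile("marked_duplicates.bam", "rb") as bam:
    for read in bam:
        if read.has_tag("DI"):
            dup_sets[read.get_tag("DI")].add(read.query_name)

# Each value is the set of read names that are duplicates of one another.
for set_index, names in sorted(dup_sets.items()):
    if len(names) > 1:
        print(set_index, sorted(names))
```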
# Elementary Algebra, v. 1.0 by John Redden

## 2.8 Linear Inequalities (One Variable)

### Learning Objectives

1. Identify linear inequalities and check solutions.
2. Solve linear inequalities and express the solutions graphically on a number line and in interval notation.
3. Solve compound linear inequalities and express the solutions graphically on a number line and in interval notation.
4. Solve applications involving linear inequalities and interpret the results.

## Definition of a Linear Inequality

A linear inequality is a mathematical statement that relates a linear expression as either less than or greater than another. The following are some examples of linear inequalities, all of which are solved in this section:

$3x+7<16$, $−2x+1≥21$, $−7(2x+1)<1$, $5x−3(2x−1)≥2(x−3)$, and $−3<2x+5<17$.

A solution to a linear inequality is a real number that will produce a true statement when substituted for the variable. Linear inequalities have either infinitely many solutions or no solution. If there are infinitely many solutions, graph the solution set on a number line and/or express the solution using interval notation.

Example 1: Are $x=−2$ and $x=4$ solutions to $3x+7<16$?

Solution: Substitute the values for x, simplify, and check to see if we obtain a true statement: $3(−2)+7=1<16$ is true, while $3(4)+7=19<16$ is false.

Answer: $x=−2$ is a solution and $x=4$ is not.

## Algebra of Linear Inequalities

All but one of the techniques learned for solving linear equations apply to solving linear inequalities. You may add or subtract any real number to both sides of an inequality, and you may multiply or divide both sides by any positive real number to create equivalent inequalities. For example, starting with the true statement $10>−5$, we have $10−7>−5−7$, that is, $3>−12$, and $\frac{10}{5}>\frac{−5}{5}$, that is, $2>−1$. Both subtracting 7 from each side and dividing each side by +5 results in an equivalent inequality that is true.

Example 2: Solve and graph the solution set: $3x+7<16$.

Solution: Subtract 7 from both sides to obtain $3x<9$, and then divide both sides by 3 to obtain $x<3$. It is helpful to take a minute and choose a few values in and out of the solution set, substitute them into the original inequality, and then verify the results. As indicated, you should expect $x=0$ to solve the original inequality, but $x=5$ should not. Checking in this manner gives a good indication that the inequality is solved correctly. This can be done mentally.

Answer: Interval notation: $(−∞, 3)$

When working with linear inequalities, a different rule applies when multiplying or dividing by a negative number. To illustrate the problem, consider the true statement $10>−5$ and divide both sides by −5. Dividing by −5 results in the false statement $−2>1$. To retain a true statement, the inequality must be reversed: $−2<1$. The same problem occurs when multiplying by a negative number. This leads to the following new rule: when multiplying or dividing by a negative number, reverse the inequality. It is easy to forget to do this, so take special care to watch for negative coefficients. In general, given algebraic expressions A and B, where c is a positive nonzero real number, we have the following properties of inequalities (properties used to obtain equivalent inequalities and used as a means to solve them):

Addition and subtraction properties: if $A<B$, then $A+c<B+c$ and $A−c<B−c$.
Multiplication and division properties: if $A<B$, then $cA<cB$ and $\frac{A}{c}<\frac{B}{c}$, but $−cA>−cB$ and $\frac{A}{−c}>\frac{B}{−c}$.

We use these properties to obtain an equivalent inequality, one with the same solution set, where the variable is isolated. The process is similar to solving linear equations.

Example 3: Solve: $−2x+1≥21$.

Solution: Subtract 1 from both sides: $−2x≥20$. Divide both sides by −2 and reverse the inequality: $x≤−10$.

Answer: Interval notation: $(−∞, −10]$

Example 4: Solve: $−7(2x+1)<1$.
Solution: Distribute: $−14x−7<1$. Add 7 to both sides: $−14x<8$. Divide both sides by −14 and reverse the inequality: $x>−\frac{4}{7}$.

Answer: Interval notation: $(−\frac{4}{7}, ∞)$

Example 5: Solve: $5x−3(2x−1)≥2(x−3)$.

Solution: Simplify the left side: $−x+3≥2x−6$. Add x and 6 to both sides: $9≥3x$, so $x≤3$.

Answer: Interval notation: $(−∞, 3]$

Try this! Solve: $3−5(x−1)≤28$.

Answer: $[−4, ∞)$

## Compound Inequalities

Following are some examples of compound linear inequalities: $−3<2x+5<17$ and $−1≤\frac{1}{2}x−3<1$. These compound inequalities are actually two inequalities in one statement joined by the word “and” or by the word “or.” For example, $−3<2x+5<17$ is a compound inequality because it can be decomposed as follows: $−3<2x+5$ and $2x+5<17$. Solve each inequality individually, and the intersection of the two solution sets solves the original compound inequality. While this method works, there is another method that usually requires fewer steps. Apply the properties of this section to all three parts of the compound inequality with the goal of isolating the variable in the middle of the statement to determine the bounds of the solution set.

Example 6: Solve: $−3<2x+5<17$.

Solution: Subtract 5 from all three parts: $−8<2x<12$. Divide all three parts by 2: $−4<x<6$. Interval notation: $(−4, 6)$

Example 7: Solve: $−1≤\frac{1}{2}x−3<1$.

Solution: Add 3 to all three parts: $2≤\frac{1}{2}x<4$. Multiply all three parts by 2: $4≤x<8$. Interval notation: $[4, 8)$

It is important to note that when multiplying or dividing all three parts of a compound inequality by a negative number, you must reverse all of the inequalities in the statement. For example, such a step may produce an answer of the form $5>x>−10$. The answer above can be written in an equivalent form, where smaller numbers lie to the left and the larger numbers lie to the right, as they appear on a number line: $−10<x<5$. Using interval notation, write $(−10, 5)$.

Try this! Solve: $−8≤2(−3x+5)<34$.

Answer: $(−4, 3]$

For compound inequalities with the word “or” you must work both inequalities separately and then consider the union of the solution sets. Values in this union solve either inequality.

Example 8: Solve:

Solution: Solve each inequality and form the union by combining the solution sets.

Answer: Interval notation: $(−∞, 3)∪[6, ∞)$

Try this! Solve: . Answer: $(−∞,−1)∪(\frac{3}{2}, ∞)$

## Applications of Linear Inequalities

Some of the key words and phrases that indicate inequalities are summarized below:

| Key Phrases | Translation |
| --- | --- |
| A number is at least 5. A number is 5 or more, inclusive. | $x≥5$ |
| A number is at most 3. A number is 3 or less, inclusive. | $x≤3$ |
| A number is strictly less than 4. A number is less than 4, noninclusive. | $x<4$ |
| A number is greater than 7. A number is more than 7, noninclusive. | $x>7$ |
| A number is in between 2 and 10. | $2<x<10$ |
| A number is at least 5 and at most 15. A number may range from 5 to 15. | $5≤x≤15$ |

As with all applications, carefully read the problem several times and look for key words and phrases. Identify the unknowns and assign variables. Next, translate the wording into a mathematical inequality. Finally, use the properties you have learned to solve the inequality and express the solution graphically or in interval notation.

Example 9: Translate: Five less than twice a number is at most 25.

Solution: First, choose a variable for the unknown number and identify the key words and phrases; let n represent the number.

Answer: $2n−5≤25$. The key phrase “is at most” indicates that the quantity has a maximum value of 25 or smaller.

Example 10: The temperature in the desert can range from 10°C to 45°C in one 24-hour period. Find the equivalent range in degrees Fahrenheit, F, given that $C=\frac{5}{9}(F−32)$.

Solution: Set up a compound inequality where the temperature in Celsius is inclusively between 10°C and 45°C. Then substitute the expression equivalent to the Celsius temperature in the inequality and solve for F: $10≤\frac{5}{9}(F−32)≤45$. Multiply all three parts by $\frac{9}{5}$: $18≤F−32≤81$. Add 32: $50≤F≤113$.

Answer: The equivalent Fahrenheit range is from 50°F to 113°F.
Example 11: In the first four events of a meet, a gymnast scores 7.5, 8.2, 8.5, and 9.0. What must she score on the fifth event to average at least 8.5?

Solution: The average must be at least 8.5; this means that the average must be greater than or equal to 8.5. Let x be the fifth score: $\frac{7.5+8.2+8.5+9.0+x}{5}≥8.5$, so $33.2+x≥42.5$ and $x≥9.3$.

Answer: She must score at least 9.3 on the fifth event.

### Key Takeaways

• Inequalities typically have infinitely many solutions. The solutions are presented graphically on a number line or using interval notation or both.
• All but one of the rules for solving linear inequalities are the same as for solving linear equations. If you divide or multiply an inequality by a negative number, reverse the inequality to obtain an equivalent inequality.
• Compound inequalities involving the word “or” require us to solve each inequality and form the union of each solution set. These are the values that solve at least one of the given inequalities.
• Compound inequalities involving the word “and” require the intersection of the solution sets for each inequality. These are the values that solve both or all of the given inequalities.
• The general guidelines for solving word problems apply to applications involving inequalities. Be aware of a new list of key words and phrases that indicate a mathematical setup involving inequalities.

### Topic Exercises

Part A: Checking for Solutions

Determine whether the given number is a solution to the given inequality.

1.
2.
3.
4.
5.
6.
7. ; $x=−10$
8.
9. ; $x=2$
10. ; $x=1$

Part B: Solving Linear Inequalities

Solve and graph the solution set. In addition, present the solution set in interval notation.

11. $x+5>1$
12. $x−3<−4$
13. $6x≤24$
14. $4x>−8$
15. $−7x≤14$
16. $−2x+5>9$
17. $7x−3≤25$
18. $12x+7>−53$
19. $−2x+5<−7$
20. $−2x+4≤4$
21. $−15x+10>20$
22. $−8x+1≤29$
23. $\frac{1}{7}x−3<1$
24. $\frac{1}{2}x−\frac{1}{3}>\frac{2}{3}$
25. $\frac{5}{3}x+\frac{1}{2}≤\frac{1}{3}$
26. $−\frac{3}{4}x−\frac{1}{2}≥\frac{5}{2}$
27. $−\frac{1}{5}x+\frac{3}{4}<−\frac{1}{5}$
28. $−\frac{2}{3}x+1<−3$
29. $2(−3x+1)<14$
30. $−7(x−2)+1<15$
31. $9x−3(3x+4)>−12$
32. $12x−4(3x+5)≤−2$
33. $5−3(2x−6)≥−1$
34. $9x−(10x−12)<22$
35. $2(x−7)−3(x+3)≤−3$
36. $5x−3>3x+7$
37. $4(3x−2)≤−2(x+3)+12$
38. $5(x−3)≥15x−(10x+4)$
39. $12x+1>2(6x−3)−5$
40. $3(x−2)+5>2(3x+5)+2$
41. $−4(3x−1)+2x≤2(4x−1)−3$
42. $−2(x−2)+14x<7(2x+1)$

Set up an algebraic inequality and then solve it.

43. The sum of three times a number and 4 is greater than negative 8.
44. The sum of 7 and three times a number is less than or equal to 1.
45. When a number is subtracted from 10, the result is at most 12.
46. When 5 times a number is subtracted from 6, the result is at least 26.
47. If five is added to three times a number, then the result is less than twenty.
48. If three is subtracted from two times a number, then the result is greater than or equal to nine.
49. Bill earns $12.00 for the day plus $0.25 for every person he gets to register to vote. How many people must he register to earn at least $50.00 for the day?
50. With a golf club membership costing $100 per month, each round of golf costs only $25.00. How many rounds of golf can a member play if he wishes to keep his costs to $250 per month at most?
51. Joe earned scores of 72, 85, and 75 on his first three algebra exams. What must he score on the fourth exam to average at least 80?
52. Maurice earned 4, 7, and 9 points out of 10 on the first three quizzes. What must he score on the fourth quiz to average at least 7?
53. A computer is set to shut down if the temperature exceeds 40°C. Give an equivalent statement using degrees Fahrenheit. (Hint: $C=\frac{5}{9}(F−32)$.)
54.
A certain brand of makeup is guaranteed not to run if the temperature is less than 35°C. Give an equivalent statement using degrees Fahrenheit.

Part C: Compound Inequalities

Solve and graph the solution set. In addition, present the solution set in interval notation.

55. $−1
56. $−10≤5x<20$
57. $−2≤4x+6<10$
58. $−10≤3x−1≤−4$
59. $−15<3x−6≤6$
60. $−22<5x+3≤3$
61. $−1≤\frac{1}{2}x−5≤1$
62. $1<8x+5<5$
63. $−\frac{1}{5}≤\frac{2}{3}x−\frac{1}{5}<\frac{4}{5}$
64. $−\frac{1}{2}<\frac{3}{4}x−\frac{2}{3}≤\frac{1}{2}$
65. $−3≤3(x−1)≤3$
66. $−12<6(x−3)≤0$
67. $4<−2(x+3)<6$
68. $−5≤5(−x+1)<15$
69. $−\frac{3}{2}≤\frac{1}{4}(\frac{1}{2}x−1)+\frac{3}{4}<\frac{3}{2}$
70. $−4≤−\frac{1}{3}(3x+\frac{1}{2})<4$
71. $−2≤12−2(x−3)≤20$
72. $−5<2(x−1)−3(x+2)<5$
73.
74.
75.
76.
77.
78.
79.
80.
81.
82.
83.
84. $5x+3<4$ or $5−10x>4$
85.
86.
87.
88.

Set up a compound inequality for the following and then solve.

89. Five more than two times some number is between 15 and 25.
90. Four subtracted from three times some number is between −4 and 14.
91. Clint wishes to earn a B, which is at least 80 but less than 90. What range must he score on the fourth exam if the first three were 65, 75, and 90?
92. A certain antifreeze is effective for a temperature range of −35°C to 120°C. Find the equivalent range in degrees Fahrenheit.
93. The average temperature in London ranges from 23°C in the summer to 14°C in the winter. Find the equivalent range in degrees Fahrenheit.
94. If the base of a triangle measures 5 inches, then in what range must the height be for the area to be between 10 square inches and 20 square inches?
95. A rectangle has a length of 7 inches. Find all possible widths if the area is to be at least 14 square inches and at most 28 square inches.
96. A rectangle has a width of 3 centimeters. Find all possible lengths, if the perimeter must be at least 12 centimeters and at most 26 centimeters.
97. The perimeter of a square must be between 40 feet and 200 feet. Find the length of all possible sides that satisfy this condition.
98. If two times an angle is between 180 degrees and 270 degrees, then what are the bounds of the original angle?
99. If three times an angle is between 270 degrees and 360 degrees, then what are the bounds of the original angle?

Part D: Discussion Board Topics

100. Research and discuss the use of set-builder notation with intersections and unions.
101. Can we combine logical “or” into one statement like we do for logical “and”?

### Answers

1: Yes
3: No
5: Yes
7: Yes
9: Yes
11: $x>−4$; $(−4, ∞)$
13: $x≤4$; $(−∞, 4]$
15: $x≥−2$; $[−2, ∞)$
17: $x≤4$; $(−∞, 4]$
19: $x>6$; $(6, ∞)$
21: $x<−\frac{2}{3}$; $(−∞, −\frac{2}{3})$
23: $x<28$; $(−∞, 28)$
25: $x≤−\frac{1}{10}$; $(−∞, −\frac{1}{10}]$
27: $x>\frac{19}{4}$; $(\frac{19}{4}, ∞)$
29: $x>−2$; $(−2, ∞)$
31: $∅$
33: $x≤4$; $(−∞, 4]$
35: $x≥−20$; $[−20, ∞)$
37: $x≤1$; $(−∞, 1]$
39: R
41: $x≥\frac{1}{2}$; $[\frac{1}{2}, ∞)$
43: $n>−4$
45: $n≥−2$
47: $n<5$
49: Bill must register at least 152 people.
51: Joe must earn at least an 88 on the fourth exam.
53: The computer will shut down when the temperature exceeds 104°F.
55: $−4<x<2$; $(−4, 2)$
57: $−2≤x<1$; $[−2, 1)$
59: $−3<x≤4$; $(−3, 4]$
61: $8≤x≤12$; $[8, 12]$
63: $0≤x<\frac{3}{2}$; $[0, \frac{3}{2})$
65: $0≤x≤2$; $[0, 2]$
67: $−6<x<−5$; $(−6, −5)$
69: $−16≤x<8$; $[−16, 8)$
71: $−1≤x≤10$; $[−1, 10]$
73: $(−∞, −5]∪(3, ∞)$
75: $(−∞, 0)∪(1, ∞)$
77: R
79: $(−∞, 2)∪(6, ∞)$
81: $x≤0$; $(−∞, 0]$
83: $x<4$; $(−∞, 4)$
85: $−4<x<6$; $(−4, 6)$
87: $x<3$; $(−∞, 3)$
89: $5<n<10$
91: Clint must earn a score in the range from 90 to 100.
93: The average temperature in London ranges from 57.2°F to 73.4°F.
95: The width must be at least 2 inches and at most 4 inches.
97: Sides must be between 10 feet and 50 feet.
99: The angle is between 90 degrees and 120 degrees.
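A quick way to double-check these answers is with a computer algebra system; for example, assuming Python with sympy is available, a short sketch such as the following reproduces several of the intervals above:

```python
# Check a few inequality answers with sympy's inequality solvers.
from sympy import symbols, solve_univariate_inequality, reduce_inequalities

x = symbols('x', real=True)

# Exercise 11: x + 5 > 1  ->  -4 < x, i.e., the interval (-4, oo)
print(solve_univariate_inequality(x + 5 > 1, x))

# Example 3: -2x + 1 >= 21  ->  x <= -10, i.e., (-oo, -10]
print(solve_univariate_inequality(-2*x + 1 >= 21, x))

# Example 6, a compound "and" inequality: -3 < 2x + 5 < 17  ->  -4 < x < 6
print(reduce_inequalities([2*x + 5 > -3, 2*x + 5 < 17], x))
```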
# Rapid Fabrication of Graphene Field-Effect Transistors with Liquid-metal Interconnects and Electrolytic Gate Dielectric Made of Honey

## Abstract

Historically, graphene-based transistor fabrication has been time-consuming due to the high demand for carefully controlled Raman spectroscopy, physical vapor deposition, and lift-off processes. For the first time in a three-terminal graphene field-effect transistor embodiment, we introduce a rapid fabrication technique that implements non-toxic eutectic liquid-metal Galinstan interconnects and an electrolytic gate dielectric comprised of honey. The goal is to minimize cost and turnaround time between fabrication runs, thereby allowing researchers to focus on the characterization of graphene phenomena that drive innovation rather than a lengthy device fabrication process that hinders it. We demonstrate characteristic Dirac peaks for a single-gate graphene field-effect transistor embodiment that exhibits hole and electron mobilities of 213 ± 15 and 166 ± 5 cm²/V·s respectively. We discuss how our methods can be used for the rapid determination of graphene quality and can complement Raman spectroscopy techniques. Lastly, we explore a PN junction embodiment which further validates that our fabrication techniques can rapidly adapt to alternative device architectures and greatly broaden the research applicability.

## Introduction

The lowering cost of graphene synthesis has created opportunities that include lightweight electronics such as wearable, flexible, and electromagnetic sensors1. However, to explore such scientific phenomena one must be trained and certified in sophisticated optical characterization and microfabrication techniques to design and fabricate graphene devices such as graphene field-effect transistors (GFETs). Furthermore, the fabrication steps must be conducted in a controlled environment to increase device yield and minimize exposure to harmful chemicals. This process can be quite time-consuming and demanding depending on the scope of the device architecture and the allotted time given for expenditure of project funds. Such drawbacks reduce the accessibility of graphene research across academic and private institutions. In a typical GFET fabrication process, graphene is synthesized via chemical vapor deposition and transferred onto a target substrate via delamination, and the graphene quality is measured via Raman spectroscopy. Lift-off processes are then used to construct contact electrodes, gate dielectrics, and gate contacts2,3,4,5,6,7. Unfortunately, numerous articles have reported degradation in graphene transistor performance due to mechanical and electrical strain put on graphene that is a direct result of photolithography and physical vapor deposition8,9,10. In addition, graphene has been shown to exhibit high contact resistance with standard electrode materials relative to its size, which creates non-ideal performance across the board11. This performance limitation will potentially force a researcher back into the cleanroom to identify drawbacks in his or her fabrication process. In this paper, we demonstrate the rapid fabrication of a three-terminal graphene field-effect transistor embodiment with the use of commercially available eutectic liquid-metal Galinstan interconnects and an electrolytic gate dielectric comprised of honey (LM-GFET).
We demonstrate GFET performance with the proposed inexpensive, non-traditional materials comparable to that of much more common ion gel materials12, 13. Galinstan is a non-toxic liquid-metal alloy consisting of 68.5% gallium, 21.5% indium, and 10% tin14 that possesses desirable conformal properties. In a previous study, Galinstan device interconnects exhibited less than a 5.5% change in resistance for a graphene two-terminal device when subjected to repeated deformations as small as 4.5 mm radius of curvature15. Moreover, graphene transistors can greatly benefit from the conductivity of Galinstan (2.30 × 10⁶ S/m), a desirable vapor pressure (< 1 × 10⁻⁶ Pa at 500 °C) compared with mercury (0.1713 Pa at 20 °C), and a stable liquid state across a broad temperature range (−19 °C to 1300 °C)16. Honey is typically produced via sugary secretions from bees, harvested, and packaged under various brand names for commercial food consumption. Honey contains various concentrations of water, vitamins, minerals, amino acids, and sugars (fructose, glucose, sucrose), which can be controlled via bee production and honey extraction techniques17. To our benefit, honey forms an ionic gel-like solution analogous to ion gels. Ion gels consist of room-temperature ionic liquids and gelating triblock copolymers12, 13. Recently, ion gels have demonstrated ideal performance as electrolytic gate dielectrics for flexible GFET devices due to an ability to produce the extremely high capacitance and high dielectric constants required for high on-current and low-voltage operation12. The introduction of honey as an electrolytic gate dielectric is advantageous for rapid fabrication of GFET devices due to its commercial availability, non-toxicity, control of ionic content that can be used to alter dielectric properties, and quick mixing that reduces preparation time. Currently, ion gels require special preparation in an atmosphere-controlled environment to mitigate outgassing and combustion, which only adds to their complexity.

## Results and Discussion

### Transistor Architecture and Operation

The transistor architecture of the LM-GFET device is shown in Fig. 1a and an image of the LM-GFET is shown in Fig. 1c. The device is comprised of liquid-metal Galinstan source and drain electrodes that are overlaid on monolayer graphene transferred to Polyethylene Terephthalate (PET). A detailed fabrication process for the LM-GFET device is described in the Methods section. Honey was used as a gel-like electrolytic gate dielectric to generate an enhanced electric field-effect response above graphene. Due to the presence of ions and the high polarizability of honey, a diffusion of charge is formed at the thin layer between honey and graphene. This layer forms an electric double layer and is typical when ionic liquids contact conductive materials12, 13. Due to the nanoscale separation distance of the electric double layer, usually from 1-10 nm, a large charge gradient is formed on the surface of graphene. For example, in the case a gate electrode is positively charged and submerged in honey, anions will accumulate at the gate/honey interface and cations will accumulate at the honey/graphene interface. The resultant electric double layer at the honey/graphene interface will then alter the graphene conductivity, Fig. 1d. The opposite is true for the case in which the gate electrode is negatively charged.

### Single-Gate Transistor Characteristics

A schematic representation of the electrical measurements is shown in Fig. 2a.
Fig. 2b–d illustrates graphene transport characteristics for a single-gated LM-GFET device. The V-shaped curve of the relationship between top-gate voltage $${V}_{TG}^{\ast }$$ and drain-to-source current I_ds in Fig. 2c highlights the ambipolar operation that is characteristic of any graphene field-effect transistor, and provides designers the flexibility to bias the device in either hole or electron conduction mode. A well-documented model extraction technique was utilized to extract graphene parameters from the device's transfer curve18. The model fit in Fig. 2d determined hole and electron mobilities of 213 ± 15 and 166 ± 5 cm²/V·s respectively, at a drain bias of 100 mV. Despite the rapid and inexpensive fabrication process, the LM-GFET devices exhibited performance comparable to that of much more elaborately fabricated GFET devices1, 12, 13. In addition, Fig. 2c illustrates the device's transconductance, which reaches a considerable value of 38 μA/V with a large degree of symmetry and linearity within the operational range of −0.5 V to 0.5 V. This linear, ambipolar transconductance has significant utility in ambipolar electronic circuits such as radio frequency mixers, digital modulators, and phase detectors19. Figure 2b illustrates the I_ds−V_ds response for the various transconductances associated with different $${V}_{TG}^{\ast }$$ values. Fortunately, due to the monotonic transconductance of the LM-GFET devices, the drain-to-source voltage V_ds sweep does not demonstrate an inflection point. In particular, the varying $${V}_{TG}^{\ast }$$ curves do not intersect one another, which cannot be presumed to occur in all standard GFET devices20. Instead, the I_ds−V_ds trend is encouraging because the drain currents diverge at higher V_ds biases. In a GFET sensor application, a designer can first bias the device at a desired V_ds and I_ds; an external stimulus can then trigger a change in the device's gate voltage, which will create a significant change in I_ds. Note that the dielectric constant of honey was measured to be 21 and the gate capacitance was measured to be 2.3 μF/cm² using an LCR meter, which is very comparable to what is described in the literature21, 22.

### Comparison of Ion Gel with an Electrolytic Gate Dielectric Made of Honey

Transport characteristics were compared with an ion gel LM-GFET to validate the use of honey as an electrolytic gate dielectric. The ion gel used was comprised of 1-Ethyl-3-Methylimidazolium Bis(Trifluoromethylsulfonyl)imide ([EMIM][TFSI]) ionic liquid in Polystyrene-Poly(Ethylene Oxide) (PS-PEO) diblock copolymer. A detailed description of the ion gel synthesis is given in the Methods section. Figure 3a illustrates the difference in the drain-to-source current ON/OFF ratio as a function of top-gate voltage (I_ON/I_OFF − V_TG). The transport characteristics are presented as a ratio of I_ON and I_OFF due to the significant difference in drain-to-source current between the sample measurements. The maximum operating current of the device with ion gel was 124 ± 10 μA with an I_ON/I_OFF = 1.7, where V_ds = 1.2 V. The maximum operating current of the device with honey was 620 ± 40 μA with an I_ON/I_OFF = 3.1. A model fit determined that the hole and electron mobilities were 61 ± 3 and 189 ± 3 cm²/V·s for the ion gel device and 100 ± 4 and 126 ± 5 cm²/V·s respectively for the honey LM-GFET. The importance of a high-capacitance electrolytic gate dielectric became clear in our comparison of the honey and ion gel ON/OFF ratios.
The high capacitance of honey (~2 μF/cm 2) enabled a significant electric field-effect, hence enabled a large drain current. The opposite was seen for the lower capacitance of the ion gel (0.1 μF/cm 2). The reduced field-effect of the ion gel device may be limited by a relatively high gate leakage current that is typical of electrolytic gate dielectrics23. The gate leakage current of the ion gel was measured to be 20 μA, which was 4x greater than the honey. Although, the gate leakage was high for the ion gel device, the formation of the electric double layer at the graphene/ion gel interface allowed for ambipolar operation. In addition, the dirac peaks measured for ion gel devices were not as sharp as the dirac peaks measured by the honey devices. This phenomenon may be indicative of charge inhomogeneity at the electric double layer surface and can be caused by improper/incomplete mixing of the ionic liquid and copolymer solutions. Due to the liquid properties of honey and liquid-metal, these materials may be rinsed in boiling deionized water after initial measurements have been completed. The authors include an investigation of the electrical performance of graphene before and after rinsing of the liquid-metal electrodes and electrolytic gate dielectric made of honey. It was determined there is a slight change in the electrical performance of the graphene devices after rinsing, Fig. 3b. Before rinsing the extracted hole and electron branch resistances was 917 ± 6 Ω and 1062 ± 1 Ω. After rinsing the extracted electron and hole branch resistances increased to 1065 ± 15 Ω and 1170 ± 20 Ω. The slight increase in resistance is assumed to be due to trace amounts of liquid metal residue that remained after rinsing. The liquid metal residue gradually oxidizes over time and contributes to a parasitic resistance at the contacts. Future efforts can investigate dual-rinse processes that include weak solvents followed by a DI water rinse. Despite a DI water rinse, there is no significant shift of the charge neutrality point (Dirac peak). Additionally, the mobilities extracted from the model fit show negligible, if any, degradation. The hole and electron mobilities before rinsing the LM-GFET are 46 ± 6 and 189 ± 1 cm 2/V · s respectively at a drain bias of 100 mV. After rinsing, hole and electron mobilities are 88 ± 3 and 165 ± 5 cm 2/V · s respectively, at a drain bias of 100 mV. Notably, the extracted hole mobility increased, while the electron mobility slightly decreased. While this can be due to the nominal variance of each of the extracted values, it is believed that the DI water with subsequent heating removed several charge impurities that would have otherwise contributed to cross-sections of electron/hole scattering24. Therefore, the experimental results suggest that the proposed rinse process minimally impacts device performance, and yet allows designers to rapidly and prudently explore new device architectures with the same graphene material. The reuse of graphene is an incentive to reduce carbon waste. ### Rapid Characterization of Graphene Quality to Aid Raman Spectroscopy Raman spectroscopy is the industry standard for graphene characterization and provides a researcher with the number of graphene layers, as well as the impurities or dopants present within a graphene material25. However, the equipment required to perform these measurements is costly, and the measurements can be time-consuming. 
Due to the optical magnification necessary for spatially dependent graphene Raman measurements, investigations are limited to a few grain boundaries. Moreover, inhomogeneity across graphene due to topographical imperfections and varying concentration of dopants can change quite drastically in large scale devices26. In GFET devices, charge carriers encounter numerous grain boundaries from source to drain. Transport characteristics such as Current-Voltage (I–V) measurements average many grain boundaries and enable a way to analyze the electrical performance of large graphene channels (on the order of several hundreds of microns). Raman spectroscopy of graphene transferred onto polymer substrates is quite challenging for unspecialized laboratories. The reason being, there exist strong polymeric vibrational modes thousands of times more sensitive to Raman scattering near the G band of graphene, Fig. 4a. Moreover, the G Band is not easily identifiable and one must take great care when analyzing Raman data. To identify the G band, static measurements are required and consist of several prolonged exposure acquisitions that increase the signal-to-noise ratio27, Fig. 4b. Furthermore, subtraction techniques are required to remove the background PET Raman signature, so that, the graphene (I 2D /I G ) ratios can be computed to extract the number of graphene layers. The authors’ best attempt for Raman Spectroscopy characterization of graphene on PET with post-processing to remove the PET signature took approximately 1 hour per sample for a single spot. To conduct Raman measurements of three separate samples (which is the industry standard) will take up to almost 3 hours with post-processing included. Moreover, a Raman Spectroscopy map of a graphene channel will take much longer. The authors utilized their proposed LM-GFET rapid fabrication methods to compare graphene on PET samples with both high and low quality that were previously determined via 514.5 nm Raman spectroscopy. The intention for this experiment was to validate the use of our methods as a useful tool to complement Raman Spectroscopy data for large scale graphene devices. As previously discussed, Fig. 1b illustrates Raman spectroscopy measurements for the graphene sample used in the single-gate transfer characteristics section and illustrated in Fig. 2. The extracted (I 2D /I G ) ratios for Sites A–C are 3.10, 1.92, and 2.13 respectively, thus indicated high quality monolayer graphene. As was determined from I–V measurements, the hole and electron mobilities for the high quality graphene was on the order of 213 ± 15 and 166 ± 5 cm 2/V · s respectively for a drain bias of 100 mV. On the other hand, the extracted (I 2D /I G ) ratios for a sample of low quality graphene was determined to be 2.44, 1.52, and 0.70 respectively. Despite being labeled low quality via the Raman spectroscopy measurements, our rapid fabrication methods determined the hole and electron mobilities were 128 ± 4 and 101 ± 4 cm 2/V · s. Although, the computed mobilities are lower than what was measured in the high quality samples, the low quality samples were respectively still quite comparable. It has become common practice by commercial graphene manufacturers to provide graphene quality via Raman spectroscopy data of a few (1–3) spots with purchased graphene samples. Due to comparable results for the high and low quality graphene samples, there is reason to believe that our methods can complement Raman Spectroscopy measurements of large samples. 
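As an illustration of such a transport-based quality check, a transfer curve can be fit to the diffusive transport model of ref. 18 to extract mobility and the Dirac point. The sketch below is our illustration, not code from the study; the channel geometry ratio, the synthetic data standing in for a measurement, and the initial guesses are placeholder assumptions.

```python
# Illustrative sketch: fit a GFET transfer curve R_tot(Vg) to the diffusive
# transport model of Kim et al. (ref. 18):
#   R_tot = R_c + (L/W) / (q * mu * n),  n = sqrt(n0^2 + (C_g*(Vg - V_D)/q)^2)
import numpy as np
from scipy.optimize import curve_fit

q   = 1.602e-19        # elementary charge, C
C_g = 2.3e-2           # gate capacitance, F/m^2 (2.3 uF/cm^2, from the paper)
L_W = 1.0              # channel length-to-width ratio (placeholder)

def r_total(vg, r_c, n0, mu, v_dirac):
    n = np.sqrt(n0**2 + (C_g * (vg - v_dirac) / q) ** 2)   # carriers per m^2
    return r_c + L_W / (q * mu * n)

# Synthetic "measured" curve standing in for real I-V data:
vg = np.linspace(-1.0, 1.0, 81)
rng = np.random.default_rng(0)
r_meas = r_total(vg, 300.0, 5e14, 0.02, 0.1) \
         * (1 + 0.01 * rng.standard_normal(vg.size))

popt, _ = curve_fit(r_total, vg, r_meas, p0=[500.0, 1e15, 0.015, 0.0])
print("mu = %.0f cm^2/(V s), Dirac point = %.2f V" % (popt[2] * 1e4, popt[3]))
```

The fitted Dirac-point location then provides the impurity-doping information discussed below, while the fitted mobility serves as the graphene-quality metric.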
One may consider graphene to be of low quality immediately after undesirable Raman spectroscopy measurements, and therefore may not utilize and dispose of the graphene sample. Our proposed methods provide an additional metric to explore graphene quality beyond spatially dependent Raman spectroscopy and in a transistor embodiment. The devices in this paper were relatively large. Users simply need to perform any I–V measurements of their fabricated devices, then use an automated code to extract electrical performance. The extracted mobility can be correlated to graphene quality as previously demonstrated25, and the location of the Dirac peak can provide crucial information on the impurities present. ### PN Junction Transistor Architecture and Operation FETs operate by the formation of junctions through an inherent electric field within a bulk semiconductor. The electric field can be created via chemical doping, yet will often be fixed upon fabrication. One can create a similar adaptive effect in a GFET device by manufacturing a lateral electric field with collinear wires suspended above graphene either in a dielectric, electrolyte, or slightly conductive liquid. Moreover, the electric field does not need to be referenced to graphene or any gate electrodes. An assessment of such a transistor comprised of graphene and honey, shows that the newly formed lateral electric field (E-lateral) creates an effective PN junction because any free electrons within the graphene channel are swept to one side of the channel and any holes are swept to the other end. A representation of this idea is illustrated in Fig. 5a and b and an image of the device is shown in Fig. 5c. In the event the graphene PN junction were forward biased, a higher concentration of holes will be attracted to the negative terminal and a higher concentration of electrons will be attracted to the positive terminal. Figure 5d, illustrates the fundamental graphene PN junction transport properties for when the electric field generator is switched from off to on. For the case when the E-field remains off, a relationship of R ds  − V TG demonstrates typical GFET transport characteristics with a single Dirac peak. However, as the E-lateral is turned on, two Dirac peaks occur and the charge carrier distribution can be clearly identified. ### PN Junction Transistor Characteristics We further demonstrate PN junction phenomena of four separate cases. Case A (Fig. 6a): Left terminal V L of Fig. 5a is set to a negative voltage and the right terminal V R is set to a positive voltage. This creates a forward bias scenario when the source is set to ground. For example, when V L  = −15 V and V R  = +15 V, E-lateral = 30 V between the two terminals. Case B (Fig. 6b): V L and V R are set to negative voltages. Case C (Fig. 6c): V L is set to a positive voltage and V R is set to a negative voltage, therefore creating a reverse bias scenario. Finally, Case D (Fig. 6d): V L and V R are set to positive voltages. As E-lateral was turned on and V TG was varied in Case A and Case C, dual Dirac peaks indicated the presence of two charge regions along the graphene channel. The amplitude and distance between the two Dirac peaks were than controlled by altering E-lateral from 30–33 V. This effect can be attributed to charge inhomogeneity within the graphene landscape that was driven by E-lateral. Such techniques are applicable in photodetector applications as one could carefully control optical transitions with an applied electric field. 
As the lateral electric field strength is altered, the distance between the two Dirac peaks changes. Moreover, the distance between the Dirac peaks indicates the work function required to generate photocurrent which is governed by the Fermi energy level $${E}_{F}=\hslash {v}_{F}\sqrt{\pi n}$$ and whether optical interband transitions are allowed28, 29. More noticeably in Case C, as E-lateral was increased, the dual Dirac Peaks merge into a single Dirac Peak. This effect, worthy of future study, may be attributed to non-linear screening of the spatial charge inhomogeneity within the graphene channel. Perhaps for Case C the charge inhomogeneity is of lower density, therefore, screening is more effective with E-lateral. There was a noticeable shift in the overall transport characteristics of Case A and Case C. To explore this phenomena, Case B and Case D demonstrate cases in which V L and V R are biased both negative and both positive, respectively. With respect to the intrinsic doping concentration, it was demonstrated that graphene underwent slight p-type doping (right shift) as E-lateral was biased more negatively. This effect can be attributed to an excess of charge carrier diffusion to the electric double layer that increases with E-lateral30 and was similarly seen for a single-gated LM-GFET device operation. The opposite is true when the V L and V R were biased both positive and n-type doping occurred (right shift). An additional effect of biasing is the change in the drain-to-source resistance R ds maximum as the lateral electric field strength is altered, Inset of Fig. 6b and d. ### Summary Our transistors have demonstrated a method to rapidly characterize graphene materials with the use of non-toxic eutectic liquid-metal Galinstan interconnects and honey gate dielectrics in a three-terminal graphene field-effect transistor embodiment. The devices characterized in the paper were fabricated within less than 30 minutes and in a general laboratory setting. Our methods are repeatable, therefore one can adopt our methods into an automated quality assurance process at a per chip level for end-to-end fabrication increasing yield and eliminating tedious testing. Despite not being fabricated in a conventional cleanroom, our devices provided comparable performance to the current state-of-the-art. We demonstrated transport characteristics for a single-gate graphene field-effect transistor and introduced adaptive control over PN junction properties with only an applied lateral electric field bias. We anticipate an adaptive PN junction capability can be adopted into diodes. Furthermore, the manipulation of the physical characteristics of Galinstan is a precursor to flexible devices. Liquid-metal Galinstan can be embedded in microfluidic enclosures and exhibit shape deformability. There are many devices that can result from reconfigurability such as wearable diagnostics and conformal RF devices. Moreover, the liquid state of honey provides the potential for uniform and flexible gate dielectrics that are currently an issue for PVD-based gate dielectrics. In this paper, the authors only demonstrated a rigid architecture with inexpensive materials. The authors encourage the readers to explore alternate embodiments utilizing the liquid materials described and further explore the potential for flexible applications. The authors admit the use of liquid-metal Galinstan and honey for graphene devices in this paper was discovered by accident. 
We predict our transistors will lead towards the exploration of alternative materials that are slightly unconventional in the hopes these innovative discoveries provide a new class of materials that are non-toxic, biodegradable, and require minimal preparation time. ## Methods ### Liquid-metal Graphene Field-effect Transistor Fabrication In this work, graphene was commercially acquired and a quality measure was conducted with a Renishaw InVia 514.5 nm (Green) Micro-Raman Spectroscopy System for three different graphene sites. The absence of a defect band D and analysis of the peak intensity ratio of the 2D and G Bands (I 2D /I G  ≈ 2) indicated high quality graphene in all three sites, Fig. 1b. With high-quality monolayer graphene identified, a strip of graphene on Polyethylene Terephthalate (PET) was cut with standard cutting tools and adhered onto a glass microscope slide with the graphene side upward and PET side downward. Liquid-metal Galinstan droplets with volume 0.6 mm 3 each were then dispensed with a blunt-tip syringe to act as source and drain electrodes. Honey was commercially acquired and dispensed from a plastic dropper at a volume of 1.0 mm 3 between the two liquid-metal droplets to act as the electrolytic gate dielectric. For the PN junction LM-GFET embodiment, two wires were suspended above graphene and inside the honey gate dielectric. Both wires were biased with an Agilent E3648A Dual Power Supply. Current, voltage, and capacitance measurements were performed with an Agilent 4155C Semiconductor Parameter Analyzer, probe station, and Hioki IM3570 Impedance Analyzer in air and in a standard laboratory environment. ### Ion Gel Synthesis Due to the toxicity and degradation of the ion gel components in contact with atmospheric oxygen, synthesis was conducted in a nitrogen-purged atmospheric controlled glove box (Vacuum Atmospheres Company OMNI-LAB). Only when the glove box oxygen content was reduced to below 2 ppm, 1-Ethyl-3 Methylimidizalium Bis(Trifluoromethylsufonyl)imide ([EMIM][TFSI]) ionic liquid (weight 0.467 g, measured with a Fisher Scientific Microbalance) was mixed with Polystyrene-Poly (ethylene oxide) (PS-PEO) diblock copolymer (weight 0.036 g) using a magnetic stirrer. 5.8 mL of acetonitrile solvent (weight 4.56 g) was added with a standard syringe needle to the ionic liquid/copolymer solution to thoroughly dissolve the copolymer. The final ion gel solution consisted of 9.21% [EMIM][TFSI] ionic liquid, 0.71% PS-PEO copolymer, and 90.07% acetonitrile solvent that was evaporated before I-V measurements. The ion gel was dispensed on the LM-GFET devices with a plastic dropper and measurements were all conducted in a fume hood to reduce exposure to the ion gel. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## References 1. 1. Lu, C.-C., Lin, Y.-C., Yeh, C.-H., Huang, J.-C. & Chiu, P.-W. High mobility flexible graphene field-effect transistors with self-healing gate dielectrics. Acs Nano 6, 4469–4474 (2012). 2. 2. Liao, L. et al. High-speed graphene transistors with a self-aligned nanowire gate. Nature 467, 305–308 (2010). 3. 3. Lin, Y.-M. et al. 100-ghz transistors from wafer-scale epitaxial graphene. Science 327, 662–662 (2010). 4. 4. Kedzierski, J. et al. Epitaxial graphene transistors on sic substrates. IEEE Transactions on Electron Devices 55, 2078–2085 (2008). 5. 5. Torres, C. M. Jr. et al. High-current gain two-dimensional mos2-base hot-electron transistors. 
Nano letters 15, 7905–7912 (2015). 6. 6. Lan, Y.-W. et al. Dual-mode operation of 2d material-base hot electron transistors. Scientific Reports 6 (2016). 7. 7. Lan, Y.-W. et al. Atomic-monolayer mos2 band-to-band tunneling field-effect transistor. Small 12, 5676–5683 (2016). 8. 8. Liao, L. & Duan, X. Graphene–dielectric integration for graphene transistors. Materials Science and Engineering: R: Reports 70, 354–370 (2010). 9. 9. Fan, J. et al. Investigation of the influence on graphene by using electron-beam and photo-lithography. Solid State Communications 151, 1574–1578 (2011). 10. 10. Toth, P. S. et al. Electrochemistry in a drop: a study of the electrochemical behaviour of mechanically exfoliated graphene on photoresist coated silicon substrate. Chemical Science 5, 582–589 (2014). 11. 11. Liu, W., Wei, J., Sun, X. & Yu, H. A study on graphene—metal contact. Crystals 3, 257–274 (2013). 12. 12. Kim, B. J. et al. High-performance flexible graphene field effect transistors with ion gel gate dielectrics. Nano letters 10, 3464–3466 (2010). 13. 13. Lee, S.-K. et al. Stretchable graphene transistors with printed dielectrics and gate electrodes. Nano letters 11, 4642–4646 (2011). 14. 14. Gough, R. C. et al. Continuous electrowetting of non-toxic liquid metal for rf applications. IEEE Access 2, 874–882 (2014). 15. 15. Ordonez, R. C. et al. Conformal liquid-metal electrodes for flexible graphene device interconnects. IEEE Transactions on Electron Devices 63, 4018–4023 (2016). 16. 16. Liu, T., Sen, P. & Kim, C.-J. Characterization of liquid-metal galinstan for droplet applications. In Micro Electro Mechanical Systems (MEMS), 2010 IEEE 23rd International Conference on, 560–563 (IEEE, 2010). 17. 17. Anjos, O., Campos, M. G., Ruiz, P. C. & Antunes, P. Application of ftir-atr spectroscopy to the quantification of sugar in honey. Food chemistry 169, 218–223 (2015). 18. 18. Kim, S. et al. Realization of a high mobility dual-gated graphene field-effect transistor with al 2 o 3 dielectric. Applied Physics Letters 94, 062107 (2009). 19. 19. Wang, Z., Zhang, Z. & Peng, L. Graphene-based ambipolar electronics for radio frequency applications. Chinese Science Bulletin 1–15 (2012). 20. 20. Schwierz, F. Graphene transistors. Nature nanotechnology 5, 487–496 (2010). 21. 21. Guo, W., Zhu, X., Liu, Y. & Zhuang, H. Sugar and water contents of honey with dielectric property sensing. Journal of Food Engineering 97, 275–281 (2010). 22. 22. Cho, J. H. et al. Printable ion-gel gate dielectrics for low-voltage polymer thin-film transistors on plastic. Nature materials 7, 900–906 (2008). 23. 23. Leng, X., Bollinger, A. & Božović, I. Purely electronic mechanism of electrolyte gating of indium tin oxide thin films. Scientific reports 6 (2016). 24. 24. Di Bartolomeo, A. et al. Charge transfer and partial pinning at the contacts as the origin of a double dip in the transfer characteristics of graphene-based field-effect transistors. Nanotechnology 22, 275702 (2011). 25. 25. Nagashio, K., Nishimura, T., Kita, K. & Toriumi, A. Mobility variations in mono-and multi-layer graphene films. Applied physics express 2, 025003 (2009). 26. 26. Zhang, Y., Brar, V. W., Girit, C., Zettl, A. & Crommie, M. F. Origin of spatial charge inhomogeneity in graphene. Nature Physics 5, 722–726 (2009). 27. 27. Lewis, I. R. & Edwards, H. Handbook of Raman spectroscopy: from the research laboratory to the process line (CRC Press, 2001). 28. 28. Yan, K. et al. 
Modulation-doped growth of mosaic graphene with single-crystalline p–n junctions for efficient photocurrent generation. Nature communications 3, 1280 (2012). 29. 29. Liu, J., Safavi-Naeini, S. & Ban, D. Fabrication and measurement of graphene p–n junction with two top gates. Electronics Letters 50, 1724–1726 (2014). 30. 30. Uesugi, E., Goto, H., Eguchi, R., Fujiwara, A. & Kubozono, Y. Electric double-layer capacitance between an ionic liquid and few-layer graphene. Scientific reports 3 (2013). ## Acknowledgements We acknowledge the collaboration of the University of Hawai’i (UHM) at $${\rm{M}}\overline{{\rm{a}}}{\rm{noa}}$$ College of Engineering, Department of Electrical Engineering Nanoelectronics Device Laboratory and Space and Naval Warfare Systems Center Pacific Graphene Microfluidics Laboratory. The UHM Hawai’i Institute for Geophysics & Planetology for use of their Micro-Raman Spectroscopy System, and Dr. Tayro Acosta-Maeda for his Raman spectroscopy expertise. Lastly, R.C.O. sends special thanks to UHM undergraduate students Noah Acosta, Tyler Yamauchi, and Brianne Tengan for experimental support. This research was funded in part by the Space and Naval Warfare Systems Command Naval Innovative Science and Engineering program. ## Author information ### Affiliations 1. #### University of Hawai’i at Mānoa, Department of Electrical Engineering, Honolulu, HI, 96822, USA • Richard C. Ordonez • , Jordan L. Melcher •  & David Garmire 2. #### Space and Naval Warfare Systems Center Pacific, Pearl City, HI, 96782, USA • Richard C. Ordonez • , Cody K. Hayashi •  & Nackieb Kamin 3. #### Space and Naval Warfare Systems Center Pacific, San Diego, CA, 92152, USA • Carlos M. Torres 4. #### Hawai’i Natural Energy Institute, Honolulu, HI, 96822, USA • Godwin Severa ### Contributions R.C.O., C.K.H., and D.G. conceived the idea and designed experiments; J.M. conducted the Raman spectroscopy measurements; R.C.O., C.K.H., and J.M. fabricated the devices and performed the electrical measurements; G.S. studied the electrolytic gate dielectric properties; R.C.O., C.K.H., C.M.T. Jr., and D.G. analyzed the data. All authors discussed the results and wrote the paper together. ### Competing Interests The authors declare that they have no competing interests. ### Corresponding author Correspondence to Richard C. Ordonez.
## Square of electrodynamic field.

The electrodynamic Lagrangian (without magnetic sources) has the form
\begin{equation}\label{eqn:fsquared:20}
\LL = F \cdot F + \alpha A \cdot J,
\end{equation}
where $$\alpha$$ is a constant that depends on the unit system. My suspicion is that one or both of the bivector or quadvector grades of $$F^2$$ are required for Maxwell’s equation with magnetic sources. Let’s expand out $$F^2$$ in coordinates, as preparation for computing the Euler-Lagrange equations. The scalar and pseudoscalar components both simplify easily into compact relationships, but the bivector term is messier. We start with the coordinate expansion of our field, which we may write in either upper or lower index form
\begin{equation}\label{eqn:fsquared:40}
F = \inv{2} \gamma_\mu \wedge \gamma_\nu F^{\mu\nu} = \inv{2} \gamma^\mu \wedge \gamma^\nu F_{\mu\nu}.
\end{equation}
The square is
\begin{equation}\label{eqn:fsquared:60}
F^2 = F \cdot F + \gpgradetwo{F^2} + F \wedge F.
\end{equation}
Let’s compute the scalar term first. We need to make a change of dummy indexes for one of the $$F$$’s. It will also be convenient to use upper indexes in one factor, and lowers in the other. We find
\begin{equation}\label{eqn:fsquared:80}
\begin{aligned}
F \cdot F
&= \inv{4} \lr{ \gamma_\mu \wedge \gamma_\nu } \cdot \lr{ \gamma^\alpha \wedge \gamma^\beta } F^{\mu\nu} F_{\alpha\beta} \\
&= \inv{4} \lr{ {\delta_\nu}^\alpha {\delta_\mu}^\beta - {\delta_\mu}^\alpha {\delta_\nu}^\beta } F^{\mu\nu} F_{\alpha\beta} \\
&= \inv{4} \lr{ F^{\mu\nu} F_{\nu\mu} - F^{\mu\nu} F_{\mu\nu} } \\
&= -\inv{2} F^{\mu\nu} F_{\mu\nu}.
\end{aligned}
\end{equation}
Now, let’s compute the pseudoscalar component of $$F^2$$. This time we uniformly use upper index components for the tensor, and find
\begin{equation}\label{eqn:fsquared:100}
\begin{aligned}
F \wedge F
&= \inv{4} \lr{ \gamma_\mu \wedge \gamma_\nu } \wedge \lr{ \gamma_\alpha \wedge \gamma_\beta } F^{\mu\nu} F^{\alpha\beta} \\
&= \frac{I}{4} \epsilon_{\mu\nu\alpha\beta} F^{\mu\nu} F^{\alpha\beta},
\end{aligned}
\end{equation}
where $$\epsilon_{\mu\nu\alpha\beta}$$ is the completely antisymmetric (Levi-Civita) tensor of rank four. This pseudoscalar component picks up all the products of components of $$F$$ where all indexes are different. Now, let’s try computing the bivector term of the product. This will require fancier index gymnastics.
\begin{equation}\label{eqn:fsquared:120}
\begin{aligned}
\gpgradetwo{F^2}
&= \inv{4} \gpgradetwo{ \lr{ \gamma_\mu \wedge \gamma_\nu } \lr{ \gamma^\alpha \wedge \gamma^\beta } } F^{\mu\nu} F_{\alpha\beta} \\
&= \inv{4} \gpgradetwo{ \gamma_\mu \gamma_\nu \lr{ \gamma^\alpha \wedge \gamma^\beta } } F^{\mu\nu} F_{\alpha\beta}
- \inv{4} \lr{ \gamma_\mu \cdot \gamma_\nu} \lr{ \gamma^\alpha \wedge \gamma^\beta } F^{\mu\nu} F_{\alpha\beta}.
\end{aligned}
\end{equation}
The dot product term is killed, since $$\lr{ \gamma_\mu \cdot \gamma_\nu} F^{\mu\nu} = g_{\mu\nu} F^{\mu\nu}$$ is the contraction of a symmetric tensor with an antisymmetric tensor.
We can now proceed to expand the grade two selection
\begin{equation}\label{eqn:fsquared:140}
\begin{aligned}
\gpgradetwo{ \gamma_\mu \gamma_\nu \lr{ \gamma^\alpha \wedge \gamma^\beta } }
&= \gamma_\mu \wedge \lr{ \gamma_\nu \cdot \lr{ \gamma^\alpha \wedge \gamma^\beta } } + \gamma_\mu \cdot \lr{ \gamma_\nu \wedge \lr{ \gamma^\alpha \wedge \gamma^\beta } } \\
&= \gamma_\mu \wedge \lr{ {\delta_\nu}^\alpha \gamma^\beta - {\delta_\nu}^\beta \gamma^\alpha }
+ g_{\mu\nu} \lr{ \gamma^\alpha \wedge \gamma^\beta }
- {\delta_\mu}^\alpha \lr{ \gamma_\nu \wedge \gamma^\beta }
+ {\delta_\mu}^\beta \lr{ \gamma_\nu \wedge \gamma^\alpha } \\
&= {\delta_\nu}^\alpha \lr{ \gamma_\mu \wedge \gamma^\beta }
- {\delta_\nu}^\beta \lr{ \gamma_\mu \wedge \gamma^\alpha }
- {\delta_\mu}^\alpha \lr{ \gamma_\nu \wedge \gamma^\beta }
+ {\delta_\mu}^\beta \lr{ \gamma_\nu \wedge \gamma^\alpha }.
\end{aligned}
\end{equation}
Observe that I’ve taken the liberty to drop the $$g_{\mu\nu}$$ term in the last step. Strictly speaking, this violates the equality, but it won’t matter since we will contract this with $$F^{\mu\nu}$$. We are left with
\begin{equation}\label{eqn:fsquared:160}
\begin{aligned}
4 \gpgradetwo{F^2}
&= \lr{ {\delta_\nu}^\alpha \lr{ \gamma_\mu \wedge \gamma^\beta }
- {\delta_\nu}^\beta \lr{ \gamma_\mu \wedge \gamma^\alpha }
- {\delta_\mu}^\alpha \lr{ \gamma_\nu \wedge \gamma^\beta }
+ {\delta_\mu}^\beta \lr{ \gamma_\nu \wedge \gamma^\alpha } } F^{\mu\nu} F_{\alpha\beta} \\
&= F^{\mu\nu} \lr{ \lr{ \gamma_\mu \wedge \gamma^\alpha } F_{\nu\alpha}
- \lr{ \gamma_\mu \wedge \gamma^\alpha } F_{\alpha\nu}
- \lr{ \gamma_\nu \wedge \gamma^\alpha } F_{\mu\alpha}
+ \lr{ \gamma_\nu \wedge \gamma^\alpha } F_{\alpha\mu} } \\
&= 2 F^{\mu\nu} \lr{ \lr{ \gamma_\mu \wedge \gamma^\alpha } F_{\nu\alpha} + \lr{ \gamma_\nu \wedge \gamma^\alpha } F_{\alpha\mu} } \\
&= 2 F^{\nu\mu} \lr{ \gamma_\nu \wedge \gamma^\alpha } F_{\mu\alpha} + 2 F^{\mu\nu} \lr{ \gamma_\nu \wedge \gamma^\alpha } F_{\alpha\mu},
\end{aligned}
\end{equation}
which leaves us with
\begin{equation}\label{eqn:fsquared:180}
\gpgradetwo{F^2} = \lr{ \gamma_\nu \wedge \gamma^\alpha } F^{\mu\nu} F_{\alpha\mu}.
\end{equation}
I suspect that there must be an easier way to find this result.

We now have the complete coordinate expansion of $$F^2$$, separated by grade
\begin{equation}\label{eqn:fsquared:200}
F^2 = -\inv{2} F^{\mu\nu} F_{\mu\nu} + \lr{ \gamma_\nu \wedge \gamma^\alpha } F^{\mu\nu} F_{\alpha\mu} + \frac{I}{4} \epsilon_{\mu\nu\alpha\beta} F^{\mu\nu} F^{\alpha\beta}.
\end{equation}
Tomorrow’s task is to start evaluating the Euler-Lagrange equations for this multivector Lagrangian density, and see what we get.

## notes for phy450, relativistic electrodynamics, now available on paper from amazon.

March 4, 2019 math and physics play

My notes from the spring 2011 session of Relativistic Electrodynamics (PHY450H1S) are now updated to use a 6×9″ format (387 pages), and are available on paper from amazon. This was the second course I took as a non-degree physics student, and was taught by Prof. Erich Poppitz.

These notes (387 pages, 6×9″) are available in a few formats:

• In paper (black and white) through amazon’s kindle-direct-publishing for $11 USD.
• from github as latex, scripts, and makefiles.

Links or instructions for the formats above are available here.

### Changelog.

phy450.V0.1.9.pdf

• switch to 6×9″ format
• fix a whole bunch of too-wide equations, section-headings, … that kdp finds objectionable.
• suppress page numbers for 1st page of preface, contents, index and bib. This is a hack for my hack of classicthesis, because I don’t have the 6×9 layout right, and the page numbers for that first page end up in an unprintable region that kdp doesn’t allow.
• add periods to chapter, figure, section, problem captions.
• remove lots of blank lines before and after equations (which latex turns into paragraphs). That cuts 10s of pages from the book length!
• move version numbers into separate file (make.revision)

## Geometric algebra notes collection split into two volumes

I've now split my (way too big) Exploring physics with Geometric Algebra into two volumes. Each of these is now a much more manageable size, which should facilitate removing the redundancies in these notes, and making them more properly book like.

Note that I've also previously moved the "Exploring Geometric Algebra" content related to:

• Lagrangians
• Hamiltonians
• Noether's theorem

into my classical mechanics collection (449 pages).
{}
# Derivative 1/x³

1. Sep 26, 2011

### discy

how to compute the ANTI derivative of 1 / x^3

I think I need the formula: if f(x) = 1/x^n then f'(x) = -n/x^(n+1), but I'm not sure and don't know how to use it.

I know the answer is: -0.5 * x^-2 but have no idea why. could someone explain this to me please?

2. Sep 26, 2011

### LCKurtz

The familiar antiderivative formula
$$\int x^n\, dx = \frac{x^{n+1}}{n+1}+C$$
also works for negative exponents. Write your fraction as a negative exponent.

3. Sep 26, 2011

### discy

now how would I put 1/x³ into that formula to get -0.5 * x^-2 ?

4. Sep 26, 2011

### LCKurtz

Write 1/x³ as x^n using a negative exponent and use the formula.

5. Sep 26, 2011

### discy

hm okay. like x^-3. got it. I guess I should learn this formula, not only because it's a "familiar" one for you guys, but also because for some reason it's not on my formula sheet.
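(To spell out the substitution: with n = -3 the formula gives x^(-2)/(-2) + C = -0.5 * x^(-2) + C. A short SymPy session, assuming SymPy is installed, confirms it mechanically; this check is my addition, not part of the thread.)

import sympy as sp

x = sp.symbols('x')
# Power rule with n = -3: integrate x^(-3) to get x^(-2)/(-2).
F = sp.integrate(1/x**3, x)
print(F)              # -1/(2*x**2)
print(sp.diff(F, x))  # x**(-3), i.e. the original integrand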
{}
# Understanding balloon-borne frost point hygrometer measurements after contamination by mixed-phase clouds

Balloon-borne water vapour measurements in the upper troposphere and lower stratosphere (UTLS) by means of frost point hygrometers provide important information on air chemistry and climate. However, the risk of contamination from sublimating hydrometeors collected by the intake tube may render these measurements unusable, particularly after crossing low clouds containing supercooled droplets. A large set of (sub)tropical measurements during the 2016–2017 StratoClim balloon campaigns at the southern slopes of the Himalayas allows us to perform an in-depth analysis of this type of contamination. We investigate the efficiency of wall contact and freezing of supercooled droplets in the intake tube and the subsequent sublimation in the UTLS using computational fluid dynamics (CFD). We find that the airflow can enter the intake tube with impact angles up to 60°, owing to the pendulum motion of the payload. Supercooled droplets with radii > 70 µm, as they frequently occur in mid-tropospheric clouds, typically undergo contact freezing when entering the intake tube, whereas only about 50 % of droplets with 10 µm radius freeze, and droplets of < 5 µm radius mostly avoid contact. According to CFD, sublimation of water from an icy intake can account for the occasionally observed unrealistically high water vapour mixing ratios (χH₂O > 100 ppmv) in the stratosphere. Furthermore, we use CFD to differentiate between stratospheric water vapour contamination by an icy intake tube and contamination caused by outgassing from the balloon and payload, revealing that the latter starts playing a role only during ascent at high altitudes (p < 20 hPa).

### Citation

Jorge, Teresa / Brunamonti, Simone / Poltera, Yann / et al: Understanding balloon-borne frost point hygrometer measurements after contamination by mixed-phase clouds. 2021. Copernicus Publications.
{}
### Find $\lim_{n\rightarrow \infty} \sin((2n\pi + \frac{1}{2n\pi}) \sin(2n\pi + \frac{1}{2n\pi}))$ (NBHM 2013)

Find the limit $$\lim_{n\rightarrow \infty} \sin\left(\left(2n\pi + \frac{1}{2n\pi}\right) \sin\left(2n\pi + \frac{1}{2n\pi}\right)\right)$$

Solution: We have $\sin(2n\pi+\theta) = \sin(\theta)$, which implies $\sin(2n\pi + \frac{1}{2n\pi}) = \sin(\frac{1}{2n\pi})$, and the given limit becomes $$\lim_{n\rightarrow \infty} \sin\left(\left(2n\pi + \frac{1}{2n\pi}\right) \sin\left(\frac{1}{2n\pi}\right)\right).$$

Now, the argument of the outer sine splits as $$\left(2n\pi + \frac{1}{2n\pi}\right) \sin\left(\frac{1}{2n\pi}\right) = 2n\pi \sin\left(\frac{1}{2n\pi}\right) + \frac{1}{2n\pi}\sin\left(\frac{1}{2n\pi}\right).$$

Both factors of the second term tend to zero, hence the second term converges to zero. Since $\lim_{\theta \to 0}\frac{\sin \theta}{\theta} = 1$, the first term $2n\pi \sin(\frac{1}{2n\pi})$ converges to $1$ as $n \to \infty$. Hence, by continuity of $\sin$, the required limit is $\sin 1$.
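A quick numerical check of the answer (a small script of mine, not part of the original solution):

import math

# The expression should approach sin(1) = 0.8414709848...
for n in (10, 100, 1000, 10000):
    t = 2*n*math.pi + 1/(2*n*math.pi)
    print(n, math.sin(t * math.sin(t)))
print("sin(1) =", math.sin(1))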
{}
# Spark-Core Hacking: Read MQ2 sensor data

The Aquarius (Landing-Module) needs a galley in order to prepare and cook food, but due to unforeseen personal circumstances I had to invest into this infrastructure well before it was actually necessary - since the base trailer for the LM isn't available yet. So I've started to build a prototype kitchen with all that is needed for functional and fun food hacking.

One of the primary energy carriers selected for cooking is gas. That can be either LPG (Propane/Butane) or Methane (delivered by utility gas lines). Since gases can be tricky and risky energy carriers and their combustion process also creates potentially harmful by-products like Carbon-Monoxide (CO), it seemed prudent to have an autonomous environmental monitoring and gas leakage detection system, in order to minimize the risk of an undetected leak, which could lead to potentially harmful explosions or a high concentration of CO, which could also lead to unconsciousness and death. Monitoring the temperature and humidity might also help in preventing moisture buildup, which often leads to fungi problems.

To cover everything, a whole team of sensors monitors specific environmental targets, and their data will then be fused to form a basis for air quality analysis and threat management, to either proactively start to vent air or send out warnings via mail/audio/visual.

| Sensor | Target | Description | Placement |
|--------|--------|-------------|-----------|
| MQ7 | CO | Carbon-Monoxide (combustion product) | Top/Ceiling |
| MQ4 | CH4 | Methane (natural gas) | Top/Ceiling |
| MQ2 | C3H8 | Propane (camping gas mix) | Bottom/Floor |
| MQ2 | C4H10 | Butane (camping gas mix) | Bottom/Floor |
| SHT71 | Temp/Humidity | Room/air temperature & humidity monitoring | Top |

As a platform for this project a spark-core was selected, since it's a low power device with wireless network connectivity, which has to be always on to justify its existence. The Spark-Core docs claim 50mA typical current consumption, but it clocked in here with 140mA (Tinker firmware - avg(24h)). After setting up a local spark cloud and claiming the first core it was time to tinker with it. The default firmware (called tinker) already gets you started quickly with no fuss: You can read and control all digital and analog in- and outputs. With just a quick GNUPlot/watch hack I could monitor what a MQ2 sensor detects over the period of one evening, without even having to hack on the firmware code itself (fast bootstrapping to get a first prototype/concept).

## Hardware

• Sainsmart MQ2/MQ4/MQ7 el cheapo sensor boards
• Sensirion SHT71 Temperature & Humidity Sensor
• 6 Resistors (Voltage Divider Rx/Ry: 10k/33k)
• Adafruit 7-Segment LED Display with HT16K33 controller

## Software

Have a look at the local spark-cloud howto, to get an easy start with spark-cores without having to use the official one. You can also repeat the following procedures using the official cloud server too; then you only need to install Spark-CLI and add your global cloud account settings to its configuration.
This is an example how to read ADC input A0 through the API, connected to the MQ2 sensor in this case:

#!/bin/bash
while :
do
  VAL=$(spark call 1234567890abcdef analogread "A0")
  TS=$(date +%s)
  echo "${TS} ${VAL}"
  echo "${TS} ${VAL}" >> mq2-test.txt
  sleep 1
done;

Note the space between the timestamp and the value: gnuplot's "using 1:2" below expects two whitespace-separated columns.

$ vi mq2-gnuplot.parm

set terminal pngcairo background "#383734" size 900,500 enhanced font 'Arimo,12'
set output "mq2-chart.png"
set title "Mapping MQ2 Sensor Data over one evening" textcolor ls 2 font 'Arial,16'
set grid
set lmargin 9
set style line 1 lc rgb '#75890c' lt 1 lw 0.5 pt 7 ps 0.75
set style line 2 lt 2 lc rgb "#d8d3c5" pt 6
set ylabel "ADC Read Value" tc ls 2 offset 1,0
set xtics textcolor linestyle 2 rotate
set ytics textcolor linestyle 1
set tics nomirror
set xdata time
set timefmt "%s"
set format x "%H:%M"
set border linewidth 1 linestyle 2
unset key
plot "mq2-test.txt" using 1:2 with linespoints ls 1
quit

$ watch --interval=1 gnuplot mq2-gnuplot.parm

You can use Ristretto or any other image viewer to look at the resulting png. Ristretto automatically redraws the image, as soon as gnuplot finishes the next one.
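If you later want volts instead of raw ADC counts, a tiny conversion helper can do it. This is my own sketch: it assumes the Spark-Core's ADC is 12-bit and referenced to 3.3 V, which you should verify against the docs for your board.

def adc_to_volts(raw, vref=3.3, bits=12):
    """Convert a raw Spark-Core ADC reading (0..4095) to volts at the pin."""
    return raw * vref / (2**bits - 1)

print(adc_to_volts(2048))  # ~1.65 V, mid-scale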
{}
# Why the sudden jump of 100 points in my reputation ranking? (Tuesday, 30.Aug.2011, Taiyuan China time) Until a day or so ago, my reputation ranking was 774. Then, suddenly, with no activity on my part, it is 874, exactly 100 points higher. What happened? - One way this can happen is by your logging into a second StackExchange site for the first time. Do you think it might be that? – Dylan Moreland Aug 30 '11 at 1:55 A check of your account tab: math.stackexchange.com/users/2344/mike-jones?tab=accounts says this is your only account with rep $\geq 200$; you might want to visit math.stackexchange.com/reputation to see how your rep went up. – J. M. Aug 30 '11 at 2:03 @Dylan Moreland: Yes, I signed up with Meta Stack Overflow. So, is such bumping up of rep in another site a bug, or not? Thanks. – Mike Jones Aug 30 '11 at 3:00 @Mike I think this is by design. See this answer on the SO meta. It happened to me when I signed up for the TeX SE. – Dylan Moreland Aug 30 '11 at 3:27 – Hendrik Vogt Aug 30 '11 at 10:19 ## 1 Answer This is by design. See this blog post where the feature is announced, and the official answer on meta.SO (in the "additionally" section). As both resources mention, only one of the two accounts you are associating needs to have $\geq200$ reputation. Also, if you go to http://math.stackexchange.com/reputation to audit your reputation, I am pretty sure the association bonus appears in the line rep from bonuses: although I am not sure what other bonuses exist... perhaps any bounties you've earned are included in this row as well. By the way, I performed a recalc on your account (you can do this too using the "Trigger Reputation Recalc" button at the bottom of http://math.stackexchange.com/reputation) and that bumped it to 885. I'm afraid there's no good way of figuring out where the net difference of 11 points came from; generally, your "displayed" reputation can fall out of line with your "real" reputation, and the reputation recalc is just a manual update of the "displayed" reputation. - Thanks for the info. I've upvoted and accepted your excellent answer. – Mike Jones Aug 30 '11 at 7:52 I'm glad to help! – Zev Chonoles Sep 1 '11 at 2:42
{}
+0

# Algebra

0
135
1

Kim has exactly enough money to buy 40 oranges at 3x cents each. If the price rose to 5x cents per orange, how many oranges could she buy?

Jul 2, 2022

#1
+2602
0

She has $$40 \times 3x = 120x$$ cents. Now, if the price was $$5x$$ cents, how many oranges would she be able to buy?

Jul 2, 2022
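(A quick check of the arithmetic, working in units of x cents; this completes the hint above and is my addition, not part of the thread:)

money = 40 * 3      # Kim's money: 120x cents
print(money // 5)   # oranges at 5x cents each -> 24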
{}
# How do you use synthetic division to find all the rational zeroes of the function f(x)= 2x^3 - 3x^2-11x+6?

Aug 20, 2015

$3, -2, \frac{1}{2}$

#### Explanation:

A rational zero must be $\pm \frac{\text{a divisor of } 6}{\text{a divisor of } 2}$, so it lies in $\pm \left\{ 1, 2, 3, 6, \tfrac{1}{2}, \tfrac{3}{2} \right\}$ (the ratios $\tfrac{2}{2}$ and $\tfrac{6}{2}$ just repeat $1$ and $3$).

$f(1) = 2 - 3 - 11 + 6 < 0$
$f(2) = 16 - 12 - 22 + 6 < 0$
$f(3) = 54 - 27 - 33 + 6 = 0$

By Briot-Ruffini (synthetic division),

$\frac{f(x)}{x - 3} = 2x^2 + 3x - 2 = 0$

$\Delta = 9 + 4 \cdot 2 \cdot 2 = 25$

$x = \frac{-3 \pm 5}{4},$

giving $x = \frac{1}{2}$ and $x = -2$.
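If you would like to carry out the synthetic-division step mechanically, here is a small helper of my own (not part of the original answer):

def synthetic_division(coeffs, r):
    """Divide a polynomial (coefficients, highest degree first) by (x - r)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + r * out[-1])   # bring down, multiply by r, add
    return out[:-1], out[-1]          # quotient coefficients, remainder

# f(x) = 2x^3 - 3x^2 - 11x + 6, divided by (x - 3)
q, rem = synthetic_division([2, -3, -11, 6], 3)
print(q, rem)   # [2, 3, -2] 0  ->  2x^2 + 3x - 2, remainder 0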
{}
# What limits a battery's current when its terminals are connected with a low resistance conductor?

My question essentially is concerning what (if anything) prevents really high current flow (amperes) in a battery if the terminals were to be connected directly with a low resistance conductor?

For example, this website has the following chart pertaining to the internal resistance of various batteries:

So let's assume I have an AA NiMH battery and use Ohm's Law; we will also account for a minimal amount of additional resistance (.003 ohms) from the conductor wire itself, just to further illustrate the point:

V = IR
V = 1.5
R (total) = .023
I = V/R = 1.5/.023 = 65.22 A

Is this what really happens when battery terminals are directly connected, and if not, what is the other limiting factor that prevents a super fast current from occurring, besides the natural resistance of the battery and the wire?

• The internal resistance can only be used for smaller currents, as long as the current-voltage curve is linear. Not for a short-circuit. – user137289 Dec 13 '16 at 21:00
• And, ultimately, there is a chemical reaction going on that will have rate limiting steps associated with it. Dec 13 '16 at 21:07
• @Pieter okay, but if we then ignore the internal resistance for the short circuit, the current is even higher. So what are the missing factors to the equation, because most AA batteries max out at 2 amps and that's pushing it. Is it simply that once the short circuit occurs the voltage will drop and no longer be 1.5, which will reduce the current? Dec 13 '16 at 21:12
• No, oh no, the short-circuit current will be smaller than your estimate. As @JonCuster says, there is chemistry involved: diffusion, gradients, bubbles may form, heat is developed, etc. – user137289 Dec 13 '16 at 21:25

Batteries are decently complicated devices. They're certainly more complicated than an ideal voltage source. As you've noticed, we often model them as a voltage source with a series "internal resistance." This is a better model which works well in a large range of conditions, but still not a true model of how batteries work.

When you short a battery out, initially you do get the very high amperages you calculated. However, as the short continues, chemistry gets involved. Inside the battery you have two materials reacting with each other to provide the electrical energy. In the case of your example battery, NiMH, the reactions are:

Negative terminal: $H_2O + M + e^− \Leftrightarrow OH^− + MH$
Positive terminal: $Ni(OH)_2 + OH^− \Leftrightarrow NiO(OH) + H_2O + e^−$

Note that the negative terminal produces hydroxide ions and the positive terminal consumes them. Also note the double arrows on both equations. The equilibrium point depends strongly on the concentrations of the compounds. In normal operation, the hydroxide ions have time to work their way from the negative terminal to the positive terminal, keeping the reactions going smoothly. However, in a short, there will be a build up of hydroxide near the negative terminal because it takes time to diffuse away from the terminal. This will reduce the rate at which the reaction occurs, reducing the maximum current the battery produces during the short to below what you would expect from modeling the internal resistance alone.

Also, depending on your battery, there may be 3rd order effects. You're going to generate lots of heat in the battery, and that causes reactions to change rates. Depending on the battery, this may accelerate the discharge.
For an example of what these effects might be like, I point to this video of a lithium-ion battery. In this case the battery was *ahem* "encouraged" with a knife, but runaways like this for lithium ion batteries are not unheard of. • That actually clears it up very well for me. Thanks for the answer! Dec 13 '16 at 22:58
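For reference, the naive Ohm's-law estimate from the question can be reproduced in two lines; per the answer above, the chemistry pushes the real short-circuit current well below this figure. (The 0.020 ohm internal resistance is implied by the question's 0.023 ohm total minus 0.003 ohm of wire.)

# Naive short-circuit estimate: 1.5 V cell, 0.020 ohm internal, 0.003 ohm wire.
V, R_internal, R_wire = 1.5, 0.020, 0.003
print(V / (R_internal + R_wire))  # ~65.2 A, an upper bound on the initial surge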
{}
# [Solution] Shinju and the Lost Permutation solution codeforces

Shinju loves permutations very much! Today, she has borrowed a permutation p from Juju to play with.

The i-th cyclic shift of a permutation p is a transformation on the permutation such that p = [p_1, p_2, ..., p_n] will now become p = [p_{n-i+1}, ..., p_n, p_1, p_2, ..., p_{n-i}].

Let's define the power of a permutation p as the number of distinct elements in the prefix maximums array b of the permutation. The prefix maximums array b is the array of length n such that b_i = max(p_1, p_2, ..., p_i). For example, the power of [1,2,5,4,6,3] is 4, since b = [1,2,5,5,6,6] and there are 4 distinct elements in b.

Unfortunately, Shinju has lost the permutation p! The only information she remembers is an array c, where c_i is the power of the (i-1)-th cyclic shift of the permutation p. She's also not confident that she remembers it correctly, so she wants to know if her memory is good enough.

Given the array c, determine if there exists a permutation p that is consistent with c. You do not have to construct the permutation p.

A permutation is an array consisting of n distinct integers from 1 to n in arbitrary order. For example, [2,3,1,5,4] is a permutation, but [1,2,2] is not a permutation (2 appears twice in the array) and [1,3,4] is also not a permutation (n = 3 but there is a 4 in the array).

## Input

The input consists of multiple test cases. The first line contains a single integer t (1 ≤ t ≤ 5·10^3), the number of test cases.

The first line of each test case contains an integer n (1 ≤ n ≤ 10^5). The second line of each test case contains n integers c_1, c_2, ..., c_n (1 ≤ c_i ≤ n).

It is guaranteed that the sum of n over all test cases does not exceed 10^5.

## Output

For each test case, print "YES" if there is a permutation p that satisfies the array c, and "NO" otherwise. You can output "YES" and "NO" in any case (for example, the strings "yEs", "yes", "Yes" and "YES" will be recognized as a positive response).

## Example

input
6
1
1
2
1 2
2
2 2
6
1 2 4 6 3 5
6
2 3 1 2 3 4
3
3 2 1

output
YES
YES
NO
NO
YES
NO

## Note

In the first test case, the permutation [1] satisfies the array c.

In the second test case, the permutation [2,1] satisfies the array c.

In the fifth test case, the permutation [5,1,2,4,6,3] satisfies the array c. Let's see why this is true.

• The zeroth cyclic shift of p is [5,1,2,4,6,3]. Its power is 2, since b = [5,5,5,5,6,6] and there are 2 distinct elements: 5 and 6.
• The first cyclic shift of p is [3,5,1,2,4,6]. Its power is 3, since b = [3,5,5,5,5,6].
• The second cyclic shift of p is [6,3,5,1,2,4]. Its power is 1, since b = [6,6,6,6,6,6].
• The third cyclic shift of p is [4,6,3,5,1,2]. Its power is 2, since b = [4,6,6,6,6,6].
• The fourth cyclic shift of p is [2,4,6,3,5,1]. Its power is 3, since b = [2,4,6,6,6,6].
• The fifth cyclic shift of p is [1,2,4,6,3,5]. Its power is 4, since b = [1,2,4,6,6,6].

Therefore, c = [2,3,1,2,3,4].
In the third, fourth, and sixth test cases, we can show that there is no permutation that satisfies the array c.
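The page does not actually include the code, so here is a minimal sketch of a check that is consistent with all six sample cases above. It assumes the commonly used characterization of valid arrays (exactly one cyclic shift has power 1, namely the shift starting with n, and the power can rise by at most 1 from one shift to the next, cyclically); treat it as an illustration rather than the official editorial solution.

import sys

def possible(c):
    # Assumed characterization: exactly one c_i equals 1, and
    # c[(i+1) % n] - c[i] <= 1 for every i (cyclically).
    if c.count(1) != 1:
        return False
    n = len(c)
    return all(c[(i + 1) % n] - c[i] <= 1 for i in range(n))

def main():
    data = sys.stdin.read().split()
    t = int(data[0]); idx = 1
    for _ in range(t):
        n = int(data[idx]); idx += 1
        c = list(map(int, data[idx:idx + n])); idx += n
        print("YES" if possible(c) else "NO")

if __name__ == "__main__":
    main()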
{}
The linear model is calculated from the slope of a localized least-squares regression model y=y(x). The localization is defined by the x difference from the point in question, with data at distance exceeding L/2 being ignored. With a boxcar window, all data within the local domain are treated equally, while with a hanning window, a raised-cosine weighting function is used; the latter produces smoother derivatives, which can be useful for noisy data. The function is based on internal calculation, not on lm().

runlm(x, y, xout, window = c("hanning", "boxcar"), L, deriv)

## Arguments

x: a vector holding x values.
y: a vector holding y values.
xout: optional vector of x values at which the derivative is to be found. If not provided, x is used.
window: type of weighting function used to weight data within the window; see 'Details'.
L: width of running window, in x units. If not provided, a reasonable default will be used.
deriv: an optional indicator of the desired return value; see 'Examples'.

## Value

If deriv is not specified, a list containing vectors of output values (y) and derivatives (dydx), along with the scalar length scale L. If deriv=0, a vector of values is returned, and if deriv=1, a vector of derivatives is returned.

## Examples

library(oce)
# Case 1: smooth a noisy signal
x <- 1:100
y <- 1 + x/100 + sin(x/5)
yn <- y + rnorm(100, sd=0.1)
L <- 4
calc <- runlm(x, y, L=L)
plot(x, y, type='l', lwd=7, col='gray')
points(x, yn, pch=20, col='blue')
lines(x, calc$y, lwd=2, col='red')

# Case 2: square of buoyancy frequency
data(ctd)
par(mfrow=c(1,1))
plot(ctd, which="N2")
rho <- swRho(ctd)
z <- swZ(ctd)
zz <- seq(min(z), max(z), 0.1)
N2 <- -9.8 / mean(rho) * runlm(z, rho, zz, deriv=1)
lines(N2, -zz, col='red')
legend("bottomright", lwd=2, bg="white", col=c("black", "red"),
       legend=c("swN2()", "using runlm()"))
{}
# Combining multiple .x files into a single .x file

I have created multiple .x files, each one with a different animation, for example: "walk.x", "run.x", "die.x". How do I combine them together to create "character.x"? I tried using the DirectX SDK tool called MeshView, but it's not working, I don't get to see the animations in the program. I'm using Alin's DirectX Exporter for exporting .x files from 3Ds Max.

You don't need to do this programmatically. You can do it from the command line or using a batch script. Assuming all your X files are in text format, just trim all but one of the X files on disk so that they only have "AnimationSets" in them. Leave one of the X files so it has the other skinned mesh pieces (like Frames and Meshes); now just use 'copy' from the command line to concatenate all the files into one. e.g.

copy /A bugbear.x + bugbear_walk.x + bugbear_idle.x bugbear_all.x

This leaves a file bugbear_all.x that has the primary skinned mesh as well as the walk and idle animation sets. Unfortunately the copy command leaves a redundant EOF character at the end of the file, which you will have to remove somehow.

If it's BIP animations, then simply combine them in the 'Mixer' in Max (select the root skeletal node and click on the Motion tab). I don't know if Alin's exporter supports multiple animations in one file, but Pandasoft's exporter most certainly does. Doing it this way means you don't have to post-combine them if you use the animations on another character.

as i suspected, and as Steve's post confirms, you can put multiple animations in a single .x file. another way to do it is to use text format, and simply cut and paste them together using a text editor. double check your .xof file format description in the directx docs to make sure you get the syntax right. .x files tend to nest stuff using lots of "{ }'s" as i recall. since i don't use the dx animation system anymore: out of curiosity, what do you use now?

I am using my own mesh format and animation system, since i always wanted full control over things and i really hated dx a.s. guts. It is working but not perfect yet (having some problems with transitions between animations... long story) but i am working really hard and i will resolve my issues. Also i have created a 3ds max utility plugin using their IGame interface, with which i was surprised how easy it was to use, to export animation (and other) data that i need.

Okay, trying to open and save a .x file, and the following code fails, getting D3DERR_INVALIDCALL when I call D3DXSaveMeshHierarchyToFile():

LPD3DXANIMATIONCONTROLLER animController;
hr = D3DXLoadMeshHierarchyFromX(filePath, D3DXMESH_MANAGED, device, memoryAllocator, NULL, &frameRoot, &animController);
assert(SUCCEEDED(hr)); // OK

hr = D3DXSaveMeshHierarchyToFile("character_with_all_animations.x", D3DXF_FILEFORMAT_TEXT, frameRoot, animController, NULL);
assert(SUCCEEDED(hr)); // <---- ERROR: Getting D3DERR_INVALIDCALL here

Edited by Medo3337

I have just tried the same thing, saving immediately after the load function, and it succeeded for me. Enable the dx debug runtime to examine any error logs you might have in the output window. Do you know how?

Edited by belfegor

@belfegor: Notice that "memoryAllocator" is associated with the MeshHierarchy class.
I'm running debug mode and don't see any error message in the output window. Here is the main.cpp source code:

// main.cpp
#include "windows.h"
#include "d3d9.h"
#include "d3dx9.h"
#include "MeshHierarchy.h"
#include "assert.h"

#pragma comment(lib, "d3d9.lib")
#pragma comment(lib, "d3dx9.lib")

LPDIRECT3DDEVICE9 device;
LPDIRECT3D9 d3d;

int main()
{
    d3d = Direct3DCreate9(D3D_SDK_VERSION);

    D3DPRESENT_PARAMETERS d3dpp;
    ZeroMemory(&d3dpp, sizeof(d3dpp));
    d3dpp.Windowed = TRUE;
    d3dpp.BackBufferWidth = 800;
    d3dpp.BackBufferHeight = 600;
    d3dpp.EnableAutoDepthStencil = TRUE;
    d3dpp.BackBufferFormat = D3DFMT_X8R8G8B8;
    d3dpp.AutoDepthStencilFormat = D3DFMT_D16;
    d3dpp.hDeviceWindow = GetConsoleWindow();
    d3dpp.BackBufferCount = 1;

    HRESULT hr;
    hr = d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, GetConsoleWindow(), D3DCREATE_HARDWARE_VERTEXPROCESSING, &d3dpp, &device);
    assert(SUCCEEDED(hr));

    CMeshHierarchy *memoryAllocator = new CMeshHierarchy;
    LPD3DXFRAME frameRoot;
    LPD3DXANIMATIONCONTROLLER animController;

    hr = D3DXLoadMeshHierarchyFromX("character.x", D3DXMESH_MANAGED, device, memoryAllocator, NULL, &frameRoot, &animController);
    assert(SUCCEEDED(hr));

    hr = D3DXSaveMeshHierarchyToFile("character_with_all_animations.x", D3DXF_FILEFORMAT_TEXT, frameRoot, animController, NULL);
    if (hr == D3DERR_INVALIDCALL)
        MessageBox(NULL, "Invalid D3D Call", "Error", MB_ICONERROR | MB_OK);
    assert(SUCCEEDED(hr));
}

You have not enabled the DX debug runtime!

1. You need to link with the debug dx lib: d3dx9d.lib (notice the d)
2. Set this at the top of your cpp (before including any dx headers): #define D3D_DEBUG_INFO 1
3. Set these options in the dx control panel (dxcpl.exe); you can find it in the DX SDK utilities folder:
4. Run your app with debugging and close it
5. Examine the output window for DX error/warning logs

I fixed this problem; now I have another small problem: I couldn't combine the animations together:

vector<LPCSTR> filePath;
filePath.push_back("walk.x");
filePath.push_back("die.x");

LPD3DXFRAME frameRoot;
LPD3DXANIMATIONCONTROLLER newController;
DWORD numAllAnimations = 0;

for(UINT i = 0; i < filePath.size(); i++)
{
    LPD3DXANIMATIONCONTROLLER animController;
    hr = D3DXLoadMeshHierarchyFromX(filePath[i], D3DXMESH_MANAGED, device, memoryAllocator, NULL, &frameRoot, &animController);
    assert(SUCCEEDED(hr));

    numAllAnimations += animController->GetNumAnimationSets();

    hr = animController->CloneAnimationController(animController->GetMaxNumAnimationOutputs(),
        numAllAnimations,
        animController->GetMaxNumTracks(),
        animController->GetMaxNumEvents(),
        &newController);
    assert(SUCCEEDED(hr));

    for(UINT j = 0; j < animController->GetNumAnimationSets(); j++)
    {
        LPD3DXANIMATIONSET set;
        animController->GetAnimationSet(j, &set);
        newController->RegisterAnimationSet(set);
    }
}

delete memoryAllocator; memoryAllocator = 0;

hr = D3DXSaveMeshHierarchyToFile("G:\\character_with_all_animations.x", D3DXF_FILEFORMAT_TEXT, frameRoot, newController, NULL);
if (hr == D3DERR_INVALIDCALL)
    MessageBox(NULL, "Invalid D3D Call", NULL, MB_ICONERROR | MB_OK);
assert(SUCCEEDED(hr));

I can only get one of the animations to work, not both.
To make it easy for me to explain with little code as possible i am gonna assume you have only one animation set in each x file, then you could improve it later for multiple animations per x file when you got this working: vector<LPCSTR> filePath; filePath.push_back("walk.x"); filePath.push_back("die.x"); filePath.push_back("idle.x"); filePath.push_back("crouch.x"); filePath.push_back("wave.x"); filePath.push_back("jump.x"); DWORD numAllAnimations = filePath.size(); LPD3DXFRAME frameRoot, othersFrameRoot; LPD3DXANIMATIONCONTROLLER newController; LPD3DXANIMATIONCONTROLLER animController; memoryAllocator, NULL, &frameRoot, &animController); animController->CloneAnimationController(animController->GetMaxNumAnimationOutputs(), numAllAnimations, animController->GetMaxNumTracks(), animController->GetMaxNumEvents(), &newController); for(UINT i = 1; i < filePath.size(); i++) // note that we skip first { memoryAllocator, NULL, &othersFrameRoot, &animController); LPD3DXANIMATIONSET set; animController->GetAnimationSet(0, &set); newController->RegisterAnimationSet(set); } delete memoryAllocator; memoryAllocator=0; hr = D3DXSaveMeshHierarchyToFile("G:\\character_with_all_animations.x", D3DXF_FILEFORMAT_TEXT, frameRoot, newController, NULL); I am not sure if CloneAnimationCotroller will copy animation set in new controller from one that with cloned from, since MSDN docs are incomplete and obscure, so you need to test that. As always check all functions for failure, i removed it for readability. Also there is some memory leaks here but it doesn't matter much you can fix that later until you got this working first. Edited by belfegor 1 Share on other sites @belfegor: Working now! some animations like "die" should not loop, how do I stop it from looping? I want to play "some" animations just once. "walk", "run" animations working correctly since they should loop, however "die" is looping while it should not. 0 Share on other sites Is it possible that I determine that the animation should play once or loop from 3Ds Max? I never use D3DXCreateKeyframedAnimationSet() in my code, instead, I'm getting the existing animation set using GetAnimationSet() and assigning it to the track. 0 Share on other sites Is it possible that I determine that the animation should play once or loop from 3Ds Max? It doesn't work like that. Just stop updating your bone matrices when "die" animation finishes, this way you also save yourself a dozen matrix multiplies and what not. Edited by belfegor 0 Share on other sites Okay, so how do I know that the animation finished so I can stop updating the bone matrices? Or maybe there is a way to set D3DXPLAY_ONCE, how do I update the following code to set D3DXPLAY_ONCE? UINT animationIndex = 0; // Animation that I want to play LPD3DXANIMATIONSET set; animController->GetAnimationSet(animationIndex, &set); 0 Share on other sites Okay, so how do I know that the animation finished so I can stop updating the bone matrices? Or maybe there is a way to set D3DXPLAY_ONCE, how do I update the following code to set D3DXPLAY_ONCE? This is why i hate ("hate" is even light word for what i think about) dx animation system as there is no convenient way to do this, but instead you need to do stupid hoops and hacks to setup such a simple thing. From what i gathered from Frank Luna animation book you need to set a callback and create compressed keyframed animation set, should look something like this: ... 
ID3DXKeyframedAnimationSet* animSetTemp = 0;
_animCtrl->GetAnimationSet(dieAnimationSetInx, (ID3DXAnimationSet**)&animSetTemp);

// Compress it.
ID3DXBuffer* compressedInfo = 0;
animSetTemp->Compress(D3DXCOMPRESS_DEFAULT, 0.5f, 0, &compressedInfo);

UINT numCallbacks = 1;
D3DXKEY_CALLBACK key;
double ticks = animSetTemp->GetSourceTicksPerSecond();
key.Time = animSetTemp->GetPeriod() * ticks;
key.pCallbackData = (void*)&MyCallbackData;

ID3DXCompressedAnimationSet* compressedAnimSet = 0;
D3DXCreateCompressedAnimationSet(animSetTemp->GetName(),
    animSetTemp->GetSourceTicksPerSecond(),
    animSetTemp->GetPlaybackType(),
    compressedInfo, numCallbacks, &key, &compressedAnimSet);
compressedInfo->Release();

// Remove the old (non compressed) animation set.
_animCtrl->UnregisterAnimationSet(animSetTemp);
animSetTemp->Release();

// Add the new (compressed) animation set.
_animCtrl->RegisterAnimationSet(compressedAnimSet);

// Hook up the animation set to the first track.
_animCtrl->SetTrackAnimationSet(0, compressedAnimSet);
compressedAnimSet->Release();

Hmm, the code is throwing an "Access violation" exception, but the purpose of the code is to set a callback for the animations. Why can't I simply set the flag D3DXPLAY_ONCE? I believe it can be done by simply setting D3DXPLAY_ONCE somewhere instead of doing callbacks.

Here is what I tried to do to set the D3DXPLAY_ONCE flag (not working though):

ID3DXKeyframedAnimationSet* set = 0;
animController->GetAnimationSet(animationIndex, (ID3DXAnimationSet**)&set);
D3DXCreateKeyframedAnimationSet(set->GetName(),
    set->GetSourceTicksPerSecond(),
    D3DXPLAY_ONCE,
    set->GetNumAnimations(),
    set->GetNumCallbackKeys(),
    NULL, &set);
// Code here to set the track and play the animation...

From what i can tell, D3DXCreateKeyframedAnimationSet only outputs an ID3DXKeyframedAnimationSet; it does not copy the SRT keys from the existing animation set, you need to do that yourself. Note that i have not done this before, so i am going to guess with help from the MSDN documentation:

ID3DXAnimationSet* dieAnimationSet;
ID3DXKeyframedAnimationSet* dieKeyFramedAnimationSet;
animController->GetAnimationSet(animationIndex, &dieAnimationSet);

// creates "empty" keyframed-animation-set
D3DXCreateKeyframedAnimationSet(dieAnimationSet->GetName(),
    dieAnimationSet->GetSourceTicksPerSecond(),
    D3DXPLAY_ONCE,
    dieAnimationSet->GetNumAnimations(),
    dieAnimationSet->GetNumCallbackKeys(),
    NULL, &dieKeyFramedAnimationSet);

// now you need to copy SRT keys data from dieAnimationSet to dieKeyFramedAnimationSet
// some things here i am not sure how to obtain, as the documentation is not clear enough
D3DXKEY_VECTOR3 scaleKeys[ numKeys ];
D3DXKEY_QUATERNION rotKeys[ numKeys ];
D3DXKEY_VECTOR3 translKeys[ numKeys ];

// for each of numKeys at "periodic/local/whatever_time_they_mean_by_this"
{
    D3DXVECTOR3 scale, transl;
    D3DXQUATERNION rot;
    dieAnimationSet->GetSRT( timePos, 0, &scale, &rot, &transl );
    scaleKeys[ i ].Time = timePos;  scaleKeys[ i ].Value = scale;
    rotKeys[ i ].Time = timePos;    rotKeys[ i ].Value = rot;
    translKeys[ i ].Time = timePos; translKeys[ i ].Value = transl;
}

DWORD newAnimIndex;
dieKeyFramedAnimationSet->RegisterAnimationSRTKeys( "animation_name",
    numKeys, numKeys, numKeys,
    scaleKeys, rotKeys, translKeys, &newAnimIndex);

// then here unregister the old dieAnimationSet and register the new dieKeyFramedAnimationSet with the animation controller

Now look how stupid this is for setting such a small and trivial thing.
Edited by belfegor

@belfegor: I'm new to D3DX animation, so I will need a little more help. And "periodic/local/whatever_time_they_mean_by_this": I don't understand this part. How do I get numKeys? The following functions don't exist:

dieAnimationSet->GetSourceTicksPerSecond()
dieAnimationSet->GetNumCallbackKeys()

I am sorry, i just copied your code from above for that function and didn't look if it is correct. I commented in the code that i don't know how to obtain some things. You could put it like this for a test:

D3DXCreateKeyframedAnimationSet(dieAnimationSet->GetName(),
    30.0, // for this, check in 3ds max and/or the exporter options which value was set
    D3DXPLAY_ONCE,
    1, // assume 1 animation in set
    0, // assume no callback keys, otherwise i don't know how to obtain them
    NULL, &dieKeyFramedAnimationSet);

Then for numKeys you could try (my guess is that GetPeriod() is in seconds?):

double timeStep = dieAnimationSet->GetPeriod() / 30.0;
DWORD numKeys = std::floor(dieAnimationSet->GetPeriod() * 30);
double timePos = 0.0;
for(DWORD i = 0; i < numKeys; ++i, timePos += timeStep)
{
    D3DXVECTOR3 scale, transl;
    D3DXQUATERNION rot;
    dieAnimationSet->GetSRT( timePos, 0, &scale, &rot, &transl );
    ...
}

Maybe there is an easier way but i don't see one.

Edited by belfegor

Now, the animation is not playing anymore :/ Here is the code:

LPD3DXANIMATIONSET tempSet = 0;
LPD3DXKEYFRAMEDANIMATIONSET set;
m_animController->GetAnimationSet(index, &tempSet);

D3DXCreateKeyframedAnimationSet(
    tempSet->GetName(),
    30.0, // for this check in 3ds max and/or exporter options on which value it was set
    D3DXPLAY_ONCE,
    1, // assume 1 animation in set
    0, // assume no callback keys, otherwise i don't know how to obtain them
    NULL, &set);

double timeStep = tempSet->GetPeriod() / 30.0;
DWORD numKeys = std::floor(tempSet->GetPeriod() * 30);

// copy SRT keys data from the existing animation set to the new keyframed one
LPD3DXKEY_VECTOR3 scaleKeys = new D3DXKEY_VECTOR3[numKeys]();
LPD3DXKEY_QUATERNION rotKeys = new D3DXKEY_QUATERNION[numKeys]();
LPD3DXKEY_VECTOR3 translKeys = new D3DXKEY_VECTOR3[numKeys]();

double timePos = 0.0;
for(DWORD i = 0; i < numKeys; ++i, timePos += timeStep)
{
    D3DXVECTOR3 scale, transl;
    D3DXQUATERNION rot;
    tempSet->GetSRT( timePos, 0, &scale, &rot, &transl );
    scaleKeys[ i ].Time = timePos;  scaleKeys[ i ].Value = scale;
    rotKeys[ i ].Time = timePos;    rotKeys[ i ].Value = rot;
    translKeys[ i ].Time = timePos; translKeys[ i ].Value = transl;
}

DWORD newAnimIndex;
set->RegisterAnimationSRTKeys( tempSet->GetName(),
    numKeys, numKeys, numKeys,
    scaleKeys, rotKeys, translKeys, &newAnimIndex);

m_animController->UnregisterAnimationSet(tempSet);
tempSet->Release();
m_animController->RegisterAnimationSet(set);
{}
# Frames for the solution of operator equations in Hilbert spaces with fixed dual pairing

Balazs, Peter and Harbrecht, Helmut. (2018) Frames for the solution of operator equations in Hilbert spaces with fixed dual pairing. Preprints Fachbereich Mathematik, 2018 (08).

Official URL: https://edoc.unibas.ch/70170/

## Abstract

For the solution of operator equations, Stevenson introduced in [52] a definition of frames, where a Hilbert space and its dual are not identified. This means that the Riesz isomorphism is not used as an identification, which, for example, does not make sense for the Sobolev spaces $H_0^1 (Ω)$ and $H^{−1} (Ω)$. In this article, we are going to revisit this concept of Stevenson frames and introduce it for Banach spaces. This is equivalent to $l^2$-Banach frames. It is known that, if such a system exists, by defining a new inner product and using the Riesz isomorphism, the Banach space is isomorphic to a Hilbert space. In this article, we deal with the contrasting setting, where $\mathcal{H}$ and $\mathcal{H}^\prime$ are not identified and equivalent norms are distinguished, and show that in this setting the investigation of $l^2$-Banach frames makes sense.

Faculties and Departments: 05 Faculty of Science > Departement Mathematik und Informatik > Mathematik > Computational Mathematics (Harbrecht); 12 Special Collections > Preprints Fachbereich Mathematik.
Document type: Preprint, Universität Basel, English. DOI: 10.5451/unibas-ep70170
{}
# Embedding Complexity and Discrete Optimization I: A New Divide and Conquer Approach to Discrete Optimization

D. Cieslik, A. Dress, K. T. Huber, V. Moulton

Research output: Contribution to journal › Article › peer-review

## Abstract

In this paper, we introduce a new and quite natural way of analyzing instances of discrete optimization problems in terms of what we call the embedding complexity of an associated more or (sometimes also) less canonical embedding of the (generally vast) solution space $R$ of a given problem into a product $\Pi_{e \in E} P_e$ of (generally many small) factor sets $P_e\ (e \in E)$, so that the score $s(\pi)$ of a solution $\pi$, interpreted as an element $\pi = (\pi_e)_{e\in E} \in \Pi_{e\in E} P_e$, can be computed additively by summing over the local scores $s_e(\pi_e)$ of all of its components $\pi_e$, for some appropriate score functions $s_e\ (e \in E)$ defined on the various factor sets $P_e$. This concept arises naturally within the context of a general Divide & Conquer strategy for solving discrete optimization problems using dynamic-programming procedures. Relations with the treewidth concept and with linear-programming approaches to discrete optimization, as well as ways to exploit our approach to computing Boltzmann statistics for discrete optimization problems, are indicated. In further papers, we will discuss these relations in more detail, relate embedding complexity to other concepts of data representation such as PQ-trees, and apply the ideas developed here towards designing schemes for solving specific optimization problems, e.g., the Steiner problem for graphs.

Original language: English. Annals of Combinatorics 6 (3-4), pages 257-273 (17 pages). Published September 2002.
{}
# Change the chapter font in TOC without using “tocloft” package

As using tocloft in the book class produces some unexpected side effects, I wonder how to change the chapter font (e.g. 1 Introduction) to sans serif without using any other packages?

For example, change the highlighted text to sans serif font.

MWE:

\documentclass{book}
\usepackage[UKenglish]{babel}
\usepackage{color}
\usepackage[lining]{libertine}
\usepackage[T1]{fontenc}
\definecolor{DarkBlue}{RGB}{0,51,153}
\usepackage{titlesec}

\titleformat{\chapter}[display]
  {\LARGE \color{DarkBlue}}
  {\vspace{-1em} \flushright \normalsize \color{black}
   \MakeUppercase{\bfseries \sffamily \chaptertitlename} \hspace{1em}
   {\fontsize{70}{70}\selectfont \color{black} \sffamily \thechapter}}
  {20pt}
  {\bfseries \sffamily \LARGE}
\titleformat{\section}
  {\Large \bfseries \sffamily \color{DarkBlue}}
  {\thesection}{1em}{}
\titleformat{\subsection}
  {\large \bfseries \sffamily \color{DarkBlue}}
  {\thesubsection}{1em}{}
\titleformat{\subsubsection}
  {\normalsize \sffamily \bfseries \color{DarkBlue}}
  {\thesubsubsection}{1em}{}
%\slshape

\setcounter{tocdepth}{3} % table of contents depth
\setcounter{secnumdepth}{3}

\usepackage{lipsum}

\begin{document}
\frontmatter
\tableofcontents
\chapter{Preface}
\lipsum[1-3]
\mainmatter
\chapter{Introduction}
\section{Foo}
\lipsum[1-12]
\chapter{How to make nuclear bomb}
\section{Equations}
\lipsum[1-12]
\backmatter
\chapter{Appendix~A}
\lipsum[1-12]
\chapter{Appendix~B}
\lipsum[1-12]
\end{document}

-

Without using any package or also without providing a minimal working example (MWE)? – lockstep Feb 5 '13 at 16:29
The simplest manner would be to redefine \l@chapter; the document class must be known before saying more. – jfbu Feb 5 '13 at 16:30
What is the unexpected side effect you are seeing? – Peter Wilson Feb 5 '13 at 19:06
@PeterWilson, the spacing of the TOC is changed, as well as the blank page for the LOF/LOT with the book class – KOF Feb 5 '13 at 19:54
I don't think that the spacing of the TOC is changed. The manual does say that with chapters (e.g., book class) the LOF and LOT do not necessarily start on new pages. If you want that then add \clearpage or \cleardoublepage before calling for the LOF or LOT. – Peter Wilson Feb 5 '13 at 20:49
\documentclass{book} \usepackage{lipsum} \makeatletter \def\l@chapter#1#2{% \ifnum \c@tocdepth >\m@ne \addpenalty {-\@highpenalty }\vskip 1.0em \@plus\p@ \setlength \@tempdima {1.5em}\begingroup \parindent \z@ \rightskip\@pnumwidth \parfillskip -\@pnumwidth \leavevmode \bfseries \advance \leftskip \@tempdima \hskip -\leftskip {\sffamily #1}\nobreak \hfil \nobreak \hb@xt@ \@pnumwidth {\hss #2}\par \penalty \@highpenalty \endgroup \fi} \makeatother \begin{document} \tableofcontents \chapter{One} \section{A} \lipsum[1] \section{B} \lipsum[2] \chapter{Two} \section{C} \lipsum[3] \section{D} \lipsum[4] \end{document} And with the code provided in the question, one obtains: - It works perfectly! – KOF Feb 5 '13 at 16:47 rather than redefining \l@chapter the way I did, one could save the old definition, and encapsulate it in a new one with the first parameter #1 replaced by {\sffamily #1}. Both methods are compatible with hyperref. Note though that I had to put \sffamily #1 in a group to limit the scope of the font change. I think this will be in 99.9% of the cases with no after-effect elsewhere. – jfbu Feb 5 '13 at 16:54 @KOF: please not that if you change your document class to memoir or scrbook then the proposed method is not the good one. It would be better to do the alternative I mentioned in my comment. – jfbu Feb 5 '13 at 16:57 only original document classes are used for my writting :), because I don't know what kind of weird settings in these classes. – KOF Feb 5 '13 at 17:22 memoir actually has most (if not all, I don't know) of the tocloft commands and functionalities. Both memoir and scrbook are well documented: memman and scrguien – jfbu Feb 5 '13 at 17:31
{}
# insert equation in google slides

March 1, 2018 By Matt

Google Slides is a powerful tool for creating and viewing presentations on the web and on iOS and Android smart devices. Unfortunately, there is no equation editor for Google Slides, so to insert equations you have to use another website or application. Below are the workarounds I use; I will update this as I find more options.

Option 1. Use CodeCogs (a free website)

This first method is to use the CodeCogs online equation editor. First, you type your equation into the yellow box, using LaTeX code. The equation preview appears below the yellow box, and the site gives you a GIF image that you can download and insert into your Google presentation.

Option 2. Use a table

For simple fractions and mixed numbers you can approximate the layout with a table. In the ribbon at the top click on "Insert", then "Table". For a fraction choose a 1 by 2 table; for a mixed number choose a 2 by 2 table. Adjust the width of the table as needed by dragging the side border.

Option 3. Insert special characters

Many symbols can be typed with an Alt key code, and this works in any text editor, not just Google Docs. For example, the Alt key code for the division (÷) symbol is 0247: place the cursor where you need the symbol to appear, press and hold the Alt key, type "0247", then release the Alt key. The division symbol will appear. You can also insert special characters in your documents and presentations without having to remember all those Alt codes by using the easy-to-use character insertion tool in Google Docs: open the "Insert" tab and then click the "Special characters" option. It offers a myriad of symbols, characters, and languages; for instance, when the Special Characters dialog opens, click the drop-down box on the right and choose "Superscript" from the list of choices to get superscript text, numbers, or symbols.
Option 4. The Google Docs equation editor

In Google Docs there is an equation editor that allows users to insert math symbols into the document, and you can build the equation there and copy it over. Open a document in Google Docs, click where you want to insert the equation, then click Insert > Equation. Select the symbols to add from one of these menus: Greek alphabet; miscellaneous operations; relations; math operators; arrows. Add numbers or substitution variables in the text box. To insert a square root, just type \sqrt and then press Enter or Tab to insert the symbol; then type the number or expression you want under the square root. (Google Docs doesn't handle matrices as well as Word does, though.)

Option 5. MathType and EquatIO add-ons

Write and edit math equations and chemical formulas in your documents and presentations with MathType for Google Docs and Google Slides: put equations in Google Docs or Slides with the power of LaTeX and the simplicity of a graphical editor. To open MathType to write an equation, choose either "Insert/edit math equation" or "Insert/edit chem formula" from the Add-ons menu. You can copy and paste formulas created with MathType from Google Docs to Google Slides and vice versa, and the equations remain editable. Note that Google Slides and Drawings do not support add-ons yet (please Google!), which is why the copy-and-paste route from Docs is needed.

EquatIO for Google is an easy-to-use extension for Google Chrome: math made easy. Forget about having to know LaTeX to write math, and if you are working on a touch device you can handwrite your equations.

Option 6. PowerPoint

One of the easiest methods of adding equations to slides is to insert a preformatted equation directly via "Equation", which PowerPoint (e.g. PowerPoint 2013) supports. Since Google Slides lacks this, the options above have to stand in for it.
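As a concrete example for the CodeCogs route (my own sample; any LaTeX expression works), pasting the following into the yellow box produces a quadratic-formula image you can download and drop onto a slide:

x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}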
Help you find how high this rocket went is 0247, pictures, or even videos all...
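To illustrate the CodeCogs route, here is a minimal Python sketch that builds a rendering URL for a LaTeX expression. The latex.codecogs.com endpoint and png.latex path are assumptions based on the service's commonly documented URL scheme, so check the current CodeCogs docs before relying on them:

```python
from urllib.parse import quote

def codecogs_png_url(latex: str) -> str:
    """Build an image URL that renders the given LaTeX via CodeCogs.

    NOTE: the endpoint below is an assumption; verify it against the
    current CodeCogs documentation.
    """
    return "https://latex.codecogs.com/png.latex?" + quote(latex)

# Example: a square root to drop into a slide via Insert > Image > By URL.
print(codecogs_png_url(r"\sqrt{x^2 + y^2}"))
```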
{}
# When does every $\infty$-localization correspond to a Bousfield localization? Let $\mathcal{M}$ be a model category presenting an $\infty$-category $\mathcal{C}$. I believe that every left Bousfield localization $\widetilde{\mathcal{M}}$ of $\mathcal{M}$ corresponds to a reflective $\infty$-subcategory $\widetilde{\mathcal{C}}$ in $\mathcal{C}$ -- perhaps some hypotheses are needed for this. What about the converse? Suppose I have a reflective $\infty$-subcategory $\widetilde{\mathcal{C}} \subseteq \mathcal{C}$. Under what conditions is this induced by a left Bousfield localization $\widetilde{\mathcal{M}} \subseteq \mathcal{M}$? Does it suffice that every left Bousfield localization exist in $\mathcal{M}$? Is this a necessary condition? Now is probably the time for me to admit that I don't even know an example of a model category that doesn't admit all left Bousfield localizations -- what is an example? • The reference in Higher Topos Theory for the positive results is Section A.3.7. – AAK Jul 20 '16 at 4:08 • Thanks! Apparently it's A.3.7.8 to be precise. Of course this is in the locally presentable context. I don't know why I keep expecting more to be known outside this context... – Tim Campion Jul 20 '16 at 6:37 You definitely need $M$ to be combinatorial for these types of statements. I believe Lurie has shown that every accessible localization of a presentable infinity category can be expressed as a left Bousfield localization. See chapter 5 section 5 of HTT. He uses strongly reflective to mean it comes from an accessible localization. Without accessibility, it breaks down, as the next example shows. If, for some reason, every left Bousfield localization of $M$ was known to exist, then I would expect that any localization of the $\infty$-category of $M$ comes from a localization of $M$. I'd prove this using the universal principle. Since the cofibrations are the same in any left localization, only the weak equivalences matter, and under the assumption about localizations existing, every class of weak equivalences corresponds to a left localization. But this assumption is ridiculously strong, and certainly not necessary for the result you want, as chapter 5 of HTT shows.
{}
# Higher Order Linear Homogeneous Differential Equations with Variable Coefficients

The linear homogeneous equation of the $n$th order has the form

$y^{(n)} + a_1(x)y^{(n-1)} + \cdots + a_{n-1}(x)y' + a_n(x)y = 0,$

where the coefficients $a_1(x), a_2(x), \ldots, a_n(x)$ are continuous functions on some interval $[a,b].$ The left side of the equation can be written in abbreviated form using the linear differential operator $L$:

$Ly(x) = 0,$

where $L$ denotes the set of operations of differentiation, multiplication by the coefficients $a_i(x),$ and addition. The operator $L$ is linear, and therefore has the following properties:

1. $L[y_1(x) + y_2(x)] = L[y_1(x)] + L[y_2(x)],$
2. $L[Cy(x)] = CL[y(x)],$

where $y_1(x), y_2(x)$ are arbitrary $n-1$ times differentiable functions and $C$ is any number.

It follows from the properties of the operator $L$ that if the functions $y_1, y_2, \ldots, y_n$ are solutions of the homogeneous differential equation of the $n$th order, then the function of the form

$y(x) = C_1 y_1 + C_2 y_2 + \cdots + C_n y_n,$

where $C_1, C_2, \ldots, C_n$ are arbitrary constants, will also satisfy this equation. The last expression is the general solution of the homogeneous differential equation if the functions $y_1, y_2, \ldots, y_n$ form a fundamental system of solutions.

### Fundamental System of Solutions

The set of $n$ linearly independent particular solutions $y_1, y_2, \ldots, y_n$ is called a fundamental system of the homogeneous linear differential equation of the $n$th order. The functions $y_1, y_2, \ldots, y_n$ are linearly independent on the interval $[a,b]$ if the identity

$\alpha_1 y_1 + \alpha_2 y_2 + \cdots + \alpha_n y_n \equiv 0$

holds only when $\alpha_1 = \alpha_2 = \cdots = \alpha_n = 0.$ If the identity holds for numbers $\alpha_1, \alpha_2, \ldots, \alpha_n$ that are not simultaneously $0,$ the functions are linearly dependent.

To test functions for linear independence it is convenient to use the Wronskian:

$W(x) = W_{y_1, y_2, \ldots, y_n}(x) = \begin{vmatrix} y_1 & y_2 & \cdots & y_n \\ y'_1 & y'_2 & \cdots & y'_n \\ \cdots & \cdots & \cdots & \cdots \\ y_1^{(n-1)} & y_2^{(n-1)} & \cdots & y_n^{(n-1)} \end{vmatrix}.$

Let the functions $y_1, y_2, \ldots, y_n$ be $n-1$ times differentiable on the interval $[a,b].$ If these functions are linearly dependent on $[a,b],$ then the following identity holds:

$W(x) \equiv 0.$

Conversely, if these functions are solutions of such a linear homogeneous equation and are linearly independent on $[a,b],$ we have

$W(x) \ne 0$

at every point of the interval. The fundamental system of solutions uniquely defines a linear homogeneous differential equation.
In particular, the fundamental system $y_1, y_2, y_3$ defines a third-order equation, which is expressed through a determinant as follows:

$\begin{vmatrix} y_1 & y_2 & y_3 & y \\ y'_1 & y'_2 & y'_3 & y' \\ y''_1 & y''_2 & y''_3 & y'' \\ y'''_1 & y'''_2 & y'''_3 & y''' \end{vmatrix} = 0.$

The expression for the differential equation of the $n$th order can be written similarly:

$\begin{vmatrix} y_1 & y_2 & \cdots & y_n & y \\ y'_1 & y'_2 & \cdots & y'_n & y' \\ \cdots & \cdots & \cdots & \cdots & \cdots \\ y_1^{(n)} & y_2^{(n)} & \cdots & y_n^{(n)} & y^{(n)} \end{vmatrix} = 0.$

### Liouville's Formula

Suppose that the functions $y_1, y_2, \ldots, y_n$ form a fundamental system of solutions for a differential equation of the $n$th order, and that the point $x_0$ belongs to the interval $[a,b].$ Then the Wronskian is determined by Liouville's formula:

$W(x) = W(x_0)\,e^{-\int\limits_{x_0}^x a_1(t)\,dt},$

where $a_1$ is the coefficient of the derivative $y^{(n-1)}$ in the differential equation. Here we assume that the coefficient $a_0(x)$ of $y^{(n)}$ in the differential equation is equal to $1.$ Otherwise, Liouville's formula takes the form:

$W(x) = W(x_0)\,e^{-\int\limits_{x_0}^x \frac{a_1(t)}{a_0(t)}\,dt}, \quad a_0(t) \ne 0, \quad t \in [a,b].$

### Reduction of Order of a Homogeneous Linear Equation

The order of a linear homogeneous equation

$Ly(x) = y^{(n)} + a_1(x)y^{(n-1)} + \cdots + a_{n-1}(x)y' + a_n(x)y = 0$

can be reduced by one by the substitution $y' = yz.$ Unfortunately, such a substitution usually does not simplify the solution, because the new equation in the variable $z$ becomes nonlinear. If a particular solution $y_1$ is known, however, the order of the differential equation can be reduced (while maintaining its linearity) by the substitution

$y = y_1 z, \quad z' = u.$

In general, if we know $k$ linearly independent particular solutions, the order of the equation can be reduced by $k$ units.

## Solved Problems

### Example 1

Show that the functions $x,$ $\sin x,$ $\cos x$ are linearly independent.

### Example 2

Show that the functions $x, x^2, x^3, x^4$ form a linearly independent system.

### Example 3

Construct the differential equation that is determined by the fundamental system of functions $1, x^2, e^x.$

### Example 4

Find the general solution of the equation $(2x-3)y''' - (6x-7)y'' + 4xy' - 4y = 0,$ if the particular solutions $y_1 = e^x,$ $y_2 = e^{2x}$ are known.
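As a quick check of Example 1, here is a short SymPy sketch; this is my own illustrative addition, not part of the original page:

```python
from sympy import symbols, sin, cos, simplify, wronskian

x = symbols('x')

# Wronskian of the three functions from Example 1.
W = wronskian([x, sin(x), cos(x)], x)

print(simplify(W))  # -x, which is not identically zero,
                    # so the functions are linearly independent
```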
{}
# Factor Trinomials

## How to Factor Trinomials

1. Compare the trinomial to the general form $$ax^2+bx+c$$ and identify $$a$$, $$b$$, and $$c$$.
2. Find two numbers that add up to $$b$$ and multiply to $$ac$$.
3. Rewrite the middle term $$bx$$ as two terms and factor by grouping.
  - As shown below, this step can be skipped if the leading coefficient is 1.

## Examples

Factor the trinomial: $x^2+2x-48$

What are the coefficients of the trinomial? When I compare $$\blue x^2+2x-48$$ to $$ax^2+bx+c$$, I can see that… $\blue a=1$ $\blue b=2$ $\blue c=-48$

Which two numbers add up to $$b$$ and multiply to $$ac$$? I need to find two numbers that multiply to $$\blue -48$$ and add to $$\blue 2$$ because… $\blue ac = (1)(-48)=-48$ $\blue b = 2$

The factor pairs for $$\blue -48$$ are…

$$\blue(-1)(48)$$ or $$\blue(1)(-48)$$
$$\blue(-2)(24)$$ or $$\blue(2)(-24)$$
$$\blue(-3)(16)$$ or $$\blue(3)(-16)$$
$$\blue(-4)(12)$$ or $$\blue(4)(-12)$$
$$\blue(-6)(8)$$ or $$\blue(6)(-8)$$

When I look at all of these pairs, the only two numbers that add up to $$\blue 2$$ are $$\blue -6+8$$.

What is the factored form of $$\blue x^2+2x-48$$? Using the numbers I found in Step 2, I can rewrite the polynomial as… $x^2{\blue -6x+8x}-48$

Then I can factor by grouping… First, I will split the terms of the polynomial into two groups: ${\blue (}x^2-6x{\blue )}+{\blue (}8x-48{\blue )}$

Then I will factor out the greatest common factor from each group. The GCF of the first group is $$\blue x$$. The GCF of the second group is $$\blue 8$$. ${\blue x(}x-6{\blue )}+{\blue 8(}x-6{\blue )}$

The expressions in the parentheses match so I can factor them out. ${\blue(}x-6{\blue )}{\blue (x+8)}$

The factored form of $$\blue x^2+2x-48$$ is… $\blue (x-6)(x+8)$

Factor the trinomial: $6x^2-37x+56$

What are the coefficients of the trinomial? When I compare $$\green 6x^2-37x+56$$ to $$ax^2+bx+c$$, I can see that… $\green a=6$ $\green b=-37$ $\green c=56$

Which two numbers add up to $$b$$ and multiply to $$ac$$? I need to find two numbers that multiply to $$\green 336$$ and add to $$\green -37$$ because… $\green ac = (6)(56)=336$ $\green b = -37$

The factor pairs for $$\green 336$$ are…

$$\green(1)(336)$$ or $$\green(-1)(-336)$$
$$\green(2)(168)$$ or $$\green(-2)(-168)$$
$$\green(3)(112)$$ or $$\green(-3)(-112)$$
$$\green(4)(84)$$ or $$\green(-4)(-84)$$
$$\green(6)(56)$$ or $$\green(-6)(-56)$$
$$\green(7)(48)$$ or $$\green(-7)(-48)$$
$$\green(8)(42)$$ or $$\green(-8)(-42)$$
$$\green(12)(28)$$ or $$\green(-12)(-28)$$
$$\green(14)(24)$$ or $$\green(-14)(-24)$$
$$\green(16)(21)$$ or $$\green(-16)(-21)$$

When I look at all of these pairs, the only two numbers that add up to $$\green -37$$ are $$\green -16+(-21)$$.

What is the factored form of $$\green 6x^2-37x+56$$? Using the numbers I found in Step 2, I can rewrite the polynomial as… $6x^2{\green -16x-21x}+56$

Then I can factor by grouping… First, I will split the terms of the polynomial into two groups: ${\green (}6x^2-16x{\green )}+{\green (}-21x+56{\green )}$

Then I will factor out the greatest common factor from each group. The GCF of the first group is $$\green 2x$$. The GCF of the second group is $$\green -7$$. Technically, you could say the GCF of the second group is $$7$$, but I chose to make it negative so that the expressions in the parentheses match. ${\green 2x(}3x-8{\green )}{\green -7(}3x-8{\green )}$

The expressions in the parentheses match so I can factor them out.
${\green(}3x-8{\green )}{\green (2x-7)}$

The factored form of $$\green 6x^2-37x+56$$ is… $\green (3x-8)(2x-7)$

Factor the trinomial: $12x^6-x^3-6$

What are the coefficients of the trinomial? When I compare $$\yellow 12x^6-x^3-6$$ to $$ax^2+bx+c$$, I can see that… $\yellow a=12$ $\yellow b=-1$ $\yellow c=-6$

I also notice that there is an $$x^6$$ and $$x^3$$ instead of the standard $$x^2$$ and $$x$$. But the factoring process will still work because… $x^6=(x^3)^2$

Which two numbers add up to $$b$$ and multiply to $$ac$$? I need to find two numbers that multiply to $$\yellow -72$$ and add to $$\yellow -1$$ because… $\yellow ac = (12)(-6)=-72$ $\yellow b = -1$

The factor pairs for $$\yellow -72$$ are…

$$\yellow(-1)(72)$$ or $$\yellow(1)(-72)$$
$$\yellow(-2)(36)$$ or $$\yellow(2)(-36)$$
$$\yellow(-3)(24)$$ or $$\yellow(3)(-24)$$
$$\yellow(-4)(18)$$ or $$\yellow(4)(-18)$$
$$\yellow(-6)(12)$$ or $$\yellow(6)(-12)$$
$$\yellow(-8)(9)$$ or $$\yellow(8)(-9)$$

When I look at all of these pairs, the only two numbers that add up to $$\yellow -1$$ are $$\yellow 8+(-9)$$.

What is the factored form of $$\yellow 12x^6-x^3-6$$? Using the numbers I found in Step 2, I can rewrite the polynomial as… $12x^6{\yellow +8x^3-9x^3}-6$

Then I can factor by grouping… First, I will split the terms of the polynomial into two groups: ${\yellow (}12x^6+8x^3{\yellow )}+{\yellow (}-9x^3-6{\yellow)}$

Then I will factor out the greatest common factor from each group. The GCF of the first group is $$\yellow 4x^3$$. The GCF of the second group is $$\yellow -3$$. ${\yellow 4x^3(}3x^3+2{\yellow )}{\yellow -3(}3x^3+2{\yellow )}$

The expressions in the parentheses match so I can factor them out. ${\yellow(}3x^3+2{\yellow )}{\yellow (4x^3-3)}$

The factored form of $$\yellow 12x^6-x^3-6$$ is… $\yellow (3x^3+2)(4x^3-3)$

In the blue example above, you may have noticed that the two numbers found in Step 2 ($$\blue -6$$ and $$\blue 8$$) ended up in the final answer $$\blue (x-6)(x+8)$$. This will happen anytime the leading coefficient (the number with the $$x^2$$) is 1. So, if the leading coefficient is 1, then you can skip Step 3 and write your answer with the numbers you found in Step 2.

## Rewriting b

It doesn't matter in which order you rewrite $$b$$. It will change the GCFs of the groups, but it will not change the final answer. For example, if you were factoring $$8x^2-2x-45$$, you would find out in Steps 1 and 2 that the numbers $$-20$$ and $$18$$ multiply to $$ac$$ and add to $$b$$. However, in Step 3, you could rewrite $$b$$ two different ways:

- $$8x^2-20x+18x-45$$
- $$8x^2+18x-20x-45$$

And the way you rewrite $$b$$ determines how the factoring by grouping will go.

Option 1: $8x^2-20x+18x-45$

First, I will split the terms of the polynomial into two groups: ${\blue (}8x^2-20x{\blue )}+{\blue (}18x-45{\blue )}$

Then I will factor out the greatest common factor from each group. The GCF of the first group is $$\blue 4x$$. The GCF of the second group is $$\blue 9$$. ${\blue 4x(}2x-5{\blue )}+{\blue 9(}2x-5{\blue )}$

The expressions in the parentheses match so I can factor them out. ${\blue(}2x-5{\blue )}{\blue (4x+9)}$

The factored form of $$\blue 8x^2-20x+18x-45$$ is… $\blue (2x-5)(4x+9)$

Option 2: $8x^2+18x-20x-45$

First, I will split the terms of the polynomial into two groups: ${\blue (}8x^2+18x{\blue )}+{\blue (}-20x-45{\blue )}$

Then I will factor out the greatest common factor from each group. The GCF of the first group is $$\blue 2x$$. The GCF of the second group is $$\blue -5$$.
${\blue 2x(}4x+9{\blue )}{\blue -5(}4x+9{\blue )}$

The expressions in the parentheses match so I can factor them out. ${\blue(}4x+9{\blue )}{\blue (2x-5)}$

The factored form of $$\blue 8x^2+18x-20x-45$$ is… $\blue (4x+9)(2x-5)$

## Why It Works

I like using the FOIL method to multiply binomials, but you can also use the box method or the multiplication algorithm. $(Qx+S)(Tx+R)$

Multiply the FIRST terms: $({\red Qx})({\red Tx})={\red QTx^2}$
Multiply the OUTER terms: $({\yellow Qx})({\yellow R})={\yellow QRx}$
Multiply the INNER terms: $({\green S})({\green Tx})={\green STx}$
Multiply the LAST terms: $({\blue S})({\blue R})={\blue SR}$

The $$\yellow QRx$$ and $$\green STx$$ terms are like terms because they both have an $$x$$, and in a real problem the coefficients $$QR$$ and $$ST$$ will be numbers. So, the expanded polynomial would look like this: ${\red QTx^2}+({\yellow QR}+{\green ST})x+{\blue SR}$

When we compare this polynomial to $$\purple ax^2+bx+c$$, we can see that…

${\purple a} = {\red QT}$
${\purple b} = {\yellow QR}+{\green ST}$
${\purple c} = {\blue SR}$

When we look for two numbers that multiply to $$\purple ac$$ and add to $$\purple b$$, we are basically looking for $$\yellow QR$$ and $$\green ST$$. This is because $${\purple b} = {\yellow QR}+{\green ST}$$ and $${\purple ac}={\red QT}{\blue SR}$$. The letters for $$\purple ac$$ can be multiplied in any order, so we could say that $${\purple ac}={\yellow QR}{\green ST}$$.
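The search in Step 2 is mechanical enough to automate. Below is a small Python sketch of that step (my own illustration, not part of the original lesson); it brute-forces the factor pair used to rewrite the middle term:

```python
def ac_method(a, b, c):
    """Find two integers that multiply to a*c and add to b.

    Returns (m, n) with m * n == a * c and m + n == b,
    or None if no integer pair exists.
    """
    ac = a * c
    for m in range(-abs(ac), abs(ac) + 1):
        if m != 0 and ac % m == 0 and m + ac // m == b:
            return m, ac // m
    return None

# The three worked examples from this lesson:
print(ac_method(1, 2, -48))   # (-6, 8)
print(ac_method(6, -37, 56))  # (-21, -16)
print(ac_method(12, -1, -6))  # (-9, 8)
```

You can confirm the factored forms with a CAS: for instance, SymPy's factor applied to 6*x**2 - 37*x + 56 returns the product of (2*x - 7) and (3*x - 8).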
{}
Algebra 2 Common Core a) No b) $\frac{22}{7}=3.\overline{142857}$ and $\pi \approx 3.141593$. They are close but not exact.
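A quick numeric check (my own addition, not part of the original answer) makes the gap concrete:

```python
import math
from fractions import Fraction

approx = Fraction(22, 7)
print(float(approx))             # 3.142857142857143
print(math.pi)                   # 3.141592653589793
print(float(approx) - math.pi)   # about 0.00126, so 22/7 slightly overshoots pi
```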
{}
MDF Rose Engine Lathe 2.0

Used to turn the spindle counter-clockwise. On a traditional lathe, this is considered forward.

Used to stop the motion of the stepper motor. Also resets any Enabled stepper motors to Disabled.

Used to return the spindle or the selected axis to the start point of the previous operation.

Used to turn the spindle clockwise. On a traditional lathe, this is considered reverse. To change the direction of the spindle, you must stop it first. You cannot directly swap between clockwise and counter-clockwise.

Config

Touch Config on the screen image above to see the details for this screen. This is used to display or edit the configuration settings for the MaxSpeed and the Acceleration for the respective stepper motors.

One Screen Spindle Movement

Touch X, Z, or B on the screen image to the right to see the details for these axes.

Purpose: This screen allows for independent rotation of the spindle. It is useful for secondary spindle operations such as higher or lower speed functions than the Main Screen.

This selects the axis of movement. In this case, the spindle's motion is performed using the buttons at the bottom of the screen. Only one axis of movement may be selected at a time. These axes of movement are not selected.

Enabled: Enabling a stepper motor locks it into place, preventing motion along the relevant axis. One use for this would be to lock the spindle whilst changing a rosette. The stepper motor on a given axis may be enabled even if that is not the axis of movement. When the Stop button is selected, the stepper motor on this axis is reset to be disabled. More than one stepper motor may be enabled at a time.

Disabled: This stepper motor is disabled, allowing for manual adjustment (e.g., manually moving the slide along the Z axis by rotating the leadscrew).

Speed: The controls on the left and right side of the screen are for controlling the respective stepper motor's speed and acceleration. To change the Max Speed, touch the number, and you will be presented with the Number Pad Screen. The slider is a red bar that can be moved up to increase the speed, or down to decrease it. The percentage shown in the bar (96% on the left, 81% on the right) shows the percentage of the Max Speed at which the stepper motor is set to run. There is a slider for the stepper motor on each axis, and they operate independently.

The top number is the max speed for the respective axis' stepper motor, measured in pulses / second. The MDF Rose Engine 2.0 stepper motors are set to 6,400 pulses / revolution, so a max speed of 30000 would equate to

\begin{align} MaxSpindleRPM & = \frac{30,000 \, \frac{\mathrm{pulses}}{\mathrm{sec}} \times \, 60 \, \frac{\mathrm{sec}}{\mathrm{min}}} {6,400 \, \frac{\mathrm{pulses}}{\mathrm{rev}} \times \, 9 \, \frac{\mathrm{motor \, revs}}{\mathrm{spindle \, rev}}} \\ & = 31.3 \, \mathrm{RPM} \end{align}

Thus, at 90%, the spindle's speed is

\begin{align} SpindleRPM & = MaxSpindleRPM \times \, 0.90 \\ & = 28.2 \, \mathrm{RPM} \end{align}

Acceleration: The bottom number (5000) is the acceleration for the stepper motor. To change this value, touch the number, and you will be presented with the Number Pad Screen.

Limit switches can be used with this function. The pins used for this are configured on the Limit Switches Configuration Screen. (More information about the implementation of limit switches is on that page.)
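As a quick sanity check of the arithmetic above, here is a small Python sketch; this is my own addition, with the 6,400 pulses/revolution and the 9:1 motor-to-spindle ratio taken from the text:

```python
def spindle_rpm(pulses_per_sec, pulses_per_rev=6400, gear_ratio=9, percent=100):
    """Convert a stepper rate in pulses/second to spindle RPM."""
    motor_rpm = pulses_per_sec * 60 / pulses_per_rev  # motor revolutions per minute
    return motor_rpm / gear_ratio * percent / 100     # gear down, then apply slider %

print(spindle_rpm(30000))              # 31.25, which the page rounds to 31.3
print(spindle_rpm(30000, percent=90))  # 28.125; the page's 28.2 comes from 31.3 * 0.9
```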
{}
# Date Multiplying Challenge

(Inspired by last week's Riddler on FiveThirtyEight.com. Sandbox post.)

Given a year between 2001 and 2099, calculate and return the number of days during that calendar year where mm * dd = yy (where yy is the 2-digit year). 2018, for example, has 5:

• January 18th (1 * 18 = 18)
• February 9th (2 * 9 = 18)
• March 6th (3 * 6 = 18)
• June 3rd (6 * 3 = 18)
• September 2nd (9 * 2 = 18)

Input can be a 2- or 4-digit numeric year. Output should be an integer. Optional trailing space or return is fine.

# Complete input/output list:

Input = Output

2001 = 1 2021 = 3 2041 = 0 2061 = 0 2081 = 2
2002 = 2 2022 = 3 2042 = 4 2062 = 0 2082 = 0
2003 = 2 2023 = 1 2043 = 0 2063 = 3 2083 = 0
2004 = 3 2024 = 7 2044 = 3 2064 = 2 2084 = 5
2005 = 2 2025 = 2 2045 = 3 2065 = 1 2085 = 1
2006 = 4 2026 = 2 2046 = 1 2066 = 3 2086 = 0
2007 = 2 2027 = 3 2047 = 0 2067 = 0 2087 = 1
2008 = 4 2028 = 4 2048 = 6 2068 = 1 2088 = 3
2009 = 3 2029 = 1 2049 = 1 2069 = 1 2089 = 0
2010 = 4 2030 = 6 2050 = 3 2070 = 3 2090 = 5
2011 = 2 2031 = 1 2051 = 1 2071 = 0 2091 = 1
2012 = 6 2032 = 3 2052 = 2 2072 = 6 2092 = 1
2013 = 1 2033 = 2 2053 = 0 2073 = 0 2093 = 1
2014 = 3 2034 = 1 2054 = 4 2074 = 0 2094 = 0
2015 = 3 2035 = 2 2055 = 2 2075 = 2 2095 = 1
2016 = 4 2036 = 6 2056 = 4 2076 = 1 2096 = 4
2017 = 1 2037 = 0 2057 = 1 2077 = 2 2097 = 0
2018 = 5 2038 = 1 2058 = 0 2078 = 2 2098 = 1
2019 = 1 2039 = 1 2059 = 0 2079 = 0 2099 = 2
2020 = 5 2040 = 5 2060 = 6 2080 = 4

This is a code-golf challenge; lowest byte count in each language wins. Pre-calculating and simply looking up the answers is normally excluded per our loophole rules, but I'm explicitly allowing it for this challenge. It allows for some interesting alternate strategies, although it's not likely a 99-item lookup list is going to be shortest.

• If it makes it any easier in your language, the answer will be the same regardless of century; 1924 and 2124 have the same number of days as 2024. – BradC Apr 13 '18 at 14:41
• if the result of mm*dd is bigger than 100 it is automatically filtered? – DanielIndie Apr 13 '18 at 16:03
• @DanielIndie Correct, no "wraparound" dates should be counted. In other words, Dec 12, 2044 doesn't count, even though 12 * 12 = 144. – BradC Apr 13 '18 at 16:13
• As we need only handle a limited number of inputs, I've edited them all in. Feel free to rollback or reformat. – Shaggy Apr 13 '18 at 17:20
• @gwaugh Just that you can decide which to accept as valid input (so you don't have to spend extra characters converting between the two). – BradC Feb 4 '19 at 20:06

# Japt -x, 18 bytes

Takes input as an integer in the range 1-99.

346Ç=ÐUTZ)f *ÒZÎ¥U

Try it

As with my JS solution, takes advantage of the fact that JavaScript's Date will roll over to the next month, and continue doing so, if you pass it a day value that exceeds the number of days in the month you pass to it. So, on the first iteration, ÐUTZ tries to construct the date yyyy-01-345, which becomes yyyy-12-11, or yyyy-12-10 on leap years. We don't need to check dates after day 345, as 12 * 11 (and anything later in December) results in a 3-digit product.

346Ç=ÐUTZ)f *ÒZÎ¥U :Implicit input of integer U
346Ç :Map each Z in the range [0,346)
= : Reassign to Z
ÐUTZ : new Date(U,0,Z) - months are 0-indexed in JS
) : End reassignment
f : Get the day of the month
*Ò : Multiply by the negation of bitwise NOT of
ZÎ : Get 0-based month of Z
¥U : Test for equality with U
:Implicitly reduce by addition and output
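For readers who want an ungolfed baseline, here is a straightforward Python reference implementation; this is my own sketch for illustration, not one of the competing answers:

```python
from calendar import monthrange

def product_days(year):
    """Count days in the year where month * day equals the 2-digit year."""
    yy = year % 100        # accept 2- or 4-digit input
    full_year = 2000 + yy  # century doesn't matter for 2001-2099, per the comments
    count = 0
    for mm in range(1, 13):
        _, days_in_month = monthrange(full_year, mm)
        for dd in range(1, days_in_month + 1):
            if mm * dd == yy:
                count += 1
    return count

print(product_days(2018))  # 5
print(product_days(24))    # 7, matching the table above
```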
{}
H$\alpha$ Emission of the H\,II region BSF8

Session 30 -- Star Formation, Molecular Clouds and HII Regions

Display presentation, Tuesday, 31, 1994, 9:20-6:30

[30.02] H$\alpha$ Emission of the H\,II region BSF8

S. P. Milster, F. Scherb (University of Wisconsin)

Imaging observations of faint Galactic H$\alpha$ emission from the warm ionized medium have been carried out with a dual-etalon, 150\,mm-diameter, Fabry-Perot spectrometer and a CCD camera. The FWHM velocity resolution is $12\rm\,km\,s^{-1}$ (0.25\,\AA) and the field of view is normally $0.\!\!^{\circ}8$ but can be increased to about $2.\!\!^{\circ}0.$ The spatial resolution of the images is 1--2$'$.

A high velocity ($v_{\rm LSR} \cong -50\rm\,km\,s^{-1}$) feature was discovered near $l=96^{\circ}$, $b=0^{\circ}$. The feature was identified as an H\,II region listed as BSF8 by Fich (Ap.J.S., 86, 475 (1993)), who carried out observations in the radio continuum. It was first cited by Blitz, Fich and Stark in Ap.J.S., 49, 183 (1982).

An H$\alpha$ data cube of BSF8 shows that it has a diameter of about $12'$, with an average surface brightness of $\cong$ 13\,Rayleighs and a peak surface brightness of $\cong$ 32\,Rayleighs. The spatially integrated emission has a velocity FWHM of $\cong 32\rm\,km\,s^{-1}$. The signal to noise ratio (SNR) for one pixel near the peak of the line is $\cong 13$. The high SNR is achieved due to the narrow passband of the Fabry-Perot, which reduces the contribution from the diffuse background.

The $2.\!\!^{\circ}0$ field of view, relatively good spatial resolution, and high spectral resolution would be useful for conducting searches for faint H\,II regions near the galactic plane, where the relatively bright, diffuse background (H$\alpha$ and continuum) normally would obscure such objects in searches at lower spectral and spatial resolution.
{}
On the relation between generalized Morrey spaces and measure data problems

created by baroni on 22 Jun 2018

Submitted Paper

Inserted: 22 jun 2018
Last Updated: 22 jun 2018

Year: 2018

Abstract: We consider measure data problems of $p$-Laplacian type. The measure on the right-hand side has the property that its total variation on a generic ball decays according to a general function of the radius; we show that this condition has a natural relation to gradient integrability properties, and we obtain, as a corollary, borderline cases of classical results.
{}
## QoMEX 2012 Author's Paper Kit

Papers must be formatted according to the instructions in the QoMEX 2012 Author's Paper Kit. All authors should read the entire paper kit carefully to verify that your paper document is formatted correctly and that you have all the information you need before starting your paper submission. The paper kit contains detailed instructions on formatting your document and completing the submission process.

## Part I: General Information

### Procedure

The QoMEX 2012 paper submission and review process is being conducted in a manner similar to previous QoMEX conferences:

• Authors who wish to participate in the conference will create documents consisting of a complete description of their ideas and applicable research results in a maximum of 6 pages, submitted only in PDF format.
• Submit the paper and copyright form electronically via the paper submission website. This paper submission must be submitted in final, publishable form before the submission deadline listed below.
• Check the QoMEX 2012 website for the status of your paper.
• Paper submissions will be peer-reviewed by experts selected by the conference committee for their demonstrated knowledge of particular topics. The progress and results of the review process will be posted on this website, and authors will also be notified of the review results by email.
• Prepare a lecture or poster presentation following the guidelines included in this document.

The review process is being conducted entirely online. To make the review process easy for the reviewers, and to ensure that the paper submissions will be readable through the online review system, we ask that authors submit paper documents that are formatted according to the Paper Kit instructions included here.

### Requirements

Papers may be no longer than 6 pages, double column format, following the QoMEX 2012 templates (LaTeX and Word templates available), including all text, figures, and references. Papers must be submitted by the deadline date. There will be no exceptions.

Accepted papers MUST be presented at the conference by one of the authors, or, if none of the authors are able to attend, by a qualified surrogate. The presenter MUST register for the conference at one of the non-student rates offered, and MUST register before the deadline given for author registration. Failure to register before the deadline will result in automatic withdrawal of your paper from the conference proceedings and program. A single registration may cover up to four (4) papers.

## Important Dates

• Short paper submission deadline: April 20 2012, 5pm (Pacific Time)
• Full paper submission deadline: February 14, 2012 EXTENDED TO MARCH 5 2012, 5pm (Pacific Time)
• Notification of acceptance (full papers): April 30, 2012
• Notification of acceptance (short papers): May 7, 2012
• Camera ready submission: May 20, 2012
• Author and early-bird registration deadline: May 30, 2012
• Author presentation submission deadline: July 2 2012, 5pm (Pacific Time)

### Correspondence

If you have any questions about paper submission, email qomex2012@gmail.com, and be sure to include the conference name (QoMEX 2012) and your assigned paper number in all correspondence.

## Part II: Preparation of the Paper

### Document Formatting

Use the following guidelines when preparing your document:

LENGTH: You are allowed a total of 6 pages for your document, double column format, following the provided QoMEX 2012 LaTeX or Word templates.
This is the maximum number of pages that will be accepted, including all figures, tables, and references. Any documents that exceed the 6-page limit will be rejected.

LANGUAGE: All proposals must be in English.

MARGINS: Documents should be formatted for standard letter-size (8-1/2" by 11" or 216mm by 279mm) paper. Any text or other material outside the margins specified below will not be accepted:

• All text and figures must be contained in a 178 mm x 229 mm (7 inch x 9 inch) image area.
• The left margin must be 19 mm (0.75 inch).
• The top margin must be 25 mm (1.0 inch), except for the title page where it must be 35 mm (1.375 inches).
• Text should appear in two columns, each 86 mm (3.39 inch) wide with 6 mm (0.24 inch) space between columns.
• On the first page, the top 50 mm (2") of both columns is reserved for the title, author(s), and affiliation(s). These items should be centered across both columns, starting at 35 mm (1.375 inches) from the top of the page.
• The paper abstract should appear at the top of the left-hand column of text, about 12 mm (0.5") below the title area and no more than 80 mm (3.125") in length. Leave 12 mm (0.5") of space between the end of the abstract and the beginning of the main text.

A format sheet with the margins and placement guides is available here:

• PDF file (When you print this file, make sure the "shrink to fit" box is not checked!)

The file contains lines and boxes showing the margins and print areas. If you print this file, then stack it atop your printed page and hold it up to the light, you can easily check your margins to see if your print area fits within the space allowed.

TYPE: Face: To achieve the best viewing experience for the review process and conference proceedings, we strongly encourage authors to use Times-Roman or Computer Modern fonts. If a font face is used that is not recognized by the submission system, your proposal will not be reproduced correctly. Size: Use a font size that is no smaller than 9 points throughout the paper, including figure captions. In 9-point type font, capital letters are 2 mm high. For 9-point type font, there should be no more than 3.2 lines/cm (8 lines/inch) vertically. This is a minimum spacing; 2.75 lines/cm (7 lines/inch) will make the proposal much more readable. Larger type sizes require correspondingly larger vertical spacing.

TITLE: The paper title must appear in boldface letters and should be in ALL CAPITALS. Do not use LaTeX math notation ($x_y$) in the title; the title must be representable in the Unicode character set. Also try to avoid uncommon acronyms in the title. Notice that this title must be identical in all your communications, including the paper itself (PDF), the information provided in the paper management interface (EDAS), and the copyright form.

AUTHOR LIST: The authors' name(s) and affiliation(s) appear below the title in capital and lower case letters. Proposals with multiple authors and affiliations may require two or more lines for this information. The order of the authors on the document should exactly match in number and order the authors typed into the online submission form. QoMEX does not perform blind reviews, so be sure to include the author list in your submitted paper.

ABSTRACT: Each paper should contain an abstract between 80 and 150 words that appears at the beginning of the document. Use the same text that is submitted electronically along with the author contact information.

INDEX TERMS (KEYWORDS) NEW: Enter up to 5 keywords separated by commas.
Keywords may be selected from the IEEE keyword list found at: http://www.ieee.org/organizations/pubs/ani_prod/keywrd98.txt. BODY: Major headings appear in boldface CAPITAL letters, centered in the column. Subheadings appear in capital and lower case, either underlined or in boldface. They start at the left margin of the column on a separate line. Sub-subheadings are discouraged, but if they must be used, they should appear in capital and lower case, and start at the left margin on a separate line. They may be underlined or in italics. REFERENCES: List and number all references at the end of the document. The references can be numbered in alphabetical order or in order of appearance in the paper. When referring to them in the text, type the corresponding reference number in square brackets as shown at the end of this sentence [1]. The end of the document should include a list of references containing information similar to the following example: [1] D. E. Ingalls, "Image Processing for Experts," IEEE Trans. ASSP, vol. ASSP-36, pp. 1932-1948, 1988. ILLUSTRATIONS & COLOR: Illustrations must appear within the designated margins. They may span the two columns. If possible, position illustrations at the top of columns, rather than in the middle or at the bottom. Caption and number every illustration. All halftone illustrations must be clear in black and white. Since the printed proceedings will be produced in black and white, be sure that your images are acceptable when printed in black and white (the CD-ROM and IEEE Xplore proceedings will retain the colors in your document). PAGE NUMBERS: Do not put page numbers on your document. Appropriate page numbers will be added to accepted papers when the conference proceedings are assembled. ### Templates The following style files and templates are available for users of LaTeX and Microsoft Word: It is imperative that you use the LaTeX files or Word file to produce your document, since they have been set up to meet the formatting guidelines listed above. When using these files, double-check the paper size in your page setup to make sure you are using the letter-size paper layout (8.5" X 11") or A4 paper layout (210mm X 297mm). The LaTeX environment files specify suitable margins, page layout, text, and a bibliography style. In particular, with LaTeX, there are cases where the top-margin of the resulting PDF file does not meet the specified parameters. In this case, you may need to add a \topmargin=0mm command just after the \begin{document} command in your .tex file. The spacing of the top margin is not critical, as the page contents will be adjusted on the proceedings. The critical dimensions are the actual width and height of the page content. ## Part III: Submission and Review of the Paper The review process will be performed from the electronic submission of your paper. To ensure that your document is compatible with the review system, please adhere to the following compatibility requirements: ### File Format The 'IEEE Requirements for PDF Documents' MUST be followed EXACTLY. The conference is required to ensure that documents follow this specification. The requirements are enumerated in: Papers must be submitted in Adobe's Portable Document Format (PDF) format. PDF files: • must not have Adobe Document Protection or Document Security enabled, • must have either 'US Letter' or 'A4' sized pages, • must be in first-page-first order, and • must have ALL FONTS embedded and subset. ALL FONTS MUST be embedded in the PDF file. 
There is no guarantee that the viewers of the paper (reviewers and those who view the proceedings CD-ROM after publication) have the same fonts used in the document. If fonts are not embedded in the submission, you will be contacted by the organizers and asked to submit a file that has all fonts embedded. Please refer to your PDF file generation utility's user guide to find out how to embed all fonts.

### Information for LaTeX users

Authors willing to write their paper in LaTeX must use the style file recommended above (spconf.sty). Generating a PDF file is straightforward for all LaTeX packages we are aware of (and direct LaTeX-to-PDF conversions are now possible on most systems with tools like pdflatex). When preparing the proposal under LaTeX, it is preferable to use scalable fonts such as Type 1 Computer Modern. However, quite good results can be obtained with the fonts defined in spconf.sty.

Warning: PDF files with PostScript Type 3 fonts are highly discouraged and could lead to the rejection of the paper. PDF files utilizing Type 3 fonts are typically produced by the LaTeX system and are lower-resolution bitmapped versions of the letters and figures. It is possible to perform a few simple changes to the configuration or command line to produce files that use PostScript Type 1 fonts, which are a vector representation of the letters and figures. An excellent set of instructions is found at:

For most installations of LaTeX, you can cause dvips to output Type 1 fonts instead of Type 3 fonts by including the -Ppdf option to dvips. The resulting PostScript file will reference the Type 1 Computer Modern fonts, rather than embedding the bitmapped Type 3 versions, which cause problems with printers. You may also need to tell dvips to force letter-sized paper with the option: -t letter. It then remains to convert the final PostScript file into PDF using common tools like ps2pdf, pstopdf, or any PS-to-PDF converter.

### File Size Limit

Authors will be permitted to submit a document file up to 10 MB (megabytes) in size.

### File Name

The filename of the document file should be the first author's last name, followed by the appropriate extension (.pdf). For example, if the first author's name is Johan Smith, you would submit your file as "smith.pdf".

### Electronic Paper Submission

When you have your document file ready, gather the following information before entering the submission system:

• Document file in PDF format
• Paper title
• Text file containing paper abstract text, in ASCII text format (for copying and pasting into web page form)

To submit your document and author information, go to the paper submission site. The submission system will present an entry form to allow you to enter the paper title, abstract text, review category, and author contact information. ALL authors must be entered in the online form, and must appear in the online form in the same order in which the authors appear on the PDF. QoMEX 2012 is using the IEEE eCopyright form, available from the paper submission site.

### Online Review Process

Your submitted paper will be converted to PDF format by the submission system if necessary, then visually inspected by our submission system staff to assure that the document is readable and meets all formatting requirements to be included in a visually pleasing and consistent proceedings publication for QoMEX 2012. If our submission inspectors encounter errors with your submitted file, they will contact you to resolve the issue.
If your paper passes inspection, it will be entered into the review process. A committee of reviewers selected by the conference committee will review the documents and rate them according to quality, relevance, and correctness. The conference technical committee will use these reviews to determine which papers will be accepted for presentation in the conference. The result of the technical committee's decision will be communicated to the submitting authors by email, along with reviewer comments, if any. After you submit your document, you may monitor the status of your paper here.

Authors will be notified of paper acceptance or non-acceptance by email as close as possible to the published author notification date. The email notification will include the presentation format chosen for your paper (lecture or poster) and may also include the presentation date and time, if available. The notification email will include comments from the reviewers. The conference cannot guarantee that all of the reviewers will provide the level of comment desired by you. However, reviewers are encouraged to submit as detailed comments as possible.

### Required Author Registration

Be sure that at least one author registers to attend the conference using the online registration system available through the conference website. Each paper must have at least one author registered (at a non-student rate), with one registration covering one paper and the payment received by the author registration deadline (see above), to avoid being withdrawn from the conference.

### Copyright Issues for Web Publication

If you plan to publish a copy of an accepted paper on the Internet by any means, you MUST display the following IEEE copyright notice on the first page that displays IEEE published (and copyrighted) material:

Copyright 2012 QoMEX. Published in the 2012 Quality of Multimedia Experience (QoMEX 2012), scheduled for July 5-7 2012 in the Yarra Valley, Australia. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works, must be obtained from the QoMEX organization. If you post an electronic version of an accepted paper, you must provide the QoMEX organization with the electronic address (URL, FTP address, etc.) of the posting.

## Part IV: Preparation of the Presentation

To help authors prepare for oral and poster presentations, the following suggestions have been created:

### Oral Presentations

PRESENTATION TIME: Presentation time is critical: each paper is allocated 30 minutes for oral presentation sessions. We recommend that presentation of your slides should take about 20 minutes, leaving 5-10 minutes for introduction by the session chair and questions from the audience. To achieve appropriate timing, organize your slides or viewgraphs around the points you intend to make, using no more than one slide per minute. A reasonable strategy is to allocate about 2 minutes per slide when there are equations or important key points to make, and one minute per slide when the content is less complex. Slides attract and hold attention, and reinforce what you say - provided you keep them simple and easy to read. Plan on covering at most 6 points per slide, covered by 6 to 12 spoken sentences and no more than about two spoken minutes.
Please note that your presentation slides are due on CMT by July 2, 5pm (Pacific Time) to allow for pre-loading onto the venue computers. Be prepared to begin your presentation as soon as the prior presenter has finished, as it is important to keep on schedule. You should meet with your session chair during the break immediately prior to your session. Meet inside or near the door of the presentation room. If the room is not being used, this will give you a chance to test any presentation equipment you will be using. Copying your files to the computer before the session will also save you some time during your presentation.

ORGANIZATION OF IDEAS: Make sure each of your key points is easy to explain with aid of the material on your slides. Do not read directly from the slide during your presentation. You shouldn't need to prepare a written speech, although it is often a good idea to prepare the opening and closing sentences in advance. It is very important that you rehearse your presentation in front of an audience before you give your presentation at QoMEX. Surrogate presenters must be sufficiently familiar with the material being presented to answer detailed questions from the audience. In addition, the surrogate presenter must contact the Session Chair in advance of the presenter's session.

EQUIPMENT PROVIDED: All lecture rooms will be equipped with a computer, a video projector, a microphone and a pointing device. Each computer will have a recent version of the Windows OS installed, a CD-ROM drive, as well as PowerPoint and Acrobat Reader software. Remember to embed all your fonts into your presentation if you are using any special font or plug-in such as MathType. If any other audio or video equipment is required, authors are to contact the conference organiser at by June 29, 2012 to indicate their request. There may be an extra cost for any additional equipment.

Please pay attention to the following critical points:

• There WILL NOT be an overhead projector in the rooms
• Make sure your presentation does not run into a problem on the Windows XP platform, if you are a Mac or Linux user
• If you will be playing video or animated media, make sure it runs on Windows Media Player
• Embed all the fonts in your presentation

### Poster Presentations

Poster sessions are a good medium for authors to present papers and meet with interested attendees for in-depth technical discussions. In addition, attendees find the poster sessions a good way to sample many papers in parallel sessions. Thus it is important that you display your message clearly and noticeably to attract people who might have an interest in your paper.

The poster session is 90 minutes long over the lunch break. Prior to your scheduled poster session, there will be a 'Poster Madness' session where each poster presenter will be allocated 2 minutes (and at least one slide) to introduce their work to the audience. You should prepare a succinct explanation of your work, concentrating on the key innovation, and be ready to interact with the audience that approaches your poster. Please note that your Poster Madness presentation slides are due on CMT by July 2, 5pm (Pacific Time) to allow for pre-loading onto the venue computers.

Please put up your poster in the morning or during the morning coffee break before your poster session, and take it down during the break immediately following your session. Pushpins and velcro will be available to mount the poster.
If you need extra presentation materials, such as a video display or computer, you will be required to bring them yourself; please advise the conference organiser at by June 29, 2012 to indicate your request. Also note that any equipment used in the poster area should be battery-operated, since power may not be provided on the floor. If your poster is constructed of multiple pieces of paper, it is highly recommended to tape together all of the pieces before you mount the poster on the board, since there is a limited amount of time available for the mounting process.

DIMENSIONS: The poster board will be 1000mm wide by 2400mm tall. The recommended poster size is A0 (841 x 1189mm or 33.11 x 46.81in); the maximum poster size is 950mm wide by 2340mm tall. Push tacks or velcro adhesive will be provided at the conference to mount your poster to the board.

ORGANIZATION OF IDEAS: Your poster should cover the key points of your work. It need not, and should not, attempt to include all the details; you can describe them in person to people who are interested. The ideal poster is designed to attract attention, provide a brief overview of your work, and initiate discussion. Carefully and completely prepare your poster well in advance of the conference. Try tacking up the poster before you leave for the conference to see what it will look like and to make sure that you have all of the necessary pieces.
{}
# Local linearization of ODE at singular point

I would like the simplest example of the failure of an ODE to be locally diffeomorphic to its linearization, despite being locally homeomorphic to it. More precisely, consider x' = f(x) with f(0) = 0 in R^n. Let A = f'(0) so that the local linearization is x' = Ax. Suppose the eigenvalues of A all have nonzero real part (i.e., 0 is a hyperbolic critical point). The Hartman-Grobman theorem tells us that there is a homeomorphism of a neighborhood of 0 which conjugates the system x' = f(x) to its linearization x' = Ax.

If one reads the elementary 'differential equations from the dynamical systems point of view' literature, however, you will gain the false impression that there is a diffeomorphism h : U --> U of a neighborhood of 0 which does this, and that, further, one can even do this with h'(0) = I, the identity matrix. The point of this is to ensure that the trajectories of the nonlinear system are tangent to the trajectories of the linearization: if $h'(0)\neq I$ then this may be false.

Smale's stable manifold theorem gives partial information in this direction, saying that the stable manifolds of the system and its linearization are tangent, and similarly for the unstable manifolds. In 2D, at a saddle, this is sufficient to imply that the separatrices of the original system are tangent to those of the linearization. For a node in 2D, or in higher dimensions, I am under the impression this need not hold. I even think I had worked out an example many years ago, which I no longer recall. Any enlightenment on this issue would be much appreciated. I am not at all expert in these matters, so I welcome any corrections, if I have distorted the facts. I am hoping for a 2-dimensional example.

Added later: Yuri: resonances and normal forms are definitely relevant. When I get time I will look into the references you suggest. Here's an example of what I am trying to avoid. Consider a flow on the unit disk with trajectories the radial lines y = mx. Conjugate by $(r,\theta) \mapsto (r,f(r,\theta))$ where $f(0,\theta)$ is constant on, say, $[-\pi/2,\pi/2]$, e.g. $f(r,\theta) = r\theta$ on $[-\pi/2,\pi/2]$ and $(2\theta-\pi) + r(\pi-\theta)$ in the left half plane. Now all the trajectories leaving the unit circle in the right half plane approach 0 along the positive x-axis. So, conjugating with such a homeomorphism has replaced a single trajectory with horizontal tangent by an entire interval of such trajectories. I strongly suspect that this sort of pathology doesn't happen with polynomial flows. Perhaps the normal forms will show this. What I would hope is that for each slope, there is a 1-1 correspondence between the trajectories in the original flow and in its linearization approaching or leaving the singular point at that slope. In particular, that you can't have a single trajectory in the linearization but a whole interval of them in the nonlinear flow. A counterexample to this hope would be disappointing, but would settle the matter.
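For readers who want to experiment numerically, here is a minimal sketch (ours, not the original poster's). It integrates a standard resonant planar node, x' = 2x + y^2, y' = y (eigenvalues 2 and 1, so 2 = 2·1; a classical case where smooth linearization fails even though Hartman-Grobman gives a homeomorphism), together with its linearization, backward in time, and compares the slopes along which trajectories approach the origin:

```python
# Hypothetical numerical check (not part of the original question): compare how
# trajectories of a resonant planar node and of its linearization approach 0.
import numpy as np
from scipy.integrate import solve_ivp

def nonlinear(t, u):
    x, y = u
    return [2.0 * x + y**2, y]

def linearized(t, u):
    x, y = u
    return [2.0 * x, y]

u0 = [0.0, 0.5]       # start on the y-axis
t_end = -8.0          # integrate backward toward the origin
for rhs, label in [(nonlinear, "nonlinear"), (linearized, "linearized")]:
    sol = solve_ivp(rhs, (0.0, t_end), u0, rtol=1e-10, atol=1e-14)
    x, y = sol.y[0, -1], sol.y[1, -1]
    print(f"{label:10s}: x/y near 0 = {x / y:+.3e}")   # slope of approach
```

In this particular example both slopes tend to zero (both trajectories arrive tangent to the y-axis; for the nonlinear system the exact solution is x(t) = y0^2 t e^{2t}, so x/y = y0 t e^t → 0), which illustrates that in the resonant case the obstruction is to smooth conjugacy rather than to tangency of individual trajectories.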
Students can view the solution by clicking 'View Answers'. The Nature of Economics. Under what... For the problem of "clean air," provide one (1) political solution to that problem and one (1) economic solution to that problem. Assume that the production of electricity results in the emission of toxic wastes into the environment. Which of the following equations is correct? The margin... A cap-and-trade policy is desired because it a. rewards firms for reducing pollution. Which one of the following statements comparing developed and developing countries is false? C) Water pollution. External benefits cause a market to: A) under-allocate resources. B. Only external benefits C. Both external costs and external benefits D. None of these answers are correct. What are the causes of external benefits and external costs? Describe governmental efforts to address market failure such as monopoly power, externalities, and public goods. II. Suppose the marginal cost of abating X units of pollution is given by MC(X) = 2X and the marginal benefits are MB(X) = 16 − 2X when X ≤ 3 and 10 when X ≥ 3. A. The private marginal cost is MCp = 10 + Q. The disutility to Blair is $600.00. Explain what market failure is. a. The externality benefits third parties rather than harming them. Developing countries are more subject to technological lock-in. Environmental costs c. Net benefit d. Market allocation 25. Which of these strategies would not solve the tragedy of the commons? (d) Allowing an increase in the level of exhaust fumes emitted by cars, buses, trucks, etc. Which of the following is an example of a negative externality? Explain why a profit-maximizing firm and a regulator concerned with social welfare would want to enter int... 3. Why do prices fail to represent the opportunity costs of resources when externalities exist? Explain the effect, in terms of market failure, of each example. The regulator... a) How much pollution will producers generate if there are no government regulations? Marine sediments that form limestone ar... Adam plays Nickelback tapes in his backyard, which drives his neighbor Blair crazy. Suppose the U.S. government passed strict new laws that levied special taxes on water sold in plasti... OPEC successfully raised the world price of oil in the 1970s and early 1980s, primarily due to A. a reduction in the amount of oil supplied and a worldwide oil embargo. Suppose the government wants to restrict the number of cars by issuing a limited number of marketable permits to produce cars. The customers are noisy and nearby residents find it hard to sleep. A) Land contamination. a) The market price of electricity is lower th... Why do many countries with a high gross domestic product (GDP) end up with Human Development Index (HDI) ratings lower than those of other developed nations with lower GDPs? 1. c. A firm sells its product in a foreign market.
List and discuss a situation where emissions standards would be preferable to either pollution taxes or a system of transferable permits. Which of these is an example of an environmental standard? Evaluate the options available to governments to overcome the failure of markets to take account of positive externalities. There can be development without the overuse of groundwater. Why is there a trade-off between environmental quality and other forms of consumption (p. 290)? The deadweight loss is greater than zero, not less. Solving a tragedy of the commons problem could be done through: I. societal expectations. b. Explain why it is difficult to estimate the value people place on environmental goods, the benefits they receive from cleaner air and other services of nature. tradeable allowances. The government of British Columbia has suggested a "cash-for-clunkers" program. No. Explain the nature of each market failure. This problem asks you to examine the costs in the market for gasoline. Which of the following is an example (it does not have to be realistic) of a positive consumption externality? The socially optimal quantity of pollution is: a. zero b. the quantity whose marginal social cost is equal to zero c. the quantity whose marginal social benefit is equal to zero d. the quantity... Dioxin emission that results from the production of paper is a good example of a negative externality because: a. self-interested paper firms are generally unaware of environmental regulations. Evaluate and quantify benefits, costs, incentives, and impacts of alternative options using economic principles and statistical techniques. Voluntary environmental agreements are becoming an increasingly popular policy tool. b. has proven to be an inefficient way to reduce pollution. An incentive could be either a reward or a penalty. Provide a short description and discussion of the following concept. a. A. common good B. public good C. negative externalit... A battery creates a negative externality of $3 each. This set of Energy and Environment Management Multiple Choice Questions & Answers focuses on "Water Conservation, Rainwater Harvesting, and Watersheds." Answer: c. Explanation: Drip irrigation is a method of controlled irrigation in which water is slowly delivered to the root systems of multiple plants. Which of the following government policies would be least likely to correct a negative externality? In a market without government intervention: A) too many doses of flu vaccine be... Bart is a fisherman in Rend Lake, Illinois, and has a job paying $100/day.
It receives a lot of complaints from residents of the dorm that other residents are playing their radios, making it difficult for them to study and sleep. B. internalize the externality. Describe the UN's new Human Development Index (HDI). d) sometimes good and sometimes bad. Falls as the environment gets cleaner. (A) taking deposits from the people (B) implementing monetary policy (C) lending to businesses (D) determining inflation and tax rates. Explain the positive and negative externalities in detail. One of them smokes, and the other cannot stand smoke. A technology spillover b. 4. High school education is a positive externality. If the production of a good creates external benefits, a competitive market will likely produce: a. (b) Prevention of factory wastes getting mixed up with river water. Calculate the firm's efficient level of emissions, total abatement cost, and total private compliance cost at the efficient level, if it faced an efflu... A paper mill currently produces 16 million tonnes of pollution each year, which is dumped into a nearby stream that flows into town. The additional cost imposed on society as a whole by an additional unit of pollution is: a. Facebook is an obvious example of how network externalities are generated by the... How, if at all, do you think the 1970s oil crisis would have affected the Solow growth curve? Questions on Environmental Issues: MCQs (Multiple Choice Questions) in quiz format on "Environmental Issues." Using the graph below, what would be the optimal level of pollution abatement? Refer to real-world examples in your answer. Question: Environmental economics. I have these true/false and multiple choice questions that I need all answered. The gov... A good that generates external benefits will tend to be: a. Providing public goods b. South Africa is the leading exporter of which mineral? The production of a good harms third parties. b. Which of these strategies would NOT solve the tragedy of the commons? In the case of a positive externality, the market price is (blank), output is (blank), and the government should impose a (blank) to rectify the situation. As a result of technology, more and more of us are interacting with and operating within networks than ever before. A. Describe two methods for correcting the inefficiencies caused by the presence of an externality in a market. Which of the following is not correct about the externality?
What are the economic outcomes associated with each externality (in terms of surplus, overproduction, etc.)? Explain why having no price required to be paid for using the environment results in the overuse of environmental services. Environmental economics, Part A: Multiple-choice questions (30 points). 1. b) a public good. a. externality b. bond c. nondurable good d. specu... Production of good X creates substantial external costs in addition to private costs. a. Countries X and Y have the same HDI value based on the traditional calculation; however, country X has a GNI per capita of $40,000 while country Y has a GNI per capita of $10,000. a. The equilibrium price and quantity will be to... Externalities and public goods are market failures. Network externalities are important for: a) gas stations. Economics is the study of: A. C... Kelly and Jennifer are roommates...
Atmospheric Measurement Techniques, an interactive open-access journal of the European Geosciences Union. Atmos. Meas. Tech., 12, 955–969, 2019, https://doi.org/10.5194/amt-12-955-2019. Research article | 12 Feb 2019

# Enhancing the spatiotemporal features of polar mesosphere summer echoes using coherent MIMO and radar imaging at MAARSY

Juan Miguel Urco1, Jorge Luis Chau1, Tobias Weber2, and Ralph Latteck1
• 1Leibniz Institute of Atmospheric Physics at the University of Rostock, Rostock, Germany
• 2Institut für Nachrichtentechnik, University of Rostock, Rostock, Germany

Correspondence: Juan Miguel Urco (urco@iap-kborn.de)

Abstract. Polar mesospheric summer echoes (PMSEs) are very strong radar echoes caused by the presence of ice particles, turbulence, and free electrons in the mesosphere over polar regions. For more than three decades, PMSEs have been used as natural tracers of the complicated atmospheric dynamics of this region. Neutral winds and turbulence parameters have been obtained assuming PMSE horizontal homogeneity on scales of tens of kilometers. Recent radar imaging studies have shown that PMSEs are not homogeneous on these scales and instead are composed of kilometer-scale structures. In this paper, we present a technique that allows PMSE observations with unprecedented angular resolution (∼0.6°). The technique combines the concept of coherent MIMO (Multiple Input Multiple Output) and two high-resolution imaging techniques, i.e., Capon and maximum entropy (MaxEnt). The resulting resolution is evaluated by imaging specular meteor echoes. The gain in angular resolution compared to previous approaches using SIMO (Single Input Multiple Output) and Capon is at least a factor of 2; i.e., at 85 km, we obtain a horizontal resolution of ∼900 m. The advantage of the new technique is evaluated with two events of 3-D PMSE structures showing: (1) horizontal wavelengths of 8–10 km and periods of 4–7 min, drifting with the background wind, and (2) horizontal wavelengths of 12–16 km and periods of 15–20 min, not drifting with the background wind. Besides the advantages of the implemented technique, we discuss its current challenges, like the use of reduced power aperture and processing time, as well as the future opportunities for improving the understanding of the complex small-scale atmospheric dynamics behind PMSEs.

1 Introduction

The so-called MIMO (Multiple Input Multiple Output) technique is widely used in the fields of telecommunications and radar remote sensing. Recently, Urco et al. (2018) have shown that the use of multiple transmitters and multiple receivers can significantly improve the angular resolution of coherent atmospheric and ionospheric radars. In that work, MIMO was used to observe equatorial electrojet (EEJ) field-aligned irregularities at Jicamarca in combination with the well-established radar imaging technique Capon (e.g., Palmer et al., 1998). The multiple-transmitter part was implemented with three different diversity schemes, i.e., temporal, code, and polarization. The resulting angular resolution was superior, by at least a factor of 4, to previous efforts using a single transmitter and the same receiving configuration, i.e., SIMO (Single Input Multiple Output).
Given that the EEJ irregularities are field-aligned with the Earth's magnetic field, angular imaging was performed only in the magnetic east–west direction. Based on this successful implementation, we decided to implement coherent MIMO to improve the angular resolution of the Middle Atmosphere ALOMAR Radar System (MAARSY) (16.04° E, 69.30° N) and to study polar mesospheric summer echoes (PMSEs). PMSEs present strong radar cross sections (RCSs) that allow them to be observed with less transmitting power, which is the case when using MIMO. Previous efforts to study their spatial structure have been limited to a few kilometers' spatial resolution and a few minutes' temporal resolution. Recently, many examples of monochromatic gravity waves (GWs) and Kelvin–Helmholtz instabilities (KHIs) have been presented using 9 days of multibeam PMSE observations with MAARSY. PMSEs are strong echoes, more than 50 dB stronger than the echoes expected from free electrons in the D region, and there is a consensus that they are generated by atmospheric turbulence and require the presence of free electrons and charged ice particles (e.g., Rapp et al., 2002; Varney et al., 2011, and references therein). Although PMSEs have been studied since the late 1970s, until recently they have been considered very aspect-sensitive and homogeneous on scales of a few tens of kilometers, at least when observed at very high frequencies (VHFs). Based on recent multibeam observations as well as radar imaging, it has been concluded that PMSEs are not as aspect-sensitive as previously reported and that, instead, most of the time they are organized in kilometer-scale spatial structures drifting across the observing beams. Such results have been independently verified with bistatic observations at VHFs, where PMSEs were observed with small systems at zenith angles close to 30° (e.g., Chau et al., 2018). The earlier MAARSY imaging results were obtained using the whole antenna array for transmitting, an antenna compression approach (i.e., a wide beam formed by properly phasing the antennas; e.g., Woodman and Chau, 2001), and a multiple-receiver configuration. The spatial structures were obtained using the Capon technique due to its implementation simplicity and its relatively fast processing speed. Given that PMSEs are highly associated with noctilucent clouds (NLCs), spatial structures ranging from a few hundreds of meters to a few tens of kilometers observed in NLCs (e.g., Baumgarten and Fritts, 2014) are expected to be observed also in PMSEs. Indeed, this is the case: PMSE structures of a few kilometers, as well as structures of a few tens of kilometers, have already been reported. Although progress has been made in discriminating between spatial and temporal ambiguities in PMSE observations, the achieved angular resolution has been mainly limited by two factors: (1) the effective area in the visibility plane and (2) the number of independent spatial samples (e.g., Woodman, 1997). By implementing MIMO, we are able to improve both, i.e., a larger effective area and a higher number of independent visibility samples. In addition, by implementing maximum entropy (MaxEnt), which is more computationally demanding than Capon, we are able to further improve the angular resolution. In this work, we have implemented coherent MIMO at MAARSY using 3 spatially separated antenna sections on transmission and 15 on reception.
Moreover, time diversity was employed in order to isolate the radar echoes corresponding to each transmitting section; i.e., the transmitters were interleaved every 4 ms. The resulting effective number of virtual receivers using MIMO was 45, and the angular resolution achieved was ∼0.6°. This is equivalent to an antenna of 450 m diameter, more than 5 times larger than the nominal diameter of the MAARSY antenna. Our paper is organized as follows. We first present the experiment configuration with a specific emphasis on the MIMO implementation. Then we describe the radar imaging implementation for both the Capon and MaxEnt techniques. The PMSE results are shown in Sect. 4 for SIMO and MIMO using both Capon and MaxEnt. Within this section, two events are studied in detail, one in which the observed waves drift with the background wind and a second one in which the waves do not propagate with the wind. Finally, the results of our MIMO implementation are discussed, followed by conclusions.

2 Experiment configuration

## 2.1 MAARSY

MAARSY is an active phased antenna array operating at 53 MHz, located on Andøya, Norway (69.30° N, 16.04° E). The array consists of 433 antenna elements, each with its own transceiver module, which allows us to modulate the antennas in phase and amplitude independently. Using this capability, the transmitting or receiving beam can be steered in a desired direction up to 30° off zenith, with an angular resolution of 3.6° (e.g., Latteck et al., 2012a). In addition to its multibeam capability, MAARSY can be used for in-beam imaging experiments. In this case, the signals from a selected number of receiving antennas are stored, and later a digital beamforming algorithm (imaging) is applied to the data. Unlike the multibeam experiment, imaging allows a 2-D image to be obtained at once, avoiding the interleaving from beam to beam. Currently, only 16 receivers are available at MAARSY. These 16 receive signals can be selected from groups of seven antennas, each called a "hexagon", or from groups of seven hexagons called "anemones" (see, e.g., Latteck et al., 2012b, for further technical details). For this campaign, we conducted an imaging experiment using 15 hexagons on reception, similar to earlier imaging experiments. One receiver is always connected to the full antenna array and is used as in standard multibeam experiments. The radar parameters of this experiment are summarized in Table 1.

Table 1. Parameters of the MAARSY MIMO experiment.

## 2.2 MAARSY MIMO configuration

In order to improve the performance of our imaging experiment, we applied a coherent MIMO technique. The technique employs multiple independent transmitting antennas and multiple receiving antennas, both spatially separated, to take advantage of the transmit–receive geometry and to increase the angular resolution of the radar. If the antennas are closely separated or collocated, the signals from each transmitting–receiving path are coherent and can be combined to form a larger virtual receiving array. The resulting number of virtual receivers is equal to the number of transmitters times the number of receivers. Depending on the transmitting and receiving antenna configuration, some virtual receivers can be redundant. In our experiment, we carefully selected the transmitting and receiving antenna configuration to obtain three special redundant virtual receivers. These three redundant virtual receivers were used for phase calibration of the transmitters, as was done by Urco et al. (2018).
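To illustrate the virtual-array idea numerically, the following is a minimal sketch; the antenna coordinates below are made up for illustration and are not MAARSY's actual layout:

```python
# Minimal sketch of forming a MIMO virtual array (hypothetical geometry).
# In coherent monostatic MIMO, each transmitter-receiver pair (p, m) samples
# an effective element at r_tx[p] + r_rx[m], giving Ntx * Nrx virtual receivers.
import numpy as np

rng = np.random.default_rng(0)
r_tx = np.array([[0.0, 0.0], [40.0, 0.0], [20.0, 34.6]])  # 3 Tx sections (m)
r_rx = rng.uniform(-50.0, 50.0, size=(15, 2))             # 15 Rx sections (m)

virtual = (r_tx[:, None, :] + r_rx[None, :, :]).reshape(-1, 2)  # (45, 2)
unique = np.unique(np.round(virtual, 6), axis=0)
print(virtual.shape[0], "virtual receivers,", unique.shape[0], "unique positions")
```

With a random receiver layout all 45 positions are distinct; MAARSY's actual configuration was deliberately chosen so that three of them coincide, providing the redundancy used above for transmitter phase calibration.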
Figure 1a shows the 15 hexagons used in reception and the three anemones used in transmission (B, D, F). Figure 1d shows the resulting virtual receiving antennas, three of which are redundant and located at the origin.

Figure 1. MAARSY antenna configuration for SIMO (a, b, c) and MIMO (d, e, f). (a) The 16 hexagons used in reception are shown in grey and the three anemones used in transmission are colored. (b) Visibility samples for SIMO. (c) Point spread (or instrument) function for SIMO. (d) The virtual positions of the resulting receiving antennas using MIMO. (e) Visibility samples for MIMO. (f) Point spread (or instrument) function for MIMO. See text for further details.

In order to separate the contribution of each transmitter, a form of transmit diversity was needed. In Urco et al. (2018), three types of transmit diversity were proposed: code, time, and polarization. Code diversity is recommended for atmospheric observations given that it is not sensitive to the temporal correlation or polarization of the target of interest. Unfortunately, code diversity cannot currently be used at MAARSY. For targets whose temporal correlation time is longer than the time separation between transmitters, time diversity can be applied. Given that PMSEs have a relatively long correlation time (a few hundreds of milliseconds), we applied time diversity to enhance the spatiotemporal features of PMSEs. The effective time separation between transmitters was 4, 4, and 8 ms for the pairs BD, DF, and BF, respectively.

As explained by Urco et al. (2018), in a monostatic coherent MIMO radar, the relationship between the normalized spatial cross-correlation of signals from two different transmitting–receiving paths and the angular distribution of scattered power, for a given range and frequency bin, can be described by

$$\frac{\langle v_{m,p}\, v_{n,q}^{*}\rangle}{\sqrt{\langle |v_{m,p}|^{2}\rangle\,\langle |v_{n,q}|^{2}\rangle}} = V\!\left(\mathbf{k}\,(\Delta\mathbf{r}_{m,n}+\Delta\mathbf{r}_{p,q})\right) = e^{\,j(2\pi f_{\mathrm{d}}\tau + \varphi_{m,n} + \varphi_{p,q})} \int B(\boldsymbol{\theta})\, e^{-j\,\mathbf{k}\,(\Delta\mathbf{r}_{m,n}+\Delta\mathbf{r}_{p,q})}\,\mathrm{d}\boldsymbol{\theta}, \qquad (1)$$

where $v_{m,p}$ is the signal from the transmitting–receiving path (m, p), m being the receiver and p the transmitter; $v_{n,q}^{*}$ is the complex conjugate of the signal from the transmitting–receiving path (n, q), n being the receiver and q the transmitter; $\langle v_{m,p}\, v_{n,q}^{*}\rangle$ is the cross-correlation of the two signals from spatially separated antennas; $V(\mathbf{k}(\Delta\mathbf{r}_{m,n}+\Delta\mathbf{r}_{p,q}))$ is the visibility sample at $\Delta\mathbf{r}_{m,n}+\Delta\mathbf{r}_{p,q}$; $\mathbf{k}$ is the wave number vector, equal to $(2\pi/\lambda)\boldsymbol{\theta}$, with $\lambda$ the radar wavelength; $\boldsymbol{\theta}$ is the angle of arrival, equal to $(\theta_x, \theta_y, \theta_z)$, the direction cosines in the (x, y, z) directions; $B(\boldsymbol{\theta})$ is the angular scattered power distribution, also known as brightness; $\Delta\mathbf{r}_{m,n}$ is the spatial separation between receivers m and n; $\Delta\mathbf{r}_{p,q}$ is the spatial separation between transmitters p and q; $2\pi f_{\mathrm{d}}\tau$ is the phase difference due to the Doppler shift $f_{\mathrm{d}}$ of the target, with $\tau$ the time separation between transmitters; $\varphi_{p,q}$ is the phase difference between transmitters; and $\varphi_{m,n}$ is the phase difference between receivers.

A quick comparison between the visibility (sampling) domains for SIMO and MIMO, shown in Fig. 1b and e, indicates that the antenna aperture for MIMO is larger than that for SIMO by ∼50 %. The difference lies in the fact that the MIMO antenna aperture is defined by the maximum separation between two virtual receiving antennas, i.e., $\max(\Delta\mathbf{r}_{m,n}+\Delta\mathbf{r}_{p,q})$, whereas for SIMO $\Delta\mathbf{r}_{p,q}=0$ and the antenna aperture is defined only by the maximum spatial separation between two receiving antennas. Figures 1c and f show the resulting instrument function, or point spread function, for SIMO and MIMO, respectively. As expected, the half-power beamwidth (HPBW) for MIMO is ∼50 % smaller than for SIMO, resulting in an angular resolution of 2.4° for MIMO compared to 3.6° for SIMO. Furthermore, the sidelobes in the MIMO configuration are strongly reduced, given that the visibility coverage is larger and contains no gaps.

Before inverting Eq. (1), the three phase differences due to time diversity ($2\pi f_{\mathrm{d}}\tau$), to receivers ($\varphi_{m,n}$), and to transmitters ($\varphi_{p,q}$) need to be corrected. When the analysis is done in the frequency domain we can easily correct the term $2\pi f_{\mathrm{d}}\tau$, given that we know the frequency and the time separation between transmitters. The phase offsets between receivers have been calibrated using Cassiopeia A as a radio source (e.g., Chau et al., 2014). Additionally, we have calibrated the phase offsets between transmitters using the three redundant virtual receivers described above. Each of the redundant virtual receivers comes from one transmitter. They were constrained to have zero phase difference with respect to each other, given that all three must be located at the same virtual position (see, e.g., Urco et al., 2018, for more details). Once the imaging system is calibrated, we can invert Eq. (1) to obtain the estimated brightness $\hat{B}(\boldsymbol{\theta})$. Given that the number of unique visibility samples is still less than the number of unknowns (brightness points), some kind of regularization is needed to solve Eq. (1). Two of the most well-known radar imaging techniques applied to atmospheric and ionospheric targets are Capon and MaxEnt (Hysell, 1996).

3 Radar imaging

## 3.1 Capon technique

The angular resolution obtained from a direct inversion of Eq. (1) using the inverse Fourier transform is limited by the longest baseline and by the unmeasured antenna separations (visibility gaps). The Capon technique (e.g., Palmer et al., 1998) improves on this and can be seen as an extension of the inverse Fourier transform.
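To make this concrete, here is a minimal numerical sketch of Capon imaging for a 1-D receiving array. The geometry, wavelength, and scene are entirely hypothetical, and Eq. (3) below gives the formal definition used in the paper:

```python
# Minimal 1-D Capon imaging sketch (hypothetical geometry and scene).
# Implements B(theta) = 1 / (M^H V^-1 M), cf. Eq. (3) in the text.
import numpy as np

rng = np.random.default_rng(3)
lam = 5.66                                   # ~53 MHz wavelength (m)
k = 2 * np.pi / lam
pos = np.arange(8) * 28.0                    # 8 receivers, 28 m apart
thetas = np.linspace(-0.15, 0.15, 301)       # direction cosines to scan

# Two simulated point scatterers at theta = -0.05 and +0.03
src = np.array([-0.05, 0.03])
amp = np.array([1.0, 0.7])
A = np.exp(1j * k * np.outer(pos, src))      # (8, 2) steering matrix
s = amp * (rng.standard_normal((2000, 2)) + 1j * rng.standard_normal((2000, 2)))
noise = 0.1 * (rng.standard_normal((2000, 8)) + 1j * rng.standard_normal((2000, 8)))
v = s @ A.T + noise                          # received samples (2000, 8)

V = v.conj().T @ v / v.shape[0]              # sample visibility (covariance) matrix
Vinv = np.linalg.inv(V)
M = np.exp(1j * k * np.outer(pos, thetas))   # Fourier kernels, (8, 301)
B = 1.0 / np.real(np.einsum("ij,ik,kj->j", M.conj(), Vinv, M))
print(thetas[np.argsort(B)[-2:]])            # peaks should land near the sources
```

The same estimator carries over to the MIMO case by replacing the physical receiver positions with the virtual-receiver positions sketched earlier.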
The difference lies in the fact that Capon chooses the antenna weights adaptively, according to the data, in order to minimize the sidelobe interference from signals outside the direction of interest. Capon's technique provides an estimate of the brightness function given by

$$\hat{B}(\boldsymbol{\theta}) = \frac{1}{M^{\mathrm{H}}\, V^{-1}\, M}, \qquad (3)$$

where $M = \left[ e^{-j\mathbf{k}(\mathbf{r}_{m_0}+\mathbf{r}_{p_0})}, e^{-j\mathbf{k}(\mathbf{r}_{m_0}+\mathbf{r}_{p_1})}, \dots, e^{-j\mathbf{k}(\mathbf{r}_{m_1}+\mathbf{r}_{p_0})}, \dots \right]^{T}$ is the Fourier kernel and $V = V\{\mathbf{k}(\Delta\mathbf{r}_{m_i,m_j}+\Delta\mathbf{r}_{p_k,p_l})\}$ is the visibility due to the virtual receivers $v_{m_i,p_k}$ and $v_{m_j,p_l}$, with i and j being the receiver indices and k and l being the transmitter indices.

## 3.2 Maximum entropy technique

Even when MIMO is used, the problem is still underdetermined. Thus, there are infinitely many possible image solutions B which agree with the data V. Of all the possibilities, MaxEnt chooses the solution with the maximum entropy, or minimal information content (e.g., Hysell, 1996), as the most likely brightness distribution and the one most consistent with the available visibility data and their statistical uncertainties. The entropy for a given frequency bin and range can be defined as

$$S = -\sum_{\boldsymbol{\theta}} \hat{B}(\boldsymbol{\theta})\, \ln\!\left\{ \hat{B}(\boldsymbol{\theta})/F \right\}, \qquad (4)$$

$$F = \sum_{\boldsymbol{\theta}} \hat{B}(\boldsymbol{\theta}), \qquad (5)$$

where F is the summation of the brightness distribution over the region of interest. The solution of Eq. (1) is then the brightness that maximizes S while agreeing with the measured visibilities to within ϵ, the noise amplitude associated with the visibility measurements. In this work, we have also considered further refinements: specifically, we have taken into account the transmitting beam pattern and the statistical uncertainties of all the visibility pairs.

4 Results

Figure 2 shows the resulting 24-bit range–time Doppler intensity (RTDI) image of the vertical beam for 32 h of continuous operation on 16 and 17 July 2017. This plot was obtained after applying MaxEnt to the data and selecting the values corresponding to the vertical direction. The signal intensity is represented as lightness, Doppler information as hue, and spectral width as saturation. As shown later, the resulting HPBW for this experiment is <1°, indicating that the Doppler information must be mainly due to the vertical motion. The RTDI plot indicates that the vertical motion is slow (green color), as expected. Nevertheless, there are two regions, at 23:30 and 06:30 LT around 89 km, in which the Doppler velocity presents unrealistic values. Indeed, the PMSEs were so strong at those times that even the antenna sidelobes can be seen. Unfortunately, the imaging algorithm cannot assign the correct angle of arrival to these unusually strong echoes due to the angular ambiguity associated with our antenna array. The angular ambiguity is defined by the minimum separation between two antennas: the smaller the separation, the larger the unambiguous angle (e.g., Woodman, 1997).
A manual angular correction could be applied using the known Doppler, but this is a hard task in the presence of many targets. A smaller baseline is recommended in future experiments for these special cases.

Figure 2. A 24-bit range–time Doppler intensity (RTDI) image of PMSEs using MIMO with time diversity, obtained on 16 and 17 July 2017. The signal intensity is represented as lightness, Doppler information as hue, and spectral width as saturation. The legend on the left represents the SNR vs. Doppler color map for a saturation of 90 %. The legend on the right represents the spectral width vs. Doppler for a lightness of 50 %. Note that only the signal corresponding to the narrow vertical region of the illuminated area is shown.

## 4.1 SIMO vs. MIMO results

Since the estimated brightness is expressed in polar coordinates, a cubic spline interpolation was applied to convert it to Cartesian coordinates, i.e., from $B(\theta_x, \theta_y, R)$ to $B(x, y, z)$, with the radar located at the center (x=0, y=0, z=0). Below we show the results of two selected events (Events 1 and 2) after performing such interpolation. For both events, we show x vs. y cuts for a given z, as well as x vs. z cuts for a given y, where x, y, and z represent the east–west (EW) direction, the north–south (NS) direction, and altitude, respectively. Examples of EW–NS and EW–altitude 2-D images for Event 1, obtained by applying Capon and MaxEnt to the two different antenna configurations, SIMO and MIMO, are shown in Figs. 3 and 4, respectively. Four different results are shown: (a) SIMO-Capon, (b) SIMO-MaxEnt, (c) MIMO-Capon, and (d) MIMO-MaxEnt. A quick look at the results shows the following. (1) MaxEnt outperforms Capon when the same antenna configuration is used, either SIMO or MIMO; this was already pointed out by previous works (e.g., Yu et al., 2000; Harding and Milla, 2013). (2) As expected, MIMO shows a cleaner and more defined image compared to SIMO, whether Capon or MaxEnt is employed. (3) The improvement from using MIMO instead of SIMO is much larger for MaxEnt than for Capon. The improvement of MIMO-Capon with respect to SIMO-Capon is about 50 %, due to the larger virtual antenna array, whereas the improvement of MIMO-MaxEnt with respect to SIMO-MaxEnt is much better than 50 %. This difference lies in the fact that Capon tries to reduce the sidelobes by adaptively steering them toward echo-free zones. Unfortunately, for the two events shown, most of the illuminated area is filled with PMSE scattering, and thus the performance of Capon is expected to be comparable to conventional beamforming (the inverse Fourier transform). In the case of MaxEnt, the improvement is mainly due to the larger virtual antenna array and the use of statistical uncertainties, as described above. Unlike Capon, our MaxEnt implementation takes advantage of the redundant visibility pairs, giving more weight to pairs with less uncertainty, i.e., more redundancy. Figure 1b and e show the visibility pairs and their redundancy for SIMO and MIMO, respectively.

Figure 3. EW–NS images for z = 85.8 km obtained from four different implementations, (a) SIMO-Capon, (b) SIMO-MaxEnt, (c) MIMO-Capon, and (d) MIMO-MaxEnt, at 00:56:55 UT on 17 July 2017, i.e., Event 1. Images are color coded the same as in Fig. 2. The yellow dashed horizontal and vertical lines represent the locations of the NS–EW cuts shown in later figures for Event 1.

Figure 4. Similar to Fig. 3, but for an EW–altitude cut at y = 0 km. The yellow dashed horizontal lines represent the locations of the altitude cuts shown in previous and later figures for Event 1.

Coming back to our comparison of SIMO vs. MIMO: with MIMO-MaxEnt, small wave-like structures of 2 km wavelength can be clearly observed which are invisible in the SIMO implementations or in MIMO-Capon. For example, observe the two wavefronts at x = 10 km in Fig. 3d, right beside the larger meridionally oriented wavefronts of 7 km wavelength. This indicates that wave-like structures of different wavelengths coexist within PMSEs, as previously seen in NLCs (e.g., Baumgarten and Fritts, 2014). In addition, Fig. 4d shows that the ascending structures (red color) have a higher signal-to-noise ratio (SNR) than the descending structures (blue color). We show similar 2-D cuts for Event 2 in Figs. 5 and 6, for z = 82.7 km and y = −6 km, respectively. In this case, the observed wavelength is 12 km. Unlike the first event, the SNR is similar for targets with negative and positive Doppler. Figure 6d shows two very interesting features: (a) a very well-defined wave-like structure between 82 and 84 km and (b) a quasi-uniform structure between 84 and 86 km, which apparently has been modulated by the first wave. In this case, the wave-like structure is easily discernible even with SIMO-Capon, given that the wavelengths are larger than in Event 1 (see Fig. 5a).

Figure 5. Same as Fig. 3, but at 05:56:13 UT on 17 July 2017, i.e., Event 2. The yellow dashed horizontal and vertical lines represent the locations of the NS/EW cuts shown in later figures for Event 2.

Figure 6. Same as Fig. 5, but for an EW–altitude cut at y = −6 km. The yellow dashed horizontal lines represent the locations of the altitude cuts shown in previous and later figures for Event 2.

## 4.2 MIMO results

Having shown the better qualitative performance of MIMO-MaxEnt with respect to the other three implementations for the two selected events, next we present extended results using just MIMO-MaxEnt. Figure 7 shows the evolution in time of the two selected events, i.e., Event 1 (panels a, b, and c) and Event 2 (panels d, e, and f). Figure 7a and d show the time evolution vs. altitude for selected EW and NS coordinates. In these plots, we can appreciate how variable PMSE structures are, showing different altitude extents. Note that the effective horizontal area is less than 1 km² in both cases.

Figure 7. 24-bit time-representation images of PMSE structures as a function of altitude (RTDI) (a, d), EW location (keogram) (b, e), and NS location (keogram) (c, f) for selected cuts, for both Event 1 (a, b, c) and Event 2 (d, e, f). In the keograms, the wind components obtained with specular meteor radars (SMRs) and from MAARSY PMSEs are shown by pink and yellow arrows, respectively. The white dashed horizontal lines represent the locations of the altitude, EW, and NS cuts shown in previous figures and current keograms for Events 1 and 2. The white dashed vertical lines represent the times of the cuts shown in previous figures.

The second and third rows of Fig. 7 show the time evolution vs. the EW direction and the time evolution vs. the NS direction, i.e., EW and NS keograms, respectively. We have included the zonal (u0) and meridional (v0) wind velocities estimated from a combination of two specular meteor radars (SMRs) (pink arrows) and from MAARSY based on PMSE Doppler velocities (yellow arrows). The wind values are shown in Table 3.
Since these are time vs. distance plots, the zonal and meridional winds are represented by arrows whose slopes indicate the wind magnitude, i.e., how far a target is displaced along the y axis in a given time along the x axis. The SMR winds were obtained by combining SMR detections from Andenes and Tromsø in northern Norway (see, e.g., Chau et al., 2017, for details). In order to estimate the winds from PMSEs we used the following formula:

$$v_{\mathrm{rad}}(\theta_x, \theta_y, \theta_z) = u\,\theta_x + v\,\theta_y + w\,\theta_z, \qquad (7)$$

where $v_{\mathrm{rad}}$ is the radial wind, $(\theta_x, \theta_y, \theta_z)$ are the direction cosines, and (u, v, w) are the zonal, meridional, and vertical wind components, respectively. Assuming constant u, v, and w for a given altitude bin and time bin, and taking all the measurements with an SNR higher than −5 dB, we invert Eq. (7) and obtain u0, v0, and w0, the mean values of u, v, and w, respectively.

The keograms for Event 1, i.e., from 00:50 to 01:05 UTC, show that the meridionally oriented wavefronts have a limited vertical extent centered at 85 km (Fig. 7a). Since this wave has a finite wavelength in the EW direction, the zonal wave propagation can clearly be observed in Fig. 7b, which shows that the elongated meridionally oriented wavefronts are drifting zonally in the same direction and at the same speed as the wind. In the NS direction, the meridional drift of the wave is not clearly observed due to the elongated structure. Mesospheric wave-like features observed with airglow imagers (ripples) have also been noticed to drift with the background wind (e.g., Hecht, 2003). These ripples have been associated with gravity wave breaking and are a clear signature of atmospheric instability. Figure 7d shows another interesting wave-like example (Event 2). Unlike the first case, this wave does not keep its amplitude in the vertical direction (see Fig. 7d): it grows and then disappears. Its direction of propagation in the zonal and meridional directions is also interesting. As shown in Fig. 7e, the direction of propagation in the zonal direction is completely opposite to the background wind: whereas the wind is going from east to west, the wave propagates from west to east. In the NS direction (Fig. 7f), the wind is close to zero and we do not expect changes in this direction. Since its wavelength is relatively small, this structure might be classified as an instability; however, the opposite direction of propagation suggests that it could be a propagating gravity wave. Further investigation of these events, including lidar and airglow imager observations, is needed to understand the physical mechanisms behind them.

PMSEs have been used as a neutral wind tracer, assuming that u, v, and w are constant and homogeneous during the analyzed time. Those works therefore assumed that scatterers within PMSEs move with the neutral wind, at the same velocity and in the same direction. Unlike winds obtained from SMRs, winds from PMSEs are affected by local disturbances, as shown in Fig. 7e and f. When the dynamics of local structures are not in agreement with the wind dynamics, a bias can be introduced in the wind estimation (as shown in Event 2). However, when these local disturbances are moving with the wind, the estimated wind is not affected (Event 1).
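As an illustration of the inversion of Eq. (7), here is a minimal least-squares sketch with synthetic inputs; all values and variable names are our own, chosen only for demonstration:

```python
# Minimal sketch of inverting Eq. (7) by least squares:
# v_rad = u*theta_x + v*theta_y + w*theta_z. Synthetic inputs only.
import numpy as np

rng = np.random.default_rng(1)
n = 500
tx = rng.uniform(-0.2, 0.2, n)                 # theta_x
ty = rng.uniform(-0.2, 0.2, n)                 # theta_y
tz = np.sqrt(1.0 - tx**2 - ty**2)              # theta_z (unit direction)

u_true, v_true, w_true = 40.0, -15.0, 0.3      # "true" winds (m/s)
vrad = u_true * tx + v_true * ty + w_true * tz + rng.normal(0.0, 1.0, n)

G = np.column_stack([tx, ty, tz])              # design matrix
(u0, v0, w0), *_ = np.linalg.lstsq(G, vrad, rcond=None)
print(f"u0={u0:.1f}, v0={v0:.1f}, w0={w0:.2f}")  # should be near 40, -15, 0.3
```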
Note that the PMSE winds are in good agreement with the SMR winds for Event 1, but not for Event 2, particularly for the meridional component. An animated sequence of the two events is included in the Video supplement, i.e., Movies S1 and S2. For both events, the sequence includes selected cuts of EW–NS, EW–altitude, and NS–altitude. In Movie S1, we identify at least four examples of monochromatic waves with different wavelengths drifting with the wind in the northwest direction (at 23:57:37, 00:02:24, 00:10:57, and 00:55:33 UTC). Interestingly, in this case, both longitudinal and transverse waves drift with the background wind. In Movie S2, we show the complete evolution in time of Event 2. In the EW–altitude cut, the wave structure between 82 and 85 km drifts against the wind, whereas a layer at 87 km between 05:20 and 05:30 UTC follows the background wind. Note that the projected radial wind (from red to blue) indicates a westward wind. These events are good examples of the complicated dynamics within PMSEs. Further analysis and interpretation of these high-resolution spatiotemporal structures will be done in a future work.

Figures 8 and 9 show 3-D maps of (a) the signal-to-noise ratio (SNR), (b) the radial velocity, (c) the locally enhanced SNR, and (d) the residual radial velocity (i.e., v_res) for Events 1 and 2, respectively. In addition, contours of locally enhanced SNR are overplotted on both radial velocity panels. The SNR and radial velocity were obtained from the first and second spectral moments (e.g., Doviak and Zrnić, 1993). The locally enhanced SNR was obtained using a 2-D Gaussian kernel with a width of 6 pixels. The local enhancement allows us to observe weak structures within the strong ones. For example, wavefronts are distinguishable in Fig. 8c which were not visible in Fig. 8a. The residual radial velocity, in turn, was estimated by removing the contributions of the estimated mean horizontal velocities from the measured radial velocities, i.e.,

$$v_{\mathrm{res}}(\theta_x, \theta_y, \theta_z) = v_{\mathrm{rad}}(\theta_x, \theta_y, \theta_z) - (u_0\,\theta_x + v_0\,\theta_y). \qquad (8)$$

Figure 8. 3-D contour plots at 00:55:33 UT on 17 July 2017, i.e., Event 1, for four selected altitudes: 84, 84.6, 85.2, and 85.8 km. The following are shown for each altitude: (a) SNR, (b) radial velocity, (c) locally enhanced SNR, and (d) residual radial velocity. Contours of locally enhanced SNR are overplotted on both velocity plots.

Figure 9. Same as Fig. 8, but at 05:54:01 UT on 17 July 2017 for the altitudes 82, 82.7, 83.4, and 84 km, i.e., Event 2.

Assuming that v_res is mainly due to the vertical motion, we can clearly see in Fig. 8d how upward (red) and downward (blue) structures drift across the illuminated area, possibly due to KHI. Similarly, Fig. 9 shows images of Event 2. In this case, the horizontal wind was small and most of the radial velocity was due to the vertical motion; i.e., the radial and residual velocities are almost the same. As mentioned above, in this event the waves propagate horizontally against the weak horizontal wind. The animated versions of Figs. 8 and 9 are shown in Movies S3 and S4, respectively.
Although the information might be redundant compared to Movies S1 and S2, we have decided to include them to provide a more standard view of the typical spectral parameters of a multibeam radar.

Making a quantitative comparison between SIMO and MIMO for real targets is not an easy task: we would need prior knowledge of the brightness to make a good analysis, which is not the case for PMSEs. Fortunately, our observations include echoes from specular meteors; see the bright echo located at (−10.5, −12.5) in Fig. 3d. Indeed, meteor echoes can be observed in the PMSE region, although the great majority of them occur outside this window. When a meteor echo occurs at PMSE altitudes, it is short-lived (less than a few hundred milliseconds). In previous studies, meteor echoes were treated as outliers and removed from the measurements (e.g., Hashimoto et al., 2014). For our benefit, they can also be used to quantitatively evaluate the angular resolution that can be achieved with our implementations. A specular meteor echo can be considered a point target: along its trajectory the trail is long (hundreds of meters to a few kilometers), but in the direction transverse to the trail it is very narrow, and its angular response is correspondingly narrow. In Fig. 10 we show the normalized angular scattered power distribution of a specular meteor using SIMO and MIMO in combination with Capon and MaxEnt. As expected, the range resolution does not change between SIMO and MIMO (see Fig. 10a): we see a peak at 89.1 km and low power at other ranges. However, when comparing Capon and MaxEnt, MaxEnt shows a clean power distribution across all ranges, while Capon shows remaining sidelobe contamination at other ranges, coming from other angles. This indicates that, even with MIMO, Capon does not suppress the sidelobes as well as MaxEnt. Figure 10b and c show the angular power distributions for θx and θy, respectively, in which the points are the samples for a given angle and the continuous line is a fitted Gaussian function. Using the fitted function, we estimated the half-power beamwidth (HPBW) for each implementation. Table 2 summarizes the angular resolution and the improvement factor of each method compared to the theoretical angular resolution of the full-array MAARSY radar. As expected, the improvement between SIMO and MIMO is about a factor of 1.5, given that we increased the antenna aperture for MIMO by ∼50 %. When combining MIMO and MaxEnt, surprisingly, we obtained an angular resolution of ∼0.6°, i.e., more than 5 times better than MAARSY's HPBW.

Figure 10. Normalized angular power distribution of a specular meteor echo as a function of (a) range, (b) EW angle (θx), and (c) NS angle (θy). Results are shown for all four implementations, i.e., SIMO-Capon (blue), SIMO-MaxEnt (orange), MIMO-Capon (green), and MIMO-MaxEnt (red).

Table 2. Performance of the imaging techniques.

Table 3. Mean wind values for the two events presented.

5 Discussion

We have shown qualitatively and quantitatively that radar imaging of PMSEs is significantly improved, by at least 50 %, when using MIMO instead of SIMO configurations. Two different imaging methods have been applied, Capon and MaxEnt. As expected from previous works, MaxEnt images are better than Capon images; however, MaxEnt is computationally more demanding. Similarly, we found that the quality of MIMO-Capon is comparable to that of SIMO-MaxEnt. Even though MIMO allows us to improve the point spread function, it is not perfect.
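The HPBW estimation via a Gaussian fit, as used for Fig. 10, can be sketched as follows; the data here are synthetic, not the actual Fig. 10 measurements:

```python
# Sketch of HPBW estimation: fit a Gaussian to an angular power profile and
# convert its width to a half-power beamwidth. Synthetic data only.
import numpy as np
from scipy.optimize import curve_fit

def gauss(theta, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((theta - mu) / sigma) ** 2)

rng = np.random.default_rng(4)
theta = np.linspace(-2.0, 2.0, 81)                 # angle grid (degrees)
sigma_true = 0.6 / (2 * np.sqrt(2 * np.log(2)))    # corresponds to HPBW of 0.6 deg
power = gauss(theta, 1.0, 0.1, sigma_true) + 0.02 * rng.standard_normal(theta.size)

popt, _ = curve_fit(gauss, theta, power, p0=[1.0, 0.0, 0.5])
hpbw = 2 * np.sqrt(2 * np.log(2)) * abs(popt[2])   # FWHM = 2.355 * sigma
print(f"estimated HPBW ~ {hpbw:.2f} deg")
```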
5 Discussion

We have shown qualitatively and quantitatively that radar imaging of PMSEs is significantly improved when using MIMO instead of SIMO configurations, by at least 50 %. Two different imaging methods have been applied, Capon and MaxEnt. As expected from previous works, MaxEnt images are better than Capon images; however, MaxEnt is computationally more demanding. Similarly, we found that the quality of MIMO-Capon is comparable to that of SIMO-MaxEnt.

Even though MIMO allows us to improve the point spread function, it is not perfect. We expect some artifacts due to the sidelobes, which are −15 dB weaker than the main lobe; see Fig. 1f. When strong and weak echoes coexist in the same region, some artifacts might be confused with weak echoes. Although Capon and MaxEnt help to minimize the sidelobe contribution, we are being conservative by employing a relatively large SNR threshold (>5 dB), i.e., discarding weak echoes that might be contaminated by strong sidelobe echoes. By doing this we increase the statistical significance of our results, which are persistent in time and space, as shown in the animations and the keograms.

The preliminary results using MIMO-MaxEnt allow us to observe PMSEs with unprecedented horizontal resolution (<1 km) compared to multibeam scanning experiments, and therefore to identify structures with horizontal wavelengths of less than 10 km (e.g., Event 1 above). For structures with wavelengths of the order of 15–20 km or so, the other imaging implementations, i.e., SIMO-Capon, SIMO-MaxEnt, and MIMO-Capon, are sufficiently good to characterize them. These new capabilities will allow KHIs and general GWs (not only monochromatic ones) to be identified and characterized better than previously done at polar mesospheric heights during summer. Our proposed technique complements previous observations that have been performed at nighttime under clear skies using airglow and lidars (e.g., Smith, 2014; Hecht et al., 2000, 2007; Taylor et al., 2007). We leave the detailed analysis and interpretation of these and other events observed with this new capability for a future effort. In the following paragraphs, we discuss the technical results and propose future improvements.

The improved resolution using MIMO results from the larger effective visibility aperture and the larger number of independent samples compared to a SIMO configuration, i.e., 125 m instead of 76 m and 475 samples instead of 163 samples, respectively. In addition, the MaxEnt approach allows an improvement of at least a factor of 2 in angular resolution compared to Capon. The maximum number of horizontal blobs that could theoretically be estimated for each range, time, and "color" (i.e., frequency bin) would be 79 (= 475/6), whereby each blob is characterized by a 2-D Gaussian function with six parameters (e.g., Chau and Woodman, 2001). Another reason for the better results using MIMO-MaxEnt is the number of redundant visibility measurements. Although they do not provide additional information in terms of degrees of freedom, the redundancy helps to reduce the statistical uncertainties of the visibility samples. Recall that in our MIMO implementation there are 1980 visibility samples (45×44), of which only 475 are independent.

Despite the significant improvement, not everything is positive about applying MIMO. In the following paragraphs, we discuss the critical points of applying MIMO in terms of (a) power-aperture reduction and (b) computational demands and real-time applicability.

As indicated in previous MIMO studies (e.g., Urco et al., 2018), in atmospheric radars MIMO is applicable to targets with a large RCS, since a reduction of power aperture is inherent to MIMO. In our particular application to PMSEs, the transmitter sections were 1/7 of the total area, and therefore also 1/7 of the total transmitter power, i.e., a −17 dB transmitted signal compared to usual experiments. In reception, 15 groups of seven antennas (hexagons) were used instead of the 433 available antennas.
Moreover, given the time multiplexing, the number of coherent integrations was reduced and therefore the noise was increased compared to standard operations. In total, the sensitivity of our MIMO experiment is 27 dB lower. Comparing with previously reported PMSE radar cross sections, our MIMO observations are limited to PMSEs with RCS larger than 10−14 m−1, i.e., approximately 40 % of the usual seasonal MAARSY PMSE observations.

MaxEnt is known to be computationally more demanding than Capon in SIMO applications (e.g., Yu et al., 2000). In the case of MIMO, the computational demands increase significantly, given the larger number of effective receivers, i.e., 45 instead of 15. In terms of visibility pairs, the increase is from 210 to 1980. In the case of Capon, real-time processing is still possible with these increased numbers of samples; however, MaxEnt is not applicable in real time for either SIMO or MIMO. For example, for 80 s of data using an i5 PC with 15 cores, the processing times are 20 min and 3 h for SIMO-MaxEnt and MIMO-MaxEnt, respectively. A future improvement to make MIMO-MaxEnt faster would be to use only one value for each redundant visibility sample, i.e., to work with the 475 independent samples instead of all 1980 measured visibility samples. Such a value could be obtained either from the average of all the values sampling the same visibility or by preselecting only one of them. After all, many of the independent samples are obtained with only one sample (green dots in Fig. 1e).

In general, a critical point for PMSE imaging is the drifting nature of the echoes. PMSE correlation times are relatively short, and under stationary conditions one would require a few minutes of incoherent integration to reduce the statistical uncertainties of the visibility estimates. However, the structures to be imaged might move 2–5 km in 60 s for typical mesospheric motions (40–80 m s−1), either by drifting with the background wind (Event 1) or through wave propagation (Event 2). These drifting structures further limit the angular resolution that can be accomplished by any method, since the resulting image will be significantly blurred for integration times of a few minutes. To deal with the drifting nature of PMSEs, in future studies we will explore tracking techniques, i.e., make use of this motion information to improve the angular resolution (e.g., Vaswani and Zhan, 2016).

Given the computational demands of MaxEnt, in particular when combined with MIMO, we will also explore radar imaging with compressed sensing (CS) techniques (e.g., Donoho, 2006; Candes and Wakin, 2008). Harding and Milla (2013) applied CS to Jicamarca F-region irregularities and showed that CS produces results similar to MaxEnt. Our plan is to use MIMO-MaxEnt as a reference for other radar imaging techniques using SIMO, for example CS in combination with tracking. Besides the computational demands, MIMO might not be applicable at other atmospheric radar sites, and therefore the exploration of other techniques using SIMO is required.

An additional improvement to the current observations would be the use of shorter pulses and therefore better range resolution, for example 150 m. Further improvement in range could also be accomplished by applying range imaging (e.g., Palmer et al., 1998; Yu and Palmer, 2001), particularly in combination with the radar imaging implementations of this work, allowing angular resolutions of less than 1°.
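The redundant-visibility averaging suggested above, as a way to speed up MIMO-MaxEnt, is straightforward to express in code. A sketch under the assumption that baselines are given as (dx, dy) antenna separations and that redundancy is detected by rounding to a small tolerance (both assumptions are ours):

```python
import numpy as np

def average_redundant_visibilities(baselines, vis, tol=1e-3):
    """Collapse the 1980 measured visibilities onto their ~475 unique
    baselines (numbers from the MIMO setup above) by averaging every
    redundant group.
    baselines: (M, 2) float array of (dx, dy) separations in meters;
    vis: (M,) complex visibility samples."""
    keys = np.round(baselines / tol).astype(np.int64)   # grid the baselines
    uniq, inverse = np.unique(keys, axis=0, return_inverse=True)
    acc = np.zeros(len(uniq), dtype=complex)
    np.add.at(acc, inverse, vis)                        # sum each group
    counts = np.bincount(inverse, minlength=len(uniq))
    return uniq * tol, acc / counts                     # mean per baseline
```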
6 Conclusions

In this work, we have successfully implemented coherent MIMO with radar imaging at MAARSY to observe PMSEs with unprecedented angular resolution. The obtained resolution results from the combination of a larger effective aperture and a higher number of independent visibility samples provided by MIMO, together with the improved angular resolution provided by MaxEnt. Quantitatively, the maximum angular resolution accomplished is ∼0.6°, which is equivalent to having a 450 m diameter visibility aperture at 53.5 MHz and is a significant improvement over the MAARSY standard angular resolution of 3.6°.

The preliminary results with MIMO-MaxEnt allowed us to clearly identify structures slightly less than 1 km in diameter and wave-like structures with horizontal wavelengths of less than 10 km, with a time resolution of around 60 s. The identification of such structures, with varying degrees of intensity, suggests that one has to be careful about using PMSEs to estimate the background wind under an assumption of horizontal homogeneity: not only is the vertical wind inhomogeneous, but the brightness is also horizontally inhomogeneous.

Given the relatively long temporal correlation of PMSEs, i.e., a few minutes, longer integration of the noisy visibilities in time would reduce the statistical uncertainties in the resulting images of the two events presented. However, PMSE structures drift as they are imaged; therefore long integration times result in angular smearing. In the future, we plan to use the drift information to improve the angular resolution by applying tracking techniques.

As mentioned above, the implementation of MIMO-MaxEnt is computationally intensive and is currently not applicable to real-time processing. On the other hand, MIMO-Capon can be implemented in real time. Our strategy for near-future observations is to use MIMO-Capon for real-time processing and MIMO-MaxEnt for special events, until more efficient implementations and/or faster computers are available.

Data availability. Our MIMO-MaxEnt results for the two events presented here, namely the PMSE power amplitude as a function of EW, NS, altitude, and time, are shared at ftp://ftp.iap-kborn.de/data-in-publications/UrcoAMT2018b.

Video supplement. An image sequence for the two events presented in this work has been added as a supplement. These sequences show the time evolution of PMSE structures for selected EW, NS, and altitude cuts. Examples of wave structures drifting with and against the wind are shown in Movies S1 and S2, respectively.

Author contributions. JMU and JLC conceived the idea. JLC, TW, and JMU discussed the theoretical framework. JLC, JMU, and RL designed the experiment. RL and JMU carried out the experiment. JMU processed the experimental data and performed the analysis. JLC contributed to the interpretation of the results. JMU wrote the manuscript with support from JLC. All authors provided critical feedback and helped to improve the manuscript.

Competing interests. The authors declare that they have no conflict of interest.

Special issue statement. This article is part of the special issue "Layered phenomena in the mesopause region (ACP/AMT inter-journal SI)". It is a result of the LPMR workshop 2017 (LPMR-2017), Kühlungsborn, Germany, 18–22 September 2017.

Acknowledgements.
We would like to thank Toralf Renkwitz for providing the receivers' phase offsets and Marius Zecha for MAARSY data handling. This work was partially supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under SPP 1788 (CoSIP)-CH1482/3-1 and by the WATILA Project (SAW-2015-IAP-1). Open-access publication was supported by the Open Access Fund of the Leibniz Association.

Edited by: William Ward
Reviewed by: Jia Yue and Ian McCrea

References

Balsley, B. B. and Riddle, A. C.: Monthly mean values of the mesospheric wind field over Poker Flat, Alaska, J. Atmos. Sci., 41, 2368–2375, 1984.
Baumgarten, G. and Fritts, D. C.: Quantifying Kelvin-Helmholtz instability dynamics observed in noctilucent clouds: 1. Methods and observations, J. Geophys. Res.-Atmos., 119, 9324–9337, https://doi.org/10.1002/2014JD021832, 2014.
Candes, E. J. and Wakin, M. B.: An introduction to compressive sampling, IEEE Signal Proc. Mag., 25, 21–30, https://doi.org/10.1109/MSP.2007.914731, 2008.
Capon, J.: High-resolution frequency-wavenumber spectrum analysis, Proceedings of the IEEE, 57, 1408–1419, 1969.
Chau, J. L. and Woodman, R. F.: Three-dimensional coherent radar imaging at Jicamarca: Comparison of different inversion techniques, J. Atmos. Sol.-Terr. Phy., 63, 253–261, 2001.
Chau, J. L., Renkwitz, T., Stober, G., and Latteck, R.: MAARSY multiple receiver phase calibration using radio sources, J. Atmos. Sol.-Terr. Phy., 118, 55–63, 2014.
Chau, J. L., Stober, G., Hall, C. M., Tsutsumi, M., Laskar, F. I., and Hoffmann, P.: Polar mesospheric horizontal divergence and relative vorticity measurements using multiple specular meteor radars, Radio Sci., 52, 811–828, https://doi.org/10.1002/2016RS006225, 2017.
Chau, J. L., McKay, D., Vierinen, J. P., La Hoz, C., Ulich, T., Lehtinen, M., and Latteck, R.: Multi-static spatial and angular studies of polar mesospheric summer echoes combining MAARSY and KAIRA, Atmos. Chem. Phys., 18, 9547–9560, https://doi.org/10.5194/acp-18-9547-2018, 2018.
Czechowsky, P., Reid, I. M., and Rüster, R.: VHF radar measurements of the aspect sensitivity of the summer polar mesopause echoes over Andenes (69° N, 16° E), Norway, Geophys. Res. Lett., 15, 1259–1262, https://doi.org/10.1029/GL015i011p01259, 1988.
Donoho, D. L.: Compressed sensing, IEEE T. Inform. Theory, 52, 1289–1306, https://doi.org/10.1109/TIT.2006.871582, 2006.
Doviak, R. J. and Zrnić, D. S.: Weather signal processing, in: Doppler radar and weather observations, Academic Press, 2nd edn., 122–159, https://doi.org/10.1016/B978-0-12-221422-6.50011-5, 1993.
Ecklund, W. L. and Balsley, B. B.: Long-term observations of the Arctic mesosphere with the MST radar at Poker Flat, Alaska, J. Geophys. Res., 86, 7775–7780, 1981.
Foschini, G. J. and Gans, M. J.: On limits of wireless communications in a fading environment when using multiple antennas, Wireless Personal Communications, 6, 311–335, 1998.
Fritts, D. C., Tsuda, T., VanZandt, T. E., Smith, S. A., Sato, T., Fukao, S., and Kato, S.: Studies of velocity fluctuations in the lower atmosphere using the MU radar. Part II: Momentum fluxes and energy densities, J. Atmos. Sci., 47, 51–66, 1990.
Harding, B. J. and Milla, M.: Radar imaging with compressed sensing, Radio Sci., 48, 582–588, 2013.
Hashimoto, T., Nishimura, K., Tsutsumi, M., and Sato, T.: Meteor trail echo rejection in atmospheric phased array radars using adaptive sidelobe cancellation, J. Atmos. Ocean. Tech., 31, 2749–2757, https://doi.org/10.1175/JTECH-D-14-00035.1, 2014.
Havnes, O., Trøim, J., Blix, T., Mortensen, W., Næsheim, L. I., Thrane, E., and Tønnesen, T.: First detection of charged dust particles in the Earth's mesosphere, J. Geophys. Res., 101, 10839–10847, https://doi.org/10.1029/96JA00003, 1996.
Hecht, J. H.: Instability layers and airglow imaging, Rev. Geophys., 42, RG1001, https://doi.org/10.1029/2003RG000131, 2003.
Hecht, J. H., Fricke-Begemann, C., Walterscheid, R. L., and Höffner, J.: Observations of the breakdown of an atmospheric gravity wave near the cold summer mesopause at 54° N, Geophys. Res. Lett., 27, 879–882, 2000.
Hecht, J. H., Liu, A. Z., Walterscheid, R. L., Franke, S. J., Rudy, R. J., Taylor, M. J., and Pautet, P.-D.: Characteristics of short-period wavelike features near 87 km altitude from airglow and lidar observations over Maui, J. Geophys. Res., 112, D16101, https://doi.org/10.1029/2006JD008148, 2007.
Hoppe, U., Hall, C., and Röttger, J.: First observations of summer polar mesospheric backscatter with a 224 MHz radar, Geophys. Res. Lett., 15, 28–31, https://doi.org/10.1029/GL015i001p00028, 1988.
Hoppe, U.-P. and Fritts, D. C.: High-resolution measurements of vertical velocity with the European incoherent scatter VHF radar: 1. Motion field characteristics and measurement biases, J. Geophys. Res., 100, 16813–16825, 1995.
Hoppe, U.-P., Fritts, D., Reid, I., Czechowsky, P., Hall, C., and Hansen, T.: Multiple-frequency studies of the high-latitude summer mesosphere: implications for scattering processes, J. Atmos. Terr. Phys., 52, 907–926, https://doi.org/10.1016/0021-9169(90)90024-H, 1990.
Huang, Y., Brennan, P. V., Patrick, D., Weller, I., Roberts, P., and Hughes, K.: FMCW based MIMO imaging radar for maritime navigation, Prog. Electromagn. Res., 115, 327–342, 2011.
Hysell, D. L.: Radar imaging of equatorial F region irregularities with maximum entropy interferometry, Radio Sci., 31, 1567–1578, 1996.
Hysell, D. L. and Chau, J. L.: Optimal aperture synthesis radar imaging, Radio Sci., 41, RS2003, https://doi.org/10.1029/2005RS003383, 2006.
Kaifler, N., Baumgarten, G., Fiedler, J., Latteck, R., Lübken, F.-J., and Rapp, M.: Coincident measurements of PMSE and NLC above ALOMAR (69° N, 16° E) by radar and lidar from 1999–2008, Atmos. Chem. Phys., 11, 1355–1366, https://doi.org/10.5194/acp-11-1355-2011, 2011.
Kelley, M. C. and Ulwick, J. C.: Large- and small-scale organization of electrons in the high-latitude mesosphere: Implications of the STATE data, J. Geophys. Res., 93, 7001–7008, https://doi.org/10.1029/JD093iD06p07001, 1988.
Kudeki, E. and Sürücü, F.: Radar interferometric imaging of field-aligned plasma irregularities in the Equatorial Electrojet, Geophys. Res. Lett., 18, 41–44, 1991.
Latteck, R. and Strelnikova, I.: Extended observations of polar mesosphere winter echoes over Andøya (69° N) using MAARSY, J. Geophys. Res.-Atmos., 120, 8216–8226, https://doi.org/10.1002/2015JD023291, 2015.
Latteck, R., Singer, W., Rapp, M., Renkwitz, T., and Stober, G.: Horizontally resolved structures of radar backscatter from polar mesospheric layers, Adv. Radio Sci., 10, 1–6, 2012a.
Latteck, R., Singer, W., Rapp, M., Vandepeer, B., Renkwitz, T., Zecha, M., and Stober, G.: MAARSY: The new MST radar on Andøya – System description and first results, Radio Sci., 47, RS1006, https://doi.org/10.1029/2011RS004775, 2012b.
Palmer, R. D., Gopalam, S., Yu, T. Y., and Fukao, S.: Coherent radar imaging using Capon's method, Radio Sci., 33, 1585–1598, 1998.
Rapp, M. and Lübken, F.-J.: Polar mesosphere summer echoes (PMSE): Review of observations and current understanding, Atmos. Chem. Phys., 4, 2601–2633, https://doi.org/10.5194/acp-4-2601-2004, 2004.
Rapp, M., Gumbel, J., Lübken, F.-J., and Latteck, R.: D region electron number density limits for the existence of polar mesosphere summer echoes, J. Geophys. Res., 107, 4187, https://doi.org/10.1029/2001JD001323, 2002.
Smith, S. M.: The identification of mesospheric frontal gravity-wave events at a mid-latitude site, Adv. Space Res., 54, 417–424, 2014.
Sommer, S. and Chau, J. L.: Patches of polar mesospheric summer echoes characterized from radar imaging observations with MAARSY, Ann. Geophys., 34, 1231–1241, https://doi.org/10.5194/angeo-34-1231-2016, 2016.
Stebel, K., Barabash, V., Kirkwood, S., Siebert, J., and Fricke, K. H.: Polar mesosphere summer echoes and noctilucent clouds: Simultaneous and common-volume observations by radar, lidar and CCD camera, Geophys. Res. Lett., 27, 661–664, https://doi.org/10.1029/1999GL010844, 2000.
Stober, G., Sommer, S., Rapp, M., and Latteck, R.: Investigation of gravity waves using horizontally resolved radial velocity measurements, Atmos. Meas. Tech., 6, 2893–2905, https://doi.org/10.5194/amt-6-2893-2013, 2013.
Stober, G., Sommer, S., Schult, C., Latteck, R., and Chau, J. L.: Observation of Kelvin–Helmholtz instabilities and gravity waves in the summer mesopause above Andenes in Northern Norway, Atmos. Chem. Phys., 18, 6721–6732, https://doi.org/10.5194/acp-18-6721-2018, 2018.
Taylor, M. J., Pendleton Jr., W. R., Pautet, P.-D., Zhao, Y., Olsen, C., Babu, H. K. S., Medeiros, A. F., and Takahashi, H.: Recent progress in mesospheric gravity wave studies using night-glow imaging systems, Geophys. Res. Lett., 25, 49–58, 2007.
Telatar, E.: Capacity of multi-antenna Gaussian channels, Eur. T. Telecommun., 10, 585–595, https://doi.org/10.1002/ett.4460100604, 1999.
Urco, J. M., Chau, J. L., Milla, M. A., Vierinen, J. P., and Weber, T.: Coherent MIMO to improve aperture synthesis radar imaging of field-aligned irregularities: First results at Jicamarca, IEEE T. Geosci. Remote, 56, 2980–2990, https://doi.org/10.1109/TGRS.2017.2788425, 2018.
Varney, R. H., Kelley, M. C., Nicolls, M. J., Heinselman, C. J., and Collins, R. L.: The electron density dependence of polar mesospheric summer echoes, J. Atmos. Sol.-Terr. Phy., 73, 2153–2165, 2011.
Vaswani, N. and Zhan, J.: Recursive recovery of sparse signal sequences from compressive measurements: A review, IEEE T. Signal Proces., 64, 3523–3549, https://doi.org/10.1109/TSP.2016.2539138, 2016.
Woodman, R. F.: Coherent radar imaging: Signal processing and statistical properties, Radio Sci., 32, 2373–2391, 1997.
Woodman, R. F. and Chau, J. L.: Antenna compression using binary phase coding, Radio Sci., 36, 45–52, 2001.
Yu, T.-Y. and Palmer, R. D.: Atmospheric radar imaging using multiple-receiver and multiple-frequency techniques, Radio Sci., 36, 1493–1503, https://doi.org/10.1029/2000RS002622, 2001.
Yu, T. Y., Palmer, R. D., and Hysell, D. L.: A simulation study of coherent radar imaging, Radio Sci., 35, 1129–1141, 2000.
Yu, T.-Y., Palmer, R. D., and Chilson, P. B.: An investigation of scattering mechanisms and dynamics in PMSE using coherent radar imaging, J. Atmos. Sol.-Terr. Phy., 63, 1797–1810, https://doi.org/10.1016/S1364-6826(01)00058-X, 2001.
Zecha, M., Röttger, J., Singer, W., Hoffmann, P., and Keuer, D.: Scattering properties of PMSE irregularities and refinement of velocity estimates, J. Atmos. Sol.-Terr. Phy., 63, 201–214, 2001.
# Euclidean space definition

1. Dec 9, 2004

### quasar987

My linear algebra book seems to give a different definition than Mathworld.com so I'll state it.

A scalar product over a vector space V is a real-valued function that to every pair of vectors u, v associates a real number, noted (u|v), satisfying the 4 axioms...

1.
2.
3.
4.

A vector space of finite dimension with a scalar product is called a Euclidean space.

My question is the following: I don't like how that definition sounds. Is it equivalent to: "Let V be a vector space of finite dimension. If there exists a scalar product function over V, then V is called a Euclidean space."?

P.S. Does anyone know a good website that teaches about diagonalization of Hermitian matrices?

2. Dec 9, 2004

### HallsofIvy

Staff Emeritus

I honestly don't understand your question. "A vector space of finite dimension with a scalar product is called a Euclidean space." and "Let V be a vector space of finite dimension. If there exists a scalar product function over V, then V is called a Euclidean space." are equivalent. I don't see any difference at all!

3. Dec 9, 2004

4. Dec 9, 2004

### quasar987

I found the definition ambiguous because for a scalar product function to exist, you have to define it first. So according to their definition, a vector space with no defined scalar product function is not a Euclidean space. But as soon as you do define one, it becomes a Euclidean space. My definition says: if a vector space is such that a scalar product function CAN be defined (i.e. "potentially"), then it is a Euclidean space. That's how I saw it.

I'm kind of realising my definition is no better than theirs now... so let me reformulate my question: "What does the definition say?" Does it say that if there is a defined scalar product function over V then V is a Euclidean space, or does it say that if a scalar product function over V can be defined, then V is a Euclidean space?

Last edited: Dec 9, 2004

5. Dec 10, 2004

### matt grime

You're just coming across a common abuse of notation. A Euclidean space is one possessing a Euclidean norm/inner product. However, you do not *have* to use the inner product. R^2 is Euclidean whether or not you use the inner product.

6. Dec 10, 2004

### HallsofIvy

Staff Emeritus

Ah! Now I see what the difference is. In a finite dimensional vector space you can show that every possible inner product is equivalent to a "dot product" defined using a particular basis. (That is, given a basis, to find the "dot product" of u and v, write each as a linear combination of the basis vectors: u = a1e1 + a2e2 + ..., v = b1e1 + b2e2 + ... . The "dot product" is a1b1 + a2b2 + ... . The Gram-Schmidt orthogonalization process essentially shows that, given any inner product, there exists a basis in which that inner product is given by the dot product.) I.e., all inner products are equivalent, so it really doesn't matter which you use.

Of course, every finite dimensional vector space can be given an inner product so, in that sense, every finite dimensional vector space is Euclidean!

7. Dec 13, 2004

### Palindrom

Every finite dimensional vector space over R is Euclidean.
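HallsofIvy's point (that any inner product on a finite-dimensional real space becomes the ordinary dot product in a suitable Gram-Schmidt basis) is easy to verify numerically. A minimal Python sketch, assuming the scalar product is given by a symmetric positive-definite matrix M; all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
M = A @ A.T + 3 * np.eye(3)          # random symmetric positive-definite M

def ip(u, v):
    return u @ M @ v                 # the chosen scalar product (u|v)

# Gram-Schmidt on the standard basis, orthonormal w.r.t. ip.
basis = []
for e in np.eye(3):
    for b in basis:
        e = e - ip(e, b) * b         # remove components along earlier b's
    basis.append(e / np.sqrt(ip(e, e)))
B = np.array(basis).T                # columns form an ip-orthonormal basis

# In coordinates w.r.t. B, the scalar product is the plain dot product.
u, v = rng.normal(size=3), rng.normal(size=3)
cu, cv = np.linalg.solve(B, u), np.linalg.solve(B, v)
print(np.isclose(ip(u, v), cu @ cv))  # True
```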
# 28 February 2018

In the Node2Vec paper, the authors propose that an embedding for every node in a graph can be learned by trying to maximize the dot product between the feature representation of a node and its neighbors, as determined by BFS (useful for structural equivalence) or DFS (useful for learning homophily, i.e., finding nodes that are connected together).

I think that the strategy for homophily makes sense, but the BFS strategy doesn't. Imagine that you have a simple graph with two nodes $$A$$ and $$B$$, each surrounded by 5 neighbors, and let's assume that the neighbors are only connected to $$A$$ and $$B$$ respectively. In that case, if you wanted to learn structural equivalence, then $$A$$ and $$B$$ should have a similar embedding, while the neighbors should all have similar embeddings. However, the objective function that the authors propose won't achieve this. Instead, it will push $$A$$ and its neighbors toward similar embeddings, and $$B$$ and its neighbors toward similar embeddings.

Could this be ameliorated by having a "left" and "right" embedding vector for each node? Then, the objective would be to maximize the dot product between the left vector of a node and the right vector of each of its neighbors.

Written on February 28, 2018
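To make the "left/right" proposal above concrete, here is a toy numpy sketch (my own illustration, not code from the Node2Vec paper): each node gets separate "left" and "right" vectors, an edge (u, v) is scored by the dot product of L[u] with R[v], and a logistic-loss SGD step with one negative sample trains them. Under this objective, a hub's left vector can align with its spokes' right vectors without the hub and spoke embeddings themselves coinciding.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 12, 8
L = rng.normal(scale=0.1, size=(n_nodes, dim))   # "left" vectors
R = rng.normal(scale=0.1, size=(n_nodes, dim))   # "right" vectors

def sgd_step(u, v, neg, lr=0.05):
    """One skip-gram-with-negative-sampling step on edge (u, v);
    'neg' is a sampled non-neighbor of u (label 0)."""
    for w, y in ((v, 1.0), (neg, 0.0)):
        p = 1.0 / (1.0 + np.exp(-L[u] @ R[w]))   # sigmoid edge score
        g = p - y                                # logistic-loss gradient
        lu = L[u].copy()
        L[u] -= lr * g * R[w]
        R[w] -= lr * g * lu

# e.g., for the two-hub graph above: sgd_step(hub_A, spoke_of_A, random_node)
```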
# Globally Convergent Methods for Nonlinear Systems of Equations

We recently switched from basic Newton-Raphson to a more advanced globally convergent Newton's method with line searches and backtracking (see Numerical Recipes, Chapter 9.7). For some special cases we still need to try too many initial guesses. Does an even more advanced approach exist? What else should we try? Thanks.
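For reference, the baseline being described (Newton with a backtracking line search on the merit function f(x) = 0.5*||F(x)||^2) looks roughly like the following. This is a simplified Python sketch of the Numerical Recipes Chapter 9.7 idea, without the cubic backtracking or step-length safeguards of the full algorithm, and the toy system at the end is made up:

```python
import numpy as np

def newton_backtracking(F, J, x0, tol=1e-10, max_iter=100,
                        alpha=1e-4, beta=0.5):
    """Solve F(x) = 0: take the Newton step, then shrink it until the
    merit function f = 0.5*||F||^2 decreases sufficiently (Armijo)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx, ord=np.inf) < tol:
            break
        f = 0.5 * Fx @ Fx
        dx = np.linalg.solve(J(x), -Fx)   # Newton direction
        slope = -2.0 * f                  # grad f . dx = -||F||^2
        t = 1.0
        while True:
            x_new = x + t * dx
            Fn = F(x_new)
            if 0.5 * Fn @ Fn <= f + alpha * t * slope or t < 1e-12:
                break
            t *= beta                     # backtrack along the step
        x = x_new
    return x

# Toy system: x^2 + y^2 = 4 and x*y = 1.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] * x[1] - 1.0])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [x[1], x[0]]])
print(newton_backtracking(F, J, [2.0, 0.5]))
```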
In this new volume of Quality in Health Care we have decided to modify the structure of the journal scan. Each issue will still include a scan of the core medical journals—BMJ, JAMA, Lancet, and New England Journal of Medicine—but, in addition, we will invite a health professional to do a scan of the published literature based on a particular theme. The theme of the scan for the articles reviewed by Ross Scrivener in this issue is risk management.

## Risk management

Bibliographic search strategy used

Medline (PubMed) was searched using the terms "medication errors" and "risk management" together. The following articles were selected from 41 articles that appeared during October 1998 to October 1999 indexed under these terms.

Stanhope N, Crowley-Murphy M, Vincent C, et al. An evaluation of adverse event reporting. J Eval Clin Pract 1999 Feb;5:5–12.

Abstract adapted from the original article.

Objectives: To determine the reliability of adverse event reporting in two teaching hospital obstetric units by (a) establishing what proportion of adverse incidents were not reported by staff and (b) determining whether a maternity risk manager can increase the reliability of adverse incident reporting by searching through various types of documentation.

Setting: Two obstetric units serving similar populations and with comparable numbers of deliveries. Both units had established risk management programmes led by a maternity risk manager.

Methods: A retrospective review of the notes relating to 250 consecutive deliveries in each of the two obstetric units. Notes were screened for the presence of adverse incidents defined by lists of incidents to be recorded in accordance with unit protocols.

Results: A total of 196 adverse incidents were identified from the 500 deliveries. Staff reported 23% of these and the risk managers identified a further 22%. The remaining 55% of incidents were identified only by a retrospective case note review and were not known to the risk manager. Staff reported about half the serious incidents (48%), but comparatively few of the moderately serious (24%) or minor ones (15%). The risk managers identified an additional 16% of serious incidents that staff did not report. Drug errors were analysed separately; only two were known to the risk managers and a further 44 were found by case note review.

Conclusions: Incident reporting systems may produce much potentially valuable information, but they seriously underestimate the true level of reportable incidents. Where one risk manager covers an entire trust rather than a single unit, reporting rates are likely to be much lower than in this study. Greater clarity is needed about the definition of reportable incidents (including drug errors). Staff should receive continuing education about the purposes and aims of clinical risk management and incident reporting, and consideration should be given to designating specific members of staff with responsibility for reporting.

Vincent C, Stanhope N, Crowley-Murphy M. Reasons for not reporting adverse incidents: an empirical study. J Eval Clin Pract 1999 Feb;5:13–21.

Abstract adapted from the original article.

Objectives: To identify the reasons why so many adverse incidents are not reported.

Setting: The same two obstetric units as studied in the previous study.

Participants: Obstetricians and midwives.

Method: A questionnaire was given to 42 obstetricians and 156 midwives.
Questions concerned their knowledge of their unit's incident reporting system; whether they would report a series of 10 designated adverse obstetric incidents to the risk manager; and their views on 12 potential reasons for not reporting incidents.

Results: Most staff knew about the incident reporting system in their unit, but almost 30% did not know how to find a list of the reportable incidents. Views on the necessity of reporting the 10 designated obstetric incidents varied considerably. For example, 96% of staff stated that they would always report a maternal death, whereas less than 40% would report a baby's unexpected admission to the special care baby unit. Midwives said they were more likely to report incidents than doctors, and junior staff were more likely to report than senior staff. The main reasons for not reporting were fears that junior staff would be blamed, high workload, and the belief (even though the incident was designated as reportable) that the circumstances or outcome of a particular case did not warrant a report. Junior doctors felt less supported than senior doctors.

Conclusions: Current systems of incident reporting, while providing some valuable information, do not provide a reliable index of the rate of adverse incidents. Recommended measures to increase reliability include clearer definitions of incidents, simplified methods of reporting, designated staff to report incidents, and education, feedback, and reassurance to staff about the nature and purpose of such systems.

Karson AS, Bates DW. Screening for adverse events. J Eval Clin Pract 1999 Feb;5:23–32.

Abstract reproduced from the original article.

Adverse events (AEs) in medical patients are common, costly, and often preventable. Development of high quality improvement programmes to decrease the number and impact of AEs demands effective methods for screening for AEs on a routine basis. Here we describe the impact, types, and potential causes of AEs, and review various techniques for identifying them. We evaluate the use of generic screening criteria and combinations of these criteria. In general, the most sensitive screens were the least specific, and no small subset of screens identified a large percentage of adverse events. Combinations of screens that are limited to administrative data were the least expensive, but none was particularly sensitive, although in practice they might be effective, as routine screening is currently rarely done. As computer systems increase in sophistication, sensitivity will improve. We also discuss recent studies that suggest that programmes that screen for and identify AEs can be useful in reducing AE rates. Although tools for identifying AEs have strengths and weaknesses, they can play an important part in an organisation's quality improvement portfolios.

Malpas A, Helps SC, Sexton EJ, et al. A classification for adverse drug events. J Qual Clin Pract 1999 Mar;19:23–6.

Abstract reproduced from the original article.

There is considerable evidence that many patients suffer adverse events arising from their healthcare management. A significant proportion of these iatrogenic injuries occur as a result of medication errors. Before prevention strategies can be developed, it is necessary to understand the types of errors that are occurring. To set priorities, it is necessary to identify the frequency and impact of the various types of medication errors.
To fully investigate medication incidents, it is necessary to classify the information in a way that allows the frequencies, causes, and contributing factors to be analysed. The development of a subbranch of the "generic occurrence classification", specific to medication incidents, allows this analysis to occur.

Bates DW. Frequency, consequences and prevention of adverse drug events. J Qual Clin Pract 1999 Mar;19:13–7.

Abstract reproduced from the original article.

Iatrogenic injuries are important because they are frequent and many may be preventable; those caused by therapeutic drugs are among the most frequent. Although medication errors are common, most have little potential for harm. Some errors, however, such as giving a patient a drug to which they have a known allergy, are more likely to cause injury. Error theory provides insights into the changes required to reduce medication error injury rates. Data from the Adverse Drug Event Prevention Study suggest that most serious errors occur at the ordering and dispensing stages, while another, smaller, proportion occur at the administration stage. These data suggest that physician computer order entry, where physicians write orders on line with decision support, including patient-specific information and alerts about potential problems, has the potential to significantly reduce the number of serious medication errors.

Clark RB, Graham JD, Williamson JA. Towards system-wide strategies for reducing adverse drug events. J Qual Clin Pract 1999 Mar;19:37–40.

Abstract reproduced from the original article.

Despite the best efforts of committed healthcare workers, there are many adverse drug events (ADEs). A large proportion of ADEs arise from system factors, either directly (for example, poor equipment design) or indirectly (for example, inappropriate staff rosters). This article represents the proceedings of a workshop focus group that deliberated on priority health system issues identified as requiring action to minimise the risks of ADEs. Major issues canvassed were the gathering of appropriate and useful data about ADEs, the dissemination of information to professionals and consumers, and effective communication across groups of professionals and between professionals and consumers. Various recommendations were put forward as important first steps in addressing these issues.

Thornton PD, Simon S, Mathew TH. Towards safer drug prescribing, dispensing and administration in hospitals. J Qual Clin Pract 1999 Mar;19:41–5.

Abstract reproduced from the original article.

A multidisciplinary workshop was held to identify strategies likely to produce a reduction in adverse drug events, by targeting hospital systems involved in drug prescribing, dispensing, and administration. Strategies identified at the workshop included: (a) improving the education and practice development of medical and nursing staff in drug treatment and safe prescribing principles; (b) introducing and using information technology and electronic prescribing processes; (c) implementing the Australian Pharmaceutical Advisory Council (APAC) national guidelines to achieve the continuum of quality use of medicines between hospitals and the community; (d) enhancing the importance of medication history taking as a routine part of the admission process; (e) instituting individual patient supply as the standard method of drug distribution in hospitals; and (f) stimulating the hospital-based clinical pharmacy workforce.

Wakefield DS, Wakefield BJ, Uden-Holman T, et al.
Understanding why medication administration errors may not be reported. Am J Med Qual 1999 Mar–Apr;14:81–8.

Abstract reproduced from the original article.

Because the identification and reporting of medication administration errors (MAEs) is a non-automated and voluntary process, it is important to understand potential barriers to MAE reporting. This paper describes and analyses a survey instrument designed to assist in evaluating the relative importance of 15 different potential MAE reporting barriers. Based on the responses of over 1300 nurses and a confirmatory LISREL analysis, the 15 potential barriers are combined into 4 subscales: disagreement over error, reporting effort, fear, and administrative response. The psychometric properties of this instrument and descriptive profiles are presented. Specific suggestions for enhancing MAE reporting are discussed.

Wakefield DS, Wakefield BJ, Borders T, et al. Understanding and comparing differences in reported medication and administration error rates. Am J Med Qual 1999 Mar–Apr;14:73–80.

Abstract reproduced from the original article.

The prevention of medication administration errors (MAEs) represents a central focus of hospitals' quality improvement and risk management initiatives. Because the identification and reporting of MAEs is a non-automated and voluntary process, it is essential to understand the extent to which errors may not be reported. This study reports the results of 2 multi-hospital surveys in which over 1300 staff nurses in each survey estimated the extent to which various types of non-intravenous and intravenous-related MAEs are actually being reported in their nursing units. Overall, respondents estimated that about 60% of MAEs are actually being reported. Considerable differences in estimated rates of MAE reporting were found between staff and supervisors working on the same patient care units. A simulation based on actual and perceived rates of MAE reporting is presented to estimate the range of errors not being reported. Implications about the reliability, validity, and completeness of MAEs actually being reported are discussed.

Internet search: sentinel event alert from the Joint Commission on Accreditation of Healthcare Organisations

A search using the search engine Alta Vista was done in October 1999. The search term "sentinel event" located the material from the Joint Commission on Accreditation of Healthcare Organisations (JCAHO). This is called "Sentinel Event Alert" (wwwb.jcaho.org/edu_pub/sealert/se_alert.html) and lists bulletins from February 1998 to August 1999. The sentinel events cover a range of issues including medication errors involving potassium chloride, wrong site surgery, infant abductions, and blood transfusion errors. Each issue covers risk factors, root causes identified, suggested strategies for reducing risk, and expert recommendations.

ROSS SCRIVENER
Clinical Effectiveness Information Manager, Dynamic Quality Improvement Programme, Royal College of Nursing, 20 Cavendish Square, London W1M 0AB

## Quality in practice

The following abstracted papers encourage healthcare providers to reflect on their practice and consider alternative approaches that have quantified benefits for the patient and the organisation. The first study indicates that primary angioplasty, compared with intravenous streptokinase, results in lower mortality rates and lower re-infarction rates.
Furthermore, from a financial perspective, the angioplasty method also proved to be more cost effective than using intravenous streptokinase.

Zijlstra F, Hoorntje JCA, De Boer M-J, et al. Long-term benefit of primary angioplasty as compared with thrombolytic therapy for acute myocardial infarction. N Engl J Med 1999;341:1413–19.

Abstract reproduced from the original.

Background: As compared with thrombolytic therapy, primary coronary angioplasty results in a higher rate of patency of the infarct-related coronary artery, lower rates of stroke and re-infarction, and higher in-hospital or 30 day survival rates. However, the comparative long term efficacy of these two approaches has not been carefully studied.

Methods: A total of 395 patients with acute myocardial infarction were assigned randomly to treatment with angioplasty or intravenous streptokinase. Clinical information was collected for a mean (SD) of 5 (2) years and medical charges associated with the two treatments were compared.

Results: A total of 194 patients were assigned to undergo angioplasty and 201 to receive streptokinase. Mortality was 13% in the angioplasty group as compared with 24% in the streptokinase group (relative risk 0.54; 95% confidence interval 0.36 to 0.87). Non-fatal re-infarction occurred in 6% and 22% of the two groups, respectively (relative risk 0.27; 95% CI 0.15 to 0.52). The combined incidence of death and non-fatal re-infarction was also lower among patients assigned to angioplasty than among those assigned to streptokinase, with a relative risk of 0.13 (95% CI 0.05 to 0.37) for early events (within the first 30 days) and a relative risk of 0.62 (95% CI 0.43 to 0.91) for late events (after 30 days). The rates of readmission for heart failure and ischaemia were also lower in the angioplasty group than in the streptokinase group. In addition, total medical charges per patient were lower in the angioplasty group ($16 090) than in the streptokinase group ($16 813, p=0.05).

Conclusions: As compared with thrombolytic therapy with streptokinase, primary coronary angioplasty is associated with better clinical outcomes over five years.

## Central venous catheterisation

Central venous catheterisation is often required for the intensive medical and surgical management of patients. Complications, such as catheter-related bloodstream infection, are associated with its use. The most commonly used method for diagnosing these infections generally necessitates the removal of the catheter so that it can be cultured, and results are not available for at least 24 hours. The following study describes a more rapid, cost effective, and patient friendly alternative.

Kite P, Dobbins B, Wilcox M, et al. Rapid diagnosis of central-venous-catheter-related bloodstream infection without catheter removal. Lancet 1999;354:1504–7.

Background: Current methods for the diagnosis of bloodstream infection related to central venous catheters (CVC) are slow and in many cases require catheter removal. Because most CVCs that are removed on suspicion of causing infection prove not to be infected, removal of catheters unnecessarily exposes patients to the risks associated with reinsertion. The aim of the study was to explore a rapid diagnostic method for catheter-related bloodstream infection.

Methods: The gram stain and acridine-orange leucocyte cytospin test (AOLC) is rapid (30 minutes), inexpensive, and requires only 100 μl of catheter blood (treated with edetic acid) and the use of light and ultraviolet microscopy.
Blood was withdrawn through a CVC lumen and subjected to these tests. The results were compared with two other methods—namely, catheter removal (tip roll and tip flush) and an in situ endoluminal brush, in conjunction with quantitative peripheral blood cultures. Catheter-related bloodstream infection was defined as a positive culture by one or more of the three CVC culture techniques and validation by positive quantitative peripheral blood culture caused by the same micro-organism.

Results: 128 cases of suspected catheter-related bloodstream infection were assessed in 124 surgical patients. In 112 (88%) cases CVC blood was obtainable. Catheter-related bloodstream infection was diagnosed in 50 cases. The sensitivity of the gram stain and AOLC test was 96% and the specificity was 92%, with a positive predictive value of 91% and a negative predictive value of 97%. Comparatively, the tip roll, tip flush, and endoluminal brush methods had sensitivities of 90%, 95%, and 92% and specificities of 55%, 76%, and 98%, respectively.

Conclusions: The gram stain and AOLC test is a simple, non-invasive, and rapid method for the diagnosis of catheter-related bloodstream infection. This diagnostic method compares favourably with other diagnostic methods such as catheter removal.

## Tube feeding of patients with dementia

The issue of tube feeding patients with dementia is commonly debated, mostly from an ethical stance. The following article reviews existing clinical research and finds few positive outcomes to support its use with these types of patients. This article will stimulate debate and encourage clinicians to reconsider their practice in this area.

Finucane TE, Christmas C, Travis K. Tube feeding in patients with advanced dementia: a review of the evidence. JAMA 1999;282:1365–70.

Background: Patients with advanced dementia frequently develop eating difficulties and weight loss. Enteral feeding is intended to prevent aspiration pneumonia and malnutrition and its sequelae, including death by starvation. It is also used to provide comfort. Although tube feeding is commonly used with patients who have dementia, the benefits and risks of this treatment are unclear.

Methods: The authors searched and reviewed Medline (1966–99) to identify data about the efficacy of this treatment. Studies pertaining to patients with cancer, burns, trauma, dysphagic stroke, mechanical obstruction, critical illness, paediatrics, or ventilated patients were not considered. The focus of the review was maintained on the clinical evidence for enteral tube feeding. The data gathered were assessed on whether or not tube feeding could prevent aspiration pneumonia, prolong survival, reduce the risk of pressure sores or infection, improve function, or provide palliation.

Results: No randomised trials were found that compared tube feeding with oral feeding. The reviewed studies were retrospective or observational, or were not based on patients with dementia. As regards aspiration pneumonia, an enteral tube cannot prevent regurgitated gastric contents or oral secretions entering the lungs. No research to date suggests that tube feeding can reduce the risk of aspiration pneumonia. Tube feeding does not appear to prevent the consequences of malnutrition despite administration of adequate nutrients and calories. Furthermore, the authors could not substantiate the view that enteral feeding improves patients' survival.
In fact it was discovered that its use is associated with significant adverse effects and mortality rates. No evidence linking tube feeding to positive outcomes for pressure sores was found. Also, the delivery of nutrients via an enteral tube has not been shown to reduce infection. Instead, it has been shown to cause serious local and systemic infection. Providing an emaciated patient with artificial feeding is widely believed to improve strength, function, or self care; no clinical evidence exists to support this stance, however. There are also no published reports of patients with dementia being made more comfortable by the use of a tube. Enteral feeding denies the individual the pleasure of eating, as well as subjecting the person to the discomfort of the tube and to regular repositioning of same. In addition, patients with an enteral tube in situ may have the added imposition of a physical/chemical restraint.

Conclusions: The authors could not identify any data to support the use of tube feeding in patients with dementia. The authors reflect on possible reasoning behind this form of treatment, for example convenience and misunderstanding on the part of healthcare providers and families.

## Ethnicity, tuberculosis, and poverty

This article provides an analysis of the differences in ethnicity in relation to tuberculosis and poverty. The findings have repercussions for planners and deliverers of public health and environmental policies.

Hawker JI, Bakhshi SS, Ali S, et al. Ecological analysis of ethnic differences in relation between tuberculosis and poverty. BMJ 1999;319:1031–4.

Background: The epidemiology of tuberculosis differs considerably with ethnicity. For example, in the main, Asian people acquire new infection from infected people in the same community or when visiting the Indian subcontinent. Comparatively, white people generally have reactivation of endogenous latent infection. This distinction is reflected in the differences in the age distribution of cases and in the type of disease between ethnic groups. It cannot be assumed that social factors that affect tuberculosis incidence in one group will affect another group. This study explored the effect of ethnicity on the relation between tuberculosis and deprivation.

Methods: 1991 census data (which include demographic profiles) were used from 39 electoral wards in Birmingham, UK. A retrospective ecological approach was used to compare the incidence of tuberculosis in white and south Asian residents in these wards. During the study period (1989–93), 1516 cases of tuberculosis were notified in Birmingham. Of these, 995 (66%) were in Asians and 332 (22%) in white people. The crude annual notification rates were 153/100 000 population for Asian people and 8.8/100 000 for white people, a 17-fold difference.

Results: Univariate analysis showed significant associations of tuberculosis rates for the whole population with several indices of deprivation (p<0.01) and with the proportion of the population of south Asian origin (p<0.01). All deprivation covariates were positively associated with each other, but on multiple regression a higher level of overcrowding was independently associated with high tuberculosis rates. For the white population, overcrowding was associated with tuberculosis rates independently of other variables (p=0.0036). No relation with deprivation was found for the south Asian population in either single-variable or multivariate analyses.
Conclusions: Social factors such as poverty, which influence the likelihood of developing (predominantly reactivated) tuberculosis in white people, are likely to be different from those influencing the risk of contracting (predominantly new infection) tuberculosis in the Asian population. Potential interventions such as public health measures need to reflect these differences.

RÓISÍN GALLINAGH
Lecturer/Practitioner in Nursing, Whiteabbey Hospital and University of Ulster, Jordanstown, Co Antrim, UK
Safe jet vetoes

@article{Pascoli2018SafeJV,
  title   = {Safe jet vetoes},
  author  = {Silvia Pascoli and Richard Ruiz and C{\'e}dric Weiland},
  journal = {Physics Letters B},
  year    = {2018}
}

Published 23 May 2018 · Physics · Physics Letters B

23 Citations
# Variance of an Unbiased Estimator for $\sigma^2$

Let $$X_1, X_2,...X_n\sim N(0,\sigma^2)$$ independently. Define

$$Q=\frac{1}{2(n-1)}\sum_{i=1}^{n-1}(X_{i+1}-X_i)^2$$

I already proved that this Q is an unbiased estimator of $$\sigma^2$$. Now I'm stuck with calculating its variance. I've tried using the chi-square distribution, but then I realized these $$(X_{i+1}-X_i)$$ are not independent. Can you guys help me with this? Many thanks in advance.

The easiest way to do this problem is by using vector algebra, re-expressing the estimator as a quadratic form in vector notation:

$$Q = \frac{1}{2(n-1)} \mathbf{X}^\text{T} \mathbf{\Delta} \mathbf{X} \quad \quad \mathbf{\Delta} \equiv \begin{bmatrix} 1 & -1 & 0 & 0 & \cdots & 0 & 0 & 0 & 0 \\ -1 & 2 & -1 & 0 & \cdots & 0 & 0 & 0 & 0 \\ 0 & -1 & 2 & -1 & \cdots & 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 2 & \cdots & 0 & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 2 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & \cdots & -1 & 2 & -1 & 0 \\ 0 & 0 & 0 & 0 & \cdots & 0 & -1 & 2 & -1 \\ 0 & 0 & 0 & 0 & \cdots & 0 & 0 & -1 & 1 \\ \end{bmatrix}.$$

Since $$\mathbf{X} \sim \text{N}(\boldsymbol{0},\sigma^2 \boldsymbol{I})$$ we can compute the expected value of the quadratic form to confirm that the estimator is unbiased:

$$\begin{aligned} \mathbb{E}(Q) &= \frac{1}{2(n-1)} \cdot \mathbb{E}(\mathbf{X}^\text{T} \mathbf{\Delta} \mathbf{X}) \\[6pt] &= \frac{\sigma^2}{2(n-1)} \cdot \text{tr}(\mathbf{\Delta} \boldsymbol{I}) \\[6pt] &= \frac{\sigma^2}{2(n-1)} \cdot (1 + 2 + 2 + \cdots + 2 + 2 + 1) \\[6pt] &= \frac{\sigma^2}{2(n-1)} \cdot (2n-2) \\[6pt] &= \frac{2(n-1)}{2(n-1)} \cdot \sigma^2 = \sigma^2. \end{aligned}$$

This confirms that the estimator is unbiased. Now, to get the variance of the estimator we can use the formula for the variance of a quadratic form of a joint normal random vector, which for symmetric $$\mathbf{A}$$ and $$\mathbf{X} \sim \text{N}(\boldsymbol{0},\sigma^2 \boldsymbol{I})$$ is $$\mathbb{V}(\mathbf{X}^\text{T} \mathbf{A} \mathbf{X}) = 2\sigma^4 \, \text{tr}(\mathbf{A}^2)$$. This gives:

$$\begin{aligned} \mathbb{V}(Q) &= \frac{1}{4(n-1)^2} \cdot \mathbb{V}(\mathbf{X}^\text{T} \mathbf{\Delta} \mathbf{X}) \\[6pt] &= \frac{\sigma^4}{2(n-1)^2} \cdot \text{tr}(\mathbf{\Delta} \boldsymbol{I} \mathbf{\Delta} \boldsymbol{I}) \\[6pt] &= \frac{\sigma^4}{2(n-1)^2} \cdot \text{tr}(\mathbf{\Delta}^2) \\[6pt] &= \frac{\sigma^4}{2(n-1)^2} \cdot (2 + 6 + 6 + \cdots + 6 + 6 + 2) \\[6pt] &= \frac{\sigma^4}{2(n-1)^2} \cdot (6n - 8) \\[6pt] &= \frac{3n-4}{(n-1)^2} \cdot \sigma^4 \\[6pt] &= \frac{3n-4}{2n-2} \cdot \frac{2}{n-1} \cdot \sigma^4. \end{aligned}$$

This gives us an expression for the variance of the estimator. (I have framed it in this form to compare it to an alternative estimator below.) As $$n \rightarrow \infty$$ we have $$\mathbb{V}(Q) \rightarrow 0$$, so it is a consistent estimator.

It is worth contrasting the variance of this estimator with the variance of the sample variance estimator (see e.g., O'Neill 2014, Result 3), which is:

$$\mathbb{V}(S^2) = \frac{2}{n-1} \cdot \sigma^4.$$

Comparing these results, we see that the estimators have the same variance when $$n=2$$, and when $$n>2$$ the sample variance estimator has lower variance than the present estimator. In other words, the sample variance is a more efficient estimator than the present estimator.

• Thank you Ben, I had a look at your previous answer and tried to proceed but apparently the expectation of cross products was too tedious. I like this matrix approach. – diidoobiib Feb 5 '19 at 3:32
• I have 1 question: the combination of univariate random normal variables is not necessarily multivariate normal.
How can we make sure $\textbf{X}$ is multivariate normal? – diidoobiib Feb 5 '19 at 3:35 • I am assuming in your initial specification of $X_1,...,X_n$ that they are jointly independent. If that is correct, then combined with the marginal normal distribution, they are jointly normal. – Ben Feb 5 '19 at 3:42 • Thanks a lot. I've learned new things from your answer, especially the variance of quadratic form. – diidoobiib Feb 5 '19 at 3:53
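A quick Monte Carlo run makes both formulas concrete. Here is a minimal sketch in R; the sample size, σ, and replication count are arbitrary illustration choices, not values from the thread:

```r
# Monte Carlo check of E(Q) = sigma^2 and V(Q) = (3n - 4)/(n - 1)^2 * sigma^4
set.seed(1)
n     <- 10    # sample size (arbitrary choice for illustration)
sigma <- 2     # true standard deviation (arbitrary choice)
reps  <- 1e5   # number of simulated samples

Q <- replicate(reps, {
  x <- rnorm(n, mean = 0, sd = sigma)
  sum(diff(x)^2) / (2 * (n - 1))   # the estimator Q
})

mean(Q)                            # should be near sigma^2 = 4
var(Q)                             # should be near the theoretical value
(3 * n - 4) / (n - 1)^2 * sigma^4  # theoretical variance: 416/81 ~ 5.136
```

For n = 10 the simulated variance should also come out visibly larger than the sample-variance benchmark 2/(n-1)·σ⁴ ≈ 3.56, matching the efficiency comparison above.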
{}
Atmos. Meas. Tech., 12, 6737–6748, 2019 https://doi.org/10.5194/amt-12-6737-2019 Research article | 19 Dec 2019

# Evaluating the measurement interference of wet rotating-denuder–ion chromatography in measuring atmospheric HONO in a highly polluted area

Zheng Xu1,2, Yuliang Liu1,2, Wei Nie1,2, Peng Sun1,2, Xuguang Chi1,2, and Aijun Ding1,2
• 1Joint International Research Laboratory of Atmospheric and Earth System Sciences, School of Atmospheric Sciences, Nanjing University, Nanjing, Jiangsu Province, China
• 2Collaborative Innovation Center of Climate Change, Nanjing, Jiangsu Province, China
Correspondence: Wei Nie (niewei@nju.edu.cn)

Abstract Due to the important contribution of nitrous acid (HONO) to OH radicals in the atmosphere, various technologies have been developed to measure HONO. Among them, wet-denuder–ion chromatography (WD/IC) is a widely used measurement method. Here, we report interferences with WD/IC HONO measurements based on a comparison study of concurrent observations of HONO concentrations using a WD/IC instrument (Monitor for AeRosols and Gases in ambient Air, MARGA) and a long-path absorption photometer (LOPAP) at the Station for Observing Regional Processes of the Earth System (SORPES) in eastern China. The measurement deviation of the HONO concentration with the MARGA instrument, as a typical WD/IC instrument, is affected by two factors. One is the change in denuder pH driven by acidic and alkaline gases in the ambient atmosphere, which can lower the absorption efficiency of HONO by the wet denuder and lead to an underestimation of the HONO concentration by up to 200 % at the lowest pH. The other is the reaction of NO2 oxidizing SO2 to form HONO in the denuder solution, which leads to an overestimation of the HONO concentration of up to 400 % in denuder solutions with the highest pH values due to ambient NH3. These processes are particularly important in polluted east China, which suffers from high concentrations of SO2, NH3, and NO2. The overestimation induced by the reaction of NO2 and SO2 is expected to grow in importance as the denuder pH potentially increases with the decrease in SO2. We further established a method to correct the HONO data measured by a WD/IC instrument such as the MARGA. Given that a large number of WD/IC-based instruments are deployed with the main target of monitoring the water-soluble composition of PM2.5, our study can help to build a long-term, multi-site database of HONO to assess the role of HONO in atmospheric chemistry and air pollution in east China.

1 Introduction

Since the first detection of nitrous acid (HONO) in the atmosphere in 1979 (Perner and Platt, 1979), HONO has attracted much attention due to its important contribution to OH radicals, which are the primary oxidants in the atmosphere (Kleffmann, 2007). It has been realized that the photolysis of HONO is the most significant OH source in the morning, when other OH sources, such as the photolysis of O3 and formaldehyde, are still weak (Kleffmann, 2007; Platt et al., 1980).
In addition, unexpectedly high HONO concentrations have been observed in the daytime and are believed to constitute a major OH source even during the daytime (Kleffmann et al., 2005; Michoud et al., 2014; Sörgel et al., 2011). Currently, the source of daytime HONO is still a challenging topic under discussion. Because of the important role of HONO in atmospheric chemistry and the knowledge gap with regard to its sources, various techniques have been developed to detect the HONO concentration in ambient air or in a smog chamber. Generally, online HONO analyzers can be divided into chemical methods and optical methods. The chemical methods include wet-denuder–ion chromatography (WD/IC) (Acker et al., 2004; Febo et al., 1993), long-path absorption photometer (LOPAP) analysis (Heland et al., 2001), chemical ionization mass spectrometry (CIMS) (Lelièvre et al., 2004), and the stripping coil-ion chromatograph (Cheng et al., 2013; Xue et al., 2019). The optical methods include differential optical absorption spectroscopy (DOAS) (Perner and Platt, 1979) and incoherent broadband cavity-enhanced absorption spectroscopy (IBBCEAS) (Wu et al., 2014). WD/IC is a widely used measurement method due to its simple design, low price, and high sensitivity (Zhou, 2013). In a WD/IC instrument, HONO is absorbed by the denuder solution, converted into nitrite, and then quantified by ion chromatography. The ambient HONO concentration is calculated from the nitrite concentration and the volumes of the sampled air and absorption solution. Using this method, a large number of studies have investigated the variation in HONO and its sources in the atmosphere. For example, Trebs et al. (2004) reported the diurnal variation of HONO in the Amazon Basin and found relatively high HONO concentrations during the daytime. Using WD/IC, Su et al. (2008a, b) reported the variation characteristics of HONO in the Pearl River Delta and found that the heterogeneous reaction of NO2 on the ground surface is a major HONO source. Nie et al. (2015) evaluated the enhanced HONO formation from biomass burning plumes observed in June, the intense biomass burning season in the Yangtze River Delta (YRD) in China. Makkonen et al. (2014) reported a 1-year HONO variation pattern at the Station for Measuring Ecosystem-Atmosphere Relations (SMEAR) III, a forest station in Finland. Recently, abundant ambient particulate nitrite levels were also measured by WD/IC (VandenBoer et al., 2014). However, many studies have found that HONO sampling procedures may introduce unintended artifacts, because NO2 and other atmospheric components can undergo a series of chemical reactions in the sampling tube that generate HONO (Heland et al., 2001; Kleffmann and Wiesen, 2008). For example, NO2 reacts heterogeneously with H2O on the sampling tube wall to produce HONO (Heland et al., 2001; Zhou et al., 2002). This interference may be related to the length of the sampling tube and the relative humidity of the atmosphere (Su et al., 2008a). Recent studies have shown that NO2 reacts with atmospheric aerosol components such as black carbon, sand, and hydrocarbons under certain conditions to produce HONO (Gutzwiller et al., 2002; Monge et al., 2010; Nie et al., 2015; Su et al., 2008a). Therefore, in the presence of high aerosol concentrations, this reaction may enhance the measurement interference of the WD/IC system.
In addition, when an alkaline solution, such as sodium carbonate, is used as the wet-denuder absorbing liquid for sampling HONO, artifact nitrous acid is produced by the reaction between NO2 and SO2 (Jongejan et al., 1997; Spindler et al., 2003). Spindler et al. (2003) quantified the artifact HONO on an alkaline K2CO3 surface with a pH of 9.7 in laboratory experiments. However, their results were only applicable to a concentrated alkaline stripping solution (1 mM K2CO3), which limits their use for quantifying the artifact HONO in other WD/IC instruments with different denuder solutions (Spindler et al., 2003; Su et al., 2008a). As an alternative, Genfa et al. (2003) used H2O2 as the absorption liquid to absorb HONO. Since H2O2 rapidly oxidizes the produced S(IV) to sulfate and interrupts the reaction pathway of NO2 and SO2 that forms $\mathrm{NO}_2^-$, this eliminates the measurement error. Although much effort has been devoted to the interferences of WD/IC, an intercomparison between WD/IC and a technique with less interference is still needed in field observations (Zhou, 2013). Only limited studies have been conducted on field comparisons between WD/IC and other reliable techniques. Moreover, the performance of WD/IC for HONO measurement differs considerably under different environmental conditions. For example, Acker et al. (2006) showed a good correlation between WD/IC and coil sampling/high-performance liquid chromatography (HPLC) during a HONO intercomparison campaign in an urban area of Rome (r2=0.81, slope = 0.83). Su et al. (2008b) found that WD/IC, on average, overestimated the HONO concentration by a factor of 1.2 compared to the LOPAP measurement. However, when the same system was used for comparative observations in Beijing, the HONO concentration from the WD/IC measurement was overestimated by approximately a factor of 2 (Lu et al., 2010). This also indicates that the performance of WD/IC in the measurement of HONO is environmentally dependent. To address the complex atmospheric pollution problem in eastern China, a large number of two-channel WD/IC instruments, represented by the Monitor for AeRosols and Gases in ambient Air (MARGA), have been widely deployed to obtain aerosol composition information as well as acid trace gas levels, including HONO (Stieger et al., 2018). These datasets will greatly improve the understanding of air pollution in China. However, the application of the HONO data has been limited by measurement uncertainty. Therefore, the major purpose of this study is to evaluate the measurement uncertainty of WD/IC and increase the reliability of the HONO database obtained by MARGA or similar instruments. For this purpose, a MARGA and a more accurate instrument (LOPAP) were used to simultaneously measure the HONO concentration at the Station for Observing Regional Processes of the Earth System (SORPES) in the YRD of east China. We evaluated the performance of the WD/IC instrument in measuring HONO concentrations and analyzed the sources of measurement interference based on the atmospheric composition data from SORPES. Based on the understanding of the interference factors, a correction function is given to correct the HONO data measured by MARGA.

2 Experiment

## 2.1 Observation site

The field-intensive campaign was conducted from December 2015 to January 2016 at SORPES on the Xianlin campus of Nanjing University.
SORPES is a regional background site located on top of a hill (32°07′14″ N, 118°57′10″ E; 40 m a.s.l.) in an eastern suburb approximately 20 km from downtown Nanjing. The station is an ideal receptor for air masses from the YRD, with little influence from local emissions and urban pollution from Nanjing. Detailed information about SORPES can be found in Ding et al. (2016).

## 2.2 Instrumentation

The fine particulate matter (PM2.5) and trace gas (SO2, O3, NOx, and NOy) levels were measured by a set of Thermo Fisher analyzers (TEI 5030i, 43i, 49i, 42i, and 42iy). The molybdenum oxide converter of the NOx analyzer was replaced by a blue-light converter to avoid NO2 measurement interference (Xu et al., 2013). The water-soluble ions of PM2.5 were determined by MARGA. For details on these instruments, please refer to Ding et al. (2016). The following section focuses on the measurement of HONO. The WD/IC instrument for the HONO measurement used in our study was a MARGA (Metrohm, Switzerland) (Xie et al., 2015). The MARGA was located on the top floor of the laboratory building with a sampling inlet of 3 m. The sampling system of the MARGA instrument comprised two parts: a wet rotating denuder for gases and a steam jet aerosol collector (SJAC) for aerosols, which operated at an air flow rate of 1 m3 h−1. The trace gases, including SO2, NH3, HONO, HCl, and HNO3, were absorbed by the H2O2 denuder solution with a concentration of 1 mM. Subsequently, the ambient particles were collected in the SJAC. Hourly samples were collected in syringes and analyzed with a Metrohm cation and anion chromatograph using an internal standard (LiBr). In our experiments, the flow rate of the absorption solution was 25 mL h−1. For intercomparison, HONO was also observed by a LOPAP (QUMA, Germany) with a 1–2 cm sample inlet before the sample box. The ambient air was sampled in two similar temperature-controlled stripping coils in series using a mixed reagent of 100 g sulfanilamide and 1 L HCl (37 % volume fraction) in 9 L pure water. In the first stripping coil, almost all of the HONO and a fraction of the interfering substances were absorbed in the solution, named R1. In the second stripping coil, the remaining HONO and most of the interfering species were absorbed in the solution, named R2. After adding a reagent of 0.8 g N-naphthylethylenediamine dihydrochloride in 9 L pure water to both coils, a colored azo dye formed in the solutions R1 and R2, which was then detected separately via long-path absorption in special Teflon tubing. The interference-free HONO signal was the difference between the signals of the two channels. This method is regarded as essentially interference-free for HONO measurement.

3 Results and discussion

## 3.1 Performance of MARGA for measuring atmospheric HONO

During the observation period, the HONO concentration measured by LOPAP (HONOlopap) varied from 0.01 to 4.8 ppbv with an average value of 1.1±0.77 ppbv, and the HONO concentration measured by the MARGA instrument (HONOmarga) was 0.01–9.6 ppbv, with an average value of 1.52±1.21 ppbv. The comparison between HONOlopap and HONOmarga is shown in Fig. 1. The ratio of HONOmarga to HONOlopap varied from 0.25 to 5, but HONOmarga was higher than HONOlopap during most of the observation period (> 70 %). The average diurnal variations in HONOmarga and HONOlopap are shown in Fig. 1b; HONOmarga/HONOlopap ratios were higher at night and especially in the morning, which differs from the results of Muller et al.
(1999), who found that a remarkable overestimation of HONO by WD/IC usually occurred during the daytime. Meanwhile, the correlation between the HONO concentrations measured by WD/IC and by other techniques varied among studies. The slope of HONOlopap to HONOmarga measured in this study was approximately 0.57 (with a correlation coefficient of r2=0.3), which lies within the large range of 0.32–0.87 reported by the limited comparison investigations of HONO measurements using a WD/IC instrument and LOPAP at four sampling sites, including SORPES (a suburban site in the YRD; this work), YUFA (a rural site in southern Beijing), PKU (an urban site in Beijing; Lu et al., 2010), and Easter Bush (a grassland site south of Edinburgh; Ramsay et al., 2018). Such large variation in the slopes at the different sampling sites may indicate that the performance of WD/IC in the measurement of HONO is environmentally dependent.

Figure 1. The time series of HONO concentrations measured by the LOPAP (HONOlopap) and MARGA instruments (HONOmarga) and the deviation of HONOmarga, including ΔHONO (HONOmarga−HONOlopap) and ΔHONO/HONO, with regard to the benchmark of HONO (a); the average diurnal variations (b); and their scatterplot during the observation period (c).

Here, the relationship between the measurement deviations and atmospheric composition, including aerosols and major trace gases, during the observation was further analyzed, as shown in Fig. 2. As the major precursor of HONO, NO2 may introduce artificial HONO through heterogeneous reactions on the sampling tube or on aerosol (Kleffmann et al., 2006; Gutzwiller et al., 2002; Liu et al., 2014; Xu et al., 2015). In our study, the correlation between the deviations of HONOmarga and NO2 or PM2.5 was weak, indicating that the hydrolysis of NO2 on the tube surface or in PM2.5 is not the major contributor to the measurement deviation of HONO. However, the measurement deviation was notably affected by ambient SO2 (Fig. 2c) and NH3 (Fig. 2d). Compared to HONOlopap, HONOmarga was significantly higher at high concentrations of SO2 and showed the opposite trend at high concentrations of ammonia. A reasonable explanation is that SO2 and NH3, as the main acidic and alkaline gases in the atmosphere, are absorbed by the denuder solution in the process of sampling HONO. This process affects the pH of the denuder solution and thereby changes the absorption efficiency of HONO (Zellweger et al., 1999). In a real atmosphere, ambient SO2 is rapidly oxidized to sulfuric acid by H2O2 in the denuder solution (Kunen et al., 1983), thereby lowering the pH. Similarly, ammonia is hydrolyzed to $\mathrm{NH}_4^+$ and OH−, which increases the pH of the denuder solution. The variation in the pH of the denuder solution caused by the atmospheric composition, especially under high SO2 concentrations, ultimately affects the absorption efficiency of HONO by the denuder.

Figure 2. The colored scatterplots of HONOmarga and HONOlopap for NO2, PM2.5, SO2, and NH3.

## 3.2 The influence of the denuder pH on HONO measurement by MARGA

According to previous studies by Zellweger et al. (1999), the absorption efficiency of the denuder for HONO is mainly affected by the pH of the denuder solution, the flow rate of the absorbing liquid, the gas flow rate, and the effective Henry coefficient of HONO, as shown by Eqs. (1) and (2):
$$\varepsilon = \frac{f_\mathrm{a}}{f_\mathrm{g}/H_\mathrm{eff} + f_\mathrm{a}}, \qquad (1)$$

$$H_\mathrm{eff} = H\left(1 + \frac{K_a}{[\mathrm{H}^+]}\right), \qquad (2)$$

where H is the Henry constant of HONO, Heff is the effective Henry constant, Ka is the dissociation constant, and fa and fg are the flow rates (mL min−1) of the aqueous and gaseous phases, respectively. The absorption efficiency (ε) of the MARGA instrument for HONO calculated according to Eqs. (1) and (2) is shown in Fig. 3a. The absorption efficiency is sensitive to the pH of the denuder solution. Therefore, estimating the pH of the denuder solution was the first step and the key issue in evaluating the measurement deviation of HONO by WD/IC.

Figure 3. The absorption efficiency of HONO by the denuder at different pH values (a) and denuder absorption solution pH values in 13 denuder solution samples (b). pH_a was calculated with Curtipot from the $\mathrm{NH}_4^+$, $\mathrm{SO}_4^{2-}$, $\mathrm{NO}_3^-$, and $\mathrm{NO}_2^-$ ions measured by IC. pH_b was calculated from the above ions plus carbonic acid. pH_c is the value measured by a pH detector.

Here, we attempted to use the ion concentrations of the denuder solution ($\mathrm{SO}_4^{2-}$, $\mathrm{NO}_3^-$, $\mathrm{NO}_2^-$, Cl−, Mg2+, Ca2+, $\mathrm{NH}_4^+$, Na+, and K+) measured by MARGA to inversely derive the pH of the denuder solution. The calculation of the pH was conducted with Curtipot, a simple software program that provides fast pH calculations for any aqueous solution of acids, bases, and salts, including buffers and zwitterionic amino acids, from single components to complex mixtures (http://www.iq.usp.br/gutz/Curtipot_.html, last access: 28 November 2019). As input to the model, the $\mathrm{SO}_4^{2-}$, $\mathrm{NO}_3^-$, $\mathrm{NO}_2^-$, and $\mathrm{NH}_4^+$ ions, which accounted for more than 98 % of the total ions, were used. To verify the reliability of the calculation, a pH detector (Metrohm, 826 pH) was used to measure the pH of the denuder solution, which was collected in a clean glass bottle when the denuder solution was injected into the IC instrument. To adjust the pH of the denuder solution, SO2 at concentrations of 0, 5, 10, 20, 40, 80, and 100 ppbv was injected into the sampling line with the NH3 concentration around 10–15 ppbv. During the test, 13 samples were collected, and the pH results are shown in Fig. 3b. When the pH value was lower than 5.6, the calculated pH (pH_a) was close to the measured value (pH_c), but when the value was higher than 7, pH_a was notably higher than pH_c. These results are attributed to the buffering effect of carbonic acid in the denuder solution, which was exposed to the atmosphere. When the equilibrium between CO2 and carbonic acid in the denuder solution was reached, a carbonic acid buffer with a pH of 5.6 formed in the denuder solution with a dissolved CO2 concentration of $1.24\times10^{-5}$ M (Seinfeld and Pandis, 2016; Stieger et al., 2018).
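Before moving on, the sensitivity in Eqs. (1)–(2) can be made concrete with a minimal sketch in R. The HONO constants below are assumed textbook values (H ≈ 50 M atm−1, converted to dimensionless form via RT, and Ka ≈ 5.6×10−4 M, pKa ≈ 3.25), not constants quoted by the paper, so the curve should be read qualitatively; the flow rates follow the MARGA settings given in Sect. 2.2.

```r
# Sketch of the HONO absorption efficiency, Eqs. (1)-(2).
# Assumed constants (textbook values, NOT from the paper):
#   H  ~ 50 M atm^-1 for HONO, made dimensionless via H * R * T
#   Ka ~ 5.6e-4 M (pKa ~ 3.25)
heff <- function(pH, H = 50 * 0.082 * 298, Ka = 5.6e-4) {
  H * (1 + Ka / 10^(-pH))            # effective Henry constant, Eq. (2)
}
epsilon <- function(pH, fa = 25 / 60, fg = 16670) {
  # fa: liquid flow (25 mL/h in mL/min); fg: air flow (16.67 L/min in mL/min)
  fa / (fg / heff(pH) + fa)          # absorption efficiency, Eq. (1)
}
round(epsilon(4:7), 3)   # efficiency rises steeply with pH, cf. Fig. 3a
```

With these assumed constants the efficiency climbs from roughly 0.17 at pH 4 to above 0.99 at pH 7, qualitatively reproducing the near-complete absorption expected for the unperturbed 1 mM H2O2 solution.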
Additionally, when the $\mathrm{NH}_4^+$ concentration was higher than the total anion concentration in the denuder solution ($\mathrm{SO}_4^{2-}$, $\mathrm{NO}_3^-$, $\mathrm{NO}_2^-$, and Cl−), more CO2 would dissolve in the denuder solution, and the excess dissolved CO2 could be set equal to the excess $\mathrm{NH}_4^+$. After including the carbonic acid buffer and the excess CO2, the calculated pH values, denoted pH_b, were in good agreement with the actual measurement results (pH_c), which confirmed the ability of Curtipot to calculate the pH of the denuder solution. Therefore, the pH of the denuder solution during the observation period was calculated by the above method.

Figure 4. Variation in the ratio of HONOlopap to HONOmarga with the denuder absorption solution pH. The red line is the multiplicative inverse of the HONO absorption efficiency of MARGA.

Under ideal conditions, the pH of the denuder absorption solution in MARGA (1 mM H2O2) is approximately 6.97, and the absorption efficiency of MARGA for HONO should then be 98 % or higher. However, during the observation period, the calculated pH of the denuder solution varied from 4 to 7 due to ambient SO2 and NH3 (Fig. 4). Therefore, HONOmarga was underestimated because of the low absorption efficiency caused by low pH; in other words, the HONOlopap/HONOmarga ratio increases with decreasing pH. Assuming that the measurement deviation of HONOmarga were impacted only by the collection efficiency, the HONOlopap/HONOmarga ratio should be 1/ε (or HONOmarga/HONOlopap = ε). However, most of the observed HONOlopap/HONOmarga ratios were lower than 1/ε (Fig. 4), indicating that HONOmarga was still overestimated even after the deviation caused by the variation in the denuder pH was corrected.

## 3.3 The artifact HONO due to NO2 oxidizing SO2

To further analyze the MARGA measurement deviation of HONO, we first eliminated the influence of the denuder absorption efficiency on the measurement deviation according to the correction formula below:

$$\mathrm{MARGA}_{\mathrm{int.}} = \mathrm{HONO}_{\mathrm{marga}} - \mathrm{HONO}_{\mathrm{lopap}}\cdot\varepsilon(\mathrm{pH}), \qquad (3)$$

where MARGAint. is the additional HONO produced during the sampling process. In previous studies, the HONO interference in the denuder solution mainly came from the NO2 hydrolysis reaction and the reaction between NO2 and SO2 (Febo et al., 1993; Spindler et al., 2003). In the study by Spindler et al. (2003), approximately 0.058 % of NO2 was hydrolyzed to HONO, which indicates that the NO2 hydrolysis reaction contributed little to the artificial HONO in the denuder solution. In this study, a similar ratio (0.060 %) was found at low pH (< 4.5), when the level of artifact $\mathrm{NO}_2^-$ from the reaction between NO2 and SO2 was low (this part is discussed in the section below). However, the oxidation of SO2 by NO2 may have contributed to MARGAint. in basic or slightly acidic denuder solutions (Jongejan et al., 1997; Spindler et al., 2003; Xue et al., 2019). In this study, the correlation between MARGAint. and SO2⋅NO2 is shown in Fig. 5b. In contrast to the study by Spindler et al. (2003), where the correlation with SO2⋅NO2 was linear in an alkaline solution, the relationship between MARGAint. and SO2⋅NO2 was dependent on the pH of the denuder solution.
The generation rate of HONO from SO2⋅NO2 was low when the pH was < 5 but increased significantly with pH. This discrepancy with the study of Spindler et al. (2003) should be due to the additional H2O2 in MARGA's denuder solution competitively oxidizing SO2.

Figure 5. The scatterplots between MARGAint. and NO2 (a) and SO2⋅NO2 (b). The points are colored as a function of the denuder pH.

The competition between SO2 oxidation by H2O2 and by NO2 in the atmosphere has been well studied (Hoffmann and Calvert, 1985; Seinfeld and Pandis, 2016). Due to the presence of H2O2 in the denuder solution, a similar competitive oxidation of SO2 also occurs there. First, ambient SO2 undergoes a hydrolysis reaction when it is absorbed by the denuder solution, as shown in Reaction (R1), and the fractions of the three components ($\alpha_{\mathrm{H_2SO_3}}$, $\alpha_{\mathrm{HSO}_3^-}$, and $\alpha_{\mathrm{SO}_3^{2-}}$) are determined by the denuder pH (Fig. 6a). After that, $\mathrm{HSO}_3^-$ and $\mathrm{SO}_3^{2-}$ are simultaneously oxidized by H2O2 and NO2 (Seinfeld and Pandis, 2016; Cheng et al., 2016):

$$\mathrm{SO}_2 + \mathrm{H}_2\mathrm{O} \to \mathrm{H}_2\mathrm{SO}_3 + \mathrm{HSO}_3^- + \mathrm{SO}_3^{2-}, \qquad \mathrm{(R1)}$$

$$\mathrm{HSO}_3^- + \mathrm{H}_2\mathrm{O}_2 \to \mathrm{SO}_4^{2-} + \mathrm{H}^+ + \mathrm{H}_2\mathrm{O}, \qquad \mathrm{(R2)}$$

$$\mathrm{HSO}_3^- + \mathrm{NO}_2 \to \mathrm{NO}_2^- + \mathrm{SO}_3^- + \mathrm{H}^+, \qquad \mathrm{(R3)}$$

$$\mathrm{SO}_3^{2-} + \mathrm{NO}_2 \to \mathrm{NO}_2^- + \mathrm{SO}_3^-. \qquad \mathrm{(R4)}$$

Here, the reaction ratios of SO2 oxidized by H2O2 ($P_{\mathrm{H_2O_2}\cdot\mathrm{S}}$) and by NO2 ($P_{\mathrm{NO}_2\cdot\mathrm{S}}$) in the denuder solution are shown in Fig. 6b. The concentration of H2O2 ([H2O2(aq)]) is 1 mM; the concentrations of ambient NO2 ([NO2]) and SO2 ([SO2]) were assumed to be 1 ppbv. Because of the low solubility of NO2, the aqueous NO2 ([NO2(aq)]) in the denuder solution is in Henry's-law equilibrium with [NO2]:

$$[\mathrm{NO}_2(\mathrm{aq})] = [\mathrm{NO}_2]\cdot H_{\mathrm{NO}_2}. \qquad (4)$$

In contrast to the gas–aqueous equilibrium of SO2(g) and S(IV)(aq) in ambient air or in clouds, almost all of the SO2 was absorbed by the denuder solution of 1 mM H2O2 (Rosman et al., 2001; Rumsey et al., 2014); therefore, the concentration of S(IV) ($\mathrm{HSO}_3^-$, $\mathrm{SO}_3^{2-}$, H2SO3) in the denuder solution was determined by [SO2], the sampling flow, and the flow of the denuder liquid. For example, [S(IV)] should be $8.34\times10^{-6}$ M for an air flow of 16.67 L min−1, a liquid flow of 0.08 mL min−1, and 1 ppb SO2. Thereby, [$\mathrm{HSO}_3^-$(aq)] and [$\mathrm{SO}_3^{2-}$(aq)] at the beginning of the oxidation reaction by H2O2 or NO2 were determined by the pH and [S(IV)]:
$$[\mathrm{S(IV)}] = [\mathrm{SO}_2]\cdot f_\mathrm{g}/f_\mathrm{a}, \qquad (5)$$

$$[\mathrm{HSO}_3^-(\mathrm{aq})] = [\mathrm{S(IV)}]\cdot \alpha_{\mathrm{HSO}_3^-}, \qquad (6)$$

$$[\mathrm{SO}_3^{2-}(\mathrm{aq})] = [\mathrm{S(IV)}]\cdot \alpha_{\mathrm{SO}_3^{2-}}. \qquad (7)$$

The result is shown in Fig. 6. At lower pH, more $\mathrm{HSO}_3^-$ is present in the solution, and the oxidation of SO2 in the solution is mainly due to H2O2. With increasing pH, the $\mathrm{HSO}_3^-$ concentration of the solution decreases while the $\mathrm{SO}_3^{2-}$ concentration increases. The role of NO2 in the oxidation of SO2 gradually increases, and the ratio $P_{\mathrm{NO}_2\cdot\mathrm{S}}/P_{\mathrm{S(IV)}}$ rises rapidly, reaching nearly 100 % at a pH of 8, which indicates that almost all SO2 is oxidized by NO2 at that point.

Figure 6. The fractions of S(IV) species ($\alpha_{\mathrm{HSO}_3^-}$, $\alpha_{\mathrm{SO}_3^{2-}}$, and $\alpha_{\mathrm{H_2SO_3}}$) as a function of pH (a), and the formation rate of aqueous-phase oxidation of S(IV) by H2O2 and NO2 as a function of pH for [SO2] = 1 ppb, [NO2] = 1 ppb, and [H2O2] in the denuder solution = 1 mM; $P_{\mathrm{H_2O_2}\cdot\mathrm{S}}$ and $P_{\mathrm{NO}_2\cdot\mathrm{S}}$ are the oxidation ratios of S(IV) by H2O2 and NO2, respectively (b).

Figure 7. (a) Variation in the production rate of the artifact HONO from 1 ppbv SO2 and 1 ppbv NO2 with the denuder pH, and (b) the variation in MARGAint./(SO2⋅NO2) with the pH of the denuder solution (circles) and the calculated $P_{\mathrm{NO}_2\cdot\mathrm{S}}/P_{\mathrm{S(IV)}}$ for different pH values according to ambient NO2 (black squares).

Now, the question is whether the observed MARGAint. can be explained by the reaction between SO2 and NO2. This can be answered by comparing $\mathrm{MARGA}_{\mathrm{int.}}/(\mathrm{SO}_2\cdot\mathrm{NO}_2)$ with $P_{\mathrm{NO}_2\cdot\mathrm{S}}/P_{\mathrm{S(IV)}}$, because $\mathrm{NO}_2^-$ is formed in the denuder solution only when SO2 is oxidized by NO2. Here, the correlation between the MARGAint. production rate and SO2, NO2, and the denuder pH is also shown in Fig. 7a. $\mathrm{MARGA}_{\mathrm{int.}}/(\mathrm{SO}_2\cdot\mathrm{NO}_2)$ was in good agreement with the theoretically calculated $P_{\mathrm{NO}_2\cdot\mathrm{S}}/P_{\mathrm{S(IV)}}$, confirming that the chemical reaction between SO2 and NO2 did lead to additional HONO production, which in turn resulted in the MARGA overestimation of the HONO measurements. Additionally, at an NO2 concentration of 1 ppb and a denuder pH range of 4 to 7, only approximately 10 % of the SO2 was oxidized by NO2, which indicates that MARGAint. was low. However, during our observation, there was up to 50 ppbv of NO2; under these conditions, the oxidation of SO2 by NO2 was greatly elevated.
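The speciation fractions entering Eqs. (6)–(7) follow from the two dissociation steps of dissolved SO2. A minimal sketch in R, assuming the standard dissociation constants K1 ≈ 1.3×10−2 M and K2 ≈ 6.6×10−8 M (textbook values from Seinfeld and Pandis, not numbers stated in this paper):

```r
# S(IV) speciation fractions (alpha_H2SO3, alpha_HSO3-, alpha_SO3^2-)
# as a function of denuder pH, for use in Eqs. (6)-(7).
# K1, K2 are assumed standard dissociation constants of dissolved SO2.
siv_fractions <- function(pH, K1 = 1.3e-2, K2 = 6.6e-8) {
  h     <- 10^(-pH)
  denom <- h^2 + K1 * h + K1 * K2
  c(a_H2SO3 = h^2 / denom,       # undissociated H2SO3
    a_HSO3  = K1 * h / denom,    # bisulfite
    a_SO3   = K1 * K2 / denom)   # sulfite
}
round(siv_fractions(5), 4)  # HSO3- dominates at pH 5
round(siv_fractions(8), 4)  # SO3^2- dominates near pH 8, cf. Fig. 6a
```

Under these assumed constants, bisulfite carries about 98 % of S(IV) at pH 5, while sulfite takes over near pH 8, consistent with the shift toward NO2-dominated oxidation described above.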
As shown in Fig. 7b, the values for the high ambient NO2 concentrations (circles) were consistent with the $P_{\mathrm{NO}_2\cdot\mathrm{S}}/P_{\mathrm{S(IV)}}$ values calculated from the ambient NO2 concentrations (black squares), which further confirmed these results.

Figure 8. The HONO produced from the reaction between NO2 and SO2 in the presence of NH3 concentrations of 5 ppbv (a) and 20 ppbv (b). The black line is the variation in the pH with the SO2 concentration.

In the reaction of SO2 and NO2, pH is the limiting factor. At the low pH of 4–6, the dissolved SO2 in the denuder solution is present mainly as $\mathrm{HSO}_3^-$, which is rapidly oxidized by H2O2. In a real atmosphere, NH3 is the major basic species maintaining a high pH in the denuder solution; therefore, NH3 is the key component influencing MARGAint.. Figure 8 shows the scenario of calculating MARGAint. from the reaction between SO2 and NO2 with ambient NH3 concentrations of 5 and 20 ppbv. As shown in the figure, in the case of 5 ppbv NH3, the denuder pH decreases rapidly with increasing SO2 concentration, and the formation of MARGAint. from SO2 and NO2 is limited. For a high NH3 concentration, however, the pH of the denuder solution decreases only slowly due to the neutralization of sulfuric acid by NH3. A concentration of 1.2 ppb of artifact HONO could be produced at an NO2 concentration of 40 ppbv and an SO2 concentration of 4 ppbv. MARGAint. is thus greatly enhanced at high NH3 concentrations.

Figure 9. The correlation between HONOmarga_corr and HONOlopap (a) and the correlation between HONOmarga and HONOlopap under the conditions of SO2⋅NO2 < 150 ppbv² (median value) and NH3 < 5 ppbv (median value) (b).

In east China, the NH3 concentration is generally high and increased by about 30 % from 2008 to 2016 in the North China Plain (NCP) (Liu et al., 2018). Especially in summer, the NH3 concentration can reach up to 30 ppb (Meng et al., 2018). In contrast, the SO2 concentration decreased gradually due to emission reductions from 2008 to 2016 (by around 60 %), with concentrations lower than 5 ppb frequently observed. In such a case, the pH of the denuder solution in the WD/IC instrument will be further elevated, which will in turn further aggravate the deviation of the HONO measurement.

Figure 10. The correlation of residual/NO2 with RH. The residual is the difference between MARGAint. and the calculated interference from the reaction of SO2 and NO2.

## 3.4 The correction for the HONO measurement interference

According to the above results, the deviation of MARGA for the HONO measurement can be caused by two factors: one is the low sampling efficiency of the denuder solution at low pH, and the other is the additional $\mathrm{NO}_2^-$ produced by the reaction between SO2 and NO2 at high pH. In this study, we attempted to correct the measurement deviation of HONO accordingly. The correction formula is as follows:

$$\mathrm{MARGA}_{\mathrm{correct}} = \left(\mathrm{HONO}_{\mathrm{marga}} - \mathrm{SO}_2\cdot\mathrm{NO}_2\cdot P_{(\mathrm{pH})} - \mathrm{NO}_2\cdot 0.0056\right)/\varepsilon_{(\mathrm{pH})}. \qquad (8)$$

The calculation results are shown in Fig. 9a.
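Since Eqs. (3) and (8) are plain arithmetic once ε(pH) and P(pH) are known, they are easy to transcribe. A sketch in R follows; the function names are placeholders, and P_pH and eps_pH stand for values read off the paper's pH-dependent curves rather than quantities computed here:

```r
# Eq. (3): artifact HONO after removing the efficiency effect
# (requires the LOPAP benchmark, so it is a diagnostic, not a correction)
marga_int <- function(hono_marga, hono_lopap, eps_pH) {
  hono_marga - hono_lopap * eps_pH
}

# Eq. (8): corrected HONO from the MARGA measurement alone
# P_pH:   artifact HONO yield per unit SO2*NO2 at the given denuder pH
# eps_pH: HONO absorption efficiency at the given denuder pH
marga_correct <- function(hono_marga, so2, no2, P_pH, eps_pH) {
  (hono_marga - so2 * no2 * P_pH - no2 * 0.0056) / eps_pH
}

# Illustrative call with made-up values (not observations):
marga_correct(hono_marga = 2.0, so2 = 4, no2 = 40,
              P_pH = 1e-3, eps_pH = 0.9)
```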
After correction, there was a significant improvement in the HONO measured by MARGA, and the r2 value between HONOmarga and HONOlopap increased from 0.28 to 0.61, particularly in the high HONO concentration range. However, when the HONO concentration was low, the degree of improvement was limited. To find the source of the remaining uncertainty after correction, a residual analysis was performed. The residual is the difference between MARGAint. and the interference calculated as $\mathrm{SO}_2\cdot\mathrm{NO}_2\cdot P_{(\mathrm{pH})}+\mathrm{NO}_2\cdot 0.0056$. The dependence of residual/NO2 on RH is similar to that of ambient HONO/NO2 on RH observed in many other studies (Li et al., 2012; Yu et al., 2009), indicating that the NO2 heterogeneous reaction or the reaction of SO2 and NO2 in the sampling tube may be further factors contributing to the HONO interference (Su et al., 2008a). Moreover, the uncertainty in correcting HONOmarga may be attributed to two other reasons. One is the uncertainty of the pH of the denuder solution: the pH was calculated from the ions formed from the absorbed gases in the denuder solution over a residence time of 1 h, whereas the oxidation of SO2 occurred in real time while the pH of the denuder solution also varied. Additionally, low ion concentrations ($<5\times10^{-5}$ M) in the denuder solution introduce uncertainties into the pH calculation. The other reason is the uptake coefficient of the denuder solution. NO2(g) is weakly soluble in pure water, with a Henry's law constant (H) of ∼0.01 M atm−1, which was used in this study. However, previous studies have shown that anions in the liquid can enhance NO2(g) uptake by 2 or 3 orders of magnitude (Li et al., 2018). This process may influence the calculation of the dissolved NO2 content and its hydrolysis. The accuracy of the uptake coefficient was difficult to determine, which might be one of the reasons for the underestimation of MARGAint. from the reaction between SO2 and NO2 at high NO2 concentrations (Fig. 7b). In addition to the correction, an alternative way to use HONOmarga is to select suitable conditions in which the measurement interference is limited. In ambient air, SO2 and NH3 are the key pollutants causing the HONO measurement deviation. According to our observations, under the clean conditions of SO2⋅NO2 lower than 150 ppbv² (median value) and NH3 lower than 5 ppbv (median value), MARGA showed much better performance for measuring the HONO concentration. The latter is also a possible reason for the good performance of WD/IC in measuring HONO concentrations in previous studies (Acker et al., 2004; Ramsay et al., 2018).

4 Conclusions

We conducted a field campaign at SORPES in December 2015 to evaluate the performance of MARGA for measuring ambient HONO concentrations against the benchmark of LOPAP. Compared with HONOlopap, a notable deviation in HONOmarga was observed, ranging between −2 and 6 ppb, with the ratio of HONOmarga/HONOlopap ranging from 0.4 to 4. When the SO2 concentration in the atmosphere was high, a negative deviation occurred, and when the NH3 concentration was high, a positive deviation occurred. Through further analysis of the pH of the denuder solution and the oxidation of SO2 in the denuder solution, we attribute the measurement deviation of HONO by MARGA to two main factors.
One is that acidic and alkaline gas components in the atmosphere enter the denuder solution of the instrument, changing the denuder pH and thereby affecting the absorption efficiency of MARGA for HONO. The other is that NO2 oxidizes the SO2 absorbed in the denuder solution, a reaction that is generally enhanced at the higher denuder pH occurring in the presence of high concentrations of NH3 and NO2; the additional formation of HONO led to the MARGA measurement error for HONO. Based on the understanding of these interference factors, we established a method to correct the HONO data measured by MARGA. Compared with LOPAP, the HONO measurement results were improved after the correction, but the improvement was limited at low HONO concentrations. Moreover, under clean conditions with low concentrations of SO2, NO2, and NH3, MARGA performs better for the measurement of HONO.

Data availability. Measurement data at the SORPES station, including HONO data and relevant trace gas and aerosol data as well as meteorological data, are available upon request from the corresponding author before the SORPES database is opened to the public.

Author contributions. AD and WN designed the study and contributed to the editing of the paper. ZX contributed to the measurements, data analysis, and the draft of this paper, and YL contributed to the data analysis. PS and XC contributed to observations at SORPES and data analysis.

Competing interests. The authors declare that they have no conflict of interest.

Acknowledgements. We thank Yuning Xie, Lei Wang, Caijun Zhu, and Wenjun Wu at the School of Atmospheric Sciences at Nanjing University for their contributions to the maintenance of the measurements.

Financial support. The research was supported by the following funders: National Key R&D Program of China (grant nos. 2016YFC0200500 and 2016YFC0202000), National Science Foundation of China (grant nos. 41605098, 91644218, 41675145 and 41875175), and Jiangsu Provincial Science Fund (grant no. BK20160620).

Review statement. This paper was edited by Mingjin Tang and reviewed by two anonymous referees.

References

Acker, K., Spindler, G., and Brüggemann, E.: Nitrous and nitric acid measurements during the INTERCOMP2000 campaign in Melpitz, Atmos. Environ., 38, 6497–6505, https://doi.org/10.1016/j.atmosenv.2004.08.030, 2004.
Acker, K., Febo, A., Trick, S., Perrino, C., Bruno, P., Wiesen, P., Möller, D., Wieprecht, W., Auel, R., Giusto, M., Geyer, A., Platt, U., and Allegrini, I.: Nitrous acid in the urban area of Rome, Atmos. Environ., 40, 3123–3133, https://doi.org/10.1016/j.atmosenv.2006.01.028, 2006.
Cheng, P., Cheng, Y. F., Lu, K. D., Su, H., Yang, Q., Zou, Y. K., Zhao, Y. R., Dong, H. B., Zeng, L. M., and Zhang, Y.: An online monitoring system for atmospheric nitrous acid (HONO) based on stripping coil and ion chromatography, J. Environ. Sci., 25, 895–907, 2013.
Cheng, Y., Zheng, G., Wei, C., Mu, Q., Zheng, B., Wang, Z., Gao, M., Zhang, Q., He, K., Carmichael, G., Pöschl, U., and Su, H.: Reactive nitrogen chemistry in aerosol water as a source of sulfate during haze events in China, Sci. Adv., 2, e1601530, https://doi.org/10.1126/sciadv.1601530, 2016.
Ding, A., Nie, W., Huang, X., Chi, X., Sun, J., Kerminen, V.-M., Xu, Z., Guo, W., Petäjä, T., Yang, X., Kulmala, M., and Fu, C.: Long-term observation of air pollution-weather/climate interactions at the SORPES station: a review and outlook, Front. Environ. Sci. Eng., 10, 15, https://doi.org/10.1007/s11783-016-0877-3, 2016.
Febo, A., Perrino, C., and Cortiello, M.: A denuder technique for the measurement of nitrous acid in urban atmospheres, Atmos. Environ. A Gen. Top., 27, 1721–1728, https://doi.org/10.1016/0960-1686(93)90235-Q, 1993.
Genfa, Z., Slanina, S., Brad Boring, C., Jongejan, P. A. C., and Dasgupta, P. K.: Continuous wet denuder measurements of atmospheric nitric and nitrous acids during the 1999 Atlanta Supersite, Atmos. Environ., 37, 1351–1364, https://doi.org/10.1016/S1352-2310(02)01011-7, 2003.
Gutzwiller, L., Arens, F., Baltensperger, U., Gäggeler, H. W., and Ammann, M.: Significance of Semivolatile Diesel Exhaust Organics for Secondary HONO Formation, Environ. Sci. Technol., 36, 677–682, https://doi.org/10.1021/es015673b, 2002.
Heland, J., Kleffmann, J., Kurtenbach, R., and Wiesen, P.: A New Instrument To Measure Gaseous Nitrous Acid (HONO) in the Atmosphere, Environ. Sci. Technol., 35, 3207–3212, https://doi.org/10.1021/es000303t, 2001.
Hoffmann, M. R. and Calvert, J. G.: Chemical Transformation Modules for Eulerian Acid Deposition Models, Vol. 2, The Aqueous-Phase Chemistry, EPA/600/3-85/017, US Environmental Protection Agency, Research Triangle Park, NC, 1985.
Jongejan, P. A. C., Bai, Y., Veltkamp, A. C., Wye, G. P., and Slanina, J.: An Automated Field Instrument for The Determination of Acidic Gases in Air, Int. J. Environ. Anal. Chem., 66, 241–251, https://doi.org/10.1080/03067319708028367, 1997.
Kleffmann, J.: Daytime sources of nitrous acid (HONO) in the atmospheric boundary layer, Chem. Phys. Chem., 8, 1137–1144, https://doi.org/10.1002/cphc.200700016, 2007.
Kleffmann, J. and Wiesen, P.: Technical Note: Quantification of interferences of wet chemical HONO LOPAP measurements under simulated polar conditions, Atmos. Chem. Phys., 8, 6813–6822, https://doi.org/10.5194/acp-8-6813-2008, 2008.
Kleffmann, J., Gavriloaiei, T., Hofzumahaus, A., Holland, F., Koppmann, R., Rupp, L., Schlosser, E., Siese, M., and Wahner, A.: Daytime formation of nitrous acid: A major source of OH radicals in a forest, Geophys. Res. Lett., 32, L05818, https://doi.org/10.1029/2005GL022524, 2005.
Kleffmann, J., Lörzer, J., Wiesen, P., Kern, C., Trick, S., Volkamer, R., Rodenas, M., and Wirtz, K.: Intercomparison of the DOAS and LOPAP techniques for the detection of nitrous acid (HONO), Atmos. Environ., 40, 3640–3652, https://doi.org/10.1016/j.atmosenv.2006.03.027, 2006.
Kunen, S. M., Lazrus, A. L., Kok, G. L., and Heikes, B. G.: Aqueous oxidation of SO2 by hydrogen peroxide, J. Geophys. Res.-Oceans, 88, 3671–3674, https://doi.org/10.1029/JC088iC06p03671, 1983.
Lelièvre, S., Bedjanian, Y., Laverdet, G., and Le Bras, G.: Heterogeneous Reaction of NO2 with Hydrocarbon Flame Soot, J. Phys. Chem. A, 108, 10807–10817, https://doi.org/10.1021/jp0469970, 2004.
Li, L., Hoffmann, M. R., and Colussi, A. J.: Role of Nitrogen Dioxide in the Production of Sulfate during Chinese Haze-Aerosol Episodes, Environ. Sci. Technol., 52, 2686–2693, https://doi.org/10.1021/acs.est.7b05222, 2018.
Li, X., Brauers, T., Häseler, R., Bohn, B., Fuchs, H., Hofzumahaus, A., Holland, F., Lou, S., Lu, K. D., Rohrer, F., Hu, M., Zeng, L. M., Zhang, Y. H., Garland, R. M., Su, H., Nowak, A., Wiedensohler, A., Takegawa, N., Shao, M., and Wahner, A.: Exploring the atmospheric chemistry of nitrous acid (HONO) at a rural site in Southern China, Atmos. Chem. Phys., 12, 1497–1513, https://doi.org/10.5194/acp-12-1497-2012, 2012.
Liu, M., Huang, X., Song, Y., Xu, T., Wang, S., Wu, Z., Hu, M., Zhang, L., Zhang, Q., Pan, Y., Liu, X., and Zhu, T.: Rapid SO2 emission reductions significantly increase tropospheric ammonia concentrations over the North China Plain, Atmos. Chem. Phys., 18, 17933–17943, https://doi.org/10.5194/acp-18-17933-2018, 2018.
Liu, Z., Wang, Y., Costabile, F., Amoroso, A., Zhao, C., Huey, L. G., Stickel, R., Liao, J., and Zhu, T.: Evidence of Aerosols as a Media for Rapid Daytime HONO Production over China, Environ. Sci. Technol., 48, 14386–14391, https://doi.org/10.1021/es504163z, 2014.
Lu, K., Zhang, Y., Su, H., Brauers, T., Chou, C. C., Hofzumahaus, A., Liu, S. C., Kita, K., Kondo, Y., Shao, M., Wahner, A., Wang, J., Wang, X., and Zhu, T.: Oxidant (O3+NO2) production processes and formation regimes in Beijing, J. Geophys. Res.-Atmos., 115, D07303, https://doi.org/10.1029/2009JD012714, 2010.
Makkonen, U., Virkkula, A., Hellén, H., Hemmilä, M., Sund, J., Äijälä, M., Ehn, M., Junninen, H., Keronen, P., and Petäjä, T.: Semi-continuous gas and inorganic aerosol measurements at a boreal forest site: seasonal and diurnal cycles of HONO and HNO3, Boreal Environ. Res., 19, 311–328, 2014.
Meng, Z., Xu, X., Lin, W., Ge, B., Xie, Y., Song, B., Jia, S., Zhang, R., Peng, W., Wang, Y., Cheng, H., Yang, W., and Zhao, H.: Role of ambient ammonia in particulate ammonium formation at a rural site in the North China Plain, Atmos. Chem. Phys., 18, 167–184, https://doi.org/10.5194/acp-18-167-2018, 2018.
Michoud, V., Colomb, A., Borbon, A., Miet, K., Beekmann, M., Camredon, M., Aumont, B., Perrier, S., Zapf, P., Siour, G., Ait-Helal, W., Afif, C., Kukui, A., Furger, M., Dupont, J. C., Haeffelin, M., and Doussin, J. F.: Study of the unknown HONO daytime source at a European suburban site during the MEGAPOLI summer and winter field campaigns, Atmos. Chem. Phys., 14, 2805–2822, https://doi.org/10.5194/acp-14-2805-2014, 2014.
Monge, M. E., D'Anna, B., Mazri, L., Giroir-Fendler, A., Ammann, M., Donaldson, D. J., and George, C.: Light changes the atmospheric reactivity of soot, P. Natl. Acad. Sci. USA, 107, 6605–6609, https://doi.org/10.1073/pnas.0908341107, 2010.
Muller, T., Dubois, R., Spindler, G., Bruggemann, E., Ackermann, R., Geyer, A., and Platt, U.: Measurements of nitrous acid by DOAS and diffusion denuders: a comparison, T. Ecol. Environ., 28, 345–349, 1999.
Nie, W., Ding, A. J., Xie, Y. N., Xu, Z., Mao, H., Kerminen, V.-M., Zheng, L. F., Qi, X. M., Huang, X., Yang, X.-Q., Sun, J. N., Herrmann, E., Petäjä, T., Kulmala, M., and Fu, C. B.: Influence of biomass burning plumes on HONO chemistry in eastern China, Atmos. Chem. Phys., 15, 1147–1159, https://doi.org/10.5194/acp-15-1147-2015, 2015.
Perner, D. and Platt, U.: Detection of nitrous acid in the atmosphere by differential optical absorption, Geophys. Res. Lett., 6, 917–920, https://doi.org/10.1029/GL006i012p00917, 1979.
Platt, U., Perner, D., Harris, G., Winer, A., and Pitts, J.: Observations of nitrous acid in an urban atmosphere by differential optical absorption, Nature, 285, 312–314, https://doi.org/10.1038/285312a0, 1980.
Ramsay, R., Di Marco, C. F., Heal, M. R., Twigg, M. M., Cowan, N., Jones, M. R., Leeson, S. R., Bloss, W. J., Kramer, L. J., Crilley, L., Sörgel, M., Andreae, M., and Nemitz, E.: Surface–atmosphere exchange of inorganic water-soluble gases and associated ions in bulk aerosol above agricultural grassland pre- and postfertilisation, Atmos. Chem. Phys., 18, 16953–16978, https://doi.org/10.5194/acp-18-16953-2018, 2018.
Rosman, K., Shimmo, M., Karlsson, A., Hansson, H.-C., Keronen, P., Allen, A., and Hoenninger, G.: Laboratory and field investigations of a new and simple design for the parallel plate denuder, Atmos. Environ., 35, 5301–5310, https://doi.org/10.1016/S1352-2310(01)00308-9, 2001.
Rumsey, I. C., Cowen, K. A., Walker, J. T., Kelly, T. J., Hanft, E. A., Mishoe, K., Rogers, C., Proost, R., Beachley, G. M., Lear, G., Frelink, T., and Otjes, R. P.: An assessment of the performance of the Monitor for AeRosols and GAses in ambient air (MARGA): a semi-continuous method for soluble compounds, Atmos. Chem. Phys., 14, 5639–5658, https://doi.org/10.5194/acp-14-5639-2014, 2014.
Seinfeld, J. H. and Pandis, S. N.: Atmospheric chemistry and physics: from air pollution to climate change, John Wiley & Sons, New York, 2016.
Sörgel, M., Regelin, E., Bozem, H., Diesch, J.-M., Drewnick, F., Fischer, H., Harder, H., Held, A., Hosaynali-Beygi, Z., Martinez, M., and Zetzsch, C.: Quantification of the unknown HONO daytime source and its relation to NO2, Atmos. Chem. Phys., 11, 10433–10447, https://doi.org/10.5194/acp-11-10433-2011, 2011.
Spindler, G., Hesper, J., Brüggemann, E., Dubois, R., Müller, T., and Herrmann, H.: Wet annular denuder measurements of nitrous acid: laboratory study of the artefact reaction of NO2 with S(IV) in aqueous solution and comparison with field measurements, Atmos. Environ., 37, 2643–2662, https://doi.org/10.1016/S1352-2310(03)00209-7, 2003.
Stieger, B., Spindler, G., Fahlbusch, B., Müller, K., Grüner, A., Poulain, L., Thöni, L., Seitler, E., Wallasch, M., and Herrmann, H.: Measurements of PM10 ions and trace gases with the online system MARGA at the research station Melpitz in Germany – A five-year study, J. Atmos. Chem., 75, 33–70, https://doi.org/10.1007/s10874-017-9361-0, 2018.
Su, H., Cheng, Y. F., Cheng, P., Zhang, Y. H., Dong, S., Zeng, L. M., Wang, X., Slanina, J., Shao, M., and Wiedensohler, A.: Observation of nighttime nitrous acid (HONO) formation at a non-urban site during PRIDE-PRD2004 in China, Atmos. Environ., 42, 6219–6232, https://doi.org/10.1016/j.atmosenv.2008.04.006, 2008a.
Su, H., Cheng, Y. F., Shao, M., Gao, D. F., Yu, Z. Y., Zeng, L. M., Slanina, J., Zhang, Y. H., and Wiedensohler, A.: Nitrous acid (HONO) and its daytime sources at a rural site during the 2004 PRIDE-PRD experiment in China, J. Geophys. Res., 113, D14312, https://doi.org/10.1029/2007JD009060, 2008b.
Trebs, I., Meixner, F. X., Slanina, J., Otjes, R., Jongejan, P., and Andreae, M. O.: Real-time measurements of ammonia, acidic trace gases and water-soluble inorganic aerosol species at a rural site in the Amazon Basin, Atmos. Chem. Phys., 4, 967–987, https://doi.org/10.5194/acp-4-967-2004, 2004.
VandenBoer, T. C., Markovic, M. Z., Sanders, J. E., Ren, X., Pusede, S. E., Browne, E. C., Cohen, R. C., Zhang, L., Thomas, J., Brune, W. H., and Murphy, J. G.: Evidence for a nitrous acid (HONO) reservoir at the ground surface in Bakersfield, CA, during CalNex 2010, J. Geophys. Res.-Atmos., 119, 9093–9106, https://doi.org/10.1002/2013JD020971, 2014.
Wu, T., Zha, Q., Chen, W., Xu, Z., Wang, T., and He, X.: Development and deployment of a cavity enhanced UV-LED spectrometer for measurements of atmospheric HONO and NO2 in Hong Kong, Atmos. Environ., 95, 544–551, https://doi.org/10.1016/j.atmosenv.2014.07.016, 2014.
Xie, Y., Ding, A., Nie, W., Mao, H., Qi, X., Huang, X., Xu, Z., Kerminen, V.-M., Petäjä, T., Chi, X., Virkkula, A., Boy, M., Xue, L., Guo, J., Sun, J., Yang, X., Kulmala, M., and Fu, C.: Enhanced sulfate formation by nitrogen dioxide: Implications from in situ observations at the SORPES station, J. Geophys. Res.-Atmos., 120, 12679–12694, https://doi.org/10.1002/2015JD023607, 2015.
Xu, Z., Wang, T., Xue, L. K., Louie, P. K. K., Luk, C. W. Y., Gao, J., Wang, S. L., Chai, F. H., and Wang, W. X.: Evaluating the uncertainties of thermal catalytic conversion in measuring atmospheric nitrogen dioxide at four differently polluted sites in China, Atmos. Environ., 76, 221–226, https://doi.org/10.1016/j.atmosenv.2012.09.043, 2013.
Xu, Z., Wang, T., Wu, J., Xue, L., Chan, J., Zha, Q., Zhou, S., Louie, P. K. K., and Luk, C. W. Y.: Nitrous acid (HONO) in a polluted subtropical atmosphere: Seasonal variability, direct vehicle emissions and heterogeneous production at ground surface, Atmos. Environ., 106, 100–109, https://doi.org/10.1016/j.atmosenv.2015.01.061, 2015.
Xue, C., Ye, C., Ma, Z., Liu, P., Zhang, Y., Zhang, C., Tang, K., Zhang, W., Zhao, X., Wang, Y., Song, M., Liu, J., Duan, J., Qin, M., Tong, S., Ge, M., and Mu, Y.: Development of stripping coil-ion chromatograph method and intercomparison with CEAS and LOPAP to measure atmospheric HONO, Sci. Total Environ., 646, 187–195, https://doi.org/10.1016/j.scitotenv.2018.07.244, 2019.
Yu, Y., Galle, B., Panday, A., Hodson, E., Prinn, R., and Wang, S.: Observations of high rates of NO2-HONO conversion in the nocturnal atmospheric boundary layer in Kathmandu, Nepal, Atmos. Chem. Phys., 9, 6401–6415, https://doi.org/10.5194/acp-9-6401-2009, 2009.
Zellweger, C., Ammann, M., Hofer, P., and Baltensperger, U.: NOy speciation with a combined wet effluent diffusion denuder – aerosol collector coupled to ion chromatography, Atmos. Environ., 33, 1131–1140, https://doi.org/10.1016/S1352-2310(98)00295-7, 1999.
Zhou, X.: An Overview of Measurement Techniques for Atmospheric Nitrous Acid, in: Disposal of Dangerous Chemicals in Urban Areas and Mega Cities, Springer, Dordrecht, 29–44, 2013.
Zhou, X., He, Y., Huang, G., Thornberry, T. D., Carroll, M. A., and Bertman, S. B.: Photochemical production of nitrous acid on glass sample manifold surface, Geophys. Res. Lett., 29, 21–24, https://doi.org/10.1029/2002GL015080, 2002.
{}
# Self adjoint operators, eigenfunctions & eigenvalues

1. Oct 21, 2015

### Incand

1. The problem statement, all variables and given/known data
Consider the space $P_n = \text{Span}\{ e^{ik\theta};k=0,\pm 1, \dots , \pm n\}$, with the hermitian $L^2$-inner product $\langle f,g\rangle = \int_{-\pi}^\pi f(\theta) \overline{g(\theta)}d\theta$. Define operators $A,B,C,D$ as $A = \frac{d}{d\theta}, \; \; B= i\frac{d}{d\theta}, \; \; C= \frac{d^2}{d\theta^2}, \; \; D: f\to D f(\theta) = f(\theta) + f(-\theta)$. Which of the operators are self-adjoint? Find the eigenvalues and eigenfunctions for each operator.

2. Relevant equations
The operator $T$ is self-adjoint iff $\langle Tf,g \rangle = \langle f,Tg\rangle$ for all $f,g$ in $P_n$.

3. The attempt at a solution
Mostly looking for some quick input on whether I'm getting this roughly right, since there's no answer and I'm a bit unsure. I first write $f(\theta) = \sum_{-n}^n c_k e^{ik\theta}$. We're allowed to differentiate since $f(\theta)$ is in $C^\infty$. Differentiating, we have $f'(\theta) = \sum_{-n}^n ikc_k e^{ik\theta}$ and $f''(\theta ) = \sum_{-n}^n -k^2c_k e^{ik\theta}$. Every part of $f(\theta)$ is $2\pi$-periodic, so the only time we get a contribution to the integral is when the exponential terms cancel each other exactly.
A) In this case, when the indices match, we have $\langle Af, g\rangle = \int_{-\pi}^\pi \sum_{-n}^n \left( ika_k\overline{b_k} \right) d\theta = 2\pi \sum_{-n}^n ika_k\overline{b_k}$ and $\langle f, Ag\rangle = \int_{-\pi}^\pi \left( \sum_{-n}^n -ika_k\overline{b_k} \right) d\theta = -2\pi \sum_{-n}^n ika_k\overline{b_k}$, so not self-adjoint. Similarly we see that both B and C are self-adjoint. The eigenfunctions are $e^{ik\theta}$ for $k = \pm 1, \pm 2,\dots , \pm n$, and the eigenvalues for $A,B$ and $C$ are $ik, -k$ and $-k^2$ respectively.
The case $D$ is slightly harder. We have $\langle Df,g \rangle = \langle f,g \rangle + \int_{-\pi}^\pi f(-\theta)\overline{g(\theta)}d\theta$ and $\langle f,Dg \rangle = \langle f,g \rangle + \int_{-\pi}^\pi f(\theta)\overline{g(-\theta)}d\theta$. So we need to prove that $\int_{-\pi}^\pi f(-\theta)\overline{g(\theta)}d\theta = \int_{-\pi}^\pi f(\theta)\overline{g(-\theta)}d\theta$. Again matching the coefficients, we have for the left side $\int_{-\pi}^\pi \left( \sum_{-n}^n a_k\overline{b_{n-k}} \right) d\theta$ and for the right side $\int_{-\pi}^\pi \left( \sum_{-n}^n a_k\overline{b_{n-k}}\right) d\theta$, which match, so self-adjoint. The eigenvalues should be $\frac{a_k+a_{n-k}}{a_k}$ and the eigenfunctions the same as before.
Did I understand this right?

2. Oct 21, 2015

### andrewkirk

Your work looks mostly sound. You have omitted the case $k=0$, which gives an additional eigenvalue of 0 for A, B, C, with the eigenvector being any constant.
For D, I also conclude it is Hermitian, but I get the integrals being like this
$$\int_{-\pi}^\pi \left( \sum_{-n}^n a_k\overline{b_{-k}} \right) d\theta$$
The eigenvalues won't be of the form you gave because the $a_k$ coefficients are properties of the function $f$, not the operator D. Given the revised integral, can you work out what the eigenvalues and eigenfunctions of D will be?

3. Oct 22, 2015

### Incand

Thanks for taking the time to reply! You're absolutely right about the index in that sum, and thanks for clarifying that I can't have those constants from the function as an eigenvalue.
If I write out $Df(\theta)$ I have $\sum_{-n}^n \left(c_k e^{ik\theta} + c_ke^{-ik\theta} \right) = \sum_{-n}^n (c_k + c_{-k})e^{ik\theta}$.
I'm not sure I get all the eigenfunctions here, but I was thinking: if I take eigenfunctions of the form $f\in P_n$ with the added constraint that $c_k+c_{-k}=0$, I have the eigenvalue zero?
I also possibly see other eigenvalues. For example, if I choose the constraint that $c_{-k} = ac_{k}$, with $a$ a complex number, I have the eigenvalue $(1+a)$. But I'm not sure if I'm allowed to do this? The eigenvalue doesn't vary like in the last case (which was obviously wrong), but I do use properties from the function, I guess.

4. Oct 22, 2015

### andrewkirk

That sounds right. What would be a neat basis for the eigenspace with eigenvalue zero?
To see if this works, apply D to $f(\theta)=e^{-ik\theta}+ae^{ik\theta}$ and see what you get. Is it a complex scalar multiple of $f(\theta)$?
What about other nonzero eigenvalues?

5. Oct 22, 2015

### Incand

Perhaps $\{e^{ik\theta}-e^{-ik\theta} \}_1^n$, that is, the basis $\{ e^{i\theta}-e^{-i\theta}, \dots , e^{in\theta}-e^{-in\theta} \}$. The basis vectors aren't orthonormal, but if one wants we could get there with the Gram-Schmidt method.
$D\left( e^{-ik\theta}+ae^{ik\theta} \right) = e^{-ik\theta} + ae^{ik\theta} + e^{ik\theta} + ae^{-ik\theta} = (1+a) \left(e^{-ik\theta}+e^{ik\theta} \right)$.
Right, this doesn't work, since I should have an $a$ in there as well. So instead let's look at the case when $c_{-k} = c_{k}$: we have
$D(e^{ik\theta}+ e^{-ik\theta}) =e^{ik\theta}+ e^{-ik\theta} + e^{-ik\theta}+ e^{ik\theta} = 2(e^{ik\theta}+ e^{-ik\theta})$.
So we have the eigenvalue two, for eigenfunctions that are linear combinations of $\{e^{ik\theta}+e^{-ik\theta} \}_0^n$. I don't think there are actually any more eigenvalues.
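Not part of the thread: a quick numerical check of the conclusion, assuming NumPy. In the basis $\{e^{ik\theta}\}_{k=-n}^{n}$, the operator $D$ sends $e^{ik\theta}$ to $e^{ik\theta}+e^{-ik\theta}$, so its matrix is the identity plus the "flip" permutation, and the spectrum should be $n$ copies of $0$ and $n+1$ copies of $2$:

import numpy as np

n = 4
ks = np.arange(-n, n + 1)          # basis exponents k = -n..n
idx = {k: j for j, k in enumerate(ks)}

# Matrix of D f(theta) = f(theta) + f(-theta) in this basis.
D = np.zeros((len(ks), len(ks)))
for k in ks:
    D[idx[k], idx[k]] += 1.0       # the f(theta) term
    D[idx[-k], idx[k]] += 1.0      # the f(-theta) term flips k -> -k

print(np.sort(np.linalg.eigvalsh(D)))  # n zeros, then n+1 twos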
{}
# Regular Sequences and Ext Calculations

Let $R$ be a commutative ring and $A$ an $R$-module. We say that $x_1,\dots,x_n\in R$ is a regular sequence on $A$ if $(x_1,\dots,x_n)A\not = A$ and $x_i$ is not a zero divisor on $A/(x_1,\dots,x_{i-1})A$ for all $i$. Regular sequences are a central theme in commutative algebra. Here's a particularly interesting theorem about them that allows you to figure out a whole bunch of Ext-groups:

Theorem. Let $A$ and $B$ be $R$-modules and $x_1,\dots,x_n$ a regular sequence on $A$. If $(x_1,\dots,x_n)B = 0$ then $${\rm Ext}_R^n(B,A) \cong {\rm Hom}_R(B,A/(x_1,\dots,x_n)A)$$

This theorem tells us we can calculate the Ext-group ${\rm Ext}_R^n(B,A)$ simply by finding a regular sequence of length $n$ and calculating a group of homomorphisms. We get two cool things out of this theorem: first, a corollary of this theorem is that any two maximal regular sequences on $A$ have the same length if they are both contained in some ideal $I$ such that $IA\not= A$, and second, it encapsulates a whole range of Ext-calculations in an easy package.

For example, let's say we wanted to calculate ${\rm Ext}_\Z^1(\Z/2,\Z)$. Well, $2\in\Z$ is a regular sequence on $\Z$ with $2\cdot(\Z/2)=0$, and so the above theorem tells us that this Ext-group is just ${\rm Hom}_\Z(\Z/2,\Z/2) \cong\Z/2$. Another example: ${\rm Ext}_{\Z[x]}^1(\Z,\Z[x])\cong\Z$, since $x\in\Z[x]$ is a regular sequence on $\Z[x]$ that kills $\Z\cong\Z[x]/(x)$, giving ${\rm Hom}_{\Z[x]}(\Z,\Z[x]/(x))\cong\Z$.

Of course, the above theorem is really just a special case of a Koszul complex calculation. However, it can be derived without constructing the Koszul complex in general, and so offers an instructive and minimalist way of seeing that for Noetherian rings and finitely generated modules, the notion of the length of a maximal regular sequence is well-defined.
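As a cross-check on the first example (an addition, not part of the original post): applying ${\rm Hom}_\Z(-,\Z)$ to the free resolution $0\to\Z\xrightarrow{\cdot 2}\Z\to\Z/2\to 0$ leaves the complex $0\to\Z\xrightarrow{\cdot 2}\Z\to 0$, whence
$${\rm Ext}_\Z^1(\Z/2,\Z)\cong {\rm coker}(\Z\xrightarrow{\cdot 2}\Z)\cong\Z/2,$$
in agreement with ${\rm Hom}_\Z(\Z/2,\Z/2)\cong\Z/2$ from the theorem.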
{}
# Integration of Vectors

(a) Prove that
$$\iint_{S} r^{5}\,\mathbf{n}\,dS=\iiint_{V}5r^{3}\,\mathbf{r}\,dV$$
(b) Prove that
$$\oint_{C}\phi\,d\mathbf{r}=\iint_{S}d\mathbf{S}\times\nabla\phi$$

Last edited:

Did you actually work out this problem yet?
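Not part of the original thread: a sketch of the standard route for (a), assuming $S$ is a closed surface bounding $V$, with outward unit normal $\mathbf{n}$ and $r = |\mathbf{r}|$. Dot the left-hand side with an arbitrary constant vector $\mathbf{c}$ and apply the divergence theorem:
$$\iint_{S} r^{5}\,(\mathbf{c}\cdot\mathbf{n})\,dS=\iiint_{V}\nabla\cdot\left(r^{5}\mathbf{c}\right)dV=\iiint_{V}\mathbf{c}\cdot\nabla\left(r^{5}\right)dV=\mathbf{c}\cdot\iiint_{V}5r^{3}\,\mathbf{r}\,dV,$$
using $\nabla(r^{5})=5r^{4}\,\hat{\mathbf{r}}=5r^{3}\,\mathbf{r}$. Since $\mathbf{c}$ is arbitrary, the vector identity follows; (b) is handled the same way with the corresponding Stokes-type identity.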
{}
# Generating a Lua script for a tile map maker

I'm using PrintWriter to generate a Lua script for a tile map maker. It seems hackish to me, but it works. The purpose is to allow users to make their own maps for my game.

• Is there a better practice to generate scripts?
• Secondary - Why is it bad to do it my way?

I'm only including the function that writes the Lua code, but if you want to run it on your own machine the full source is here.

public void buildLua() {
    block25 : {
        boolean foundGoal = false;
        boolean foundPlayer = false;
        boolean foundWall = false;
        int m = 0;
        while (!(foundGoal && foundPlayer && foundWall)) {
            if (this.mapGridList.get(m).getIcon().equals(this.goalIcon)) {
                foundGoal = true;
            } else if (this.mapGridList.get(m).getIcon().equals(this.playerIcon)) {
                foundPlayer = true;
            } else if (this.mapGridList.get(m).getIcon().equals(this.wallIcon)) {
                foundWall = true;
            }
            if (++m > (this.mapSize - 2) * (this.mapSize - 2) - 1) break;
        }
        if (foundGoal && foundPlayer && foundWall) {
            try {
                JFileChooser chooser = new JFileChooser(String.valueOf(System.getProperty("user.home")) + "/Desktop");
                chooser.setFileSelectionMode(2);
                int result = chooser.showSaveDialog(chooser);
                if (result == 0) {
                    this.mapName = chooser.getSelectedFile().getName();
                    if (this.mapName.contains(".lua")) {
                        this.mapName = this.mapName.substring(0, this.mapName.length() - 4);
                    }
                    System.out.println(temp.getText());
                    this.setMaxWalls();
                    String path = chooser.getSelectedFile().getAbsolutePath();
                    PrintWriter writer = path.endsWith(".lua") ? new PrintWriter(path, "UTF-8") : new PrintWriter(String.valueOf(path) + ".lua", "UTF-8");
                    writer.println("local Map = IceRunner.Map");
                    writer.println("local MapKit = IceRunner.MapKit");
                    writer.println("local Up = IceRunner.MapTools.UpExtent");
                    writer.println("local Down = IceRunner.MapTools.DownExtent");
                    writer.println("local Left = IceRunner.MapTools.LeftExtent");
                    writer.println("local Right = IceRunner.MapTools.RightExtent");
                    writer.println("local Wall = IceRunner.Map.Wall");
                    writer.println("local MapKit = IceRunner.MapTools.MapKit");
                    writer.println("local Player = Map.Player");
                    writer.println("local Goal = Map.Goal");
                    writer.println("");
                    writer.println("local map = Map({");
                    writer.println("name = \"" + this.mapName.toUpperCase() + "\",");
                    writer.println("level = " + this.difficulty + ",");
                    writer.println("kit = MapKit({size = " + this.mapSize + ", walls = " + this.maxWalls + " })");
                    writer.println("})");
                    writer.println("");
                    if (this.mapSize == 15) {
                    } else if (this.mapSize == 20) {
                    } else if (this.mapSize == 25) {
                    }
                    int z = this.mapSize - 2;
                    int i = 1;
                    while (i < z * z + 1) {
                        int x;
                        int y;
                        if ((i - 1) % z > 0) {
                            x = (i - 1) / z;
                            y = (i - 1) % z;
                        } else {
                            x = (i - 1) / z;
                            y = 0;
                        }
                        if (this.mapGridList.get(i - 1).getIcon().equals(this.wallIcon)) {
                            writer.println("map:add_walls(Wall(" + (x + 1) + "," + (y + 1) + "), Up(0))");
                        } else if (this.mapGridList.get(i - 1).getIcon().equals(this.playerIcon)) {
                            writer.println("map:set_player(Player(" + (x + 1) + "," + (y + 1) + "))");
                        } else if (this.mapGridList.get(i - 1).getIcon().equals(this.goalIcon)) {
                            writer.println("map:set_goal(Goal(" + (x + 1) + "," + (y + 1) + "))");
                        }
                        ++i;
                    }
                    writer.println("");
                    writer.println("IceRunner.register_map(map);");
                    writer.close();
                    break block25;
                }
                JOptionPane.showMessageDialog(this.frmIceRunnerMap, "Map Not Saved!");
            } catch (IOException e) {
                JOptionPane.showMessageDialog(this.frmIceRunnerMap, "Error Occured While Saving!");
            }
        } else {
            JOptionPane.showMessageDialog(this.frmIceRunnerMap, "Please place at least one wall, start tile, and finish tile...");
        }
    }
}

• In what area do you use these Lua scripts? Is it in a professional area or is it for private use? (That question aims at "how much time/money do you want to invest?") Feb 16 '18 at 8:03
• I am downvoting this question because I feel like we don't have enough information about what you want to do and why in order to help you the best. Can you add some more description about the use-case for this? Feb 16 '18 at 16:03

I can't answer whether there is a better way to generate a Lua script. Maybe you should google for some projects about connecting Java and Lua. I saw a bunch of them, so maybe someone already did what you need. But I'll mention how you can make your Java code better ;)

1. Divide code into methods

You wrote a gigantic piece of code. It is hard to understand what is going on there. Believe me, when you leave your project and go back to it later, you'll have difficulty fully understanding it. For instance, all those writer calls can be extracted to a method. The next example is the "setup" of player, wall and goal. According to Clean Code by Robert Martin, a function should be no longer than 20 lines of code. Even extracting this to a method with a parameter String message would make a positive change to the readability of the code:

JOptionPane.showMessageDialog(this.frmIceRunnerMap, "...");

2. Stop using "magic numbers"

On your GitHub repo (and even in this part of the code), there are some bare numbers. I tried to understand them but I can't. In map.java:

for(int i = 0; i < 169; i++){
    JLabel label = new JLabel();
    label.setIcon(new ImageIcon(map.class.getResource("/team10/Empty.png")));
}

It would be nice to name those values, like boardSize. You can even create a separate class with those constants and statically import them into the classes where they are used.

3. Too much nested logic

If you are nesting for inside for inside if, etc., you are making the code hard to understand. And it will be hard for you to write tests for your classes. The solution is point 1 - dividing the code. Maybe even add some additional classes. At the beginning, that should be enough.

Good luck :)

Maybe you can push some of your code into a text file and use it to generate code?

template.txt:

local Map = IceRunner.Map
local MapKit = IceRunner.MapKit
local Up = IceRunner.MapTools.UpExtent
local Down = IceRunner.MapTools.DownExtent
...
local Goal = Map.Goal

local map = Map('{'
name = "{0}",
level = {1},
kit = MapKit('{'size = {2}, walls = {3} '}')
'}')

(Note: MessageFormat treats { and } as placeholder delimiters, so literal braces in the template must be escaped by wrapping them in single quotes, as above.)

The MessageFormat class would replace the placeholders with your content:

String template = readFile("template.txt"); // please don't use hardcoded file names
String codePart = MessageFormat.format(
        template,                           // template with placeholders
        this.mapName.toUpperCase(),         // placeholder {0}
        String.valueOf(this.difficulty),    // placeholder {1}
        String.valueOf(this.mapSize),       // placeholder {2}
        String.valueOf(this.maxWalls));     // placeholder {3}
// numbers are passed as strings so MessageFormat doesn't apply
// locale-specific grouping (e.g. "1,000")
...
writer.println(codePart); // Lua code with replaced placeholders

NOTE: the code for reading the template file (readFile()) is not part of my suggestion; please write it on your own.
{}
October 22, 2018

### Do Marketplace Lending Platforms Offer Lower Rates to Consumers?

Over the past decade, firms using innovative technology--so-called fintech firms--have entered into various financial services markets. One particular set of entrants, marketplace lenders, have entered into consumer lending markets, using nontraditional data- and technology-intensive methods to originate loans to consumers.1 While the definition of marketplace lending has evolved over time, the basic concept has remained the same. These firms tout an easy online application, overall loan convenience, innovative underwriting, and low costs. Two of the largest marketplace lenders, Prosper and Lending Club, are often referred to as peer-to-peer (P2P) lenders, because they have added the innovation of funding loans by investors. Prosper and Lending Club have grown significantly, accounting for almost $9 billion in originations in 2017.

Much of the research surrounding marketplace lenders focuses on topics such as technological innovation, big data analyses, two-sided markets, and information gathering.2 However, the potential reduction in loan rates to borrowers remains elusive and has not been well documented. This note analyzes interest rates of loans from the two largest P2P platforms, Lending Club and Prosper, to observe their potential benefits to borrowers.

A proper comparison of loan rates can be challenging, because the appropriate traditional loans, used as a base comparison, are not clearly delineated, and because loan rates vary by consumer characteristics. I argue that credit card loans are the most appropriate traditional loan to compare with the personal unsecured loans originated by Lending Club and Prosper. My analysis focuses on borrowers' credit scores as the most prominent factor that determines loan rates.

#### Some Research on Fintech Pricing

A nascent literature on fintech lending has broached the topic of loan pricing, but little has been done on the rates of such loans relative to other products controlling for credit risks. For example, Demyanyk and Kolliner (2014) compare Lending Club interest rates to average credit card rates. Using Lending Club internal credit ratings, they find that only the safest borrowers systematically receive lower rates relative to average credit card rates. They also find that higher credit risk borrowers do not systematically receive lower rates. However, their analysis does not account for the distribution of credit risk in credit card markets, because the average credit card rate does not account for credit rating.

The fintech pricing research that controls for risk characteristics either considers other types of credit markets or draws inferences from aggregated data. Buchak, Matvos, Piskorski, and Seru (2017) study fintech pricing in residential lending markets. They find that fintech interest rates are not significantly different from traditional lender rates. De Roure, Pelizzon, and Tasca (2016) compare interest rates between Auxmoney, a German marketplace lender, and traditional German banks. They find that marketplace interest rates are higher than bank loan rates, especially credit card and overdraft interest rates. They use state-level aggregated data in their comparison, so their analysis relies on the similarity of risk distributions. Finally, Mach, Carter, and Slattery (2014) find that rates on P2P-originated small business loans are about two times higher than rates for small business loans from traditional sources.
They note that small business P2P borrowers might not qualify for bank loans.

#### Data

I use interest rate data from three sources. For P2P interest rates, I use loan origination data from the two largest marketplace lenders, Prosper and Lending Club. Data from both platforms provide information on borrower characteristics, including credit history and credit scores.

For credit card interest rates, I use data from Mintel Comperemedia (Mintel), which records interest rates presented in credit card mail offers extended to households. The Mintel data include credit attributes of offer recipients merged from TransUnion. These data measure various characteristics of the offer and the characteristics of the household that received the offer, including the credit score. The Mintel data only report the annual percentage rate (APR) for each offer. I only consider credit card offers with no annual fees to improve the validity of interest rate comparisons.

Most borrowers on both P2P platforms state that loans are obtained to consolidate debt. For example, about 77 percent of loans originated on both platforms in 2017 are debt consolidation loans.3 While debt consolidation could arise from various other sources, such as auto or home equity lines, loans from these sources are secured and, hence, considerably different than unsecured credit.

Other information also supports the comparability between credit cards and P2P loans. Borrowers from Prosper and Lending Club have average installment loans that are greater than the average originated loan amount on both platforms. At origination, P2P borrowers hold average installment loan balances of around $35,000, while their average loan amount is about $15,000. Therefore, consumers are unlikely to be paying off their installment loans with P2P loans. P2P borrowers also have, on average, more credit cards and higher credit card utilization rates. Comparing these borrowers to borrowers in the Federal Reserve Bank of New York's Consumer Credit Panel/Equifax (FRBNY CCP), we find that P2P borrowers have, on average, eight bank cards, while FRBNY CCP borrowers have, on average, four bank cards. While not conclusive, this information points to consumers with a higher-than-average number of credit cards and higher revolving balances who are trying to refinance their credit card debt.4

A comparison of interest rates across various credit score products is problematic, because not all lenders use the same credit rating score. I create a crosswalk between the different credit scores by tying bins using these scores to their respective prime and subprime thresholds.5 I separate the credit scores into 9 bins. Bin 1 is placed just above the subprime threshold and bin 4 starts at the prime threshold for the prospective credit score. The rest of the bins are evenly spaced across the range for each credit score system.6 In other words, bins 1–3 are evenly spaced through near-prime scores and bins 5–9 through prime scores.7

#### Rate Comparison

My analysis starts by looking at average interest rates across mapped credit score bins. Figures 1 and 2 show average interest rates for Lending Club and Prosper loans along with average credit card interest rates for households from Mintel for the fourth quarters of 2016 and 2017. Average rates for each platform are calculated for nine credit score bins. Mintel average rates are calculated for similar credit score bins. I consider two quarters to show the stability of loan pricing.
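Not from the note: a minimal sketch of the bin-and-compare computation described above, assuming pandas; the tables, column names, and bin edges are all illustrative (the actual thresholds come from confidential data):

import pandas as pd

# Hypothetical inputs: one row per P2P loan / per credit card offer.
loans = pd.DataFrame({"score": [660, 700, 745], "rate": [0.17, 0.12, 0.09]})
offers = pd.DataFrame({"score": [655, 710, 750], "apr": [0.21, 0.18, 0.15]})

# Illustrative crosswalk: bins 1-3 span near-prime scores, 4-9 span prime.
edges = [620, 640, 660, 680, 700, 720, 740, 760, 780, 850]
labels = list(range(1, 10))
loans["bin"] = pd.cut(loans["score"], bins=edges, labels=labels)
offers["bin"] = pd.cut(offers["score"], bins=edges, labels=labels)

# Average card offer rate per bin, then each loan's spread vs. its bin average.
bin_avg = offers.groupby("bin", observed=True)["apr"].mean()
loans["spread"] = loans["rate"] - loans["bin"].map(bin_avg)
print(loans)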
##### Figure 1: Average Interest Rates by Credit Score in 2016Q4
##### Figure 2: Average Interest Rates by Credit Score in 2017Q4

The teal line represents average credit card offer interest rates from Mintel. Lending Club and Prosper average rates are represented by green and purple lines, respectively.

Binning the data by credit score, the data indicate that, on average, the rates that borrowers receive are lower than credit card offer rates. For all credit score bins, Lending Club and Prosper average rates are substantially lower than average credit card rates. Relative to Lending Club, Prosper tends to give somewhat higher interest rates on low credit score loans and lower interest rates on higher credit score loans.

Interest rates do not capture the entire cost of the loan. APR may be a more appropriate measure of the total cost of credit. For most loans, Lending Club and Prosper assess a 5 percent origination fee. The origination fee is substantially lower for the safest borrowers. Figures 3 and 4 compare the APRs for Lending Club and Prosper to credit card interest rates for the fourth quarters of 2016 and 2017.8

##### Figure 3: Average APR by Credit Score in 2016Q4
##### Figure 4: Average APR by Credit Score in 2017Q4

The APRs indicate a similar story. As in figures 1 and 2, average APRs for Lending Club loans are lower than credit card rates for all levels of credit scores. Average APRs for Prosper loans are also lower than credit card rates for credit scores above bin 1 in 2016 and above bin 2 in 2017.

The higher average Prosper APR for lower credit score borrowers does not necessarily indicate that these borrowers are paying more for the loan or that the loan is not worthwhile. I am only comparing average rates, so I do not observe the borrower's actual choice set and interest rates.9 For example, high APR Prosper borrowers could be facing high credit card rates. Moreover, these customers may be consolidating debt from accounts that are not general purpose credit cards, such as charge cards or store cards. Borrowers may also be more concerned with cash flow than the overall cost of credit. Even though the APR is higher, monthly payments are reduced by the lower interest rate or the terms of the loan. For example, a closed-end five-year loan can result in a lower monthly payment compared to simply paying the minimum credit card payments.10 Finally, the crosswalk between different credit scores is imperfect. Average credit card rates in my analysis may not reflect true averages for given credit score bins.

In order to get a better sense of interest rate and APR differences between credit cards and P2Ps over time, I calculate spreads for each loan relative to average credit card offer rates within credit score bins. For every loan originated by Prosper or Lending Club, I calculate a spread by using the credit card rate that matches the same credit score bin as the loan. Average spreads are calculated for each platform in each quarter. Spreads relative to APR are calculated in the same manner. Figures 5 and 6 show average quarterly interest rate and APR spreads for each platform.

##### Figure 5: Interest Rate Spread
##### Figure 6: Annual Percentage Rate Spread

Both figures show that average spreads relative to Mintel offer rates have been considerable and consistently negative over time. For Lending Club, average interest rate and APR spreads ranged from around negative 400 to negative 800 basis points and from around negative 100 to negative 450 basis points, respectively, over the past couple of years.
For Prosper, average interest rate and APR spreads ranged from around negative 250 to over negative 600 basis points and from around 100 to negative 300 basis points, respectively, over the past couple of years.

#### Conclusion and Discussion

This note provides evidence that credit card borrowers may receive potentially significant interest rate reductions as a result of obtaining loans from online P2P platforms. Relative to credit card rates, I find considerable interest rate and APR savings from debt consolidation through P2P lenders.

Research on credit cards has explored why interest rates in credit card markets tend to be more rigid than in other lending markets. Research from Ausubel (1991), Calem and Mester (1995), and Stango (2000) considers the question of why credit card interest rates are rigid. Ausubel (1991) attributes rate rigidity to consumers not properly considering the probability that they will pay interest on their balances. Calem and Mester (1995) find search and switching costs as well as adverse selection as sources of rate rigidity. Stango (2000) points to quality competition, where consumers are less sensitive to interest rates.

More recent studies discuss changes in credit card pricing. Furletti (2003) details the evolution of credit card pricing from 1986 through 2001. He finds that credit card pricing became more complex and risk-based, moving from a single interest rate for all cards to risk-based interest rates with various fees. The prevalence of annual fees fell substantially over this time period.11

These studies all give some potential explanations as to why credit card interest rates have not adjusted to entries by P2P lenders. Ultimately, this question should focus on the degree of competition between online platforms and credit cards. Even though borrowers seem to be switching from credit cards to P2P loans, the anecdotal evidence presented in this note indicates that competition is muted. P2P lenders appear to be able to take advantage of rate rigidities to attract customers, but rate spreads need to be large enough to get borrowers to switch. More substantive evidence needs to be collected in order to conclusively determine the degree of substitution and competition.

#### References

Agarwal, Sumit, Souphala Chomsisengphet, Neale Mahoney, and Johannes Stroebel (2015). "Regulating Consumer Financial Products: Evidence from Credit Cards," Quarterly Journal of Economics, vol. 130 (February), pp. 111–64.

Ausubel, Lawrence M. (1991). "The Failure of Competition in the Credit Card Market," American Economic Review, vol. 81 (March), pp. 50–81.

Buchak, Greg, Gregor Matvos, Tomasz Piskorski, and Amit Seru (2017). "Fintech, Regulatory Arbitrage, and the Rise of Shadow Banks," Stanford Graduate School of Business Working Paper No. 3511. Stanford, Calif.: Stanford GSB, March.

Calem, Paul S., and Loretta J. Mester (1995). "Consumer Behavior and the Stickiness of Credit-Card Interest Rates," American Economic Review, vol. 85 (December), pp. 1327–36.

de Roure, Calebe, Loriana Pelizzon, and Paolo Tasca (2016). "How Does P2P Lending Fit into the Consumer Credit Market?" Deutsche Bundesbank Discussion Paper No. 30/2016. Frankfurt: Deutsche Bundesbank, August, https://www.bundesbank.de/Redaktion/EN/Downloads/Publications/Discussion_Paper_1/2016/2016_08_12_dkp_30.pdf%3F__blob%3DpublicationFile.

Demyanyk, Yuliya, and Daniel Kolliner (2014). "Peer-to-Peer Lending Is Poised to Grow," Economic Trends. Cleveland: Federal Reserve Bank of Cleveland, August, https://www.clevelandfed.org/newsroom-and-events/publications/economic-trends/2014-economic-trends/et-20140814-peer-to-peer-lending-is-poised-to-grow.aspx.

Elliehausen, Gregory, and Simona M. Hannon (2017). "The Credit Card Act and Consumer Finance Company Lending," Finance and Economics Discussion Series 2017-072. Washington: Board of Governors of the Federal Reserve System, June, https://doi.org/10.17016/FEDS.2017.072.

Furletti, Mark (2003). "Credit Card Pricing Developments and Their Disclosure," Payment Cards Center Discussion Paper. Philadelphia: Federal Reserve Bank of Philadelphia, January, https://www.philadelphiafed.org/-/media/consumer-finance-institute/payment-cards-center/publications/discussion-papers/2003/creditcardpricing_012003.pdf?la=en.

Jagtiani, Julapa, and Catharine Lemieux (2017). "Fintech Lending: Financial Inclusion, Risk Pricing, and Alternative Information," Working Paper No. 17-17. Philadelphia: Federal Reserve Bank of Philadelphia, July, https://www.philadelphiafed.org/-/media/research-and-data/publications/working-papers/2017/wp17-17.pdf.

Mach, Traci L., Courtney M. Carter, and Cailin R. Slattery (2014). "Peer-to-Peer Lending to Small Businesses," Finance and Economics Discussion Series 2014-10. Washington: Board of Governors of the Federal Reserve System, January, https://www.federalreserve.gov/pubs/feds/2014/201410/201410pap.pdf.

Stango, Victor (2000). "Competition and Pricing in the Credit Card Market," Review of Economics and Statistics, vol. 82 (August), pp. 499–508.

U.S. Department of the Treasury (2016). "Opportunities and Challenges in Online Marketplace Lending," white paper. Washington: Treasury Department, May, https://www.treasury.gov/connect/blog/Documents/Opportunities_and_Challenges_in_Online_Marketplace_Lending_white_paper.pdf.

1. In a broader sense, fintech firms are active in wealth management, payments, small business lending, and many other financial markets.
2. See, for example, U.S. Department of the Treasury (2016).
3. Lending Club and Prosper loan data report the purpose of the loans. Debt consolidation loans are reported as either debt consolidation or credit card loans.
4. Consumers could be refinancing auto, education, and mortgage debt, but this seems unlikely because these products are secured loans or carry government guarantees, with rates generally much lower than credit card rates and, hence, lower than marketplace loan rates. On a side note, commercial banks also provide unsecured personal loans. Detailed data from commercial banks are not available. However, as of September 2017, 33 of the top 100 banks advertise unsecured personal loans on their websites. Seventeen of these institutions advertise fast loan approval. The average minimum advertised interest rate for the safest borrowers is 8.44 percent. Only 5 of these commercial banks offer best rates of 6 percent or lower, comparable to the best rates from Lending Club or Prosper.
5. The Lending Club internal credit rating ranges from A to G with 5 subcategories (a total of 35 credit ratings). The Prosper internal credit rating has 6 ratings. Rates and fees vary over the credit ratings, sometimes significantly.
6. Lending Club does not offer loans to subprime borrowers. I do not include Prosper loans with subprime credit scores.
7. Credit card rates from confidential regulatory data are comparable using the same credit scores.
8. The Prosper data include the APR of each loan. Lending Club APRs are approximated assuming that all loans are assessed a 5 percent origination fee (only the high credit score borrower APR is overstated in this case).
9. The same is true when comparing rates across both platforms.
10. Credit card companies have varying formulas for calculating the monthly minimum. Many institutions simply use 2 percent of outstanding balances. With this formula, payoff periods tend to be much longer and monthly payments higher.
11. Other more recent papers point to interest rate rigidities. Agarwal et al. (2015) find the regulatory limits on credit card fees from the 2009 Credit Card Accountability Responsibility and Disclosure (CARD) Act did not result in offsetting increases in interest charges. Elliehausen and Hannon (2017) note that CARD Act restrictions on risk-based pricing resulted in a significant reduction of credit cards for high-risk customers.
{}
# The heat loss from a fin is 6 W. The effectiveness and efficiency of the fin are 3 and 0.75, respectively. The heat loss (in W) from the fin, keeping the entire fin surface at base temperature, is ________.

## Short Fin MCQ Question 1 Detailed Solution

Concept:

$$\eta = \frac{\text{actual heat transfer}}{\text{heat transfer keeping the entire fin at base temperature}}$$

Calculation:

Heat loss (Q) = 6 W, effectiveness (ε) = 3, efficiency (η) = 0.75

$$0.75 = \frac{6}{\text{heat transfer keeping the entire fin at base temperature}}$$

Heat transfer keeping the entire fin surface at base temperature = 8 W

# A 3 cm long, 2 mm × 2 mm rectangular cross-section aluminium fin [k = 237 W/m°C] is attached to a surface. If the fin efficiency is 65%, the effectiveness of this single fin is:

1. 30
2. 24
3. 8
4. 39

Option 4 : 39

## Short Fin MCQ Question 2 Detailed Solution

Concept:

The relation between the efficiency and the effectiveness of a fin is given by

$$\frac{\eta }{\varepsilon } = \frac{\text{cross-section area of the fin } (A_c)}{\text{surface area of the fin } (A_s)}$$

Calculation:

Given: length of the fin L = 3 cm = 30 mm, side of the square cross-section a = b = 2 mm, efficiency of the fin η = 65% = 0.65, thermal conductivity k = 237 W/m°C.

A_s = P × L = 2 × (a + b) × L = 2 × (2 + 2) × 30 = 240 mm²
A_c = a × b = 2 × 2 = 4 mm²

Therefore,

$$\frac{0.65}{\varepsilon} = \frac{4}{240} \Rightarrow \varepsilon = 39$$

Important Points
• The Biot number of a fin with good effectiveness should be less than 1.
• Fin material should have convection resistance higher than conduction resistance.

# Which one of the following configurations has the highest fin effectiveness?

1. Thin, closely spaced fins
2. Thin, widely spaced fins
3. Thick, widely spaced fins
4. Thick, closely spaced fins

## Answer (Detailed Solution Below)

Option 1 : Thin, closely spaced fins

## Short Fin MCQ Question 3 Detailed Solution

Explanation:

Effectiveness (ε) is defined as the ratio of the heat transfer rate with a fin to the heat transfer rate without the fin:

$$\epsilon = \frac{\dot Q_{\text{with fin}}}{\dot Q_{\text{without fin}}}$$

For a long fin,

$$\epsilon = \sqrt{\frac{kP}{hA_c}}$$

where k = thermal conductivity of the fin, P = perimeter of the fin, h = heat transfer coefficient, and A_c = cross-section area of the fin.

From the above formula, we can conclude:

Case 1: If $\frac{P}{A_c}$ increases, the effectiveness of the fin increases.
• A_c must be small (thin fins) and the fins must be closely spaced (but not too close).
• If the fins are too close together, they will obstruct the flow of air, which will decrease the effectiveness of the fins.

Case 2: If k increases, the effectiveness of the fin increases.
• If k is large, the temperature drop along the length will be less.
• And the temperature drop between the fin and the surroundings will be more, so heat transfer will be more.

Case 3: If h decreases, the effectiveness of the fin increases.
• Fins are more effective under free convection (h is less).

# The effectiveness of a fin will be maximum in an environment with

1. Free convection
2. Forced convection
3. Radiation
4. Convection and radiation

## Answer (Detailed Solution Below)

Option 1 : Free convection

## Short Fin MCQ Question 4 Detailed Solution

How effectively a fin can enhance heat transfer is characterized by the fin effectiveness ε, which is the ratio of the fin heat transfer to the heat transfer without the fin. The effectiveness of the fin is given by

$$\varepsilon_{fin} = \sqrt{\frac{kP}{hA}}$$

Hence, $$\varepsilon_{fin} \propto \frac{1}{\sqrt{h}}$$

Since h_forced conv > h_free conv, the effectiveness of the fin will be greater in free convection than in forced convection.

# A plate fin of length L = 1.5 cm and thickness 2 mm has an efficiency of ____ (if k = 210 W/m-K, h = 285 W/m²K)

1. 84.1%
2. 87.2%
3. 89.9%
4. 92.4%

Option 3 : 89.9%

## Short Fin MCQ Question 5 Detailed Solution

Concept:

For a plate fin with width >> thickness, $$\frac{P}{A}\approx\frac{2}{t}$$

Corrected length: $$L_c = L + \frac{t}{2}$$

$$\text{Efficiency: } \eta = \frac{\tanh(mL_c)}{mL_c}$$

Calculation:

Given:

$$L_c = L + \frac{t}{2} = 1.5 + \frac{0.2}{2} = 1.6\ \text{cm}$$

$$m = \sqrt{\frac{hP}{kA_c}} = \sqrt{\frac{2h}{kt}} = \sqrt{\frac{285 \times 2}{210 \times 2 \times 10^{-3}}}$$

$$mL_c = 1.6 \times 10^{-2}\sqrt{\frac{2 \times 285}{210 \times 2 \times 10^{-3}}} = 0.589$$

$$\eta = \frac{\tanh(0.589)}{0.589} = 89.9\%$$

# A copper rod of 12 mm diameter and 60 mm length is to be used as a fin with an insulated tip. For this fin the efficiency was found to be 60%; the effectiveness of the fin is:

1. 12
2. 6
3. 5.4
4. 3.7

Option 1 : 12

## Short Fin MCQ Question 6 Detailed Solution

Concept:

For a fin, the relation between efficiency (η) and effectiveness (ε) is:

$$\eta \times A_{surface} = \varepsilon \times A_{cross\text{-}sectional}$$

Calculation:

$$\varepsilon = \frac{\eta \times A_{surface}}{A_{cross\text{-}sectional}}, \qquad \frac{A_{surface}}{A_{cross\text{-}sectional}} = \frac{\pi D L}{\pi D^2/4} = \frac{4L}{D}$$

$$\varepsilon = \frac{4L}{D} \times \eta = \frac{4 \times 60}{12} \times 0.6 = 12$$

# Extended surfaces are used to increase the rate of heat transfer. When the convective heat transfer coefficient h = mk (where m is the fin parameter, $m = \sqrt{hP/kA}$), the addition of an extended surface will

1. Increase the rate of heat transfer
2. Decrease the rate of heat transfer
3. Not increase the rate of heat transfer
4. Increase the rate of heat transfer when the length of the fin is very large

## Answer (Detailed Solution Below)

Option 3 : Not increase the rate of heat transfer

## Short Fin MCQ Question 7 Detailed Solution

Explanation:

Fins: fins are extended surfaces used to increase the heat transfer from a surface by increasing the effective surface area.

The effectiveness ε of a fin is given by

$$\epsilon = \frac{\text{heat transfer rate with fin}}{\text{heat transfer rate without fin}} = \sqrt{\frac{kP}{hA}}$$

where k = thermal conductivity, P = perimeter of fin, h = heat transfer coefficient and A = cross-sectional area.

The installation of a fin on a heat-transferring surface increases the heat transfer area, but it is not necessary that the rate of heat transfer would increase.
For long fins, the rate of heat loss from the fin is given by:

$$Q = \sqrt{hPkA}\,\theta_o = kA\sqrt{\frac{hP}{kA}}\,\theta_o = kAm\theta_o$$

When $\frac{h}{mk}=1$, i.e. h = mk, Q = hAθ_o, which is equal to the heat loss from the primary surface with no extended surface. Thus, when h = mk, an extended surface will not increase the heat transfer rate from the primary surface, whatever its length.

When h/mk > 1, Q < hAθ_o, and hence adding secondary surfaces reduces the heat transfer; the added surface acts as insulation.

When h/mk < 1, Q > hAθ_o, and hence the extended surface increases the heat transfer.

# A fin will be effective only when the Biot number is:

1. Less than one
2. Equal to one
3. More than one
4. Infinite

## Answer (Detailed Solution Below)

Option 1 : Less than one

## Short Fin MCQ Question 10 Detailed Solution

Concept:

The Biot number provides a way to compare the conduction resistance within a solid body to the convection resistance external to that body (offered by the surrounding fluid) for heat transfer:

$$Bi = \frac{\text{conduction resistance}}{\text{convection resistance}} = \frac{L/(kA)}{1/(hA)} = \frac{hL}{k}$$

The Biot number Bi is the ratio of the resistance within the fin (resistance to conduction) to the resistance at the surface (resistance to convection). Fins are designed for Biot numbers less than 1.
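Not part of the original solutions: a short numerical cross-check of the worked answers above, in plain Python (all values taken from the questions):

import math

# Q1: heat loss with the whole fin at base temperature = Q_actual / efficiency
q_actual, eta = 6.0, 0.75
print(q_actual / eta)              # -> 8.0 W

# Q2: effectiveness from efficiency via eta / eps = A_c / A_s
L, a = 30e-3, 2e-3                 # fin length and square side, in metres
A_s = 2 * (a + a) * L              # perimeter x length
A_c = a * a
print(0.65 * A_s / A_c)            # -> 39.0

# Q5: plate-fin efficiency, eta = tanh(m * Lc) / (m * Lc)
h, k, t, Lf = 285.0, 210.0, 2e-3, 1.5e-2
Lc = Lf + t / 2
m = math.sqrt(2 * h / (k * t))
mLc = m * Lc
print(math.tanh(mLc) / mLc)        # -> ~0.899

# Q6: effectiveness of a cylindrical fin, eps = (4L / D) * eta
print(4 * 60 / 12 * 0.6)           # -> 12.0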
{}
## Monday, August 23, 2010

### Marc Morano's alarmist audience of wild animals

Yesterday, Marc Morano of climatedepot.com was invited to give a talk at the AREDAY conference of climate alarmists in Aspen. James Cameron, the king of hypocrites, canceled his promised debate with Marc Morano; you can't believe a word that these people ever say.

The remainder of Morano's talk went as expected: the alarmist participants of the conference - and the moderator - behaved like wild animals. To make things juicier, one female participant instructed Morano to drive his car into a garage with the engine running and then close the doors. Because this lady didn't possess any functioning brain in her skull, Morano didn't manage to explain to her the difference between carbon monoxide, which is toxic, and carbon dioxide, which is harmless and essential for life on Earth.

It's always controversial to mention that someone is thinking just like the Nazis - except that in some cases, the similarity is just so overwhelming that any attempt to deny the analogy is a proof of a huge bias. You know, to close someone in an isolated room while keeping the motors running is exactly something that the warmists' ideological predecessors were "successfully" doing in Auschwitz around 1940. The predecessors wanted to achieve a "final solution" of their "Jewish problem" while their heirs want to find a "final solution" to the "global warming problem". Needless to say, the two "problems" are exactly equally bogus and the proposed "solutions" may lead to exactly the same amount of suffering and death.

#### 2 comments:

1. Have you seen this? http://www.science.org.au/reports/climatechange2010.pdf They "answer" questions like the cooling since 1998, why CO2 is so important, solar activity, etc. They say they made this pamphlet to finally shut up all deniers :P I'd like to hear your thoughts about it. Greetings, DrGEN

2. Does it come in something other than pdf? I'm allergic to Adobe since they started moving their operations out of California (putting many people out of work) - without so much as a word in protest against the global warming acts. If you have a nice html site with your nondescript science dot org stuff, I'll give it a thump.
{}
# Hill of many wonders: Chitrakoot

December 24, 2017

Chitrakoot is a town and a nagar panchayat in the Satna district in the state of Madhya Pradesh, India. But Chitrakoot has also been established as one of the districts of the state of Uttar Pradesh, under the name Chitrakoot Dham (Karwi). It is therefore divided between the states of Madhya Pradesh and Uttar Pradesh. Chitrakoot is a town of religious, cultural, historical and archaeological importance, situated in the Baghelkhand region. With a number of temples and sites mentioned in Hindu scriptures, it attracts crowds throughout the year.

### Mythological Importance of Chitrakoot

Chitrakoot means the 'Hill of many wonders'. Chitrakoot falls in the northern Vindhya range of mountains, spread over the states of Uttar Pradesh and Madhya Pradesh. The Chitrakoot Parvat Mala includes the famous religious mountains Kamad Giri, Hanuman Dhara, Janki Kund, Lakshman Pahari, and Devangana.

According to the Valmiki Ramayan, Lord Ram, Goddess Sita and his brother Shri Lakshman spent a few months of their fourteen years of banishment here. It is said that all gods and goddesses came to Chitrakoot when Rama performed the shraddha ceremony of his father to partake of the shuddhi. Tulsidas, the saint-poet of Hindi, spent quite some part of his life here worshipping Ram and craving his darshan.

Shri Bharat, the brother of Shri Ram, was asked by his ministers to take his seat upon the throne of Ayodhya. Bharat refused and came to Chitrakoot to meet Shri Ram. At a place called Bharat Milap, Bharat met Lord Ram and requested him to return to Ayodhya and rule. But Lord Ram would not agree to that, owing to the commitment given to his father, Shri Dasharath. Bharat returned to Ayodhya, installed the sandals on the throne and, living in retirement, carried on the government as their minister. Lord Rama later decided, for two reasons, to leave Chitrakoot.

### Places to visit in Chitrakoot

Ramghat

The ghats that line the Mandakini river are called Ramghat. During the exile period, Rama, Lakshmana and Sita took baths here, and are believed to have appeared before the poet Tulsidas.

Bharat Milap

The Bharat Milap temple is located here, marking the spot where Bharata is said to have met Rama to persuade him to return to the throne of Ayodhya. It is said that the meeting of the four brothers was so emotional that even the rocks and mountains of Chitrakut melted. Footprints of Lord Rama and his brothers were imprinted on these rocks and are still present today, seen in the Bharat Milap Mandir. The Bharat Milap Mandir is situated beside Kamadgiri mountain, on the circumambulation path of Kamadgiri.

Sati Anasuya ashrama

The Sati Anasuya ashrama is located further upstream, 16 km from the town, set amidst thick forests that resound to the melody of birdsong all day. It was here that Atri muni, his wife Anasuya and their three sons (who were the three incarnations of Brahma, Vishnu and Mahesh) lived and are said to have meditated.

Gupt-Godavari

Gupt-Godavari is situated at a distance of 18 km from town. Here is a pair of caves, one high and wide with an entrance through which one can barely pass, and the other long and narrow with a stream of water running along its base. It is believed that Rama and Lakshmana held court in the latter cave, which has two natural throne-like rocks.

Hanuman Dhara

Located on a rock face several hundred feet up a steep hillside is a spring, said to have been created by Rama to assuage Hanuman when the latter returned after setting Lanka afire.
A couple of temples commemorate this spot, which offers a panoramic view of Chitrakut.

Bharat Koop

Bharat Koop is where Bharat stored holy water collected from all the places of pilgrimage in India. It has a small well and a temple situated next to it. The water in the well remains pure and clean round the year. According to the story, when Bharat came to Chitrakoot to convince Shri Ram to come back to Ayodhya and rule it, he brought the waters of five rivers along with him for Lord Shri Ram's coronation. But Lord Ram told Bharat that he did not wish to break the vow given to King Dasharath of coming back to Ayodhya only after completing the vanvas of 14 years. Hence, Bharat asked Rishi Vashisht how to use the water of the five rivers that he had brought along with him for Lord Ram's rajya abhishek. Rishi Vashisht advised him to put all the water, along with the flowers he had brought for the rajya abhishek, in a specified well near Chitrakoot. He explained that the water in this well would remain pure and would be revered till the end of time. Hence, upon the advice of Rishi Vashisht, Bharat followed his instructions, and thus this place was named Bharat Koop.

Ram Shaiya

This place is located on the way between Chitrakoot and Bharat Koop, in an isolated location. It is the place where Shri Ram, Sitaji and Laxmanji used to sleep and rest in the evening after wandering around the forests of Chitrakoot. It is located between mountains with no town nearby, with absolute silence in the environment. It has a large flat-bed rock which bears imprints of Shri Rama, Lakshman, and Sita Mata. There is a tree above it, and the entire place is walled by a brick structure on the sides to preserve it.

Sphatic Shila

A few kilometres beyond Janaki Kund is another densely forested area on the banks of the Mandakini. One can climb up to the boulder, which bears the footprints of Rama and Sita. According to the Ramcharitmanas, Lord Rama was sitting on this shila (rock) with Lakshman when Hanuman returned from Lanka after setting it afire and confirmed to Lord Rama the news that Sita had been imprisoned in the Ashok Vatika at Lanka.

### How to Reach Chitrakoot

Chitrakoot is well connected to all the major cities in India by train. The nearest railway station is Karwi, and the next prominent station is Chitrakoot Dham. A network of highways and other roads connects this historical place to all the other famous towns of the country. Here is how you can reach Chitrakoot.

The nearest airports are Allahabad (135 km from Chitrakoot), Khajuraho (175 km) and Varanasi (275 km). These airports have daily flight services to Delhi.

Karwi is the nearest railway station, 8 km away from Chitrakoot. The second nearest station is Chitrakoot Dham. The station lies on the Jhansi-Manikpur main line and is well connected to all prominent Indian cities. You can also get down at Manikpur Junction, which is 35 km away from Chitrakoot, or at Satna Junction, which is 75 km away.

State-owned buses are available from Allahabad, Banda, Kanpur, Satna and Jhansi to reach Chitrakoot. One can also take a taxi from Delhi airport to Chitrakoot.
{}
# Will DirectX 10.1 graphics card work with DirectX 11?

I just started learning DirectX, but I have a problem with its version. I have an ATI Radeon HD 4830; this card supports only DirectX 10.1, but when I ran dxdiag it said that I have DirectX 11. Then I realized it's probably talking about what version the OS (Windows 7) supports. Now, I just ran some DirectX 11 benchmark and it worked. Does it mean that my graphics card is capable of running DX11, or does it just run in DX 10.1? I want to know this because I don't know what version of DX I should use for programming. I'd like to use DX 11 because of shaders, but I don't know whether it's safe. If it's not, I'd rather use DX9 with shaders, because I heard that DX 10 is buggy and I should use DX9 rather than 10 if DX 11 is not available. Or does DX 11 support older versions itself, so I can use DX 11 headers but limit myself to older features? So what is your opinion? Should I use DX 11 (code compiles, but I am not sure whether my card supports it), or DX 9? Thanks in advance.

## 1 Answer

You can target older graphics cards (as far back as the Direct3D 9 era) via the use of feature levels while using the Direct3D 11 API, yes. While I haven't heard Direct3D 10 called "buggy", I'd argue Direct3D 11 tends to be just all-around better, with various minor improvements to the API... with little to no reason to prefer 10 over 11. As a final note, Direct3D 11 requires Vista SP2 or later.

• Ok, thank you for your answer. What would you use, then? DX 10, DX 11, or just use DX 9 to make sure everything will work on older hardware, and use shaders instead of the fixed pipeline? Thank you again. – michy04 Jan 20 '13 at 10:19
• Ok, I'm satisfied with this answer, so I'm going to mark it as the answer. – michy04 Jan 20 '13 at 10:55
• I'd generally go with whichever I was more familiar with. D3D9 will let you target older systems, but D3D11 exposes more features of modern hardware, generally maps better to it, and lets you write "Modern UI Style" Windows 8 apps (strictly speaking using D3D11.1, whereas 9 is unsupported at best there.) If you absolutely need a tiebreaker as you have no preference for any of the above, D3D11 will put you in a better position going forward to pick up e.g. D3D12+ whenever those come about, whereas 9 will become less and less relevant. – MaulingMonkey Jan 20 '13 at 11:15
• Well, thank you once again. This seems like a pretty good reason to choose D3D11 over D3D9. – michy04 Jan 20 '13 at 11:24
• Check the Steam hardware surveys to see what your customers are running :) – Roy T. Jan 20 '13 at 13:08
{}
# Problem 375 Minimum of subsequences

Let $S_n$ be an integer sequence produced with the following pseudo-random number generator:
$$S_0=290797$$
$$S_{n+1}=S_n^2 \text{ mod } 50515093$$
Let $A(i, j)$ be the minimum of the numbers $S_i, S_{i+1}, \dots, S_j$ for $i \le j$.
Let $M(N) = \sum A(i, j)$ for $1 \le i \le j \le N$.
We can verify that $M(10) = 432256955$ and $M(10\,000) = 3264567774119$.
Find $M(2\,000\,000\,000)$.
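Not part of the problem page: a brute-force sketch in plain Python that reproduces the stated check values for small N. The O(N²) loop is far too slow for N = 2 000 000 000; the full problem needs something like a monotonic-stack "sum of subarray minima" contribution argument instead.

def M(N, s0=290797, mod=50515093):
    # Generate S_1 .. S_N from S_0.
    s, seq = s0, []
    for _ in range(N):
        s = s * s % mod
        seq.append(s)
    # Sum A(i, j) = min(S_i, ..., S_j) over all 1 <= i <= j <= N.
    total = 0
    for i in range(N):
        m = seq[i]
        for j in range(i, N):
            m = min(m, seq[j])   # running minimum = A(i, j)
            total += m
    return total

print(M(10))     # expected 432256955
print(M(10000))  # expected 3264567774119 (takes a while in pure Python)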
{}
## CryptoDB

### Ittai Abraham

#### Publications

Year Venue Title

2022 TCC
Broadcast is an essential primitive for secure computation. We focus in this paper on optimal resilience (i.e., when the number of corrupted parties $t$ is less than a third of the computing parties $n$), and with no setup or cryptographic assumptions. While broadcast with worst case $t$ rounds is impossible, it has been shown [Feldman and Micali STOC'88, Katz and Koo CRYPTO'06] how to construct protocols with expected constant number of rounds in the private channel model. However, those constructions have large communication complexity, specifically $O(n^2L+n^6\log n)$ expected number of bits transmitted for broadcasting a message of length $L$. This leads to a significant communication blowup in secure computation protocols in this setting. In this paper, we substantially improve the communication complexity of broadcast in constant expected time. Specifically, the expected communication complexity of our protocol is $O(nL+n^4\log n)$. For messages of length $L=\Omega(n^3 \log n)$, our broadcast has no asymptotic overhead (up to expectation), as each party has to send or receive $O(n^3 \log n)$ bits. We also consider parallel broadcast, where $n$ parties wish to broadcast $L$-bit messages in parallel. Our protocol has no asymptotic overhead for $L=\Omega(n^2\log n)$, which is a common communication pattern in perfectly secure MPC protocols. For instance, it is common that all parties share their inputs simultaneously at the same round, and verifiable secret sharing protocols require the dealer to broadcast a total of $O(n^2\log n)$ bits. Of independent interest, our broadcast is achieved by a *packed verifiable secret sharing*, a new notion that we introduce. We show a protocol that verifies $O(n)$ secrets simultaneously with the same cost of verifying just a single secret. This improves the state of the art by a factor of $n$.

2021 TCC
Secure computation enables $n$ mutually distrustful parties to jointly compute a function over their private inputs. In 1988 Ben-Or, Goldwasser, and Wigderson (BGW) demonstrated that any function can be computed with perfect security in the presence of a malicious adversary corrupting at most $t< n/3$ parties. After more than 30 years, protocols with perfect malicious security, with round complexity proportional to the circuit's depth, still require sharing a total of $O(n^2)$ values per multiplication. In contrast, only $O(n)$ values need to be shared per multiplication to achieve semi-honest security. Indeed, sharing $\Omega(n)$ values for a single multiplication seems to be the natural barrier for polynomial secret sharing-based multiplication. In this paper, we close this gap by constructing a new secure computation protocol with perfect, optimal resilience and malicious security that incurs sharing of only $O(n)$ values per multiplication, thus matching the semi-honest setting for protocols with round complexity that is proportional to the circuit depth. Our protocol requires a constant number of rounds per multiplication. Like BGW, it has an overall round complexity that is proportional only to the multiplicative depth of the circuit. Our improvement is obtained by a novel construction for *weak VSS for polynomials of degree-$2t$*, which incurs the same communication and round complexities as the state-of-the-art constructions for *VSS for polynomials of degree-$t$*.
Our second contribution is a method for reducing the communication complexity of any depth-1 sub-circuit to be proportional only to the size of the input and output (rather than the size of the circuit). This implies protocols with *sublinear communication complexity* (in the size of the circuit) for perfectly secure computation of important functions like matrix multiplication.

2017 PKC
2008 TCC
TCC 2021
{}
zgribestika 2022-01-17

How do you solve the initial value problem $\frac{dp}{dt}=10p\left(1-p\right)$, $p\left(0\right)=0.1$? Solve and show that $p(t)=\frac{1}{1+9e^{-10t}}$.

Jenny Sheppard, Expert

This is the so-called logistic equation, which occurs often in population dynamics and many other contexts. There's a trick which works for this particular equation and is much simpler than separation of variables (in my opinion): change variables to $y\left(t\right)=\frac{1}{p\left(t\right)}$. Then the nonlinear equation for p turns into an inhomogeneous linear equation for y, which can be solved immediately by the usual "homogeneous + particular solution" method (the homogeneous solution is an exponential, and the particular solution is a constant). Since this is tagged as homework, I'll let you have a go at the details yourself.

Philip Williams, Expert

The method we can use here is called separation of variables. Take all the $p$'s to one side and the $t$'s to the other, then integrate.

alenahelenash, Expert

From your comment, it looks like you have been able to integrate correctly, following Ragib's hint and Gourtaur's comment. But now your problem is (to finish the solution) to express p(t). The rest is simple algebra. Let me express p(t) in terms of t:
$\frac{p}{1-p}={e}^{10t+10c}={e}^{10t}\cdot {e}^{10c}=k\cdot{e}^{10t}$ (where $k={e}^{10c}$ is a new constant)
$⇒\frac{p}{\left(1-p\right)+p}=\frac{k{e}^{10t}}{1+k{e}^{10t}}$ (I applied $\frac{a}{b}=\frac{c}{d}⇒\frac{a}{b+a}=\frac{c}{d+c}$. You can also just multiply both sides by $\left(1-p\right)$, or cross-multiply and solve for p)
$⇒p=p\left(t\right)=\frac{1}{1+{k}^{\prime }{e}^{-10t}}$ (dividing the numerator and denominator of the fraction on the RHS by $k{e}^{10t}$ and writing ${k}^{\prime }=\frac{1}{k}$)
Now, using the condition $p\left(0\right)=0.1=\frac{1}{10}$, we get $\frac{1}{10}=\frac{1}{1+{k}^{\prime }}⇒{k}^{\prime }=9$, so $p(t)=\frac{1}{1+9e^{-10t}}$.
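Not part of the thread: a quick symbolic cross-check of the final answer, assuming a recent SymPy (the returned expression may be algebraically rearranged):

import sympy as sp

t = sp.symbols('t')
p = sp.Function('p')

ode = sp.Eq(p(t).diff(t), 10 * p(t) * (1 - p(t)))
sol = sp.dsolve(ode, p(t), ics={p(0): sp.Rational(1, 10)})

print(sp.simplify(sol.rhs))          # equivalent to 1/(1 + 9*exp(-10*t))
print(sp.limit(sol.rhs, t, sp.oo))   # -> 1, the logistic carrying capacity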
{}
## General usage

Note: the current version of Interpolations supports interpolation evaluation using index calls [], but this feature will be deprecated in a future release. We highly recommend function calls with () as follows.

Given an AbstractArray A, construct an "interpolation object" itp as

itp = interpolate(A, options...)

where options... (discussed below) control the type of interpolation you want to perform. This syntax assumes that the samples in A are equally spaced.

To evaluate the interpolation at position (x, y, ...), simply do

v = itp(x, y, ...)

Some interpolation objects support computation of the gradient, which can be obtained as

g = Interpolations.gradient(itp, x, y, ...)

or as

Interpolations.gradient!(g, itp, x, y, ...)

where g is a pre-allocated vector.

Some interpolation objects support computation of the hessian, which can be obtained as

h = Interpolations.hessian(itp, x, y, ...)

or

Interpolations.hessian!(h, itp, x, y, ...)

where h is a pre-allocated matrix.

A may have any element type that supports the operations of addition and multiplication. Examples include scalars like Float64, Int, and Rational, but also multi-valued types like RGB color vectors.

Positions (x, y, ...) are n-tuples of numbers. Typically these will be real-valued (not necessarily integer-valued), but can also be of types such as DualNumbers if you want to verify the computed value of gradients. (Alternatively, verify gradients using ForwardDiff.)

You can also use Julia's iterator objects, e.g.,

function ongrid!(dest, itp)
    for I in CartesianIndices(itp)
        dest[I] = itp(I)
    end
end

would store the on-grid value at each grid point of itp in the output dest. Finally, courtesy of Julia's indexing rules, you can also use

fine = itp(range(1,stop=10,length=1001), range(1,stop=15,length=201))

There is also an abbreviated Convenience notation.
{}
# Should I use time dilation or length contraction?

## Homework Statement

This is a problem that was in my Physics HW. Two powerless rockets are on a collision course. The rockets are moving with speeds of 0.800c and 0.600c and are initially ## 2.52 × 10^{12} ## m apart as measured by Liz, an Earth observer, as shown in Figure P1.59. Both rockets are 50.0 m in length as measured by Liz. (a) What are their respective proper lengths? (b) What is the length of each rocket as measured by an observer in the other rocket? (c) According to Liz, how long before the rockets collide? (d) According to rocket 1, how long before they collide? (e) According to rocket 2, how long before they collide? (f) If both rocket crews are capable of total evacuation within 90 minutes (their own time), will there be any casualties?

My doubt is about parts (d) and (e). I don't know whether I am supposed to apply the Lorentz time transformation to the value obtained in (c), or to calculate this time from the speed at which each rocket sees the other approaching and the length-contracted distance. I have found both answers on the internet.

## Homework Equations

## L = L_{0}\sqrt {1 - \frac {v^2} {c^2}} ##
## \Delta t' = \frac 1 {\sqrt {1 - \frac {v^2} {c^2}}} \Delta t ##
## V' = \frac {u - V_x} {1 - \frac {uV_x} {c^2}} ##

## The Attempt at a Solution

By using the mentioned equations, I obtained: (a) ## L_1 = 83.3 m ## and ## L_2 = 62.5 m ##. (b) ## L_1 = 27.0 m ## in the frame of rocket 2 and ## L_2 = 21.0 m ## in the frame of rocket 1. (c) ## \frac {\Delta S} {v_1 + v_2} = 6000 \text{ s} = 100 \text{ min} ##.

It is at part (d) that something goes wrong. My first approach was to use the length contraction observed by rocket 1 and divide it by the speed at which 1 sees 2 approaching: ## L = L_{0}\sqrt {1 - \frac {v^2} {c^2}} = 2.52 \times 10^{12} \times 0.6 = 1.512 \times 10^{12} \text{ m} ## and ## V' = \frac {u - V_x} {1 - \frac {uV_x} {c^2}} = \frac { 0.8c - ( - 0.6c)} {1 - \frac { (- 0.48c^2)} { c^2 }} = 0.945c ##. Dividing these results gives ## \frac {L} {V'} = 5333 \text{ s} = 88.9 \text{ min} ##. However, using ## \Delta t' = \frac 1 {\sqrt {1 - \frac {v^2} {c^2}}} \Delta t ##, where ## \Delta t' ## is Liz's time of 100 min, we obtain ## 100 \text{ min} = 1.667 \times \Delta t ## and ## \Delta t = 60 \text{ min} ##.

The same problem happens when I try to solve (e), and I've looked at several solutions on the internet, half of them solved the first way and half the second. Shouldn't these results agree? If not, why?

jbriggs444
Homework Helper
As with most relativity problems, the difficulty is with the relativity of simultaneity. According to Liz on Earth, the start event for both Liz and L1 is simultaneous. The end event is a collision and is naturally simultaneous for all parties involved. According to L1, the start event for L1 is not simultaneous with the start event for Liz.

Joao Victor Dantas
This makes sense. So 88.9 min would be the time it takes for the observer in rocket 1 to get to the point of collision from the start of HIS measurement, and 60 min would be the time it takes for this observer from the start of Liz's measurement, correct?

robphy
Homework Helper
Gold Member
Can you draw a position-vs-time graph of the problem?

jbriggs444
Homework Helper
This makes sense.
So 88.9 min would be the time it takes for the observer in rocket 1 to get to the point of collision from the start of HIS measurement, and 60 min would be the time it takes for this observer from the start of Liz's measurement, correct?

I am not sure that I understand your phrasing here. Consider that L1 has a stopwatch. He starts it at some point. And we are asked for its reading at the event of the collision. At what event does L1 start his stopwatch? I would suggest having him start it at the event that Liz considers to be simultaneous with the scenario start.

Maybe it would help to resolve the issue if you calculated the "contracted" distance that each ship travels and then the time to traverse this distance. Use the 100 min that you calculated as measured by Liz.
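To make the discrepancy concrete, here is a small numeric sketch of the two computations from the thread (variable names are illustrative). The two answers differ precisely because they implicitly use different start events, as explained above:

```python
c = 3.0e8                      # m/s
v1, v2 = 0.8, 0.6              # rocket speeds in Liz's frame, in units of c
L = 2.52e12                    # initial separation in Liz's frame, m

t_liz = L / ((v1 + v2) * c)    # part (c): 6000 s = 100 min

v_rel = (v1 + v2) / (1 + v1 * v2)          # velocity addition: ~0.946c
gamma1 = 1 / (1 - v1**2) ** 0.5            # gamma factor for rocket 1

t_contracted = (L / gamma1) / (v_rel * c)  # ~5330 s, ~88.9 min
t_dilated = t_liz / gamma1                 # 3600 s = 60 min

print(t_liz / 60, t_contracted / 60, t_dilated / 60)
```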
{}
MathSciNet bibliographic data
MR22857 (9,270b) 10.0X
Mordell, L. J. On some Diophantine equations $y^2 = x^3 + k$ with no rational solutions. Arch. Math. Naturvid. 49 (1947), no. 6, 143–150.
Links to the journal or article are not yet available.
For users without a MathSciNet license, Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews.
{}
# The speed of a molecule in a uniform gas at equilibrium is a random variable whose probability density function is given by

H 240 points

The speed of a molecule in a uniform gas at equilibrium is a random variable whose probability density function is given by
$$f(x) = \begin{cases} ax^2 e^{-bx^2}, & x \ge 0 \\ 0, & x < 0 \end{cases}$$
where $b = m/2kT$ and $k$, $T$, and $m$ denote, respectively, Boltzmann's constant, the absolute temperature of the gas, and the mass of the molecule. Evaluate $a$ in terms of $b$.

probability density molecule speed random variable

426.1k points

The function $f$ is a probability density function, so it must integrate to 1. To solve this integral, use integration by parts (with $u = x$ and $dv = xe^{-bx^2}\,dx$, so $v = -\frac{e^{-bx^2}}{2b}$):
$$\int_0^{\infty}ax^2 e^{-bx^2}\,dx=-\frac{axe^{-bx^2}}{2b} \, \bigg|_0^{\infty}+\frac{a}{2b}\int_0^{\infty}e^{-bx^2}\,dx$$
$$=0-0+\frac{a}{2b}\int_0^{\infty}e^{-bx^2}\,dx$$
The second integral is the Gaussian integral, $\int_0^{\infty}e^{-bx^2}\,dx=\frac{\sqrt{\pi}}{2\sqrt{b}}$, so
$$\frac{a}{2b}\cdot\frac{\sqrt{\pi}}{2\sqrt{b}}=1$$
Solving this for $a$ in terms of $b$ gives
$$a=\frac{4b^{3/2}}{\sqrt{\pi}}$$
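The normalization constant can be verified symbolically. A short SymPy check (illustrative; the `positive=True` assumptions are needed for the improper integral to converge):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
a, b = sp.symbols('a b', positive=True)

# Integrate the density over [0, oo) and force it to equal 1
total = sp.integrate(a * x**2 * sp.exp(-b * x**2), (x, 0, sp.oo))
print(sp.simplify(total))              # sqrt(pi)*a/(4*b**(3/2))
print(sp.solve(sp.Eq(total, 1), a))    # [4*b**(3/2)/sqrt(pi)]
```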
{}
# Use Divergence in a sentence

### Dictionary

DIVERGENCE [dəˈvərjəns, dīˈvərjəns]

NOUN divergence (noun) · divergences (plural noun)

• the process or state of diverging.
• a difference or conflict in opinions, interests, wishes, etc.
Synonyms: separation . dividing . parting . forking . branching . fork . division . bifurcation . difference . dissimilarity . variance . polarity . disparity . contrast . disagreement . discrepancy . incompatibility . mismatch . conflict . clash . unlikeness . dissimilitude . deviation . digression . departure . shift . drift . drifting . straying . deflection . wandering . variation . change . alteration . divagation . similarity .
• the inner product of the operator del and a given vector, which gives a measure of the quantity of flux emanating from any point of the vector field or the rate of loss of mass, heat, etc., from it.

1. Divergence definition is - a drawing apart (as of lines extending from a common center)
2. How to use Divergence in a sentence.
3. The act, fact, or amount of diverging: a Divergence in opinion
4. The Divergence is an operator, which takes in the vector-valued function defining this vector field, and outputs a scalar-valued function measuring the change in density of the fluid at each point
5. This is the formula for Divergence: $\operatorname{div}\,\mathbf{v} = \frac{\partial v_1}{\partial x} + \frac{\partial v_2}{\partial y} + \frac{\partial v_3}{\partial z}$. Here, $v_1$, $v_2$, $v_3$ are the component functions of $\mathbf{v}$
6. Divergence is when the price of an asset is moving in the opposite direction of a technical indicator, such as an oscillator, or is moving contrary to other data. Divergence warns that the current price trend may be weakening
7. Divergence occurs when a stronger wind moves away from a weaker wind or when air streams move in opposite directions. When Divergence occurs in the upper levels of the atmosphere, it leads to rising air
8. The Divergence came to a head quickly, during the overhaul of NAFTA, which Lighthizer conducted at warp speed for a trade agreement
9. The Divergence of a vector field, denoted $\operatorname{div}(\mathbf{F})$ or $\nabla\cdot\mathbf{F}$ (the notation used in this work), is defined by a limit of the surface integral (1), where the surface integral gives the value of $\mathbf{F}$ integrated over a closed infinitesimal boundary surface surrounding a …
10. We also have a physical interpretation of the Divergence
11. 11 synonyms of Divergence from the Merriam-Webster Thesaurus, plus 15 related words, definitions, and antonyms
12. Find another word for Divergence
13. Divergence: a movement in different directions away from a common point
14. Divergence is the opposite of convergence
15. When the value of an asset, indicator, or index moves, the related asset, indicator, or index moves in the other direction.
17. An example of Divergence is the development of wings in bats from the same bones that form the arm and hand or paw in most other mammals
18. A situation in which two things become different, or the difference between them increases: a Divergence of opinion The figures reveal a marked Divergence between public sector pay settlements and those in …
19. Divergence insufficiency is a rare ophthalmologic disorder manifesting itself among older adults
20. In this section we will discuss in greater detail the convergence and Divergence of infinite series
21. We will also give the Divergence Test for series in this section.
22. Divergence is an operation on a vector field that tells us how the field behaves toward or away from a point
23.
Locally, the Divergence of a vector field F in $\mathbb{R}^2$ or $\mathbb{R}^3$ at a particular point P is a measure of the "outflowing-ness" of the vector field at P. If F represents the velocity of a fluid, then the Divergence of F at P measures the net rate of change with respect to time of the amount of fluid flowing away from P
24. What does Divergence mean? Departure from a particular viewpoint, practice, etc.
25. Divergence Forex Divergence trading is both a concept and a trading strategy that is found in almost all markets
26. Book 21 Divergence is a milestone because the second POV character, the Atevi ruler's son & heir Cajeiri, is now nine
27. Divergence is created when the price action contradicts the movement of an indicator
28. Divergence in stock trading is a powerful reversal signal, as it can identify the start of a new trend
29. There are two types of Divergence in stock trading: Bullish Divergence – created when the price action is bearish and the indicator creates higher lows.
30. For example, it is often convenient to write the Divergence div f as ∇ ⋅ f, since for a vector field f(x, y, z) = f1(x, y, z)i + f2(x, y, z)j + f3(x, y, z)k, the dot product of f with ∇ (thought of as a vector) makes sense: ∇ ⋅ f = ∂f1/∂x + ∂f2/∂y + ∂f3/∂z
31. Divergence at a point (x,y,z) is the measure of the vector flow out of a surface surrounding that point
32. Then if the Divergence is a positive number, this means water is flowing out of the point (like a water spout - this location is considered a source)
33. If the Divergence is a negative number, then water is flowing into the point (such a location is considered a sink)
34. In vector calculus, the Divergence theorem, also known as Gauss's theorem or Ostrogradsky's theorem, is a theorem which relates the flux of a vector field through a closed surface to the Divergence of the field in the volume enclosed.
35. More precisely, the Divergence theorem states that the surface integral of a vector field over a closed surface, which is called the flux through the surface, is equal to the volume integral of the Divergence over the region enclosed by the surface
36. Using Divergence is a popular way to identify potential trading opportunities
37. But how does it work and when does it stop working? RSI Divergence occurs when the Relative Strength Index indicator starts reversing before price does
38. A bearish Divergence consists of an overbought RSI reading, followed by a lower high on RSI.
39. Divergence is an operation on a vector field that tells us how the field behaves toward or away from a point
40. Locally, the Divergence of a vector field $\vec{F}$ in $\mathbb{R}^2$ or $\mathbb{R}^3$ at a particular point $P$ is a measure of the "outflowing-ness" of the vector field at $P$.
41. Divergence can provide a level of confirmation with this theory
42. Perhaps price is extended far from the mean and you're looking to catch a position based on a mean reversion setup; in this situation, Divergence can give you hints as to when it's time to …
43. The Divergence of a vector field is a measure of the "outgoingness" of the field at all points
44. If a point has positive Divergence, then the fluid particles have a general tendency to leave that place (go away from it), while if a point has negative Divergence, then the fluid particles tend …
45. Divergence theorem: If S is the boundary of a region E in space and $\vec{F}$ is a vector field, then $$\iiint_E \operatorname{div}(\vec{F})\, dV = \iint_S \vec{F} \cdot d\vec{S}.$$ Remarks:
46. 1) The Divergence theorem is also called Gauss' theorem
47. Divergence Academy is a combination of structured learning, mentoring, and projects
48. Divergence excess: A high exophoria at distance associated with a much lower exophoria at near
49.
Fusional Divergence: A movement of the eyes away from each other in response to retinal disparity, in order to restore single binocular vision
50. What is Bullish Divergence? A price chart showcasing bullish Divergence is characterized by the formation of progressively lower lows by the price candles when the signal line of the oscillator forms progressively higher lows
51. It does not matter whether it is a bullish Divergence RSI signal or a bullish Divergence MACD signal: the principle of spotting and trading the Divergence is the same.
52. Divergence, In mathematics, a differential operator applied to a three-dimensional vector-valued function
53. The Divergence of a vector v is given by $\operatorname{div}\,\mathbf{v} = \frac{\partial v_1}{\partial x} + \frac{\partial v_2}{\partial y} + \frac{\partial v_3}{\partial z}$, in which v1, v2, and v3 are the vector components of v, typically a velocity field of fluid
54. Divergence is tricky because it comes in markets that are trending, and indicators tend to diverge for long periods of time
55. I like to use RSI as it has stood the test of time for me; however, I will rarely trade in the direction of the Divergence unless many other factors line up
56. The Divergence theorem states that the surface integral of the normal component of a vector point function "F" over a closed surface "S" is equal to the volume integral of the Divergence of $\vec{F}$ taken over the volume "V" enclosed by the surface S
57. Thus, the Divergence theorem is symbolically $$\iint_S \vec{F}\cdot\hat{n}\, dS = \iiint_V \nabla\cdot\vec{F}\, dV$$
58. Divergence is an operation on a vector field that tells us how the field behaves toward or away from a point
59. Locally, the Divergence of a vector field F in $\mathbb{R}^2$ or $\mathbb{R}^3$ at a particular point P is a measure of the "outflowing-ness" of the vector field at P.
60. Introducing Divergence: our new 45' boat that transcends boundaries by flawlessly merging the comforts of a yacht with the size, swiftness and accessibility of a sport boat
61. Divergence occurs when the moving averages move away from each other
62. Divergence is a measure of source or sink at a particular point
63. Below is an example of a field with a positive Divergence.
64. Divergence, Convergence, or Crossvergence in International Human Resource Management Human Resource Management Review (HRMR) announces a call for papers for a special issue on "Divergence, Convergence, or Crossvergence in International Human Resource Management"
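Since several of the entries above quote the coordinate formula div v = ∂v1/∂x + ∂v2/∂y + ∂v3/∂z, here is a small SymPy sketch applying it to two standard textbook fields (chosen here purely for illustration):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# A field with an obvious source everywhere: F = (x, y, z)
F = (x, y, z)
div_F = sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)
print(div_F)   # 3: positive everywhere, so every point acts as a source

# An incompressible (divergence-free) field: G = (-y, x, 0)
G = (-y, x, 0)
div_G = sp.diff(G[0], x) + sp.diff(G[1], y) + sp.diff(G[2], z)
print(div_G)   # 0: pure rotation, no net outflow from any point
```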
{}
## Edge detection: The Convolution Approach

November 20, 2011

Today I would like to show a very basic technique of edge detection based on simple convolution of an image with small kernels (masks). The purpose of these kernels is to enhance certain properties of the image at each pixel. What properties? Those that define what it means to be an edge, in a differential-calculus sense, exactly as it was defined in the description of the Canny edge detector. The big idea is to assign to each pixel a numerical value that expresses its strength as an edge: positive if we suspect that such a structure is present at that location, negative if not, and zero if the image is locally flat around that point. Masks can be designed so that they mimic the effect of differential operators, but these can be terribly complicated and give rise to large matrices.

The first approaches were performed with simple $3 \times 3$ kernels. For example, Faler came up with the following four simple masks that emulate differentiation:

$\begin{pmatrix} -1 & 0 & 1\\ -1 & 0 & 1\\ -1 & 0 & 1 \end{pmatrix}\quad \begin{pmatrix} 1 & 1 & 1\\ 0 & 0 & 0 \\ -1 & -1 & -1 \end{pmatrix}\quad \begin{pmatrix} -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{pmatrix}\quad \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}$

Note that, adding all the values of each matrix, one obtains zero. This is consistent with the third property required for our kernels: in the event of a locally flat area around a given pixel, convolution with any of these will offer a value of zero.
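As a concrete illustration of the idea, the sketch below convolves an image with the first of these masks using SciPy. A synthetic random image and an arbitrary threshold stand in for real data:

```python
import numpy as np
from scipy.signal import convolve2d

# First Faler-style mask from the text: responds to vertical edges
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]])

image = np.random.rand(64, 64)          # stand-in for a real grayscale image

response = convolve2d(image, kernel, mode='same', boundary='symm')

# Large positive/negative values suggest edge structure; near-zero values
# mean the image is locally flat, consistent with the zero-sum property.
edges = np.abs(response) > 0.5          # threshold is illustrative
```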
{}
Poster

On Warm-Starting Neural Network Training

In many real-world deployments of machine learning systems, data arrive piecemeal. These learning scenarios may be passive, where data arrive incrementally due to structural properties of the problem (e.g., daily financial data) or active, where samples are selected according to a measure of their quality (e.g., experimental design). In both of these cases, we are building a sequence of models that incorporate an increasing amount of data. We would like each of these models in the sequence to be performant and take advantage of all the data that are available to that point. Conventional intuition suggests that when solving a sequence of related optimization problems of this form, it should be possible to initialize using the solution of the previous iterate, that is, to "warm start" the optimization rather than initialize from scratch, and see reductions in wall-clock time. However, in practice this warm-starting seems to yield poorer generalization performance than models that have fresh random initializations, even though the final training losses are similar. While it appears that some hyperparameter settings allow a practitioner to close this generalization gap, they seem to only do so in regimes that damage the wall-clock gains of the warm start. Nevertheless, it is highly desirable to be able to warm-start neural network training, as it would dramatically reduce the resource usage associated with the construction of performant deep learning systems. In this work, we take a closer look at this empirical phenomenon and try to understand when and how it occurs. We also provide a surprisingly simple trick that overcomes this pathology in several important situations, and present experiments that elucidate some of its properties.
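The warm-starting protocol the abstract describes is easy to reproduce in miniature. The sketch below uses scikit-learn's `warm_start` flag on a linear model with synthetic data; note this toy only illustrates the incremental-data training loop, not the deep-network generalization gap the paper studies:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier(warm_start=True)    # reuse coefficients across fit() calls

X_all = rng.normal(size=(3000, 20))
y_all = (X_all[:, 0] + 0.1 * rng.normal(size=3000) > 0).astype(int)

# Data arrive piecemeal: each fit() starts from the previous solution
for n in (1000, 2000, 3000):
    clf.fit(X_all[:n], y_all[:n])
    print(n, clf.score(X_all[:n], y_all[:n]))
```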
{}
# Upper bound on minimum number of moves to solve the $m\times n$ sliding puzzle

Define an $m\times n$ sliding puzzle to have an $m\times n$ grid of uniquely numbered squares, where the only valid move is to swap the special square numbered 0 with an orthogonally adjacent (up/down/left/right) square, and the puzzle is solved if the squares are rearranged by valid moves into a particular configuration where the special square is in the top-left corner (for example in ascending order when taken row by row then column by column).

If given an arbitrary solvable puzzle, can the minimum number of moves in the solution be computed? If not exactly, hopefully a tight asymptotic bound? If it cannot be computed easily for an individual puzzle, what about a global upper bound?

I cannot get anything better than $\lfloor{\frac{m^2}{2}}\rfloor n + \lfloor{\frac{n^2}{2}}\rfloor m - m - n + 2$, which is obtained from considering the total vertical and horizontal distance over all squares excluding the special square, which each move can decrease by at most 1. The worst configuration seems to be when the whole grid is rotated 180 degrees about the centre. This bound is clearly tight for $(m,n)=(2,2)$ but nothing else. Moreover, I cannot find a general solution to a solvable puzzle that has the same asymptotic number of moves as that bound, which may well be more interesting than the exact bound itself!

- The asymptotic bound is definitely $O(mn(m+n))$, as your lower bound suggests. The strategy of placing each numbered square one by one, as is my usual route in solving one of these, takes $O(m+n)$ moves to place each of $mn$ blocks. I don't know about the constant or anything exact, though - this is an interesting problem! – Lopsy Dec 28 '11 at 18:31
- @Lopsy: No, I don't think so, because that strategy takes $O((m+n)^2)$ moves per square. – user21820 Dec 29 '11 at 2:06
- What you are looking for is the diameter of the graph defined on solvable configurations with edges defined by single moves. I'm not sure this point of view will really help though. – Marc van Leeuwen Dec 29 '11 at 15:05
- @MarcvanLeeuwen: Yes, I am hoping that the local nature of the moves would make the problem tractable, given that it is usually not the case, like the maximum length of an optimal solution for a scrambled Rubik's cube. – user21820 Dec 29 '11 at 15:53
- @user21820 Where do you get the $O((m+n)^2)$ figure for the 'place each square one by one' approach? Placing each square should only take moves proportional to its distance from its target position, which is $O(m+n)$. There are boundary effects for placing the last square in a row or shuffling the last couple of rows, but those costs are asymptotically small compared to the overall $O(n^3)$ cost. – Steven Stadnicki Apr 25 '12 at 17:11
The '180 flip' position establishes the lower bound, since there are $n^2-(\frac{n}{3})^2 = \frac{8}{9}n^2$ tiles in the outer third of rows and columns (that is, with $x$ or $y$ coordinate either $\lt n/3$ or $\gt 2n/3$) and each of those tiles must move at least $n/3$ places to get to its final positions. Any one move can only change the distance-from-target of one tile by one unit, so that sets the $O(n^3)$ lower bound. The upper bound comes from the standard 'fill cells one by one' approach. Since each tile is no more than $2n$ cells from its final position and moving it a single square takes $O(1)$ moves (for instance, to move a cell into an empty square above it and leave the empty square above it, move the empty square D,R,U,U,L) we can fill all but one of the squares in a row in $O(n^2)$ time without having to displace any already-set tiles. The final square in a row takes more care, but that one can be filled by shifting the leftmost tile in a row down, shifting the rest of the row right, setting the final tile into either of the two rightmost squares (without disturbing any of the other cells in the row) and then shifting the rest of the row back; this is a complicated process but only adds $O(n)$ moves per row. A similar approach sets the bottom row once the rest of the puzzle is complete: $O(n)$ moves suffice to move any three tiles from the bottom row into a 'canonical position' in the bottom-right corner, perform a standard 3-cycle on them, and then undo the movements to bring them back to their original locations. Since any even permutation of the tiles in the bottom row can be expressed as a product of $O(n)$ 3-cycles, this adds $O(n^2)$ time to the total effort. In the $m\geq n$ case, the above procedure should yield $O(m^2n)$, the same as your estimate. 'All' that's left at this point is the hardest part of the problem: establishing the constant on the $n^3$ term... - You, and Lopsy, are right. I was very blur and I did not realise that the empty square could be moved in front, though that is how I do solve the puzzle. However, can you get a better lower bound on the number of moves needed? – user21820 May 2 '12 at 9:44 Well, it would be relatively straightforward to find a more precise bound for the number of moves in the flip position; I just wanted an estimate to show the right order, but you could explicitly do the double-sum. On the other hand, that position probably has a move count pretty close to its bound, because (at least for $m=n$) it can be decomposed into a set of cycles (concentric rings around the center). It's not clear that there's an easily-definable position with a greater total displacement than that position, though, or how lower bounds could be much-improved (the problem is known-hard) – Steven Stadnicki May 2 '12 at 15:34 The double sum gives the bound I stated. I didn't know getting the lower bound is known to be hard though! Do you know of any better bound? For example when $m=n=3$ the first move will not decrease the total distance but actually increase it. But this is just an extra 2 moves, whereas I think the best bound would be about twice my bound. For the flip position, it seems as if the optimal solution mostly moves the squares in its own ring, and not between rings. If correct, this gives a lower bound of $O(\frac{8}{3}n^3)$ moves for an $n\times n$ puzzle. – user21820 May 6 '12 at 4:38
{}
ROOT Reference Guide

FumiliErrorUpdator.h File Reference

#include "Minuit2/MinimumErrorUpdator.h"

Include dependency graph for FumiliErrorUpdator.h:
This graph shows which files directly or indirectly include this file:

## Classes

class ROOT::Minuit2::FumiliErrorUpdator
In the case of the Fumili algorithm, the error matrix (or the Hessian matrix containing the (approximate) second derivatives) is calculated using a linearization of the model function, neglecting second derivatives. More...

## Namespaces

namespace ROOT
This file contains a specialised ROOT message handler to test for diagnostics in unit tests.

namespace ROOT::Minuit2
{}
# Data model macro cannot be used in preamble

I'm using this answer to create an annotated bibliography with biblatex, with the following code:

    \documentclass{article}
    % This just makes a dummy bib file
    \begin{filecontents}{\jobname.bib}
    @ARTICLE{a,
      author  = {Doe, J.},
      title   = {The Title},
      journal = {The Journal},
      mynote  = {This source is really interesting because it doesn't have a real title}
    }
    @ARTICLE{b,
      author  = {Smith, J.},
      title   = {The New Title},
      journal = {The Same Journal},
      mynote  = {This second source is also really interesting because it contains words}
    }
    \end{filecontents}
    % This does the work
    \DeclareDatamodelFields[type=field,datatype=literal]{mynote}
    \usepackage[style=trad-plain]{biblatex} % biblatex setup (style per the comments below)
    \addbibresource{\jobname.bib}
    \usepackage{xpatch}
    \xapptobibmacro{finentry}{\par\printfield{mynote}}{}{}
    \begin{document}
    \nocite{*}
    \printbibliography
    \end{document}

However, I get a warning

    Package biblatex Warning: Data model macro 'DeclareDatamodelFields' cannot be used in preamble.

and the output is as if no change had been done to the data model:

• You get a warning along the lines of "Data model macro 'DeclareDatamodelFields' cannot be used in preamble." This behaviour has only changed recently (before that, data model macros worked properly in the preamble in most cases, but there were problems, I think). Now data model commands cannot be used in the preamble any more; they have to be externalised to a .dbx file. – moewe Apr 14 '15 at 16:12
• Create a .dbx file called mynote.dbx containing the line \DeclareDatamodelFields[type=field,datatype=literal]{mynote}. Then call biblatex with the additional option datamodel=mynote as in \usepackage[style=trad-plain,datamodel=mynote]{biblatex}. – moewe Apr 14 '15 at 16:14
• @moewe it might be better to edit my answer and close this as a duplicate. – StrongBad Apr 14 '15 at 16:38
• @StrongBad Mhhh, probably - I have just written up the answer, though ;-). I think it might not be too bad of an idea to have a standard "my data model macros don't work any more" question to which questions such as this can be closed as duplicates. Going through all answers that used data model macros and editing them seems quite the task. – moewe Apr 14 '15 at 16:42
• @StrongBad But please, by all means edit your question (preferably with a more or less prominent note of the required change) to make sure people don't get confused about this. – moewe Apr 14 '15 at 16:43

Starting from version 2.9 of biblatex (with this commit), data model macros (like \DeclareDatamodelEntrytypes, \DeclareDatamodelFields, \DeclareDatamodelEntryfields, ... a full list can be found in §4.5.3 Data Model Specification, pp. 156-161 of the biblatex documentation) are disabled in the preamble and yield only a warning along the lines of

    Data model macro '\DeclareDatamodelFields' cannot be used in preamble

These commands can now only be used from an external .dbx (data model) file.

The solution consists in moving these data model macros to an external .dbx file. In our example we will call said file mynote.dbx, and we only have one line to move:

    \DeclareDatamodelFields[type=field,datatype=literal]{mynote}

We then call this data model via datamodel=mynote in the biblatex options.
MWE:

    \documentclass{article}
    % This just makes a dummy bib file
    \begin{filecontents}{\jobname.bib}
    @ARTICLE{a,
      author  = {Doe, J.},
      title   = {The Title},
      journal = {The Journal},
      mynote  = {This source is really interesting because it doesn't have a real title}
    }
    @ARTICLE{b,
      author  = {Smith, J.},
      title   = {The New Title},
      journal = {The Same Journal},
      mynote  = {This second source is also really interesting because it contains words}
    }
    \end{filecontents}
    % This does the work
    \usepackage[style=trad-plain,datamodel=mynote]{biblatex} % load the external data model
    \addbibresource{\jobname.bib}
    \usepackage{xpatch}
    \xapptobibmacro{finentry}{\setunit{\par}\printfield{mynote}}{}{}
    \begin{document}
    \nocite{*}
    \printbibliography
    \end{document}

where mynote.dbx is just

    \DeclareDatamodelFields[type=field,datatype=literal]{mynote}

• How would you change this if you start with \usepackage[authordate, backend=biber]{biblatex-chicago}? Doing all that you said and then doing [..., datamodel=mynote], the obvious implementation to try, didn't work. Any recommendations? – ctde Aug 27 at 5:22
• @ctde I'm afraid the biblatex-chicago wrapper package does not recognise the datamodel option. You may want to contact the author about this (though he may not be inclined to include it; after all, biblatex-chicago isn't really supposed to be customised a lot: it is supposed to produce CMS-compliant output). It is possible to work around that, but it is going to be ugly. If you are interested in a workaround, ask a new question, please. – moewe Aug 27 at 5:31
{}
… (e) Magnetic Meridian (MM): (i) It is a vertical plane at any place which passes through the magnetic axis of the earth. (ii) It is a vertical plane at any place which passes through the axis of a freely suspended bar magnet or magnetic needle.

A dip needle arranged to move freely in the magnetic meridian dips by an angle θ. If the vertical plane in which the needle moves is rotated through an angle α to the magnetic meridian, then the needle will dip by an angle (a) θ (b) α (c) more than θ (d) less than θ.

A vertical plane at a place passing through the magnetic north and south poles of the earth is called the magnetic meridian at that place. A magnetic needle free to rotate in a vertical plane parallel to the magnetic meridian has its north tip down at 60° with the horizontal. The horizontal component of the earth's magnetic field at the place is known to be 0.4 G. Determine the magnitude of the earth's magnetic field at the place.

Magnetic meridian is (A) a point (B) a line along north-south (C) a horizontal plane (D) a vertical plane. The vertical plane that passes through the point at which the observer (instrument) is situated and that contains the vector of the geomagnetic field intensity at this point is called the plane of the magnetic meridian.

Question 100: Magnetic meridian is an imaginary A) line along north-south B) point C) vertical plane D) horizontal plane. Answer: line along north-south.

Geographical meridian: A vertical plane passing through a place and the geographic north and south axis is called the geographic meridian. The direction to which the needle points is called magnetic north, and the vertical plane through this direction is called the magnetic meridian. It is inclined to the Geographic Axis nearly at an angle of 17°. (iv) Magnetic Axis is a straight line passing through the magnetic poles of the earth. (v) Magnetic Meridian at any place is a vertical plane passing through the magnetic north and south poles of the earth.

A vertical plane passing through the magnetic axis of a freely suspended magnet is called the magnetic meridian. A vertical plane passing through the magnetic equator of the freely suspended magnet is called the equatorial meridian. Magnetic Equator: An imaginary line, right bisecting the effective length of a magnet, is called the magnetic equator. The great circle on the surface of the earth, in a plane perpendicular to the magnetic axis, is called the magnetic equator. Magnetic Needle: It is a magnet tapered towards both ends and pivoted at the centre.

A needle perfectly balanced about a horizontal axis (before being magnetized), so placed that it can swing freely in the plane of the magnetic meridian… If the vertical plane in which the dip is $\theta_{1}$ subtends an angle $\alpha$ with the meridian, then another vertical plane, in which the dip is $\theta_{2}$ and which is perpendicular to the first, will make an angle of $90-\alpha$ with the magnetic meridian.
{}
# Error code: ARITY_MISMATCH

This guide explains why you may experience an arity mismatch error and what potential solutions can solve it.

If you think of relations as tables, the arity of a relation in Rel is the number of columns. For instance, the constants true and false are relations of arity 0, a single element is a relation of arity 1, and so on. Check this section of Rel Primer: Basic Syntax for more details.

Consider the following example that returns an arity mismatch:

    def friends = {("Mike", "Anna"); ("Zoe", "Jenn")}
    def zoes_friends = friends("Zoe")

First, you define a relation friends containing tuples of friends. Then, to know Zoe's friends, you define zoes_friends.

The error may be caused by these reasons:

1. The relation you are calling expects a different number of arguments. In friends("Zoe"), you are passing just one argument, while friends is defined by tuples of arity 2.
2. You are using parentheses, (), instead of square brackets, []. The operator () can be thought of as a special case of partial relational application used as a boolean filter, with output arity 0 (meaning either true or false). This means that when you write friends("Zoe"), the Rel compiler is expecting a second argument to check if that tuple is within the relation friends. See this section of Rel Primer: Basic Syntax for further insights.

Some correct examples could be:

    // query
    def friends = {("Mike", "Anna"); ("Zoe", "Jenn")}
    def zoes_friends = friends["Zoe"]
    def output = zoes_friends

    // query
    def output = friends("Zoe", "Jenn")

The last query returns (), i.e., true, confirming that the tuple is within the relation friends.
{}
# [SOLVED] Position and Velocity vector??

• March 28th 2010, 05:39 AM
Lafexlos
[SOLVED] Position and Velocity vector??
Find the position and velocity vectors if the acceleration is $A(t)= (\cos t)i -(t\sin t)k$ and the initial position and velocity vectors are $R(0)=i-2j+k$ and $V(0)=2i+3k$ respectively.
Any help is appreciated, bla bla..

• March 28th 2010, 05:42 AM
craig
Quote:

Originally Posted by Lafexlos
Find the position and velocity vectors if the acceleration is $A(t)= (\cos t)i -(t\sin t)k$ and the initial position and velocity vectors are $R(0)=i-2j+k$ and $V(0)=2i+3k$ respectively.
Any help is appreciated, bla bla..

Show us what work you have done so far, we can help you from there. bla bla

• March 28th 2010, 06:38 AM
Lafexlos
Unfortunately, I couldn't do anything with it. It looks like I should take the integral of the acceleration and plug in zero to find the velocity, but I cannot take the integral of a vector. :S Is it the same as a normal integral, keeping the $i$, $j$, $k$ in place? Or is there something special about the integral of a vector?

• March 28th 2010, 06:41 AM
craig
To integrate a vector all you do is integrate the individual components. For example, a vector such as $t^2 i + 3t j - 5k$, differentiated with respect to $t$, would be $2t i + 3 j$.
Hope this helps.
Edit: The same rules for integration apply.

• March 28th 2010, 06:42 AM
skeeter
Quote:

Originally Posted by craig
Show us what work you have done so far, we can help you from there. bla bla

$a(t) = (\cos{t})i - (t\sin{t})k$
i component ... you should already know the antiderivative of $\cos{t}$.
k component ... you'll have to use integration by parts to find the antiderivative of $t\sin{t}$.
give it a go.

• March 28th 2010, 06:57 AM
Lafexlos
So, $\int A(t)\,dt = (\sin t)\, i + (c)\, j + (t\cos t - \sin t)\, k + C$
I think now I should put zero in place of $t$, but when I put zero into the $i$ component I get $0i$, while the initial velocity has an $i$ component. Does it mean that $C$ has an $i$ component? Bah. Really confused, but I understand the logic. =) Thanks for the help.

• March 28th 2010, 07:09 AM
HallsofIvy
You have forgotten the "constants of integration" for each component.

• March 28th 2010, 07:12 AM
Lafexlos
Ok. I see. So it'll be $(\sin t + C)i$ instead of $(\sin t)i + C$. Now everything is complete.
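For reference, here is a hedged SymPy check of the whole computation, integrating componentwise and fixing the constants from the initial conditions (worth re-deriving by hand):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([sp.cos(t), 0, -t * sp.sin(t)])    # acceleration components

V = A.integrate(t)                                # integrate componentwise
V += sp.Matrix([2, 0, 3]) - V.subs(t, 0)          # enforce V(0) = 2i + 3k

R = V.integrate(t)
R += sp.Matrix([1, -2, 1]) - R.subs(t, 0)         # enforce R(0) = i - 2j + k

print(V.T)  # [sin(t) + 2, 0, t*cos(t) - sin(t) + 3]
print(R.T)  # [2*t - cos(t) + 2, -2, t*sin(t) + 2*cos(t) + 3*t - 1]
```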
{}
# What is the probability that a multivariate Normal RV lies within a sphere of radius R?

I am currently using different procedures to estimate the probability that a $D$-dimensional Gaussian random variable with mean $\mu$ and covariance $\Sigma$ lies within a sphere of radius $R$ that is centered about the origin. That is, I am estimating $P(\| X \|_2 < R)$ where $X \sim N(\mu, \Sigma)$ and $X \in \mathbb{R}^D$. I am wondering whether there is a way to obtain the exact value of this probability analytically (i.e. without using numerical integration or Monte Carlo)?

I currently have two basic approaches to follow:

Approach 1

Find a way to analytically evaluate the integral:

$\int_{x \in S} (2\pi)^{-\frac{D}{2}}|\Sigma|^{-\frac{1}{2}} \exp\left(-\frac{1}{2} (x-\mu)^T \Sigma^{-1} (x-\mu)\right) dx$

over the spherical region:

$S = \{\|x\| < R \} = \{x^Tx < R^2\}$

Approach 2

Exploit the fact that if $x \sim N(\mu,\Sigma)$ then $(x-\mu)^T \Sigma^{-1} (x-\mu) \sim \chi^2(D)$. This implies that

$P( (x-\mu)^T \Sigma^{-1} (x-\mu) < R^2 ) = P(\chi^2(D) < R^2)$

which is very simple to evaluate... I am hoping that there is a way to use this fact in order to evaluate:

$P( x^T x < R^2)$

- Does this help? – NRH May 16 '12 at 19:37
- @NRH Could you elaborate? I understand the representation, though I'm not sure how I could exploit it within this context. – Berk Ustun May 16 '12 at 21:04
- Approach 2 makes a mistake: it is correct only when $\mu=\mathbf{0}$. Even for $D=2$, zero correlation, and unit variances the integral is nasty: it evaluates to integrals of erf applied to trig functions, plus a Bessel function. If you need an approximation, saddlepoint methods are attractive, especially as $D$ grows. – whuber♦ May 16 '12 at 21:24
- @BerkUstun, basically, the answer I linked to says that you won't find a very explicit expression for general $\mu$ and $\Sigma$ for the density (or distribution function) of $\|X\|^2$, which is what you seek. There is a book reference in the linked answer, which should be consulted for details. – NRH May 17 '12 at 6:34

$\mathrm P(\chi^2(D)<R^2) \neq \mathrm P(\|X\|^2_2 < R^2)$. The distributions are not the same. The second approach is easy because you standardized the random variable.
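When checking candidate formulas, a quick Monte Carlo baseline is handy even if the goal is an analytic answer. A minimal NumPy sketch (all values illustrative); note that for μ = 0 and Σ = I the estimate should match the χ²(D) CDF evaluated at R², which is exactly Approach 2's special case:

```python
import numpy as np

rng = np.random.default_rng(0)

D = 3
mu = np.array([0.5, -0.2, 1.0])
Sigma = np.array([[1.0, 0.3, 0.0],
                  [0.3, 2.0, 0.1],
                  [0.0, 0.1, 0.5]])
R = 2.0

# Estimate P(||X||_2 < R) by direct sampling
X = rng.multivariate_normal(mu, Sigma, size=1_000_000)
p_hat = np.mean(np.linalg.norm(X, axis=1) < R)
print(p_hat)
```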
{}
# Constructing a De Bruijn Graph

solved by 786

July 2, 2012, midnight by Mikhail Dvorkin

Topics: Genome Assembly

Because we use multiple copies of the genome to generate and identify reads for the purposes of fragment assembly, the total length of all reads will be much longer than the genome itself. This motivates the definition of read coverage as the average number of times that each nucleotide from the genome appears in the reads. In other words, if the total length of our reads is 30 billion bp for a 3 billion bp genome, then we have 10x read coverage.

To handle such a large number of $k$-mers for the purposes of sequencing the genome, we need an efficient and simple structure.

## Problem

Consider a set $S$ of $(k+1)$-mers of some unknown DNA string. Let $S^{\textrm{rc}}$ denote the set containing all reverse complements of the elements of $S$ (recall from "Counting Subsets" that sets are not allowed to contain duplicate elements).

The de Bruijn graph $B_k$ of order $k$ corresponding to $S \cup S^{\textrm{rc}}$ is a digraph defined in the following way:

• Nodes of $B_k$ correspond to all $k$-mers that are present as a substring of a $(k+1)$-mer from $S \cup S^{\textrm{rc}}$.
• Edges of $B_k$ are encoded by the $(k+1)$-mers of $S \cup S^{\textrm{rc}}$ in the following way: for each $(k+1)$-mer $r$ in $S \cup S^{\textrm{rc}}$, form a directed edge ($r[1:k]$, $r[2:k+1]$).

Given: A collection of up to 1000 (possibly repeating) DNA strings of equal length (not exceeding 50 bp) corresponding to a set $S$ of $(k+1)$-mers.

Return: The adjacency list corresponding to the de Bruijn graph corresponding to $S \cup S^{\textrm{rc}}$.

## Sample Dataset

TGAT
CATG
TCAT
ATGC
CATC
CATC

## Sample Output

(ATC, TCA)
(ATG, TGA)
(ATG, TGC)
(CAT, ATC)
(CAT, ATG)
(GAT, ATG)
(GCA, CAT)
(TCA, CAT)
(TGA, GAT)
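One possible solution sketch in Python (reading the reads from stdin is an assumption about how the dataset is supplied):

```python
import sys

def revcomp(s):
    # reverse, then complement each base
    return s[::-1].translate(str.maketrans("ACGT", "TGCA"))

reads = set(sys.stdin.read().split())          # duplicates collapse here
kmers = reads | {revcomp(r) for r in reads}    # S union S^rc

# each (k+1)-mer contributes the edge (prefix, suffix)
edges = sorted((r[:-1], r[1:]) for r in kmers)
for a, b in edges:
    print(f"({a}, {b})")
```

On the sample dataset above this reproduces the sample output, since each distinct $(k+1)$-mer determines exactly one edge.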
{}
# Need help finding the right school

#### NCulmone
##### Member
I've been working in theater since high school and chose to attend a school for Theatre Technology and Design. The only issue is, most schools focus more on the design part, and only touch the technology part. Right now, I have no desire to design. I really enjoy entertainment electrician work, hanging lights, programming light boards, building and wiring practicals, and have really enjoyed my time working at school as a theater master electrician for our shows. The point is, I really want to find a school that can teach me more about what I want to do, rather than force a design agenda on me while I learn about what I want to do on my own.

#### NCulmone
##### Member
Also non-college options would be cool too. I suck at school.

#### TuckerD
##### Well-Known Member
I think the common advice for this type of work is to try to get in with a union or a private company like VER / PRG / etc... and just work doing what you love as a grip / electrician / etc... If you have to, do another job to pay bills while you work your way in. There are people on the forum more qualified than I to give advice about that.

Personally, I've found myself much much more interested in the technology than in design as well. Don't get me wrong, I love designing. But I don't have the drive and dedication to really pursue excellence at it. So instead I have a degree in software engineering and work directly on the technology I love.

There is a lot of non-production type work in the industry as well. Engineers to work on products, complex stage designs, stage automation. Integration technicians to help design and specify full systems before tons and tons of equipment gets delivered to an install site. Draftsmen (and women). Project managers. Rental agents to help clients pick the right gear and make sure they get it on time. That's just a few off the top of my head, but I think the list goes on and on and on.

#### derekleffew
##### Resident Curmudgeon
Senior Team
How far is Center Valley from Lititz, PA? I hear there might be some production work going on there.

#### porkchop
##### Well-Known Member
Shotgun of ideas because I rarely comment on college program threads:

If you want to make a career out of this, get a degree, or at least start the process, and if an opportunity presents itself before you graduate, take it. The networking and communication soft skills most people learn in college (if not later in life) are way more important than how fast you can hang a light or how well you can program on a console.

Find a program that is willing to talk to you now while you're looking. If they're that interested in what you have to say, odds are they'll be a lot more willing to listen to your crazy idea for senior year.

Full Sail and UNCSA have shoddy reputations because they are willing to take anyone's money and amplify whatever skill and drive they have in the process. If you're full of yourself and don't know anything going in, you'll be even more full of yourself and even more clueless going out. If you're inquisitive and willing to ask questions so that you can be a better technician, you might come out with a lot of hands-on experience with pro level equipment and the humility to get (and keep) a solid job as a pro.

#### soundofsparks
##### Active Member
Here's another shotgun blast.
When looking at schools for anything, you'll want to focus on the resources of the school and the faculty background. Start with schools near you to save on scratch. You are going to need money when you start out, as many entry-level gigs pay terribly. So don't get stuck with an ass-ton of debt. Look at faculty profiles and try to find ones that have a background doing the professional work that you want to do. Train with people who are who you want to be. Also see if you can find some specifications on the facilities. What sort of gear do they have? Are they keeping pace with the industry? You don't need to go somewhere fancy or expensive to have a worthwhile experience.

Speaking of theatre work (as opposed to events and concerts), what you learn in school probably won't matter. The people that I've hired require a good amount of retraining regardless of their educational background. What successful people learn in school is how to learn, how to be humble and open to new ideas, and how to problem solve. I went to a random college with low show budgets, old gear, and hardly any faculty or students. We worked closely with professors to solve production problems creatively. I built my career on creative problem solving and learned about gear as I went.

I think it is absolutely fine to skip college altogether and get an entry-level gig without a college degree. Unfortunately, there are so many college graduates applying for those same jobs that they might get ranked ahead of you just because of their age/degree. I wish there were more programs like this one: http://www.roundabouttheatre.org/Teach-Learn/For-Students/Theatrical-Workforce-Development.aspx

One final thought. If you do go to college, do summer gigs. Work at summer stocks, arts festivals, whatever. Be an intern or staff. The resumes that I see that are uninteresting are ones with no professional work at all. No matter how terrible the summer gig (and some will work you to death for sure), it's more beneficial to your career than some other summer job.

#### gafftapegreenia
##### CBMod
CB Mods
If I had to do it over again, I wouldn't go into private school debt for a theatre arts degree. That's just my two cents.

#### TuckerD
##### Well-Known Member
The networking and communication soft skills most people learn in college (if not later in life) are way more important than how fast you can hang a light or how well you can program on a console.

Normally I would agree with you. I've even parroted the same advice dozens of times on CB. Networking is by far the skill that has served me best. Last year it really treated me well when I sent an email to some industry folks about how I would be missing a meeting we had planned because I had several job interviews around that week. One of them responded in private to me that if I was interviewing then I should send my resume to him. Lo and behold, a few weeks later I had a job offer. Very grateful for that person going out of their way to get me connected, and the biggest reason it all happened was because of my concentrated networking efforts.

But my school was absolutely zero help in teaching me anything about networking or connecting me with any professionals. I did all of that on my own by reading Control Booth and connecting with people here when I could. For the job I have, a degree is required, but if what you want to do is be an ME or something other than design, then I don't think it's advisable to take on the $$,$$$ of debt that I did.
#### JohnD ##### Well-Known Member Fight Leukemia How far is Center Valley from Lititz, PA? I hear there might be some production work going on there. For a true come-to-Jesus moment also think about Sight & Sound in the Lancaster area. https://www.sight-sound.com/ #### derekleffew ##### Resident Curmudgeon Senior Team ...Full Sail and UNCSA have shoddy reputations because ... Firstly, I'm shocked that you paint both institutions with the same brush. Upon further thought; yes, most of the graduates I've met from both places have been dicks, but at least the NCSotA ones were competent, albeit arrogant. Right now, I have no desire to design. I really enjoy entertainment electrician work, hanging lights, programming light boards, building and wiring practicals, ... 1. If it weren't for designers, you'd have nothing to hang or wire up. 2. If you're able to communicate with a designer on their level, you're a better electrician. 3. Similarly, one can't be a good lighting programmer without knowing some design aesthetics. Two CB members (by chance Ithaca College alums) come to mind: @icewolf08 and @rochem . Also @ship does daily what you appear to want to do. All have extensive design experience as well. Van Senior Team #### icewolf08 ##### CBMod CB Mods I've been working in theater since high school and chose to attend a school for Theatre Technology and Design. The only issue is, most schools focus more on the design part, and only touch the technology part. Right now, I have no desire to design. I really enjoy entertainment electrician work, hanging lights, programming light boards, building and wiring practicals, and have really enjoyed my time working at school as a theater master electrician for our shows. The point is, I really want to find a school that can teach me more about what I want to do, rather than force a design agenda on me while I learn about what I want to do on my own. @NCulmone, there certainly are a lot of schools that are much more "design-centric" than technology focused, and I think that comes a lot from the schools not wanting to feel like a vocational program. However, as @derekleffew mentioned, there are schools that do put an emphasis on technology and Ithaca College is one of those schools. I am somewhat biased, being an alum, but the flip side to that coin is that, if theatre is where you see yourself, you can't spit in NYC/Broadway without hitting an IC alum. Not to mention, our alumni network expands all over the country in theatre and other entertainment markets. I certainly came out of that program feeling prepared to take on the jobs that I did, and it is great to know that there are people who I can call on, even if we never really met, who are happy to offer guidance and insight. All that said, I do agree with some of the other folks who have posted. You want a well rounded education that not only includes technology and lighting, but design in all areas of the industry. I certainly am not a costume person, but, the fact that I had to take basic costuming classes and courses like "History of Costume and Decor" really helped me be able to interface with other departments. It gives you the background knowledge to understand what people are talking about when they talk about periods or styles or other design choices. It gives you the vocabulary to really communicate and develop ideas. It also makes it much easier when a costumer comes to you needing help with some kind of light up costume to be able to offer help in an intelligent and functional way. 
If you look for them, there are programs that will satisfy whatever direction you feel you would like to take in the industry. The best programs are the ones that help you grow and foster the sense of where you would like to go. Some schools have developed reputations for what they send their graduates off to. Carnegie Mellon University and Emerson both have the reputation of sending graduates off into the live performance world (award shows, concerts, events, etc.). CalArts has a very good program and can set you up on a path to work for Walt Disney Imagineering. Then you have schools like Ithaca and NYU that really lean more towards traditional theatre. Point being, what you want is out there!

Now, to play my own devil's advocate... There are also plenty of people who have made it big in the industry without completing or even attending college, or a college theatre program. As some folks mentioned, it is possible to get in with your local union, or a rental shop, or maybe even just start working at a local theatre and building skills and connections. There is absolutely nothing wrong with taking a path like this. The most important thing when choosing such a path is that you need to be humble. Your high school experience counts for next to nothing; you have to be OK with pushing boxes and coiling cables when you start out. It is likely that you will be working your way up from the bottom of the ladder. This is a great way to learn and get experience, just know that it may not be glamorous. However, one could argue that it is a lot cheaper than going to college.

While I personally advocate for people to go to school, I also tell people that they should go in open minded. If you take a gen-ed in biology and find you really enjoy it, you want to be open to pursuing it. Theatre and entertainment will always be there and it is easy to come back to or do as a hobby, but if you find something else that really strikes a chord with you, it may be worth doing.

Since @derekleffew brought it up, it looks like you are a little over an hour from Lititz, PA, which is where I am located now, working for TAIT Towers. While it was a bit of a departure from my background in lighting, as a controls technician it wasn't too much of a jump. We do some pretty wild stuff in the world of automation and staging, so I would encourage you to come down and visit if you can. Also, we offer summer internships for people your age, and it is the kind of thing that you would want to start talking to our HR folks about soon, as internship positions are highly coveted and go fast. Certainly give me a shout if you want to know more or come down and see what we are about.

#### MRW Lights
##### Well-Known Member
Here is why I would vote for going to school...

1. If you want to teach in the future, the application process is easier with the letters BFA/MFA behind your name.

2. More importantly, the people you meet in school will help fast track you to getting jobs. The same can be true of who you meet in the field, but I honestly rarely "search" for work. Friends from school recommend me, and then someone I work with on that show recommends me, and then I recommend someone else to bring to the team, and the cycle continues. I would safely say the majority of my work comes from personal references and other designers'/directors' preference for my work.
It won't happen instantly, but when you stop looking for work and realize the work is looking for you, then you can really begin to measure your success.

#### themuzicman
##### Well-Known Member
Here is why I would vote for going to school... 1. If you want to teach in the future, the application process is easier with the letters BFA/MFA behind your name. 2. More importantly, the people you meet in school will help fast track you to getting jobs. The same can be true of who you meet in the field, but I honestly rarely "search" for work. Friends from school recommend me, and then someone I work with on that show recommends me, and then I recommend someone else to bring to the team, and the cycle continues. I would safely say the majority of my work comes from personal references and other designers'/directors' preference for my work. It won't happen instantly, but when you stop looking for work and realize the work is looking for you, then you can really begin to measure your success.

I 100% agree with Point #1 - I know a few folks teaching without college degrees or MFA's, but those folks worked on Broadway for a number of years and still had to fight with University hiring committees to get hired. The degree makes that so much easier.

On #2 I'd say it really depends on what part of the field you're going into after college and how much time a college has put into fostering an alumni network. I know several students who went to programs like CCM and UNCSA and they have an extensive network to draw on, and the folks who stay in the field a few years out tend to be top-notch. I went to a very large state school that had a well developed alumni network for every major school, except the school of arts & humanities. The majority of the graduates in my year and surrounding years who still work in the arts all keep in contact, but we all work in radically different areas of the entertainment market.

The majority of my work now comes from connections I made in the grind I did post-college. College taught me a ton, all related to theater, but almost none related to my trade. It was a good proving-ground for learning the politics of theater, which is just as important as knowing your trade as well as possible; that may have been the most valuable take-away. The plus side is now when a gig gets out of my comfort zone, I can shoot a text to someone who works in a tangential part of the field and get some feedback really quick -- I work in audio, I know that, so when it comes time to integrate my plot with lighting and they yell at my center cluster array being in the way, I have a few LX friends from college who can give me quick feedback on what I can do to fix a problem, or what to say to get the problem to fix itself around me, or a few scenic fab folks who can tell me how to notate my fabrication bids so I don't look utterly clueless to a scene shop. On the flip side, I am generally their first call for show-control tech support. We don't get each other work per se, but we have a good little network of friends keeping each other looking smart on the job!

Post-college I took work in a few different shops and chatted up every designer and production person that walked in the door. I get 100% of my work now by word-of-mouth and as a direct reference or direct hire, which is great, but none of that came from connections from a college - college did teach me how to play the game of getting work and staying on top of it all without drowning.
I could have skipped college and gone right to working for any of the shops in any major market and probably been fine, but I bet my first few years of freelancing would have been rough without a grounding in the politics of theater, how to effectively communicate cross-department, or how to manage fluctuations between no shows and insane show loads.

As for colleges that focus on technology - Brooklyn City Tech, as mentioned above; I've met some really stellar students from those places. UNCSA and CCM seem to be the other two that have quality students. CCM folks seem to be as good as UNCSA, but without the ego.

#### porkchop

##### Well-Known Member

Firstly, I'm shocked that you paint both institutions with the same brush. Upon further thought: yes, most of the graduates I've met from both places have been dicks, but at least the NCSotA ones were competent, albeit arrogant. The interns and graduates that UNCSA sends to Vegas tend to remind me a lot of the Full Sail grads that we would get on tour. The programs have very different focuses, but the outcome seems to be much the same.

#### JohnHuntington

##### Well-Known Member

> And New York City College of Technology's Entertainment Technology program. I know nothing about this one other than it seems really cool. A four-year bachelor's degree. http://www.citytech.cuny.edu/entertainment/entertainment-technology-btech.aspx

Thanks for the mention! I oversee Audio, Video and Control in Entertainment Technology at City Tech. I wrote an article back in 2002 comparing the way we teach to conservatories, and I think it's still pretty relevant: http://controlgeek.net/articles-and-other-work/2002/9/1/rethinking-entertainment-technology-education.html?rq=rethinking

Also, everyone is welcome for a backstage tour of our Gravesend Inn haunted attraction. Thanks! John

#### EdSavoie

##### Well-Known Member

I've got no complaints with the Entertainment Technology program at St Clair College thus far. Three weeks in and we've already gotten right into the meat of it: working with truss, calculating weight loads, making cables, crucifying Stagepin connectors (sorry America, we'll stick to our Twist-lock), passing working-at-heights training, and dismantling and putting back together a Source 4, to name a few. (Mind you, I had already done that to a Source 4 prior to this, but I digress.) The only bane of my existence at this point is the drafting / CAD course, where we are 'lucky' to inherit the imperial system from you guys...

#### NCulmone

##### Member

Thanks for the replies everyone! I haven't had a chance to read through them all yet (we just opened our first show of the year tonight), but thanks for all of your responses!
# Triode Valve / Tube Formulas & Theory

### The amplification factor, the anode or plate resistance, and the transconductance are some of the key factors associated with triode valve / tube theory and formulas.

When designing, repairing, or servicing triode valve / triode vacuum tube circuits, it is very useful to have an understanding of the theory and of what the different performance specifications mean. The voltage and current relations in the triode for both anode and grid are of importance, along with figures like the triode amplification factor, the anode or plate resistance, and the transconductance. All of these give an understanding of the performance of a particular triode valve or triode vacuum tube.

## Triode voltage & current relations

The number of electrons that reach the anode of a triode valve or vacuum tube under space-charge-limited conditions is primarily governed by the electrostatic field in the cathode–grid region. Once the electrons have passed through the grid they travel on towards the anode very rapidly, and space charge effects can normally be ignored, especially to a first approximation, which is normally good enough for most calculations.

The critical area of the triode valve is within the cathode–grid space. It is here that the theory needs to be examined to determine its operation. In the cathode–grid area the electrostatic field is determined by both the grid and the anode or plate. Electrostatic shielding theory shows that the electrostatic field in the vicinity of the cathode of a triode is proportional to $(E_c + E_b/\mu)$, where $E_c$ and $E_b$ are the grid and anode voltages respectively. The voltages are measured with respect to the cathode. $\mu$ is the amplification factor of the valve.

## Triode amplification factor µ

The value $\mu$ is the constant known as the amplification factor of the valve or vacuum tube – it applies to triodes and is not really applicable to tetrodes or pentodes. It is independent of the voltages on the grid and anode and is determined by the geometry of the elements within the valve. Typically, if the grid is placed close to the cathode, the valve will have a high amplification factor. For most triodes the amplification factor falls within the region of 10 to 100.

The amplification factor $\mu$ of a triode valve / vacuum tube is a measure of the relative effectiveness of the grid and anode voltages in producing the electrostatic field at the surface of the cathode. In more practical terms, the amplification factor $\mu$ of a triode can be considered to be the theoretical maximum gain that can be obtained. The amplification factor is based on the variation of anode voltage with grid voltage, measured with the anode current held constant:

$\mu = \Delta V_a / \Delta V_g$

Where: $\mu$ = amplification factor; $\Delta V_a$ = change in anode voltage; $\Delta V_g$ = change in grid voltage.

## Triode characteristic curves

The performance and characteristics of triode valves or vacuum tubes are often represented by a number of graphs detailing their performance. The characteristic curves or graphs are normally plotted for the relationship between the grid voltage and the anode or plate current, and for the relationship between the anode or plate voltage and the corresponding current.

The various curves of grid voltage against anode current all have approximately the same shape, differing mainly in their displacement from each other. This results from the fact that the anode current is determined by the quantity $(E_c + E_b/\mu)$.
In a similar way that the curves for the grid voltage and anode current are similar, so too are those for anode voltage and current, although it can be seen that the curves for positive grid voltage are rather different.

## Anode resistance

The anode resistance or plate resistance is more exactly described as the dynamic anode or plate resistance. It represents the resistance that the anode circuit offers to a small change in voltage. Therefore, when a small increment in anode voltage $\Delta E_b$ produces a small change in anode current $\Delta I_b$, the anode resistance can be calculated as follows:

$r_p = \Delta E_b / \Delta I_b = \partial E_b / \partial I_b$

Where: $r_p$ = dynamic anode resistance.

## Triode mutual conductance or transconductance

The transconductance or mutual conductance $g_m$ of a triode is defined as the rate of change of anode current with respect to the grid voltage. It is possible to express this as a simple pair of equations:

$\Delta I_b = g_m \cdot \Delta E_c$

$g_m = \partial I_b / \partial E_c = \mu / r_p$

Where: $g_m$ = mutual conductance / transconductance; $\mu$ = amplification factor; $r_p$ = anode resistance.

The transconductance or mutual conductance is a form of conductance, i.e. the inverse of Ohms. As a result, the units in which it was quoted were mhos ("Ohm" spelt backwards). Nowadays the unit of conductance is the Siemens (S), but for valves / tubes the unit mho is still used. For valves the figures were normally quoted in µmhos, so be careful: dropping the µ prefix would make the figures a million times too large. In later practice, gain figures for valves started to be given in terms of mA/V, where the current (mA) was the change of plate current for a 1 V change of grid voltage.
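To make these relationships concrete, here is a minimal numerical sketch (my addition, not part of the original article) that estimates µ, rp, and gm from hypothetical pairs of operating-point measurements and checks the identity gm = µ / rp; all of the specific voltages and currents below are made-up illustrative values.

```python
# Estimating triode small-signal parameters from measurement pairs.
# All voltages in volts, currents in amps; the numbers are illustrative only.

# Amplification factor: mu = dVa / dVg at constant anode current.
# Suppose raising the grid from -2.0 V to -1.0 V requires dropping the
# anode from 250 V to 180 V to hold the anode current constant:
mu = (250.0 - 180.0) / (2.0 - 1.0)      # = 70

# Anode (plate) resistance: rp = dEb / dIb at constant grid voltage.
# Suppose a 20 V anode step changes the anode current by 2.5 mA:
r_p = 20.0 / 2.5e-3                     # = 8000 ohms

# Transconductance follows from the identity gm = mu / rp:
g_m = mu / r_p                          # siemens
print(f"gm = {g_m * 1e6:.0f} umho ({g_m * 1e3:.2f} mA/V)")
# gm = 8750 umho (8.75 mA/V) -- note the micro prefix, as the text warns
```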
# Evaluate: $4(x+3)=20$

## Solve for x:

Divide both sides by $4$.
$$x+3=\frac{20}{4}$$
Divide $20$ by $4$ to get $5$.
$$x+3=5$$
Subtract $3$ from both sides.
$$x=5-3$$
Subtract $3$ from $5$ to get $2$.
$$x=2$$
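As a quick machine check of the algebra (my addition, not part of the original page), a sketch using the SymPy library:

```python
# Verify the solution of 4*(x + 3) = 20 symbolically.
import sympy as sp

x = sp.symbols("x")
solutions = sp.solve(sp.Eq(4 * (x + 3), 20), x)
print(solutions)  # [2]
```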
# Talk:Jesus

This Christianity-related article has been awarded SILVER status for quality. We like it, and you should too!

## Census

Is there any record of a census taking place outside of Rome (in the territories) at the time? I just read somewhere once that there isn't any. Also Wikipedia seems to come down on the side that the slaughter of the innocents was apocryphal; do we have a mention of this anywhere, as I am sure this is the current view amongst scholars. - User$\scriptstyle2\int_{-1}^{1}\sqrt{1-x^{2}}dx$Please set to always render PNG. I have wasted a lot of time to make this work. 03:17, 23 October 2008 (EDT)

## Crucifixion Survival

"Skeptics" should not believe this if they're actually skeptical. Crucifixion survival is a ridiculous conspiracy theory of the worst sort. Dan Brown even thought it was too outlandish to be in 'Da Vinci Code.' Dan Brown! Forgive me if I'm misunderstanding, but this is supposed to NOT be the joke article, right? The crucifixion survival conspiracy is a completely unskeptical position on par with 9/11 conspiracies. -Morgan

I'm inclined to agree with you. If we start making up convoluted explanations for the resurrection, then why stop there? We'd also have to explain away the virgin birth, water into wine, walking on water, curing the sick, & all the other miracles. Better to just treat the whole thing with skepticism. I'm guessing that parts of this article were written by somebody who was enthusiastic about the crucifiction survival scenario. Anyway, I've reworded the offending section of it. WeaseloidMethinks it is a Weasel 13:26, 28 January 2009 (EST)

Are you sure that you're not talking about crucifiction? Either way, didn't Josephus write about crucifixion survivors? --Edgerunner76Your views are intriguing to me and I wish to subscribe to your newsletter 13:27, 28 January 2009 (EST)

One, who was pardoned and taken down. --AKjeldsenCum dissensie 13:38, 28 January 2009 (EST)

This is a wild conjecture and probably unworthy for a skeptic to hold, but not a conspiracy theory, since in no place does it make any claims about a conspiracy. ListenerXTalkerX 13:42, 28 January 2009 (EST)

## It's not Fun:Jesus anymore?

Or is someone working for that? ThiehInsomnia? Masturbate till you pass out 05:33, 17 May 2009 (UTC)

No, it's not. This article started in the main space, pretty much as it is now. Jesus Christ was the slightly more serious (ie, about the religious figure) article. Then one day PC got a bug up her ass about calling Jesus "Christ", which means "Messiah", and went around the innertubes imposing her attitude. She moved Jesus to fun, to make way for moving JC to Jesus. Jesus got moved back to JC some time ago, I just hadn't noticed that this one was still stuck in fun - where she put it without any discussion. Remember, "fun" isn't just where we put anything that is "funny" - it's where we put articles that are funny enough to save but don't belong in the mainspace, so they survive. ħuman 19:35, 17 May 2009 (UTC)

So why do we need two mainspace articles on the same subject? Without one of them being defined as "fun", it really isn't clear what the distinction is. Merge? €₳$£ΘĪÐMethinks it is a Weasel 20:28, 17 May 2009 (UTC)

I think the original idea wayyyyy back was for a serious article and a snarky one, but it got lost somewhere. Merge into one ruddy gurt article if you like. Totnesmartin 20:56, 17 May 2009 (UTC)

Well, this one is so different in tone that it wouldn't merge well. (IMO).
Everything was fine until PC went on a "purge the internet of the word Christ" binge. Now things are just back where they were before that. The other article is about the "Christ", this one's about some middle eastern carpenter turned preacher. ħuman 23:42, 17 May 2009 (UTC)

I am a firm believer in the idea that RW should be snarky on all articles where snarkiness is warranted. There is no point in making a "rational" article about someone who, if he even existed, isn't even historically notable. Snarkiness is RationalWiki! --Eira omtg! The Goat be praised. 03:27, 19 May 2009 (UTC)

Fun talk:Jesus just in case. ħuman 05:17, 5 January 2010 (UTC)

Why the move? A fun article about Jesus should be in funspace. Acei9 05:38, 5 January 2010 (UTC)

It is SERIOUS. — Unsigned, by: Human / talk / contribs 06:51, 5 January 2010 (UTC)

Then why two articles on the same subject? -- Nx / talk 06:56, 5 January 2010 (UTC)

Because the subject deserves it. ħuman 07:14, 5 January 2010 (UTC)

That seems pretty weak reasoning there Huw. Acei9 07:19, 5 January 2010 (UTC)

You need to look at the history of our two articles. Jesus was the funny version, and Jesus Christ was more serious, and all was well in RationalWikiHell. Then some whore named Proxima Centauri decided, upon reading some shite on another site, that "Jesus Christ" was a bad word, making Jeshua our "Lord and Saviour". So the dumb bitch moved Jesus Christ to Jesus, and moved Jesus to fun:Jesus, mucking up our perfectly fine wiki to match her fucked up anti-world view. Hence my undoing of her stupidity. I hope that is clear. ħuman 07:31, 5 January 2010 (UTC)

It is irrelevant why PC moved it to funspace. If this is mainspace-worthy (which I don't think it is), then it should be merged into Jesus Christ. Otherwise it should be in funspace. -- Nx / talk 07:36, 5 January 2010 (UTC)

No need to merge, it's ok to have two articles. Please don't be angry. ħuman 07:40, 5 January 2010 (UTC)

I am so mad at you I went as far as wetting my pants. Acei9 07:59, 5 January 2010 (UTC)

We need to consider the history of fun. (All this from memory.) It was originally "ACD" - a joke based on CP's "Article Creation Drive". Later, when we were cleaning up the wiki a bit, it was decided that ACD wasn't a very meaningful name, so it was renamed "fun" - it is possible the name was not well-chosen. Later it was decided that all articles which were not mainspace-worthy should be in "fun" because there was no other place to put them, but it should be a sort of "holding area". Some articles were moved to fun because they were not funny enough. With the passage of time and the appearance of new users, the idea began to take hold that only "funny" things should be in fun and that all funny things should be in fun. This was not the original intention.--BobBring back the hat! 08:26, 5 January 2010 (UTC)

I'm not saying that mainspace should be devoid of humor, but I do think that Jesus Christ is a better article and should be given precedence. As for Funspace, I don't think it should be a Storehouse of Crappyness. -- Nx / talk 09:25, 5 January 2010 (UTC)

So, do we create a namespace for articles which are neither mainworthy nor funny? That's what this all comes down to. What should it be called? Betaspace would be good - anything in it would be a priority for improvement. (This would be extra to funspace, which could be kept for genuinely funny articles.) Totnesmartin (talk) 12:29, 5 January 2010 (UTC)

## Stuff I removed...
" In the recorded words of the Bible, Jesus never personally claimed to be God: indeed, many times he spoke words incompatible with it ("The Father is greater than I" - if He was his own Father and was God Incarnate, he would be as great as himself - "Some knowledge not even the angels in heaven know, but is reserved for the Father alone"; "He was asked: 'What is the most important commandment?' Jesus answered: 'Hear, O Israel! The Lord is God and your Lord is One... the are no more important commandments than these.'", he prayed to God; many more). Needless to say, it's debatable whether the New Testament contains any actual dictation of the speech of Jesus, but it seems likely, if the Church wrote it, it would put words in Jesus' mouth claiming he was God, and words about the Holy Spirit, to try to make it accord with their doctrine (the same reason "Catholic Versions" of the Bible always translate Isaiah 7:14 as "virgin", even though the Hebrew is clearly "young women" or "adolescent girl", "almah", which is how all modern "honest" translations translate it). The only records in the New Testament that claim Jesus was God were written by Saul of Tarsus (St Paul to Roman Catholics), a rabid antinomianist (antinomianism: the abrogation of all law, mortal and divine: incidentally, some of the surviving Common Era Dead Sea Scrolls/Qumran Manuscripts speak of Paul as having been excommunicated from the early church), and John the Evangelist (a gnostic with some Judaeo-Christian influences), who wrote more than half of the New Testament canon of 27 books: this is why modern Christians practise Pauline "Christianity" and are not Messianic Jews. Among New Testament scholars, it's widely accepted that Jesus never claimed to be God, but this was attributed to him at first by Saul of Tarsus,[1] and then later by the Church after it had Roman support, and the first ecumenical councils, which denounced (and put to the sword) various interpretations of Jesus' teaching that did not hold him to be God, such as that of the Monophysites, Nestor, Marcion, Arius, and (many) more, of which the latter - Arianism - held great sway when it was denounced: the Roman Emperor at the time it was outlawed by the Second Ecumenical Council - and the Emperor who reigned for almost a decade after the first one was killed for his religious views - were both Arians: respectively, Constantius II and Valens.[2]"

This seems to add little to the article but a wall of text. But if I am wrong we can fix it easily. ħuman 03:39, 23 April 2011 (UTC)

I think some of the points above have some validity, even if they could be stated more clearly. Just regarding the first point - the Bible has Jesus saying lots of different things about himself at different points. At some places, he seems to be suggesting he is divine (without straight out saying it); at other places, he seems to be suggesting he is not. Of course, most Christians, believing Jesus to be divine, have their interpretations of those places where he seems to suggest he's not so that he isn't actually doing that; and likewise, the minority of Christians who don't believe in Jesus' divinity, they have their interpretations of those places where he seems to suggest he is so that he isn't actually doing that either. And this is the big problem with biblical interpretation, if you try hard enough you can make it mean lots of different things; and who can say whose interpretation is actually the right one? --Zack Martin  03:49, 23 April 2011 (UTC)
"John Hick, The Metaphor of God Incarnate, page 27: "A further point of broad agreement among New Testament scholars ... is that the historical Jesus did not make the claim to deity that later Christian thought was to make for him: he did not understand himself to be God, or God the Son, incarnate. ... such evidence as there is has led the historians of the period to conclude, with an impressive degree of unanimity, that Jesus did not claim to be God incarnate." 2. "John Hick, The Metaphor of God Incarnate, page 27: "A further point of broad agreement among New Testament scholars ... is that the historical Jesus did not make the claim to deity that later Christian thought was to make for him: he did not understand himself to be God, or God the Son, incarnate. ... such evidence as there is has led the historians of the period to conclude, with an impressive degree of unanimity, that Jesus did not claim to be God incarnate." ## Sex Life To me personally, this is badly conceived and currently very weasel. "arguments exist", but nothing to support the idea. "might have used prostitutes" - yeah, and i might have used drugs, but until you find *something* compelling somewhere, it's not really relevant. If our article is pure speculation - let's just say so, rather than the implication that this is something commonly discussed between scholars. And if it *is* discussed, let's find some articles and quote them.--En attendant Godot 13:59, 12 July 2011 (UTC) I fail to see how it's any more relevant than "does Barack Obama have a sex life?" ADK...I'll pander your blasphemy! 14:19, 12 July 2011 (UTC) The gospels say Jesus associated with prostitutes, the rest is speculation. I'm not Jesus (talk) 14:37, 12 July 2011 (UTC) I don't care that anyone speculates at all. just say so. "it's our best guess that" or, "if any rational person things about it, they'd conclude". what you have now, however, are weasle words "there are many arguements about..." and really, there aren't. It's not something greatly discussed in true academic circles, cause most people in academic circles accept that 1) people marry around 15 in Jewish culture of the day, 2) to have any authority at all in jewish life, you must have fulfilled god's commandments to marry and build a family. that is to say, in academic circles, most people who do NOT think jesus was born of a virgin and ressurected, but think he was a teacher or prophet or both, also think he was married. Like i said, my issue is not the idea of setting up logical reasons or even silly reasons he might have been sexually active, but dont' imply that there are "arguments' and then not talk about them.--En attendant Godot 14:41, 12 July 2011 (UTC) Thanks, "jesus", that worked perfectly. --En attendant Godot 14:46, 12 July 2011 (UTC) The prostitutes business is speculation but the rest I hope is verifiable if you follow the links. I'm not Jesus (talk) 14:54, 12 July 2011 (UTC) ## Birth OK, I am biased here, but the whole second paragraph is very lame (unless intended as a joke, in which case it's not funny). Why would anyone "assume" that what is explicitly described as a supernatural event (THE supernatural event to some) was "parthenogenesis", a natural phenomenon? Beyond the slightly similar name, that is. Also, not having human DNA hardly seems to be a problem for Holy Spirit. 
Holy Spirit is Lord God Almighty and has, you know, like, the power to create arbitrarily complex matter from nothing, at will (God might have used evolution the first time, but He's not constrained by that choice). So the whole point kind of falls apart. OrthodoxBeliever (talk) 08:53, 15 January 2012 (UTC)

Don't quite follow your point. Did Jesus have a mixture of God's DNA and Mary's?--BobSpring is sprung! 09:03, 15 January 2012 (UTC)

OK, the ABCs. Jesus is ontologically both God and Man. As a Man, he has a full set of chromosomes. I think for theological reasons, half of these must come from The Most Holy Theotokos, Mary (so she's the true Mother of God, reinforcing the Lord's human nature). The exact sequence of the other half is utterly unimportant for us. God could, for example, pull up the archived record of Adam's pre-Fall DNA, undamaged by sin, and just create a new set. Of course, this kind of stuff is just what Catholics would try to do, trying to explain the unexplainable. It just happened, OK? OrthodoxBeliever (talk) 09:37, 15 January 2012 (UTC)

I don't think you have answered my question. Did god provide half his DNA? Furthermore, presumably Jesus had functioning testicles. What would an analysis of his sperm have shown?--BobSpring is sprung! 12:10, 15 January 2012 (UTC)

I do not know that there's even any way to know this. Incarnation is a Great Mystery. Not even sure it's a valid question to ask - the only human DNA God is known to have for sure is Jesus's. OrthodoxBeliever (talk) 19:13, 15 January 2012 (UTC)

Or, in English, "don't ask awkward questions". postate 19:39, 15 January 2012 (UTC)

Or, in English, involvement of the omnipotent Agency makes any nitty-gritty explanation utterly unfalsifiable. The initial argument rests on the assumption that the Holy Spirit "needs" human chromosomes to conceive a Son. God Almighty doesn't "need" anything for anything, almost by definition. OrthodoxBeliever (talk) 21:11, 15 January 2012 (UTC)

Well, if your belief is "it's all done magically by God" then there wouldn't seem to be much point in having a scientific debate. But I'm prepared to try it in tiny steps. So let's start with... Did Jesus' cells have DNA? This is a simple factual question. Is there some magic reason which makes this unanswerable?--BobSpring is sprung! 22:11, 15 January 2012 (UTC)

You realize we are discussing an event described as a "miracle", right? "Science" may have some problems figuring this one out. To answer your question: yes, Jesus has cells, which have DNA in them. The reason for this is not scientific but theological: Christ's complete and perfect humanity is one of the central Christian beliefs. OrthodoxBeliever (talk) 22:53, 15 January 2012 (UTC)

I have to disagree. There is nothing in the Bible to suggest that Jesus had cells, or that he came from an egg. In fact, Mary awoke pregnant, and the child was "implanted" in her womb. Sounds like god put the baby there fully formed; no egg or sperm required. Also, John says that Jesus is the Word, and is bounded energy, which has nothing to do with cells. So I'm suggesting he does not have cells. --GodotDear god, fucking grow up 22:57, 15 January 2012 (UTC)

You're wrong here. The Gospels have plenty of lines suggesting Jesus is human. John says "the Word became flesh and dwelt among us" half a line after the verse you seem to be quoting. The two natures of Christ are the cornerstone of orthodox Christianity, as defined at Chalcedon. The Bible doesn't talk about cells, but we do know people have cells from other sources.
:) OrthodoxBeliever (talk) 04:45, 16 January 2012 (UTC)

I would suggest that Jesus was actually not the son of any god, and hence had a perfectly human father and mother. However, if one believes that Jesus was indeed the son of a god, he would not be bound by any biological rules, just as creationism is not bound by physical rules. "Did he have DNA?" is a theological question if you believe the latter, and a scientific question if you believe the former. 23:07, 15 January 2012 (UTC)

Yep, God's not bound by rules of nature. He's also not bound only to overt "supernatural" acts. I think Creationist pseudoscience is done by Christians who bought the atheist line that if something happened naturally, therefore God didn't do it. That's a sign of lack of faith, ironically. One very wise and very conservative Protestant minister characterised ID as "bad science and bad theology". That's even though he personally accepted literal 6-day Creation - based on his reading of the Bible, not phony science. OrthodoxBeliever (talk) 12:42, 16 January 2012 (UTC)

I have been harping for quite a while on the point that atheists and creationists both draw too sharp a line between divine acts and natural occurrences, but neither do I take the view that these theological questions should not be probed, even if this means only to conclude, based on (lack of) evidence, that no hypothesis is more plausible than another. ListenerXTalkerX 05:13, 16 January 2012 (UTC)

The reason for my pushing the point is to show that there are two separate things: magic and science. It's obviously impossible to find common ground between them, whether it be Christian magic or any other. Any attempt is domed.--BobSpring is sprung! 09:35, 16 January 2012 (UTC)

It's obviously impossible to find common ground between them... Oh, come now; that is a little rich, seeing as how the latter is derived from the former (chemistry, for example, came out of alchemy, while scientific medicine came out of herbalism and other religious practices). Any attempt is domed. Such attempts would apparently fit nicely on the skyline of Istanbul. ListenerXTalkerX 04:41, 17 January 2012 (UTC)

Yes. Atheists do not know any better, but what our creationist brothers show is lack of faith. "A wicked and adulterous generation seeketh after a sign". OrthodoxBeliever (talk) 12:42, 16 January 2012 (UTC)

## Adding a Bible quotes section?

Most people when pressed retreat behind the teachings of the New Testament, especially Jesus. I would welcome a section where all the negative/silly things which Jesus said/did are listed. I know of him senselessly destroying a fig tree and that he told his followers to abandon their families. I think I could write a short section with maybe a handful of these cases, but I'd like to get other people's opinions before doing so. --MeisterKleister (talk) 01:11, 27 January 2012 (UTC)

I'd say give it a try. The community will probably edit it to their liking afterward, but it sounds like a good idea. Sam Tally-ho! 01:15, 27 January 2012 (UTC)

You could try a page of them, and not just a section. Peter Monomorium antarcticum 02:39, 27 January 2012 (UTC)

Alright, I've added a new section. I'd say a new page is only necessary if we could get enough stuff together. For those curious, my source for these Bible verses about Jesus is this great video: http://www.youtube.com/watch?v=hSS-88ShJfo --MeisterKleister (talk) 16:52, 29 January 2012 (UTC)

I think these would be far more effective if you used direct quotes, rather than your own words.
GodotGrow a vagina 17:00, 29 January 2012 (UTC)

I looked a few up; you need to be careful about context too. theist 17:03, 29 January 2012 (UTC)

Please use direct quotes from some version of the Bible. Deleting for now. ħuman 03:33, 30 January 2012 (UTC)

Agreed. Not that a quote from the gospel writers is necessarily a quote from JC himself, but it's more useful than just paraphrasing or hyperbolising them. ЩєазєюіδMethinks it is a Weasel 19:50, 30 January 2012 (UTC)

I disagree. If we replaced what is there now with direct quotes, it would be a quote-mine. ListenerXTalkerX 00:40, 31 January 2012 (UTC)

This web site apparently contains some great material for this section: http://www.evilbible.com/do_not_ignore_ot.htm (just search for 'Jesus'). But I'm too lazy right now to edit the article; I will probably do so later, but if anyone else feels like it, please go ahead. MeisterKleister (talk) 17:29, 5 October 2012 (UTC)

## Luke 19

This article says "Jesus orders that his enemies are to be brought to him and to have them slaughtered in front of him." But I don't think that is the case. The Bible says "While they were listening to this, he went on to tell them a parable, because he was near Jerusalem and the people thought that the kingdom of God was going to appear at once. 12 He said:" So Jesus was telling a story about someone else.

Luke 19:11-28 And as they heard these things, he added and spake a parable, because he was nigh to Jerusalem, and because they thought that the kingdom of God should immediately appear. He said therefore, A certain nobleman went into a far country to receive for himself a kingdom, and to return. And he called his ten servants, and delivered them ten pounds, and said unto them, Occupy till I come. But his citizens hated him, and sent a message after him, saying, We will not have this man to reign over us. And it came to pass, that when he was returned, having received the kingdom, then he commanded these servants to be called unto him, to whom he had given the money, that he might know how much every man had gained by trading. Then came the first, saying, Lord, thy pound hath gained ten pounds. And he said unto him, Well, thou good servant: because thou hast been faithful in a very little, have thou authority over ten cities. And the second came, saying, Lord, thy pound hath gained five pounds. And he said likewise to him, Be thou also over five cities. And another came, saying, Lord, behold, here is thy pound, which I have kept laid up in a napkin: For I feared thee, because thou art an austere man: thou takest up that thou layedst not down, and reapest that thou didst not sow. And he saith unto him, Out of thine own mouth will I judge thee, thou wicked servant. Thou knewest that I was an austere man, taking up that I laid not down, and reaping that I did not sow: Wherefore then gavest not thou my money into the bank, that at my coming I might have required mine own with usury? And he said unto them that stood by, Take from him the pound, and give it to him that hath ten pounds. (And they said unto him, Lord, he hath ten pounds.) For I say unto you, That unto every one which hath shall be given; and from him that hath not, even that he hath shall be taken away from him. But those mine enemies, which would not that I should reign over them, bring hither, and slay them before me. And when he had thus spoken, he went before, ascending up to Jerusalem.

Nah, that's clearly him explaining the moral of the story, having finished retelling it.
Peter Monomorium antarcticum 07:36, 24 February 2012 (UTC)

If you read the one with the quote marks in the right place, you can see the "kill all the things" bit is still part of the parable. Makes even less sense in context. What is the moral of it anyway? moral 11:05, 31 March 2012 (UTC)

## Possible error?

I shan't edit until someone's clarified, but I think I see a grammatical error here, but I could be wrong. "Since he ends up back in heaven anyway, modern freethinkers wonder where there was any sacrifice." Shouldn't it be why? That's the question I usually ask. --Polite Timesplitter talk to me sugar, but best keep it on the down-low 14:35, 17 August 2012 (UTC)

Where as in "where in the story". --Rutherford (talk) 14:37, 17 August 2012 (UTC)

## As a Christian myself

This quite amused me. But the inability to portray Jesus as anything other than what He was in the Bible belies a genuine understanding of why He came. The verses are all correct, in context, and show that Jesus's claims - which I believe - were so outlandish that they parody themselves. — Unsigned, by: Daniel.bright / talk / contribs 15:33, 16 January 2013 (UTC) — Unsigned, by: Genghis Khant / talk / contribs

## Silver

Osaka seems to think so. Thoughts? Tytalk 05:43, 19 January 2013 (UTC)
# Why is $\ln N - \ln(N-1) = \frac1N$ for large $N$?

May I ask why $\ln N - \ln(N-1) = \frac1N$ for large $N$? Thank you very much.

-

You can get quite far with just algebra: \begin{align} \ln N - \ln (N-1) & = \ln N - ( \ln N + \ln (1-1/N)) \\ & = -\ln (1-1/N) \end{align} using the laws for addition of logarithms. Now you can use the Taylor expansion of the natural logarithm: $$-\ln(1-x) = x + \frac{x^2}{2} + \frac{x^3}{3} + \cdots$$ to get $$-\ln(1-1/N) = \frac{1}{N} + \frac{1}{2N^2} + \cdots$$ so that $\ln N - \ln (N-1)$ is, for large $N$, equal to $1/N$ plus a correction term of order $O(1/N^2)$.

- +1 for nice approach – Mathlover Apr 16 '12 at 11:17
- @Mathlover This was used by Euler when studying $\gamma$ and Harmonic numbers. He saw that $\log \left(1+\dfrac 1 n \right)\sim H_n$ – Pedro Tamaroff Apr 16 '12 at 18:29

The two sides of your equation are never exactly equal. But their ratio tends to 1 as $N$ tends to infinity. This is because the derivative of the $\ln$ function at $N$ is $1/N$, so that is approximately the amount by which the function changes between $N-1$ and $N$.

- And the increment 1 is small when $N$ is large. – lhf Apr 16 '12 at 11:37
- @TonyK This is a nice approach. Kudos! – Pedro Tamaroff Apr 16 '12 at 18:19
- Also, think about $\lim_{N\to\infty}\left(\ln N - \ln(N-1)\right)$. It's 0, which is also $\lim_{N\to\infty}\left(\frac{1}{N}\right)$. – chharvey Apr 25 '12 at 4:24

$$\begin{align} \ln(n) - \ln(n-1) &= \ln(n)-\left( \ln(n)+\ln\left(\frac{n-1}{n}\right) \right) \\ &= -\ln(1-1/n) \\ &= \frac{1}{n} + \frac{1}{2n^2}+\frac{1}{3n^3}+\cdots \end{align}$$ The latter approximates $1/n$ as $n$ increases without bound.

-

\begin{align*} \lim_{x\to\infty}\frac{\ln x-\ln(x-1)}{1/x} &= \lim_{x\to\infty}x(\ln x-\ln(x-1))\\ &=\lim_{x\to\infty} x\ln\left(\frac{x}{x-1}\right)\\ &=\lim_{x\to\infty}\ln\left(\left(\frac x{x-1}\right)^x\right)\\ &=\ln e\\ &= 1. \end{align*} Where $\lim_{x\to\infty}\left(\frac x{x-1}\right)^x=\lim_{x\to\infty}\left(1+\frac1{x-1}\right)^{x-1}\left(1+\frac 1{x-1}\right)=e\cdot 1=e$.

-

Just for completeness, the exact value (I'm surprised it hasn't been mentioned so far, though it's implicit in the "it's the derivative" answer): \begin{align} \ln N &= \int_{1}^{N} \frac1x \, dx \quad \text{ and }\\ \\ \ln (N+1) &= \int_{1}^{N+1} \frac1x \, dx, \quad \text{ so }\\ \\ \ln (N+1) - \ln N &= \int_{N}^{N+1} \frac1x \, dx \end{align} Now, as $\frac1{N+1} \le \frac1x \le \frac1N$ for $N \le x \le N+1$, clearly we have $$\frac{1}{N+1} \le \ln(N+1) - \ln N \le \frac1{N}$$ Calculating the integral more precisely would give a more precise estimate of $\ln(N+1) - \ln N$.

-

$\displaystyle \lim_{n \to \infty} (\ln n-\ln(n-1))=\displaystyle \lim_{n \to \infty}\left(\ln \frac{n}{n-1}\right)=\ln\left(\displaystyle \lim_{n \to \infty} \frac{n}{n-1}\right)=\ln 1=0$ and $\displaystyle \lim_{n \to \infty} \frac{1}{n}=0$. Hence, for large $n$, both $\ln n-\ln(n-1)$ and $\frac{1}{n}$ tend to zero.

- For example, 1/x^2 does not look like (tend to) 1/x as x->infinity, even though they both tend to 0. However 1/x - 1/x^2 -> 1/x as x-> infinity. So there are two things going on here. Indeed lnN - ln(N-1) -> 1/N as N -> infinity (which is (different, and) stronger than just saying both sides tend to 0) – Adam Rubinson Apr 16 '12 at 10:57
- @Adam: I just wanted to mention that because you do not have 50 reputation points yet, you can only comment on your own questions and answers, so the site behavior you experienced was normal. – Zev Chonoles Apr 16 '12 at 15:05
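As a quick numerical illustration of the accepted answer's expansion (this snippet is my addition, not part of the thread), the difference $\ln N - \ln(N-1)$ approaches $1/N$ with an error of roughly $1/(2N^2)$:

```python
import math

# Compare ln(N) - ln(N-1) with 1/N; the gap should shrink like 1/(2N^2).
for N in (10, 100, 1000, 10000):
    diff = math.log(N) - math.log(N - 1)
    err = diff - 1 / N
    print(f"N={N:>6}: diff={diff:.10f}  1/N={1/N:.10f}  N^2*err={N*N*err:.3f}")
# N^2 * err tends to about 0.5, matching the 1/(2N^2) correction term.
```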
I. Fundamentals

Chelsey Hamm

Key Takeaways

• A triad is a three-note chord whose notes can be arranged in thirds. A triad can always be "stacked" so that its notes are either on all lines or all spaces.
• When stacked in its most compact form in thirds, the lowest note of a triad is called the root, the middle note is called the third, and the highest note is called the fifth.
• There are four qualities of triad. A major triad's third is major and its fifth is perfect, while a minor triad's third is minor and its fifth is perfect. A diminished triad's third is minor and its fifth is diminished, while an augmented triad's third is major and its fifth is augmented.
• In lead-sheet notation, major triads are represented with capital letters that correspond to the triad's root. Minor triads have a lowercase "m" after the letter, diminished triads have a lowercase "dim" or a degree sign ("°"), and augmented triads have a lowercase "aug" or a plus sign ("+").
• Within major and minor keys, triads have particular qualities that correspond to scale degree. These are the same in every major and minor key, which makes memorizing them useful.
• There are five steps to drawing a triad: drawing a root, adding a third and fifth, visualizing the root's major key signature, adding accidentals from the key signature (if applicable) for a major triad, and adding additional accidentals for a minor, diminished, or augmented triad.
• Musicians often prioritize the note that is in the bass voice, often simply called the "bass" by musicians, which is the lowest part (or voice) of a composition, regardless of what instrument or voice type is singing or playing that lowest note. It is important to note that the bass voice of the chord is NOT the same thing as the chord's root.
• When a triad is stacked in thirds we say the triad is in root position. The bass note in root position is the root. Chords that do not have the root in the bass are said to be inverted. When the third appears in the bass we say the triad is in first inversion, and when the fifth appears in the bass we say the triad is in second inversion.
• Figured bass symbols are used to indicate inversion. Figured bass uses Arabic numerals and some symbols which indicate intervals above a bass note. These are turned into chords by musicians. A triad in first inversion receives a superscript "6," while a triad in second inversion receives the figures $\begin{smallmatrix}6\\4\end{smallmatrix}$. In figured bass, larger numerals are always stacked above smaller ones.
• Triads are identified by their root, quality, and inversion. You can identify a triad by identifying and writing its root, its quality, and its inversion.

A chord is any combination of three or more pitch classes that sound simultaneously. This chapter focuses on triads, three-note chords whose notes can be stacked into thirds. The three notes of a triad can always be arranged in thirds. Example 1 shows two triads, each written both unstacked and stacked in thirds: the first triad is shown on three adjacent spaces, while the second triad is shown on three adjacent lines. A triad can always be "stacked" so that its notes are either on all lines or all spaces.

When a triad is stacked in its most compact form (measures 2 and 4 of Example 1), it looks like a snowperson. Example 2 shows several snowpeople: a snowperson consists of a bottom, middle, and head. Likewise, a triad consists of a lowest note, a middle note, and an upper note. When stacked in "snowperson form," the lowest note of a triad is called the root, the middle note is called the third, and the highest note is called the fifth.
Example 3 shows this: as you can see in Example 3, the third is so named because it is a generic third above the root, and the fifth is so named because it is a generic fifth above the root. The root is analogous to a snowperson's bottom, the third to its middle, and the fifth to its head.

# Triadic Qualities and Listening to Triads

There are four qualities of triad: major, minor, diminished, and augmented. Example 4a shows these four qualities of triad, each with a root of F and their quality of fifth labeled, while Example 4b shows these qualities with their quality of third labeled. As seen in Examples 4a and 4b, a major triad's fifth is perfect and its third is major, while a minor triad's fifth is perfect and its third is minor. A diminished triad's fifth is diminished and its third is minor, while an augmented triad's fifth is augmented and its third is major.

Major, minor, and diminished triads are more common in many genres of music, such as Classical and popular, which is why these triads are listed first in Examples 4a and 4b. Augmented triads are less common in most Classical and popular music. Listen carefully to the different qualities of triad in Example 5. It is common to pair expressive qualities with triads when learning what they sound like. You might think of major triads as sounding "happy," minor triads as "sad," diminished triads as "scary," and augmented triads as having a "fantasy" or "mystical" sound.

Triads frequently appear in lead sheets, which are jazz scores that notate a melody and chord symbols. Lead-sheet symbols for triads often include the letter name of the triad's root, the triad's quality, and sometimes the pitch class that occurs in the bass voice, which is the lowest part (or voice) of a composition, regardless of what instrument or voice type is singing or playing that lowest note. A lead-sheet symbol begins with a capital letter (and, if necessary, an accidental) denoting the root of the chord. That letter is followed by information about a chord's quality:

• major triad: no quality symbol is added
• minor triad: lowercase "m"
• diminished triad: lowercase "dim" or a degree sign "°"
• augmented triad: lowercase "aug" or a plus sign "+"

Finally, if a pitch class other than the chord root is the lowest note in the chord, a slash is added, followed by a capital letter denoting the pitch class in the bass (lowest) voice. Example 6 shows four triads with lead-sheet symbols. As seen in Example 6, a triad that has a root of C and a major quality is shown as "C" in lead-sheet notation. A C minor triad, which consists of the notes C, E♭, and G, is written as "Cm" in lead-sheet notation if C were the lowest note. If E♭ were the lowest note of a C minor triad (see the third chord in Example 6), the lead-sheet symbol would be "Cm/E♭." If G were the lowest note of a C minor triad (see the fourth chord in Example 6), the lead-sheet symbol would be "Cm/G." This topic will be explored more below in the section titled "Triadic Inversion and Figures."

# Triad Qualities in Major and Minor

Triads can be built on any note of the major scale, as shown in Example 7. Example 7 is in the key of G major. As you can see in Example 7, triads built on Do, Fa, and Sol in major keys are major. This is shown in this example with the letter name of the triad's root capitalized. Triads built on Re, Mi, and La are minor. This is shown with a lowercase letter "m" after the capital letter name of the triad's root. Triads built on Ti are diminished; this is shown with a superscript "o," which you might know as the degree symbol.
These triadic qualities do not change in different keys; in other words, the quality of a triad built on Do is always major in major keys, no matter which major key a musical work is in. Triads can be built on any note of the minor scale, as shown in Example 8.

To build a triad from a lead-sheet symbol, you need to be aware of a triad's root and quality. We will look at its bass note in the next section, titled "Triadic Inversion and Figures." Let's say that, for example, we wanted to spell a major triad. We would complete the following steps:

1. Draw the root on the staff
2. Draw notes a third and fifth above the root (i.e. draw a snowperson)
3. Think of (or write down) the key signature of the triad's root
4. For a major triad, write any accidentals from the key signature if notes in that key signature appear in the triad
5. For a minor, diminished, or augmented triad, add additional accidentals to alter the chord's third and/or fifth when appropriate

Example 9 shows this process for a D major triad: first, the note D, the chord's root, is drawn on the staff. Second, a snowperson is drawn—an F and A, the notes a third and a fifth above the D. Third, the key signature of D major has been recalled. D major has two sharps, F♯ and C♯. Fourth, a sharp (♯) has been added to the left of the F, because F♯ is in the key signature of D major. No C♯ was necessary because there is no C in the chord.

Let's complete this process for an A♭ minor triad (A♭m), as seen in Example 10. First, the note A♭ is written because it is the root of the triad. Second, a snowperson is drawn; in other words, the notes C and E are added because they are a generic third and fifth respectively above A♭. Third, the key signature of A♭ major is recalled. A♭ major has four flats: B♭, E♭, A♭, and D♭. Fourth, E♭ is added, because it is in the key signature of A♭ major. No B♭ or D♭ is needed, because those notes aren't in an A♭ triad. Now we have successfully spelled an A♭ major triad (A♭, C, and E♭). Minor triads contain a minor third, which is one half step smaller than a major third. Therefore, our final step is to lower the chord's third (the C) by a half step (to a C♭). Now we have an A♭ minor triad (A♭, C♭, and E♭).

Don't forget that diminished triads have a minor third and a diminished fifth, meaning you have to lower both the third and the fifth by a half step from a major triad. An augmented triad has a major third and an augmented fifth, so its fifth must be raised by a half step from a major triad.

# Triadic Inversion and Figures

As mentioned previously, musicians often prioritize the note that is in the bass voice, often simply called the "bass" by musicians, which is the lowest part (or voice) of a composition, regardless of what instrument or voice type is singing or playing that lowest note. Example 11 shows an A major triad with three different notes in the bass: an A major triad consists of three notes, the root (A), the third (C♯), and the fifth (E). When a triad is stacked in thirds (i.e. "snowperson form"), we say the triad is in root position. The bass note in root position is the root. Chords that do not have the root in the bass are said to be inverted. When the third appears in the bass we say the triad is in first inversion, and when the fifth appears in the bass we say the triad is in second inversion. It is important to note that the bass voice of the chord is NOT the same thing as the chord's root.
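To make the distinction between root and bass concrete, here is a small sketch (my addition, not part of the chapter) that spells a triad from a root and quality using semitone intervals, then rotates it through its inversions. The note-name table and interval pairs are simplifying assumptions of this illustration: real spelling logic would track letter names and accidentals (so that A♭, for instance, is spelled with flats), not just raw semitones.

```python
# Sketch: spell a triad as pitch classes and rotate it through inversions.
# Semitone intervals above the root for each quality (third, fifth):
QUALITIES = {
    "major":      (4, 7),   # major third, perfect fifth
    "minor":      (3, 7),   # minor third, perfect fifth
    "diminished": (3, 6),   # minor third, diminished fifth
    "augmented":  (4, 8),   # major third, augmented fifth
}
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def spell_triad(root: str, quality: str) -> list[str]:
    """Return [root, third, fifth] as pitch-class names."""
    r = NOTE_NAMES.index(root)
    third, fifth = QUALITIES[quality]
    return [NOTE_NAMES[r], NOTE_NAMES[(r + third) % 12], NOTE_NAMES[(r + fifth) % 12]]

triad = spell_triad("A", "major")
print("root position:", triad)                   # ['A', 'C#', 'E'] -- bass = root
print("1st inversion:", triad[1:] + triad[:1])   # bass = third (C#)
print("2nd inversion:", triad[2:] + triad[:2])   # bass = fifth (E)
# The root stays A in every case; only the bass note changes.
```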
The root of an A major triad is always A, regardless of whether the triad is in root position, first inversion, or second inversion. However, the bass voice changes between these inversions, from A to C♯ to E, as seen in Example 11. You might think of first inversion triads as looking like a snowperson whose feet have been moved above their head; a second inversion triad looks like a snowperson whose head has been moved to where their feet would normally appear. Example 12 demonstrates this similarity.

Sometimes musicians use lead-sheet notation to indicate inversions, as seen in Example 11. However, most of the time we do not use lead-sheet notation in the study of Western classical music. Instead, we use figured bass symbols to indicate inversion. Figured bass uses Arabic numerals and some symbols which indicate intervals above a bass (NOT a root) note. These are turned into chords by musicians. Example 13 shows the full figured bass symbols for triads underneath their lead-sheet symbols: as you can see in Example 13, a root position triad has a third and a fifth above the bass. A first inversion triad has a third and a sixth above the bass, while a second inversion triad has a fourth and a sixth above the bass. In figured bass, the larger numerals (intervals) always appear above the smaller ones.

However, many centuries ago musicians abbreviated the figured bass symbols for triads in order to save time and supplies (paper and ink were very expensive before the industrial revolution). Example 14 shows the abbreviated figured bass symbols for triads that we usually use today underneath their lead-sheet symbols: as you can see, no symbol appears for root position. First inversion triads are abbreviated with the number "6," while a second inversion triad keeps its full figures to distinguish it from a first inversion triad.

When musicians turn figured bass symbols into chords—either on paper or in performance—this is called realizing the symbols. Example 15 shows the process of realization for several root position triads: as seen in Example 15, an E♭ appears with no figured bass symbol next to it. Therefore, we can assume that we are realizing an E♭ major triad in root position. This chord is realized (written out with notes) in the next measure. In measure 3 we see an E♭° below the staff. We can understand that notation to mean that we are realizing an E♭ diminished triad in root position. This chord is realized in the next measure.

Example 16 shows the process of realization for a triad in first inversion: as seen in Example 16, we first see a Gm6 triad. This means that we must realize a G minor triad in first inversion. The root of the chord, G, has been placed in the first measure of the example. In the second measure of Example 16, a G minor triad in root position has been realized. In the third measure of Example 16 the third of the chord (the B♭) is in the bass; now the triad is in first inversion. The last measure of Example 16 is the correct "answer," or realization, of this chord symbol.

Example 17 shows the process of realization for a triad in second inversion: as seen in Example 17, we see a B $\begin{smallmatrix}6\\4\end{smallmatrix}$ triad—a B major triad in second inversion. The root of the chord, B, has been placed in the first measure of the example. In the second measure of Example 17, a B major triad in root position has been realized. In the third measure of Example 17, the fifth of the chord (the F♯) appears in the bass; this chord is now in second inversion.
The last measure of Example 17 is the correct "answer," or realization, of this chord symbol.

# Identifying Triads, Doubling, and Spacing

Triads are identified according to their root, quality, and inversion. Example 18 shows a triad in root position for the process of identification. You can identify triads in five steps:

1. Identify and write its root
2. Imagine the major key signature of its root
3. Identify and write its quality
4. Identify its inversion
5. Write the appropriate figured bass figures if applicable

To identify this triad, you first identify and write its root. Because the triad is in root position, its lowest note is its root—in this case C♯. Now you can identify and write its quality. To do this, you will need to imagine the major key signature of its root. The key of C♯ major has seven sharps (every note is sharp). Therefore, E and G would be sharp in a C♯ major key. Instead, both of these notes have been lowered by a half step. Therefore, this triad is diminished. Next, you need to identify the triad's inversion. The triad is stacked in thirds and is therefore in root position. No figured bass figures are needed. We would correctly identify this triad as a C♯° chord.

Example 19 shows a triad in inversion and the process of identification. To identify this triad, you must either write it or imagine it in root position. This has been shown in the second measure of Example 19. Once it is in root position you can identify the root, which in this case is D. Now you can imagine the key signature of its root, D, which has two sharps (F♯ and C♯). F is not sharp; therefore this triad must be minor. Finally, you can identify the inversion of the triad from the original example (not your imagined or written root position version). The original example is in first inversion. Therefore we would correctly identify this triad as a Dm6 triad.

Because musicians accept octave equivalence, the doubling of notes does not affect a triad's identification. Example 20 shows several different triads with octave doublings and their correct identification. As you can see, the identification of these triads is the same, regardless of octave doublings, even when more than one clef is used. Furthermore, spacing the notes more than an octave apart does not affect a triad's identification. Notes that are written in closed spacing appear the closest they can be to one another. Example 21 shows Example 11 once more: as seen in Example 21, chords can be inverted and still be in closed spacing. Notes can also be spaced further apart, which is called open spacing. Example 22 shows two triads in open spacing: as you can see in Example 22, each of the notes of the triad appears, but they are widely spaced across several different octaves within a grand staff.

Doublings and open spacing can be combined, as seen in Example 23. Neither of these factors will affect how you identify these triads. In order to identify them you need to either imagine or write the notes into a triad in closed spacing without any doublings, as we did in Example 19.

Online Resources

Assignments from the Internet

1. Triad Root Position Identification
2. Triad Inversions Identification
3. Triad Identification and Writing in Major and Minor Keys
4. Triad Root Position Construction
5. Triad Inversions Construction

Assignments

Assignments for this chapter are in progress.
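Finally, for readers who like code, here is a companion sketch (again my addition, under the same pitch-class simplifications as the earlier one) that inverts the process: given three pitch-class names with the bass first, it finds the root, quality, and inversion by testing which note stacks the other two into a third and a fifth, mirroring the five identification steps above.

```python
# Sketch: identify a triad's root, quality, and inversion from its notes.
QUALITIES = {
    (4, 7): "major", (3, 7): "minor", (3, 6): "diminished", (4, 8): "augmented",
}
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def identify(notes):
    """notes: three pitch-class names, lowest (bass) first."""
    pcs = [NOTE_NAMES.index(n) for n in notes]
    bass = pcs[0]
    # Try each note as a candidate root until the other two stack as thirds.
    for root in pcs:
        third_fifth = tuple(sorted((p - root) % 12 for p in pcs if p != root))
        if third_fifth in QUALITIES:
            inversion = {0: "root position",
                         third_fifth[0]: "first inversion",
                         third_fifth[1]: "second inversion"}[(bass - root) % 12]
            return NOTE_NAMES[root], QUALITIES[third_fifth], inversion
    return None  # not a triad stackable in thirds

print(identify(["C#", "E", "G"]))  # ('C#', 'diminished', 'root position')
print(identify(["F", "D", "A"]))   # ('D', 'minor', 'first inversion')
```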
# A solid compound XY has NaCl structure. If the radius of the cation is 100 pm, the radius of the anion $\left( Y^{-} \right)$ will be:

Option 1) 275.1 pm
Option 2) 322.5 pm
Option 3) 241.5 pm
Option 4) 165.7 pm

In the NaCl (rock-salt) structure, the cations occupy the octahedral holes of a close-packed array of anions, so the limiting radius ratio is

$\frac{r_{+}}{r_{-}} = 0.414$

Therefore

$r_{-} = \frac{r_{+}}{0.414} = \frac{100\ \text{pm}}{0.414} \approx 241.5\ \text{pm}$

Option 1) 275.1 pm — This option is incorrect
Option 2) 322.5 pm — This option is incorrect
Option 3) 241.5 pm — This option is correct
Option 4) 165.7 pm — This option is incorrect
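As a quick check of the arithmetic (my addition, not part of the original solution, and relying on the radius-ratio reasoning above):

```python
# Limiting radius ratio for octahedral coordination (NaCl structure).
RADIUS_RATIO = 0.414

r_cation = 100.0                    # pm, given
r_anion = r_cation / RADIUS_RATIO   # pm
print(f"r(Y-) = {r_anion:.1f} pm")  # r(Y-) = 241.5 pm -> Option 3
```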