A common example of integration by parts used in many Calculus II classes has students compute $\int e^x \sin x \, dx$ by integrating by parts twice, then rearranging terms to arrive at a solution. This technique is handy for many functions whose derivatives eventually repeat, that is, functions satisfying
$$f^{(n)}(x) = c\, f(x)$$
for some integer $n$ and some constant $c$. (Question: Is there a name for such functions? I feel like I should know this.)
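For anyone who hasn’t seen it: writing $I = \int e^x \sin x \, dx$ and integrating by parts twice (differentiating the trig factor each time) gives
$$I = e^x \sin x - \int e^x \cos x \, dx = e^x \sin x - e^x \cos x - I,$$
and solving this equation for $I$ yields $I = \tfrac{1}{2} e^x (\sin x - \cos x) + C$.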
As part of the unit on Laplace transforms in my Differential Equations course this past semester, we studied the convolution of two functions:
$$(f * g)(t) = \int_0^t f(\tau)\, g(t - \tau)\, d\tau.$$
Convolution is, among other things, a very useful way of computing inverse Laplace transforms, and appears in many other applications in functional analysis. One of the first exercises asked for the following:
If $f(t) = g(t) = \sin t$, what is $(f * g)(t)$?
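(This, presumably, is the point of the exercise: by the convolution theorem, $\mathcal{L}\{f * g\} = \mathcal{L}\{f\} \cdot \mathcal{L}\{g\}$, and since $\mathcal{L}\{\sin t\} = \frac{1}{s^2+1}$, this convolution computes the inverse transform of $\frac{1}{(s^2+1)^2}$.)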
This requires one to compute
$$\int_0^t \sin\tau \, \sin(t - \tau)\, d\tau.$$
The “best” way to handle this is to use the difference formula for the sine function, $\sin(t - \tau) = \sin t \cos\tau - \cos t \sin\tau$, and rewrite the integral as
$$\sin t \int_0^t \sin\tau \cos\tau \, d\tau - \cos t \int_0^t \sin^2\tau \, d\tau$$
and integrate each term (either by substitution or by using an identity, then substitution). Another approach, which several of my students took, is to try integrating by parts twice, as in the first example above. Unfortunately, the technique fails in this case. (Try it yourself and see!) One student came to me wondering why. My answer: I don’t know.
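(For completeness, the “best” route finishes quickly: $\int_0^t \sin\tau\cos\tau\,d\tau = \tfrac{1}{2}\sin^2 t$ by substitution, $\int_0^t \sin^2\tau\,d\tau = \tfrac{t}{2} - \tfrac{1}{2}\sin t\cos t$ by the half-angle identity, and combining the two terms collapses, via $\sin^2 t + \cos^2 t = 1$, to $(f * g)(t) = \tfrac{1}{2}(\sin t - t\cos t)$.)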
After discussing the problem for a while, we decided (i.e., I decided) this would be a great problem for her to explore. I think the more general question about integrating products of these functions by parts could be an interesting exploration for both of us. I haven’t had time to play around with the problem since our initial discussion, but I suspect there’s something deeper lurking in the background. As soon as we find out, I’ll post a follow-up.
[This problem was suggested by Jolie Roat, and I believe her work will become a presentation in the spring, so don’t go spoiling all the fun by posting a complete solution!]
December 20, 2007 at 4:29 pm
It’s funny: this question does seem simple at first, but the more I look at it, the more maybe-not-earth-shattering-but-certainly-interesting questions pop up.
For example, suppose two functions satisfy
$$f^{(n)}(x) = a\, f(x)$$
and
$$g^{(n)}(x) = b\, g(x).$$
Under what circumstances will their sum also have a derivative that is a constant multiple of the original sum? What about their product? I want this set of functions to form something nice group- or algebra-wise, but I’m not certain if it does.
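For instance, $e^x$ satisfies $f' = f$ and $\sin x$ satisfies $g'' = -g$; their product $h(x) = e^x \sin x$ turns out to satisfy $h^{(4)} = -4h$, so at least some products stay in the class, though with a new order and a new constant. Whether that always happens, I don’t yet know.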
December 21, 2007 at 4:43 am
What a great example!
I wonder if anyone has written about this phenomenon in the vast literature of spurious proofs that 0 = 1.
If you work out the general antiderivative, then when integration by parts “fails,” a less-than-100%-careful reading of the resulting formula seems to imply that 0 = a nonzero constant. The examples one commonly sees of integration by parts failing lead to *everything* cancelling out, and one often describes that [inaccurately!] as reducing to the equation 0 = 0. I love this example, which doesn’t fit that mold and points out the sloppiness in drawing the 0 = 0 conclusion in the other cases.
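To make that concrete: writing $I = \int \sin\tau\,\sin(t-\tau)\,d\tau$, two integrations by parts give
$$I = \sin\tau\cos(t-\tau) + \cos\tau\sin(t-\tau) + I,$$
so the $I$’s cancel and the formula appears to assert $0 = \sin\tau\cos(t-\tau) + \cos\tau\sin(t-\tau) = \sin t$, a nonzero constant. Of course $\sin t$ is constant with respect to $\tau$, so there is no genuine contradiction, just no information about $I$.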
Ξ’s comment hints at some cool linear algebra content, but even for fixed $n$, the issue is provocative.
December 21, 2007 at 5:17 am
I hadn’t even meant to imply that n is fixed — I think it might be more interesting when the power is allowed to be arbitrary.
December 21, 2007 at 9:39 am
I agree that the problem becomes even more interesting if we permit different orders for f and g. We know that the solutions of a given homogeneous differential equation form a vector space, but when we attempt to combine solutions of different equations and find a new equation that the combinations satisfy… well, that’s the idea, isn’t it?
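A concrete case: $e^x$ solves $y' = y$ and $\sin x$ solves $y'' = -y$, and every combination $a e^x + b \sin x$ solves the third-order linear equation $y''' - y'' + y' - y = 0$, whose characteristic polynomial $(r-1)(r^2+1)$ is just the product of the two original ones.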
December 21, 2007 at 11:45 am
You’d only get a vector space for a given homogeneous LINEAR differential equation, no? That puts a bit of a crimp in things….
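For instance, $(y')^2 = y^2$ is homogeneous, and its solution set is even closed under scaling: $Ce^x$ and $Ce^{-x}$ both work. But $e^x + e^{-x}$ fails, so no vector space.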
December 21, 2007 at 7:37 pm
Right, right, yes – linear. I was thinking of the equations above, which are indeed linear, but the vector-space property fails for general homogeneous equations.
December 24, 2007 at 9:44 am
I point out that technicality because one obvious way to combine those linear ODEs into a single ODE leads to a nonlinear equation.
I’ve had some fun brainstorming sufficient conditions on the order of each ODE and the associated eigenvalues that would lead naturally to a single linear ODE encompassing both solution spaces. Playing there has me nearly convinced that…oh wait, we’re saving the cool punchlines, aren’t we. 🙂