I’ve had some fun brainstorming sufficient conditions on the order of each ODE and the associated eigenvalues that would lead naturally to a single linear ODE encompassing both solution spaces. Playing there has me nearly convinced that…oh wait, we’re saving the cool punchlines, aren’t we. 🙂

I wonder if anyone has written about this phenomenon in the vast literature of spurious proofs that 0 = 1.

If you take the general antiderivative, then when integration by parts “fails,” a less-than-careful reading of the resulting formula seems to imply that 0 equals a nonzero constant. The examples of integration by parts failing that you commonly see have *everything* cancel out, and that is often described [inaccurately!] as reducing to the equation 0 = 0. I love this example, which doesn’t fit that mold and exposes the sloppiness in drawing the 0 = 0 conclusion in the other cases.
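For concreteness, here is the standard instance of the phenomenon (I’m assuming this is the kind of computation meant): integrating $\int \frac{1}{x}\,dx$ by parts with $u = \frac{1}{x}$ and $dv = dx$ gives

```latex
\int \frac{1}{x}\,dx
  \;=\; \frac{1}{x}\cdot x \;-\; \int x\cdot\left(-\frac{1}{x^{2}}\right)dx
  \;=\; 1 + \int \frac{1}{x}\,dx .
```

Cancelling the integral from both sides “proves” 0 = 1. The resolution, of course, is that each side denotes a family of antiderivatives, and two antiderivatives of the same function may differ by a nonzero constant.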

Ξ’s comment hints at some cool linear algebra content, but even for fixed , the issue is provocative.

For example, suppose two functions $f$ and $g$ satisfy

$f' = af$ and $g' = bg$ for constants $a$ and $b$.

Under what circumstances will their sum also have a derivative that is a constant multiple of the original sum? What about their product? I want this set of functions to form something nice group- or algebra-wise, but I’m not certain if it does.
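A quick check, writing the hypothesis (as I read it) as $f' = af$ and $g' = bg$ for constants $a$ and $b$:

```latex
(f+g)' = af + bg
       \quad\text{(a constant multiple of } f+g \text{ only when } a = b,
       \text{ for generically chosen } f, g\text{)},
\qquad
(fg)'  = f'g + fg' = (a+b)\,fg .
```

So the family is closed under products, with the constants adding (just as $e^{ax}e^{bx} = e^{(a+b)x}$), but sums generally escape it unless the two constants agree.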