
It’s pretty common to hear that dy/dx isn’t a fraction, but if that’s the case then why do we treat it as such in a differential equation or an integral? For example, if dy/dx = f(x), then how can we just write it as dy = f(x) dx as though it’s a fraction?

In: Mathematics


In differential equations you can do this during separation of variables, which usually precedes an integration, iirc:

(dy/dx) = x^2 /y^2

y^2 (dy) = x^2 (dx)

Then you can integrate to find y in terms of x
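As a quick numerical sanity check (not part of the original answer): the separated equation y^2 dy = x^2 dx integrates to y^3/3 = x^3/3 + C, so with the assumed initial condition y(0) = 1 the solution is y = (x^3 + 1)^(1/3). A crude Euler integration of the original ODE should land on that closed form:

```python
# Sanity check of the separation-of-variables result for dy/dx = x^2 / y^2.
# Separating gives y^2 dy = x^2 dx, hence y^3/3 = x^3/3 + C.
# With the (assumed) initial condition y(0) = 1, the solution is
# y = (x^3 + 1)^(1/3). We Euler-step the raw ODE and compare.

def euler_solve(f, x0, y0, x_end, steps):
    """Crude forward-Euler integration of dy/dx = f(x, y)."""
    h = (x_end - x0) / steps
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)
        x += h
    return y

y_numeric = euler_solve(lambda x, y: x**2 / y**2, 0.0, 1.0, 1.0, 100_000)
y_exact = (1.0**3 + 1.0) ** (1 / 3)  # y(1) from the separated solution
print(y_numeric, y_exact)  # both near 2^(1/3) ≈ 1.2599
```

The two values agree to several decimal places, which is the "it works out" part of treating dy/dx like a fraction here.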

The reason people say it’s not a fraction is because when you get into higher order derivatives, they don’t behave the same way as a simple fraction would. So basically it’s not *always* a fraction, and if you don’t 100% know what you’re doing it’s best to treat it as if it’s not a fraction

“Times dx” represents the inverse operation, i.e. the integral. You can move the dx term around like a divisor; however, that does not mean it is one.

If you asked a pure mathematician they’d probably tell you never to write:

> dy = f(x) dx

A “dx” thing should never be on its own. You should write:

> ∫ dy = ∫ f(x) dx

The “∫” goes with the “dx”; they are two parts of the same thing. Both “∫[thing] dx” and “d[thing]/dx” are *operators.* They are things we do to functions. They mean “take the function and integrate it with respect to x” and “take the function and differentiate it with respect to x.”

*However*, there are some situations where they do behave similarly enough to fractions that we can kind of treat them like that. If we are careful.

dy/dx can be defined as a limit of a fraction:

> dy/dx = limit as Δx -> 0 of Δy / Δx

where Δy and Δx are separate things. So in the right circumstances, depending on what that limit is doing, you can treat the dy/dx part like a fraction. But you might upset some pure mathematicians.
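To make that limit concrete, here is a small sketch (my example, not the answerer’s): compute the fraction Δy/Δx for y = x^2 at x = 3, where the derivative is exactly 6, and watch it converge as Δx shrinks:

```python
# dy/dx as the limit of the fraction Δy/Δx, illustrated with
# y = x^2 at x = 3 (where the derivative is exactly 6).

def difference_quotient(f, x, dx):
    """The finite fraction Δy/Δx before taking the limit."""
    return (f(x + dx) - f(x)) / dx

f = lambda x: x ** 2
for dx in (0.1, 0.01, 0.001, 1e-6):
    print(dx, difference_quotient(f, 3.0, dx))
# The quotients approach 6 as Δx -> 0; that limit is dy/dx.
```

Before the limit is taken, Δy/Δx really is a fraction of two numbers; dy/dx is the value it converges to, which is why the fraction-like manipulations often (but not always) survive.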

ELI5…

There are only two ways of comparing numbers. Subtraction: a – b = ? And quotient: a/b = ?

Now suppose you’re in a lab, you have to test some apparatus. You set a lever on position x0 and get a reading y0. To see what happens you nudge the lever to x0 + Δx (Δx being some small increment or decrement) and get a new reading y1. Wanting to compare numbers (see supra) you form:

Δy = y1 – y0 (the effect of your nudge)

k = Δy/Δx (the ratio of effect to nudge – subtraction is meaningless there)

If the apparatus is reasonably stable you now know that *in the vicinity of x0* you have:

y ~ y0 + k . Δx

Δy ~ k . Δx (equivalently)

Now the idea of differential calculus is to make the nudge infinitesimal, Δx ~ 0 (but Δx ≠ 0). You then get *exact* results (not only approximations) once you get rid of the infinitesimals. Let’s take an example, behavior of x^3 around x0 = 2:

y0 = x0^3 = 2^3 = 8

y = (x0 + Δx)^3 = x0^3 + 3.x0^2. Δx + 3.x0. Δx^2 + Δx^3 (binomial formula)

y = y0 + 3.2^2.Δx + Δx^2.( … )

y = 8 + 12.Δx + terms infinitesimal even when compared to Δx

Finally:

Δy = y – 8 = 12.Δx + yet smaller infinitesimals ~ 12.Δx

Δy/Δx ~ 12 (true for any infinitesimal Δx)

But this being a property of the polynomial x^3 at the number x0 = 2 (no infinitesimals there, it’s just the slope of the tangent) it has to be exactly 12:

Δy/Δx = 12

Or, changing to standard notation, dy/dx = 12.
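A quick numeric check of that worked example (my addition): the quotient Δy/Δx for x^3 near x0 = 2 should settle on the tangent slope 3·x0^2 = 12 as the nudge shrinks:

```python
# Checking the worked example: for y = x^3 near x0 = 2,
# Δy/Δx should settle on the tangent slope 3 * x0^2 = 12
# as the nudge Δx gets smaller.

x0 = 2.0
quotients = []
for dx in (0.1, 0.001, 1e-6):
    dy = (x0 + dx) ** 3 - x0 ** 3
    quotients.append(dy / dx)
    print(dx, dy / dx)  # approaches 12 as dx shrinks
```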

Admittedly that’s a hairy part when you start dabbling in [Nonstandard analysis](https://en.wikipedia.org/wiki/Nonstandard_analysis). The Great Old Ones like Leibniz used infinitesimals somewhat nonchalantly – and were justly criticized for that – but Baron Cauchy later formalized the calculus with the familiar ε – δ reasoning on limits (though he still used infinitesimals!).

I hope all this blather explains the origins of the notation. And it works. Let’s take the chain rule (f and g differentiable, of course):

z = g( f( x ) )

f( x0 + dx) ~ f( x0 ) + k.dx = y0 + k.dx

g( y0 + dy) ~ g ( y0 ) + k’.dy ~ z0 + k’.k.dx

z0 + dz ~ z0 + k’.k.dx

dz ~ k’.k.dx

dz/dx = k’.k = g'( y0 ). f'( x0 ) = g’ ( f( x0 ) ). f’ ( x0 )

It boils down to an elementary property of linear functions: if c = k’.b and b = k.a then c = k’.k.a. And that’s what we do, approximating g and f by their linear parts (tangents).
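The chain-rule derivation above can be spot-checked numerically. In this sketch (my choice of functions, assuming g = sin and f = x^2) the slope of the composite should match the product g'(f(x))·f'(x):

```python
import math

# Numeric spot-check of the chain rule: for z = g(f(x)) with
# g = sin and f = x^2, the slope of the composite should equal
# the product of the individual slopes, g'(f(x)) * f'(x).

def slope(h, x, dx=1e-6):
    """Finite-difference approximation of h'(x)."""
    return (h(x + dx) - h(x)) / dx

f = lambda x: x ** 2
g = math.sin
x0 = 1.0

composite = slope(lambda x: g(f(x)), x0)   # dz/dx directly
product = slope(g, f(x0)) * slope(f, x0)   # k'.k, as in the derivation
print(composite, product)  # both close to cos(1) * 2
```

The two numbers agree, which is exactly the "c = k’.k.a" property of the linear approximations.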

Of course the notation has taken a life of its own (e. g. a basis for vector fields on a manifold would be notated ∂/∂u_i like an operator because vector fields can be, and are, considered as operators) but I hope I’ve managed to convey the original meaning. (Hey, it’s ELI5!)

Note that when reasoning with infinitesimals there is no need for limits the Cauchy way: *an infinitesimal stands for all possible ways to go to zero*. If interested Keisler’s [Elementary Calculus: An Infinitesimal Approach](https://www.math.wisc.edu/~keisler/calc.html) is free to download.

Now I need a beer… 😉

Simply put, “dy/dx” is an instruction to take a very specific limit. It’s not an instruction to calculate 2 numbers called dy and dx and then take the ratio. That’s why we say not to treat them like fractions.

The reason you can do manipulations in differential equations where you write dy/dx=f(x) as dy=f(x)dx is because you eventually write the latter with an integral sign. So what you’re really saying is that dy/dx=f(x) if and only if integral dy = integral f(x) dx.

Now, recall that by the fundamental theorem of calculus integration is the inverse of differentiation. Also recall that multiplication is the inverse of division. So when you adopt the differentials-as-fractions notation, all that’s really happening is this: we aim to undo the derivative, derivatives kind of look like fractions, so we write the inverse of the derivative in the same way that we would write the inverse of a fraction, and it works out nicely.

TL;DR: the fundamental theorem of calculus provides convenient notation.

They aren’t fractions, but because of the way they are defined they can often be manipulated as if they are. But keep in the back of your mind, that whenever it looks like they are being manipulated like fractions, there is actually something else going on.

For your example let’s try to solve for y:

dy/dx = f(x);

You ‘multiply’ by dx:

dy = f(x)dx

And then you kind of hand-wavily integrate both sides (even though it’s dy on one side and dx on the other):

∫ dy = ∫ f(x) dx

y = ∫ f(x) dx

But you can also just think of taking the integral of both sides with respect to x:

∫ (dy/dx) dx = ∫ f(x) dx

On the left side you take the integral of the derivative, which is just the original function y (by the fundamental theorem of calculus). On the right side you just have the integral:

y = ∫ f(x) dx

So you get the same answer.

All manipulations of dy/dx like a fraction can be properly reinterpreted without the fraction manipulation.
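The "integrate both sides with respect to x" step can be sketched numerically (my example): summing (dy/dx)·dx over small steps recovers y, which is the fundamental theorem of calculus doing the real work behind the fraction-like manipulation:

```python
# Summing (dy/dx) * dx over small steps recovers y, per the
# fundamental theorem of calculus. Here y = x^3, so dy/dx = 3x^2,
# and the sum over [0, 1] should give y(1) - y(0) = 1.

def riemann_sum(f, a, b, n):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

total = riemann_sum(lambda x: 3 * x ** 2, 0.0, 1.0, 10_000)
print(total)  # close to 1.0
```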

They kind-of are, but notation makes things confusing. dy and dx are not really values, but operations. The d in dy and dx is supposed to be a lowercase Greek letter delta. Delta is a commonly used shorthand in math and physics for the amount of change in something. This means that dy/dx is the ratio of the amount of change in y to the amount of change in x, which is the derivative. But because they are operations on variables instead of actual variables themselves, there are slightly different rules for when they can be introduced and manipulated.