If you’ve spent any time with circuit diagrams, Ohm’s Law, or simulation tools, you’ve undoubtedly encountered the concept of conventional current. Despite its foundational role in electrical engineering, conventional current is, in a sense, a historical misstep, one we’ve embraced for over two centuries. But how did this convention come to be, and why does it persist even though it points opposite to the way electrons actually move in a wire?
Let’s rewind the clock and explore how this legacy direction of current came to define our engineering language.
The Historical Misunderstanding
In the 18th century, long before electrons were discovered, scientists observed that some materials allowed “something” to move and carry what we now know as electric charge. Benjamin Franklin, an early pioneer in the study of electricity, proposed a model in which electrical fluid flowed from areas of excess to areas of deficit—essentially, from the positive to the negative terminal.
Since Franklin had no way of knowing about electrons (they weren’t discovered until 1897, nearly a century and a half later), his choice of direction was essentially a coin flip. He assumed this “electric fluid” moved from positive to negative, and thus the concept of conventional current was born.
It was a logical guess at the time. Unfortunately, it turned out to be backward.
Enter the Electron
When electrons were discovered by J.J. Thomson in 1897, it became clear that current in metals is due to the movement of negatively charged electrons. In the external part of a typical DC circuit, those electrons actually drift from the negative terminal to the positive terminal.
But by the time this was understood, the entire engineering framework—from textbooks to schematics—had already been built around Franklin’s convention. Reversing that entire system would have been too costly and confusing. So we stuck with it.
Even today, conventional current is defined as the flow of positive charge, moving from higher potential (positive terminal) to lower potential (negative terminal). This remains the standard in circuit theory, component datasheets, and electrical engineering education.
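To make the two pictures concrete, here is a minimal sketch in Python (the 9 V source and 1 kΩ resistor are just illustrative values, not taken from any particular circuit) that computes the current with Ohm’s law and states both descriptions of the same physical situation:

```python
# A single DC source driving a single resistor (values are arbitrary examples).
V = 9.0       # source voltage, volts
R = 1_000.0   # resistance, ohms

I = V / R     # Ohm's law gives the magnitude of the current

print(f"Current magnitude: {I * 1e3:.1f} mA")
print("Conventional current: from the + terminal, through the resistor, to the - terminal.")
print("Electron flow:        electrons drift from the - terminal, through the resistor, to the + terminal.")
```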
Why We Still Use It
You might ask: Why not just switch to electron flow to reflect physical reality?
The answer is practicality. For most circuit analysis, especially at the macro scale, it doesn’t matter whether we think of current as positive charge moving forward or negative charge moving backward. The mathematics and the behavior of the circuit components are symmetric under this inversion.
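One way to see that symmetry is the microscopic expression for current, I = n·q·v·A (carrier density, charge per carrier, drift velocity, cross-sectional area): flipping the sign of both the charge and the drift velocity leaves the product unchanged. A quick sketch, using textbook-style numbers for a copper wire (the drift speed here is only illustrative):

```python
# Drift-current model: I = n * q * v * A.
# Describing the carriers as positive charge moving forward or as negative
# charge moving backward flips the sign of both q and v, so I is unchanged.

n = 8.5e28      # carrier density of copper, carriers per m^3 (typical textbook value)
q = 1.602e-19   # elementary charge, coulombs
A = 1e-6        # wire cross-section, m^2 (1 mm^2)
v = 7.4e-5      # drift speed, m/s (illustrative)

I_positive_carriers = n * (+q) * (+v) * A   # conventional-current picture
I_negative_carriers = n * (-q) * (-v) * A   # electron-flow picture

print(I_positive_carriers, I_negative_carriers)   # identical, roughly 1 A here
assert I_positive_carriers == I_negative_carriers
```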
Additionally, the convention is baked into the tools of the trade: schematic symbols such as the diode’s arrow and the emitter arrow on a BJT point in the direction of conventional current, and sign conventions like the right-hand rule and the passive sign convention assume it. And in electrolytes, plasmas, and doped semiconductors, positive charge carriers really do move, so treating current as positive charge in motion is not purely a fiction.
When It Does Matter
That said, in certain contexts—like solid-state physics, semiconductor design, and cathode-ray tube analysis—the actual direction of electron flow becomes significant. In these cases, understanding both models (conventional and electron flow) is key to avoiding conceptual errors.
For example, in a P-N junction, electrons and holes drift in opposite directions, yet both contribute to current in the same conventional direction. The resulting current is still described using conventional current, even though the physics at the microscopic level involves two kinds of carriers: negative electrons and positive “holes.”
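As a rough sketch of that bookkeeping (the field, doping, and mobility numbers below are only silicon-like assumptions, not values from any particular device), the drift current density J = q(n·μn + p·μp)·E sums an electron term and a hole term: the electrons drift against the field and the holes drift with it, yet both terms add in the conventional direction.

```python
# Drift current density in a semiconductor: J = J_n + J_p.
# Electrons (charge -q) drift against the field, holes (charge +q) drift with it,
# yet both contributions to the conventional current point the same way.

q = 1.602e-19   # elementary charge, C
E = 1e3         # electric field, V/m (illustrative)
n = 1e22        # electron concentration, m^-3 (assumed)
p = 1e22        # hole concentration, m^-3 (assumed)
mu_n = 0.135    # electron mobility, m^2/(V*s), roughly silicon
mu_p = 0.048    # hole mobility, m^2/(V*s), roughly silicon

v_n = -mu_n * E    # electron drift velocity: opposite the field
v_p = +mu_p * E    # hole drift velocity: along the field

J_n = (-q) * n * v_n   # negative charge * negative velocity -> positive contribution
J_p = (+q) * p * v_p   # positive charge * positive velocity -> positive contribution

print(f"J_n = {J_n:.3g} A/m^2, J_p = {J_p:.3g} A/m^2, total = {J_n + J_p:.3g} A/m^2")
```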
Conclusion
Conventional current is a bit like the QWERTY keyboard layout: a historical artifact that became so entrenched in our systems that it’s easier to stick with it than to rewire everything. For electrical engineers, it’s less about right or wrong and more about consistency and clarity.
Understanding the history behind conventional current adds an extra layer of insight—and a touch of humility—to the work we do. It’s a reminder that engineering is often as much about tradition as it is about innovation.
Your Turn
Do you prefer thinking in terms of conventional or electron current? Have you ever run into trouble because of the difference? Go to Facebook and share your experiences in the comments—we’d love to hear how you navigate this historical quirk of our field.