Throughout the history of software engineering, organizational theory, and systems design, a few “laws” and principles have stood the test of time. They offer cautionary insights about teams, measurement, complexity, and how people and systems interact. Below is an overview of some of the most cited, along with practical examples and references.


1. Conway’s Law

Definition: “Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.” — Melvin E. Conway, 1968.

Source: First published in Conway’s paper “How Do Committees Invent?” (Datamation magazine, April 1968).

Example: If a company has three separate teams that don’t communicate well, they might produce three loosely integrated subsystems rather than one cohesive system. For instance, a large company with separate web, mobile, and backend teams might end up with disjointed user experiences.


2. Goodhart’s Law

Definition: “When a measure becomes a target, it ceases to be a good measure.” — A popular paraphrase (often credited to anthropologist Marilyn Strathern) of a principle named after economist Charles Goodhart, 1975.

Source: First articulated by Goodhart in the context of economic policy: “Problems of Monetary Management: The UK Experience.”

Example: In software, if you measure developers by lines of code written, they might write more (but lower quality) code just to hit the metric. The measure loses value as it’s gamed.
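The lines-of-code trap can be made concrete. In this hypothetical sketch, two functions behave identically, but a naive line-count metric scores the padded one as several times "more productive":

```python
# Hypothetical illustration of Goodhart's Law: once "lines of code"
# becomes the target, the metric rewards the worse implementation.

concise = """\
def total(values):
    return sum(values)
"""

padded = """\
def total(values):
    result = 0
    index = 0
    while index < len(values):
        result = result + values[index]
        index = index + 1
    return result
"""

def loc(source):
    # The naive metric: count non-blank source lines.
    return sum(1 for line in source.splitlines() if line.strip())

def load(source):
    # Turn a source string into a callable function.
    namespace = {}
    exec(source, namespace)
    return namespace["total"]

# Both behave identically...
assert load(concise)([1, 2, 3]) == load(padded)([1, 2, 3]) == 6
# ...but the metric prefers the padded version.
print(loc(concise), loc(padded))  # → 2 7
```

The metric hasn't measured productivity at all; it has measured willingness to game the metric.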


3. Shirky Principle

Definition: “Institutions will try to preserve the problem to which they are the solution.” — A statement by Clay Shirky; the name “Shirky Principle” was coined by Kevin Kelly.

Source: Kevin Kelly’s essay “The Shirky Principle” (The Technium, 2010), which quotes Shirky and draws on ideas from his book “Here Comes Everybody” (2008).

Example: A support department might avoid fully solving a root issue (like a bug) because their jobs rely on handling the resulting tickets.


4. Pareto Principle

Definition: “Roughly 80% of consequences come from 20% of the causes.” — Named after Vilfredo Pareto, 1896.

Source: Pareto first observed this pattern when he found 80% of Italy’s land was owned by 20% of the population.

Example: In debugging, often 80% of bugs come from 20% of the code. Prioritizing that 20% can deliver outsized impact.
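A quick way to see this in practice is to rank defect counts per module. The numbers below are synthetic, chosen to mimic a typical Pareto-shaped distribution:

```python
# Synthetic sketch of a Pareto-shaped bug distribution: a handful
# of modules account for most of the defects. Module names and
# counts are invented for illustration.

bug_counts = {
    "parser.py": 48, "network.py": 33,          # the "vital few"
    "auth.py": 6, "cache.py": 5, "cli.py": 4,   # the "trivial many"
    "config.py": 2, "logging.py": 1, "utils.py": 1,
    "models.py": 0, "version.py": 0,
}

total = sum(bug_counts.values())
ranked = sorted(bug_counts.values(), reverse=True)
top_20pct = ranked[: len(ranked) // 5]  # top 2 of 10 modules

share = sum(top_20pct) / total
print(f"Top 20% of modules hold {share:.0%} of the bugs")  # → 81%
```

Ranking like this is often the cheapest way to decide where review and test effort should go first.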


5. Brooks’s Law

Definition: “Adding manpower to a late software project makes it later.” — Fred Brooks, “The Mythical Man-Month” (1975).

Source: Frederick P. Brooks Jr.’s classic book on software project management.

Example: If a 5-person team is behind schedule, adding 5 more people often slows progress — the new hires need onboarding, and communication overhead increases.
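Brooks’s point can be made quantitative: pairwise communication paths grow as n(n−1)/2, so doubling a team more than quadruples its coordination overhead.

```python
# Communication channels in a fully connected team of n people:
# every pair may need to coordinate, giving n*(n-1)/2 paths.

def channels(n: int) -> int:
    """Number of pairwise communication paths in a team of n people."""
    return n * (n - 1) // 2

for size in (5, 10):
    print(size, channels(size))
# A 5-person team has 10 channels; at 10 people it jumps to 45.
```

The new hires don’t just consume onboarding time; they multiply the paths along which misunderstandings can occur.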


6. Hofstadter’s Law

Definition: “It always takes longer than you expect, even when you take into account Hofstadter’s Law.” — Douglas Hofstadter, “Gödel, Escher, Bach” (1979).

Source: Douglas Hofstadter’s Pulitzer-winning book.

Example: A team estimates a new feature will take 3 weeks. Knowing about Hofstadter’s Law, they add a week… but it still takes 6 weeks.


7. Gall’s Law

Definition: “A complex system that works is invariably found to have evolved from a simple system that worked.” — John Gall, “Systemantics: How Systems Work and Especially How They Fail” (1977).

Source: Gall’s book on system design and failure.

Example: The World Wide Web started as a simple hypertext system at CERN before evolving into the complex web platform we use today.


8. Spolsky’s Law (Law of Leaky Abstractions)

Definition: “All non-trivial abstractions, to some degree, are leaky.” — Joel Spolsky, “The Law of Leaky Abstractions” (2002).

Source: Article on Spolsky’s blog, Joel on Software.

Example: TCP abstracts reliable data transfer, but developers still sometimes need to handle packet loss or timeouts manually — the abstraction “leaks.”
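A minimal sketch of that leak, using Python’s standard socket library: TCP promises a reliable byte stream, yet the caller still has to decide what happens when the network underneath stalls or refuses the connection. The function name and return conventions are illustrative, not any particular API.

```python
import socket

def fetch_banner(host: str, port: int, timeout: float = 2.0) -> str:
    """Read up to 1 KiB from a TCP service, surfacing the leaks."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as conn:
            conn.settimeout(timeout)
            return conn.recv(1024).decode(errors="replace")
    except socket.timeout:
        # The "reliable stream" leaked: the caller must decide
        # whether to retry, back off, or give up.
        return "<timed out>"
    except OSError as exc:
        # DNS failures, refused connections, resets — all leaks too.
        return f"<connection failed: {exc}>"
```

Every `except` clause here is a place where the abstraction’s promise of reliability hands the problem back to the programmer.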


9. Hyrum’s Law

Definition: “With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody.” — Hyrum Wright, Google.

Source: Stated at hyrumslaw.com and popularized through talks and the book “Software Engineering at Google” (O’Reilly, 2020).

Example: If your API returns an error message in a specific format, some client will inevitably parse it — so you can’t easily change it later.
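A hedged sketch of how that dependency forms. Both the server’s error format and the client’s parser below are hypothetical, but once a client like this exists, the “unspecified” string format has become part of the contract:

```python
import re

def api_error(code: int) -> str:
    # The server's error format — never promised in any contract.
    return f"ERROR {code}: resource not found"

def client_get_code(message: str) -> int:
    # ...which some client now depends on. Changing the format
    # breaks this caller, contract or no contract.
    match = re.match(r"ERROR (\d+):", message)
    if match is None:
        raise ValueError("unrecognized error format")
    return int(match.group(1))

print(client_get_code(api_error(404)))  # → 404
```

This is why mature APIs return structured error codes in a machine-readable field rather than leaving clients to scrape prose.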


10. Kernighan’s Law

Definition: “Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.” — Brian Kernighan.

Source: Attributed to Kernighan in various writings and “The Elements of Programming Style” (1974).

Example: Using tricky, clever one-liners may impress, but will cost hours when trying to debug why something breaks at 2 a.m.
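An illustration in the spirit of the law. The “clever” version below abuses `sum()` to concatenate lists — it works, but its mechanism is obscure (and quadratic in the number of sublists); the plain version is the one the 2 a.m. maintainer can actually step through:

```python
def flatten_clever(nested):
    # Clever trick: sum() with a list start concatenates sublists.
    # Obscure to read, and O(n^2) because each + copies the result.
    return sum(nested, [])

def flatten_clear(nested):
    # Boring, obvious, and linear-time.
    flat = []
    for row in nested:
        flat.extend(row)
    return flat

assert flatten_clever([[1, 2], [3]]) == flatten_clear([[1, 2], [3]]) == [1, 2, 3]
```

Both pass the same tests today; only one of them will be easy to fix when the input turns out to contain something unexpected.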


11. Knuth’s Law

Definition: “Premature optimization is the root of all evil.” — Donald Knuth, “Structured Programming with go to Statements” (1974).

Source: Knuth’s famous paper cautioned that small optimizations early on can make code unreadable and fragile.

Example: Spending days micro-optimizing an algorithm that isn’t a bottleneck wastes time better spent on clear design.
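The practical corollary is to measure before optimizing. A small sketch using the standard-library profiler — the workload functions are invented stand-ins for real code:

```python
# Profile first, optimize second: let cProfile show where the
# time actually goes before touching anything.

import cProfile
import io
import pstats

def slow_part():
    # Stand-in for the real bottleneck.
    return sum(i * i for i in range(200_000))

def fast_part():
    # Stand-in for code not worth optimizing.
    return len("hello")

def workload():
    fast_part()
    return slow_part()

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

stats = io.StringIO()
pstats.Stats(profiler, stream=stats).sort_stats("cumulative").print_stats(5)
print(stats.getvalue())  # the report names the real hot spot
```

If the profile doesn’t name a function, hand-optimizing it is effort spent making the code harder to read for no measurable gain.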


12. Broken Window Theory

Definition: “Neglect accelerates decay.” — Adapted for code from the urban-crime theory of James Q. Wilson & George L. Kelling (1982).

Source: Popularized for software in “The Pragmatic Programmer” (1999) by Andrew Hunt and David Thomas.

Example: If you leave one messy, uncommented piece of code in a repo, it signals neglect — soon, more bad patterns appear.