
Aphorisms on software

Themes Guide - This article is part of a series.
Part : This Article

A compilation of pithy sayings that have a profound effect on software

Overview

Aphorisms are concise statements that summarize a belief. They usually take one of the following forms: the X effect, the principle of X, or X's law of something. These aphorisms usually cannot be proven in a strict sense; they are phrases that proclaim self-evident truths. Nevertheless, studying aphorisms through a software-focused lens leads to a better understanding of the software ecosystem in general. This compilation touches upon the forces shaping the software industry, the teamwork that generates software, the nature of software development, and the psychology of software development.


Moore’s law (on growth of transistors)

The number of transistors in a dense integrated circuit doubles about every two years.

Gordon Moore, who went on to co-found Intel and serve as its CEO, made this observation in 1965. His original prediction was a doubling every year; he revised the period to two years in 1975.
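The doubling rule is easy to turn into arithmetic. A minimal sketch, using the Intel 4004's historical count of roughly 2,300 transistors as the base figure:

```python
# Illustrative arithmetic only: project a transistor count under the
# two-year doubling assumption. Base figures are the Intel 4004
# (roughly 2,300 transistors, released in 1971).
def projected_transistors(base_count: int, base_year: int, year: int) -> int:
    """Transistor count if density doubles every two years."""
    return int(base_count * 2 ** ((year - base_year) / 2))

print(projected_transistors(2300, 1971, 1981))  # 73600, a 32x increase in a decade
```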

Metcalfe’s law (on network effect)

The value of a network grows proportionally to the square of the number of its users

Attributed to Robert Metcalfe, co-inventor of Ethernet.
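The quadratic growth is simple to see in a sketch (illustrative only; the constant of proportionality is assumed to be 1):

```python
# Illustrative only: network value proportional to users squared, with
# the constant of proportionality assumed to be 1.
def network_value(users: int) -> int:
    return users ** 2

# Doubling the user base quadruples the value.
print(network_value(100), network_value(200))  # 10000 40000
```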

Yule’s law (on complementary product)

If two attributes or products are complements, the value/demand of one of the complements will be inversely related to the price of the other complement

This economic principle is commonly attributed to the British statistician George Udny Yule. While not originally formulated for software, Yule's law elegantly explains many pricing and adoption dynamics in modern technology stacks.

In software terms, when two products complement each other (like hardware and software, or a platform and its applications), lowering the price of one tends to increase the demand for the other. This principle underlies many successful business strategies:

  1. Developer tool ecosystems: Companies like JetBrains or Microsoft often provide free or discounted versions of development tools to students and open-source projects. As these developers become proficient with these tools, they create demand for the commercial versions in enterprise settings.

  2. Platform economics: Companies like Apple (iOS) and Google (Android) invest heavily in their operating systems and lower barriers to entry for developers, knowing that a rich application ecosystem increases the value of their hardware products.

  3. Open source strategies: Many companies open-source core technologies while monetizing complementary services. For example, MongoDB provides its database as open-source while charging for hosted services, consulting, and enterprise features.

  4. API pricing models: Companies offering APIs often provide free tiers or developer-friendly pricing, understanding that as developers build dependencies on these APIs, their applications become complementary products that drive long-term revenue.

The law helps explain why software giants often engage in price wars or even offer certain products for free - they’re maximizing the demand for complementary offerings where their profit margins are higher. Understanding this principle helps software architects and product managers make better decisions about which components to build, buy, or integrate, and how to position their products within larger ecosystems.
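The inverse price/demand relationship can be made concrete with a toy model. The constant-elasticity demand function and every parameter below are assumptions of this sketch, not part of the law:

```python
# Toy model (an assumption of this sketch, not part of the law): demand
# with a constant cross-price elasticity against the complement's price.
def complement_demand(base_demand: float, complement_price: float,
                      base_price: float, elasticity: float = -0.5) -> float:
    """Scale demand by (P / P0) ** elasticity; a negative elasticity
    means a cheaper complement raises demand for this product."""
    return base_demand * (complement_price / base_price) ** elasticity

# Halving the complement's price lifts demand about 41% at elasticity -0.5.
print(round(complement_demand(1000, 50, 100)))  # 1414
```

The sign of the elasticity captures the whole idea: when it is negative, dropping the complement's price (e.g. giving away the developer tool) mechanically raises demand for the product you actually monetize.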

Hoff’s law (on scalability)

The potential for scalability of a technology product is inversely proportional to its degree of customization and directly proportional to its degree of standardization

Ted Hoff, Intel's twelfth employee, is credited with the 1971 creation of the world's first general-purpose microprocessor, the Intel 4004. Although the chip was designed for a Japanese calculator company (Busicom), Hoff's general-purpose design enabled Intel to expand its business and usher in a new era of the computing industry. IBM PC-compatible computers and the Ford Model T mass-production assembly line are other famous examples of industry standardization that enabled scale. Despite the correlation between standardization and scale, standardization doesn't necessarily lead to the best product quality. It should be noted that a vertically integrated product that delights consumers can also be scalable and market-dominant without industry standardization: the Apple iPhone vs. Android phones.

Evan’s law (on modularization)

The inflexibilities, incompatibilities, and rigidities of complex and/or monolithically structured technologies could be simplified by the modularization of the technology structures (and processes)

Evan

Brooks’s law (on mythical man-month)

Adding manpower to a late software project makes it later

Fred Brooks's 1975 book "The Mythical Man-Month" draws on his experience leading IBM's System/360 mainframe project and declares that people added to a delayed project will only slow it down further. Certain tasks are not divisible enough to allow divide-and-conquer. There is also a ramp-up period before newcomers can add to the team's output. Communication channels multiply as more people join the team, slowing down decision making. As such, it is usually inaccurate to declare that a complex task of 20 person-days done by one expert can be completed in half the time by a team of two novices. The effect of Brooks's law can be mitigated by asynchronous ramp-up for newcomers, establishing dedicated roles or communication norms, and breaking complex tasks down into repeatable simple ones.
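The communication overhead Brooks describes is quadratic: a team of n people has n(n-1)/2 distinct pairwise channels. A minimal sketch:

```python
# Pairwise communication channels grow quadratically with team size,
# one source of the slowdown Brooks describes.
def communication_channels(team_size: int) -> int:
    """n * (n - 1) / 2 distinct pairwise channels in a team of n."""
    return team_size * (team_size - 1) // 2

for n in (2, 5, 10):
    print(n, communication_channels(n))  # 2->1, 5->10, 10->45
```

Going from 5 to 10 people more than quadruples the channels, which is why doubling a team rarely halves the schedule.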

Parkinson’s law (on work expansion)

Work expands so as to fill the time available for its completion

Coined by Cyril Parkinson in a 1955 essay for The Economist, this law describes the natural human tendency to add extraneous work before a task completes. For example, when the work is not due for another two weeks, extra review and polish will be added even though the extra effort far exceeds the two days of effort the task actually requires. Intermediate milestones and readjustment of the project plan can help mitigate the effect of Parkinson's law.

Yerkes-Dodson law (on performance)

Performance increases with physiological or mental arousal (stress or pressure) but only up to a point.

Psychologists Robert Yerkes and John Dodson developed this idea in 1908 to describe the relationship between pressure and performance. Too little pressure leads to disengagement and procrastination; too much leads to burnout and exhaustion. The optimal zone has just enough pressure to promote a high degree of performance. Simple, easily repeatable tasks are not as susceptible to the anxiety that high pressure induces, while complex tasks with significant cognitive load make it more likely that a worker loses concentration under high pressure. As such, it is often helpful to brainstorm ways to simplify the approach to a complex problem before establishing an aggressive schedule that promotes sustained performance on the simplified task assignments.
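The inverted-U relationship can be sketched with a toy model. The Gaussian shape and all parameters below are assumptions chosen purely for illustration, not a fitted psychological model:

```python
import math

# Assumed illustrative shape: performance peaks at an optimal arousal
# level and falls off on either side; a narrower width models a more
# fragile, complex task.
def performance(arousal: float, optimal: float, width: float) -> float:
    return math.exp(-((arousal - optimal) ** 2) / (2 * width ** 2))

# Under the same high pressure, the complex task (lower optimum,
# narrower curve) degrades far more than the simple one.
high_pressure = 0.8
print(round(performance(high_pressure, optimal=0.7, width=0.3), 2))  # 0.95
print(round(performance(high_pressure, optimal=0.4, width=0.2), 2))  # 0.14
```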

Conway’s law (on communication structure)

Any organization that designs a system will produce a design whose structure is a copy of the organization’s communication structure

Participants at the 1968 National Symposium on Modular Programming credited Melvin Conway with this law. Conway's law reminds us that constructing a desirable system requires an optimal design of team structure, because the identity of a team has lasting influence on the system it produces. A centralized team of DBAs who maintain the company's database will very likely result in only DBAs having permission to change the database in production; a centralized team of middleware engineers who maintain the company's application server will very likely result in only middleware engineers having permission to change server settings in production. Two teams with similar objectives but little coordination will likely produce two systems with extraneous overlap and incompatible implementations. The organizational woes one experiences on a team are sometimes the result not of faulty code, but of a suboptimal underlying team structure.

Linus’s law (on bug review)

Given enough eyeballs, all bugs are shallow

In his essay "The Cathedral and the Bazaar," first presented in 1997, Eric Raymond named this law after Linus Torvalds, the creator of the Linux kernel. The essay expands on a more formal version: given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone. Given our experience with Heartbleed and Log4Shell, wide consumption of software doesn't necessarily mean enough defect-squashing eyeballs, since esoteric defects are often initially visible only to a select few who know what to look for and are actively looking.

Broken window theory (on defect fixing)

Broken windows in a neighborhood beget more crime; unfixed exceptions in the log hide more bugs

Social scientists James Wilson and George Kelling introduced the broken window theory in 1982 as a way to deter major crime in a neighborhood by policing minor crimes and minimizing the visible appearance of decay and negligence. Tolerance of minor trespass leads to the mindset that additional wrongdoings of higher severity will go unnoticed. The idea also applies to application development: an exception stack trace in the application log, deemed benign by a few and never removed, can encourage other developers to let down their guard against additional software anti-patterns. The situation can deteriorate to the point where a well-intentioned person cannot swiftly identify the real root cause of an urgent production issue amid a sea of false leads.
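A minimal sketch of the software analogue, using a hypothetical `parse_quantity` helper:

```python
import logging

logger = logging.getLogger(__name__)

# A "broken window": the exception is swallowed, so bad input quietly
# becomes a zero and the log trains readers to ignore errors.
def parse_quantity_bad(raw: str) -> int:
    try:
        return int(raw)
    except ValueError:
        return 0

# The fixed window: fail loudly with context, so a real defect is not
# buried under tolerated noise.
def parse_quantity_good(raw: str) -> int:
    try:
        return int(raw)
    except ValueError:
        logger.error("invalid quantity %r; rejecting input", raw)
        raise
```

The first version never breaks a test and never shows up in triage, which is exactly the problem; the second keeps the log trustworthy.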

Imposter syndrome (on self-doubt)

Despite evidence of competence, individuals feel like frauds who don’t deserve their success and fear being exposed as “impostors”

First described by psychologists Pauline Rose Clance and Suzanne Imes in 1978, imposter syndrome is particularly prevalent in technology fields. Software developers regularly experience feelings that they don’t belong or aren’t qualified for their positions, despite objective evidence of skills and accomplishments. This phenomenon affects even highly experienced engineers who might attribute their success to luck, timing, or their ability to deceive others about their competence rather than their actual abilities.

The syndrome is exacerbated in software development due to the vast breadth of knowledge required, the rapid pace of technological change, and the often isolation-inducing nature of deep technical work. Studies suggest that up to 70% of people experience these feelings at some point in their careers, with underrepresented groups in tech often experiencing it more intensely due to additional stereotype threats.

Strategies to combat imposter syndrome include maintaining a record of accomplishments, practicing self-compassion, seeking mentorship, and fostering a culture that acknowledges that expertise is developed rather than innate. Organizations that normalize learning and occasional failure create environments where imposter syndrome is less likely to hamper productivity and innovation.

Dunning-Kruger effect (on competence perception)

People with limited knowledge in a domain overestimate their competence while experts underestimate theirs

Psychologists David Dunning and Justin Kruger identified this cognitive bias in 1999, noting that the skills needed to recognize competence are often the same skills needed to be competent. In software development, this manifests as beginners who confidently produce suboptimal code without recognizing its flaws, while seasoned developers approach problems with greater caution due to their awareness of complexity and edge cases.

Confirmation bias (on information processing)

People tend to search for, interpret, and recall information in a way that confirms their preexisting beliefs

This bias affects everything from code reviews (where reviewers may focus on confirming their initial impressions) to debugging (where developers might fixate on a suspected cause rather than considering alternatives). Effective teams combat this by implementing practices like rubber duck debugging, pair programming, and diverse review panels that challenge assumptions.

Fundamental attribution error (on judging mistakes)

People tend to attribute others’ behaviors to their character rather than situational factors, while doing the opposite for themselves

When a colleague introduces a bug, we might think “they’re careless,” but when we do the same, we recognize the contextual factors: “I was under deadline pressure.” This asymmetry in attribution can damage team cohesion and psychological safety. Engineering cultures that focus on blameless postmortems and systems thinking rather than individual blame create more resilient teams and software.
