Diversity Guard Mathematics: How Requiring Diverse Consensus Makes Tyranny Statistically Impossible
Democracy has a fatal flaw. Alexis de Tocqueville identified it in 1835: the “tyranny of the majority.” A 51% majority can systematically oppress the 49%—legally, constitutionally, democratically. The American Founders understood this danger. James Madison wrote in Federalist No. 51 that “the rights of individuals, or of the minority, will be in little danger from interested combinations of the majority” only if institutions are properly designed.
But what if we could make tyranny not just difficult but statistically impossible?
This article presents the mathematical foundations for a governance mechanism called the Diversity Guard—a system that requires decisions affecting fundamental rights to achieve consensus not just among a majority, but among genuinely diverse decision-making bodies. The mathematics demonstrate that when diversity is properly measured and required, coordinated oppression becomes exponentially harder as the number of diverse validators increases.
The Math of Mob Rule
Why Simple Majorities Fail
Consider a population divided into two groups: 60% belong to Group A, 40% to Group B. Under simple majority rule, Group A can always win every vote. If Group A acts as a bloc, Group B has zero political power—not because they’re fewer, but because the system allows concentrated power to dominate completely.
This isn’t hypothetical. Throughout history, simple majorities have enabled systematic oppression: Jim Crow laws in the American South, apartheid in South Africa, anti-minority legislation across democratic nations. In each case, majorities used perfectly legal democratic mechanisms to tyrannize minorities.
The mathematical problem is correlation. When voters share characteristics—ethnicity, religion, economic class, geographic location—their votes become correlated. Correlated votes mean outcomes become predictable, and predictable outcomes mean minorities can be permanently excluded.
The Condorcet Jury Theorem
The French mathematician Marquis de Condorcet proved a remarkable theorem in 1785 about collective decision-making. Consider a group voting on a question with an objectively correct answer, where each voter has probability p of voting correctly. Condorcet proved:
If p > 0.5 (voters are more likely correct than incorrect):
- Adding more voters increases the probability of a correct group decision
- As the number of voters approaches infinity, the probability of a correct decision approaches 1
If p < 0.5 (voters are more likely incorrect):
- Adding more voters decreases the probability of a correct group decision
- As the number of voters approaches infinity, the probability of an incorrect decision approaches 1
The mathematical formula for the probability of a correct majority with n (odd) voters:
P(correct majority) = Σ_{k=⌈n/2⌉}^{n} C(n,k) × p^k × (1-p)^(n-k)
Where C(n,k) is the binomial coefficient.
Example: With n = 3 voters and p = 0.8:
- P(all three correct) = 0.8³ = 0.512
- P(two correct, one wrong) = 3 × 0.8² × 0.2 = 0.384
- P(correct majority) = 0.512 + 0.384 = 0.896
Three voters with individual 80% accuracy produce collective 89.6% accuracy—a significant improvement.
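This calculation generalizes to any odd n and individual accuracy p. A short script (assuming independent voters, as the theorem requires) reproduces the 89.6% figure:

```python
from math import comb

def condorcet_majority_prob(n: int, p: float) -> float:
    """Probability that a majority of n independent voters,
    each correct with probability p, reaches the correct answer."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

print(round(condorcet_majority_prob(3, 0.8), 3))   # 0.896
print(condorcet_majority_prob(21, 0.8) > 0.99)     # True: accuracy climbs with n
```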
The Hidden Assumption: Independence
Condorcet’s theorem has a crucial hidden assumption: voters must be independent. Their errors must be uncorrelated. This is where tyranny creeps in.
When voters are correlated—when they share biases, receive the same information, belong to the same social groups—the theorem breaks down. Seven voters from the same homogeneous community voting on whether their community’s practices are correct will produce the same answer as one voter. The apparent democracy masks actual monocracy.
This is the mathematical key to the Diversity Guard: independence comes from diversity. Voters with genuinely different backgrounds, experiences, and information sources are more likely to have uncorrelated errors. Requiring diverse consensus transforms Condorcet’s optimistic theorem into an actual protection.
Diversity Metrics
How do we measure whether a decision-making body is genuinely diverse? Several mathematical frameworks exist.
Shannon Entropy (Shannon-Wiener Index)
Information theorist Claude Shannon developed entropy as a measure of unpredictability in 1948. Applied to diversity, Shannon entropy measures how “surprising” a randomly selected member of a group would be:
H = -Σ p_i × log₂(p_i)
Where p_i is the proportion of the population belonging to category i.
Example: A council of 100 members:
- If all 100 belong to one category: H = -1 × log₂(1) = 0 (no diversity)
- If 50 belong to Category A, 50 to Category B: H = -2 × (0.5 × log₂(0.5)) = 1 bit
- If 25 each belong to four categories: H = -4 × (0.25 × log₂(0.25)) = 2 bits
Higher entropy means more diversity. Maximum entropy for k categories is log₂(k).
Simpson’s Diversity Index
The British statistician Edward H. Simpson proposed a simpler measure in 1949: the probability that two randomly selected individuals belong to different categories:
D = 1 - Σ p_i²
Example: Same council of 100:
- All one category: D = 1 - 1² = 0 (no diversity)
- Two equal categories (50/50): D = 1 - (0.5² + 0.5²) = 0.5
- Four equal categories (25/25/25/25): D = 1 - (4 × 0.25²) = 0.75
Simpson’s index ranges from 0 (no diversity) to 1 - 1/k for k equal categories.
True Diversity: Effective Number of Types
Both Shannon entropy and Simpson’s index are related to a more intuitive measure: the “effective number of types.” This converts abstract indices into the equivalent number of equally-abundant categories:
- From Shannon: 2^H gives the effective number of types when H is measured in bits (equivalently, e^H when H is measured in nats)
- From Simpson: 1/(1-D) gives the effective number of types
Example: If Shannon entropy H = 2.322 bits, then 2^2.322 ≈ 5.0 effective types. This means the diversity is equivalent to having 5 equally-represented categories, regardless of how many actual categories exist.
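The conversions are easy to verify directly from the proportions (a minimal sketch; note that bits-based entropy converts via 2^H, nats-based via e^H):

```python
import math

def effective_types(proportions):
    """Effective number of types ("Hill numbers") from Shannon entropy
    (computed in bits) and Simpson's diversity index."""
    h_bits = -sum(p * math.log2(p) for p in proportions if p > 0)
    d = 1 - sum(p * p for p in proportions)
    return 2 ** h_bits, 1 / (1 - d)

# Five equally-abundant categories: both measures report 5 effective types.
print(effective_types([0.2] * 5))

# A skewed distribution has fewer effective types than nominal categories.
print(effective_types([0.8, 0.1, 0.1]))
```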
Multi-Dimensional Diversity
Real diversity is multi-dimensional: a group can be diverse in ethnicity but homogeneous in profession, or vice versa. For Diversity Guard, we need to measure diversity across multiple relevant dimensions:
D_total = Π D_i^(w_i)
Where D_i is the diversity index for dimension i, and w_i is the weight assigned to that dimension (with weights summing to 1).
Alternatively, we can require minimum thresholds for each dimension:
Diverse = (D_ethnicity ≥ T_e) AND (D_geography ≥ T_g) AND (D_profession ≥ T_p) AND ...
This ensures no single dimension of homogeneity can dominate.
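Both combination rules can be sketched in a few lines (the dimension names, scores, and weights here are illustrative, not prescribed by the framework):

```python
def combined_diversity(scores: dict, weights: dict) -> float:
    """Weighted geometric mean: D_total = prod(D_i ** w_i), weights sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    total = 1.0
    for dim, d in scores.items():
        total *= d ** weights[dim]
    return total

def meets_thresholds(scores: dict, thresholds: dict) -> bool:
    """AND-of-thresholds rule: every dimension must clear its own minimum."""
    return all(scores[dim] >= thresholds[dim] for dim in thresholds)

scores = {'ethnicity': 0.75, 'geography': 0.50, 'profession': 0.60}
print(combined_diversity(scores, {'ethnicity': 0.4, 'geography': 0.3, 'profession': 0.3}))
print(meets_thresholds(scores, {'ethnicity': 0.5, 'geography': 0.5, 'profession': 0.5}))  # True
```

Because the geometric mean is zero whenever any single D_i is zero, total homogeneity in one dimension collapses the combined score, which is exactly the behavior the guard wants.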
Byzantine Generals Solution
The Original Problem
In 1982, computer scientists Leslie Lamport, Robert Shostak, and Marshall Pease formalized a problem that had plagued distributed systems: how can a group of generals coordinate an attack when some generals might be traitors sending false messages?
The generals must reach consensus on whether to attack or retreat. But traitorous generals can send contradictory messages to different loyal generals, trying to break coordination. The question: how many loyal generals are needed to guarantee consensus despite traitors?
The Mathematical Result
Lamport et al. proved a remarkable theorem:
To tolerate f Byzantine (traitorous/faulty) nodes, a system requires at least 3f + 1 total nodes.
The proof intuition:
- With f Byzantine nodes, the system must work even when f nodes don’t respond at all
- The remaining (n - f) nodes must reach consensus
- But of those (n - f) responding nodes, f might be Byzantine sending wrong messages
- So we need (n - f) - f > f loyal agreeing nodes
- This gives n > 3f, or n ≥ 3f + 1
With 3f + 1 nodes, at least 2f + 1 are honest. This honest majority can outvote the f Byzantine nodes and reach correct consensus.
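The bound is easy to tabulate:

```python
def bft_sizes(f: int):
    """For f Byzantine nodes: minimum total nodes and guaranteed honest count."""
    n = 3 * f + 1       # minimum system size
    honest = 2 * f + 1  # honest nodes always outnumber the f Byzantine ones
    return n, honest

for f in (1, 2, 3):
    n, honest = bft_sizes(f)
    print(f"f={f}: need n={n} nodes; {honest} honest outvote {f} Byzantine")
# f=1: need n=4 nodes; 3 honest outvote 1 Byzantine
# f=2: need n=7 nodes; 5 honest outvote 2 Byzantine
# f=3: need n=10 nodes; 7 honest outvote 3 Byzantine
```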
Application to Tyranny Prevention
The Byzantine Generals problem maps directly onto the tyranny problem. Imagine “Byzantine” nodes as decision-makers captured by a tyrannical faction—willing to lie, coordinate secretly, and act against the common good.
Key insight: If we can detect “Byzantine” behavior (coordinated false consensus), we can prevent tyranny. The detection mechanism is diversity.
When validators are genuinely diverse, Byzantine coordination becomes harder:
- Diverse validators have different information sources (harder to deceive all)
- Diverse validators have different interests (harder to bribe all)
- Diverse validators have different cultural contexts (coordinated lying is more detectable)
From Fault Tolerance to Tyranny Resistance
Standard Byzantine Fault Tolerance assumes random or arbitrary failures. But tyranny is coordinated—it requires aligned interests across validators. This is where diversity transforms BFT:
With homogeneous validators: Correlated failures are likely. Seven rural farming communities might all vote to restrict urban interests.
With diverse validators: Correlated failures require coordination across differences. Seven communities spanning rural/urban, different ethnicities, different economic bases must all be corrupted—exponentially harder.
The mathematical modification:
P(coordinated Byzantine failure) ≈ P(single capture)^k_eff
Where k_eff is the effective number of independent decision streams: k_eff ≈ 1 for a fully correlated (homogeneous) pool, rising toward n as validators become genuinely diverse. As diversity increases, k_eff grows and the probability of coordinated capture decreases exponentially.
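Numerically, the decay looks like this (a heuristic sketch: the pool is modeled as k_eff independent capture targets, and the 50% per-stream capture probability is an illustrative assumption):

```python
def coordinated_capture_prob(p_single: float, k_eff: float) -> float:
    """Heuristic model: tyranny requires capturing all k_eff effectively
    independent decision streams, each captured with probability p_single."""
    return p_single ** k_eff

p = 0.5  # assumed probability of capturing one independent stream
for k_eff in (1, 3, 7, 21):
    print(f"k_eff={k_eff:2d}: P(coordinated capture) = {coordinated_capture_prob(p, k_eff):.2e}")
```

A homogeneous pool (k_eff near 1) is roughly as easy to capture as a single validator; a fully diverse pool of 21 is harder by about six orders of magnitude.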
Implementation Algorithm
Here is a concrete algorithm for implementing Diversity Guard consensus:
Step 1: Define Diversity Dimensions
```python
DIVERSITY_DIMENSIONS = {
    'geographic': {
        'categories': ['urban', 'suburban', 'rural', 'coastal', 'inland'],
        'min_entropy': 1.5,  # bits
        'weight': 0.25,
    },
    'economic': {
        'categories': ['primary', 'secondary', 'tertiary', 'quaternary'],
        'min_entropy': 1.3,
        'weight': 0.20,
    },
    'cultural': {
        'categories': ['traditions_A', 'traditions_B', 'traditions_C', '...'],
        'min_entropy': 2.0,
        'weight': 0.30,
    },
    'generational': {
        'categories': ['youth', 'working_age', 'senior'],
        'min_entropy': 1.0,
        'weight': 0.15,
    },
    'educational': {
        'categories': ['vocational', 'academic', 'self_taught', 'mixed'],
        'min_entropy': 1.2,
        'weight': 0.10,
    },
}
```
Step 2: Calculate Diversity Score
```python
import math

def shannon_entropy(proportions):
    """Calculate Shannon entropy (in bits) from category proportions."""
    entropy = 0.0
    for p in proportions:
        if p > 0:
            entropy -= p * math.log2(p)
    return entropy

def simpson_diversity(proportions):
    """Calculate Simpson's diversity index."""
    return 1 - sum(p ** 2 for p in proportions)

def calculate_diversity_score(validators, dimension):
    """
    Calculate diversity scores for a set of validators on one dimension.
    Returns (shannon_entropy, simpson_index, effective_types).
    """
    # Count validators in each category
    category_counts = {}
    for v in validators:
        cat = v.attributes[dimension]
        category_counts[cat] = category_counts.get(cat, 0) + 1
    # Convert counts to proportions
    total = len(validators)
    proportions = [count / total for count in category_counts.values()]
    h = shannon_entropy(proportions)
    d = simpson_diversity(proportions)
    effective = 2 ** h  # entropy is in bits, so the effective count is 2^H, not e^H
    return h, d, effective

def is_sufficiently_diverse(validators):
    """
    Check if a validator set meets minimum diversity thresholds.
    Returns (is_diverse: bool, diversity_report: dict).
    """
    report = {}
    all_pass = True
    for dim_name, dim_config in DIVERSITY_DIMENSIONS.items():
        entropy, simpson, effective = calculate_diversity_score(validators, dim_name)
        passes = entropy >= dim_config['min_entropy']
        all_pass = all_pass and passes
        report[dim_name] = {
            'shannon_entropy': entropy,
            'simpson_index': simpson,
            'effective_types': effective,
            'threshold': dim_config['min_entropy'],
            'passes': passes,
        }
    return all_pass, report
```
Step 3: Diversity Guard Consensus
```python
def diversity_guard_consensus(proposal, validator_pool,
                              required_majority=0.67,
                              min_validators=7):
    """
    Execute a Diversity Guard consensus vote.
    Requirements:
    1. Validators must be sufficiently diverse (pass diversity thresholds)
    2. Proposal must achieve required_majority among diverse validators
    3. A minimum number of validators must participate
    Returns: (approved: bool, result: dict)
    """
    # Step 1: Verify validator diversity
    is_diverse, diversity_report = is_sufficiently_diverse(validator_pool)
    if not is_diverse:
        return False, {
            'status': 'REJECTED_INSUFFICIENT_DIVERSITY',
            'diversity_report': diversity_report,
            'message': 'Validator pool fails diversity requirements',
        }
    if len(validator_pool) < min_validators:
        return False, {
            'status': 'REJECTED_INSUFFICIENT_VALIDATORS',
            'count': len(validator_pool),
            'required': min_validators,
        }
    # Step 2: Collect votes
    votes = []
    for validator in validator_pool:
        vote = validator.cast_vote(proposal)
        votes.append({
            'validator_id': validator.id,
            'vote': vote,
            'attributes': validator.attributes,
        })
    # Step 3: Calculate results
    approve_count = sum(1 for v in votes if v['vote'] == 'APPROVE')
    approval_rate = approve_count / len(votes)
    # Step 4: Check for suspicious patterns
    correlation_check = detect_vote_correlation(votes)
    if correlation_check['suspicious']:
        return False, {
            'status': 'REJECTED_CORRELATION_DETECTED',
            'approval_rate': approval_rate,
            'correlation_report': correlation_check,
            'message': 'Voting pattern suggests a coordinated bloc',
        }
    # Step 5: Final decision
    approved = approval_rate >= required_majority
    return approved, {
        'status': 'APPROVED' if approved else 'REJECTED_INSUFFICIENT_VOTES',
        'approval_rate': approval_rate,
        'total_votes': len(votes),
        'approve_count': approve_count,
        'diversity_report': diversity_report,
    }
```
Step 4: Correlation Detection
```python
from scipy.stats import chi2_contingency

def detect_vote_correlation(votes, threshold=0.1):
    """
    Detect whether votes are suspiciously correlated with any single dimension.
    Uses a chi-squared test of independence.
    Returns: {suspicious: bool, correlations: dict, interpretation: str}
    """
    correlations = {}
    suspicious = False
    for dimension in DIVERSITY_DIMENSIONS:
        # Build a contingency table: category x vote
        categories = {}
        for vote in votes:
            cat = vote['attributes'][dimension]
            if cat not in categories:
                categories[cat] = {'APPROVE': 0, 'REJECT': 0}
            categories[cat][vote['vote']] += 1
        # Convert to a matrix for the chi-squared test
        if len(categories) > 1:
            table = [[c['APPROVE'], c['REJECT']] for c in categories.values()]
            try:
                chi2, p_value, dof, expected = chi2_contingency(table)
                correlations[dimension] = {
                    'chi2': chi2,
                    'p_value': p_value,
                    'significant': p_value < threshold,
                }
                if p_value < threshold:
                    suspicious = True
            except ValueError:
                # Not enough data for the test
                correlations[dimension] = {'error': 'insufficient_data'}
    return {
        'suspicious': suspicious,
        'correlations': correlations,
        'interpretation': (
            'Votes appear independent of validator categories'
            if not suspicious else
            'Warning: votes correlate significantly with one or more categories'
        ),
    }
```
Step 5: Byzantine Fault Tolerance Integration
```python
def bft_diversity_consensus(proposal, validator_pool,
                            max_byzantine_fraction=0.33):
    """
    Combine Byzantine Fault Tolerance with Diversity Guard.
    Requires:
    - n >= 3f + 1 validators (BFT requirement)
    - Diverse validator pool (Diversity Guard requirement)
    - 2f + 1 votes for approval (BFT safety threshold)
    """
    n = len(validator_pool)
    max_byzantine = int(n * max_byzantine_fraction)
    bft_threshold = (2 * max_byzantine + 1) / n  # 2f+1 out of 3f+1

    # Verify the BFT size requirement is met
    if n < 3 * max_byzantine + 1:
        return False, {
            'status': 'REJECTED_INSUFFICIENT_FOR_BFT',
            'message': f'Need {3 * max_byzantine + 1} validators for {max_byzantine} Byzantine tolerance',
        }

    # Run Diversity Guard consensus with the BFT threshold
    return diversity_guard_consensus(
        proposal,
        validator_pool,
        required_majority=bft_threshold,
        min_validators=3 * max_byzantine + 1,
    )
```
Testing the System
Simulation: Homogeneous vs. Diverse Validators
Let’s test the mathematical claims with a simulation comparing homogeneous and diverse validator sets.
```python
import random

class Validator:
    def __init__(self, attributes, bias_toward_own_group=0.3):
        self.id = random.randint(1, 1000000)
        self.attributes = attributes
        self.bias = bias_toward_own_group

    def cast_vote(self, proposal):
        """Vote with potential in-group bias."""
        # Check whether the proposal affects any of this validator's groups
        affects_my_groups = any(
            proposal.get('affected_' + dim) == self.attributes.get(dim)
            for dim in self.attributes
        )
        # Base probability the proposal is good
        p_approve = proposal.get('objective_merit', 0.5)
        # Add bias if the proposal benefits the validator's own group
        if affects_my_groups:
            p_approve += self.bias
        # Clamp to a valid probability
        p_approve = max(0.0, min(1.0, p_approve))
        return 'APPROVE' if random.random() < p_approve else 'REJECT'

def create_homogeneous_validators(n, dominant_category='A'):
    """Create n validators all drawn from the same categories."""
    return [
        Validator({
            'geographic': dominant_category,
            'economic': dominant_category,
            'cultural': dominant_category,
        })
        for _ in range(n)
    ]

def create_diverse_validators(n):
    """Create n validators with randomly varied attributes."""
    categories = ['A', 'B', 'C', 'D', 'E']
    return [
        Validator({
            'geographic': random.choice(categories),
            'economic': random.choice(categories),
            'cultural': random.choice(categories),
        })
        for _ in range(n)
    ]

def run_tyranny_simulation(n_trials=1000, n_validators=7):
    """
    Simulate voting on a proposal that:
    - Has 50% objective merit (neutral)
    - Specifically benefits Group A at the expense of others
    """
    results = {
        'homogeneous': {'approve': 0, 'reject': 0},
        'diverse': {'approve': 0, 'reject': 0},
    }
    for _ in range(n_trials):
        # Proposal benefits Group A (tyranny potential)
        proposal = {
            'objective_merit': 0.5,  # neutral merit
            'affected_geographic': 'A',
            'affected_economic': 'A',
            'affected_cultural': 'A',
        }
        # Test homogeneous validators (all Group A)
        homo_validators = create_homogeneous_validators(n_validators, 'A')
        homo_votes = [v.cast_vote(proposal) for v in homo_validators]
        homo_approves = sum(1 for v in homo_votes if v == 'APPROVE')
        if homo_approves > n_validators / 2:
            results['homogeneous']['approve'] += 1
        else:
            results['homogeneous']['reject'] += 1
        # Test diverse validators
        div_validators = create_diverse_validators(n_validators)
        div_votes = [v.cast_vote(proposal) for v in div_validators]
        div_approves = sum(1 for v in div_votes if v == 'APPROVE')
        if div_approves > n_validators / 2:
            results['diverse']['approve'] += 1
        else:
            results['diverse']['reject'] += 1
    return results

# Run the simulation
n_trials = 10000
results = run_tyranny_simulation(n_trials=n_trials)
print("Self-Serving Proposal Approval Rates:")
print(f"  Homogeneous (Group A): {100 * results['homogeneous']['approve'] / n_trials:.1f}%")
print(f"  Diverse validators:    {100 * results['diverse']['approve'] / n_trials:.1f}%")
```
Expected Results:
With a 30% in-group bias and a bare majority rule:
- Homogeneous validators: roughly 97% approval (every validator shares all of Group A's attributes, so each approves with probability 0.8)
- Diverse validators: roughly 79% approval (only about half of the validators share any attribute with Group A)
A simple majority alone narrows the gap but does not close it, which is why the full algorithm also applies a supermajority threshold and correlation detection to reject the residual bloc effect.
Why 7 Similar Communities Fail vs. 7 Diverse Succeed
Consider a concrete example: a proposal to redirect infrastructure funding from urban to rural areas.
Scenario A: 7 Similar Rural Communities
- All validators share rural geography, agricultural economy, traditional culture
- All benefit from the proposal
- Shannon entropy (geographic): 0 bits (all rural)
- Simpson diversity: 0 (all same)
- Expected vote: 7-0 or 6-1 in favor (tyranny of rural majority)
- Diversity Guard rejects: fails minimum entropy threshold
Scenario B: 7 Diverse Communities
- Geographic mix: 2 urban, 2 suburban, 2 rural, 1 coastal
- Economic mix: manufacturing, services, agriculture, technology
- Cultural mix: varied traditions and values
- Shannon entropy (geographic): ~1.95 bits
- Simpson diversity: ~0.73
- Expected vote: varies based on objective merit, roughly 3-4 either way
- Diversity Guard accepts: passes diversity thresholds
The key difference isn’t the number of validators but their independence. Seven cloned validators provide the same information as one. Seven diverse validators provide genuinely different perspectives—and their agreement signals legitimate consensus rather than bloc voting.
Mathematical Proof: Tyranny Probability Decay
Let’s prove that tyranny becomes exponentially harder as diversity increases.
Definitions:
- n = number of validators
- p = probability a single diverse validator supports tyrannical proposal
- k = diversity factor (effective number of independent decision streams)
For homogeneous validators (k = 1):
All validators have correlated decisions. Probability of majority support:
P_homo(tyranny) ≈ p_correlated ≈ p_bias
Where p_bias reflects the in-group bias. If validators are 70% likely to support policies benefiting their group, P_homo ≈ 0.70.
For diverse validators (k = n):
Validators have uncorrelated decisions. By Condorcet, if p < 0.5 (tyrannical proposal has <50% support from unbiased validators):
P_diverse(tyranny) = Σ C(n,j) × p^j × (1-p)^(n-j) for j > n/2
For n = 7 and p = 0.3 (30% support a clearly bad proposal):
P_diverse(majority) = C(7,4)×0.3⁴×0.7³ + C(7,5)×0.3⁵×0.7² + C(7,6)×0.3⁶×0.7¹ + C(7,7)×0.3⁷
= 35×0.0081×0.343 + 21×0.00243×0.49 + 7×0.000729×0.7 + 0.0002187
= 0.097 + 0.025 + 0.0036 + 0.0002
≈ 0.126 (12.6%)
The diversity advantage:
- Homogeneous: ~70% tyranny success
- Diverse: ~12.6% tyranny success
As we increase both n and diversity, this gap widens. With 21 diverse validators (k ≈ 21), P_diverse falls to roughly 2.6%, and it continues to shrink exponentially as n grows.
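These figures can be checked with a direct binomial tail computation (verifying the 12.6% figure for n = 7 and showing how the tail shrinks at n = 21):

```python
from math import comb

def tyranny_majority_prob(n: int, p: float) -> float:
    """P(a strict majority of n independent validators backs the proposal)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(n // 2 + 1, n + 1))

print(round(tyranny_majority_prob(7, 0.3), 3))   # 0.126
print(round(tyranny_majority_prob(21, 0.3), 3))  # 0.026
```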
Voting Power Analysis: Banzhaf and Shapley-Shubik
The Diversity Guard also addresses power concentration using voting power indices.
Banzhaf Power Index measures how often a voter is “pivotal”—their vote changes the outcome. In a diverse consensus system:
```python
def calculate_banzhaf_power(validators, threshold):
    """
    Calculate the Banzhaf power index for each validator.
    Power = relative frequency of being a swing voter.
    Enumerating all 2^n coalitions is exponential, so this is
    practical only for small validator sets.
    """
    n = len(validators)
    power = {v.id: 0 for v in validators}
    total_swings = 0
    # Enumerate all possible coalitions
    for coalition_bits in range(2 ** n):
        coalition = [validators[i] for i in range(n) if coalition_bits & (1 << i)]
        coalition_votes = len(coalition)
        # For each validator in the coalition, check if they're pivotal
        for i, v in enumerate(validators):
            if coalition_bits & (1 << i):  # v is in the coalition
                # Would the coalition fail without v?
                if coalition_votes >= threshold and (coalition_votes - 1) < threshold:
                    power[v.id] += 1
                    total_swings += 1
    # Normalize to fractions of all swings
    if total_swings > 0:
        for v_id in power:
            power[v_id] = power[v_id] / total_swings
    return power
```
In a properly diverse system, power should be approximately equal. If one validator or faction has disproportionate power, the system is vulnerable.
Shapley-Shubik Power Index considers the order in which validators join a winning coalition. The validator who pushes the coalition over the threshold is “pivotal.” This captures real-world dynamics where some validators must commit before others.
Both indices should show approximately equal power distribution in a well-designed Diversity Guard system. Significant deviations indicate hidden power concentrations that require rebalancing.
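For small validator sets, the Shapley-Shubik index can be computed by brute force over orderings (a sketch assuming one-validator-one-vote; with equal weights the pivot is simply whoever occupies the threshold position):

```python
from itertools import permutations

def shapley_shubik(validator_ids, threshold):
    """Fraction of orderings in which each validator is pivotal,
    i.e., the voter whose arrival first meets the threshold."""
    power = {v: 0 for v in validator_ids}
    orderings = list(permutations(validator_ids))
    for order in orderings:
        pivot = order[threshold - 1]  # equal weights: pivot sits at the threshold position
        power[pivot] += 1
    return {v: count / len(orderings) for v, count in power.items()}

# Five equal validators with a 4-vote threshold: power splits evenly at 0.2 each.
print(shapley_shubik(['a', 'b', 'c', 'd', 'e'], threshold=4))
```

Any deviation from the uniform split (as would appear in a weighted variant) flags exactly the hidden power concentration the text warns about.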
Implications for Governance Design
Proof-of-Diversity (PoD)
The mathematical foundations support a new consensus mechanism: Proof-of-Diversity. Unlike Proof-of-Work (computational power) or Proof-of-Stake (financial power), PoD requires demonstrable diversity before consensus is recognized.
A decision achieves PoD when:
- The validator set passes minimum diversity thresholds on all relevant dimensions
- The vote shows no statistically significant correlation with any single dimension
- The margin of victory exceeds Byzantine fault tolerance thresholds
Self-Reinforcing Protection
The mathematics reveal a crucial property: diversity requirements are self-reinforcing. A homogeneous majority cannot vote to remove diversity requirements because:
- The vote would fail diversity thresholds (homogeneous voters)
- Even if threshold games were attempted, correlation detection would flag the bloc
- The system recognizes its own requirements as fundamental rights requiring diverse consensus
This creates a mathematical “constitutional lock”—certain changes cannot be made without the consent of genuinely diverse constituencies.
Scalability Considerations
The computational complexity of diversity verification is polynomial, not exponential:
- Entropy calculation: O(n × d) for n validators and d dimensions
- Correlation detection: O(n × d) for chi-squared tests
- BFT consensus: O(n²) message complexity
This scales practically to thousands of validators across dozens of diversity dimensions.
Conclusion: Mathematical Guarantees Against Tyranny
The Diversity Guard provides something traditional constitutional design cannot: mathematical guarantees. Not perfect guarantees—no system is unbreakable—but quantifiable, testable, and tunable protections against coordinated oppression.
The key insights:
- Independence defeats coordination: Diverse validators have uncorrelated errors and uncorrelated biases, making coordinated tyranny exponentially harder.
- Diversity is measurable: Shannon entropy, Simpson’s index, and effective number of types provide rigorous metrics for “enough” diversity.
- Byzantine tolerance applies: The 3f+1 requirement from distributed computing translates to governance—but diversity multiplies the protection.
- Correlation detection works: Statistical tests can identify bloc voting even when individual votes are secret.
- The math scales: These protections work for seven validators or seven thousand.
For the Unscarcity framework, Diversity Guard mathematics provide the foundation for governance that is genuinely resistant to capture. Not through parchment barriers or good intentions, but through mathematical necessity. A tyrannical majority cannot form when the definition of “majority” requires genuine diversity that tyranny, by definition, cannot achieve.
References
- Byzantine fault tolerance - Wikipedia
- Why N = 3f+1 in Byzantine Fault Tolerance - Medium
- Practical Byzantine Fault Tolerance (pBFT) - GeeksforGeeks
- Condorcet’s jury theorem - Wikipedia
- Jury Theorems - Stanford Encyclopedia of Philosophy
- Condorcet’s Jury Theorem - Wolfram MathWorld
- Diversity index - Wikipedia
- Shannon Entropy and Simpson’s Diversity Index - University of Florida
- Entropy and Diversity - Jost (2006)
- Banzhaf power index - Wikipedia
- Shapley-Shubik power index - Wikipedia
- Calculating Power: Banzhaf Power Index - Mathematics LibreTexts
- Calculating Power: Shapley-Shubik Power Index - Mathematics LibreTexts
- Tyranny of the majority - Wikipedia
- Federalist No. 10 - Bill of Rights Institute
- Preventing “The Tyranny of the Majority” - Heritage Foundation
- Measures of Diversity in Classifier Ensembles - ResearchGate
- A Unified Theory of Diversity in Ensemble Learning - JMLR
- Understanding the Importance of Diversity in Ensemble Learning - Towards Data Science
- Lamport, L., Shostak, R., & Pease, M. (1982). “The Byzantine Generals Problem.” ACM Transactions on Programming Languages and Systems.
- Bracha, G., & Rabin, T. “Optimal Asynchronous Byzantine Agreement.” TR#92-15, Hebrew University.