How a flawed algorithm affected decades of scientific research
RANDU was a linear congruential generator (LCG) used on IBM mainframes in the 1960s and 1970s. LCGs generate pseudorandom numbers using a simple recurrence:

X_{n+1} = (a · X_n + c) mod m
RANDU specifically used the parameters a = 65539 (that is, 2^16 + 3), c = 0, and m = 2^31, with an odd starting seed.
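The recurrence is simple enough to sketch in a few lines of Python (the function name and the normalization to [0, 1) are my own choices; the original RANDU was a FORTRAN routine):

```python
def randu(seed, n):
    """Generate n RANDU outputs normalized to [0, 1).

    RANDU's recurrence: X_{k+1} = 65539 * X_k mod 2^31, with an odd seed.
    """
    x = seed
    out = []
    for _ in range(n):
        x = (65539 * x) % 2**31
        out.append(x / 2**31)
    return out

values = randu(seed=1, n=5)
```

Because c = 0 and the modulus is a power of two, the low-order bits of RANDU's outputs are especially weak; the fatal flaw, though, shows up in higher dimensions.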
Critical Discovery: RANDU became infamous because consecutive triplets of numbers fall on just 15 parallel planes in 3D space, rather than filling the cube uniformly.
This was discovered by analyzing the mathematical relationship between consecutive outputs: because 65539^2 ≡ 6 · 65539 − 9 (mod 2^31), every output satisfies x_{n+2} = 6 · x_{n+1} − 9 · x_n (mod 2^31).
This means if you use three consecutive RANDU outputs as (x, y, z) coordinates, they lie on a small number of parallel planes, making them extremely non-random for multidimensional simulations. This was catastrophic for Monte Carlo simulations in physics and other fields that relied on uniformly distributed points in three or more dimensions.
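The relationship can be checked numerically: since 65539^2 ≡ 6 · 65539 − 9 (mod 2^31), every consecutive triple of raw RANDU outputs satisfies x_{k+2} = 6 · x_{k+1} − 9 · x_k (mod 2^31). A sketch with my own helper name:

```python
def randu_raw(seed, n):
    """Raw (un-normalized) RANDU sequence: X_{k+1} = 65539 * X_k mod 2^31."""
    x, out = seed, []
    for _ in range(n):
        x = (65539 * x) % 2**31
        out.append(x)
    return out

xs = randu_raw(1, 1000)
# 65539^2 == 6*65539 - 9 (mod 2^31), so the three-term identity holds exactly
ok = all((xs[k + 2] - 6 * xs[k + 1] + 9 * xs[k]) % 2**31 == 0
         for k in range(len(xs) - 2))
```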
What you're seeing: Each point represents three consecutive outputs from RANDU used as (x, y, z) coordinates. Notice how they organize into distinct parallel planes rather than randomly filling the cube. From certain angles, the planar structure becomes strikingly obvious.
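The plane count itself can be verified without any plotting: for a normalized triple (x, y, z), the quantity 9x − 6y + z is always an exact integer, and with each coordinate in [0, 1) that integer must lie strictly between −6 and 10, giving at most 15 possible planes. A sketch using integer arithmetic to avoid rounding error:

```python
M = 2**31
x, raw = 1, []
for _ in range(30000):
    x = (65539 * x) % M
    raw.append(x)

# 9*X[k] - 6*X[k+1] + X[k+2] is always an exact multiple of 2^31,
# so the quotient identifies which of the (at most 15) planes a triple is on.
planes = {(9 * raw[k] - 6 * raw[k + 1] + raw[k + 2]) // M
          for k in range(len(raw) - 2)}
```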
Simulated annealing is an optimization algorithm inspired by the annealing of metals. It searches for good solutions by:

1. Proposing a random neighbor of the current solution.
2. Always accepting improvements, and accepting worse solutions with probability exp(-ΔE/T), where the temperature T is gradually lowered over the run.
Both steps rely heavily on random number quality. Let's see how RANDU performs versus a good RNG:
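A minimal simulated-annealing sketch with a pluggable random source makes the dependence explicit (the function names, toy objective, and schedule parameters below are illustrative assumptions, not historical code):

```python
import math
import random

def anneal(f, x0, rng, steps=10000, t0=1.0, cooling=0.999):
    """Minimize f starting from x0 using simulated annealing.

    rng() must return a float in [0, 1); swapping in a weak generator
    such as RANDU degrades both the neighbor proposals and the
    acceptance decisions below.
    """
    x, fx = x0, f(x0)
    best_x, best_fx = x, fx
    t = t0
    for _ in range(steps):
        y = x + (rng() - 0.5) * 2          # random neighbor in [x-1, x+1]
        fy = f(y)
        # accept improvements always; worse moves with prob exp(-dE/T)
        if fy < fx or rng() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < best_fx:
                best_x, best_fx = x, fx
        t *= cooling                        # geometric cooling schedule
    return best_x, best_fx

best_x, best_f = anneal(lambda v: (v - 3) ** 2, x0=0.0,
                        rng=random.Random(42).random)
```

Swapping `rng=random.Random(42).random` for a RANDU-backed callable is all it takes to expose the problems described next.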
1. Neighbor Generation Bias: When generating random neighbors with dx = (rand() - 0.5) * 2, RANDU creates correlated movements. If it moves right in one dimension, the next move is predictably biased, creating corridors in the search space rather than true random walks.
2. Acceptance Correlation: The acceptance criterion rand() < exp(-ΔE/T) uses a fresh random number each time. With RANDU, this number is correlated with the previous moves, meaning acceptance decisions aren't independent—they form patterns.
3. Effective Search Space Reduction: Since RANDU's output lies on planes, the algorithm effectively searches a lower-dimensional subspace of the actual problem, missing large regions that might contain the global optimum.
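One way to see the reduction concretely (a sketch; the helper names are mine): the residue (9x − 6y + z) mod 2^31 is identically zero for RANDU triples, because they all sit on the same lattice, while a healthy generator scatters it across essentially all possible values:

```python
import random

M = 2**31

def randu_seq(seed, n):
    x, out = seed, []
    for _ in range(n):
        x = (65539 * x) % M
        out.append(x)
    return out

def residues(seq):
    """Distinct values of (9x - 6y + z) mod 2^31 over consecutive triples."""
    return {(9 * seq[k] - 6 * seq[k + 1] + seq[k + 2]) % M
            for k in range(len(seq) - 2)}

randu_res = residues(randu_seq(1, 10000))           # collapses to {0}
rng = random.Random(7)
good_res = residues([rng.randrange(M) for _ in range(10000)])  # ~10,000 values
```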
Critical Timeline: RANDU was introduced in the early 1960s and distributed with IBM's scientific subroutine libraries; its lattice structure was publicized in George Marsaglia's 1968 paper "Random Numbers Fall Mainly in the Planes," yet the generator remained in widespread use into the 1970s.
The RANDU flaw wasn't discovered until after it had been widely used in scientific research. This means numerous papers in physics, chemistry, biology, and other fields that relied on Monte Carlo simulations may have contained flawed results.
The Lesson: This is considered one of the most spectacular failures in computational mathematics. It demonstrates why rigorous testing of fundamental algorithms is critical, and why modern RNG development involves extensive statistical testing (like the Diehard tests and TestU01 suite).
Modern alternatives like the Mersenne Twister (1997) and newer algorithms like Xoshiro/Xoroshiro (2018) have much better statistical properties and are thoroughly tested before adoption.