Banks keep only a small fraction of deposits on hand in cash; they lend out the majority of deposits to borrowers or use the funds to purchase other interest-bearing assets such as government securities. When a run comes, a bank must quickly increase its cash to meet depositors' demands. It does so primarily by selling assets, often hastily and at fire-sale prices. Because banks hold little capital and are highly leveraged, losses on these sales can drive a bank into insolvency.
In the United States, bank failures have been relatively infrequent. From the end of the Civil War through 1920 (after the Federal Reserve was established in 1913 but before the Federal Deposit Insurance Corporation was formed in 1933), the bank failure rate was lower, on average, than that of nonbanking firms. The failure rate increased sharply in the 1920s and again between 1929 and 1933, when nearly 40 percent of U.S. banks failed. Yet, from 1875 through 1933, losses from failures averaged only 0.2 percent of total deposits in the banking system annually. Losses to depositors at failed banks averaged only a fraction of the annual losses suffered by bondholders of failed nonbanking firms.
A survey of all failures of national banks from 1865 through 1936 by J. F. T. O'Connor, comptroller of the currency from 1933 through 1938, concluded that runs were a contributing cause in fewer than 15 percent of the three thousand failures. Because the number of runs on individual banks far exceeded the number of failures, most runs evidently did not lead to failure.