Gasoline, or petrol, is a transparent, yellowish, and flammable petrochemical liquid used as a fuel for spark-ignited internal combustion engines. It is composed of organic compounds derived from the fractional distillation of petroleum and is then enhanced with gasoline additives.
The ability of a particular gasoline blend to resist igniting too early, which causes engine knocking and reduces efficiency in reciprocating engines, is measured by its octane rating; gasoline is produced in several octane grades for different types of motor. Tetraethyl lead and other lead compounds, once widely used to raise octane ratings, are no longer used in modern automotive gasoline, except in aviation, off-road motor vehicles, and racing car motors.
Gasoline enters the human environment as an uncombusted fuel, both as a flammable liquid and as a vapor, through leaks that occur during production, handling, transport, and delivery. Gasoline contains known carcinogens. When burned, one liter (0.26 U.S. gal) of gasoline emits about 2.3 kilograms (5.1 lb) of CO2, a greenhouse gas, contributing to human-caused climate change. Oil products, including gasoline, were responsible for about 32% of CO2 emissions worldwide in 2021.
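The 2.3 kg-per-liter figure can be sanity-checked with rough combustion stoichiometry. A minimal sketch, assuming typical values that are not stated above (about 0.74 kg/L density and about 84% carbon by mass); each kilogram of carbon burned yields 44.01/12.011 ≈ 3.7 kg of CO2:

```python
# Rough stoichiometric check of CO2 emitted per liter of gasoline burned.
# The density and carbon fraction below are assumed typical values,
# not figures from the text; real blends vary.
DENSITY_KG_PER_L = 0.74             # typical gasoline density, kg per liter
CARBON_MASS_FRACTION = 0.84         # typical carbon content of gasoline by mass
CO2_PER_KG_CARBON = 44.01 / 12.011  # kg CO2 per kg carbon (molar mass ratio)

co2_per_liter = DENSITY_KG_PER_L * CARBON_MASS_FRACTION * CO2_PER_KG_CARBON
print(f"~{co2_per_liter:.2f} kg CO2 per liter")
```

With these inputs the estimate comes out near 2.3 kg, consistent with the cited figure.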
On average, U.S. petroleum refineries produce, from a barrel of crude oil (42 U.S. gallons or about 159 liters), about 19 to 20 gallons of gasoline, 11 to 13 gallons of distillate fuel (diesel fuel), and 3 to 4 gallons of jet fuel. The product ratio depends upon the processing in the refinery and the crude oil assay.
The American English word gasoline denotes fuel for automobiles; common usage shortened it to gas, motor gas, and mogas, differentiating that fuel from avgas (aviation gasoline), which is fuel for aircraft. The term gasoline originated from the trademarks Cazeline and Gazeline, stylized spellings and pronunciations of Cassell, the surname of the British businessman John Cassell, who, on 27 November 1862, placed the following fuel-oil advertisement in The Times of London:
The Patent Cazeline Oil, safe, economical, and brilliant […] possesses all the requisites which have so long been desired as a means of powerful artificial light.
That 19th-century advert is the earliest occurrence of Cassell’s trademark word, Cazeline, used to identify a fuel. In the course of business, he learned that the Dublin shopkeeper Samuel Boyd was selling a counterfeit version of cazeline, and, in writing, Cassell asked Boyd to cease and desist selling fuel under his trademark. Boyd did not reply, and Cassell changed the spelling of his trademark by changing the initial letter C to the letter G, thus coining the word gazeline. By 1863, North American English usage had re-spelled gazeline as gasolene; by 1864, the gasoline spelling was the common usage. In place of the American English word gasoline, most Commonwealth countries (except Canada) use the term “petrol”; in Canada, “gas” is the common parlance, hence the prevalence there of the usage “gas station”.
Coined from Medieval Latin, the word petroleum (L. petra, rock + oleum, oil) initially denoted types of mineral oil derived from rocks and stones. In British English usage, the word petrol was the name of a product sold in the 1870s; as a trademark, the term Petrol identified a refined mineral oil product sold at market as a solvent by the British wholesaler Carless Refining and Marketing Ltd. Later, when petrol found a new use as a motor fuel, Frederick Simms, an associate of Gottlieb Daimler, suggested to John Leonard, owner of Carless, that they register the word Petrol, with its uppercase spelling, as a trademark. By that time, however, petrol was already a common noun in English usage, a general descriptor for automobile fuel, and the trademark application was refused. Moreover, because of his firm’s commercial antiquity, Carless retained the legal rights to the term and to the uppercase spelling of “Petrol” as the name of a petrochemical product.
British refiners originally used “motor spirit” as a generic name for the automotive fuel and “aviation spirit” for aviation gasoline. When Carless was denied a trademark on “petrol” in the 1930s, its competitors switched to the more popular name “petrol”. However, “motor spirit” had already made its way into laws and regulations, so the term remains in use as a formal name for petrol. The term is used most widely in Nigeria, where the largest petroleum companies call their product “premium motor spirit”. Although “petrol” has made inroads into Nigerian English, “premium motor spirit” remains the formal name that is used in scientific publications, government reports, and newspapers.
The use of the word gasoline instead of petrol is uncommon outside North America, although gasolina is used in Spanish and Portuguese and gasorin is used in Japanese.
In many languages, the name of the product is derived from benzene, such as Benzin in Persian and German or benzina in Italian; but in Argentina, Uruguay, and Paraguay, the colloquial name nafta is derived from that of the chemical naphtha.
In some languages, such as French (gazole) and Italian (gasolio), words cognate with gasoline instead denote diesel fuel.
The first internal combustion engines suitable for use in transportation applications, so-called Otto engines, were developed in Germany during the last quarter of the 19th century. The fuel for these early engines was a relatively volatile hydrocarbon obtained from coal gas. With a boiling point near 85 °C (185 °F) (n-octane boils at 125.62 °C (258.12 °F)), it was well-suited for early carburetors (evaporators). The development of a “spray nozzle” carburetor enabled the use of less volatile fuels. Further improvements in engine efficiency were attempted at higher compression ratios, but early attempts were blocked by the premature explosion of fuel, known as knocking.
In 1891, the Shukhov cracking process became the world’s first commercial method to break down heavier hydrocarbons in crude oil to increase the percentage of lighter products compared to simple distillation.
1903 to 1914
The evolution of gasoline followed the evolution of oil as the dominant source of energy in the industrializing world. Before World War I, Britain was the world’s greatest industrial power and depended on its navy to protect the shipping of raw materials from its colonies. Germany was also industrializing and, like Britain, lacked many natural resources which had to be shipped to the home country. By the 1890s, Germany began to pursue a policy of global prominence and began building a navy to compete with Britain’s. Coal was the fuel that powered their navies. Though both Britain and Germany had natural coal reserves, new developments in oil as a fuel for ships changed the situation. Coal-powered ships were a tactical weakness because the process of loading coal was extremely slow and dirty and left the ship completely vulnerable to attack, and unreliable supplies of coal at international ports made long-distance voyages impractical. The advantages of petroleum oil soon had the navies of the world converting to oil, but Britain and Germany had very few domestic oil reserves. Britain eventually solved its naval oil dependence by securing oil from Royal Dutch Shell and the Anglo-Persian Oil Company, and this determined from where, and of what quality, its gasoline would come.
During the early period of gasoline engine development, aircraft were forced to use motor vehicle gasoline since aviation gasoline did not yet exist. These early fuels were termed “straight-run” gasolines and were byproducts from the distillation of a single crude oil to produce kerosene, which was the principal product sought for burning in kerosene lamps. Gasoline production would not surpass kerosene production until 1916. The earliest straight-run gasolines were the result of distilling eastern crude oils and there was no mixing of distillates from different crudes. The composition of these early fuels was unknown and the quality varied greatly as crude oils from different oil fields yielded different mixtures of hydrocarbons in different ratios. The engine effects produced by abnormal combustion (engine knocking and pre-ignition) due to inferior fuels had not yet been identified, and as a result, there was no rating of gasoline in terms of its resistance to abnormal combustion. The general specifications by which early gasolines were measured were specific gravity via the Baumé scale and, later, volatility (tendency to vaporize), specified in terms of boiling points; these became the primary focus for gasoline producers. These early eastern crude oil gasolines had relatively high Baumé test results (65 to 80 degrees Baumé) and were called “Pennsylvania high-test” or simply “high-test” gasolines. These were often used in aircraft engines.
By 1910, increased automobile production and the resultant increase in gasoline consumption produced a greater demand for gasoline. Also, the growing electrification of lighting produced a drop in kerosene demand, creating a supply problem. It appeared that the burgeoning oil industry would be trapped into over-producing kerosene and under-producing gasoline since simple distillation could not alter the ratio of the two products from any given crude. The solution appeared in 1911 when the development of the Burton process allowed thermal cracking of crude oils, which increased the percent yield of gasoline from the heavier hydrocarbons. This was combined with the expansion of foreign markets for the export of surplus kerosene which domestic markets no longer needed. These new thermally “cracked” gasolines were believed to have no harmful effects and would be added to straight-run gasolines. There also was the practice of mixing heavy and light distillates to achieve the desired Baumé reading and collectively these were called “blended” gasolines.
Gradually, volatility gained favor over the Baumé test, though both continued to be used in combination to specify a gasoline. As late as June 1917, Standard Oil (the largest refiner of crude oil in the United States at the time) stated that the most important property of a gasoline was its volatility. It is estimated that the rating equivalent of these straight-run gasolines varied from 40 to 60 octane and that the “high-test”, sometimes referred to as “fighting grade”, probably averaged 50 to 65 octane.
World War I
Prior to the United States’ entry into World War I, the European Allies used fuels derived from crude oils from Borneo, Java, and Sumatra, which gave satisfactory performance in their military aircraft. When the U.S. entered the war in April 1917, it became the principal supplier of aviation gasoline to the Allies, and a decrease in engine performance was noted. Soon it was realized that motor vehicle fuels were unsatisfactory for aviation, and after the loss of several combat aircraft, attention turned to the quality of the gasolines being used. Later flight tests conducted in 1937 showed that an octane reduction of 13 points (from 100 down to 87 octane) decreased engine performance by 20 percent and increased take-off distance by 45 percent. If abnormal combustion were to occur, the engine could lose enough power to make getting airborne impossible, and a take-off roll became a threat to the pilot and aircraft.
On 2 August 1917, the U.S. Bureau of Mines arranged to study fuels for aircraft in cooperation with the Aviation Section of the U.S. Army Signal Corps and a general survey concluded that no reliable data existed for the proper fuels for aircraft. As a result, flight tests began at Langley, McCook and Wright fields to determine how different gasolines performed under different conditions. These tests showed that in certain aircraft, motor vehicle gasolines performed as well as “high-test” but in other types resulted in hot-running engines. It was also found that gasolines from aromatic and naphthenic base crude oils from California, South Texas, and Venezuela resulted in smooth-running engines. These tests resulted in the first government specifications for motor gasolines (aviation gasolines used the same specifications as motor gasolines) in late 1917.
1918 to 1929
Engine designers knew that, according to the Otto cycle, power and efficiency increased with compression ratio, but experience with early gasolines during World War I showed that higher compression ratios increased the risk of abnormal combustion, producing lower power, lower efficiency, hot-running engines, and potentially severe engine damage. To compensate for these poor fuels, early engines used low compression ratios, which required relatively large, heavy engines with limited power and efficiency. The Wright brothers’ first gasoline engine used a compression ratio as low as 4.7-to-1, developed only 8.9 kilowatts (12 hp) from 3,290 cubic centimeters (201 cu in), and weighed 82 kilograms (180 lb). This was a major concern for aircraft designers and the needs of the aviation industry provoked the search for fuels that could be used in higher-compression engines.
Between 1917 and 1919, the amount of thermally cracked gasoline utilized almost doubled. Also, the use of natural gasoline increased greatly. During this period, many U.S. states established specifications for motor gasoline but none of these agreed and they were unsatisfactory from one standpoint or another. Larger oil refiners began to specify unsaturated material percentage (thermally cracked products caused gumming in both use and storage while unsaturated hydrocarbons are more reactive and tend to combine with impurities leading to gumming). In 1922, the U.S. government published the first specifications for aviation gasolines (two grades were designated as “fighting” and “domestic” and were governed by boiling points, color, sulfur content, and a gum formation test) along with one “motor” grade for automobiles. The gum test essentially eliminated thermally cracked gasoline from aviation usage and thus aviation gasolines reverted to fractionating straight-run naphthas or blending straight-run and highly treated thermally cracked naphthas. This situation persisted until 1929.
The automobile industry reacted to the increase in thermally cracked gasoline with alarm. Thermal cracking produced large amounts of both mono- and diolefins (unsaturated hydrocarbons), which increased the risk of gumming. Also, the volatility was decreasing to the point that fuel did not vaporize and was sticking to spark plugs and fouling them, creating hard starting and rough running in winter and sticking to cylinder walls, bypassing the pistons and rings, and going into the crankcase oil. One journal stated, “on a multi-cylinder engine in a high-priced car we are diluting the oil in the crankcase as much as 40 percent in a 200-mile [320 km] run, as the analysis of the oil in the oil-pan shows”.
Being very unhappy with the consequent reduction in overall gasoline quality, automobile manufacturers suggested imposing a quality standard on the oil suppliers. The oil industry in turn accused the automakers of not doing enough to improve vehicle economy, and the dispute became known within the two industries as “the fuel problem”. Animosity grew between the industries, each accusing the other of not doing anything to resolve matters, and their relationship deteriorated. The situation was only resolved when the American Petroleum Institute (API) initiated a conference to address the fuel problem and a cooperative fuel research (CFR) committee was established in 1920, to oversee joint investigative programs and solutions. Apart from representatives of the two industries, the Society of Automotive Engineers (SAE) also played an instrumental role, with the U.S. Bureau of Standards being chosen as an impartial research organization to carry out many of the studies. Initially, all the programs were related to volatility and fuel consumption, ease of starting, crankcase oil dilution, and acceleration.
Leaded gasoline controversy, 1924–1925
With the increased use of thermally cracked gasolines came an increased concern regarding its effects on abnormal combustion, and this led to research for antiknock additives. In the late 1910s, researchers such as A.H. Gibson, Harry Ricardo, Thomas Midgley Jr., and Thomas Boyd began to investigate abnormal combustion. Beginning in 1916, Charles F. Kettering of General Motors began investigating additives based on two paths, the “high percentage” solution (where large quantities of ethanol were added) and the “low percentage” solution (where only 0.53-1.1 g/L or 0.071-0.147 oz / U.S. gal were needed). The “low percentage” solution ultimately led to the discovery of tetraethyllead (TEL) in December 1921, a product of the research of Midgley and Boyd and the defining component of leaded gasoline. This innovation started a cycle of improvements in fuel efficiency that coincided with the large-scale development of oil refining to provide more products in the boiling range of gasoline. Ethanol could not be patented but TEL could, so Kettering secured a patent for TEL and began promoting it instead of other options.
The dangers of compounds containing lead were well-established by then and Kettering was directly warned by Robert Wilson of MIT, Reid Hunt of Harvard, Yandell Henderson of Yale, and Erik Krause of the University of Potsdam in Germany about its use. Krause had worked on tetraethyllead for many years and called it “a creeping and malicious poison” that had killed a member of his dissertation committee. On 27 October 1924, newspaper articles around the nation told of the workers at the Standard Oil refinery near Elizabeth, New Jersey who were producing TEL and were suffering from lead poisoning. By 30 October, the death toll had reached five. In November, the New Jersey Labor Commission closed the Bayway refinery and a grand jury investigation was started which had resulted in no charges by February 1925. Leaded gasoline sales were banned in New York City, Philadelphia, and New Jersey. General Motors, DuPont, and Standard Oil, who were partners in Ethyl Corporation, the company created to produce TEL, began to argue that there were no alternatives to leaded gasoline that would maintain fuel efficiency and still prevent engine knocking. After several industry-funded flawed studies reported that TEL-treated gasoline was not a public health issue, the controversy subsided.
1930 to 1941
In the five years prior to 1929, a great amount of experimentation was conducted on different testing methods for determining fuel resistance to abnormal combustion. It appeared engine knocking was dependent on a wide variety of parameters including compression, ignition timing, cylinder temperature, air-cooled or water-cooled engines, chamber shapes, intake temperatures, lean or rich mixtures, and others. This led to a confusing variety of test engines that gave conflicting results, and no standard rating scale existed. By 1929, it was recognized by most aviation gasoline manufacturers and users that some kind of antiknock rating must be included in government specifications. In 1929, the octane rating scale was adopted, and in 1930, the first octane specification for aviation fuels was established. In the same year, the U.S. Army Air Corps specified fuels rated at 87 octane for its aircraft as a result of studies it had conducted.
During this period, research showed that hydrocarbon structure was extremely important to the antiknocking properties of fuel. Straight-chain paraffins in the boiling range of gasoline had low antiknock qualities while ring-shaped molecules such as aromatic hydrocarbons (for example benzene) had higher resistance to knocking. This development led to the search for processes that would produce more of these compounds from crude oils than achieved under straight distillation or thermal cracking. Research by the major refiners led to the development of processes involving isomerization of cheap and abundant butane to isobutane, and alkylation to join isobutane and butylenes to form isomers of octane such as “isooctane”, which became an important component in aviation fuel blending. To further complicate the situation, as engine performance increased, the altitude that aircraft could reach also increased, which resulted in concerns about the fuel freezing. The average temperature decrease is 3.6 °F (2.0 °C) per 300-meter (1,000 ft) increase in altitude, and at 12,000 meters (40,000 ft), the temperature can approach −57 °C (−70 °F). Additives like benzene, with a freezing point of 6 °C (42 °F), would freeze in the gasoline and plug fuel lines. Substituted aromatics such as toluene, xylene, and cumene, combined with limited benzene, solved the problem.
By 1935, there were seven different aviation grades based on octane rating, two Army grades, four Navy grades, and three commercial grades including the introduction of 100-octane aviation gasoline. By 1937, the Army established 100-octane as the standard fuel for combat aircraft, and to add to the confusion, the government now recognized 14 different grades, in addition to 11 others in foreign countries. With some companies required to stock 14 grades of aviation fuel, none of which could be interchanged, the effect on the refiners was negative. The refining industry could not concentrate on large capacity conversion processes for so many different grades and a solution had to be found. By 1941, principally through the efforts of the Cooperative Fuel Research Committee, the number of grades for aviation fuels was reduced to three: 73, 91, and 100 octane.
The development of 100-octane aviation gasoline on an economic scale was due in part to Jimmy Doolittle, who had become Aviation Manager of Shell Oil Company. He convinced Shell to invest in refining capacity to produce 100-octane fuel on a scale that nobody needed, since no aircraft yet existed that required a fuel nobody yet made. Some fellow employees would call his effort “Doolittle’s million-dollar blunder”, but time would prove Doolittle correct. Before this, the Army had considered 100-octane tests using pure octane, but at $6.6 per liter ($25/U.S. gal), the price prevented this from happening. In 1929, Stanavo Specification Board Inc. was organized by the Standard Oil companies of California, Indiana, and New Jersey to improve aviation fuels and oils, and by 1935 it had placed its first 100-octane fuel on the market, Stanavo Ethyl Gasoline 100. It was used by the Army, engine manufacturers and airlines for testing and for air racing and record flights. By 1936, tests at Wright Field using the new, cheaper alternatives to pure octane proved the value of 100-octane fuel, and both Shell and Standard Oil would win contracts to supply test quantities for the Army. By 1938, the price was down to $0.046 per liter ($0.175/U.S. gal), only $0.0066 ($0.025) more than 87-octane fuel. By the end of WWII, the price would be down to $0.042 per liter ($0.16/U.S. gal).
In 1937, Eugene Houdry developed the Houdry process of catalytic cracking, which produced a high-octane base stock of gasoline which was superior to the thermally cracked product since it did not contain the high concentration of olefins. In 1940, there were only 14 Houdry units in operation in the U.S.; by 1943, this had increased to 77, either of the Houdry process or of the Thermofor Catalytic or Fluid Catalyst type.
The search for fuels with octane ratings above 100 led to the extension of the scale by comparing power output. A fuel designated grade 130 would produce 130 percent as much power in an engine as it would running on pure iso-octane. During WWII, fuels above 100-octane were given two ratings, a rich and a lean mixture, and these would be called ‘performance numbers’ (PN). 100-octane aviation gasoline would be referred to as 130/100 grade.
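A performance number is therefore just knock-limited power output relative to pure iso-octane, scaled so that iso-octane itself rates 100. A minimal sketch of the arithmetic (the function name is illustrative, not from any standard):

```python
def performance_number(power_on_fuel: float, power_on_isooctane: float) -> float:
    """Knock-limited power relative to pure iso-octane, scaled to 100."""
    return 100.0 * power_on_fuel / power_on_isooctane

# Pure iso-octane rates 100 by definition; a fuel permitting 30% more
# knock-limited power in the same engine rates grade 130.
print(performance_number(1.0, 1.0))  # 100.0
print(performance_number(1.3, 1.0))  # 130.0
```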
World War II
Oil and its byproducts, especially high-octane aviation gasoline, would prove to be a driving concern in how Germany conducted the war. As a result of the lessons of World War I, Germany had stockpiled oil and gasoline for its blitzkrieg offensive and had annexed Austria, adding 18,000 barrels (2,900 m3; 100,000 cu ft) per day of oil production, but this was not sufficient to sustain the planned conquest of Europe. Because captured supplies and oil fields would be necessary to fuel the campaign, the German high command created a special squad of oilfield experts drawn from the ranks of domestic oil industries. They were sent in to put out oilfield fires and get production going again as soon as possible. But capturing oilfields remained an obstacle throughout the war. During the Invasion of Poland, German estimates of gasoline consumption turned out to be vastly too low. Heinz Guderian and his Panzer divisions consumed nearly 2.4 liters per kilometer (1 U.S. gal/mi) of gasoline on the drive to Vienna. When they were engaged in combat across open country, gasoline consumption almost doubled. On the second day of battle, a unit of the XIX Corps was forced to halt when it ran out of gasoline. One of the major objectives of the Polish invasion was Poland’s oil fields, but the Soviets invaded and captured 70 percent of Polish production before the Germans could reach it. Through the German–Soviet Commercial Agreement (1940), Stalin agreed in vague terms to supply Germany with additional oil equal to that produced by the now Soviet-occupied Polish oilfields at Drohobych and Boryslav, in exchange for hard coal and steel tubing.
Even the Nazi conquest of vast territories in Europe did not relieve the gasoline shortage; the area had never been self-sufficient in oil before the war. In 1938, the area that would become Nazi-occupied produced 575,000 barrels (91,400 m3; 3,230,000 cu ft) per day. In 1940, total production under German control amounted to only 234,550 barrels (37,290 m3; 1,316,900 cu ft). By early 1941, with German gasoline reserves depleted, Adolf Hitler saw the invasion of Russia, to seize the Polish oil fields and the Russian oil in the Caucasus, as the solution to the German gasoline shortage. As early as July 1941, following the 22 June start of Operation Barbarossa, certain Luftwaffe squadrons were forced to curtail ground-support missions due to shortages of aviation gasoline. On 9 October, the German quartermaster general estimated that army vehicles were 24,000 barrels (3,800 m3; 130,000 cu ft) short of gasoline requirements.
Virtually all of Germany’s aviation gasoline came from synthetic oil plants that hydrogenated coals and coal tars. These processes had been developed during the 1930s as an effort to achieve fuel independence. There were two grades of aviation gasoline produced in volume in Germany, the B-4 or blue grade and the C-3 or green grade, which accounted for about two-thirds of all production. B-4 was equivalent to 89-octane and the C-3 was roughly equal to the U.S. 100-octane, though lean mixture was rated around 95-octane and was poorer than the U.S. version. Maximum output achieved in 1943 reached 52,200 barrels (8,300 m3; 293,000 cu ft) a day before the Allies decided to target the synthetic fuel plants. Through captured enemy aircraft and analysis of the gasoline found in them, both the Allies and the Axis powers were aware of the quality of the aviation gasoline being produced and this prompted an octane race to achieve the advantage in aircraft performance. Later in the war, the C-3 grade was improved to where it was equivalent to the U.S. 150 grade (rich mixture rating).
Japan, like Germany, had almost no domestic oil supply and by the late 1930s produced only seven percent of its own oil, importing the rest – 80 percent of it from the U.S. As Japanese aggression grew in China (the USS Panay incident) and news reached the American public of Japanese bombing of civilian centers, especially the bombing of Chungking, public opinion began to support a U.S. embargo. A Gallup poll in June 1939 found that 72 percent of the American public supported an embargo on war materials to Japan. This increased tensions between the U.S. and Japan and led to the U.S. placing restrictions on exports. In July 1940, the U.S. issued a proclamation that banned the export of 87-octane or higher aviation gasoline to Japan. The ban did not hinder the Japanese, as their aircraft could operate with fuels below 87 octane and, if needed, they could add TEL to increase the octane; as it turned out, Japan bought 550 percent more sub-87-octane aviation gasoline in the five months after the July 1940 ban on higher-octane sales. The possibility of a complete ban of gasoline from America created friction within the Japanese government over what action to take; Japan sought greater supplies from the Dutch East Indies and demanded greater oil exports from the exiled Dutch government after the Battle of the Netherlands. This action prompted the U.S. to move its Pacific fleet from Southern California to Pearl Harbor to help stiffen British resolve to stay in Indochina. With the Japanese invasion of French Indochina in September 1940 came great concerns about a possible Japanese invasion of the Dutch East Indies to secure their oil. The day after the U.S. banned all exports of steel and iron scrap, Japan signed the Tripartite Pact, and this led Washington to fear that a complete U.S. oil embargo would prompt the Japanese to invade the Dutch East Indies.
On 16 June 1941, Harold Ickes, who had been appointed Petroleum Coordinator for National Defense, stopped a shipment of oil from Philadelphia to Japan in light of the oil shortage on the East Coast caused by increased exports to the Allies. He also telegrammed all oil suppliers on the East Coast not to ship any oil to Japan without his permission. President Roosevelt countermanded Ickes’s orders, telling him: “I simply have not got enough Navy to go around and every little episode in the Pacific means fewer ships in the Atlantic”. On 25 July 1941, the U.S. froze all Japanese financial assets; licenses would be required for each use of the frozen funds, including oil purchases that could produce aviation gasoline. On 28 July 1941, Japan invaded southern Indochina.
The debate inside the Japanese government as to its oil and gasoline situation was leading to invasion of the Dutch East Indies, but this would mean war with the U.S., whose Pacific fleet was a threat to their flank. This situation led to the decision to attack the U.S. fleet at Pearl Harbor before proceeding with the Dutch East Indies invasion. On 7 December 1941, Japan attacked Pearl Harbor, and the next day the Netherlands declared war on Japan, which initiated the Dutch East Indies campaign. But the Japanese missed a golden opportunity at Pearl Harbor. “All of the oil for the fleet was in surface tanks at the time of Pearl Harbor”, Admiral Chester Nimitz, who became Commander in Chief of the Pacific Fleet, was later to say. “We had about 4½ million barrels [0.72×10⁶ m3; 25×10⁶ cu ft] of oil out there and all of it was vulnerable to .50 caliber bullets. Had the Japanese destroyed the oil,” he added, “it would have prolonged the war another two years.”
Early in 1944, William Boyd, president of the American Petroleum Institute and chairman of the Petroleum Industry War Council, said: “The Allies may have floated to victory on a wave of oil in World War I, but in this infinitely greater World War II, we are flying to victory on the wings of petroleum”. In December 1941 the U.S. had 385,000 oil wells producing 1.6 billion barrels (0.25×10⁹ m³; 9.0×10⁹ cu ft) of oil a year, and 100-octane aviation gasoline capacity was 40,000 barrels (6,400 m³; 220,000 cu ft) a day. By 1944, the U.S. was producing over 1.5 billion barrels (0.24×10⁹ m³; 8.4×10⁹ cu ft) a year (67 percent of world production), and the petroleum industry had built 122 new plants for the production of 100-octane aviation gasoline; capacity was over 400,000 barrels (64,000 m³; 2,200,000 cu ft) a day, an increase of more than ten-fold. It was estimated that the U.S. was producing enough 100-octane aviation gasoline to permit the dropping of 16,000 metric tons (18,000 short tons; 16,000 long tons) of bombs on the enemy every day of the year. The record of gasoline consumption by the Army prior to June 1943 was uncoordinated, as each supply service of the Army purchased its own petroleum products and no centralized system of control or records existed. On 1 June 1943, the Army created the Fuels and Lubricants Division of the Quartermaster Corps and, from its records, tabulated that the Army (excluding fuels and lubricants for aircraft) purchased over 9.1 billion liters (2.4×10⁹ U.S. gal) of gasoline for delivery to overseas theaters between 1 June 1943 and August 1945. That figure does not include gasoline used by the Army inside the U.S. Motor fuel production declined from 701 million barrels (111.5×10⁶ m³; 3,940×10⁶ cu ft) in 1941 to 208 million barrels (33.1×10⁶ m³; 1,170×10⁶ cu ft) in 1943.
World War II marked the first time in U.S. history that gasoline was rationed, and the government imposed price controls to prevent inflation. Gasoline consumption per automobile declined from 2,860 liters (755 U.S. gal) per year in 1941 to 2,000 liters (540 U.S. gal) in 1943, with the goal of preserving rubber for tires, since the Japanese had cut the U.S. off from over 90 percent of its rubber supply, which had come from the Dutch East Indies, and the U.S. synthetic rubber industry was in its infancy. Average gasoline prices went from a record low of $0.0337 per liter ($0.1275/U.S. gal) ($0.0486 ($0.1841) with taxes) in 1940 to $0.0383 per liter ($0.1448/U.S. gal) ($0.0542 ($0.2050) with taxes) in 1945.
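The paired per-liter and per-gallon prices above are straight unit conversions. A minimal sketch to check them; the gallon constant is the standard U.S. definition, and rounding to four decimals matches the figures in the text:

```python
LITERS_PER_US_GAL = 3.785411784  # U.S. liquid gallon in liters, exact by definition

def price_per_liter(price_per_gal: float) -> float:
    """Convert a $/U.S. gallon price to $/liter."""
    return price_per_gal / LITERS_PER_US_GAL

# 1940 record low and 1945 average, before taxes
print(round(price_per_liter(0.1275), 4))  # 0.0337 $/L
print(round(price_per_liter(0.1448), 4))  # 0.0383 $/L
```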
Even with the world’s largest aviation gasoline production, the U.S. military still found that more was needed. Throughout the duration of the war, aviation gasoline supply was always behind requirements, and this impacted training and operations. The reason for this shortage developed before the war even began. The free market did not support the expense of producing 100-octane aviation fuel in large volume, especially during the Great Depression. Iso-octane in the early development stage cost $7.90 per liter ($30/U.S. gal), and, even by 1934, it was still $0.53 per liter ($2/U.S. gal) compared to $0.048 ($0.18) for motor gasoline when the Army decided to experiment with 100-octane for its combat aircraft. Though only three percent of U.S. combat aircraft in 1935 could take full advantage of the higher octane due to low compression ratios, the Army saw that the need for increasing performance warranted the expense and purchased 100,000 gallons. By 1937, the Army established 100-octane as the standard fuel for combat aircraft, and by 1939 production was only 20,000 barrels (3,200 m³; 110,000 cu ft) a day. In effect, the U.S. military was the only market for 100-octane aviation gasoline, and as war broke out in Europe this created a supply problem that persisted throughout the war.
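The barrel figures quoted throughout carry metric conversions; they follow from the 42-U.S.-gallon oil barrel (about 159 liters, as noted earlier). A quick sketch, with rounding to two significant figures matching the text:

```python
M3_PER_BARREL = 0.158987  # one 42-U.S.-gallon oil barrel in cubic meters

def barrels_to_m3(bbl: float) -> float:
    """Convert oil barrels to cubic meters."""
    return bbl * M3_PER_BARREL

# 1939 production of 100-octane aviation gasoline: 20,000 bbl/day
print(round(barrels_to_m3(20_000), -2))   # ~3,200 m^3/day
# 1944 capacity: 400,000 bbl/day
print(round(barrels_to_m3(400_000), -3))  # ~64,000 m^3/day
```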
With the war in Europe a reality in 1939, every prediction of 100-octane consumption outran all possible production. Neither the Army nor the Navy could contract more than six months in advance for fuel, and neither could supply the funds for plant expansion. Without a long-term guaranteed market, the petroleum industry would not risk its capital to expand production for a product that only the government would buy. The solution to the problems of storage, transportation, finance, and production expansion was the creation of the Defense Supplies Corporation on 19 September 1940. The Defense Supplies Corporation would buy, transport, and store all aviation gasoline for the Army and Navy at cost plus a carrying fee.
When the Allied breakout after D-Day found their armies stretching their supply lines to a dangerous point, the makeshift solution was the Red Ball Express. But even this soon proved inadequate. The trucks in the convoys had to drive longer distances as the armies advanced, and they were consuming a greater percentage of the same gasoline they were trying to deliver. In 1944, General George Patton’s Third Army finally stalled just short of the German border after running out of gasoline. The general was so upset at the arrival of a truckload of rations instead of gasoline that he was reported to have shouted: “Hell, they send us food, when they know we can fight without food but not without oil.” The solution had to wait for the repair of the railroad lines and bridges so that the more efficient trains could replace the gasoline-consuming truck convoys.
1946 to Present
The development during WWII of jet engines burning kerosene-based fuels produced a propulsion system superior to anything internal combustion engines could offer, and U.S. military forces gradually replaced their piston combat aircraft with jet-powered planes. This development essentially removed the military need for ever-increasing octane fuels and eliminated government support for the refining industry to pursue the research and production of such exotic and expensive fuels. Commercial aviation was slower to adopt jet propulsion; until 1958, when the Boeing 707 first entered commercial service, piston-powered airliners still relied on aviation gasoline. But commercial aviation had greater economic concerns than the maximum performance the military could afford. As octane numbers increased, so did the cost of gasoline, but the incremental gain in efficiency diminishes as compression ratio rises. This reality set a practical limit on how high compression ratios could go relative to how expensive the gasoline would become. Last produced in 1955, the Pratt & Whitney R-4360 Wasp Major used 115/145 aviation gasoline, producing 0.046 kilowatts per cubic centimeter (1 hp/cu in) at a 6.7 compression ratio (turbo-supercharging would increase this) and needing 0.45 kilograms (1 lb) of engine weight to produce 0.82 kilowatts (1.1 hp). This compares to the Wright brothers’ engine, which needed almost 7.7 kilograms (17 lb) of engine weight to produce 0.75 kilowatts (1 hp).
The U.S. automobile industry after WWII could not take advantage of the high-octane fuels then available. Automobile compression ratios increased from an average of 5.3-to-1 in 1931 to just 6.7-to-1 in 1946. The average octane number of regular-grade motor gasoline increased from 58 to 70 during the same time. Military aircraft were using expensive turbo-supercharged engines that cost at least 10 times as much per horsepower as automobile engines and had to be overhauled every 700 to 1,000 hours. The automobile market could not support such expensive engines. It would not be until 1957 that the first U.S. automobile manufacturer could mass-produce an engine that would produce one horsepower per cubic inch: the Chevrolet 283 hp, 283-cubic-inch V-8 engine option in the Corvette. At $485, this was an expensive option that few consumers could afford, and it would appeal only to the performance-oriented consumer market willing to pay for the premium fuel required. This engine had an advertised compression ratio of 10.5-to-1, and the 1958 AMA Specifications stated that the octane requirement was 96–100 RON. At 243 kilograms (535 lb) (1959, with aluminum intake), it took 0.86 kilograms (1.9 lb) of engine weight to make 0.75 kilowatts (1 hp).
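The specific-output and weight-per-horsepower figures in the last two paragraphs are unit conversions from the quoted horsepower, cubic-inch, and pound values. A small sketch reproducing the Chevrolet 283 numbers; the constants are standard conversion factors:

```python
KW_PER_HP = 0.7457     # one mechanical horsepower in kilowatts
KG_PER_LB = 0.45359    # one pound in kilograms
CC_PER_CUIN = 16.387   # one cubic inch in cubic centimeters

def specific_output_kw_per_cc(hp: float, cu_in: float) -> float:
    """Power per unit displacement, in kW per cm^3."""
    return hp * KW_PER_HP / (cu_in * CC_PER_CUIN)

def weight_per_hp_kg(weight_lb: float, hp: float) -> float:
    """Engine weight in kg needed per horsepower produced."""
    return weight_lb * KG_PER_LB / hp

# Chevrolet 283: 283 hp from 283 cu in, 535 lb with aluminum intake
print(round(specific_output_kw_per_cc(283, 283), 3))  # 0.046 kW/cm^3 (1 hp/cu in)
print(round(weight_per_hp_kg(535, 283), 2))           # 0.86 kg per hp
```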
In the 1950s, oil refineries started to focus on high-octane fuels, and detergents were added to gasoline to clean the jets in carburetors. The 1970s witnessed greater attention to the environmental consequences of burning gasoline. These considerations led to the phasing out of tetraethyl lead (TEL) and its replacement by other antiknock compounds. Subsequently, low-sulfur gasoline was introduced, in part to preserve the catalysts in modern exhaust systems.
See endnotes and bibliography at source.
Originally published by Wikipedia, 09.24.2001, under a Creative Commons Attribution-ShareAlike 3.0 Unported license.