Gas! Gas! Quick, Boys!

Michael Freemantle

Description

Gas! Gas! Quick, Boys! reveals for the first time the true extent to which chemistry, rather than military strategy, determined the shape, duration and outcome of the First World War. Chemistry was not only a destructive instrument of war but also protected troops, and healed the sick and wounded. From bombs to bullets, poison gas to anaesthetics, khaki to cordite, chemistry was truly the alchemy of the First World War. Michael Freemantle explores its dangers and its healing potential, revealing how the arms race was also a race for chemistry, to the extent that Germany’s thirst for the chemicals needed to make explosives deprived the nation of fertilizers and nearly starved its people. He answers questions such as: What is guncotton? What is lyddite? What is mustard gas? What is phosgene? What is gunmetal? This is a true picture of the horrors of the ‘Chemists’ War’.




For Martha and Ariane

Contents

Title Page

Dedication

Introduction

1. The Chemists’ War

2. Shell Chemistry

3. Mills Bombs and other Grenades

4. The Highs and Lows of Explosives

5. The Metals of War

6. Gas! GAS! Quick, boys!

7. Dye or Die

8. Caring for the Wounded

9. Fighting Infection

10. Killing the Pain

11. The Double-edged Sword

Notes

Bibliography

Plate Section

Copyright

Introduction

My uncle, George Curtis, fought in the First World War and survived. Sadly, he died in 1966 and I never thought to ask him about his experiences in the conflict. My paternal grandfather, Samuel, may also have fought in the war, but once again it did not occur to me to ask him or my parents whether he did. He died soon after my uncle did, and my parents are no longer alive.

Until 2009, my interest in history had mainly been confined to the history of science. Indeed, I had written about various aspects of the subject over the preceding twenty-five years or so. As far as the First World War was concerned, I had read books such as Siegfried Sassoon’s Memoirs of an Infantry Officer, watched films like Oh! What a Lovely War, and was familiar with Wilfred Owen’s poem ‘Dulce et Decorum Est’, but that was the extent of my interest.

Then, in July 2009, my wife Mary and I travelled to Belgium and northern France to visit some of the First World War battlefields and cemeteries. For part of the trip we were joined by our son Dominic, his wife Claire, and their baby daughter Eloise. Both Mary and Dominic had been passionately interested in the war for many years.

One morning during our visit to the battlefield sites around Ypres, including Essex Farm, Passchendaele, Sanctuary Wood, and Langemarck (where the Germans launched their first chlorine gas assault), our tour guide informed us that the chlorine not only gassed the Allied troops but also dissolved in the muddy water that filled the shell holes to form highly corrosive hydrochloric acid. I was puzzled as I seemed to recall that chlorine dissolved in water to form chlorine water, a weakly acidic solution. Furthermore, chlorine-releasing chemicals were and still are used to purify drinking water and sterilise swimming pools.

Our guide then explained that the troops initially attempted to protect themselves from the gas by covering their noses and mouths with handkerchiefs or other cloths soaked in water or urine, adding that buckets of chemicals were subsequently provided for this purpose. As I have spent all my professional life working in the chemical industry, teaching chemistry, and writing about the subject, I was naturally interested in what chemicals were used, but the guide did not know.

This was to spark my interest in the chemistry of the First World War and, very soon, I was hunting for answers to questions like: What is lyddite? What is guncotton? What is ammonal? What is gunmetal? What is mustard gas? I found that the answers are out there in the public domain, but that they are scattered around in various books, articles, reports, museums and websites. What I could not find was a single volume that brought together all the different aspects of the chemistry of the war. That was surprising because chemistry, as I was soon to realise, was the sine qua non of the war. Indeed, some people called it ‘the chemists’ war’.

I therefore decided to write such a book and, specifically, a book for the general reader and not just for the many chemists who are interested in the First World War. The pages that follow describe the explosives, chemical warfare agents, metals, dyes, and medicines that were used in the war. They also show how chemistry and chemicals not only underpinned the war but also changed the war. Undoubtedly, without the advances in chemistry the war would have been much shorter and the death toll substantially reduced. Finally, the book reveals how much of the chemistry of the war evolved from discoveries and inventions made in the hundred or so years that preceded the war.

The majority of people who contributed information and comments for this book are no longer alive. They are the chemists and their fellow scientists, engineers and industrialists who described the chemistry of the war in reports, articles and books published either during the war or soon after. And they are the nurses and medical officers who cared for the sick and wounded, and also published accounts of their wartime experiences.

In particular, I have relied heavily on the Journal of Industrial and Engineering Chemistry, published monthly by the American Chemical Society until 1922, and the British Medical Journal, a weekly publication. I scoured every issue of both journals published during and immediately after the war for information and comments that I could use in this book – what appears is only the tip of the iceberg.

My visits to the Imperial War Museum in London, the In Flanders Fields Museum in Ypres, and the Somme Trench Museum in Albert have also yielded useful insights. In addition, I had some fascinating discussions about the war with Ken Seddon, a chemistry professor at Queen’s University, Belfast and an expert on chemical warfare in general and phosgene in particular. Geoffrey Rayner-Canham, chemistry professor at Memorial University of Newfoundland, and his co-researcher Marelene Rayner-Canham, provided me with information about British female chemists and the First World War. Paul Gallagher, media relations executive at the Royal Society of Chemistry, alerted me to the story of the chemist who developed dyes for army and navy uniforms but was lost on the Titanic. Similarly, Catherine Duckworth, community history manager at Accrington Community History Library, Lancashire, helped me considerably with the section on Frederick Gatty, who patented and made a fortune out of mineral khaki dye. My thanks to them all.

I am also grateful to Jo de Vries, Paul Baillie-Lane and their colleagues at The History Press for commissioning this book and their work on its production; and to my wife Mary, our son Dominic and our three daughters, Helen, Charlotte and Lizzie, all of whom made valuable suggestions about the nature, content and not least the title of this book.

Michael Freemantle

Basingstoke

March 2012

Michael Freemantle is a science writer. He was Information Officer for IUPAC (the International Union of Pure and Applied Chemistry) from 1985 to 1994. His duties included editing the IUPAC news magazine Chemistry International. From 1994 to 2007 he was European Science Editor/Senior Correspondent for Chemical & Engineering News – the weekly news magazine of the American Chemical Society. He was then appointed Science Writer in Residence at Queen’s University, Belfast and the Queen’s University Ionic Liquid Laboratories for three years until 2010. Freemantle has written numerous news reports and articles on chemistry, the history of chemistry, and related topics. He is the author, co-author, or editor of some ten books on chemistry, including the textbooks Essential Science: Chemistry (Oxford University Press, 1983) and Chemistry in Action (Macmillan, 1987). His previous book, An Introduction to Ionic Liquids, was published by the RSC (Royal Society of Chemistry) in November 2009.

I have been on battlefields; I know what war means. I have been in hospitals – I go to them yet – filled with the wreckage of this war; and when you think this is but a sample, and count up the cost of this war – nine or ten million men in the flower of their youth, the strongest, the most virile out of all the most civilised nations on the earth, dead; when you count the orphanage and widowhood, the withdrawal of that vast energy from the productive forces of civilization; when you think of the waste of only the material side, the amount they spent in money, two hundred thousand million dollars – and if you try to get some idea of what that is, you look in the world almanac and find the total value of the United States – of all the real and personal property in it, all the houses and all the lands and all improvements thereon since they took it from the Indians, telegraphs, jewelry and money, all of it added together amounts to one hundred and eighty-six thousand million dollars – when we think of these things we are filled with amazement. We have paid, not with king’s ransom, but with the price of civilization and we have wasted a heritage greater in value than the aggregate value of the greatest country that ever existed on the face of the earth.

Newton D. Baker, U.S. Secretary of War, in an address on ‘Chemistry in Warfare,’ presented at the 58th meeting of the American Chemical Society, Philadelphia, USA, 3 September 1919. (Ind. Eng. Chem., 1919, 11, p.921)

1

The Chemists’ War

Applied chemistry, to a large extent

Modern war, whether it be for robbing, plundering and subjugating other nations, or for legitimate self-defence, has become primarily dependent upon exact knowledge, good scientific engineering and, to a large extent, applied chemistry.

Few people outside the world of chemistry and industry will be familiar with the Belgian-American industrial chemist who made this remark at a meeting in New York on 10 December 1915. But many people will have heard of the plastic he invented and patented in 1907. The chemist was Leo Baekeland (1863–1944) and the plastic Bakelite.

The modern war to which Baekeland referred was the First World War, although, at the time, the United States had yet to enter the war and it was referred to by many Americans as ‘the European War’. The war, also known as the ‘Great War’ and the ‘War to End All Wars’, was fought between the Allied Powers and the Central Powers from 1914 to 1918. The Allied Powers consisted of Britain, France, Japan, Russia and Serbia, with Italy joining in 1915, Portugal and Romania in 1916, and Greece and the United States in 1917. The Central Powers consisted of Germany, the Austro-Hungarian Empire and Ottoman Turkey, with Bulgaria joining them in 1915.

Baekeland’s comment linking modern war to applied chemistry appeared as the opening paragraph of the published version of an address entitled ‘The Naval Consulting Board of the United States’, which he presented to members of the American Chemical Society and the American Electrochemical Society.1 Baekeland, who was a member of the board, did not mention Bakelite or other early plastics in his address but focused rather on ways that the United States could harness its scientific and technical potential for the development and production of munitions and military equipment needed for defence.

Baekeland’s audience in New York would have regarded his observations that modern war relied on applied chemistry as self-evident. They would have also known that chemistry and chemicals were not only being employed to devastating effect in the First World War, but also that chemistry in one form or another had been applied wittingly or unwittingly to warfare since time immemorial.

Over 2,000 years ago, for example, Roman legionaries in battle wore armour made of iron, a chemical element, and helmets made of bronze, an alloy – a mixture of a metal and one or more other chemical elements. The composition of bronzes varies but is typically 80 per cent copper and 20 per cent tin, and, like iron, these two metallic elements are extracted from ores by smelting. The operation is based on two chemical processes: first, the ores are converted to metal oxides; then the oxygen is removed from the oxides by heating them with what is known as a reducing agent, typically the chemical element carbon in the form of charcoal or coke.
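As a rough illustration of that second step (an idealised overall equation rather than one given in the text; in a real furnace the carbon monoxide formed from the coke does much of the actual reducing), the smelting of iron ore can be summarised as:

$$2\,\mathrm{Fe_2O_3} + 3\,\mathrm{C} \longrightarrow 4\,\mathrm{Fe} + 3\,\mathrm{CO_2}$$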

Gunpowder provides another example of the application of chemistry to warfare. The powder consists of a mixture of charcoal, the chemical element sulfur and one chemical compound – potassium nitrate. Its use in warfare dates back to the introduction of the gun as a weapon in the fourteenth and fifteenth centuries. In fact, gunpowder chemistry also played a role in the birth of modern chemistry as we now know it. The birth is widely attributed to the publication of the first chemistry textbook Traité Élémentaire de Chimie (Elementary Treatise on Chemistry) by French chemist Antoine Laurent Lavoisier (1743–1794) in 1789. From 1776 to 1791, Lavoisier was responsible for gunpowder production and research at the Royal Arsenal in France. Unfortunately for him, he was also a tax collector and served on several aristocratic administrative councils. During France’s ‘Reign of Terror’, he was accused by revolutionists of counter-revolutionary activity, found guilty and executed by guillotine on 8 May 1794.
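As an aside, the explosion of the powder itself is often summarised by a highly idealised equation (the real products are a far more complex mixture of solids and gases):

$$2\,\mathrm{KNO_3} + \mathrm{S} + 3\,\mathrm{C} \longrightarrow \mathrm{K_2S} + \mathrm{N_2} + 3\,\mathrm{CO_2}$$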

In the following century, chemistry as a distinct discipline and profession began to take root. For example, the Chemical Society of London, the French Chemical Society and the American Chemical Society were founded in 1841, 1857 and 1876 respectively, while the Society of Chemical Industry was formed in Britain in 1881 and its German equivalent in 1887. The first ever international meeting of chemists took place at a congress in Karlsruhe, Germany, in 1860 and the participants included the Russian chemists Dmitri Mendeleev (1834–1907), the chief architect of the Periodic Table of Chemical Elements, and Alexander Borodin (1833–1887), a respected chemist who is best remembered as a composer. Another international conference on chemistry took place in Paris in 1889 and the first of a series of International Congresses on Applied Chemistry was held in Brussels in 1894.

The application of chemistry to warfare also developed rapidly in the years between Lavoisier’s 1789 treatise and the outbreak of the First World War. The discovery of new types of powerful explosives, new medicines and drugs to treat wounded soldiers, and new types of metal alloys for weapons and military equipment during this period had a major impact on the war.

One of the discoveries in applied chemistry during the nineteenth century which, perhaps surprisingly, had immense significance for the First World War was the synthesis of mauve, the first synthetic dye, by English chemist William Henry Perkin (1838–1907) in 1856. Other synthetic dyes soon followed; for example, two years after the discovery of mauve, Perkin’s chemistry teacher, the German chemist August Wilhelm von Hofmann (1818–1892) synthesised the dye magenta. These dyes, like their natural counterparts, are organic compounds – they contain the element carbon.

The discoveries of synthetic dyes not only revolutionised fashion and the textiles industry, but they also gave birth to the synthetic dyes industry and the mass production of organic chemicals. Furthermore, they sparked widespread interest in the commercial applications of synthetic organic chemistry. Before then, the organic chemical industry had been largely confined to the manufacture of soap from fats and oils.

Germany was the quickest to recognise the commercial potential of synthetic organic chemistry. In the years leading up to the First World War, the country became the world’s predominant manufacturer and exporter of synthetic dyes and other commercially important synthetic organic compounds, most notably pharmaceutical products. Furthermore, soon after the beginning of the First World War, Germany was able to adapt the chemical plants in its dye-producing factories for the industrial-scale production of trinitrotoluene (TNT) and other powerful explosives based on organic compounds.

The discovery and development of synthetic plastics also occurred in the nineteenth century. In the 1860s, English chemist Alexander Parkes (1813–1890) invented celluloid, the first synthetic plastic. American inventor John Wesley Hyatt (1837–1920) subsequently developed the material for a variety of commercial applications, and it was used to make the photographic roll film introduced by George Eastman (1854–1932) in 1889. By the start of the First World War, photography had become sufficiently advanced not only to record life and death on the frontline, but also to serve training and reconnaissance purposes.

Richard B. Pilcher, Registrar and Secretary of Britain’s Institute of Chemistry, refers to several examples of chemistry’s applications to the war effort in a journal article published in September 1917.2 He notes that professional chemists could provide ‘efficient service in the many requirements of the naval, military, and air forces.’ He explains, for instance, that the service of chemists was essential to control the manufacture of munitions, explosives, metals, leather, rubber, oils, gases, food and drugs. His list does not include, but might well have included, the manufacture of antiseptics, disinfectants, anaesthetics, synthetic dyes and photographic materials. Chemists, he continues, were also needed for the analysis of all these materials as well as for the analysis of water: the detection of poisons in streams, the disposal of sewage and other matters of hygiene.

Pilcher calls the First World War the ‘Chemists’ War’, and the term has since been used by many others for the war, including David J. Rhees, executive director of the Bakken Library and Museum in Minneapolis, USA, who wrote in an article on the war published in the early 1990s:

When we speak of World War I as the Chemists’ War, the image that usually comes to mind is the famous battle near the Belgian town of Ypres where, on 22 April 1915, the German army released a greenish-yellow cloud of chlorine gas on Allied troops.3

In many people’s eyes, the use of chemicals in the First World War has become synonymous with chemical warfare and the use of poison gases against enemy troops. The active and creative role played by chemists in this type of warfare inevitably contributed to the subsequent widespread negative image of chemicals: ‘To call the Great War a “Chemists’ War” was perhaps a matter of pride, but not exactly for praise’, remarks Roy MacLeod, an historian at the University of Sydney, Australia, in an article published in 1993.4

The term has a much broader context, however. The chemistry of the First World War was not confined to poison gases and explosives, but extended to the development and production of numerous other chemical products used by the military either directly or indirectly: ‘Many regard the war as largely a conflict between the men of science of the countries engaged’, observes Pilcher, implying that the side that mastered the chemistry needed for warfare would be successful in the war. Germany had an advantage in this respect in the years leading up to the war, especially in the number of professional chemists who could contribute to the war effort. According to historian Michael Sanderson, in 1906 an estimated 500 chemists worked in the British chemical industry whereas there were 4,500 in the German chemical industry.5 By the start of the war, there was one university-trained chemist for every fifteen workers engaged in the German chemical industry and one for every forty workers in German industry as a whole.6 In Britain, on the other hand, there was one university-trained chemist for every 500 workers employed in the various industries.

In London, education, training, and research in key areas of chemistry were also lacking. Although the subject was taught well, there was little or no emphasis on applied chemistry and industrial chemistry. ‘Most scandalous’, Sanderson comments, was the ‘notoriously casual’ attitude to the use of coal tar. Coal tar, produced as a by-product when coal is converted into coke or coal gas, was an important source of organic chemicals used for the manufacture of dyes, pharmaceuticals, explosives and other products. Before the war, coal tar produced in Britain was exported as a raw material to Germany, and even though the first coal tar dye, mauve, had been synthesised by Perkin, an English chemist, it was German industry that exploited the expertise of its chemists to attain virtually a world monopoly in the manufacture of chemical products derived from coal tar. Following Perkin’s discovery, the organic chemical industry in Britain rapidly declined and did not recover until after the First World War.

The story was similar in other Allied countries such as France. In 1885, for example, French chemist François Eugène Turpin (1848–1927) patented the use of the pressed or fused form of the organic chemical picric acid as a fragmentation charge for artillery shells. The French government subsequently adopted the explosive for its high-explosive shells. The explosive was known in France as melinite whereas in Britain it was called lyddite. Picric acid, or trinitrophenol to give it its full chemical name, was made from phenol, a coal tar chemical, and nitric acid. Yet in 1914, French supplies of phenol for manufacture of the explosive came from foreign countries and particularly from Germany.7 Furthermore, prior to the war, France manufactured relatively few coal tar-based pharmaceuticals and instead relied extensively on imports of pharmaceuticals from Germany.8

There was also one other major issue that not only influenced the duration of the First World War, but also demonstrated the professional expertise of German chemists. That issue was nitrogen, a chemical element that comprises roughly 80 per cent of the air around us.

Germany’s nitrogen problem

At the beginning of the war in August 1914, there was widespread belief that the conflict would be short and almost certainly over within a few months. Germany entered the war with stocks of ammunition for an intensive campaign of just a few months. However, with the onset of trench warfare in September 1914, it soon became apparent that the progress of the war would be slow and Germany’s stocks of ammunition rapidly diminished. The country therefore mobilised its national industries, including the chemical industry, to restock its stores to prepare for a longer campaign.

The German chemical industry had to adapt rapidly. One of its major problems was the manufacture of nitric acid, which was needed to make explosives such as TNT and picric acid. Both of these explosives are nitrogen-containing organic chemicals, with coal tar the source of both the carbon and nitrogen for these chemicals. However, nitrogen was also needed to manufacture fertilisers, and limited supplies of the type of coal suitable for producing coal tar restricted the amount of nitrogen-containing compounds that could be made. Germany therefore used a nitrate mineral as a supplementary source of nitrogen.

Until the outbreak of war, Germany imported the mineral from Chile. It was then converted to nitric acid by reaction with sulfuric acid. However, the British naval blockade cut off nitrate supplies and Germany found itself with insufficient nitrogen to manufacture its fertilisers and explosives. The German chemical industry therefore turned to ‘nitrogen fixation’, a process that converts nitrogen in the air into nitrogen-containing compounds.
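In outline (a sketch of the chemistry just described), the pre-war route from Chilean nitrate, sodium nitrate, to nitric acid was a simple displacement reaction with sulfuric acid:

$$\mathrm{NaNO_3} + \mathrm{H_2SO_4} \longrightarrow \mathrm{HNO_3} + \mathrm{NaHSO_4}$$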

The specific nitrogen fixation process used by the German industry was based on a discovery in 1908 by German chemist Fritz Haber (1868–1934). He showed that ammonia, a compound containing the elements nitrogen and hydrogen, could be synthesised by the reaction of the two elements in their gaseous forms in the presence of iron. The iron functioned as a catalyst, increasing the rate of the reaction in a process known as catalysis. In 1918, Haber won the Nobel Prize in Chemistry for the discovery.
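In modern notation, the equilibrium Haber exploited, with iron acting as the catalyst, is:

$$\mathrm{N_2} + 3\,\mathrm{H_2} \rightleftharpoons 2\,\mathrm{NH_3}$$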

Carl Bosch (1874–1940), an industrial chemist working for the German chemical firm BASF, subsequently designed a reactor that allowed the Haber process to be carried out at high pressures and temperatures. The company started producing ammonia using the Haber-Bosch process, as it became known, in 1913. Bosch also won the Nobel Prize in Chemistry in 1931 for his contribution ‘to the invention and development of chemical high pressure methods’.

At first, the ammonia produced by this process was used to make ammonium sulfate, a soil fertiliser. However, it was well known that ammonia could also be converted to nitric acid using a method developed by another German chemist, Friedrich Wilhelm Ostwald (1853–1932). The process combines ammonia with atmospheric oxygen in the presence of a platinum catalyst to form a nitrogen- and oxygen-containing gas called nitrogen monoxide. The gas is then oxidised with atmospheric oxygen to yield a related gas, nitrogen dioxide. Nitric acid is produced by passing the nitrogen dioxide gas through water. Ostwald patented the process in 1902 and won the Nobel Prize in Chemistry in 1909 in recognition of his work on catalysis and also for other chemistry research he had undertaken.
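Written out as equations (a sketch of the three steps just described), the Ostwald process amounts to:

$$4\,\mathrm{NH_3} + 5\,\mathrm{O_2} \xrightarrow{\ \mathrm{Pt}\ } 4\,\mathrm{NO} + 6\,\mathrm{H_2O}$$

$$2\,\mathrm{NO} + \mathrm{O_2} \longrightarrow 2\,\mathrm{NO_2}$$

$$3\,\mathrm{NO_2} + \mathrm{H_2O} \longrightarrow 2\,\mathrm{HNO_3} + \mathrm{NO}$$

The nitrogen monoxide released in the final step is fed back into the second step, so little of the fixed nitrogen is wasted.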

In his New York address in 1915, Baekeland observed that Germany would have been ‘hopelessly paralysed’ had it not been for the development of chemical processes for the manufacture of nitric acid from air. If these processes, developed by German chemists in the early twentieth century, had not been available to German industry, the war may well have come to a conclusion by the end of 1914 – as had been widely predicted at the beginning of the war.

The importance of electrochemistry

‘Never before in the history of electrochemistry has the vast importance of the various electrochemical products been so forcibly brought to the attention of our government and of our people as the present year of the Great War’, remarked Colin G. Fink, President of the American Electrochemical Society, in September 1917.9 He was speaking at the Third National Exposition of Chemical Industries, which was held in New York just a few months after the United States had declared war on Germany.

Electrochemistry focuses on the electronic aspects of chemistry and on the relationship between electricity and chemistry. It is concerned with the impact of electricity on chemicals and, conversely, with the use of chemicals to generate electricity. This branch of chemistry is an important component of the science and technology of metals and alloys, otherwise known as metallurgy. The armour, artillery, munitions, tanks, aircraft, battleships and, of course, railways of the First World War all relied on expertise in metallurgy and electrochemistry for their manufacture and construction. ‘Take from this country its electrochemical industry with its numerous and diversified manufactures and the martial strength of our country is hopelessly crippled’, Fink said, pointing out, for example, that thousands of rifles and guns were turned out every month with the steels made by the electric arc furnace.

All steels are made of iron and a small percentage of carbon. Mild steel, which contains just 0.2 per cent carbon, is malleable and ductile, and was used in the Great War to make barbed wire and other products. The hardness of carbon steels, as they are called, increases with increasing carbon content. Steels known as alloy steels contain not only iron and small amounts of carbon but also up to 50 per cent of one or more other metallic elements, such as aluminium, chromium, cobalt, molybdenum, nickel, titanium, tungsten and vanadium. The addition of these metals improves the properties of the steels. Tungsten, for example, improves the hardness, toughness and heat resistance of steel.

The development of the electric arc furnace in the 1890s and early twentieth century added a new dimension to steel manufacture. When the electric power of the furnace is switched on, it generates temperatures high enough to melt scrap iron and steel, enabling the scrap to be converted into the high-quality alloy steels needed for the war effort. Numerous alloys produced by the electric furnace were used in nearly every item of the United States government’s vast military equipment for the war.

William S. Culbertson, in another speech at the New York exposition, agreed with Fink on the importance of the electric arc furnace in revolutionising steel manufacture.10 He describes silicon steel, for example, as ‘indispensable’ in the manufacture of munitions, adding that steels containing the metallic element tungsten improved the efficiency of metal cutting tools, while the addition of chromium, nickel, vanadium or molybdenum conferred special properties to steel, ‘making it peculiarly suited to many special uses, including armour plate’.

Electrolysis also played a key role in producing the metals and a range of other chemicals required by the military during the war. In this electrochemical technique, chemical reactions take place when an electric current is passed through an electrolyte contained in an electrolytic cell. The electrolyte is typically a molten salt or a solution of a salt in water.
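A familiar example of such a cell (a sketch for orientation rather than a process Fink singled out) is the electrolysis of brine, which yields the chlorine and hydrogen mentioned below, together with sodium hydroxide:

$$2\,\mathrm{NaCl} + 2\,\mathrm{H_2O} \xrightarrow{\ \text{electrolysis}\ } \mathrm{Cl_2} + \mathrm{H_2} + 2\,\mathrm{NaOH}$$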

After extraction from its ores, pure copper was produced by electrolysis in a process known as electrorefining. In his speech, Fink pointed out that large quantities of ‘electrolytic copper’ were ‘absolutely essential’ for the manufacture of electrical apparatus, as were sufficient quantities of aluminium and magnesium, two metallic elements that were also produced using electrolysis and used for the light, strong stays of aircraft. Similarly, Fink added that liquid chlorine, which was used to synthesise some of the chemicals in preparations for ‘treating the wounds of our heroes’, was a product of electrolysis, as was hydrogen used ‘in all of our scout and observation balloons’. He continued:

May we continue to lead the world in the supply of the many electrochemical products, pure metals and alloys for the arts, gases for cutting and welding, chlorine and peroxides for our hospitals, chlorates and acetone for munitions, nitrates for the farm and defense, abrasives, electrodes, solvents and lubricants! May we continue to excel in the products of the electric furnace and the electrolytic cell!

Fink failed to mention that the chlorine generated in electrolytic cells was used as a poison gas by both sides during the Great War. It was also used to synthesise lethal chlorine-containing gases such as phosgene that were employed to devastating effect against entrenched enemy troops.

In September 1918, F. J. Tone, President of the American Electrochemical Society, speaking in New York at the Fourth National Exposition of Chemical Industries, provided a graphic example of the importance of electrochemistry and metallurgy to the United States’ aircraft programme.11 He noted that the crank cases and pistons of the motors in aircraft were made of aluminium, and that the crank shafts and engine parts, which were subjected to the greatest strains, were all composed of chrome alloy steel: ‘All of these parts are brought to mechanical perfection and made interchangeable by being finished to a fraction of a thousandth of an inch by means of the modern grinding wheel made from electric furnace abrasives.’

Calcium carbide, a chemical compound consisting of calcium and carbon, was also made in the electric arc furnace, and the acetylene made from it made possible an ample supply of aeroplane dope. The dope, a compound known as cellulose acetate, was used to tighten the fabric skins, often linen, that covered aircraft. Tone continued:

When the aviator trains his machine gun on an enemy plane his firing is made effective by tracer bullets of magnesium or phosphorus. When our bombing planes begin to carry the war to Germany it will be with bombs perhaps of ammonium nitrate or picric acid or other high explosives all depending largely in their manufacture on electrochemical reagents.

Not just a chemists’ war

In an article published in 2005, George Bailey, a senior lecturer at the University of Westminster, lists a number of key elements in the transformation of the British Expeditionary Force (BEF) from the colonial-style army of 1914 to the victorious continental-style armies of 1918.12 These key elements consisted of: manpower, volunteers and conscription; high command and the British War Cabinet; infantry weaponry and tactics; artillery; suppression of enemy infantry and artillery; tanks; aircraft; moving supplies; rescuing the wounded; and patriotism and maintenance of discipline.

Several of these key elements, such as artillery, tanks, and aircraft, relied on technology and the productive capacity of British industry. In many ways, the First World War could therefore be considered to be a war of technology or a war of industrial power in which scientists, engineers and factory workers in the opposing nations strove to produce superior military equipment and munitions in ever increasing quantities. At the same time, it was also necessary to sustain agricultural productivity in order to feed the people at home and the troops at the front; to transport troops, food and military equipment to the front and the troops back from the front; and to provide and improve medical supplies. All these activities relied to a greater or lesser extent on science, technology and engineering.

The war was a ‘war of engineers’, said David Lloyd George (1863–1945), who was Britain’s Minister of Munitions from May 1915 to July 1916 and later became the country’s prime minister. In an interview with a correspondent of The New York Times in November 1915, Lloyd George’s Director of Recruiting for Munitions Work, Lord Murray of Elibank (1870–1920), observed that the ablest engineers in Great Britain, working in many of the country’s largest engineering establishments, were driving forward ‘enormously successful engineering work’.13 He remarked that ships, guns, armour, shells, rifles, and bullets were pouring out of factories in an unending stream, adding: ‘The stream is destined to increase steadily every month’.

His forecast was to prove correct: by the beginning of 1917, the production of high explosives in England was sixty-two times what it was in 1915, as H.E. Howe noted in an address delivered at a meeting of the American Chemical Society in Kansas City on 12 April 1917:14

British munitions factories are now making more heavy gun ammunition every 24 hours than they manufactured during the entire first year of the war … The monthly output of heavy guns is more than six times what it was during the year 1915.

Similarly, between May 1915 and May 1916, the output of bombs increased 33-fold and the quantity of machine guns manufactured during the twelve months up to August 1916 was fourteen times greater than in the twelve months up to August 1915.

The increased productivity in munitions factories relied not only on engineers and the efforts of factory workers, but also on inputs from chemists and chemical engineers to develop increasingly efficient chemical processes for manufacturing explosives and other chemical products of military value. Thus, the ‘war of engineers’ was underpinned by applied chemistry.

In his 1993 article, MacLeod refers to the 1914–1918 conflict as a ‘scientific war’.4 He notes, for example, that physicists developed acoustic devices, geographers prepared artillery maps, geologists designed trench systems and tunnels, surgeons applied triage, psychologists discovered shell shock, and bacteriologists and entomologists attempted to halt the spread of infectious diseases. And once again, applied chemistry underpinned much of this ‘scientific war’. For example, the inks used to print the geographer’s artillery maps contained insoluble black, white or coloured chemical materials known as pigments, and chemical preparations were used as disinfectants throughout the war to halt the spread of infection.

War: the mother of invention?

‘War has always been the mother of invention’, observes Oxford University historian A.J.P. Taylor (1906–1990) in the preface of his book The First World War: An Illustrated History, adding that: ‘Historical photographs are among her children’.15 Taylor points out that the Crimean War (1853–1856), in which Russia fought against Britain, France, Piedmont and Turkey, raised photography ‘from its infancy’, and his book is illustrated with numerous black and white photographs that show the battles of the Great War, life and death in the trenches, the aircraft and battleships, and the generals and politicians who played key roles in the conflict.

War, however, did not give birth to photography. The invention of photography dates back to the 1830s when several people began to use cameras and chemicals to produce photographic images. In 1839, for example, Frenchman Louis Daguerre (1787–1851) perfected a process for producing a permanent photographic image, known as the ‘daguerreotype’, on a silver-coated copper plate treated with iodine vapour. The image was then developed using mercury.

In his book on chemicals in war, published in 1937, Augustin M. Prentiss cites three ‘outstanding’ innovative technological developments of the First World War: ‘the military airplane, the combat tank, and chemical warfare’.16 He explains that each of these new instruments of war made its appearance on the battlefield at about the same time and each exerted an important influence in shaping the character of modern combat. The author, a lieutenant colonel in the United States Chemical Warfare Service, might have added other developments such as submarines, torpedoes and floating mines.

These new or infant technologies relied not only on sophisticated engineering but also on a body of scientific and technical knowledge and discovery that pre-dated the First World War. Chlorine, for example, is a chemical element famously used as a poison gas in the early days of the war, but it was not new. The element was discovered by self-taught Swedish chemist Carl Wilhelm Scheele in 1774. Similarly, radiography, which was used widely in hospitals during the war, has roots that can be traced back to November 1895 when the German physicist Wilhelm Konrad Röntgen (1845–1923) discovered X-rays. The Haber-Bosch process for the synthesis of ammonia from atmospheric nitrogen, on which Germany relied for the manufacture of its explosives, was also developed before Germany declared war on Russia at the beginning of August 1914.

The manufacture of guns, armour, explosives, tanks, aircraft, battleships, submarines and torpedoes, as well as photography and radiography, all relied on the use of chemicals and the application of chemistry discovered before the war.

Invention in chemistry during the war, however, was limited. Chemists in academe were diverted from their normal research activities in university chemistry laboratories to war work, either in military service or in improving the production of the chemical materials, such as explosives, needed for war. Chemists in the Chemical Warfare Service in the United States worked to devise new and more effective chemicals to disable and kill enemy troops, while other chemists worked on pharmaceutical and antiseptic preparations used to treat wounded and sick soldiers.

What innovation there was often focused on the development of new types of poison gas, better explosives and more efficient processes for the manufacture of the chemicals needed for the war effort. ‘The necessity of increased production of munitions involving all types of chemical mixtures and compounds requires chemists in large numbers, not only to inspect and analyze the substances we are using, but to develop the necessary new ones’, noted Rear Admiral Ralph Earle, Chief of the Bureau of Ordnance, US Navy, in an address on ‘Chemistry and the Navy’, presented at the 58th meeting of the American Chemical Society in Philadelphia on 3 September 1919.17 The course, duration and outcome of the war depended not only on access to raw materials, such as nitrates, and on the ability of chemists and chemical engineers to convert these into useful military materials and equipment; it also depended on their ingenuity in adapting and developing existing chemical technology for the war effort.

But did the First World War give birth to any major discoveries in chemistry? According to one chronicle of science, three of the most significant events in chemistry during the period 1914–1918 were the synthesis of acetic acid (1914), the discovery that cobalt increases the strength of tungsten steel when the alloy is magnetised (1916), and the award of the Nobel Prize in Chemistry to Haber for his synthesis of ammonia from hydrogen and atmospheric nitrogen (1918).18 The war, however, did not give birth to any of these events; indeed, it would be difficult to claim that the Great War was the mother of any important inventions or discoveries in chemistry. Rather, the war harnessed, nurtured and facilitated improvements to earlier inventions in chemistry and related sciences such as metallurgy.

Impact of war on the chemical industry

Whereas applied chemistry underpinned much of the technology deployed in the First World War, the war conversely had a direct impact on the ability of the chemical and its associated industries to provide for civilian needs. This impact was felt acutely in the United States at the beginning of the war.

Culbertson pointed out in his New York address in 1917 that daily life in the country depended on chemicals: textiles required dyes; chemicals were needed for refining sugar and petroleum and for the manufacture of glass, pottery, paper, paints, varnishes, rubber and cement; and medicinal and pharmaceutical products, toilet preparations, photographic materials, motion picture films, cleaning compounds and baking powder all relied on the chemical industries. The chemical industries also supplied fertilisers to agriculture and materials for the tanning industry.10

At the outbreak of war in August 1914, an editorial in The Journal of Industrial and Engineering Chemistry pronounced that ‘the war in Europe will soon be over’, and saw the war as a chance for the chemical industry in the United States to prosper:19 ‘While the eyes of the world are turned upon the military activities of Europe, business strategists in the United States will not fail to recognize the tempting opportunities for making ourselves more independent of foreign supplies.’

In 1914, the United States already had a strong chemical industry which supplied many of the country’s needs. The country produced more sulfuric acid than any other in the world, and possibly more than all other countries combined, observed William H. Nichols in an address on ‘The War and the Chemical Industry’ that he presented at a meeting of the American Association for the Advancement of Science in Philadelphia in December 1914.20 Sulfuric acid is an important commodity chemical that is used in a wide range of applications, including the manufacture of explosives, dyestuffs, fertilisers and other acids. German organic chemist Justus von Liebig famously remarked in 1843: ‘We may fairly judge the commercial prosperity of a country by the amount of sulfuric acid it consumes.’

In his address, Nichols pointed out that the United States had an abundance of many of the raw materials needed for the manufacture of chemical products, including cheap phosphate rock, salt, copper, coal, wood, bauxite and zinc. Yet many raw materials were also imported into the country, including sulfur in the form of the ore pyrites, potash, tin, nickel and sodium nitrate.

The ‘European war’ raised an important question for chemists and chemical engineers in the United States, as Edward Gudeman commented in an address at the annual meeting of the American Institute of Chemical Engineers, Philadelphia, December 1914: ‘Will and can United States chemists prove equal to the emergency created, which makes it impossible, in many cases, to obtain from abroad, many chemical supplies for which the United States in the past has been and is today and will be in the future, dependent on foreign producers?’21 The supply of synthetic organic chemicals and coal tar dyes which the United States had imported from Germany before the war was a major challenge. When the war broke out, the country was importing about 90 per cent of its coal tar dyes from Germany, and American dye companies manufactured the remaining 10 per cent from chemicals supplied by Germany. The British naval blockade cut off the supply of these German dyes and chemicals, resulting in the so-called ‘dye famine’ in the United States.

Before August 1914, the United States was practically dependent on Germany for colour10 and the onset of war resulted almost in a panic among those who used dyes: ‘Many lines of our industrial life were threatened with utter demoralization because of the shortage of dyestuffs and medicinals resulting from the blockade of German ports by the British navy’, notes an editorial in The Journal of Industrial and Engineering Chemistry.22 ‘Textile mills faced the imminent possibility of shutting down because of the inability to secure dyestuffs for their fabrics. Tanners, lithographers, and wallpaper men sought in vain for needed coloring matter and pharmacists’ stocks of many much-used medicinals became depleted.’

Domestic manufacturers in the United States soon realised that the dye famine opened up a golden window of opportunity. A number of American companies, most of them small, urgently set out to manufacture dyes, pharmaceuticals and other chemical products derived from coal tar. Within three years of being cut off from the German supply, the country had invested huge sums in plants for manufacturing dyes and other organic chemicals:10 ‘We were producing as large a quantity of dyes as were consumed here when the war started’, observed Culbertson. Baekeland also proudly pointed out that it had taken the United States less than three years to become independent of Germany in a line of chemicals, primarily dyes, which Germany had taken over fifty years to develop.23

Similarly, before the war, the United States had no synthetic phenol industry.21 Phenol, a coal tar chemical, was used as a starting material to manufacture a wide variety of products for civilian and military use, including dyes, photographic developers, medicines, flavours, perfumes, and explosive materials such as picric acid. Within three years, however, the country had a thriving synthetic phenol industry, with chemist Grinnell Jones reporting in an article published in 1918 that fifteen chemical plants produced 64,146,499lb of phenol valued at $23,715,805 in 1917, most of which was used in making picric acid.24

The war inevitably increased demand not just for the home-produced chemical products needed for domestic use, but also for the chemicals needed for the war effort. The amount of the explosive TNT manufactured in the United States, for example, rose from 3.4 million pounds in 1913 to a rate of 16 million pounds per month in 1916, while a similar growth was reported in chlorine, potash, coal tar dyes and pharmaceuticals.3

The war thus kick-started important sectors of the chemical and its associated industries in the United States and exports of chemicals from the country rose markedly. W.S. Kies, Vice-President of the National City Bank of New York and American International Corporation, commented in an address on ‘the development of our export trade’ on 26 September 1917: ‘The chemical industry of the United States has shown greater efficiency and greater powers of quick response to business demands than almost any other of the great industries of the country.’25 Kies also pointed out that the value of exports of chemicals in 1917 (the fiscal year ending 30 June 1917) was $185 million, which was practically seven times the $27 million value for 1914. Exports of all industries, as a whole, were only three times as great. However, these export figures did not include explosives, Kies adding: ‘In explosives the value of our exports grew from $6 million in 1914 to $820 million in 1917 … Under this class were listed cartridges, dynamite and gunpowder.’ Exports of other types of explosives grew from $1 million in 1914 to $420 million in 1917, with these values showing that the growth of exports from industries allied to the chemical industries had been ‘quite as striking’ as from the chemical industries themselves. Kies summarised: ‘During the last three years, the chemical industry has received a great impetus. Large amounts of money have been spent on its development. In many lines, before the war, Germany was supreme and competition with her was impossible. Germany, prior to the war, as we all know, had a grip on the chemical markets of the world.’ By the end of the war, however, this was no longer the case as the naval blockade and the war had sparked a major boom in the American chemical and associated industries and its exports.

The final impact

In essence, the chemical and its associated industries use chemical processes, devised by chemists and chemical engineers, to convert raw materials – or intermediate chemicals obtained from these raw materials – into finished chemical products. The raw materials occur naturally in the land, water, air, and plant and animal life. These include carbon-containing raw materials such as coal, wood and petroleum, which eventually yield organic, that is carbon-containing, chemical products like dyes, explosives such as TNT, pharmaceuticals and plastics. Inorganic chemical products, such as steels, brasses and other alloys, fertilisers, nitric acid and other mineral acids, and numerous pigments are obtained from metal-containing ores, phosphorus-containing rocks, chlorine-containing salt, atmospheric nitrogen and other inorganic sources.

The First World War interrupted the supply of many of these raw materials and intermediate chemicals to the chemical and its associated industries in countries such as Germany and the United States. The war also caused the manufacture of chemical products to be redirected from normal civilian uses to military uses; the most obvious example being the diversion of chemical resources from agriculture to the military in Germany.

During the war, chemists were well aware of this problem, with Culbertson remarking in 1917: ‘The factories that produce nitrogeneous fertiliser in time of peace will yield us nitric acid in time of war … Those producing intermediates and dyes can turn their machinery and workers to making explosives.’10 The following year, Jones comments that the fertiliser industry had probably made the greatest sacrifices in this regard: ‘Sodium nitrate and ammonia are required for the manufacture of explosives in such large quantities that the amounts left for use in fertilisers has been and will be much reduced.’24

In Germany, the diversion of raw materials and chemical products from civilian to military use was a significant factor in the country’s defeat in 1918: ‘Blockade, starvation and internal unrest had brought Germany to defeat’ notes MacLeod in his 1993 article.4 The blockade deprived Germany of wheat imports from the United States and beef imports from Argentina to feed the civilian population. It also cut off the supply of Chilean nitrates needed to make nitric acid and therefore forced the country to employ the Haber-Bosch process to convert atmospheric nitrogen to ammonia and the Ostwald process to convert the ammonia to nitric acid for the manufacture of explosives. The switch to explosives manufacture left insufficient capacity for the manufacture of the nitrogen-containing fertilisers needed to bolster food productivity and make up for the shortfall in imports. German agriculture and food production suffered as a consequence, leading to starvation, malnutrition and loss of morale on the home front in Germany.

MacLeod observes that Germany’s science-based industry and its capacity to apply scientific methods was a ‘formidable enemy’ that almost saved the country from defeat in the First World War. Science-based industry could have been the single most decisive factor in an Allied victory had the war continued until 1919, although, MacLeod believes, the ‘struggle might well have gone the other way’.

2

Shell Chemistry

Millions of shells

On 21 February 1916, Crown Prince Wilhelm’s Fifth Army, led by Chief of Staff General Erich von Falkenhayn (1861–1922), launched an attack along an 8-mile front around Verdun. The city lies in Lorraine, a region in north-eastern France that borders Belgium, Luxembourg and Germany. The city was defended by the French Second Army and heavily fortified with a ring of forts.

The German Army stockpiled some 3 million shells for the battle, and during the first hour of the attack the German artillery bombarded the French forces with around 100,000 shells.1 The battle continued until 18 December 1916. By July, according to one estimate, the French and Germans had fired a total of 23 million shells at each other, which works out at an average of around 100 shells per minute, day and night.2 Shelling was even more intense during the Battle of the Somme (1 July–18 November 1916) when the British and German armies fired a total of 30 million shells at an average of almost 150 per minute. The battle resulted in some 419,000 British, 194,000 French and over half a million German casualties. The following year, the British launched a massive artillery bombardment of German positions along the Messines Ridge, south of Ypres, prior to the Battle of Messines which was fought from 7 to 14 June. Over a ten-day period the artillery fired over 3.5 million shells, which equated to over three shells per second.
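For readers who wish to check the arithmetic, the Somme figure follows directly from the dates and total given above: the battle lasted 141 days, so

$$\frac{30{,}000{,}000\ \text{shells}}{141\ \text{days} \times 1{,}440\ \text{minutes/day}} \approx 148\ \text{shells per minute.}$$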

The intense shelling continued until the end of the war. On 21 March 1918, for example, the German Army launched an offensive, codenamed Operation Michael, against the British Third and Fifth armies who were holding a 50-mile section of the frontline in France between Arras and La Fère. On the first day of the battle, known as the 1918 Battle of the Somme or the Second Battle of the Somme, the Germans unleashed an artillery barrage of unprecedented ferocity: over a five-hour period their guns fired some 1 million shells.

Nothing symbolises the Great War more than shells and shelling, and they are a recurrent theme in the literature of the war. For example, in an early chapter of his memoir Storm of Steel, German soldier Ernst Jünger (1895–1998) portrayed life and death in the frontline at Les Eparges in north-eastern France in April 1915. Writing about coming under shell fire for the first time, he describes the ‘savage pounding dance’ of the shelling and the incessant flames from the artillery mingled with ‘black, white and yellow clouds’.3 He also describes how he received a flesh wound from ‘a needle-sharp piece of shrapnel’. In Goodbye to All That, war poet Robert Graves (1895–1985) also recounts his experiences in the trenches during the war.4 In May 1915 at Cuinchy in the Pas-de-Calais region of France, for example, one shell came ‘whish-whishing’ towards him, before he dropped flat in the trench and the shell burst just over the trench.

British troops called the heavy shells fired from German siege howitzers ‘coal-boxes’ because they generated large amounts of black smoke when they burst. And as the shells had such a powerful impact, they were also referred to as ‘Jack Johnsons’ after the African-American boxer Jack Johnson, who was world heavyweight boxing champion from 1908 to 1915.

Shell crises

By the early 1900s, Germany had become an industrial powerhouse on a global scale. The country exported machine tools, dyes, pharmaceuticals, and photographic and other chemical products to Britain, the United States and throughout the world. The Krupp company, based in Essen in the Ruhr area of Germany, was one of the country’s largest and most important technology-based firms. It was an industrial empire which manufactured rails and wheels for railways, armaments and other high-quality steel products, many of which were exported.

David Lloyd George, who became Britain’s Minister of Munitions in May 1915, observed that, at the start of the war, the Germans and Austrians between them had much larger supplies of war materiel and more extensive factories for turning out military supplies than the Allied countries. The chemical companies produced the explosives needed to fill and propel the shells, and the Krupp company manufactured weapons, battleships, barbed wire and the other metal products needed for the war. Lloyd George remarked that Germany was the best organised country in the world and that her organisation had told.

Germany’s substantial industrial capacity and technological capability gave the country a head start in the war. On the Eastern Front, German guns outnumbered Russian guns by almost two to one per division. In East Prussia, the Russian Tenth Army was starved of weapons and ammunition, most notably in the Second Battle of the Masurian Lakes which was fought in atrocious weather conditions in February 1915. The Russians needed some 45,000 shells per day to fend off the attacks by the German Eighth and Tenth armies; however, Russian factories could only supply 20,000 per day and the Germans eventually destroyed the bulk of the Russian Tenth Army. Around 56,000 Russian soldiers died, even more were taken prisoner and some 300 guns were lost.

On the Western Front, the French Army had only 300 heavy guns compared with 3,500 German medium and heavy guns when war broke out in 1914. The story was much the same in Britain: in May, when the Germans were turning out a quarter of a million high-explosive shells daily, Britain was producing only 2,500, plus 13,000 shrapnel shells, as Lloyd George told the House of Commons in December 1915.

The British shell shortage led to the so-called ‘shell scandal’ that erupted on 14 May 1915. It was sparked by Sir John French (1852–1925), the commander-in-chief of the BEF, when he commented to Colonel Charles Repington, The Times war correspondent, that the Battle of Neuve Chapelle had failed because of a lack of shells: ‘The want of an unlimited supply of high explosives was a fatal bar to our success.’ Whereas Germany had huge stocks of shells, the BEF commander-in-chief had to ration the number of shells fired by British heavy guns to eight per day and by field guns to ten per day ‘for ordinary purposes’. Furthermore, the BEF had only ten heavy guns per division whereas the German armies had twenty per division in the first half of 1915.

The battle, which took place from 10 to 13 March 1915, was aimed at breaching the German line at the village of Neuve-Chapelle, then driving on to seize the nearby Aubers Ridge, and finally threatening Lille, some 15 miles away. The British unleashed a forty-minute bombardment and quickly secured the village, although German defences then halted the advance and the British had to abandon their attempt to capture Aubers Ridge. Two months later, on 9 May, the British resumed their attack on Aubers Ridge but once again failed. The artillery had only enough shells for a short forty-minute bombardment. Moreover, 90 per cent of the shells were shrapnel shells, which were ineffective at piercing the heavily fortified German defences. The attack was called off the following day, with British casualties amounting to around 11,500.

The British shell scandal resulted in the fall of the Liberal government led by Prime Minister Herbert Asquith (1852–1928), who then formed a coalition government and appointed David Lloyd George as Minister of Munitions. Lloyd George galvanised the British munitions industry, setting up a system of national munitions factories and purchasing the machinery they needed from the United States. Hundreds of private factories not previously engaged in the manufacture of munitions also co-operated in the scheme, and within a few months the shell shortage had been overcome. A similar transformation occurred in France, where, by the end of 1915, the country was producing virtually all the armaments and ammunition it needed. By 1917 the British Ministry of Munitions was operating some 200 factories, manufacturing not only explosives and shells but also aircraft and other products needed for the war effort.

The factories that assembled shell components and packed the shells with explosives were known as ‘filling factories’, and one of them, National Filling Factory No. 6, at Chilwell in Nottinghamshire, filled its first shell in January 1916. By 1918, it had filled over 19 million shells, as well as some 25,000 sea mines and 2,500 aerial bombs. Tragically, however, in July 1918 a large part of the factory was destroyed when 8 tons of TNT exploded, killing 137 people.

A variety of explosives

A military or naval artillery shell is a projectile, that is, an object projected from a weapon. Unlike other projectiles used in warfare, such as bullets, cannon balls, shot or even arrows, all of which are solid, a shell has a hollow body that carries other materials, most notably explosives.

The explosives carried by shells in the First World War included low explosives, usually gunpowder, and high explosives such as lyddite, and they served several roles. Most often the explosive acted as a fragmentation bursting charge, bursting the shell when activated by a fuse; shells of this kind were known as ‘bursting’ or ‘explosive’ shells. The force of the burst and the hot, sharp metal shell fragments and splinters were intended to kill or maim enemy troops and destroy enemy defences and positions. Low-explosive bursting shells fragmented when the explosive ignited, while high-explosive (HE) shells burst when the explosive detonated.

In shells that carried shrapnel bullets, poison gas or other payloads, a smaller amount of explosive was used to open the shell and expel the payload when the fuse activated. These expelling charges were not intended to fragment the shell but to deliver its payload, and shells of this type were known as ‘carrier’ shells.

Small amounts of explosive, known as bridging explosives, exploders or boosters, were usually incorporated in high-explosive shells to boost the detonation of the high-explosive bursting charge. Some low-explosive bursting shells also contained a small amount of a low explosive with a finer grain than that of the bursting charge. This fine-grained explosive was placed between the fuse and the bursting charge to ensure that the charge ignited.
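To pull the terminology together, the short sketch below, written in Python purely as an illustrative summary, restates the shell and charge types described above. The groupings and wording are a simplification for the reader, not a formal classification used at the time.

```python
# A minimal, illustrative summary (not from the book) of the shell and
# charge types described in this section.

SHELL_TYPES = {
    "bursting (high-explosive) shell": {
        "explosive": "high explosive such as lyddite",
        "charge purpose": "detonates and fragments the shell body",
        "extras": "small booster (exploder) to boost detonation of the bursting charge",
    },
    "bursting (low-explosive) shell": {
        "explosive": "low explosive, usually gunpowder",
        "charge purpose": "ignites and fragments the shell body",
        "extras": "finer-grained charge between fuse and bursting charge to ensure ignition",
    },
    "carrier shell": {
        "explosive": "small expelling charge",
        "charge purpose": "opens the shell and expels the payload",
        "extras": "payload of shrapnel bullets, poison gas or other material",
    },
}

# Print the summary as a simple indented list.
for shell_name, details in SHELL_TYPES.items():
    print(shell_name)
    for key, value in details.items():
        print(f"  {key}: {value}")
```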