
Summary: The Chemistry Book: From Gunpowder to Graphene, 250 Milestones in the History of Chemistry by Derek B. Lowe

Key Takeaways

  • Derek Lowe takes readers on a tour of 250 major milestones that shaped the history of chemistry.
  • Pick a discovery that intrigued you and learn more about the scientists and context behind their breakthrough. Their stories may inspire your own curiosity in science.

The Chemistry Book (2016) takes us on a tour through the history of chemistry from the first Bronze Age advancements to a possible future where clean, renewable energy is an everyday reality. Learn about the events and discoveries that have changed the world.

Introduction: Take a tour through some of chemistry’s most important milestones.

You don’t need a chemistry degree to know that this field is chock full of fascinating characters, unexpected turns of events, and plenty of dramatic discoveries. Sometimes these events can be ironic, like the invention of dynamite, and other times, they can be tragic, like the discovery of radium’s true nature.

While we can’t take you through each of the 250 events in The Chemistry Book, we’ll treat you to some memorable highlights and surprising details. These discoveries and landmark moments span human history’s high and low points – events that we’ll continue to celebrate and events that have served as regrettable warnings for future generations.


In these summaries, you’ll learn

  • the chemical origins of the word “gibberish”;
  • the confounding mysteries behind blue oil paint; and
  • the challenges standing in the way of hydrogen fuel.

Human achievements in chemistry started in the Bronze Age.

Our planet has always been home to amazing chemical processes. Take the two-story-tall crystals that pack caves in Mexico – the Cueva de Los Cristales. These gigantic pillars are a mind-boggling example of what happens when the common mineral gypsum is submerged in water that’s being heated up by magma, and then spends centuries cooling down during an ice age. The caves look like something out of a bizarre sci-fi movie. But they’re real, stunning, larger-than-life examples of chemical reactions that needed no human involvement.

It’s hard to say what the first human chemical discovery was. Was it the first man-made fire? Or the first time someone used a plant to help heal a wound?

The key message here is: Human achievements in chemistry started in the Bronze Age.

Copper was already being used for some basic tools, but around 3300 BCE we found a better, stronger, and more durable material: bronze.

Essentially, bronze is what happens when tin is added to copper. And what made this combination possible was travel and trade. Around 2000 BCE, tin from Cornwall, in southwestern England, began to show up in the Mediterranean. Some of the more daring metalworkers in Mesopotamia began to experiment with the materials they had – lead, nickel, silver, and copper – and eventually, bronze was born.

Over time, the Greeks would add more lead to the mixture to make bronze easier to work with, and then zinc would be added to make brass. Despite the changes throughout history, bronze has always been the metal of choice for bells, and it can still be found in the cymbals on your standard drum kit.

Around 1300 BCE, the Bronze Age transitioned into the Iron Age. But this wasn’t because iron was seen as a superior metal. Bronze is, in fact, harder and far less prone to corrosion. Really, what iron had going for it was availability.

Early iron technology involved heating charcoal and iron ore, producing a lump of crude smelted iron in the bottom of the furnace. Impurities were then, quite literally, hammered out. This has always been a labor-intensive process and one that requires a lot of forced air to keep furnaces burning at high temperatures. To get such conditions, it’s believed that some smelting operations were seasonal, in order to take advantage of recurring monsoon-like conditions.

But despite all this, iron smelting capabilities quickly spread, though it’s possible that locations as far removed as India and sub-Saharan Africa developed the technology independently from each other.

Ancient chemists advanced purification and refinement techniques, often with hopes for gold and eternal life.

We’ll never know who the first chemist was, but we do have a name for the earliest documented chemist: Tapputi. According to a tablet dated at around 1200 BCE, Tapputi was the name of a Babylonian woman who made perfume out of ingredients like myrrh and balsam. She also purified her concoctions by heating them and collecting the vapors. So, we can consider 1200 BCE the first documented reference to a purification process involving distillation and filtration.

Like iron smelting, many cultures knew how to make perfume. But when it comes to other areas and technologies, secrets did abound in the ancient world.

The key message here is: Ancient chemists advanced purification and refinement techniques, often with hopes for gold and eternal life.

Up until around 550 BCE, peoples like the Egyptians were simply using water to wash away debris and collect bits of gold. Then came King Croesus of Lydia and a new technique for refining gold: starting from electrum, a natural gold-silver alloy, the Lydians produced pure gold.

The exact methods of their refinement are still being pieced together by historians and archeologists. Since this process was most likely an undocumented Lydian secret, much is still unknown. But one thing’s for sure: they made the most of what they had. With the use of molten lead and salt, they created coinage. The process may have diluted the gold content, but by stamping the coins with mythological figures, heroes, and animals, they established a value that brought King Croesus a tidy and steady profit.

Let’s fast-forward to the start of the Han Dynasty, around 210 BCE. Here we find early use of mercury, a strange liquid metal that didn’t need any refining. The first to use large amounts of mercury in a significant way was Qin Shi Huang, the legendary “first emperor of China.” You may be familiar with the large underground army of soldiers, molded out of terra-cotta, that the emperor had made for his tomb. Well, that tomb also contained a scale replica of his palaces, complete with a miniature river of flowing mercury.

Ironically enough, Qin Shi Huang may have used medicines containing mercury in a misguided attempt at immortality. We now know that mercury is indeed poisonous, especially in compounds that allow the body to more readily absorb it. Sadly, it would be a long time before this was known, and mercury continued to be a medicinal ingredient for some time.

Some ancient techniques took hundreds of years to understand, while others remain a mystery.

Let’s go from the start of the Han Dynasty to the end, bringing us to around 200 CE. This is when we see the appearance of true porcelain.

Prior to this date, there had been all kinds of impressive ceramics, but nothing quite as beautiful as porcelain. Making it requires a mixture of bone ash, ground glass, quartz, alabaster or feldspar, and kaolin clay, which originally came from the village in southwest China that shares its name. Porcelain also requires just the right amount of water and extremely high levels of heat.

The key message here is: Some ancient techniques took hundreds of years to understand, while others remain a mystery.

The precise ratios and measurements were a tightly held secret, even as production of porcelain ramped up in the centuries that followed. By the 1300s, porcelain finally made its way to Europe, but still, no one outside of China knew how to recreate this immaculate art.

But it wasn’t for lack of trying. At last, in 1708, an imprisoned alchemist in Dresden named Johann Friedrich Böttger, together with the physician, physicist, and philosopher Ehrenfried Walther von Tschirnhaus, cracked the code. The breakthrough came when the duo finally got their hands on imported kaolin clay and alabaster. The achievement won Böttger his freedom, along with a position at the head of a new porcelain factory.

By around 800 CE, a lot of scientific advancement was taking place in Islamic and Chinese cultures. One of the leaders in this field was a man known in the West as “Geber.” His full name was Abu Musa Jabir ibn Hayyan, and he lived in what is now Iraq. “Geber” practiced alchemy as well as numerology, astrology, and medicine.

The holy grail for ibn Hayyan, and many other alchemists who’d follow in his wake, was the philosopher’s stone. He believed that any metal could be broken down and reformed into another metal, if only we had the right elixir to make this happen. This elixir came to be known as the philosopher’s stone, and it would be some time before legitimate scientists would give up the chase.

But a lot of what we know about Geber gets muddied by the fact that his work attracted lots of followers who wrote manuscripts using his name. Much of this writing used symbols and coded language that’s nearly impossible to decipher. In fact, this strange alchemic language is where we got the word “gibberish.”

Good and virtuous intentions can sometimes lead to unintended consequences.

While ibn Hayyan was trying to turn iron into gold, alchemists in China were also attempting to transmute metals. It’s likely that these efforts and the continued quest for life-extending elixirs brought them something else entirely: gunpowder.

The key message here is: Good and virtuous intentions can sometimes lead to unintended consequences.

The first sign of gunpowder comes in a Taoist text dated around 850 CE. By 1044, China’s military had multiple recipes for the explosive product.

One could almost say it was just a matter of time, because two of the main ingredients, sulfur and charcoal, were sure to be found in most alchemists’ labs. The missing ingredient was the oxidizer: potassium nitrate. It could have been added using the mineral niter, otherwise known as saltpeter, or found in caves around deposits of bat guano. Whatever the case, once discovered, its explosive potential would have been immediately apparent.

The nickname for gunpowder eventually became “Chinese snow,” and it would stay a military secret for a long time, until the expansions of the Mongol empire spread this secret to other world empires. By 1326, the first European-made guns would forever change the rules of warfare.

Of course, gunpowder did not turn out to be a life-extending elixir, but by the sixteenth century, some advancements in this direction were being made. In 1538, the field of toxicology got off the ground thanks to the efforts of the Swiss alchemist and philosopher Paracelsus. While most alchemists were still fixated on making gold and silver, Paracelsus said his intentions were “to consider only what virtue and power may lie in medicines.”

For his part, Paracelsus was among the first to recognize the effects outside agents may be having on human health. His study of miners led him to suggest that toxic vapors, and not evil mountain spirits, might be causing their lung problems.

In 1540, the German botanist and physician Valerius Cordus mixed ethyl alcohol with sulfuric acid to come up with diethyl ether. That same year, Paracelsus published a treatise noting that ether fumes caused animals to become unconscious – an effect he predicted would eventually be put to human use.

He was right: in the 1840s, ether became the first surgical anesthetic, as well as the basis for “ether frolics” among surgical students.
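The reaction Cordus carried out is one still taught today: acid-catalyzed condensation of ethanol into diethyl ether. As a textbook sketch (the sulfuric acid acts as catalyst and dehydrating agent rather than as a consumed reagent):

```latex
\begin{equation*}
% Two ethanol molecules condense into diethyl ether plus water,
% with sulfuric acid as the catalyst, under heating
2\,\mathrm{CH_3CH_2OH}
\;\xrightarrow{\;\mathrm{H_2SO_4},\ \Delta\;}\;
\mathrm{CH_3CH_2{-}O{-}CH_2CH_3} + \mathrm{H_2O}
\end{equation*}
```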

In the seventeenth century, world-changing developments preceded a shift to hard science.

Another important milestone in the history of medicinal chemistry arrived in 1631. That year, Jesuits returned to Rome from a journey to the New World, bringing back an incredible new medicine: a compound derived from the bark of South American cinchona trees. The medicine would come to be known as quinine.

The key message here is: In the seventeenth century, world-changing developments preceded a shift to hard science.

Rome was suffering from countless cases of malaria every year, but people had yet to link these cases to the city’s mosquito-infested swamps. Most people blamed “bad air” – which is literally what mal’aria means.

The Quechua people of Bolivia and Peru had been using the tree bark to treat symptoms such as shivering and chills – two of the markers of malaria. And while quinine works as a muscle relaxant, it later proved to be excellent at fighting malaria. It turns out the medicine goes after the malaria parasites directly. What it does exactly remains something of a mystery, but quinine was a game-changer.

Between 1620 and 1630, Spain became aware of quinine’s benefits through Jesuit missionaries. European colonial powers could now venture forth with protection against the deadly elements of the uncharted world.

On the scientific side, quinine became a much-studied compound. For centuries, attempts to synthesize the medicine helped advance the field of organic chemistry. After German chemist Paul Rabe partially synthesized it in 1918, American chemists William von Eggers Doering and Robert Burns Woodward became the first to achieve total synthesis in 1944.

Indeed, beginning in the seventeenth century, some great advancements were made in chemistry. And alchemy finally took a back seat to a growing field of hard science.

In 1661, Robert Boyle published The Sceptical Chymist, which effectively laid the foundation for modern chemistry. Instead of looking at things from the classical viewpoint, which involved the Greek concept of the four elements – air, earth, fire, and water – Boyle advanced the theory of atoms as the foundational component of all elements. He also proposed that the movement and reactions on the atomic level could explain the world around us.

Many of his predictions would end up being spot on. And fortunately for Boyle, the Age of Enlightenment was just getting started, and a new era of science and reason was ready to embrace his ideas.

In the eighteenth and nineteenth centuries, chemical synthesis continued to advance.

If you’ve ever put paint onto a canvas, there’s a good chance you’ve heard of the color Prussian blue. But you may be surprised to find out how colorful the history of Prussian blue is, and the memorable role it played in the history of chemistry.

The key message here is: In the eighteenth and nineteenth centuries, chemical synthesis continued to advance.

Prior to 1700, blue was actually a rare color in European paintings, simply because it was so expensive. The only reliable blue pigment for oil paint came from lapis lazuli stones sourced from Afghanistan. At one time, there was an “Egyptian blue,” but the recipe for this color was lost somewhere in the collapse of the Roman empire. So, if a figure in a painting was decorated with the color blue, you could be sure that the person was of the highest status.

Then, in 1706, German dye maker Johann Jacob Diesbach made a surprising discovery. He was trying to create a new red pigment using cochineal, which comes from crushed-up insects. But since the reagents he was using were contaminated, Diesbach ended up with blue instead of red. Soon, oil paints under the names Prussian blue and Berliner blue were being sold.

But that’s not the end of the story. While the basic recipe for Prussian blue was leaked to the Royal Society of London in 1724, breaking down the chemistry behind the substance was much more difficult. In fact, it was over 250 years later, in the 1970s, that the entire chemical profile of Prussian blue was fully understood.

As with quinine, the effort to understand it was a boon for organic chemistry. Along the way, it yielded hydrogen cyanide – named “prussic acid” after Prussian blue – as well as a drug to treat metal poisoning.

However, it wasn’t long before synthesizing ended up at the heart of a divisive issue among chemists and scientists in general.

In 1828, German chemist Friedrich Wöhler successfully synthesized urea, a relatively simple biomolecule found in urine. What’s controversial about that? Well, Wöhler made something that had previously only been made by living creatures, and he did it using entirely nonorganic materials like mercury cyanate.
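However the starting cyanate was prepared (the summary mentions mercury cyanate; textbook accounts usually start from ammonium cyanate), the key transformation is strikingly simple: an inorganic salt rearranging its atoms, on heating, into the organic molecule urea.

```latex
\begin{equation*}
% Ammonium cyanate isomerizes to urea when heated:
% the same atoms (CH4N2O), rearranged into an "organic" molecule
\mathrm{NH_4^{+}\,OCN^{-}}
\;\xrightarrow{\;\Delta\;}\;
\mathrm{CO(NH_2)_2}
\end{equation*}
```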

This sparked a debate around the subject of vitalism, and it’s one that may never be fully put to rest. Those who embrace vitalism believe that there’s something unique and special to living things – an essence or spirit that non-living things must lack. But here was Wöhler’s urea synthesis, challenging this very idea.

In many cases, landmark events required multiple discoveries.

Does the name Christian Friedrich Schönbein ring a bell? Unless you’re already well versed in the history of chemistry, you’re probably unfamiliar with this German chemist. Yet he played a role in some important and still highly relevant discoveries.

The key message here is: In many cases, landmark events required multiple discoveries.

One day in 1832, Schönbein was cleaning up a spill at the lab and used his cotton apron to mop up a mixture of nitric and sulfuric acid. No harm done, right? But then Schönbein put the apron by the fireplace to dry. It wasn’t long before an explosive flash set his apron ablaze.

While he may have lost an apron, Schönbein went on to discover nitrocellulose, which is what gets created when cotton is treated with nitric acid. This soon became better known as guncotton. Of course, people were immediately interested in a possible alternative to gunpowder, but guncotton proved to be unpredictable and dangerous.

Later, in 1847, the Italian chemist Ascanio Sobrero decided to see what happened when, rather than cotton, he nitrated glycerine, a syrupy sugar alcohol. What he created was, of course, nitroglycerine, and it was so dangerously explosive that Sobrero tried to keep the results under wraps for some time.

However, one chemist close to Sobrero, Alfred Nobel, did hear about nitroglycerine, and he became determined to stabilize it. It turned out that all that was needed was for the nitroglycerine to be absorbed into a porous material – Nobel used a type of earth called kieselguhr. The result was dynamite.

Now, let’s look at another one of Schönbein’s contributions to science. In 1840, he discovered ozone.

While conducting lab experiments that involved running an electrical current through water, Schönbein noticed an odd smell. He took the odor as a sign of a new substance and called it ozone, after the Greek word ozein, meaning “to smell.”

You may have smelled what Schönbein did all those years ago if you’ve ever been near a lightning storm and noticed a certain “fresh air” smell. Lightning, like the electric current Schönbein was experimenting with, also produces ozone. Ozone is a gas, and it’s actually a toxic one. However, its presence in the upper atmosphere is important and helpful: there, ozone absorbs ultraviolet light, protecting life on the planet from its dangers.

History is full of troublesome substances.

Certain substances are always going to be dangerous. Sometimes, there are ways to replace these substances with something better, but other times, the benefits outweigh the risks.

The key message here is: History is full of troublesome substances.

In the case of mirrors, the dangerous material wasn’t even doing a particularly good job. Early mirrors were made through a process that involved layering glass with tin foil that had been exposed to liquid mercury. So not only were these mirrors corrosive and potentially poisonous, they also produced a mediocre reflection.

Fortunately, in 1856, German chemist Justus von Liebig came up with a new and improved mirror. Liebig mixed a silver/amine complex with a sugar solution and applied this mixture to a glass surface. The silver ions would then oxidize the sugar into a soluble acid, and in the process the silver itself was reduced, depositing an extremely reflective layer of elemental silver on the glass.
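This is the chemistry behind the classic “silver mirror” demonstration, now known as the Tollens reaction. As a textbook sketch, with R standing for the rest of the sugar molecule attached to its aldehyde group:

```latex
\begin{equation*}
% The aldehyde group is oxidized to a carboxylate; the silver-ammonia
% complex is reduced to metallic silver, which coats the glass
\mathrm{RCHO} + 2\,[\mathrm{Ag(NH_3)_2}]^{+} + 3\,\mathrm{OH^{-}}
\;\rightarrow\;
\mathrm{RCOO^{-}} + 2\,\mathrm{Ag}\!\downarrow + 4\,\mathrm{NH_3} + 2\,\mathrm{H_2O}
\end{equation*}
```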

There is, however, one dangerous catch. If the silver/amine solution isn’t used right away, it can undergo further reactions that will turn it into silver nitride. This substance is so unstable that it can explode for seemingly no reason at all!

Then there’s diazomethane. This is a chemical compound that has a long list of incentives to explode – including sunlight, heat, and sharp edges. As a result, it requires immense care and special, polished glassware when being used. Oh, and did we mention that it’s highly poisonous as well?

The thing is though, diazomethane can work wonders. It’s one of chemistry’s great reagents. This means that by adding diazomethane, chemists can get a whole range of reactions to take place with the greatest of ease. Yes, it’s temperamental and toxic, but it’s also one of the most helpful tools in a chemist’s toolbox.

But if we’re talking about toxic substances, we have to mention one that is practically synonymous with poison: cyanide. If you’re wearing a gold ring, necklace, or watch, there’s a good chance that cyanide was used to extract and purify that gold.

Back in 1887, a Scottish chemist and two doctors from Glasgow invented the MacArthur-Forrest process, which involves using a cyanide solution to dissolve gold from bits of ore. Some places have banned the process due to the inherent danger of using large amounts of cyanide-infused water. But given how cheap the process is, and how high the demand for gold remains, it’s still used today.

Some discoveries came at a heavy cost.

Among the biggest developments in the early twentieth century was our understanding of radioactive substances.

One of the first clues came in 1896 when the French physicist Antoine-Henri Becquerel found that uranium salts could cause photographic plates to become exposed. Becquerel knew that uranium compounds were emitting some sort of radiation.

The key message here is: Some discoveries came at a heavy cost.

Becquerel’s findings piqued the interest of Marie and Pierre Curie, who ran a laboratory specializing in the research of crystals and magnetism. Marie began a rigorous search for other substances that emitted similar radiation. This led her to the element thorium and the mineral pitchblende. Through pitchblende, she isolated two new radioactive substances, polonium, named after Marie’s home country of Poland, and radium.

This work took years of effort and a whole lot of pitchblende. It wasn’t until 1902 that the work culminated in Marie’s dissertation – research that would earn her two Nobel Prizes. But what the Curies didn’t realize was that, every day, they were being poisoned by the radiation. In fact, the couple’s lab books remain dangerously radioactive to this day. They’re stored in lead-lined boxes and require protective clothing to handle.

Still, it would be some time before the danger was really understood. In 1913, a piece of the puzzle was discovered by two British physicists, Ernest Rutherford and Frederick Soddy. They found that radium was actually the result of decaying uranium atoms. This meant that one element could have multiple forms, which Soddy dubbed isotopes, after the Greek words iso and topos, meaning “equal” and “place.”

But rather than seeming dangerous, radioactive elements at first showed signs of healing benefits, especially when it came to stopping the spread of cancerous cells and skin diseases. When this came to the public’s attention, some entrepreneurs ran with the idea, selling radioactive toothpastes and skin creams. One such product was a tonic called “Radithor,” which boasted that every bottle contained a dose of radium.

Sadly, this claim was true. One victim of Radithor was the Pittsburgh steel company owner, Eben Byers, who claimed to drink as many as three bottles of Radithor every day and served as a spokesman for the tonic. In 1932, he lost his life to bone cancer and had to be buried in a coffin lined with lead. As for the silver lining of the story, his death led to a federal crackdown on products like Radithor, and new laws requiring testing and approval before such products could enter the market.

It took decades before the effects of leaded gasoline were revealed.

As the story of Radithor shows, sometimes profits and chemistry lead to disaster. Another unflattering chapter in the history of chemistry belongs to tetraethyl lead.

Developed in 1921 by General Motors head Charles Kettering and chemist Thomas Midgley Jr., tetraethyl lead was added to automotive gasoline to allow the fuel to burn more evenly. And while it did just that, it also caused harmful amounts of lead to be released through exhaust fumes.

The key message here is: It took decades before the effects of leaded gasoline were revealed.

Despite many deaths occurring during the manufacturing of ethyl gasoline, as it was called on the market, Midgley swore at a press conference that tetraethyl lead was safe. He even held some under his nose for dramatic effect. Unbeknownst to anyone at the time, Midgley had already been trying to recover from lead poisoning.

The full picture of lead contamination wouldn’t become clear until 1965, when the work of American geochemist Clair Cameron Patterson was published. Patterson didn’t set out to uncover lead contamination. He was actually studying the decay of uranium and lead isotopes as a way of establishing dating techniques. In 1956, he estimated the Earth’s age at around four and a half billion years – a calculation that has withstood scientific scrutiny.

In the course of this work, Patterson had taken samples from around the world and analyzed their lead levels. He published his findings in a book called Contaminated and Natural Lead Environments of Man. It revealed that the introduction of tetraethyl lead to cars and planes around the world had quickly become the number one contributor to lead contamination on the planet. And this wasn’t just affecting the atmosphere – the lead was poisoning water and the food chain as well.

The dramatic increase was too much for some scientists to believe, but Patterson’s data checked out, and many countries began to ban lead from gasoline, paint, water pipes, and other products.

But that’s not the end of the story for the chemist Thomas Midgley Jr. In 1974, only a year after the Environmental Protection Agency began phasing out ethyl gas, we became aware of another planet-harming invention tied to Midgley and Charles Kettering: Freon. We’ll look at this story in the next chapter.

The twentieth century featured some disastrous chemical developments.

Once upon a time, owning a refrigerator was a dangerous proposition. In the 1920s, refrigerators worked by expanding and compressing gases. For some units, that gas was propane. For others, it was ammonia or sulfur dioxide. All of these gases involved some sort of risk, be they toxic to breathe or highly flammable. And since many of the gases were naturally corrosive, a leaking refrigerator was an all too common occurrence.

The key message here is: The twentieth century featured some disastrous chemical developments.

Freon, the trade name for dichlorodifluoromethane, was developed in 1930 to solve this problem. It was both non-flammable and non-corrosive. It was soon being used for a number of other products, like hair spray and asthma inhalers.

But over 40 years later, it was discovered that chlorofluorocarbons (CFCs) like Freon were increasing the levels of chlorine free radicals in the atmosphere. As it turns out, CFCs are also reactive to UV light – and this reaction leads to a harmful cycle.

UV light causes the CFCs to break down and release chlorine free radicals. Each radical destroys an ozone molecule and is then regenerated, free to destroy another – a catalytic cycle. That’s why a small amount of CFCs leads to big problems: a single chlorine radical can result in tens of thousands of destroyed ozone molecules.
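The ozone-destroying cycle can be sketched in two steps – note that the chlorine radical consumed in the first step is returned by the second, so it is free to start over:

```latex
\begin{align*}
% Step 1: a chlorine radical destroys an ozone molecule
\mathrm{Cl^{\bullet}} + \mathrm{O_3} &\;\rightarrow\; \mathrm{ClO^{\bullet}} + \mathrm{O_2} \\
% Step 2: the ClO radical reacts with a free oxygen atom,
% regenerating the chlorine radical
\mathrm{ClO^{\bullet}} + \mathrm{O} &\;\rightarrow\; \mathrm{Cl^{\bullet}} + \mathrm{O_2} \\
% Net effect: ozone is converted to ordinary oxygen
\mathrm{O_3} + \mathrm{O} &\;\rightarrow\; 2\,\mathrm{O_2}
\end{align*}
```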

This became apparent in 1974 when American chemist Frank Sherwood Rowland and Mexican chemist Mario Jose Molina released their study on CFCs and ozone. Sure enough, the layer of ozone protecting the planet was being seriously depleted, and laws were soon being passed to ban the use of CFCs like Freon.

By the late 1900s, it was becoming increasingly apparent that certain chemicals came with disastrous consequences if not handled with the utmost care. One incident in the 1980s made this painfully clear.

In 1984, the worst chemical disaster of its time took place in Bhopal, India. It happened at a Union Carbide plant that was manufacturing a pesticide ingredient known as MIC, or methyl isocyanate. This chemical can cause eye irritation at just two parts per million in the air; above twenty parts per million, it can damage your lungs.

On the night of December 2, 1984, thirty metric tons of MIC leaked from the plant and covered 25 square miles of the surrounding area. The cause of the leak is still a matter of debate, but thousands of people died, and many of Bhopal’s half million residents suffered long-term eye and lung trauma.

Modern research techniques have led to the development of life-saving drugs.

There’ve been experimental ancient Egyptians, ambitious alchemists in China, and studious European chemists during the Age of Enlightenment. Throughout history, one thing has remained true: we have an urge to derive healing and life-extending medicines from the remarkable variety of molecules on the planet.

The key message here is: Modern research techniques have led to the development of life-saving drugs.

In 1988, a Nobel Prize was given to three scientists whose work reflects our modern efforts in the unending quest for new medicines. Two of those scientists were American colleagues Gertrude Belle Elion and George Herbert Hitchings. Their innovative research techniques helped develop effective drugs to fight malaria, cancer, bacterial infections, and HIV/AIDS.

What Elion and Hitchings pioneered was the making of purine derivatives. This is a class of compounds that help form biomolecules such as DNA. Having a purine framework gives researchers a great first step in making new drugs, and who knows where we’d be now without their work.

The third honoree of the 1988 Nobel Prize was the Scottish physician and pharmacologist Sir James Whyte Black. He’s responsible for two of the world’s best-selling drugs – or at least the compounds used in them: cimetidine, which treats ulcers, and propranolol, which treats heart disease.

Another significant step forward in drug development came in 2010, when the combined forces of the drug company Merck and the bio-engineering company Codexis achieved the long-standing goal of engineering enzymes.

Think back to that volatile but handy reagent diazomethane. Having the right enzyme during a synthesis process can make all the difference. It can make reactions run smoothly, cleanly, and quickly, which is what every chemist desires.

Merck’s primary goal was to improve the synthesis of one of its diabetes medications, sitagliptin. So they brought in Codexis, and soon they were running a series of computer model variations, searching for the right enzyme to do the job. After running over 36,000 variations, they finally found what they were looking for: an engineered enzyme that ended up having 27 of its amino acids altered.

This was a huge landmark for the creation of synthetic drugs, and it could dramatically change not only medicinal chemistry but the field of chemistry altogether. While enzyme engineering is still slow and costly right now, it’s likely just a matter of time before improved techniques and processing power make it more effective and commonplace.

In the final chapter, we’ll take a look at a couple of other developments on the horizon that could have game-changing consequences.

Future milestones may involve innovations to reduce carbon dioxide emissions.

So what does the future of chemistry hold? What landmark events can we look forward to in the years to come?

The key message here is: Future milestones may involve innovations to reduce carbon dioxide emissions.

Lots of interested parties are now pursuing a clean energy source. Many of our current fuels contribute to the greenhouse effect, a phenomenon first discovered in 1896 by Swedish chemist Svante August Arrhenius. Essentially, a buildup of carbon dioxide and water vapor in the atmosphere prevents heat from escaping and thereby wreaks havoc on our climate.

Enter hydrogen. While many of today’s convenient power sources create carbon dioxide emissions, burning hydrogen only releases water vapor, which makes it a desirable and renewable fuel. In fact, people have been looking to hydrogen as the fuel of the future since at least the 1970s. There are, however, a few complications.

Strictly speaking, hydrogen isn’t an energy source but an energy carrier: electricity is used to split water into hydrogen, which stores the energy for later use. The real problem to overcome is storage. Hydrogen molecules are so small that they can even diffuse into metal structures, making the gas difficult to contain. However, the author believes that this hurdle could be cleared by around 2025.
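The carrier idea can be summed up in two textbook reactions — standard stoichiometry, not equations from the book itself:

```latex
% Electrolysis stores electrical energy in hydrogen;
% combustion releases it again, producing only water.
2\,\mathrm{H_2O} \;\xrightarrow{\text{electricity}}\; 2\,\mathrm{H_2} + \mathrm{O_2}
  \qquad \text{(electrolysis: energy in)}

2\,\mathrm{H_2} + \mathrm{O_2} \;\longrightarrow\; 2\,\mathrm{H_2O}
  \qquad \text{(combustion: energy out)}
```

The cycle only stores and releases the electricity put in, which is why hydrogen is clean at the point of use but is no cleaner than the electricity that made it.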

Along the same lines is the development of artificial photosynthesis, which could arrive in around 2030.

Critical information about photosynthesis was revealed in 1947 when biologist Samuel Goodnow Wildman discovered Rubisco. This plant enzyme later proved to be part of the Calvin cycle, which is a key part of the photosynthesis process that involves turning carbon dioxide into glucose.
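The net reaction the Calvin cycle contributes to is the familiar textbook equation for photosynthesis — again, standard chemistry rather than a formula from the book:

```latex
% Overall (net) photosynthesis: carbon dioxide and water become glucose
% and oxygen; Rubisco catalyzes the CO2-fixation step of the Calvin cycle.
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O}
  \;\xrightarrow{\text{light}}\;
  \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
```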

It’s a drastic understatement to say that photosynthesis is extremely important. It produces the oxygen that keeps us alive and regulates carbon dioxide, thereby helping make Earth habitable. And then there’s the fact that without plants, our food chain would completely fall apart.

But there’s one odd thing about the Rubisco enzyme: it’s really slow. It catalyzes only about three reactions per second, and no one’s really sure why. So the question is, how can we improve the process? After all, the benefits could be world-saving, since a faster enzyme could capture and remove more carbon dioxide from the atmosphere.
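The payoff of a faster enzyme is simple arithmetic: total fixation rate scales directly with per-enzyme turnover. The ~3 reactions/second figure comes from the text; the enzyme-pool size and the hypothetical faster rate below are illustrative assumptions only.

```python
# Back-of-the-envelope arithmetic on Rubisco's slowness. Only the
# ~3 reactions/second turnover is from the text; the pool size and the
# "faster" rate are hypothetical numbers chosen for illustration.

def co2_fixed_per_second(turnover_rate, n_enzymes):
    """Total CO2 molecules fixed per second by a pool of identical enzymes."""
    return turnover_rate * n_enzymes

RUBISCO_RATE = 3     # reactions per enzyme per second (from the text)
FASTER_RATE = 30     # hypothetical engineered enzyme, 10x faster
ENZYME_POOL = 1e18   # hypothetical number of enzyme molecules

natural = co2_fixed_per_second(RUBISCO_RATE, ENZYME_POOL)
engineered = co2_fixed_per_second(FASTER_RATE, ENZYME_POOL)
print(engineered / natural)  # → 10.0: speedup scales one-to-one with turnover
```

A tenfold faster enzyme fixes ten times the carbon with the same number of molecules — which is why even modest improvements to Rubisco could matter at atmospheric scale.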

There are other potential benefits to artificial photosynthesis, including the possibility of getting hydrogen out of water without using electricity. All of this could dramatically reduce one of the main causes of the growing climate crisis. Time will tell if today’s chemists will be able to improve upon nature to save the day.

Final Summary

The key message in these summaries:

The history of chemistry is full of fascinating stories and remarkable individuals who’ve brought us closer to understanding all the complex chemical reactions that are happening around us. Many of the discoveries are all the more amazing since they happened by accident. Other events have a tragic undercurrent since they’ve taught us the dangers of certain chemical substances. Many lives have been lost, but chemistry is also behind many of the life-saving and life-prolonging advancements that have been made over the years. Chemistry may still surprise us in the years to come if scientists can find a way to create clean fuels and remove harmful carbon dioxide from the atmosphere.

About the author

Since 1989, chemist Derek B. Lowe has worked for several major pharmaceutical companies on drug discovery projects against schizophrenia, Alzheimer’s, diabetes, cancer, and other diseases. After receiving his PhD in chemistry from Duke University, he was awarded a Humboldt fellowship to do postdoctoral research in Germany and has since handled almost 50 elements of the periodic table. His columns on organic and medicinal chemistry are featured in the Royal Society of Chemistry journal Chemistry World, and he has served on the advisory board for Chemical & Engineering News. Having written daily for the popular “In the Pipeline” blog for over 12 years, Lowe is also a science blog pioneer with 6,000-plus Twitter followers. He lives in Massachusetts.


Technology and the Future, Popular Science, General Chemistry, History and Philosophy of Science


In this book, Lowe takes the reader on a journey through major discoveries and developments in chemistry throughout history. Each chapter profiles an important milestone, from early alchemists discovering properties of elements to modern achievements like the sequencing of the human genome. Lowe provides excellent context around the significance of each discovery and the broader impacts they had. The book strikes a great balance of explaining chemical concepts for lay readers while still honoring the technical details.

Lowe writes in an approachable and often humorous style. He brings the innovative thinkers behind each milestone to life. Though chemistry may seem like an intimidating subject, Lowe’s passion for the field comes through and makes fascinating historical tales consistently engaging. Alongside profiles of luminaries are sidebars on related technological applications and ethical implications that show chemistry’s wide-reaching influence. The book serves as both an enjoyable overview of chemistry’s rich past and a thoughtful perspective on its future potential.

Alex Lim is a certified book reviewer and editor with over 10 years of experience in the publishing industry. He has reviewed hundreds of books for reputable magazines and websites, such as The New York Times, The Guardian, and Goodreads. Alex has a master’s degree in comparative literature from Harvard University and a PhD in literary criticism from Oxford University. He is also the author of several acclaimed books on literary theory and analysis, such as The Art of Reading and How to Write a Book Review. Alex lives in London, England with his wife and two children. You can contact him at [email protected].
