Tag Archives: Chemistry

Astronomers find the farthest evidence of fluoride to date, in a distant galaxy

An international team of astronomers reports on a new sighting of fluorine in another galaxy. This is the farthest the element has ever been detected and will help us better understand the stellar processes that lead to its creation.

This artist’s impression shows NGP–190387. Image credits ESO.

Fluorine is the lightest chemical element in the halogen group, which it shares with other gases such as chlorine. It’s a very reactive element, and in our bodies, it helps give our bones and teeth mechanical strength as fluoride.

New research is helping us understand how this element is formed inside stellar bodies. The study also marks the farthest this element has ever been detected from our galaxy.

From stars to pearly whites

“We all know about fluorine because the toothpaste we use every day contains it in the form of fluoride,” says Maximilien Franco from the University of Hertfordshire in the UK, who led the new study.

“We have shown that Wolf–Rayet stars, which are among the most massive stars known and can explode violently as they reach the end of their lives, help us, in a way, to maintain good dental health!” he adds, jokingly.

The findings were made possible by observations with the Atacama Large Millimeter/submillimeter Array (ALMA), in which the European Southern Observatory (ESO) is a partner, and pertain to a galaxy that’s 12 billion light-years away. The team identified fluorine, in the form of hydrogen fluoride, in large clouds of gas in the galaxy NGP-190387.

Due to the distance between Earth and NGP-190387, we still see it as it was at only 1.4 billion years old, around one-tenth of the estimated age of the Universe.

Like most of the chemical elements known to us, fluorine forms inside active stars. However, until now, we didn’t know the details of this process, or which stars produce the majority of the fluorine in the Universe.

This discovery helps us better understand how fluorine forms, because stars expel chemical elements from their cores near or at the end of their lives. And since we see this galaxy at such a young age, we can infer that the stars which formed its clouds of hydrogen fluoride must have appeared and died quickly, in the grand scheme of things.

Wolf-Rayet stars, very large stellar bodies that only live for a few million years, are the main candidate the team is considering. They fit the criterion of having short lives, and their size would account for the huge quantities of hydrogen fluoride spotted in NGP-190387. Plus, it fits with previous theories — Wolf-Rayet stars have been suggested as an important source of fluorine in the past, but we didn’t have enough data to confirm this, nor did we know how important they were for the process.

Although other processes have been suggested as likely sources of cosmic fluorine, the team believes that they couldn’t account for the time frame involved, nor for the sheer quantity of the element in NGP-190387.

“For this galaxy, it took just tens or hundreds of millions of years to have fluorine levels comparable to those found in stars in the Milky Way, which is 13.5 billion years old. This was a totally unexpected result,” says Chiaki Kobayashi, a professor at the University of Hertfordshire and co-author of the paper. “Our measurement adds a completely new constraint on the origin of fluorine, which has been studied for two decades.”

This is also the first time fluorine has been identified in such a far-away, star-forming galaxy. And since the distances involved in studying the Universe mean that the farther out you look, the further back in time you see, it’s also the youngest star-forming galaxy in which we’ve ever detected the element.

The paper “The ramp-up of interstellar medium enrichment at z > 4” has been published in the journal Nature Astronomy.

What are the strong chemical bonds?

Everything around you is made of chemicals. And that’s only possible because those chemicals interact and bind together. Exactly how and why they do this depends on their nature but, in general, there are two kinds of interactions that keep them close: “primary” (or ‘strong’) and “secondary” (or weak) interactions.

Image credits Thor Deichmann.

These further break down into more subcategories, meaning there’s quite a lot of ground to cover. Today, we’ll be looking at the strong ones, which are formed through the sharing or transfer of electrons, or through electrostatic attraction between atoms.

As we go forward, keep in mind that atoms interact in order to reduce their energy levels. That’s what they get out of bonding to other chemicals, and they will do so until they find a bond-mate which will bring perfect balance to their lives; kinda like people do.

An atom’s stable configuration, the state all atoms tend towards, is known as its noble gas configuration. Noble gases make up the rightmost column of the periodic table, and they’re extremely or completely non-reactive chemically (they don’t need to interact because they’re already at internal equilibrium).

Strong bonds are the most resilient ties atoms or molecules can forge with their peers. The secret to their strength comes from the fact that primary interactions are based on an atom’s valence. The valence number signifies how many electrons zipping around an atom’s core can be ‘shared’ with others. The overwhelming majority of a substance’s chemical behavior is a direct product of these electrons.

Covalent bonds

The first type of strong interactions we’ll look at, and the most common one, is the covalent bond. The name, “co-valence” sums up the process pretty well: two atoms share some or all of their valence electrons, which helps both get closer to equilibrium. This type of bond is represented with a line between two atoms. They can be single (one line), double (two lines), or triple (three lines).

Covalent bonds are especially important in organic chemistry. Image via Wikimedia.

In essence, what happens inside a covalent bond is that you have an atom starved of electrons and one with electrons to spare — relative, in both cases, to a stable outer shell. Neither of them wants to keep going on like that, because the imbalance in their outermost electron shells makes them unstable. When put close to each other, they will start behaving like a single ‘macroatom’ — their valence electrons will start orbiting around both.

These shared orbits are what physically keeps the atoms together. The atom with too many electrons only ‘has’ them for half the time, and the one with too few gets to have enough half the time. It’s not ideal, but it’s good enough and it requires no changes to the structure of the atom (which is just dandy if you ask nature).

Things get a bit more complicated in reality. Electrons don’t zip around willy-nilly, but need to follow certain laws. These laws dictate what shape their orbits will take (forming ‘orbitals’), how many layers of orbitals there will be and how many electrons each can carry, what distance these orbitals will be from the nucleus, and so on. In general, because of their layered structure, only the top-most orbitals are involved in bonding (and as such, they’re the only ones giving elements their chemical properties). Keep in mind that orbitals can and do overlap, so exactly what ‘top-most’ means here is relative to the atom we’re discussing.

A 3D rendering of electron orbitals. Image via Pixabay.

But to keep it short, covalent bonding involves atoms pooling together their free electrons and having them orbit around both, using each other’s weakness to make the pair stronger.

Covalent bonds are especially prevalent in organic chemistry, as covalent bonding is the preferred way carbon links to other elements. The products they form can exist in a gas, liquid, or solid state, whereas the following two types almost always produce solid substances.

Ionic bonds

Next are ionic bonds. Where covalent bonds involve two or more atoms sharing electrons, ionic bonds are more similar to donations. This type of chemical link is mediated by an electrostatic charge between atoms (negatively charged particles attract positively-charged ones). The link is formed by one or more electrons going from the donor to the receiver in a redox (oxidation-reduction) reaction; during this type of reaction, the atoms’ properties are changed, unlike in covalent bonds. Ionic bonds generally involve a metal and a nonmetal atom.

Table salt crystals. Salts are formed from ionic bonds. Image via Wikimedia.

Table salt is a great example of a compound formed with ionic bonds. Salt is a combination of sodium and chlorine. The sodium atom will cede one of its electrons to the chlorine, which will make them hold different electrical charges; due to this charge, the atoms are then strongly drawn together.

It again ties into equilibrium. Due to the laws governing electron orbitals, there are certain configurations that are stable, and many others that are not. At the same time, atoms want to achieve electrostatic neutrality, as well. In an ionic bond, an atom will take an increase in its electrostatic energy (it will give or take negative charge) to lower its overall internal imbalance (by reaching a stable electron configuration) because that’s what lowers its energy the most.

Covalent bonds, for the most part, take place between atoms of similar electronegativity, and there’s no outright transfer of electrons, because that would increase the overall energy of the system.

Ionic bonds are most common in inorganic chemistry, as they tend to form between atoms with very different electronegativities, and (perhaps most importantly) ionic compounds are typically soluble in water. Ionic compounds such as salts also have a very important part to play in biology.

The main difference between ionic and covalent bonds is how the atoms involved act after they link up. In a covalent bond, they are specifically tied to their reaction mates. In an ionic bond, each atom is surrounded by swarms of atoms of opposite charge, but not linked to one of them in particular. Atoms with a positive charge are known as cations, while those with a negative charge are anions.

Another thing to note about ionic bonds is that they break if enough heat is applied — in molten salts, the ions are free to move away from each other. They also quickly break down in water, as the ions are more strongly attracted to these molecules than each other (this is why salt dissolves in water).

Metallic bonds

Microstructure of VT22 steel (titanium wrought alloy) after quenching. Image via Wikimedia.

If the name didn’t give it away, this type of chemical bond is the hallmark of metal and metallic alloys. It’s not the only type of bond that they can form, even between pure metals, but it’s almost always seen in metals.

Chemically speaking, metals are electron donors — they need to shed electrons to reach equilibrium. Because of the nature of these atoms, their electrons can move around between atoms, forming ‘clouds’ of electrons. These detached electrons are referred to as being ‘delocalized’.

This type of bond shares properties of both ionic and covalent bonds. In essence, every metal atom needs to give away electrons to be stable (thus behaving like a cation). But because it’s surrounded by other metal atoms (meaning other cations), there’s nobody who wants to accept that electrical charge. So the electrons get pooled together and everyone gets to have them some of the time (thus forming a covalent bond). You can think of it as an ionic bond where the atomic nuclei form the cations and the electrons themselves the anions. Another way to look at it, although this is more of an abstraction used to illustrate a point, is that all the atoms involved in a metallic bond share an orbital.

Keep in mind that this ‘sea of electrons’ theory is a model of the process — it’s oversimplified and not a perfect representation of what’s actually going on, but it’s good enough to give you a general idea of how metallic bonds work.

Because metallic bonds share properties of both ionic and covalent bonds, they create crystalline structures (like salts) while still remaining malleable and ductile (unlike most other crystals). Most of the physical properties we seek in metals are a direct product of this structure. The cloud of delocalized electrons acts as a binder, holding the atoms together. It also acts as a cushion, preventing mechanical shock from fracturing the structure. When blacksmiths hammer iron or steel, they rearrange the atomic cores. Electrons can still move around them, like water around the rocks in a stream, and help hold everything together during the process.

Metallic bonds have the lowest bond energy of the types we’ve seen today — in other words, individual metallic bonds are the weakest of the three.


Chemistry often gets a bad rep for being that boring subject with math and mixing of liquids. So it’s easy to forget that it literally holds the world together. The objects around us are a product of the way their atoms and molecules interact. Our knives can cut the food on our plates because billions of atoms inside that knife hold onto each other for dear life, and those in food don’t. Diamonds cut through solid stone because carbon atoms can bind to other carbon atoms in structures that are stronger than almost anything else we’ve ever seen. Our cells and tissues are held together by the same interactions. We’re alive because water molecules are shaped in such a way as to make them universal solvents.

We’re still very much working with models here — our understanding of the ties that bind is still imperfect. But even these models can help us appreciate the immense complexity hidden in the most mundane objects around us.

Nobel Prize in Chemistry awarded to trio that created today’s lithium-ion batteries

The Royal Swedish Academy of Sciences has decided to jointly award the Nobel Prize in Chemistry 2019 to John B. Goodenough, M. Stanley Whittingham (USA), and Akira Yoshino (Japan) “for the development of lithium-ion batteries“.

Image credits Nobelprize.org

This year’s Nobel Prize for Chemistry recognizes the importance of the lithium-ion battery in today’s world. Such batteries are lightweight, rechargeable, and powerful enough for a wide range of applications. From mobile phones to laptops and electric cars, the lithium-ion battery keeps our world in motion. They’re also one of the cornerstones of fossil-fuel-free economies, as they’re able to store energy from renewable sources for long stretches at a time (they can withstand many recharge-discharge cycles).

No breaking down

The advantage of lithium-ion batteries is that they are not based upon chemical reactions that break down the electrodes, but upon lithium ions flowing back and forth between the anode and cathode.

Lithium-ion batteries have revolutionized our lives since they first entered the market in 1991. They have laid the foundation of a wireless, fossil-fuel-free society, and are of the greatest benefit to humankind.

The lithium-ion battery can trace its origin back to the oil crisis of the 1970s, the commission explained. Against this backdrop, a researcher named Stanley Whittingham was working to develop energy technologies that would not depend on the use of fossil fuels. His work with superconductors paved the way for the development of an innovative cathode for lithium batteries. This cathode was built from titanium disulfide which, at a molecular level, has spaces that can fit – intercalate – lithium ions. Today, the concept is known as electrode intercalation.

The anode in a Li-ion battery (the negatively-charged electrode) is made of metallic lithium, which is a strong electron donor. Coupled with the new cathode, such a battery could produce just over two volts — which is a lot. However, this battery was also very unstable, as metallic lithium is highly reactive — and it posed a real risk of explosion.

John Goodenough predicted that replacing the titanium disulfide in the cathode with a metal oxide would boost the battery’s voltage to even greater heights — a hypothesis he proved in 1980 using cobalt oxide. His battery produced up to four volts, paving the way towards much more powerful batteries.

Akira Yoshino built on Goodenough’s findings to produce the first commercially viable lithium-ion battery in 1985. He replaced the metallic lithium in its anode with petroleum coke, a carbon-based material that could intercalate lithium ions. The result was a lightweight, robust battery that could withstand hundreds of cycles without any drop in performance. The secret to its success is that it isn’t based on chemical reactions that break down the electrodes, but on the physical flow of lithium ions between the anode and cathode.

Lithium-ion batteries have revolutionized our lives since they became commercially-available in 1991. Whenever you poke at your phone, hit the power button on your laptop, or start your Tesla, know that the work of these three laureates made it possible.

Fossil fuel extraction and use in the Arctic is changing the local chemistry

The Arctic is seeing a buildup of pollutants linked to fossil fuels.

Image via Pixabay.

A team of researchers from Penn State, the University of Michigan, Purdue University and the University of Alaska Fairbanks found nitrogen dioxide at a remote site in Alaska. They explain that nitrogen dioxide is a pollutant produced by the extraction and burning of fossil fuel, and it forms chlorine compounds as it degrades, affecting the natural chemistry of the area.

“We know that the Arctic is changing rapidly, but we need more observations to understand how our economic decisions, related to development and shipping, impact the natural system through links that may not immediately be obvious,” said Kerri Pratt, assistant professor of chemistry at the University of Michigan, and lead author on the study.

Nitrogen dioxide naturally breaks down in the atmosphere. One of the compounds it leaves behind, dinitrogen pentoxide (N2O5), reacts with chloride-containing aerosols — produced by reactions within the snowpack or by sea spray. This leads to a buildup of nitryl chloride (ClNO2) gas in the atmosphere, the substance that the team tracked.
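For the chemically inclined, the pathway boils down to two steps (standard atmospheric chemistry, summarized here for context rather than quoted from the paper):

N2O5 + Cl− → ClNO2 + NO3−

ClNO2 + sunlight → Cl + NO2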

They report observing elevated levels of several chlorine precursors in the study. Overall concentrations at the site were highest when air masses arrived from the direction of Utqiaġvik, Alaska (formerly Barrow), about a mile and a half away. The same was observed when air masses came in from the North Slope of Alaska (Prudhoe Bay) oil fields, about 200 miles to the southeast.

Nitryl chloride builds up and spreads at night; during the day, direct sunlight converts the gas into highly reactive chlorine atoms that bind with other compounds in the air. Atmospheric chlorine compounds are naturally-occurring, but it is unclear how the increased production of such gases will affect the local environment and ecosystems.

Image credits Stephen M. McNamara et al., Environ. Sci. Technol, 2019.

“This study shows that if we continue burning fossil fuels in the Arctic and producing these gases, it will further impact this beautiful balance we have had there for ages,” said Jose D. Fuentes, professor of atmospheric science at Penn State and paper co-author.

“And that could accelerate the environmental changes we are seeing in the Arctic.”

More work is needed to understand the potential effects in the region, the team explains. So far, however, the results show that increased human activity, especially any involving the use or extraction of fossil fuels, has an effect on the overall chemistry of the Arctic. Those effects would continue to grow “throughout the Arctic” if shipping and extraction activities increase, Fuentes concludes.

The paper “Springtime Nitrogen Oxide-Influenced Chlorine Chemistry in the Coastal Arctic” has been published in the journal Environmental Science and Technology.

We can now film chemical reactions on an atomic level as they unfold

Researchers manage to film a chemical process unfolding on the atomic scale for the first time in history.

The paper, lead-authored by Junfei Xing at The University of Tokyo, Department of Chemistry, shows that there are distinct stages in the process of chemical synthesis. Their work could help guide new strategies and methods for chemical synthesis with greater control and precision than ever before. Prime applications are in materials science and drug development, according to the authors.

Smile for the camera

“Since 2007, physicists have realized a dream over 200 years old—the ability to see an individual atom,” said Project Professor Eiichi Nakamura, the paper’s corresponding author.

“But it didn’t end there. Our research group has reached beyond this dream to create videos of molecules to see chemical reactions in unprecedented detail.”

Nakamura’s team specializes in the field of material synthesis, with an emphasis on the control of the processes that are used in this field. However, they’ve always been hampered by the lack of any tool to observe these processes as they unfold.

The different stages of complex chemical reactions are difficult to study as they involve multiple intermediate steps, making them very hard to model. In theory, we could just look at these steps unfolding. In practice, however, it was impossible to isolate the products at each stage and to see how these changed over time.

“Conventional analytical methods such as spectroscopy and crystallography give us useful information about the outcomes of processes, but only hints about what takes place during them,” explained Koji Harano, project associate professor in the Nakamura group and co-author of the study.

“For example, we are interested in metal-organic framework (MOF) crystals. Most studies look at the growth of these but miss the early stage of nucleation, as it is difficult to observe.”

Nakamura and the team spent over 10 years working on a solution — and finally developed one they call molecular electron microscopy. This meant overcoming the engineering challenge of combining a very powerful electron microscope with a fast and sensitive imaging sensor (used to record video), while at the same time finding a way to pick up and hold molecules of interest in front of the lens.

For the latter, the team employed a specially-designed carbon nanotube which was held in place at the focal point of the electron microscope. This would snag up passing molecules and hold them in place, but not interfere with them chemically. The reaction could thus unfold on the tip of the nanotube, where the team could record it. Harano admits that “what surprised us very much in the beginning was that our plan actually worked.”

“It was a complex challenge, but we first visualized these molecular videos in 2013,” he adds. “Between then and now, we worked to turn the concept into a useful tool.”

“Our first success was to visualize and describe a cube-shaped molecule, which is a crucial intermediate form that occurs during MOF synthesis. It took a year to convince our reviewers what we found is real.”

The team says their work is the first step towards gaining control over chemical synthesis in a precise and controlled manner — a term they call “rational synthesis.” If we know what goes on along every step of a chemical reaction, we can better control the outcome.

In time, the team hopes their work will lead to things like synthetic minerals for construction, or even new drugs.

The paper “Atomistic structures and dynamics of prenucleation clusters in MOF-2 and MOF-5 syntheses” has been published in the journal Nature.

What is oxidation?

Rust, patina, fire, rancid food — they all have oxidation in common. So let’s take a look at exactly what that is.

Rust.

Image via Pexels.

Life as we know it today couldn’t exist without oxygen. So, we’re lucky that there’s so much of it around. But this reliance on oxygen has been, at times, called a ‘deal with the devil’. The same property that makes the gas vital to most Earth-borne life — its unquenchable thirst for electrons — slowly kills the very life it supports.

Today, I thought we’d take a deeper look into this life-giving-life-taking dynamic by asking:

What is oxidation?

Oxidation is the process in which one atom strips electrons from another, claiming them for its own. It is one side of redox-type reactions. These reduction-oxidation reactions stand apart from other types of chemical interactions because they involve changes to multiple atoms’ electron envelopes. Reduction is the opposite process, in which an atom gains the electrons another cedes.

The term draws its name from oxygen because it was the first known oxidative element. In fact, for quite a good stretch of time in the 18th century, ‘oxidation’ referred solely to the addition of oxygen to a compound. A good example of this traditional definition for oxidation can (annoyingly) display itself on the body of our cars: rust (iron oxide).

Since then, we’ve learned that oxidation isn’t limited to either iron or oxygen. Most elements can be oxidized, given proper coaxing, in a variety of environments. Many can be made to oxidize their peers. Some flake and break apart when oxidized, others tend to become more resistant to further oxidation. The process comes in many forms and involves many players. As such, we’ve expanded the definition of oxidation to include any and all reactions in which an element sheds electrons and increases its oxidation state.

Putting the ox in redox

Oxidation and reduction.

Image via Texample.net.

Oxidation and reduction always, always, occur together.

For purely theoretical approaches, half-reactions can be used to explain half of a redox reaction — be it the oxidation or the reduction component. These are pretty helpful in simplifying the whole process, to make it easier to teach or understand. But keep in mind the first line: in real life, oxidation and reduction always come together.

Quite simply, an electron won’t want to leave its hosting atom. It won’t go into the wild willy-nilly. There’s nothing to satisfy its electrical imbalance there. But having a more inviting host nearby to move on to can draw it out. Oxidation, then, cannot occur unless there’s an electron-thirsty atom around. On the other hand, without an electron donor, there’s no transfer. Reduction, then, can’t occur if there’s nobody to strip electrons from.

Think of it as a marketplace. You need buyers to have sellers and vice-versa; one simply can’t happen without the other.

Ok, so why do we call it ‘reduction’? Again, it’s history at work. We weren’t able to properly understand chemistry for quite a long time, but we were able to observe and measure some of its effects. ‘Reduction’ is actually a metallurgic term. Smelters (or blacksmiths, I guess?) could see that refining a one-pound piece of ore would net less than a pound of metal. They didn’t know why, but they could see the drop in quantity, so they referred to it as ‘reducing the ore to its base metal’.

Spoiler alert: that lost mass is oxygen (or hydrogen and oxygen) being chemically ripped apart from metallic oxides/hydroxides in furnaces. But the name stuck. Somewhat confusingly, in my view, as an atom gains electrons when it’s reduced. It loses electrons during oxidation.

A useful trick to help you remember this is the OIL RIG — Oxidation is Loss, Reduction is Gain.

Let’s see it in action


Banded Iron formation showing layers of iron ore from the Karijini National Park, Western Australia. As you can see, it’s very oxidized.
Image credits Graeme Churchard / Wikimedia.

Imagine we’re working at a steel mill, and we get a shipment of iron ore (Fe2O3) and coal (C). When we toss them into the furnace, this happens:

2Fe2O3 + 3C → 4Fe + 3CO2

The iron starts out with an oxidation state of +3 (each atom is donating 3 electrons) and its oxygen starts out with an oxidation state of -2 (each atom is accepting 2 electrons). The carbon in the coal has a neutral electric charge (the oxidation state is 0 for all pure elements). Oxygen, however, likes binding to carbon much more than it likes binding to iron. It will give iron back its electrons and go bind with carbon, taking its electrons instead. This changes iron’s oxidation state from +3 to 0 — since it’s now a pure element, there’s nobody to donate to — and carbon’s from 0 to +4 (as it binds to two oxygen atoms, each taking up 2 electrons).

Oxygen likes binding to carbon more than to iron because carbon holds on to shared electrons more strongly — it is more electronegative than iron, so it pulls on oxygen harder. Carbon is the reducing agent here, while the iron oxide is the oxidizing agent.
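If you like your bookkeeping explicit, here’s a minimal sketch in Python (purely illustrative — the helper function and hard-coded formulas are ours, not part of any chemistry library) that checks the smelting reaction balances both atoms and electrons:

```python
# Sanity-check 2 Fe2O3 + 3 C -> 4 Fe + 3 CO2: atoms and electrons must balance.
from collections import Counter

def count_atoms(side):
    """Tally atoms on one side of a reaction, given (coefficient, formula) pairs."""
    total = Counter()
    for coeff, formula in side:
        for element, n in formula.items():
            total[element] += coeff * n
    return total

reactants = [(2, {"Fe": 2, "O": 3}), (3, {"C": 1})]
products = [(4, {"Fe": 1}), (3, {"C": 1, "O": 2})]
assert count_atoms(reactants) == count_atoms(products)  # same atoms on both sides

# Electron bookkeeping: each Fe goes +3 -> 0, each C goes 0 -> +4.
electrons_gained_by_iron = 4 * 3   # four Fe atoms, 3 electrons each
electrons_lost_by_carbon = 3 * 4   # three C atoms, 4 electrons each
assert electrons_gained_by_iron == electrons_lost_by_carbon
print("Balanced:", dict(count_atoms(reactants)))
```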

Caution to the wise

Another definition of oxidation, one that you may encounter especially in organic chemistry, is the loss of hydrogen. Again, somewhat confusing, but it does make sense. Let’s look at the oxidation of ethanol (the thing we use to get drunk) into ethanal (acetaldehyde) to make this simpler.

CH3CH2OH + [O] → CH3CHO + H2O

Hydrogen is the simplest atom — it’s one proton orbited by an electron. It usually cedes said electron when linking to other chemical species via covalent bonds. To oversimplify things, hydrogen usually helps reduce an element’s need for electrons when tying chemically to it.

In the above example, the added oxygen strips two hydrogen atoms from the ethanol to form water; overall, then, the ethanol loses hydrogen (which is oxidation) as it transforms into ethanal. Alternatively, you can see the loss of hydrogen as a loss of the electrons it shared with the rest of the molecule (which, again, is oxidation).

Oxidation and you

Examples of oxidation abound. Iron rusts, alcohol sours into vinegar, the carbon in firewood gets oxidized as it burns. It keeps your car running by enabling combustion. It makes bronze statues that stately shade of green.

It’s also inside you. Your cells oxidize nutrients to produce energy, water, and CO2. So it keeps your internal combustion going, too. Free radicals in your body damage cells by oxidizing atoms in your molecules (antioxidants help prevent this type of chemical damage). Some oxidizers also see use as disinfectants.

Oxidative processes are the butt of jokes for many a disgruntled student. They cause extensive, expensive damage to our infrastructure, our property, our bodies. Oxidation is likely one of the main drivers of aging, as the same gas which keeps us going slowly rusts our bodies from the inside out.

Oxidation is a simple process, but it takes many forms in various settings — too varied to treat in a single article, much less in one you’d stay awake through. But it directly underpins life as we know it, and likely death as we know it, too. So we shouldn’t take it lightly.


An AI recreated the periodic table from scratch — in a couple of hours

A new artificial intelligence (AI) program developed at Stanford recreated the periodic table from scratch — and it only needed a couple of hours to do so.


If you’ve ever wondered how machines learn, this is it — in picture form. (A) shows atom vectors of 34 main-group elements and their hierarchical clustering based on distance. The color in each cell stands for value of the vector on that dimension.
Image credits Zhou et al., 2018, PNAS.

Running under the alluring name of Atom2Vec, the software learned to distinguish between different atoms starting from a database of chemical compounds. After it learned the basics, the researchers left Atom2Vec to its own devices. Using methods and processes related to those in the field of natural language processing — chief among them, the idea that the nature of a word can be understood by looking at the words around it — the AI successfully clustered the elements by their chemical properties.

It only took Atom2Vec a couple of hours to perform the feat; roughly speaking, it re-created the periodic table of elements, one of the greatest achievements in chemistry. It took us hairless apes nearly a century of trial-and-error to do the same.

I’m you, but better

The Periodic Table of elements was initially conceived by Dmitri Mendeleev in the mid-19th century, well before many of the elements we know today had been discovered, and certainly before there was even an inkling of quantum mechanics and relativity lurking beyond the boundaries of classical physics. Mendeleev recognized that certain elements fell into groups with similar chemical features, and this established a periodic pattern (hence the name) to the elements as they went from lightweight elements like hydrogen and helium, to progressively heavier ones. In fact, Mendeleev could predict the very specific properties and features of, as yet, undiscovered elements due to blank spaces in his unfinished table. Many of these predictions turned out to be correct when the elements filling the blank spots were finally discovered.

“We wanted to know whether an AI can be smart enough to discover the periodic table on its own, and our team showed that it can,” said study leader Shou-Cheng Zhang, the J. G. Jackson and C. J. Wood Professor of Physics at Stanford’s School of Humanities and Sciences.

Zhang’s team designed Atom2Vec starting from an AI platform (Word2Vec) that Google built to parse natural language. The software converts individual words into vectors (numerical codes). It then analyzes these vectors to estimate the probability of a particular word appearing in a text based on the presence of other words.

The word “king” for example is often accompanied by “queen”, and the words “man” and “woman” often appear together. Word2Vec works with these co-appearances and learns that, mathematically, “king = a queen minus a woman plus a man,” Zhang explains. Working along the same lines, the team fed Atom2Vec all known chemical compounds (such as NaCl, KCl, and so on) in lieu of text samples.
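To make the analogy concrete, here’s a toy sketch in Python — the three-dimensional vectors are made up for illustration, whereas real Word2Vec embeddings have hundreds of dimensions and are learned from text:

```python
import numpy as np

# Hand-picked toy vectors (hypothetical); the dimensions loosely encode
# "royalty", "maleness", and "femaleness".
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def cosine(a, b):
    """Similarity of two vectors, independent of their lengths."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# "king = a queen minus a woman plus a man": the combined vector should land
# closest to the embedding for "king".
target = vectors["queen"] - vectors["woman"] + vectors["man"]
print(max(vectors, key=lambda word: cosine(vectors[word], target)))  # king
```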

It worked surprisingly well. Even from this relatively tiny sample size, the program figured out that potassium (K) and sodium (Na) must be chemically-similar, as both bind to chlorine (Cl). Through a similar process, Atom2Vec established chemical relationships between all the species in the periodic table. It was so successful and fast in performing the task that Zhang hopes that in the future, researchers will use Atom2Vec to discover and design new materials.

Future plans

“For this project, the AI program was unsupervised, but you could imagine giving it a goal and directing it to find, for example, a material that is highly efficient at converting sunlight to energy,” he said.

As impressive as the achievement is, Zhang says it’s only the first step. The endgame is more ambitious — Zhang hopes to design a replacement for the Turing test, the golden standard for gauging machine intelligence. To pass the Turing test, a machine must be capable of responding to written questions in such a way that users won’t suspect they’re chatting with a machine; in other words, a machine will be considered as intelligent as a human if it seems human to us.

However, Zhang thinks the test is flawed, as it is too subjective.

“Humans are the product of evolution and our minds are cluttered with all sorts of irrationalities. For an AI to pass the Turing test, it would need to reproduce all of our human irrationalities,” he says. “That’s very difficult to do, and not a particularly good use of programmers’ time.”

He hopes to take the human factor out of the equation by having machine intelligence try to discover new laws of nature. Nobody’s born educated, however, not even machines, so Zhang is first checking to see whether AIs can reproduce some of the most important discoveries we’ve made, without help. By recreating the periodic table, Atom2Vec has achieved this goal.

The team is now working on the second version of the AI. This one will focus on cracking a frustratingly-complex problem in medical research: it will try to design antibodies to attack the antigens of cancer cells. Such a breakthrough would offer us a new and very powerful weapon against cancer. Currently, we treat the disease with immunotherapy, which relies on antibodies already produced by the body — and our bodies can produce over 10 million unique antibodies, Zhang says, by mixing and matching between some 50 separate genes.

“If we can map these building block genes onto a mathematical vector, then we can organize all antibodies into something similar to a periodic table,” Zhang says.

“Then, if you discover that one antibody is effective against an antigen but is toxic, you can look within the same family for another antibody that is just as effective but less toxic.”

The paper “Atom2Vec: Learning atoms for materials discovery,” has been published in the journal PNAS.


What are isotopes

Atoms are the building blocks of matter. The screen you’re reading this on, the brain you’re reading with, they’re all very organized groups of atoms. They interact in specific ways, obeying specific rules, to maintain the shape and function of objects.

None of it works, however, unless the right atoms are involved. If you try to put the wrong ones into a protein or water molecule, it breaks apart. It’s like trying to cobble together a picture using pixels of the wrong colors.

Atomium ball.

Image in Public Domain.

Given how rigorous chemistry is on this, it’s surprising to see how much variety these ‘right’ atoms can get away with. Each element on the periodic table encompasses whole families of atoms that behave the same despite some important differences — isotopes.

What are isotopes?

Isotopes are families of atoms that have the same number of protons, but different numbers of neutrons. The term is drawn from the ancient Greek words isos and topos, meaning ‘equal place’, to signify that they occupy the same place on the periodic table — they belong to the same element.

Atoms are made of a dense core (nucleus) orbited by a swarm of electrons. The protons and neutrons that form the core represent virtually all of an atom’s mass and are largely identical except for their electrical charges — protons carry a positive charge, while neutrons don’t have any charge. The (negatively charged) electron envelope around the core dictates how atoms behave chemically.

The kicker here is that since neutrons carry no charge, they don’t need an electron nearby to balance them out. This renders their presence meaningless in most chemical processes.

To get a bit more technical, the number of protons within an atom’s nucleus is its ‘atomic number’ (aka the ‘proton number‘, usually notated ‘Z‘). Since protons are positively charged, each atom worth its salt will try to keep the same number of electrons in orbit to balance out its overall electric charge. If not, they’ll try to find other charge-impaired atoms and form ionic compounds, like literal salt, or covalent bonds — but that’s another story for another time.


Electron shells are made of several layers/orbitals. Although depicted round here, that’s only for simplicity’s sake. These orbitals can form very complicated shapes.
Image via Pixabay.

What’s important right now is to keep in mind that these atomic numbers identify individual elements. The atomic number is roughly equivalent to an element’s numeric place in the periodic table, and in broad lines dictates how an element tends to behave. All isotopes of an element have the same atomic number. What they differ in is their ‘mass number‘ (usually abbreviated ‘A‘), which denotes the total number of protons and neutrons in an atom’s core.

In other words, isotopes are atoms of the same element — but some just weigh more.

For example, two isotopes of Uranium, U-235 and U-238, have the same atomic number (92), but mass numbers of 235 and 238, respectively. You can have two isotopes of the same mass, like C-14 and N-14, that aren’t the same element at all, with atomic numbers 6 and 7, respectively. To find out how many neutrons an isotope harbors, subtract its atomic number from its mass number.
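As a quick worked example (plain Python, nothing more than the subtraction described above):

```python
# Neutron count N = mass number (A) - atomic number (Z).
isotopes = {"U-235": (235, 92), "U-238": (238, 92), "C-14": (14, 6), "N-14": (14, 7)}
for name, (A, Z) in isotopes.items():
    print(f"{name}: Z = {Z}, {A - Z} neutrons")
# U-235 and U-238 share Z = 92 (same element, 143 vs 146 neutrons), while
# C-14 and N-14 share A = 14 but are different elements (8 vs 7 neutrons).
```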

Do isotopes actually do anything?

For the most part, no. Generally speaking, there’s little to no difference in how various isotopes of the same element behave. This is partly a function of how we decide what each element ‘is’: roughly three-quarters of naturally-occurring elements are a mixture of isotopes. The average mass of a bunch of these isotopes put together is how we determine those elements’ standard atomic weights.

But, chiefly, it comes down to the point we’ve made previously: without differences in their electron shell, isotopes simply lack the means to change their chemical behavior. Which is just peachy for us. Taken together, the 81 stable elements known to us can boast some 275 stable isotopes. There are over 800 more radioactive (unstable) isotopes out there — some natural, and some we’ve created in the lab. Imagine the headache it would cause if they all behaved in a different way. Carbon itself has two stable isotopes — would we even exist today if each had its own quirks?

One element whose isotopes do differ meaningfully, however, is the runt of the periodic table: hydrogen. This exception is based on the atom’s particular nature. Hydrogen is the simplest chemical element, one proton orbited by one electron. Therefore, one extra neutron in the core can significantly alter the atom’s properties.


Hydrogen’s isotopes are important enough for industrial and scientific applications that they received their own names.
Image credits BruceBlaus / Wikimedia.

For example, two of hydrogen’s natural isotopes, H-2 and H-3, have 1 and 2 neutrons respectively. Carbon (Z=6) has 2 stable isotopes: C-12 and C-13, with 6 and 7 neutrons respectively. In relative terms, there isn’t a huge difference in the neutrons’ share of their cores: they represent 50% and 66.6% of the atoms’ mass in H-2 and H-3, and 50% and 54-ish% of the total mass in C-12 and C-13. In absolute terms, though, the difference is immense: one neutron will double the mass of a hydrogen atom — two neutrons will triple it. For comparison, a single extra neutron adds only around 8% to a carbon atom’s mass.
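You can verify those percentages yourself — a rough sketch that treats protons and neutrons as equal in mass and ignores electrons and binding energy:

```python
# Neutron share of an atom's mass, approximated as (A - Z) / A.
for name, A, Z in [("H-2", 2, 1), ("H-3", 3, 1), ("C-12", 12, 6), ("C-13", 13, 6)]:
    print(f"{name}: neutrons make up {(A - Z) / A:.1%} of the mass")
# H-2: 50.0%, H-3: 66.7%, C-12: 50.0%, C-13: 53.8%.
# The jump from C-12 to C-13 adds only 1/12, roughly 8%, to the atom's mass.
```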

While isotopes are highly similar chemically, they do differ physically. All that weight can alter how isotopes of light elements, hydrogen especially, behave. One example of such differences is the kinetic isotope effect — basically, heavier isotopes of the same element tend to be more sluggish during chemical reactions than lighter isotopes. For heavier elements, this effect is negligible.

Another quirky property of isotopes is that they tend to behave differently from the ‘default’ elemental atoms when exposed to infrared light. So, molecules that contain rarer isotopes will look different from the same molecules without them when seen through an infrared camera. This, again, is caused by their extra mass — the shape and masses of the atoms in a molecule change how it vibrates which, in turn, changes how it interacts with photons in the infrared range.

Where do isotopes come from?

Long story short, isotopes are simply atoms with more neutrons — they were either formed that way, enriched with neutrons sometime during their life, or originated from nuclear processes that alter atomic nuclei. So, they form like all other atoms.

Lighter isotopes likely came together a bit after the Big Bang, while heavier ones were synthesized in the cores of stars. Isotopes can also form following interactions between energetic cosmic rays and atomic nuclei in the top layers of the atmosphere.


The carbon-nitrogen-oxygen (CNO) cycle, one of the two known sets of fusion reactions by which stars convert hydrogen to helium. P or ‘proton’ here is a positive hydrogen ion (aka hydrogen stripped of its electron).
Image credits Antonio Ciccolella / Wikimedia.

Isotopes can also be formed from other atoms or isotopes that have undergone changes over time. One example of such a process is radioactive decay: basically, unstable isotopes tend to shift towards a stable configuration over time. This can cause one unstable isotope to change into a stable one of the same element, or into isotopes of other elements with similar nucleic structures. U-238, for example, decays into Th-234.

That particular example is alpha decay: the uranium nucleus sheds two protons and two neutrons (a helium nucleus) to become thorium. In another process, known as beta decay, a nucleus has too many neutrons compared to protons (or vice-versa), so one of them transforms into the other; here, the nucleus emits radiation in the form of an electron and an antineutrino. In both cases, the original atom is called the parent isotope, while the product is the daughter isotope.

What are isotopes good for?

One of the prime uses for isotopes is dating (like carbon dating). One particular trait of unstable isotopes is that they decay into stable ones — and they always do so at the same, fixed rate. For example, C-14’s half-life (the amount of time needed for half of the atoms in a sample to decay) is 5,730 years.

C-14 is formed in the atmosphere, and while an organism is alive, it ingests about one C-14 atom for every trillion stable C-12 isotopes through the food it eats. This keeps the C-12 to C-14 ratio roughly stable while it is alive. Once it dies, intake of C-14 stops — so by looking at how many C-14 atoms a sample has, we can calculate how far down C-14’s half-life it’s gone, meaning we can calculate its age.
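In Python, the back-of-the-envelope version of that calculation looks like this (illustrative only — real radiocarbon dating also involves calibration curves, not just the raw half-life formula):

```python
import math

C14_HALF_LIFE = 5730.0  # years

def radiocarbon_age(fraction_remaining):
    """Age from the fraction of original C-14 left: N/N0 = (1/2) ** (t / half-life)."""
    return C14_HALF_LIFE * math.log2(1.0 / fraction_remaining)

print(radiocarbon_age(0.5))   # one half-life gone: 5730 years
print(radiocarbon_age(0.25))  # two half-lives gone: 11460 years
```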

At least, in theory. All our use of fossil fuels is pumping carbon that contains virtually no C-14 into the atmosphere, diluting the natural ratio, and it’s starting to mess up the accuracy of carbon dating.

To see how many C-14 atoms something has, we use accelerator mass spectrometry — a method that separates isotopes via mass.

PET (Positron-emission tomography) scans use the decay of so-called ‘medical isotopes‘ to peer inside the body. These isotopes are produced in nuclear reactors or accelerators called cyclotrons.

Finally, we sometimes create ‘enriched’ materials, such as enriched uranium, to be used in nuclear reactors. This process basically involves weeding through naturally-occurring uranium via various methods to concentrate its rarer, lighter isotope, U-235, which is more unstable — and thus more radioactive — than the far more common U-238. The metal left over once most of the U-235 has been removed is known as ‘depleted uranium’.


Silicon-based life on Earth? Only artificially, so far — but maybe natural on other planets

Scientists have successfully nudged a strain of bacteria to create carbon-silicon bonds for the first time. Such research will help flesh out our understanding of silicon-based life — which doesn’t appear on Earth but could on other planets.


Quartz, or rock crystal, the second most abundant mineral in the Earth’s crust, is a compound of silicon and oxygen. The most abundant mineral, feldspar, also contains (mostly) silicon.
Image credits Stefan Schweihofer.

You may not think of it much, but silicon is actually really common here on good ole Earth — it’s the second most common element in our planet’s crust after oxygen. About nine-tenths of crustal rocks contain silicon in the form of silica or other silicates. The processor that’s allowing you to read ZME Science contains silicon — the glass panes on your windows, too. It’s so ubiquitous in our planet’s chemical makeup that a geologist can tell you which volcanoes will explode, and which will simply ‘flow’ on eruption, just by looking at how much silicon their magma contains. Sitting just one step away from carbon in the periodic table, silicon shares most of the properties that made carbon ideally suited for organic use.

Which makes the following point that much more curious: life as we know it simply isn’t that big on using silicon. It pops up here and there, in the tissues of certain plants or the shells of some marine organisms, but surprisingly little overall. Instead, Earthlings much prefer carbon.

Research from the California Institute of Technology, however, may put the element back on biology’s menu. The team successfully coaxed E.coli into producing a protein that can form carbon-silicon (C-Si) bonds. Their work sheds more light on why the latter element seems to be shunned by Earthly life, and where we might find organisms that don’t feel the same way.

Silly cons

The team started by engineering a strain of E. coli bacteria to produce a protein normally found in bacteria from Icelandic hot springs, which can bind silicon to carbon. When the team first used their engineered strain to produce the protein in question, the enzyme proved to be very inefficient. Successive iterations and mutations, however, resulted in an enzyme that could forge organic silicon molecules some 15 times more efficiently than any chemical process the team could apply to the same goal. Using this enzyme, the team produced twenty organic C-Si compounds, all of which proved to be stable.

So, by this step, they had proven that life can incorporate silicon — it’s just that it doesn’t particularly want to.

“You might argue, gosh, it’s so easy for a biological system to do it, how do you know it’s not being done out there?” says coauthor Frances Arnold. “We don’t really know, but it’s highly unlikely.”

Arnold was referring to other planets in this context, but the point she’s trying to make can also be applied to Earth. Do we know beyond a doubt that there isn’t silicon life on Earth? Well, no. But we have reasonable grounds to assume that there isn’t.

The issue here is a problem of availability. Silicon is much more common on Earth than carbon, but virtually all of it is extremely costly for life to access. Silicon is so prevalent in the crust because it’s a huge fan of oxygen, and it will bind with any available atom of the gas to form rocks. All of the silicon compounds that the team fed their bacteria to make these new compounds were manmade and life wouldn’t have any chance of finding them in the wild.

Carbon, on the other hand, is very stable chemically. Its relative lack of interest in hooking up with oxygen is especially useful for life, as it can use the atom to create huge molecules without much risk of them oxidizing and breaking apart. It also allows carbon to exist in a pure state (graphite for example) in nature, while silicon can’t — this is a very important distinction, as in molecular ‘economy’, this means carbon can be acquired at a much, much lower price (energy expenditure) than silicon. Finally, when you burn carbon you get a gas that can then be re-used by life; silicon lacks this perk.


But it can be quite shiny, as these silicon optic pieces showcase.
Image credits Crystaltechno / Wikimedia.

What’s more, silicon-based life couldn’t use water as carbon-based life does; the two simply don’t have chemistry. Instead, it would have to substitute another liquid, such as methane, for the job — and liquid methane isn’t stable under normal conditions on Earth, either.

In the end, the heart of the matter isn’t that silicon can’t be a foundation for life — it’s just that, on Earth, carbon can do the job much more easily, at greater efficiency, and at a lower cost. The ‘job’ here being life.

However, that’s not to say silicon-carbon bonds aren’t useful. We produce such compounds in the lab all the time and use them in products ranging from electronics to pharmaceuticals. The team hopes that their bacteria can help produce these substances much faster, much more cheaply, and with a lesser environmental footprint. It could also open the way to whole new materials.

“An enzyme can do what chemists thought only they could do,” Arnold says. “The chemical bond could appear in thousands and thousands of different molecules, some of which could be useful,”

“They’re all completely new chemical entities that are easily available now just by asking bacteria to make them.”

Silly life

Beyond the immediate practical considerations, the research also raises the question: is Earth-based silicon life feasible? The results showed, at the very least, that silicon isn’t harmful to life as we know it. Perhaps, if life had ready access to the element, it would incorporate it more into its structures and processes, despite its limitations.

And that invites the question of whether life can be made to incorporate elements that we’ve never seen it use before.

“What happens when you incorporate other elements?” Arnold asks. “Can nature even do that?”

“Presumably we could make components of life that incorporate silicon—maybe silicon fat or silicon-containing proteins—and ask, what does life do with that? Is the cell blind to whether carbon is there or silicon is there? Does the cell just spit it out? Does the cell eat it? Does it provide new functions that life didn’t have before?”

“I’d like to see what fraction of things that chemists have figured out we could actually teach nature to do. Then we really could replace chemical factories with bacteria.”

One particularly well-suited solution to the limitations of silicon on our Earth is to move the context to another planet. Any seasoned lover of sci-fi, and I proudly count myself among their number, has run into the idea of silicon-based aliens at least once. For now, the “alien life” part remains in the domain of fantasy, but the chemistry behind that idea is very firmly lodged in the domain of science. For example, Titan, Saturn’s largest moon, sports a chilly average temperature of -179° Celsius (-290° Fahrenheit), very little oxygen (what there is of it is locked in water ice), and an abundance of methane rivers and lakes.

[Read Further] Yes that’s weird, but the weather gets even weirder on other planets — even simple rain.

In this context, silicon would be much better suited as a biochemical base for life than carbon. In what is perhaps a sprinkling of cosmic irony, however, Titan sports a lot of carbon (even more than Earth), but precious little silicon — and most of it is buried deep, near the moon’s core. But it goes to show that there are worlds out there where silicon is the way to go, not carbon. Overall our chances of finding silicon-based life, or life that incorporates silicon, are pretty slim. And that, again, comes down to the fact that carbon is the more stable of the lot. In the grand scheme of things, there can be silicon life out there — but it will probably be pretty rare.

Still, for now, research into C-Si bonds could usher in a new method of cheaply producing what, today, are relatively pricey compounds. And organic silicon compounds could have very valuable uses in medicine and other applications. So, while we look and pine for silicon-based life out in the universe, we stand to gain a lot from studying it on Earth.

The paper “Directed evolution of cytochrome c for carbon–silicon bond formation: Bringing silicon to life” has been published in the journal Science.

Three Old Scientific Concepts Getting a Modern Look

If you have a good look at some of the underlying concepts of modern science, you might notice that some of our current notions are rooted in old scientific thinking, some of which originated in ancient times. Some of today’s scientists have even reconsidered or revamped old scientific concepts. We’ve explored some of them below.

4 Elements of the Ancient Greeks vs 4 Phases of Matter

The ancient Greek philosopher and scholar Empedocles (495-430 BC) came up with the cosmogonic belief that all matter was made up of four principal elements: earth, water, air, and fire. He further speculated that these various elements or substances could be separated and reconstituted. According to Empedocles, these actions were the result of two forces: love, which worked to combine, and hate, which brought about a breaking down of the elements.

What scientists refer to as elements today have few similarities with the elements examined by the Greeks thousands of years ago. However, Empedocles’ proposed quadruplet of substances bears resemblance to what we call the four phases of matter: solid, liquid, gas, and plasma. The phases are the different forms or properties material substances can take.

Water in two states: liquid (including the clouds), and solid (ice). Image via Wikipedia.

Compare Empedocles’ substances to the modern phases of matter. “Earth” would be solid. The dirt on the ground is in a solid phase of matter. Next comes water which is a liquid; water is the most common liquid on Earth. Air, something which surrounds us constantly in our atmosphere, is a gaseous form of matter.

And lastly, we come to fire. Fire has fascinated human beings since before recorded history. Fire is similar to plasma in that both generate electromagnetic radiation such as light. Most flames you see in everyday life are not hot enough to be considered plasma; they are typically considered gaseous. The Sun, on the other hand, is a prime example of matter in the plasma state. All in all, the ancient four elements have an intriguing correspondent in modern science.

Ancient Concept of Dome Sky vs. Simulation Hypothesis

Millennia ago, people held the notion that their world was flat. Picture a horizontal cooking sheet with a transparent glass bowl set on top of it: early peoples thought of the Earth in much the same way. They considered the land itself flat and the sky a dome. However, early Greek philosophers such as Pythagoras (c. 570-495 BC), who is also known for formulating the Pythagorean theorem, understood that Earth was actually spherical.

Fast forward to the 21st century, and scientists are considering the concept of the dome once again, albeit in a much more complex manner.

Regardless of what conspiracy lovers would have you believe, the human race has ventured into outer space, leaving the face of the Earth to travel toward the stars. Yet in the face of all our achievements, some scientists actually question whether reality is real, a mind-boggling and, at first glance, laughable idea.

Specifically, some scientists have wondered whether we could be living inside a computer simulation. The gap between science and science fiction becomes very narrow when you consider the question.

This idea calls to mind classic sci-fi plots such as those frequently played out in The Twilight Zone, in which everything the characters take as real turns out to be something entirely unexpected. You might also remember the sequence in Men in Black in which the audience sees that the entire universe is inside an alien marble. Bill Nye even uses the dome as an example when discussing hypothetical virtual reality, likening it to the feeling of living inside a snow globe.

Medieval Alchemy vs. Modern Chemistry

The alchemists of the Middle Ages attempted to prove that matter could be transformed from one substance into an entirely new one. One of their fondest goals was the creation of gold from a less valuable substance. They were dreaming big, but such dreams have not yet come to fruition. Could it actually be possible to alter one type of matter into another?

Well, modern chemists may be well on their way to achieving this feat some day. They are pursuing the idea of converting light into matter, as expressed in Albert Einstein's famous equation. Since 2014, scientists have argued that such an operation would be quite feasible, even with existing technology.

Einstein's famous equation: E = mc².

Light is made up of photons, and a contraption capable of performing the conversion has been dubbed the "photon-photon collider." Though we might not be able to transmute one kind of matter into another in the near future, the light-to-matter transformation, at least, has a bright outlook.
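For a rough sense of the scales involved (a back-of-the-envelope check on our part, not a description of any specific proposed device), Einstein's relation prices matter in energy: the lightest matter-antimatter pair, an electron and a positron, can only be conjured from two colliding photons if they bring enough combined energy.

E = mc²
E(photon 1) + E(photon 2) ≥ 2 × 0.511 MeV ≈ 1.022 MeV

That is an enormous amount of energy to pack into two particles of light, which is why the conversion calls for a dedicated collider rather than a desk lamp.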


How to create deliciousness — the chemistry behind cooking

Cooking started out as a very practical matter: it made food tastier, easier to chew, and overall better. It's no less practical today than it was back in the stone age, but over time it has also taken on deep cultural, social, and personal meanings throughout the world. Food can bridge different cultures and offers a context for creating or strengthening family and social ties. We compete to show off our cooking and eating skills. We celebrate special occasions with baked goods. Even most religions deal, in some way or another, with food.

In other words, cooking is an integral part of the human experience, with a much wider scope than simply calming a rumbling belly. So let’s take a look at the backstage of cooking and get to know the chemical reactions that underpin fine dining.

Potato Casserole.

Image credits RitaE / Pixabay.

What’s cooking

Before going any further, let's get an idea of what cooking actually is. The term describes any process used to prepare food for eating, and these processes range quite widely. Ceviche, for example, relies solely on plant acids to cook proteins. Pickling uses bacterial fermentation to preserve and prepare foodstuffs.

Different cooking methods were historically how people adapted to the rigors of their environments, maximizing the effective number of calories they could obtain while bypassing limiting factors. Ceviche, for example, offers an alternative in areas where fuel for fires is scarce. Pickling would serve to keep certain items (vegetables, for example) fresh much longer than otherwise possible, so people could stock up on food to last them during trying times, such as during winter.

However, cooking usually involves the application of heat in one way or another to prepare food. For the sake of simplicity, today we’ll be talking about this last kind of cooking.

Carbohydrates

Banana fried.

Image via Pixabay.

Chemically speaking, the oft-dreaded carbs are molecules containing carbon, oxygen, and hydrogen, typically with a hydrogen-to-oxygen ratio of 2:1. This is the same ratio as in water, which will be important later on. They're basically sugars and starches (starches are also sugars, just with longer molecules). Sucrose (table sugar) is one such carb.

If you've ever heated table sugar, you've noticed it will begin to brown, then liquefy, and finally bubble. The bubbling occurs as hydrogen and oxygen break off from the sugars, form water molecules, and evaporate. The browning is due to the formation of caramel polymers: caramelans (C24H36O18), caramelens (C36H50O25), and caramelins (C125H188O80). Caramel's unique aroma comes from volatile substances released during their pyrolysis, such as diacetyl.
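As a quick sanity check, the first two of those formulas are consistent with sucrose simply shedding water as it links up into larger molecules. Balancing the books (our own arithmetic, based on the formulas above) gives:

2 C12H22O11 → C24H36O18 (caramelan) + 4 H2O
3 C12H22O11 → C36H50O25 (caramelen) + 8 H2O

The water driven off in these reactions is part of what you see bubbling away in the pan.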

So far so good, but why is caramelization important? Well, the process is key in cooking many plant-based foodstuffs. Vegetables cooked on high heat, in a stir-fry for example, will progressively brown as their starches and sugars break down and caramelize. Thankfully for us all, the volatiles released also give these veggies newfound 'yum'. Caramelization is also partly responsible for the golden-brown hue baked goods take on in the oven.

The Maillard reaction, a chemical reaction between amino acids and reducing sugars first described by French chemist Louis-Camille Maillard in 1912, is the other reaction that plays a role in baked goods' aroma, flavor, and golden hue. Like caramelization, it's a type of heat-powered browning. However, it takes place at lower temperatures (roughly 140 to 165 °C / 284 to 329 °F) and creates a wide-ranging cocktail of substances.

Lipids

Cheesecake.

Image via Pixabay.

Colloquially known as fats, lipids are molecules built around long hydrocarbon chains. Their exact chemical nuances are a bit complex, but suffice to say that the overwhelming majority of dietary fats are triglycerides. The longer these chains get, the more they tend to jumble up, which is why fats generally tend to be solid at room temperature. Obviously, there are exceptions to this rule: oils, for example, are made up of shorter-chained, mostly unsaturated (we'll get to that in a moment) fats. Apply heat, however, and fat melts. Heat is energy, and enough energy allows fat's molecular chains to shake away from and start sliding past each other, i.e. to flow.

Fats are one of the densest energy stores organisms produce. Because of their high energetic value, your brain makes fatty food taste good so you’ll seek it out and stuff your face with as much of it as possible. When heated to a liquid, fats are more easily absorbed by items of food, imbuing them with flavor. That’s why fried food tastes awesome, why bakers put butter in their wares, and why cake is absolutely amazing (that and sugar of course).

Still, too much fat can be really bad for your health, and not all fats are created equal. This is why the World Health Organization recommends limiting saturated fat to 10% of daily energy intake (less than 7% for high-risk groups) and keeping total fat below 30% of daily energy intake. As a rule of thumb, animal fats tend to have a higher content of saturated fats, which have the maximum number of hydrogen atoms tied into their chains and only single bonds. Plant-derived fats tend to have a higher content of unsaturated fats, which have at least one double bond and aren't 'saturated' with hydrogen. Gram for gram, saturated fats also hold slightly more energy than unsaturated ones.
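To put those percentages into everyday units: fat carries roughly 9 kcal per gram, so the limits translate directly into daily gram budgets. Here's a minimal Python sketch of that arithmetic; the 2,000 kcal reference diet is an illustrative assumption, not a recommendation:

```python
# Convert fat limits (fractions of daily energy) into grams per day.
# Fat provides roughly 9 kcal per gram.
KCAL_PER_G_FAT = 9.0

def fat_budget_grams(daily_kcal, fraction):
    """Grams of fat corresponding to a fraction of daily energy intake."""
    return daily_kcal * fraction / KCAL_PER_G_FAT

daily_kcal = 2000  # illustrative reference diet
print(f"saturated fat (10%): {fat_budget_grams(daily_kcal, 0.10):.0f} g/day")
print(f"total fat (30%):     {fat_budget_grams(daily_kcal, 0.30):.0f} g/day")
# -> about 22 g and 67 g per day, respectively
```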

Proteins

Steak.

Image via Pixabay.

Protein denaturation is, if you’ll pardon the pun, the real meat of chemical reactions in cooking. Proteins are incredibly complicated bits of molecular machinery. They’re what imparts structural resilience to foodstuffs, what makes them chewy.

Proteins have four layers of complexity to them. First is the amino acid sequence itself, which forms the primary structure. Then comes the secondary structure: local shapes, such as helices and sheets, held together by hydrogen bonds between the amino acids. The tertiary structure is the overall 3D shape a single folded chain takes, and the quaternary structure is the way several such chains assemble into the finished macromolecule. These folding patterns are so complex that we still have a lot of difficulty replicating them in the lab, despite using supercomputers to crack the problem.

Cooking, however, breaks these layers of complexity down in a process we call denaturation. Essentially, this process reverts proteins to their primary, or at most secondary, structure. This makes the proteins much easier to break down chemically, and less able to hold together mechanically, which is why cooked food is easier to chew or digest than raw food and has more effective calories.

In a way, the process is similar to what happens with carbs, as proteins are also essentially long molecular chains. Outwardly, however, the effect is kind of the opposite of that in fats. An egg's white, for example, largely consists of proteins and water. Raw, it's gooey, runny, and slimy, but it holds together pretty well. When cooked, the denatured proteins start interacting with each other and jumbling up, making the white firmer but easier to bite and chew.

In effect, cooking breaks down the egg white’s proteins and then polymerizes — binds together — the resulting bits. The same process takes place with all proteins in the food you cook, be it an egg or a slab of steak. The Maillard reaction again makes an appearance here. The reactions between proteins and carbohydrates that brown baked goods or toast are the same ones that make meats and other protein-heavy foods brown when cooked.

Taste

What you perceive as 'taste' is your brain interpreting the chemical composition of food. Sugars, for example, are sweet, fat gives foods that irresistible, savory richness, and proteins (more precisely, amino acids such as glutamate) are generally credited as the carriers of umami.

But those are the broad strokes. Cooking alters each of these macronutrients (carbs, fats, and proteins) both separately and in combination. On one hand, you have proteins being denatured; on the other, you have sugars being transformed; and finally, you have the two interacting in Maillard reactions. Lipids also seep into the whole thing, along with the cocktail of compounds released by sugars and proteins, soaking everything in delicious goodness.

The chemical processes that occur in the pan are extremely complex, and they are not completely understood for a simple reason: no two foodstuffs are alike. We have a general idea of the main goings-on while cooking, but it's simply too large and chaotic an environment to understand fully. So much so, in fact, that back in 2015 one of the Ig Nobel prizes was awarded to Colin Raston, a chemistry professor at Flinders University, Australia, for uncooking an egg.

I've tried to keep this piece lighter on the details than we usually do here on ZME Science precisely because of this huge complexity. However, I hope I've helped you gain a whole new appreciation of the wonderful chemistry that comes together in a tasty dish. And, if you're like me, you now have something to think about while getting dinner ready.

Now if you’ll excuse me, I’m going to go raid my fridge.


The 2017 Nobel Prize in chemistry awarded for cool new tricks in electron microscopy

The 2017 Nobel Prize in chemistry has been awarded to Jacques Dubochet, honorary professor at the University of Lausanne, Switzerland, Joachim Frank from Columbia University, and Royal Society Fellow Richard Henderson for their revolutionary work in bioimaging: the development of cryo-electron microscopy.

Chemistry Nobel 2017.

Image credits NobelPrize.org

It’s really hard to get a good look at the stuff life is made of, known as ‘biomolecules’, in action. The issue is two-fold. For starters, they’re minuscule in size — we’re talking about chemical systems constructed out of bunches of atoms strung together. Secondly, the go-to tool for peering at the really small, electron beam imaging, just rips them to shreds before we can get an accurate picture.

The trio of scientists developed a technique to address these issues. Known as cryo-electron microscopy, it allows researchers, for the first time in history, to study the structure of biomolecules in high resolution without damaging them. The technology has immense potential in the field of biochemistry.

Frozen solid

Previously, electron microscopy imaging was only suitable for studying dead matter, because the electron beam destroys any biological material it is applied to. Henderson, a Scottish scientist and professor at the MRC Laboratory of Molecular Biology, was the first to use the method to generate a three-dimensional image of a protein at the atomic scale. Joachim Frank, a professor at Columbia University in New York, expanded on electron microscopy, making it more flexible and more widely applicable. Together, their research made it easier to peer into the workings of biology's building blocks in more detail than ever before.

Blobology.

The final technical hurdle was overcome in 2013, with the advent of a new type of electron detector.
Image via NobelPrize.org

Dubochet, an honorary professor at the University of Lausanne in Switzerland, worked on making biomolecules stable enough to resist electron microscopy. His work refined a cold-vitrification technique, which made it possible to flash-freeze biomolecules without altering their original structures. This vitrification technique ensures that molecules can be kept intact (and still) while we take a good look at them using the advances in imaging achieved by Henderson and Frank. In a nutshell, that’s cryo-electron microscopy.

Practical applications of the technique are “immense”, Frank told journalists after the Prize’s announcement.

Cryo-electron microscopy is a game changer in biochemistry. Life at the smallest scale works much like a set of LEGO bricks. A virus trying to infect one of your cells, for example, first needs to bind its envelope proteins to those peppered throughout the cell's membrane. Similarly, antibodies hunting those viruses down bind to their envelope proteins to act as beacons, drawing in the immune system's white cells. An antibiotic works by blocking proteins on bacterial cells, rendering those proteins useless.

But not all types of proteins click together; they need to be 'shaped' properly. Up to now, scientists have had to simulate such structures from indirect observations, which took a lot of time and processing power, or a lot of gamers putting in the elbow work. Now, researchers can simply use cryo-electron microscopy to look at the shapes they're interested in.

3D Structures.

As you can probably tell, these babies are much easier to snap than to simulate.
Image via NobelPrize.org

So far, the technique has allowed researchers to look at the structure of the Zika virus, the proteins that make bacteria resistant to antibiotics, and the enzyme producing the amyloid of Alzheimer's disease. By freezing the same chemical system at different points in its operation cycle, they can even put together film sequences of biochemistry at work, a feat unheard of up to now.

Cryo-electron imaging, used in conjunction with a similar technique known as cryo-focused ion beam milling, was also applied by a team at ETH Zurich to some bigger prey: the bacterium Amoebophilus. Here, a cold-vitrification technique was used to strengthen the bacterium's structures while keeping them brittle, an ion beam was used to chip away bits of Amoebophilus, and cryo-electron imaging was then used to model the bacterium's internal organelles, including a spectacular array of dagger-like projectile launchers.

All in all, this imaging technique promises to revolutionize our understanding of live biochemical processes, a much-needed aid in our growing antibiotic-resistance problem and beyond. In recognition of that fact, the Royal Swedish Academy of Sciences in Stockholm announced the award on Wednesday; the prize will be shared equally among the three researchers.


World’s tiniest race will pit nanocars against each other in Toulouse this April

This April, Toulouse, France will be host to the world’s first international molecule-car race. The vehicles will be made up of only a few atoms and rely on tiny electrical pulses to power them through the 36-hour race.


A model of a single-molecule car that can advance across a copper surface when electronically excited by a scanning tunneling microscope tip.
Image courtesy of Ben Feringa.

Races did wonders for the automotive industry. Vying for renown and ever-better lap times, engineers and drivers have pushed the limits of their cars farther and farther. Seeing what a boon competition proved to be for the development of science and technology in the pursuit of better performance, the French National Center for Scientific Research (Centre national de la recherche scientifique / CNRS) is taking racing to a whole new level: the molecular level.

From April 28th to the 29th, six international teams will compete in Toulouse, France, in a 36-hour-long nanocar race. The vehicles will be composed of just a handful of atoms each and powered by light electrical impulses as they navigate a 100-nanometer racecourse laid out on a surface of gold atoms.

The fast (relative to size) and sciency

The event is, first of all, an engineering and scientific challenge. The organizers hope to promote research into the creation, control, and observation of nanomachines through the competition. Such devices show great promise for future applications, where their small size and nimbleness would allow them to work individually or in groups for a huge range of industries — from building regular-sized machines or atom-by-atom recycling to medical applications, nanomachines could prove invaluable in the future. It’s such a hot topic in science that last year’s Nobel Prize for chemistry was awarded for discovering how to make more advanced parts for these machines.

But right now, nanomachines are kind of crude. Like really tiny Model T’s. To nudge researchers into improving this class of devices, the CNRS began the NanoCarsRace experiment back in 2013. It’s the brainchild of the center’s senior researcher Christian Joachim, who’s now director of the race, and Université Toulouse III – Paul Sabatier Professor of Chemistry Gwénaël Rapenne, both of whom have spent the last four years making sure everything is ready and equitable for the big event.

Some challenges they've faced were selecting the racecourse, which must accommodate all types of molecule-cars, and finding a way for participants to actually see their machines in action. Since witnessing a race this small unfurl is well beyond the limitations of the human eye, the vehicles will compete under the four tips of a unique scanning tunneling microscope housed at the CNRS's Centre d'élaboration de matériaux et d'études structurales (CEMES) in Toulouse. It's currently the only microscope in the world that allows four different experimenters to work on the same surface.

Scanning Tunneling Microscope explained.

Image credits CNRS Universite Paris-Sud / Physics Reimagined, via YouTube.

Scanning Tunneling Microscope in action.

Image credits CNRS Universite Paris-Sud / Physics Reimagined, via YouTube.

The teams have also been hard at work, facing several challenges. Beyond the difficulty of monitoring and putting together working devices only atoms in size, they also had to meet several design criteria such as limitations on molecular structures and form of propulsion. At the scale they’re working on, the distinction between physics and chemistry starts to blur. Atoms aren’t the things axles or rivets are made of — they’re the actual axles and rivets. So the researchers-turned-race-enthusiasts will likely be treading on novel ground for both of these fields of science, advancing our knowledge of the very-very-small.

Of the nine teams that applied before the May 2016 deadline, six were selected, and four of them will go under the microscope on April 28th. The race is about scientific pursuit, but it's also an undeniably cool event, so CNRS will be broadcasting it live on the YouTube Nanocar Race channel.

[panel style=”panel-info” title=”The rules of the race” footer=””]The race course will consist of a 20 nm stretch followed by one 45° turn, a 30 nm stretch followed by one 45° turn, and a final 20 nm dash, for a course totaling roughly 100 nm once the turns are accounted for.
Maximum duration of 36h.
The teams are allowed one change of their race cars in case of accidents.
Pushing another racecar a la Mario Kart is forbidden.
Each team is allotted one sector of the gold course.
A maximum of 6 hours are allowed before the race so each team can clean its portion of the course.
No tip changes will be allowed during the race.[/panel]
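Just for a sense of scale: covering the full course within the allotted time works out to an almost unimaginably slow pace by everyday standards. A quick Python check, using the course length and maximum duration from the rules above:

```python
# Average speed needed to finish the nanocar race course in time.
course_nm = 100   # total course length, in nanometers
duration_h = 36   # maximum race duration, in hours

speed_nm_per_h = course_nm / duration_h
print(f"{speed_nm_per_h:.1f} nm/h")         # ~2.8 nanometers per hour
print(f"{speed_nm_per_h * 1e-7:.1e} cm/h")  # ~2.8e-7 centimeters per hour
```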

 

The 2016 Nobel Prize in chemistry awarded to trio of molecular machine pioneers

The 2016 Nobel Prize in chemistry has been awarded to Jean-Pierre Sauvage from the University of Strasbourg, Sir J. Fraser Stoddart affiliated with Northwestern University, and Bernard L. Feringa from the University of Groningen for their work on molecular machines — nano-scale mechanisms capable of performing various tasks.

Molecular machines are teeny-tiny assemblies with the potential to spark a huge revolution. In essence, their purpose is to do the same things machines do for us today — transport, crafting, repairs — but on the molecular scale. And, just as you can’t make a car without first making some wheels, they need to be built from even smaller parts.

The trio's work led to the creation of the most advanced such parts we've yet put together. Sauvage created the first molecular chain, or 'catenane', in 1983. Stoddart designed a 'rotaxane', a molecular ring around an axle. Feringa created the first molecular motor by coaxing a blade to spin in only one direction. Just remember, we're talking about molecules here: far from the solid pieces of steel we use to build machines in the macroscopic world, these molecular machines are subject to the same rules as other molecules, such as Brownian motion.

Building on their work, chemists have designed muscles, elevators, and even cars, on an incredibly small scale. At the conference announcing the prize, committee member Sara Snogerup Linse asked if the audience wanted to see some molecular machines. She pulled away a black cylinder to reveal the items with a “Ta-da!” but there was nothing there.

“I’m sorry,” she said. “You can’t see them. They are more than a thousand times smaller than a human hair.”

Committee member Olof Ramstrom went on to present diagrams showcasing how the devices are built and their functionality. Sauvage, professor emeritus at the University of Strasbourg in France, developed a chain-linking process using a copper ion to hold two molecules in place. A third is added to complete the second link, and the copper ion is removed — allowing the two rings to move freely while still staying connected. Stoddart, Board of Trustees Professor of Chemistry at Northwestern University, used the attraction between an electron-starved ring and an electron-rich rod to thread the ring, forming an axle. The loop is then closed, to complete the assembly. Feringa, Jacobus Van’t Hoff Distinguished Professor of Molecular Sciences at the University of Groningen in the Netherlands, coaxed a spinning rotor blade to move in a single direction by driving it with pulses of light.

“They really are very tiny,” Ramstrom agreed.

The trio’s work has “opened this entire field of molecular machinery,” he added. There’s enormous potential in these tiny cogs and gears, as the Nobel Prize website explains:

“2016’s Nobel Laureates in Chemistry have taken molecular systems out of equilibrium’s stalemate and into energy-filled states in which their movements can be controlled. In terms of development, the molecular motor is at the same stage as the electric motor was in the 1830s, when scientists displayed various spinning cranks and wheels, unaware that they would lead to electric trains, washing machines, fans and food processors. Molecular machines will most likely be used in the development of things such as new materials, sensors and energy storage systems.”

The three scientists share the prize equally.

 

Pools at the Rio Olympics are turning green

Tom Daley, the British diver, shared an image on Twitter – one pool at the Rio Olympics had turned green – not a greenish hue, I mean deep green.


Initially, no one really knew what was happening. There was no apparent explanation and no word from the organizers. The women’s synchronised diving finals took place in the green water, with the event being overshadowed by the mystery. To make things even more bizarre, a second pool has reportedly turned green as well… but why?

The first concern was for the Olympians’ health, but the initial analysis showed there was no reason to worry.

“It’s very important to the Rio 2016 community to ensure a high quality of play,” read a statement from the organizers. “Tests were conducted and the water was found to be safe. We’re investigating what the cause was.”

However, that still didn't tell us why it was happening. Media from all around the world started speculating, with most articles suggesting an algae bloom as the major cause. Others believed there was an abundance of urine or phosphate in the water. None of those explanations holds up, though. Algae generally create a murky green hue, while the pools were still crystal clear, just green instead of blue. As for urine, there's no way the necessary quantity could have accumulated in the pool.

The culprit is probably pH. All the Olympic swimming pools are treated with a bunch of chemicals (as are most swimming pools across the world), and if those chemicals run out, the water's pH drifts, causing discoloration. A low pH brings out dissolved minerals; copper, for example, shows up as blue-green. The hypothesis is also backed by the International Swimming Federation (FINA), which stated:

FINA can confirm that the reason for the unusual water color observed during the Rio diving competitions is that the water tanks ran out of some of the chemicals used in the water treatment process.

As a result, the pH level of the water was outside the usual range, causing the discoloration. The FINA Sport Medicine Committee conducted tests on the water quality and concluded that there was no risk to the health and safety of the athletes, and no reason for the competition to be affected.

However, this isn't a common occurrence by any means. Vox talked to Nate Hernandez, director of aquatic solutions at VivoAquatics, who says several things would have to break down for this to happen. When asked if he would be embarrassed if this happened to him, his response was clear:

“I’d be fired,” he replied.

However, this is just one link in the long chain of failures plaguing the Rio Olympics. Hopefully, there will be no risks for the athletes.

How to make vodka, with science!

Chemistry gets an undeservedly bad reputation, and it all starts in school: "mix an acid with a base and you get water and salts" is useful, sure, but not really catchy. People just aren't that big on either water or salts. So can we nudge them to change their view of what is an undeniably awesome field of science? Is there a way to make chemistry a part of their lives that they hold dear?

I say yes. The answer lies in one of chemistry's most useful abilities: turning boring old food into booze. And we're here to tell you how to make vodka, so you can get hammered, all in the comfort of your home. With science!

Brewing it up

While there are as many different processes as there are drinks, making any type of alcohol boils down to fermenting sugars. Vodka is awesome because, along with moonshine, it's probably the simplest spirit to make. The process skips all the fancy steps such as aging, it can be made from virtually anything that ferments, and it packs quite a punch. It also looks cool under a microscope. What else do you need? Let's get down to it.

Image via vimeo

What you’ll need

[panel style=”panel-success” title=”The short version” footer=””]Foodstuffs to ferment, yeast, some containers and a still.[/panel]

Something to ferment, called the "mash." This can be anything that contains sugar or starch. Potatoes, grain, or fruit all work; one distillery even figured out how to use wine. Depending on what you make your mash from (i.e. its starch and sugar content), you might have to add either enzymes (to break down starches into sugars) or sugar to the mix. Malted grains don't require any added enzymes (the plant has already synthesized them), and you can mix them into any mash as a source of enzymes.

Malted grain.
Image credits Wikimedia user Pierre-alain dorange.

Yeast — these are single-cell fungi that will be doing the heavy lifting. They will turn the sugars from the starting mixture into alcohol.

You can buy yeast in almost any grocery store, or brewer’s yeast in homebrew shops and online.
Image via flickr user terren in Virginia.

Containers, an airlock, water, a still, and some bottles. You'll need either a big pot or several smaller ones in which to mix the mash with water and heat it, plus a fermenting container to hold the resulting mixture. The airlock is a mechanism that lets fermentation gases escape the container without letting fresh air (and oxygen) in. You can buy one or make it yourself, but it has to be solid enough to resist the pressure generated during fermentation. After fermentation, you'll need to distill the liquid, and that's where the still comes in.

Traditional Ukrainian vodka still.
Image credits Wikimedia user Arne Hückelheim.

M.A.S.H.

[panel style=”panel-success” title=”The short version” footer=””]To make alcohol you need sugar. Starchy foods need to be boiled and require enzymes to break down starch chains. Don’t boil the enzymes or the yeast.[/panel]

The first step is preparing the mash. You can start with molasses (or just sugar), fruit, or fruit juice; these are pricier but don't require any preparation. Grain and potatoes are the cheapest option, but you'll need to cook their starches into sugars. Starch is a polysaccharide, and while dogs have adapted to digesting it, yeast has not: it needs monosaccharides to ferment.


Beer mash being mixed in a brewery vat.
Image credits Flickr user epicbeer.

For a grain mash (wheat, barley, maize, or a combination of them), take a metal pot with a lid, fill it with water, and heat it to around 165° F (74° C). As a rule of thumb, a 10-gallon (38 l) pot should take around 6 gallons (23 l) of water and 2 gallons (7.6 l) of dry, flaked grain; stir well.

Too much heat will destroy the enzymes, so let the mix cool to between 155° F (68° C) and 150° F (66° C), then mix in one gallon of crushed grain malt. At this temperature, the starches pass from the grains into the liquid, which becomes viscous as it gelatinizes. Let it rest for two hours, stirring occasionally. During this time the starch breaks down into sugars; since starch is basically made of long chains of simple sugars fused together, you should see the mash become less and less viscous as this happens. Before fermenting, let the mixture cool to 80°-85° F (27°-29° C), but don't let the temperature drop much below 80° F, as this can spoil the mash.
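If you want to sanity-check your measurements before firing up the stove, the ratios above fit in a few lines of code. Here's a minimal Python sketch; the quantities and temperatures come straight from the recipe, while the assumption that they scale linearly with pot size is our own:

```python
# Scale the grain-mash recipe (6 gal water + 2 gal flaked grain + 1 gal
# crushed malt per 10 gal pot) linearly to other pot sizes.
def grain_mash_quantities(pot_gallons):
    """Return ingredient volumes (in gallons) for a given pot size."""
    scale = pot_gallons / 10.0
    return {
        "water_gal": 6.0 * scale,         # heat to ~165 F (74 C)
        "flaked_grain_gal": 2.0 * scale,  # stir in at strike temperature
        "crushed_malt_gal": 1.0 * scale,  # add after cooling to 150-155 F
    }

def fahrenheit_to_celsius(temp_f):
    """Handy for double-checking the recipe's temperature targets."""
    return (temp_f - 32.0) * 5.0 / 9.0

print(grain_mash_quantities(5))           # quantities for a 5-gallon pot
print(round(fahrenheit_to_celsius(165)))  # 74 (C), the strike temperature
```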

Potatoes aren’t readily usable for making alcohol because they mostly store starch, not sugars. The plant’s roots also don’t produce the enzymes required to break starch down into sugars. So, for a potato mash, you’ll need to heat-treat the spuds before fermentation. Clean the tubers (you don’t need to peel them) and boil them for about one hour, until the mixture gelatinizes. Throw away the water, mash the potatoes, mix them with fresh tap water, and boil them again. For 10 pounds of potatoes, around 3 gallons of water will do. From here on, the process is exactly like the one above: Let the mixture cool to between 155° F (68° C) and 150° F (66° C), add either two pounds of crushed malted grains or store-bought enzymes, stir periodically over two hours, and let it cool overnight, keeping it at around 80° – 85° F (27° – 29° C).

You can even make a mixed mash (which is what most commercial vodka brands use), as long as you take care to heat-treat the mixture accordingly. Vodka doesn't carry much flavor from the mash to the final brew, so you can choose any base for your drink and it won't affect the taste that much.

Fermentation

[panel style=”panel-success” title=”The short version” footer=””]Strain the mash and place it in a fermentation container, which you need to keep at 80° – 85° F (27° – 29° C). Don’t seal the vessel, it could explode.[/panel]

This is the part where alcohol actually gets produced. Through fermentation, the yeast will eat up all the sugar in the mixture, breaking it down for energy and churning out CO2 and alcohols.
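The bookkeeping of fermentation is simple, even if the biology isn't. For a simple sugar like glucose, the overall reaction is:

C6H12O6 → 2 C2H5OH + 2 CO2

By mass, 180 g of glucose yields at most 92 g of ethanol and 88 g of carbon dioxide, which is why even a sugar-rich wash tops out at a fairly modest alcohol content before distillation.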

It's recommended that you sterilize the container before fermentation so that only your yeast is left to metabolize the mix. You can use unsterilized containers, but the process will be messier, with unwanted flavors and alcohols produced by the action of other yeast strains and bacteria. You can buy sterilizing compounds in homebrew stores or online. You can also do a decent job of sterilizing equipment by placing it in boiling water.

Strain your mash with a fine mesh strainer and pour the liquid part into the vessel. Try to splash it around while you do this so that it aerates; the yeast initially needs some oxygen to grow before producing alcohol. Hydrate the yeast with the appropriate amount of water and mix it into the vessel (use a sanitized spoon). Higher-quality yeasts (called distiller's yeast) will ferment more cleanly and produce relatively low amounts of unwanted alcohols.

Ok, so you've got your fermentation container all set up; time to seal it. You can use store-bought stoppers or make your own; lids or drilled rubber stoppers all work. Just be careful not to completely seal the vessel, as the yeast will generate a lot of CO2, building up pressure in the container until it might even explode. So fix an airlock to the lid or stopper.


Or things might bubble out of hand.
Image credits Flickr user James Cridland.

Keep the liquid at around 80°-85° F (27°-29° C) during fermentation for the best results. If you use an airlock, you should see it bubbling during active fermentation; the bubbling will slow and eventually stop as the sugars in the mix are broken down into alcohol.

Distillation

[panel style=”panel-success” title=”The short version” footer=””]Place the fermented liquid in the still, and heat to 173° F (78.3° C), but keep it under 212° F (100° C). Throw away the heads and tails of your vodka, as they contain harmful substances. Drink the body.[/panel]

After fermentation, the liquid (called the "wash") basically meets all the criteria for booze, but it's not really palatable, or even safe to drink. Siphon it out of the fermentation vessel into a clean, sterilized container, leaving the yeast residue behind, or it will scorch and clog up your still. You can filter the wash before distillation if you want.

A still is a device that can separate liquids with different boiling temperatures. The basic idea is to heat the mixture above the boiling point of alcohol while keeping it under the boiling point of water. Some water will still evaporate, but as the vapors condense a large part of it trickles back into the boiling chamber, and a higher-alcohol content liquid is produced.

A (very fancy) swan-necked pot still.
Image credits Wikimedia user Bitterherbs1.

There are two types of stills you can use: pot stills and column stills. Both work on the principle above, but column stills are more efficient (and also more complex and expensive). The main difference between the two is that column stills have a longer condensation/distillation chamber directly above the boiling vessel, so a larger part of the unwanted vapors trickles back down instead of ending up in the final brew. Pot stills, however, are easier to build (they're basically pressure cookers with tubing attached) and need less cooling, as the distillation chamber can be completely submerged in water.

Fun fact: column stills are very similar to the installations oil refineries use in cracking or fractional distillation — the process by which petrol, diesel, lamp oil, and other finished products are created from crude oil. In fact, the same principles that go into distilling vodka are used to make these products.

The Coffey still, a (very tall and fancy) still for making high-proof spirits.
Image via Wikimedia user HighKing.

So you bartered, begged, or bought your way to a still. You have your wash all washy and ready. Here’s what you have to do:

Heat the wash to 173° F (78.3° C), but keep it under 212° F (100° C) or the water will also start to boil. Liquid will start to steadily drip from the still throughout distillation, but you don't want all of it. Throw away the "heads," the first fraction of the resulting brew, as it is very rich in harmful volatile chemicals such as methanol. For 5 gallons (19 l) of wash, the heads are the first 2 ounces (around 60 ml) of brew; throw away a bit more just to be on the safe side.

Next comes the "body", the distillate that contains ethanol (the nice kind of alcohol), water, and other compounds. If you're using flowing water to cool the still, you can adjust the flow to control the speed and quality of distillation. Aim to get around two or three teaspoons per minute from the still for the best quality results. You can go faster, but you'll get more impurities in your drink. This is, in fact, vodka. Bottle it up!

Over time, the temperature in the boiling chamber will slowly rise towards 212° F (100° C) no matter what you do. By that point, the body of the brew has been distilled, and the process will start producing the "tails," which are again harmful and should be discarded.
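If you'd rather not eyeball the cuts, the rules of thumb above are easy to codify. Below is a minimal Python sketch that scales the "2 ounces of heads per 5 gallons of wash" rule linearly; the scaling and the extra safety margin are our own assumptions, and when in doubt you should always discard more:

```python
# Estimate how much of the heads to throw away, and check whether the
# boiler temperature sits in the useful distilling window.
ML_PER_US_OZ = 29.57

def heads_to_discard_ml(wash_gallons, safety_factor=1.5):
    """Volume of heads (in ml) to discard for a given wash volume."""
    base_oz = 2.0 * (wash_gallons / 5.0)   # 2 oz per 5 gal of wash
    return base_oz * ML_PER_US_OZ * safety_factor

def in_distilling_window(temp_f):
    """True between ethanol's (173 F) and water's (212 F) boiling points."""
    return 173.0 <= temp_f < 212.0

print(round(heads_to_discard_ml(5)))  # ~89 ml for a 5-gallon wash
print(in_distilling_window(185))      # True
```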

Enjoy

Congratulations! You now know how to make your own vodka, and you're a bit of a fledgling chemist, too. You can add your own little touches to the process: toy a little with the composition of your mash, filter your vodka through carbon filters, or even distill it again to get a stronger brew. Explore, experiment, innovate.

And at the end of the day, you get to enjoy a nice shot of vodka.

Or a bowl-full.
Image via Wikimedia.

We’re going to need more fertilizer if we want to feed the world – much more

According to a new study, we will have to increase our phosphorus-based fertilizer production fourfold if we want to satisfy global food needs by 2050.

Photo by Lynn Betts.

As the human population continues to increase, so do the challenges of global food production. Fertilizers are a particular point of focus, and phosphorus is a key component of many of them. However, like many other nutrients, phosphorus can be depleted, especially when manure is collected and then used to fertilize arable cropland: the phosphorus in the manure is basically relocated from grasslands to agricultural lands, creating an imbalance. If grassland phosphorus is depleted, grassland productivity will be severely compromised. Many meat and dairy products depend on this productivity, and the disruption could affect global food production, the authors argue.

Martin van Ittersum and colleagues used data collected between 1975 and 2005 by the Food and Agriculture Organisation of the United Nations (FAO) to build the first global model of phosphorus budgets in grasslands. They found that most grasslands in the world have a negative phosphorus balance, meaning they lose more phosphorus than they gain. At some point in the future, the phosphorus reserves will simply become insufficient.
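The budget itself is plain bookkeeping: inputs minus outputs, tallied per unit of land per year. Here's a toy Python illustration of the idea; the categories follow the article, but the numbers are invented purely for demonstration:

```python
# A toy phosphorus budget for a patch of grassland, in kg of P per
# hectare per year. Negative values mean the soil is being depleted.
def phosphorus_balance(fertilizer_in, manure_returned, removed_in_products):
    """Annual phosphorus balance: inputs minus outputs."""
    return (fertilizer_in + manure_returned) - removed_in_products

# A grassland whose manure is mostly carted off to fertilize cropland:
print(phosphorus_balance(fertilizer_in=0.0,
                         manure_returned=1.0,
                         removed_in_products=5.0))  # -4.0, a net loss
```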

According to their findings, in addition to the fertilizers we’re using on agricultural lands, we’re also going to need fertilizers for grasslands.

So far, the largest negative balance is in Asia, while the only areas with a neutral or positive phosphorus balance are North America and Eastern Europe.

First biological function of mercury discovered

The element mercury (Hg) is extremely toxic to most organisms, including humans. Its deadly effects are thought to be due to its ability to block the function of certain key metabolic enzymes. Being so toxic, mercury has long been thought to have no biological function in the living world at all. At least, that was the presumption until a research team published the first evidence that a unique group of organisms can not only stand being around the stuff, but actually benefit from its presence. In a paper published this month in Nature Geoscience, D. S. Grégoire and A. J. Poulain show that photosynthetic microorganisms called purple non-sulfur bacteria can use mercury as an electron acceptor during photosynthesis. These bacteria rely on a primitive form of photosynthesis that differs from the type common to plants. In plants, water is used as the electron donor and carbon dioxide as the electron acceptor; the process produces sugars, releases oxygen, and removes carbon dioxide from the air. Purple non-sulfur bacteria, on the other hand, usually prefer watery environments where light is available to them but oxygen levels are low.

Image via Wikipedia.

They use hydrogen as the electron donor and an organic molecule, such as glycerol or fatty acids, as the electron acceptor. This also results in the production of sugars, but it releases no oxygen and removes no carbon dioxide from the atmosphere. The process also generates more electrons than the organic electron acceptor can handle, leading to potential damage to other molecules in the cell.

The researchers showed that purple non-sulfur bacteria grow better when mercury is in their environment. The reason seems to be that the bacteria use the mercury to accept those extra electrons, reducing it from a high oxidation state to a low one. The oxidation state reflects how many electrons an atom has gained or lost. When mercury gains the extra electrons and drops to its low oxidation state, it becomes a vapor and evaporates away into the atmosphere. In its high oxidation state, mercury can form the soluble compound methylmercury, which can be toxic to other organisms.
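In redox shorthand, the bacteria dump their surplus electrons onto dissolved mercury and volatilize it in the process. A minimal half-reaction sketch (the actual enzymatic machinery is, of course, more involved):

Hg(II) + 2 e⁻ → Hg(0)

with Hg(0) being the volatile, elemental form that escapes as a vapor.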

It's quite possible that the impact of mercury reduction by photosynthesis extends far beyond the health of these unusual little microbes. Jeffry K. Schaefer, of the Department of Environmental Sciences at Rutgers University, speculates that, “By limiting methyl-mercury formation and accumulation in aquatic food webs from microorganisms to fish, this process may even contribute to less toxic mercury ultimately ending up on our dinner plates.”

Journal Reference:

D. S. Grégoire & A. J. Poulain. A physiological role for HgII during phototrophic growth. Nature Geoscience 9 (2), 121-125 (February 2016). doi:10.1038/ngeo2629

Jeffry K. Schaefer. Biogeochemistry: Better living through mercury. Nature Geoscience, News and Views (18 January 2016).

Isotopes inside salmon ears tell a fishy story

According to a new study, just like tree rings carry with them hints about previous dry or rainy years, bones in fish carry with them a specific signature which records the chemical composition of the waters they used to live in.

A cross-section of a salmon otolith, also known as a fish ear stone or fish ear bone. Scientists measured strontium ratios and identified the waters in which the fish lived over its entire life. The new fish-tracking method may help pinpoint critical habitats for fish threatened by climate change, industrial development, and overfishing. Credit: Sean Brennan, University of Washington

Most vertebrates, and especially fish, have what is called an 'otolith', a bony structure inside the inner ear. The otolith accretes layers of calcium carbonate and gelatinous matrix throughout the animal's entire life. The accretion rate varies with the growth of the fish (often less growth in winter and more in summer), which results in the appearance of rings that resemble tree rings; and just like with tree rings, scientists can use them to figure out the animal's age. Another interesting fact: the otolith isn't really digestible, so it often remains stuck in the digestive tract of fish-eating animals, allowing scientists to reconstruct their eating habits.

But whenever the otolith grows and accretes more calcium carbonate, it also traps other elements: extremely small fractions of the chemical makeup of the waters in which the fish lived. Specifically, it traps particular isotopes in particular quantities; by analyzing these isotopes, researchers are now able to reconstruct where the fish was born and where it traveled over its entire life.
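To build an intuition for how such a reconstruction works, here's an illustrative Python sketch that matches each measurement along an otolith to the closest known water signature. The site names and ratio values are invented for the example; the real study relied on thousands of measurements per otolith:

```python
# Match each growth ring's measured 87Sr/86Sr ratio to the closest
# known water signature. All names and values here are hypothetical.
WATER_SIGNATURES = {
    "Tributary A": 0.7049,
    "Tributary B": 0.7110,
    "Marine water": 0.7092,
}

def closest_source(measured_ratio):
    """Return the water body whose signature best matches a measurement."""
    return min(WATER_SIGNATURES,
               key=lambda site: abs(WATER_SIGNATURES[site] - measured_ratio))

# Ratios measured from the otolith's core (birth) outward (later life):
transect = [0.7050, 0.7051, 0.7090, 0.7093]
print([closest_source(r) for r in transect])
# -> ['Tributary A', 'Tributary A', 'Marine water', 'Marine water']
```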

Sean Brennan of the University of Washington, the study's lead author, explains:

“Each fish has this little recorder, and we can reveal the whole life history of the fish from the perspective of the otolith. Each growth ring is a direct reflection of the environment the fish was swimming in at the time it was formed.” Brennan completed the study as a doctoral student at the University of Alaska Fairbanks. He is now a postdoctoral researcher in the University of Washington’s School of Aquatic and Fishery Sciences.

Specifically, they looked at the trace element strontium. Strontium is a very reliable element for this type of reconstruction because its isotopic signature barely alters as it moves through the environment, and strontium levels vary greatly depending on the age and structure of the bedrock. In other words, by looking at what the strontium in an area looks like, you can figure out (to some extent) where a sample comes from. But it wasn't an easy feat. Thure Cerling, another author of the study, explains:

“There are literally thousands of measurements on each otolith,” Cerling says.

Geochemist Diego Fernandez further adds:

“They’re like microexplosions. You create tiny, tiny particles that are carried into the mass spectrometer.” By showing how the ratio of strontium-87 to strontium-86 changed over time, “we get the entire life history of the salmon,” he says.

Some areas are better candidates than others for this type of analysis, but the researchers wanted a challenge, so they chose Alaska.

“Alaska is a mosaic of geologic heterogeneity,” he added. “As long as you can look at a geologic map and see rocks that are really different, that’s a good potential area.”

One of the many tributaries to the Upper Nushagak River. Credit: Sean Brennan, U of Washington

About 200,000 Chinook salmon make their way to the breeding grounds in Bristol Bay every year. When the eggs hatch in the spring, the young salmon spend a whole year in the river before venturing out to the Bering Sea and, ultimately, the Pacific Ocean.

This is not only an extremely exciting find, but one that can have a great effect on fish populations throughout the world. By analyzing several otoliths, scientists can now see whether migratory patterns have remained stable or have changed, likely due to some stress. From a conservation standpoint, that's a game changer.

“This is science responding to a societal issue and need,” said co-author Christian Zimmerman, U.S. Geological Survey ecologist and chief of water and interdisciplinary studies at the USGS Alaska Science Center in Anchorage. “Using this approach, we will be able to map salmon productivity and determine how freshwater habitats influence the ultimate number of salmon. With declines in Chinook salmon in Western Alaska, fishery and land-use managers need better information about freshwater habitats to guide conservation.”

But it's not just fish: the same technique could be used for other animals. Strontium is known to accumulate in bird feathers and in teeth, and it survives even after fossilization. It could help us understand movement patterns better than ever.

Journal Reference: Sean R. Brennan, Christian E. Zimmerman, Diego P. Fernandez, Thure E. Cerling, Megan V. McPhee, Matthew J. Wooller. Strontium isotopes delineate fine-scale natal origins and migration histories of Pacific salmon. Science Advances. doi:10.1126/sciadv.1400124

What gives coffee its distinctive color and flavor

Coffee beans undergo several processes before they become the delicious brew we all know. The coffee beans we're used to seeing, the brown ones with a delightful flavor, are roasted; raw coffee beans have a different color and smell very different. So what makes roasted coffee look, smell, and taste so unlike raw coffee? The answer lies in chemistry, in something called the Maillard reaction.


Raw (left) and roasted (right) coffee beans. Image via 21 foods.

 

Coffee and chemistry

The Maillard reaction is a chemical reaction between amino acids and reducing sugars; pretty much all cooked foods with that distinctive brownish tint are a result of this reaction, including steaks, many types of bread, caramelized sugar and, of course, coffee. The reaction is named after French chemist Louis-Camille Maillard, who first described it in 1912.

The reaction occurs from approximately 140 to 165 °C (284 to 329 °F). Starches break down into simple sugars, which turn brown and change their flavor. In an alkaline environment the reaction is more pronounced, which explains the intense taste of pretzels, traditionally dipped in an alkaline solution before baking. Wikipedia does a really good job of explaining how this works:

“The reactive carbonyl group of the sugar reacts with the nucleophilic amino group of the amino acid, and forms a complex mixture of poorly characterized molecules responsible for a range of odors and flavors. The type of the amino acid determines the resulting flavor. This reaction is the basis of the flavoring industry.”

The crusts of many breads and pastries, such as this brioche, are golden-brown due to the Maillard reaction. Image via Wiki Commons.

Of course, being able to manipulate the flavor of foods is very advantageous for the food industry and, to an extent, for us consumers. The reaction is present in many foods and has been used, often unknowingly, by humans for ages.

The Maillard reaction shouldn't be confused with simple food browning: browning can occur at room temperature too, and it is sometimes undesirable, as when an apple turns brown after being cut. Foods can turn brown by themselves, without any human intervention.

Here are just a few of the foods in which Maillard coloring and flavoring occur:

  • toast
  • steaks and barbecues
  • pretzels
  • several types of breads
  • condensed milk
  • maple syrup
  • … and of course: roasted coffee.

Roasted coffee beans. Image via Like Fun.

Roasting coffee and caffeol

Pre-roasting

Coffee is the most consumed beverage in the world (not counting water). Coffee cultivation first took place in Southern Arabia, and the earliest evidence of coffee brewing appears in the middle of the 15th century, in the Sufi shrines of today's Yemen. Coffee has quite a troubled history, having been banned in many Christian communities, as well as in Ottoman Turkey during the 17th century for political reasons. Today, the largest coffee producer in the world is Brazil, with a yearly production of 2.5 million tonnes, followed by Vietnam with almost 1 million tonnes, and Colombia with around 700,000 tonnes.

However, fewer people know that coffee berries undergo several processes before they become the familiar roasted coffee. After the beans are picked (by hand or with a machine), they are processed by one of two methods:

  • the dry process method, which is simpler and less work-intensive; or
  • the wet process method, in which the beans are fermented, yielding a milder coffee with a distinctive taste. When the fermentation is finished, the seeds are washed with large quantities of fresh water to remove the fermentation residue, which generates huge amounts of wastewater.

Then, the beans have to be dried, usually on drying tables – and only after that can you get to roasting them.

The roasting

The resulting green coffee is then roasted, usually as whole, dry beans. With very few exceptions, all coffee is consumed after roasting. The process changes the beans both physically and chemically: they lose moisture and increase in volume while decreasing in weight. The density of the bean also influences the strength of the coffee and the requirements for packaging.

During roasting, caramelization occurs as intense heat breaks down starches into simple sugars that begin to brown, changing the color of the bean as well as its scent and flavor. But something else happens too: as roasting proceeds, aromatic oils and acids weaken, changing the flavor further, and new oils start to develop at around 205 °C (401 °F). One of these oils, caffeol, is created at about 200 °C (392 °F), and caffeol is what makes coffee smell like... coffee.

To sum it up

A coffee cup. Image via Wiki Commons.

 

So, there you have it: roasted coffee tastes extremely different from raw coffee. The Maillard reaction breaks down the starch into simple sugars, which then turn brown and begin to change their taste. As the temperature increases, caffeol starts to develop. Caffeol is not a single oil, but rather an umbrella term for the source of the coffee aroma, an aroma that is very complex and that we now know contains nearly 850 different volatile compounds. It takes quite a lot of chemistry to create this delightful beverage, so enjoy it accordingly, and moderately.