
What are the strong chemical bonds?

Everything around you is made of chemicals. And that’s only possible because those chemicals interact and bind together. Exactly how and why they do this depends on their nature but, in general, there are two kinds of interactions that keep them close: “primary” (or “strong”) and “secondary” (or “weak”) interactions.

Image credits Thor Deichmann.

These further break down into more subcategories, meaning there’s quite a lot of ground to cover. Today, we’ll be looking at the strong ones, which are formed through the sharing or transfer of electrons between atoms.

As we go forward, keep in mind that atoms interact in order to reduce their energy levels. That’s what they get out of bonding to other chemicals, and they will do so until they find a bond-mate which will bring perfect balance to their lives; kinda like people do.

An atom’s stable configuration, the state all atoms tend towards, is known as its noble gas configuration. Noble gases make up the rightmost column of the periodic table, and they’re chemically inert or very nearly so (they don’t need to interact because they’re already in internal equilibrium).

Strong bonds are the most resilient ties atoms or molecules can forge with their peers. The secret to their strength comes from the fact that primary interactions are based on an atom’s valence. The valence number signifies how many electrons zipping around an atom’s core can be ‘shared’ with others. The overwhelming majority of a substance’s chemical behavior is a direct product of these electrons.

Covalent bonds

The first type of strong interactions we’ll look at, and the most common one, is the covalent bond. The name, “co-valence” sums up the process pretty well: two atoms share some or all of their valence electrons, which helps both get closer to equilibrium. This type of bond is represented with a line between two atoms. They can be single (one line), double (two lines), or triple (three lines).

Covalent bonds are especially important in organic chemistry. Image via Wikimedia.

In essence, what happens inside a covalent bond is that you have atoms whose outer electron shells are incomplete — one may be a few electrons short of a stable configuration, another may have a few to spare. Neither of them wants to keep going on like that, because an incomplete outer shell makes them unstable and reactive. When put close to each other, they will start behaving like a single ‘macroatom’ — their valence electrons will start orbiting around both.

These shared orbits are what physically keeps the atoms together. The atom with too many electrons only ‘has’ them for half the time, and the one with too few gets to have enough half the time. It’s not ideal, but it’s good enough and it requires no changes to the structure of the atom (which is just dandy if you ask nature).

Things get a bit more complicated in reality. Electrons don’t zip around willy-nilly, but need to follow certain laws. These laws dictate what shape their orbits will take (forming ‘orbitals’), how many layers of orbitals there will be and how many electrons each can carry, what distance these orbitals will be from the nucleus, and so on. In general, because of their layered structure, only the top-most orbitals are involved in bonding (and as such, they’re the only ones giving elements their chemical properties). Keep in mind that orbitals can and do overlap, so exactly what ‘top-most’ means here is relative to the atom we’re discussing.

A 3D rendering of electron orbitals. Image via Pixabay.

But to keep it short, covalent bonding involves atoms pooling together their free electrons and having them orbit around both, using each other’s weakness to make the pair stronger.

Covalent bonds are especially prevalent in organic chemistry, as it is the preferred way carbon bonds to other elements. The products they form can exist in a gas, liquid, or solid state, whereas the following two types can only produce solid substances.

Ionic bonds

Next are ionic bonds. Where covalent bonds involve two or more atoms sharing electrons, ionic bonds are more similar to donations. This type of chemical link is mediated by electrostatic attraction between atoms (negatively charged particles attract positively-charged ones). The link is formed by one or more electrons going from the donor to the receiver in a redox (oxidation-reduction) reaction; during this type of reaction, the atoms’ properties are changed, unlike in covalent bonds. Ionic bonds generally involve a metal and a nonmetal atom.

Table salt crystals. Salts are formed from ionic bonds. Image via Wikimedia.

Table salt is a great example of a compound formed with ionic bonds. Salt is a combination of sodium and chlorine. The sodium atom will cede one of its electrons to the chlorine, which will make them hold different electrical charges; due to this charge, the atoms are then strongly drawn together.

It again ties into equilibrium. Due to the laws governing electron orbitals, there are certain configurations that are stable, and many others that are not. At the same time, atoms want to achieve electrostatic neutrality, as well. In an ionic bond, an atom will take an increase in its electrostatic energy (it will give or take negative charge) to lower its overall internal imbalance (by reaching a stable electron configuration) because that’s what lowers its energy the most.
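The electron bookkeeping behind the table salt example can be sketched in a few lines. This is a deliberate simplification — real stability comes from shell structure, not raw electron counts — and the names below are mine, not standard chemistry notation:

```python
# Why Na and Cl 'want' to trade exactly one electron: each resulting ion
# ends up with the electron count of a noble gas. Atomic numbers are real;
# the bookkeeping itself is a simplification.
NOBLE_GAS_COUNTS = {2: "helium", 10: "neon", 18: "argon", 36: "krypton"}

sodium = 11    # Na sits one electron above neon's stable 10
chlorine = 17  # Cl sits one electron short of argon's stable 18

na_plus = sodium - 1     # Na cedes one electron -> Na+ cation
cl_minus = chlorine + 1  # Cl accepts it -> Cl- anion

print(NOBLE_GAS_COUNTS[na_plus], NOBLE_GAS_COUNTS[cl_minus])  # neon argon
```

Both ions now match a noble gas configuration, which is exactly the lower-energy state the bond exists to reach.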

Covalent bonds, for the most part, take place between atoms with similar electronegativities, and there’s no outright transfer of electrons, because that would increase the overall energy levels of the system.

Ionic bonds are most common in inorganic chemistry, as they tend to form between atoms with very different electronegativities, and (perhaps most importantly) many ionic compounds are soluble in water. Ionic compounds such as salts also have a very important part to play in biology.

The main difference between ionic and covalent bonds is how the atoms involved act after they link up. In a covalent bond, they are specifically tied to their reaction mates. In an ionic bond, each atom is surrounded by swarms of atoms of opposite charge, but not linked to one of them in particular. Atoms with a positive charge are known as cations, while those with a negative charge are anions.

Another thing to note about ionic bonds is that they break if enough heat is applied — in molten salts, the ions are free to move away from each other. They also quickly break down in water, as the ions are more strongly attracted to these molecules than each other (this is why salt dissolves in water).

Metallic bonds

Microstructure of VT22 (a wrought titanium alloy) after quenching. Image via Wikimedia.

If the name didn’t give it away, this type of chemical bond is the hallmark of metals and metallic alloys. It’s not the only type of bond metals can form — even pure metals can bond in other ways — but it’s almost always present in them.

Chemically speaking, metals are electron donors — they need to shed electrons to reach equilibrium. Because of the nature of these atoms, their electrons can move around between atoms, forming ‘clouds’ of electrons. These detached electrons are referred to as being ‘delocalized’.

This type of bond shares properties of both ionic and covalent bonds. In essence, every metal atom needs to give away electrons to be stable (thus behaving like a cation). But because it’s surrounded by other metal atoms (meaning other cations), there’s nobody who wants to accept that electrical charge. So the electrons get pooled together and everyone gets to have them some of the time (thus forming a covalent bond). You can think of it as an ionic bond where the atomic nuclei form the cations and the electrons themselves the anions. Another way to look at it, although this is more of an abstraction used to illustrate a point, is that all the atoms involved in a metallic bond share an orbital.

Keep in mind that this ‘sea of electrons’ theory is a model of the process — it’s oversimplified and not a perfect representation of what’s actually going on, but it’s good enough to give you a general idea of how metallic bonds work.

Because metallic bonds share properties of both ionic and covalent bonds, they create crystalline structures (like salts) while remaining malleable and ductile (unlike most other crystals). Most of the physical properties we seek in metals are a direct product of this structure. The cloud of delocalized electrons acts as a binder, holding the atoms together. It also acts as a cushion, preventing mechanical shock from fracturing the structure. When blacksmiths hammer iron or steel, they rearrange the atomic cores. Electrons can still move around them, like water around the rocks in a stream, and help hold everything together during the process.

Metallic bonds have the lowest bond energy of the types we’ve seen today — in other words, they’re the easiest of the three to break.

Chemistry often gets a bad rep for being that boring subject with math and mixing of liquids. So it’s easy to forget that it literally holds the world together. The objects around us are a product of the way their atoms and molecules interact. Our knives can cut the food on our plates because billions of atoms inside that knife hold onto each other for dear life, and those in food don’t. Diamonds cut through solid stone because carbon atoms can bind to other carbon atoms in structures that are stronger than almost anything else we’ve ever seen. Our cells and tissues are held together by the same interactions. We’re alive because water molecules are shaped in such a way as to make water a near-universal solvent.

We’re still very much working with models here — our understanding of the ties that bind is still imperfect. But even these models can help us appreciate the immense complexity hidden in the most mundane objects around us.


What are the different types of energy

Energy — we need it to stay alive. But what exactly is it, and what ‘flavors’ does it come in? Let’s find out.


Chemical energy turned into electric energy turned into kinetic energy turned into thermal energy and electromagnetic energy in a single photo. Fitting.
Image via Pixabay.

Just as there are many different ways to do work, there are also many types of energy. As a general guideline, we split it up into two major types and several subtypes. Physicists measure energy in joules, although a more familiar unit of measure might be the calorie. With that crash introduction, let’s take a look at:

Potential and Kinetic energy

The two slices of the energy pie (in how we interpret it, at least) are kinetic and potential energy. Every type of energy we’ll be discussing today is a particular form of either of these two. Kinetic energy is energy actively performing work right now (such as moving an object or heating it up) while potential energy is what is currently ‘in storage’, which can be released if the right circumstances are met.

Energy cannot be created or destroyed, but it can be transformed. If you, for example, lift an apple over your head, you’re transforming kinetic (motion) energy into potential energy (the apple wants to go down and will do so if you let it go). As it falls, all the potential energy you’ve stored inside the fruit is turned back into kinetic energy. Alternatively, a battery holds chemical (potential) energy. It can be turned into electrical energy, then into light and heat in your smartphone (both types of kinetic energy).

So let’s start by looking at one of the most fascinating, in my eyes, types of energy:

Thermal energy


Infrared photograph of a group of people.
Image credits Nevit Dilmen / Wikimedia.

As a rule of thumb, every type of energy can be converted into other types — thermal energy is the one awkward exception, as we’ll see. Please note at this time that what we perceive as heat isn’t thermal energy per se, but the transfer of thermal energy. Something that feels warm to the touch has more thermal energy than you — and you’re receiving it. Something that feels cold draws energy from you. The energy flow is what you perceive as ‘hot’ or ‘cold’.

Temperature, then, is a measure of how much thermal energy the particles of an object carry, on average.

Thermal energy itself is the disorderly movement of particles inside an object. It is the sum of the kinetic and potential energy of molecules moving, rotating, or vibrating in a random manner. Thermal energy is randomly distributed among these particles or atoms, and as such is a measure of entropy — a physical system’s lack of order or predictability. The second law of thermodynamics says that the entropy of an isolated system never decreases. In plain ol’ English, this second law basically says that you can’t take a hot object (high entropy state) and cool it down (low entropy state) without draining that energy somewhere else.

In a roundabout way, that also means thermal energy can’t be transformed, only transferred. There’s nothing wrong with thermal energy, but it is so disorderly that we can’t effectively channel it to transform it. Thermal energy also wants to even out as much as possible over as wide a volume as possible (ideally, across the whole Universe, in its book). This, alongside its workless nature, is why thermal energy is often seen as a ‘residual’ type of energy that all other energy degrades into.

The interplay between thermal energy and physical work is enshrined in the first law of thermodynamics. This law also shows us how heat, i.e. thermal energy imbalances, can be used to perform work. In short, it says that the change in a closed system’s internal energy equals the heat it absorbs minus the work it performs.
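Written out, the first law is usually given as ΔU = Q − W: heat added to the system, minus work done by the system. A minimal sketch with invented numbers:

```python
def internal_energy_change(heat_in, work_out):
    """First law of thermodynamics: dU = Q - W.
    Q is heat absorbed by the system, W is work done BY the system."""
    return heat_in - work_out

# Invented numbers: a gas absorbs 500 J of heat and does 200 J of work
# pushing a piston outward, so its internal energy rises by 300 J.
print(internal_energy_change(heat_in=500.0, work_out=200.0))  # 300.0
```

The sign convention matters: work done *by* the system drains its internal energy, which is exactly how a steam engine trades heat for motion.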

Thermal energy itself can’t perform work, but an imbalance and subsequent transfer of thermal energy can. A hot oven is more energetic than a cool oven, but neither moves by itself. The fires roaring in a steam locomotive’s furnace don’t directly drive the thing forward. They heat up water, however, which then expands into steam, and this (heat-induced) change in state and volume is converted into motion. If you want to get all physical about it, the motions of individual water particles get so hectic that the water turns to steam; the motions of these steam molecules then get transferred (via impact) to the pistons they drive in the engine, effectively converting thermal energy flow into motion.

So, to sum it up, thermal energy is the hipster of energies. By itself, it cannot be converted into other types of energies. Only differences in thermal energy can be transformed/used to perform work. The efficiency of such processes will never be 100% — you will never be able to recover all the energy in heat.

Mechanical energy


An old steam engine used to drain water from mine shafts somewhere in Germany.
Image via Pixabay.

Mechanical energy is the total potential and kinetic energy resulting from the movement, or current location, of physical objects.

Kinetic mechanical energy characterizes physical bodies in motion and equals half the product of a body’s mass and the square of its velocity. The heavier something is, and the faster it moves, the harder it is to stop (i.e. the more kinetic energy it has). Potential mechanical energy depends on the body’s position relative to other bodies.

Potential mechanical energy is often associated with forces that apply work against the field of a conservative force. Conservative forces are forces independent of the path of motion, such as gravity or electrostatic interactions between particles. The easiest way to illustrate potential mechanical energy is by imagining you’re carrying a bucket of water up a flight of stairs. If you then dump the water, it will flow down to the ground. You stored potential energy in the water by acting against the gravitational field (i.e. you lifted the water). When you released it from the bucket, that water expended its potential energy as kinetic energy under the action of gravity.

An interesting property of mechanical energy is that in an isolated, ideal system, it is constant. In real systems, however, non-conservative forces (such as friction or air drag) will eventually sap mechanical energy, turning it into heat.

Chemical energy

Do you know what has a lot of chemical energy? Chocolate. But if you want something that releases a lot of chemical energy all at once, you need dynamite.


Or dynamite’s much-feared bigger brother: the Diet Coke and Mentos.
Image via Wikipedia.

Chemical energy is potential energy stored inside a substance’s chemical bonds. Our bodies break open bonds during cellular respiration to obtain this type of energy. Chemical energy is also released when we blow up a stick of dynamite, when feeding wood into a fireplace, when pressing the gas pedal, and as the battery in your smartphone generates electricity.

If a substance can react with another to undergo a transformation through a chemical reaction, it has chemical energy. That energy is equal to the difference between the energy content of the products and the reactants (if the temperature remains constant). It doesn’t much matter what, exactly, that change is — it can be a change in how a molecule’s atoms are arranged; it can involve the breakdown and creation of new products. As long as a chemical change takes place, it will either generate or absorb energy.
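One way to put numbers on “products minus reactants” is the bond-energy shortcut. The bond energies below are typical textbook averages (real tables vary by a few percent), so the result is a ballpark figure, not a measured one:

```python
# Rough energy budget for burning methane: CH4 + 2 O2 -> CO2 + 2 H2O.
# Average bond energies in kJ/mol (approximate textbook values).
BOND_ENERGY = {"C-H": 413, "O=O": 498, "C=O": 799, "O-H": 467}

energy_to_break = 4 * BOND_ENERGY["C-H"] + 2 * BOND_ENERGY["O=O"]  # reactant bonds
energy_released = 2 * BOND_ENERGY["C=O"] + 4 * BOND_ENERGY["O-H"]  # product bonds

# Net change: a negative number means the reaction gives off energy.
delta_h = energy_to_break - energy_released
print(delta_h)  # -818 (kJ/mol; the measured value is about -890)
```

The products’ bonds hold more energy than the reactants’ did, and the difference escapes as heat and light — which is all a fire is.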

Combustion, that merry thing keeping the world going, is a superb example of chemical energy being released. Fire is what happens when oxygen molecules bind to various compounds, releasing the energy in their bonds.

Electrical energy

This type of energy is the result of the flow of electric charge through a conductor due to electrical attraction or repulsion between charged particles. Electrical energy can be potential (static electricity) or kinetic (when the charges are in motion, i.e. electrical current).

It is generated from differences in electrical potential between two or more objects in a given system. It can also be generated by kinetic force, through the movement of a copper wire loop or disk around the poles of a magnet. Generally speaking, this works because the electrons in the copper wire are free to move about as they please.


Very large copper wires and very big magnets. This is the rotor and stator for a generator at the Vargöns hydroelectric power plant in Sweden. The outer diameter is 11.4 m.
Image credits Tekniska museet.

Each electron is negatively charged, so it will be attracted to positively-charged particles and pushed away by other negatively-charged particles. You can also see this as the electron attracting certain particles while repulsing others — in other words, each charged particle has a tiny electric field around it that can exert a force on other particles, causing them to move (force over distance is physical work). Generators function by supplying force to move these charged particles around, causing them to move other charged particles, in turn, generating electricity.

A moving charged particle will always generate a magnetic field, and a changing magnetic field will induce an electric current in a conductor. That’s why these two are usually clumped together under the banner of ‘electromagnetism’.

Nuclear energy

Nuclear energy is released (or absorbed, mind you) whenever a nuclear reaction, or radioactive decay, occurs. It is the product of differences in the nuclear binding energy of the initial and final states of these transitions. The nuclear binding energy of an atom is defined as the minimum energy needed to break it apart.

In essence, all atoms are made up of particles and the forces holding these particles together. Different types of atoms need different amounts of force to keep them together. When an atom undergoes change, or when it is split, this energy is released. Nuclear energy, therefore, is potential energy.
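A worked example of binding energy: a helium-4 nucleus weighs measurably less than its separate parts, and that missing mass, via E = mc², *is* the binding energy. The particle masses are standard measured values, rounded:

```python
# Binding energy of helium-4 (2 protons + 2 neutrons) from its mass defect.
M_PROTON = 1.007276       # atomic mass units (u)
M_NEUTRON = 1.008665      # u
M_HE4_NUCLEUS = 4.001506  # u -- lighter than the sum of its parts
U_TO_MEV = 931.494        # energy equivalent of 1 u, from E = m c^2

mass_defect = 2 * M_PROTON + 2 * M_NEUTRON - M_HE4_NUCLEUS
binding_energy = mass_defect * U_TO_MEV  # energy needed to pull He-4 apart

print(round(binding_energy, 1))  # 28.3 (MeV)
```

That ~28 MeV per nucleus — millions of times more than a typical chemical bond — is why nuclear reactions dwarf chemical ones.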

In any exothermic (net-energy-positive) nuclear process, nuclear mass might ultimately be converted to thermal energy, and given off as heat. Fission and fusion are the best-known nuclear transformations that release energy. The Sun and all other stars are directly powered by nuclear fusion.

Depending on how technical you want to go with the classifications, you could draw up other types of energy. Elastic energy describes how stretchy things revert to their shape when you let go, for example. However, I tried my hardest to give you the overarching ‘flavors’ of energy, ones that can reasonably fit all other subtypes (elastic energy is a form of mechanical energy). But, if you feel I left something interesting out, perform some physical work on your keyboard and let me know in the comments below!


An AI recreated the periodic table from scratch — in a couple of hours

A new artificial intelligence (AI) program developed at Stanford recreated the periodic table from scratch — and it only needed a couple of hours to do so.


If you’ve ever wondered how machines learn, this is it — in picture form. (A) shows atom vectors of 34 main-group elements and their hierarchical clustering based on distance. The color in each cell stands for value of the vector on that dimension.
Image credits Zhou et al., 2018, PNAS.

Running under the alluring name of Atom2Vec, the software learned to distinguish between different atoms starting from a database of chemical compounds. After it learned the basics, the researchers left Atom2Vec to its own devices. Using methods and processes related to those in the field of natural language processing — chiefly among them, the idea that the nature of words can be understood by looking at other words around it — the AI successfully clustered the elements by their chemical properties.

It only took Atom2Vec a couple of hours to perform the feat; roughly speaking, it re-created the periodic table of elements, one of the greatest achievements in chemistry. It took us hairless apes nearly a century of trial-and-error to do the same.

I’m you, but better

The Periodic Table of elements was initially conceived by Dmitri Mendeleev in the mid-19th century, well before many of the elements we know today had been discovered, and certainly before there was even an inkling of quantum mechanics and relativity lurking beyond the boundaries of classical physics. Mendeleev recognized that certain elements fell into groups with similar chemical features, and this established a periodic pattern (hence the name) to the elements as they went from lightweight elements like hydrogen and helium to progressively heavier ones. In fact, Mendeleev could predict the very specific properties and features of as-yet-undiscovered elements due to blank spaces in his unfinished table. Many of these predictions turned out to be correct when the elements filling the blank spots were finally discovered.

“We wanted to know whether an AI can be smart enough to discover the periodic table on its own, and our team showed that it can,” said study leader Shou-Cheng Zhang, the J. G. Jackson and C. J. Wood Professor of Physics at Stanford’s School of Humanities and Sciences.

Zhang’s team designed Atom2Vec starting from an AI platform (Word2Vec) that Google built to parse natural language. The software converts individual words into vectors (numerical codes). It then analyzes these vectors to estimate the probability of a particular word appearing in a text based on the presence of other words.

The word “king” for example is often accompanied by “queen”, and the words “man” and “woman” often appear together. Word2Vec works with these co-appearances and learns that, mathematically, “king = a queen minus a woman plus a man,” Zhang explains. Working along the same lines, the team fed Atom2Vec all known chemical compounds (such as NaCl, KCl, and so on) in lieu of text samples.
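The king/queen arithmetic can be demonstrated with hand-made toy vectors. These two-dimensional ‘embeddings’ (dimensions: royalty and gender) are invented for illustration — the real Word2Vec learns vectors with hundreds of dimensions from actual text:

```python
import math

# Hand-made 2-D word vectors: (royalty, gender). Invented, not learned.
vectors = {
    "king":  (1.0,  1.0),
    "queen": (1.0, -1.0),
    "man":   (0.0,  1.0),
    "woman": (0.0, -1.0),
}

def nearest(point, exclude):
    """The vocabulary word whose vector lies closest to `point`."""
    return min((w for w in vectors if w not in exclude),
               key=lambda w: math.dist(vectors[w], point))

# king - man + woman, component by component:
k, m, w = vectors["king"], vectors["man"], vectors["woman"]
result = (k[0] - m[0] + w[0], k[1] - m[1] + w[1])

print(nearest(result, exclude={"king"}))  # queen
```

Subtracting “man” and adding “woman” flips the gender component while leaving royalty untouched, landing exactly on “queen”.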

It worked surprisingly well. Even from this relatively tiny sample size, the program figured out that potassium (K) and sodium (Na) must be chemically-similar, as both bind to chlorine (Cl). Through a similar process, Atom2Vec established chemical relationships between all the species in the periodic table. It was so successful and fast in performing the task that Zhang hopes that in the future, researchers will use Atom2Vec to discover and design new materials.
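A bare-bones sketch of that idea: describe each metal by which partners it shows up with, then compare the descriptions. The compound list here is tiny and hand-picked, and raw co-occurrence counts stand in for the learned vectors of the real Atom2Vec:

```python
import math

# Toy 'database' of compounds as (metal, partner) pairs.
compounds = [("Na", "Cl"), ("Na", "Br"), ("K", "Cl"), ("K", "Br"),
             ("Ca", "O"), ("Mg", "O")]
partners = ["Cl", "Br", "O"]

def vector(metal):
    """Co-occurrence counts of `metal` with each partner element."""
    return [sum(1 for m, p in compounds if m == metal and p == partner)
            for partner in partners]

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# Na and K bind to the same things, so the model sees them as similar;
# Na and Ca share no partners here, so they look unrelated.
print(round(cosine(vector("Na"), vector("K")), 3))   # 1.0
print(round(cosine(vector("Na"), vector("Ca")), 3))  # 0.0
```

Grouping elements by these similarities recovers the periodic table’s columns — which is, in miniature, what Atom2Vec did.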

Future plans

“For this project, the AI program was unsupervised, but you could imagine giving it a goal and directing it to find, for example, a material that is highly efficient at converting sunlight to energy,” he said.

As impressive as the achievement is, Zhang says it’s only the first step. The endgame is more ambitious — Zhang hopes to design a replacement for the Turing test, the gold standard for gauging machine intelligence. To pass the Turing test, a machine must be capable of responding to written questions in such a way that users won’t suspect they’re chatting with a machine; in other words, a machine will be considered as intelligent as a human if it seems human to us.

However, Zhang thinks the test is flawed, as it is too subjective.

“Humans are the product of evolution and our minds are cluttered with all sorts of irrationalities. For an AI to pass the Turing test, it would need to reproduce all of our human irrationalities,” he says. “That’s very difficult to do, and not a particularly good use of programmers’ time.”

He hopes to take the human factor out of the equation by having machine intelligence try to discover new laws of nature. Nobody’s born educated, however, not even machines, so Zhang is first checking to see whether AIs can reproduce some of the most important discoveries we’ve made, without help. By recreating the periodic table, Atom2Vec has achieved this goal.

The team is now working on the second version of the AI. This one will focus on cracking a frustratingly-complex problem in medical research: it will try to design antibodies to attack the antigens of cancer cells. Such a breakthrough would offer us a new and very powerful weapon against cancer. Currently, we treat the disease with immunotherapy, which relies on such antibodies already produced by the body; however, our bodies can produce over 10 million unique antibodies, Zhang says, by mixing and matching between some 50 separate genes.

“If we can map these building block genes onto a mathematical vector, then we can organize all antibodies into something similar to a periodic table,” Zhang says.

“Then, if you discover that one antibody is effective against an antigen but is toxic, you can look within the same family for another antibody that is just as effective but less toxic.”

The paper “Atom2Vec: Learning atoms for materials discovery,” has been published in the journal PNAS.


We’re made of stardust, but heavier elements are made of black-hole-and-neutron-star dust

Heavier chemical elements could be the love child of two very spectacular and exotic lovers — neutron stars and tiny black holes.


Image via Digitaltrends.

We are, as Carl Sagan once put it, “made of star stuff.” Considering that what we call a star is actually a ginormous reactor mashing hydrogen and helium atoms into more complex elements, such as carbon, oxygen, or iron, that’s pretty true. But we needn’t look very hard around us for elements that outshine our particularly starry heritage. R-process elements (which are much heavier than iron — such as gold or uranium) could be sired by nature’s two most extreme creations.

Stellar birth

So let’s take a step back and look at how elements form. The lightest, simplest atom, hydrogen, was created during the Big Bang, with some helium and traces of lithium and beryllium peppered in. This mix started to clump together in areas of higher density (to accrete), which eventually led to the formation of the first stars — stars that went on to forge more complex elements. But there is a caveat: a living star simply isn’t powerful enough to fuse stuff past iron. Even supermassive stars, the biggest out there, can’t do it.

Think of a star as an explosion so massive, its sheer weight and gravitational pull causes it to fall back on itself. So a star can only exist while there’s a balance between two forces — the energy of fusion reactions trying to blast it apart, and gravity straining to keep it together. This works quite well for our granddaddy’os up to the point where they build up a respectable silicon core.

Because, as aging stars everywhere find out, once silicon has been fused into iron, further fusion stops returning on the investment. Even in the ultra-hot, uber-dense conditions of a star’s core, making atoms merge takes a lot of energy — needed to overcome protons’ (positively charged particles) tendency to push other positively charged particles away. However, once you do make them merge, protons and neutrons get along just great thanks to the nuclear force, which makes them stick together.

Up to a point, fusing light atoms produces a more tightly bound nucleus — going from hydrogen (1 proton) to helium (2 protons) and onward toward iron, the binding energy per particle increases. The energy difference has to go somewhere, and it does: it’s released as the heat and light of stars. But there is a turning point where you end up needing to pump more energy into the atoms to make them fuse than the new, heavier nucleus gives back.

The flip-side is that this turning point works both ways — that’s why fusion reactors work with hydrogen but fission reactors work with uranium. You can extract energy by fusing light elements (hydrogen) into more tightly bound nuclei, or by splitting heavy atoms (uranium) and recovering some of the energy it took to forge them. But that’s a story for another time.

What matters right now is that this turning point is iron.

Ms and Mr Dense


Image credits Kevin Gill / Flickr.

Up to now, supernovas and binary star mergers were believed to be the only environments that could supply the conditions needed for higher fusion. But now, a team of three theoretical astrophysicists at the University of California — George Fuller, Alex Kusenko and Volodymyr Takhistov — offers another event that could produce these elements: the merger between a tiny black hole and a neutron star.

“A different kind of furnace was needed to forge gold, platinum, uranium and most other elements heavier than iron,” Fuller, a theoretical astrophysicist and professor of physics who directs UC San Diego’s Center for Astrophysics and Space Sciences and first author of the paper, explained in a statement.

“These elements most likely formed in an environment rich with neutrons.”

Neutron stars are immensely dense. They’re what’s left in the wake of stellar collapses and supernovae, a kernel of ultra-packed matter. A ‘normal’ atom has a nucleus, an electron shell, and a lot of empty space between the two. Neutron stars are like a huge atomic nucleus, made of back-to-back neutrons held together by gravity. To get an idea of what “immensely dense” means, it’s estimated that a spoonful of the stuff neutron stars’ surfaces are made of weighs about three billion tons.
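The spoonful claim checks out as an order-of-magnitude estimate. The density below is a commonly quoted ballpark for neutron star matter (the actual figure varies with depth), and the spoon volume is my assumption:

```python
# Back-of-the-envelope mass of one teaspoon of neutron star matter.
DENSITY = 4e17   # kg/m^3 -- ballpark density of neutron star matter
TEASPOON = 5e-6  # m^3, i.e. about 5 millilitres (assumed spoon size)

mass_kg = DENSITY * TEASPOON
mass_tonnes = mass_kg / 1000
print(f"{mass_tonnes:.1e} tonnes")  # 2.0e+09 -- billions of tonnes
```

A few billion tonnes per spoonful, consistent with the figure quoted above.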

The other half of this lovely couple is even denser — a tiny black hole, weighing between 10^-14 and 10^-8 solar masses. Unlike neutron stars, we’re not actually sure they exist. But a lot of researchers (including Stephen Hawking) believe they’re out there, a byproduct of the Big Bang, and that they could make up part of the dark matter — for whose existence we have strong evidence. If these micro black holes follow the distribution of dark matter in space, they’ll often co-exist with neutron stars. And this, the team argues, sets the stage for heavier elements to form.

Their calculations show that if a neutron star captures such a black hole and gets devoured from the inside out by it, some of the dense neutron star matter can get thrown out into space by the ferocity of the event. Even a tiny bit of this matter is enough to seed the formation of a huge quantity of heavy elements, since it’s so dense.

“As the neutron stars are devoured,” Fuller explains, “they spin up and eject cold neutron matter, which decompresses, heats up and makes these elements.”

“In the last milliseconds of the neutron star’s demise, the amount of ejected neutron-rich material is sufficient to explain the observed abundances of heavy elements.”

The team’s theory is especially intriguing since it also helps answer a few other open questions about the universe. For example, it explains why there aren’t that many neutron stars in the galactic core, despite the region teeming with black holes to hunt them down. Moreover, the team says that the ejection of nuclear matter from tiny black holes chowing down on neutron stars would explain three mysterious astronomical phenomena.

“They are a distinctive display of infrared light [‘kilonovas’], a radio emission that may explain the mysterious Fast Radio Bursts from unknown sources deep in the cosmos, and the positrons detected in the galactic center by X-ray observations,” Fuller concludes.

“Each of these represent long-standing mysteries. It is indeed surprising that the solutions of these seemingly unrelated phenomena may be connected with the violent end of neutron stars at the hands of tiny black holes.”

The paper “Primordial Black Holes and r-Process Nucleosynthesis” has been published in the preprint archive arXiv.

New plasma printing technique can deposit nanomaterials on flexible, 3D substrates

A new nanomaterial printing method could make it both easier and cheaper to create devices such as wearable chemical and biological sensors, data storage and integrated circuits — even on flexible surfaces such as paper or cloth. The secret? Plasma.

The nozzle firing a jet of carbon nanotubes with helium plasma off and on. When the plasma is off, the density of carbon nanotubes is small. The plasma focuses the nanotubes onto the substrate with high density and good adhesion.
Image credits NASA Ames Research Center.

Printing layers of nanoparticles or nanotubes onto a substrate doesn’t necessarily require any fancy hardware — in fact, the most common method today is to use an inkjet printer very similar to the one you might have in your home or office. Although these printers are cost-efficient and have stood the test of time, they’re not without limitations. They can only print on rigid materials, and only with liquid ink — and not all materials can easily be made into a liquid. But probably the most serious limitation is that they can only print on 2D objects.

Aerosol printing techniques solve some of these problems. They can be employed to deposit smooth, thin films of nanomaterials on flexible substrates. But because the “ink” has to be heated to several hundreds of degrees to dry, using flammable materials such as paper or cloth remains a big no-no.

A new printing method developed by researchers from NASA Ames and SLAC National Accelerator Laboratory works around these issues. The plasma-based printing system doesn’t require a heat-treating phase — in fact, the whole process takes place at temperatures around 40 degrees Celsius. It also doesn’t require the printing material to be liquid.

“You can use it to deposit things on paper, plastic, cotton, or any kind of textile,” said Meyya Meyyappan of NASA’s Ames Research Center.

“It’s ideal for soft substrates.”

The team demonstrated their technique by covering a sheet of paper in a layer of carbon nanotubes. To do this, they blasted a mixture of carbon nanotubes and helium-ion plasma through a nozzle directly onto the paper. Because the plasma focuses the particles onto the paper’s surface, they form a well consolidated layer without requiring further heat-treatment.

They then printed two simple sensors, one chemical and one biological. By adding certain molecules to the nanotube-plasma cocktail, the team can tune the tubes’ electrical resistance and their response to specific compounds. The chemical sensor was designed to detect ammonia gas, while the biological one was tailored to respond to dopamine, a neurotransmitter linked to disorders like Parkinson’s and epilepsy.

But these are just simple proof-of-concept constructs, Meyyappan said.

“There’s a wide range of biosensing applications,” she added.

Applications like monitoring cholesterol levels, checking for pathogens or hormonal imbalances, to name a few.

This method is very versatile and can easily be scaled up — just add more nozzles. For example, a shower-head type system could print large surfaces at once. Alternatively, it could be designed to act like a hose, spraying nanomaterials on 3D surfaces.

“It can do things inkjet printing cannot do,” Meyyappan said. “But [it can do] anything inkjet printing can do, it can be pretty competitive.”

Meyyappan said that the method is ready for commercial applications, and should be relatively simple and inexpensive to develop. The team is now tweaking their technique to allow for other printing materials, such as copper. This would allow them to print materials used for batteries onto thin sheets of metal such as aluminum. The sheet can then be rolled up to make very tiny, very powerful batteries for cellphones or other devices.

The full paper, titled “Plasma jet printing for flexible substrate”, has been published online in the journal Applied Physics Letters and can be read here.

Influential few predict behavior of the many – on all scales

As Niels Bohr once pointed out, to fully understand how a living organism works, you’d have to take it apart down to its smallest components; since this is not something you can actually do, organisms, which are systems of very high complexity, are impossible to track and understand in all their details.

The few and the many

But by using some very creative mathematics that reveals the state of complex systems by tracking only a selected few of their components, network-theory researchers seem to have come up with some really unique solutions. For example, say you wanted to track all the biological markers that associate some people with a certain disease. You can track down all the genes that are expressed differently in people with the disease and create a network that shows their interactions, but how do you pick the genes connected to the disease from the ones that are coincidentally different?

Yang-Yu Liu of Northeastern University in Boston and his colleagues believe they have found an answer. To prove their technique, they analyzed the entire human metabolic network and found that the concentrations of about 10% of the body’s 2,763 metabolites were enough to predict the levels of all the rest. And however many applications this could have in medicine, the possibilities are far wider. The same technique could be applied to identify the people whose opinions determine everyone else’s, helping in political predictions, or to help ecologists single out the particular species they need to track to follow changes in an entire ecosystem. The potential applications are virtually limitless.

Needling around

To see how this works, imagine a very simple network with two chemicals, A and B, where chemical A becomes chemical B. Because any change in B is determined exclusively by A, monitoring B over time also enables you to determine the state of A. The same would not be true if you monitored only A: without knowing the initial level of B, changes in A aren’t enough to determine the level of B.
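A quick numerical sketch of this toy network (my own illustration, with made-up rate constant and starting levels) shows the asymmetry:

```python
# Toy two-chemical network from the example above: A turns into B at rate k,
# so dB/dt = k*A. An observer who records only B can recover A from B's
# rate of change; recording only A says nothing about B's starting level.
k, dt = 0.5, 0.001
A, B = 10.0, 3.0               # true levels; B's start is unknown to the observer

B_trace = []
for _ in range(2000):          # simple Euler integration over t = 0..2
    dA = -k * A * dt
    A += dA
    B -= dA                    # whatever A loses, B gains
    B_trace.append(B)

# Reconstruct A from the monitored B alone: A = (dB/dt) / k
A_est = (B_trace[-1] - B_trace[-2]) / (k * dt)
print(round(A_est, 2))         # 3.68, matching the true final level of A
```

Running the reverse experiment, recording only A, leaves B undetermined: any initial level of B is consistent with the same A trajectory.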


All of this is simple and fine, but real-life networks are nowhere near as simple. Liu’s team tackled the problem by examining clusters of strongly connected components in a network, representing the flow of information between them with arrows (in the previous example, the arrow would lead from B to A, but not vice versa). Their results were pretty amazing: they found that most of the time, and almost always in real-world networks, a small set of selected nodes alone is sufficient to determine the state of every other node in the network. From a theoretical standpoint, it would even be possible (although extremely hard) to reconstruct the entire network from these nodes. But as the team explains, this is not really necessary:

“This paper shows how you can reduce a network to the really important component parts that drive the system’s behaviour,” says Joseph Loscalzo of Harvard Medical School in Boston, Massachusetts. “It begins to make the system more tractable,” adds Loscalzo, who would like to apply the technique to medicine.
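To make the component idea concrete, here is a minimal sketch of the selection step (my own illustration, not the authors’ code): condense a directed inference diagram into strongly connected components (SCCs), then place a sensor in every component that no other component points into, since monitoring those determines the rest.

```python
from collections import defaultdict

def sccs(nodes, edges):
    """Kosaraju's algorithm: return the SCCs and a node -> component label map."""
    graph, rgraph = defaultdict(list), defaultdict(list)
    for u, v in edges:
        graph[u].append(v)
        rgraph[v].append(u)

    seen, order = set(), []
    def dfs_forward(u):                 # first pass: record finish order
        seen.add(u)
        for v in graph[u]:
            if v not in seen:
                dfs_forward(v)
        order.append(u)
    for n in nodes:
        if n not in seen:
            dfs_forward(n)

    comp = {}
    def dfs_reverse(u, label):          # second pass: label components
        comp[u] = label
        for v in rgraph[u]:
            if v not in comp:
                dfs_reverse(v, label)
    for n in reversed(order):
        if n not in comp:
            dfs_reverse(n, n)

    groups = defaultdict(set)
    for n, label in comp.items():
        groups[label].add(n)
    return list(groups.values()), comp

# Inference diagram for the two-chemical example:
# the arrow B -> A means "B's trajectory reveals A's state".
nodes = ["A", "B"]
edges = [("B", "A")]

components, comp = sccs(nodes, edges)

# Sensors go in "root" components: those no other component points into.
has_incoming = {comp[v] for u, v in edges if comp[u] != comp[v]}
sensors = [c for c in components if comp[next(iter(c))] not in has_incoming]
print(sensors)   # [{'B'}]: monitor B and everything else follows
```

On the toy network this picks B, matching the intuition above that watching B alone determines A.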

Via Nature