Tag Archives: Artificial

Researchers devise fast, relatively cheap way of building diamonds

Raw diamond.
Image credits Robert Matthew Lavinsky.

The process of making one such faux diamond starts with a handful of white dust that gets compressed in a diamond-lined pressure chamber, then shot at with a laser. The combination of extreme pressure and heat turns the raw material into pure diamond — just like Mother Nature makes them.

Diamonds on the cheap

“What’s exciting about this paper is it shows a way of cheating the thermodynamics of what’s typically required for diamond formation,” said Rodney Ewing, Stanford geologist and co-author of the paper.

The process described by the team uses heat and pressure to turn hydrogen and carbon molecules derived from crude oil and natural gas into literal diamonds. It’s not the first process to try and produce the gem, and indeed not even the first successful one at that — but it is currently the cheapest, most efficient one that produces the highest-quality diamonds.

“We wanted to see just a clean system, in which a single substance transforms into pure diamond — without a catalyst,” Sulgiye Park, the study’s lead author and postdoctoral research fellow at Stanford’s School of Earth, Energy & Environmental Sciences (Stanford Earth) told phys.org.

Natural diamonds form from carbon hundreds of kilometers beneath the surface. The ones we can reach and mine out of the ground were carried up toward the surface, after formation, by ancient volcanic eruptions. The ones the team produces start out as a mixture of three powders derived from petroleum and natural gas: diamondoids, small hydrogen-and-carbon molecules whose carbon atoms are already arranged in the same structure as in a diamond.
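
For a sense of the chemistry, here is an illustrative overall equation using adamantane (the smallest diamondoid) as the starting material. It is a simplified stoichiometric sketch, not the reaction pathway reported in the paper:

```latex
% Illustrative only: adamantane, the smallest diamondoid, shedding its hydrogen
% under heat and pressure to leave behind pure, diamond-structured carbon.
\mathrm{C_{10}H_{16}} \;\xrightarrow{\;\text{heat + pressure}\;}\; 10\,\mathrm{C_{(diamond)}} \;+\; 8\,\mathrm{H_2}
```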

Image via Wikimedia.

Diamonds immediately make us think of jewelry, but they do have a lot of other cool uses as well. They’re extremely stable chemically, have nice optical properties, very high heat conductivity, and they are the hardest material we’ve found on this good Earth. Industries ranging from metal processing to medicine rely on diamonds for specialized applications. The team hopes that their process will help make diamonds more accessible and more customizable for such applications.

The paper “Facile diamond synthesis from lower diamondoids” has been published in the journal Science Advances.

Researchers teach AI to design, say it did ‘quite good’ but won’t steal your job (yet)

A US-based research team has trained artificial intelligence (AI) in design, with pretty good results.

A roof supported by a wooden truss framework.
Image credits Achim Scholty.

Although we don’t generally think of AIs as creative problem-solvers, a new study suggests they can learn to be. The paper describes the process through which a framework of deep neural networks learned human creative processes and strategies, and how to apply them to create new designs.

Just hit ‘design’

“We were trying to have the [AIs] create designs similar to how humans do it, imitating the process they use: how they look at the design, how they take the next action, and then create a new design, step by step,” says Ayush Raina, a Ph.D. candidate in mechanical engineering at Carnegie Mellon and a co-author of the study.

Design isn’t an exact science. While there are definite no-no’s and rules of thumb that lead to OK designs, good designs require creativity and exploratory decision-making. Humans excel at these skills.

Software as we know it today works wonders within a clearly defined set of rules, with clear inputs and known desired outcomes. That’s very handy when you need to crunch huge amounts of data, or to make split-second decisions to keep a jet stable in flight, for example. However, it’s an appalling skillset for someone trying their hand, or processors, at designing.

The team wanted to see if machines can learn the skills that make humans good designers and then apply them. For the study, they created an AI framework from several deep neural networks and fed it data pertaining to a human going about the process of design.

The study focused on trusses, which are complex but relatively common design challenges for engineers. Trusses are load-bearing structural elements composed of rods and beams; bridges and large buildings make good use of them, for example. Simple in theory, trusses are actually incredibly complex elements whose final shapes are a product of their function, material make-up, and other desired traits (such as flexibility or rigidity, resistance to compression or tension, and so forth).

The framework itself was made up of several deep neural networks which worked together in a prediction-based process. It was shown five successive snapshots of a structure (the design modification sequence for a truss) and then asked to predict the next iteration of the design. The data was the same kind engineers work with when approaching the problem: pixels on a screen. However, the AI wasn’t privy to any further information or context (such as the truss’ intended use). The researchers emphasized visualization in the process because vision is an integral part of how humans perceive the world and go about solving problems.
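
To make the setup concrete, here is a minimal sketch (in PyTorch) of the kind of "see five snapshots, predict the next one" model described above. It is an illustration under our own assumptions; the class name, image size, and layer sizes are made up, and this is not the team's actual network:

```python
# Hypothetical sketch of the "watch five snapshots, predict the next design step" setup.
# The class name, image size and layer sizes are our own assumptions, NOT the paper's architecture.
import torch
import torch.nn as nn

class NextDesignPredictor(nn.Module):
    """Takes the last 5 truss snapshots (stacked as channels) and predicts the next frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(5, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1), nn.Sigmoid(),  # pixel intensities in [0, 1]
        )

    def forward(self, snapshots):          # snapshots: (batch, 5, H, W)
        return self.net(snapshots)         # predicted next frame: (batch, 1, H, W)

# Training then amounts to imitation: minimise the gap between the prediction
# and the design step the human actually took next.
model = NextDesignPredictor()
history = torch.rand(8, 5, 64, 64)         # dummy batch of 5-frame design histories
human_next = torch.rand(8, 1, 64, 64)      # dummy "what the designer drew next"
loss = nn.functional.mse_loss(model(history), human_next)
loss.backward()
```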

In essence, the researchers had their neural networks watch human designers throughout the whole design process, and then try to emulate them. Overall, the team reports, the way their AI approached the design process was similar to that employed by humans. Further testing on similar design problems showed that, on average, the AI can perform just as well as, if not better than, humans. However, the system still lacks many of the advantages a human would have when problem-solving: it worked without a specific goal in mind (a particular weight or shape, for example) and didn’t receive feedback on how successful it was at its task. In other words, while the program could design a good truss, it didn’t understand what it was doing, what the end goal of the process was, or how good it was at it. So while it’s good at designing, it’s still a lousy designer.

All things considered, however, the AI was “quite good” at the task, says co-author Jonathan Cagan, professor of mechanical engineering and interim dean of Carnegie Mellon University’s College of Engineering.

“The AI is not just mimicking or regurgitating solutions that already exist,” Professor Cagan explains. “It’s learning how people solve a specific type of problem and creating new design solutions from scratch.”

“It’s tempting to think that this AI will replace engineers, but that’s simply not true,” said Chris McComb, an assistant professor of engineering design at the Pennsylvania State University and paper co-author.

“Instead, it can fundamentally change how engineers work. If we can offload boring, time-consuming tasks to an AI, like we did in the work, then we free engineers up to think big and solve problems creatively.”

The paper “Learning to Design From Humans: Imitating Human Designers Through Deep Learning” has been published in the Journal of Mechanical Design.

Researchers create fuel from water, CO2, and artificial photosynthesis

New research at the University of Illinois is bringing working artificial photosynthesis one step closer to reality.

Leaf.

Image via Pixabay.

The team has successfully produced fuel from water, carbon dioxide, and visible light through artificial photosynthesis. Their method effectively converts carbon dioxide into longer, more complex molecules, like propane. When fully developed, artificial photosynthesis of this kind could be used to store solar energy in chemical bonds (i.e. fuel) for peak-demand times.

Sunfuel

“The goal here is to produce complex, liquefiable hydrocarbons from excess CO2 and other sustainable resources such as sunlight,” said Prashant Jain, a chemistry professor and co-author of the study.

“Liquid fuels are ideal because they are easier, safer and more economical to transport than gas and, because they are made from long-chain molecules, contain more bonds — meaning they pack energy more densely.”

Plants use photosynthesis to capture energy from sunlight in the form of glucose. Glucose is a relatively energy-dense compound (it’s a sugar), so plants can effectively use it as a type of chemical energy that they assemble from (relatively energy-poor) CO2. Researchers have long strived to recreate this process in the lab, with various degrees of success, as it holds great promise for clean energy applications.

The new study reports on probably the most successful attempt to emulate photosynthesis so far. The artificial process the team developed is driven by visible green light and converts CO2 and water into fuel with a little help from gold nanoparticles that serve as a catalyst. The electron-rich particles of gold absorb the green light and handle the transfer of protons and electrons between water and CO2, broadly playing the same role as the pigment chlorophyll does in natural photosynthesis.
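
As a rough illustration of what that conversion amounts to for one of the reported products, propane, the overall textbook stoichiometry looks like this (a balanced summary, not the mechanism described in the paper):

```latex
% Overall balance for light-driven propane synthesis (illustrative):
3\,\mathrm{CO_2} + 4\,\mathrm{H_2O} \;\xrightarrow{\;h\nu,\ \text{Au catalyst}\;}\; \mathrm{C_3H_8} + 5\,\mathrm{O_2}
```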

Gold nanoparticles work particularly well in this role, says Jain, because their surfaces interact with CO2 molecules in just the right way. They’re also pretty efficient at absorbing light and do not break down or degrade like other metals do.

While the resulting fuel can simply be combusted to retrieve all that energy, it wouldn’t be the best approach, the team writes. Simply burning it re-releases all the CO2 back into the atmosphere, which is counterproductive to the notion of harvesting and storing solar energy in the first place, says Jain.

“There are other, more unconventional potential uses from the hydrocarbons created from this process,” he says.

“They could be used to power fuel cells for producing electrical current and voltage. There are labs across the world trying to figure out how the hydrocarbon-to-electricity conversion can be conducted efficiently.”

Exciting though the development might be, the team acknowledges that their artificial photosynthesis process is nowhere near as efficient as it is in plants.

“We need to learn how to tune the catalyst to increase the efficiency of the chemical reactions,” he said.

“Then we can start the hard work of determining how to go about scaling up the process. And, like any unconventional energy technology, there will be many economic feasibility questions to be answered, as well.”

The paper “Plasmonic photosynthesis of C1–C3 hydrocarbons from carbon dioxide assisted by an ionic liquid” has been published in the journal Nature Communications.

New design hotfix could make artificial leaves better than actual leaves

A new design could bring artificial leaves out of the lab to convert CO2 into raw materials for fuel.

Leaf.

Image credits Jeon Sang-O.

The idea behind artificial leaves isn’t very complicated — just make them do the same job regular leaves perform, but faster, if possible. Despite this, we’ve had a hard time actually delivering on the idea outside of laboratory conditions. New research, however, could improve on the technology enough to make it viable in the real world.

Leaf it to the catalysts

The sore point with our present artificial leaves is that they simply don’t gobble up CO2 at the low concentrations found in the atmosphere.

“So far, all designs for artificial leaves that have been tested in the lab use carbon dioxide from pressurized tanks. In order to implement successfully in the real world, these devices need to be able to draw carbon dioxide from much more dilute sources, such as air and flue gas, which is the gas given off by coal-burning power plants,” said Meenesh Singh, assistant professor of chemical engineering in the UIC College of Engineering and corresponding author on the paper.

While artificial leaves are meant to mimic photosynthesis, even our most refined leaves only work if supplied with pure, pressurized CO2 from tanks in the lab. It’s good that they work, as it means we’re on the right track, but they’re not usable in practical applications. Because they only work with high concentrations of CO2, they can’t be used to scrub this gas out of the wider atmosphere, which is what we want to do with them.

Researchers at the University of Illinois at Chicago, however, propose a design solution that could fix this shortcoming. Their relatively simple addition to the design would make artificial leaves over 10 times more efficient than their natural counterparts at absorbing CO2. The gas can then be converted to fuel, they add.

Singh and his colleague Aditya Prajapati, a graduate student in his lab, say that all we need to do is encapsulate the artificial leaf in a transparent capsule made of a semi-permeable quaternary ammonium resin membrane and filled with water. As the water inside evaporates through the membrane, it pulls in CO2 from the air.

Artificial leaf.

A schematic showing the main principles behind this process.
Carbon dioxide (red and black) enters the leaf as water (white and red) evaporates from the bottom of the leaf. An artificial photosystem (purple circle at the center of the leaf) made of a light absorber coated with catalysts converts carbon dioxide to carbon monoxide and converts water to oxygen (double red spheres) using sunlight.
Image credits Meenesh Singh.

The artificial photosynthetic unit inside the capsule then converts carbon dioxide to carbon monoxide, which can be siphoned off and used to make fuel. Oxygen is also produced and can either be collected or released into the surrounding environment.
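
In textbook terms, the chemistry in the schematic boils down to two standard half-reactions: water is oxidized to oxygen while CO2 is reduced to CO. These are shown here as a general summary, not as equations taken from the paper:

```latex
% Textbook summary: water oxidation, CO2 reduction, and the overall balance.
2\,\mathrm{H_2O} \rightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \\
\mathrm{CO_2} + 2\,\mathrm{H^+} + 2\,e^- \rightarrow \mathrm{CO} + \mathrm{H_2O} \\
2\,\mathrm{CO_2} \rightarrow 2\,\mathrm{CO} + \mathrm{O_2}
```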

“By enveloping traditional artificial leaf technology inside this specialized membrane, the whole unit is able to function outside, like a natural leaf,” Singh said.

The duo estimates that 360 such leaves, each measuring 1.7 meters by 0.2 meters (5.5 by 0.6 feet), could produce around half a ton of carbon monoxide per day. Spread over a 500 sq meter area, the leaves could reduce CO2 levels by 10% within 100 meters of the array in a single day, they add.
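
Taking the quoted figures at face value, and reading “half a ton” as 500 kg (an assumption on our part), a quick back-of-the-envelope check gives the per-leaf numbers:

```python
# Back-of-the-envelope check of the figures quoted above
# (reading "half a ton" as 500 kg -- an assumption on our part).
n_leaves   = 360
leaf_area  = 1.7 * 0.2            # m^2 per leaf
co_per_day = 500.0                # kg of carbon monoxide per day, whole array

total_area = n_leaves * leaf_area
print(f"total leaf area: {total_area:.1f} m^2")                  # ~122 m^2
print(f"CO per leaf:     {co_per_day / n_leaves:.2f} kg/day")    # ~1.4 kg/day
print(f"CO per m^2:      {co_per_day / total_area:.2f} kg/day")  # ~4.1 kg/day
```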

“Our conceptual design uses readily available materials and technology, that when combined can produce an artificial leaf that is ready to be deployed outside the lab where it can play a significant role in reducing greenhouse gases in the atmosphere,” Singh said.

The paper “Assessment of Artificial Photosynthetic Systems for Integrated Carbon Capture and Conversion” has been published in the journal ACS Sustainable Chemistry & Engineering.

Artificial bacteria-killing cells could win the war against drug resistance

Research at the University of California, Davis, has resulted in artificial cells that cannot grow or divide but will unleash a can of whoop-ass on any bacteria they encounter.

The artificial cells mimic some of the properties of living cells but don’t grow and divide.
Image credits Cheemeng Tan / UC Davis.

Although researchers have successfully created artificial cells in the past, they remained stable only in certain conditions. The key limitation was that these cells could only survive in nutrient-rich environments, as they lacked the capacity to feed themselves.

The advance this paper reports is that the team’s ‘Lego block’ artificial cells can survive and work in a wide variety of conditions with limited resources. This greater self-sufficiency was achieved by the team’s efforts in refining the cells’ membranes, cytosol (the ‘soup’ inside cells), and their genetic material.

Teeny-weenie death machinie

“We engineered artificial cells from the bottom-up — like Lego blocks — to destroy bacteria,” said Assistant Professor Cheemeng Tan, who led the work.

“We demonstrated that artificial cells can sense, react and interact with bacteria, as well as function as systems that both detect and kill bacteria with little dependence on their environment.”

The cells are built from liposomes — bubbles with a cell-like lipid membrane — and purified cellular components including proteins, DNA, and assorted metabolites. They have all the fundamental components of live cells, but they’re short-lived and cannot divide, so they can’t make more of themselves.

The cells were forged with a purpose, however — to beat up E. coli bacteria. By tweaking their genetic material, the team designed these cells to pick up on and react to a unique chemical signature given off by E. coli. Laboratory tests showed that once these artificial cells pick up on the scent, they will attack and destroy all E. coli in a culture.

As the cells are much more robust and self-sufficient than previous ‘models’, they can be employed even in less-than-ideal or changing conditions. This enables them to have a much broader scope of potential applications compared to any other artificial cells currently at our disposal.

The team has high hopes for their spawn. The researchers envision using these cells in an antibacterial role, injecting them into patients suffering from infections resistant to conventional treatments. Alternatively, they might be used for targeted delivery of drugs at specific locations and times, or as biosensors.

The paper “Minimizing Context Dependency of Gene Networks Using Artificial Cells” has been published in the journal ACS Applied Materials & Interfaces.

An AI recreated the periodic table from scratch — in a couple of hours

A new artificial intelligence (AI) program developed at Stanford recreated the periodic table from scratch — and it only needed a couple of hours to do so.

Atom2Vec.

If you’ve ever wondered how machines learn, this is it — in picture form. (A) shows atom vectors of 34 main-group elements and their hierarchical clustering based on distance. The color in each cell stands for the value of the vector on that dimension.
Image credits Zhou et al., 2018, PNAS.

Running under the alluring name of Atom2Vec, the software learned to distinguish between different atoms starting from a database of chemical compounds. After it learned the basics, the researchers left Atom2Vec to its own devices. Using methods and processes related to those in the field of natural language processing — chief among them, the idea that the nature of a word can be understood by looking at the words around it — the AI successfully clustered the elements by their chemical properties.

It only took Atom2Vec a couple of hours to perform the feat; roughly speaking, it re-created the periodic table of elements, one of the greatest achievements in chemistry. It took us hairless apes nearly a century of trial-and-error to do the same.

I’m you, but better

The Periodic Table of elements was initially conceived by Dmitri Mendeleev in the mid-19th century, well before many of the elements we know today had been discovered, and certainly before there was even an inkling of quantum mechanics and relativity lurking beyond the boundaries of classical physics. Mendeleev recognized that certain elements fell into groups with similar chemical features, and this established a periodic pattern (hence the name) to the elements as they went from lightweight elements like hydrogen and helium, to progressively heavier ones. In fact, Mendeleev could predict the very specific properties and features of, as yet, undiscovered elements due to blank spaces in his unfinished table. Many of these predictions turned out to be correct when the elements filling the blank spots were finally discovered.

“We wanted to know whether an AI can be smart enough to discover the periodic table on its own, and our team showed that it can,” said study leader Shou-Cheng Zhang, the J. G. Jackson and C. J. Wood Professor of Physics at Stanford’s School of Humanities and Sciences.

Zhang’s team designed Atom2Vec starting from an AI platform (Word2Vec) that Google built to parse natural language. The software converts individual words into vectors (numerical codes). It then analyzes these vectors to estimate the probability of a particular word appearing in a text based on the presence of other words.

The word “king” for example is often accompanied by “queen”, and the words “man” and “woman” often appear together. Word2Vec works with these co-appearances and learns that, mathematically, “king = a queen minus a woman plus a man,” Zhang explains. Working along the same lines, the team fed Atom2Vec all known chemical compounds (such as NaCl, KCl, and so on) in lieu of text samples.
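
As a toy illustration of the idea, here is gensim’s off-the-shelf Word2Vec run on a handful of made-up “compound sentences”. This is not the actual Atom2Vec code, just the same intuition in miniature: elements that bond to the same partners end up with similar vectors.

```python
# Toy analogue of the approach: treat each compound as a "sentence" of element symbols
# and let an off-the-shelf word2vec model learn element vectors from co-occurrence.
# This uses gensim's Word2Vec on a made-up mini dataset -- it is NOT the Atom2Vec code.
from gensim.models import Word2Vec

compounds = [
    ["Na", "Cl"], ["K", "Cl"], ["Na", "Br"], ["K", "Br"],
    ["Mg", "O"], ["Ca", "O"], ["Mg", "S"], ["Ca", "S"],
]

model = Word2Vec(sentences=compounds, vector_size=8, window=2,
                 min_count=1, sg=1, epochs=500, seed=0)

# Elements that keep the same "company" (bond to the same partners) end up with
# similar vectors. With such a tiny toy corpus the numbers are noisy, but Na-K
# should generally score higher than Na-Ca.
print(model.wv.similarity("Na", "K"))
print(model.wv.similarity("Na", "Ca"))
```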

It worked surprisingly well. Even from this relatively tiny sample size, the program figured out that potassium (K) and sodium (Na) must be chemically-similar, as both bind to chlorine (Cl). Through a similar process, Atom2Vec established chemical relationships between all the species in the periodic table. It was so successful and fast in performing the task that Zhang hopes that in the future, researchers will use Atom2Vec to discover and design new materials.

Future plans

“For this project, the AI program was unsupervised, but you could imagine giving it a goal and directing it to find, for example, a material that is highly efficient at converting sunlight to energy,” he said.

As impressive as the achievement is, Zhang says it’s only the first step. The endgame is more ambitious — Zhang hopes to design a replacement for the Turing test, the golden standard for gauging machine intelligence. To pass the Turing test, a machine must be capable of responding to written questions in such a way that users won’t suspect they’re chatting with a machine; in other words, a machine will be considered as intelligent as a human if it seems human to us.

However, Zhang thinks the test is flawed, as it is too subjective.

“Humans are the product of evolution and our minds are cluttered with all sorts of irrationalities. For an AI to pass the Turing test, it would need to reproduce all of our human irrationalities,” he says. “That’s very difficult to do, and not a particularly good use of programmers’ time.”

He hopes to take the human factor out of the equation by having machine intelligence try to discover new laws of nature. Nobody’s born educated, however, not even machines, so Zhang is first checking whether AIs can reach some of the most important discoveries we’ve made without help. By recreating the periodic table, Atom2Vec has achieved this goal.

The team is now working on the second version of the AI. This one will focus on cracking a frustratingly-complex problem in medical research: it will try to design antibodies to attack the antigens of cancer cells. Such a breakthrough would offer us a new and very powerful weapon against cancer. Currently, we treat the disease with immunotherapy, which relies on such antibodies already produced by the body; however, our bodies can produce over 10 million unique antibodies, Zhang says, by mixing and matching between some 50 separate genes.

“If we can map these building block genes onto a mathematical vector, then we can organize all antibodies into something similar to a periodic table,” Zhang says.

“Then, if you discover that one antibody is effective against an antigen but is toxic, you can look within the same family for another antibody that is just as effective but less toxic.”

The paper “Atom2Vec: Learning atoms for materials discovery,” has been published in the journal PNAS.

Novel cell-in-a-shell is like body armor for tiny living things

Researchers from Imperial College London (ICL) have fused living and non-living cells for the first time; the new entities should allow us to cash in on the abilities of living organisms in harsh environments traditionally reserved for machines.

Cell in shell

An impression of a biological cell (brown) inside the artificial cell (green).
Image credits Imperial College London.

The system encapsulates biological cells within an artificial cell-like casing that allows the two to work together. The system should allow researchers to draw on the natural abilities of biological cells while keeping them safe from environmental threats. For example, the cyborg cells could be used as photosynthesis ‘batteries’, as in-vivo drug factories swimming around your bloodstream, or as biological sensors that can operate in harsh environments.

Rise of the cellborgs

The idea of mixing biological and mechanical systems isn’t new. However, previous work focused on taking part of a cell’s systems, such as certain enzymes or chemical processes, and grafting them into artificial casings. The work at ICL stands out by being the first to take an entire cell and put it in a mechanical shell.

A shell it may be, but it’s far from being just a shell: the artificial component also contains enzymes that work together with those inside the cell to produce new compounds. In the proof-of-concept experiment, the artificial shell produced a fluorescent chemical that allowed the researchers to confirm all was working as expected.

“Biological cells can perform extremely complex functions, but can be difficult to control when trying to harness one aspect,” says lead researcher Professor Oscar Ces, from the Department of Chemistry at ICL. “Artificial cells can be programmed more easily but we cannot yet build in much complexity.”

“Our new system bridges the gap between these two approaches by fusing whole biological cells with artificial ones, so that the machinery of both works in concert to produce what we need. This is a paradigm shift in thinking about the way we design artificial cells, which will help accelerate research on applications in healthcare and beyond.”

The team called on a field of knowledge known as microfluidics — which details the behavior of fluids through small channels — to put the two together. They used water and oil (which don’t mix) to make droplets of a defined size that contained both the cells and enzymes. Then, they applied a protective coating on the droplets, creating the artificial shell.

These cellborgs (not an official term, sadly) were then placed in a concentrated copper solution, since copper is usually toxic to biological cells. The team was able to detect the fluorescent chemicals in most of them after immersion, meaning the biological cells were safe inside their shells, still alive and functioning. This ability suggests the bio-artificial cells would be useful in the human body, where foreign biological cells have to contend with attacks by the body’s immune system.

The team went on to explain that their system is “controllable and customizable,” and that they can create different sizes of artificial cells. Furthermore, the casings can be applied to a wide range of cellular machinery “such as chloroplasts for performing photosynthesis or engineered microbes that act as sensors.”

Next on the list is improving the cellborgs’ functionality by improving the artificial shell, in order to make it act more like a biological membrane with extra functions. For example, if it could be designed to open and release the chemicals produced within only in response to certain signals, the cells could be used to deliver drugs to specific areas of the body. This would allow for greater drug efficiency with fewer side-effects — particularly useful for diseases such as cancer.

There’s still a lot of work to do to get there, but the team says they’ve made a few promising steps in the right direction.

The paper “Constructing vesicle-based artificial cells with embedded living cells as organelle-like modules” has been published in the journal Scientific Reports.

Immortal cells could usher in the age of plentiful, artificial blood for transfusions

Immortalized cell lines could one day be used to create an endless supply of blood for medical uses. A new paper reports the first successful use of such an immortalized line to synthesize blood.

Red blood cells.

Image credits Gerd Altmann.

Blood is really important if you plan on staying alive. But it does have an annoying habit of flowing out and away from pokes and scratches in your body, or during surgery and other medical procedures — so doctors need to have a steady supply on hand at all times to replenish the losses.

Trouble is, doctors today rely on donors to keep stocks of blood up, and there are way more patients than donors. Not only that, they also need to match the blood type of the patient with the donor and make sure they have the right volume of blood. So overall, it can get pretty nerve-racking for doctors to make sure they have enough healthy blood of the right type available when they need it.

Blood on tap

But the life of doctors (and probably vampires) is about to get a whole lot better: a group of scientists at the University of Bristol, along with colleagues from NHS Blood and Transplant, has developed a method that should allow us to produce a virtually endless supply of high-quality artificial blood.

The breakthrough would allow for a steady supply of red blood cells to be produced, which could then be used to create artificial blood for transfusions. There are a number of techniques available today to do just that, but they’re very limited in the number of cells they can produce. For example, certain types of stem cells can be used to produce red blood cells, but the generating sample dies off after producing about 50,000 cells — a typical bag of blood, by comparison, contains somewhere around a trillion such cells.
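
To get a feel for the gap, take the commonly cited figure of roughly a trillion red cells per unit of donated blood and the ~50,000 cells a conventional culture yields before dying off:

```latex
% Rough scale of the problem (order-of-magnitude figures only):
\frac{\sim 10^{12}\ \text{cells per unit of blood}}{\sim 5\times 10^{4}\ \text{cells per conventional culture}} \approx 2\times 10^{7}\ \text{separate cultures per transfusion}
```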

The solution, the team says, is immortalizing the generating cell line, so it never dies off and keeps making red blood cells. One such cell line has been pioneered by the University of Bristol researchers, who named it BEL-A (Bristol Erythroid Line Adult). The secret to their success is that they immortalized the stem cells at an early, premature stage of development. The cells can then be mobilized to divide and produce red blood cells. It’s the first known line of cells which can continuously produce red blood cells and also successfully generate additional lines.

“Cultured red blood cells provide such an alternative and have potential advantages over donor blood, such as a reduced risk of infectious disease transmission, and as the cells are all nascent, the volume and number of transfusions administered to patients requiring regular transfusions (sickle cell disease, thalassaemia myelodysplasia, certain cancers) could be reduced, ameliorating the consequences of organ damage from iron overload” the paper reads.

Their use could allow a constant supply of blood even in hospitals situated in remote and isolated areas, making a huge difference in life-or-death scenarios where doctors won’t have to wait on a shipment of blood to arrive. Another huge implication of the BEL-A cells is that they could finally decouple the patients from donors, meaning people with rare blood types won’t lack for blood due to a shortage in donors with the same blood type.

The researchers say that in addition to its role in supplying blood, BEL-A cells can also prove to be a powerful tool in further research. Right now, the technique is awaiting clinical trials.

The paper “An immortalized adult human erythroid line facilitates sustainable and scalable generation of functional red cells” has been published in the journal Nature Communications.

Artificial intelligence can write classical music like a human composer. It’s the first non-human artist whose music is now copyrighted

There’s nothing stopping a machine designed in our image from performing at least as well as humans in virtually any task. If you’re a skeptic, it’s enough to look at what a company from Luxembourg called Aiva Technologies is doing. Their novel artificial intelligence can write classical music, a genre deeply tied to human sophistication and musical acuity, that is for all intents and purposes on par with works written by human composers.

Image credits Gavin Whitner / Flickr.

There are a lot of startups nowadays working with machine learning techniques to craft artificial intelligence applications for anything from law to search engines. Such technologies have a huge potential for disruption because they can help some organizations drastically improve their productivity or returns. Artificial intelligence can also be a social disrupter as it affects the job market. If you’re employed as a truck or taxi driver, teller, cashier or even as a cook, you run the risk of being sacked in favor of a machine. Some would think creative jobs like writing, painting or music are exempt from such trends because there’s the impression you need inherently human qualities to deliver — but that’s just wishful thinking.

Already, AIs seem much better than people at competitive games like Chess, Go or Poker. You might argue that writing music is a totally different affair from crunching raw data such as chess positions or the probability of holding a winning hand at poker but the way these machines are set up really shouldn’t make any difference.

Mainframe prodigy

Aiva, which stands for Artificial Intelligence Virtual Artist, is based on deep learning algorithms which use reinforcement techniques. Deep learning essentially involves feeding a computer system lots of data so that it can make decisions about other data. All of this information is passed through so-called neural networks, which are algorithms designed to process information like a human brain would. These networks are what allow Google, for instance, to analyze the billions of images in its index much as a human would interpret them; a previous full-length ZME article goes into more depth about how all of this works.

Reinforcement learning means that the artificial intelligence is trained to decide what to do next by being offered a cumulative ‘reward’. Unlike supervised learning, reinforcement learning doesn’t require labeled input and output data. What this means for Aiva, which was fed thousands of classical musical scores from Bach and Mozart to contemporary composers, is that it was never taught music theory. Essentially, Aiva learned music theory by itself after ‘listening’ to all of these scores. No one ever showed the machine what a triad or a seventh chord is, or even what a note duration means.
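
For a flavor of how a network can pick up musical structure without ever being taught it, here is a generic next-note prediction sketch in PyTorch. It is purely illustrative; Aiva’s actual system is proprietary and far more elaborate, and names like NextNoteModel are our own:

```python
# Generic illustration of how a network can absorb "music theory" implicitly:
# it is only ever asked to predict the next note in a score, yet doing that well
# forces it to internalise regularities such as scales and chords.
# Purely a sketch under our own assumptions -- not Aiva's actual (proprietary) system.
import torch
import torch.nn as nn

VOCAB = 128  # e.g. MIDI pitch numbers

class NextNoteModel(nn.Module):
    def __init__(self, vocab=VOCAB, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab)

    def forward(self, notes):              # notes: (batch, time) of note indices
        h, _ = self.lstm(self.embed(notes))
        return self.head(h)                # logits for the next note at every step

model = NextNoteModel()
scores = torch.randint(0, VOCAB, (4, 32))  # dummy batch of note sequences (stand-in for real scores)
logits = model(scores[:, :-1])
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB), scores[:, 1:].reshape(-1))
loss.backward()
```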

“We have taught a deep neural network to understand the art of music composition by reading through a large database of classical partitions written by the most famous composers (Bach, Beethoven, Mozart, etc). Aiva is capable of capturing concepts of music theory just by doing this acquisition of existing musical works,” Aiva told Futurism. 

The startup is in the business of writing and producing musical scores for movies, games, trailers or commercials, and the artificial intelligence acts like a 24-hour composer who never runs out of inspiration and always does what it’s told. Clients come to the company with a brief stating their objectives, then Aiva runs a couple of iterations until the sheet music looks good enough. Then, humans arrange and play the music with live or virtual instruments in a studio.

By now, you must be dying to hear what the machine came up with. Streaming below is Aiva’s first album called Genesis. Spoiler: it all sounds freaking good!

In the future, Aiva hopes to make its platform versatile enough that a client only needs to upload a reference track, say a song from Radiohead, and select some general themes (ambient, dark, war, suspense etc.). Based on these simple settings, you would quickly get sheet music to play with, augment and revise as you wish. Maybe in the not so distant future, you could have new music written and generated by a computer in real-time based on your preferences, similarly to how Spotify always knows to play the tunes you like — only this time it would all be completely new, original, and exclusive to a single pair of ears.

It’s also worth noting that while the music composed by Aiva is rather impressive, the machine didn’t know how to write music that deliberately elicits emotions. Some of the tracks sampled above might elicit certain feelings, but the machine didn’t seek them out on purpose. This may be set to change sooner than some would care to think. Just last week we reported how Japanese researchers made an AI that writes and generates simple music that triggers an emotional response, based on brain scans of humans listening to certain kinds of music.

Like any self-respecting composer, Aiva is registered with the France and Luxembourg authors’ rights society (SACEM), so all of its tracks are copyrighted. Interestingly, though such results haven’t been peer-reviewed, Aiva claims it ran its own Turing tests and found humans couldn’t tell the music was written by a machine. That may actually be true, and not very surprising considering the music was, at the end of the day, arranged (very important) and played by humans. And if you work in a studio, you don’t have to worry that much yet, because your skills can’t be matched by a computer anytime soon. Writing tonal instructions, which is the sheet music itself, is different from sound design and arrangement. Perhaps Aiva and other AIs like it will shine the most in collaboration with humans, rather than in competition.

Previously, we reported how AIs also wrote their first pop songs and even the script for a SciFi short film. These works are still clumsy or augmented by human hands but with each passing day that ‘thinking machines’ get smarter, we’re forced to rethink basic concepts that make us human. Things like emotions, creativity, ingenuity.

***

Until one day man and machine will become indistinguishable, for a moment, before ultimately surpassing us for good.

Artificial synapse brings us one step closer to brain-like computers

Researchers have created a working artificial, organic synapse. The new device could allow computers to mimic some of the brain’s inner workings and improve their capacity to learn. Furthermore, a machine based on these synapses would be much more energy efficient than modern computers.

It may not look like much, but this device could revolutionize our computers forever.
Image credits Stanford University.

As far as processors go, the human brain is hands down the best we’ve ever seen. Its sheer processing power dwarfs anything humans have put together, for a fraction of the energy consumption, and it does it with elegance. If you allow me a car analogy, the human brain is a formula 1 race car that somehow uses almost no fuel and our best supercomputer… Well, it’s an old, beat-down Moskvich.

And it misfires.
Image credits Sludge G / Flickr.

So finding a way to emulate the brain’s hardware has understandably been high on the wishlist of computer engineers. A wish that may be granted sooner than they hoped. Researchers at Stanford University and Sandia National Laboratories have made a breakthrough that could allow computers to mimic one element of the brain — the synapse.

“It works like a real synapse but it’s an organic electronic device that can be engineered,” said Alberto Salleo, associate professor of materials science and engineering at Stanford and senior author of the paper.

“It’s an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that’s been done before with inorganics.”

Copycat

The artificial synapse is made up of two thin, flexible films holding three embedded terminals connected by salty water. It works similarly to a transistor, with one of the terminals dictating how much electricity can flow between the other two. This behavior allowed the team to mimic the processes that go on inside the brain — as they zap information to one another, neurons create ‘pathways’ of sorts through which electrical impulses can travel faster, and every successful impulse requires less energy to pass through the synapse. For the most part, we believe that these pathways allow synapses to store information while they process it, for comparatively little energy expenditure.

Because the artificial synapse mimics the way synapses in the brain respond to signals, it removes the need to separately store information after processing — just like in our brains, the processing creates the memory. These two tasks are fulfilled simultaneously for less energy than other versions of brain-like computing. The synapse could allow for a much more energy-efficient class of computers to be created, addressing a problem that’s becoming more and more poignant in today’s world.

Modern processors need huge fans because they use a lot of energy, giving off a lot of heat.

One application for the team’s synapses could be more brain-like computers that are especially well suited to tasks that involve visual or auditory signals — voice-controlled interfaces or driverless cars, for example. Previous neural networks and artificially intelligent algorithms used for these tasks are impressive but come nowhere near the processing power our brains hold in their tiny synapses. They also use a lot more energy.

“Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time,” said Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper.

“Instead of simulating a neural network, our work is trying to make a neural network.”

The team will program these artificial synapses the same way our brain learns: by progressively reinforcing the pathways through repeated charge and discharge. They found that this method allows them to predict what voltage will be required to get a synapse to a specific electrical state and hold it, with only 1% uncertainty. And unlike a conventional computer, where data has to be saved to storage or be lost when the machine shuts down, a neural network built from these synapses can just pick up where it left off, without the need for any data banks.
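
Here is a minimal toy model of that program-by-pulses idea, our own simplification rather than the device physics from the paper. The controller nudges the device’s state one small step per pulse until it lands within about 1% of the target, where it then stays:

```python
# Toy model of "programming" an analog synapse by repeated pulses -- our own simplification,
# not the real device physics. Each pulse nudges the state by one of ~500 small steps,
# and pulsing stops once the state sits within ~1% of the target.

def apply_pulse(state, polarity, step=1.0 / 500):
    """One charge (+1) or discharge (-1) pulse moves the state by one level."""
    return min(1.0, max(0.0, state + polarity * step))

def program(state, target, tolerance=0.01):
    """Pulse the device until its state sits within `tolerance` of the target."""
    pulses = 0
    while abs(state - target) > tolerance:
        state = apply_pulse(state, +1 if target > state else -1)
        pulses += 1
    return state, pulses

state, pulses = program(state=0.0, target=0.73)
print(f"reached state {state:.3f} after {pulses} pulses")  # the state then persists (non-volatile)
```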

One of a kind

Right now, the team has only produced one such synapse. Sandia researchers have taken some 15,000 measurements during various tests of the device to simulate the activity of a whole array of them. This simulated network was able to identify handwritten digits (0 through 9) with 93 to 97% accuracy — which, if you’ve ever used a handwriting-recognition feature, you’ll know is an impressive success rate.
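
For context on the benchmark itself, a conventional software classifier on scikit-learn’s small 8×8 digit set lands in the same accuracy range; the point of the synapse work is doing this kind of task for far less energy, not that the task is especially hard. The snippet below is just such a reference point, not the Sandia simulation:

```python
# Reference point for the benchmark: a conventional software classifier on scikit-learn's
# small 8x8 handwritten-digit set lands in the same ~95% accuracy range.
# This is NOT the Sandia simulation -- just a feel for the task itself.
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print(f"accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2%}")
```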

“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and senior author of the paper.

“We’ve demonstrated a device that’s ideal for running these type of algorithms and that consumes a lot less power.”

One of the reasons these synapses perform so well is the number of states they can hold. Digital transistors (such as the ones in your computer or smartphone) are binary — they can either be in state 1 or 0. The team has been able to successfully program 500 states into the synapse, and the higher the number, the more powerful a neural network computational model becomes. Switching from one state to another required roughly a tenth of the energy a modern computing system drains to move data from the processor to memory storage.
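
In information terms, that difference works out roughly as follows:

```latex
% Information held per device, as a rough comparison:
\log_2(500) \approx 9\ \text{bits per analog synapse}
\quad\text{vs.}\quad
\log_2(2) = 1\ \text{bit per binary transistor}
```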

Still, this means that the artificial synapse is currently 10,000 times less energy efficient than its biological counterpart. The team hopes they can tweak and improve the device after trials in working devices to bring this energy requirement down.

Another exciting possibility is the use of these synapses in-vivo. The devices are largely made of organic materials built from hydrogen and carbon, and should be fully compatible with the brain’s chemistry. They’re soft and flexible, and use the same voltages as human neurons. All this raises the possibility of using the artificial synapse in concert with live neurons in improved brain-machine interfaces.

Before considering any biological applications, however, the team wants to test a full array of artificial synapses.

The full paper “A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing” has been published in the journal Nature Materials.

Bendy artificial muscle is made of pure nylon, still stronger than you

An MIT breakthrough allows engineers to create artificial muscles that bend by simply heating nylon fibers.

Image credits Unsplash / Pexels.

Artificial muscles are just what they sound like — man-made materials that can contract and expand similarly to what our own biological sort can do. Coming up with a way of cheaply mass-producing artificial muscles with full motion could have enormous potential, revolutionizing the way we think about anything from robotics to clothes.

One of the most accessible materials that has shown promise in this field is nylon. Previous work has produced twisted-nylon filaments that mimic linear muscle activity and actually turned out to be more efficient than their natural counterparts: they could extend and retract further, and store and release more energy. But a similar system that could produce bending motions proved far more difficult to create, until now. There are some materials that can do the job, but they mostly rely on exotic ingredients that are very difficult to obtain, and thus very expensive.

That might change thanks to Seyed Mirvakili, a doctoral candidate, and Ian Hunter, the George N. Hatsopoulos Professor in the Department of Mechanical Engineering at MIT. The two have developed one of the simplest and lowest-cost systems to date for producing such muscles, resulting in materials that can reproduce a range of bending motions performed by biological tissues.

Their method relies on the same property that makes highly oriented nylon fibers ideal for linear muscles, called strain: when heated, “they shrink in length but expand in diameter,” Mirvakili says. Using this property as-is to create bending muscles would require extra elements, such as a pulley and take-up reel, which would hurt the muscles’ spatial efficiency, power, and production cost. But by pressing the fibers into specific shapes and then heating them, the team could harness the property directly, without any extra parts.

“The cooling rate can be a limiting factor,” Mirvakili says. “But I realized it could be used to an advantage.”

By heating up only one side of the fiber, that side can be made to contract faster than the heat can reach the other side, producing a bending motion in the fiber. The strands can maintain their performance for at least 100,000 bending cycles and can achieve at least 17 cycles per second.
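
Read together, the quoted endurance figures amount to roughly an hour and a half of continuous flat-out bending; a back-of-the-envelope reading of the numbers above, nothing more:

```python
# Rough reading of the endurance figures quoted above, nothing more.
cycles_total = 100_000    # demonstrated bending cycles
cycles_per_s = 17         # maximum demonstrated rate

seconds = cycles_total / cycles_per_s
print(f"{seconds:.0f} s, or about {seconds / 60:.0f} minutes of continuous bending at full speed")
# -> roughly 98 minutes; in practice the muscle would rarely run flat-out like this.
```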

“You need a combination of these properties,” he says: “high strain and low thermal conductivity.”

The team started with run-of-the-mill fishing line, then pressed it to progressively make its cross-section rectangular, then square. Selectively heating one side of the square caused the fiber to bend in that direction. By changing the direction of heating, the team could create complex motions in the strands, such as circles or figure-eights — but they’re confident that much more complex patterns can be achieved rather easily.

Nylon muscles

Fabrication steps from raw circular filament to a fully functional bending artificial muscle. From bottom to top: raw circular filament, a filament pressed in a rolling mill, one with a mask in the middle of the surface, then with added conductive ink. Finally, the mask is removed after the ink is dried (the sample on the top).
Image credits Felice Frankel, Seyed Mohammad Mirvakili, MIT.

The researchers tested a special conductive paint, which they applied to the fibers together with a resin. When a voltage was applied to the material, it heated the fiber directly under the paint, causing it to bend. Other heat sources, such as electrical resistance heating, chemical reactions or even lasers, could also be used to heat the fibers and create motion.

Potential applications could include clothes that adjust to fit any individual, or shoes that adjust their shape with each step. The fibers could be used to create self-adjusting catheters and other biomedical devices. And, in the long run, they could be used to build vehicles that change shape for maximum performance, or sun-tracking solar panels, Hunter says. The possibilities are only limited by our imagination.

The full paper “Multidirectional Artificial Muscles from Nylon” has been published in the journal Advanced Materials.