Tag Archives: machine

The world’s first ‘living machines’ can move, carry loads, and repair themselves

Researchers at the University of Vermont have repurposed living cells into entirely new life-forms — which they call “xenobots”.

The xenobot designs (top) and real-life counterparts (bottom).
Image credits Douglas Blackiston / Tufts University.

These “living machines” are built from frog embryo cells that have been repurposed, ‘welded’ together into body forms never seen in nature. The millimeter-wide xenobots are also fully-functional: they can move, perform tasks such as carrying objects, and heal themselves after sustaining damage.

This is the first time anyone has designed “completely biological machines from the ground up,” the team writes in their new study.

It’s alive!

“These are novel living machines,” says Joshua Bongard, a professor in UVM’s Department of Computer Science and Complex Systems Center and co-lead author of the study. “They’re neither a traditional robot nor a known species of animal. It’s a new class of artifact: a living, programmable organism.”

“It’s a step toward using computer-designed organisms for intelligent drug delivery.”

The xenobots were designed with the Deep Green supercomputer cluster at UVM using an evolutionary algorithm to create thousands of candidate body forms. The researchers, led by doctoral student Sam Kriegman, the paper’s lead author, would assign the computer certain tasks for the design — such as achieving locomotion in one direction — and the computer would reassemble a few hundred simulated cells into different body shapes to achieve that goal. The software had a basic set of rules regarding what the cells could and couldn’t do and tested each design against these parameters. After a hundred runs of the algorithm, the team selected the most promising of the successful designs and set about building them.
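
To give a feel for the approach, here is a minimal sketch of an evolutionary design loop in Python. The cell grid, fitness function, and mutation scheme are invented for illustration; the actual study used a physics simulator and far richer rules about what the cells could do.

```python
import random

GRID = 5  # a design is a GRID x GRID layout of passive (0) or muscle (1) cells

def random_design():
    return [[random.randint(0, 1) for _ in range(GRID)] for _ in range(GRID)]

def fitness(design):
    # Stand-in for "distance travelled in simulation": here we simply
    # reward designs whose muscle cells cluster on one side.
    return sum(row[0] + row[1] - row[-1] for row in design)

def mutate(design):
    child = [row[:] for row in design]
    r, c = random.randrange(GRID), random.randrange(GRID)
    child[r][c] ^= 1  # flip one cell between passive and muscle
    return child

population = [random_design() for _ in range(100)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:20]                       # keep the best designs
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(80)]     # refill by mutation

best = max(population, key=fitness)  # the candidate handed to the biologists
```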

The design of the xenobots.
Image credits Sam Kriegman, Douglas Blackiston, Michael Levin, Josh Bongard, (2020), PNAS.

This task was handled by a team of researchers at Tufts University led by co-lead author Michael Levin, who directs the Center for Regenerative and Developmental Biology at Tufts. First, they gathered and incubated stem cells from embryos of African frogs (Xenopus laevis, hence the name “xenobots”). Then, these cells were cut and joined together under a microscope in a close approximation of the computer-generated designs.

The team reports that the cells began working together after ‘assembly’. They developed a passive skin-like layer and synchronized the contractions of their (heart) muscle cells to achieve motion. The xenobots were able to move in a coherent fashion for days or even weeks at a time, the team found, powered by embryonic energy stores.

Later tests showed that groups of xenobots would move around in circles, pushing pellets into a central location, spontaneously and collectively. Some of the xenobots were designed with a hole through the center to reduce drag, but the team was able to repurpose it so that the bots could carry an object.

It’s still alive… but on its back?

A manufactured quadruped organism, 650-750 microns in diameter.
Image credits Douglas Blackiston / Tufts University.

One of the most fascinating parts of this already-fascinating work, for me, is the resilience of these xenobots.

“The downside of living tissue is that it’s weak and it degrades,” says Bongard. “That’s why we use steel. But organisms have 4.5 billion years of practice at regenerating themselves and going on for decades. We slice [a xenobot] almost in half and it stitches itself back up and keeps going. This is something you can’t do with typical machines.”

“These xenobots are fully biodegradable,” he adds, “when they’re done with their job after seven days, they’re just dead skin cells.”

However, none of the team’s designs was able to turn itself over when flipped on its back. It’s an almost comical little Achilles’ heel for such capable biomachines.

The manufacturing process of the xenobots.
Image credits Sam Kriegman, Douglas Blackiston, Michael Levin, Josh Bongard, (2020), PNAS.

Still, they have a lot to teach us about how cells communicate and connect, the team writes.

“The big question in biology is to understand the algorithms that determine form and function,” says Levin. “The genome encodes proteins, but transformative applications await our discovery of how that hardware enables cells to cooperate toward making functional anatomies under very different conditions.”

“[Living cells] run on DNA-specified hardware,” he adds, “and these processes are reconfigurable, enabling novel living forms.”

Levin says that being fearful of what complex biological manipulations can bring about is “not unreasonable”, and that they are very likely going to result in at least some “unintended consequences”, but explains that the current research aims to get a handle on such consequences. The findings are also applicable to other areas of science and technology where complex systems arise from simple units, he explains, such as the self-driving cars and autonomous systems that will increasingly shape the human experience.

“If humanity is going to survive into the future, we need to better understand how complex properties, somehow, emerge from simple rules,” says Levin. “If you wanted an anthill with two chimneys instead of one, how do you modify the ants? We’d have no idea.”

“I think it’s an absolute necessity for society going forward to get a better handle on systems where the outcome is very complex. A first step towards doing that is to explore: how do living systems decide what an overall behavior should be and how do we manipulate the pieces to get the behaviors we want?”

The paper “A scalable pipeline for designing reconfigurable organisms” has been published in the journal PNAS.

Researchers teach AI to design, say it did ‘quite good’ but won’t steal your job (yet)

A US-based research team has trained artificial intelligence (AI) in design, with pretty good results.

A roof supported by a wooden truss framework.
Image credits Achim Scholty.

Although we don’t generally think of AIs as good problem-solvers, a new study suggests they can learn how to be. The paper describes the process through which a framework of deep neural networks learned human creative processes and strategies and how to apply them to create new designs.

Just hit ‘design’

“We were trying to have the [AIs] create designs similar to how humans do it, imitating the process they use: how they look at the design, how they take the next action, and then create a new design, step by step,” says Ayush Raina, a Ph.D. candidate in mechanical engineering at Carnegie Mellon and a co-author of the study.

Design isn’t an exact science. While there are definite no-no’s and rules of thumb that lead to OK designs, good designs require creativity and exploratory decision-making. Humans excel at these skills.

Software as we know it today works wonders within a clearly defined set of rules, with clear inputs and known desired outcomes. That’s very handy when you need to crunch huge amounts of data, or to make split-second decisions to keep a jet stable in flight, for example. However, it’s an appalling skillset for someone trying their hand, or processors, at designing.

The team wanted to see if machines can learn the skills that make humans good designers and then apply them. For the study, they created an AI framework from several deep neural networks and fed it data pertaining to a human going about the process of design.

The study focused on trusses, which are complex but relatively common design challenges for engineers. Trusses are load-bearing structural elements composed of rods and beams; bridges and large buildings make good use of trusses, for example. Simple in theory, trusses are actually incredibly complex elements whose final shapes are a product of their function, material make-up, or other desired traits (such as flexibility-rigidity, resistance to compression-tension and so forth).

The framework itself was made up of several deep neural networks which worked together in a prediction-based process. It was shown five successive snapshots of the structures (the design modification sequence for a truss), and then asked to predict the next iteration of the design. This data was the same kind engineers use when approaching the problem: pixels on a screen; however, the AI wasn’t privy to any further information or context (such as the truss’ intended use). The researchers emphasized visualization in the process because vision is an integral part of how humans perceive the world and go about solving problems.
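
For the curious, a minimal PyTorch sketch of that prediction setup (five snapshots in, a guess at the next design state out) might look like the following. The architecture and layer sizes are assumptions for illustration, not the networks from the paper.

```python
import torch
import torch.nn as nn

class NextDesignPredictor(nn.Module):
    """Given five successive design snapshots (as images), predict the next one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(5, 32, kernel_size=3, padding=1),  # 5 stacked snapshots in
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),  # predicted next frame out
            nn.Sigmoid(),
        )

    def forward(self, snapshots):         # snapshots: (batch, 5, H, W)
        return self.net(snapshots)

model = NextDesignPredictor()
history = torch.rand(1, 5, 64, 64)        # five 64x64 snapshots of a truss design
predicted_next = model(history)           # shape (1, 1, 64, 64)
```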

In essence, the researchers had their neural networks watch human designers throughout the whole design process, and then try to emulate them. Overall, the team reports, the way their AI approached the design process was similar to that employed by humans. Further testing on similar design problems showed that on average, the AI can perform just as well as, if not better than, humans. However, the system still lacks many of the advantages a human user would have when problem-solving — namely, it worked without a specific goal in mind (a particular weight or shape, for example), and didn’t receive feedback on how successful it was on its task. In other words, while the program could design a good truss, it didn’t understand what it was doing, what the end goal of the process was, or how good it was at it. So while it’s good at designing, it’s still a lousy designer.

All things considered, however, the AI was “quite good” at the task, says co-author Jonathan Cagan, professor of mechanical engineering and interim dean of Carnegie Mellon University’s College of Engineering.

“The AI is not just mimicking or regurgitating solutions that already exist,” Professor Cagan explains. “It’s learning how people solve a specific type of problem and creating new design solutions from scratch.”

“It’s tempting to think that this AI will replace engineers, but that’s simply not true,” said Chris McComb, an assistant professor of engineering design at the Pennsylvania State University and paper co-author.

“Instead, it can fundamentally change how engineers work. If we can offload boring, time-consuming tasks to an AI, like we did in the work, then we free engineers up to think big and solve problems creatively.”

The paper “Learning to Design From Humans: Imitating Human Designers Through Deep Learning” has been published in the Journal of Mechanical Design.

Computers can now read handwriting with 98% accuracy

New research in Tunisia is teaching computers how to read your handwriting.

Image via Pixabay.

Researchers at the University of Sfax in Tunisia have developed a new method for computers to recognize handwritten characters and symbols in online scripts. The technique has already achieved ‘remarkable performance’ on texts written in the Latin and Arabic alphabets.

iRead

“Our paper handles the problem of online handwritten script recognition based on an extraction features system and deep approach system for sequence classification,” the researchers wrote in their paper. “We used an existent method combined with new classifiers in order to attain a flexible system.”

Handwriting recognition systems are, unsurprisingly, computer tools designed to recognize characters and hand-written symbols in a similar way to our brains. They’re similar in form and function to the neural networks that we’ve designed for image classification, face recognition, and natural language processing (NLP).

As humans, we innately begin developing the ability to understand different types of handwriting in our youth. This ability revolves around the identification and understanding of specific characters, both individually and when grouped together, the team explains. Several attempts have been made to replicate this ability in a computer over the last decade in a bid to enable more advanced and automatic analyses of handwritten texts.

The new paper presents two systems based on deep neural networks: an online handwriting segmentation and recognition system that uses a long short-term memory network (OnHSR-LSTM) and an online handwriting recognition system composed of a convolutional long short-term memory network (OnHR-covLSTM).

The first is based on the theory that our own brains work to transform language from the graphical marks on a piece of paper into symbolic representations. This OnHSR-LSTM works by detecting common properties of symbols or characters and then arranging them according to specific perceptual laws, for instance, based on proximity, similarity, etc. Essentially, it breaks the script down into a series of strokes that are then turned into codes, which are what the program actually ‘reads’.
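
As a rough illustration of that idea, the sketch below classifies a sequence of primitive stroke codes into a character with an LSTM. The vocabulary sizes and layer widths are assumptions for illustration, not the paper’s architecture.

```python
import torch
import torch.nn as nn

NUM_STROKE_CODES = 32    # size of the primitive-stroke "alphabet" (assumed)
NUM_CHARACTERS = 60      # characters/symbols to recognize (assumed)

class StrokeLSTM(nn.Module):
    """Reads handwriting as a sequence of stroke codes, outputs a character."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_STROKE_CODES, 16)
        self.lstm = nn.LSTM(16, 64, batch_first=True)
        self.classify = nn.Linear(64, NUM_CHARACTERS)

    def forward(self, strokes):            # strokes: (batch, seq_len) of code IDs
        x = self.embed(strokes)
        _, (h, _) = self.lstm(x)            # final hidden state summarizes the sequence
        return self.classify(h[-1])         # one score per candidate character

model = StrokeLSTM()
stroke_sequence = torch.randint(0, NUM_STROKE_CODES, (1, 12))
char_scores = model(stroke_sequence)        # shape (1, NUM_CHARACTERS)
```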

“Finally, [the model] attempts to build a representation of the handwritten form based on the assumption that the perception of form is the identification of basic features that are arranged until we identify an object,” the researchers explained in their paper.

“Therefore, the representation of handwriting is a combination of primitive strokes. Handwriting is a sequence of basic codes that are grouped together to define a character or a shape.”

The second system, the convolutional long short-term memory network, is trained to predict both characters and words based on what it reads. It is particularly well-suited to processing and classifying long sequences of characters and symbols.

Both neural networks were trained and then evaluated using five different databases of handwritten scripts in the Arabic and Latin alphabets. Both systems achieved recognition rates of over 98%, which is ‘remarkable’ according to the team. Both systems, they explained, performed similarly to human subjects at the task.

“We now plan to build on and test our proposed recognition systems on a large-scale database and other scripts,” the researchers wrote.

The paper “Neural architecture based on fuzzy perceptual representation for online multilingual handwriting recognition” has been published on the preprint server arXiv.

Artificial intelligence still has severe limitations in recognizing what it’s seeing

Artificial intelligence won’t take over the world any time soon, a new study suggests — it can’t even “see” properly. Yet.

Teapot with golf ball pattern used in the study.
Image credits: Nicholas Baker et al / PLOS Computational Biology.

Computer networks that draw on deep learning algorithms (often referred to as AI) have made huge strides in recent years. So much so that there is a lot of anxiety (or enthusiasm, depending on which side of the fence you find yourself on) that these networks will take over human jobs and other tasks that computers simply couldn’t perform up to now.

Recent work at the University of California Los Angeles (UCLA), however, shows that such systems are still in their infancy. A team of UCLA cognitive psychologists showed that these networks identify objects in a fundamentally different manner from human brains — and that they are very easy to dupe.

Binary-tinted glasses

“The machines have severe limitations that we need to understand,” said Philip Kellman, a UCLA distinguished professor of psychology and a senior author of the study. “We’re saying, ‘Wait, not so fast.’”

The team explored how machine learning networks see the world in a series of five experiments. Keep in mind that the team wasn’t trying to fool the networks — they were working to understand how they identify objects, and if it’s similar to how the human brain does it.

For the first one, they worked with a deep learning network called VGG-19. It’s considered one of the (if not the) best networks currently developed for image analysis and recognition. The team showed VGG-19 altered color images of animals and objects. One image showed the surface of a golf ball displayed on the contour of a teapot, for example. Others showed a camel with zebra stripes or the pattern of a blue and red argyle sock on an elephant. The network was asked what it thought the picture most likely showed in the form of a ranking (with the top choice being most likely, the second one less likely, and so on).
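
For readers who want to reproduce the flavor of this experiment, the snippet below runs the standard ImageNet-trained VGG-19 from torchvision and prints its top five ranked guesses for an image. This is the off-the-shelf network, not the authors’ exact setup, and the image file name is a placeholder.

```python
import torch
from torchvision.models import vgg19, VGG19_Weights
from PIL import Image

weights = VGG19_Weights.IMAGENET1K_V1
model = vgg19(weights=weights).eval()
preprocess = weights.transforms()             # standard ImageNet preprocessing

image = Image.open("golfball_teapot.jpg")     # placeholder test image
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top5 = probs.topk(5)
labels = weights.meta["categories"]
for p, idx in zip(top5.values[0], top5.indices[0]):
    print(f"{labels[int(idx)]}: {p.item():.2%}")   # ranked guesses, most likely first
```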

Examples of the images used during this step.
Image credits Nicholas Baker et al., 2018, PLOS Computational Biology.

VGG-19, the team reports, listed the correct item as its first choice for only 5 out of the 40 images it was shown during this experiment (12.5% success rate). It was also interesting to see just how well the team managed to deceive the network. VGG-19 listed a 0% chance that the argyled elephant was an elephant, for example, and only a 0.41% chance that the teapot was a teapot. Its first choice for the teapot image was a golf ball, the team reports.

Kellman says he isn’t surprised that the network suggested a golf ball — calling it “absolutely reasonable” — but was surprised to see that the teapot didn’t even make the list. Overall, the results of this step hinted that such networks draw on the texture of an object much more than its shape, says lead author Nicholas Baker, a UCLA psychology graduate student. The team decided to explore this idea further.

Missing the forest for the trees

For the second experiment, the team showed images of glass figurines to VGG-19 and a second deep learning network called AlexNet. Both networks were trained to recognize objects using a database called ImageNet. While VGG-19 performed better than AlexNet, they were still both pretty terrible. Neither network could correctly identify the figurines as their first choice: an elephant figurine, for example, was ranked with almost a 0% chance of being an elephant by both networks. On average, AlexNet ranked the correct answer 328th out of 1,000 choices.

Well, they’re definitely glass figurines to you and me. Not so obvious to AI.
Image credits Nicholas Baker et al / PLOS Computational Biology.

In this experiment, too, the networks’ first choices were pretty puzzling: VGG-19, for example, chose “website” for a goose figure and “can opener” for a polar bear.

“The machines make very different errors from humans,” said co-author Hongjing Lu, a UCLA professor of psychology. “Their learning mechanisms are much less sophisticated than the human mind.”

“We can fool these artificial systems pretty easily.”

For the third and fourth experiment, the team focused on contours. First, they showed the networks 40 drawings outlined in black, with the images in white. Again, the machine did a pretty poor job of identifying common items (such as bananas or butterflies). In the fourth experiment, the researchers showed both networks 40 images, this time in solid black. Here, the networks did somewhat better — they listed the correct object among their top five choices around 50% of the time. They identified some items with good confidence (99.99% chance for an abacus and 61% chance for a cannon from VGG-19, for example) while they simply dropped the ball on others (both networks gave a white hammer outlined in black under a 1% chance of being a hammer).

Still, it’s undeniable that both algorithms performed better during this step than in any previous one. Kellman says this is likely because the images here lacked “internal contours” — edges that confuse the programs.

Throwing a wrench in

Now, in experiment five, the team actually tried to throw the machine off its game as much as possible. They worked with six images that VGG-19 identified correctly in the previous steps, scrambling them to make them harder to recognize while preserving some pieces of the objects shown. They also employed a group of ten UCLA undergrads as a control group.

The students were shown objects in black silhouettes — some scrambled to be difficult to recognize and some unscrambled, some objects for just one second, and some for as long as the students wanted to view them. Students correctly identified 92% of the unscrambled objects and 23% of the scrambled ones when allowed a single second to view them. When the students could see the silhouettes for as long as they wanted, they correctly identified 97% of the unscrambled objects and 37% of the scrambled objects.

Example of a silhouette (a) and scrambled image (b) of a bear.
Image credits Nicholas Baker et al / PLOS Computational Biology.

VGG-19 correctly identified five of these six images (and was quite close on the sixth, too, the team writes). The team says humans probably had more trouble identifying the images than the machine because we observe the entire object when trying to determine what we’re seeing. Artificial intelligence, in contrast, works by identifying fragments.

“This study shows these systems get the right answer in the images they were trained on without considering shape,” Kellman said. “For humans, overall shape is primary for object recognition, and identifying images by overall shape doesn’t seem to be in these deep learning systems at all.”

The results suggest that right now, AI (as we know and program it) is simply too immature to actually face the real world. It’s easily duped, and it works differently than us — so it’s hard to intuit how it will behave. Still, understanding how such networks ‘see’ the world around them would be very helpful as we move forward with them, the team explains. If we know their weaknesses, we know where we need to put in the most work to make meaningful strides.

The paper “Deep convolutional networks do not classify based on global object shape” has been published in the journal PLOS Computational Biology.

An AI recreated the periodic table from scratch — in a couple of hours

A new artificial intelligence (AI) program developed at Stanford recreated the periodic table from scratch — and it only needed a couple of hours to do so.

If you’ve ever wondered how machines learn, this is it — in picture form. (A) shows atom vectors of 34 main-group elements and their hierarchical clustering based on distance. The color in each cell stands for the value of the vector on that dimension.
Image credits Zhou et al., 2018, PNAS.

Running under the alluring name of Atom2Vec, the software learned to distinguish between different atoms starting from a database of chemical compounds. After it learned the basics, the researchers left Atom2Vec to its own devices. Using methods and processes related to those in the field of natural language processing — chief among them, the idea that the nature of a word can be understood by looking at the other words around it — the AI successfully clustered the elements by their chemical properties.

It only took Atom2Vec a couple of hours to perform the feat; roughly speaking, it re-created the periodic table of elements, one of the greatest achievements in chemistry. It took us hairless apes nearly a century of trial-and-error to do the same.

I’m you, but better

The Periodic Table of elements was initially conceived by Dmitri Mendeleev in the mid-19th century, well before many of the elements we know today had been discovered, and certainly before there was even an inkling of quantum mechanics and relativity lurking beyond the boundaries of classical physics. Mendeleev recognized that certain elements fell into groups with similar chemical features, and this established a periodic pattern (hence the name) to the elements as they went from lightweight elements like hydrogen and helium, to progressively heavier ones. In fact, Mendeleev could predict the very specific properties and features of as-yet-undiscovered elements corresponding to blank spaces in his unfinished table. Many of these predictions turned out to be correct when the elements filling the blank spots were finally discovered.

“We wanted to know whether an AI can be smart enough to discover the periodic table on its own, and our team showed that it can,” said study leader Shou-Cheng Zhang, the J. G. Jackson and C. J. Wood Professor of Physics at Stanford’s School of Humanities and Sciences.

Zhang’s team designed Atom2Vec starting from an AI platform (Word2Vec) that Google built to parse natural language. The software converts individual words into vectors (numerical codes). It then analyzes these vectors to estimate the probability of a particular word appearing in a text based on the presence of other words.

The word “king”, for example, is often accompanied by “queen”, and the words “man” and “woman” often appear together. Word2Vec works with these co-appearances and learns that, mathematically, “king = a queen minus a woman plus a man,” Zhang explains. Working along the same lines, the team fed Atom2Vec all known chemical compounds (such as NaCl, KCl, and so on) in lieu of text samples.
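
The arithmetic is easy to demonstrate with toy vectors. Below, hand-made 2-D vectors (axis 0: gender, axis 1: royalty) are enough to make the “king minus man plus woman” sum land on “queen”; real systems like Word2Vec and Atom2Vec instead learn hundreds of dimensions from co-occurrence data.

```python
import numpy as np

vectors = {
    "king":  np.array([ 1.0,  1.0]),
    "queen": np.array([-1.0,  1.0]),
    "man":   np.array([ 1.0,  0.0]),
    "woman": np.array([-1.0,  0.0]),
    "apple": np.array([ 0.0, -1.0]),   # an unrelated distractor word
}

result = vectors["king"] - vectors["man"] + vectors["woman"]

# The nearest vector to the result, excluding the query words, is "queen".
closest = min((w for w in vectors if w not in ("king", "man", "woman")),
              key=lambda w: np.linalg.norm(vectors[w] - result))
print(closest)   # queen
```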

It worked surprisingly well. Even from this relatively tiny sample size, the program figured out that potassium (K) and sodium (Na) must be chemically similar, as both bind to chlorine (Cl). Through a similar process, Atom2Vec established chemical relationships between all the species in the periodic table. It was so successful and fast in performing the task that Zhang hopes that in the future, researchers will use Atom2Vec to discover and design new materials.

Future plans

“For this project, the AI program was unsupervised, but you could imagine giving it a goal and directing it to find, for example, a material that is highly efficient at converting sunlight to energy,” he said.

As impressive as the achievement is, Zhang says it’s only the first step. The endgame is more ambitious — Zhang hopes to design a replacement for the Turing test, the gold standard for gauging machine intelligence. To pass the Turing test, a machine must be capable of responding to written questions in such a way that users won’t suspect they’re chatting with a machine; in other words, a machine will be considered as intelligent as a human if it seems human to us.

However, Zhang thinks the test is flawed, as it is too subjective.

“Humans are the product of evolution and our minds are cluttered with all sorts of irrationalities. For an AI to pass the Turing test, it would need to reproduce all of our human irrationalities,” he says. “That’s very difficult to do, and not a particularly good use of programmers’ time.”

He hopes to take the human factor out of the equation by having machine intelligence try to discover new laws of nature. Nobody’s born educated, however, not even machines, so Zhang is first checking to see whether AIs can reach some of the most important discoveries we’ve made without help. By recreating the periodic table, Atom2Vec has achieved this goal.

The team is now working on the second version of the AI. This one will focus on cracking a frustratingly-complex problem in medical research: it will try to design antibodies to attack the antigens of cancer cells. Such a breakthrough would offer us a new and very powerful weapon against cancer. Currently, we treat the disease with immunotherapy, which relies on such antibodies already produced by the body; however, our bodies can produce over 10 million unique antibodies, Zhang says, by mixing and matching between some 50 separate genes.

“If we can map these building block genes onto a mathematical vector, then we can organize all antibodies into something similar to a periodic table,” Zhang says.

“Then, if you discover that one antibody is effective against an antigen but is toxic, you can look within the same family for another antibody that is just as effective but less toxic.”

The paper “Atom2Vec: Learning atoms for materials discovery,” has been published in the journal PNAS.

Time travel is proven possible — but we’ll likely never be able to build the machine, author says

New research from the University of British Columbia, Okanagan comes to validate the nerdiest of your dreams. Time travel is possible according to a new mathematical model developed at the university — but not likely anytime soon. Or ever.

Tardis Pinball set.

Image credits Clark Mills.

The idea of a modern time-traveling machine has its roots in H.G. Wells’ The Time Machine, published way back in 1895. Needless to say, it has enraptured imaginations all the way up to the present, and scientists have been trying to prove or disprove its feasibility ever since. One century ago, Einstein unveiled his theory of general relativity, cementing time as a fourth dimension and describing gravitational fields as the product of distortions in spacetime. Confidence in Einstein’s theory only grew following the LIGO Scientific Collaboration’s detection of gravitational waves generated by colliding black holes.

So time isn’t just an abstract, human construct — it’s a dimension just as real as the physical space we perceive around us. Does that mean we can travel through time? Ben Tippett, a mathematics and physics instructor at UBC’s Okanagan campus, says yes. An expert on Einstein’s theory of general relativity, sci-fi enthusiast and black hole researcher in his spare time, Tippett recently published a paper which describes a valid mathematical model for time travel.

“People think of time travel as something of fiction,” says Tippett. “And we tend to think it’s not possible because we don’t actually do it. But, mathematically, it is possible.”

Tippett says Einstein’s division of space into three dimensions, with time as a fourth, separate dimension, is incorrect. These four facets should be imagined simultaneously, he adds, connected as a space-time continuum. Starting from Einstein’s theory, Tippett says that the curvature of space-time can explain the curved orbits of planets around stars. In ‘flat’ (or uncurved) space-time, a planet or a star would keep moving in a straight line. But in the vicinity of a massive stellar body, space-time curves, drawing in the trajectories of nearby planets and bending them around that body.

Tippett proposes using such a curvature to create a time machine. The closer one gets to a black hole, he says, the slower time moves. So if we could find a way to recreate that effect and bend time in a circle for the passengers of the time machine, we could go back or forward in time.
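
The slowdown Tippett refers to is gravitational time dilation, a standard result of general relativity (the Schwarzschild solution for a non-rotating mass) rather than part of his TARDIS geometry itself:

$$t_0 = t_f \sqrt{1 - \frac{2GM}{rc^2}}$$

Here $t_0$ is the time elapsed on a clock at distance $r$ from a body of mass $M$, $t_f$ is the time elapsed for a faraway observer, $G$ is the gravitational constant, and $c$ is the speed of light. As $r$ shrinks toward the Schwarzschild radius $2GM/c^2$, the clock slows to a crawl relative to the distant observer.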

Tippett created a mathematical model of a Traversable Acausal Retrograde Domain in Space-time (TARDIS). He describes it as a bubble of space-time geometry which carries its contents backwards and forwards through space and time as it tours a large circular path. At times, the bubble moves through space-time faster than the speed of light, allowing it to travel backward in time.

But although it’s possible to describe the device using maths, Tippett doubts we’ll ever build such a machine.

“HG Wells popularized the term ‘time machine’ and he left people with the thought that an explorer would need a ‘machine or special box’ to actually accomplish time travel,” Tippett says.

“While it is mathematically feasible, it is not yet possible to build a space-time machine because we need materials–which we call exotic matter–to bend space-time in these impossible ways, but they have yet to be discovered.”

The paper “Traversable acausal retrograde domains in spacetime” has been published in the journal Classical and Quantum Gravity.

Google’s Neural Machine can translate nearly as well as a human

A new translation system unveiled by Google, the Google Neural Machine Translation (GNMT) framework, comes close to human translators in its proficiency.

Public domain image.

Not knowing the local language can be hell — but Google’s new translation software might prove to be the bilingual travel partner you’ve always wanted. A recently released paper notes that Google’s Neural Machine Translation system (GNMT) reduces translation errors by an average of 60% compared to the familiar phrase-based approach. The framework is based on unsupervised deep learning technology.

Deep learning simulates the way our brains form connections and process information inside a computer. Virtual neurons are mapped out by a program, and the connections between them receive a numerical value, a “weight”. The weight determines how each of these virtual neurons treats data fed to it — low-weight neurons recognize the basic features of data, which they feed to the heavier neurons for further processing, and so on. The end goal is to create software that can learn to recognize patterns in data and respond to each one accordingly.
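
A single virtual neuron is simple enough to show in a few lines. The sketch below is just an illustration of the “weight” idea; real deep networks stack millions of such units in layers.

```python
import numpy as np

inputs = np.array([0.5, 0.8, 0.1])      # data fed to the neuron
weights = np.array([0.9, -0.3, 0.4])    # how strongly it treats each input
bias = 0.1

# ReLU activation: the neuron "fires" only if the weighted sum is positive.
activation = max(0.0, inputs @ weights + bias)
print(activation)                        # ~0.35
```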

Programmers train these frameworks by feeding them data, such as digitized images or sound waves. They rely on big sets of training data and powerful computers to work effectively, which are becoming increasingly available. Deep learning has proven its worth in image and speech recognition in the past, and adapting it to translation seems like the logical next step.

And it works like a charm

GNMT draws on 16 processors to transform words into values called “vectors.” These represent how closely each word relates to other words in its training database — 2.5 billion sentence pairs for English and French, and 500 million for English and Chinese. “Leaf” is more related to “tree” than to “car”, for example, and the name “George Washington” is more related to “Roosevelt” than to “Himalaya”. Using the vectors of the input words, the system chooses a list of possible translations, ranked based on their probability of occurrence. Cross-checking helps improve overall accuracy.

The increased accuracy in translation came about because Google let its neural network work without much of the supervision programmers previously provided. They fed in the initial data, but let the computer take over from there, training itself. This approach is called unsupervised learning, and it has proven to be more efficient than previous supervised learning techniques, where humans held a large measure of control over the learning process.

In a series of tests pitting the system against human translators, it came close to matching their fluency for some languages. Bilingually fluent people rated the system between 64 and 87 percent better than the previous one. While some things still slip through GNMT’s fingers, such as slang or colloquialisms, those are some solid results.

Google is already using the new system for Chinese to English translation, and plans to completely replace its current translation software with GNMT.

 

Machine learning could solve the US’s police violence issue

The Charlotte-Mecklenburg Police Department of North Carolina is piloting a new machine-learning system which it hopes will combat the rise of police violence. Police brutality has been a growing issue in the US in recent years.

The system combs through the police’s staff records to identify officers with a high risk of causing “adverse events” — such as racial profiling or unwarranted shootings.

Image credits Jagz Mario / Flickr.

A University of Chicago team is helping the Charlotte-Mecklenburg PD keep an eye on their police officers, and prevent cases of police violence. The team feeds data from the police’s staff records into a machine learning system that tries to spot risk factors for unprofessional conduct. Once a high-risk individual is identified, the department steps in to prevent any actual harm at the hands of the officer.

Officers are people too, and they can be subjected to a lot of stress in their line of work. The system is meant to single out officers who might behave aggressively under stress. All the information on an individual’s record — details of previous misconduct, gun use, their deployment history, how many suicide or domestic violence calls they have responded to, et cetera — is fed into the system. The idea is to prevent incidents in which stressed officers behave aggressively, such as the case in Texas where an officer pulled his gun on children at a pool party after responding to two suicide calls earlier that shift.
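
As a purely illustrative sketch, and not the Chicago team’s actual model or features, a minimal version of this kind of risk flagging could be a classifier over record features trained on past outcomes. Everything below, including the data, is synthetic and assumed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic features per officer: prior complaints, stressful calls
# (suicide/domestic violence) responded to recently, years on the force.
X = rng.poisson(lam=[2.0, 5.0, 8.0], size=(500, 3)).astype(float)
# Toy labels: whether an adverse event followed (invented relationship).
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 5).astype(int)

model = LogisticRegression().fit(X, y)

new_officer = np.array([[4.0, 9.0, 3.0]])
risk = model.predict_proba(new_officer)[0, 1]   # estimated probability of an adverse event
print(f"estimated risk: {risk:.0%}")            # a human decides what happens next
```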

“Right now the systems that claim to do this end up flagging the majority of officers,” says Rayid Ghani, who leads the Chicago team. “You can’t really intervene then.”

But so far, the system has had some pretty impressive results. It retrospectively flagged 48 out of 83 adverse incidents that happened between 2005 and now – 12 per cent more than Charlotte-Mecklenburg’s existing early intervention system. Its false positive rate – officers flagged as high-risk by the system who didn’t behave aggressively – was 32 per cent lower than that of the existing system.

Ghani’s team is currently testing the system with the Los Angeles County Sheriff’s Department and the Knoxville Police Department in Tennessee. They will present the results of their pilot system at the International Conference on Knowledge Discovery and Data Mining in San Francisco later this month.

So the system works, but exactly what should be done after an officer has been flagged as a potential risk is still up for debate. The team is still working with the Charlotte-Mecklenburg police to find the best solution.

“The most appropriate intervention to prevent misconduct by an officer could be a training course, a discussion with a manager or changing their beat for a week,” Ghani adds.

Whatever the best course of action is, Ghani is confident that it should be implemented by humans, not a computer system.

Or adorable toy police cars, at least.
Image via Pixabay.

“I would not want any of those to be automated,” he says. “As long as there is a human in the middle starting a conversation with them, we’re reducing the chance for things to go wrong.”

Frank Pasquale, who studies the social impact of algorithms at the University of Maryland, is cautiously optimistic.

“In many walks of life I think this algorithmic ranking of workers has gone too far – it troubles me,” he says. “But in the context of the police, I think it could work.”

He believes that while such a system for tackling police misconduct is new, it’s likely that older systems created the problem in the first place.

“The people behind this are going to say it’s all new,” he says. “But it could be seen as an effort to correct an earlier algorithmic failure. A lot of people say that the reason you have so much contact between minorities and police is because the CompStat system was rewarding officers who got the most arrests.”

CompStat, short for Computer Statistics, is a police management and accountability system used to implement the “broken windows” theory of policing — the idea that severely punishing minor infractions like public drinking and vandalism helps create an atmosphere of law and order, and will thus bring down serious crime. Many police researchers have suggested that the approach has led to the current dangerous tension between police and minority communities.

Pasquale warns that the University of Chicago system is not infallible. Just like any other system, it’s going to suffer from biased data — for example, a black police officer in a white community will likely get more complaints than a white colleague, he says, because the police can be subject to racism, too. Giving officers some channel to seek redress will be important.

“This can’t just be an automatic number cruncher.”

This machine 3-D prints metal objects in mid-air

Credit: YouTube

Harvard researchers have demonstrated an all-new 3-D printing technique that creates metal objects with complex shapes right in mid-air. This is fundamentally different from the approach of traditional 3-D printers, which ooze polymer material layer by layer. The new fabrication technique could prove very useful in the production of flexible, wearable electronics, sensors, antennas, and biomedical devices.

To make objects in mid-air, the Harvard printer injects silver nanoparticles through a nozzle, then immediately fires a focused laser beam onto the material to harden it. The nozzle can move along the x, y, and z axes, and also in combination with a rotary print stage. This high degree of freedom means complex metal shapes can be printed — shapes previously difficult, if not impossible, to make with traditional techniques.

As you can see in the demo video below, the researchers made everything from coils to a butterfly out of silver wires narrower than a hair’s width.

This was a tricky job, though. The main challenge was syncing the nozzle “ink” and the laser, the researchers report in Proceedings of the National Academy of Sciences.

“If the laser gets too close to the nozzle during printing, heat is conducted upstream, which clogs the nozzle with solidified ink,” said Wyss Institute Postdoctoral Fellow Mark Skylar-Scott. “To address this, we devised a heat transfer model to account for temperature distribution along a given silver-wire pattern, allowing us to modulate the printing speed and distance between the nozzle and laser to elegantly control the laser annealing process ‘on the fly.’”

“This sophisticated use of laser technology to enhance 3-D printing capabilities not only inspires new kinds of products, it moves the frontier of solid free-form fabrication into an exciting new realm, demonstrating once again that previously accepted design limitations can be overcome by innovation,” said Wyss Institute Director Donald Ingber, who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital, as well as professor of bioengineering at SEAS.

‘Beauty Machine’ turns average into knockout

Beauty lies in the eye of the beholder, or at least that’s what we used to hear as kids from our parents. Well, scientists say our parents were wrong: after creating a computer that recognizes attractiveness in women, they have now managed to create the world’s first beauty machine. While this machine can’t (yet) make you a knockout, it can make a picture of you look far more beautiful.

Researchers at Tel Aviv University invented this machine, which turns pictures of average people into images that could turn many heads. Although it currently works only in digital format, as it is developed further it could guide plastic surgeons and even become a feature incorporated into cameras.

“Beauty, contrary to what most people think, is not simply in the eye of the beholder,” says lead researcher Prof. Daniel Cohen-Or of the Blavatnik School of Computer Sciences at Tel Aviv University. With the aid of computers, attractiveness can be objectified and boiled down to a function of mathematical distances or ratios, he says. This function is the basis for his beauty machine.
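
To make the “distances and ratios” idea concrete, here is a toy sketch that measures one facial proportion from landmark coordinates. The landmarks and the choice of ratio are invented for illustration; an actual beauty function would combine many such measurements learned from rated faces.

```python
import numpy as np

# Hypothetical facial landmarks in normalized image coordinates.
landmarks = {
    "left_eye":  np.array([0.35, 0.45]),
    "right_eye": np.array([0.65, 0.45]),
    "mouth":     np.array([0.50, 0.75]),
}

eye_distance = np.linalg.norm(landmarks["right_eye"] - landmarks["left_eye"])
eye_to_mouth = np.linalg.norm(landmarks["mouth"] - landmarks["left_eye"])

# One of many proportions such a function could score, then nudge toward
# values the training data rated as more attractive.
ratio = eye_distance / eye_to_mouth
print(f"eye-distance / eye-to-mouth ratio: {ratio:.2f}")
```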

“We’ve run the faces of people like Brigitte Bardot and Woody Allen through the machine and most people are very unhappy with the results,” he admits. “But in unfamiliar faces, most would agree the output is better.” Prof. Cohen-Or now plans on developing the beauty machine further — to add the third dimension of depth.