Tag Archives: code

Robot and human hands.

Robot see, robot do: MIT software allows you to instruct a robot without having to code

Researchers have put together C-LEARN, a system that should allow anyone to teach their robot any task without having to code.

The robot chef from the Easy Living scene in Horizons at EPCOT Center.
Image credits Sam Howzit / Flickr.

Quasi-intelligent robots are already a part of our lives, and someday soon, their full-fledged robotic offspring will be too. But until (or rather, unless) they reach a level of intelligence where we can teach them verbally, as you would a child, instructing a robot will require you to know how to code. Since coding is complicated, more complicated than just doing the dishes yourself, anyway, it’s unlikely that regular people will have much use for robots.

Unless, of course, we could de-code the process of instructing robots. Which is exactly what roboticists at MIT have done. Called C-LEARN, the system should make instructing your robot as easy as teaching a child. Which is a bit of good-news-bad-news, depending on how you feel about the rise of the machines: good, because we can now have robot friends without learning to code; bad, because technically the bots can use the system to teach one another.

How to train your bot

So as I’ve said, there are two ways you can go about it. The first is to program the robot, which requires coding expertise and takes a lot of time. The other is to show the bot what you want it to do, either by tugging on its limbs or moving digital representations of them around, or by doing the task yourself and having it imitate you. For us muggles, the latter is the way to go, but it takes a lot of work to teach a machine even simple movements, and even then it can only repeat them, not adapt them.

C-LEARN is meant to offer a middle road and address the shortcomings of both methods by arming robots with a knowledge base of simple steps that they can intelligently apply when learning a new task. A human user first helps build up this base by working with the robot. The paper describes how the researchers taught Optimus, a two-armed robot, by using software to simulate the motion of its limbs. Like so:

The researchers described movements such as grasping the top of a cylinder or the side of a block in different positions, repeating each motion seven times from each position. The motions varied slightly each time, so the robot could look for underlying patterns in them and integrate those patterns into its data bank. If, for example, the simulated grasper always ended up parallel to the object, the robot would note that this alignment is important to the process and would constrain its future motions to preserve it.
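To get a feel for the idea, here is a toy sketch (my illustration, not the team’s actual algorithm) of how a constraint could be mined from repeated demonstrations: any feature that barely varies across the demos is treated as intentional. The feature names and tolerance value are made up for the example.

```python
# Hypothetical sketch of constraint inference from repeated demonstrations:
# a feature that barely varies across demos is treated as a hard constraint.
import statistics

def infer_constraints(demos, tolerance=0.05):
    """demos: list of dicts mapping feature name -> observed value."""
    constraints = {}
    for feature in demos[0]:
        values = [d[feature] for d in demos]
        if statistics.pstdev(values) < tolerance:
            # Low variance across demonstrations -> the feature is intentional.
            constraints[feature] = statistics.mean(values)
    return constraints

# Seven demos of a grasp: the angle to the object is nearly constant
# (a parallel grasp), while the approach distance varies freely.
demos = [
    {"angle_to_object": 0.01, "approach_distance": 0.30},
    {"angle_to_object": -0.02, "approach_distance": 0.25},
    {"angle_to_object": 0.00, "approach_distance": 0.41},
    {"angle_to_object": 0.02, "approach_distance": 0.33},
    {"angle_to_object": -0.01, "approach_distance": 0.28},
    {"angle_to_object": 0.01, "approach_distance": 0.37},
    {"angle_to_object": 0.00, "approach_distance": 0.22},
]
print(infer_constraints(demos))  # only "angle_to_object" survives
```

A real system works over full 6-DOF poses rather than scalar features, but the principle is the same: consistency across demonstrations signals intent.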

By this point, the robot is very similar to a young child “that just knows how to reach for something and grasp it,” according to lead researcher Claudia Pérez-D’Arpino. But starting from this database, the robot can learn new, complex tasks from a single demonstration. All you have to do is show it what you want done, then approve or correct its attempt.

Does it work?

Robot and human hands.

To test the system, the researchers taught Optimus four multistep tasks — to pick up a bottle and place it in a bucket, to grab and lift a horizontal tray using both hands, to open a box with one hand and use the other to press a button inside it, and finally to grasp a handled cube with one hand and pull a rod out of it with the other. Optimus was shown how to perform each task once, made 10 attempts at each, and succeeded 37 out of 40 times. Which is pretty good.

The team then went one step further and transferred Optimus’s knowledge base and its understanding of the four tasks to a simulation of Atlas, the famously bullied bot. It managed to complete all four tasks using the data. When the researchers corrupted the data bank by deleting some of the information (such as the constraint to keep the grasper parallel to the object), Atlas failed to perform the tasks. Such a system would let us transfer the motion models built up by one bot over thousands of hours of training and experience to any other robot, anywhere in the world, almost instantly.

D’Arpino is now testing whether having Optimus interact with people for the first time can refine its movement models. Afterward, the team wants to make the robots more flexible in how they apply the rules in their data banks, so that they can adjust their learned behavior to whatever situation they’re faced with.

The goal is to make robots that can perform complex, dangerous, or just plain boring tasks with high precision. Applications could include bomb disposal, disaster relief, high-precision manufacturing, and helping sick people with housework.

The findings will be presented later this month at the IEEE International Conference on Robotics and Automation in Singapore.

You can read the full paper “C-LEARN: Learning Geometric Constraints from Demonstrations for Multi-Step Manipulation in Shared Autonomy” here.

Want to work on NASA’s software and get paid for it? You’ll love this challenge

NASA is looking for programmers to help them upgrade the agency’s processing power. They’ve started a competition to find a contender that can tweak the FUN3D design software to run 10 to 10,000 times faster on the Pleiades supercomputer — without sacrificing any accuracy.

NASA's challenge.

Image credits NASA.

Nerds of the world: become excited! NASA wants you to tweak their computers. The agency is sponsoring a competition called the High Performance Fast Computing Challenge (HPFCC) to find someone who can give their software more oomph.

“This is the ultimate ‘geek’ dream assignment,” said Doug Rohn, director of NASA’s Transformative Aeronautics Concepts Program (TACP). “Helping NASA speed up its software to help advance our aviation research is a win-win for all.”

The culprit: FUN3D. This software is an integral part of NASA’s “three-legged stool” aviation research and design process. The first leg handles initial design testing with computational fluid dynamics (CFD) software, which runs on a supercomputer and relies on numerical analysis and data structures to solve flow problems. The second leg consists of building scale models to be tested in wind tunnels, to confirm or refute the CFD results. The third leg is to test experimental craft in a pilotless configuration to see exactly what each vehicle can do in real-life conditions.

Shortening the leg


The HPFCC is aimed at improving the first of these steps. Because of the sheer complexity of the models involved, even the fastest supercomputers have trouble working with and analyzing them in a timely fashion. So a little tweaking is in order to speed up the process.

FUN3D is written predominantly in Modern Fortran. The code is owned by the U.S. government, so to comply with strict export restrictions, NASA requires all participants to be U.S. citizens over the age of 18. The agency is looking for people to download the code, analyze how it works, find the strands of code that bottleneck its performance, and then think of possible modifications that could reduce overall computational time.

And it doesn’t have to be anything groundbreaking, either. De-cluttering or simplifying a single subroutine so that it runs a few milliseconds faster might not sound like much, but if the program has to call it millions of times — it adds up to a huge improvement.
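FUN3D itself is Fortran, but the principle is language-agnostic. Here is a generic illustration in Python (the function names and workload are invented for the example): hoisting a computation that doesn’t change between iterations out of a loop leaves the output identical, while the savings scale with how often the routine is called.

```python
# A micro-optimization of the kind the challenge is after: the result is
# identical, but the loop no longer recomputes an invariant value.
import math
import timeit

def slow(values, freq):
    # Recomputes the same sine for every element.
    return [v * math.sin(2.0 * math.pi * freq) for v in values]

def fast(values, freq):
    s = math.sin(2.0 * math.pi * freq)  # hoisted: computed once
    return [v * s for v in values]

data = list(range(1_000))
assert slow(data, 0.25) == fast(data, 0.25)  # same output, bit for bit

# Timed over a thousand calls here; a real solver might make millions.
print("slow:", timeit.timeit(lambda: slow(data, 0.25), number=1_000))
print("fast:", timeit.timeit(lambda: fast(data, 0.25), number=1_000))
```

Shaving milliseconds off one such routine is exactly the kind of “unglamorous” win that compounds into a large overall speedup.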

The HPFCC is supported by two of NASA’s partners, HeroX and TopCoder, and has two categories you can compete in: ideation, focusing on improvements to the algorithms themselves, and architecture, focusing on tweaking the overall structure of the program. The US$55,000 prize purse will be distributed among the first- and second-place finishers in these two categories. If you want to pit your brain against the challenge, all you have to do is visit this page. Code submissions have to be received by 5 p.m. EDT on June 29. The winners will be announced on August 9.

For more information about this challenge, the FUN3D software, or NASA’s Pleiades supercomputer, send an email to hq-fastcomputingchallenge [at] mail.nasa [dot] gov.

AI can write new code by borrowing lines from other programs

DeepCoder, a system put together by researchers at Microsoft and the University of Cambridge, allows machines to write their own programs. It’s currently limited to simple programs, such as those seen at programming competitions, but the tool could make it much easier for people who don’t know how to write code to create simple software.

Image credits: Pexels.

In a world run more and more via a screen, knowing how to code, and code fast, is a good skill to have. Still, it’s not a very common one. With this in mind, Microsoft researchers have teamed up with their Cambridge counterparts to produce a system that allows machines to build simple programs from a basic set of instructions.

“All of a sudden people could be so much more productive,” says Armando Solar-Lezama at the Massachusetts Institute of Technology, New Scientist reports.

“They could build systems that it [would be] impossible to build before.”

DeepCoder relies on a method called program synthesis, which allows the software to create programs by ‘stealing’ lines of code from existing programs — just like many human programmers do it. Initially given a list of inputs and outputs for each fragment of code, DeepCoder learned which bits do what, and how they can fit together to reach the required result.

It uses machine learning to search databases of source code for building blocks, which it then sorts according to their probable usefulness. One advantage it has over humans is that DeepCoder’s AI can search for code much faster and more thoroughly than a programmer could. In the end, this can allow the system to make unexpected combinations of source code to solve various tasks.
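The core loop of program synthesis can be sketched in a few lines. This toy version (my illustration, not DeepCoder’s actual code) brute-forces compositions of a tiny set of building blocks until one reproduces every input/output example; DeepCoder’s contribution is a learned model that predicts which blocks are likely needed, drastically pruning this search.

```python
# Toy program synthesis: enumerate compositions of known building blocks
# until one is consistent with every input/output example.
import itertools

# A tiny, made-up DSL of list-transforming building blocks.
BLOCKS = {
    "sort": sorted,
    "reverse": lambda xs: list(reversed(xs)),
    "double": lambda xs: [x * 2 for x in xs],
    "drop_neg": lambda xs: [x for x in xs if x >= 0],
}

def run(names, xs):
    # Apply the named blocks left to right.
    for n in names:
        xs = BLOCKS[n](xs)
    return xs

def synthesize(examples, max_depth=3):
    """Find a composition of blocks consistent with all (input, output) pairs."""
    for depth in range(1, max_depth + 1):
        for names in itertools.product(BLOCKS, repeat=depth):
            if all(run(names, i) == o for i, o in examples):
                return names
    return None

# Target behaviour: drop negatives, double the rest, sort ascending.
examples = [
    ([3, -1, 2], [4, 6]),
    ([0, 5, -7, 1], [0, 2, 10]),
]
print(synthesize(examples))  # one consistent composition of blocks
```

Note that several different compositions can satisfy the same examples; more examples narrow the search, which is exactly why DeepCoder is given input/output pairs rather than descriptions.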


Ultimately, the researchers hope DeepCoder will give non-coders a tool that can start from a simple idea and build software around it, says Marc Brockschmidt, one of DeepCoder’s creators at Microsoft Research in Cambridge, UK.

“It could allow non-coders to simply describe an idea for a program and let the system build it,” he said.

Researchers have dabbled in automated code-writing software in the past, but nothing on the level DeepCoder can achieve. In 2015, for example, MIT researchers created a program that could automatically fix bugs in software by replacing faulty lines of code with material from other programs. DeepCoder, by contrast, doesn’t need a pre-written piece of code to work with; it builds its own.

It’s also much faster than previous programs. DeepCoder takes fractions of a second to create working programs where older systems needed several minutes of trial and error before reaching a workable solution. Because DeepCoder learns which combinations of source code work and which ones don’t as it goes along, it improves its speed every time it tackles a new problem.

At the moment, DeepCoder can only handle tasks that can be solved in around five lines of code — but in the right language, five lines are enough to make a pretty complex program, the team says. Brockschmidt hopes that future versions of DeepCoder will make it very easy to build basic programs that scrape information from websites for example, without a programmer having to devote time to the task.

“The potential for automation that this kind of technology offers could really signify an enormous [reduction] in the amount of effort it takes to develop code,” says Solar-Lezama.

Brockschmidt is positive that DeepCoder won’t put programmers out of a job, however. With the program taking over some of the most tedious parts of the job, he says, coders will be free to handle more complex tasks.


Your smartwatch might be giving away your ATM PIN

Smart devices are quickly taking over our lives, but they may also be giving away our secrets.

Your smartwatch may be giving away your bank PIN. Image via Capitec.

We’ve already given most of our privacy away to smartphones and Facebook. They know where we are, who our friends are, what we like to buy and much more about our personality than we’d like to admit. But according to a new study, they may also have access to your bank account.

The authors say that if you combine data from the sensors embedded in wearable technologies, such as smartwatches and fitness trackers, with a PIN-cracking algorithm, you have an 80% chance of identifying a PIN code on the first try and an over-90% chance of cracking it within three tries.

Yan Wang, an assistant professor of computer science at the Stevens Institute of Technology, works on smartphone security and privacy. He says that wearable devices in particular pose a significant risk and can be exploited with relative ease.

“Wearable devices can be exploited,” said Wang. “Attackers can reproduce the trajectories of the user’s hand then recover secret key entries to ATM cash machines, electronic door locks and keypad-controlled enterprise servers.”

He and his colleagues conducted 5,000 key-entry tests on three key-based security systems, including an ATM, with 20 adults wearing a variety of devices over 11 months. Basically, regardless of the hand’s position and regardless of how much you try to conceal your movements, the accelerometers, gyroscopes, and magnetometers inside the wearables can still figure out what PIN you are typing. In other words, your smartwatch is tracking your hand movements and giving away your PIN.
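To see why the trajectory leaks so much, consider this simplified sketch (my illustration, not the researchers’ algorithm): if the sensors reveal only the hand’s displacement between consecutive key presses, matching those displacements against a standard keypad’s geometry already leaves just a handful of PINs that are geometrically possible. The layout and unit spacing are assumptions.

```python
# Standard phone-style keypad: 1-9 in a 3x3 grid, 0 below the 8.
KEYPAD = {d: (i % 3, i // 3) for i, d in enumerate("123456789")}
KEYPAD["0"] = (1, 3)

def candidate_pins(displacements):
    """displacements: (dx, dy) hand moves, in key units, between presses."""
    candidates = []
    for start, (sx, sy) in KEYPAD.items():
        pin, x, y = [start], sx, sy
        for dx, dy in displacements:
            x, y = x + dx, y + dy
            key = next((k for k, p in KEYPAD.items() if p == (x, y)), None)
            if key is None:
                break  # trajectory walks off the keypad; impossible start
            pin.append(key)
        else:
            candidates.append("".join(pin))
    return candidates

# Three recovered inter-press displacements narrow 10,000 PINs down to three.
print(candidate_pins([(1, 0), (0, 1), (-1, 1)]))  # ['1257', '2368', '5690']
```

The real attack must first recover those displacements from noisy accelerometer, gyroscope, and magnetometer traces, which is the hard part, but the geometry does the rest.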

According to the team, this is the first study to test this – at least the first scientific study. The required technology is still quite sophisticated, but with the right tools available, it’s worryingly easy to crack PIN codes.

“The threat is real, although the approach is sophisticated,” Wang added. “There are two attacking scenarios that are achievable: internal and sniffing attacks. In an internal attack, attackers access embedded sensors in wrist-worn wearable devices through malware. The malware waits until the victim accesses a key-based security system and sends sensor data back. Then the attacker can aggregate the sensor data to determine the victim’s PIN. An attacker can also place a wireless sniffer close to a key-based security system to eavesdrop sensor data from wearable devices sent via Bluetooth to the victim’s associated smartphones.”

The findings are just an early step in understanding these vulnerabilities, and at the moment there is no obvious fix for the risks involved. The authors do suggest that developers “inject a certain type of noise to data so it cannot be used to derive fine-grained hand movements, while still being effective for fitness tracking purposes such as activity recognition or step counts.” However, not all is grim.
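The suggested countermeasure is easy to picture. In this sketch (illustrative values, not from the paper), Gaussian noise added to a simulated walking signal scrambles sample-level detail while a simple threshold-with-hysteresis step counter still recovers the coarse answer.

```python
# Noise injection: fine-grained trajectories become unreliable, but coarse
# fitness features like step counts survive.
import math
import random

def add_noise(samples, sigma, seed=42):
    # Inject Gaussian noise into each raw sensor sample.
    rng = random.Random(seed)
    return [s + rng.gauss(0.0, sigma) for s in samples]

def count_steps(samples, hi=0.8, lo=0.2):
    # Count upward crossings of `hi`, re-arming only below `lo`; the
    # hysteresis keeps sample-level noise from double-counting a step.
    steps, armed = 0, True
    for s in samples:
        if armed and s > hi:
            steps += 1
            armed = False
        elif s < lo:
            armed = True
    return steps

# Simulated accelerometer trace: 10 strides as a sine wave, 100 samples each.
signal = [math.sin(2 * math.pi * 10 * t / 1000) for t in range(1000)]
print(count_steps(signal))                   # clean trace: 10 steps
print(count_steps(add_noise(signal, 0.15)))  # noisy trace: steps survive
```

The design tension is choosing a noise level high enough to defeat PIN reconstruction but low enough that features like these crossings remain usable.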

“Further research is needed, and we are also working on countermeasures,” concludes study co-author Yingying Chen, adding that while wearables are not easily hackable, they are hackable.

A paper on the new research, “Friend or Foe? Your Wearable Devices Reveal Your Personal PIN,” received the Best Paper Award at the ACM Conference on Information, Computer and Communications Security (ASIACCS) in Xi’an, China in May.

EDIT: We have corrected several minor errors in this article, as indicated by the authors of the study.

Researchers decode 18th century secret German society code

A team of Swedish scientists paired with a USC researcher to crack the Copiale Cipher, revealing the secret rituals and beliefs of a secret German society that had a fascination for ophthalmology.

Thousands of old and obscure symbols fill over 100 pages of text found in Berlin towards the end of the Cold War. For USC computer scientist Kevin Knight and two Swedish researchers, this was a challenge. So, after months of hard work and imagination, and after heading down the wrong path several times, they finally picked up the right trail and started deciphering it.

At first, they revealed only a single word: ceremonie, a variation of the German word for ceremony, but it was enough of a foothold to figure out the rest. Breaking the Copiale Cipher revealed fascinating information about a German secret society with an unusual fascination for eye surgery and ophthalmology.

But perhaps even more exciting than the society itself is the fact that, after centuries of failed attempts, they were actually able to break the code. In January, Knight began working with Beata Megyesi and Christiane Schaefer of Uppsala University in Sweden, and by April they had it solved. They ran statistics on 80 languages, trying to find patterns and clues hidden among the symbols, believing at first that the secret lay in the Roman letters scattered between the symbols that dotted the pages. Using a combination of computer calculation and human creativity, they finally found the key to it.
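The article doesn’t detail which statistics were run, but a classic tool of the genre is the index of coincidence, which measures how unevenly symbols repeat: natural-language text scores well above a uniformly random symbol stream, which helps tell ciphertext concealing real language from meaningless filler. A minimal sketch:

```python
# Index of coincidence: probability that two randomly chosen letters from
# the text are the same. Natural language is "clumpy"; random text is flat.
import random
from collections import Counter

def index_of_coincidence(text):
    counts = Counter(c for c in text.lower() if c.isalpha())
    n = sum(counts.values())
    if n < 2:
        return 0.0
    return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

english = ("to be or not to be that is the question whether tis nobler "
           "in the mind to suffer the slings and arrows of outrageous fortune")
rng = random.Random(0)
uniform = "".join(rng.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(1000))

# English typically scores near 0.067; uniform random text near 1/26 = 0.038.
print(round(index_of_coincidence(english), 3))
print(round(index_of_coincidence(uniform), 3))
```

Statistics like this, computed per candidate language, are how a team can test 80 languages against one ciphertext without reading a word of it.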

Graeme Hirst, a professor of computer science at the University of Toronto, said Knight’s work reminded him of that of Alan Turing, the British genius who cracked the German codes during WWII.

“Kevin and his team are channeling their inner Turing,” he said, “except they are faster and better because of all that we’ve learned.”

The key here lay not in raw analytic power, but in the nimble, creative way of thinking the researchers applied.

“This is something humans did,” he said, “not something computers did.”

Via LA Times

Origin of the Voynich manuscript pushed back even further

The Voynich manuscript is perhaps one of the most mysterious manuscripts of all time; it contains 240 pages written in an unknown language, filled with strange drawings, and bearing no clue as to its author. It has been studied by some of the world’s sharpest code breakers, but it has defied all deciphering attempts.

Recently, researchers from the University of Arizona used radiocarbon dating to push the manuscript’s origin back to the early 15th century, making it at least a century older than previously thought. They were able to do this because the parchment pages were made from animal skins, which contain organic material and so can be carbon dated.
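Radiocarbon dating itself reduces to one formula: an organism stops absorbing carbon-14 at death, and the remaining fraction decays with a half-life of about 5,730 years, so the age is t = -(T½ / ln 2) · ln(fraction remaining). The fractions below are illustrative, not the actual Voynich measurements.

```python
# Convert a measured carbon-14 fraction into an age in years.
import math

C14_HALF_LIFE_YEARS = 5730.0  # accepted half-life of carbon-14

def radiocarbon_age(fraction_remaining):
    # Solve N = N0 * (1/2)**(t / half_life) for t.
    return -(C14_HALF_LIFE_YEARS / math.log(2)) * math.log(fraction_remaining)

print(round(radiocarbon_age(0.5)))   # one half-life: 5730 years
print(round(radiocarbon_age(0.93)))  # ~93% remaining: roughly 600 years
```

In practice labs also calibrate against tree-ring records, since atmospheric carbon-14 levels have varied over time, which is how a raw age becomes a calendar date range.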

“This tome makes the ‘Da Vinci Code’ look downright lackluster. Alien characters, some resembling Latin letters, others unlike anything used in any known language, are arranged into what appear to be words and sentences, except they don’t resemble anything written – or read – by human beings,” the team said.

It may well be another decade or even a century until the Voynich manuscript is translated and its meaning understood, but until then it fascinates everyone who studies it, including Greg Hodgins, an assistant research scientist and assistant professor at the University of Arizona.

“I find this manuscript is absolutely fascinating as a window into a very interesting mind. Piecing these things together was fantastic,” Hodgins says. “It’s a great puzzle that no one has cracked, and who doesn’t love a puzzle?”