Tag Archives: software

Microsoft AI boasts 97% accuracy in detecting software bugs

Image credits: Markus Spiske.

Software bugs are a tale as old as time — which, in the case of programming, means about 75 years. In 1947, programmer Grace Murray Hopper was working on a Mark II Computer at Harvard University when she noticed a moth that was stuck in the relay, preventing the computer program from running. It was the first “bug”, and countless others have followed since then.

In the history of programming, bugs have ranged from harmless to absolutely catastrophic. In 1986 and 1987, several patients were killed after a Therac-25 radiation therapy device malfunctioned due to an error by an inexperienced programmer, and a software bug might have also triggered one of the largest non-nuclear explosions in history, at a Soviet trans-Siberian gas pipeline.

While events such as this are rare, it’s safe to say that software bugs can do a lot of damage and waste a lot of time (and resources). According to a recent analysis, the average programmer produces 70 bugs per 1,000 lines of code, with each bug demanding 30 times more time to fix than it took to write the code in the first place. In the US alone, an estimated $113 billion is spent identifying and fixing code bugs.

That might soon change.

Microsoft recently announced the creation of a machine learning model that can accurately identify high-priority bugs 97% of the time. The model has an even higher rate of success (99%) in distinguishing between security and non-security bugs.

In a recent report, Scott Christiansen, a senior security program manager at Microsoft, praised the algorithm, adding that Microsoft’s ultimate goal was to design a bug-detection system that is “as close as possible” to the accuracy of a security expert.

“We discovered that by pairing machine learning models with security experts, we can significantly improve the identification and classification of security bugs.”

The bug detection system uses two statistical techniques: a term frequency-inverse document frequency (TF-IDF) algorithm examines bug reports for keywords and assesses their relevance, and a logistic regression model calculates the probability that a given report belongs to a specific class.

Then, the program classifies security and non-security bugs and ranks them as “critical”, “important”, or “low-impact”.
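To get a feel for how these two techniques fit together, here is a from-scratch sketch in Python: TF-IDF turns each bug-report title into a weighted keyword vector, and a small logistic regression maps that vector to a probability. The titles, labels, and numbers below are invented for illustration; this is not Microsoft's model or training data.

```python
import math
from collections import Counter

titles = [
    "Buffer overflow in authentication handler",   # security
    "SQL injection via unsanitized search field",  # security
    "Typo in settings dialog label",               # non-security
    "Crash when opening empty project file",       # non-security
]
labels = [1, 1, 0, 0]  # 1 = security bug, 0 = non-security bug

def fit_tfidf(docs):
    """Return a function mapping a document to its TF-IDF vector."""
    n_docs = len(docs)
    tokenized = [d.lower().split() for d in docs]
    vocab = sorted({w for toks in tokenized for w in toks})
    df = {w: sum(1 for toks in tokenized if w in toks) for w in vocab}
    def vectorize(doc):
        counts = Counter(doc.lower().split())
        total = sum(counts.values()) or 1
        # term frequency times inverse document frequency, one entry per word
        return [counts[w] / total * math.log(n_docs / df[w]) for w in vocab]
    return vectorize

def train_logistic(X, y, lr=0.5, epochs=500):
    """Plain gradient-descent logistic regression."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # predicted P(security)
            w = [wj - lr * (p - yi) * xj for wj, xj in zip(w, xi)]
            b -= lr * (p - yi)
    return w, b

vectorize = fit_tfidf(titles)
w, b = train_logistic([vectorize(t) for t in titles], labels)

def p_security(title):
    z = sum(wj * xj for wj, xj in zip(w, vectorize(title))) + b
    return 1.0 / (1.0 + math.exp(-z))

print(round(p_security("Stack overflow in authentication code"), 2))
```

Security-flavored words like "overflow" and "injection" end up with positive weights, so unseen titles containing them score high; the real system presumably works on far richer features and vastly more data.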

The algorithm is still a work in progress, but Microsoft has announced that it will make its methodology open source on GitHub, which could end up saving a lot of time and energy for coders all around the world.

In the meantime, you can read a published academic paper, Identifying security bug reports based solely on report titles and noisy data, for more details.

“Every day, software developers stare down a long list of features and bugs that need to be addressed,” Christiansen said. “Security professionals try to help by using automated tools to prioritize security bugs, but too often, engineers waste time on false positives or miss a critical security vulnerability that has been misclassified. To tackle this problem, data science and security teams came together to explore how machine learning could help.”


‘Self-aware’, predatory, digital slug mimics the behavior of the animal it was modeled on

Upgrade, or the seeds of a robot uprising? U.S. researchers report they’ve constructed an artificially intelligent ocean predator that behaves a lot like the organism it was modeled on.

Slug P. californica.

Image credits Tracy Clark.

This frightening, completely digital predator — dubbed “Cyberslug” — reacts to food, threats, and members of its own ‘species’ much like the living animal that formed its blueprint: the sea slug Pleurobranchaea californica.

Slug in the machine

Cyberslug owes this remarkable resemblance to its biological counterpart to a trait that is rare among AIs — it is, albeit to a limited extent, self-aware. According to University of Illinois (UoI) at Urbana-Champaign professor Rhanor Gillette, who led the research, this means that the simulated slug knows when it’s hungry or threatened, for example. The program has also learned, through trial and error, which other kinds of virtual critters it can eat and which will fight back in the simulated world the researchers placed it in.

“[Cyberslug] relates its motivation and memories to its perception of the external world, and it reacts to information on the basis of how that information makes it feel,” Gillette said.

While slugs admittedly aren’t the most terrifying of ocean dwellers, they do have one quality that made them ideal for the team — they’re quite simple beings. Gillette goes on to explain that in the wild, sea slugs typically handle every interaction with other creatures by going through a three-item checklist: “Do I eat it? Do I mate with it? Or do I flee?”

Though biologically simple, this process is quite complicated to reproduce inside a computer program. That’s because, in order to make the right choice, an organism must be able to sense its internal state (i.e. whether it is hungry or not), obtain and process information from the environment (does this creature look tasty or threatening?), and integrate past experience (did this animal bite or sting me last time?). In other words, making the right choice requires the animal to be aware of, and understand, both its own state and that of the environment, as well as the interaction between the two — which is the basis of self-awareness.

Behavior chart slug.

Schematic of the approach-avoid behavior in the slug.
Image credits Jeffrey W. Brown et al., 2018, eNeuro.

Some of Gillette’s previous work focused on the brain circuits that allow sea slugs to make these choices in the wild, mapping their function “down to individual neurons”. The next step was to test the accuracy of their models — and the best way to do this was to recreate the circuits of the animals’ brains and let them loose inside computer simulations. One of the earliest such circuit boards to represent the sea slug‘s brain, constructed by co-author Mikhail Voloshin, a software engineer at the UoI, was housed in a plastic foam takeout container.

In the meantime, the duo have refined both their hardware and the code used to simulate the critters. Cyberslug’s decision-making is based on complex algorithms that estimate and weigh its individual goals, just like a real-life slug would.

“[P. californica‘s] default response is avoidance, but hunger, sensation and learning together form their ‘appetitive state,’ and if that is high enough the sea slug will attack,” Gillette explains. “When P. californica is super hungry, it will even attack a painful stimulus. And when the animal is not hungry, it usually will avoid even an appetitive stimulus. This is a cost-benefit decision.”

Cyberslug behaves the same way. The more it eats, for example, the more satiated it becomes, and the less likely it will be to bother or attack something else (no matter how tasty). Over time, it can also learn which critters to avoid and which can be preyed upon with impunity. However, if hungry enough, Cyberslug will throw caution to the wind and attack even prey that’s adept at fighting back, if nothing less belligerent comes around for it to eat.
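The cost-benefit rule Gillette describes can be sketched in a few lines of Python. To be clear, this is an invented toy, not the actual Cyberslug code; the variable names, numbers, and threshold are all illustrative.

```python
def decide(hunger, taste, learned_pain, threshold=1.0):
    """Return 'attack' or 'avoid' from a simple cost-benefit sum.

    hunger:       0 (sated) to 2 (starving)
    taste:        how appetitive the stimulus is, 0 to 1
    learned_pain: remembered cost of attacking this prey, 0 to 1
    """
    appetitive_state = hunger + taste - learned_pain
    return "attack" if appetitive_state > threshold else "avoid"

# A nearly sated slug avoids even an appetitive stimulus...
print(decide(hunger=0.1, taste=0.8, learned_pain=0.0))  # avoid
# ...while a starving one attacks prey it remembers fighting back.
print(decide(hunger=2.0, taste=0.5, learned_pain=0.8))  # attack
```

The point of the toy is that "avoid" is the default: only when hunger, sensation, and learning together push the appetitive state over the threshold does the attack happen.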

“I think the sea slug is a good model of the core ancient circuitry that is still there in our brains that is supporting all the higher cognitive qualities,” Gillette said. “Now we have a model that’s probably very much like the primitive ancestral brain. The next step is to add more circuitry to get enhanced sociality and cognition.”

This isn’t the first time we’ve seen researchers ‘digitizing’ the brains of simpler creatures — and this process holds one particular implication that I find fascinating.

Brains are, when you boil everything down, biological computers. Most scientists are pretty confident that we’ll eventually develop artificial intelligence, and sooner rather than later. But it also seems to me that there’s an unspoken agreement that the crux falls on the “artificial” part; that such constructs would always be lesser, compared to ‘true’, biological intelligence.

However, when researchers can quite successfully take a brain’s functionality and print it on a computer chip, doesn’t that distinction between artificial and biological intelligence look more like one of terminology rather than one of nature? If the computer can become the brain, doesn’t that make artificial life every bit as ‘true’ as our own, as worthy of recognition and safeguarding as our own?

I’d love to hear your opinion on that in the comments below.

The paper “Implementing Goal-Directed Foraging Decisions of a Simpler Nervous System in Simulation” has been published in the journal eNeuro.

Want to work on NASA’s software and get paid for it? You’ll love this challenge

NASA is looking for programmers to help them upgrade the agency’s processing power. They’ve started a competition to find a contender that can tweak the FUN3D design software to run 10 to 10,000 times faster on the Pleiades supercomputer — without sacrificing any accuracy.

Nasa's challenge.

Image credits NASA.

Nerds of the world: become excited! NASA wants you to tweak their computers. The agency is sponsoring a competition called the High Performance Fast Computing Challenge (HPFCC) to find someone who can give their software more oomph.

“This is the ultimate ‘geek’ dream assignment,” said Doug Rohn, director of NASA’s Transformative Aeronautics Concepts Program (TACP). “Helping NASA speed up its software to help advance our aviation research is a win-win for all.”

The culprit: FUN3D. This software is an integral part of NASA’s “three-legged stool” aviation research and design process: one leg covers initial design testing with computational fluid dynamics (CFD), which draws on a supercomputer for the numerical analysis needed to solve flow problems. The second leg consists of building scale models to be tested in wind tunnels, to confirm or refute the CFD results. The third leg is to test experimental craft in a pilotless configuration to see exactly what each vehicle can do in real-life conditions.

Shortening the leg


The HPFCC is aimed at improving that first, computational leg. Because of the sheer complexity of the concepts involved, even the fastest supercomputers have trouble working through the models in real time, so a little tweaking is in order to speed up the process.

FUN3D is written predominantly in Modern Fortran. The code is owned by the U.S. government, so NASA had to require all participants to be U.S. citizens over the age of 18 to conform to strict export restrictions. The agency is looking for people to download the code, analyze how it works, find the strands of code that bottleneck its performance, and then think of possible modifications that could reduce overall computational time.

And it doesn’t have to be anything groundbreaking, either. De-cluttering or simplifying a single subroutine so that it runs a few milliseconds faster might not sound like much, but if the program has to call it millions of times — it adds up to a huge improvement.
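As a toy illustration of the principle (in Python rather than FUN3D's Fortran), here is the classic fix of hoisting work that is identical on every call out of a hot subroutine. The function names and the computation itself are made up; only the pattern matters.

```python
import math

def slow_step(x, n):
    # recomputes an identical constant on every single call
    return x / math.sqrt(2.0 * math.pi * n)

_SCALE_CACHE = {}

def fast_step(x, n):
    # computes the constant once per distinct n, then reuses it
    scale = _SCALE_CACHE.get(n)
    if scale is None:
        scale = _SCALE_CACHE[n] = math.sqrt(2.0 * math.pi * n)
    return x / scale

# Same result either way; the saving per call is tiny, but a routine
# like this may be called millions of times per simulation run.
assert slow_step(3.0, 8) == fast_step(3.0, 8)
```

Shaving a microsecond from a routine called ten million times saves ten seconds per run, which is exactly the kind of compounding the challenge is after.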

The HPFCC is supported by two of NASA’s partners, HeroX and TopCoder, and has two categories you can compete in: ideation, focusing on improvements to the algorithms themselves, and architecture, focusing on tweaking the overall structure of the program. The prize purse of US$55,000 will be distributed among first and second finishers in these two categories. If you want to try your brain against the challenge, all you have to do is visit this page. Code submissions have to be received by 5 p.m. EDT on June 29. The winners will be announced August 9.

For more information about this challenge, the FUN3D software, or NASA’s Pleiades supercomputer, send an email to hq-fastcomputingchallenge [at] mail.nasa [dot] gov.

AI can write new code by borrowing lines from other programs

DeepCoder, a system put together by researchers at Microsoft and the University of Cambridge, allows machines to write their own programs. For now, it is limited to simple programs, such as those seen at programming competitions, but the tool could make it much easier for people who don’t know how to write code to create software.

Image credits: Pexels.

In a world run more and more via a screen, knowing how to code — and code fast — is a good skill to have. Still, it’s not a very common one. With this in mind, Microsoft researchers have teamed up with their Cambridge counterparts to produce a system that allows machines to build simple programs from a basic set of instructions.

“All of a sudden people could be so much more productive,” says Armando Solar-Lezama at the Massachusetts Institute of Technology, New Scientist reports.

“They could build systems that it [would be] impossible to build before.”

DeepCoder relies on a method called program synthesis, which creates programs by ‘stealing’ lines of code from existing software — just like many human programmers do. Initially given a list of inputs and outputs for each fragment of code, DeepCoder learned which bits do what and how they can fit together to reach the required result.

It uses machine learning to search databases of source code for building blocks, which it then sorts according to their probable usefulness. One advantage it has over humans is that DeepCoder can search for code far faster and more thoroughly than a programmer could. In the end, this allows the system to make unexpected combinations of source code to solve various tasks.
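Here is a toy sketch, in Python, of what program synthesis from input-output examples looks like. It is not DeepCoder itself: where DeepCoder uses a neural network to predict which building blocks to try first, this sketch simply tries candidates in a fixed order, with the block names and examples invented for illustration.

```python
from itertools import product

# Building blocks "borrowed" from existing code. In DeepCoder a neural
# network ranks blocks by predicted usefulness; here the dict order
# simply stands in for that ranking.
BLOCKS = {
    "sort_desc": lambda xs: sorted(xs, reverse=True),
    "head3":     lambda xs: xs[:3],
    "reverse":   lambda xs: list(reversed(xs)),
    "halve":     lambda xs: [x // 2 for x in xs],
}

def synthesize(examples, max_len=3):
    """Return the shortest pipeline of blocks consistent with all examples."""
    for length in range(1, max_len + 1):
        for names in product(BLOCKS, repeat=length):
            def run(xs, names=names):
                for name in names:
                    xs = BLOCKS[name](xs)
                return xs
            if all(run(inp) == out for inp, out in examples):
                return list(names)
    return None

# Target behaviour: "return the three largest elements, descending".
examples = [([5, 1, 9, 3], [9, 5, 3]), ([2, 8, 4, 6], [8, 6, 4])]
print(synthesize(examples))  # ['sort_desc', 'head3']
```

Brute-force enumeration like this blows up combinatorially as programs get longer, which is exactly why learning to rank the blocks first matters so much for speed.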


Ultimately, the researchers hope DeepCoder will give non-coders a tool that can start from a simple idea and build software around it, says Marc Brockschmidt, one of DeepCoder’s creators at Microsoft Research in Cambridge, UK.

“It could allow non-coders to simply describe an idea for a program and let the system build it,” he said.

Researchers have dabbled in automated code-writing software in the past, but nothing on the level DeepCoder can achieve. In 2015 for example, MIT researchers created a program which could automatically fix bugs in software by replacing faulty lines of code with material from other programs. DeepCoder, by contrast, doesn’t need a pre-written piece of code to work with, it builds its own.

It’s also much faster than previous programs. DeepCoder takes fractions of a second to create working programs where older systems needed several minutes of trial and error before reaching a workable solution. Because DeepCoder learns which combinations of source code work and which ones don’t as it goes along, it improves its speed every time it tackles a new problem.

At the moment, DeepCoder can only handle tasks that can be solved in around five lines of code — but in the right language, five lines are enough to make a pretty complex program, the team says. Brockschmidt hopes that future versions of DeepCoder will make it very easy to build basic programs that scrape information from websites for example, without a programmer having to devote time to the task.

“The potential for automation that this kind of technology offers could really signify an enormous [reduction] in the amount of effort it takes to develop code,” says Solar-Lezama.

Brockschmidt is positive that DeepCoder won’t put programmers out of a job, however. With the program taking over some of the most tedious parts of the work, he says, coders will be free to handle more complex tasks.


A software bug could render the last 15 years of brain research meaningless

A new study suggests that our fMRI technology might be relying on faulty algorithms — a bug the researchers found in fMRI-specific software could invalidate the past 15 years of research into human brain activity.

Image credits Kai Stachowiak/Publicdomainpictures

The best tool we have to measure brain activity today is functional magnetic resonance imaging (fMRI). It’s so good, in fact, that we’ve come to rely on it heavily — which isn’t a bad thing, as long as the method is sound and provides accurate readings. But if the method is flawed, the results of years of research about what our brains look like during exercise, gaming, love, drug usage and more would be put under question. Researchers from Linköping University in Sweden have performed a study of unprecedented scale to test the validity of common fMRI analysis methods, and their results are not encouraging.

“Despite the popularity of fMRI as a tool for studying brain function, the statistical methods used have rarely been validated using real data,” the researchers write.

The team, led by Anders Eklund, gathered resting-state fMRI data from 499 healthy individuals from databases around the world and split them into 20 groups. They then measured the groups against each other, resulting in a staggering 3 million random comparisons, which they used to test the three most popular software packages for fMRI analysis: SPM, FSL, and AFNI.

While the team expected to see some differences between the packages (of around 5 percent), the findings stunned them: the software resulted in false-positive rates of up to 70 percent. This suggests that some of the results are so inaccurate that they might be showing brain activity where there is none — in other words, the activity they show is the product of the software’s algorithm, not of the brain being studied.
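The logic of the validation is easy to sketch: take data with no real group differences, split it into random groups over and over, and count how often a test still declares a "difference" at p < 0.05. A well-calibrated method should flag about 5% of such null comparisons; the study found cluster-level rates of up to 70%. The simulation below is a bare-bones stand-in using made-up numbers, not the authors' pipeline.

```python
import random
import statistics

random.seed(0)

# One summary value per "subject" (e.g. average activity in a region).
# Everyone is drawn from the same distribution, so any group difference is noise.
subjects = [random.gauss(0.0, 1.0) for _ in range(40)]

def mean_diff(a, b):
    return abs(statistics.mean(a) - statistics.mean(b))

n_comparisons, n_perm = 200, 200
false_positives = 0
for _ in range(n_comparisons):
    random.shuffle(subjects)            # random 20-vs-20 split of null data
    g1, g2 = subjects[:20], subjects[20:]
    observed = mean_diff(g1, g2)
    pooled = g1 + g2
    beats = 0
    for _ in range(n_perm):             # permutation test for this split
        random.shuffle(pooled)
        if mean_diff(pooled[:20], pooled[20:]) >= observed:
            beats += 1
    if beats / n_perm < 0.05:
        false_positives += 1

rate = false_positives / n_comparisons
print(rate)  # a calibrated test should land somewhere near 0.05
```

The alarming result of the study was that the cluster-level inference in the real packages behaved nothing like this calibrated baseline.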

“These results question the validity of some 40,000 fMRI studies and may have a large impact on the interpretation of neuroimaging results,” the paper reads.

One of the bugs they identified had been in the software for the past 15 years. It was finally corrected in May 2015, around the time the team started writing their paper, but the finding still calls into question the results of papers that relied on fMRI before that point.

So what is actually wrong with the method? Well, fMRI relies on a massive magnetic field pulsating through a subject’s body that can pick up on changes of blood flow in areas of the brain. These minute changes signal that certain brain regions have increased or decreased their activity, and the software interprets them as such. The issue is that when scientists look at the data, they’re not looking at the actual brain — what they’re seeing is an image of the brain divided into tiny ‘voxels’, then interpreted by a computer program, said Richard Chirgwin for The Register.

“Software, rather than humans … scans the voxels looking for clusters,” says Chirgwin. “When you see a claim that ‘Scientists know when you’re about to move an arm: these images prove it,’ they’re interpreting what they’re told by the statistical software.”

Because fMRI machines are expensive to use — around US$600 per hour — studies usually employ small sample sizes and there are very few (if any) replication experiments done to confirm the findings. Validation technology has also been pretty limited up to now.

Since fMRI machines became available in the early ’90s, neuroscientists and psychologists have been faced with a whole lot of challenges when it comes to validating their results. But Eklund is confident that as fMRI results are being made freely available online and validation technology is finally picking up, more replication experiments can be done and bugs in the software identified much more quickly.

“It could have taken a single computer maybe 10 or 15 years to run this analysis,” Eklund told Motherboard. “But today, it’s possible to use a graphics card”, to lower the processing time “from 10 years to 20 days”.

So what about the nearly 40,000 papers that could now be in question? All we can do is try to replicate their findings and see which hold up and which don’t.

The full paper, titled “Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates,” has been published online in the journal PNAS.

Lie detector uses high-profile court cases to spot cheats better than a human

Researchers used telltale signs of lying from over 120 high-profile court cases to teach an algorithm how to spot the best liars. Unlike a polygraph, which infers truthfulness from physiological parameters like heart rate, respiration, and body temperature, the software developed by University of Michigan researchers makes use of visual and verbal cues only — the kind an FBI profiler might use.


Still of Jim Carrey in the movie “Liar, Liar”.

After feeding hundreds of hours’ worth of video footage into the software, the researchers noticed some patterns. Those who lied in court moved their hands more, scowled or grimaced, said “um, ah, uh” more frequently, and attempted to create a sense of distance between themselves and their alleged crime by using words like “he” or “she” rather than “I” or “we.” Liars were also more likely to look questioners in the eye instead of looking away, and spoke more slowly. “Liars” were defined as those found guilty in court, which makes the labels inaccurate in the case of a wrongful conviction. It does, however, seem to be a better sampling pool than the alternative: a lab setting.
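As a toy illustration, the cues above could be combined into a simple deception score like the one below. The weights, counts, and threshold logic are invented for this example; the Michigan team's actual classifier is far more sophisticated.

```python
# Invented weights for the cues the study reports (higher = more
# indicative of lying); not the Michigan team's actual parameters.
CUE_WEIGHTS = {
    "hand_gestures":       0.8,  # liars gestured more
    "grimaces":            0.6,
    "filler_words":        0.5,  # "um, ah, uh"
    "distancing_pronouns": 0.7,  # "he"/"she" instead of "I"/"we"
    "direct_eye_contact":  0.4,  # liars held the questioner's gaze more
}

def deception_score(cue_counts):
    """Weighted sum of how often each cue was observed in testimony."""
    return sum(CUE_WEIGHTS[cue] * count for cue, count in cue_counts.items())

witness = {"hand_gestures": 4, "grimaces": 1, "filler_words": 3,
           "distancing_pronouns": 2, "direct_eye_contact": 5}
print(round(deception_score(witness), 2))  # 8.7
```

A real system would learn such weights from the labeled courtroom footage rather than hand-pick them, but the idea of mapping observed cue frequencies to a single risk score is the same.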

In a lab setting, it can be difficult to assess liars. “The stakes are not high enough,” said Rada Mihalcea, a professor of computer science and engineering at UM-Flint. “We can offer a reward if people can lie well — pay them to convince another person that something false is true. But in the real world there is true motivation to deceive.”


In trials, the software was able to identify liars 75% of the time, compared to 50% for humans, which is no better than chance. Next, the researchers plan to integrate more data, again in a non-intrusive way. They’re thinking about using thermal imaging cameras to capture physiological factors, including heart rate, respiration rate, and body temperature — just like a polygraph, minus all the wires. This would help improve accuracy while avoiding panic; a lot of polygraph tests send off false positives simply because the interviewed person is too nervous.

Of course, you could never use this as evidence in court. You’d need 100% accuracy, and that’s not possible. Instead, law enforcement could find this sort of software very useful during profiling and questioning so they can improve their workflow and filter out suspects better.

The paper can be found here.


Software partnership makes online fundraising easier

Online charity will get even simpler, as Moneris Solutions, one of the most important payment processors in North America, and Global Cloud, known for providing the popular fundraising software DonorDrive, recently closed a partnership.


Moneris will integrate its proprietary eSelectPlus payment gateway into DonorDrive, enabling Global Cloud clients to process electronic donations and payments directly through their DonorDrive software and Moneris. Nonprofits of all types in the US and Canada will be able to benefit from the collaboration, as it allows fundraising leaders to manage all of their online campaigns with a single solution rather than coordinating multiple vendors. Recurring donations, 48-hour funding, easy setup, and stringent data security are other benefits Moneris and Global Cloud jointly offer.

In the past few years, online fundraising and electronic donations alike (“buy me a beer/coffee” PayPal donations are often used by bloggers as a gratitude token, with much success) have soared in popularity. It was only natural that nonprofit organizations paid more attention to the online leg of their fundraising efforts, and with this new partnership between Global Cloud and Moneris, fundraising software has never been easier to use.

“DonorDrive is a best-in-class product and Global Cloud is a best-in-class company,” said Joe Garza, Senior Vice President, North American Alliances for Moneris Solutions. “We’re extremely excited to be working with them. Together we can offer nonprofits unmatched expertise and experience in both payment processing and online fundraising.”

“For Global Cloud, the decision to partner with Moneris made complete sense,” said Paul Ghiz, Managing Partner for Global Cloud and a company founder. “They’re one of North America’s largest, most reputable payment processors. In addition, they have considerable experience working with the nonprofit community and understand their fundraising and payment processing needs well.”



Cellular operating system set to revolutionize synthetic biology

Nottingham University synthetic biology

University of Nottingham researchers are currently involved in a synthetic biology project so ambitious in scope that, if successful, it could completely revolutionize the field. Their aim: developing programmable cellular life that can work as an “operating system.”

Currently, the scientists are studying how to make the E. coli bacterium programmable, and if their trials prove successful, they’ll be able to easily and quickly configure other cells to perform various tasks. E. coli has already been used by British scientists at London’s Imperial College to create a bio-transistor. Researchers can also make entirely new life forms, which do not currently exist in nature, to fit a certain purpose. This particular aspect of synthetic biology has earned its practitioners a bad reputation among creationists, who dub their work “playing God”.

Professor Natalio Krasnogor of the University’s School of Computer Science, who leads the Interdisciplinary Computing and Complex Systems Research Group, said: “We are looking at creating a cell’s equivalent to a computer operating system in such a way that a given group of cells could be seamlessly re-programmed to perform any function without needing to modify its hardware.”

More importantly, if the researchers manage to cross the finish line with their project, the resulting in vivo biological equivalent of a computer operating system would make it possible to build a database of easy-to-implement cellular programs, allowing the entire field of synthetic biology to move exponentially faster toward discoveries rather than inching forward by trial and error, as it does today. The Nottingham scientists are confident this can be achieved within five years.

Practical applications of this kind of bio-technology would be almost inestimable in value. Customized living cells could be tailored to clean up environmental disasters, scrub unwanted carbon from the air, pull pollutants from drinking water, attack pathogens inside the human body, protect food sources from agricultural pests, and so on. You get the picture: this is the kind of thing that could bring humanity into a golden age of science.

University of Nottingham press release via Popular Science


National Geographic wallpaper software

Just a few days ago I was telling you about the APOD Wallpaper software, which automatically changes your wallpaper when a new picture is uploaded to APOD. This time, the software is a bit different. It has to be said, some of the best pictures and wallpapers ever have been posted on National Geographic, but downloading them is somewhat of a problem, considering there are over 5,000 to go around.

Well, that’s all been taken care of! The Nat Geo freeware (download here) lets you take those lovely images of nature and set them as your wallpaper with just a click. The program also detects your resolution and adapts the picture to it.

National Geographic has promised there will be constant updates and added packages to the software:

Future builds will add even more packs like: History, Adventure, Traveler, Sea Monsters, Seabed, Blue Earth, Green Earth, Forces of Nature, Lewis & Clark, Roar and many more! (As you can see, I have my hands full, so don’t miss out on these great wallpapers when they’ll be added!)

Picture source

APOD Wallpaper software

In case you don’t know, APOD is short for Astronomy Picture Of the Day. It’s home to some awesome astronomy pictures. Anyway, I recently came across this software, which takes the latest APOD picture and sets it as your background, so you can see these great images each day without lifting a finger. It also adds a description. I tested it, and it works quite well! Download here

Just one amazing pic from APOD