Tag Archives: algorithm

A drone-flying software outperforms human pilots for the first time

The rise of the machines won’t be as dramatic as in Terminator or the Animatrix if people can simply outrun the murderbots. And, currently, we can do that quite comfortably. Some robots can walk, some can run, but they tend to fall over pretty often, and most are not that fast. Until now, autonomous flying drones have also had a very hard time keeping up with human-controlled ones.

Image credits: Robotics and Perception Group, University of Zurich.

New research at the University of Zurich, however, might finally give robots the edge they need to catch up to their makers — or, at least, give flying drones that edge. The team developed a new algorithm that calculates optimal trajectories for each drone, taking into account their individual capabilities and limitations.

Speed boost

“Our drone beat the fastest lap of two world-class human pilots on an experimental race track,” says Davide Scaramuzza, who heads the Robotics and Perception Group at UZH and is the corresponding author of the paper. “The novelty of the algorithm is that it is the first to generate time-optimal trajectories that fully consider the drones’ limitations.”

“The key idea is, rather than assigning sections of the flight path to specific waypoints, that our algorithm just tells the drone to pass through all waypoints, but not how or when to do that,” adds Philipp Foehn, Ph.D. student and first author of the paper.

Battery life is one of the most stringent constraints drones face today, so they need to fly fast. The approach their software typically uses is to break the flight route into a series of waypoints and then calculate the best trajectory, acceleration, and deceleration for each segment.
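As a rough illustration of that segment-by-segment approach (not the UZH planner, which optimizes the whole trajectory at once), here is a minimal Python sketch: the waypoints and the speed and acceleration limits are made up, and flight time is totaled using a simple trapezoidal velocity profile for each leg.

```python
import math

# Hypothetical waypoints (x, y, z) in meters and assumed performance limits.
WAYPOINTS = [(0, 0, 1), (5, 0, 1), (5, 5, 2), (0, 5, 2)]
V_MAX = 8.0   # m/s, assumed top speed
A_MAX = 4.0   # m/s^2, assumed max acceleration/deceleration

def segment_time(p0, p1, v_max=V_MAX, a_max=A_MAX):
    """Time to fly one leg with a trapezoidal speed profile,
    starting and ending at rest (a big simplification)."""
    dist = math.dist(p0, p1)
    d_ramp = v_max ** 2 / a_max           # distance spent accelerating plus decelerating
    if dist >= d_ramp:                    # reaches top speed: trapezoidal profile
        return 2 * v_max / a_max + (dist - d_ramp) / v_max
    return 2 * math.sqrt(dist / a_max)    # never reaches top speed: triangular profile

total = sum(segment_time(a, b) for a, b in zip(WAYPOINTS, WAYPOINTS[1:]))
print(f"Estimated total over all segments: {total:.2f} s")
```

The new algorithm's point is precisely that this kind of per-segment planning, with its stop-and-go assumptions at each waypoint, leaves speed on the table.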

Previous drone piloting software relied on various simplifications of the vehicle’s systems — such as the configuration of its rotors or flight path — in order to save on processing power and run more smoothly (which in turn saves on battery power). While practical, such an approach also produces suboptimal results, in the form of lower speeds, as the program works with approximations.

I won’t go into the details of the code here, mainly because I don’t understand code. But results-wise, the drone was pitted against two human pilots — all three flying the same type of quadrotor — through a race circuit, and came in first place. The team set up cameras along the route to monitor the drones’ movements and to feed real-time information to the algorithm. The human pilots were allowed to train on the course before the race.

In the end, the algorithm was faster than the pilots on every lap, and its performance was more consistent between laps. The team explains that this isn’t very surprising, as once the algorithm identifies the best path to take, it can reproduce it accurately time and time again, unlike human pilots.

Although promising, the algorithm still needs some tweaking. For starters, it consumes a lot of processing power right now: it took the system one hour to calculate the optimal trajectory for the drone. Furthermore, it still relies on external cameras to keep track of the drone, and ideally, we’d want onboard cameras to handle this step.

The paper “Time-optimal planning for quadrotor waypoint flight” has been published in the journal Science Robotics.

This algorithm lets you delete water from underwater photos

Image credits: Derya Akkaynak.

Underwater photography is not just for Instagram feeds — it is very important for biologists who monitor underwater ecosystems such as coral reefs. Coral reefs are some of the most colorful and vibrant environments on Earth, but like all underwater photos, photos of coral reefs tend to come out tinted by hues of blue and green. This makes it harder for researchers to identify species and their traits from images, and makes monitoring considerably more difficult.

Now, there’s a solution for that: it’s called Sea-Thru.

Engineer and oceanographer Derya Akkaynak and her postdoctoral adviser, engineer Tali Treibitz, spent four years working to develop and improve an algorithm that would essentially “remove” the water from underwater photography.

The way the light is absorbed and scattered in water causes photos to be dim and overtaken by blue tones. Sea-thru removes the color cast and backscatter, leaving behind a crisp and clear image.

Image credits: Derya Akkaynak.

The method relies on taking multiple images of the same scene from slightly different angles, while factoring in the physics of light absorption. The algorithm then builds a physical model of the photo and reverses the effects of scattering and absorption.

“The Sea-thru method estimates backscatter using the dark pixels and their known range information,” the researchers describe the method in a working paper. “Then, it uses an estimate of the spatially varying illuminant to obtain the range-dependent attenuation coefficient. Using more than 1,100 images from two optically different water bodies, which we make available, we show that our method with the revised model outperforms those using the atmospheric model.”
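In broad strokes, the physics being inverted is the standard underwater image-formation model: the camera records the true color attenuated with distance, plus range-dependent backscatter. The sketch below applies that inversion to a toy image using made-up attenuation and backscatter coefficients and a synthetic range map; it only illustrates the principle, since Sea-thru's real contribution is estimating those coefficients from the images themselves.

```python
import numpy as np

# Toy inputs: a 4x4 "image" (values in [0, 1]) and a per-pixel range map in meters.
rng = np.random.default_rng(0)
observed = rng.uniform(0.2, 0.8, size=(4, 4, 3))   # what the camera recorded
z = rng.uniform(1.0, 5.0, size=(4, 4))             # distance to the scene, per pixel

# Assumed (made-up) per-channel coefficients; Sea-thru estimates these from the data.
beta_backscatter = np.array([0.40, 0.30, 0.10])    # how fast veiling light builds up
B_inf = np.array([0.15, 0.20, 0.30])               # veiling-light color at infinite range
beta_attenuation = np.array([0.35, 0.25, 0.08])    # how fast the direct signal fades

# 1) Remove range-dependent backscatter (the haze added by the water column).
backscatter = B_inf * (1.0 - np.exp(-beta_backscatter * z[..., None]))
direct = np.clip(observed - backscatter, 0.0, None)

# 2) Undo attenuation of the direct signal, which grows with distance.
recovered = np.clip(direct * np.exp(beta_attenuation * z[..., None]), 0.0, 1.0)

print(recovered.shape)  # (4, 4, 3): the "water removed" estimate
```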

The downside of this is that it requires quite a lot of images, and therefore, large datasets. Thankfully, many scientists are already capturing images this way using a process called photogrammetry (a technique that uses photographs to make certain measurements). Sea-Thru will readily work with photogrammetry images, Akkaynak says, which already raises intriguing prospects.

Results on different processing methods. Image credits: Derya Akkaynak.

This method is not image manipulation — it’s not photoshopping. The colors are not enhanced or modified; it’s a physical correction rather than a visually pleasing tweak, says Akkaynak.

Although the algorithm was only recently announced, it’s already causing quite a stir due to its potential. Any tool that can help scientists better understand the oceans, particularly at this extremely delicate time, can’t come soon enough.

“Sea-thru is a significant step towards opening up large underwater datasets to powerful computer vision and machine learning algorithms, and will help boost underwater research at a time when our oceans are under increasing stress from pollution, overfishing, and climate change,” the researchers conclude.

Acting out: mathematical model predicts the future of actors’ careers

Acting is an extremely unpredictable career — or so it would seem. Mathematicians working in England have developed an algorithm to predict whether an actor’s career has peaked or if they are still going strong.

Montage of the main actors in the TV series “Friends”, created from portraits available on Commons.

If you think about actors, a select few probably come to mind — the big names, the ones who star in blockbusters and make the headlines. But the acting world is much larger, and the vast majority of those trying to make it remain unknown. Using the Internet Movie Database (IMDb), researchers from Queen Mary University of London analyzed the careers of 1,512,472 actors and 896,029 actresses around the world from 1888, when the first film was made, up to 2016 — making it by far the largest analysis of its type. The results, however, will not leave too many actors smiling.

The industry operates at an unemployment rate of 90%, and a mere 2% of all actors make a living out of acting alone. Simply put, just being employed can be considered a success in the industry. Furthermore, the vast majority of actors (70%) have an extremely short career span: one year. There’s also a strong gender bias against women, with career trends differing markedly between men and women.

Researchers analyzed the data with a mathematical model, working to determine whether the most productive part of an actor’s career has passed or is yet to come. They found that career evolution was not entirely unpredictable.

For this purpose, productivity was defined as the number of roles played in one year. Of course, this is not an ideal measure, because one big part can outweigh many small ones, but with this important caveat, the results were quite impressive.

“Productivity tends to be higher towards the beginning of a career and there are signals preceding the most productive year. Accordingly, we propose a machine learning method which predicts with 85% accuracy whether this “annus mirabilis” [Latin for ‘wonderful year’] has passed, or if better days are still to come,” researchers write.
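The paper's actual model isn't reproduced here, but the flavor of such a predictor is easy to sketch: summarize the first few years of a credit history as features and train a classifier to guess whether the best year is already behind the actor. The example below does this on entirely fabricated careers using scikit-learn; the features, labels, and resulting accuracy are illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

def synthetic_career(n_years=15):
    """Fabricated credit counts per year, loosely mimicking hot/cold streaks."""
    base = rng.poisson(1.0, n_years)
    streak = (rng.random(n_years) < 0.3).astype(int) * rng.poisson(3.0, n_years)
    return base + streak

careers = [synthetic_career() for _ in range(2000)]
# Features: the first five years of each career; label: has the best year already happened?
X = np.array([c[:5] for c in careers])
y = np.array([int(np.argmax(c) < 5) for c in careers])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"toy accuracy: {clf.score(X_te, y_te):.2f}")
```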

Acting careers tend to go in hot and cold streaks — when an actor or actress got multiple roles in a year, future years also tended to be more productive. Presumably, the more roles an actor plays, the more famous he or she becomes, which means more interest from producers. The opposite, unfortunately, is also true.

This translates into a ‘rich-get-richer’ phenomenon, where the best-known actors get most of the jobs, even though they might not necessarily be the most qualified for them. The effect amplifies arbitrary, unpredictable events over the course of a career, so an actor’s success could come down to circumstance rather than acting ability. This is known as the network effect.

Researchers emphasize that in this context, it’s important to focus not only on the few rich and famous, but also on the many actors who are equally qualified and yet just struggling to get by. Oliver Williams, one of the authors of the study from Queen Mary University of London, said:

“Only a select few will ever be awarded an Oscar or have their hands on the walk of fame, but this is not important to the majority of actors and actresses who simply want to make a living which is probably a better way of quantifying success in such a tough industry.”

He added that the trends for actors are far more predictable than those of other artists or scientists.

“Our results shed light on the underlying social dynamics taking place in show business and raise questions about the fairness of the system. Our predictive model for actors is also far from the randomness that is displayed for scientists and artists.”

Lastly, researchers hope that this study can make an actual difference. Dr. Lucas Lacasa, another author of the study from Queen Mary University of London, concludes:

“We think the approach and methods developed in this paper could be of interest to the film industry: for example, they could provide complementary data analytics to IMDb. This does also bring with it a number of open questions. We have assumed that there is nothing anyone can do to change their fortunes, but we have not shown that this has to be the case. Consequently we are interested in finding out how an individual might best improve their chances of future success.”

Interestingly, researchers also say that a film script is currently being developed based on their findings.

Journal Reference: ‘Quantifying and predicting success in show business’. Oliver E. Williams, Lucas Lacasa, Vito Latora. Nature Communications.

Researchers use machine learning algorithm to detect low blood pressure during surgery

Researchers have found a way to predict hypotension (low blood pressure) in surgical patients as early as 15 minutes before it sets in.

The potential applications of machine learning in healthcare are limitless — but the problem is that everything needs to be fine-tuned and error-proof; there is little margin for error or miscalculation. In this case, researchers drew on 550,000 minutes of surgical arterial waveform recordings from 1,334 patients’ records, using high-fidelity recordings that revealed more than 3,000 unique features per heartbeat. All in all, they had millions of data points with unprecedented detail to calibrate their algorithm. They reached sensitivity and specificity of 88% and 87%, respectively, 15 minutes before a hypotensive event. Those levels went up to 92% each at 5 minutes before onset.
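As a rough sketch of how such a model is built and scored (not the UCLA group's actual pipeline), the snippet below trains a standard classifier on stand-in "waveform feature" vectors and reports the two numbers quoted above, sensitivity and specificity; the data here is synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Stand-in data: rows are per-heartbeat feature vectors, labels mark whether a
# hypotensive event followed within 15 minutes. Real features come from arterial waveforms.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 30))
y = (X[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=5000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)   # fraction of real events the model catches
specificity = tn / (tn + fp)   # fraction of non-events correctly left alone
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```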

“We are using machine learning to identify which of these individual features, when they happen together and at the same time, predict hypotension,” lead researcher Maxime Cannesson, MD, PhD, said in a statement. Cannesson is a professor of anesthesiology and vice chair for perioperative medicine at UCLA Medical Center.

This study is particularly important because medics haven’t had a way to predict hypotension during surgery, an event that can cause a dangerous crisis and forces doctors to react on the spot. Predicting it could allow physicians to avoid potentially fatal postoperative complications like heart attacks or kidney injuries, researchers say.

“Physicians haven’t had a way to predict hypotension during surgery, so they have to be reactive, and treat it immediately without any prior warning. Being able to predict hypotension would allow physicians to be proactive instead of reactive,” Cannesson said.

Furthermore, unlike other applications of machine learning in healthcare, this may become a reality in the near future. A piece of software (Acumen Hypotension Prediction Index) containing the underlying algorithm has already been submitted to the FDA, and it’s already been approved for commercial usage in Europe.

This is also impressive because it represents a significant breakthrough, Cannesson says.

“It is the first time machine learning and computer science techniques have been applied to complex physiological signals obtained during surgery,” Dr. Cannesson said. “Although future studies are needed to evaluate the real-time value of such algorithms in a broader set of clinical conditions and patients, our research opens the door to the application of these techniques to many other physiological signals, such as EKG for cardiac arrhythmia prediction or EEG for brain function. It could lead to a whole new field of investigation in clinical and physiological sciences and reshape our understanding of human physiology.”

The results have been presented at a meeting of the American Society of Anesthesiologists.

NASA algorithm and citizen scientists allow biologists to track whale sharks

It’s always exciting when research from one field is applied to another. This time, it’s applying astronomical data and citizen science to whale sharks.

The spots on a whale shark do kind of look like stars in the night sky. Credits: Max Pixel.

It’s always surprising how much NASA’s work affects other fields of science, and ultimately, our lives. It’s not just about space flight or satellites: things like solar cells, highway de-icing, and 3D printing have all benefitted from NASA’s work. Now, lead scientist Dr. Brad Norman from Murdoch University was able to study whale sharks thanks to an algorithm developed by NASA engineers.

The algorithm was written to analyze star charts, but Norman and collaborators realized they could also use it to detect the white spots on a whale shark. Like fingerprints or stripes on a zebra, these spots are unique for every whale shark. So Norman gathered 30,000 photos of the awe-inspiring creatures from citizen scientists in 54 different countries.
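The star-chart trick, in essence, is to describe a point pattern by quantities that survive shifting, rotating, and zooming the photo, and then count how many of those descriptors two photos share. Here is a toy version of that idea using triangle side-length ratios; it is an illustration of the principle, not NASA's algorithm or the matching code the project actually uses.

```python
import itertools
import numpy as np

def triangle_signatures(points, decimals=2):
    """For every triple of spots, record the sorted side-length ratios of the triangle.
    These ratios survive rotation, translation, and scaling of the photo."""
    sigs = set()
    for i, j, k in itertools.combinations(range(len(points)), 3):
        a, b, c = sorted([np.linalg.norm(points[i] - points[j]),
                          np.linalg.norm(points[j] - points[k]),
                          np.linalg.norm(points[k] - points[i])])
        if c > 0:
            sigs.add((round(a / c, decimals), round(b / c, decimals)))
    return sigs

def similarity(spots_a, spots_b):
    """Fraction of shared triangle signatures between two spot patterns."""
    sa, sb = triangle_signatures(spots_a), triangle_signatures(spots_b)
    return len(sa & sb) / max(1, min(len(sa), len(sb)))

shark_1 = np.random.default_rng(1).uniform(0, 100, size=(12, 2))
shark_1_again = shark_1 * 1.7 + 20      # same shark, photo scaled and shifted
shark_2 = np.random.default_rng(2).uniform(0, 100, size=(12, 2))

print(similarity(shark_1, shark_1_again))  # close to 1.0: almost certainly the same animal
print(similarity(shark_1, shark_2))        # much lower: a different animal
```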

“This effort is helping us to uncover the mysteries of whale sharks and better understand their abundance, geographic range, behaviours, migration patterns and their favourite places on the planet,” Dr. Norman told local newspapers.

“A great example of citizen science where members of the public can play a really positive and active role in monitoring our wildlife, in this case, whale sharks,” he added.

The team identified 20 locations, including Ningaloo Reef, the Maldives, Mozambique and the Red Sea, where the whale sharks gather, predominantly in male-dominated groups (males accounted for up to 90% of the group population). They only knew about 13 of these places before the project started. Meanwhile, in places like the Galapagos, 99 percent of the whale sharks were female. The scientists also identified some of the preferred migration routes of the creatures.

“Citizen science has been vital in amassing large spatial and temporal data sets to elucidate key aspects of whale shark life history and demographics and will continue to provide substantial long-term value,” the paper concludes.

Whale sharks can grow up to 12 meters (39 feet) long. They’re gentle giants, slow-moving filter-feeders, and the largest known extant fish species. However, despite being so big, they remained especially elusive until the 1980s. We don’t really know how many whale sharks there are in the world, and as a result, their conservation status is hard to estimate. Researchers believe that this study could help with such estimates and could also help direct conservation efforts to where they are most needed. Engaging the general public is also a great way of increasing awareness and support for such efforts.

Journal Reference: Bradley M. Norman et al. Undersea Constellations: The Global Biology of an Endangered Marine Megavertebrate Further Informed through Citizen Science. BioScience. https://doi.org/10.1093/biosci/bix127

New algorithm turns low-resolution photos into detailed images — CSI style

You know that stereotypical scene from CSI-style shows where they zoom in on a car they can barely see and then read the license plate clearly? Well, that bit of fiction might turn into reality, as computer scientists from the Max Planck Institute for Intelligent Systems in Tübingen have used artificial intelligence to create high-definition images from low-resolution photos.

EnhanceNet-PAT is capable of upsampling a low-resolution image (left) to a high definition version (middle). The result is indistinguishable from the original image (right). Credit: Max Planck Institute for Intelligent Systems.

It’s not the first time researchers have looked at something like this. The technology is called single-image super-resolution (SISR). SISR has been researched for decades, but without much success. No matter how you look at it, the problem was that they just didn’t have enough pixels to generate a sharp image.

Now, researchers developed a tool called EnhanceNet-PAT, which uses AI to generate new pixels and “fill” the image up.

“The task of super-resolution has been studied for decades,” Mehdi M.S. Sajjadi, one of the researchers on the project, told Digital Trends. “Before this work, even the state of the art has been producing very blurry images, especially at textured regions. The reason for this is that they asked their neural networks the impossible — to reconstruct the original image with pixel-perfect accuracy. Since this is impossible, the neural networks produce blurry results. We take a different approach [by instead asking] the neural network to produce realistic textures. To do this, the neural network takes a look at the whole image, detects regions, and uses this semantic information to produce realistic textures and sharper images.”

First, the neural network was fed a large dataset of images, from which it learned different textures and colors. Then, it was given downscaled images which it had to upscale. The results were compared to the original photos, with the algorithm analyzing and learning from the differences. After a while, it could do a good job without any human input.
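To make that training loop concrete, here is a bare-bones sketch of the general recipe just described: downscale real images, upscale them with a small network, and penalize the difference from the originals. It uses random tensors in place of a photo dataset and a plain L1 loss; EnhanceNet-PAT's actual contribution, the perceptual and texture losses that make results look realistic rather than blurry, is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySR(nn.Module):
    """A deliberately small upscaling network: a few conv layers plus a 2x upsample."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        up = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
        return up + self.body(up)   # learn the missing detail as a residual

model = TinySR()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):                                    # stand-in training loop
    high_res = torch.rand(8, 3, 64, 64)                    # real training would load photos here
    low_res = F.interpolate(high_res, scale_factor=0.5, mode="bilinear", align_corners=False)
    loss = F.l1_loss(model(low_res), high_res)             # compare upscaled result to the original
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```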

Of course, this isn’t a magic fix and not all photos can be fixed (at least not yet), but results are exciting. As for the applications, there’s no shortage of those, Sajjadi says. The algorithm could be used to restore old family photos or give them a good enough resolution for larger prints; on a more pragmatic level, the technology could greatly help in object recognition, which has potential in detecting pedestrians and other objects in self-driving cars.

Journal Reference: Mehdi S. M. Sajjadi, Bernhard Schölkopf, Michael Hirsch. EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis.


Teaching smart cars how humans move could help make them safer and better

Computers today can’t make heads and tails of how our bodies usually move, so one team of scientists is trying to teach them using synthetic images of people in motion.

Google driverless car. Image credits: Becky Stern / Flickr.

AIs and computers can be hard to wrap your head around. But it’s easy to forget that the same holds true from their perspective. This can become a problem, because we ask them to perform a lot of tasks that would go a lot more smoothly if they actually understood us a tad better.

This is how we roll

Case in point: driverless cars. The software navigating these vehicles can see us moving all around them through various sensors and can pick out the motion easily enough, but it doesn’t understand it. So it can’t predict how that motion will continue, even for something as simple as walking in a straight line. To address that issue, a team of researchers has taken to teaching computers what human motion looks like.

When you think about it, you’ve literally had a lifetime to acquaint yourself with how people and other things behave. Based on that experience, your brain can tell if someone’s going to take a step or fall over, or where he or she will land after a jump. But computers don’t have that store of experience. The team’s idea was to use images and videos of computer-generated bodies walking, dancing, or going through a myriad of other motions to help computers learn what cues they can use to successfully predict how we act.

Dancing.

Hard to predict these wicked moves, though.

“Recognising what’s going on in images is natural for humans. Getting computers to do the same requires a lot more effort,” says Javier Romero at the Max Planck Institute for Intelligent Systems in Tübingen, Germany.

The best algorithms today are trained on thousands of pre-labeled images that highlight important characteristics. This allows them to tell an eye apart from an arm, or a hammer from a chair, with consistent accuracy — but there’s a limit to how much data can realistically be labeled that way. Doing this for a video of a single type of motion would take millions of labels, which is “just not possible,” the team adds.

Training videos

So they armed themselves with human figure templates and real-life motion data, then turned to the 3D rendering software Blender to create synthetic humans in motion. The animations were generated using random body shapes and clothing, as well as random poses. Background, lighting, and viewpoints were also randomly selected. In total, the team created more than 65,000 clips and 6.5 million frames of data for the computers to analyze.
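The scale matters more than the cleverness here: the pipeline's job is mostly to randomize everything and render a lot of it. The sketch below only mimics that randomization step, writing a manifest of hypothetical clip parameters (the names and ranges are invented); the real work of posing and rendering bodies happened inside Blender with motion-capture data.

```python
import csv
import random

random.seed(0)

# Hypothetical parameter pools; the real pipeline draws body shapes, motion-capture
# sequences, clothing textures, lights, and camera viewpoints inside Blender.
BODY_SHAPES = [f"shape_{i:03d}" for i in range(50)]
MOTIONS = ["walk", "run", "dance", "jump", "sit_down"]
CLOTHING = ["tshirt_blue", "coat_grey", "dress_red", "hoodie_black"]
LIGHTING = ["indoor_soft", "outdoor_noon", "outdoor_dusk"]

def random_clip(clip_id):
    return {
        "clip_id": clip_id,
        "body_shape": random.choice(BODY_SHAPES),
        "motion": random.choice(MOTIONS),
        "clothing": random.choice(CLOTHING),
        "lighting": random.choice(LIGHTING),
        "camera_azimuth_deg": round(random.uniform(0, 360), 1),
        "camera_height_m": round(random.uniform(0.5, 3.0), 2),
        "frames": 100,
    }

with open("render_manifest.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=random_clip(0).keys())
    writer.writeheader()
    for i in range(65000):          # the paper's scale: tens of thousands of clips
        writer.writerow(random_clip(i))
```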

“With synthetic images you can create more unusual body shapes and actions, and you don’t have to label the data, so it’s very appealing,” says Mykhaylo Andriluka at Max Planck Institute for Informatics in Saarbrücken, Germany.

Starting from this material, computer systems can learn to recognize how the patterns of pixels changing from frame to frame relate to motion in a human. This could help a driverless car tell if a person is walking close by or about to step into the road, for example. And, as the animations are all in 3D, the material can also be used to teach systems how to recognize depth — which is obviously desirable in a smart car but would also prove useful in pretty much any robotic application.

These results will be presented at the Conference on Computer Vision and Pattern Recognition (CVPR) in July. The paper is titled “Learning from Synthetic Humans.”


A computer algorithm designed Hamburg’s new concert hall and it’s simply amazing

Credit: Elbphilharmonie.

State-of-the-art computer algorithms are beginning to force us to rethink art as a solely human domain. Take Hamburg’s recently opened $843-million philharmonic, aptly called the Elbphilharmonie. Its auditorium, the largest of its three exquisite concert halls, is a product of parametric design. In a nutshell, this involves plugging parameters into computer algorithms to render the shape of desired objects. In this case, the results speak for themselves.

Architects tasked with designing philharmonics have one of the most challenging jobs in the world. In such halls, the world’s foremost musicians open their hearts and dazzle the audiences with tunes that seem out of this world. The nature of such a spectacle demands that space and setting be on par with the performance — elegant, impressive, breathtaking. Then again, the hall and the music need to be in perfect harmony not only aesthetically but acoustically too.

For optimal acoustics, science says that an enclosed space needs a certain geometry and its materials need to have well-established qualities. To make matters more complicated, the parameters vary throughout the space — you need one surface geometry and set of materials for the ceiling directly above the stage, another configuration for the ceiling further back, the walls behind the audience and behind the artists need to be different too, and so on.

Parametric design in action. Credit: One to One.

Design firm Herzog and De Meuron spent the last 15 years working on the Elbphilharmonie. Central to the masterpiece are the 10,000 gypsum-fiber acoustic panels that come together like a giant jigsaw puzzle. Together, these panels feature a million cells that resemble the impressions left by seashells in the sand. This configuration is by no means accidental. The irregular cells, which range from four to sixteen centimeters across, are meant to scatter or absorb sound. No two panels are the same, but when their effects combine, the hall achieves optimal acoustics.
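Parametric design, reduced to its bones, means writing a generator that turns target parameters into geometry instead of drawing the geometry by hand. The sketch below makes up a toy rule (cell size growing with a panel's distance from the stage, within the four-to-sixteen-centimeter range mentioned above) purely to show the shape of such a workflow; it has nothing to do with One to One's actual software or the hall's real acoustic model.

```python
import random

random.seed(1)

def generate_panel(panel_id, distance_from_stage_m, cells_per_panel=100):
    """Return a list of cell depressions (diameter_cm, depth_cm) for one acoustic panel.

    Made-up rule: panels close to the stage get smaller, shallower cells (more reflection);
    distant panels get larger, deeper cells (more scattering and absorption).
    """
    t = min(distance_from_stage_m / 40.0, 1.0)       # map 0-40 m onto the 4-16 cm range
    mean_diameter_cm = 4 + 12 * t
    cells = []
    for _ in range(cells_per_panel):
        diameter = max(4.0, min(16.0, random.gauss(mean_diameter_cm, 1.5)))
        depth = 0.3 * diameter * (0.5 + 0.5 * t)     # arbitrary coupling of depth to size
        cells.append((round(diameter, 2), round(depth, 2)))
    return {"panel_id": panel_id, "cells": cells}

panels = [generate_panel(i, distance_from_stage_m=i * 0.4) for i in range(10)]
print(panels[0]["cells"][:3], "...", panels[9]["cells"][:3])
```

Change the rule or the target parameters and the generator spits out a different hall, which is exactly why "doing this by hand" would be insane.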

“The towering shape of the hall defines the static structure of the building volume and is echoed in the silhouette of the building as a whole. The complex geometry of the philharmonic hall unites organic flow with incisive, near static shape,” a Herzog and De Meuron press release states.

“It would be insane to do this by hand,” Benjamin Koren, founder of One to One, the studio that worked with Herzog and De Meuron, told Wired.

If you’re curious what it feels like to stand in the grand hall, there’s a 360 view online. The 5-hour-long video below, also 360-enabled, comes with live music as well.

https://www.youtube.com/watch?v=__4EmRRYbO8

Google’s AI just created its own form of encryption

Just two algorithms sending messages to each other – and you can’t peek in.

Image credits: Yuri Samoilov.

After becoming better than any human at Go — which is much harder than chess — and figuring out how to navigate London’s metro all by itself, Google’s AI is moving on to much darker waters: encryption. In a new paper, Googlers Martín Abadi and David G. Andersen describe how they instructed three AI test subjects to pass messages to each other using an encryption scheme they themselves created. The AIs were nicknamed Alice, Bob, and Eve.

Abadi and Andersen assigned each AI a task. Alice had to send a secret message to Bob, one that Eve couldn’t understand, while Eve was tasked with trying to break the code. It all started with a plain-text message that Alice translated into unreadable gibberish, which Bob had to figure out how to decode. He was successful, but so was Eve. For the first iterations, Bob and Alice were pretty bad at hiding their secrets, but after some 15,000 attempts they got much better. Alice worked out her own encryption strategy, Bob simultaneously figured out how to decrypt it, and this time Eve couldn’t crack it. Basically, the two succeeded in making themselves understood while also hiding the content of their message from the eavesdropper. It took a while, but ultimately, the results were surprisingly good.
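The adversarial setup can be sketched in a few dozen lines. The toy version below (in PyTorch) uses plain fully connected networks instead of the paper's convolutional architecture, ±1-valued bit vectors for messages and keys, and a simplified penalty that pushes Eve toward chance-level guessing; treat it as an illustration of the training loop rather than a reproduction of Abadi and Andersen's code.

```python
import torch
import torch.nn as nn

N = 16  # bits per message and per key, encoded as values in {-1, +1}

def net(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, out_dim), nn.Tanh())

alice, bob, eve = net(2 * N, N), net(2 * N, N), net(N, N)
opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
opt_e = torch.optim.Adam(eve.parameters(), lr=1e-3)

def batch(size=256):
    return (torch.randint(0, 2, (size, N)).float() * 2 - 1,   # plaintext bits
            torch.randint(0, 2, (size, N)).float() * 2 - 1)   # shared key bits

for step in range(3000):
    # Eve's turn: learn to read the ciphertext without the key.
    plain, key = batch()
    cipher = alice(torch.cat([plain, key], dim=1)).detach()
    eve_loss = (eve(cipher) - plain).abs().mean()
    opt_e.zero_grad()
    eve_loss.backward()
    opt_e.step()

    # Alice and Bob's turn: Bob must recover the message, Eve must be kept near chance level.
    plain, key = batch()
    cipher = alice(torch.cat([plain, key], dim=1))
    bob_err = (bob(torch.cat([cipher, key], dim=1)) - plain).abs().mean()
    eve_err = (eve(cipher) - plain).abs().mean()
    ab_loss = bob_err + (1.0 - eve_err) ** 2   # an eve_err of ~1.0 means Eve is guessing
    opt_ab.zero_grad()
    ab_loss.backward()
    opt_ab.step()

    if step % 1000 == 0:
        print(f"step {step}: Bob error {bob_err.item():.2f}, Eve error {eve_err.item():.2f}")
```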

Of course, this is just the basic overview — the reality of how the algorithms function is much more complex. In fact, it’s so complex that the researchers themselves don’t know what method of encryption Alice used, or how Bob simultaneously figured out how to decode it. However, according to Andrew Dalton from Engadget, we shouldn’t worry about robots talking behind our backs just yet, as this was just a simple exercise. But in the future… well, I guess we’ll just have to wait and see.


Algorithm finally cuts any cake in equal, envy-free slices

Choke on it, Tim. Credit: Pixabay

Two young computer scientists may have found a way to divide a cake among any number of people while satisfying everyone involved, a problem that scientists have been trying to crack for more than half a century. The findings are a breath of fresh air in a field where many researchers thought such a fair-division protocol was impossible.

Can’t you just eat the damn cake?

‘How do you cut a cake fairly?’ is a conundrum many of us have had to face at least once in our lives. For instance, you and Jim might both want the same part of the cake, like the piece with the whole fruit or the most vanilla frosting. Things get even more complicated the more people are lined up for a piece. At the end of the day, we each make a compromise, but for mathematical purists there is no such thing. There has to be a way to divide a cake into pieces everyone considers fair, so that no one envies anyone else’s slice.

This isn’t even a new problem; we can find the same fair-division metaphor in antiquity. The Bible actually offers a solution: in the book of Genesis, Abraham and Lot squabble over how to fairly divide a piece of land. The clever Abraham divided the land into two parts that he valued equally, then asked Lot to choose his favorite. This way, no matter what Lot chose, Abraham was satisfied.

Things get messy when you have to slice the cake for three or four people, though. One landmark algorithm that divides a cake among three was proposed by mathematicians John Selfridge and John Conway, who independently reached the same solution in the 1960s.

To ensure Tim, Jim, and Kim each get a fair slice of cake, the algorithm first has Tim cut the cake into three slices that are equally valuable from his perspective. Jim and Kim then each pick their favorite slice. In the favorable event that Jim and Kim choose different slices, the division ends successfully: Tim takes what’s left, which he already considers a fair share.

If Jim and Kim choose the same piece of cake, one of them is asked to trim that slice until what’s left is equal in value, in their eyes, to their second-favorite slice. The trimming is set aside for later. Say Jim trimmed the slice: Kim then chooses her favorite of the three, followed by Jim. If Kim chose an un-trimmed slice, Jim has to take the trimmed one. Again, Tim gets the leftover — but that’s perfectly fine by him. At this stage, everyone is happy. There’s just the tiny matter of that small trimming Jim set aside.

You might think the show starts again, with the small trimming divided by the same rule. Applied naively, however, that process implies an endless loop, so it’s not really a solution. The quirk is that Tim is more than satisfied: even if one of the other players gets the trimmed slice plus the entire trimming still waiting to be allocated, that adds up to no more than a full slice of the original cut, which Tim already has. Tim is in a sort of win-win situation, which means he “dominates” the cake-cutting game.
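For the curious, the main round of that three-player procedure fits in a short script. The sketch below approximates the cake as a row of small "crumbs" with random per-player values, which makes the cuts and trims approximate too; it covers the division described above and simply sets the trimmings aside, as the text does.

```python
import numpy as np

rng = np.random.default_rng(0)
CRUMBS = 300   # the cake as 300 tiny pieces laid left to right
# Each player values each crumb differently (frosting here, fruit there...).
values = {name: rng.uniform(0.5, 1.5, CRUMBS) for name in ("Tim", "Jim", "Kim")}

def worth(piece, player):
    return values[player][piece].sum()

def cut_in_three(v):
    """Cut points so each contiguous third is worth about 1/3 to the cutter."""
    cum = np.cumsum(v) / v.sum()
    a, b = int(np.searchsorted(cum, 1 / 3)), int(np.searchsorted(cum, 2 / 3))
    return [list(range(0, a)), list(range(a, b)), list(range(b, CRUMBS))]

pieces = cut_in_three(values["Tim"])                          # step 1: Tim cuts
ranked_jim = sorted(range(3), key=lambda i: -worth(pieces[i], "Jim"))
ranked_kim = sorted(range(3), key=lambda i: -worth(pieces[i], "Kim"))

if ranked_jim[0] != ranked_kim[0]:
    # Easy case: they want different pieces; Tim takes the leftover.
    allocation = {"Jim": ranked_jim[0], "Kim": ranked_kim[0]}
else:
    # Contested case: Jim trims the favorite down to his second-favorite's value.
    fav, second = ranked_jim[0], ranked_jim[1]
    target = worth(pieces[second], "Jim")
    piece, trimmings = list(pieces[fav]), []
    while piece and worth(piece, "Jim") > target:
        trimmings.append(piece.pop())                         # shave crumbs off one end
    pieces[fav] = piece
    # Kim picks first; Jim must take the trimmed piece if Kim leaves it behind.
    kim_pick = max(range(3), key=lambda i: worth(pieces[i], "Kim"))
    if kim_pick == fav:
        jim_pick = max(set(range(3)) - {kim_pick}, key=lambda i: worth(pieces[i], "Jim"))
    else:
        jim_pick = fav
    allocation = {"Kim": kim_pick, "Jim": jim_pick}
    # The trimmings would be divided in a second round (see the text above).

allocation["Tim"] = ({0, 1, 2} - set(allocation.values())).pop()
for name, idx in allocation.items():
    print(f"{name} gets piece {idx}, worth {worth(pieces[idx], name):.1f} to them")
```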

Extended to more than three players, this sort of algorithm still ensures an envy-free division but is unbounded, meaning it could run anywhere from a couple of iterations to more than we can count before the game is settled. And despite all sorts of interesting proposals for improved cake-cutting algorithms, we still didn’t have a bounded solution — not until 27-year-old Simon Mackenzie, a postdoctoral researcher at Carnegie Mellon, and Haris Aziz, a 35-year-old computer scientist at the University of New South Wales, published their seminal work.

The pair’s algorithm is based on Selfridge’s and Conway’s method. Like other algorithms before it, the protocol asks individuals to cut the cake in equally satisfying pieces, then asks the other players to make trims and choose their favorites. What’s different is that in the background, the protocol changes the dynamic of the game and increases the number of dominant relationships between the players.

A dominant player will be satisfied no matter what. Dominant players can effectively be sent home with their pieces of cake without anyone minding, so the problem’s complexity is greatly reduced.

That’s not to say that it gets easy. Dividing a cake among n players can require as many as n^n^n^n^n^n steps and a roughly equivalent number of cuts. For a handful of players, this means more iterations are required than there are atoms in the universe — all to satisfy a perverse obsession with treating everyone fairly. Imagine having this sort of person at your birthday. That cake would rot.

The problem is at least bounded now. Aziz and Mackenzie say, however, that they already have ideas for making their algorithm simpler and reducing the number of steps.

“Seeing, in retrospect, how complicated the algorithm is, it’s not surprising that it took a long time before somebody found one,” said Ariel Procaccia, a computer scientist at Carnegie Mellon University.

Aziz and Mackenzie will present their paper on Oct. 10 at the 57th annual IEEE Symposium on Foundations of Computer Science.

The new Rembrandt: Computer creates new “Rembrandt painting”

Rembrandt Harmenszoon van Rijn is one of the most talented and famous artists in human history. It’s been almost four centuries since he created his unique masterpieces. Now, a team of artists, researchers and programmers wanted to see if they could create a new Rembrandt painting – through a computer algorithm.

“We examined the entire collection of Rembrandt’s work, studying the contents of his paintings pixel by pixel. To get this data, we analyzed a broad range of materials like high resolution 3D scans and digital files, which were upscaled by deep learning algorithms to maximize resolution and quality. This extensive database was then used as the foundation for creating The Next Rembrandt.”

The algorithm analyzed patterns in Rembrandt’s works, such as eye shape and color scales. The goal was to create a new work while mirroring Rembrandt’s style as much as possible. They chose a portrait as the main theme for the “painting”, opting for a Caucasian male between the ages of 30 and 40, with facial hair, wearing black clothes with a white collar and a hat, facing to the right. Rembrandt created many similar works.

Emmanuel Flores, director of technology for the project, explained:

“We found that with certain variations in the algorithm, for example, the hair might be distributed in different ways.”

But ultimately, neither he nor anyone else chose the final characteristics of the painting. They just implemented the algorithm, and the algorithm decided on the final appearance of the portrait.

After the image was created, it was 3D-printed to give it the same texture as an oil painting. Even the way Rembrandt used brushstrokes was replicated in the 3D printing.

“Our goal was to make a machine that works like Rembrandt,” said Mr Flores. “We will understand better what makes a masterpiece a masterpiece.”

However, he added, “I don’t think we can substitute Rembrandt – Rembrandt is unique.”

The painting will be featured in an exhibition in the UK, but no venue or date has been made public.

The two-year project, entitled “The Next Rembrandt”, was a collaboration between Microsoft, financial firm ING, Delft University of Technology and two Dutch art museums – Mauritshuis and Rembrandthuis.


‘Data Smashing’ algorithm might help declutter Big Data noise without Human Intervention

There’s an immense well of information humanity is currently sitting on, and it’s only growing exponentially. To make sense of all the noise, whether we’re talking about applications like speech recognition, identifying cosmic bodies, or search engine results, we need highly complex algorithms that use less processing power by getting as close to the bull’s eye as possible. In the future, such algorithms will likely be built on machine learning technology that gets smarter with each pass over the data, perhaps aided by quantum computing. Until then, we have to make do with conventional algorithms, and a most exciting paper detailing such a technique was recently reported.

Smashing data – the bits and pieces that follow are the most important

Big Data. Credit: 33rd Square.

Called ‘data smashing’, the algorithm tries to fix one major flaw in today’s information processing. Immense amounts of data are being fed in, and while algorithms help us declutter, at the end of the day companies and governments still need experts to oversee the process and add a much-needed human touch. Basically, computers are still pretty bad at recognizing complex patterns. Sure, they’re awesome at crunching the numbers, but in the end, humans need to compare the output scenarios and pick out the most relevant answer. As more and more processes are monitored and fed into large data sets, however, this task is becoming ever more difficult, and human experts are in short supply.


The algorithm, developed by Hod Lipson, associate professor of mechanical engineering and of computing and information science, and Ishanu Chattopadhyay, a former postdoctoral associate with Lipson now at the University of Chicago, is nothing short of brilliant. It works by estimating the similarities between streams of arbitrary data without human intervention, and even without access to the data sources.

Basically, data streams are ‘smashed’ against one another to tease out unique information by measuring what remains after each ‘collision’. The more information survives, the less likely it is that the two streams originated from the same source.
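The published procedure is subtler than this (it works by "annihilating" quantized streams against each other and checking what remains against pure noise), but the spirit, comparing raw streams with no model of what they mean, can be shown with a simple stand-in: quantize two streams into symbols and measure how differently those symbols evolve. The code below is that stand-in, not the authors' algorithm.

```python
import numpy as np

def quantize(stream, n_symbols=8):
    """Turn a raw numeric stream into a symbol sequence using equal-width bins."""
    edges = np.linspace(stream.min(), stream.max(), n_symbols + 1)[1:-1]
    return np.digitize(stream, edges)

def transition_matrix(symbols, n_symbols=8):
    """Empirical probabilities of moving from one symbol to the next."""
    m = np.zeros((n_symbols, n_symbols))
    for a, b in zip(symbols[:-1], symbols[1:]):
        m[a, b] += 1
    return m / m.sum(axis=1, keepdims=True).clip(min=1)

def dissimilarity(x, y):
    """Model-free distance between two streams: how differently their symbols evolve."""
    return np.abs(transition_matrix(quantize(x)) - transition_matrix(quantize(y))).mean()

rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 5000)
signal_a = np.sin(t) + 0.1 * rng.normal(size=t.size)       # two streams from the same "process"
signal_b = np.sin(t + 0.3) + 0.1 * rng.normal(size=t.size)
anomaly = np.sin(1.7 * t) + 0.4 * rng.normal(size=t.size)  # a stream from a different process

print(dissimilarity(signal_a, signal_b))  # small: likely the same underlying source
print(dissimilarity(signal_a, anomaly))   # larger: likely a different source
```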

Data smashing could open the door to a new body of research – it doesn’t just help experts sort through data more easily, it might also identify anomalies that are impossible for humans to spot, by virtue of pure computing brute force. For instance, the researchers demonstrated data smashing on real-world problems, including the detection of anomalous cardiac activity from heart recordings and the classification of astronomical objects from raw photometry. The results were on par with the accuracy of specialized algorithms and heuristics hand-tuned by experts.


Algorithm predicts the Price of Bitcoin – Developers Double Their Investment in 50 Days

A team at MIT has developed a prediction algorithm that allows them to determine when the price of the infamous volatile cryptocurrency, Bitcoin, will drop or rise. Using this method, the researchers managed to double their initial investment in 50 days, all through an automated process that involved more than 2,800 transactions.

Money forecast

An algorithm developed at MIT can predict how Bitcoin will fare in the future. Image: Bitcoin.

Over the last year or so, Bitcoin has exploded on the market, sitting today at nearly ten times the price it was valued at only two years ago. Many early investors who believed in the currency and held on to their investment have now become wealthy, yet because there are few places you can actually spend Bitcoins, the currency is mostly regarded as a commodity. As such, most investors put money into Bitcoin to speculate and earn returns. This behavior has significant consequences for the currency’s trading patterns, with prices fluctuating heavily on a day-to-day basis. With this in mind, is it possible to predict future returns for Bitcoin trading? A researcher at MIT’s Computer Science and Artificial Intelligence Laboratory and the Laboratory for Information and Decision Systems recently developed a machine-learning algorithm that does just this.

Can history predict where the money will flow?

Devavrat Shah and recent graduate Kang Zhang, both at MIT, collected price data from all major Bitcoin exchanges, every second for five months, accumulating more than 200 million data points. This step was critical – the researchers needed as many points as possible for their historical analysis to make better predictions. They then used a technique called “Bayesian regression” to train the algorithm to automatically identify patterns in the data, which they used to predict prices and trade accordingly. Their automated setup predicted the price of Bitcoin every two seconds for the following ten seconds. If the predicted price movement was higher than a certain threshold, they bought a Bitcoin; if it was lower than the opposite threshold, they sold one; and if it was in between, they did nothing.
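Stripped of the Bayesian-regression machinery, the trading rule itself is a small loop. The sketch below runs that buy/sell/hold rule on simulated prices, with a naive momentum "predictor" standing in for the real model; the threshold and every other number in it are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
prices = 350 + np.cumsum(rng.normal(0, 0.5, size=10_000))   # simulated per-second prices

THRESHOLD = 0.2        # dollars; trade only when the predicted move is big enough
position, cash = 0, 0.0

def predict_move(history):
    """Placeholder for the Bayesian-regression predictor: naive momentum over the
    last 10 seconds. The real model matches current patterns against historical ones."""
    return history[-1] - history[-10]

for t in range(10, len(prices) - 10, 2):                     # decide every two "seconds"
    predicted = predict_move(prices[:t])
    if predicted > THRESHOLD and position == 0:              # expect a rise: buy one coin
        position, cash = 1, cash - prices[t]
    elif predicted < -THRESHOLD and position == 1:           # expect a fall: sell it
        position, cash = 0, cash + prices[t]

if position:                                                 # close any open position at the end
    cash += prices[-1]
print(f"simulated profit: {cash:.2f} (meaningless except as an illustration)")
```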

Over 50 days, the team’s 2,872 trades gave them an 89 percent return on investment with a Sharpe ratio (measure of return relative to the amount of risk) of 4.1.

“We developed this method of latent-source modeling, which hinges on the notion that things only happen in a few different ways,” says Shah, who previously used the approach to predict Twitter trending topics. “Instead of making subjective assumptions about the shape of patterns, we simply take the historical data and plug it into our predictive model to see what emerges.”

Next, the team plans on further scaling their data points to gain a more refined view of Bitcoin’s history and, consequently, make the algorithm more effective.

“Can we explain the price variation in terms of factors related to the human world? We have not spent a lot of time doing that,” Shah says, before adding with a laugh, “But I can show you it works. Give me your money and I’d be happy to invest it for you.”

Now, this sort of algorithm isn’t new – what’s new is that Bitcoin is involved. Machine learning techniques like this have been used extensively for decades in the stock market. Do they work? Yes and no – it depends on the market, basically. I suspect this algorithm works (for now) because there are still relatively few traders (compared to other currency or commodity markets) who collectively behave according to a pattern. As Bitcoin grows, so will its complexity, but that’s another story, for some other time. Soooo, where can you buy this algorithm? It’s not yet open to the public from what I gathered, and maybe it won’t ever be. If it were public, though, it would probably become useless once it reached a critical mass of users. Make the observer an active participant in the experiment and everything topples over.

The team’s paper was published this month at the 2014 Allerton Conference on Communication, Control, and Computing.

 


This author edits 10,000 Wikipedia entries a day

Photo: prisonplanet.com

Sverker Johansson could be the definition of prolific. The 53-year-old Swede has so far edited 2.7 million articles on Wikipedia, or 8.5% of the entire collection. But there’s a catch – he did this with the help of a bot he wrote. Wait, you thought all Wikipedia articles were written by humans?

A good day’s work

“Lsjbot”, Johansson’s prolific bot, writes around 10,000 Wikipedia articles each day, mostly cataloging obscure animal species, including butterflies and beetles, as well as towns in the Philippines. About one-third of his entries are uploaded to the Swedish Wikipedia, while the rest are written in two versions of Filipino, his wife’s native tongue.

Judging from this master list, there are a myriad of Wikipedia bots, like the famous rambot, which is used to generate articles on U.S. cities and counties. In fact, half of all Wikipedia entries are written by bots, and Lsjbot is the most prolific of them all.

So, how does the bot write anything that a human can remotely understand? Well, computational semantics has come a long way, and Johansson, who holds degrees in linguistics, civil engineering, economics and particle physics, did a pretty good job. His algorithm pulls information from credible sources, rehashes it, and arranges figures, important numbers, and categories into a predefined narrative. Don’t imagine the bot writes a whole novel, though.
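In spirit, such a bot is a template filler: pull a few structured facts from a trusted source, drop them into a canned sentence pattern, and emit wiki markup. The toy below does exactly that with invented species records; it is not Lsjbot's code, just the general pattern.

```python
# Toy stub generator in the spirit of Lsjbot: structured facts in, formulaic wiki text out.
SPECIES_DB = [   # stand-in for a credible source such as a taxonomic database
    {"name": "Papilio examplea", "family": "Papilionidae", "described_by": "Smith", "year": 1901},
    {"name": "Carabus fictus", "family": "Carabidae", "described_by": "Jones", "year": 1888},
]

STUB_TEMPLATE = (
    "'''{name}''' is a species of insect in the family [[{family}]]. "
    "It was first described by {described_by} in {year}.\n\n"
    "{{{{stub}}}}\n"
)

def make_stub(record):
    """Fill the canned sentence pattern with one record's facts."""
    return STUB_TEMPLATE.format(**record)

for record in SPECIES_DB:
    print(f"== {record['name']} ==")
    print(make_stub(record))
```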

Sverker Johansson can take credit for 2.7 million Wikipedia articles. Most were created using a computer program, or ‘bot,’ that he made. Ellen Emmerentze Jervell/The Wall Street Journal

Lsjbot’s entries are categorized by Wikipedia as stubs – pages that contain only the most important, basic bits of information. This is why the bot works so well for animal species or towns, where it makes sense to automate the process. In fact, if Wikipedia is to have a chance of reaching its goal of encompassing the sum of human knowledge, it needs bots. It needs billions of entries, and that is no task a community of humans can achieve alone, not even one as active and large as Wikipedia’s.

Some people are against this sort of approach, like 41-year-old Achim Raschka, who says he spends whole days writing a single in-depth article about a plant.

“I am against production of bot-generated stubs in general,” he said. He is particularly irked by Mr. Johansson’s Lsjbot, which prizes quantity over quality and is “not helping the readers and users of Wikipedia.”

Johansson himself admits the entries are… bland at best, but that doesn’t mean they don’t provide value, and this is where he draws the line. For instance, Basey, a city of about 44,000 in the Philippines, was devastated by Typhoon Yolanda. The Swedish Wikipedia entry for Basey was edited by Lsjbot and contained information like coordinates, population, and other details, and many people accessed the page to learn more. Moreover, Johansson stresses that his bot only writes stubs – as such, they provide a basic starting point for other contributors to come in and fill the gaps.

Criticism

Lsjbot also provides a way for Johansson to combat the lack of articles on obscure topics, at least on the Swedish Wikipedia, he says. For instance, there are more than 150 articles on characters from “The Lord of the Rings,” and fewer than 10 about people from the Vietnam War.

“I have nothing against Tolkien and I am also more familiar with the battle against Sauron than the Tet Offensive, but is this really a well-balanced encyclopedia?”

“It saddens me that some don’t think of Lsjbot as a worthy author,” he said. “I am a person; I am the one who created the bot. Without my work, all these articles would never have existed.”

via WSJ